Academic literature on the topic 'CMS High Level Trigger'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'CMS High Level Trigger.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "CMS High Level Trigger"

1

Gori, Valentina. "The CMS high level trigger." International Journal of Modern Physics: Conference Series 31 (January 2014): 1460297. http://dx.doi.org/10.1142/s201019451460297x.

Full text
Abstract:
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, given the available computing power, the sustainable output rate, and the selection efficiency. Here we present the performance of the main triggers used during the 2012 data taking, ranging from simpler single-object selections to more complex algorithms combining different objects and applying analysis-level reconstruction and selection. We discuss the optimisation of the triggers and the specific techniques used to cope with the increasing LHC pile-up, reducing its impact on the physics performance.
APA, Harvard, Vancouver, ISO, and other styles
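Note: the trade-off described in the abstract above (algorithm complexity versus available computing power versus sustainable output rate) can be made concrete with a back-of-the-envelope budget estimate. The sketch below is purely illustrative; the rate and farm-size figures are assumptions, not numbers taken from the cited paper.

```python
# Illustrative HLT budget estimate. All numbers are assumed, order-of-magnitude
# placeholders, not values quoted from the cited paper.

L1_OUTPUT_RATE_HZ = 100_000   # assumed Level-1 accept rate feeding the HLT
HLT_OUTPUT_RATE_HZ = 1_000    # assumed sustainable rate to offline storage
FARM_CORES = 13_000           # assumed number of CPU cores in the HLT farm

# Average rejection factor the HLT selections must provide.
rejection = L1_OUTPUT_RATE_HZ / HLT_OUTPUT_RATE_HZ

# Average CPU time available per event before the farm falls behind:
# the farm as a whole must absorb L1_OUTPUT_RATE_HZ events per second.
cpu_budget_ms = 1_000.0 * FARM_CORES / L1_OUTPUT_RATE_HZ

print(f"required average rejection factor: {rejection:.0f}")
print(f"average CPU budget per event:      {cpu_budget_ms:.0f} ms")
```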
2

Trocino, Daniele. "The CMS High Level Trigger." Journal of Physics: Conference Series 513, no. 1 (June 11, 2014): 012036. http://dx.doi.org/10.1088/1742-6596/513/1/012036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bagliesi, Giuseppe. "CMS high-level trigger selection." European Physical Journal C 33, S1 (March 31, 2004): s1035–s1037. http://dx.doi.org/10.1140/epjcd/s2004-03-1804-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Richardson, Clint. "CMS High Level Trigger Timing Measurements." Journal of Physics: Conference Series 664, no. 8 (December 23, 2015): 082045. http://dx.doi.org/10.1088/1742-6596/664/8/082045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Giordano, Domenico. "The CMS High-Level Trigger Selection." Nuclear Physics B - Proceedings Supplements 150 (January 2006): 299–303. http://dx.doi.org/10.1016/j.nuclphysbps.2004.08.043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Afaq, Anzar, William Badgett, Gerry Bauer, Kurt Biery, Vincent Boyer, James Branson, Angela Brett, et al. "The CMS High Level Trigger System." IEEE Transactions on Nuclear Science 55, no. 1 (2008): 172–75. http://dx.doi.org/10.1109/tns.2007.910980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Agostino, L., G. Bauer, B. Beccati, U. Behrens, J. Berryhill, K. Biery, T. Bose, et al. "Commissioning of the CMS High Level Trigger." Journal of Instrumentation 4, no. 10 (October 19, 2009): P10005. http://dx.doi.org/10.1088/1748-0221/4/10/p10005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Perrotta, Andrea. "Performance of the CMS High Level Trigger." Journal of Physics: Conference Series 664, no. 8 (December 23, 2015): 082044. http://dx.doi.org/10.1088/1742-6596/664/8/082044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tosi, M. "Tracking at High Level Trigger in CMS." Nuclear and Particle Physics Proceedings 273-275 (April 2016): 2494–96. http://dx.doi.org/10.1016/j.nuclphysbps.2015.09.436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Donato, Silvio. "CMS trigger performance." EPJ Web of Conferences 182 (2018): 02037. http://dx.doi.org/10.1051/epjconf/201818202037.

Full text
Abstract:
During its second run of operation (Run 2), started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach 2 × 10^34 cm^-2 s^-1 with an average pileup of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT have been greatly improved; in particular, new approaches to the online track reconstruction lead to a drastic reduction of the computing time and to much improved performance. This document describes the performance of the upgraded trigger system in Run 2.
APA, Harvard, Vancouver, ISO, and other styles
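Note: the upgraded global trigger mentioned in the abstract above can compute high-level quantities such as invariant masses from pairs of trigger objects. The snippet below is only a generic illustration of that calculation for two candidates whose rest masses are neglected; it is not code from the experiment.

```python
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two candidates, neglecting their rest masses:
    m^2 = 2 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Example: two back-to-back 45 GeV objects at central rapidity give a Z-like mass.
print(invariant_mass(45.0, 0.1, 0.0, 45.0, -0.1, math.pi))  # about 90 GeV
```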

Dissertations / Theses on the topic "CMS High Level Trigger"

1

Lorusso, Marco. "FPGA implementation of muon momentum assignment with machine learning at the CMS level-1 trigger." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23211/.

Full text
Abstract:
With the advent of the High-Luminosity phase of the LHC (HL-LHC), the instantaneous luminosity of the Large Hadron Collider at CERN is expected to increase up to around 7.5 x 10^34 cm^-2 s^-1. Therefore, new strategies for data acquisition and processing will be necessary, in preparation for the higher number of signals produced inside the detectors, which would eventually make the trigger and readout electronics currently in use at the LHC experiments obsolete. In the context of an upgrade of the trigger system of the Compact Muon Solenoid (CMS), new reconstruction algorithms, aiming for improved performance, are being developed. For the online tracking of muons, one of the figures of merit being improved is the accuracy of the transverse momentum (pT) measurement. Machine Learning techniques have already been considered a promising solution to this problem, as they make it possible, using more of the information collected by the detector, to build models able to predict the pT with improved precision. In this Master Thesis, a step further in increasing the performance of the pT assignment is taken by implementing such models on a type of programmable processing unit called a Field Programmable Gate Array (FPGA). FPGAs allow a smaller latency with a relatively small loss in accuracy with respect to traditional inference algorithms running on a CPU, both important aspects for a trigger system. The analysis carried out in this work uses data obtained through Monte Carlo simulations of muons crossing the barrel region of the CMS muon chambers, and compares the results with the pT assigned by the current (Phase-1) CMS Level 1 Barrel Muon Track Finder (BMTF) trigger system. Together with the final results, the steps needed to create an accelerated inference machine for muon pT are also presented.
APA, Harvard, Vancouver, ISO, and other styles
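Note: before an inference engine of the kind described above is ported to an FPGA, a regression model for the muon pT is trained offline on simulated quantities. The sketch below is a generic, hedged illustration on synthetic data; the "bending angle" features and their resolutions are invented stand-ins, not the thesis' actual inputs or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for simulated muons: the "bending angle" features are
# hypothetical and only loosely mimic the 1/pT dependence of trigger primitives.
n = 20_000
true_pt = rng.uniform(3.0, 100.0, size=n)                   # GeV
bend_12 = 1.0 / true_pt + rng.normal(0.0, 0.01, size=n)     # toy station 1-2 bending
bend_23 = 0.6 / true_pt + rng.normal(0.0, 0.01, size=n)     # toy station 2-3 bending
X = np.column_stack([bend_12, bend_23])

# Regress on 1/pT rather than pT, which is closer to what the bending measures.
y = 1.0 / true_pt
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

pred_pt = 1.0 / model.predict(X_test)
true_pt_test = 1.0 / y_test
resolution = np.std((pred_pt - true_pt_test) / true_pt_test)   # relative pT residual spread
print(f"relative pT resolution (synthetic toy): {resolution:.2%}")
```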
2

Zhang, Fengwangdong. "Measurement of jet production in association with a Z boson at the LHC and jet energy correction calibration at high level trigger in CMS." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/251804.

Full text
Abstract:
This PhD dissertation presents the measurement of the cross section of jet production in association with a Z boson in proton-proton collisions at the Large Hadron Collider at CERN, with a center-of-mass energy of 8 TeV in 2012 and of 13 TeV in 2015. The data used for this analysis were collected by the Compact Muon Solenoid (CMS) detector, with an integrated luminosity of 19.6 fb^-1 in 2012 and of 2.25 fb^-1 in 2015. The differential cross section is measured as a function of jet multiplicity, jet transverse momentum and rapidity, and the scalar sum of jet transverse momenta. The rapidity correlations between the Z boson and the jets are also measured, benefiting from the large statistics of the data taken in 2012. All distributions of the measured observables are obtained after correcting for detector effects using an unfolding approach, and the results of the two leptonic decay channels of the Z boson are combined. Together with the systematic and statistical uncertainties, the measurement is compared to different theoretical predictions at different accuracy levels. The predictions are from MADGRAPH 5, SHERPA 2 (for the 8 TeV analysis only), MADGRAPH_AMC@NLO, and a fixed next-to-next-to-leading-order calculation (for the 13 TeV analysis only). Thanks to the unprecedented high energy and the large statistics of the data, a precision measurement is accomplished in a physical phase space never reached before. This measurement provides precise tests of different theoretical models. It also quantifies the improvement brought by higher-order perturbative quantum chromodynamics calculations of the matrix elements relative to the leading-order multi-leg approach. In particular, for the rapidity correlation study, new matching schemes (FxFx and MEPS@NLO) for next-to-leading-order matrix elements and parton shower show significant improvements with respect to the MLM matching scheme for leading-order multi-leg matrix elements and parton shower. This measurement also gives a precise background estimation for the measurements of many other Standard Model processes, like top quark production and gauge boson couplings, and for new physics searches such as Supersymmetry. In this thesis, the jet energy correction and calibration for the high level trigger system of CMS are also described. From 2012 to 2015, the Large Hadron Collider was upgraded: not only was the center-of-mass energy of the beams raised, but the instantaneous luminosity was also increased, and the time between two particle bunches in a beam was reduced. As a result, the reconstructed momenta of the jets produced in each bunch crossing are significantly contaminated by multiple interactions. A dedicated technical approach has been developed for correcting the reconstructed jet momenta. The corrections have been calibrated and configured for the data taking in 2015 and 2016.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
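Note: jet energy corrections of the kind described at the end of the abstract above are commonly factorized into a pileup-offset subtraction followed by a response correction. The snippet below is a schematic, hedged illustration of that factorized form; the constants are invented for the example and are not the calibration derived in the thesis.

```python
def correct_jet_pt(raw_pt, rho, area, offset_scale=1.0, response=0.9):
    """Schematic two-step jet energy correction.

    1) subtract an average pileup contribution proportional to the event energy
       density (rho) times the jet catchment area;
    2) divide by an assumed detector response to get back to particle level.

    A real correction also depends on jet pT and eta; 'offset_scale' and
    'response' are placeholders, not calibration values from the cited work.
    """
    pileup_offset = offset_scale * rho * area
    pt_after_offset = max(raw_pt - pileup_offset, 0.0)
    return pt_after_offset / response

# Example: a 40 GeV raw jet in a busy event (rho ~ 20 GeV per unit area).
print(correct_jet_pt(raw_pt=40.0, rho=20.0, area=0.5))  # ~33.3 GeV
```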
3

Wang, Qun. "Measurement of the differential cross section of Z boson production in association with jets at the LHC." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/271719.

Full text
Abstract:
This thesis presents the measurement of the differential cross section of Z boson production in association with jets (Z+jets) in proton-proton collisions at the center-of-mass energy of 13 TeV. The data have been recorded with the CMS detector at the LHC during the year 2015, corresponding to an integrated luminosity of 2.19 fb^-1. A study of the CMS muon High Level Trigger (HLT) with the data collected in 2016 is also presented. The goal of the analysis is to perform a first measurement at 13 TeV of the cross sections of Z+jets as a function of the jet multiplicity, its dependence on the transverse momentum of the Z boson, the jet kinematic variables (transverse momentum and rapidity), the scalar sum of the jet momenta, and the balance in transverse momentum between the reconstructed jet recoil and the Z boson. The results are obtained by correcting for the detector effects, and are unfolded to particle level. The measurements are compared to four predictions using different approximations: at leading-order (LO), next-to-leading-order (NLO) and next-to-next-to-leading-order (NNLO) accuracy. The first two calculations used MADGRAPH5_AMC@NLO interfaced with PYTHIA 8 for the parton showering and hadronisation, one of which includes matrix elements (MEs) at LO, the other one-loop corrections (NLO). The third is a fixed-order calculation with NNLO accuracy for Z+1 jet using the N-jettiness subtraction scheme (Njetti). The fourth uses the GENEVA program with an NNLO calculation combined with higher-order resummation. A series of studies on the HLT double muon trigger are also included. Since 2015 the LHC has reached higher luminosity and more events are produced inside the CMS detector per second, which results in more challenges for the trigger system. The work presented includes the monitoring, validation and calibration of the muon trigger paths since 2016.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
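Note: the abstract above mentions that measured distributions are corrected for detector effects and unfolded to particle level. The simplest variant of such a correction, bin-by-bin factors taken from simulation, is sketched below purely as an illustration; the numbers are invented and the actual analysis uses a more complete unfolding method.

```python
import numpy as np

# Invented toy inputs: per-bin yields from simulation at particle (generator)
# level and after detector simulation (reco level), plus the observed data.
gen_mc  = np.array([1000.0, 600.0, 300.0, 120.0])   # particle-level MC yields
reco_mc = np.array([ 850.0, 540.0, 260.0,  95.0])   # reco-level MC yields
data    = np.array([ 880.0, 520.0, 270.0, 100.0])   # observed yields

# Bin-by-bin correction factor: how much the detector loses or migrates per bin.
correction = gen_mc / reco_mc

unfolded = data * correction
print("bin-by-bin unfolded yields:", np.round(unfolded, 1))
```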
4

Brooke, James. "Level-1 trigger studies for the CMS experiment." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Diotalevi, Tommaso. "CMS level-1 trigger muon momentum assignment with machine learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16326/.

Full text
Abstract:
With the advent of the High-Luminosity phase of the LHC (HL-LHC), the instantaneous luminosity of the Large Hadron Collider at CERN will increase up to 7.5 × 10^34 cm^-2 s^-1. Therefore, new algorithmic techniques for data acquisition and processing will be necessary, in preparation for a high pile-up environment that would eventually make the current electronics and trigger devices obsolete. Nowadays, Machine Learning techniques represent a promising alternative for this problem, as they make it possible to combine multiple pieces of information collected by the detector and to build from them different models, able to predict fundamental physical quantities, including the transverse momentum pT, with a certain efficiency. The analysis presented in this Master Thesis consists in the production of such models, with data obtained through Monte Carlo simulations, capable of predicting the transverse momentum of muons crossing the barrel region of the CMS muon chambers, and in comparing the results with the pT assigned by the current CMS Level 1 Barrel Muon Track Finder (BMTF) trigger system.
APA, Harvard, Vancouver, ISO, and other styles
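Note: a common way to compare a new pT assignment with the existing BMTF one, as described in the abstract above, is the spread of the relative residuals with respect to the generated pT. The toy below only illustrates that comparison metric on synthetic numbers; the resolutions used are invented, not measured CMS performance figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic generated muon pT and two toy estimators with different resolutions
# (purely illustrative widths, not measured CMS performance figures).
gen_pt = rng.uniform(5.0, 80.0, size=50_000)
bmtf_pt = gen_pt * (1.0 + rng.normal(0.0, 0.30, size=gen_pt.size))
ml_pt = gen_pt * (1.0 + rng.normal(0.0, 0.15, size=gen_pt.size))

def resolution(est, gen):
    """Half-width of the central 68% interval of (est - gen) / gen."""
    residual = (est - gen) / gen
    lo, hi = np.percentile(residual, [16.0, 84.0])
    return 0.5 * (hi - lo)

print(f"toy BMTF-like resolution: {resolution(bmtf_pt, gen_pt):.2f}")
print(f"toy ML-like resolution:   {resolution(ml_pt, gen_pt):.2f}")
```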
6

James, Tom. "A hardware track-trigger for CMS at the High Luminosity LHC." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/60593.

Full text
Abstract:
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is designed to study a wide range of high energy physics phenomena. It employs a large all-silicon tracker within a 3.8 T magnetic solenoid, which allows precise measurements of transverse momentum (pT) and vertex position. This tracking detector will be upgraded to coincide with the installation of the High-Luminosity LHC, which will provide up to about 10^35 / cm^2 / s to CMS, or 200 collisions per 25 ns bunch crossing. This new tracker must maintain the nominal physics performance in this more challenging environment. Novel tracking modules that utilise closely spaced silicon sensors to discriminate on track pT have been developed that would allow the readout of only hits compatible with pT > 2−3 GeV tracks to off-detector trigger electronics. This would allow the use of tracking information at the Level-1 trigger of the experiment, a requirement to keep the Level-1 triggering rate below the 750 kHz target, while maintaining physics sensitivity. This thesis presents a concept for an all Field Programmable Gate Array (FPGA) based track finder using a fully time-multiplexed architecture. A hardware demonstrator has been assembled to prove the feasibility and capability of such a system. The track finding demonstrator uses a projective binning algorithm called a Hough Transform to form track-candidates, which are then cleaned and fitted by a combinatorial Kalman Filter. Both of these algorithms are implemented in FPGA firmware. This demonstrator system, composed of eight Master Processor Virtex-7 (MP7) processing boards, is able to successfully find tracks in one eighth of the tracker solid angle at a time, within the expected 4 μs latency constraint. The performance for a variety of physics scenarios is studied, as well as the proposed scaling of the demonstrator to the final system and new technologies.
APA, Harvard, Vancouver, ISO, and other styles
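Note: the Hough Transform mentioned in the abstract above bins tracker stubs in a (q/pT, phi0) parameter space and looks for cells where many stubs agree. The toy accumulator below illustrates the idea under a simplified small-angle model of the stub bending; the constants, binning and geometry are assumptions made for the example, not the demonstrator's firmware parameters.

```python
import numpy as np

K = 0.5 * 0.3 * 3.8   # curvature constant in GeV per metre, assuming a 3.8 T field

def hough_fill(stubs, qpt_bins, phi0_bins):
    """Fill a (q/pT, phi0) Hough accumulator from (r, phi) stubs.

    Uses the small-angle relation phi ~ phi0 - K * r * (q/pT): each stub
    contributes one phi0 value per q/pT bin, i.e. a line in parameter space.
    """
    acc = np.zeros((len(qpt_bins) - 1, len(phi0_bins) - 1), dtype=int)
    qpt_centres = 0.5 * (qpt_bins[:-1] + qpt_bins[1:])
    for r, phi in stubs:
        phi0 = phi + K * r * qpt_centres
        idx = np.digitize(phi0, phi0_bins) - 1
        valid = (idx >= 0) & (idx < acc.shape[1])
        acc[np.arange(acc.shape[0])[valid], idx[valid]] += 1
    return acc

# Toy event: six stubs from a single pT = 5 GeV, q = +1 track with phi0 = 0.3 rad.
radii = [0.25, 0.35, 0.50, 0.70, 0.90, 1.10]                  # stub radii in metres
stubs = [(r, 0.3 - K * r / 5.0) for r in radii]

qpt_bins = np.linspace(-0.5, 0.5, 33)                         # |q/pT| up to 1/(2 GeV)
phi0_bins = np.linspace(0.0, 2.0 * np.pi / 9.0, 65)           # one 40-degree sector
acc = hough_fill(stubs, qpt_bins, phi0_bins)
best = np.unravel_index(np.argmax(acc), acc.shape)
print("most populated cell (q/pT bin, phi0 bin):", best, "with", acc[best], "stubs")
```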
7

Prvan, Marina. "Algorithms for the Level-1 trigger with the HGCAL calorimeter for the CMS HL-LHC upgrade." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX094.

Full text
Abstract:
Modern instrumentation in high energy physics (HEP) is facing exponential growth in the amount of data from the sensor arrays. This results in an enormous increase of the output data volume, which requires timely upgrades of the detectors in HEP experiments. The detector evolution is also driven by the need to follow the newest technological trends as well as to replace parts of the mechanical construction that are damaged by radiation. In particular, the detectors at the Large Hadron Collider (LHC) will have to be upgraded before entering the high-luminosity (HL) operational phase. This thesis describes the research work done in the context of the High Granularity Calorimeter (HGCAL) project, which is part of the upgrade of the Compact Muon Solenoid (CMS) detector. In the challenging environment of the HL-LHC, with higher data rates, harder radiation, and high pile-up (PU), where the number of "interesting" events is low, it is essential to provide a quality decision on whether to read out the event data or not. This process, called the trigger, should operate in real time, under the communication and processing constraints of the available hardware. The working conditions of the trigger are challenging since the algorithm must be executed in a very limited time, without the possibility to revisit the decision of keeping the event for further processing or not. To cope with the HL-LHC requirements, the current trigger system must be upgraded. The thesis presents the studies that were necessary for the design of such a trigger. The presented studies relate to key aspects along the whole trigger path, from the detector sensor read-out and front-end (FE) selection to the back-end (BE) electronics output data flow. First, a re-design of the mechanical HGCAL construction is studied, forming hexagonal sensor cells as well as larger polyhex structures of trigger cells (TCs) used to reduce the amount of data by grouping read-out cells. Once the sensor module of the future HGCAL detector is defined, a large amount of work is devoted to strategies for further FE data reduction and to BE reconstruction studies. Architectures are explored for a possible trigger primitive generation, from which a two-step approach for direct 3D clustering is proposed. The first step consists of finding the regions of interest (ROIs) in the detector and is based on a dedicated tracking algorithm (TA), which enables the identification of electromagnetic (EM) shower tracks and the selection of a signal seed. It is also shown that more signal seeds can be selected when an EM shower parametrization is used in the TA. Finally, the TA is used in a machine learning study for the ROI generation procedure. It results in an image of the physical shower, and a neural network (NN) is applied to perform the classification (EM-like or PU-like). We have compared several NN models, and the performance (classification accuracy) is measured against the model complexity (the total number of model parameters). The best trade-off is obtained between the quality of the decision-making process and the requirements on the hardware processing power.
APA, Harvard, Vancouver, ISO, and other styles
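Note: the classification step described at the end of the abstract above, separating electromagnetic-shower-like regions of interest from pileup-like ones, can be prototyped with a very small convolutional network. The sketch below uses Keras on random stand-in "images" of an invented size; it only shows the general shape of such a model and the parameter count one would trade off against accuracy, not the architectures actually compared in the thesis.

```python
import numpy as np
import tensorflow as tf

# Invented toy "shower images": 12x12 trigger-cell energy maps, one channel.
rng = np.random.default_rng(2)
x = rng.random((1000, 12, 12, 1)).astype("float32")
y = rng.integers(0, 2, size=1000).astype("float32")   # 1 = EM-like, 0 = PU-like (random here)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, 12, 1)),
    tf.keras.layers.Conv2D(8, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With random labels this learns nothing useful; the point is only the model shape
# and the accuracy-vs-parameter-count trade-off one would scan in a real study.
model.fit(x, y, epochs=1, batch_size=64, verbose=0)
print("trainable parameters:", model.count_params())
```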
8

Rose, A. W. "The Level-1 Trigger of the CMS experiment at the LHC and the Super-LHC." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gasperini, Simone. "Performance of the CMS barrel muon trigger algorithms for high luminosity LHC." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17740/.

Full text
Abstract:
This thesis measures the performance of a new algorithm for the local muon trigger of the drift-tube chambers of CMS (Compact Muon Solenoid). The algorithm was developed and proposed by a group of researchers at INFN Padova in view of the future upgrade of the LHC (Large Hadron Collider) particle accelerator, which will become the High Luminosity LHC, and the corresponding upgrade of CMS. In particular, studies of the efficiencies and spatial resolutions of the new algorithm were carried out, comparing them with those of the trigger system currently in use. This work was realized by developing an analysis tool based on the ROOT software package. The efficiency and resolution measurements were obtained with the Tag & Probe method, using a sample of data collected during the proton-proton collision run of September 2018.
APA, Harvard, Vancouver, ISO, and other styles
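Note: tag-and-probe efficiencies of the kind measured in this thesis reduce, per bin, to the fraction of probe muons that also satisfy the trigger requirement, with a binomial uncertainty. The snippet below is a generic illustration with made-up counts, not the analysis tool developed in the thesis.

```python
import numpy as np

# Made-up counts of probe muons per pT bin: all probes and those matched
# to a trigger-level object (the "passing" probes).
pt_edges = np.array([10.0, 20.0, 30.0, 50.0, 100.0])
probes_all = np.array([5200, 8400, 9100, 4300])
probes_pass = np.array([4100, 7900, 8800, 4150])

eff = probes_pass / probes_all
# Simple binomial uncertainty; a real measurement would use Clopper-Pearson intervals.
err = np.sqrt(eff * (1.0 - eff) / probes_all)

for lo, hi, e, de in zip(pt_edges[:-1], pt_edges[1:], eff, err):
    print(f"pT {lo:5.0f}-{hi:5.0f} GeV: efficiency = {e:.3f} +/- {de:.3f}")
```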
10

Mohlalisi, Seforo. "Implementation of a Custom Muon High Level Trigger Monitoring System." Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/6541.

Full text
Abstract:
A Large Ion Collider Experiment (ALICE) is one of the 4 major experiments at the Large Hadron Collider (LHC) at CERN. Its main aim is to investigate the physics of strongly interacting matter in proton-proton, nucleus-nucleus and nucleus-proton or proton-nucleus collisions at ultra-high energy densities, where the Quark Gluon Plasma (QGP) is expected to form. The experiment is expected to produce data at very high rates of about 25 Gbytes/s; however, the bandwidth to permanent storage is limited to about 10% (1.25 Gbyte/s) of the total expected data rate. In order to reduce the data sent to permanent storage, a special level of selecting interesting/relevant physics events is required. In ALICE the trigger (selection) of signals is issued based on a series of levels varying from level 0 (L0) up to the High Level Trigger (HLT). For the ALICE muon spectrometer, the role of the trigger is to select events containing muon tracks with transverse momentum (pt) above a given threshold. Due to the limited spatial resolution of the muon trigger chambers, a pt cut above a few GeV with the L0 trigger is not possible. While the L0 signal for the muon spectrometer is issued at about 700-800 ns, the HLT decision is delivered at about 1 ms. The role of the HLT is to perform online and offline reconstruction of the ALICE muon spectrometer data in order to improve the measured (L0) pT resolution. In this way a better separation between relevant physics events and unwanted events (background) can be attained, which could eventually lead to lower trigger rates. The HLT is designed to improve the signal-to-background ratio in the raw data transferred to storage. In order to facilitate online/offline data analysis, an HLT monitoring system which enables the user to graphically view the events during the reconstruction phase was developed in this study. The system reads and decodes the reconstructed events from the HLT analysis chain using the HLT Online Monitoring Environment including ROOT (HOMER) and displays them on the ALICE Event Visualization Environment (ALIEVE). In addition, the utility dHLTdumpraw, which inspects, in finer detail, the contents of all muon HLT internal data blocks and the detector data link (DDL) raw data stream, is also described.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "CMS High Level Trigger"

1

Braun, Nils. Combinatorial Kalman Filter and High Level Trigger Reconstruction for the Belle II Experiment. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24997-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

James, Thomas Owen. Hardware Track-Trigger for CMS: At the High Luminosity LHC. Springer International Publishing AG, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

James, Thomas Owen. A Hardware Track-Trigger for CMS: At the High Luminosity LHC. Springer, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Braun, Nils. Combinatorial Kalman Filter and High Level Trigger Reconstruction for the Belle II Experiment. Springer International Publishing AG, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Braun, Nils. Combinatorial Kalman Filter and High Level Trigger Reconstruction for the Belle II Experiment. Springer, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Scorey, David, Richard Geddes, and Chris Harris. Part I The Bermuda Market and Form, 2 Introduction to the Bermuda Form. Oxford University Press, 2018. http://dx.doi.org/10.1093/law/9780198754404.003.0002.

Full text
Abstract:
This chapter first describes the three interrelated categories of problems identified by insurers leading up to the introduction of the Bermuda Form. These include the unreliability of stated policy limits of liability; exposure to injuries, damage and events taking place in the distant past; and uncertainty of what body or bodies of law would control the interpretation of a policy. It then discusses the hallmark features of the Bermuda Form, covering the occurrence first reported trigger and continuous policy, bifurcated occurrence definition/time element pollution exclusion, high per occurrence retention/broader principles of loss aggregation, no duty to defend the insured or pay sums on its behalf, expectation and experience of rate or level of injury or property damage, and law and principles governing interpretation/arbitration.
APA, Harvard, Vancouver, ISO, and other styles
7

Aquino, Melinda. Postthoracotomy Pain Syndrome. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190271787.003.0016.

Full text
Abstract:
Postthoracotomy pain syndrome (PTPS) affects approximately 50% of patients who undergo thoracic surgery for lung cancer. The pain can be very severe and may be associated with a high level of disability. The pain can be harsh and unrelenting, preventing patients from performing basic activities of daily living. Several modalities of pain management can be effective for PTPS. Appropriate pain management starts preoperatively with preemptive analgesia with oral medications. Regional anesthetic techniques, including thoracic epidural and thoracic paravertebral block/catheter, can be utilized intraoperatively and postoperatively. For patients who develop PTPS, a pain specialist should be consulted, and a multidisciplinary pain management approach should be designed, with treatments that may include injections (paravertebral nerve blocks, intercostal nerve blocks, trigger-point injections), physical therapy, and oral pain medications.
APA, Harvard, Vancouver, ISO, and other styles
8

Wise, Matt, and Paul Frost. Critical illness. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0147.

Full text
Abstract:
Critical illness can be considered to be any disease process which causes physiological instability that leads to disability or death within minutes or hours. Fortunately, physiological instability associated with critical illness is easily detected by perturbations of simple clinical observations such as blood pressure, heart rate, respiratory rate, oxygen saturations, level of consciousness, and urine output. Individual abnormalities in these observations are sensitive for the presence of critical illness but non-specific. Specificity for critical illness improves as the number of abnormal clinical observations increases. Over recent years, a greater appreciation of the importance of deviations in simple clinical observations as a method of detecting critical illness has led to the development of a number of ‘early warning’ or ‘track and trigger’ systems. These systems attribute a score according to the magnitude and number of abnormal observations that are present, and a high score prompts immediate medical review. Although intuitively sensible, the evidence that these systems are effective in ameliorating or preventing critical illness is currently lacking. This chapter looks at the approach to diagnosis of critical illness, including the pitfalls in diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
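Note: track-and-trigger systems of the kind described above attribute points to each vital sign according to how far it deviates from a normal range and escalate care when the total crosses a threshold. The sketch below is a deliberately simplified, hypothetical scoring scheme; the bands and cut-offs are invented for illustration and do not correspond to any published early-warning score.

```python
def band_score(value, bands):
    """Return the points for the first (low, high, points) band containing value."""
    for low, high, points in bands:
        if low <= value < high:
            return points
    return 3  # outside all listed bands: treated as most abnormal

# Hypothetical scoring bands (lower bound, upper bound, points) -- illustrative only.
RESP_RATE = [(12, 21, 0), (9, 12, 1), (21, 25, 2)]
HEART_RATE = [(51, 91, 0), (41, 51, 1), (91, 111, 1), (111, 131, 2)]
SYS_BP = [(111, 220, 0), (101, 111, 1), (91, 101, 2)]

def early_warning_score(resp_rate, heart_rate, sys_bp):
    return (band_score(resp_rate, RESP_RATE)
            + band_score(heart_rate, HEART_RATE)
            + band_score(sys_bp, SYS_BP))

score = early_warning_score(resp_rate=24, heart_rate=118, sys_bp=95)
print("score:", score, "-> urgent review" if score >= 5 else "-> routine monitoring")
```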
9

Jadoul, Michel, Laura Labriola, and Eric Goffin. Viral infections in patients on dialysis. Edited by Jonathan Himmelfarb. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199592548.003.0271.

Full text
Abstract:
From the early days of hemodialysis, viral hepatitis has been recognized as common in dialyzed patients. The prevalence and incidence of HBV infection have decreased markedly over the last decades in HD units. Still, the infectivity of HBV is very high. Vaccinating HD patients, preferably prior to starting dialysis, together with the strict application of hygienic precautions and adequate screening of blood donors remains required, together with the segregation of infective (HBV+) patients in a separate dialysis ward. The level of aminotransferases is markedly lower in HD patients than in the general population: any level above the normal range should thus trigger the suspicion of acute hepatitis (viral or not). The treatment of HBV infection in HD patients is rarely required, unless they are scheduled for a kidney transplant. Screening for HCV infection usually relies on a modern ELISA test. The prevalence and incidence of HCV infection in HD patients has also decreased substantially but remains higher than in the general population. The risk of post-transfusional HCV is currently extremely low, at least in western countries. The actual application of basic hygienic precautions is crucial if nosocomial transmission of HCV is to be prevented. These include optimal hand hygiene practices (hydroalcoholic solution use before contact with patient and after gloves withdrawal), the systematic wearing of gloves, to be changed between patients/stations, an adequate separation of the clean and contaminated items and circuits within the HD unit, and regular cleaning/disinfection of potentially contaminated surfaces. The necessity and usefulness to isolate HCV positive patients in a separate dialysis ward has not been demonstrated and is not recommended by current KDIGO guidelines. The field of the treatment of HCV infection is changing rapidly with many orally active drugs, some of which can be used even in dialysis patients.
APA, Harvard, Vancouver, ISO, and other styles
10

Frew, Anthony. Air pollution. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0341.

Full text
Abstract:
Any public debate about air pollution starts with the premise that air pollution cannot be good for you, so we should have less of it. However, it is much more difficult to determine how much is dangerous, and even more difficult to decide how much we are willing to pay for improvements in measured air pollution. Recent UK estimates suggest that fine particulate pollution causes about 6500 deaths per year, although it is not clear how many years of life are lost as a result. Some deaths may just be brought forward by a few days or weeks, while others may be truly premature. Globally, household pollution from cooking fuels may cause up to two million premature deaths per year in the developing world. The hazards of black smoke air pollution have been known since antiquity. The first descriptions of deaths caused by air pollution are those recorded after the eruption of Vesuvius in ad 79. In modern times, the infamous smogs of the early twentieth century in Belgium and London were clearly shown to trigger deaths in people with chronic bronchitis and heart disease. In mechanistic terms, black smoke and sulphur dioxide generated from industrial processes and domestic coal burning cause airway inflammation, exacerbation of chronic bronchitis, and consequent heart failure. Epidemiological analysis has confirmed that the deaths included both those who were likely to have died soon anyway and those who might well have survived for months or years if the pollution event had not occurred. Clean air legislation has dramatically reduced the levels of these traditional pollutants in the West, although these pollutants are still important in China, and smoke from solid cooking fuel continues to take a heavy toll amongst women in less developed parts of the world. New forms of air pollution have emerged, principally due to the increase in motor vehicle traffic since the 1950s. The combination of fine particulates and ground-level ozone causes ‘summer smogs’ which intensify over cities during summer periods of high barometric pressure. In Los Angeles and Mexico City, ozone concentrations commonly reach levels which are associated with adverse respiratory effects in normal and asthmatic subjects. Ozone directly affects the airways, causing reduced inspiratory capacity. This effect is more marked in patients with asthma and is clinically important, since epidemiological studies have found linear associations between ozone concentrations and admission rates for asthma and related respiratory diseases. Ozone induces an acute neutrophilic inflammatory response in both human and animal airways, together with release of chemokines (e.g. interleukin 8 and growth-related oncogene-alpha). Nitrogen oxides have less direct effect on human airways, but they increase the response to allergen challenge in patients with atopic asthma. Nitrogen oxide exposure also increases the risk of becoming ill after exposure to influenza. Alveolar macrophages are less able to inactivate influenza viruses and this leads to an increased probability of infection after experimental exposure to influenza. In the last two decades, major concerns have been raised about the effects of fine particulates. An association between fine particulate levels and cardiovascular and respiratory mortality and morbidity was first reported in 1993 and has since been confirmed in several other countries. Globally, about 90% of airborne particles are formed naturally, from sea spray, dust storms, volcanoes, and burning grass and forests. 
Human activity accounts for about 10% of aerosols (in terms of mass). This comes from transport, power stations, and various industrial processes. Diesel exhaust is the principal source of fine particulate pollution in Europe, while sea spray is the principal source in California, and agricultural activity is a major contributor in inland areas of the US. Dust storms are important sources in the Sahara, the Middle East, and parts of China. The mechanism of adverse health effects remains unclear but, unlike the case for ozone and nitrogen oxides, there is no safe threshold for the health effects of particulates. Since the 1990s, tax measures aimed at reducing greenhouse gas emissions have led to a rapid rise in the proportion of new cars with diesel engines. In the UK, this rose from 4% in 1990 to one-third of new cars in 2004 while, in France, over half of new vehicles have diesel engines. Diesel exhaust particles may increase the risk of sensitization to airborne allergens and cause airways inflammation both in vitro and in vivo. Extensive epidemiological work has confirmed that there is an association between increased exposure to environmental fine particulates and death from cardiovascular causes. Various mechanisms have been proposed: cardiac rhythm disturbance seems the most likely at present. It has also been proposed that high numbers of ultrafine particles may cause alveolar inflammation which then exacerbates preexisting cardiac and pulmonary disease. In support of this hypothesis, the metal content of ultrafine particles induces oxidative stress when alveolar macrophages are exposed to particles in vitro. While this is a plausible mechanism, in epidemiological studies it is difficult to separate the effects of ultrafine particles from those of other traffic-related pollutants.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "CMS High Level Trigger"

1

Braun, Nils. "Fast Reconstruction for the High Level Trigger." In Springer Theses, 51–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24997-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

James, Tom. "Track Finding for the Level-1 Trigger of the CMS Experiment." In Springer Proceedings in Physics, 296–302. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1313-4_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thea, Alessandro. "The CMS Level-1 Calorimeter Trigger Upgrade for LHC Run II." In Springer Proceedings in Physics, 355–59. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1313-4_68.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Codispoti, Giuseppe, Simone Bologna, Glenn Dirkx, Christos Lazaridis, Alessandro Thea, and Tom Williams. "Common Software for Controlling and Monitoring the Upgraded CMS Level-1 Trigger." In Springer Proceedings in Physics, 319–23. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1313-4_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Helstrup, H., J. Lien, V. Lindenstruth, D. Röhrich, B. Skaali, T. Steinbeck, K. Ullaland, A. Vestbø, and A. Wiebalck. "High Level Trigger System for the LHC ALICE Experiment." In Lecture Notes in Computer Science, 494–502. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46043-8_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pezzutto, Simon, Juan Francisco De Negri, Sonja Gantioler, David Moser, and Wolfram Sparber. "Public Research and Development Funding for Photovoltaics in Europe—Past, Present, and Future." In Smart and Sustainable Planning for Cities and Regions, 117–28. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-57764-3_8.

Full text
Abstract:
The use of photovoltaic technology is crucial to meet Europe's ambitious climate and energy objectives set for 2030. To facilitate this shift, technological innovation is a key prerequisite, and the provision of public funding for related research and development is an important trigger. For this study, a vast set of data has been collected to explore how the EU and its Member States, plus Norway and Turkey, have so far invested in photovoltaic research and development. Based on historic values and actual trends, the authors additionally outline the possible future evolution of the investigated public funding. The study aims to shed light on the development of funding from the early 1970s until 2017 (most recent data available) and provide a forecast for 2030 (based on a business-as-usual scenario). According to results, at the national level, public funding had a considerable and steady rise after the OPEC's oil embargo in 1973, reaching a first peak in the mid-1980s. The authors predict that, according to the most recent trends, by 2030, these will surpass 200 million € annually. In comparison, EU funding has steadily increased since its inception in the late 1980s up until 2007, but its evolvement is distinctively different, evidencing high fluctuations. The cumulative stock is also examined. National sources outweigh EU programs by a factor of almost five, and the stock should surpass 7 billion € by 2030. Based on the analysis and related insights, recommendations are elaborated on how the development of funding could inform policy strategies and actions to support research and development for photovoltaic technology.
APA, Harvard, Vancouver, ISO, and other styles
7

Pozzobon, Nicola. "A Level 1 Tracking Trigger for the CMS Experiment." In Astroparticle, Particle, Space Physics, Radiation Interaction, Detectors and Medical Physics Applications, 648–52. WORLD SCIENTIFIC, 2012. http://dx.doi.org/10.1142/9789814405072_0096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cukierman, Alex. "Current Challenges to World Financial Stability: To What Extent is the Past a Guide for the Future?" In Business, Management and Economics. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.107432.

Full text
Abstract:
The chapter discusses current challenges to world financial stability in light of lessons that have been learned from past financial crises. Although there are parallels between the current situation and some of the previous crises, the current situation differs in several important respects. It comes after a decade of extremely low nominal and real interest rates along with subdued inflation and, due to fiscal and monetary policy measures deployed during the GFC and the COVID-19 pandemic, debt/GDP ratios and central banks (CBs) balance sheets are at historically high levels. The recent upsurge of inflation prompted a worldwide process of increase in policy interest rates and reduction in CB assets. An undesirable side effect of this process is that it may trigger several mechanisms that endanger world financial stability. Recent developments in Fintech and the global economic disruptions caused by the war in Ukraine create novel financial vulnerabilities that differ from previous financial crises. The rapid growth of fintech institutions poses new regulatory challenges at the national and international levels. Although no crisis has materialized to date, those developments have increased the odds of a systemic global crisis. Measures designed to mitigate financial vulnerabilities are briefly discussed in the concluding section.
APA, Harvard, Vancouver, ISO, and other styles
9

Erçetin, Şefika Şule, Halime Güngör, and Şuay Nilhan Açıkalın. "Different Sides of a Reality." In Advances in Religious and Cultural Studies, 125–32. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0148-0.ch010.

Full text
Abstract:
This study's aim is to initiate a discussion of the other side of stigma: that is, positive stigma. In this chapter, we discuss different sides of the stigma phenomenon through three major concepts: intelligence quotient, chaos theory, and quantum theory, together with the last unverified part of the Standard Model of particle physics, the Higgs boson, also known as the God Particle. Lastly, we focus on positive stigma in terms of leadership as a case of future trends for this discussion. This discussion is expected to trigger off a high level of academic and research interest in the field of stigma.
APA, Harvard, Vancouver, ISO, and other styles
10

Bodzemir, Fatih, and Jennifer M. Martin. "Natural Environments, Ecosystems, Conflict, and Wellbeing." In Mental Health Policy, Practice, and Service Accessibility in Contemporary Society, 244–67. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7402-6.ch013.

Full text
Abstract:
This chapter examines the correlation between environmental issues and wellbeing. A broad literature review illustrates that changing climate, increasing populations, and degrading natural environments have negative impacts on health and wellbeing. The focus of this chapter is on conflicts arising from the limited supply of natural resources and competing needs, interests, and demands. This can create high levels of tension and division within communities that erodes community spirit, support, and connectedness as people compete for limited resources. The conflict arising from such disputes has negative impacts on social cohesion and the high levels of stress experienced, without adequate supports, can trigger mental ill health. The example of basin level water conflict in Turkey is used to illustrate this.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "CMS High Level Trigger"

1

Beauceron, Stephanie. "The CMS High Level Trigger." In 36th International Conference on High Energy Physics. Trieste, Italy: Sissa Medialab, 2013. http://dx.doi.org/10.22323/1.174.0505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Covarelli, R., and Marvin L. Marshak. "The CMS High-Level Trigger." In 10TH CONFERENCE ON THE INTERSECTIONS OF PARTICLE AND NUCLEAR PHYSICS. AIP, 2009. http://dx.doi.org/10.1063/1.3293780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Brigljevic, V., G. Bruno, E. Cano, S. Cittolin, S. Erhan, D. Gigi, F. Glege, et al. "The CMS high level trigger." In 2003 IEEE Nuclear Science Symposium. Conference Record (IEEE Cat. No.03CH37515). IEEE, 2003. http://dx.doi.org/10.1109/nssmic.2003.1351855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Afaq, A., W. Badgett, G. Bauer, K. Biery, V. Boyer, J. Branson, A. Brett, et al. "The CMS High Level Trigger System." In 2007 15th IEEE-NPSS Real-Time Conference. IEEE, 2007. http://dx.doi.org/10.1109/rtc.2007.4382773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Carrera Jarrin, Edgar Fernando. "Performance of the CMS High-Level Trigger." In 35th International Conference of High Energy Physics. Trieste, Italy: Sissa Medialab, 2011. http://dx.doi.org/10.22323/1.120.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sert, Hale. "CMS Run 2 High Level Trigger Performance." In European Physical Society Conference on High Energy Physics. Trieste, Italy: Sissa Medialab, 2020. http://dx.doi.org/10.22323/1.364.0165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Brooke, Jim. "Performance of the CMS Level-1 Trigger." In 36th International Conference on High Energy Physics. Trieste, Italy: Sissa Medialab, 2013. http://dx.doi.org/10.22323/1.174.0508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tosi, Mia. "Tracking at High Level Trigger in CMS." In Technology and Instrumentation in Particle Physics 2014. Trieste, Italy: Sissa Medialab, 2015. http://dx.doi.org/10.22323/1.213.0204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Laurent. "CMS High Level Trigger performance at 13 TeV." In The 39th International Conference on High Energy Physics. Trieste, Italy: Sissa Medialab, 2019. http://dx.doi.org/10.22323/1.340.0226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Xuyang. "The High Level Trigger of the CMS experiment." In Fourth Annual Large Hadron Collider Physics. Trieste, Italy: Sissa Medialab, 2016. http://dx.doi.org/10.22323/1.276.0207.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "CMS High Level Trigger"

1

Halyo, Valerie, and Christopher Tully. GPU/MIC Acceleration of the LHC High Level Trigger to Extend the Physics Reach at the LHC. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1177677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Keefer, Philip, and Benjamin Roseth. Grand Corruption in the Contracting Out of Public Services: Lessons from a Pilot Study in Colombia. Inter-American Development Bank, June 2021. http://dx.doi.org/10.18235/0003335.

Full text
Abstract:
Do targeted transparency interventions reduce corrupt behavior when corrupt actors are few and politically influential; their behavior imposes small costs on numerous individuals; and corrupt behavior is difficult to observe? Results from a study of informal audits and text messages to parents, meant to curb corruption in the School Meals Program of Colombia, suggest that they can. Theory is pessimistic that transparency interventions can change the behavior of actors who exert significant influence over supervisory authorities. Moreover, inherent methodological obstacles impede the identification of treatment effects. Results substantiate the presence of these obstacles, especially considerable spillovers from treated to control groups. Despite spillovers, we find that parental and operator behavior are significantly different between treatment and control groups. Additional evidence explains why operator behavior changed: out of concern that systematic evidence of corrupt behavior would trigger enforcement actions by high-level enforcement agencies outside of the political jurisdictions where they are most influential.
APA, Harvard, Vancouver, ISO, and other styles
3

Lichter, Amnon, David Obenland, Nirit Bernstein, Jennifer Hashim, and Joseph Smilanick. The role of potassium in quality of grapes after harvest. United States Department of Agriculture, October 2015. http://dx.doi.org/10.32747/2015.7597914.bard.

Full text
Abstract:
Objectives: The objectives of the proposal were to study how potassium (K) enters the berry and in what tissues it accumulates, to determine what is the sensitive phenological stage that is responsive to K, to study the influence of K on sugar translocation, to determine if K has effects on expression of genes in source and sink organs and to study applied aspects of the responses to K at the vineyard level. During the research it was realized that K acts externally so a major part of the original objectives had to be deserted and new ones, i.e. the role of K in enhancing water loss from the berry, had to be developed. In addition, the US partners developed practical objectives of understanding the interaction of K application and water deficit as well as application of growth regulators. Background: In our preliminary data we showed that application of K at mid-ripening enhanced sugar accumulation of table grapes. This finding is of major implications to both early and late harvested grapes and it was essential to understand the mode of action of this treatment. Our major hypothesis was that K enters the berry and by that increases sugar translocation into the berry. In addition it was important to cover practical issues of the application which may influence its efficacy and its reproducibility. Conclusions: The major conclusion from the research was that our initial hypothesis was wrong. Mineral analysis of pulp tissue indicated that upon application of K there was a significant increase in most of the major minerals. Subsequently, we developed a new hypothesis that K acts by increasing the water loss from the berry. In vitro studies of K-treated berries corroborated this hypothesis showing greater weight-loss of treated berries. This was not necessarily expressed in the vineyard as in some experiments berry weight remained unchanged, suggesting that the vine compensated for the enhanced water loss. Importantly, we also discovered that the efficacy of different K salts was strongly correlated to the pH of the salt solution: basic K salts had better efficacy than neutral or acidic salts and modifying the pH of the same salt changed its efficacy. It was therefore suggested that K changes the properties of the cuticle making it more susceptible to water loss. Of the practical aspects it was found that application of K to the clusters was sufficient to trigger its effect and that dual application of K had a stronger effect than single application. With regard to timing, it was realized that application of K after veraison was effective and the berries responded also when ripe. While the effect of K application was significant at harvest, it was mostly insignificant one week after application, suggesting that prolonged exposure to K was required. Implications: The scientific implications of the study are that the external mineral composition of the berry may have a significant role in sugar accumulation and that water loss may have an important role in sugar accumulation in grapes. It is not entirely clear how K modulates the cuticle but according to the literature its incorporation into the cuticle may increase its polarity and facilitate generation of "water bridges" between the flesh and the environment. The practical implications of this study are very significant because realizing the mode of action of K can facilitate a much more efficient application strategy. 
For example, it can be understood that sprays must be directed to the clusters rather than the whole vines and it can be predicted that the length of exposure is important. Also, by increasing the pH of simple K salts, the efficacy of the treatment can be enhanced, saving in the costs of the treatment. Finally, the ability of grape growers to apply K in a safe and knowledgeable way can have significant impact on the length of the season of early grape cultivars and improve the flavor of high grape yields which may otherwise have compromised sugar levels.
APA, Harvard, Vancouver, ISO, and other styles