Scientific literature on the topic "Data correlation with time stamp"

Create an accurate reference in the APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Data correlation with time stamp".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Data correlation with time stamp"

1

Jensen, Terry, Roy Brown, Gay Riegel, Lalan S. Wilfong, and John Russell Hoverman. "Time stamps used to measure the patient's clinic experience." Journal of Clinical Oncology 34, no. 7_suppl (March 1, 2016): 150. http://dx.doi.org/10.1200/jco.2016.34.7_suppl.150.

Full text
Abstract:
Background: In 2013, a patient-reported satisfaction survey indicated 19% of patients waited 20-40 minutes, 8% waited 40-60 minutes, and 4% waited over 1 hour. We initiated a project to objectively quantify the components of wait times to investigate opportunities for improvement. Methods: Utilizing existing technology in the practice management system, clinic staff use the Day List feature to capture time stamps as patients move through the clinic. We focused on provider appointments, but these visits could also include business office, labs, infusion and diagnostics. It was important to define where the wait(s) occurred. The time stamp durations measured are as follows: Arrival to Depart – duration of each appointment; Arrival to site to Exam Start – duration of activity until ready to be seen by the provider; includes rooming, labs and business office activity, and is used for comparison with the patient satisfaction survey responses; Exam Start to Depart – the provider portion of the office visit, including patient wait plus exam time. Three reports are generated: a Time Stamp Error Report indicating the completeness of data collection; an Average Wait Times Report with appointment counts by physician by site and average durations; and a Provider Wait Times Report with office visit counts, wait time category counts (< 10 min, 10-20, 20-40, 40-60, and > 1 hour) and average durations. Results: The time stamp data correlated with the patient satisfaction survey at 0.779, with long wait times more likely to be underreported by patients. Site and physician data were available for review at site Quality Committees. The data can be used by the site to improve processes, such as lab and infusion room scheduling. Time stamps are used to communicate patient readiness for next steps in the office visit. The time stamps also provide objective data for discussing patient complaints with staff. Conclusions: Patient wait times are a valued measure of patient satisfaction and quality. Full utilization of the Day List and supporting technology allows us to objectively monitor and improve this aspect of patient care. Table 1: Sample Provider Report [Table: see text]
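The 0.779 figure above is a plain Pearson correlation between two paired series. For readers who want the computation spelled out, a minimal sketch in Python (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical paired observations: share of visits per wait-time category
# as measured by time stamps vs. as reported in the satisfaction survey.
measured = np.array([0.52, 0.19, 0.17, 0.08, 0.04])
surveyed = np.array([0.55, 0.21, 0.14, 0.06, 0.04])

r = np.corrcoef(measured, surveyed)[0, 1]
print(f"Pearson r = {r:.3f}")
```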
2

Fisher-Levine, Merlin, Rebecca Boll, Farzaneh Ziaee, Cédric Bomme, Benjamin Erk, Dimitrios Rompotis, Tatiana Marchenko, Andrei Nomerotski, and Daniel Rolles. "Time-resolved ion imaging at free-electron lasers using TimepixCam." Journal of Synchrotron Radiation 25, no. 2 (February 20, 2018): 336–45. http://dx.doi.org/10.1107/s1600577517018306.

Full text
Abstract:
The application of a novel fast optical-imaging camera, TimepixCam, to molecular photoionization experiments using the velocity-map imaging technique at a free-electron laser is described. TimepixCam is a 256 × 256 pixel CMOS camera that is able to detect and time-stamp ion hits with 20 ns timing resolution, thus making it possible to record ion momentum images for all fragment ions simultaneously and avoiding the need to gate the detector on a single fragment. This allows the recording of significantly more data within a given amount of beam time and is particularly useful for pump–probe experiments, where drifts, for example, in the timing and pulse energy of the free-electron laser, severely limit the comparability of pump–probe scans for different fragments taken consecutively. In principle, this also allows ion–ion covariance or coincidence techniques to be applied to determine angular correlations between fragments.
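The ion–ion covariance technique mentioned at the end works shot by shot: for fragments A and B counted over many laser shots, cov(A, B) = <AB> - <A><B> flags fragments that fluctuate together. A minimal sketch with synthetic shot data (not TimepixCam output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots = 10_000

# Synthetic per-shot ion counts: fragments A and B share a common
# dissociation channel, so their counts fluctuate together.
channel = rng.poisson(3.0, n_shots)            # parent ions dissociating per shot
count_a = channel + rng.poisson(1.0, n_shots)  # fragment A + uncorrelated background
count_b = channel + rng.poisson(1.0, n_shots)  # fragment B + uncorrelated background

# Covariance <AB> - <A><B>; positive values indicate correlated fragments.
cov_ab = (count_a * count_b).mean() - count_a.mean() * count_b.mean()
print(f"cov(A, B) = {cov_ab:.2f}")
```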
3

Guo, Canyang, Genggeng Liu, and Chi-Hua Chen. "Air Pollution Concentration Forecast Method Based on the Deep Ensemble Neural Network." Wireless Communications and Mobile Computing 2020 (October 5, 2020): 1–13. http://dx.doi.org/10.1155/2020/8854649.

Full text
Abstract:
The global environment has become more polluted due to the rapid development of industrial technology. However, existing machine learning methods for air quality prediction fail to analyze the reasons for changes in air pollution concentration, because most of them focus on model selection. Since the framework of recent deep learning is very flexible, the model may be deep and complex in order to fit the dataset, so overfitting problems may exist in a single deep neural network model when the number of weights in the model is large. Besides, the learning rate of stochastic gradient descent (SGD) treats all parameters equally, which can result in local optimal solutions. In this paper, the Pearson correlation coefficient is used to analyze the inherent correlation of PM2.5 with auxiliary data such as meteorological data, season data, and time stamp data, which are applied to clustering to enhance performance. The extracted features are used to build a deep ensemble network (EN) model which combines a recurrent neural network (RNN), a long short-term memory (LSTM) network, and a gated recurrent unit (GRU) network to predict the PM2.5 concentration of the next hour. The weights of the submodels change with their accuracy on the validation set, so the ensemble has generalization ability. Adaptive moment estimation (Adam), an algorithm for stochastic optimization, is used to optimize the weights instead of SGD. In order to compare the overall performance of the different algorithms, the mean absolute error (MAE) and mean absolute percentage error (MAPE) are used as accuracy metrics in the experiments of this study. The experimental results show that the proposed method achieves MAE = 6.19 and MAPE = 16.20% and outperforms the comparative models.
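The two reported metrics have simple closed forms, MAE = mean(|y - yhat|) and MAPE = mean(|y - yhat| / y) * 100%. The sketch below computes both and combines three submodels with validation-accuracy-based weights; the inverse-MAE weighting rule is our assumption, since the abstract does not give the exact formula:

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Hypothetical validation predictions from three submodels (RNN, LSTM, GRU).
y_val = np.array([35.0, 42.0, 50.0, 38.0])
preds = {
    "rnn":  np.array([30.0, 47.0, 44.0, 41.0]),
    "lstm": np.array([34.0, 43.0, 49.0, 39.0]),
    "gru":  np.array([33.0, 44.0, 48.0, 40.0]),
}

# One plausible weighting rule: inverse validation MAE, normalized to sum to 1.
inv_errors = {name: 1.0 / mae(y_val, p) for name, p in preds.items()}
total = sum(inv_errors.values())
weights = {name: e / total for name, e in inv_errors.items()}

ensemble = sum(weights[name] * preds[name] for name in preds)
print(weights)
print(f"ensemble MAE = {mae(y_val, ensemble):.2f}, MAPE = {mape(y_val, ensemble):.2f}%")
```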
4

Khairul Anuar, Noor Hafizah, Mohd Amri Md Yunus, Muhammad Ariff Baharudin, Sallehuddin Ibrahim, Shafishuhaza Sahlan, and Mahdi Faramarzi. "An assessment of stingless beehive climate impact using multivariate recurrent neural networks." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 2 (April 1, 2023): 2030. http://dx.doi.org/10.11591/ijece.v13i2.pp2030-2039.

Full text
Abstract:
A healthy bee colony depends on various elements, including a stable habitat, a sufficient source of food, and favorable weather. This paper aims to assess the stingless beehive climate data and examine a precise short-term forecast model for hive weight output. The dataset was extracted from a single hive over approximately 36 hours, with a time stamp every seven seconds. The correlation analysis between all variables is presented. The root-mean-square error (RMSE), as well as the RMSE performance of various types of topologies, is evaluated on four different forecasting window sizes. The proposed forecast model considers seven input vectors: hive weight, inside temperature, inside humidity, outside temperature, outside humidity, dewpoint, and bee count. The network architectures examined for minimal RMSE are long short-term memory (LSTM) and gated recurrent units (GRU). The LSTM1X50 topology was found to be the best fit when analyzing several forecasting window sizes for the beehive weight forecast. The results obtained can indicate significant unusual symptoms occurring in the stingless bee colonies, allowing beekeepers to make decisions with the main objective of improving the colony's health and propagation.
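RMSE, the selection criterion used above, is just the square root of the mean squared forecast error. A minimal sketch (hypothetical hive-weight values, not the paper's dataset):

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Hypothetical hive-weight forecasts (kg) for one forecasting window.
observed  = [42.1, 42.0, 41.8, 41.9, 42.3]
predicted = [42.0, 42.2, 41.7, 42.0, 42.1]
print(f"RMSE = {rmse(observed, predicted):.3f} kg")
```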
5

Meyerson, William U., Sarah K. Fineberg, Ye Kyung Song, Adam Faber, Garrett Ash, Fernanda C. Andrade, Philip Corlett, Mark B. Gerstein, and Rick H. Hoyle. "Estimation of Bedtimes of Reddit Users: Integrated Analysis of Time Stamps and Surveys." JMIR Formative Research 7 (January 17, 2023): e38112. http://dx.doi.org/10.2196/38112.

Full text
Abstract:
Background Individuals with later bedtimes have an increased risk of difficulties with mood and substances. To investigate the causes and consequences of late bedtimes and other sleep patterns, researchers are exploring social media as a data source. Pioneering studies inferred sleep patterns directly from social media data. While innovative, these efforts are variously unscalable, context dependent, confined to specific sleep parameters, or rest on untested assumptions, and none of the reviewed studies apply to the popular Reddit platform or release software to the research community. Objective This study builds on this prior work. We estimate the bedtimes of Reddit users from the time stamps of their posts, test inference validity against survey data, and release our model as an R package (The R Foundation). Methods We included 159 sufficiently active Reddit users with known time zones and known, nonanomalous bedtimes, together with the time stamps of their 2.1 million posts. The model’s form was chosen by visualizing the aggregate distribution of the timing of users’ posts relative to their reported bedtimes. The chosen model represents a user’s frequency of Reddit posting by time of day, with a flat portion before bedtime and a quadratic depletion that begins near the user’s bedtime, with parameters fitted to the data. This model estimates the bedtimes of individual Reddit users from the time stamps of their posts. Model performance is assessed through k-fold cross-validation. We then apply the model to estimate the bedtimes of 51,372 sufficiently active, nonbot Reddit users with known time zones from the time stamps of their 140 million posts. Results The Pearson correlation between expected and observed Reddit posting frequencies in our model was 0.997 on aggregate data. On average, posting starts declining 45 minutes before bedtime, reaches a nadir 4.75 hours after bedtime that is 87% lower than the daytime rate, and returns to baseline 10.25 hours after bedtime. The Pearson correlation between inferred and reported bedtimes for individual users was 0.61 (P<.001). In 90 of 159 cases (56.6%), our estimate was within 1 hour of the reported bedtime; 128 cases (80.5%) were within 2 hours. There was equivalent accuracy in hold-out sets versus training sets of k-fold cross-validation, arguing against overfitting. The model was more accurate than a random forest approach. Conclusions We uncovered a simple, reproducible relationship between Reddit users’ reported bedtimes and the time of day when high daytime posting rates transition to low nighttime posting rates. We captured this relationship in a model that estimates users’ bedtimes from the time stamps of their posts. Limitations include applicability only to users who post frequently, the requirement for time zone data, and limits on generalizability. Nonetheless, it is a step forward for inferring the sleep parameters of social media users passively at scale. Our model and precomputed estimated bedtimes of 50,000 Reddit users are freely available.
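Reading the reported numbers back into a curve: posting is flat until about 45 minutes before bedtime, bottoms out 87% below the daytime rate 4.75 hours after bedtime, and recovers by 10.25 hours after bedtime. Since the reported dip is symmetric about its nadir (5.5 hours on each side), a single parabola reproduces it. A sketch of such a curve (the parameterization is ours, not the authors' released R package):

```python
import numpy as np

def posting_rate(h, start=-0.75, nadir=4.75, base=1.0, depth=0.87):
    """Flat daytime posting rate with a parabolic night-time dip.

    h: hours relative to bedtime. The dip starts at `start` (45 min before
    bedtime), bottoms out `depth` below `base` at `nadir`, and recovers to
    `base` 10.25 h after bedtime. Illustrative only; the paper fits its own
    parameters per user.
    """
    h = np.asarray(h, dtype=float)
    rate = np.full_like(h, base)
    half_width = nadir - start                      # 5.5 hours
    night = np.abs(h - nadir) < half_width
    dip = depth * base * (1.0 - ((h - nadir) / half_width) ** 2)
    rate[night] -= dip[night]
    return rate

hours = np.arange(-6.0, 14.0, 2.0)
print(np.round(posting_rate(hours), 2))   # dips to 0.13 near the nadir
```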
6

Sheu, Ruey-Kai, Mayuresh Pardeshi, Lun-Chi Chen, and Shyan-Ming Yuan. "STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters." Sensors 19, no. 13 (July 9, 2019): 3016. http://dx.doi.org/10.3390/s19133016.

Full text
Abstract:
There is strong demand for real-time suspicious tracking across multiple cameras in intelligent video surveillance for public areas, such as universities, airports and factories. Most criminal events show that suspicious behavior is carried out by unknown people who try to hide themselves as much as possible. Previous learning-based studies collected large data sets to train learning models to detect humans across multiple cameras but failed to recognize newcomers. There are also several feature-based studies aimed at identifying humans in within-camera tracking. It would be very difficult for those methods to obtain the necessary feature information in multi-camera scenarios and scenes. The purpose of this study is to design and implement a suspicious tracking mechanism across multiple cameras based on correlation filters, called suspicious tracking across multiple cameras based on correlation filters (STAM-CCF). By leveraging the geographical information of cameras and the YOLO object detection framework, STAM-CCF adjusts human identification and prevents errors caused by information loss in cases of object occlusion and overlapping in within-camera tracking. STAM-CCF also introduces a camera correlation model and a two-stage gait recognition strategy to deal with the problem of re-identification across multiple cameras. Experimental results show that the proposed method performs well with highly acceptable accuracy. The evidence also shows that the proposed STAM-CCF method can continuously recognize suspicious behavior in within-camera tracking and re-identify it successfully across multiple cameras.
7

McAlister, Merritt. "'Downright Indifference': Examining Unpublished Decisions in the Federal Courts of Appeals." Michigan Law Review, no. 118.4 (2020): 533. http://dx.doi.org/10.36644/mlr.118.4.downright.

Full text
Abstract:
Nearly 90 percent of the work of the federal courts of appeals looks nothing like the opinions law students read in casebooks. Over the last fifty years, the so-called “unpublished decision” has overtaken the federal appellate courts in response to a caseload volume “crisis.” These are often short, perfunctory decisions that make no law; they are, one federal judge said, “not safe for human consumption.” The creation of the inferior unpublished decision also has created an inferior track of appellate justice for a class of appellants: indigent litigants. The federal appellate courts routinely shunt indigent appeals to a second-tier appellate process in which judicial staff attorneys resolve appeals without oral argument or meaningful judicial oversight. For the system’s most vulnerable participants, the promise of an appeal as of right often becomes a rubber stamp: “You lose.” This work examines the product of that second-class appellate justice system by filling two critical gaps in the existing literature. First, it compiles comprehensive data on the use of unpublished decisions across the circuits over the last twenty years. The data reveal, for the first time, that the courts’ continued—and increasing—reliance on unpublished decisions has no correlation to overall caseload volume. Second, it examines the output of the second-tier appellate justice system from the perspective of the litigants themselves. Relying on a procedural justice framework, this work develops a taxonomy of unpublished decisions and argues for minimum standards for reason-giving in most unpublished decisions.
8

Davies, Alyse, Margaret Allman-Farinelli, Katherine Owen, Louise Signal, Cameron Hosking, Leanne Wang, and Adrian Bauman. "Feasibility Study Comparing Physical Activity Classifications from Accelerometers with Wearable Camera Data." International Journal of Environmental Research and Public Health 17, no. 24 (December 13, 2020): 9323. http://dx.doi.org/10.3390/ijerph17249323.

Full text
Abstract:
Device-based assessments are frequently used to measure physical activity (PA) but contextual measures are often lacking. There is a need for new methods, and one under-explored option is the use of wearable cameras. This study tested the use of wearable cameras in PA measurement by comparing intensity classifications from accelerometers with wearable camera data. Seventy-eight 18–30-year-olds wore an Actigraph GT9X link accelerometer and Autographer wearable camera for three consecutive days. An image coding schedule was designed to assess activity categories and activity sub-categories defined by the 2011 Compendium of Physical Activities (Compendium). Accelerometer hourly detailed files processed using the Montoye (2020) cut-points were linked to camera data using date and time stamps. Agreement was examined using equivalence testing, intraclass correlation coefficient (ICC) and Spearman’s correlation coefficient (rho). Fifty-three participants contributing 636 person-hours were included. Reliability was moderate to good for sedentary behavior (rho = 0.77), light intensity activities (rho = 0.59) and moderate-to-vigorous physical activity (MVPA) (rho = 0.51). The estimates of sedentary behavior, light activity and MVPA from the two methods were similar, but not equivalent. Wearable cameras are a potential complementary tool for PA measurement, but practical challenges and limitations exist. While wearable cameras may not be feasible for use in large scale studies, they may be feasible in small scale studies where context is important.
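Linking the two devices comes down to joining records on their time stamps. A minimal sketch of such a time-stamp join in pandas (column names and values are hypothetical; the study linked hourly accelerometer files to image times):

```python
import pandas as pd

# Hypothetical device records keyed by time stamp.
accel = pd.DataFrame({
    "time": pd.to_datetime(["2020-01-06 09:00:04", "2020-01-06 09:00:34",
                            "2020-01-06 09:01:04"]),
    "intensity": ["sedentary", "light", "MVPA"],
})
camera = pd.DataFrame({
    "time": pd.to_datetime(["2020-01-06 09:00:06", "2020-01-06 09:00:36",
                            "2020-01-06 09:01:02"]),
    "activity_code": ["sitting", "walking", "running"],
})

# Match each image to the nearest accelerometer epoch within 30 seconds.
linked = pd.merge_asof(camera.sort_values("time"), accel.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("30s"))
print(linked)
```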
9

Huang, Huiqun, Xi Yang, and Suining He. "Multi-Head Spatio-Temporal Attention Mechanism for Urban Anomaly Event Prediction." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–21. http://dx.doi.org/10.1145/3478099.

Full text
Abstract:
Timely forecasting of urban anomaly events is of great importance to city management and planning. However, anomaly event prediction is highly challenging due to the sparseness of data, geographic heterogeneity (e.g., complex spatial correlation, skewed spatial distribution of anomaly events and crowd flows), and dynamic temporal dependencies. In this study, we propose M-STAP, a novel Multi-head Spatio-Temporal Attention Prediction approach to address the problem of multi-region urban anomaly event prediction. Specifically, M-STAP considers the problem from three main aspects: (1) extracting the spatial characteristics of the anomaly events in different regions, and the spatial correlations between anomaly events and crowd flows; (2) modeling the impacts of crowd flow dynamics of the most relevant regions at each time step on the anomaly events; and (3) employing an attention mechanism to analyze the varying impacts of the historical anomaly events on the predicted data. We have conducted extensive experimental studies on the crowd flow and anomaly event data of New York City, Melbourne and Chicago. Our proposed model shows higher accuracy (41.91% improvement on average) in predicting multi-region anomaly events compared with the state of the art.
10

Hakim, Wahyu, Arief Achmad, and Chang-Wook Lee. "Land Subsidence Susceptibility Mapping in Jakarta Using Functional and Meta-Ensemble Machine Learning Algorithm Based on Time-Series InSAR Data." Remote Sensing 12, no. 21 (November 4, 2020): 3627. http://dx.doi.org/10.3390/rs12213627.

Full text
Abstract:
Areas at risk of land subsidence in Jakarta can be identified using a land subsidence susceptibility map. This study evaluates the quality of a susceptibility map made using functional (logistic regression and multilayer perceptron) and meta-ensemble (AdaBoost and LogitBoost) machine learning algorithms based on a land subsidence inventory map generated using the Sentinel-1 synthetic aperture radar (SAR) dataset from 2017 to 2020. The land subsidence locations were assessed using the time-series interferometric synthetic aperture radar (InSAR) method based on the Stanford Method for Persistent Scatterers (StaMPS) algorithm. The mean vertical deformation maps from ascending and descending tracks were compared and showed a good correlation between displacement patterns. Persistent scatterer points with mean vertical deformation values were randomly divided into two datasets: 50% for training the susceptibility model and 50% for validating the model in terms of accuracy and reliability. Additionally, 14 land subsidence conditioning factors correlated with subsidence occurrence were used to generate land subsidence susceptibility maps from the four algorithms. The receiver operating characteristic (ROC) curve analysis showed that the AdaBoost algorithm has higher subsidence susceptibility prediction accuracy (81.1%) than the multilayer perceptron (80%), logistic regression (79.4%), and LogitBoost (79.1%) algorithms. The land subsidence susceptibility map can be used to mitigate disasters caused by land subsidence in Jakarta, and our method can be applied to other study areas.

Theses on the topic "Data correlation with time stamp"

1

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
2

Ahsan, Ramoza. "Time Series Data Analytics." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/529.

Full text
Abstract:
Given the ubiquity of time series data, and the exponential growth of databases, there has recently been an explosion of interest in time series data mining. Finding similar trends and patterns among time series data is critical for many applications ranging from financial planning, weather forecasting, and stock analysis to policy making. With time series being high-dimensional objects, detection of similar trends, especially at the granularity of subsequences or among time series of different lengths and temporal misalignments, incurs prohibitively high computation costs. Finding trends using non-metric correlation measures further compounds the complexity, as traditional pruning techniques cannot be directly applied. My dissertation addresses these challenges while meeting the need to achieve near real-time responsiveness. First, for retrieving exact similarity results using Lp-norm distances, we design a two-layered time series index for subsequence matching. Time series relationships are compactly organized in a directed acyclic graph embedded with similarity vectors capturing subsequence similarities. Powerful pruning strategies leveraging the graph structure greatly reduce the number of time series as well as subsequence comparisons, resulting in a speed-up of several orders of magnitude. Second, to support a rich diversity of correlation analytics operations, we compress time series into Euclidean-based clusters augmented by a compact overlay graph encoding correlation relationships. Such a framework supports a rich variety of operations including retrieving positive or negative correlations, self-correlations and finding groups of correlated sequences. Third, to support flexible similarity specification using computationally expensive warped distances like Dynamic Time Warping, we design data reduction strategies leveraging the inexpensive Euclidean distance with subsequent time-warped matching on the reduced data. This facilitates the comparison of sequences of different lengths and with flexible alignment, still within a few seconds of response time. Comprehensive experimental studies using real-world and synthetic datasets demonstrate the efficiency, effectiveness and quality of the results achieved by our proposed techniques as compared to the state-of-the-art methods.
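The "inexpensive Euclidean distance" and Pearson correlation mentioned above are tightly linked: for z-normalized sequences of length n, ||x - y||^2 = 2n(1 - rho(x, y)), which is what allows Euclidean-based clusters and indexes to prune correlation searches. A quick numerical check of the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = rng.normal(size=n).cumsum()   # random-walk time series
y = rng.normal(size=n).cumsum()

def znorm(s):
    # Zero mean, unit (population) standard deviation.
    return (s - s.mean()) / s.std()

d2 = np.sum((znorm(x) - znorm(y)) ** 2)
rho = np.corrcoef(x, y)[0, 1]
print(d2, 2 * n * (1 - rho))      # the two values agree exactly
```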
3

Hedlund, Tobias, and Xingya Zhou. "Correlation and Graphical Presentation of Event Data from a Real-Time System." Thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-88741.

Full text
Abstract:
Event data from different parts of a system are often recorded in event logs. Individual logs usually show only a small part of the system, but correlating different sources into a consistent context makes it possible to gain further information and a wider view. This facilitates finding sources of errors or particular behaviors within the system.

This thesis presents the correlation possibilities between event data from different layers of the Ericsson Connectivity Packet Platform (CPP). This was done by first developing and using a test base application for the OSE operating system, through which the event data can be recorded for the same test cases. The log files containing the event data have been studied, and results are presented regarding format, structure and content. For reading and storing the event data, suggestions for interpreters and data models are also provided. Finally, a prototype application is presented, which provides the defined interpreters, data models and a graphical user interface to represent the event data and event data correlations. The programming was conducted in Java and the application is implemented as an Eclipse plug-in. With the help of the application the user gets a better overview and a more intuitive way of working with the event data.
4

Zhang, Kang, M. Eng., Massachusetts Institute of Technology. "Learning time series data using cross correlation and its application in bitcoin price prediction." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91884.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
In this work, we developed a quantitative trading algorithm for bitcoin that is shown to be profitable. The algorithm establishes a framework that combines parametric and non-parametric variables in a logistic regression model, capturing information in both the static states and the evolution of states. The combination improves the performance of the strategy. In addition, we demonstrated that we can discover curve similarity of time series using cross correlation and L2 distance. The similarity metrics can be efficiently computed using convolution and can help us learn from past instances using an ensemble voting scheme.
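The efficiency claim rests on computing the cross-correlation at every lag through the FFT rather than a sliding dot product. A minimal sketch with synthetic series (not bitcoin prices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n).cumsum()
y = np.roll(x, 40) + rng.normal(scale=0.5, size=n)   # y lags x by 40 steps

# z-normalize, then get the correlation at every lag in one FFT pass
# (O(n log n) instead of O(n^2) for the naive sliding dot product).
xz = (x - x.mean()) / x.std()
yz = (y - y.mean()) / y.std()
spec = np.fft.rfft(yz, 2 * n) * np.conj(np.fft.rfft(xz, 2 * n))
xcorr = np.fft.irfft(spec)[:n] / n   # xcorr[k]: correlation of x with y delayed by k

print("best lag:", int(np.argmax(xcorr)), "corr:", round(float(xcorr.max()), 3))
```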
by Kang Zhang.
M. Eng.
5

Huo, Shiyin. "Detecting Self-Correlation of Nonlinear, Lognormal, Time-Series Data via DBSCAN Clustering Method, Using Stock Price Data as Example." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1321989426.

Full text
6

黎文傑 and Man-kit Lai. "Some results on the statistical analysis of directional data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211550.

Full text
7

Lai, Man-kit. "Some results on the statistical analysis of directional data." [Hong Kong: University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13787950.

Full text
8

Zheng, Xueying, and 郑雪莹. "Robust joint mean-covariance model selection and time-varying correlation structure estimation for dependent data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899703.

Full text
Abstract:
In longitudinal and spatio-temporal data analysis, repeated measurements from a subject can be either regionally or temporally dependent. The correct specification of the within-subject covariance matrix enables efficient estimation of the mean regression coefficients. In this thesis, robust estimation for the mean and covariance jointly for the regression model of longitudinal data within the framework of generalized estimating equations (GEE) is developed. The proposed approach integrates the robust method and joint mean-covariance regression modeling. Robust generalized estimating equations using bounded scores and leverage-based weights are employed for the mean and covariance to achieve robustness against outliers. The resulting estimators are shown to be consistent and asymptotically normally distributed. A robust variable selection method in a joint mean and covariance model is then considered, by proposing a set of penalized robust generalized estimating equations to estimate simultaneously the mean regression coefficients, the generalized autoregressive coefficients and the innovation variances introduced by the modified Cholesky decomposition. The set of estimating equations selects important covariates in both the mean and covariance models together with the estimating procedure. Under some regularity conditions, the oracle property of the proposed robust variable selection method is developed. For these two robust joint mean and covariance models, simulation studies and a hormone data set analysis are carried out to assess and illustrate the small-sample performance; these show that the proposed methods perform favorably by combining the robustifying and penalized estimating techniques in the joint mean and covariance model. Capturing the dynamic change of a time-varying correlation structure is both interesting and scientifically important in spatio-temporal data analysis. The time-varying empirical estimator of the spatial correlation matrix is approximated by groups of selected basis matrices which represent substructures of the correlation matrix. After projecting the correlation structure matrix onto the space spanned by the basis matrices, varying-coefficient model selection and estimation for signals associated with relevant basis matrices are incorporated. The unique feature of the proposed model and estimation is that time-dependent local region signals can be detected by the proposed penalized objective function. In theory, model selection consistency for detecting local signals is provided. The proposed method is illustrated through simulation studies and a functional magnetic resonance imaging (fMRI) data set from an attention deficit hyperactivity disorder (ADHD) study.
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
9

Abou-Galala, Feras Moustafa. "True-time all optical performance monitoring by means of optical correlation." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180549555.

Full text
10

Aslan, Sipan. "Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612426/index.pdf.

Full text
Abstract:
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Since the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series will cause severe problems. In order to make reliable analyses in this case, missing data must be handled cautiously without disturbing the statistical properties of the series, mainly the temporal and spatial dependencies. In this study we aimed to compare several imputation methods for the appropriate completion of missing values in spatio-temporal meteorological time series. For this purpose, several missing-data imputation methods are assessed on their imputation performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated using six methods. Single Arithmetic Average (SAA), Normal Ratio (NR) and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. On the other hand, two computationally intensive methods are used for missing data imputation: a Multi Layer Perceptron type Neural Network (MLPNN) and a Monte Carlo Markov Chain based on the Expectation-Maximization Algorithm (EM-MCMC). In addition, we propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides using an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique for an appropriate evaluation of imputation performance, which is also an important subject of nonlinear dynamic time series analysis.
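Of the simple methods named, the Normal Ratio estimator fills a missing value at a target station from neighboring stations, scaling each neighbor by the ratio of the stations' long-term normals: P_x = (1/n) * sum_i (N_x / N_i) * P_i. A sketch with hypothetical station values:

```python
# Normal Ratio (NR) imputation: estimate the target station's missing monthly
# precipitation from neighbor observations, each scaled by the ratio of the
# stations' long-term normals. All values here are hypothetical.
target_normal = 55.0                     # long-term monthly normal at target (mm)
neighbors = [                            # (observed value, long-term normal)
    (48.0, 50.0),
    (61.0, 66.0),
    (52.0, 58.0),
]

estimate = sum((target_normal / normal) * obs
               for obs, normal in neighbors) / len(neighbors)
print(f"NR estimate: {estimate:.1f} mm")
```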

Books on the topic "Data correlation with time stamp"

1

King, Wayne M. Multitaper spectral estimation and time-domain cross-correlation in FMRI data analysis: Actual and simulated data. 1999.

Find full text
2

Alsuwailem, Abdullah M. A low-cost microprocessor-based correlator for high bandwidth data: Polarity correlation is performed from zero-crossing time data logged by DMA controllers at speeds of up to 65 kHz for photon correlation laser Doppler velocity measurement. Bradford, 1986.

Find full text
3

van der Wal, Jenneke. A Featural Typology of Bantu Agreement. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198844280.001.0001.

Full text
Abstract:
The Bantu languages are in some sense remarkably uniform (subject-verb-object (SVO) basic word order, noun classes, verbal morphology), but this extensive language family also shows a wealth of morphosyntactic variation. Two core areas in which such variation is attested are subject and object agreement. The book explores the variation in Bantu subject and object marking on the basis of data from 75 Bantu languages, discovering striking patterns (the Relation between Asymmetry and Non-Doubling Object Marking (RANDOM), and the Asymmetry Wants Single Object Marking (AWSOM) correlation), and providing a novel syntactic analysis. This analysis takes into account not just phi agreement, but also nominal licensing and information structure. A Person feature, associated with animacy, definiteness, or givenness, is shown to be responsible for differential object agreement, while at the same time accounting for doubling vs. non-doubling object marking—a hybrid solution to an age-old debate in Bantu comparative morphosyntax. It is furthermore proposed that low functional heads can Case-license flexibly downwards or upwards, depending on the relative topicality of the two arguments involved. This accounts for the properties of symmetric object marking in ditransitives (for Appl), and subject inversion constructions (for v). By keeping Agree constant and systematically determining which featural parameters are responsible for the attested variation, the proposed analysis argues for an emergentist view of features and parameters (following Biberauer 2018, 2019), and against both Strong Uniformity and Strong Modularity.
4

Boothroyd, Andrew T. Principles of Neutron Scattering from Condensed Matter. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198862314.001.0001.

Full text
Abstract:
The book contains a comprehensive account of the theory and application of neutron scattering for the study of the structure and dynamics of condensed matter. All the principal experimental techniques available at national and international neutron scattering facilities are covered. The formal theory is presented, and used to show how neutron scattering measurements give direct access to a variety of correlation and response functions which characterize the equilibrium properties of bulk matter. The determination of atomic arrangements and magnetic structures by neutron diffraction and neutron optical methods is described, including single-crystal and powder diffraction, diffuse scattering from disordered structures, total scattering, small-angle scattering, reflectometry, and imaging. The principles behind the main neutron spectroscopic techniques are explained, including continuous and time-of-flight inelastic scattering, quasielastic scattering, spin-echo spectroscopy, and Compton scattering. The scattering cross-sections for atomic vibrations in solids, diffusive motion in atomic and molecular fluids, and single-atom and cooperative magnetic excitations are calculated. A detailed account of neutron polarization analysis is given, together with examples of how polarized neutrons can be exploited to obtain information about structural and magnetic correlations which cannot be obtained by other methods. Alongside the theoretical aspects, the book also describes the essential practical information needed to perform experiments and to analyse and interpret the data. Exercises are included at the end of each chapter to consolidate and enhance understanding of the material, and a summary of relevant results from mathematics, quantum mechanics, and linear response theory, is given in the appendices.
5

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Full text
Abstract:
The quality of data collected in official spatial databases is crucial in making strategic decisions as well as in the implementation of planning and design works. Awareness of the quality level of these data is also important for individual users of official spatial data. The author presents methods and models for describing and evaluating the quality of spatial data collected in public registers. Data describing the space in the highest degree of detail, which are collected in three databases, were analyzed: the land and buildings registry (EGiB), the geodetic registry of the land infrastructure network (GESUT) and the database of topographic objects (BDOT500). The results of the research concern selected aspects of spatial data quality: the assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels; analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data; construction of a quality model of data collected in official databases; and visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of coordinate deviations with three variables dX, dY and Dl – the deviations from the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as faultless. The compatibility of the empirical data accuracy distributions with models (theoretical distributions of random variables) was investigated, and the accuracy of the spatial data was also assessed by means of methods resistant to outliers. In the process of determining the accuracy of spatial data collected in public registers, the author's own solution was used: the resistant method of relative frequency. Weight functions were proposed which modify (to varying degrees) the sizes of the vectors Dl – the lengths of the point offset vectors of the test sample in relation to their positions recognized as faultless. Regarding the uncertainty of the estimation of the area of registry parcels, the impact of the errors of the geodetic network points (reference points and points of the higher-class networks) was determined, as well as the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction of plot areas (in the EGiB database), calculated on the basis of re-measurements performed using techniques equivalent in terms of accuracy, was determined. The analysis of the risk of damage to the underground infrastructure network due to the low quality of spatial data is another research topic presented in the paper. Three main factors were identified that influence the value of this risk: incompleteness of spatial data sets and insufficient accuracy of the determination of the horizontal and vertical position of the underground infrastructure. A method for estimating the project risk (quantitative and qualitative) was developed, and the author's own risk estimation technique, based on the idea of fuzzy logic, was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form.
The data quality model is a set of rules used to describe the quality of these data sets. The proposed model defines a standardized approach to assessing and reporting the quality of the EGiB, GESUT and BDOT500 spatial databases. Quantitative and qualitative rules (automatic, office and field) for data set control were defined. The minimum sample size and the number of admissible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to users' needs. The values of the impact weights were determined by the analytic hierarchy process (AHP) method. The harmonization of the conceptual models of the EGiB, GESUT and BDOT500 databases with the BDOT10k database was also analysed. It was found that the downloading and supplying of information in the BDOT10k creation and update processes from the analyzed registers is limited. An effective approach to providing users of spatial data sets with information concerning data uncertainty is cartographic visualization techniques. Based on the author's own experience and research on examining the quality of official spatial databases, a set of methods for visualizing the uncertainty of the EGiB, GESUT and BDOT500 databases was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of position was defined (for surface, line, and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing (for example) the completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and applied importance. They indicate the possibility of effective evaluation of the quality of spatial data collected in public registers and may be an important element of an expert system.

Book chapters on the topic "Data correlation with time stamp"

1

Székely, Gábor J., and Maria L. Rizzo. "Time Series and Distance Correlation." In The Energy of Data and Distance Correlation, 365–74. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9780429157158-21.

Full text
2

Shumway, Robert H., and David S. Stoffer. "Correlation and Stationary Time Series." In Time Series: A Data Analysis Approach Using R, 17–35. Boca Raton: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9780429273285-2.

Full text
3

Zhang, Pusheng. "Correlation Queries in Spatial Time Series Data." In Encyclopedia of GIS, 1–4. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-23519-6_221-2.

Full text
4

Zhang, Pusheng. "Correlation Queries in Spatial Time Series Data." In Encyclopedia of GIS, 368–71. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-17885-1_221.

Full text
5

Zhang, Pusheng. "Correlation Queries in Spatial Time Series Data." In Encyclopedia of GIS, 176–79. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_221.

Full text
6

Amagata, Daichi, and Takahiro Hara. "Correlation Set Discovery on Time-Series Data." In Lecture Notes in Computer Science, 275–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27618-8_21.

Full text
7

Liu, Qian, Jinyan Li, Limsoon Wong, and Kotagiri Ramamohanarao. "Efficient Mining of Pan-Correlation Patterns from Time Course Data." In Advanced Data Mining and Applications, 234–49. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49586-6_16.

Full text
8

Rong, Chuitian, Lili Chen, Chunbin Lin, and Chao Yuan. "Parallel Variable-Length Motif Discovery in Time Series Using Subsequences Correlation." In Web and Big Data, 164–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60290-1_13.

Full text
9

Yohapriyaa, M., and M. Uma. "Multi-variant Classification of Depression Severity Using Social Media Networks Based on Time Stamp." In Intelligent Data Communication Technologies and Internet of Things, 553–64. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7610-9_41.

Full text
10

Wei, Wenjing, Xiaoyi Jia, Yang Liu, and Xiaohui Yu. "Travel Time Forecasting with Combination of Spatial-Temporal and Time Shifting Correlation in CNN-LSTM Neural Network." In Web and Big Data, 297–311. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96890-2_25.

Full text

Conference papers on the topic "Data correlation with time stamp"

1

Peneti, Subhashini, and B. Padmaja Rani. "Data leakage prevention system with time stamp." In 2016 International Conference on Information Communication and Embedded Systems (ICICES). IEEE, 2016. http://dx.doi.org/10.1109/icices.2016.7518934.

Full text
2

Kimachi, Akira, and Shigeru Ando. "Real-time range imaging by phase-stamp method using correlation image sensor." In Electronic Imaging 2007, edited by J. Angelo Beraldin, Fabio Remondino, and Mark R. Shortis. SPIE, 2007. http://dx.doi.org/10.1117/12.696311.

Full text
3

Wasko, Wojciech, Dotan David Levi, Teferet Geula, and Amit Mandelbaum. "Artificial Accurate Time-stamp in Network Adapters." In 2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). IEEE, 2021. http://dx.doi.org/10.1109/ispa-bdcloud-socialcom-sustaincom52081.2021.00218.

Full text
4

Koponen, Pekka, Antti Hilden, and Pertti Pakonen. "Measurement Data Based Time Stamp Synchronization in Power Quality Data Post Processing." In 2021 IEEE Madrid PowerTech. IEEE, 2021. http://dx.doi.org/10.1109/powertech46648.2021.9494909.

Full text
5

Kurihara, Toru, Maoto Iwata, and Shigeru Ando. "Phase-Stamp Particle-Trajectory Velocimetry Using Correlation Image Sensor." In ASME/JSME 2007 5th Joint Fluids Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/fedsm2007-37216.

Full text
Abstract:
In this paper, we propose a new measurement scheme, called phase-stamp particle-trajectory velocimetry (PSPTV), using a correlation image sensor (CIS). The correlation image sensor, developed by us, is a device that outputs the temporal correlation between the incident light intensity and three reference signals that are common to all pixels. By using the correlation image sensor with single-frequency reference signals, the time at which a tracer particle passes is embedded in each pixel in the form of the phase of the reference signals. This provides single-frame, high-resolution measurement of the 2D velocity field, together with good time resolution for transient phenomena. We show experimental results and confirm the method's performance.
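Assuming the three shared references are sinusoids offset by 120 degrees (an assumption; the abstract does not spell out the waveforms), the per-pixel correlations g0, g1, g2 recover the particle's passage time as a phase via the standard three-bucket formula:

```python
import numpy as np

def phase_from_correlations(g0, g1, g2):
    """Recover the phase (passage time modulo one reference period) from
    three correlations against references offset by 120 degrees:
      g_k = A + B * cos(phi + 2*pi*k/3)
    """
    return np.arctan2(np.sqrt(3.0) * (g2 - g1), 2.0 * g0 - g1 - g2)

# Synthetic check: a particle passing at phase 1.2 rad.
phi_true, A, B = 1.2, 5.0, 2.0
g = [A + B * np.cos(phi_true + 2 * np.pi * k / 3) for k in range(3)]
print(phase_from_correlations(*g))   # ~1.2
```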
6

Liu, Jingning, Tianming Yang, Zuoheng Li, and Ke Zhou. "TSPSCDP: A Time-Stamp Continuous Data Protection Approach Based on Pipeline Strategy." In 2008 Japan-China Joint Workshop on Frontier of Computer Science and Technology (FCST). IEEE, 2008. http://dx.doi.org/10.1109/fcst.2008.27.

Full text
7

Annaiyappa, Pradeep, John Macpherson, and Eric Cayeux. "Best Practices to Improve Accurate Time Stamping of Data at the Well Site." In IADC/SPE International Drilling Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/208732-ms.

Full text
Abstract:
Due to the nature of the drilling process, there are several companies collecting data at the rig. Each company's data acquisition system applies its own time stamp to the data. Subsequent aggregation of data, for example in a data lake, relies on synchronized time stamps applied to the different data sources in order to collate the data. Unfortunately, time stamps are rarely synchronized in practice. This paper documents the different sources of error in time stamping of data and provides some best practices to help mitigate some of these causes. There are many reasons for unsynchronized time stamping of data from different sources. It can be as simple as clock synchronization at the rig: each data-providing or data-producing company has an independent clock. It can also be due to where the time stamp is applied: for example, at the data source or on data reception. Additionally, it can be due to how the time stamp is applied: at the start of the interval, the mid-point, or the end. Many of the protocols used at the well site have high latency, such as mud pulse or electro-magnetic (EM) telemetry, or even WITS (Wellsite Information Transfer Standard), where the actual acquisition time may vary significantly from the time stamp. Finally, time stamping of derived data is always problematic given the unsynchronized nature of data sources. Synchronization of clocks within the data acquisition network is extremely important. The resolution of time synchronization depends on purpose: motion control, for example, demands high-resolution time keeping. However, for the purposes of local time stamping, synchronization to a Network Time Server with a resolution of one millisecond is sufficient. The issue is agreeing on a common source, and on the passage of the time signal through firewalls. Time stamping is a more involved matter, calling for agreement on standards and a degree of metadata transparency. The paper describes in some detail sender versus receiver time stamping, the downhole-to-surface time-stamp chain, and time stamping of derived data. Systems automation and interoperability at the rig site – allowing plug-and-play access to equipment and applications – rely on an agreed-upon network synchronization scheme. Indeed, designing applications that must handle uncertain time adds considerable complexity and cost, not to mention the impact on reliability. This paper presents an ordered approach to a quite resolvable problem.
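Two of the recommendations above, correcting each source to a common master clock and stamping an acquisition interval at an agreed point such as its midpoint, reduce to a small normalization step during aggregation. A sketch (the source names and offsets are hypothetical):

```python
from datetime import datetime, timedelta

# Per-source clock offsets relative to the agreed master clock (e.g., measured
# against a shared NTP server). Positive means the source's clock runs ahead.
clock_offsets = {
    "surface_daq": timedelta(milliseconds=-120),
    "mwd_decoder": timedelta(seconds=2.5),
}

def normalize(source, stamp, interval=timedelta(0)):
    """Correct a source's time stamp to the master clock and, if the value
    represents an acquisition interval stamped at its start, move the stamp
    to the interval midpoint."""
    return stamp - clock_offsets[source] + interval / 2

t = normalize("mwd_decoder", datetime(2022, 3, 8, 14, 30, 12),
              interval=timedelta(seconds=20))
print(t)  # 2022-03-08 14:30:19.500000 on the master clock
```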
8

Ahsan, Ramoza, Rodica Neamtu, Muzammil Bashir, Elke A. Rundensteiner, and Gabor Sarkozy. "Correlation-Based Analytics of Time Series Data." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378155.

Full text
9

Papadimitriou, Spiros, Jimeng Sun, and Philip Yu. "Local Correlation Tracking in Time Series." In Sixth International Conference on Data Mining (ICDM'06). IEEE, 2006. http://dx.doi.org/10.1109/icdm.2006.99.

Full text
10

Mueen, Abdullah, Hossein Hamooni, and Trilce Estrada. "Time Series Join on Subsequence Correlation." In 2014 IEEE International Conference on Data Mining (ICDM). IEEE, 2014. http://dx.doi.org/10.1109/icdm.2014.52.

Full text

Organization reports on the topic "Data correlation with time stamp"

1

Duvvuri, Sarvani, and Srinivas S. Pulugurtha. Researching Relationships between Truck Travel Time Performance Measures and On-Network and Off-Network Characteristics. Mineta Transportation Institute, July 2021. http://dx.doi.org/10.31979/mti.2021.1946.

Full text
Abstract:
Trucks carry a significant amount of freight tonnage and are more susceptible to complex interactions with other vehicles in a traffic stream. While traffic congestion continues to be a significant ‘highway’ problem, delays in truck travel result in loss of revenue to the trucking companies. There is significant research on traffic congestion mitigation, but very few studies have focused on data exclusive to trucks. This research is aimed at a regional-level analysis of truck travel time data to identify roads for improving mobility and reducing congestion for truck traffic. The objectives of the research are to compute and evaluate truck travel time performance measures (by time of the day and day of the week) and to use selected truck travel time performance measures to examine their correlation with on-network and off-network characteristics. Truck travel time data for the year 2019 were obtained and processed at the link level for Mecklenburg County, Wake County, and Buncombe County, NC. Various truck travel time performance measures were computed by time of the day and day of the week. Pearson correlation coefficient analysis was performed to select the average travel time (ATT), planning time index (PTI), travel time index (TTI), and buffer time index (BTI) for further analysis. On-network characteristics such as the speed limit, reference speed, annual average daily traffic (AADT), and the number of through lanes were extracted for each link. Similarly, off-network characteristics such as land use and demographic data in the near vicinity of each selected link were captured using 0.25 miles and 0.50 miles as buffer widths. The relationships between the selected truck travel time performance measures and the on-network and off-network characteristics were then analyzed using Pearson correlation coefficient analysis. The results indicate that urban areas, high-volume roads, and principal arterial roads are positively correlated with the truck travel time performance measures. Further, the presence of agricultural, light commercial, heavy commercial, light industrial, single-family residential, multi-family residential, office, transportation, and medical land uses increases the truck travel time performance measures (decreases the operational performance). The methodological approach and findings can be used to identify potential areas to serve as truck priority zones and to plan decentralized delivery locations.
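The four measures named above are commonly defined as ATT = mean travel time, TTI = ATT / free-flow travel time, PTI = 95th-percentile travel time / free-flow travel time, and BTI = (95th percentile - ATT) / ATT; the report's exact conventions may differ. A sketch over hypothetical link-level truck travel times:

```python
import numpy as np

free_flow = 4.0                                   # free-flow minutes over the link
travel_times = np.array([4.2, 4.5, 5.1, 4.3, 6.8, 4.4, 5.0, 7.9, 4.6, 4.8])

att = travel_times.mean()                         # average travel time (ATT)
p95 = np.percentile(travel_times, 95)             # 95th-percentile travel time
tti = att / free_flow                             # travel time index (TTI)
pti = p95 / free_flow                             # planning time index (PTI)
bti = (p95 - att) / att                           # buffer time index (BTI)
print(f"ATT={att:.2f} min, TTI={tti:.2f}, PTI={pti:.2f}, BTI={bti:.2f}")
```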
APA, Harvard, Vancouver, ISO, and other styles
2

Upadhyaya, Shrini K., Abraham Shaviv, Abraham Katzir, Itzhak Shmulevich and David S. Slaughter. Development of A Real-Time, In-Situ Nitrate Sensor. United States Department of Agriculture, March 2002. http://dx.doi.org/10.32747/2002.7586537.bard.

Full text
Abstract:
Although nitrate fertilizers are critical for enhancing crop production, excess application of nitrate fertilizer can result in groundwater contamination, leading to the so-called "nitrate problem". Health and environmental problems related to this "nitrate problem" have led to serious concerns in many parts of the world, including the United States and Israel. These concerns have resulted in legislation limiting the amount of nitrate N in drinking water to 10 mg/L. Development of a fast, reliable nitrate sensor for in-situ application can be extremely useful for dynamic monitoring of environmentally sensitive locations and for applying site-specific amounts of nitrate fertilizer in a precision farming system. The long-range objective of this study is to develop a fast, reliable, real-time nitrate sensor. The specific objective of this one-year feasibility study was to explore the possible use of a nitrate sensor based on mid-IR spectroscopy developed at UCD along with the silver halide fiber ATR (attenuated total internal reflection) sensor developed at TAU to detect nitrate content in solution and soil paste in the presence of interfering compounds. Experiments conducted at the Technion and UCD clearly demonstrate the feasibility of detecting nitrate content in solutions as well as soil pastes using mid-IR spectroscopy and an ATR technique. When interfering compounds such as carbonates, bicarbonates, and organic matter are present, a special data analysis technique such as singular value decomposition (SVD) or cross-correlation was necessary to detect nitrate concentrations successfully. Experiments conducted in Israel show that silver halide ATR fiber-based FEWS, particularly flat FEWS, resulted in low standard error and a high coefficient of determination (i.e., R² values), indicating the potential of flat Fiberoptic Evanescent Wave Spectroscopy (FEWS) for direct determination of nitrate. Moreover, it was found possible to detect nitrate and other anion concentrations using anion-exchange membranes and mid-IR spectroscopy. The combination of the ion-exchange membranes with fiber optics offers one more option for direct determination of nitrate in environmental systems.
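As a rough sketch of the cross-correlation analysis mentioned above, the snippet below scores a measured mid-IR absorbance spectrum against a nitrate reference band. The synthetic spectra, the band position near 1350 cm^-1, and the single-score readout are assumptions for illustration, not the project's actual processing chain.

import numpy as np

# Hypothetical spectra sampled on a common wavenumber grid (cm^-1).
wavenumbers = np.linspace(1200, 1500, 512)
reference = np.exp(-((wavenumbers - 1350) ** 2) / 200.0)   # assumed nitrate band shape
measured = 0.4 * reference + 0.05 * np.random.default_rng(0).normal(size=wavenumbers.size)

# Mean-centre both spectra, then take the normalized zero-lag correlation;
# a high score indicates the nitrate band is present despite the noise.
ref_c = reference - reference.mean()
mea_c = measured - measured.mean()
score = np.dot(mea_c, ref_c) / (np.linalg.norm(ref_c) * np.linalg.norm(mea_c))
print(f"normalized zero-lag correlation: {score:.3f}")

In practice the score would be calibrated against spectra of known nitrate concentration, and interfering bands (carbonates, organic matter) would be handled with SVD or multi-band regression.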
APA, Harvard, Vancouver, ISO, and other styles
3

Falfushynska, Halina I., Bogdan B. Buyak, Hryhorii V. Tereshchuk, Grygoriy M. Torbin and Mykhailo M. Kasianchuk. Strengthening of e-learning at the leading Ukrainian pedagogical universities in the time of COVID-19 pandemic. [n.p.], June 2021. http://dx.doi.org/10.31812/123456789/4442.

Full text
Abstract:
Distance education has become a mandatory component of higher education establishments all over the world, including Ukraine, owing to the COVID-19 lockdown and the intention of universities to render valuable knowledge and provide a safe educational experience for students. The present study aimed to explore students' and academic staff's attitudes towards e-learning and the most complicated challenges of online learning and distance education. Our findings disclosed that online learning using Zoom, Moodle, Google Meet, BigBlueButton and Cisco became quite popular among students and academic staff in Ukraine during the lockdown period and beyond. Based on Principal Component Analysis data processing, we can conclude that students' satisfaction and positive e-learning perception correlate well with the quality of e-learning resources and the set of apps used for e-learning and distance education. Also, education style, methods, and manner predict students' willingness to self-study. Self-motivation, time management, lack of practice, digital alienation, positive attitude towards ICT, and instruction strategy are among the most important challenges of the COVID-19 lockdown, based on the student and academic staff interviews. Day-to-day online learning should be used to strengthen classical higher education rather than replace it. Blended education is the best alternative to face-to-face education, because communication with a mentor in a live environment, even a virtual one, helps learners complete online learning and improve its results.
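A minimal sketch of the Principal Component Analysis step referenced above, using a fabricated matrix of Likert-scale survey responses; the dimensions and item meanings are placeholders, not the study's questionnaire.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical survey matrix: rows = respondents, columns = Likert items
# (e.g. satisfaction, resource quality, app usability, self-study willingness).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 8)).astype(float)

# Standardize the items, then project onto the leading components.
X = StandardScaler().fit_transform(responses)
pca = PCA(n_components=2)
component_scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)

Items that load strongly on the same component as satisfaction would be read as correlated with it, which is the kind of conclusion the abstract draws for e-learning resource quality.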
APA, Harvard, Vancouver, ISO, and other styles
4

Kuropiatnyk, D. I. Actuality of the problem of parametric identification of a mathematical model. [n.p.], December 2018. http://dx.doi.org/10.31812/123456789/2885.

Full text
Abstract:
The purpose of the article is to study the possibilities of increasing the efficiency of a mathematical model by identifying the parameters of an object. A key factor in parametrization is consideration of the model's values at a specific time point, which allows a deeper analysis of data dependencies and the correlation between them. However, such a technique does not always work, because it is impossible to predict in advance whether the parameters can be substantially optimized. In addition, it is necessary to take into account the fact that minimization reduces the values of parameters without regard to their real physical properties. The correctness of the final values rests on dynamically selected parameters, which allows the terms of use of the system to be modified in real time. During development, experimentally obtained data are compared with the model, which shows the accuracy of the minimization. When choosing the most relevant parameters, various minimization functions are used, making it possible to cover a wide range of initial theoretical situations. Verification of the correctness of the solution is carried out with the help of a quality function, which can identify the accuracy and correctness of the optimized parameters. Different types of quality function can be chosen, depending on the characteristics of the initial data. The presence of such tools during parametrization allows varied analysis of the model, testing it on various algorithms, data volumes, and conditions of guaranteed convergence of functional methods.
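A minimal sketch of the parametric-identification loop the abstract describes: parameters are fitted by minimizing residuals between model output and experimental data, and a quality function assesses the optimized values. The exponential model, the data, and the starting point are placeholders, not the paper's system.

import numpy as np
from scipy.optimize import least_squares

# Placeholder model: y(t) = a * exp(-b * t) + c, parameters theta = (a, b, c).
def model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + c

# Hypothetical experimental observations (true parameters 2.0, 1.3, 0.5 plus noise).
t = np.linspace(0.0, 5.0, 50)
y_obs = model((2.0, 1.3, 0.5), t) + 0.02 * np.random.default_rng(1).normal(size=t.size)

# Residuals of model vs experiment; least_squares minimizes their sum of squares.
def residuals(theta):
    return model(theta, t) - y_obs

fit = least_squares(residuals, x0=(1.0, 1.0, 0.0))

# A simple quality function: root-mean-square error of the identified model.
rmse = np.sqrt(np.mean(residuals(fit.x) ** 2))
print("identified parameters:", fit.x, "RMSE:", rmse)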
APA, Harvard, Vancouver, ISO, and other styles
5

King, E. L., A. Normandeau, T. Carson, P. Fraser, C. Staniforth, A. Limoges, B. MacDonald, F. J. Murrillo-Perez and N. Van Nieuwenhove. Pockmarks, a paleo fluid efflux event, glacial meltwater channels, sponge colonies, and trawling impacts in Emerald Basin, Scotian Shelf: autonomous underwater vehicle surveys, William Kennedy 2022011 cruise report. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331174.

Full text
Abstract:
A short but productive cruise aboard RV William Kennedy tested various new field equipment near Halifax (port of departure and return), in areas that could also benefit scientific understanding. The GSC-A Gavia Autonomous Underwater Vehicle, equipped with bathymetric, sidescan and sub-bottom profilers, was successfully deployed for the first time on Scotian Shelf science targets. It surveyed three small areas: two across known benthic sponge (Vazella, "Russian Hat") grounds within a DFO-directed trawling closure area on the SE flank of Sambro Bank, bordering Emerald Basin, and one across known pockmarks, eroded cone-shaped depressions in soft mud due to fluid efflux. The sponge study sites (~150-170 m water depth) were known to lie in an area of till (subglacial diamict) exposure at the seabed. The AUV data identified gravel- and cobble-rich seabed, registering individual clasts at 35 cm gridded resolution. A subtle variation in seabed texture is recognized in sidescan images, from cobble-rich on ridge crests and flanks to limited mud-rich sediment in intervening troughs. Correlation of seabed topography and texture with the (previously collected) Vazella distribution along two transects is not straightforward; however, there may be a preference for the sponge in the depressions, some of which have a thin but possibly ephemeral sediment cover. Both sponge study sites depict a hitherto unknown morphology, carved in glacial deposits, consisting of a series of discontinuous ridges interpreted to be generated by erosion in multiple, continuous, meandering and cross-cutting channels. The morphology is identical to glacial Nye ("N-") channels, cut by sub-glacial meltwater. However, their scale (10 to 100 times "typical" N-channels) and the unusual eroded medium (till rather than bedrock) present a rare or unknown size and medium, and suggest a continuum in sub-glacial meltwater channels between the much larger tunnel valleys, common to the eastward, and the bedrock forms. A comparison is made with coastal Nova Scotia forms in bedrock. The Emerald Basin AUV site, targeting pockmarks, was in ~260-270 m water depth and imaged eight large and one small pockmark. The main aim was to investigate possible recent or continuing fluid flux activity in light of ocean acidification or greenhouse gas contribution; most accounts to date suggested inactivity. While a lack of the common attributes marking activity is confirmed, creep or rotational flank failure is recognized, as is a depletion of buried diffuse methane immediately below the seabed features. Discovery of a second, buried pockmark horizon, with smaller but more numerous erosive cones and no spatial correlation to the buried diffuse gas or the seabed pockmarks, indicates a paleo-event of fluid or gas efflux; general timing and possible mechanisms are suggested. The basinal survey also registered numerous otter-board trawl marks cutting the surficial mud from past fishing activity. The AUV data present a unique dataset for follow-up quantification of this disturbance; the recent realization that it may play a significant role in ocean acidification on a global scale can benefit from such quantification. The new pole-mounted sub-bottom profiler collected high-quality data, enabling correlation of recently recognized till ridges exposed at the seabed as they become buried across the flank and base of the basin. These, along with the Nye channels, will help reconstruct glacial behavior and flow patterns, which to date are only vaguely documented. Several cores provide the potential for stratigraphic dating of key horizons and will augment Holocene environmental history investigations by a Dalhousie University student. In summary, several unique features have been identified, providing sufficient field data for further compilation, analysis and follow-up publications.
APA, Harvard, Vancouver, ISO, and other styles
6

Knight, R. D., and H. A. J. Russell. Quantifying the invisible: pXRF analyses of three boreholes, British Columbia and Ontario. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331176.

Full text
Abstract:
Portable X-ray fluorescence (pXRF) technology collects geochemical data at a fraction of the cost of traditional laboratory methods. Although the pXRF spectrometer provides concentrations for 41 elements, only a subset of these elements meet the criteria for definitive, quantitative, and qualitative data. However, high-quality pXRF data obtained by correct application of analytical protocols can provide robust insight into stratigraphy and sediment characteristics that are often not observed by, for example, visual core logging, grain size analysis, and geophysical logging. We present examples of geochemical results obtained from pXRF analysis of drill core samples from three boreholes located in Canada that demonstrate: 1) definitive stratigraphic boundaries observed in geochemical changes obtained from 380 analyses collected over 150 m of core, which intersects three Ordovician sedimentary formations and Precambrian granite; these boundaries could not be reconciled by traditional visual core logging methods. 2) Significant elemental concentration changes observed in 120 samples collected in each of two ~120 m deep boreholes located in a confined paleo-glacial foreland basin; the collected geochemical data provide insight into sediment provenance and stratigraphic relationships that were previously unknown. 3) Abrupt changes in the geochemical signature in a subset of 135 samples collected from a 151 m deep borehole intersecting Quaternary glacially derived till, sands, and a homogeneous silt and clay succession; these data provide a platform for discussion of ice sheet dynamics, changes in depositional setting, and changes in provenance. Results from each of these studies highlight previously unknown (invisible) geological information revealed through geochemical analyses. A significant benefit of using pXRF technology is refining sampling strategies in near real time and the ability to increase sample density at geochemical boundaries with little increase in analysis time or budget. The data also provide an opportunity to establish a chemostratigraphic framework that complements other stratigraphic correlation techniques, including geophysical methods. Overall, data collected with pXRF technology provide new insights into topics such as spatial correlations, facies changes, provenance changes, and depositional environment changes.
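As one hedged illustration of how such stratigraphic boundaries can fall out of downhole geochemical series, the sketch below flags depths where an element's concentration shifts abruptly relative to a rolling baseline. The element (Zr), window, threshold, and fabricated two-unit profile are assumptions, not the authors' workflow.

import numpy as np
import pandas as pd

# Hypothetical downhole pXRF series: depth (m) and one element (ppm),
# fabricated as two units with a sharp contact at 80 m depth.
rng = np.random.default_rng(2)
depth = np.arange(0.0, 150.0, 0.4)
zr = np.where(depth < 80, 120.0, 210.0) + rng.normal(0.0, 8.0, depth.size)
df = pd.DataFrame({"depth_m": depth, "Zr_ppm": zr})

# Rolling z-score against the preceding window; large values mark abrupt shifts.
window = 15
baseline_mean = df["Zr_ppm"].rolling(window).mean().shift(1)
baseline_std = df["Zr_ppm"].rolling(window).std().shift(1)
df["z"] = (df["Zr_ppm"] - baseline_mean) / baseline_std

boundaries = df.loc[df["z"].abs() > 4.0, "depth_m"]
print("candidate boundary depths (m):", boundaries.round(1).tolist())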
APA, Harvard, Vancouver, ISO, and other styles
7

Brown, Yolanda, Twonia Goyer and Maragaret Harvey. Heart Failure 30-Day Readmission Frequency, Rates, and HF Classification. University of Tennessee Health Science Center, December 2020. http://dx.doi.org/10.21007/con.dnp.2020.0002.

Full text
Abstract:
30-Day Hospital Readmission Rates, Frequencies, and Heart Failure Classification for Patients with Heart Failure. Background: Congestive heart failure (CHF) is a leading cause of mortality, morbidity, and disability worldwide. Both the incidence and the prevalence of heart failure are age dependent, and heart failure is relatively common in individuals 40 years of age and older. CHF is one of the leading causes of inpatient hospital readmission in the United States, with readmission rates remaining above the 20% goal within 30 days. The Centers for Medicare and Medicaid Services imposes a 3% reimbursement penalty for excessive readmissions, including those readmitted within 30 days of a prior hospitalization for heart failure. Hospitals risk losing millions of dollars due to poor performance. A reduction in CHF readmission rates not only improves healthcare system expenditures, but also patients' mortality, morbidity, and quality of life. Purpose: The purpose of this DNP project is to determine the 30-day hospital readmission rates, frequencies, and heart failure classification for patients with heart failure. Specific aims include comparing computed annual readmission rates with the national average, determining the number of multiple 30-day readmissions, providing descriptive data for demographic variables, and correlating age and heart failure classification with the number of multiple readmissions. Methods: A retrospective chart review was used to collect hospital admission and study data. The setting was an urban hospital in Memphis, TN. The study was reviewed by the UTHSC Institutional Review Board and deemed exempt. The electronic medical records were queried from July 1, 2019 through December 31, 2019 for heart failure ICD-10 codes beginning with the prefix I50, and a report was generated. Data were cleaned such that each patient admitted had only one heart failure ICD-10 code. The total number of heart failure admissions was computed and compared to the national average. Using age ranges 40-80, the number of patients readmitted within 30 days was computed, and descriptive and inferential statistics were computed using Microsoft Excel and R. Results: A total of 3524 patients were admitted for heart failure within the six-month time frame. Of those, 297 were readmitted within 30 days for heart failure exacerbation (8.39%). An annual estimate was computed (16.86%), well below the national average (21%). Of those readmitted within 30 days, 50 were readmitted on multiple sequential occasions, ranging from 2 to 8 readmissions. The median age was 60, and 60% were male. Due to the skewed distribution (most were readmitted twice), nonparametric statistics were used for correlation. While graphic display of the data suggested a trend for most multiple readmissions being due to diastolic dysfunction and the fewest due to systolic heart failure, there was no statistically significant correlation between age and number of multiple readmissions (Spearman rank, p = 0.6208) or between number of multiple readmissions and heart failure classification (Kruskal-Wallis, p = 0.2553).
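The rate computation and the two nonparametric tests named in this abstract can be outlined as below; the patient-level counts, ages, and heart-failure classes are fabricated placeholders used only to show the calls, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical multiple-readmission counts (2-8) for 50 patients, with age
# and a heart-failure class label (0 = systolic, 1 = diastolic, 2 = mixed).
readmits = rng.integers(2, 9, size=50)
ages = rng.integers(40, 81, size=50)
hf_class = rng.integers(0, 3, size=50)

# 30-day readmission rate: 297 of 3524 admissions in the six-month window.
print(f"30-day readmission rate: {297 / 3524:.2%}")

# Spearman rank correlation: age vs number of multiple readmissions.
rho, p_rho = stats.spearmanr(ages, readmits)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")

# Kruskal-Wallis: readmission counts across heart-failure classes.
h, p_h = stats.kruskal(*(readmits[hf_class == k] for k in range(3)))
print(f"Kruskal-Wallis H = {h:.3f}, p = {p_h:.4f}")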
APA, Harvard, Vancouver, ISO, and other styles
8

Samach, Alon, Douglas Cook and Jaime Kigel. Molecular mechanisms of plant reproductive adaptation to aridity gradients. United States Department of Agriculture, January 2008. http://dx.doi.org/10.32747/2008.7696513.bard.

Full text
Abstract:
Annual plants have developed a range of different mechanisms to avoid flowering (exposure of reproductive organs to the environment) under adverse environmental conditions. Seasonal environmental events such as gradual changes in day length and temperature affect the timing of the transition to flowering in many annual and perennial plants. Research in Arabidopsis and additional species suggests that some environmental signals converge on transcriptional regulation of common floral integrators such as FLOWERING LOCUS T (FT). Here we studied environmental induction of flowering in the model legume Medicago truncatula. Similarly to Arabidopsis, the transition to flowering in M. truncatula is hastened by long photoperiods and long periods of vernalization (4°C for 2-3 weeks). Ecotypes collected in Israel retain a vernalization response even though winter temperatures are well above 4°C. Here we show that this species is also highly responsive (flowers earlier) to mild ambient temperatures up to 19°C, simulating winter conditions in its natural habitat. Physiological experiments allowed us to time the transition to flowering due to low temperatures and to compare it to vernalization. We made use of natural variation and induced mutants to identify key genes involved in this process, and we provide data suggesting that an FT gene in M. truncatula is transcriptionally regulated by different environmental cues. Flowering time was found to be correlated with MtFTA and MtFTB expression levels. A mutation in the MtFTA gene showed a late-flowering phenotype, while over-expressing MtFTA in Arabidopsis complemented the ft- phenotype. We found that the combination of 4°C and 12°C resulted in a synergistic increase in MtFTB expression, while combining 4°C and long photoperiods caused a synergistic increase in MtFTA expression. These results suggest that the two vernalization temperatures work through distinct mechanisms. The early-flowering kalil mutant expressed higher levels of MtFTA and not MtFTB, suggesting that the KALIL protein represses MtFTA specifically. The desert ecotype Sde Boker flowers earlier in response to short treatments of 8-12°C vernalization and expresses higher levels of MtFTA. This suggests a possible mechanism this desert ecotype developed to flower as fast as possible and finish its growth cycle before the dry period. MtFTA and FT expression are induced by common environmental cues in each species, and expression is repressed under short days. Replacing FT with the MtFTA gene (including regulatory elements) caused high MtFTA expression and early flowering under short days, suggesting that the mechanism used to repress flowering under short days has diversified between the two species. The circadian-regulated gene GIGANTEA (GI) encodes a unique protein in Arabidopsis that is involved in the flowering mechanism. In this research we characterized how the expression of the M. truncatula GI ortholog is regulated by light and temperature in comparison to its regulation in Arabidopsis. In Arabidopsis, GI was found to be involved in temperature compensation of the clock and in mediating the effect of temperature on flowering time. We tested the influence of cold temperature on the MtGI gene in M. truncatula and found a correlation between MtGI levels and extended periods of 12°C treatment. MtGI elevation, found mostly after plants were removed from the cold, preceded the induction of MtFT expression. These data suggest that MtGI might be involved in 12°C cold perception with respect to flowering in M. truncatula. GI seems to integrate diverse environmental inputs and translate them into the proper physiological and developmental outputs, acting through several different pathways. This research enabled us to correlate temperature with the circadian clock in M. truncatula and achieve a better understanding of the flowering mechanism of this species.
APA, Harvard, Vancouver, ISO, and other styles
9

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, October 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer, developed at the Technion, for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer of North Carolina State University, developed for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of a firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between this parameter and the conventional firmness index. Improved table-top firmness testing units, Firmalon, with a data-logging system and on-line data analysis capacity, were built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable. Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of more agricultural commodities. Firmalon units were produced and operated in major packaging houses in Israel, Belgium and Washington State, on mango, avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities are still being analyzed, to improve firmness sorting criteria and shelf-life prediction curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties (Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious, Golden Delicious) and the D'Anjou pear variety. FI master curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet more work should be conducted to reduce scattering of the data and to determine the confidence limits of the method. The nearly constant FI in Red Delicious and the fluctuations of FI in the Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes. More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel under simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting out the defective fruits from an export shipment. The cooperation with local industry resulted in development of an automatic on-line prototype of the acoustic sensor that may be incorporated into the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the consumers. If successful, the new method may cause a major change in long-term fruit storage technology. More uses of the acoustic test method have been considered: monitoring fruit maturity and harvest time; testing fruit samples, or each individual fruit, on entering the storage facilities, packaging house and auction; and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer's table. The developed technology offers a tool to determine the maturity of fruits nondestructively by monitoring their acoustic response to a mechanical impulse on the tree. A special device was built and preliminarily tested on mango fruit. More development is needed for a portable, hand-operated sensing method for this purpose. In North Carolina: An analysis method based on an Auto-Regressive (AR) model was developed for detecting the first resonance of fruit from its response to a mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance and could be fast enough to be utilized in a real-time sorting machine, yet further study is needed to improve the method's search algorithm. An impact contact-pressure measurement system and a Neural Network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data were transferred into a personal computer and accurate visualization of animated data was presented. A preliminary test with 10 apples was performed. Measurements were made by the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples using the Firmalon and Magness-Taylor (MT) testers. A three-layer neural network was designed. Two thirds of the contact-pressure data were used as training input data and the corresponding MT data as training target data; the remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples and their corresponding Firmalon values were used as the NN training and target data, respectively. The remaining four samples' data were input to the NN. The NN results were consistent with the firmness tester values, so if more training data were obtained, the output should be more accurate. In addition, the firmness tester values were not consistent with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT firmness test results without destroying the apple samples. To get a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results will be more accurate. It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
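As a rough stand-in for the resonance detection described above (the North Carolina team used an auto-regressive model; this sketch instead picks the dominant peak of an FFT magnitude spectrum from a synthetic impulse response), with the sampling rate, damping, resonant frequency, and fruit mass all assumed. The f^2 * m^(2/3) readout is the conventional acoustic stiffness (firmness) index.

import numpy as np

fs = 8000                       # Hz, assumed sampling rate
t = np.arange(0.0, 0.5, 1 / fs)

# Synthetic impulse response of a fruit: one damped resonance near 800 Hz.
f0, zeta = 800.0, 0.03
signal = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# Locate the dominant resonant frequency from the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
f_res = freqs[np.argmax(spectrum)]

# Firmness index FI ~ f^2 * m^(2/3); fruit mass m is assumed.
m = 0.180                       # kg
print(f"resonance ~ {f_res:.1f} Hz, firmness index ~ {f_res**2 * m**(2/3):.0f}")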
APA, Harvard, Vancouver, ISO, and other styles
10

Splitter, Gary, and Menachem Banai. Microarray Analysis of Brucella melitensis Pathogenesis. United States Department of Agriculture, 2006. http://dx.doi.org/10.32747/2006.7709884.bard.

Full text
Abstract:
Original Objectives: 1. To determine the Brucella genes that lead to chronic macrophage infection. 2. To identify Brucella genes that contribute to infection. 3. To confirm the importance of Brucella genes in macrophages and placental cells by mutational analysis. Background: Brucella spp. is a Gram-negative facultative intracellular bacterium that infects ruminants, causing abortion or birth of severely debilitated animals. Brucellosis continues in Israel, caused by B. melitensis, despite an intensive eradication campaign. Problems with the Rev1 vaccine emphasize the need for a greater understanding of Brucella pathogenesis that could improve vaccine designs. Virulent Brucella has developed a successful strategy for survival in its host and transmission to other hosts. To invade the host, virulent Brucella establishes an intracellular niche within macrophages, avoiding macrophage killing and ensuring its long-term survival. Then, to exit the host, Brucella uses the placenta, where it replicates to high numbers, resulting in abortion. Also, Brucella traffics to the mammary gland, where it is secreted in milk. Missing from our understanding of brucellosis is the surprisingly little basic information detailing the mechanisms that permit bacterial persistence in infected macrophages (chronic infection) and dissemination to other animals from infected placental cells and milk (acute infection). Microarray analysis is a powerful approach to determine global gene expression in bacteria. The close genomic similarities of Brucella species and our recent comparative genomic studies of Brucella species using our B. melitensis microarray suggest that the data obtained from studying B. melitensis 16M would enable understanding the pathogenicity of other Brucella organisms, particularly the diverse B. melitensis variants that confound Brucella eradication in Israel. Conclusions: Results from our BARD studies have identified previously unknown mechanisms of Brucella melitensis pathogenesis, i.e., response to blue light, quorum sensing, second-messenger signaling by cyclic di-GMP, the importance of genomic island 2 for lipopolysaccharide in the outer bacterial membrane, and the role of a TIR domain-containing protein that mimics a host intracellular signaling molecule. Each one of these pathogenic mechanisms offers major steps in our understanding of Brucella pathogenesis. Strikingly, our molecular results have correlated well with the pathognomonic profile of the disease. We have shown that infected cattle do not elicit antibodies to the organisms at the onset of infection, in correlation with the stealth pathogenesis shown by the molecular approach. Moreover, our field studies have shown that Brucella exploits this time frame to transmit in nature by synchronizing its life cycle to the gestation cycle of its host; abortion in the last trimester of pregnancy spreads massive numbers of organisms into the environment. Knowing the bacterial mechanisms that contribute to the virulence of Brucella in its host has opened agricultural opportunities for developing new vaccines and diagnostic assays, as well as improving control and eradication campaigns based on herd management and linking diagnosis to the pregnancy status of the animals. Scientific and Agricultural Implications: Our BARD-funded studies have revealed important Brucella virulence mechanisms of pathogenesis. Our publication in Science identified a highly novel concept whereby Brucella utilizes blue light to increase its virulence, similar to some plant bacterial pathogens. Further, our studies have revealed bacterial second messengers that regulate virulence, quorum-sensing mechanisms permitting bacteria to evaluate their environment, and a genomic island that controls synthesis of its lipopolysaccharide surface. Discussions are ongoing with a vaccine company for application of this genomic island knowledge in a Brucella vaccine by the U.S. lab. Also, our new technology of bioengineering bioluminescent Brucella has resulted in a spin-off application for diagnosis of Brucella-infected animals by the Israeli lab, prioritizing bacterial diagnosis over serological diagnosis.
APA, Harvard, Vancouver, ISO, and other styles