To see the other types of publications on this topic, follow the link: Data shift.

Dissertations / Theses on the topic 'Data shift'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Data shift.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ginzinger, Simon Wolfgang. "Bioinformatics Methods for NMR Chemical Shift Data." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-80776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stenekap, Daniel. "Classification of Gear-shift data using machine learning." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53445.

Full text
Abstract:
Today, automatic transmissions are the industry standard in heavy-duty vehicles. However, tolerances and component wear can cause factory-calibrated gearshifts to deviate in ways that harm clutch durability and driver comfort. An adaptive shift process could solve this problem by recognizing when pre-calibrated values are outdated. The purpose of this thesis is to examine the classification of shift types using machine learning, toward the future goal of an adaptive gearshift process. Recent papers concerning machine learning on time series are reviewed. A data set is collected and validated using hand-engineered features and unsupervised learning. Four deep neural network (DNN) models are trained on raw and normalized shift data. Three of the models generalize well and achieve accuracies above 90%. An adaptation of the fully convolutional network (FCN) used in [1] shows promise due to its small relative size and its ability to learn the raw data sets. An adaptation of the multivariate long short-term memory fully convolutional network (MLSTM-FCN) used in [2] is superior on normalized data sets. This thesis shows that DNN structures can be used to distinguish between time series of shift data. However, much work remains, since a database of shift types is necessary for this work to continue.
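As an illustration of the kind of model this abstract refers to, the sketch below is a minimal FCN-style 1-D convolutional classifier for multichannel shift-signal time series (convolutional blocks plus global average pooling). It is not the thesis's implementation; the channel count, kernel sizes and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    """FCN-style time-series classifier: stacked Conv1d blocks followed by
    global average pooling over time. All layer sizes here are assumptions."""
    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        def block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )
        self.body = nn.Sequential(block(in_channels, 128, 8),
                                  block(128, 256, 5),
                                  block(256, 128, 3))
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.body(x).mean(dim=-1)      # global average pooling over time
        return self.head(z)                # class logits

# Hypothetical usage: 6 recorded shift signals, 4 shift types, 300 time steps.
model = FCN(in_channels=6, n_classes=4)
logits = model(torch.randn(2, 6, 300))
```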
APA, Harvard, Vancouver, ISO, and other styles
3

Long, Christopher C. "Data Processing for NASA's TDRSS DAMA Channel." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611474.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
Presently, NASA's Space Network (SN) does not have the ability to receive random messages from satellites using the system. Scheduling of the service must be done by the owner of the spacecraft through Goddard Space Flight Center (GSFC). The goal of NASA is to improve the current system so that random messages generated on board the satellite can be received by the SN. The messages will be requests for service that the satellite's control system deems necessary. These messages will then be sent to the owner of the spacecraft, where appropriate action and scheduling can take place. This new service is known as the Demand Assignment Multiple Access (DAMA) system.
APA, Harvard, Vancouver, ISO, and other styles
4

Davidson, H. D. "A reliable data channel for underwater communications using phase shift keying." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Raje, Satyajeet. "Data Fusion Ontology:Enabling a Paradigm Shift from Data Warehousing to Crowdsourcing for Accelerated Pace of Research." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460993523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Ting. "Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00768315.

Full text
Abstract:
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. For example, in predicting salvageable tissue, ischemic studies that combine multiple MRI modalities (DWI, PWI) have produced more conclusive results than studies using a single modality. However, the multi-modality approach requires more advanced algorithms to perform otherwise routine image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with processing multi-modality imaging data is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the mean shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise in processing the feature space and how Mean Shift can be tuned for optimal de-noising and reduced blurring. The great success of Mean Shift is mainly due to the intuitive tuning of its bandwidth parameters, which describe the scale at which features are analyzed. Based on univariate Plug-In (PI) bandwidth selectors for kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering. We study the benefit of using diagonal and full bandwidth matrices, with experiments on synthesized and natural images. We propose a new, automatic volume-based segmentation framework which combines Mean Shift filtering with Region Growing segmentation and Probability Map optimization. The framework is developed using synthesized MRI images as test data and yielded a perfect segmentation, with DICE similarity values reaching the highest possible value of 1. Testing is then extended to real MRI data obtained from animals and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information obtained from the very first scan. The results are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images for both cases were manually segmented by a team of expert medical staff. In addition, the most relevant combination of parameters for the MRI modalities is determined.
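For readers unfamiliar with the method, the core mode-seeking iteration of Mean Shift is compact enough to sketch. The following is a minimal illustration with a Gaussian kernel and a diagonal bandwidth matrix, not the thesis's optimized implementation; the data, bandwidth values and stopping threshold are arbitrary assumptions.

```python
import numpy as np

def mean_shift_point(x, data, bandwidth, max_iter=100, tol=1e-6):
    """Move x to the kernel-weighted mean of the data until it converges
    to a mode (Gaussian kernel, diagonal bandwidth matrix)."""
    h = np.asarray(bandwidth, dtype=float)
    for _ in range(max_iter):
        d = (data - x) / h                          # bandwidth-scaled offsets
        w = np.exp(-0.5 * np.sum(d * d, axis=1))    # Gaussian kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:         # mode reached
            break
        x = x_new
    return x

# Filtering replaces each feature vector by the mode its trajectory reaches.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))   # e.g. per-voxel multi-modal MRI features
filtered = np.array([mean_shift_point(p, data, [0.5, 0.5, 0.5]) for p in data])
```

A full bandwidth matrix replaces the element-wise scaling above with a matrix inverse inside the kernel argument; the diagonal case shown is the cheaper variant the abstract compares against.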
APA, Harvard, Vancouver, ISO, and other styles
7

Wear, Steven M. "Shift-invariant image reconstruction of speckle-degraded images using bispectrum estimation." Online version of thesis, 1990. http://hdl.handle.net/1850/11219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fisher, J. "Substituent chemical shifts in N.M.R." Thesis, University of Liverpool, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Lixia. "Passive phase shift modulation for high-speed data transmission in implantable closed-loop neuroprostheses." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1446837/.

Full text
Abstract:
In designing a closed-loop implantable neuroprosthesis, naturally-occurring nerve signals will be used to control the stimulation. A telemetry system is required that is capable of transmitting power to the implant and relaying two raw electroneurograms out of the body at high speed, yet has simple circuitry and works over a wide range of coupling coefficients with low power loss. However, at the start of this project, no such device was available. This thesis describes a novel method, Passive Phase Shift Modulation (PPSM), developed by the author for high-speed passive reverse signalling via an inductively coupled link. A telemetry system based on PPSM is also devised by the author and presented in the thesis. In this system, power is provided to the implant and signals are conveyed out of the body continuously via the same radio-frequency (RF) channel. Unlike conventional Passive Impedance Modulation, it synchronously shorts the implant power-receiving coil for half the carrier cycle when a digital binary bit '1' is transmitted. This stores energy in the coil and then releases it back to the circuit in time to generate a transient current surge in the transmitting coil, indicating a modulation. The scheme transmits phase shift modulation but results in amplitude modulation. The transient responses under PPSM have been explored in depth by two approaches: 1) mathematically deducing the analytical solutions and 2) a semi-numerical method using modern computer-aided tools. The results are comparable. The influence of the circuit parameters has been analysed, showing that the signal link does not disturb the optimised efficient power transfer link. The telemetry system implementing PPSM is designed and built on the bench. It consists of a transmitter, an implant circuit, an external circuit and two coils. The digital logic is implemented in programmable gate arrays, bringing the electronic design up to date with modern technology. The performance is evaluated and agrees with that predicted by theory and simulation. PPSM has advantages in speed, energy efficiency and circuit simplicity, with a working range comparable to previous methods. It enables transmission of signals with large bandwidth without necessarily increasing the carrier frequency. It is, therefore, a satisfactory method for designing a practical feedback-controlled neuroprosthesis.
APA, Harvard, Vancouver, ISO, and other styles
10

Chiu, Frank Kwok King. "Data communications using coherent minimum frequency shift keying on intrabuilding polyphase power line networks." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/26225.

Full text
Abstract:
The suitability of Coherent Minimum Frequency Shift Keying (CMFSK) modulation for data communications on polyphase intrabuilding power distribution circuits is examined. An actual modem was designed and implemented. Average bit error rate (BER) versus received Eb/No measurements were taken in industrial, commercial, and residential power line environments at 1.2 kbps, 4.8 kbps, and 19.2 kbps data rates. The 19.2 kbps BER measurements indicate that a majority of errors are caused by impulses occurring on the power lines, while other errors are caused by momentary reductions of received Eb/No. The occurrence of errors coincides mostly with impulses on the power line, which are highly periodic with the AC mains voltage. In addition, the BER measurements reveal that CMFSK modulation at 1.2 kbps and 4.8 kbps is less affected by impulse noise than at 19.2 kbps. This finding is attributed to the increased resistance to impulse noise effects as the bit duration is increased. A baseband spectrum spreading technique is proposed and successfully tested to implement low data rate transmissions. Spread spectrum signalling overcomes potential narrowband impairments by sending a wideband signal over the power lines. In addition, the reduced power spectral density of the spread spectrum transmission reduces narrowband interference to other power line communications users as well as AM radios, and allows higher output power to compensate for path attenuations.
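As a point of reference for such measurements, coherent MSK over a pure additive white Gaussian noise channel has the same theoretical bit error rate as antipodal signalling, Pb = Q(sqrt(2 Eb/N0)); measured power line curves sit above this baseline because of impulse noise. A minimal sketch of the baseline:

```python
from math import erfc, sqrt

def ber_coherent_msk(ebno_db: float) -> float:
    """Theoretical coherent MSK BER on an AWGN channel:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)          # dB -> linear ratio
    return 0.5 * erfc(sqrt(ebno))

print(ber_coherent_msk(9.6))             # ~1e-5, the classic benchmark point
```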
Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate
APA, Harvard, Vancouver, ISO, and other styles
11

Hellner, Joakim. "Introducing quality assessment and efficient management of cellular thermal shift assay mass spectrometry data." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-311792.

Full text
Abstract:
Recent advances in molecular biology has led to the discovery of many new potential drugs. However, difficulties with in situ analysis of ligand binding prevents quick advancement in clinical trials, which stresses the need for better direct methods. A relatively new methodology, called Cellular Thermal Shift Assay (CETSA), allows for detection of ligand binding in a cells natural environment and can be used in combination with Mass Spectrometry (MS) for readout. With help from the Pelago Bioscience team, I developed a pipeline for processing of CETSA MS data and a web based system for viewing the results. The system, called CETSA Analytics, also evaluates the results relevance and helps its users to locate information efficiently. CETSA Analytics is currently being tested by Pelago Bioscience AB as a tool for experimental data distribution.
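The MS readout in a CETSA pipeline is typically reduced to per-protein melting curves. As a hedged illustration of that processing step (the model form, starting values and data below are assumptions, not Pelago's pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def melt_curve(T, a, b, plateau):
    """Sigmoidal melting model: fraction of protein remaining soluble at
    temperature T; the curve midpoint (exponent zero) lies near T = a/b."""
    return (1 - plateau) / (1 + np.exp(-a / T + b)) + plateau

# Illustrative soluble-fraction readings across a temperature gradient.
temps = np.array([37., 41., 44., 47., 50., 53., 56., 59., 63., 67.])
fracs = np.array([1.0, .98, .94, .85, .65, .40, .20, .10, .06, .05])
(a, b, plateau), _ = curve_fit(melt_curve, temps, fracs, p0=(1500., 30., .05))
print("apparent melting point ~", a / b)   # this shifts on ligand binding
```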
APA, Harvard, Vancouver, ISO, and other styles
12

Mkhaliphi, Mkhuseli Bruce. "Reconstruction of Functions From Non-uniformly Distributed Sampled Data in Shift-Invariant Frame Subspaces." Master's thesis, Faculty of Engineering and the Built Environment, 2018. http://hdl.handle.net/11427/30079.

Full text
Abstract:
The focus of this research is to study and implement efficient iterative reconstruction algorithms. Iterative reconstruction algorithms are used to reconstruct bandlimited signals in shift-invariant L2 subspaces from a set of non-uniformly distributed sampled data. The Shannon-Whittaker reconstruction formula commonly used in uniform sampling problems is insufficient for reconstructing functions from non-uniformly distributed sampled data, so new techniques are required. Among the many traditional approaches to non-uniform sampling and reconstruction, the Adaptive Weights (AW) algorithm is considered the most efficient. Recently, the Partitions of Unity (PoU) algorithm has been suggested to outperform AW, although there has not been much literature covering its numerical performance. A study and analysis of the implementation of the Adaptive Weights and Partitions of Unity reconstruction methods is conducted. The algorithms consider the missing data problem, defined as reconstructing continuous-time (CT) signals from non-uniform samples which result from missing samples on a uniform grid. Essentially, the algorithms convert the non-uniform grid to a uniform grid. The implemented iterative methods construct CT bandlimited functions in frame subspaces, where bandlimited functions are considered a superposition of basis functions named frames. PoU is a variation of AW; they differ in the choice of frame, because each frame produces a different approximation operator and convergence rate. If efficiency is defined as the norm convergence and computational time of an algorithm, then of the two methods discussed, the PoU method is more efficient. The AW method is slower and converged to a higher error than the PoU. However, AW compensates for its slowness and lower accuracy by being convergent and robust for large sampling gaps and less sensitive to sampling irregularities. The impact of additive white Gaussian noise on the performance of the two algorithms is also investigated. The numerical tools utilized in this research consist of the theory of discrete irregular sampling, frames, and iterative techniques. The developed software provides a platform for sampling signals under non-ideal conditions with real devices.
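The flavour of these iterative schemes can be conveyed with a Papoulis-Gerchberg-style sketch: repeatedly add the bandlimited projection of the residual on the sampling set. Adaptive-Weights-type methods additionally weight each residual sample, e.g. by the size of the gap around it. This toy version on a uniform grid with missing samples is an assumption-laden illustration, not the thesis code.

```python
import numpy as np

def bandlimit(x, cutoff):
    """Project onto bandlimited signals: zero all FFT bins above the cutoff
    (cutoff given in cycles per sample)."""
    X = np.fft.fft(x)
    X[np.abs(np.fft.fftfreq(x.size)) > cutoff] = 0.0
    return np.fft.ifft(X).real

def reconstruct(samples, mask, cutoff, weights=None, relax=1.0, n_iter=500):
    """Iterate x <- x + relax * bandlimit(weights * residual on sampling set)."""
    w = np.ones_like(samples) if weights is None else weights
    x = np.zeros_like(samples)
    for _ in range(n_iter):
        r = np.where(mask, samples - x, 0.0)   # residual known only at samples
        x = x + relax * bandlimit(w * r, cutoff)
    return x

# Demo: recover a bandlimited signal from ~60% of its uniform-grid samples.
n = 256
t = np.arange(n)
y = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
mask = np.random.default_rng(1).random(n) > 0.4
x = reconstruct(y * mask, mask, cutoff=10 / n)
print("max error:", np.abs(x - y).max())
```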
APA, Harvard, Vancouver, ISO, and other styles
13

Elson, J. Scott. "Simulation and performance analysis of Cellular Digital Packet Data." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/45069.

Full text
Abstract:

As the wireless telecommunications industry becomes more widely accepted, the need for mobile data communication has followed the rise in mobile voice communication. Cellular Digital Packet Data (CDPD) offers an unobtrusive data service that overlays the existing Advanced Mobile Phone Service (AMPS) in a cost effective manner that will be attractive to most service providers. Using idle time between voice traffic, CDPD uses Gaussian Minimum Shift Keying (GMSK) to send bursts of Reed-Solomon encoded information while channel hopping to avoid interfering with voice transmissions.

This thesis assesses the performance of CDPD in different channel environments through simulation. Specifically, Gaussian noise, Rayleigh fading, and co-channel interference models are incorporated to characterize the performance of the system.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Xi. "Automatic 13C Chemical Shift Reference Correction of Protein NMR Spectral Data Using Data Mining and Bayesian Statistical Modeling." UKnowledge, 2019. https://uknowledge.uky.edu/biochem_etds/40.

Full text
Abstract:
Nuclear magnetic resonance (NMR) is a highly versatile analytical technique for studying molecular configuration, conformation, and dynamics, especially of biomacromolecules such as proteins. However, due to the intrinsic properties of NMR experiments, results from NMR instruments require a referencing step before downstream analysis. Poor chemical shift referencing, especially for 13C in protein NMR experiments, fundamentally limits and even prevents effective study of biomacromolecules via NMR. No available method can re-reference carbon chemical shifts from protein NMR without secondary experimental information such as structure or resonance assignment. To solve this problem, we constructed a Bayesian probabilistic framework that circumvents the limitations of previous reference correction methods, which required protein resonance assignment and/or a three-dimensional protein structure. Our algorithm, named Bayesian Model Optimized Reference Correction (BaMORC), can detect and correct 13C chemical shift referencing errors before the protein resonance assignment step of analysis and without a three-dimensional structure. By combining the BaMORC methodology with a new intra-peaklist grouping algorithm, we created a combined method called Unassigned BaMORC that utilizes only unassigned experimental peak lists and the amino acid sequence. Unassigned BaMORC kept all experimental three-dimensional HN(CO)CACB-type peak lists tested within ± 0.4 ppm of the correct 13C reference value. On a much larger unassigned chemical shift test set, the base method kept 13C chemical shift referencing errors within ± 0.45 ppm at a 90% confidence interval. With chemical shift assignments, Assigned BaMORC can detect and correct 13C chemical shift referencing errors to within ± 0.22 ppm at a 90% confidence interval. Therefore, Unassigned BaMORC can correct 13C chemical shift referencing errors when it will have the most impact: right before protein resonance assignment and other downstream analyses are started. After assignment, the chemical shift reference correction can be further refined with Assigned BaMORC. To support broader usage of these new methods, we also created a software package with a web-based interface for the NMR community. This software allows non-NMR experts to detect and correct 13C referencing errors at critical early data-analysis steps, lowering the bar of NMR expertise required for effective protein NMR analysis.
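The essence of reference correction without assignments can be illustrated in a few lines: search for the global offset that makes offset-corrected shifts most probable under per-amino-acid shift distributions. The sketch below is a deliberately naive stand-in for BaMORC; the expected-value table is an approximate placeholder and the residue types are assumed known.

```python
import numpy as np

# Approximate CA chemical-shift statistics (mean ppm, sd) per residue type;
# placeholder values, not the curated statistics BaMORC uses.
EXPECTED_CA = {"ALA": (53.1, 1.9), "GLY": (45.4, 1.3), "LEU": (55.6, 2.1)}

def estimate_offset(residues, observed_ca, grid=np.arange(-5, 5, 0.01)):
    """Grid-search the global 13C referencing error: the offset maximizing
    the Gaussian log-likelihood of the corrected shifts."""
    def loglik(off):
        return sum(
            -0.5 * ((ca - off - EXPECTED_CA[r][0]) / EXPECTED_CA[r][1]) ** 2
            - np.log(EXPECTED_CA[r][1])
            for r, ca in zip(residues, observed_ca))
    return max(grid, key=loglik)

# Shifts recorded ~2 ppm low: estimated offset ~ -2, so add 2 ppm to correct.
print(estimate_offset(["ALA", "GLY", "LEU", "ALA"], [51.0, 43.3, 53.5, 51.2]))
```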
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Xiaozhou. "Language shift regarding Canada's French-speaking population: Data comparability and trends from 1971 to 2001." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27304.

Full text
Abstract:
The importance of establishing valid language shift trends regarding Canada's French-speaking populations and the historical comparability of Canadian census language data are considered. Based on an empirical theory, comparability breaks in language data since 1971 are identified and evaluated. The proportions of Canadian-born persons of French mother tongue, and of French home language, to the total population of relevant birth regions are then adjusted separately, to reduce the impact of comparability breaks. The resulting language shift trends regarding the French-speaking populations are portrayed using language vitality indices for reference cohorts. It is found that for the whole of Canada, and for the provinces of Quebec and New Brunswick, the vitality of French among the Canadian-born rebounded in 2001, after consistently decreasing from 1971 to 1991. It is also observed that for other regions, the vitality of French went down continuously, indicating a sustained and aggravated assimilation over the past 30 years.
APA, Harvard, Vancouver, ISO, and other styles
16

Biyoghe, Joel S. "Design and implementation of a high data rate QPSK demodulator for nanosatellites." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2744.

Full text
Abstract:
Thesis (Master of Engineering in Electrical Engineering)--Cape Peninsula University of Technology, 2017.
This dissertation presents the development of a quadrature phase shift keying (QPSK) demodulator for nanosatellites that complies with both the limited resources associated with nanosatellites and the flexibility and configurability required for a software defined radio (SDR) platform. This research project is a component of a bigger project to develop a high-speed receiver for nanosatellites, and aims to provide a practical solution to the need for communication technologies that support emerging nanosatellite applications, such as Earth observation and communications. The development of the QPSK demodulator follows an all-digital implementation approach. The main reason for selecting this approach is to obtain a system that is flexible and reconfigurable, complying with the SDR requirements. Another reason is to comply with the low-noise, low-power-consumption, small-size and low-weight requirements associated with nanosatellites. The QPSK demodulator is implemented on an IGLOO2 Field Programmable Gate Array (FPGA), due to its robustness to radiation and its high-speed capability. Initially, the techniques used to design each subsystem of the QPSK demodulator are selected. Then, algorithms to digitally implement the designed subsystems are produced. Thereafter, the code for the digital QPSK demodulator is written and verified in Matlab first; the simulation of the Matlab-based QPSK demodulator performs satisfactorily. Subsequently, the code to implement the QPSK demodulator on the FPGA (IGLOO2) is written in Libero, using VHSIC Hardware Description Language (VHDL). The resulting FPGA-based QPSK demodulator has been emulated in Libero (an integration and development environment (IDE) for Microsemi FPGAs) using a test bench as well as other analysis tools, with the test-bench results visualized in Modelsim. The results show that the demodulator can support data rates up to 13.25 Mbps if 16 samples per symbol are used, and up to 26.5 Mbps if 8 samples per symbol are used. It also has a very good bit-error-rate performance, simulated to be within a factor of 5 of the theoretical limit of QPSK modulation. Finally, the demodulator consumes less than 15 mW at the maximum operating speed and has been coded to mitigate the effects of space radiation and the noise contribution of the demodulator itself.
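For orientation, the final decision stage of any Gray-coded QPSK demodulator reduces to sign decisions on the in-phase and quadrature components, once carrier and symbol-timing recovery (the bulk of the FPGA work described above) have been done. A minimal sketch of just that stage, not of the thesis's VHDL design:

```python
import numpy as np

def qpsk_demap(symbols):
    """Hard-decision Gray-coded QPSK demapping: one bit from the sign of I,
    one from the sign of Q, for symbol-rate samples after synchronization."""
    bits = np.empty(symbols.size * 2, dtype=np.uint8)
    bits[0::2] = symbols.real < 0
    bits[1::2] = symbols.imag < 0
    return bits

# Demo: four noiseless symbols, one per quadrant.
print(qpsk_demap(np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])))
```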
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Dijin, James A. McCorduck, and Kamilo Feher. "FQPSK ANALOG/DIGITAL IMPLEMENTATIONS FOR LOW TO ULTRA HIGH DATA RATES IN 1Gb/s RANGE SYSTEMS." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606735.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
For simpler implementations of ultra high bit rate systems, the combined analog/digital techniques described herein provide implementations with the smallest component count, minimal "real estate" and the lowest DC power. While digital implementations with traditional Read Only Memory (ROM) and Digital to Analog Converters (DACs) have been proven in several commercial, NASA-CCSDS recommended, and U.S. DoD-IRIG standardized Feher's QPSK (FQPSK) [2,3] products, such implementations can be further simplified, particularly for ultra high bit rate product applications. Several waveform generating techniques, such as linear approximation, analog approximation and mixed analog and linear approximations, are investigated using preliminary simulation results.
APA, Harvard, Vancouver, ISO, and other styles
18

許志光 and Chi-kwong Hui. "Knowledge-based approach to roster scheduling problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B30408982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

WEST, KAREN FRANCES. "AN EXTENSION TO THE ANALYSIS OF THE SHIFT-AND-ADD METHOD: THEORY AND SIMULATION (SPECKLE, ATMOSPHERIC TURBULENCE, IMAGE RESTORATION)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188021.

Full text
Abstract:
The turbulent atmosphere degrades images of objects viewed through it by introducing random amplitude and phase errors into the optical wavefront. Various methods have been devised to obtain true images of such objects, including the shift-and-add method, which is examined in detail in this work. It is shown theoretically that shift-and-add processing may preserve diffraction-limited information in the resulting image, both in the point source and extended object cases, and the probability of ghost peaks in the case of an object consisting of two point sources is discussed. Also, a convergence rate for the shift-and-add algorithm is established and simulation results are presented. The combination of shift-and-add processing and Wiener filtering is shown to provide excellent image restorations.
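The basic algorithm analyzed here is simple to state: estimate the shift of each short-exposure frame from its brightest speckle, align, and average. A minimal sketch under the point-source assumption (not the dissertation's analysis code):

```python
import numpy as np

def shift_and_add(frames):
    """Align each short-exposure frame on its brightest pixel and average;
    for a point source the brightest speckle estimates the frame's shift."""
    h, w = frames[0].shape
    out = np.zeros((h, w))
    for f in frames:
        i, j = np.unravel_index(np.argmax(f), f.shape)     # shift estimate
        out += np.roll(np.roll(f, h // 2 - i, axis=0), w // 2 - j, axis=1)
    return out / len(frames)
```

Ghost peaks in the two-point-source case arise when the wrong source wins the argmax in some frames, which is presumably why the abstract analyzes their probability.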
APA, Harvard, Vancouver, ISO, and other styles
20

Agarwal, Shweta S. "QUADRATURE PHASE SHIFT KEYING-DIRECT SEQUENCE SPREAD SPECTRUM-CODE DIVISION MULTIPLE ACCESS WITH DISPARATE QUADRATURE CHIP AND DATA RATES." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1134508354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Dall'Ora, Chiara. "The association of nurses' shift characteristics, missed vital signs observations and sickness absence : retrospective observational study using routinely collected data." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/417870/.

Full text
Abstract:
When organising shift work, healthcare managers are required to cover the service across 24 hours in a way that maximises job performance, which includes minimising work-related sickness absence and creating conditions that allow nursing staff to perform their scheduled tasks. This study aimed to investigate the association between characteristics of shift work in acute hospital wards and nursing staff job performance, in terms of sickness absence and compliance with vital signs observations. This was a retrospective longitudinal observational study using routinely collected data on nursing staff shifts, missed vital signs observations and sickness absence. The study took place in all acute inpatient general wards at a large teaching hospital in the South of England over a three-year period. Shift and sickness data were extracted from the electronic shift system and overtime shift datasets, which are both linked to the hospital payroll. These contain individual records of shifts worked, dates, start and end times, ward and grade for all nurses employed by the hospital. Vital signs observation data were extracted from a database of records made using the VitalPAC™ system. Generalised linear mixed models were used to model the association between shift work characteristics, sickness absence episodes and compliance with vital signs observations. This doctoral research provides new knowledge regarding the association of shift characteristics with job performance outcomes. It found that working a high proportion of shifts of 12 hours or more is associated with higher sickness absence, regardless of how many days nursing staff had worked in the previous seven days. An association between working shifts of 12 hours or more and delaying vital signs observations was found for healthcare assistants. Drawing on a large and diverse sample and using objective data, this study is the first in nursing to demonstrate an association between long shifts and job performance.
APA, Harvard, Vancouver, ISO, and other styles
22

Simpson, Leonie Ruth. "Divide and conquer attacks on shift register based stream ciphers." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Amaya, Austin J. "Beurling-Lax Representations of Shift-Invariant Spaces, Zero-Pole Data Interpolation, and Dichotomous Transfer Function Realizations: Half-Plane/Continuous-Time Versions." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27636.

Full text
Abstract:
Given a full-range simply-invariant shift-invariant subspace M of the vector-valued L2 space on the unit circle, the classical Beurling-Lax-Halmos (BLH) theorem obtains a unitary operator-valued function W so that M may be represented as the image of the Hardy space H2 on the disc under multiplication by W. The work of Ball-Helton later extended this result to find a single function representing a so-called dual shift-invariant pair of subspaces (M, M×) which together form a direct-sum decomposition of L2. In the case where the pair (M, M×) are finite-dimensional perturbations of the Hardy space H2 and its orthogonal complement, Ball-Gohberg-Rodman obtained a transfer function realization for the representing function W; this realization was parameterized in terms of zero-pole data computed from the pair (M, M×). Later work by Ball-Raney extended this analysis to the case of nonrational functions W where the zero-pole data is taken in an infinite-dimensional operator-theoretic sense. The current work obtains analogues of these various results for arbitrary dual shift-invariant pairs (M, M×) of the L2 spaces on the real line; here, shift-invariance refers to invariance under the translation group. These new results rely on recent advances in the understanding of continuous-time infinite-dimensional input-state-output linear systems, which have been codified in the book by Staffans.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Vanier, Antoine. "The concept measurement, and integration of response shift phenomenon in Patient-Reported Outcomes data analyses : on certain methodological and statistical considerations." Thesis, Nantes, 2016. http://www.theses.fr/2016NANT1009/document.

Full text
Abstract:
Patient-Reported Outcomes (PROs) are increasingly used in health-related research. These instruments allow the assessment of subjective concepts such as health-related quality of life, anxiety level, pain or fatigue. Initially, the interpretation of a difference in score over time was based on the assumption that the meaning of concepts and measurement scales remains stable in individuals' minds over time. This assumption has been challenged. Indeed, the self-assessment of a concept is now understood as contingent on the subjective meaning a subject attaches to that concept, which can change over time, especially as a result of a salient medical event: the "response shift" phenomenon. Since the end of the 1990s, research on the response shift phenomenon has become of prime interest in the field of health-related research. While developments have been made, it is still a young field with various scientific debates on theoretical, methodological and statistical levels. Thus, the broad objective of this thesis is to investigate some methodological and statistical issues regarding the response shift concept, its detection, and its integration into PRO data analyses. The manuscript is composed of three main works: a state of the art and synthesis of the work conducted at an international level since the response shift phenomenon has been investigated, a pilot study investigating by simulation the statistical performance of the Oort procedure (a popular method of response shift detection using Structural Equation Modeling), and a theoretical work on the links between response shift occurrence and the semantic complexity of the concepts measured and items used.
APA, Harvard, Vancouver, ISO, and other styles
25

Maisey, Gemma. "Mining for sleep data: An investigation into the sleep of fly-in fly-out shift workers in the mining industry and potential solutions." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2023. https://ro.ecu.edu.au/theses/2618.

Full text
Abstract:
Shift work in the mining industry is a risk factor for sleep loss leading to impaired alertness, which may adversely affect health and safety. This risk is increasingly recognised by leaders and shift workers in the mining industry; however, there is limited knowledge of the extent of sleep loss and other potential contributing factors. Furthermore, knowledge of the efficacy of individual interventions to help shift workers improve their sleep, and of the management of risk at an organisational level, is scarce. This PhD thesis involved three studies. The first two studies involved the recruitment of 88 shift workers on a fly-in, fly-out (FIFO) mining operation in Western Australia (WA), undertaken within a business-as-usual model. The third study develops a diagnostic tool to support the systematic assessment of an organisation's Fatigue Risk Management System (FRMS). Study 1 (Chapter 4) investigated sleep behaviours, the prevalence of risk of sleep disorders and the predicted impact on alertness across the roster schedule. Sleep was objectively measured using wrist-activity monitors for the 21-day study period, and biomathematical modelling was used to predict alertness across the roster schedule. The prevalence of risk for sleep problems and disorders was determined using scientifically validated sleep questionnaires. We found sleep loss was significantly greater following day shifts and night shifts compared to days off, which resulted in a 20% reduction in alertness across the 14 consecutive shifts at the mining operation. Shift workers reported a high prevalence of risk for sleep disorders, including shift work disorder (44%), obstructive sleep apnoea (OSA) (31%) and insomnia (8%); a high proportion of shift workers were obese, with a body mass index (BMI) > 30 kg/m² (23%), and 36% consumed hazardous levels of alcohol. All of these may have contributed to sleep loss. In addition, the design of shifts and rosters, specifically early morning shift start times (< 06:00) and long shift durations (> 12 h), may also have adversely affected sleep duration, as they did not allow sufficient sleep opportunity. Study 2 (Chapter 5) was a randomised control trial (RCT) that investigated the efficacy of interventions to improve sleep, comprising a two-hour sleep education program and biofeedback on sleep through a smartphone application. Sleep was objectively measured using wrist-activity monitors across two roster cycles (42 days), with the intervention received on day 21. Our results were inconclusive and suggest that further research is required to determine the efficacy of these commonly used interventions in the mining industry. In line with the results of Study 1, our interventions may not have been effective in improving sleep duration because the shift and roster design did not allow adequate time off between shifts for sleep (≥ 7 h) and daily routines. Study 3 (Chapter 6) used a modified Delphi process involving 16 global experts with experience and knowledge in sleep science, chronobiology and applied fatigue risk management in occupational settings, to define the elements considered essential as part of an FRMS. This study resulted in the development of an FRMS diagnostic tool to systematically assist an organisation in assessing its current level of implementation of an FRMS. The results of the studies within this PhD thesis present several potential benefits for the mining industry. These include an enhanced understanding of the extent of sleep loss and its potential impact on alertness, as well as contributing factors including shift and roster design elements and unmanaged sleep disorders. The FRMS diagnostic tool may practically guide mining operations on the elements required to manage risk. These findings may also inform government, occupational health and safety regulatory authorities and shift-work organisations more broadly on the need to identify and manage fatigue resulting from sleep loss as a critical risk.
APA, Harvard, Vancouver, ISO, and other styles
26

Beyan, Cigdem. "Object Tracking For Surveillance Applications Using Thermal And Visible Band Video Data Fusion." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612743/index.pdf.

Full text
Abstract:
Individual tracking of objects in video, such as people and the luggage they carry, is important for surveillance applications, as it would enable deduction of higher-level information and timely detection of potential threats. However, this is a challenging problem, and many studies in the literature track people and their belongings as a single object. In this thesis, we propose using thermal band video data in addition to visible band video data for tracking people and their belongings separately in indoor applications, using their heat signatures. For the object tracking step, an adaptive, fully automatic multi-object tracking system based on the mean-shift tracking method is proposed. Trackers are refreshed using foreground information to overcome possible problems due to changes in an object's size and shape, to handle occlusions and splits, and to detect newly emerging objects as well as objects that leave the scene. By using the trajectories of objects, the owners of the objects are found and abandoned objects are detected to generate an alarm. Better tracking performance is also achieved compared to a single modality, as the thermal reflection and halo effects which adversely affect tracking are eliminated by the complementary visible band data.
APA, Harvard, Vancouver, ISO, and other styles
27

Naftali, Eran 1971. "First order bias and second order variance of the Maximum Likelihood Estimator with application to multivariate Gaussian data and time delay and Doppler shift estimation." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Keller, Theresa [Verfasser]. "The prevalence sex-shift in single disease and multimorbid asthma and rhinitis during puberty: an individual participant data meta-analysis of European birth cohorts / Theresa Keller." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2018. http://d-nb.info/1170876374/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Anderson, Christopher. "BANDWIDTH LIMITED 320 MBPS TRANSMITTER." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/607635.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
With every new spacecraft that is designed comes a greater density of information that will be stored once it is in operation. This, coupled with the desire to reduce the number of ground stations needed to download this information from the spacecraft, places new requirements on telemetry transmitters. These new transmitters must be capable of data rates of 320 Mbps and beyond. Although the necessary bandwidth is available for some non-bandwidth-limited transmissions in Ka-Band and above, many systems will continue to rely on more narrow allocations down to X-Band. These systems will require filtering of the modulation to meet spectral limits. The usual requirements of this filtering also include that it not introduce high levels of inter-symbol interference (ISI) to the transmission. These constraints have been addressed at CE by implementing a DSP technique that pre-filters a QPSK symbol set to achieve bandwidth-limited 320 Mbps operation. This implementation operates within the speed range of the radiation-hardened digital technologies that are currently available and consumes less power than the traditional high-speed FIR techniques.
APA, Harvard, Vancouver, ISO, and other styles
30

AlShammeri, Mohammed. "Dynamic Committees for Handling Concept Drift in Databases (DCCD)." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23498.

Full text
Abstract:
Concept drift refers to a problem caused by a change in the data distribution in data mining. This leads to a reduction in the accuracy of the current model used to examine the underlying data distribution of the concept to be discovered. A number of techniques have been introduced to address this issue in a supervised learning (or classification) setting, where the target concept (or class) to be learned is known. One of these techniques is called "ensemble learning", which refers to using multiple trained classifiers to get better predictions through some voting scheme. In a traditional ensemble, the underlying base classifiers are all of the same type. Recent research extends the idea of ensemble learning to committees, where a committee consists of diverse classifiers. This is the main difference between regular ensemble classifiers and committee learning algorithms. Committees are able to use diverse learning methods simultaneously and dynamically take advantage of the most accurate classifiers as the data change. In addition, some committees are able to replace their members when they perform poorly. This thesis presents two new algorithms that address concept drift. The first algorithm has been designed to systematically introduce gradual and sudden concept drift scenarios into datasets. In order to save time and avoid memory consumption, the Concept Drift Introducer (CDI) algorithm divides the drift scenarios into phases. The main advantage of using phases is that it allows us to produce a highly scalable concept drift detector that evaluates each phase, instead of evaluating each individual drift scenario. We further designed a novel algorithm to handle concept drift. Our Dynamic Committee for Concept Drift (DCCD) algorithm uses a voted committee of hypotheses that vote on the best base classifier, based on its predictive accuracy. The novelty of DCCD lies in the fact that we employ diverse heterogeneous classifiers in one committee in an attempt to maximize diversity. DCCD detects concept drift by monitoring accuracy and weights the committee members by adding one point to the most accurate member. The total loss in accuracy for each member is calculated at the end of each point of measurement, or phase. The performance of the committee members is evaluated to decide whether a member needs to be replaced. Moreover, DCCD detects the worst member in the committee and eliminates it through a weighting mechanism. Our experimental evaluation centers on the performance of DCCD on various datasets of different sizes, with different levels of gradual and sudden concept drift. We further compare our algorithm to a state-of-the-art algorithm, namely the MultiScheme approach. The experiments indicate the effectiveness of our DCCD method under a number of diverse circumstances. The DCCD algorithm generally produces high performance, especially when the number of concept drifts in a dataset is large. Regarding dataset size, our results showed that DCCD produced a steady improvement in performance when applied to small datasets, while on large and medium datasets it has a comparable, and often slightly higher, performance than the MultiScheme technique. The experimental results also show that the DCCD algorithm limits the loss in accuracy over time, regardless of the size of the dataset.
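To make the committee idea concrete, here is a hedged sketch of a DCCD-like loop with heterogeneous scikit-learn members, accuracy-based weighting (one point to the most accurate member) and retraining of the weakest member; the actual DCCD bookkeeping (total loss per phase, replacement test) is richer than this.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

class Committee:
    """Heterogeneous committee with accuracy-weighted voting (binary labels)."""
    def __init__(self):
        self.members = [DecisionTreeClassifier(), GaussianNB(),
                        LogisticRegression(max_iter=1000)]
        self.weights = np.ones(len(self.members))

    def fit(self, X, y):
        for m in self.members:
            m.fit(X, y)
        return self

    def predict(self, X):
        votes = np.array([m.predict(X) for m in self.members])
        # weighted majority vote over {0, 1} labels
        return (np.average(votes, axis=0, weights=self.weights) >= 0.5).astype(int)

    def update(self, X, y):
        """End-of-phase step: reward the best member, retrain the worst on
        the newest data so the committee tracks a drifting concept."""
        accs = np.array([m.score(X, y) for m in self.members])
        self.weights[np.argmax(accs)] += 1.0
        self.members[np.argmin(accs)].fit(X, y)
        return accs
```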
APA, Harvard, Vancouver, ISO, and other styles
31

Topham, Emma. "Assertion and accommodation : a study of the assertive language in the conversations of school-age (5-13 years) girls." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/20961.

Full text
Abstract:
This study aimed to investigate the use of accommodation of assertive utterances (AUs) in the conversations of 49 girls aged 5;0-13;1. Based on the findings of earlier research that the use of such language is more closely related to age than to gender, it was predicted speakers would accommodate their use of and response to assertive utterances as a result of their partner's age. Naturalistic language from these speakers was collected over a year, and evidence of accommodation was observed in all speakers. Fewer AUs were used with younger speakers compared to older ones, and those used with younger girls were more likely to be produced with the sole purpose of controlling the hearer's behaviour. In addition, AUs were more likely to be complied with, or accepted, when they were produced by older girls. Given what is known about the types of language used by powerful/powerless individuals, it appears that these speakers consider age to be an indicator of status. A particularly interesting finding was that it was the age of a speaker in relation to other members of the conversation that influenced their use of and response to AUs, rather than the age of the speaker alone.
APA, Harvard, Vancouver, ISO, and other styles
32

Lawson, Siobhan. "Can real time data be used as an effective input for lighting control to influence human behaviour in a physical space against the backdrop of the global shift toward an experience economy?" Thesis, KTH, Ljusdesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297947.

Full text
Abstract:
The use of real time data as an input for lighting control is an emerging element for designers to implement into lighting schemes but does it add any value or have the ability to influence human behaviour? The recent development in technological capabilities, the demand within the emerging experience economy and the hybridisation of digital and physical realms make this a current and relevant investigation. This study aims to understand the relationship between light and behaviour and the potential of real time data to enhance it by initiating and curating lighting effect in a physical space. In this context, the experience economy describes the business model of providing meaningful and memorable experiences to customers as a core feature of a product or service while real time data describes the harvesting of information as it happens. Through reviewing literature and interviewing professionals in the field of both light and data the results conclude that light does influence behaviour in the context of attention, movement and emotion. Analysis of case studies and technological enablers indicate in-space sensors to be a valuable source of data which can be used effectively to trigger light scenes that respond instantly, with relevance to occupants inhabiting the built environment. Trend reports and industry luminaries forecast strong predictions for the merging of physical and digital worlds as a means of providing memorable and meaningful experiences for retail consumers. It is recommended that lighting designers educate themselves in preparation for the inevitable growing demand for such experiences.
APA, Harvard, Vancouver, ISO, and other styles
33

Naab-Levy, Adam O. "Enhanced Distance Measuring Equipment Data Broadcast Design, Analysis, Implementation, and Flight-Test Validation." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1449158180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bickel, Steffen. "Learning under differing training and test distributions." Phd thesis, Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2009/3333/.

Full text
Abstract:
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data are governed by the exact same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population: parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive, while data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating the training and test distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
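One way to build the rescaling weights the abstract describes is the discriminative density-ratio trick: train a probabilistic classifier to distinguish test from training examples and convert its probabilities into importance weights. This sketch is in the spirit of the thesis's discriminative approach but is not its integrated model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_train, X_test):
    """Estimate p_test(x) / p_train(x) for each training point by
    discriminating test (label 1) from training (label 0) examples."""
    X = np.vstack([X_train, X_test])
    s = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    p = clf.predict_proba(X_train)[:, 1]          # P(example is from test set)
    return p / (1 - p) * (len(X_train) / len(X_test))  # odds, size-corrected
```

The returned vector can then be passed as sample_weight to most scikit-learn estimators when fitting the final predictor on the biased training sample.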
APA, Harvard, Vancouver, ISO, and other styles
35

Zijlstra, Peter, and Christiaan Visser. "Developing Business Models in the Video Game Industry : An evaluation to strategic choices made by small and medium-sized development studios." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-18618.

Full text
Abstract:
Digitalization has given rise to new opportunities for small and medium-sized video game development studios. No longer bound by physical products and creative restraints, the developer has been empowered with independence. This qualitative study aims to understand how a development studio develops its business model and how the underlying strategy is formulated. Additionally, we evaluate the degree of innovativeness of the business model in terms of radical and incremental innovation according to Damanpour (1991). To achieve this, we present a comprehensive literature review to gain a more theoretical understanding of industry mechanics and to comprehend the reasoning behind existing business models. We structure the dynamics of the business model by analyzing nine business model aspects as suggested by Osterwalder, Pigneur and Clark (2010). Following our theoretical framework, we gain practical input from four separate case studies. An interpretative research method is used to gain a better understanding of the reasoning and choices made. We interpret our findings following a narrative approach, which shows that digitalization has been the prelude to a paradigm shift in the sense that development studios have started to adopt activities otherwise performed by key partners. As barriers dissipate, small and medium-sized development studios try to make sense of the current industry, but struggle in doing so. Having to reinvent themselves, a focus on creating thicker customer relationships is considered, and the idea of seeing games as a service is acknowledged to depict the future of the industry. The conclusions of this study contribute to both academic science and industry practice.
APA, Harvard, Vancouver, ISO, and other styles
36

Anota, Amelie. "Analyse longitudinale de la qualité de vie relative à la santé en cancérologie." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA3010/document.

Full text
Abstract:
Health-related quality of life (HRQoL) has become one of the major objectives of oncology clinical trials to ensure the clinical benefit of new treatment strategies for the patient. However, HRQoL results remain poorly used in clinical practice due to the subjective and dynamic nature of HRQoL. Moreover, statistical methods for its longitudinal analysis have to take into account the occurrence of missing data and a potential Response Shift effect reflecting the patient's adaptation to the disease and treatment toxicities. Finally, these methods should also produce results that are easily understandable for clinicians. In this context, this work aimed to review these limiting factors and to propose suitable methods for a robust interpretation of longitudinal HRQoL data. The work focuses on both the time to HRQoL score deterioration (TTD) as a modality of longitudinal analysis and the characterization of the occurrence of the Response Shift effect. It has resulted in the creation of an R package for longitudinal HRQoL analysis according to the TTD method, with an easy-to-use interface. Recommendations were proposed on the definitions of the TTD to apply according to the therapeutic setting and the potential occurrence of the Response Shift effect. This method, attractive for clinicians, was applied in two early-phase I and II trials. The inverse probability weighting method based on the propensity score was investigated in conjunction with the TTD method to take into account missing data depending on patients' characteristics. A comparison of three statistical approaches for longitudinal analysis showed the performance of the linear mixed model and allows some recommendations to be given for longitudinal analysis according to the study design. This study also highlighted the impact of informative missing data on longitudinal statistical methods. Factor analyses and Item Response Theory models showed their ability to characterize the occurrence of Response Shift in conjunction with the Then-test method. Finally, although structural equation modeling is often used to characterize this effect on the SF-36 generic questionnaire, it appears ill-suited to the particular structure of the cancer-specific HRQoL questionnaires of the European Organisation for Research and Treatment of Cancer (EORTC) HRQoL group.
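The time-to-deterioration endpoint can be illustrated with a minimal sketch (an assumption of the general idea, not code from the thesis or its R package): for each patient, find the first assessment at which the HRQoL score has dropped by at least a minimal clinically important difference relative to baseline, censoring patients who never deteriorate.

```python
import numpy as np

def time_to_deterioration(times, scores, mcid=5.0):
    """First time at which a HRQoL score drops >= `mcid` points below baseline.

    Returns (event_time, event_observed); patients who never deteriorate
    are censored at their last assessment. `times` and `scores` are aligned
    sequences for one patient (higher score = better HRQoL).
    """
    baseline = scores[0]
    for t, s in zip(times[1:], scores[1:]):
        if baseline - s >= mcid:
            return t, True         # deterioration observed
    return times[-1], False        # censored at last follow-up

# The (time, event) pairs then feed standard survival analysis,
# e.g. a Kaplan-Meier estimate of time to definitive deterioration.
print(time_to_deterioration([0, 3, 6, 9], [80, 78, 72, 70]))  # -> (6, True)
```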
APA, Harvard, Vancouver, ISO, and other styles
37

Åsbrink, Stefan E. "Nonlinearities and regime shifts in financial time series /." Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögsk.] (EFI), 1997. http://www.hhs.se/efi/summary/439.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Sallé, Guillaume. "Apprentissage génératif à bas régime des données pour la segmentation d'images en oncologie." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0032.

Full text
Abstract:
In statistical learning, the performance of models is affected by various biases present within the data, including data scarcity and domain shift. This thesis focuses on reducing their impact in the field of pathological structure segmentation in medical imaging. Our goal is to minimize data discrepancies at the region-of-interest (ROI) level between the training source domain and the target deployment domain, whether they are intrinsic to the data or caused by limited data availability. To this end, we present an adaptive data augmentation strategy based on the analysis of the intensity distribution of the ROIs in the deployment domain. A first contribution, which we call naive augmentation, consists of altering the appearance of the training ROIs to better match the characteristics of the ROIs in the deployment domain. A second augmentation, complementing the first, makes the alteration more realistic relative to the properties of the target domain by harmonizing the characteristics of the altered image. For this, we employ a generative model trained on a single unlabeled image from the deployment domain (a one-shot approach), making the technique usable in any data regime encountered. In this way, we enhance the robustness of the downstream segmentation model for ROIs whose characteristics were initially underrepresented in the deployment domain. The effectiveness of this method is evaluated under various data regimes and in different clinical contexts (MRI, CT, CXR). Our approach demonstrated impressive results in a tumor segmentation challenge at MICCAI 2022.
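A minimal sketch of what a "naive" intensity augmentation could look like (our assumption, not the thesis implementation): shift and rescale the intensities inside a training ROI so that their mean and standard deviation match statistics sampled from deployment-domain ROIs.

```python
import numpy as np

def match_roi_intensity(image, roi_mask, target_mean, target_std):
    """Rescale intensities inside `roi_mask` to a target mean/std.

    `target_mean` and `target_std` would be drawn from the intensity
    distribution of ROIs observed in the deployment domain.
    """
    out = image.astype(np.float64).copy()
    roi = out[roi_mask]
    roi = (roi - roi.mean()) / (roi.std() + 1e-8)    # standardize source ROI
    out[roi_mask] = roi * target_std + target_mean   # map to target statistics
    return out

# Usage: augment a training image so its lesion matches a darker,
# lower-contrast lesion profile seen at deployment.
img = np.random.rand(64, 64) * 100
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
aug = match_roi_intensity(img, mask, target_mean=35.0, target_std=4.0)
```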
APA, Harvard, Vancouver, ISO, and other styles
39

Rehn, Rasmus. "Stochastic modeling of yield curve shifts usingfunctional data analysis." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147342.

Full text
Abstract:
This thesis approaches the problem of modeling the multivariate distribution of interest rates by implementing a novel statistical tool known as functional data analysis (FDA). This is done by viewing yield curve shifts as distinct but continuous stochastic objects defined over a continuum of maturities. Based on these techniques, we provide two stochastic models with different assumptions regarding the temporal dependence of yield curve shifts and compare their performance against empirical data. The study finds that both models replicate the distributions of yield changes at medium- and long-term maturities, whereas neither model performs satisfactorily at the short segment of the yield curve. Both models, however, appear to capture the cross-sectional dependence accurately.
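One standard FDA-style starting point can be sketched briefly (an illustration under our own assumptions, not the thesis model): treat daily yield-curve shifts as discretized curves over maturities and extract their dominant modes, which typically resemble level, slope, and curvature, with a principal component decomposition.

```python
import numpy as np

# Rows = days, columns = maturities: each row is one yield-curve shift,
# i.e. a discretized function of maturity.
rng = np.random.default_rng(0)
maturities = np.array([0.25, 1, 2, 5, 10, 30])
shifts = rng.normal(scale=0.05, size=(500, len(maturities)))  # toy data

centered = shifts - shifts.mean(axis=0)
cov = centered.T @ centered / (len(shifts) - 1)   # covariance across maturities
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Simulating scores on the leading eigenfunctions gives a
# low-dimensional stochastic model of yield-curve shifts.
explained = eigval / eigval.sum()
print(explained[:3])   # share of variance captured by the first three modes
```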
APA, Harvard, Vancouver, ISO, and other styles
40

Teo, Sui-Guan. "Analysis of nonlinear sequences and streamciphers." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63358/1/Sui-Guan_Teo_Thesis.pdf.

Full text
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications like mobile phone conversations and Internet traffic. Stream ciphers are ideal cryptographic algorithms to encrypt these types of traffic as they have the potential to encrypt them quickly and securely, and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure; modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio of the eSTREAM project, a multi-year project of the European ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations; this has implications for the period of keystreams produced by Trivium. Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analysis with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexity for Bivium-B and Trivium was, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function and the size of the registers used in the construction of the system of equations affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence as the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined. We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers. We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output functions take inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
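The nonlinear filter generator structure the abstract studies can be sketched in a few lines (a toy illustration, not any cipher from the thesis): a linear feedback shift register is clocked normally, and keystream bits come from a nonlinear Boolean function applied to selected register stages; the stage-selection choice is exactly what the thesis shows to matter.

```python
def nlfg_keystream(state, nbits):
    """Toy nonlinear filter generator over a 5-stage LFSR.

    The register shifts with a linear feedback of two stages, and the
    output is a nonlinear Boolean function of three unevenly spaced
    stages. `state` is a list of 5 bits, not all zero; indexing
    conventions here are illustrative only.
    """
    s = list(state)
    out = []
    for _ in range(nbits):
        out.append(s[0] ^ (s[1] & s[4]))   # nonlinear filter on stages 0, 1, 4
        fb = s[0] ^ s[3]                   # linear feedback
        s = s[1:] + [fb]                   # shift the register
    return out

print(nlfg_keystream([1, 0, 0, 1, 1], 16))
```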
APA, Harvard, Vancouver, ISO, and other styles
41

Tseng, Chaw-Wu. "Vibration of rotating-shaft design spindles with flexible bases /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/7129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Purbawati, Anike. "Modulation de la fréquence d'un oscillateur spintronique (STNO) pour des applications de communication sans fil." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAY023/document.

Full text
Abstract:
Spin transfer nano-oscillators (STNOs) are a novel type of radio frequency (RF) oscillator that makes use of the spin transfer torque (STT) effect in a magnetic tunnel junction (MTJ) device to produce high-frequency auto-oscillations. STNOs are attractive for applications in wireless communications due to their nanometric size and their frequency tuning capabilities via either a DC current or an applied field. This frequency tuning permits encoding information via frequency shift keying (FSK), by digital modulation of the current or applied field between two discrete values without the need for an external RF mixer, leading to potentially less complex RF components. In this thesis, the feasibility of digital frequency modulation (FSK) using in-plane magnetized MTJ STNOs has been studied, in view of the wireless communications used in wireless sensor networks (WSN). For this, the maximum modulation rate, up to which a signal can be modulated or the frequency can be shifted between two discrete values, is an important aspect that needs to be characterized. This maximum modulation rate was studied via numerical macrospin simulation for different modulation configurations, i.e. modulation by a sinusoidal RF current and by a sinusoidal RF field. The simulations revealed that the maximum modulation rate under RF current modulation is given by the amplitude relaxation frequency fp of the STNO, which is of the order of a few hundred MHz for in-plane magnetized STNOs; the maximum modulation rate is thus limited to a few hundred Mbps, which is targeted here for moderate-data-rate wireless communication in WSN. Under RF field modulation, i.e. an RF field applied parallel to the easy axis, an enhanced modulation rate above fp can be achieved, since the frequency is modulated directly via the field and not via the amplitude. This suggests an important strategy for the design of STNO-based wireless communications to achieve high data rates. Besides numerical simulation, experimental studies of FSK by current modulation in STNOs were carried out, both for standalone STNOs and for STNOs integrated within microwave systems. FSK in standalone STNOs was successfully observed with a frequency shift of around 200 MHz (between ≈8.9 GHz and ≈9.1 GHz) at a modulation rate of 10 Mbps. This modulation rate is lower than the upper limit given by the relaxation frequency fp of the STNO, as predicted by the numerical simulation, because of the relatively high phase noise of the device measured. In order to test the feasibility of the STNO within microwave systems, FSK modulation of STNOs was performed on a printed circuit board (PCB) emitter, realized and developed by the Mosaic FP7 project partner TUD University. FSK with a frequency shift of around 300 MHz (between ≈9 GHz and ≈9.3 GHz) was observed at a modulation rate of 20 Mbps; the data rate here was limited by the characteristics of the PCB emitter and is not intrinsic to the STNO. The simulation and experimental studies of frequency modulation of STNOs demonstrate that the data rate is adequate for the wireless communications used in WSN. However, further improvements in the materials and nanofabrication of STNOs are required to enhance the output power and improve the spectral characteristics of the oscillations, in order to push the data rates to higher values with a large frequency shift.
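The FSK scheme itself can be sketched generically (textbook binary FSK, not the STNO hardware): each data bit selects one of two carrier frequencies, which is how the STNO's current-tuned oscillation frequency encodes information.

```python
import numpy as np

def fsk_modulate(bits, f0, f1, bit_rate, fs):
    """Binary FSK: bit 0 -> carrier at f0, bit 1 -> carrier at f1."""
    samples_per_bit = int(fs / bit_rate)
    t = np.arange(samples_per_bit) / fs
    phase = 0.0
    chunks = []
    for b in bits:
        f = f1 if b else f0
        chunks.append(np.sin(2 * np.pi * f * t + phase))
        # advance the phase so the waveform stays continuous across bits
        phase = (phase + 2 * np.pi * f * samples_per_bit / fs) % (2 * np.pi)
    return np.concatenate(chunks)

# Toy numbers: the 8.9/9.1 GHz shift reported above, scaled down to kHz
# so the waveform is easy to inspect.
signal = fsk_modulate([1, 0, 1, 1], f0=8900.0, f1=9100.0,
                      bit_rate=100.0, fs=1_000_000)
```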
APA, Harvard, Vancouver, ISO, and other styles
43

Åsbrink, Stefan E. "Nonlinearities and regime shifts in financial time series." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-866.

Full text
Abstract:
This volume contains four essays on various topics in the field of financial econometrics. All four discuss the properties of high frequency financial data and their implications for the model choice when an estimate of the capital asset return volatility is in focus. The interest lies both in characterizing "stylized facts" in such series with time series models and in predicting volatility. The first essay, entitled A Survey of Recent Papers Considering the Standard & Poor 500 Composite Stock Index, presents recent empirical findings and stylized facts in the financial market from 1987 to 1996 and gives a brief introduction to the research field of capital asset return volatility models and properties of high frequency financial data. As the title indicates, the survey is restricted to research on the well known Standard & Poor 500 index. The second essay, entitled Stylized Facts of Daily Return Series and the Hidden Markov Model, investigates the properties of the hidden Markov model, HMM, and its capability of reproducing stylized facts of financial high frequency data. The third essay, Modelling the Conditional Mean and Conditional Variance: A Combined Smooth Transition and Hidden Markov Approach with an Application to High Frequency Series, investigates the consequences of combining a nonlinear parameterized conditional mean with an HMM for the conditional variance when characterization of stylized facts is considered. Finally, the fourth essay, entitled Volatility Forecasting for Option Pricing on Exchange Rates and Stock Prices, investigates the volatility forecasting performance of some of the most frequently used capital asset return volatility models, such as the GARCH with normal and t-distributed errors, the EGARCH and the HMM. The prediction error minimization approach is also investigated. Each essay is self-contained and could, in principle, be read in any order chosen by the reader. This, however, requires a working knowledge of the properties of the HMM. For readers less familiar with the research field, the first essay may serve as a helpful introduction to the following three essays.
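The GARCH(1,1) model benchmarked in the fourth essay admits a short sketch (the standard textbook recursion, not the thesis code): the conditional variance is a weighted combination of a constant, the last squared return, and the last variance.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """One-step-ahead conditional variances under GARCH(1,1).

    sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
    Requires alpha + beta < 1 for covariance stationarity.
    """
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance as seed
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2

r = np.random.default_rng(1).normal(scale=0.01, size=250)   # toy daily returns
print(garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.9)[-1])
```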

Diss. Stockholm : Handelshögsk.

APA, Harvard, Vancouver, ISO, and other styles
44

Abecidan, Rony. "Stratégies d'apprentissage robustes pour la détection de manipulation d'images." Electronic Thesis or Diss., Centrale Lille Institut, 2024. http://www.theses.fr/2024CLIL0025.

Full text
Abstract:
Today, it is easier than ever to manipulate images for unethical purposes, and this practice is increasingly prevalent in social networks and advertising. Malicious users can, for instance, generate convincing deep fakes in a few seconds to lure a naive public, or communicate covertly by hiding illegal information inside images. Such abilities raise significant security concerns regarding misinformation and clandestine communications. The forensics community therefore actively collaborates with law enforcement agencies worldwide to detect image manipulations. The most effective methodologies for image forensics rely heavily on convolutional neural networks meticulously trained on controlled databases. These databases are curated by researchers to serve specific purposes, resulting in a great disparity from the real-world datasets encountered by forensic practitioners. This data shift poses a clear challenge for practitioners, hindering the effectiveness of standardized forensics models when applied in practical situations. Through this thesis, we aim to improve the efficiency of forensics models in practical settings by designing strategies to mitigate the impact of data shift. We start by exploring the literature on out-of-distribution generalization to find existing strategies that already help practitioners build efficient forensic detectors in practice. Two main frameworks hold promise: implementing models inherently able to learn how to generalize to images coming from a new database, or constructing a representative training base allowing forensics models to generalize effectively to the scrutinized images. Both frameworks are covered in this manuscript. When faced with many unlabeled images to examine, domain adaptation strategies matching training and testing bases in latent spaces are designed to mitigate the data shifts encountered by practitioners. Unfortunately, these strategies often fail in practice despite their theoretical efficiency, because they assume that the scrutinized images are balanced between genuine and manipulated, an assumption unrealistic for forensic analysts, since suspects might, for instance, be entirely innocent. Additionally, such strategies are typically tested under the assumption that an appropriate training set has been chosen from the beginning, to facilitate adaptation to the new distribution. Trying to generalize from a few images is more realistic but inherently much more difficult. We deal precisely with this scenario in the second part of this thesis, gaining a deeper understanding of data shifts in digital image forensics. Exploring the influence of traditional processing operations on the statistical properties of developed images, we formulate several strategies to select or create training databases relevant for a small number of images under scrutiny. Our final contribution is a framework leveraging the statistical properties of images to build relevant training sets for any testing set in image manipulation detection. This approach greatly improves the generalization of classical steganalysis detectors on the practical sets encountered by forensic analysts and can be extended to other forensic contexts.
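A minimal sketch of statistics-driven training-set selection (our illustrative assumption, not the thesis framework): rank candidate training sources by the distance between simple image statistics of their images and those of the images under scrutiny, and train the detector on the closest source.

```python
import numpy as np

def image_stats(images):
    """Cheap per-set descriptor: mean/std of pixel intensities and of
    horizontal first differences (a crude proxy for processing traces)."""
    feats = []
    for im in images:
        dx = np.diff(im.astype(np.float64), axis=1)
        feats.append([im.mean(), im.std(), dx.mean(), dx.std()])
    return np.mean(feats, axis=0)

def closest_source(test_images, candidate_sources):
    """Pick the candidate training set whose statistics best match the test set."""
    target = image_stats(test_images)
    dists = [np.linalg.norm(image_stats(src) - target)
             for src in candidate_sources]
    return int(np.argmin(dists))

# Usage: the candidate sources could be image sets developed with different
# processing pipelines; the selected one serves as the detector's training base.
```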
APA, Harvard, Vancouver, ISO, and other styles
45

Johnson, Kevin Russell. "Advancements in Thermal Integrity Profiling Data Analysis." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6520.

Full text
Abstract:
Thermal Integrity Profiling (TIP) is a relatively new non-destructive test method for evaluating the post-construction quality of drilled shafts. Therein, anomalies in a shaft are indicated by variations in its thermal profile when measured during the curing stages of the concrete. A considerable benefit of this method is the ability to detect anomalies both inside and outside the reinforcement cage, as well as to provide a measure of lateral cage alignment. Similarly remarkable, early developments showed that the shape of a temperature profile (with depth) matched closely with the shape of the shaft, thus allowing for a straightforward interpretation of data. As with any test method, however, the quality of the results depends largely on the level of analysis and the way in which test data is interpreted, which was the focus of this study. This dissertation presents the findings from both field data and computer models to address and improve TIP analysis methods, specifically focusing on: (1) the analysis of non-uniform temperature distributions caused by external boundary conditions, (2) proper selection of temperature-radius relationships, and (3) understanding the effects of time on analysis. Numerical modeling was performed to identify trends in the temperature distributions in drilled shafts during concrete hydration. Specifically, computer-generated model data was used to identify the patterns of the non-linear temperature distributions that occur at the ends of a shaft, caused by the added heat-loss boundary in the longitudinal direction. Similar patterns are observed at locations in a shaft where drastic changes in external boundary conditions exist (e.g. shafts that transition from soil to water or air). Numerical modeling data was also generated to examine the relationship between measured temperatures and shaft size/shape, which is a fundamental concept of traditional TIP analysis. A case study was investigated involving a shaft from which 24 hrs of internal temperature data was collected and compared to results from a computer-generated model made to mimic the field conditions of the shaft. Analysis of field-collected and model-predicted data was performed to examine the treatment of non-linear temperature distributions at the ends of the shaft and where a mid-shaft change in boundary was encountered. Additionally, the analysis was repeated for data over a wide range of concrete ages to examine the effects of time on the results of analysis. Finally, data from over 200 field-tested shafts was collected and analyzed to perform a statistical evaluation of the parameters used for interpretation of the non-linear distributions at the top and bottom of each shaft. This investigation incorporated an iterative algorithm which determined the parameters required to provide a best-fit solution for the top and bottom of each shaft. A collective statistical evaluation of the resulting parameters was then used to better define the proper methods for analyzing end effects. Findings revealed that the effects of non-uniform temperature distributions in drilled shaft thermal profiles can be offset with a curve-fitting algorithm defined by a hyperbolic tangent function that closely matches the observed thermal distribution. Numerical models and statistical evaluations provided a rationale for proper selection of the function-defining parameters.
Additionally, numerical modeling showed that the true temperature-to-radius relationship in drilled shafts is non-linear, but in most cases a linear approximation is well suited. Finally, analysis of both model and field data showed that concrete age has virtually no effect on the final results of thermal profile analysis, as long as temperature measurements are taken within the dominant stages of concrete hydration.
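The hyperbolic tangent fit the findings describe can be sketched briefly (an assumption of the general approach, not the study's algorithm): fit a tanh roll-off to the temperature profile near a shaft end so the end heat-loss effect can be separated from the shaft's true shape.

```python
import numpy as np
from scipy.optimize import curve_fit

def end_profile(z, t_core, dt, z0, L):
    """Hyperbolic-tangent roll-off of temperature near the shaft top.

    t_core: interior temperature; dt: end temperature drop;
    z0: inflection depth; L: transition length scale.
    """
    return t_core - dt * 0.5 * (1.0 - np.tanh((z - z0) / L))

depth = np.linspace(0, 10, 50)                         # m below shaft top
temp = end_profile(depth, 60, 20, 1.5, 0.8)            # synthetic "measured" profile
temp += np.random.default_rng(2).normal(0, 0.2, 50)    # sensor noise

params, _ = curve_fit(end_profile, depth, temp, p0=[55, 15, 1.0, 1.0])
print(params)   # recovered [t_core, dt, z0, L]
```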
APA, Harvard, Vancouver, ISO, and other styles
46

Brueckman, Christina. "Reliability analysis of discrete fracture network projections from borehole to shaft scale discontinuity data." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58405.

Full text
Abstract:
When listing the risk factors that may impact the feasibility and success of a block cave operation, Brown (2003) highlights the adequacy of the available geotechnical data as a primary risk. Detailed data on the major structures, rock mass properties, and in situ stresses are necessary to assess the caveability of the orebody and the excavation stability on the operating levels below, including the potential for fault slip and rockburst hazards when mining in a higher stress environment. The source of this essential data, especially at feasibility-level design, is almost always limited to borehole data. This is emphasized by Laubscher (2000), who notes that most block cave mines are designed solely on borehole data. When restricted to borehole data, significant effort is expended on obtaining oriented core and/or televiewer logs to derive critical data regarding the frequency and orientation of discontinuities and the presence of major faults. Subsequent analysis of the spatial relationships between discontinuities is facilitated by the use of Discrete Fracture Network (DFN) modelling. The value of DFN models for assessing in situ fragmentation and rock mass strength highlights a critical limitation of borehole data: required DFN inputs include the orientation, intensity, and size distributions of the discontinuities, which allow the stochastic generation of a representative fracture network. Evaluating discontinuity orientation is relatively easy, and evaluating intensity or spacing is possible with sufficient effort, but evaluating discontinuity size is not possible given the small "observation window" of a borehole. This thesis reports the results from research carried out to compare analyses of discontinuity data sampled across different spatial scales, to improve our understanding and reduce uncertainty in the characterization and projection of discontinuity networks, specifically with respect to fracture spacing and persistence within the rock mass. This work is undertaken using discontinuity data from a deep geotechnical borehole and a co-located large-diameter shaft. The close proximity of the borehole and shaft provided an opportunity to ground-truth borehole projections based on traditional core logging and televiewer logs. The comparative analysis was completed with the use of DFN modelling. The improved understanding of the spacing and persistence of the discontinuities will aid in the further development of guidelines for rapid geotechnical characterization.
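A minimal sketch of stochastic DFN generation (illustrative assumptions throughout, not the thesis model): fracture centers from a Poisson process, orientations scattered about a mean set pole as a crude stand-in for a Fisher distribution, and sizes from a truncated power law, the size input being exactly what boreholes cannot constrain.

```python
import numpy as np

def generate_dfn(n_mean, volume, rng):
    """Toy discrete fracture network in a unit cube.

    Centers: homogeneous Poisson process; dips/azimuths: normal scatter
    about a mean set orientation; radii: truncated power law sampled by
    inverse-CDF. All parameter values are illustrative.
    """
    n = rng.poisson(n_mean * volume)
    centers = rng.uniform(0, 1, size=(n, 3))
    dips = np.clip(rng.normal(60, 8, n), 0, 90)     # degrees
    azimuths = rng.normal(135, 10, n) % 360
    u = rng.uniform(0, 1, n)
    r_min, r_max, a = 0.5, 10.0, 2.5                 # power-law exponent a
    radii = (r_min**(1 - a) + u * (r_max**(1 - a) - r_min**(1 - a)))**(1 / (1 - a))
    return centers, dips, azimuths, radii

rng = np.random.default_rng(3)
centers, dips, azimuths, radii = generate_dfn(n_mean=200, volume=1.0, rng=rng)
```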
APA, Harvard, Vancouver, ISO, and other styles
47

Muller, Bruno. "Transfer Learning through Kernel Alignment : Application to Adversary Data Shifts in Automatic Sleep Staging." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0037.

Full text
Abstract:
This doctoral project aims to improve an automatic sleep staging system by taking into account inter- and intra-individual variabilities, which have an adverse effect on the classification. We focus in particular on the detection of rapid eye movement (REM) sleep periods during the night. The core of our research is transfer learning and the selection of suitable detectors among a set, allowing the analysis to be individualized by exploiting the properties of the observed data. We focus on the application of kernel alignment methods, first through kernel-target alignment, studied here in a dual way: the kernel is fixed and the criterion is optimized with respect to the sought labels of the test data. In a second step, we introduce kernel-cross alignment, which exploits the information contained in the training data more efficiently. The ideas developed in this work have been extended to automatically selecting one or more suitable training sets for a given test set. The contributions of this work are both methodological and algorithmic, general in scope, but also focused on the application.
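Kernel-target alignment itself admits a short sketch (the textbook definition, not the thesis code): the normalized Frobenius inner product between a kernel Gram matrix K and the ideal target kernel yy^T, scoring how well a kernel, or a candidate labeling, fits the data.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), y in {-1, +1}^n."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy check: an RBF kernel on two well-separated clusters aligns well
# with the cluster labels.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
K = np.exp(-sq / 2.0)
print(kernel_target_alignment(K, y))   # close to 1 for separable clusters
```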
APA, Harvard, Vancouver, ISO, and other styles
48

Penna, Lyta. "Implementation issues in symmetric ciphers." Thesis, Queensland University of Technology, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sarr, Jean Michel Amath. "Étude de l’augmentation de données pour la robustesse des réseaux de neurones profonds." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS072.

Full text
Abstract:
In this thesis, we considered the problem of the robustness of neural networks; that is, the case where the training set and the deployment set are not independently and identically distributed from the same source, a violation of the so-called i.i.d. assumption. Our main research axis has been data augmentation: an extensive literature review and preliminary experiments showed us its regularization potential. As a first step, we therefore sought to use data augmentation to make neural networks more robust to various synthetic and natural dataset shifts, a dataset shift being simply a violation of the i.i.d. assumption. However, the results of this approach were mixed. In some cases the augmented data could lead to performance jumps on the deployment set, but this did not occur every time, and in some cases augmentation could even reduce performance on the deployment set. We offer a granular explanation for this phenomenon in our conclusions. A better use of data augmentation for neural network robustness is to generate stress tests to observe a model's behavior when various shifts occur, and then to use that information to estimate the error on the deployment set of interest even without labels; we call this deployment error estimation. Furthermore, we show that the use of independent data augmentations can improve deployment error estimation. We believe that this use of data augmentation will allow the reliability of neural networks to be better quantified when they are deployed on new, unknown datasets.
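The stress-testing idea can be sketched briefly (our illustrative assumption, not the thesis method): measure a model's accuracy under a battery of synthetic shifts applied to labeled validation data; the resulting profile shows which shifts the model tolerates and can feed a label-free estimate of deployment error.

```python
import numpy as np

def stress_test(model, X_val, y_val, augmentations):
    """Accuracy of `model` under each synthetic shift in `augmentations`.

    `augmentations` maps a shift name to a function X -> X_shifted;
    `model` is assumed to expose a scikit-learn-style predict().
    """
    report = {}
    for name, aug in augmentations.items():
        acc = (model.predict(aug(X_val)) == y_val).mean()
        report[name] = acc
    return report

# Illustrative battery of shifts on array-valued inputs.
rng = np.random.default_rng(5)
augmentations = {
    "identity": lambda X: X,
    "noise":    lambda X: X + rng.normal(0, 0.1, X.shape),
    "contrast": lambda X: 0.5 * X,
    "offset":   lambda X: X + 0.2,
}
# report = stress_test(clf, X_val, y_val, augmentations)
# A deployment error estimate can then be read off by matching the
# deployment data's statistics to the closest stressed condition.
```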
APA, Harvard, Vancouver, ISO, and other styles
50

Castrezana, Sergio Javier. "Patterns of Differentiation Among Allopatric Drosophila mettleri Populations." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1313%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles