Dissertations / Theses on the topic 'Features variability'

To see the other types of publications on this topic, follow the link: Features variability.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Features variability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Crossen, Samantha Lokelani. "Investigation of Variability in Cognitive State Assessment based on Electroencephalogram-derived Features." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316025164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vasapollo, Claudio. "Spatio-temporal Variability of Plant Features and Motile Invertebrates in Posidonia oceanica Seagrass Meadows." Thesis, Open University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.525851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baker, Brendan J. "Speaker verification incorporating high-level linguistic features." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/17665/1/Brendan_Baker_Thesis.pdf.

Full text
Abstract:
Speaker verification is the process of verifying or disputing the claimed identity of a speaker based on a recorded sample of their speech. Automatic speaker verification technology can be applied to a variety of person authentication and identification applications including forensics, surveillance, national security measures for combating terrorism, credit card and transaction verification, automation and indexing of speakers in audio data, voice-based signatures, and over-the-phone security access. The ubiquitous nature of modern telephony systems allows for the easy acquisition and delivery of speech signals for processing by an automated speaker recognition system. Traditionally, approaches to automatic speaker verification have involved holistic modelling of low-level acoustic-based features in order to characterise physiological aspects of a speaker such as the length and shape of the vocal tract. Although the use of these low-level features has proved highly successful, there are numerous other sources of speaker-specific information in the speech signal that have largely been ignored. In spontaneous and conversational speech, perceptually higher levels of information such as the linguistic content, pronunciation idiosyncrasies, idiolectal word usage, speaking rates and prosody can also provide useful cues as to the identity of a speaker. The main aim of this work is to explore the incorporation of higher levels of information into the verification process. Specifically, linguistic constructs such as words, syllables and phones are examined for their usefulness as features for text-independent speaker verification. Two main approaches to incorporating these linguistic features are explored. Firstly, the direct modelling of linguistic feature sequences is examined. Stochastic language models are used to model word and phonetic sequences obtained from automatically generated transcripts. Experimentation indicates that significant speaker-characterising information is indeed contained in both word- and phone-level transcripts. It is shown, however, that model estimation issues arise when limited speech is available for training. This speaker model estimation problem is addressed by employing an adaptive model training strategy that significantly improves performance and extends the usefulness of both lexical and phonetic techniques to short training length situations. An alternative approach to incorporating linguistic information is also examined. Rather than modelling the high-level features independently of acoustic information, linguistic information is instead used to constrain and aid acoustic-based speaker verification techniques. It is hypothesised that a "text-constrained" approach provides direct benefits by facilitating more detailed modelling, as well as providing useful insight into which articulatory events provide the most useful speaker-characterising information. A novel framework for text-constrained speaker verification is developed. This technique is presented as a generalised framework capable of using different feature sets and modelling paradigms, and is based upon the use of a newly defined pseudo-syllabic segmentation unit. A detailed exploration of the speaker-characterising power of both broad phonetic and syllabic events is performed and used to optimise the system configuration.
An evaluation of the proposed text-constrained framework using cepstral features demonstrates the benefits of such an approach over holistic approaches, particularly in extended training length scenarios. Finally, a complete evaluation of the developed techniques on the NIST 2005 speaker recognition evaluation database is presented. The benefit of including high-level linguistic information is demonstrated when a fusion of both high- and low-level techniques is performed.
APA, Harvard, Vancouver, ISO, and other styles
5

Hardman-Mountford, Nicholas John. "Environmental variability in the Gulf of Guinea large marine ecosystem : physical features, forcing and fisheries." Thesis, University of Warwick, 2000. http://wrap.warwick.ac.uk/1125/.

Full text
Abstract:
This thesis examines the forcing and behaviour of oceanographic physical features, relevant to recruitment in fish populations, in the Gulf of Guinea Large Marine Ecosystem, on seasonal and interannual time scales. Remotely sensed sea-surface temperature (SST) data covering the period 1981–1991 was used to identify and describe a number of oceanographic features, including the Senegalese Upwelling influence, the Ghana and Côte d’Ivoire coastal upwelling, river run-off, fronts and the previously unrecorded observation of shelf-break cooling along the coast of Liberia and Sierra Leone during the boreal winter. Interannual variability in SST was observed on an approximate three year scale and an extended warm phase was noted between 1987 and 1991. Principal components analysis (PCA) was used to further investigate the variance structure of these SST data and this technique was shown to be able to accurately define boundaries of the Gulf of Guinea system and its constituent subsystems. River discharge data from throughout the Gulf of Guinea was also investigated using PCA, confirming the hydroclimatic regions identified by Mahé and Olivry (1999). The boundaries between these regions correspond closely to those identified between subsystems in the SST data, suggesting a degree of coupling between oceanographic and meteorological variability in the Gulf of Guinea. To further investigate this coupling, local climate data and global/basin scale indices were compared qualitatively and statistically with remotely sensed and in situ SST data and indices of interannual variability in oceanographic features. A new basin scale index was proposed as a measure of zonal atmospheric variability in the subtropical North Atlantic (SNAZI) and this was shown to be the dominant mode of climate variability forcing SST in the Gulf of Guinea. The implications of these results for fisheries recruitment dynamics are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Püschel, Georg, Christoph Seidl, Mathias Neufert, André Gorzel, and Uwe Aßmann. "Test Modeling of Dynamic Variable Systems using Feature Petri Nets." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-126018.

Full text
Abstract:
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions resulting in static variability. Furthermore, due to the dependence on sensors and connectivity, mobile software has to adapt its behavior accordingly at runtime resulting in dynamic variability. However, software engineers need to assure quality of a mobile application even with this large amount of variability—in our approach by the use of model-based testing (i.e., the generation of test cases from models). Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach by introducing an example translator application.
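To make the idea concrete for readers unfamiliar with the formalism, the following minimal Python sketch (not the authors' notation or tooling; the translator-style feature and place names are invented) shows a Petri net whose transitions carry feature guards, so that reconfiguring the active features at runtime changes which behaviour can fire:

```python
# Minimal sketch of a feature-guarded Petri net: transitions fire only when
# their input places hold tokens AND their feature guard is satisfied by the
# currently bound configuration. All names are illustrative assumptions.

class FeaturePetriNet:
    def __init__(self, marking, active_features):
        self.marking = dict(marking)            # place -> token count
        self.features = set(active_features)    # currently bound features
        self.transitions = {}                   # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=frozenset()):
        self.transitions[name] = (tuple(inputs), tuple(outputs), frozenset(guard))

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        has_tokens = all(self.marking.get(p, 0) > 0 for p in inputs)
        return has_tokens and guard <= self.features

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs, _ = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

    def reconfigure(self, active_features):
        # dynamic variability: rebinding features at runtime changes which
        # transitions are enabled without touching the net structure
        self.features = set(active_features)


# Hypothetical translator app, loosely following the abstract's example
net = FeaturePetriNet({"idle": 1}, active_features={"online"})
net.add_transition("translate_online", ["idle"], ["done"], guard={"online"})
net.add_transition("translate_offline", ["idle"], ["done"], guard={"offline"})
print(net.enabled("translate_online"))   # True while the "online" feature is bound
net.reconfigure({"offline"})             # connectivity lost at runtime
print(net.enabled("translate_online"))   # False after reconfiguration
```

The guard mechanism is what couples configuration-dependent behaviour and reconfiguration to the feature model of the system under test.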
APA, Harvard, Vancouver, ISO, and other styles
7

Alfayez, Hanan Mohammed. "A study of variability predictors and clinical features of treated incidence of schizophrenia in Riyadh, Saudi Arabia." Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/a-study-of-variability-predictors-and-clinical-features-of-treated-incidence-of-schizophrenia-in-riyadh-saudi-arabia(96b62291-569b-48cf-9868-a1d3fc2c5e34).html.

Full text
Abstract:
This Ph.D. thesis is presented as three separate papers, and the overall aim of this research is to describe and achieve a broader and clearer understanding of the epidemiology, aetiology and symptomatology of schizophrenia in Saudi Arabia. The first study provides knowledge about the epidemiology of schizophrenia by investigating the incidence in Riyadh, Saudi Arabia and using the incidence data to describe heterogeneity across districts in Riyadh. In addition, the study tests whether variation in incidence occurs according to nationality, sex, age, marital status, employment status, and income. The second study evaluates the five-factor model in Saudi schizophrenia patients by factor analysis of OPCRIT items as rated from the health records. It also tests whether there is any association between the five factors and the demographic data included in OPCRIT. The third study describes the duration of untreated psychosis (DUP) in Riyadh and identifies any association between patient demographic factors and the first pathway to care with the DUP. The chosen study design for the whole research was a retrospective case-note study of all incident cases of schizophrenia over a two-year period presenting in the capital city of Saudi Arabia. The first study is an epidemiological study with an ecological design, which determines the incidence of schizophrenia amongst the population in Riyadh and identifies associations between the incidence of schizophrenia and demographic and socio-environmental characteristics. The second study is a factor analysis of OPCRIT items from a total of 421 schizophrenia patients in Riyadh who presented between 2009 and 2011, while the third study is a descriptive DUP study which describes the duration of untreated psychosis and identifies any association between the DUP and both patient demographic factors and their first pathway to care. The results showed that the incidence rate of schizophrenia in Saudi Arabia is similar to those recorded in Western countries, with associations between schizophrenia incidence and younger age, male gender, single status and unemployment. A lack of association between population density and area-level income with schizophrenia incidence was also confirmed. The second study produced five symptom dimensions (mania, depression, reality distortion, disorganisation, and manic/bizarre delusions) explaining 33% of the total variance. Different dimensions were differently associated with the demographic/premorbid risk factors. Results of the third study showed that the median DUP was 1.41 years. Older age at onset, single marital status and higher educational level were associated with shorter DUP. Long DUP was associated with help-seeking from traditional healers. This thesis has presented a comprehensive picture of the epidemiology of schizophrenia in the capital city of Saudi Arabia, the duration of untreated psychosis and a factor analysis of symptoms of schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
8

Eriksson, Magnus. "Engineering Families of Software-Intensive Systems using Features, Goals and Scenarios." Doctoral thesis, Umeå : Department of Computing Science, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Di, Fusco Greta. "A Reliable Downscaling of ECG Signals for the Detection of T wave Heterogeneity Features." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Find full text
Abstract:
In cardiovascular disease, the definition and detection of the ECG parameters related to repolarization dynamics in post-MI patients is still a crucial unmet need. In addition, the use of a 3D sensor in implantable medical devices would be a crucial means of assessing or predicting Heart Failure status, but the inclusion of such a feature is limited by hardware and firmware constraints. The aim of this thesis is the definition of a reliable surrogate of the 500 Hz ECG signal to reach the aforementioned objective. To evaluate the loss of delineation reliability due to sampling frequency reduction, the signals have been consecutively downsampled by factors of 2, 4 and 8, thus obtaining ECG signals sampled at 250, 125 and 62.5 Hz, respectively. The final goal is a feasibility assessment of the detection of the fiducial points, in order to translate those parameters into meaningful clinical parameters for Heart Failure prediction, such as T wave interval heterogeneity and the variability of areas under T waves. An experimental setting for data collection on healthy volunteers has been set up at the Bakken Research Center in Maastricht. A 16-channel ambulatory system, provided by TMSI, has recorded the standard 12-lead ECG, two 3D accelerometers and a respiration sensor. The collection platform has been set up with the TMSI proprietary software Polybench, and the data analysis of these signals has been performed with Matlab. The main results of this study show that the 125 Hz sampling rate is a good candidate for reliable detection of fiducial points. T wave intervals proved to be consistently stable, even at 62.5 Hz. Further studies would be needed to provide a better comparison between sampling at 250 Hz and 125 Hz for areas under the T waves.
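As a purely illustrative sketch of the downsampling step described above (the thesis pipeline used Polybench and Matlab; the synthetic signal and filter settings here are assumptions), a 500 Hz series can be decimated to 250, 125 and 62.5 Hz with anti-alias filtering as follows:

```python
# Illustrative only: downsample a 500 Hz ECG-like signal by factors 2, 4, 8
# with an anti-aliasing FIR filter, mirroring the rates studied in the thesis.
import numpy as np
from scipy.signal import decimate

FS = 500.0                                   # original sampling rate [Hz]
t = np.arange(0, 10, 1 / FS)                 # 10 s of synthetic data
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

for factor in (2, 4, 8):                     # -> 250, 125, 62.5 Hz
    y = decimate(ecg_like, factor, ftype="fir", zero_phase=True)
    print(f"{FS / factor:6.1f} Hz: {y.size} samples")
```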
APA, Harvard, Vancouver, ISO, and other styles
10

Swinney, Tyler C. "Sources of Variability in Ceramic Artifacts Recovered from Refuse-Filled Pit Features at the Hahn’s Field Site, Hamilton County, Ohio." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427983448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Luna-Mendez, Jaime-Andres. "Epidemiological and clinical variability of amyotrophic lateral sclerosis between geographic areas and populations : focus on Africa and Latin America." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0036/document.

Full text
Abstract:
Amyotrophic lateral sclerosis (ALS) is a rare neurodegenerative disorder with an invariably fatal outcome. Current evidence supports ALS variability in terms of incidence, mortality and clinical features between geographic areas and populations. This dissertation offers an updated review of ALS heterogeneity along with two original epidemiological and clinical studies in Africa and Latin America. The first is a multicenter hospital-based study in eight African countries that described and compared the sociodemographic characteristics, clinical features, treatments, prognoses and survival times of patients with ALS. Certain characteristics were different in African cases compared to Western cases, such as a higher proportion of male patients, a younger age at onset, a lower proportion of bulbar onset and a shorter survival than expected. Subcontinental location and riluzole treatment were independently associated with survival. The second is a population-based study that estimated ALS mortality rates in Ecuador, a country with a predominantly admixed population. The findings support a lower ALS occurrence in admixed populations from Latin America compared to European and North American populations. Standardized mortality rates were compared among ethnic groups, with significant differences between the admixed group and the other ethnic groups (Indigenous, Asians and Arabs). This work provides original and reliable data to improve our knowledge of ALS in Africa and Latin America. International and multidisciplinary collaboration is crucial to understanding ALS variability in different populations.
APA, Harvard, Vancouver, ISO, and other styles
12

Johnson, Earl E., and Todd A. Ricketts. "Dispensing Rates of Four Common Hearing Aid Product Features: Associations With Variations in Practice Among Audiologists." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etsu-works/1697.

Full text
Abstract:
The purpose of the study was to develop and examine a list of potential variables that may account for variability in the dispensing rates of four common hearing aid features. A total of 29 potential variables were identified and placed into the following categories: (1) characteristics of the audiologist, (2) characteristics of the hearing aids dispensed by the audiologist, (3) characteristics of the audiologist's patient population, and (4) evidence-based practice grades of recommendation for each feature. The potentially associative variables then were examined using regression analyses from the responses of 257 audiologists to a dispensing practice survey. There was a direct relation between price and level of hearing aid technology with the frequency of dispensing product features. There was also a direct relation between the belief by the audiologist that a feature might benefit patients and the frequency of dispensing that feature. In general, the results suggested that personal differences among audiologists and the hearing aids audiologists choose to dispense are related more strongly to dispensing rates of product features than to differences in characteristics of the patient population served by audiologists. An additional finding indicated that evidence-based practice recommendations were inversely related to dispensing rates of product features. This finding, however, may not be the result of dispensing trends as much as hearing aid manufacturing trends.
APA, Harvard, Vancouver, ISO, and other styles
13

Pham-Hung, d'Alexandry d'Orengiani Anne-Laure. "The accessory glycoprotein gp3 of canine Coronavirus type 1 : investigations of sequence variability in feline host and of the basic features of the different variants." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA114831/document.

Full text
Abstract:
The different genotypes of canine (CCoV-I/II) and feline (FCoV-I/II) Coronaviruses share a close phylogenetic relationship, suggesting inter-species transmissions between cats and dogs. Through sequence analyses of cat samples, atypical FCoV strains were discovered, harbouring an S gene related to FCoV-I, an N gene close to the CCoV-I cluster and the ORF3 gene peculiar to CCoV-I. This ORF3 gene was systematically truncated in feline samples, displaying either one or two identical deletions, leading to the translation of gp3-Δ1 and gp3-Δ2. As deletions in accessory proteins have already been involved in host switching, studies of the different variants of gp3 were conducted. Results demonstrate that all proteins oligomerize through covalent bonds and are retained in the ER without any specific retention signal. Deletions influence the expression level, with proper expression of the three proteins in canine cells, whereas only gp3-Δ1 expression is sustained in feline cells. As no Coronavirus isolates harbouring the ORF3 gene exist, cells expressing the different gp3 proteins have been infected with a CCoV-II strain. In this model, the gp3 proteins do not influence the viral life cycle. In the light of the emergence of new Coronaviruses, investigations of their molecular mechanisms during host switching are crucial, and canine and feline Coronaviruses could represent a useful model.
APA, Harvard, Vancouver, ISO, and other styles
14

Quandt, Lisa-Ann [Verfasser], and S. C. [Akademischer Betreuer] Jones. "Variability of a Summer Block in Medium-Range and Subseasonal Ensemble Forecasts and Investigation of Surface Impacts and Relevant Dynamical Features / Lisa-Ann Quandt ; Betreuer: S. C. Jones." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1140118390/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Baum, David. "Variabilitätsextraktion aus makrobasierten Software-Generatoren." Master's thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-132719.

Full text
Abstract:
This thesis addresses the question of how variability information can be extracted from the source code of generators. For this purpose, a classification of variables was developed which, compared to existing approaches, enables a more precise identification of features. This classification also forms the basis for detecting feature interactions and cross-tree constraints. Furthermore, it is shown how the extracted information can be represented by feature models. Since these are based on the generator source code, they provide insight into the solution space of the domain: it becomes visible which implementation components a feature consists of and which relationships exist between features. However, an automatically generated feature model provides only limited insight into the solution space. In addition, a prototype was developed that automates the described extraction process.
APA, Harvard, Vancouver, ISO, and other styles
16

Tempera, Fernando. "Benthic habitats of the extended Faial Island shelf and their relationship to geologic, oceanographic and infralittoral biologic features." Thesis, University of St Andrews, 2009. http://hdl.handle.net/10023/726.

Full text
Abstract:
This thesis presents a new template for multidisciplinary habitat mapping that combines the analyses of seafloor geomorphology, oceanographic proxies and modelling of associated biologic features. High resolution swath bathymetry of the Faial and western Pico shelves is used to present the first state-of-the-art geomorphologic assessment of submerged island shelves in the Azores. Solid seafloor structures are described in previously unreported detail together with associated volcanic, tectonic and erosion processes. The large sedimentary expanses identified in the area are also investigated and the large bedforms identified are discussed in view of new data on the local hydrodynamic conditions. Coarse-sediment zones of types hitherto unreported for volcanic island shelves are described using swath data and in situ imagery together with sub-bottom profiles and grainsize information. The hydrodynamic and geological processes producing these features are discussed. New oceanographic information extracted from satellite imagery is presented including yearly and seasonal sea surface temperature and chlorophyll-a concentration fields. These are used as proxies to understand the spatio-temporal variability of water temperature and primary productivity in the immediate island vicinity. The patterns observed are discussed, including onshore-offshore gradients and the prevalence of colder/more productive waters in the Faial-Pico passage and shelf areas in general. Furthermore, oceanographic proxies for swell exposure and tidal currents are derived from GIS analyses and shallow-water hydrographic modelling. Finally, environmental variables that potentially regulate the distribution of benthic organisms (seafloor nature, depth, slope, sea surface temperature, chlorophyll-a concentration, swell exposure and maximum tidal currents) are brought together and used to develop innovative statistical models of the distribution of six macroalgae taxa dominant in the infralittoral (articulated Corallinaceae, Codium elisabethae, Dictyota spp., Halopteris filicina, Padina pavonica and Zonaria tournefortii). Predictive distributions of these macroalgae are spatialized around Faial island using ordered logistic regression equations and raster fields of the explanatory variables found to be statistically significant. This new approach represents a potentially highly significant step forward in modelling benthic communities not only in the Azores but also in other oceanic island shelves where the management of benthic species and biotopes is critical to preserve ecosystem health.
APA, Harvard, Vancouver, ISO, and other styles
17

Anderson, David. "Feature tracking validation of storm tracks in model data." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

PADOAN, ANDREA. "Statistical methods for mass spectrometry data analysis and identification of prostate cancer biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50248.

Full text
Abstract:
BACKGROUND: Prostate cancer (PCa) is the most common cancer among males in Europe. Patients developing early PCa sometimes report non-specific symptoms, namely lower urinary tract symptoms (LUTS), and they usually undergo medical investigations based on Prostate Specific Antigen (PSA) and Digital Rectal Examination (DRE). Suspicious results of one or both tests are a prerequisite for prostate biopsy. However, due to the low sensitivity/specificity of PSA in predicting a positive prostate biopsy, the identification of new PCa biomarkers is a real need. MALDI-TOF/MS protein profiling could be a valuable technology for biomarker identification. However, up to now its use has been hampered by a lack of reproducibility that confounds scientific inferences and limits its broader use. AIMS: The goal of this study is to analyze urine collected after prostatic massage in patients reporting LUTS, to identify candidate biomarkers for PCa using MALDI-TOF/MS. We considered important aspects of MALDI-TOF/MS label-free proteomic profiling in order to assess feature reproducibility and to propose appropriate strategies to handle both measurement error and limit of detection (LOD) problems. The study results should aid in reducing the number of unnecessary first biopsies and assist urologists in the differential diagnosis of PCa. METHODS: In a cross-sectional study, we collected urine obtained after DRE from 205 patients who reported LUTS to consultants at the Urological Unit at the University of Padova. All patients underwent prostate biopsy for suspected PCa. Urine samples were dialyzed and analyzed by MALDI-TOF/MS in reflectron mode. For the evaluation of MALDI-TOF/MS reproducibility, we analyzed urine pooled from 10 reference samples, spiked with 12.58 pmol of a 1589.9 m/z internal standard (IS) peptide. For the inter-run variability assessment, 14 aliquots were dialyzed and analyzed by MALDI-TOF/MS. For the intra-run study, one aliquot was divided into 26 separate sub-aliquots and analyzed by MALDI-TOF/MS. To estimate the signal detection limit (sLOD), serial dilutions up to 1/256 of a urine pool were analyzed in triplicate. We evaluated the sLOD and adjusted the data appropriately to reduce its variability. We investigated six data normalization approaches: mean, median, internal standard, relative intensity, total ion current and linear rescaling normalization. Between-spectrum and overall spectral variability were evaluated by the coefficient of variation (CV). An optimized signal detection strategy was also evaluated to overcome errors of the peak detection algorithms. Measurement errors and within-subject variances were evaluated on an external dataset consisting of urine repeatedly collected from 20 reference subjects. The intraclass correlation coefficient (ICC), regression calibration (RCAL) and SIMEX analyses were used to estimate unbiased logistic regression coefficients relating MALDI-TOF/MS features to patients' biopsy outcomes. Monte Carlo simulations were used to estimate the influence of different LOD adjustment methods on ICC and RCAL. RESULTS: Initially, we evaluated the intra- and inter-run variability on data obtained from automatic peak detection. The normalization methods performed almost identically in both studies, except for IS normalization, which resulted in an increased CV. The calculated sLOD varied with the spectral m/z. After sLOD adjustment, raw and normalized data showed a reduction in CVs, while median and mean normalizations performed better, especially in the intra-assay study.
However, by optimizing peak signal detection, the overall feature variability drastically decreased. Median normalization with sLOD correction remained the preferable choice for further analyses. Evaluating the external dataset, we found that most of the MALDI-TOF/MS variability is intrinsic to the biological matrix. Using substitution of below-LOD values by LOD/2, simulation studies showed that ICC estimation was only weakly affected by the LOD when the measurement error σ is less than 0.36 and fewer than 50% of values are below the LOD. Comparing results from naïve logistic regression, RCAL and SIMEX, measurement error appeared to cause a "bias toward the null"; however, SIMEX estimation seemed to correct for a smaller amount of bias than RCAL. Overall, we found eight MALDI-TOF/MS features associated with positive biopsy results. CONCLUSION: Findings from the reproducibility study showed that the major contributing factor to MALDI-TOF/MS profiling variability is the peak detection process, so a new algorithm suited to MALDI-TOF reflectron mode is desirable for profiling studies. Nevertheless, normalization strategies help to increase the reproducibility of MALDI-TOF/MS label-free data, especially with sLOD correction. Although urine does not seem to be a promising biological fluid for proteomic biomarker discovery, RCAL and SIMEX appeared to be valuable approaches for obtaining regression coefficients adjusted for biological and instrumental errors on MALDI-TOF/MS features.
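A rough sketch of two preprocessing steps named in this abstract, substitution of below-LOD values by LOD/2 and median normalization followed by a coefficient-of-variation check, might look as follows (array shapes, the synthetic intensities and the sLOD values are assumptions, not the thesis data):

```python
# Sketch of LOD/2 substitution, median normalization and per-feature CV,
# applied to a matrix of synthetic replicate spectra. Illustrative only.
import numpy as np

def preprocess(spectra, lod):
    """spectra: (n_spectra, n_features) intensities; lod: per-feature sLOD values."""
    x = np.where(spectra < lod, lod / 2.0, spectra).astype(float)   # LOD/2 substitution
    med = np.median(x, axis=1, keepdims=True)                       # per-spectrum median
    x_norm = x / med * np.median(med)                               # median normalization
    cv = x_norm.std(axis=0, ddof=1) / x_norm.mean(axis=0)           # per-feature CV
    return x_norm, cv

rng = np.random.default_rng(0)
spectra = rng.lognormal(mean=2.0, sigma=0.3, size=(14, 100))        # 14 replicate spectra
sLOD = np.full(100, 3.0)                                            # assumed detection limits
x_norm, cv = preprocess(spectra, sLOD)
print(f"median feature CV after normalization: {np.median(cv):.3f}")
```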
APA, Harvard, Vancouver, ISO, and other styles
19

Akram, Asif, and Qammer Abbas. "COMPARISON OF VARIABILITY MODELING TECHNIQUES." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-9643.

Full text
Abstract:

Variability in complex systems offering a rich set of features is a serious challenge to their users in terms of flexibility, with many possible variants for different application contexts, and maintainability. Over a long period of time, much effort has been made to deal with these issues. One effort in this regard is developing and implementing different variability modeling techniques. This thesis discusses three modeling techniques, named configurable components, feature models and function-means trees. The main contributions to the research include:
• a comparison of the above-mentioned variability modeling techniques in a systematic way,
• an attempt to find the integration possibilities of these modeling techniques based on literature review, case studies, comparison, discussions, and brainstorming.
The comparison is based on three case studies, each of which is implemented in all three of the above-mentioned modeling techniques, and on a set of generic aspects of these techniques which are further divided into characteristics. At the end, a comprehensive discussion of the comparison is presented, and in the final sections some integration possibilities are proposed on the basis of the case studies, characteristics, commonalities and experience gained through the implementation of the case studies and the literature review.

APA, Harvard, Vancouver, ISO, and other styles
20

Oliinyk, Olesia. "Applying Hierarchical Feature Modeling in Automotive Industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3820.

Full text
Abstract:
Context. Variability management (VM) in the automotive domain is a promising approach to reducing complexity. Feature modeling, as the starting point of VM, deals with the analysis and representation of available features in terms of commonalities and variabilities. The work is done in the context of an automotive company, Adam Opel AG. Objectives. This work studies automotive-specific problems with respect to feature modeling, investigates what decomposition and structuring approaches exist in the literature, and determines which of them satisfies the industrial requirements. An approach to feature modeling is synthesized, evaluated and documented. Methods. In this work a case study with a survey and a literature review is performed. The survey uses semi-structured interviews and workshops as data collection methods. The systematic review includes articles from Compendex, Inspec, IEEE Xplore, ACM Digital Library, Science Direct and Engineering Village. Approach selection is based on mapping requirements against the discovered approaches and on discussions with industry practitioners at regular meetings. Evaluation is proposed according to the Goal Question Metric paradigm. Results. The approach that can be followed in the case organization is described and evaluated. The reasoning behind the construction and selection of the feature modeling approach can be generalized to other organizations as well. Conclusions. We conclude that there is no perfect approach that would solve all the problems connected to automotive software. However, structuring approaches can be complementary and, when combined, give good results. Tool support that integrates into the whole development cycle is important, as the amount of information cannot be processed using simple feature modeling tools. There is a need for further investigation in both directions, tool support and structuring approaches. The tactics proposed here should be introduced in organizations and formally evaluated.
APA, Harvard, Vancouver, ISO, and other styles
21

Lösch, Felix. "Optimization of variability in software product lines: a semi-automatic method for visualization, analysis, and restructuring of variability in software product lines." Berlin: Logos-Verl., 2008. http://d-nb.info/992075904/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kreivys, Deividas. "Programų sistemų variantiškumo modelių, aprašytų požymių diagramomis, tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100825_154910-43049.

Full text
Abstract:
Feature modeling is a domain modeling technique used in software product line development and generative software engineering that addresses the development of reusable software. FODA (Feature Oriented Domain Analysis) describes features as prominent, distinctive and user-visible characteristics of a system, whereas functions, objects and aspects are used to describe internal system details; feature modeling therefore focuses on commonality and variability rather than on a detailed system description, and its result is a set of feature diagrams, a graphical language for modeling the variability of a system or component at a high level of abstraction, typically in early design stages such as requirements specification. A feature model defines the common and variable elements of a family of software systems or products of a product line, i.e. the domain, and can be used to derive members of the system family built from a common set of reusable assets. The concept of a product line, if applied systematically, allows for a dramatic increase in software design quality and productivity, provides a capability for mass customization and leads to "industrial" software design. In this work, the author studies product line variability models described by feature diagrams with respect to specification, syntax validation, complexity evaluation and configuration. The feature modeling tool developed by the author (co-author: P. Žaliaduonis) allows the user to specify features and to design, validate, evaluate and document the feature variability models of a software product line.
APA, Harvard, Vancouver, ISO, and other styles
23

Doo, Seung Ho. "Analysis, Modeling & Exploitation of Variability in Radar Images." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461256996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Qiu, Bite, and Xu Han. "Modeling support for Application Families." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-979.

Full text
Abstract:

This paper is based on the XAP system (eXtended Application Provisioning) and serves the modeling of application families. The importance of modeling application families is increasing rapidly, so a mechanism to express the structure and properties of concepts, features and implementations within an application family has become necessary and important. The feature tree is a well-accepted means of describing a product line; we use and improve it to suit our requirements in the following way.

In the degree project, we create a tool to model an application family with reusability, commonality and variability. The hierarchy, feature properties and dependencies are graphically represented.

APA, Harvard, Vancouver, ISO, and other styles
25

Aktaruzzaman, M. "FEATURE EXTRACTION AND CLASSIFICATION THROUGH ENTROPY MEASURES." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/277947.

Full text
Abstract:
Entropy is a universal concept that represents the uncertainty of a series of random events. The notion of "entropy" is understood differently in different disciplines: in physics it represents a thermodynamical state variable, while in statistics it measures the degree of disorder. In computer science, on the other hand, it is used as a powerful tool for measuring the regularity (or complexity) of signals or time series. In this work, we have studied entropy-based features in the context of signal processing. The purpose of feature extraction is to select the relevant features from an entity; the type of features depends on the signal characteristics and the classification purpose. Many real-world signals are nonlinear and nonstationary, and they contain information that cannot be described by time- and frequency-domain parameters but might be described well by entropy. In practice, however, the estimation of entropy suffers from some limitations and is highly dependent on series length. To reduce this dependence, we have proposed parametric estimation of various entropy indices and have derived analytical expressions where possible. We have then studied the feasibility of parametric estimation of these entropy measures on both synthetic and real signals. The entropy-based features have finally been employed for classification problems related to clinical applications, activity recognition, and handwritten character recognition. Thus, from a methodological point of view, our study deals with feature extraction, machine learning, and classification methods. Different versions of entropy measures are found in the literature for signal analysis. Among them, approximate entropy (ApEn) and sample entropy (SampEn), followed by corrected conditional entropy (CcEn), are mostly used for physiological signal analysis. Recently, entropy features have also been used for image segmentation. A related measure is Lempel-Ziv complexity (LZC), which measures the complexity of a time series, signal, or sequence; the estimation of LZC also relies on the series length. In particular, in this study analytical expressions have been derived for the ApEn, SampEn, and CcEn of auto-regressive (AR) models. It should be mentioned that AR models have been employed for maximum entropy spectral estimation for many years. The feasibility of parametric estimates of these entropy measures has been studied on both synthetic series and real data. In the feasibility study, the agreement between numerical estimates of entropy and estimates obtained through a number of realizations of the AR model using Monte Carlo simulations has been observed. This agreement or disagreement provides information about nonlinearity, nonstationarity, or non-Gaussianity present in the series. In some classification problems, the probability of agreement or disagreement has proved to be one of the most relevant features. After the feasibility study of the parametric entropy estimates, the entropy and related measures have been applied to heart rate and arterial blood pressure variability analysis. The use of entropy and related features has proved particularly relevant in developing sleep classification, handwritten character recognition, and physical activity recognition systems. The novel methods for feature extraction researched in this thesis give good classification or recognition accuracy, in many cases superior to the features reported in the literature of the application domains concerned, even with lower computational costs.
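For readers who want the flavour of one of the measures discussed, a naive (non-parametric) sample entropy estimate can be sketched as below; the choices of m, r and the test series are illustrative and are not the settings used in the thesis:

```python
# Naive SampEn sketch: count template matches of length m and m+1 within a
# Chebyshev tolerance r (as a fraction of the standard deviation) and take
# the negative log of their ratio. O(N^2); illustrative parameter choices.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r *= x.std(ddof=0)                       # tolerance as a fraction of SD
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)       # pairwise matches, no self-matches
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
noisy = rng.standard_normal(1000)
print(f"SampEn(sine)  = {sample_entropy(regular):.3f}")   # low: regular signal
print(f"SampEn(noise) = {sample_entropy(noisy):.3f}")     # higher: irregular signal
```

The strong dependence of such direct estimates on the series length is precisely what motivates the parametric, AR-model-based expressions derived in the thesis.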
APA, Harvard, Vancouver, ISO, and other styles
26

Galindo, Duarte José Ángel. "Evolution, testing and configuration of variability intensive systems." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S008/document.

Full text
Abstract:
An important characteristic of software is its ability to be adapted and configured for different scenarios. Recently, software variability has been studied as a first-class concept in different domains ranging from software product lines to pervasive systems. Variability is the ability of a software product to vary depending on different circumstances, and variability intensive systems are software products in which variability management is a predominant engineering activity. The various parts of these systems are commonly modeled using different forms of variability model, a widely used modeling formalism. Feature models were introduced by Kang et al. in 1990 and are a compact representation of a set of configurations of a variability intensive system. The large number of configurations that a feature model can encode makes the manual analysis of feature models an error-prone and costly task, so computer-aided mechanisms appeared as a solution to extract useful information from feature models. This process of extracting information from feature models is known as the automated analysis of feature models and has been one of the main areas of research in recent years, with more than thirty analysis operations proposed in this period. In this dissertation we identified different open questions in the automated analysis field and considered several research directions. Driven by real-world scenarios such as the smart phone and video surveillance domains, we contributed by applying, adapting or extending automated analysis operations for the evolution, testing and configuration of variability intensive systems.
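To illustrate what an automated analysis operation over a feature model does in the simplest possible terms, the toy sketch below counts the valid configurations of a small, invented model by brute force (real tooling typically encodes the model for SAT or CSP solvers rather than enumerating; the phone-like feature names are assumptions):

```python
# Toy "automated analysis" operation: count the valid configurations of a
# tiny feature model with one mandatory child, optional children and one
# cross-tree constraint. Brute-force enumeration, for illustration only.
from itertools import product

FEATURES = ["Phone", "Calls", "GPS", "Screen", "HD"]

def is_valid(cfg):
    f = dict(zip(FEATURES, cfg))
    return (
        f["Phone"]                                   # root is always selected
        and f["Calls"] == f["Phone"]                 # Calls is mandatory
        and (not f["GPS"] or f["Phone"])             # GPS is an optional child of Phone
        and (not f["HD"] or f["Screen"])             # HD is a child of Screen
        and (not f["GPS"] or f["HD"])                # cross-tree: GPS requires HD screen
    )

valid = [cfg for cfg in product([False, True], repeat=len(FEATURES)) if is_valid(cfg)]
print(f"{len(valid)} valid configurations out of {2 ** len(FEATURES)} candidates")
```

Counting configurations is only one of the thirty-plus analysis operations mentioned above; others, such as checking whether a partial configuration is still valid, follow the same encode-and-solve pattern.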
APA, Harvard, Vancouver, ISO, and other styles
27

Žaliaduonis, Paulius. "Požymių diagramų ir uml klasių diagramų integravimo tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100826_105259-30962.

Full text
Abstract:
Developing software systems for many customers whose requirements differ is a complex process and requires describing the possible variants of the system. Feature modeling is an important approach for dealing with system variability at a higher abstraction level, and variability models define the variability of a software product line. Unfortunately, feature modeling is not integrated into a modeling framework like the Unified Modeling Language (UML); to use it in conjunction with UML, it is important to integrate feature modeling into UML. This thesis describes how feature variability models can be linked with existing UML models and how this is done in the feature modeling tool FD2, which implements the mapping of feature diagrams onto UML class diagrams. The feature modeling tool is described and a complete example is provided. Chapter 2 discusses the integration of the feature model with the UML model. Chapter 3 describes the implementation of the FD2 tool. Chapter 4 discusses the advantages and disadvantages of the FD2 tool. Chapter 5 provides examples and discusses their results. In conclusion, this thesis proposes the integration of feature modeling with UML modeling, discusses the program developed during the master's project, provides two examples and discusses their results, and points out some issues requiring further work.
APA, Harvard, Vancouver, ISO, and other styles
28

Reinhartz-Berger, Iris, Kathrin Figl, and Øystein Haugen. "Investigating styles in variability modeling: Hierarchical vs. constrained styles." Elsevier, 2017. http://dx.doi.org/10.1016/j.infsof.2017.01.012.

Full text
Abstract:
Context: A common way to represent product lines is with variability modeling. Yet, there are different ways to extract and organize relevant characteristics of variability. Comprehensibility of these models and the ease of creating models are important for the efficiency of any variability management approach. Objective: The goal of this paper is to investigate the comprehensibility of two common styles to organize variability into models - hierarchical and constrained - where the dependencies between choices are specified either through the hierarchy of the model or as cross-cutting constraints, respectively. Method: We conducted a controlled experiment with a sample of 90 participants who were students with prior training in modeling. Each participant was provided with two variability models specified in Common Variability Language (CVL) and was asked to answer questions requiring interpretation of provided models. The models included 9 to 20 nodes and 8 to 19 edges and used the main variability elements. After answering the questions, the participants were asked to create a model based on a textual description. Results: The results indicate that the hierarchical modeling style was easier to comprehend from a subjective point of view, but there was also a significant interaction effect with the degree of dependency in the models, that influenced objective comprehension. With respect to model creation, we found that the use of a constrained modeling style resulted in higher correctness of variability models. Conclusions: Prior exposure to modeling style and the degree of dependency among elements in the model determine what modeling style a participant chose when creating the model from natural language descriptions. Participants tended to choose a hierarchical style for modeling situations with high dependency and a constrained style for situations with low dependency. Furthermore, the degree of dependency also influences the comprehension of the variability model.
APA, Harvard, Vancouver, ISO, and other styles
29

Gollasch, David. "Conceptual Variability Management in Software Families with Multiple Contributors." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-202775.

Full text
Abstract:
To offer customisable software, two main concepts exist to date: software product lines, which allow product customisation based on a fixed set of variability, and software ecosystems, which allow open product customisation based on a common platform. Offering a software family that enables external developers to supply software artefacts means offering a common platform as part of an ecosystem and sacrificing variability control. Keeping full variability control means offering a customisable product as a product line, but without support for external contributors. This thesis proposes a third concept of variable software: partly open software families. They combine a customisable platform similar to product lines with controlled openness similar to ecosystems. As a major contribution of this thesis, a variability modelling concept is proposed as part of a variability management approach for these partly open software families. This modelling concept is based on feature models and extends them to support open variability modelling by means of interfaces, structural interface specifications and the inclusion of semantic information. Additionally, the introduction of rights management allows multiple contributors to work with the model. This is required to enable external developers to use the model for concrete extension development. The feasibility of the proposed model is evaluated using a prototypically developed modelling tool and by means of a case study based on a car infotainment system.
APA, Harvard, Vancouver, ISO, and other styles
30

Mounirou, Lawani A. "Etude du ruissellement et de l’érosion à différentes échelles spatiales sur le bassin versant de Tougou en zone sahélienne du Burkina Faso : quantification et transposition des données." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20039/document.

Full text
Abstract:
La variabilité spatio-temporelle du ruissellement et de l'érosion hydrique n'est pas un fait nouveau. Leurs caractéristiques s'estiment généralement avec une marge raisonnable sur des parcelles d'un à quelques dizaines de m². Avec l'accroissement de la surface, l'hétérogénéité du milieu croît ce qui induit un effet d'échelle. Le passage de la parcelle au bassin versant n'est pas totalement maîtrisé compte tenu de la complexité et de la variabilité des facteurs mis en jeu. L'objectif de cette thèse est de comprendre les processus de ruissellement et de l'érosion dans différents environnements et à différentes échelles spatiales, d'identifier les sources de variation, puis de développer une méthodologie de transposition des résultats de l'échelle parcellaire à l'exutoire du bassin versant. À cet effet, un réseau de dix-huit parcelles expérimentales de différentes tailles, de deux unités hydrologiques ont permis de quantifier le ruissellement et les pertes en terre sur les principaux états de surface du bassin versant de Tougou.Les résultats obtenus sur les micro-parcelles de 1m², les parcelles de 50 et 150m², les unités hydrologiques de 6 et 34 ha et le bassin versant de 37km², montrent que, tant sur sols cultivés que sur sols dénudés, la lame ruisselée ainsi que les pertes en terres diminuent lorsque la superficie augmente, pour une même pluie et dans des conditions comparables d'humidité préalable des sols. Ce phénomène d'effet d'échelle de la superficie sur l'écoulement et l'érosion est connu des hydrologues qui se heurtent toujours à l'écueil de l'extrapolation des résultats obtenus sur petites superficies à des superficies plus grandes. Nos résultats montrent que l'effet d'échelle observé sur le ruissellement et l'érosion est dû principalement à l'hétérogénéité spatiale des sols (propriétés hydrodynamiques, microrelief) et à sa variabilité (état des variables) et que la dynamique temporelle de l'intensité de la pluie ne fait que l'amplifier.Les résultats obtenus lors des essais de transposition permettent de soutenir avec raison qu'une meilleure extrapolation des données de l'échelle parcellaire à l'échelle du bassin viendra de la prise en compte des questions de la connectivité hydrologique.En définitive, cette étude met en avant l'intérêt d'effectuer des mesures de ruissellement et d'érosion sur des unités homogènes en termes d'occupation du sol qui peuvent représenter une mosaïque hétérogène de surfaces homogènes. La localisation sur le bassin versant et le taux de connectivité de ces unités hydrologiques à l'intérieur desquelles les processus dominants du ruissellement et d'érosion se manifestent peuvent permettre d'approcher la résolution du problème de transfert d'échelle
The spatio-temporal variability of runoff and water erosion is not a new fact. Their characteristics are generally estimated, with a reasonable margin, on plots of one to a few tens of square meters. As the surface area increases, the heterogeneity of the environment increases, which induces a scale effect. The transfer from the plot to the catchment is not fully mastered, given the complexity and variability of the factors that come into play. The objective of this thesis is to understand runoff and erosion processes in different environments and at different spatial scales, to identify the sources of variation, and then to develop a methodology for transposing results from the plot scale to the catchment outlet. To this end, a network of eighteen experimental plots of different sizes and two hydrological units were used to quantify runoff and soil loss on the main surface conditions of the Tougou catchment. The results obtained on the 1 m² micro-plots, the 50 and 150 m² plots, the 6 and 34 ha hydrological units and the 37 km² catchment show that, on both cultivated and bare soils, runoff depth and soil losses decrease as the area increases, for the same rainfall and under comparable antecedent soil moisture conditions. This scale effect of area on runoff and erosion is well known to hydrologists, who still face the difficulty of extrapolating results obtained on small areas to larger ones. Our results show that the scale effect observed on runoff and erosion is mainly due to the spatial heterogeneity of the soils (hydrodynamic properties, microrelief) and their variability (state of the variables), and that the temporal dynamics of rainfall intensity merely amplifies it. The results obtained in the transposition tests support the view that a better extrapolation of data from the plot scale to the catchment scale will come from taking hydrological connectivity into account. Ultimately, this study highlights the value of measuring runoff and erosion on units that are homogeneous in terms of land use and that may together form a heterogeneous mosaic of homogeneous surfaces. The location within the catchment and the degree of connectivity of the hydrological units within which the dominant runoff and erosion processes occur can help approach a solution to the problem of scale transfer.
APA, Harvard, Vancouver, ISO, and other styles
31

Silalahi, Parsaoran. "Evaluation expérimentale des effets de la sélection sur des caractères de reproduction et de robustesse dans une population de porcs Large White." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLA009/document.

Full text
Abstract:
Des progrès importants ont été obtenus dans les principales populations porcines pour les caractères inclus dans l'objectif de reproduction, à savoir la croissance, l'efficacité alimentaire, la composition de la carcasse et, dans les lignées maternelles, la prolificité des truies. Les animaux sélectionnés pour une forte efficacité productive peuvent être particulièrement sensibles à des problèmes comportementaux, physiologiques ou immunologiques, c'est-à-dire être moins robustes. Ces effets défavorables de la sélection sont souvent difficiles à mettre en évidence, car les caractères correspondants ne sont pas systématiquement enregistrés dans les programmes de sélection. L'utilisation d’un stock de sperme congelé est une méthode élégante pour estimer les évolutions génétiques pour un grand nombre de caractères (habituellement non enregistrés). Deux groupes expérimentaux (L77 et L98) ont été produits par l'insémination de truies LW, nées en 1997-1998, soit avec du sperme congelé stocké à partir des verrats LW de 1977, soit avec du sperme frais de verrats nés en 1998. Cette étude a montré que deux décennies de sélection ont permis des progrès importants pour les principaux caractères d'intérêt, mais ont également affecté de façon défavorable des caractères tels que la longévité, le risque de mortalité, la variabilité de caractères, qui suggèrent un effet défavorable de la sélection sur la robustesse des porcs. Nos résultats soulignent la nécessité d'intégrer des caractères liés à la robustesse dans l'objectif de sélection des populations porcines. Il est donc nécessaire de poursuivre les recherches afin de mieux caractériser les différentes composantes de la robustesse et leur impact sur l’efficience, le bien-être et la santé des porcs afin de pouvoir définir les objectifs de sélection les plus pertinents pour l’avenir
Large improvements have been obtained in major pig populations for traits included in the breeding goal, i.e. growth, feed efficiency, carcass composition and, in maternal lines, sow prolificacy. Animals selected for high production efficiency may in particular be more sensitive to behavioral, physiological, or immunological problems, i.e., be less robust. These adverse effects of selection are often difficult to reveal, as the corresponding traits are not routinely recorded in breeding programs. The use of stored frozen semen has been shown to be an elegant method to estimate genetic trends for a large number of (usually not recorded) traits. Two experimental groups (L77 and L98) were produced by inseminating French Large White (LW) sows born in 1997-1998 with either stored frozen semen from LW boars born in 1977 or fresh semen from LW boars born in 1998. This study has shown that two decades of selection have resulted in large gains for the major traits of interest, but have also adversely affected traits such as longevity, risk of mortality and trait variability, which tend to indicate an unfavorable effect of selection on pig robustness. Our results stress the necessity of integrating robustness-related traits in the breeding goal of pig populations. Thus, further research is needed to better characterize the different components of robustness and their impact on pig efficiency, welfare and health, in order to define the most relevant breeding objectives for the future.
APA, Harvard, Vancouver, ISO, and other styles
32

Seidl, Christoph. "Integrated Management of Variability in Space and Time in Software Families." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-218036.

Full text
Abstract:
Software Product Lines (SPLs) and Software Ecosystems (SECOs) are approaches to capturing families of closely related software systems in terms of common and variable functionality (variability in space). SPLs and especially SECOs are subject to software evolution to adapt to new or changed requirements resulting in different versions of the software family and its variable assets (variability in time). Both dimensions may be interconnected (e.g., through version incompatibilities) and, thus, have to be handled simultaneously as not all customers upgrade their respective products immediately or completely. However, there currently is no integrated approach allowing variant derivation of features in different version combinations. In this thesis, remedy is provided in the form of an integrated approach making contributions in three areas: (1) As variability model, Hyper-Feature Models (HFMs) and a version-aware constraint language are introduced to conceptually capture variability in time as features and feature versions. (2) As variability realization mechanism, delta modeling is extended for variability in time, and a language creation infrastructure is provided to devise suitable delta languages. (3) For the variant derivation procedure, an automatic version selection mechanism is presented as well as a procedure to derive large parts of the application order for delta modules from the structure of the HFM. The presented integrated approach enables derivation of concrete software systems from an SPL or a SECO where both features and feature versions may be configured.
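A minimal sketch of the underlying idea, assuming invented feature names and a simplified version notation (this is not the thesis's Hyper-Feature Model notation or delta language): features carry versions, and a version-aware constraint restricts which combinations may be derived.

    # Minimal sketch (not the HFM notation from the thesis): a configuration maps
    # features to versions, and a version-aware constraint checks compatibility.

    configuration = {"Persistence": "2.1", "Encryption": "1.0"}

    def requires_at_least(config, feature, minimum):
        """Version-aware constraint: `feature`, if selected, must be >= `minimum`."""
        if feature not in config:
            return True
        have = tuple(int(p) for p in config[feature].split("."))
        need = tuple(int(p) for p in minimum.split("."))
        return have >= need

    # e.g. Encryption 1.0 is too old if another feature requires Encryption >= 1.2
    print(requires_at_least(configuration, "Encryption", "1.2"))  # False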
APA, Harvard, Vancouver, ISO, and other styles
33

Al-Mter, Yusur. "Automatic Prediction of Human Age based on Heart Rate Variability Analysis using Feature-Based Methods." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166139.

Full text
Abstract:
Heart rate variability (HRV) is the time variation between adjacent heartbeats. This variation is regulated by the autonomic nervous system (ANS) and its two branches, the sympathetic and parasympathetic nervous systems. HRV is considered an essential clinical tool to estimate the imbalance between the two branches, and hence an indicator of age and cardiac-related events. This thesis focuses on ECG recordings during nocturnal rest to estimate the influence of HRV in predicting the age decade of healthy individuals. Time and frequency domains, as well as non-linear methods, are explored to extract the HRV features. Three feature-based methods (support vector machine (SVM), random forest, and extreme gradient boosting (XGBoost)) were employed, and the overall test accuracy achieved in capturing the actual class was relatively low (lower than 30%). The SVM classifier had the lowest performance, while random forest and XGBoost performed slightly better. Although the difference is negligible, the random forest had the highest test accuracy, approximately 29%, using a subset of ten optimal HRV features. Furthermore, to validate the findings, the original dataset was shuffled and used as a test set, and the performance was compared with the outputs of other related research.
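A minimal sketch of this kind of feature-based pipeline, using synthetic RR-interval data rather than the thesis dataset and only two standard time-domain features (SDNN and RMSSD) with a random forest:

    # Rough sketch of the kind of pipeline described above (synthetic data, not
    # the thesis dataset): time-domain HRV features, then a random forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def hrv_features(rr_ms):
        """SDNN and RMSSD, two standard time-domain HRV features (RR in ms)."""
        rr = np.asarray(rr_ms, dtype=float)
        sdnn = rr.std(ddof=1)
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
        return [sdnn, rmssd]

    rng = np.random.default_rng(0)
    # Each sample: a short synthetic RR series; label 0/1 stands for two age groups.
    X = [hrv_features(rng.normal(800, 40 + 20 * label, size=300)) for label in (0, 1) * 50]
    y = [0, 1] * 50
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.score(X, y))  # training accuracy on the synthetic data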
APA, Harvard, Vancouver, ISO, and other styles
34

Püschel, Georg, Christoph Seidl, Thomas Schlegel, and Uwe Aßmann. "Using Variability Management in Mobile Application Test Modeling." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-143917.

Full text
Abstract:
Mobile applications are developed to run on fast-evolving platforms, such as Android or iOS. Respective mobile devices are heterogeneous concerning hardware (e.g., sensors, displays, communication interfaces) and software, especially operating system functions. Software vendors cope with platform evolution and various hardware configurations by abstracting from these variable assets. However, they cannot be sure about their assumptions on the inner conformance of all device parts and that the application runs reliably on each of them—in consequence, comprehensive testing is required. Thereby, in testing, variability becomes tedious due to the large number of test cases required to validate behavior on all possible device configurations. In this paper, we provide remedy to this problem by combining model-based testing with variability concepts from Software Product Line engineering. For this purpose, we use feature-based test modeling to generate test cases from variable operational models for individual application configurations and versions. Furthermore, we illustrate our concepts using the commercial mobile application “runtastic” as example application.
APA, Harvard, Vancouver, ISO, and other styles
35

Ljungberg, Malin. "Design of High Performance Computing Software for Genericity and Variability." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Maßen, Thomas von der [Verfasser]. "Feature-basierte Modellierung und Analyse von Variabilität in Produktlinienanforderungen / Thomas von der Maßen." Aachen : Shaker, 2007. http://d-nb.info/1166508129/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Karatas, Ahmet Serkan. "Analysis Of Extended Feature Models With Constraint Programming." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612082/index.pdf.

Full text
Abstract:
In this dissertation we lay the groundwork of automated analysis of extended feature models with constraint programming. Among different proposals, feature modeling has proven to be very effective for modeling and managing variability in Software Product Lines. However, industrial experiences showed that feature models often grow too large with hundreds of features and complex cross-tree relationships, which necessitates automated analysis support. To address this issue we present a mapping from extended feature models, which may include complex feature-feature, feature-attribute and attribute-attribute cross-tree relationships as well as global constraints, to constraint logic programming over finite domains. Then, we discuss the effects of including complex feature attribute relationships on various analysis operations defined on the feature models. As new types of variability emerge due to the inclusion of feature attributes in cross-tree relationships, we discuss the necessity of reformulation of some of the analysis operations and suggest a revised understanding for some other. We also propose new analysis operations arising due to the nature of the new variability introduced. Then we propose a transformation from extended feature models to basic/cardinality-based feature models that may be applied under certain circumstances and enables using SAT or BDD solvers in automated analysis of extended feature models. Finally, we discuss the role of the context information in feature modeling, and propose to use context information in staged configuration of feature-models.
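A toy illustration of the general idea of encoding an extended feature model as a finite-domain problem (brute-force enumeration over invented features and one attribute; not the dissertation's actual mapping to constraint logic programming):

    # Tiny illustration of the general idea: two boolean features, one attribute
    # ("memory" of the Cache feature), and three kinds of constraints, solved by
    # brute-force enumeration of the finite domains.
    from itertools import product

    memory_domain = [128, 256, 512]

    def valid(cache, compression, memory):
        root_ok = cache or compression          # at least one child of the root
        attr_ok = (not cache) or memory >= 256  # feature-attribute constraint
        excl_ok = not (cache and compression)   # cross-tree exclusion
        return root_ok and attr_ok and excl_ok

    solutions = [(c, x, m)
                 for c, x, m in product([False, True], [False, True], memory_domain)
                 if valid(c, x, m)]
    print(solutions)  # all valid configurations of the toy model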
APA, Harvard, Vancouver, ISO, and other styles
38

Bronge, Erica. "Visualization of Feature Dependency Structures : A case study at Scania CV AB." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205625.

Full text
Abstract:
As many automotive companies have moved towards a higher degree of variability in the product lines they offer their customers, a need has emerged for so-called feature dependency structures that are used to describe product feature dependencies and verify order validity. In this study, the possibility of using a node-link graph representation to visualize such a feature dependency structure, and the associated affordances and limitations, was investigated through a case study at the Swedish automotive company Scania CV AB. Qualitative data gathering methods such as contextual inquiry and semi-structured interviews with employees were used to identify key tasks and issues involved in maintenance and analysis of Scania’s in-house feature dependency structure. These findings were used together with user-supported iterative prototyping to create a few visualization prototypes intended to provide support for some of the identified tasks. User evaluation of the prototypes showed that a node-link graph representation was a viable solution to support users with structure maintenance, exhibiting the following affordances: structure exploration, overview and context. Furthermore, the major limitations of the tested representation were found to be lookup of specific information and access to detail. The findings of this study are expected to be of use for other automotive companies that employ a high degree of feature variability in their product lines through the use of complex feature dependency structures.
I samband med att flera fordonstillverkare gått över till att erbjuda en allt större grad av varians i de produktlinjer man erbjuder sina kunder så har ett nödvändigt behov uppstått av att ha regelverk som beskriver de beroenden som finns mellan produktegenskaper och verifierar att inkomna ordrar är giltiga. I den här studien så har möjligheten att visualisera den typen av regelverk med en så kallad ”node-link”-graf samt de styrkor och svagheter som följer med en sådan representation undersökts genom en fallstudie på den svenska fordonstillverkaren Scania CV AB. Med hjälp av kvalitativa datainsamlingsmetoder som så kallad ”Contextual inquiry” och semistrukturerade intervjuer med anställda specialiserade på underhåll av Scanias egna egenskapsregelverk så kunde nyckeluppgifter och svårigheter relaterade till regelverket identifieras. Dessa upptäckter användes sedan tillsammans med användarcentrerat iterativt prototypande för att skapa ett antal visualiseringsprototyper avsedda att underlätta utförandet av några av de tidigare identifierade uppgifterna. Användarutvärdering av prototyperna visade att en visualisering baserad på en ”node-link”-representation var en gångbar lösning som kunde underlätta för användarna. Dess styrkor var att stödja utforskande av strukturen med bra överblick av innehållet och bibehållet sammanhang. Representation var dock svag när det gällde att stödja användaren i att leta upp specifik information och att tillhandahålla mer ingående detaljer. Dessa resultat förväntas vara användbara för andra fordonstillverkare som bygger sina produktlinjer på en hög grad av varians med hjälp av komplexa beroenderegelverk för produktegenskaper.
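The node-link representation discussed above can be sketched with a small directed graph (invented feature codes, not Scania's actual rule set), for example using the networkx library:

    # Simple sketch of a node-link feature dependency structure with networkx.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("AdaptiveCruise", "Radar"),        # edge = "requires"
        ("AdaptiveCruise", "BrakeAssist"),
        ("BrakeAssist", "ABS"),
    ])

    # Transitive closure of requirements for one feature: everything it depends on.
    print(sorted(nx.descendants(g, "AdaptiveCruise")))  # ['ABS', 'BrakeAssist', 'Radar']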
APA, Harvard, Vancouver, ISO, and other styles
39

Gómez, Llana Abel. "MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/15075.

Full text
Abstract:
Software Product Line Engineering (SPLE) is a software development technique that seeks to apply the principles of industrial manufacturing to the production of software applications: that is, a Software Product Line (SPL) is used to produce a family of products with common characteristics, whose members may nevertheless have differentiating characteristics. Identifying these common and differentiating characteristics a priori makes it possible to maximise reuse, reducing development time and cost. Describing these relationships with sufficient expressiveness therefore becomes a fundamental aspect of success. Model Driven Engineering (MDE) has emerged in recent years as a paradigm that makes it possible to deal effectively with software artifacts at a high level of abstraction. Thanks to this, SPLs can benefit to a great extent from the standards and tools that have emerged within the MDE community. However, a good integration between SPLE and MDE has not yet been achieved, and as a consequence the mechanisms for variability management are not sufficiently expressive. As a result, it is not possible to integrate variability efficiently into complex software development processes in which the different views of a system, model transformations and code generation play a fundamental role. This thesis presents MULTIPLE, a framework and a tool that aim to integrate the variability management mechanisms of SPLs precisely and efficiently into MDE processes. MULTIPLE provides domain-specific languages for specifying different views of software systems. Among these, special emphasis is placed on the variability view, since it is decisive for the specification of SPLs.
Gómez Llana, A. (2012). MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15075
APA, Harvard, Vancouver, ISO, and other styles
40

Dayibas, Orcun. "Feature Oriented Domain Specific Language For Dependency Injection In Dynamic Software Product Lines." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611071/index.pdf.

Full text
Abstract:
Although Software Product Line Engineering (SPLE) defines many different processes at different abstraction levels, the common basis of these processes is to analyze the commonality and variability of the product family. In this thesis, a new approach to configuring components, as the building blocks of the architecture, according to requirements is proposed. The main objective of this approach is to support the domain design and application design processes in the SPL context. Configuring the products is made a semi-automatic operation by defining a Domain Specific Language (DSL) built on top of the notions of domain and feature-component binding models. To accomplish this goal, the dependencies of the components are extracted from the software using the dependency injection method, and these dependencies are made definable in the CASE tools developed in this work.
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Yang [Verfasser], Gunter [Gutachter] Saake, and Andreas [Gutachter] Nürnberger. "Automated extraction of feature and variability information from natural language requirement specifications / Yang Li ; Gutachter: Gunter Saake, Andreas Nürnberger." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1226932002/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Eyal, Salman Hamzeh. "Recovering traceability links between artifacts of software variants in the context of software product line engineering." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20008/document.

Full text
Abstract:
L'ingénierie des lignes de produits logiciels (Software Product Line Engineering-SPLE en Anglais) est une discipline qui met en œuvre des principes de réutilisation pour le développement efficace de familles de produits. Une famille de produits logiciels est un ensemble de logiciels similaires, ayant des fonctionnalités communes, mais néanmoins différents selon divers aspects; nous parlerons des différentes variantes d'un logiciel. L'utilisation d'une ligne de produit permet de développer les nouveaux produits d'une famille plus vite et d'augmenter la qualité de chacun d'eux. Ces avantages sont liés au fait que les éléments communs aux membres d'une même famille (besoin, architecture, code source, etc.) sont réutilisés et adaptés. Créer de toutes pièces une ligne de produits est une tâche difficile, coûteuse et longue. L'idée sous-jacente à ce travail est qu'une ligne de produits peut être créée par la réingénierie de logiciels similaires (de la même famille) existants, qui ont été préalablement développés de manière ad-hoc. Dans ce contexte, la contribution de cette thèse est triple. La première contribution est la proposition d'une approche pour l'identification des liens de traçabilité entre les caractéristiques (features) d'une application et les parties du code source qui les implémentent, et ce pour toutes les variantes d'une application. Ces liens sont utiles pour générer (dériver) de nouveaux logiciels par la sélection de leurs caractéristiques. L'approche proposée est principalement basée sur l'amélioration de la technique conventionnelle de recherche d'information (Information Retrieval –IR en Anglais) et des approches les plus récentes dans ce domaine. Cette amélioration est liée à deux facteurs. Le premier facteur est l'exploitation des informations liées aux éléments communs ou variables des caractéristiques et du code source des produits logiciels analysés. Le deuxième facteur concerne l'exploitation des similarités et des dépendances entre les éléments du code source. Les résultats que nous avons obtenus par expérimentation confirment l'efficacité de notre approche. Dans la deuxième contribution, nous appliquons nos résultats précédents (contribution no 1) à l'analyse d'impact (Change Impact Analysis –CIA en Anglais). Nous proposons un algorithme permettant à un gestionnaire de ligne de produit ou de produit de détecter quelles les caractéristiques (choix de configuration du logiciel) impactées par une modification du code. Cet algorithme améliore les résultats les plus récents dans ce domaine en permettant de mesurer à quel degré la réalisation d'une caractéristique est impactée par une modification. Dans la troisième contribution nous exploitons à nouveau ces liens de traçabilité (contribution No 1) pour proposer une approche permettant de satisfaire deux objectifs. Le premier concerne l'extraction de l'architecture de la ligne de produits. Nous proposons un ensemble d'algorithmes pour identifier les points de variabilité architecturale à travers l'identification des points de variabilité au niveau des caractéristiques. Le deuxième objectif concerne l'identification des liens de traçabilité entre les caractéristiques et les éléments de l'architecture de la ligne de produits. Les résultats de l'expérimentation montre que l'efficacité de notre approche dépend de l'ensemble des configurations de caractéristiques utilisé (disponibles via les variantes de produits analysés)
Software Product Line Engineering (SPLE) is a software engineering discipline providing methods to promote systematic software reuse for developing quality products with short time-to-market in a cost-efficient way. SPLE leverages what Software Product Line (SPL) members have in common and manages what varies among them. The idea behind SPLE is to build core assets consisting of all reusable software artifacts (such as requirements, architecture, components, etc.) that can be leveraged to develop the SPL's products in a prescribed way. Creating these core assets is driven by the features provided by the SPL's products. Unfortunately, building SPL core assets from scratch is a costly task and requires a long time, which increases time-to-market and up-front investment. To reduce these costs, existing similar product variants developed by ad-hoc reuse should be re-engineered to build SPLs. In this context, our thesis proposes three contributions. Firstly, we proposed an approach to recover traceability links between features and their implementing source code in a collection of product variants. This helps to understand the source code of product variants and facilitates new product derivation from the SPL's core assets. The proposed approach is based on Information Retrieval (IR) for recovering such traceability links. In our experimental evaluation, we showed that our approach outperforms the conventional application of IR as well as the most recent and relevant work on the subject. Secondly, we proposed an approach, based on the traceability links recovered in the first contribution, to study feature-level Change Impact Analysis (CIA) for changes made to the source code of features of product variants. This approach helps to conduct change management from an SPL manager's point of view. It allows the manager to decide which change strategy should be executed, as there is often more than one change that can solve the same problem. In our experimental evaluation, we proved the effectiveness of our approach in terms of the most commonly used metrics on the subject. Finally, based on the traceability recovered in the first contribution, we proposed an approach that contributes to building the Software Product Line Architecture (SPLA) and linking its elements with features. Our focus is to identify mandatory components and variation points of components. Therefore, we proposed a set of algorithms to identify this commonality and variability across a given collection of product variants. According to the experimental evaluation, the efficiency of these algorithms mainly depends on the available product configurations.
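The core IR step underlying such traceability recovery can be sketched with TF-IDF and cosine similarity on toy documents (the thesis's improved approach additionally exploits commonality/variability across variants and code dependencies, which this sketch omits):

    # Bare-bones illustration of the IR step: compare a feature description with
    # token "documents" mined from source code units, using TF-IDF + cosine.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    feature_description = ["send payment and generate invoice"]
    code_documents = [
        "class PaymentService send payment invoice total",   # tokens from one class
        "class CatalogBrowser list products categories",
    ]

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(feature_description + code_documents)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    print(dict(zip(["PaymentService", "CatalogBrowser"], scores.round(2))))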
APA, Harvard, Vancouver, ISO, and other styles
43

Wende, Christian. "Language Family Engineering with Features and Role-Based Composition." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-88985.

Full text
Abstract:
The benefits of Model-Driven Software Development (MDSD) and Domain-Specific Languages (DSLs) wrt. efficiency and quality in software engineering increase the demand for custom languages and the need for efficient methods for language engineering. This motivated the introduction of language families that aim at further reducing the development costs and the maintenance effort for custom languages. The basic idea is to exploit the commonalities and provide means to enable systematic variation among a set of related languages. Current techniques and methodologies for language engineering are not prepared to deal with the particular challenges of language families. First, language engineering processes lack means for a systematic analysis, specification and management of variability as found in language families. Second, technical approaches for a modular specification and realisation of languages suffer from insufficient modularity properties. They lack means for information hiding, for explicit module interfaces, for loose coupling, and for flexible module integration. Our first contribution, Feature-Oriented Language Family Engineering (LFE), adapts methods from Software Product Line Engineering to the domain of language engineering. It extends Feature-Oriented Software Development to support metamodelling approaches used for language engineering and replaces state-of-the-art processes by a variability- and reuse-oriented LFE process. Feature-oriented techniques are used as means for systematic variability analysis, variability management, language variant specification, and the automatic derivation of custom language variants. Our second contribution, Integrative Role-Based Language Composition, extends existing metamodelling approaches with roles. Role models introduce enhanced modularity for object-oriented specifications like abstract syntax metamodels. We introduce a role-based language for the specification of language components, a role-based composition language, and an extensible composition system to evaluate role-based language composition programs. The composition system introduces integrative, grey-box composition techniques for language syntax and semantics that realise the statics and dynamics of role composition, respectively. To evaluate the introduced approaches and to show their applicability, we apply them in three major case studies. First, we use feature-oriented LFE to implement a language family for the ontology language OWL. Second, we employ role-based language composition to realise a component-based version of the language OCL. Third, we apply both approaches in combination for the development of SumUp, a family of languages for mathematical equations.
APA, Harvard, Vancouver, ISO, and other styles
44

Vogt, Robert Jeffery. "Automatic speaker recognition under adverse conditions." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/36195/1/Robert_Vogt_Thesis.pdf.

Full text
Abstract:
Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology including suspect identification, tracking terrorists and detecting a person’s presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: Modelling mismatch and modelling uncertainty. To directly address the performance degradation incurred through mismatched conditions it was proposed to directly model this mismatch. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases. Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success. Estimating a confidence interval for the "true" verification score enabled an order of magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
APA, Harvard, Vancouver, ISO, and other styles
45

Schroeter, Julia. "Feature-based configuration management of reconfigurable cloud applications." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141415.

Full text
Abstract:
A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies therefore start providing applications as a service, which are directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances. Thus, each instance is provisioned manually and must be maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates redundancy, as a single application instance serves multiple customers renting the application. The combination of a configuration self-service portal with a multi-tenant aware application architecture allows serving customers just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited. Solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability, automating configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising solution to achieve this goal is the application of software product line methods. In software product line research, feature models are in wide use to express variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is three-fold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed. Perspectives can be tailored for certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. Therefore, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in different case studies in the industrial and academic context. Summarizing, the introduced framework for feature-based configuration management is a foundation for automating configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
APA, Harvard, Vancouver, ISO, and other styles
46

Nytorpe, Piledahl Staffan, and Daniel Dahlberg. "Detektering av stress från biometrisk data i realtid." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-31248.

Full text
Abstract:
At the time of writing, stress and stress-related disease have become the most common reasons for absence from the workplace in Sweden. The purpose of the work presented here is to identify and notify people experiencing unhealthy levels of stress. Since symptoms of mental stress manifest through functions of the Sympathetic Nervous System (SNS), they are best measured through monitoring of SNS changes and phenomena. In this study, changes in the sympathetic control of heart rate were recorded and analyzed using heart rate variability analysis and a simple runner’s heart rate sensor connected to a smartphone. Mental stress data was collected through stressful video gaming. This was compared to data from non-stressful activities, physical activity and extremely stressful activities such as public speaking events. By using the period between heartbeats and selecting features from the frequency domain, a simple machine learning algorithm could differentiate between the types of data and thus effectively recognize mental stress. The study resulted in a collection of 100 data points, an algorithm to extract features and an application to continuously collect and classify sequences of heart periods. It also revealed an interesting relationship in the data between different subjects. The fact that continuous stress monitoring can be achieved using minimally intrusive sensors is the greatest benefit of these results, especially considering its potential value in the identification and prevention of stress-related disease.
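A simplified sketch of one frequency-domain HRV feature of the kind mentioned above, the LF/HF ratio, computed from a synthetic RR series (real pipelines preprocess the signal far more carefully):

    # Simplified sketch: LF/HF ratio from a synthetic RR series via Welch's method.
    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    rng = np.random.default_rng(1)
    rr = rng.normal(0.8, 0.05, size=600)   # RR intervals in seconds (synthetic)
    t = np.cumsum(rr)                      # beat times

    # Resample the irregular RR series to an even 4 Hz grid before spectral analysis.
    fs = 4.0
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(grid)

    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    print("LF/HF ratio:", round(lf / hf, 2))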
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Msie', Deen Ra'Fat. "Construction de lignes de produits logiciels par rétro-ingénierie de modèles de caractéristiques à partir de variantes de logiciels : l'approche REVPLINE." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20024/document.

Full text
Abstract:
Les lignes de produits logicielles constituent une approche permettant de construire et de maintenir une famille de produits logiciels similaires mettant en œuvre des principes de réutilisation. Ces principes favorisent la réduction de l'effort de développement et de maintenance, raccourcissent le temps de mise sur le marché et améliorent la qualité globale du logiciel. La migration de produits logiciels similaires vers une ligne de produits demande de comprendre leurs similitudes et leurs différences qui s'expriment sous forme de caractéristiques (features) offertes. Dans cette thèse, nous nous intéressons au problème de la construction d'une ligne de produits à partir du code source de ses produits et de certains artefacts complémentaires comme les diagrammes de cas d'utilisation, quand ils existent. Nous proposons des contributions sur l'une des étapes principales dans cette construction, qui consiste à extraire et à organiser un modèle de caractéristiques (feature model) dans un mode automatisé. La première contribution consiste à extraire des caractéristiques dans le code source de variantes de logiciels écrits dans le paradigme objet. Trois techniques sont mises en œuvre pour parvenir à cet objectif : l'Analyse Formelle de Concepts, l'Indexation Sémantique Latente et l'analyse des dépendances structurelles dans le code. Elles exploitent les parties communes et variables au niveau du code source. La seconde contribution s'attache à documenter une caractéristique extraite par un nom et une description. Elle exploite le code source mais également les diagrammes de cas d'utilisation, qui contiennent, en plus de l'organisation logique des fonctionnalités externes, des descriptions textuelles de ces mêmes fonctionnalités. En plus des techniques précédentes, elle s'appuie sur l'Analyse Relationnelle de Concepts afin de former des groupes d'entités d'après leurs relations. Dans la troisième contribution, nous proposons une approche visant à organiser les caractéristiques, une fois documentées, dans un modèle de caractéristiques. Ce modèle de caractéristiques est un arbre étiqueté par des opérations et muni d'expressions logiques qui met en valeur les caractéristiques obligatoires, les caractéristiques optionnelles, des groupes de caractéristiques (groupes ET, OU, OU exclusif), et des contraintes complémentaires textuelles sous forme d'implication ou d'exclusion mutuelle. Ce modèle est obtenu par analyse d'une structure obtenue par Analyse Formelle de Concepts appliquée à la description des variantes par les caractéristiques. L'approche est validée sur trois cas d'étude principaux : ArgoUML-SPL, Health complaint-SPL et Mobile media. Ces cas d'études sont déjà des lignes de produits constituées. Nous considérons plusieurs produits issus de ces lignes comme s'ils étaient des variantes de logiciels, nous appliquons notre approche, puis nous évaluons son efficacité par comparaison entre les modèles de caractéristiques extraits automatiquement et les modèles de caractéristiques initiaux (conçus par les développeurs des lignes de produits analysées)
The idea of the Software Product Line (SPL) approach is to manage a family of similar software products in a reuse-based way. Reuse avoids repetition, which helps reduce development and maintenance effort, shorten time-to-market and improve the overall quality of software. To migrate from existing software product variants to an SPL, one has to understand how they are similar and how they differ from one another. Companies often develop a set of software variants that share some features and differ in others to meet specific requirements. To exploit existing software variants and build a software product line, a feature model must be built as a first step. To do so, it is necessary to extract mandatory and optional features and to associate each feature with a name. Then, it is important to organize the mined and documented features into a feature model. In this context, our thesis proposes three contributions. As a first contribution, we propose a new approach to mine features from the object-oriented source code of a set of software variants, based on Formal Concept Analysis, code dependencies and Latent Semantic Indexing. The novelty of our approach is that it exploits commonality and variability across software variants, at the source code level, to run Information Retrieval methods in an efficient way. The second contribution consists in documenting the mined feature implementations based on Formal Concept Analysis, Latent Semantic Indexing and Relational Concept Analysis. We propose a complementary approach, which aims to document the mined feature implementations by giving them names and descriptions, based on the feature implementations and use-case diagrams of the software variants. The novelty of our approach is that it exploits commonality and variability across software variants, at the levels of feature implementations and use cases, to run Information Retrieval methods in an efficient way. In the third contribution, we propose an automatic approach to organize the mined, documented features into a feature model. Features are organized in a tree which highlights mandatory features, optional features and feature groups (and, or, xor groups). The feature model is completed with requirement and mutual exclusion constraints. We rely on Formal Concept Analysis and software configurations to mine a unique and consistent feature model. To validate our approach, we applied it to three case studies: ArgoUML-SPL, Health complaint-SPL, and Mobile media software product variants. The results of this evaluation validate the relevance and the performance of our proposal, as most of the features and their constraints were correctly identified.
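The commonality/variability information that Formal Concept Analysis formalizes can be glimpsed in a toy product-by-feature context (invented data); simple set operations already separate mandatory from optional feature candidates:

    # Toy product-by-feature context of the kind FCA operates on (invented data).
    variants = {
        "Variant1": {"Core", "Audio"},
        "Variant2": {"Core", "Audio", "Video"},
        "Variant3": {"Core", "Video"},
    }

    common = set.intersection(*variants.values())      # mandatory candidates
    optional = set.union(*variants.values()) - common  # variable features
    print("mandatory:", sorted(common))                # ['Core']
    print("optional:", sorted(optional))               # ['Audio', 'Video']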
APA, Harvard, Vancouver, ISO, and other styles
48

Rodrigues, Larissa Cristina Moraes. "Representação de variabilidade estrutural de dados por meio de famílias de esquemas de banco de dados." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-24022017-145907/.

Full text
Abstract:
Diferentes organizações dentro de um mesmo domínio de aplicação costumam ter requisitos de dados bastante semelhantes. Apesar disso, cada organização também tem necessidades específicas, que precisam ser consideradas no projeto e desenvolvimento dos sistemas de bancos de dados para o domínio em questão. Dessas necessidades específicas, resultam variações estruturais nos dados das organizações de um mesmo domínio. As técnicas tradicionais de modelagem conceitual de banco de dados (como o Modelo Entidade-Relacionamento - MER - e a Linguagem Unificada de Modelagem - UML) não nos permitem expressar em um único esquema de dados essa variabilidade. Para abordar esse problema, este trabalho de mestrado propôs um novo método de modelagem conceitual baseado no uso de Diagramas de Características de Banco de Dados (DBFDs, do inglês Database Feature Diagrams). Esse método foi projetado para apoiar a criação de famílias de esquemas conceituais de banco de dados. Uma família de esquemas conceituais de banco de dados compreende todas as possíveis variações de esquemas conceituais de banco de dados para um determinado domínio de aplicação. Os DBFDs são uma extensão do conceito de Diagrama de Características, usado na Engenharia de Linhas de Produtos de Software. Por meio dos DBFDs, é possível gerar esquemas conceituais de banco de dados personalizados para atender às necessidades específicas de usuários ou organizações, ao mesmo tempo que se garante uma padronização no tratamento dos requisitos de dados de um domínio de aplicação. No trabalho, também foi desenvolvida uma ferramenta Web chamada DBFD Creator, para facilitar o uso do novo método de modelagem e a criação dos DBFDs. Para avaliar o método proposto neste trabalho, foi desenvolvido um estudo de caso no domínio de dados experimentais de neurociência. Por meio do estudo de caso, foi possível concluir que o método proposto é viável para modelar a variabilidade de dados de um domínio de aplicação real. Além disso, foi realizado um estudo exploratório com um grupo de pessoas que receberam treinamentos, executaram tarefas e preencheram questionários de avaliação sobre o método de modelagem e a sua ferramenta de software de apoio. Os resultados desse estudo exploratório mostraram que o método proposto é reprodutível e que a ferramenta de software tem boa usabilidade, amparando de forma apropriada a execução do passo-a-passo do método.
Different organizations within the same application domain usually have very similar data requirements. Nevertheless, each organization also has specific needs that should be considered in the design and development of database systems for that domain. These specific needs result in structural variations in the data of organizations within the same domain. Traditional techniques of conceptual database modeling (such as the Entity-Relationship Model - ERM - and the Unified Modeling Language - UML) do not allow this variability to be expressed in a single data schema. To address this problem, this work proposes a new conceptual modeling method based on the use of Database Feature Diagrams (DBFDs). This method was designed to support the creation of families of conceptual database schemas. A family of conceptual database schemas includes all possible variations of conceptual database schemas for a particular application domain. DBFDs are an extension of the Feature Diagram concept used in Software Product Line Engineering. Through DBFDs, it is possible to generate customized conceptual database schemas to address the specific needs of users or organizations while ensuring a standardized treatment of the data requirements of an application domain. In this work, a Web tool called DBFD Creator was also developed to facilitate the use of the new modeling method and the creation of DBFDs. To evaluate the proposed method, a case study was developed in the domain of experimental neuroscience data. Through the case study, it was possible to conclude that the proposed method is feasible for modeling the data variability of a real application domain. In addition, an exploratory study was conducted with a group of people who received training, executed tasks and filled out evaluation questionnaires about the modeling method and its supporting software tool. The results of this exploratory study showed that the proposed method is reproducible and that the software tool has good usability, properly supporting the execution of the method's step-by-step procedure.
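As a hypothetical sketch of how a feature selection could drive schema generation (invented table fragments, not the DBFD Creator tool):

    # Hypothetical sketch: a feature selection determines which table definitions
    # end up in the generated, organization-specific schema.
    schema_fragments = {
        "Subject":  "CREATE TABLE subject (id INTEGER PRIMARY KEY, species TEXT);",
        "EEG":      "CREATE TABLE eeg_recording (id INTEGER PRIMARY KEY, subject_id INTEGER, sampling_rate REAL);",
        "Behavior": "CREATE TABLE behavior_trial (id INTEGER PRIMARY KEY, subject_id INTEGER, outcome TEXT);",
    }

    def generate_schema(selected_features):
        return "\n".join(schema_fragments[f] for f in selected_features if f in schema_fragments)

    print(generate_schema(["Subject", "EEG"]))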
APA, Harvard, Vancouver, ISO, and other styles
49

Hung, Meng-Pai. "THE EVALUATION OF THE EAST GREENLAND SEA ODDEN ICE FEATURE USING THE COMMUNITY CLIMATE SYSTEM MODEL3.0 (CCSM3.0)." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250265410.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Munir, Qaiser, and Muhammad Shahid. "Software Product Line:Survey of Tools." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57888.

Full text
Abstract:

A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission. The main attractive part of SPL is developing a set of common assets, which include requirements, design, test plans, test cases, reusable software components and other artifacts. Tools for the development of software product lines are very few in number. The purpose of these tools is to support the creation, maintenance and use of different versions of product line artifacts. This requires a development environment that supports the management of assets and product development, processes and sharing of assets among different products.

The objective of this master thesis is to investigate the available tools which support the Software Product Line process and its development phases. The work is carried out in two steps. In the first step, available Software Product Line tools are explored, a list of tools is prepared and managed, and a brief introduction of each tool is presented. The tools are classified into different categories according to their usage, and relations between the tools are established for better organization and understanding. In the second step, two tools, Pure::variant and MetaEdit+, are selected and quality factors such as Usability, Performance, Reliability, Memory Consumption and Capacity are evaluated.

APA, Harvard, Vancouver, ISO, and other styles