Dissertations / Theses on the topic 'Automatic distillation of structure'




Consult the top 50 dissertations / theses for your research on the topic 'Automatic distillation of structure.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Takase, Hiroshi. "Systematic Structure Synthesis of Distillation-Based Separation Processes." Kyoto University, 2018. http://hdl.handle.net/2433/232063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

LeThanh, Huong. "Automatic discourse structure generation using rhetorical structure theory." Thesis, Middlesex University, 2004. http://eprints.mdx.ac.uk/8002/.

Full text
Abstract:
This thesis addresses a difficult problem in text processing: creating a system to automatically derive rhetorical structures of text. Although the rhetorical structure has proven to be useful in many fields of text processing such as text summarisation and information extraction, systems that automatically generate rhetorical structures with high accuracy are difficult to find. This is because discourse is one of the biggest and yet least well defined areas in linguistics. An agreement amongst researchers on the best method for analysing the rhetorical structure of text has not been found. This thesis focuses on investigating a method to generate the rhetorical structures of text. By exploiting different cohesive devices, it proposes a method to recognise rhetorical relations between spans by checking for the appearance of these devices. These factors include cue phrases, noun-phrase cues, verb-phrase cues, reference words, time references, substitution words, ellipses, and syntactic information. The discourse analyser is divided into two levels: sentence-level and text-level. The former uses syntactic information and cue phrases to segment sentences into elementary discourse units and to generate a rhetorical structure for each sentence. The latter derives rhetorical relations between large spans and then replaces each sentence by its corresponding rhetorical structure to produce the rhetorical structure of the text. The rhetorical structure at the text-level is derived by selecting rhetorical relations to connect adjacent and non-overlapping spans to form a discourse structure that covers the entire text. Constraints of textual organisation and textual adjacency are effectively used in a beam search to reduce the search space in generating such rhetorical structures. Experiments carried out in this research achieved an 89.4% F-score for discourse segmentation, a 52.4% F-score for the sentence-level discourse analyser and a 38.1% F-score for the final output of the system. This shows that the approach performs well in comparison with current research in discourse.
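The text-level search described above can be sketched compactly: keep a beam of partial analyses and, at each step, try every way of joining two adjacent spans with a rhetorical relation. A minimal Python illustration in which a stub scorer stands in for the thesis's cue-phrase and syntactic evidence; the relation set is hypothetical:

```python
import zlib

RELATIONS = ["Elaboration", "Contrast", "Cause"]  # illustrative subset

def score_relation(left, right, relation):
    # Stub: a real scorer would use cue phrases, reference words, syntax, ...
    key = (str(left) + "|" + str(right) + "|" + relation).encode()
    return (zlib.crc32(key) % 1000) / 1000.0

def beam_parse(units, beam_width=3):
    """Join adjacent, non-overlapping spans into one tree covering the text,
    pruning to the best `beam_width` partial analyses at each step."""
    beam = [(0.0, [(u,) for u in units])]            # (score, list of spans)
    while any(len(spans) > 1 for _, spans in beam):
        candidates = []
        for score, spans in beam:
            if len(spans) == 1:                      # already a full analysis
                candidates.append((score, spans))
                continue
            for i in range(len(spans) - 1):          # adjacency constraint
                for rel in RELATIONS:
                    s = score_relation(spans[i], spans[i + 1], rel)
                    merged = spans[:i] + [(rel, spans[i], spans[i + 1])] + spans[i + 2:]
                    candidates.append((score + s, merged))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:beam_width]               # prune the search space
    return beam[0]

score, (tree,) = beam_parse(["EDU1", "EDU2", "EDU3", "EDU4"])
print(score, tree)
```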
APA, Harvard, Vancouver, ISO, and other styles
3

Albaret, Christian. "Contribution à l'étude et à la commande des colonnes de distillation multiconstituants." Phd thesis, École Nationale Supérieure des Mines de Paris, 1992. http://pastel.archives-ouvertes.fr/pastel-00838235.

Full text
Abstract:
We first present a framework for building distillation column models that accounts, in a general way, for a broad class of thermodynamic models, an arbitrary number of components, and the hydrodynamic behaviour of the trays. After recalling results from the literature for binary columns, we prove the existence and uniqueness of the stationary point of a 'constant molar flow' column model with an arbitrary number of components, together with an inversion result for the thermodynamic equilibrium. We define the notion of a 'simplex-relative' function to describe the properties of maps of a simplex onto itself constructed from thermodynamic models. The results rely on topological degree theory applied to these functions. We then present an extension of the model reduction method by aggregation and its use for control. The resulting controller is tested in simulation on a model of an industrial high-purity distillation column.
APA, Harvard, Vancouver, ISO, and other styles
4

Leitch, Megan. "Quantitative Structure-Flux Relationships of Membrane Distillation Materials for Water Desalination." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/780.

Full text
Abstract:
Membrane distillation (MD) is an emergent water desalination technology with potential for scalable, sustainable production of fresh water from highly concentrated brines. Wider adoption of MD technology depends upon improvements to process efficiency. In recent years, researchers have published a number of experimental papers seeking to improve the mass and heat transport properties of MD membranes. However, an imperfect understanding of how intrinsic membrane geometry affects MD performance limits efforts to optimize membrane structure. The objective of this dissertation is to help elucidate the effects of membrane structure on MD flux, permeability, and thermal performance, with a focus on novel fibrous membranes. Mechanistic and empirical modeling methods were employed to relate the structural characteristics of bacterial nanocellulose and electrospun polymeric membranes to experimentally measured MD performance. Through these experimental and modeling studies, three conclusions are reached. First, the MD community can hasten the search for optimal membrane structures by improving the quality and reproducibility of reported experimental data. Review of published and newly collected MD data shows that feed and permeate stream channel geometry and flow non-idealities can substantially affect measured performance metrics for MD membranes. If these factors are accounted for by careful characterization of convective heat transfer coefficients, membrane permeability and thermal efficiency can be definitively deduced. A new methodology is presented for determining the convective heat transfer coefficient using experimentally validated Nusselt correlations. Accurate reporting of cassette heat transfer metrics will facilitate inter-study experimental reproducibility and comparison. Second, the use of dimensional analysis to empirically model MD transport is effective for predicting vapor flux in fibrous membranes. Advantages of the model include its use of easily measurable structural parameters tailored specifically for fibrous membranes and the incorporation of all relevant vapor, membrane, and system characteristics into a mathematically simple, yet theoretically sound, regression model. The new model predicts MD flux more accurately than the mechanistic Dusty Gas Model or previously published empirical MD models. Dimensional-analysis-based transport models may be generalizable to a variety of novel membrane types, lead to a more rigorous understanding of structural influences on vapor transport processes, and guide the development of high-performance membrane structures. Finally, MD process efficiency can benefit from the development of highly porous, scalable membrane materials. Bacterial nanocellulose aerogel membranes exhibit substantial improvements in intrinsic permeability and thermal efficiency as compared to traditional phase-inversion membranes, suggesting that there is an opportunity to advance MD process viability through improved membrane design. By mimicking the porosity and pore-interconnectivity of nanocellulose aerogels, novel membrane materials can achieve high thermal efficiency and low mass transport resistance. This dissertation contributes experimental data and modeling techniques to improve knowledge of membrane structural effects on MD performance. These contributions have implications for the wider adoption of MD technology through better reproducibility of published experimental results, enhanced transport modeling to optimize membrane structure, and the demonstrated thermal efficiency of highly porous materials.
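The dimensional-analysis model described above is, at heart, a power-law regression over dimensionless groups, which becomes linear after taking logarithms. A sketch on synthetic data with invented groups (the dissertation's actual groups are built from measured fiber-geometry and system parameters):

```python
import numpy as np

# Synthetic example: flux modeled as a power law over dimensionless groups,
# J ~ a * Pi1^b1 * Pi2^b2, which is linear after taking logarithms.
rng = np.random.default_rng(0)
pi1 = rng.uniform(0.1, 10, 50)   # e.g. a fiber-geometry group (illustrative)
pi2 = rng.uniform(0.1, 10, 50)   # e.g. a thermal driving-force group (illustrative)
flux = 2.0 * pi1**0.7 * pi2**1.3 * rng.lognormal(0, 0.05, 50)  # noisy "measurements"

# Least-squares fit in log space recovers the prefactor and exponents.
X = np.column_stack([np.ones(50), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(X, np.log(flux), rcond=None)
print("a =", np.exp(coef[0]), "b1 =", coef[1], "b2 =", coef[2])
```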
APA, Harvard, Vancouver, ISO, and other styles
5

Karanikola, Vasiliki, Andrea F. Corral, Hua Jiang, A. Eduardo Sáez, Wendell P. Ela, and Robert G. Arnold. "Effects of membrane structure and operational variables on membrane distillation performance." ELSEVIER SCIENCE BV, 2017. http://hdl.handle.net/10150/623056.

Full text
Abstract:
A bench-scale, sweeping gas, flat-sheet Membrane Distillation (MD) unit was used to assess the importance of membrane architecture and operational variables to distillate production rate. Sweeping gas membrane distillation (SGMD) was simulated for various membrane characteristics (material, pore size, porosity and thickness), spacer dimensions and operating conditions (influent brine temperature, sweep gas flow rate and brine flow rate) based on coupled mass and energy balances. Model calibration was carried out using four membranes that differed in terms of material selection, effective pore size, thickness and porosity. Membrane tortuosity was the lone fitting parameter. Distillate fluxes and temperature profiles from experiments matched simulations over a wide range of operating conditions. Limitations to distillate production were then investigated via simulations, noting implications for MD design and operation. Under the majority of conditions investigated, membrane resistance to mass transport provided the primary limitation to water purification rate. The nominal or effective membrane pore size and the lumped parameter ε/(δτ) (porosity divided by the product of membrane tortuosity and thickness) were primary determinants of membrane resistance to mass transport. Resistance to Knudsen diffusion dominated membrane resistance at pore diameters <0.3 μm. At larger pore sizes, a combination of resistances to intra-pore molecular diffusion and convection across the gas-phase boundary layer determined mass transport resistance. Findings are restricted to the module design flow regimes considered in the modeling effort. Nevertheless, the value of performance simulation to membrane distillation design and operation is well illustrated.
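The pore-size regimes noted above can be illustrated with the standard Knudsen-flow expressions used in MD modeling. A generic sketch, not the paper's calibrated model; the membrane parameters are invented:

```python
import numpy as np

KB, R = 1.380649e-23, 8.314   # Boltzmann constant, gas constant
M_W = 0.018                   # molar mass of water vapour, kg/mol
D_C = 2.64e-10                # collision diameter of water, m (literature value)

def knudsen_number(pore_d, T=333.0, P=1.0e5):
    """Kn = mean free path / pore diameter; Kn >> 1 implies the Knudsen regime."""
    mfp = KB * T / (np.sqrt(2) * np.pi * D_C**2 * P)
    return mfp / pore_d

def knudsen_coefficient(eps, r, tau, delta, T=333.0):
    """Knudsen-regime membrane coefficient, kg m^-2 s^-1 Pa^-1:
    B = (2/3) * (eps * r / (tau * delta)) * sqrt(8 M / (pi R T))."""
    return (2.0 / 3.0) * eps * r / (tau * delta) * np.sqrt(8 * M_W / (np.pi * R * T))

for d in (0.1e-6, 0.3e-6, 1.0e-6):
    print(f"d = {d*1e6:.1f} um  Kn = {knudsen_number(d):.2f}")
# Invented membrane: porosity 0.75, pore radius 0.1 um, tortuosity 2, thickness 100 um.
print("B =", knudsen_coefficient(eps=0.75, r=0.1e-6, tau=2.0, delta=100e-6), "kg/(m2 s Pa)")
```

Consistent with the abstract, the mean free path of water vapour near atmospheric pressure is roughly 0.15 μm, so Kn exceeds unity only for sub-0.3 μm pores.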
APA, Harvard, Vancouver, ISO, and other styles
6

O'Hanlon, Ken. "Automatic music transcription using structure and sparsity." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8818.

Full text
Abstract:
Automatic Music Transcription seeks a machine understanding of a musical signal in terms of pitch-time activations. One popular approach to this problem is the use of spectrogram decompositions, whereby a signal matrix is decomposed over a dictionary of spectral templates, each representing a note. Typically the decomposition is performed using gradient-descent-based methods with multiplicative updates derived from Non-negative Matrix Factorisation (NMF). The final representation may be expected to be sparse, as the musical signal itself is considered to consist of few active notes. In this thesis some concepts that are familiar in the sparse representations literature are introduced to the AMT problem. Structured sparsity assumes that certain atoms tend to be active together. In the context of AMT this affords the use of subspace modelling of notes, and non-negative group sparse algorithms are proposed in order to exploit the greater modelling capability introduced. Stepwise methods are often used for decomposing sparse signals, but their use for AMT has previously been limited. Some new approaches to AMT are proposed by incorporating stepwise optimal approaches, with promising results. Dictionary coherence is used to provide recovery conditions for sparse algorithms. While such guarantees are not possible in the context of AMT, it is found that coherence is a useful parameter to consider, affording improved performance in spectrogram decompositions.
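The spectrogram decomposition described above is compact to sketch: with a fixed dictionary W of spectral note templates, activations H follow from NMF-style multiplicative updates. A toy example on synthetic data with the Euclidean cost (the thesis's group-sparse and stepwise variants are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
F, T, K = 256, 100, 12                # frequency bins, frames, note templates
W = np.abs(rng.normal(size=(F, K)))   # dictionary of spectral note templates
H_true = np.maximum(rng.normal(size=(K, T)), 0)  # sparse-ish activations
V = W @ H_true + 1e-6                 # observed magnitude spectrogram

# Multiplicative updates for H with W held fixed (Euclidean NMF cost):
H = np.abs(rng.normal(size=(K, T)))
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # non-negativity preserved by construction
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```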
APA, Harvard, Vancouver, ISO, and other styles
7

Duchnowski, Paul. "A new structure for automatic speech recognition." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/17333.

Full text
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993.
Includes bibliographical references (leaves 102-110).
by Paul Duchnowski.
Sc.D.
APA, Harvard, Vancouver, ISO, and other styles
8

Schrimpf, Natalie Margaret. "Effects of Topic Structure on Automatic Summarization." Thesis, Yale University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10957338.

Full text
Abstract:

Automatic summarization involves finding the most important information in a text in order to create a reduced version of that text that conveys the same meaning as the original. In this dissertation, I present a method for using topic information to influence which content is selected for a summary.

This dissertation addresses questions such as how to represent the meaning of a document for automatic tasks. For tasks such as automatic summarization, there is a tradeoff between using sophisticated linguistic methods and using methods that can easily and efficiently be used by automatic systems. This research seeks to find a balance between these two goals by using linguistically-motivated methods that can be used to improve automatic summarization performance. Another question addressed in this work is the balance between summary coverage and length. A summary must be long enough to convey the information from the original text but short enough to be useful in place of the original document. This dissertation explores the use of topics to increase coverage while reducing redundancy.

There are several issues that affect summary quality. These include information coverage, redundancy, and coherence. This dissertation focuses on achieving coverage of all distinct concepts in a text by incorporating topic structure. During the summarization process, emphasis is placed on including information from all topics in order to produce summaries that cover the range of information present in the original documents. In this work, several notions of what constitutes a topic are explored, with particular focus on defining topics using information from Rhetorical Structure Theory (Mann and Thompson 1988). The results of incorporating topics into a summarization system show that topic structure improves automatic summarization performance.

The contributions of this dissertation include demonstrating that focusing on coverage of the different topics in a text improves summaries, and that topic structure is an effective way to achieve this coverage. This research also shows the effectiveness of a simple modular method for incorporating topics into summarization that allows for comparison of different notions of topic and different summarization techniques.
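A minimal sketch of topic-driven selection in the spirit described above: prefer sentences from topics not yet covered, under a fixed length budget. The importance score and topic labels are placeholders, not the dissertation's model:

```python
def summarize(sentences, topics, budget=3):
    """Greedy selection that prefers sentences from topics not yet covered,
    trading coverage against length (a fixed sentence budget here).
    `topics[i]` is the topic id of sentences[i]."""
    scores = [len(s.split()) for s in sentences]   # stand-in importance score
    chosen, covered = [], set()
    while len(chosen) < budget:
        best = max(
            (i for i in range(len(sentences)) if i not in chosen),
            key=lambda i: (topics[i] not in covered, scores[i]),
        )
        chosen.append(best)
        covered.add(topics[best])
    return [sentences[i] for i in sorted(chosen)]

sents = ["Topic A intro.", "More on A.", "Topic B result.", "B details.", "C wrap-up."]
print(summarize(sents, topics=[0, 0, 1, 1, 2]))   # one sentence per topic
```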

APA, Harvard, Vancouver, ISO, and other styles
9

Murugesan, Viyash. "Optimization of Nanocomposite Membrane for Membrane Distillation." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36534.

Full text
Abstract:
In this study, the effects of nanoparticles, including 7 nm TiO2, 200 nm TiO2, and hydrophilic and hydrophobic SiO2 with mean diameters in the range of 15–20 nm, and of their concentration on membrane properties and vacuum membrane distillation (VMD) performance were evaluated. The effects of membrane thickness and support materials were also investigated. The membranes were characterised extensively in terms of morphology (SEM), water contact angle, liquid entry pressure of water (LEPw), surface roughness, and pore size. While the best nanocomposite membranes with 200 nm TiO2 nanoparticles (NPs) were obtained at 2% particle concentration, the optimal particle concentration was 5% when 7 nm TiO2 was integrated. Using a nanocomposite membrane containing 2 wt% TiO2 (200 nm) nanoparticles, a VMD flux of 2.1 kg/m²·h and an LEPw of 34 psi were obtained with 99% salt rejection. Furthermore, it was observed that decreasing the membrane thickness increased the portion of the finger-like layer in the membrane and reduced the sponge-like layer when hydrophilic nanoparticles were used. Using continuous-flow VMD, a flux of 3.1 kg/m²·h was obtained with neat PVDF membranes, which was 600% higher than the flux obtained by static-flow VMD with the same membrane at the same temperature and vacuum pressure. The fluxes of both static and flow-cell VMD increased with temperature. Furthermore, it was evident that continuous-flow VMD at 2 LPM yielded 300% or higher flux than static VMD at any given temperature, indicating strong effects of the turbulence provided in the flow-cell VMD.
APA, Harvard, Vancouver, ISO, and other styles
10

Hillard, Dustin Lundring. "Automatic sentence structure annotation for spoken language processing /." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/6080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Brown, Matthew. "Automatic production of property structure from natural language." Thesis, University of Reading, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Guo, Yufan. "Automatic analysis of information structure in biomedical literature." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Constantin, Alexandru. "Automatic structure and keyphrase analysis of scientific publications." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/automatic-structure-and-keyphrase-analysis-of-scientific-publications(2cfe0b83-5cbb-4305-942c-031945437056).html.

Full text
Abstract:
Purpose. This work addresses an escalating problem within the realm of scientific publishing, that stems from accelerated publication rates of article formats difficult to process automatically. The amount of manual labour required to organise a comprehensive corpus of relevant literature has long been impractical. This has, in effect, reduced research efficiency and delayed scientific advancement. Two complementary approaches meant to alleviate this problem are detailed and improved upon beyond the current state-of-the-art, namely logical structure recovery of articles and keyphrase extraction. Methodology. The first approach targets the issue of flat-format publishing. It performs a structural analysis of the camera-ready PDF article and recognises its fine-grained organisation over logical units. The second approach is the application of a keyphrase extraction algorithm that relies on rhetorical information from the recovered structure to better contour an article’s true points of focus. A recount of the scientific article’s function, content and structure is provided, along with insights into how different logical components such as section headings or the bibliography can be automatically identified and utilised for higher-quality keyphrase extraction. Findings. Structure recovery can be carried out independently of an article’s formatting specifics, by exploiting conventional dependencies between logical components. In addition, access to an article’s logical structure is beneficial across term extraction approaches, reducing input noise and facilitating the emphasis of regions of interest. Value. The first part of this work details a novel method for recovering the rhetorical structure of scientific articles that is competitive with state-of-the-art machine learning techniques, yet requires no layout-specific tuning or prior training. The second part showcases a keyphrase extraction algorithm that outperforms other solutions in an established benchmark, yet does not rely on collection statistics or external knowledge sources in order to be proficient.
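The idea of letting recovered logical structure guide keyphrase extraction can be sketched as zone-weighted counting. The weights and unigram candidates below are illustrative stand-ins, not the thesis's actual algorithm:

```python
from collections import Counter

# Illustrative weights: logical zones recovered by structure analysis count
# for more than body text (the exact weighting in the thesis may differ).
ZONE_WEIGHTS = {"title": 4.0, "abstract": 2.5, "heading": 2.0, "body": 1.0}

def score_keyphrases(zoned_text, top_n=5):
    """zoned_text: list of (zone, text). Counts candidate phrases (unigrams
    here, for brevity) weighted by the logical zone they occur in."""
    scores = Counter()
    for zone, text in zoned_text:
        w = ZONE_WEIGHTS.get(zone, 1.0)
        for tok in text.lower().split():
            scores[tok] += w
    return scores.most_common(top_n)

doc = [("title", "keyphrase extraction from structure"),
       ("abstract", "structure recovery aids keyphrase extraction"),
       ("body", "we evaluate extraction on a benchmark")]
print(score_keyphrases(doc))
```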
APA, Harvard, Vancouver, ISO, and other styles
14

Androutsos, Panagiotis. "Automatic structure and fault detection of semiconductor micrograph images." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0005/MQ45981.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Nieto, Oriol. "Discovering structure in music| Automatic approaches and perceptual evaluations." Thesis, New York University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3705329.

Full text
Abstract:

This dissertation addresses the problem of the automatic discovery of structure in music from audio signals by introducing novel approaches and proposing perceptually enhanced evaluations. First, the problem of music structure analysis is reviewed from the perspectives of music information retrieval (MIR) and music perception and cognition (MPC), including a discussion of the limitations and current challenges in both disciplines. When discussing the existing methods of evaluating the outputs of algorithms that discover musical structure, a transparent open-source package called mir_eval, which contains implementations of these evaluations, is introduced. Then, four MIR algorithms are presented: one to compress music recordings into audible summaries, another to discover musical patterns from an audio signal, and two for the identification of the large-scale, non-overlapping segments of a musical piece. After discussing these techniques, and given the differences in how people perceive the structure of music, the idea of applying more MPC-oriented approaches is considered to obtain perceptually relevant evaluations for music segmentation. A methodology to automatically obtain the tracks that are most difficult for machines to annotate is presented, in order to include them in the design of a human study to collect multiple human annotations. To select these tracks, a novel open-source framework called the music structure analysis framework (MSAF) is introduced. This framework contains the most relevant music segmentation algorithms and it uses mir_eval to transparently evaluate them. Moreover, MSAF makes use of the JSON annotated music specification (JAMS), a new format that contains multiple annotations for several tasks in a single file, which simplifies dataset design and the analysis of agreement across different human references. The human study to collect additional annotations (which are stored in JAMS files) is described, in which five new annotations for fifty tracks are collected. Finally, these additional annotations are analyzed, confirming the problem of having ground-truth datasets with a single annotator per track, given the high degree of disagreement among annotators for the challenging tracks. To alleviate this, these annotations are merged to produce a more robust human reference annotation. Lastly, the standard F-measure of the hit rate used to evaluate music segmentation is analyzed for the case when access to additional annotations is not possible, and it is shown, via multiple human studies, that precision seems more perceptually relevant than recall.
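For readers who want to reproduce the boundary evaluation discussed above, mir_eval exposes it directly. A minimal usage example; the .lab file names are placeholders:

```python
import mir_eval

# Hypothetical .lab files with one "start end label" segment per line.
ref_ivals, ref_labels = mir_eval.io.load_labeled_intervals("reference.lab")
est_ivals, est_labels = mir_eval.io.load_labeled_intervals("estimate.lab")

# Boundary hit rate at a +/-0.5 s window: returns precision, recall, F-measure.
p, r, f = mir_eval.segment.detection(ref_ivals, est_ivals, window=0.5)
print(f"P={p:.3f} R={r:.3f} F={f:.3f}")
```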

APA, Harvard, Vancouver, ISO, and other styles
16

Nguyen, Nhat-Tan. "Automatic segmentation of the bony structure of the shoulder." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23644/23644.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Poudyal, Prakash. "Automatic extraction and structure of arguments in legal documents." Doctoral thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24848.

Full text
Abstract:
Argumentation plays a cardinal role in human communication when formulating reasons and drawing conclusions. A system to automatically identify legal arguments cost-effectively from case-law was developed. Using 42 legal case-laws from the European Court of Human Rights (ECHR), an annotation was performed to establish a 'gold-standard' dataset. Then a three-stage process for argument mining was developed and tested. The first stage aims at evaluating the best set of features for automatically identifying argumentative sentences within unstructured text. Several experiments were conducted, depending upon the type of features available in the corpus, in order to determine which approach yielded the best results. In the second stage, a novel approach to clustering (for grouping sentences automatically into a coherent legal argument) was introduced through the development of two new algorithms: the "Appropriate Cluster Identification Algorithm" (ACIA) and the "Distribution of Sentence to the Cluster Algorithm" (DSCA). This work also includes a new evaluation system for the clustering algorithm, which helps tune it for performance. In the third stage, a hybrid approach of statistical and rule-based techniques was used to categorize argumentative sentences. Overall, the level of accuracy and usefulness achieved by these new techniques makes it viable as the basis of a general argument-mining framework.
APA, Harvard, Vancouver, ISO, and other styles
18

Eisenberg, Joshua Daniel. "Automatic Extraction of Narrative Structure from Long Form Text." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3912.

Full text
Abstract:
Automatic understanding of stories is a long-time goal of the artificial intelligence and natural language processing research communities. Stories literally explain the human experience. Understanding our stories promotes the understanding of both individuals and groups of people: various cultures, societies, families, organizations, governments, and corporations, to name a few. People use stories to share information. Stories are told, by narrators, in linguistic bundles of words called narratives. My work has given computers awareness of narrative structure: specifically, where the boundaries of a narrative lie in a text. This is the task of determining where a narrative begins and ends, a non-trivial task, because people rarely tell one story at a time. People don't specifically announce when they are starting or stopping their stories: they interrupt each other; they tell stories within stories. Before my work, computers had no awareness of narrative boundaries, essentially where stories begin and end. My programs can extract narrative boundaries from novels and short stories with an F1 of 0.65. Before this I worked on teaching computers to identify which paragraphs of text have story content, with an F1 of 0.75 (which is state of the art). Additionally, I have taught computers to identify the narrative point of view (POV; how the narrator identifies themselves) and diegesis (how involved in the story's action the narrator is), with an F1 of over 0.90 for both narrative characteristics. For the narrative POV, diegesis, and narrative level extractors I ran annotation studies, with high agreement, that allowed me to teach computational models to identify structural elements of narrative through supervised machine learning. My work has given computers the ability to find where stories begin and end in raw text. This allows for further automatic analysis, like extraction of plot, intent, event causality, and event coreference. These tasks are impossible when the computer can't distinguish between which stories are told in what spans of text. There are two key contributions in my work: 1) the identification of features that accurately extract elements of narrative structure, and 2) the gold-standard data and reports generated from running annotation studies on identifying narrative structure.
APA, Harvard, Vancouver, ISO, and other styles
19

Ginsberg, David W. "Variable structure control systems." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/18787.

Full text
Abstract:
The primary aim of this thesis is to provide a body of knowledge on variable structure system theory and to apply the developed design concepts to the control of practical systems. It introduces the concept of a structure. The main aim in designing variable structure controllers is to synthesize a variable structure system from two or more single-structure systems, in such a way that the ensuing system outperforms its component structures. When a sliding mode is defined, the ensuing closed-loop behaviour of the system is invariant to plant parameter changes and external disturbances. A variable structure controller was designed for a servo motor and successfully applied to the system. In practice, the phase-plane representative point does not slide at infinite frequency with infinitesimal amplitude along the switching surface(s); thus, the concept of a quasi-sliding regime was introduced. For high-performance system specifications, the phase-plane representative point could cycle about the origin. In some instances, sliding could be lost. For high-speed applications, a novel design modification ensured that the system did not lose sliding. In addition, the controller could track a rapidly changing set point. Successful results support the developed theory.
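The sliding-mode behaviour described above is easy to reproduce numerically: a double-integrator plant, a switching surface s = c·e + ė, and a discontinuous switching law. A sketch with invented gains; the finite step size produces exactly the quasi-sliding chattering the abstract discusses:

```python
import numpy as np

# Double-integrator plant x'' = u + d(t); switching surface s = c*e + e_dot.
c, k, dt = 2.0, 5.0, 1e-3
x, xdot, target = 0.0, 0.0, 1.0
for step in range(int(2.0 / dt)):
    e, edot = target - x, -xdot
    s = c * e + edot
    u = k * np.sign(s)                 # variable-structure switching law
    d = 0.5 * np.sin(10 * step * dt)   # matched disturbance (rejected while sliding)
    xdot += (u + d) * dt               # Euler integration of the plant
    x += xdot * dt
print(f"x(2s) = {x:.3f}  (target {target})")
```

Once the representative point reaches s = 0, the error decays with time constant 1/c regardless of the disturbance, which is the invariance property the thesis exploits.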
APA, Harvard, Vancouver, ISO, and other styles
20

Corrado, Joseph R. "Robust fixed-structure controller synthesis." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/12945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kypuros, Javier Angel. "Variable structure model synthesis for switched systems /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Fairley, Stuart Martin. "A stereo tracking and structure recovery system." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.670262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Hu, Shiyan. "Automatic image analysis and structure segmentation for brain medial temporal lobe." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119386.

Full text
Abstract:
In this thesis, two new automatic image segmentation techniques are proposed and used to analyze medical magnetic resonance (MR) images of human brain medial temporal lobes. The first segmentation technique is an adaptive multi-contrast MR image based appearance modeling scheme, which combines level set and active appearance modeling methods and incorporates multi-contrast MR images into segmentation. The contribution of each multi-contrast image to the segmentation is established by the correlation between each multi-contrast test gray image and its corresponding synthesized gray image. The latter is a linear combination of gray eigen-images characterized by principal component analysis as a result of converting a set of possibly correlated shape training images and multi-contrast gray training images into a set of linearly uncorrelated shape and multi-contrast gray eigen-images. Both the contributing weights and the linear combination model parameters are iteratively updated to minimize a weighted sum of least-square intensity differences between test image and synthesized image. The resulting model parameters are then used to linearly combine the shape-related eigen-images to form a synthesized shape image as a final segmentation. In segmenting the hippocampi and amygdalae from MR images, this segmentation scheme with adaptive contributing weights is shown to provide better performance, as measured by mean Dice κ values, than its counterpart with equal contributing weights. The second segmentation technique is a two-stage segmentation approach, motivated by the concept of using local patches with similar intensity levels to make a collective decision so as to more accurately segment the structure boundary. In the two-stage segmentation, an appearance model-based global segmentation is employed as a first-stage segmentation to identify a coarse contour, while a patch-based local refinement is used as a second-stage segmentation to make local area corrections, but only on a small set of voxels along the coarse contour identified earlier. In this thesis, the first-stage segmentation uses only T1 images instead of multi-contrast images to avoid an increase in computational complexity. It is shown that the two-stage segmentation outperforms its one-stage counterpart in segmenting MR images of the human brain medial temporal lobe structures, including the hippocampus (HC), the amygdala (AG), the entorhinal/perirhinal cortex (EPC), and the parahippocampal cortex (PHC). Medial temporal lobe volumes, estimated by applying the two-stage segmentation to an MR database of 306 subjects with healthy brain development across a 4 to 18 year age range, are further analyzed, and sex-specific growth patterns are derived to help better understand puberty-related and sexually dimorphic brain maturation. Sexual-maturity level, measured by puberty scores, is used to partition the database into two groups: before and during puberty. The structure volumes for boys are larger than those for girls, but the difference varies between the AG, HC, EPC, and PHC. Age-related volumetric growth is observed in the left and right AG, left and right HC, right EPC, and left PHC, and these volumetric changes are statistically significant, but only before puberty. After the onset of puberty, volumetric growth tends to correlate with sexual maturity level. When evaluated with head-size-normalized volumes, we find smaller volumes of the right HC and the left and right PHC for more sexually mature boys, and larger volumes of the left HC for more sexually mature girls. These findings suggest that the rising levels of testosterone in boys and estrogen in girls might have opposite effects, especially for the HC and the PHC. Our findings on sex-specific and sexual-maturity-related volumes may be useful in better understanding medial temporal lobe developmental differences and the related learning, memory, and emotion differences between boys and girls during puberty.
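The Dice κ overlap used above to score segmentations is straightforward to compute; a minimal sketch on toy binary volumes:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice kappa between two binary label volumes: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((10, 10, 10), bool); auto[2:7, 2:7, 2:7] = True
manual = np.zeros((10, 10, 10), bool); manual[3:8, 2:7, 2:7] = True
print(f"Dice = {dice(auto, manual):.3f}")   # 0.8 for this toy overlap
```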
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Gang. "A hybrid approach to the automatic planning of discourse structures." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Harju, Johansson Janne. "A structure utilizing inexact primal-dual interior-point method for analysis of linear differential inclusions /." Licentiate thesis, Linköping : Department of Electrical Engineering, Linköping University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Zuolong. "Study on Structure and Vacuum Membrane Distillation Performance of PVDF Composite Membranes: Influence of Molecular Weight and Blending." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30677.

Full text
Abstract:
In this study, membranes were made from three polyvinylidene fluoride (PVDF) polymers individually and from blend systems of high (H) and low (L) molecular weight PVDF by the phase inversion process. After the viscous and thermodynamic properties of the membrane casting solutions were investigated, the membranes so fabricated were characterized by scanning electron microscopy, gas permeation tests, porosity measurement, contact angle (CA) and liquid entry pressure of water (LEPw) measurement, and further subjected to vacuum membrane distillation (VMD) in a scenario applicable to cooling processes, where the feed water temperature was maintained at 27℃. It was found that the viscosities and thermodynamic instabilities of the PVDF solutions were determined by the type of PVDF employed in single-polymer systems and by the mixing ratio of the two PVDF polymers in blend systems; the membrane properties and performance were influenced by these factors as well. In single-polymer systems, it was found that membrane surface roughness and porosity increased with an increase in molecular weight. Among all the membranes cast in this study, the water vapor flux of VMD was found to be the highest at the intermediate range of H:L ratio, i.e., 4:6, at which the thickness of the sponge-like layer showed a minimum, the finger-like macro-voids formed a more orderly single-layer structure, and the LEPw showed a minimum. It can be concluded that blend systems of high molecular weight and low molecular weight PVDF polymers could be used to optimize membrane performance in vacuum membrane distillation.
APA, Harvard, Vancouver, ISO, and other styles
27

Fallqvist, Marcus. "Automatic Volume Estimation Using Structure-from-Motion Fused with a Cellphone's Inertial Sensors." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144194.

Full text
Abstract:
The thesis work evaluates a method to estimate the volume of stone and gravel piles using only a cellphone to collect video and sensor data from the gyroscopes and accelerometers. The project is commissioned by Escenda Engineering with the motivation to replace more complex and resource-demanding systems with a cheaper and easy-to-use handheld device. The implementation features popular computer vision methods such as KLT tracking, Structure-from-Motion and Space Carving, together with some sensor fusion. The results imply that it is possible to estimate volumes up to a certain accuracy, which is limited by the sensor quality and subject to a bias.
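Once space carving has produced a voxel occupancy grid, the volume estimate itself is a voxel count scaled by the voxel size. A minimal sketch with an invented grid and resolution:

```python
import numpy as np

def carved_volume(occupancy, voxel_size_m):
    """Volume of a space-carved reconstruction: the number of voxels still
    marked occupied after carving, times the volume of a single voxel."""
    return int(np.count_nonzero(occupancy)) * voxel_size_m**3

# Toy 100^3 grid at 5 cm resolution with a carved 'pile' occupying a corner.
grid = np.zeros((100, 100, 100), dtype=bool)
grid[:40, :40, :20] = True
print(f"estimated volume: {carved_volume(grid, 0.05):.1f} m^3")  # 40*40*20*0.05^3 = 4.0
```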
APA, Harvard, Vancouver, ISO, and other styles
28

Gómez-Mendoza, Juan Bernardo. "A contribution to mouth structure segmentation in images towards automatic mouth gesture recognition." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00770660.

Full text
Abstract:
This document presents a series of elements for approaching the task of segmenting mouth structures in facial images, particularly frames from video sequences. Each stage is treated separately in a different chapter, starting from image pre-processing and going up to post-processing of the segmentation labeling, discussing the technique selection and development in every case. The methodological approach suggests the use of a color-based pixel classification strategy as the basis of the mouth structure segmentation scheme, complemented by smart pre-processing and later label refinement. The main contribution of this work, along with the segmentation methodology itself, lies in the development of a color-independent label refinement technique. The technique, which is similar to a linear low-pass filter in the segmentation labeling space followed by a nonlinear selection operation, improves the image labeling iteratively by filling small gaps and eliminating spurious regions resulting from the prior pixel classification stage. Results presented in this document suggest that the refiner is complementary to image pre-processing, achieving a cumulative effect in segmentation quality. Finally, the segmentation methodology, comprising input color transformation, pre-processing, pixel classification and label refinement, is put to the test in the case of mouth gesture detection in images intended to command three degrees of freedom of an endoscope holder.
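The label refiner described above (a linear low-pass filter in the segmentation labeling space followed by a nonlinear selection) can be sketched directly. The filter size and iteration count below are illustrative guesses, not the thesis's tuned values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refine_labels(labels, n_classes, iterations=5, size=3):
    """Iterative refinement in labeling space: low-pass filter each class's
    one-hot map, then a nonlinear selection (argmax) picks the new label.
    This fills small gaps and removes spurious regions left by pixel
    classification."""
    for _ in range(iterations):
        onehot = np.stack([(labels == k).astype(float) for k in range(n_classes)])
        smoothed = np.stack([uniform_filter(m, size=size) for m in onehot])
        labels = smoothed.argmax(axis=0)
    return labels

noisy = np.zeros((64, 64), int)
noisy[20:44, 20:44] = 1                                       # a coherent region
noisy[np.random.default_rng(0).random((64, 64)) < 0.05] = 2   # spurious specks
print(np.bincount(refine_labels(noisy, 3).ravel()))           # specks mostly gone
```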
APA, Harvard, Vancouver, ISO, and other styles
29

Hall, A. R. "Automatic speech recognition using morpheme structure rules for word hypothesis and dictionary generation." Thesis, London South Bank University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.352963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Gomez, Ramirez Jorge Mario. "Optimisation numérique du fonctionnement, du dimensionnement et de la structure d'une colonne de distillation catalytique représentée par un modèle de transfert." Pau, 2005. http://www.theses.fr/2005PAUU3012.

Full text
Abstract:
In this thesis, we address the optimal design problem for catalytic distillation, based on an MINLP formulation of the optimization problem and an integral transfer (non-equilibrium) model. Using this non-equilibrium model for column optimization has two major advantages: i) the notion of tray efficiency no longer appears; ii) the dimensions of the column internals (height, weir length, etc.) enter the model and can therefore be optimized. Constraints related to the column hydrodynamics (flooding, weeping, pressure drop, entrainment) are also taken into account. In our case, the model distinguishes two types of sections: i) pure separation sections, without reaction, where, starting from the two-film hydrodynamic model, mass transfer is described in each phase through transfer coefficients; ii) purely reactive sections, where the reaction takes place in the liquid phase and is modelled by a global Langmuir-Hinshelwood kinetics. This decoupling (separation section / reactive section), chosen to best describe the pilot plant installed at LaTEP, also allows the use of transfer coefficients obtained for non-reactive systems. The choice of the solution strategy (handling of the optimization variables and constraints) and of the various algorithms used is, however, a critical point. We propose a global optimization strategy that combines, on two levels, Simulated Annealing (SA) and Sequential Quadratic Programming (SQP).
The objective of this contribution is to propose a Mixed Integer Non-Linear Programming (MINLP) formulation for the optimal design of a catalytic distillation column based on a generic non-equilibrium (NEQ) model. The use of this NEQ model presents two main advantages: i) the computation of tray efficiencies is entirely avoided; ii) the geometrical parameters of the column's hardware can be optimized. The minimization of the total annualized cost is subject to three sets of constraints: the model equations, the product specifications and the tray hydraulic equations. The solution strategy for the optimization uses a combination of Simulated Annealing and Sequential Quadratic Programming. Catalytic distillation of ETBE is considered as an illustrative example, and the results of the optimization are discussed, together with pre- and post-optimal sensitivity analyses. Following our pilot plant design, the model relies on separation stages (trays) and reactive stages. The separation stages are single-pass cross-flow sieve trays. In this work, the non-equilibrium model for the separation stages is based on the work of Krishnamurthy and Taylor, who presented a non-equilibrium model for a non-reactive distillation column. This model uses the two-film theory and the heat and mass transfer coefficients to determine the flux at the interface (integral NEQ model). For optimization purposes, the integral NEQ model is a good trade-off between the complexity of the differential NEQ model and the equilibrium model.
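The two-level solution strategy described above (Simulated Annealing over the discrete structure, SQP over the continuous variables) can be sketched with a toy cost standing in for the non-equilibrium column model and the annualized-cost objective:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def inner_cost(x, n_trays):
    # Toy stand-in for the NEQ column model: continuous operating variables x
    # (e.g. reflux, boilup) evaluated for a fixed discrete structure.
    return (x[0] - 1.5) ** 2 + (x[1] - 0.8) ** 2 + 50.0 / n_trays + 2.0 * n_trays

def inner_sqp(n_trays):
    res = minimize(inner_cost, x0=[1.0, 1.0], args=(n_trays,), method="SLSQP",
                   bounds=[(0.1, 5.0), (0.1, 5.0)])
    return res.fun

# Outer simulated annealing over the discrete structure (number of trays).
n, best_n = 10, 10
cost = best_cost = inner_sqp(n)
T = 5.0
for _ in range(200):
    cand = max(2, n + rng.integers(-2, 3))         # perturb the structure
    c = inner_sqp(cand)                            # inner SQP solve
    if c < cost or rng.random() < np.exp((cost - c) / T):
        n, cost = cand, c
        if c < best_cost:
            best_n, best_cost = n, c
    T *= 0.98                                      # cooling schedule
print(f"best structure: {best_n} trays, cost {best_cost:.2f}")   # optimum at 5 trays
```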
APA, Harvard, Vancouver, ISO, and other styles
31

Inagaki, Yasuyoshi, Shigeki Matsubara, and Makoto Ohara. "AUTOMATIC EXTRACTION OF TRANSLATION PATTERNS FROM BILINGUAL LEGAL CORPUS." IEEE, 2003. http://hdl.handle.net/2237/15084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bixian, Luo, Luo Jian, and Zeng Wei. "THE GREAT FREQUENCY DEVIATION AUTOMATIC MEASURING OF TELEMETRY TRANSMITTER." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/606806.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
At present, no instrument can directly measure frequency deviation above 500 kHz, yet the frequency deviation of high-bit-rate telemetry transmitters is 700 kHz or more. In this paper, an indirect measurement method using a spectrum analyzer and a counter is put forward. It effectively solves the problem of measuring the frequency deviation and frequency response of high-bit-rate telemetry transmitters. Measurement theory and a summary of experience and difficulties in measurement work are studied in depth, with a view to avoiding the limitations of the different measurement methods. Focusing on the establishment of an automatic measuring system, the expert system, skilled data and software of the system are studied in detail. Data for comparison are also supplied. Finally, an analysis of the measurement error and overall uncertainty is given.
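One classic way to measure large FM deviation indirectly with a spectrum analyzer is the Bessel-null technique: under sinusoidal modulation, the carrier component vanishes whenever the modulation index β = Δf/f_m equals a zero of the Bessel function J0 (2.4048, 5.5201, ...), so observing a carrier null at a known tone frequency fixes the deviation. The abstract does not spell out its exact procedure, so the computation below only illustrates the principle:

```python
from scipy.special import jn_zeros

def deviation_from_carrier_null(f_mod_hz, null_order=1):
    """Peak frequency deviation inferred from a Bessel carrier null: the FM
    carrier term vanishes when beta = delta_f / f_mod hits a zero of J0."""
    beta = jn_zeros(0, null_order)[-1]   # 2.4048, 5.5201, ...
    return beta * f_mod_hz

# If the carrier first nulls with a 291 kHz test tone, the deviation is ~700 kHz.
print(f"{deviation_from_carrier_null(291e3) / 1e3:.0f} kHz")
```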
APA, Harvard, Vancouver, ISO, and other styles
33

Marinov, Martin C. [Verfasser]. "Automatic generation of structure preserving models for computer aided geometric design / Martin Cvetanov Marinov." Aachen : Shaker, 2006. http://d-nb.info/999981455/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lacson, Ronilda Covar 1968. "Automatic analysis of medical dialogue in the home hemodialysis domain : structure induction and summarization." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34467.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 129-134).
Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We demonstrate the utility of this structural abstraction by incorporating it into an automatic dialogue summarizer. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans and significantly outperform random selections (p<0.0001) in precision and recall.
In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naive summarizer (p<0.05). This is a significant result because it spares the physician from the need to wade through irrelevant material, which is ample in dialogue transcripts. This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
by Ronilda Covar Lacson.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Redmond, Brian L. "A workcell control and communications structure for the Georgia Institute of Technology Flexible Automation Laboratory." Thesis, Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/18946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

羅普倫 and Po-lun Law. "Model-based variable-structure control of robot manipulators in joint space and in Cartesian space." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31212463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Romeo, Lauren Michele. "The Structure of the lexicon in the task of the automatic acquisition of lexical information." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/325420.

Full text
Abstract:
Semantic class information for nouns is fundamental to a wide variety of natural language processing (NLP) tasks, such as machine translation, referent discrimination in tasks like event detection and tracking, question answering, named entity recognition and classification, automatic ontology construction and extension, textual inference, etc. One approach to building and maintaining the large-coverage lexica that feed NLP systems, a very costly and slow task, is the automatic acquisition of lexical information, which consists of inducing the semantic class associated with a given word from distributional data obtained from a corpus. Precisely for this reason, current research on methods for the automatic production of high-quality, information-rich, class-annotated lexica, such as the work presented here, is expected to have a great impact on the performance of most NLP applications. In this thesis, we treat the automatic acquisition of lexical information as a classification problem. To this end, we adopt machine learning methods to generate a model that represents vectorial distributional data and, based on known examples, makes predictions for other, unseen words. The main research questions we pose in this thesis are: (i) whether corpus data provide enough information to build word representations efficiently and in a way that results in accurate and robust classification decisions, and (ii) whether automatic acquisition can also handle polysemous nouns. To address these problems, we carried out a series of empirical validations on English nouns. Our results confirm that the information obtained from the distribution of corpus data is sufficient to acquire semantic classes automatically, as demonstrated by an average overall F-score of approximately 0.80 using several context-counting models and corpus data of different sizes. Nevertheless, both the state of the art and the experiments we performed highlighted a series of challenges for this type of model, namely reducing vector sparsity and accounting for nominal polysemy in distributional word representations. In this context, word embedding (WE) models preserve the "semantics" underlying the occurrences of a noun in corpus data by assigning it a vector. With this choice, we were able to overcome the data sparsity problem, as demonstrated by an average overall F-score of 0.91 for the semantic classes of single-sense nouns, through a combination of dimensionality reduction and real-valued vectors. Moreover, WE representations performed better at handling the asymmetric occurrences of each sense of regular polysemous complex-type nouns in corpus data. As a result, we were able to classify those nouns directly into their own semantic class with an average overall F-score of 0.85.
The main contribution of this thesis is an empirical validation of the different distributional representations used for the semantic classification of nouns, together with a subsequent expansion of previous work, resulting in innovative lexical resources and datasets that are freely available for download and use.
La información de clase semántica de los nombres es fundamental para una amplia variedad de tareas del procesamiento del lenguaje natural (PLN), como la traducción automática, la discriminación de referentes en tareas como la detección y el seguimiento de eventos, la búsqueda de respuestas, el reconocimiento y la clasificación de nombres de entidades, la construcción y ampliación automática de ontologías, la inferencia textual, etc. Una aproximación para resolver la construcción y el mantenimiento de los léxicos de gran cobertura que alimentan los sistemas de PNL, una tarea muy costosa y lenta, es la adquisición automática de información léxica, que consiste en la inducción de una clase semántica relacionada con una palabra en concreto a partir de datos de su distribución obtenidos de un corpus. Precisamente, por esta razón, se espera que la investigación actual sobre los métodos para la producción automática de léxicos de alta calidad, con gran cantidad de información y con anotación de clase como el trabajo que aquí presentamos, tenga un gran impacto en el rendimiento de la mayoría de las aplicaciones de PNL. En esta tesis, tratamos la adquisición automática de información léxica como un problema de clasificación. Con este propósito, adoptamos métodos de aprendizaje automático para generar un modelo que represente los datos de distribución vectorial que, basados en ejemplos conocidos, permitan hacer predicciones de otras palabras desconocidas. Las principales preguntas de investigación que planteamos en esta tesis son: (i) si los datos de corpus proporcionan suficiente información para construir representaciones de palabras de forma eficiente y que resulten en decisiones de clasificación precisas y sólidas, y (ii) si la adquisición automática puede gestionar, también, los nombres polisémicos. Para hacer frente a estos problemas, realizamos una serie de validaciones empíricas sobre nombres en inglés. Nuestros resultados confirman que la información obtenida a partir de la distribución de los datos de corpus es suficiente para adquirir automáticamente clases semánticas, como lo demuestra un valor-F global promedio de 0,80 aproximadamente utilizando varios modelos de recuento de contextos y en datos de corpus de distintos tamaños. No obstante, tanto el estado de la cuestión como los experimentos que realizamos destacaron una serie de retos para este tipo de modelos, que son reducir la escasez de datos del vector y dar cuenta de la polisemia nominal en las representaciones distribucionales de las palabras. En este contexto, los modelos de word embedding (WE) mantienen la “semántica” subyacente en las ocurrencias de un nombre en los datos de corpus asignándole un vector. Con esta elección, hemos sido capaces de superar el problema de la escasez de datos, como lo demuestra un valor-F general promedio de 0,91 para las clases semánticas de nombres de sentido único, a través de una combinación de la reducción de la dimensionalidad y de números reales. Además, las representaciones de WE obtuvieron un rendimiento superior en la gestión de las ocurrencias asimétricas de cada sentido de los nombres de tipo complejo polisémicos regulares en datos de corpus. Como resultado, hemos podido clasificar directamente esos nombres en su propia clase semántica con un valor-F global promedio de 0,85. 
La principal aportación de esta tesis consiste en una validación empírica de diferentes representaciones de distribución utilizadas para la clasificación semántica de nombres junto con una posterior expansión del trabajo anterior, lo que se traduce en recursos léxicos y conjuntos de datos innovadores que están disponibles de forma gratuita para su descarga y uso.
Lexical semantic class information for nouns is critical for a broad variety of Natural Language Processing (NLP) tasks including, but not limited to, machine translation, discrimination of referents in tasks such as event detection and tracking, question answering, named entity recognition and classification, automatic construction and extension of ontologies, textual inference, etc. One approach to solving the costly and time-consuming manual construction and maintenance of large-coverage lexica that feed NLP systems is the automatic acquisition of lexical information, which involves inducing the semantic class of a particular word from distributional data gathered within a corpus. This is precisely why current research on methods for the automatic production of high-quality, information-rich, class-annotated lexica, such as the work presented here, is expected to have a high impact on the performance of most NLP applications. In this thesis, we address the automatic acquisition of lexical information as a classification problem. To this end, we adopt machine learning methods to generate a model representing vectorial distributional data which, grounded on known examples, allows for predictions about other, unknown words. The main research questions we investigate in this thesis are: (i) whether corpus data provides sufficient distributional information to build efficient word representations that result in accurate and robust classification decisions, and (ii) whether automatic acquisition can also handle polysemous nouns. To tackle these problems, we conducted a number of empirical validations on English nouns. Our results confirmed that the distributional information obtained from corpus data is indeed sufficient to automatically acquire lexical semantic classes, demonstrated by an average overall F1-score of almost 0.80 using diverse count-context models and differently sized corpus data. Nonetheless, both the state of the art and the experiments we conducted highlighted a number of challenges for this type of model, such as reducing vector sparsity and accounting for nominal polysemy in distributional word representations. In this context, Word Embedding (WE) models maintain the “semantics” underlying the occurrences of a noun in corpus data by mapping it to a feature vector. With this choice, we were able to overcome the sparse-data problem, demonstrated by an average overall F1-score of 0.91 for single-sense lexical semantic noun classes, through a combination of reduced dimensionality and real-valued features. In addition, the WE representations performed better at handling the asymmetrical occurrences of each sense of regular polysemous complex-type nouns in corpus data. As a result, we were able to directly classify such nouns into their own lexical semantic class with an average overall F1-score of 0.85. The main contribution of this dissertation consists of an empirical validation of different distributional representations used for nominal lexical semantic classification, along with a subsequent expansion of previous work, resulting in novel lexical resources and data sets that have been made freely available for download and use.
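As a rough, hypothetical sketch of the classification setup the abstract describes (not the thesis code: the embeddings below are random placeholders, the class labels are invented, and the choice of scikit-learn's LogisticRegression is our assumption), nouns become feature vectors and a supervised model predicts their lexical semantic class:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 50-dimensional word embeddings for a handful of nouns.
embeddings = {noun: rng.normal(size=50) for noun in
              ["dog", "cat", "horse", "hammer", "saw", "drill"]}
labels = {"dog": "ANIMAL", "cat": "ANIMAL", "horse": "ANIMAL",
          "hammer": "ARTIFACT", "saw": "ARTIFACT", "drill": "ARTIFACT"}

X = np.array([embeddings[n] for n in labels])
y = np.array([labels[n] for n in labels])

# Fit on known examples, then predict the class of another vector; this is
# roughly the kind of setup that, cross-validated on gold-standard
# class-annotated nouns, would yield F1-scores like those quoted above.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(embeddings["cat"].reshape(1, -1)))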
APA, Harvard, Vancouver, ISO, and other styles
38

Almotiri, Jasem. "A Multi-Anatomical Retinal Structure Segmentation System for Automatic Eye Screening Using Morphological Adaptive Fuzzy Thresholding." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10975223.

Full text
Abstract:

An eye exam can be as efficacious as a physical one in uncovering health concerns. Retina screening can provide the very first clue to a variety of hidden health issues, including pre-diabetes and diabetes. Throughout clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects the diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This thesis proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in their features and characteristics, retinal vessels, the optic disc, and exudate lesions are each extracted by a subsystem without the need for texture analysis or synthesis. For the sake of a compact diagnosis and complete clinical insight, our proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images.

The proposed system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. It is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures.
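To make the hybrid step concrete, here is a minimal sketch in Python of adaptive fuzzy thresholding followed by mathematical morphology; it is our own toy construction, not the thesis implementation, and the sigmoid membership function, its steepness, and the 3x3 structuring element are all assumptions:

import numpy as np
from scipy import ndimage

def fuzzy_threshold_segment(img, steepness=10.0, selem_size=3):
    img = img.astype(float)
    # Adaptive crossover: center the fuzzy membership at the mean intensity.
    crossover = img.mean()
    membership = 1.0 / (1.0 + np.exp(-steepness * (img - crossover)))
    binary = membership > 0.5
    # Morphological opening removes small spurious foreground responses.
    structure = np.ones((selem_size, selem_size), dtype=bool)
    return ndimage.binary_opening(binary, structure=structure)

# Toy usage on a random patch standing in for a fundus image.
patch = np.random.rand(64, 64)
mask = fuzzy_threshold_segment(patch)
print(mask.sum(), "foreground pixels")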

APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Xuerui, and Li Zhao. "Navigation and Automatic Ground Mapping by Rover Robot." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6185.

Full text
Abstract:
This project is mainly based on image mosaicing and similarity measurement with different methods. The map of a floor is created from a database of small images captured by a camera-mounted robot scanning the wooden floor of a living room; we call this ground mapping. After the ground mapping, the robot can position itself on the map using the novel small images it captures as it moves across the ground. Similarity measurements based on the Schwartz inequality are used both to achieve the ground mapping and to position the robot once the ground map is available. Because natural light affects the gray values of images, this effect must be accounted for in the envisaged similarity measurements. A new approach to mosaicing is suggested that uses the local texture orientation, instead of the original gray values, for ground mapping as well as positioning. Additionally, we report ground-mapping results using other features, such as raw gray values. Using the novel approach and the Schwartz-inequality-based similarity measurements, the robot can find its position with an error of only a few pixels.
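The Schwartz (Cauchy-Schwarz) inequality guarantees that the normalized inner product of two patches lies in [-1, 1], reaching 1 only for proportional patches, which is what makes it usable as a similarity score. A minimal sketch of positioning by exhaustive patch matching follows; it is our own illustration under that reading, not the authors' code, and it ignores the illumination compensation and texture-orientation features discussed above:

import numpy as np

def schwartz_similarity(a, b, eps=1e-12):
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def best_position(ground_map, patch):
    # Slide the patch over the map and keep the highest-similarity offset.
    H, W = ground_map.shape
    h, w = patch.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            s = schwartz_similarity(ground_map[i:i+h, j:j+w], patch)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best

gmap = np.random.rand(40, 40)
pos, score = best_position(gmap, gmap[10:18, 5:13])
print(pos, round(score, 3))  # expect (10, 5) with score 1.0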
APA, Harvard, Vancouver, ISO, and other styles
40

Nikolaus, Ulrich, and Julia Dobroschke. "Automatic conversion of PDF-based, layout-oriented typesetting data to DAISY: potentials and limitations." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-38042.

Full text
Abstract:
Only two percent of new books released in Germany are professionally edited for visually impaired people. However, more and more print publications are made available to the public in digital formats through online content delivery platforms like “libreka!”. The automatic conversion of such contents into DAISY would considerably increase the number of publications available in accessible formats. Still, most data available on “libreka!” is published as non-tagged PDF. In this paper, we examine the potential for automatic conversion of “libreka!”-based content into DAISY, while also analyzing the potentials and limitations of current conversion tools.
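To see why non-tagged PDF is the sticking point, consider this deliberately naive sketch (our own construction, assuming the pdfminer.six library is available; the DTBook skeleton is minimal and not a valid, production-ready DAISY file): the text can be extracted, but it arrives flat, so every structural distinction DAISY needs must be guessed.

from pdfminer.high_level import extract_text
from xml.sax.saxutils import escape

def pdf_to_flat_dtbook(pdf_path):
    text = extract_text(pdf_path)
    # Blank-line-separated chunks become <p> elements; headings, lists,
    # footnotes, and reading order are lost in untagged PDFs.
    paras = [p.strip() for p in text.split("\n\n") if p.strip()]
    body = "\n".join("    <p>%s</p>" % escape(p) for p in paras)
    return ("<dtbook>\n  <book>\n   <bodymatter>\n    <level1>\n"
            + body + "\n    </level1>\n   </bodymatter>\n  </book>\n</dtbook>")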
APA, Harvard, Vancouver, ISO, and other styles
41

Mokdad, Ali G. "DEVELOPING TOOLS FOR RNA STRUCTURAL ALIGNMENT." Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1143320655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Shahaf, Dafna. "Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/210.

Full text
Abstract:
When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives; search engines, our most popular navigational tools, are limited in their capacity to explore such complex stories. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents that maximizes coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. The overarching theme of this work is formalizing characteristics of good maps, and providing efficient algorithms (with theoretical guarantees) to optimize them. Moreover, as information needs vary from person to person, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with real-world datasets demonstrate that the method is able to produce maps which help users acquire knowledge efficiently. We believe that metro maps could be powerful tools for any Web user, scientist, or intelligence analyst trying to process large amounts of data.
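One simple reading of the coverage objective (our illustration, not Shahaf's algorithm; real metro maps additionally enforce coherence and map structure) is greedy document selection over sets of salient concepts, for which the classic greedy algorithm carries a (1 - 1/e) approximation guarantee:

def greedy_cover(docs, k):
    # docs maps a document id to its set of salient concepts.
    chosen, covered = [], set()
    for _ in range(k):
        gain = lambda d: len(docs[d] - covered) if d not in chosen else -1
        best = max(docs, key=gain)
        if gain(best) <= 0:
            break
        chosen.append(best)
        covered |= docs[best]
    return chosen, covered

# Toy documents with hypothetical concept annotations.
docs = {"d1": {"greece", "debt", "eu"},
        "d2": {"debt", "austerity"},
        "d3": {"eu", "bailout", "imf"},
        "d4": {"greece", "eu"}}
print(greedy_cover(docs, 2))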
APA, Harvard, Vancouver, ISO, and other styles
43

Radoux, Christopher John. "The automatic detection of small molecule binding hotspots on proteins : applying hotspots to structure-based drug design." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/275133.

Full text
Abstract:
Locating a ligand-binding site is an important first step in structure-guided drug discovery, but current methods typically assess the pocket as a whole, doing little to suggest which regions and interactions are the most important for binding. This thesis introduces Fragment Hotspot Maps, a grid-based method that samples atomic propensities derived from interactions in the Cambridge Structural Database (CSD) with simple molecular probes. These maps specifically highlight fragment-binding sites and their corresponding pharmacophores, offering more precision than other binding-site prediction methods. The method is validated by scoring the positions of 21 fragment and lead pairs. Fragment atoms are found in the highest-scoring parts of the map corresponding to their atom type, with a median percentage rank of 98%. This is reduced to 72% for lead atoms, showing that the method can differentiate between the hotspots and the warm spots later used during fragment elaboration. For ligand-bound structures, the maps provide an intuitive visual guide within the binding site, directing medicinal chemists where to grow the molecule and alerting them to suboptimal interactions within the original hit. These calculations are easily accessible through a simple-to-use web application, which only requires an input PDB structure or code. High-scoring specific interactions predicted by the Fragment Hotspot Maps can be used to guide existing computer-aided drug discovery methods. The Hotspots Python API has been created to allow these workflows to be executed programmatically through a single Python script. Two of its functions use scores from the Fragment Hotspot Maps to guide virtual screening methods: docking and field-based ligand screening. Docking virtual-screening performance is improved by using a constraint selected from the highest-scoring polar interaction. The field-based ligand screener uses modified versions of the Fragment Hotspot Maps directly to predict and score the binding pose. This workflow gave results comparable to docking and, for one target, the Glucocorticoid receptor (GCR), showed much better results, highlighting its potential as an orthogonal approach. Fragment Hotspot Maps can be used at multiple stages of the drug discovery process, and research into these applications is ongoing. Their utility in the following areas is currently being explored: assessing ligandability both for individual structures and across proteomes, aiding library design, assessing pockets throughout a molecular dynamics trajectory, prioritising crystallographic fragment hits, and guiding hit-to-lead development.
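The percentage-rank validation quoted above can be made concrete with a small sketch (our own assumption about the statistic, not the Hotspots API): an atom is scored by the percentile of its grid value within the whole map for its probe type, so 98% means the atom sits in the top 2% of grid points.

import numpy as np

def percentile_rank(grid, value):
    flat = grid.ravel()
    return 100.0 * (flat < value).mean()

# Toy donor-probe map and one hypothetical "fragment atom" grid value.
rng = np.random.default_rng(1)
donor_map = rng.gamma(shape=2.0, scale=5.0, size=(20, 20, 20))
atom_value = np.percentile(donor_map, 98)
print(round(percentile_rank(donor_map, atom_value), 1))  # about 98.0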
APA, Harvard, Vancouver, ISO, and other styles
44

Nikolaus, Ulrich, and Julia Dobroschke. "Automatic conversion of PDF-based, layout-oriented typesetting data to DAISY: potentials and limitations." Tagungsband zu: DAISY International Technical Conference : Barrierefreie Aufbereitung von Dokumenten, 21. - 27. September 2009, Leipzig/Germany. - Leipzig : DZB, 2009. - S. 115 - 127, 2009. https://slub.qucosa.de/id/qucosa%3A797.

Full text
Abstract:
Only two percent of new books released in Germany are professionally edited for visually impaired people. However, more and more print publications are made available to the public in digital formats through online content delivery platforms like “libreka!”. The automatic conversion of such contents into DAISY would considerably increase the number of publications available in accessible formats. Still, most data available on “libreka!” is published as non-tagged PDF. In this paper, we examine the potential for automatic conversion of “libreka!”-based content into DAISY, while also analyzing the potentials and limitations of current conversion tools.
APA, Harvard, Vancouver, ISO, and other styles
45

Lind, Ingela. "Regressor and Structure Selection : Uses of ANOVA in System Identification." Doctoral thesis, Linköping : Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Escorcia, Gutierrez José. "Image Segmentation Methods for Automatic Detection of the Anatomical Structure of the Eye in People with Diabetic Retinopathy." Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/671543.

Full text
Abstract:
This thesis is framed within the comprehensive plan for early prevention of Diabetic Retinopathy (DR) launched by the Spanish government, following World Health Organization recommendations to promote initiatives that raise awareness of the importance of regular eye exams among people with diabetes. To determine the level of diabetic retinopathy, we need to find and identify different types of lesions in the eye fundus. First, the normal anatomical structures of the eye (blood vessels, optic disc, and fovea) must be removed from the image in order to make the abnormalities visible. This thesis has focused on this image-cleaning step. It proposes a novel framework for fast and fully automatic optic disc segmentation based on Markowitz's Modern Portfolio Theory, yielding an innovative color fusion model capable of admitting any segmentation methodology in the medical imaging field. This approach acts as a powerful, real-time pre-processing stage that could be integrated into daily clinical practice to accelerate the diagnosis of DR thanks to its simplicity, performance, and speed. The second contribution of this thesis is a method that simultaneously performs blood vessel segmentation and foveal avascular zone detection, considerably reducing the required image processing time. In addition, of the color components studied in this thesis for blood vessel segmentation and fovea detection, the first component of the xyY color space, which represents the chrominance values, proved the best supported. Finally, several samples are collected for a color interpolation procedure based on statistical color information and are used by the well-known Convexity Shape Prior segmentation algorithm. The thesis also proposes another blood vessel segmentation method that relies on an effective feature selection based on decision tree learning; the five most relevant features for segmenting these ocular structures were identified. This method is validated using three different classification techniques (Decision Tree, Artificial Neural Network, and Support Vector Machine).
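As a hedged illustration of the portfolio analogy (our interpretation, not the thesis code): if each color channel is an "asset" and the channel covariance across pixels is the "risk", one simple Markowitz-style fusion is the minimum-variance portfolio of channels:

import numpy as np

def markowitz_fuse(img_rgb):
    # img_rgb: H x W x 3 float array; returns an H x W fused image.
    H, W, C = img_rgb.shape
    X = img_rgb.reshape(-1, C)
    cov = np.cov(X, rowvar=False)
    ones = np.ones(C)
    w = np.linalg.solve(cov, ones)
    w /= w.sum()                      # minimum-variance weights, summing to 1
    return (X @ w).reshape(H, W)

fused = markowitz_fuse(np.random.rand(32, 32, 3))
print(fused.shape)  # (32, 32)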
APA, Harvard, Vancouver, ISO, and other styles
47

Garimella, Srinivas Murthy Stroud Charles E. "Built-in self test for regular structure embedded cores in system-on-chip." Auburn, Ala., 2005. http://repo.lib.auburn.edu/EtdRoot/2005/SPRING/Electrical_and_Computer_Engineering/Thesis/GARIMELLA_SRINIVAS_32.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bouchigny, Sylvain. "Développement d'une cible polarisée de pur HD : Analyse et distillation du HD Diffusion compton virtuelle résonante sur le nucléon à TJNAF." Phd thesis, Université Paris Sud - Paris XI, 2004. http://tel.archives-ouvertes.fr/tel-00008119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Stuckner, Joshua Andrew. "Investigating the origin of localized plastic deformation in nanoporous gold by in situ electron microscopy and automatic structure quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100733.

Full text
Abstract:
Gold gains many useful properties when it is formed into a nanoporous structure, but it also becomes macroscopically brittle due to flow localization and may therefore be unreliable for many applications. The goal of this work was to establish processing/structure/property relationships for nanoporous gold, discover controllable structure features, and understand the role of structure in flow localization. The nanoporous gold structure, consisting of a 3D network of nanoscale gold ligaments, was quantified with automatic software developed for this work, called AQUAMI, which uses computer vision techniques to make a statistically reliable number of repeatable and unbiased measurements per image. AQUAMI increased the efficiency and accuracy of characterization in this work, allowed more experiments to be conducted, and provided greater confidence in the morphology and size distribution of the complex NPG microstructural features. Nanoporous gold was synthesized while varying numerous processing factors such as dealloying time, annealing time, and mechanical agitation. Through this expanded scope of synthesis experiments and detailed analysis, it was discovered that the curvature of the ligaments and the width of the ligament-diameter distribution can be controlled through processing. In situ tensile experiments in SEM and TEM revealed that large ligaments arrested crack propagation, while curved ligaments increased ductility by straightening in the tensile direction and forming geometrically required defects, which inhibit dislocation activity. Through synthesis and microstructure characterization, two new controllable structure features were discovered experimentally. In situ mechanical testing revealed the role these structures play in the deformation behavior and flow localization of nanoporous gold.
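The classic measurement idea behind tools like AQUAMI can be sketched in a few lines (our assumption about the approach, not the published implementation): skeletonize the binary ligament phase and read the Euclidean distance transform along the skeleton, where twice the distance estimates the local ligament diameter.

import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def ligament_diameters(binary_mask):
    dist = distance_transform_edt(binary_mask)
    skel = skeletonize(binary_mask)
    return 2.0 * dist[skel]           # one diameter sample per skeleton pixel

mask = np.zeros((64, 64), dtype=bool)
mask[28:36, :] = True                 # a toy ligament, 8 pixels wide
print(ligament_diameters(mask).mean())  # close to 8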
APA, Harvard, Vancouver, ISO, and other styles
50

Peldszus, Andreas [Verfasser], Manfred [Akademischer Betreuer] Stede, Manfred [Gutachter] Stede, and Chris [Gutachter] Reed. "Automatic recognition of argumentation structure in short monological texts / Andreas Peldszus ; Gutachter: Manfred Stede, Chris Reed ; Betreuer: Manfred Stede." Potsdam : Universität Potsdam, 2018. http://d-nb.info/1218404221/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles