Theses on the topic "Automatic threshold"

Follow this link to see other types of publications on the topic: Automatic threshold.

Create a precise citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the 39 best theses for your research on the topic "Automatic threshold."

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Braseth, Jørgen. "Automatic Configuration for Collective Construction : Automatic parameter setting for response threshold agents in collective construction". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8748.

Full text
2

Xie, Kaicheng. "Automatic Utility Meter Reading". Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1270587412.

Full text
3

Jeuthe, Julius. "Automatic Tissue Segmentation of Volumetric CT Data of the Pelvic Region". Thesis, Linköpings universitet, Medicinsk informatik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133153.

Full text
Abstract
Automatic segmentation of human organs allows more accurate calculation of organ doses in radiation treatment planning, as it adds prior information about the material composition of imaged tissues. For instance, separating tissues into bone, adipose tissue and remaining soft tissues allows the use of tabulated material compositions of those tissues. This approximation is not perfect because tissue composition varies among patients, but it is still better than no approximation at all. Another use for automated tissue segmentation is in model-based iterative reconstruction algorithms. An example of such an algorithm is DIRA, which is developed at Medical Radiation Physics and the Center for Medical Imaging Science and Visualization (CMIV) at Linköping University. DIRA uses dual-energy computed tomography (DECT) data to decompose patient tissues into two or three base components. So far DIRA has used the MK2014 algorithm, which segments the human pelvis into bones, adipose tissue, gluteus maximus muscles and the prostate. One problem was that MK2014 was limited to 2D and was not very robust. Aim: The aim of this thesis work was to extend MK2014 to 3D as well as to improve it. The task was structured into the following activities: selection of suitable segmentation algorithms, evaluation of their results, and combination of those into an automated segmentation algorithm. Of special interest was image registration using the Morphon. Methods: Several different algorithms were tested, for instance: Otsu's method followed by threshold segmentation; histogram matching followed by threshold segmentation, region growing and hole-filling; and affine phase-based registration and the Morphon. The best-performing algorithms were combined into the newly developed JJ2016. Results: For the segmentation of adipose tissue and the bones in the eight investigated data sets, the JJ2016 algorithm gave better results than MK2014. The better results of JJ2016 were achieved by: (i) a new segmentation algorithm for adipose tissue, which was not affected by the amount of air surrounding the patient and segmented smaller regions of adipose tissue, and (ii) a new filling algorithm for connecting segments of compact bone. The JJ2016 algorithm also estimates a likely position for the prostate and the rectum by combining linear and non-linear phase-based registration for atlas-based segmentation. The estimated position (center point) was in most cases close to the true position of the organs. Several deficiencies of the MK2014 algorithm were removed, but the improved version (MK2014v2) did not perform as well as JJ2016. Conclusions: JJ2016 performed well for all data sets. The JJ2016 algorithm is usable for the intended application, but is (without further improvements) too slow for interactive usage. Additionally, a validation of the algorithm for clinical use should be performed on a larger number of data sets, covering the variability of patients in shape and size.
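The first algorithm the abstract lists, Otsu's method, picks a global grey-level threshold by maximising the between-class variance of the two classes it separates. The sketch below is a generic NumPy implementation of that published technique applied to invented toy intensities; it is not code from the thesis.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the grey level that maximises the between-class
    variance of the two classes it separates.  Generic sketch, not thesis code."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                       # grey-level probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                           # weight of class 0 per cut
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)                 # cumulative mean
    mu_t = mu[-1]                               # global mean
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Toy bimodal "CT" sample: soft tissue around 40, bone around 400 (illustrative).
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(40, 15, 5000), rng.normal(400, 60, 1000)])
t = otsu_threshold(voxels)
print(f"threshold = {t:.1f}, bone fraction = {(voxels > t).mean():.2%}")
```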
4

SILVA, Joberth de Nazaré. "Detecção automática de massas em mamografias digitais usando Quality Threshold clustering e MVS". Universidade Federal do Maranhão, 2013. http://tedebc.ufma.br:8080/jspui/handle/tede/1834.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Breast cancer is worldwide the most common form of cancer affecting women at some point in their lives, striking between one in nine and one in thirteen women who reach the age of ninety in the Western world (LAURENCE, 2006). Breast cancer is caused by frequent reproduction of cells in various parts of the human body. At certain times, and for reasons yet unknown, some cells begin to reproduce at a higher speed, causing the onset of cellular masses called neoplasias, or tumors, which are newly formed tissue of pathological origin. This work proposes a method for the automatic detection of masses in digital mammograms using Quality Threshold (QT) clustering and the Support Vector Machine (MVS, from the Portuguese "Máquina de Vetores de Suporte"). The image processing steps were as follows: first, the pre-processing phase removed the image background, smoothed the image with a low-pass filter to increase the degree of contrast, and then performed an enhancement with the Wavelet Transform (WT) by changing its coefficients with a linear function. After the pre-processing phase came segmentation, in which QT divided the image into clusters with pre-defined diameters. Post-processing then selected the best mass candidates through MVS analysis of shape descriptors. For the texture feature extraction phase, the Haralick descriptors and the correlogram function were used. In the classification stage, the MVS was used again for training, validation of the MVS model and the final test. The achieved results were: sensitivity of 92.31%, specificity of 82.2%, accuracy of 83.53%, a false-positive rate per image of 1.12 and an area under the FROC curve of 0.8033.
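Quality Threshold clustering, the segmentation step named above, can be illustrated compactly. The following sketch implements the usual QT definition (repeatedly grow the largest candidate cluster whose diameter stays under a preset limit) on invented toy points; the thesis applies the idea to mammogram pixels, and its actual implementation is not shown here.

```python
import numpy as np

def qt_cluster(points, diameter):
    """Quality Threshold clustering: around every remaining point, greedily
    grow a candidate cluster whose diameter stays below the preset limit;
    keep the largest candidate, remove its members, repeat.  Generic sketch."""
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    remaining = set(range(len(points)))
    clusters = []
    while remaining:
        best = []
        for seed in remaining:
            members = [seed]
            candidates = remaining - {seed}
            while candidates:
                # Next point = the one whose addition keeps the diameter smallest.
                nxt = min(candidates,
                          key=lambda i: dist[np.ix_(members + [i], members + [i])].max())
                if dist[np.ix_(members + [nxt], members + [nxt])].max() > diameter:
                    break
                members.append(nxt)
                candidates.discard(nxt)
            if len(members) > len(best):
                best = members
        clusters.append(sorted(best))
        remaining -= set(best)
    return clusters

pts = np.array([[0, 0], [0.1, 0], [0.2, 0.1], [5.0, 5.0], [5.1, 5.2]])
print(qt_cluster(pts, diameter=1.0))   # -> [[0, 1, 2], [3, 4]]
```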
5

Zhang, Zai Yong. "Simultaneous fault diagnosis of automotive engine ignition systems using pairwise coupled relevance vector machine, extracted pattern features and decision threshold optimization". Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493967.

Full text
6

Anderson, Foery Kristen R. "Triggering the Lombard effect: Examining automatic thresholds". Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1460856.

Full text
7

Schairer, Kim, Elizabeth Kolberg, Douglas H. Keefe, Denis Fitzpatrick, Daniel Putterman and Patrick Feeney. "Automated Wideband Acoustic Reflex Threshold Test". Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/1803.

Full text
Abstract
Acoustic reflex thresholds (ARTs) are used clinically as a cross-check for behavioral results and as a measure of 7th and 8th cranial nerve function. In clinical test batteries, ARTs are measured as a change in middle ear admittance of a pure tone probe in the presence of a pure tone or broadband noise (BBN) reflex activator. ARTs measured using a wideband probe may yield lower thresholds because the criterion change for 'present' reflexes can be observed across a range of frequencies rather than at a single frequency. ARTs were elicited in a group of 25 adults with normal hearing using a 226-Hz probe and a wideband (250 to 8000 Hz) probe, with activators of 500, 1000, and 2000 Hz and BBN. Wideband ARTs were estimated using an automated adaptive method. Lower mean ARTs were obtained for the wideband method than for the clinical method, by as much as 5-10 dB for tonal activators and 15 dB for BBN. Clinical benefits of lower ARTs include reduced activator levels during threshold estimation, and present rather than absent responses in some ears with absent ARTs under the clinical method. The results are encouraging for the automated adaptive ART procedure.
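The study's adaptive procedure is not detailed in this abstract; adaptive threshold searches of this kind are, however, commonly built on up-down staircases. The sketch below shows a generic staircase with a simulated responder, where the starting level, step size and reversal count are illustrative assumptions rather than the study's protocol.

```python
import numpy as np

def staircase_threshold(respond, start=80.0, step=4.0, n_reversals=6):
    """Generic up-down staircase: raise the stimulus level after an absent
    response, lower it after a present one, and average the levels at which
    the response reversed.  Not the study's wideband ART algorithm."""
    level, last, turns = start, None, []
    while len(turns) < n_reversals:
        present = respond(level)
        if last is not None and present != last:
            turns.append(level)                 # response direction reversed
        last = present
        level += -step if present else +step
    return float(np.mean(turns))

# Simulated ear whose true reflex threshold is 85 dB, with 2 dB of noise.
rng = np.random.default_rng(1)
estimate = staircase_threshold(lambda lvl: lvl + rng.normal(0, 2) > 85.0)
print(f"estimated ART = {estimate:.1f} dB")
```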
8

Djiallis, Caroline Helen. "Variability of the automated perimetric threshold response". Thesis, Cardiff University, 2005. http://orca.cf.ac.uk/54548/.

Full text
Abstract
The thesis investigated aspects of the perimetric threshold estimate with the aim of improving the outcome of the visual field examination. The difference in performance between the three current short-duration, commercially available algorithms, SITA Standard, SITA Fast and TOP, was investigated relative to their respective 'gold standards' and to each other, in two separate studies of normal individuals and of patients with open angle glaucoma (OAG). The results for the normal individuals suggested that the TOP algorithm overestimates the severity of field loss relative to the Octopus Threshold and SITA Fast algorithms. For the patients with OAG, however, SITA Fast represented a good compromise between performance and examination duration. The inherent within- and between-algorithm differences for TOP suggest that an alternative should be used in clinical practice. The characteristics of the frequency-of-seeing (FOS) curves for W-W perimetry and for SWAP were investigated at varying eccentricities in normal individuals and in patients with OAG. In the normal individuals, the slope of the FOS curve flattened and the magnitude of the 50th percentile decreased with increasing eccentricity for W-W perimetry and for SWAP. The magnitude of the slope was flatter at any given eccentricity for SWAP than for W-W perimetry. In patients with OAG, the magnitude of the slope was moderately correlated with the severity of field loss for W-W perimetry and for SWAP. The flatter slope of the FOS curve will always yield greater variability for SWAP than for W-W perimetry. The number of incorrect responses to the false-negative catch trials was investigated in patients with OAG as a function of the fatigue effect. No significant difference was found in the prevalence of incorrect responses with increasing fatigue. The prevalence of incorrect responses was modestly correlated with increasing severity of field loss. Keywords: W-W perimetry, SWAP, SITA, TOP, FOS, false-negative catch trials.
9

Van, Tonder Jessica Jacqueline. "Automated smartphone threshold audiometry : validity and time-efficiency". Diss., University of Pretoria, 2016. http://hdl.handle.net/2263/60435.

Full text
Abstract
Automated smartphone-based threshold audiometry has the potential to provide affordable audiometric services in underserved contexts where adequate resources and infrastructure are lacking. This study investigated the validity of the threshold version (hearTest) of the hearScreen™ smartphone-based application using inexpensive smartphones (Android OS) and calibrated supra-aural headphones. A repeated-measures, within-subject, study design was employed, comparing automated smartphone audiometry air conduction thresholds (0.5 to 8 kHz) to conventional audiometry thresholds. A total of 95 participants, with varying degrees of hearing sensitivity, were included in the study. 30 participants were adults, with known bilateral hearing losses of varying degrees (mean age of 59 years, 21.8 SD; 56.7% female). 65 participants were adolescents (mean age of 16.5 years, 1.2 SD; 70.8% female), of which 61 had normal hearing and 4 had mild hearing losses. Within the adult sample, 70.6% of thresholds obtained through smartphone and conventional audiometry corresponded within 5 dB. There was no significant difference between smartphone (6.75 min average, 1.5 SD) and conventional audiometry test duration (6.65 min average, 2.5 SD). Within the adolescent sample, 84.7% of audiometry thresholds obtained at 0.5, 2 and 4 kHz corresponded within 5 dB. At 1 kHz 79.3% of the thresholds differed by 10 dB or less. There was a significant difference (p<.01) between smartphone (7.09 min, 1.2 SD) and conventional audiometry test duration (3.23 min, 0.6 SD). The hearTest application using calibrated supra-aural headphones provided valid air conduction hearing thresholds. Therefore, it is evident that using inexpensive smartphones with calibrated headphones provides a cost-effective way to provide access to threshold air conduction audiometry.
Dissertation (M Communication Pathology)--University of Pretoria, 2016.
Speech-Language Pathology and Audiology
M Communication Pathology
10

Pierce, Luke. "NANOPIPELINED THRESHOLD SYNTHESIS USING GATE REPLICATION". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/694.

Full text
Abstract
Threshold logic gates allow complex multi-input functions to be implemented using a single gate, reducing the power and area of the circuit. Clocked threshold gates have the additional advantage that they can be nanopipelined to increase network throughput. To produce a threshold network, the proposed algorithm accepts a traditional algebraic Boolean network as input and resynthesizes it into a nanopipelined threshold logic network. The algorithm is, to our knowledge, the first to synthesize in a manner that not only minimizes the number of clusters produced from synthesizing the algebraic Boolean network but also minimizes the associated buffer-insertion overhead in producing a clocked threshold gate network.
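A threshold logic gate outputs 1 exactly when the weighted sum of its Boolean inputs reaches a threshold, which is what lets one gate realise a complex multi-input function. A minimal sketch of the textbook definition (not the thesis's synthesis code):

```python
def threshold_gate(inputs, weights, T):
    """A threshold logic gate fires iff the weighted sum of its Boolean
    inputs reaches the threshold T (generic definition)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= T)

# One gate realises the 3-input majority function: weights (1, 1, 1), T = 2.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert threshold_gate((a, b, c), (1, 1, 1), 2) == int(a + b + c >= 2)
print("MAJ(a, b, c) realised by a single threshold gate")
```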
11

Sebastian, Johny. "A Temperature stabilised CMOS VCO based on amplitude control". Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33447.

Full text
Abstract
Speed, power and reliability of analogue integrated circuits (ICs) exhibit temperature dependency through the modulation of one or several of the following variables: band-gap energy of the semiconductor, mobility, carrier diffusion, current density, threshold voltage, interconnect resistance, and variability in passive components. Some of the adverse effects of temperature variations are observed in current and voltage reference circuits, and as frequency drift in oscillators. Thermal instability of a voltage-controlled oscillator (VCO) is a critical design factor for radio frequency ICs, such as transceiver circuits in communication networks, data link protocols, medical wireless sensor networks and microelectromechanical resonators. For example, frequency drift in a transceiver system results in severe inter-symbol interference in a digital communications system. The minimum transconductance required to sustain oscillation is specified by Barkhausen's stability criterion; however, it is common practice to design oscillators with much more transconductance, enabling self-startup. As temperature increases, several of the variables mentioned induce additional transconductance in the oscillator, which in turn translates to a negative frequency drift. Conventional approaches to temperature compensation involve temperature-insensitive or proportional-to-absolute-temperature biasing, or modifying the control voltage terminal of the VCO using an appropriately generated voltage. Improved frequency stability is reported when the compensation voltage closely follows the frequency drift profile of the VCO. However, several published articles link oscillation amplitude closely to oscillation frequency, and to the knowledge of this author few published journal articles have focused on amplitude control techniques to reduce frequency drift. This dissertation focuses on reducing the frequency drift resulting from temperature variations by means of amplitude control. A corresponding hypothesis is formulated, where the research outcome proposes improved frequency stability in response to temperature variations. In order to validate this principle, a temperature compensated VCO is designed in schematic and in layout, verified using a simulation program with integrated circuit emphasis (SPICE) tool with the corresponding process design kit provided by the foundry, and prototyped using standard complementary metal oxide semiconductor technology. Periodic steady state (PSS) analysis is performed on the open-loop VCO with temperature as the parametric variable in five equal intervals from 0 to 125 °C. A consistent negative frequency shift is observed in every temperature interval (≈ 11 MHz), with an overall frequency drift of 57 MHz. Similar PSS analysis performed on the VCO in the temperature-stabilised loop, however, demonstrates a reduced negative frequency drift of 3.8 MHz in the first temperature interval. During the remaining temperature intervals, the closed-loop action of the amplitude control loop overcompensates for the negative frequency drift, resulting in an overall frequency spread of 4.8 MHz. The negative frequency drift in the first temperature interval of 0 to 25 °C is due to the fact that amplitude control is not yet fully effective, as the oscillation amplitude is still building up. Using the temperature-stabilised loop, the overall frequency stability improved to 16 parts per million (ppm)/°C from an uncompensated value of 189 ppm/°C.
The results obtained are critically evaluated and conclusions are drawn. Temperature-stabilised VCOs are applicable in technologies such as high-speed Universal Serial Bus and Serial Advanced Technology Attachment, where frequency stability requirements are less stringent. The implication of this study for the existing body of knowledge is that better temperature compensation can be obtained if any of the conventional compensation schemes is preceded by amplitude control.
Dissertation (MEng)--University of Pretoria, 2013.
Electrical, Electronic and Computer Engineering
12

Brusehafer, Katja. "Automated chromosome damage analysis to investigate thresholds for genotoxic agents". Thesis, Swansea University, 2013. https://cronfa.swan.ac.uk/Record/cronfa43178.

Full text
Abstract
Genotoxicology involves the assessment of a substance's ability to induce DNA damage after human exposure. DNA damage is an underlying cause of mutations that are likely to initiate carcinogenesis. Furthermore, the investigation of low-dose responses in genotoxicology testing helps to improve health risk assessment by establishing whether DNA-reactive compounds follow linear or non-linear (thresholded) dose-response relationships. The current assumption for direct-acting genotoxins is that the relationship between exposure to genotoxic chemicals, DNA damage formation and the induction of mutagenic changes is linear. However, it is known that mutations are not produced directly by DNA adducts, as DNA repair activity limits the proportion of adducts processed into mutations. It is therefore possible that no-observed-effect levels (NOEL) may exist for some genotoxins. The main aim of this thesis was to improve in vitro genotoxicity testing by assessing the low-dose response relationships for the genotoxic agents mitomycin C (MMC), 4-nitroquinoline 1-oxide (4NQO) and cytosine arabinoside (araC). Furthermore, the automated micronucleus slide scoring system Metafer was validated and used for these studies. In addition, the mechanism of action of each test compound was further investigated in follow-up experiments to gain a better understanding of the processes involved in this type of damage. The in vitro micronucleus assay for the detection of chromosomal damage revealed non-linear dose-response relationships following low-dose exposure to MMC and araC, while 4NQO revealed a weak clastogenic potential. The semi-automated scoring protocol for the Metafer system proved to be a rapid and accurate system for scoring micronuclei. DNA repair most likely plays a major role in these non-linear responses by removing genetic damage induced at low levels. Furthermore, p53 was shown to be involved in the DNA damage response in human lymphoblastoid cells, through cell cycle delay and the induction of apoptosis. In addition, this work confirmed that a proper dosing regime, accurate toxicity measurements and the appropriate choice of cell type are crucial criteria for defining the dose-response relationships and the induction of genotoxicity and cytotoxicity.
13

Kuhner, Joseph T. "Automating the Detection of Precipitation and Wind Characteristics in Navy Ocean Acoustic Data". ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2567.

Full text
Abstract
A challenge in underwater acoustics is identifying the independent variables associated with an environment's ambient noise. A strict definition of ambient noise would focus on non-transient signatures and exclude transient impacts from marine mammals, pelagic fish species, man-made sources, or weather events such as precipitation or wind. Recognizing transient signatures in acoustic spectra is an essential element of providing environmental intelligence to the U.S. Navy, specifically the acoustic signatures of meteorological events. While weather-event detection in acoustic spectra has been shown in previous studies, leveraging these concepts via U.S. Navy assets is largely unexplored. Environmental intelligence collection can be improved by detecting precipitation events and establishing wind velocities from acoustic signatures. This will further improve meteorological models by enabling validation from both manned and unmanned sub-surface assets.
14

Pacey, Ian Edward. "Variability of the perimetric response in normals and in glaucoma". Thesis, Aston University, 1998. http://publications.aston.ac.uk/14648/.

Full text
Abstract
This study investigated the variability of response associated with various perimetric techniques, with the aim of improving the clinical interpretation of automated static threshold perimetry. Evaluation of a third generation of perimetric threshold algorithms (SITA) demonstrated a reduction in test duration of approximately 50% both in normal subjects and in glaucoma patients. SITA produced a slightly higher, but clinically insignificant, mean sensitivity than the previous generations of algorithms. This was associated with a decreased between-subject variability in sensitivity and hence lower confidence intervals for normality. In glaucoma, the SITA algorithms gave rise to more statistically significant visual field defects and a between-visit repeatability similar to the Full Threshold and FASTPAC algorithms. The higher estimated sensitivity observed with SITA compared to Full Threshold and FASTPAC was not attributed to a reduction in the fatigue effect. The investigation of a novel method of maintaining patient fixation, a roving fixation target which paused immediately prior to the stimulus presentation, revealed a greater degree of fixational instability with the roving fixation target than with the conventional static fixation target. Previous experience with traditional white-on-white perimetry did not eradicate the learning effect in short-wavelength automated perimetry (SWAP) in a group of ocular hypertensive patients. The learning effect was smaller in an experienced group of patients than in a naive group, but was still significant enough to require that patients undertake a series of at least three familiarisation tests with SWAP.
15

Homer, Daniel C. "Population Fit Threshold: Fully Automated Signal Map generation for Baseline Correction in NMR-based Metabolomics". Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1271689072.

Full text
16

Larsson, Patrik. "Automatisk FAQ med Latent Semantisk Analys". Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-53672.

Full text
Abstract

This thesis presents techniques for automatically answering questions written in natural language, given access to a collection of previously asked questions and their respective answers.

I build a prototype system based on a database of e-mail conversations from the HP Help Desk. The system combines Latent Semantic Analysis with a density-based clustering algorithm and a simple classification algorithm to identify frequent answers and to answer new questions.

The automatically generated answers are evaluated automatically, and the results are compared with those previously reported for the same data set. The influence of different parameters is also studied in detail.

The study shows that this approach yields good results without requiring any linguistic preprocessing at all.
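For readers unfamiliar with the pipeline, the LSA step can be sketched with scikit-learn: project the known questions into a low-rank latent space and answer a new question with the answer of its nearest neighbour there. The FAQ data below is invented for illustration; the thesis works on the HP Help Desk corpus and adds density-based clustering on top.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

faq_questions = [
    "my printer does not print",
    "printer prints blank pages",
    "how do I reset my password",
    "forgot password cannot log in",
]
faq_answers = ["Reinstall the printer driver."] * 2 + ["Use the reset link."] * 2

vec = TfidfVectorizer()
X = vec.fit_transform(faq_questions)
lsa = TruncatedSVD(n_components=2, random_state=0)   # the latent "topic" space
Z = lsa.fit_transform(X)

new_q = ["password reset not working"]
z = lsa.transform(vec.transform(new_q))
best = cosine_similarity(z, Z).argmax()              # nearest known question
print(faq_answers[best])                             # expected: "Use the reset link."
```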

17

Hurpeau, Jean-Christophe. "Étude de modifications de la sensibilité cutanée après microchirurgie reconstructrice des membres supérieurs et de la main en particulier : métrologie et modélisation informatique associées". Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL151N.

Full text
Abstract
Among the consequences of peripheral nerve lesions for cutaneous sensitivity, physicians observe (a) a drop in the cutaneous pressure-sensitivity thresholds and (b) hypersensitivity to cold, sometimes producing sensations of pain (paresthesia, hyperpathia, allodynia). The objectives of this work are (a) to design an automated device for measuring the cutaneous pressure-sensitivity threshold and (b) to propose a model built on an initial hypothesis suggesting possible causes of these symptoms. An approach based on a systematic analysis of the problems posed by the design of the device and of the feasible technical solutions allowed us to select and then validate the principle of an apparatus proposed in the 1980s. We present its drawbacks, but the suggested improvements are judged too complex for clinical use. Cold and nerve lesions reduce the conduction velocities of the nerve impulses generated by stimulation of the skin. The hypothesis on which the model rests is that these modifications can desynchronize the peripheral nerve message and thereby make it unintelligible to the nerve centres. Drawing on a literature review assembling the experimental and theoretical knowledge of the consequences of cold and of lesions for nerve propagation, we express these effects as equations and propose a way to quantify the resulting desynchronization. A statistical study shows the influence of the cooling conditions and of lesion severity on the extent of the desynchronization. Some results are interpreted in the light of clinical observations; others are analysed theoretically. Without reaching a validation of the hypothesis, a task requiring additional experimental results, we were nevertheless able to contribute new elements on the peripheral factors that may explain cold hypersensitivity. We conclude by presenting a method for validating the hypothesis, based on determining the conditions necessary for desynchronized stimulation in healthy subjects.
18

Chakraborty, Debaditya. "Detection of Faults in HVAC Systems using Tree-based Ensemble Models and Dynamic Thresholds". University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543582336141076.

Full text
19

Mancino, Massimo. "Design of an automated system for continuous monitoring of dairy cow behaviour in free-stall barns". Doctoral thesis, Università di Catania, 2017. http://hdl.handle.net/10761/3952.

Full text
Abstract
Changes in cow behaviour are among the indicators that help identify when animals become ill. The need to analyse a large number of animals at a time, due to the increase in herd size in intensive farming, has led to the use of automated systems. Among automated systems, inertial sensor-based systems have been used to distinguish behavioural patterns in livestock. In this field, the overall aim of this thesis work, which belongs to the area of Precision Livestock Farming, was to contribute to the improvement of systems based on wearable sensors that are able to recognise the main behavioural activities (i.e., lying, standing, feeding, and walking) of dairy cows housed in a free-stall barn. This objective was achieved through different steps aimed at advancing the state of the art. A novel algorithm, characterised by a linear computational time, was implemented with the aim of improving real-time monitoring and analysis of the walking behaviour of dairy cows. The algorithm computed the number of steps of each cow from accelerometer data by making use of statistically defined thresholds. Algorithm accuracy was assessed by computing the total error (E, equal to 9.5%) and the Relative Measurement Error (RME, between 2.4% and 4.8%). A new classifier was assessed to recognise the feeding and standing behavioural activities of cows by using statistically defined thresholds computed from accelerometer data. The accuracy of the classification was assessed by computing the Misclassification Rate (MR, equal to 5.56%). A new data acquisition system, assessed in a free-stall barn, allowed the acquisition of data from different sensor devices, with a sampling frequency of 4 Hz, during the animals' daily routine. It required a simple installation in the building and did not need any preliminary calibration. The performance of this system was assessed by computing a Stored Data Index (DSI), which was equal to 83%. Finally, the overall design of an automated monitoring system based on wearable sensors was proposed.
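Threshold-based step counting of the kind described runs in linear time over the signal. The sketch below counts upward threshold crossings of a synthetic 4 Hz accelerometer magnitude; the mean-plus-one-standard-deviation rule and the toy signal are assumptions for demonstration, not the thesis's statistically derived thresholds.

```python
import numpy as np

def count_steps(acc_mag, k=1.0):
    """Count steps as upward crossings of a statistically defined threshold
    (here mean + k*std of the signal).  Runs in linear time."""
    thr = acc_mag.mean() + k * acc_mag.std()
    above = acc_mag > thr
    return int(np.count_nonzero(above[1:] & ~above[:-1]))  # rising crossings

# Synthetic 4 Hz accelerometer magnitude: gravity + one brief peak per second.
rng = np.random.default_rng(2)
t = np.arange(0, 30, 0.25)                       # 30 s at 4 Hz
peaks = 0.6 * (np.sin(2 * np.pi * t) > 0.95)     # ~1 step per second
signal = 1.0 + peaks + rng.normal(0, 0.05, t.size)
print("steps detected:", count_steps(signal))    # expect ~30
```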
20

Norgren, Tommy and Jonathan Styrud. "Non-periodic sampling schemes for control applications". Thesis, Uppsala universitet, Signaler och System, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154500.

Full text
Abstract
In recent years, research in the field of automation has been advancing quickly in the direction of wireless networks of sensors and actuators. This development has introduced a need to reduce the amount of communication. A number of different alternative schemes have been proposed; they are usually divided into event-triggered schemes and self-triggered ones. The main purpose of this Master's thesis was to further develop and evaluate these sampling schemes, focusing on the communication they require. The effect of the different schemes on control performance was also taken into account. Because of the difficulty of performing a theoretical comparison, the thesis focused on evaluating the schemes in simulations and in experiments on real industrial processes. The results indicate that simply using a slower periodic scheme may reduce communication as much as the more flexible schemes do, without losing much performance. This would imply that investing further in the other schemes may be wasteful. However, using an event-triggered scheme with the improvements introduced in this report may offer some advantages in performance and simplicity of setup. Perhaps more importantly, it is safer during rapidly changing conditions, which also makes it very unlikely that a slow periodic sampler would ever be implemented on a real system. The results in general are very positive, with communication reductions of over 90% when using a well-tuned base sampling interval, and over 99% when the comparison is made against current implementations in industry, all without significant loss of performance.
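Event-triggered sampling is often realised as a "send-on-delta" rule: transmit only when the measurement has moved more than a set amount since the last transmission. The sketch below illustrates that generic class of schemes on an invented random-walk signal; it is not one of the specific schemes evaluated in the thesis.

```python
import numpy as np

def send_on_delta(signal, delta):
    """Event-triggered transmission rule: a sample is sent only when it has
    moved more than `delta` from the last transmitted value."""
    sent, last = [0], signal[0]                  # always send the first sample
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) > delta:
            sent.append(i)
            last = x
    return sent

rng = np.random.default_rng(3)
process = np.cumsum(rng.normal(0, 0.1, 1000))    # slowly drifting measurement
sent = send_on_delta(process, delta=0.5)
print(f"transmitted {len(sent)}/{process.size} samples "
      f"({1 - len(sent) / process.size:.1%} communication saved)")
```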
21

Van, Rooijen Lorijn. "Une approche combinatoire du problème de séparation pour les langages réguliers". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0229/document.

Full text
Abstract
The separation problem, for a class S of languages, is the following: given two input languages, does there exist a language in S that contains the first language and is disjoint from the second? For regular input languages, the separation problem for a class S subsumes the classical membership problem for this class, and provides more detailed information about the class. This separation problem first emerged in an algebraic context in the form of pointlike sets, and in a profinite context as a topological separation problem. These problems have been studied for specific classes of languages, using involved techniques from the theory of profinite semigroups. In this thesis, we are not only interested in showing the decidability of the separation problem for several subclasses of the regular languages, but also in constructing a separating language, if it exists, and in the complexity of these problems. We provide a generic approach, based on combinatorial arguments, to proving the decidability of this problem for a given class. Using this approach, we prove that the separation problem is decidable for the classes of piecewise testable languages, unambiguous languages, and locally (threshold) testable languages. These classes are defined by different fragments of first-order logic, and are among the most studied classes of regular languages. Furthermore, our approach yields a description of a separating language, in case it exists.
22

Campbell, Robert David James. "Information processing in microtubules". Thesis, Queensland University of Technology, 2002.

Find full text
23

Benabdallah, Mohammed. "Spéciation de l'étain en traces dans l'environnement aquatique par spectrophotométrie d'absorption atomique électrothermique". Pau, 1987. http://www.theses.fr/1987PAUU3026.

Full text
Abstract
Graphite-furnace atomic absorption spectrometry is used on its own for the determination of total tin, and coupled with high-performance liquid chromatography for speciation. An original automated coupling was developed and then applied to the study of the separation, by size-exclusion chromatography, of organotin compounds of the butyltin family. The detection limit of this system is 25 mg of tin at the column head. A modification of the automatic sample injector used in this study was also designed; it raised the ratio of the sample volume aspirated to the volume actually injected into the furnace from about 7% to nearly 100%.
24

Chen, W. J. and 陳王政. "Automatic Threshold Selection for Segmentation". Thesis, 1993. http://ndltd.ncl.edu.tw/handle/12793964146666445124.

Full text
25

SWARUP, JYOTI. "OBJECT SEGMENTATION USING REGION GROWING AND EDGE CONSTRAINTS". Thesis, 2013. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14248.

Full text
Abstract
This thesis focuses on object segmentation, which plays an important role in the field of computer vision. The image segmentation problem is concerned with partitioning an image into multiple regions according to some homogeneity criterion. Object segmentation is typically used to locate objects and boundaries in images. The proposed object segmentation method integrates region growing and edge information. The method automatically selects the initial seed and determines the threshold with the help of a 20x20 window around the centre pixel for single-seeded region growing. The automatic threshold is determined by the difference between the mean and the median of this window, whereas the minimum distance between the mean and the pixels of the window guides initial seed selection. The grown region is used for object segmentation by placing edge constraints over it to obtain the nearest strong Canny edges. Further, certain morphological operations are performed to obtain precise results. The proposed algorithm is applied to the state-of-the-art PASCAL VOC 2005 database and compared to the method proposed by Xavier Bresson based on the active contour model. The algorithm produces successful results, and the accompanying ground-truth annotations help in determining precision and recall. The evaluation of these parameters shows that the proposed method produces good segmentation results with high precision.
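The seed and threshold rules described above translate almost directly into code. The sketch below implements single-seeded region growing with those two automatic choices on a synthetic image; it follows the abstract's description (growth relative to the seed value is one simple reading), not the thesis's actual implementation.

```python
import numpy as np
from collections import deque

def grow_region(img):
    """Single-seeded region growing with two automatic choices: within a
    20x20 window around the image centre, the threshold is |mean - median|
    of the window and the seed is the pixel closest to the window mean."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    win = img[cy - 10:cy + 10, cx - 10:cx + 10].astype(float)
    thr = abs(win.mean() - np.median(win))
    dy, dx = np.unravel_index(np.abs(win - win.mean()).argmin(), win.shape)
    seed = (cy - 10 + dy, cx - 10 + dx)
    seed_val = float(img[seed])

    mask = np.zeros(img.shape, bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:                                 # breadth-first growth
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= thr):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.full((60, 60), 50, np.uint8)            # dark background
img[25:60, 25:60] = 120                          # bright object off-centre
img = img + np.random.default_rng(4).integers(0, 5, img.shape).astype(np.uint8)
print("object pixels grown:", int(grow_region(img).sum()))
```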
26

Tseng, I.-Chun y 曾奕鈞. "Automatic Recognition, Identification and Maintenance Threshold Determination for Pavement MarkingsMaster Thesis". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/61873129184760624716.

Full text
Abstract
Master's thesis
National Taiwan University
Graduate Institute of Civil Engineering
ROC academic year 103 (2014/15)
The specifications for pavement markings in Taiwan focus on the marking's material, painting location and form. The only specification that describes a maintenance standard is the Taiwan Roadway Traffic Road Markings Specifications; however, it gives only a rough maintenance threshold and does not describe how it should be measured. Nowadays, pavement markings are repainted after the pavement is milled, or sometimes based on visual judgment by government officials, which is not objective. Pavement markings provide important information for drivers, so it is necessary to establish a complete specification that defines both a maintenance threshold and its measurement. A previous study used intensity data, collected by the pavement profiler PPS-2005, to identify pavement markings. Building on this, an improved completeness-index calculation was constructed to describe the in-situ condition of markings. A large-scale detection was then performed in this research to obtain binary pavement-marking images. With these images, a questionnaire was created to collect public opinion on marking repainting. After analysing the questionnaire responses in terms of satisfaction, this research proposes two suggested repainting completeness indices, one for word markings and one for other pavement markings.
27

Gritzman, Ashley Daniel. "Adaptive threshold optimisation for colour-based lip segmentation in automatic lip-reading systems". Thesis, 2016. http://hdl.handle.net/10539/22664.

Full text
Abstract
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, September 2016
Having survived the ordeal of a laryngectomy, the patient must come to terms with the resulting loss of speech. With recent advances in portable computing power, automatic lip-reading (ALR) may become a viable approach to voice restoration. This thesis addresses the image processing aspect of ALR, and focuses on three contributions to colour-based lip segmentation. The first contribution concerns the colour transform used to enhance the contrast between the lips and the skin. This thesis presents the most comprehensive study to date by measuring the overlap between lip and skin histograms for 33 different colour transforms. The hue component of HSV obtains the lowest overlap of 6.15%, and results show that selecting the correct transform can increase the segmentation accuracy by up to three times. The second contribution is the development of a new lip segmentation algorithm that utilises the best colour transforms from the comparative study. The algorithm is tested on 895 images and achieves a percentage overlap (OL) of 92.23% and a segmentation error (SE) of 7.39%. The third contribution focuses on the impact of the histogram threshold on the segmentation accuracy, and introduces a novel technique called Adaptive Threshold Optimisation (ATO) to select a better threshold value. The first stage of ATO incorporates support vector regression (SVR) to train the lip shape model. ATO then uses feedback of shape information to validate and optimise the threshold. After applying ATO, the SE decreases from 7.65% to 6.50%, corresponding to an absolute improvement of 1.15 pp or a relative improvement of 15.1%. While this thesis concerns lip segmentation in particular, ATO is a threshold selection technique that can be used in various segmentation applications.
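The first contribution rests on measuring histogram overlap between lip and skin pixels per colour transform. One plausible reading of such an overlap measure is the shared area of the two normalised histograms, sketched below on invented hue samples; the thesis's exact definition may differ.

```python
import numpy as np

def histogram_overlap(lip_vals, skin_vals, bins=64, value_range=(0.0, 1.0)):
    """Shared area of the normalised lip and skin histograms of one colour
    component.  Smaller overlap = a more discriminative colour transform."""
    h_lip, _ = np.histogram(lip_vals, bins=bins, range=value_range)
    h_skin, _ = np.histogram(skin_vals, bins=bins, range=value_range)
    p = h_lip / h_lip.sum()
    q = h_skin / h_skin.sum()
    return np.minimum(p, q).sum()      # 0 = fully separable, 1 = identical

# Toy hue samples: lip pixels redder (hue near 0.02) than skin (near 0.08).
rng = np.random.default_rng(5)
lip = np.clip(rng.normal(0.02, 0.01, 4000), 0, 1)
skin = np.clip(rng.normal(0.08, 0.02, 4000), 0, 1)
print(f"hue overlap = {histogram_overlap(lip, skin):.2%}")
```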
28

Weng, Yu-Yen and 翁郁硯. "A study on automatic deep learning image transfer using multi-threshold color transfer". Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5396050%22.&searchmode=basic.

Full text
Abstract
Master's thesis
National Chung Hsing University
Department of Information Management
ROC academic year 107 (2018/19)
In recent years, with the rapid development of deep learning technology, much work has been devoted to the use of computers in the field of generative art, mainly exploiting the feature extraction ability of deep neural networks to let the computer learn and create automatically. Image transfer technology transfers the color and style of a preferred image (the target image) onto an original image (the content image). This thesis proposes an image style transfer technique based on the color and style of the target image, addressing the limitations of past studies that either considered only color transfer between images or used only deep learning for style transfer. In the image feature extraction of deep learning, color is one of the factors that determines the style transfer result. Therefore, this thesis proposes a local color transfer method between the content image and the target image. First, multi-threshold cutting is performed according to the luminance distribution of the pixels of the two images, and then color transfer is performed for each region. Next, deep learning is used to select effective features from the target image: each convolutional layer is judged by the structural similarity index (SSIM) and by black blocks to determine the degree of effective features it carries. Selecting a convolutional layer with more effective features alleviates the limitation of deep learning style transfer that requires manually tuned parameters. In the color and style transfer process, the proposed method improves image quality by automatically simulating the color and style of the target image without human parameter control.
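The local colour transfer step described above can be sketched as quantile-based luminance banding followed by per-band statistics matching. The sketch below uses Reinhard-style mean/std matching as the per-region transfer, which is an assumption; the thesis's transfer function and threshold selection differ in detail.

```python
import numpy as np

def local_color_transfer(content, target, n_regions=3):
    """Multi-threshold sketch: split both images into luminance bands at
    quantile thresholds, then match each content band's per-channel mean
    and std to the corresponding target band."""
    lum_c, lum_t = content.mean(axis=-1), target.mean(axis=-1)
    qs = np.linspace(0, 100, n_regions + 1)
    out = content.astype(float).copy()
    for lo, hi in zip(qs[:-1], qs[1:]):
        m_c = (lum_c >= np.percentile(lum_c, lo)) & (lum_c <= np.percentile(lum_c, hi))
        m_t = (lum_t >= np.percentile(lum_t, lo)) & (lum_t <= np.percentile(lum_t, hi))
        for ch in range(content.shape[-1]):
            c, t = content[..., ch][m_c], target[..., ch][m_t]
            out[..., ch][m_c] = (c - c.mean()) / (c.std() + 1e-6) * t.std() + t.mean()
    return out

rng = np.random.default_rng(6)
content = rng.uniform(0.0, 1.0, (32, 32, 3))
target = rng.uniform(0.3, 0.9, (32, 32, 3))
result = local_color_transfer(content, target)
print("output range:", round(float(result.min()), 2), round(float(result.max()), 2))
```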
29

Lin, Guan-Shian and 林冠憲. "An automatic method for determining fringe-contrast threshold in applying phase-shifting technique to 3D measurement of flip-chip solder bumps". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/74272146374990757186.

Full text
Abstract
Master's thesis
St. John's University
Graduate Institute of Automation and Mechatronics
ROC academic year 94 (2005/06)
Because electronic products must be ever lighter, thinner, smaller and more multi-functional, high-I/O-count flip-chip packaging is becoming increasingly important. In order to increase I/O density, the pitch of the flip chip and the height of the solder bumps keep shrinking, which lowers the yield of flip-chip packaging. The most important factors influencing the yield of flip-chip packaging are the height and volume of the solder bumps. Therefore, three-dimensional (3D) measurement is needed in the manufacturing process to increase the yield of flip-chip packaging. The phase-shifting technique can be used to perform 3D measurement of flip-chip solder bumps. However, it faces a difficult choice of lighting condition, because the flip chip combines highly reflective solder bumps with a low-reflectivity substrate: lighting that produces a proper fringe pattern on the solder bumps also underexposes the substrate. The resulting fringe-contrast values on the projected substrate become quite low, and pseudo surface-height values can be obtained. Previous research eliminated these pseudo height values, but did so manually. This research proposes a bisection method based on image subtraction to automatically determine the optimal fringe-contrast threshold value, enhancing the effectiveness of applying the phase-shifting technique to the 3D measurement of flip-chip solder bumps.
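A bisection search needs only a monotone criterion to home in on a threshold. The sketch below shows the generic search; the predicate standing in for the thesis's image-subtraction test, and the synthetic contrast values, are illustrative assumptions.

```python
import numpy as np

def bisect_threshold(too_low, lo, hi, tol=1e-3):
    """Bisection search for the smallest threshold at which a monotone
    predicate stops holding."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if too_low(mid):
            lo = mid          # threshold still lets substrate pixels through
        else:
            hi = mid
    return hi

rng = np.random.default_rng(7)
substrate = rng.uniform(0.0, 0.1, 900)     # low fringe contrast (underexposed)
bumps = rng.uniform(0.4, 1.0, 100)         # high fringe contrast (well lit)
t = bisect_threshold(lambda thr: bool(np.any(substrate > thr)), 0.0, 1.0)
print(f"fringe-contrast threshold = {t:.3f}; bumps kept: {(bumps > t).mean():.0%}")
```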
30

Lin, Yu-Yun and 林郁芸. "Design Automation for Sub-Threshold Operational Amplifier Circuits". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/80723540973249436258.

Full text
Abstract
Master's thesis
National Central University
Department of Electrical Engineering
ROC academic year 105 (2016/17)
Power has become the primary design constraint for chip designers today. To reduce power and extend service time, low-voltage, low-power design is becoming more and more important. One possible way to achieve this goal is sub-threshold circuit design, in which transistors operate in the region where Vdd is less than the transistor threshold voltage (Vdd < Vth).
31

Cheng, Chung-Chuan and 鄭中川. "Automated Lung Segmentation Based on Grey-level Threshold". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/4f5feu.

Full text
Abstract
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Information Management
ROC academic year 94 (2005/06)
The lungs are among the most important organs in the body. Medical image analysis can aid clinicians in diagnosis and in tracing the progression of disease. By applying object segmentation techniques to medical images, the contours of organs can be detected and used to analyse tissue characteristics, which can then be provided to doctors and researchers to aid their work. X-ray is the most widely used imaging modality for diagnosing diseases of the chest and other anatomical regions; it is cheap and routinely acquired. This study investigates PA-view chest radiographs. The Canny edge detector and active contour models (ACM) are widely used techniques for object segmentation in medical images. However, the main weakness of the ACM is that an initial contour must be given so that the contour can be attracted to a proper position; in addition, it is time-consuming, and in lung segmentation it is easily disturbed by the rib cage. This study proposes a fast method for lung segmentation. Salient features of chest images are used to locate the lung field, after which the Canny edge detector is adopted to detect edges and find the approximate contours of the lung lobes. Finally, contours of the lung lobes can be obtained with high accuracy.
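As a minimal illustration of grey-level threshold lung segmentation, the sketch below thresholds a synthetic radiograph at its mean intensity, keeps the two largest dark components and fills holes. The mean-intensity threshold and the toy image are assumptions; the thesis combines thresholding with salient chest features and Canny edges.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(radiograph):
    """Grey-level threshold sketch: lung fields are radiolucent, so they
    appear dark on a PA chest film.  Threshold at the global mean, keep
    the two largest dark components, and fill holes."""
    dark = radiograph < radiograph.mean()
    labels, n = ndimage.label(dark)
    sizes = ndimage.sum(dark, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1          # labels of two largest blobs
    mask = np.isin(labels, keep)
    return ndimage.binary_fill_holes(mask)

rng = np.random.default_rng(8)
img = rng.normal(180, 10, (128, 128))              # bright mediastinum/ribs
img[20:100, 15:55] = rng.normal(80, 10, (80, 40))  # one lung field
img[20:100, 75:115] = rng.normal(80, 10, (80, 40)) # the other lung field
print("lung pixels found:", int(segment_lungs(img).sum()))
```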
32

Ravindra, G. "Information Theoretic Approach To Extractive Text Summarization". Thesis, 2006. https://etd.iisc.ac.in/handle/2005/452.

Full text
Abstract
Automatic text summarization techniques, which can reduce a source text to a summary text by content generalization or selection, have assumed significance in recent times due to the ever-expanding information explosion created by the World Wide Web. Summaries generated by generalization of information are called abstracts, and those generated by selection of portions of text (sentences, phrases etc.) are called extracts. Further, summaries could be produced for each document separately, or multiple documents could be summarized together to produce a single summary. The challenges in making machines generate extracts or abstracts are primarily due to the lack of understanding of human cognitive processes. Summaries generated by humans seem to be influenced by their moral, emotional and ethical stance on the subject and their background knowledge of the content being summarized. These characteristics are hardly understood and difficult to model mathematically. Automatic summarization is further handicapped by the limitations of existing computing resources and the lack of good mathematical models of cognition. In view of these, the role of rigorous mathematical theory in summarization has been limited hitherto. The research reported in this thesis is a contribution towards bringing the power of well-established concepts of information theory to the field of summarization.

Contributions of the thesis: The specific focus of this thesis is on extractive summarization. Its domain spans multi-document summarization as well as single-document summarization. Throughout the thesis the words "summarization" and "summary" imply extract generation and sentence extracts respectively. In this thesis, two new and novel summarizers, referred to as ESCI (Extractive Summarization using Collocation Information) and De-ESCI (Dictionary enhanced ESCI), have been proposed. In addition, an automatic summary evaluation technique called DeFuSE (Dictionary enhanced Fuzzy Summary Evaluator) has also been introduced. The mathematical basis for the evolution of the scoring scheme proposed in this thesis and its relationship with other well-known summarization algorithms such as Latent Semantic Indexing (LSI) is also derived. The work detailed in this thesis is specific to the domain of extractive summarization of unstructured text, without taking into account data set characteristics such as the positional importance of sentences. This is to ensure that the summarizer works well for a broad class of documents, and to keep the proposed models as generic as possible. Central to the proposed work is the concept of the "Collocation Information of a word", its quantification and its application to summarization. "Collocation Information" (CI) is the amount of information (Shannon's measure) that a word and its collocations together contribute to the total information in the document(s) being summarized. The CI of a word has been computed using Shannon's measure for information over a joint probability distribution. Further, a base value of CI called the "Discrimination Threshold" (DT) has also been derived. To determine DT, sentences from a large collection of documents covering various topics, including the topic covered by the document(s) being summarized, were broken down into sequences of word collocations. The number of possible neighbors for a word within a specified collocation window was determined. This number has been called the "cardinality of the collocating set" and is represented as |ℵ(w)|. It is proved that if |ℵ(w)|, determined from this large document collection for any word w, is fixed, then the maximum value of the CI for a word w is proportional to |ℵ(w)|. This constrained maximum is the "Discrimination Threshold" and is used as the base value of CI. Experimental evidence detailed in this thesis shows that sentences containing words with CI greater than DT are most likely to be useful in an extract. Words in every sentence of the document(s) being summarized have been assigned scores based on the difference between their current value of CI and their respective DT. Individual word scores have been summed to derive a score for every sentence. Sentences are ranked according to their scores, and the first few sentences in the rank order have been selected as the extract summary. Redundant and semantically similar sentences have been excluded from the selection process using a simple similarity detection algorithm. This novel method for extraction has been called ESCI in this thesis. In the second part of the thesis, the advantages of tagging words as nouns, verbs, adjectives and adverbs without the use of sense disambiguation have been explored. A hierarchical model for the abstraction of knowledge has been proposed, and those cases where such a model can improve summarization accuracy have been explained. Knowledge abstraction has been achieved by converting collocations into their hypernymous versions. The number of levels of abstraction varies based on the sense tag given to each word in the collocation being abstracted. Once abstractions have been determined, the Expectation-Maximization algorithm is used to determine the probability value of each collocation at every level of abstraction. A combination of abstracted collocations from various levels is then chosen, and sentences are assigned scores based on the collocation information of these abstractions. This summarization scheme has been referred to as De-ESCI (Dictionary enhanced ESCI). It has been observed in many human summary data sets that the factual attribute of the human determines the choice of noun and verb pairs, while the emotional attribute determines the choice of noun and adjective pairs. In order to bring these attributes into the machine-generated summaries, two variants of De-ESCI have been proposed: the summarizer with the factual attribute is called De-ESCI-F, and the summarizer with the emotional attribute is called De-ESCI-E. Both create summaries having two parts. The first part of the summary created by De-ESCI-F is obtained by scoring and selecting only those sentences in which a fixed number of nouns and verbs occur; the second part is obtained by ranking and selecting those sentences which do not qualify for the first part. Assigning sentence scores and selecting sentences for the second part of the summary is exactly as in ESCI. Similarly, the first part of De-ESCI-E is generated by scoring and selecting only those sentences in which a fixed number of nouns and adjectives occur; its second part is produced exactly like the second part of De-ESCI-F. As the model summary generated by human summarizers may or may not contain sentences with preference given to qualifiers (adjectives), the automatic summarizer does not know a priori whether to choose sentences with qualifiers over those without. As there are two versions of the summary, produced by De-ESCI-F and De-ESCI-E, one of them should be closer to the human summarizer's point of view (in terms of the importance given to qualifiers). This technique of choosing the best candidate summary has been referred to as De-ESCI-F/E.

Performance metrics: The focus of this thesis is on proposing new models and sentence ranking techniques aimed at improving the accuracy of the extract in terms of the sentences selected, rather than the readability of the summary. As a result, the order of sentences in the summary is not given importance during evaluation. Automatic evaluation metrics have been used, and the performance of the automatic summarizer has been evaluated in terms of the precision, recall and f-scores obtained by comparing its output with model human-generated extract summaries. A novel summary evaluator called DeFuSE has been proposed in this thesis, and its scores are used along with the scores given by a standard evaluator called ROUGE. DeFuSE evaluates an extract in terms of precision, recall and f-score, relying on the WordNet hypernymy structure to identify semantically similar sentences in a document. It also uses fuzzy set theory to compute the extent to which a sentence from the machine-generated extract belongs to the model summary. The performance of candidate summarizers has been discussed in terms of the percentage improvement in f-score relative to the baselines. The average of the ROUGE and DeFuSE f-scores for every summary is computed, and the mean value of these scores is used to compare performance improvement.

Performance: For illustrative purposes, the DUC 2002 and DUC 2003 multi-document data sets have been used. From these data sets, only the 400-word summaries of DUC 2002 and the track-4 (novelty track) summaries of DUC 2003 are useful for the evaluation of sentence extracts, and hence only these have been used. The f-score has been chosen as the measure of performance. Standard baselines such as coverage, size and lead, as well as probabilistic baselines, have been used to measure the percentage improvement in f-score of candidate summarizers relative to these baselines. Further, summaries generated by MEAD using centroid and length as ranking features (MEAD-CL), MEAD using positional, centroid and length features (MEAD-CLP), the Microsoft Word automatic summarizer (MS-Word) and a Latent Semantic Indexing (LSI) based summarizer were used to compare the performance of the proposed summarization schemes.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Ravindra, G. "Information Theoretic Approach To Extractive Text Summarization". Thesis, 2006. http://hdl.handle.net/2005/452.

Texto completo
Resumen
Automatic text summarization techniques, which can reduce a source text to a summary text by content generalization or selection, have assumed significance in recent times due to the ever-expanding information explosion created by the World Wide Web. Summaries generated by generalization of information are called abstracts, and those generated by selection of portions of text (sentences, phrases, etc.) are called extracts. Further, summaries could be generated for each document separately, or multiple documents could be summarized together to produce a single summary. The challenges in making machines generate extracts or abstracts are primarily due to the lack of understanding of human cognitive processes. Summaries generated by humans seem to be influenced by their moral, emotional and ethical stance on the subject and by their background knowledge of the content being summarized. These characteristics are hardly understood and difficult to model mathematically. Further, automatic summarization is very much handicapped by limitations of existing computing resources and the lack of good mathematical models of cognition. In view of these, the role of rigorous mathematical theory in summarization has hitherto been limited. The research reported in this thesis is a contribution towards bringing the power of well-established concepts of information theory to the field of summarization.
Contributions of the Thesis: The specific focus of this thesis is extractive summarization. Its domain spans multi-document summarization as well as single-document summarization. Throughout the thesis, the words "summarization" and "summary" imply extract generation and sentence extracts respectively. In this thesis, two novel summarizers, referred to as ESCI (Extractive Summarization using Collocation Information) and De-ESCI (Dictionary enhanced ESCI), have been proposed. In addition, an automatic summary evaluation technique called DeFuSE (Dictionary enhanced Fuzzy Summary Evaluator) has also been introduced. The mathematical basis for the evolution of the scoring scheme proposed in this thesis, and its relationship with other well-known summarization algorithms such as Latent Semantic Indexing (LSI), is also derived. The work detailed in this thesis is specific to the domain of extractive summarization of unstructured text, without taking into account data set characteristics such as the positional importance of sentences. This is to ensure that the summarizer works well for a broad class of documents and to keep the proposed models as generic as possible. Central to the proposed work is the concept of the "Collocation Information of a word", its quantification, and its application to summarization. "Collocation Information" (CI) is the amount of information (Shannon's measure) that a word and its collocations together contribute to the total information in the document(s) being summarized. The CI of a word has been computed using Shannon's measure of information on a joint probability distribution. Further, a base value of CI called the "Discrimination Threshold" (DT) has also been derived. To determine DT, sentences from a large collection of documents covering various topics, including the topic covered by the document(s) being summarized, were broken down into sequences of word collocations. The number of possible neighbors for a word within a specified collocation window was determined. This number has been called the "cardinality of the collocating set" and is represented as |ℵ(w)|.
It is proved that if |ℵ(w)|, determined from this large document collection for any word w, is fixed, then the maximum value of the CI for the word w is proportional to |ℵ(w)|. This constrained maximum is the "Discrimination Threshold" and is used as the base value of CI. Experimental evidence detailed in this thesis shows that sentences containing words with CI greater than DT are most likely to be useful in an extract. Words in every sentence of the document(s) being summarized have been assigned scores based on the difference between their current value of CI and their respective DT. Individual word scores have been summed to derive a score for every sentence. Sentences are ranked according to their scores, and the first few sentences in the rank order have been selected as the extract summary. Redundant and semantically similar sentences have been excluded from the selection process using a simple similarity detection algorithm. This novel method for extraction has been called ESCI in this thesis.
In the second part of the thesis, the advantages of tagging words as nouns, verbs, adjectives and adverbs without the use of sense disambiguation have been explored. A hierarchical model for abstraction of knowledge has been proposed, and those cases where such a model can improve summarization accuracy have been explained. Knowledge abstraction has been achieved by converting collocations into their hypernymous versions. The number of levels of abstraction varies based on the sense tag given to each word in the collocation being abstracted. Once abstractions have been determined, the Expectation-Maximization algorithm is used to determine the probability value of each collocation at every level of abstraction. A combination of abstracted collocations from various levels is then chosen, and sentences are assigned scores based on the collocation information of these abstractions. This summarization scheme has been referred to as De-ESCI (Dictionary enhanced ESCI). It has been observed in many human summary data sets that the factual attribute of the human determines the choice of noun and verb pairs, while the emotional attribute determines the choice of the number of noun and adjective pairs. In order to bring these attributes into the machine-generated summaries, two variants of De-ESCI have been proposed. The summarizer with the factual attribute has been called De-ESCI-F, while the summarizer with the emotional attribute has been called De-ESCI-E. Both create summaries having two parts. The first part of the summary created by De-ESCI-F is obtained by scoring and selecting only those sentences in which a fixed number of nouns and verbs occur. The second part of De-ESCI-F is obtained by ranking and selecting those sentences which do not qualify for the first part; assigning sentence scores and selecting sentences for this second part is exactly as in ESCI. Similarly, the first part of De-ESCI-E is generated by scoring and selecting only those sentences in which a fixed number of nouns and adjectives occur.
The second part of the summary produced by De-ESCI-E is exactly like the second part of De-ESCI-F. As the model summary generated by human summarizers may or may not contain sentences with preference given to qualifiers (adjectives), the automatic summarizer does not know a priori whether to choose sentences with qualifiers over those without. As there are two versions of the summary, produced by De-ESCI-F and De-ESCI-E, one of them should be closer to the human summarizer's point of view (in terms of the importance given to qualifiers). This technique of choosing the best candidate summary has been referred to as De-ESCI-F/E.
Performance Metrics: The focus of this thesis is to propose new models and sentence ranking techniques aimed at improving the accuracy of the extract in terms of the sentences selected, rather than the readability of the summary. As a result, the order of sentences in the summary is not given importance during evaluation. Automatic evaluation metrics have been used, and the performance of the automatic summarizer has been evaluated in terms of the precision, recall and f-scores obtained by comparing its output with model human-generated extract summaries. A novel summary evaluator called DeFuSE has been proposed in this thesis, and its scores are used along with the scores given by a standard evaluator called ROUGE. DeFuSE evaluates an extract in terms of precision, recall and f-score, relying on the WordNet hypernymy structure to identify semantically similar sentences in a document. It also uses fuzzy set theory to compute the extent to which a sentence from the machine-generated extract belongs to the model summary. The performance of candidate summarizers has been discussed in terms of the percentage improvement in f-score relative to the baselines. The average of the ROUGE and DeFuSE f-scores for every summary is computed, and the mean value of these scores is used to compare performance improvement.
Performance: For illustrative purposes, the DUC 2002 and DUC 2003 multi-document data sets have been used. From these data sets, only the 400-word summaries of DUC 2002 and the track-4 (novelty track) summaries of DUC 2003 are useful for evaluation of sentence extracts, and hence only these have been used. The f-score has been chosen as the measure of performance. Standard baselines such as coverage, size and lead, as well as probabilistic baselines, have been used to measure the percentage improvement in f-score of candidate summarizers relative to these baselines. Further, summaries generated by MEAD using centroid and length as ranking features (MEAD-CL), MEAD using positional, centroid and length features (MEAD-CLP), the Microsoft Word automatic summarizer (MS-Word) and a Latent Semantic Indexing (LSI) based summarizer were used to compare the performance of the proposed summarization schemes.
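For readers who want to experiment with the idea, the following is a minimal Python sketch of ESCI-style scoring as the abstract describes it: estimate each word's collocation information (CI) from co-occurrence counts, subtract a per-word discrimination threshold (DT), and rank sentences by the summed differences. The function names, the window size, and the use of a plain dictionary for DT are illustrative assumptions, not details taken from the thesis.

```python
import math
from collections import Counter

def collocation_info(sentences, window=2):
    """Estimate per-word collocation information from co-occurrence counts
    inside a sliding window, using Shannon's measure on the joint
    distribution of (word, neighbour) pairs. A simplified stand-in for the
    thesis' CI; the window size of 2 is an illustrative choice."""
    pair_counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for n in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
                pair_counts[(w, n)] += 1
    total = sum(pair_counts.values())
    ci = Counter()
    for (w, _), c in pair_counts.items():
        p = c / total
        ci[w] += -p * math.log2(p)   # this collocation's information contribution
    return ci

def esci_extract(sentences, dt, k=3, window=2):
    """Score each sentence by summing (CI - DT) over its words and return
    the k highest-scoring sentences. `dt` maps words to discrimination
    thresholds (the thesis derives DT from a large background corpus;
    here it is just a dict, defaulting to 0)."""
    ci = collocation_info(sentences, window)
    def score(sent):
        return sum(ci[w] - dt.get(w, 0.0) for w in sent.lower().split())
    return sorted(sentences, key=score, reverse=True)[:k]

docs = ["the cat sat on the mat", "the dog sat on the log", "cats and dogs are pets"]
print(esci_extract(docs, dt={}, k=1))
```

Note that the sketch omits the redundancy-removal step the abstract mentions; a real implementation would also drop sentences too similar to those already selected.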
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Mahomed, Faheema. "Validation of automated threshold audiometry : a systematic review and meta-analysis". Diss., 2013. http://hdl.handle.net/2263/33368.

Texto completo
Resumen
The need for hearing health care services across the world far outweighs the capacity to deliver these services with the present shortage of hearing health care personnel. Automated test procedures coupled with telemedicine may assist in extending services. Automated threshold audiometry has existed for many decades; however, there has been a lack of systematic evidence supporting its clinical use. The aim of this study was to systematically review the current body of peer-reviewed publications on the validity (test-retest reliability and accuracy) of automated threshold audiometry. A meta-analysis was thereafter conducted to combine and quantify the results of individual reports so that an overall assessment of validity based on existing evidence could be made. A multifaceted approach, covering several databases and employing different search strategies, was utilized to ensure comprehensive coverage and cross-checking of search findings. Publications were obtained from three databases, Medline, SCOPUS and PubMed, and by inspecting the reference lists of relevant reports. Reports were selected according to inclusion and exclusion criteria, after which data extraction was conducted. Subsequently, the meta-analysis combined and quantified the data to determine the validity of automated threshold audiometry. In total, 29 articles met the inclusion criteria. The outcomes from these studies indicated that two types of automated threshold testing procedures have been utilized: the method of limits and the method of adjustment. Reported findings suggest accurate and reliable thresholds when utilizing automated audiometry. Most of the reports included data on adult populations using air conduction testing; only limited data were reported on children, on bone conduction testing, and on the effects of hearing status on automated threshold testing. The meta-analysis revealed that test-retest reliability for automated threshold audiometry was within the typical test-retest reliability for manual audiometry. Furthermore, the meta-analysis showed comparable overall average differences between manual and automated air conduction audiometry (0.4 dB, 6.1 SD) compared to test-retest differences for manual (1.3 dB, 6.1 SD) and automated (0.3 dB, 6.9 SD) air conduction audiometry. Overall, no significant differences (p>0.01; summarized-data ANOVA) were obtained in any of the comparisons between test-retest reliability (manual and automated) and accuracy. Current evidence demonstrates that automated threshold audiometry can produce an accurate measure of hearing threshold; the differences between automated and manual audiometry fall within typical test-retest and inter-tester variability. Despite its long history, however, validation is still limited for (i) automated bone conduction audiometry; (ii) automated audiometry in children and difficult-to-test populations; and (iii) automated audiometry with different types and degrees of hearing loss.
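For readers unfamiliar with how a meta-analysis combines per-study results, the sketch below shows a standard fixed-effect, inverse-variance pooling of mean differences between automated and manual thresholds. The study numbers are hypothetical placeholders, and this is a generic illustration rather than the review's actual statistical procedure (which used a summarized-data ANOVA).

```python
import math

def pooled_mean_difference(studies):
    """Fixed-effect, inverse-variance pooling of the mean difference (dB)
    between automated and manual thresholds across studies. Each study is
    a tuple (mean_diff, sd, n); the values used below are made up."""
    wsum = west = 0.0
    for mean_diff, sd, n in studies:
        w = n / (sd ** 2)            # weight = 1 / variance of the study mean
        wsum += w
        west += w * mean_diff
    pooled = west / wsum
    se = math.sqrt(1.0 / wsum)       # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci95 = pooled_mean_difference([(0.5, 6.0, 30), (0.2, 7.0, 25), (0.6, 5.5, 40)])
print(f"pooled difference = {pooled:.2f} dB, 95% CI = ({ci95[0]:.2f}, {ci95[1]:.2f})")
```

A pooled difference whose confidence interval straddles 0 dB would support the review's conclusion that automated and manual thresholds agree within typical variability.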
Dissertation (MCommunication Pathology)--University of Pretoria, 2013.
gm2014
Speech-Language Pathology and Audiology
unrestricted
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

"Modeling and Implementation of Threshold Logic Circuits and Architectures". Doctoral diss., 2010. http://hdl.handle.net/2286/R.I.8637.

Texto completo
Resumen
abstract: Threshold logic has long been studied as a means of achieving higher performance and lower power dissipation, providing improvements by condensing simple logic gates into more complex primitives, effectively reducing gate count, pipeline depth, and number of interconnects. This work proposes a new physical implementation of threshold logic, the threshold logic latch (TLL), which overcomes the difficulties observed in previous work, particularly with respect to gate reliability in the presence of noise and process variations. Simple but effective models were created to assess the delay, power, and noise margin of TLL gates for the purpose of determining the physical parameters and assignment of input signals that achieves the lowest delay subject to constraints on power and reliability. From these models, an optimized library of standard TLL cells was developed to supplement a commercial library of static CMOS gates. The new cells were then demonstrated on a number of automatically synthesized, placed, and routed designs. A two-stage 2's complement integer multiplier designed with CMOS and TLL gates utilized 19.5% less area, 28.0% less active power, and 61.5% less leakage power than an equivalent design with the same performance using only static CMOS gates. Additionally, a two-stage 32-instruction 4-way issue queue designed with CMOS and TLL gates utilized 30.6% less area, 31.0% less active power, and 58.9% less leakage power than an equivalent design with the same performance using only static CMOS gates.
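A threshold logic gate of the kind the TLL implements computes a weighted sum of its binary inputs and compares it against a threshold. The toy evaluation below, a minimal sketch, illustrates only the logical primitive (here a 3-input majority gate, a classic threshold function); it says nothing about the latch circuit, delay, power, or noise-margin models that are the substance of the dissertation.

```python
def threshold_gate(inputs, weights, threshold):
    """Evaluate a threshold logic function: output 1 iff the weighted sum
    of the binary inputs meets the threshold. Condensing several simple
    gates into one such primitive is the source of the area and power
    savings discussed above."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# 3-input majority: weights (1, 1, 1), threshold 2.
assert threshold_gate([1, 1, 0], [1, 1, 1], 2) == 1
assert threshold_gate([1, 0, 0], [1, 1, 1], 2) == 0
```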
Dissertation/Thesis
Ph.D. Computer Science 2010
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Chu, Jiun-Jye y 車俊傑. "Modelling of automated guided vehicle system with relative threshold values in just-in-time environment". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/17755882189931359827.

Texto completo
Resumen
Master's thesis
National Taiwan University
Institute of Industrial Engineering
84
In a just-in-time (JIT) production system, labor or forklifts are widely used for material handling, and kanban systems are implemented to obtain the production efficiency of a pull strategy and to maintain a low level of inventory. To replace forklifts with an Automated Guided Vehicle System (AGVS), the information transfer of the kanban system must be adapted for automatic material handling equipment. This thesis presents AGVS implementation procedures for JIT systems. The introduced AGVS is dispatched by input and output thresholds imposed on the input and output queues of work stations. Threshold values are computed based on process times. The physical states of the queues are detected by sensing devices and compared with the thresholds to dispatch vehicles. A mathematical model is used to calculate the required number of vehicles. Three guided-path models of the AGVS are introduced: chessboard unidirectional, tandem bi-directional, and cyclic bi-directional. Simulation models are established, and the threshold values are updated by analyzing the simulation results in terms of throughput and average in-process inventory. Based on this performance evaluation, the best guided-path model can be adopted for actual system implementation.
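The dispatching rule described above can be sketched in a few lines: whenever an output queue fills past its output threshold a pickup is requested, and whenever an input queue drains below its input threshold a delivery is requested. The field names, data layout, and tie-handling below are illustrative assumptions, not the thesis's model.

```python
def dispatch_decisions(stations, fleet_free):
    """Scan work stations and request a vehicle wherever a queue crosses
    its threshold: pick up from over-full output queues, deliver to
    under-filled input queues. Serve at most the number of free vehicles;
    a real dispatcher would also prioritize and route the requests."""
    requests = []
    for s in stations:
        if s["out_len"] >= s["out_threshold"]:
            requests.append((s["name"], "pickup"))
        if s["in_len"] <= s["in_threshold"]:
            requests.append((s["name"], "deliver"))
    return requests[:fleet_free]

stations = [
    {"name": "WS1", "in_len": 1, "in_threshold": 2, "out_len": 5, "out_threshold": 4},
    {"name": "WS2", "in_len": 4, "in_threshold": 2, "out_len": 1, "out_threshold": 4},
]
print(dispatch_decisions(stations, fleet_free=2))  # WS1 triggers both thresholds
```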
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

"Threshold Logic Properties and Methods: Applications to Post-CMOS Design Automation and Gene Regulation Modeling". Doctoral diss., 2012. http://hdl.handle.net/2286/R.I.14924.

Texto completo
Resumen
abstract: Threshold logic has been studied by at least two independent groups of researchers. One group studied threshold logic with the intention of building threshold logic circuits. The earliest research to this end was done in the 1960s; the major work at that time focused on studying the mathematical properties of threshold logic, as no efficient circuit implementations were available. Recently, many post-CMOS (Complementary Metal Oxide Semiconductor) technologies that implement threshold logic have been proposed, along with efficient CMOS implementations. This has renewed the effort to develop efficient threshold logic design automation techniques, and this work contributes to that ongoing effort. Another group studied threshold logic because the building block of neural networks, the Perceptron, is identical to the threshold element implementing a threshold function. Neural networks are used for various purposes as data classifiers. This work contributes tangentially to that field by proposing new methods and techniques to study and analyze functions implemented by a Perceptron. After completion of the Human Genome Project, it has become evident that most biological phenomena are caused not by the action of single genes but by complex interactions involving systems of genes. In recent times, the 'systems approach' to the study of gene systems has been gaining popularity, and many different theories from mathematics and computer science have been used for this purpose. Among the systems approaches, the Boolean logic gene model has emerged as the currently most popular discrete gene model. This work proposes a new gene model based on threshold logic functions (which are a subset of Boolean logic functions). The biological relevance and utility of this model are argued and illustrated by using it to model different in-vivo as well as in-silico gene systems.
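To make the style of gene model concrete, the sketch below runs a synchronous update of a toy three-gene network in which each gene's next state is a threshold function of its regulators (positive weights for activators, negative for repressors). The network, weights, and thresholds are hypothetical, chosen only to illustrate the modeling approach, and are not taken from the dissertation.

```python
def gene_step(state, rules):
    """One synchronous update of a threshold-logic gene model: gene g is
    expressed next iff the weighted sum of its regulators' current
    expression levels meets g's threshold."""
    return {g: int(sum(w * state[r] for r, w in regs.items()) >= th)
            for g, (regs, th) in rules.items()}

rules = {
    "A": ({"C": 1}, 1),              # A activated by C
    "B": ({"A": 1, "C": -1}, 1),     # B activated by A, repressed by C
    "C": ({"B": 1}, 1),              # C activated by B
}
state = {"A": 1, "B": 0, "C": 0}
for _ in range(4):                   # the toy network settles into a cycle
    state = gene_step(state, rules)
    print(state)
```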
Dissertation/Thesis
Ph.D. Computer Science 2012
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Stevens, Tynan. "Analysis of Functional MRI for Presurgical Mapping: Reproducibility, Automated Thresholds, and Diagnostic Accuracy". 2010. http://hdl.handle.net/10222/13050.

Texto completo
Resumen
Examination of functional brain anatomy is a crucial step in the process of surgical removal of many brain tumors. Functional magnetic resonance imaging (fMRI) is a promising technology capable of mapping brain function non-invasively, but questions of diagnostic accuracy remain to be addressed before it can be successfully applied to presurgical mapping. One of the greatest difficulties in implementing fMRI is the need to define an activation threshold for producing functional maps. There is as yet no consensus on the best approach to this problem, and a priori statistical approaches are generally considered insufficient because they are not specific to individual patient data. Additionally, a low signal-to-noise ratio and sensitivity to magnetic susceptibility effects combine to make the production of activation maps technically demanding. This contributes to a wide range of estimates of reproducibility and validity for fMRI, as the results are sensitive to changes in acquisition and processing strategies. Test-retest fMRI imaging at the individual level, together with receiver operating characteristic (ROC) analysis of the results, can address both of these concerns simultaneously. In this work, it is shown that the area under the ROC curve (AUC) can be used as an indicator of reproducibility, and that this indicator depends on the image thresholds used. Production of AUC profiles can thus be used to optimize the selection of individual thresholds on the basis of detecting stable activation patterns, rather than a priori significance levels. The ROC analysis framework developed provides a powerful tool for simultaneous control of protocol reproducibility and data-driven threshold selection at the individual level. This tool can be used to guide optimal acquisition and processing strategies, and as part of a quality assurance program for implementing presurgical fMRI.
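The AUC-profile idea can be illustrated on synthetic data: binarize the retest map at each candidate threshold to serve as a reference, trace an ROC curve for the test map over the same thresholds, and pick the threshold where the resulting AUC (a reproducibility indicator) peaks. This simplified sketch with simulated voxel maps is an illustration under stated assumptions, not the thesis's actual processing pipeline.

```python
import numpy as np

def auc_profile(map1, map2, thresholds):
    """For each candidate threshold t, binarize the retest map at t as the
    reference, sweep the thresholds over the test map to trace an ROC
    curve, and integrate it (trapezoidal rule) into an AUC. A peak in the
    profile marks the threshold giving the most stable activation pattern."""
    aucs = []
    for t in thresholds:
        truth = (map2 >= t)
        if truth.all() or not truth.any():
            aucs.append(np.nan)       # ROC undefined without both classes
            continue
        tpr = np.array([np.mean(map1[truth] >= s) for s in thresholds])
        fpr = np.array([np.mean(map1[~truth] >= s) for s in thresholds])
        order = np.argsort(fpr)
        aucs.append(np.trapz(tpr[order], fpr[order]))
    return np.array(aucs)

rng = np.random.default_rng(0)
signal = rng.random(1000) > 0.8            # hypothetical stable activation
map1 = signal * 3 + rng.normal(size=1000)  # simulated test run
map2 = signal * 3 + rng.normal(size=1000)  # simulated retest run
ts = np.linspace(0, 3, 31)
profile = auc_profile(map1, map2, ts)
print(f"threshold maximizing reproducibility: {ts[np.nanargmax(profile)]:.2f}")
```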
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Klein, Annette [Verfasser]. "Automated assessment of hearing threshold in neonates by means of extrapolated DPOAE I-O-functions / Annette Klein". 2005. http://d-nb.info/979063078/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.