
Theses on the topic "Semi-automatic analysis"


Consult the top 32 theses for your research on the topic "Semi-automatic analysis".


1

Zheng, Yilei. "IFSO: A Integrated Framework For Automatic/Semi-automatic Software Refactoring and Analysis". Digital WPI, 2004. https://digitalcommons.wpi.edu/etd-theses/241.

Abstract
To automatically/semi-automatically improve the internal structures of a legacy system, there are several challenges: most available software analysis algorithms focus on only one granularity level (e.g., method level, class level) without considering possible side effects on other levels during the process; the quality of a software system cannot be judged by a single algorithm; and software analysis is a time-consuming process which typically requires lengthy interactions. In this thesis, we present a framework, IFSO (Integrated Framework for automatic/semi-automatic Software refactoring and analysis), as a foundation for automatic/semi-automatic software refactoring and analysis. Our proposed conceptual model, LSR (Layered Software Representation Model), defines an abstract representation for software using a layered approach, where each layer corresponds to a granularity level. The IFSO framework, which is built upon the LSR model for component-based software, represents software at the system level, component level, class level, method level and logic unit level. Each level can be customized independently with different algorithms, such as cohesion metrics, design heuristics, design problem detection and operations. By cooperating across levels, IFSO presents a global view and an interactive environment for software refactoring and analysis. A prototype was implemented for evaluation of our technology, and three case studies were developed on top of it: three metrics, dead code removal, and low-coupling unit detection.
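The cohesion metrics the abstract mentions are not specified; as a rough illustration of the kind of class-level metric such a framework could plug in at one granularity level, here is a minimal sketch of the Chidamber-Kemerer LCOM (Lack of Cohesion of Methods) measure, assuming each method is represented simply by the set of instance attributes it touches (all names below are hypothetical):

    from itertools import combinations

    def lcom(method_attrs):
        """Chidamber-Kemerer LCOM: count method pairs sharing no
        attributes (P) versus pairs sharing at least one (Q);
        LCOM = max(P - Q, 0). `method_attrs` maps method name ->
        set of instance attributes the method uses."""
        p = q = 0
        for (_, a), (_, b) in combinations(method_attrs.items(), 2):
            if a & b:
                q += 1
            else:
                p += 1
        return max(p - q, 0)

    # Example: a class whose methods split into two unrelated groups,
    # hence a non-zero (poor) cohesion score.
    print(lcom({"load": {"path"}, "save": {"path"},
                "draw": {"canvas"}, "resize": {"canvas"}}))  # -> 2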
2

Zheng, Yilei. "IFSO an integrated framework for automatic/semi-automatic software refactoring and analysis". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0423104-153906.

Abstract
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: LSR model; IFSO framework; automatic/semi-automatic software refactoring; software refactoring framework. Includes bibliographical references (p. 89-92).
3

Chu, Calvin (School of Biomedical Engineering, UNSW). "Development of a semi-automatic method for cellular migration and division analysis". Awarded by: University of New South Wales, School of Biomedical Engineering, 2005. http://handle.unsw.edu.au/1959.4/20543.

Abstract
Binary image processing algorithms have been implemented in this study to create a background subtraction mask for the segmentation of cellular time lapse images. The complexity in the development of the background subtraction mask stems from the inherent difficulties in contrast resolution at the cellular boundaries. Coupling the background subtraction mask with the path reconstruction method via superposition of overlapping binary segmented objects in sequential time lapse images produces a semi-automatic method for cellular tracking. In addition to the traditional center of mass or centroid approximation, a novel quasi-center of mass (QCM) derived from the local maxima of the distance transformation (DT) has also been proposed in this study. Furthermore, image isolation and separation between spreading/motile and mitotic cells allows the extraction of both migratory and divisional cellular information. DT application to isolated mitotic cells permits identification of distinct morphologic phases of cellular division. Application of standard bivariate statistics allows the characterization of cellular migration and growth. Determination of Hotelling's confidence ellipse from cellular trajectory data elucidates the biased or unbiased migration of cellular populations. We investigated whether it was possible to describe the trajectory as a simple binomial process, where trajectory directions are classified into a sequence of (8) discrete states. A significant proportion of trajectories did not follow the binomial model. Additionally, a preliminary relationship between the image background area, approximate number of counted cells in an image frame, and imaging time is proposed from the segmentation of confluent monolayer cellular cultures.
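The abstract does not give the exact construction of the quasi-center of mass (QCM); the sketch below takes one plausible reading, the peak of the Euclidean distance transform inside the binary cell mask, and contrasts it with the ordinary centroid:

    import numpy as np
    from scipy import ndimage

    def quasi_center_of_mass(mask):
        """Return the (row, col) of the deepest interior point of a
        binary mask, i.e. the peak of its Euclidean distance
        transform -- one plausible reading of the QCM described above."""
        dt = ndimage.distance_transform_edt(mask)
        return np.unravel_index(np.argmax(dt), dt.shape)

    # Toy cell: an elongated blob whose centroid and QCM differ.
    mask = np.zeros((40, 40), dtype=bool)
    mask[10:30, 5:12] = True   # thick body
    mask[18:22, 12:35] = True  # thin protrusion
    print(ndimage.center_of_mass(mask))   # centroid, pulled along the protrusion
    print(quasi_center_of_mass(mask))     # stays inside the thick body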
4

Liu, Jing. "Implementation of a Semi-automatic Tool for Analysis of TEM Images of Kidney Samples". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180813.

Abstract
Glomerular disease is a cause of chronic kidney disease and damages the function of the kidneys. One symptom of glomerular disease is proteinuria, meaning that large amounts of protein appear in the urine. To make assessment more objective, transmission electron microscopy (TEM) imaging of kidney tissue biopsies is used when measuring proteinuria. Foot process effacement (FPE) is defined as fewer than one "slit" (gap) per micrometer at the glomerular basement membrane (GBM). Measuring FPE is one way to detect proteinuria using kidney TEM images, but it is a time-consuming task that used to be performed manually by an expert. This master thesis project aims at developing a semi-automatic way to detect FPE patients, as well as a graphical user interface (GUI) to make the methods and results easily accessible to the user. To compute the slits/micrometer for each image, the GBM needs to be segmented from the background. The proposed workflow combines various filters and mathematical morphology to obtain the outer contour of the GBM. The outer contour is then smoothed, and unwanted parts are removed based on distance information and angle differences between points on the contour. The length is then computed by weighted chain code counts. Finally, an iterative algorithm is used to locate the positions of the "slits" using both gradient and binary information of the original images. If necessary, the results from length measurement and "slit" counting can be manually corrected by the user. A tool for manual measurement is also provided as an option; in this case, the user can add anchor points on the outer contour of the GBM, and the length is then automatically measured and "slit" locations are detected. For very difficult images, the user can also mark all "slit" locations by hand. To evaluate the performance and accuracy, data from five patients were tested, with six images available per patient. The images are 2048 by 2048 gray-scale indexed 8-bit images and the scale is 0.008 micrometer/pixel. The one FPE patient in the dataset is successfully distinguished.
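The "weighted chain code counts" used for the length measurement are a standard device: steps along an 8-connected contour count 1 if axial and sqrt(2) if diagonal. A minimal sketch, assuming the contour is already available as an ordered list of pixel coordinates:

    import math

    def chain_code_length(contour):
        """Length of an 8-connected pixel contour by weighted chain
        code counts: axial moves weigh 1, diagonal moves weigh sqrt(2).
        `contour` is an ordered list of (row, col) pixel coordinates."""
        length = 0.0
        for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
            length += math.sqrt(2) if (r0 != r1 and c0 != c1) else 1.0
        return length

    # A staircase of 4 diagonal steps followed by 3 horizontal steps.
    path = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (4, 5), (4, 6), (4, 7)]
    print(chain_code_length(path))  # 4*sqrt(2) + 3 ~= 8.66 pixels
    # With the abstract's scale of 0.008 micrometer/pixel:
    # slits_per_um = slit_count / (chain_code_length(path) * 0.008)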
5

Delisle, Sylvain. "Text processing without a priori domain knowledge: Semi-automatic linguistic analysis for incremental knowledge acquisition". Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6574.

Abstract
Technical texts are an invaluable source of the domain-specific knowledge which plays a crucial role in advanced knowledge-based systems today. However, acquiring such knowledge has always been a major difficulty in the construction of these systems--this critical obstacle is sometimes referred to as the "knowledge acquisition bottleneck". In order to lessen the burden on the knowledge engineer's shoulders, several approaches have been proposed in the literature. A few of these suggest processing texts pertaining to the domain of interest in order to extract the knowledge they contain and thus facilitate the domain modelling. We herein propose a new approach to knowledge acquisition from texts; this approach is comprised of a new methodology and computational framework for the implementation of a linguistic processor which represents the central component of a system for the acquisition of knowledge from text. The system, named TANKA, is not given the complete domain model beforehand. It is designed to process technical texts in order to incrementally build a knowledge base containing a conceptual model of the domain. TANKA is an intelligent assistant to the knowledge engineer; when it cannot proceed entirely on its own, the user is asked to collaborate. In the process, the system acquires knowledge from text; it can be said to learn about the domain. The originality of the research is due mainly to the fact that we do not assume significant a priori domain-specific (semantic) knowledge: this assumption represents a severe constraint on the natural language processor. The only external elements of knowledge we consider in the proposed framework are "off-the-shelf" publicly available and domain-independent repositories, such as a basic dictionary containing surface syntactic information (i.e. The Collins) and a lexical database (i.e. WordNet). Other components of the proposed framework are general-purpose. The parser (DIPETT) is domain-independent with a large coverage of English: our approach relies on full syntactic analysis. The Case-based semantic analyzer (HAIKU) is semi-automatic: it interacts with the user in order to get his [1] approval of the analysis it has just proposed and negotiates refined elements of the analysis when necessary. The combined processing of DIPETT and HAIKU allows TANKA, the encompassing system [2], to acquire knowledge, based on the conceptual elements produced by HAIKU. The thesis also describes experiments that have been conducted on a Prolog implementation of both of these text analysis components. The approach presented in the thesis is general and in principle portable to any domain in which suitable technical texts are available. The thesis presents theoretical considerations as well as engineering aspects of the many facets of this research work. We also provide a detailed discussion of many future work items that could be added to what has already been accomplished in order to make the framework even more productive. (Abstract shortened by UMI.) [1] In order to lighten the text, the terms 'he' and 'his' have been used generically to refer equally to persons of either sex. No discrimination is either implied or intended. [2] DIPETT and HAIKU constitute a conceptual analyzer that can be used independently of TANKA or within a different encompassing system.
6

Ramnerö, David. "Semi-automatic Training Data Generation for Cell Segmentation Network Using an Intermediary Curator Net". Thesis, Uppsala universitet, Bildanalys och människa-datorinteraktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332724.

Abstract
In this work we create an image analysis pipeline to segment cells from microscopy image data. A portion of the segmented images are manually curated and this curated data is used to train a Curator network to filter the whole dataset. The curated data is used to train a separate segmentation network to improve the cell segmentation. This technique can be easily applied to different types of microscopy object segmentation.
7

Bhatti, Nadeem [Verfasser], Dieter W. [Akademischer Betreuer] Fellner and Tobias [Akademischer Betreuer] Schreck. "Visual Semantic Analysis to Support Semi Automatic Modeling of Service Descriptions / Nadeem Bhatti. Betreuer: Dieter W. Fellner ; Tobias Schreck". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2011. http://d-nb.info/1105562689/34.

8

Bhatti, Nadeem [Verfasser], Dieter W. [Akademischer Betreuer] Fellner and Tobias [Akademischer Betreuer] Schreck. "Visual Semantic Analysis to Support Semi Automatic Modeling of Service Descriptions / Nadeem Bhatti. Betreuer: Dieter W. Fellner ; Tobias Schreck". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2011. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-26405.

9

Lösch, Felix. "Optimization of variability in software product lines: a semi-automatic method for visualization, analysis, and restructuring of variability in software product lines". Berlin: Logos-Verl., 2008. http://d-nb.info/992075904/04.

10

Colaninno, Nicola. "Semi-automatic land cover classification and urban modelling based on morphological features : remote sensing, geographical information systems, and urban morphology : defining models of land occupation along the Mediterranean side of Spain". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/396219.

Abstract
From a global point of view, as argued by Levy (1999), the modern city has undergone radical changes in its physical form, both in terms of territorial expansion and in terms of internal physical transformations. Today, approximately 75% of the European population lives in urban areas, which makes the urban future of the continent a major cause of concern (Brazil, Cavalcanti, & Longo, 2014). Indeed, the demand for urban land, both within and around the cities, is becoming increasingly acute (European Environment Agency, 2006). During the last decades, Spain has also been undergoing an important process of urban growth, which has implied the consumption of a large amount of land, although the overall population growth rate, mostly along certain specific geographic areas, has remained at best unchanged or, in some cases, has even decreased. Such a phenomenon has been quite remarkable along the Mediterranean side. As argued by Gaja (2008), urban development in Spain has been strongly linked to the model of economic development, which relies, since its launch in the 50s, on three main factors: emigration, building, and mass tourism. Nowadays, in Spain, and mostly along the Mediterranean side, several urban areas are facing important phenomena of urban sprawl, also feared by the European Union. Accurate information about the pattern of land use/land cover over time is a fundamental requirement for a better understanding of urban models. Currently, even though plenty of approaches to image classification through Remote Sensing (RS) techniques have been advanced, land cover/land use classification is still an exciting challenge (Weng, 2010). The increasing development of RS and GIS technologies during the last decades has provided further capabilities for measuring, analysing, understanding and modelling the "physical expressions" of urban growth phenomena, in terms of both pattern and process (Bhatta, 2012), based on land use/land cover mapping and change detection over time. Based on such a technological approach, we first aim to set up a suitable methodology for detecting generalized land cover classes based on an assisted automatic (or semi-automatic) pixel-based approach, calibrated upon Landsat Thematic Mapper (TM) multispectral imagery at 30 meters of spatial resolution. Besides, through the use of a Geographical Information System (GIS), we provide a spatial analysis and modelling of different urban models, from a morphological standpoint, in order to define the main pattern of land occupation at municipal scale along the Mediterranean side of Spain for the year 2011. We focus on two main issues. On one hand, RS techniques have been used to set up a proper semi-automatic classification methodology, based on the use of Landsat imagery, capable of handling huge geographical areas quickly and efficiently. This process is basically aimed at detecting the urban areas, in the year 2011, along the Mediterranean side of Spain, depending on the administrative division of Autonomous Communities. On the other hand, the spatial patterns of urban settlements have been analysed by using a GIS platform to quantify a set of spatial metrics about the urban form. Hence, once the quantification of different morphological features is obtained, including analysis of the urban profile, the urban texture, and the street network pattern, an automatic classification of different urban morphological models is proposed, based on statistical approaches, namely factor and cluster analysis.
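The thesis's assisted pixel-based classification procedure is not detailed in the abstract; as a generic stand-in, the sketch below clusters the spectral signature of every pixel of a synthetic six-band scene with k-means, leaving the labelling of clusters to an analyst, which is what makes such a workflow semi-automatic (band and class counts are arbitrary assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    # Fake Landsat-TM-like scene: 6 spectral bands, 100 x 100 pixels.
    rng = np.random.default_rng(0)
    scene = rng.random((100, 100, 6))

    # Cluster every pixel's spectral signature into 5 candidate classes;
    # an analyst would then assign each cluster a land cover label
    # (urban, water, vegetation, ...) by inspection.
    pixels = scene.reshape(-1, 6)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
    class_map = labels.reshape(100, 100)
    print(np.bincount(labels))  # pixel count per candidate class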
11

Moshir, Moghaddam Kianosh. "Automated Reasoning Support for Invasive Interactive Parallelization". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84830.

Abstract
To parallelize a sequential source code, a parallelization strategy must be defined that transforms the sequential source code into an equivalent parallel version. Since parallelizing compilers can sometimes transform sequential loops and other well-structured codes into parallel ones automatically, we are interested in finding a solution for semi-automatically parallelizing codes that compilers are not able to parallelize automatically, mostly because of the weakness of classical data and control dependence analysis, in order to simplify the process of transforming the codes for programmers. Invasive Interactive Parallelization (IIP) hypothesizes that by using an intelligent system that guides the user through an interactive process, one can boost parallelization in the above direction. The intelligent system's guidance relies on classical code analysis and pre-defined parallelizing transformation sequences. To support its main hypothesis, IIP suggests encoding parallelizing transformation sequences in terms of IIP parallelization strategies that dictate default ways to parallelize various code patterns, using facts obtained both from classical source code analysis and directly from the user. In this project, we investigate how automated reasoning can support the IIP method in order to parallelize a sequential code with acceptable performance but faster than manual parallelization. We have looked at two special problem areas: divide-and-conquer algorithms and loops in the source codes. Our focus is on parallelizing four sequential legacy C programs (quicksort, merge sort, the Jacobi method, and matrix multiplication and summation) for both OpenMP and MPI environments, by developing an interactive parallelizing assistance tool that provides users with the assistance needed for parallelizing a sequential source code.
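The thesis targets C programs with OpenMP and MPI; purely as an illustration of the divide-and-conquer strategy it discusses, the Python sketch below sorts chunks of the input in parallel worker processes and then performs a k-way merge:

    import heapq
    from concurrent.futures import ProcessPoolExecutor

    def parallel_sort(data, workers=4):
        """Divide-and-conquer sort: partition into `workers` chunks,
        sort each chunk in its own process, then k-way merge."""
        step = -(-len(data) // workers)  # ceiling division
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            sorted_chunks = list(pool.map(sorted, chunks))
        return list(heapq.merge(*sorted_chunks))

    if __name__ == "__main__":  # required for process-based executors
        import random
        xs = [random.random() for _ in range(100_000)]
        assert parallel_sort(xs) == sorted(xs)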
12

Goussard, Charl Leonard. "Semi-automatic extraction of primitive geometric entities from point clouds". Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52449.

Abstract
Thesis (MScEng)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: This thesis describes an algorithm to extract primitive geometric entities (flat planes, spheres or cylinders, as determined by the user's inputs) from unstructured, unsegmented point clouds. The algorithm extracts whole entities or only parts thereof. The entity boundaries are computed automatically. Minimal user interaction is required to extract these entities. The algorithm is accurate and robust. The algorithm is intended for use in the reverse engineering environment. Point clouds created in this environment typically have normal error distributions. Comprehensive testing and results are shown as well as the algorithm's usefulness in the reverse engineering environment.
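For the simplest of the three entity types mentioned above, a flat plane, the least-squares fit has a closed form: the plane passes through the centroid, and its normal is the singular vector of the centered points with the smallest singular value. A minimal sketch (the thesis's own algorithm, with boundary computation and sphere/cylinder support, is more involved):

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through an (N, 3) point array.
        Returns (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]  # right singular vector of smallest sigma

    # Noisy samples of the plane z = 0.1x - 0.2y + 3.
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1, 1, (500, 2))
    z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.01, 500)
    c, n = fit_plane(np.column_stack([xy, z]))
    print(n / n[2])  # ~ [-0.1, 0.2, 1]: recovers the plane coefficients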
13

Zhou, Jianlong. "Semi-automatic transfer function generation for volumetric data visualization using contour tree analyses". PhD thesis, Faculty of Engineering and Information Technologies, 2011. http://hdl.handle.net/2123/9326.

14

Рябов, С. І. "Дослідження процесу одночасного двостороннього шліфування торців поршневого пальця" [Investigation of the process of simultaneous double-sided grinding of piston pin end faces]. Thesis, Chernihiv, 2021. http://ir.stu.cn.ua/123456789/25296.

Abstract
Riabov, S. I. Investigation of the process of simultaneous double-sided grinding of piston pin end faces: graduation qualification thesis: 133 "Industrial Machine Engineering" / S. I. Riabov; supervisor A. V. Kolohoida; Chernihiv Polytechnic National University, Department of Automobile Transport and Industrial Machine Engineering. Chernihiv, 2021. 64 p.
The first section identifies the relevance of the research topic, analyzes scientific work in the field of simultaneous grinding of the end faces of piston pins, and defines the purpose and objectives of the study. The second section presents the purpose and scope of the 3342ADO semi-automatic machine and its layout. The modernization of the product feed drum is calculated. The diameters of the pulleys were selected and the key joint was checked for crushing. The third section provides general information about piston pins. The grinding scheme is designed and the error of the machined pin end face is calculated using MathCad software. A method for dressing grinding wheels is proposed. A frequency analysis of the product feed drum was performed in the SolidWorks software package.
15

Teljstedt, Erik Christopher. "Separating Tweets from Croaks : Detecting Automated Twitter Accounts with Supervised Learning and Synthetically Constructed Training Data". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192656.

Abstract
In this thesis, we have studied the problem of detecting automated Twitter accounts related to the Ukraine conflict using supervised learning. A striking problem with the collected data set is that it initially lacked a ground truth. Traditionally, supervised learning approaches rely on manual annotation of training sets, but this incurs tedious work and becomes expensive for large and constantly changing collections. We present a novel approach to synthetically generate large amounts of labeled Twitter accounts for detection of automation using a rule-based classifier. It significantly reduces the effort and resources needed and speeds up the process of adapting classifiers to changes in the Twitter domain. The classifiers were evaluated on a manually annotated test set of 1,000 Twitter accounts. The results show that the rule-based classifier by itself achieves a precision of 94.6% and a recall of 52.9%. Furthermore, the results showed that classifiers based on supervised learning could learn from the synthetically generated labels. At best, these machine-learning-based classifiers achieved a slightly lower precision of 94.1% compared to the rule-based classifier, but at a significantly better recall of 93.9%.
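A minimal sketch of the labeling strategy described above, with invented account features and an invented rule (the thesis's actual rules and features are not given here): a conservative, high-precision rule labels a large unannotated pool, and a supervised model is then trained on those synthetic labels:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(0)
    n = 5000
    # Invented per-account features: tweets/day, fraction of identical
    # tweets, mean seconds between tweets.
    X = rng.random((n, 3)) * [200, 1.0, 3600]
    truth = (X[:, 0] > 100) & (X[:, 1] > 0.5)   # pretend ground truth

    # Rule-based labeler: precise but conservative (misses many bots).
    synthetic = (X[:, 0] > 150) & (X[:, 1] > 0.7)

    # Train on the synthetic labels, evaluate against the ground truth;
    # recall reflects how far the model generalizes beyond the rule.
    model = RandomForestClassifier(random_state=0).fit(X, synthetic)
    pred = model.predict(X)
    print(precision_score(truth, pred), recall_score(truth, pred))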
16

Amura, Annamaria. "Design of a semi-automatic methodology supporting the graphic documentation for the restoration of artifacts". Doctoral thesis, Urbino, 2021. http://hdl.handle.net/11576/2683497.

17

Li, Xiaodong. "Observation et commande de quelques systèmes à paramètres distribués". PhD thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00456850.

Abstract
The main objective of this thesis is to study several topics: the observation and control of a flexible structure system, and the asymptotic stability of a heat exchanger system. This work falls within the field of control of systems described by partial differential equations (PDEs). We consider a rotating body-beam system whose dynamics are not physically measurable. We present an exponentially convergent infinite-dimensional Luenberger-type observer to estimate the state variables. The observer is valid for an angular velocity varying in time around a constant. The convergence rate of the observer can be accelerated through a second design stage. The main contribution of this work is the construction of a reliable simulator based on the finite element method. A numerical study is carried out for the system with constant or time-varying angular velocity. The influence of the choice of gain on the observer's convergence rate is examined. The robustness of the observer is tested against measurements corrupted by noise. By cascading our observer with a stabilizing state-feedback control law, we aim to obtain global stabilization of the system. Relevant numerical results allow us to conjecture the asymptotic stability of the closed-loop system. In the second part, we study the exponential stability of heat exchanger systems with and without diffusion. We establish the exponential stability of the model with diffusion in a Banach space, and the optimal decay rate of the system is computed for this model. We prove exponential stability in the Lp space for the model without diffusion; the decay rate has not yet been made explicit in this last case.
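The observer in the thesis is infinite-dimensional; as a toy finite-dimensional analogue of the same Luenberger construction, the sketch below runs the estimator xhat' = A xhat + L (y - C xhat) against a simulated plant and shows the estimation error decaying:

    import numpy as np

    def simulate_luenberger(A, C, L, x0, xhat0, dt=1e-3, steps=5000):
        """Euler-integrate a plant x' = A x with output y = C x and
        a Luenberger observer xhat' = A xhat + L (y - C xhat)."""
        x, xhat = np.array(x0, float), np.array(xhat0, float)
        for _ in range(steps):
            y = C @ x
            x = x + dt * (A @ x)
            xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))
        return x, xhat

    # Harmonic oscillator with position measurement; this L places the
    # error dynamics' eigenvalues at -1 +/- i (left half-plane, stable).
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[2.0], [1.0]])
    x, xhat = simulate_luenberger(A, C, L, [1.0, 0.0], [0.0, 0.0])
    print(np.linalg.norm(x - xhat))  # estimation error decays toward 0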
18

DOTTA, GIULIA. "Semi-automatic analysis of landslide spatio-temporal evolution". Doctoral thesis, 2017. http://hdl.handle.net/2158/1076767.

Abstract
Remote sensing techniques represent a powerful instrument to detect and characterise earth's surface processes, especially using change detection approaches. In particular, TLS (Terrestrial Laser Scanner) and UAV (Unmanned Aerial Vehicle) photogrammetry techniques allow one to obtain high-resolution representations of the observed scenario as a three-dimensional array of points defined by x, y and z coordinates, namely a point cloud. During the last years, the use of 3D point clouds to investigate morphological changes occurring over a range of spatial and temporal scales has increased considerably. During the three-year PhD research programme, the effectiveness of point cloud exploitation for slope characterization and monitoring was tested and evaluated by developing and applying a semi-automatic MATLAB tool. The proposed tool allows investigation of the main morphological characteristics of unstable slopes by using point clouds, and points out spatio-temporal morphological changes by comparing point clouds acquired at different times. Once a change detection threshold is defined, the routine executes a cluster analysis, automatically separates zones characterized by significant distances, and computes their area. The introduced tool was tested on two test sites characterized by different geological settings and instability phenomena: the San Leo rock cliff (Rimini province, Emilia Romagna region, northern Italy) and a clayey slope near Ricasoli village (Arezzo province, Tuscany region, central Italy). For both case studies, the main displacement or accumulation zones and detachment zones were mapped and described. Furthermore, the factors influencing the change detection results are discussed in detail.
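A minimal sketch of the comparison step described above, assuming two surveys of the same slope given as (N, 3) point arrays: cloud-to-cloud nearest-neighbour distances, a change detection threshold, then clustering of the significantly displaced points (DBSCAN stands in for whatever clustering the tool actually uses, and all parameter values are illustrative):

    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.cluster import DBSCAN

    def detect_changes(cloud_t0, cloud_t1, threshold=0.05, eps=0.5):
        """Flag points of the later survey farther than `threshold`
        (metres) from any point of the earlier one, then group the
        flagged points into spatial clusters."""
        dist, _ = cKDTree(cloud_t0).query(cloud_t1)
        moving = cloud_t1[dist > threshold]
        if len(moving) == 0:
            return moving, np.array([])
        labels = DBSCAN(eps=eps, min_samples=10).fit_predict(moving)
        return moving, labels

    rng = np.random.default_rng(2)
    t0 = rng.uniform(0, 10, (20_000, 3))
    t1 = t0.copy()
    patch = (t0[:, 0] < 2) & (t0[:, 1] < 2)   # a local unstable patch
    t1[patch] += [0.0, 0.0, 0.3]              # 30 cm of displacement
    pts, labels = detect_changes(t0, t1)
    print(len(pts), len(set(labels.tolist()) - {-1}))  # points, clusters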
19

Playfair, Nicholas Grant. "Determination of the accuracy of semi-automatic and fully automatic 2d lateral cephalometric analysis programs". 2013. http://hdl.handle.net/1993/22037.

Abstract
AIM: To evaluate the accuracy of current semi-automatic and fully automatic 2D lateral cephalometric analysis programs. MATERIALS AND METHODS: 60 lateral cephalometric radiographs were randomly selected and grouped based on their skeletal malocclusions to form 3 equal groups of 20 Class I, 20 Class II and 20 Class III. These radiographs were then analyzed via traditional hand-based analysis. The values obtained from this method were compared to 4 subsequent methods of analysis: semi-automatic analysis using Dolphin Imaging software, semi-automatic analysis using Kodak Orthodontic Imaging software, fully automatic analysis using Kodak Orthodontic Imaging software, and fully automatic analysis combined with limited landmark changes using Kodak Orthodontic Imaging software. RESULTS: ICC tests were completed to compare the gold-standard hand-based analysis to the 4 subsequent methods. The values obtained from semi-automatic Dolphin and Kodak Orthodontic Imaging software were found to be comparable to hand-based analysis, whereas the values obtained from the fully automatic mode of Kodak Orthodontic Imaging software were not. CONCLUSIONS: Digital cephalometric programs can be used as an accurate method when performing lateral cephalometric analyses. The fully automatic mode of these programs should only be used as a support to diagnosis and not as a diagnostic tool.
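The abstract cites ICC tests without naming the variant; for illustration, here is a compact implementation of one common choice, the two-way random, absolute-agreement, single-measure form ICC(2,1) of Shrout and Fleiss, for an n-subjects by k-raters score matrix:

    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random effects, absolute agreement,
        single measurement, for an (n subjects, k raters) array."""
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # subjects
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # raters
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Toy data: one angle measured on 5 radiographs by two methods.
    hand = [82.1, 79.4, 85.0, 77.2, 81.0]
    dolphin = [82.4, 79.0, 85.3, 77.5, 80.6]
    print(icc_2_1(np.column_stack([hand, dolphin])))  # close to 1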
20

Chen, Ai-Chi and 陳愛琪. "Semi-automatic analysis of carotid plaque composition in magnetic resonance imaging". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/58491102918795300605.

Abstract
Master's thesis, Chung Yuan Christian University, Institute of Biomedical Engineering, 2014 (ROC year 103).
Atherosclerosis leading to plaque rupture and thrombosis is the main mechanism and major risk factor of stroke. However, whether a plaque will rupture or not depends on its composition. There are multiple pathologic stages from plaque formation to rupture, including inflammation, autoimmune response, metabolism and thrombosis. Thus, simply showing the extent of carotid artery stenosis without knowledge of plaque stability can no longer satisfy clinical needs. Hence it is necessary to evaluate morphology together with functionality in order to understand the possibility of plaque rupture. Since plaque caused by atherosclerosis differs in characteristics and components, different weighted imaging techniques in an MRI study generate different signal intensities. In this study, four different weighted images (T1-weighted, T2-weighted, 3-dimensional time of flight (3D TOF) and contrast-enhanced studies) were used to evaluate plaque types. A single MRI study contains multiple sequences, and each sequence has multiple slices, so it is time-consuming and error-prone for a radiologist to interpret the MRI images manually. Therefore, we propose a new software program to evaluate these four sequences. It segments the region of interest in the carotid artery semi-automatically and, at the same time, assesses the carotid artery diameter and plaque area to evaluate plaque type and composition. The results indicate that using this software can reduce processing time by 34%. The classification accuracies for hemorrhage, calcification, lipid and fibrous plaques are 83.3%, 100%, 100% and 100%, respectively. This demonstrates that the new method can assist radiologists and clinicians in image interpretation and clinical decision-making when managing carotid artery atherosclerosis.
21

Bhatti, Nadeem. "Visual Semantic Analysis to Support Semi Automatic Modeling of Service Descriptions". Phd thesis, 2011. https://tuprints.ulb.tu-darmstadt.de/2640/1/Diss_Nadeem_Bhatti.pdf.

Abstract
A new trend, Web service ecosystems for Service-Oriented Architectures (SOAs) and Web services, is emerging. Services can be offered and traded like products in these ecosystems. The explicit formalization of services' non-functional parameters, e.g. price plans and legal aspects, as Service Descriptions (SDs) is one of the main challenges in establishing such Web service ecosystems. The manual modeling of Service Descriptions is a tedious and cumbersome task. In this thesis, we introduce the innovative approach Visual Semantic Analysis (VSA) to support semi-automatic modeling of service descriptions in Web service ecosystems. This approach combines semantic analysis and interactive visualization techniques to support the analysis, modeling, and reanalysis of services in an iterative loop. For example, service providers can first analyze the price plans of already existing services and extract semantic information from them (e.g. cheapest offers and functionalities). Then they can reuse the extracted semantics to model the price plans of their new services. Afterwards, they can reanalyze the newly modeled price plans against the already existing services to check their market competitiveness in Web service ecosystems. Experts from different domains, e.g. service engineers, SD modeling experts, and price plan experts, were interviewed in a study to identify the requirements for the VSA approach. These requirements cover aspects related to the analysis of already existing services and the reuse of the analysis results to model new services. Based on the user requirements, we establish a generic process model for Visual Semantic Analysis. It defines sub-processes and the transitions between them; the technologies used and the data processed in these sub-processes are also described. We also present the formal specification of this generic process model, which serves as a basis for the conceptual framework of the VSA. The conceptual framework elucidates the structure and behavior of the Visual Semantic Analysis system, specifies the system components of the VSA system and the interaction between them, and defines the external interface of the VSA system for communication with Web service ecosystems. Finally, we present the results of a user study conducted by means of the VSA system, which was developed on the basis of the VSA conceptual framework. The results of this user study show that the VSA system leads to a strongly significant improvement in time efficiency and offers better support for the analysis, modeling and reanalysis of service descriptions.
22

Rocha, Carolina da Ponte. "Semi-Automatic histology analysis of ovine pelvic tissues with 3D structural and quantitative analysis". Master's thesis, 2018. https://repositorio-aberto.up.pt/handle/10216/113152.

23

Rocha, Carolina da Ponte. "Semi-Automatic histology analysis of ovine pelvic tissues with 3D structural and quantitative analysis". Dissertation, 2018. https://repositorio-aberto.up.pt/handle/10216/113152.

24

Wu, Chia-Ling and 吳佳凌. "Develope Semi-Automatic Detection Analysis System of IMT and Plaque for Carotid Artery". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/r2qq48.

Abstract
Master's thesis, Chung Yuan Christian University, Institute of Biomedical Engineering, 2014 (ROC year 103).
Calculation of intima-media thickness (IMT) and plaque area is not only time-consuming but also relies mainly on the experience of operators. Therefore, this study proposes a semi-automatic system for calculating the size of carotid plaque and the average thickness of the intima-media complex. This system can greatly reduce time cost as well as measurement error. The procedure was as follows: first, ambiguous ultrasound images were pre-processed and their boundaries strengthened and smoothed; then the region of interest was segmented manually; finally, automatic algorithms estimated the IMT and plaque area. The findings indicate that, when comparing system-computed and manually selected results, the proposed system demonstrated good repeatability, less human error, higher accuracy, and consistency with experts. This system should facilitate medical professionals in obtaining information on plaque area and carotid IMT. Keywords: real time, heart rate variability, spectral analysis, embedded system
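A minimal sketch of the final measurement step, assuming the lumen-intima and media-adventitia interfaces have already been traced as one row position per image column (the segmentation itself is the hard part the thesis addresses, and the pixel spacing below is an arbitrary assumption):

    import numpy as np

    def mean_imt(lumen_intima, media_adventitia, mm_per_pixel):
        """Average intima-media thickness: mean vertical distance
        between the two traced interfaces, converted to millimetres.
        Both inputs are 1-D arrays of row positions, one per column."""
        thickness = np.asarray(media_adventitia) - np.asarray(lumen_intima)
        return thickness.mean() * mm_per_pixel

    li = np.array([120, 121, 121, 122, 123], dtype=float)  # pixel rows
    ma = np.array([128, 129, 130, 130, 131], dtype=float)
    print(mean_imt(li, ma, mm_per_pixel=0.08))  # ~0.66 mm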
25

Seresht, Shadi Moradi. "A methodology for semi-automatic assistance in elicitation and analysis of textual user requirements". Thesis, 2008. http://spectrum.library.concordia.ca/975921/1/MR45325.pdf.

Abstract
Requirements Engineering (RE) is a sub-discipline within Software Engineering increasingly recognized as a critical component in the success of a software development project. With the escalating complexity of software requirements, problems of traditional requirements engineering techniques, including the use of natural language text, are becoming increasingly apparent. This research aims to assist software analysts in dealing with the challenges that exist in correctly understanding user requirements during the interactive process of requirements elicitation and analysis. It proposes a methodology for visualizing textual requirements and ways of making them shared, reviewed and debated by the stakeholders. The proposed methodology serves as a basis for a semi-automated process aimed at capturing, from user requirements text, the conceptual model of the software system under development and its high-level services. The extracted information can be used by analysts in their in-depth study of the requirements text and in avoiding the risks associated with specifying poor or invalid requirements. The approach is based on a syntactic analysis and formalization of the text written in natural language, enriched with domain-related information extracted from reusable domain-specific data models. The applicability of this research is illustrated with a case study, and a prototype implementing our methodology was developed as a proof of concept. The results of controlled experiments designed to evaluate our approach prove the validity of the methodology. The thesis discusses future work, issues, problems, and priorities, and proposes recommendations for research on textual requirements comprehension.
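The thesis builds on full syntactic analysis with its own parser; as a crude modern stand-in for the idea of extracting candidate domain concepts and services from requirements text, the sketch below uses spaCy's noun chunks and verb-object pairs (requires the en_core_web_sm model; the example sentence is invented):

    import spacy  # first: python -m spacy download en_core_web_sm

    nlp = spacy.load("en_core_web_sm")
    requirement = ("The librarian scans the member card and the system "
                   "registers the loan and updates the member record.")

    doc = nlp(requirement)
    # Candidate domain concepts: noun phrases; candidate services:
    # verbs paired with their direct objects.
    concepts = {chunk.text.lower() for chunk in doc.noun_chunks}
    services = [(tok.lemma_, child.text) for tok in doc if tok.pos_ == "VERB"
                for child in tok.children if child.dep_ == "dobj"]
    print(concepts)
    print(services)  # e.g. [('scan', 'card'), ('register', 'loan'), ...]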
26

Chang, Yung-Chieh and 張詠傑. "IVIM-DWI MRI of Allograft Kidneys in 48 hours after Transplantation: A Semi-automatic Quantitative Analysis". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/45913440866778452720.

Abstract
Master's thesis, National Chung Hsing University, Department of Electrical Engineering, 2014 (ROC year 103).
For patients with end-stage renal failure, a successful kidney transplant is the best treatment, but many kidney transplant patients experience delayed graft function. Possible causes are postoperative complications, such as arterial stenosis, venous thrombosis or renal ischemia-reperfusion injury, which tend to increase the immunogenicity of the foreign graft and the risk of acute rejection, and hence reduce survival rates. Patients with severe kidney disease who undergo magnetic resonance imaging (MRI) examination with gadolinium-containing contrast agents may develop nephrogenic systemic fibrosis, so traditional magnetic resonance angiography techniques with intravenous contrast agents cannot be applied to patients with transplanted kidneys. Without contrast agents, renal vascular patency and perfusion status can still be assessed accurately and efficiently, which makes such techniques suitable for patients with severe kidney disease and kidney transplantation. The goal of this study is to examine IVIM MRI and NATIVE-TrueFISP MRA pulse sequences in patients after renal transplantation. We apply mathematical algorithms to IVIM MR images for semi-automatic segmentation and quantification of the renal parenchyma, renal medulla and renal cortex in order to track patients after a kidney transplant, so that the micro- and macro-circulation status of graft kidneys can be evaluated. The experimental results are considerably useful for early diagnosis of delayed graft function and can be beneficial for understanding the mechanism of development of the disorder.
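IVIM-DWI separates micro- from macro-circulation through the standard bi-exponential signal model S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D), where f is the perfusion fraction, D* the pseudo-diffusion coefficient and D the tissue diffusion coefficient. A minimal per-voxel fitting sketch (b-values and parameter ranges are typical literature choices, not the thesis's protocol):

    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, f, d_star, d):
        """Bi-exponential IVIM model for the normalized signal S(b)/S0;
        b in s/mm^2, diffusivities in mm^2/s."""
        return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

    b_values = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)
    true = dict(f=0.25, d_star=0.02, d=0.0015)   # plausible renal values
    rng = np.random.default_rng(3)
    signal = ivim(b_values, **true) + rng.normal(0, 0.005, b_values.size)

    popt, _ = curve_fit(ivim, b_values, signal, p0=[0.1, 0.01, 0.001],
                        bounds=([0, 0.003, 0], [0.5, 0.5, 0.003]))
    print(dict(zip(["f", "d_star", "d"], popt)))  # recovered parameters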
27

Lee, Chia-Jung and 李佳融. "Wall Motion and Shape Analysis System of 3D LV Dynamic Modal with Semi-Automatic Border Delineation". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/39883229365684236039.

Abstract
Master's thesis, Chung Yuan Christian University, Institute of Medical Engineering, 2001 (ROC year 90).
This paper reports an improved methodology for delineating the LV contour in transesophageal echocardiography (TEE) images of the heart. The active contour algorithm was used to delineate the LV chamber from initial images in cylindrical coordinates (polarized scan). The LV chamber was then delineated again (Z-cube scan) using the previously delineated information in Cartesian coordinates. With this, the left ventricle in TEE images can be traced more efficiently and correctly. From the delineated contour of the LV, the LV volume can be calculated by Simpson's rule, and the ejection fraction (EF) can be calculated from the end-diastolic volume (EDV) and end-systolic volume (ESV). The procedure was initiated using a set of hand-traced patterns to improve the efficiency of the delineation process, and it proved effective and accurate. This paper uses volume validation to prove the effectiveness of the border delineation methodology: the volume calculated from the images was compared to the volume of a water balloon. The results show that the volume calculated by this methodology, using polarized scan and Z-cube scan, is more accurate than in previously reported studies; the coefficient of correlation is 0.9685. Because LV volume has no gold standard, the calculated LV volume was also compared to the volume calculated by the Tomtec heart analysis system; the results show positive correlation, with a coefficient of correlation of 0.9364. Building on this LV border delineation, the paper also reports the capability of calculating LV volume, parameters of LV shape analysis, and wall motion analysis. The Bull's Eye display method was used to illustrate the results of shape analysis and wall motion analysis. By texturing the LV 3D model with this Bull's Eye image, the system can display abnormal positions of the LV border during the heartbeat.
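"Simpson's rule" in echocardiographic volumetry usually means the method of disks: the chamber is cut into thin slices, each slice's cross-sectional area is multiplied by the slice thickness, and the products are summed. A minimal sketch with invented numbers:

    import numpy as np

    def lv_volume(slice_areas_mm2, slice_thickness_mm):
        """LV volume by Simpson's method of disks: sum of
        (slice area x slice thickness), returned in millilitres."""
        return np.sum(slice_areas_mm2) * slice_thickness_mm / 1000.0

    # Toy short-axis stack from apex to base (areas in mm^2).
    areas = np.array([50, 220, 480, 700, 820, 860, 800, 650, 400])
    edv = lv_volume(areas, slice_thickness_mm=8.0)
    esv = lv_volume(areas * 0.55, slice_thickness_mm=8.0)  # toy systolic frame
    ef = (edv - esv) / edv * 100
    print(f"EDV={edv:.1f} mL, ESV={esv:.1f} mL, EF={ef:.1f}%")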
28

Ness, Steven. "The Orchive: A system for semi-automatic annotation and analysis of a large collection of bioacoustic recordings". Thesis, 2013. http://hdl.handle.net/1828/5109.

Abstract
Advances in computer technology have enabled the collection, digitization and automated processing of huge archives of bioacoustic sound. Many of the tools previously used in bioacoustics work well with small to medium-sized audio collections, but are challenged when processing large collections of tens of terabytes to petabyte size. In this thesis, a system is presented that assists researchers to listen to, view, annotate and run advanced audio feature extraction and machine learning algorithms on these audio recordings. This system is designed to scale to petabyte size. In addition, this system allows citizen scientists to participate in the process of annotating these large archives using a casual game metaphor. In this thesis, the use of this system to annotate a large audio archive called the Orchive will be evaluated. The Orchive contains over 20,000 hours of orca vocalizations collected over the course of 30 years, and represents one of the largest continuous collections of bioacoustic recordings in the world. The effectiveness of our semi-automatic approach for deriving knowledge from these recordings will be evaluated and results showing the utility of this system will be shown.
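The abstract does not name the audio features used; as a generic illustration of the kind of per-frame feature extraction such a pipeline runs, here is a minimal MFCC sketch with librosa, on a synthetic tone standing in for an orca call:

    import numpy as np
    import librosa

    sr = 22050
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    y = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)  # stand-in signal

    # One 13-dimensional feature vector per short analysis frame
    # (~23 ms hop at this sample rate with librosa's defaults).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    print(mfcc.shape)  # (13, number_of_frames)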
29

Zhou, Chengqian. "Development of semi-automatic data analysis algorithms to examine the influence of sensory stimuli on locomotion and striatal neural activities in rodent models". Thesis, 2021. https://hdl.handle.net/2144/42598.

Abstract
Our brains are constantly integrating sensory and movement information as we navigate within our environment. Through millennia of evolution, humans have learned to associate sensory information with cues to form motor plans. One example is that in almost all cultures, dance and music are tightly linked together as an integrated form of performance and entertainment. However, it is yet unclear how our brain processes sensory information to coordinate a specific movement plan. In this Master's thesis, I investigated the striatum, a brain region known to play crucial roles in movement coordination and habit formation. To examine the effect of sensory inputs on locomotor behavior and striatal neural activities, we performed calcium imaging from the striatum of head-fixed mice during voluntary locomotion. We injected AAV-syn-GCaMP7f virus into the striatum to express the genetically encoded calcium sensor GCaMP7 in striatal neurons, and used a custom fluorescence microscope to measure intracellular calcium change from hundreds of labelled cells simultaneously. To examine how audio-visual stimulation impacts movement behavior, we tracked mice's speed using a spherical treadmill while applying sustained periods of audio-visual stimulation at either 10 Hz or 145 Hz. To quantify the influence of audio-visual stimulation on different locomotion features, I developed several semi-automated algorithms in MATLAB to classify locomotion features, such as stationary periods, motion events, acceleration periods, deceleration periods, and motion transitions. Furthermore, I optimized calcium imaging data processing pipelines and correlated striatal neural activity to various locomotion features. We found that audiovisual stimulation at both 10 Hz and 145 Hz increased locomotion, characterized as an increase in the percentage of time mice spent in motion events and a corresponding decrease in stationary periods. However, only the 145 Hz stimulation, and not the 10 Hz stimulation, promoted motion onset/offset transitions and increased acceleration/deceleration probability. These results demonstrate that audiovisual stimulation can modulate locomotor activities in rodent models, and that different patterns of audiovisual stimulation can selectively modulate different movement parameters. We also found that audiovisual stimulation increased the firing frequency of most responsive neurons regardless of the mice's movement state, suggesting that sensory information can further increase the excitability of some motion-related striatal neurons both when the mice are moving and when they are staying still. These results provide direct evidence that noninvasive audiovisual stimulation can modulate striatal neural activity, suggesting a basis for developing future noninvasive sensory-stimulation-based exercise and dance therapies for motor disorders that involve the striatum, such as Parkinson's disease. Future analysis of how audiovisual stimulation selectively modulates individual striatal neurons and the striatal network during different aspects of movement will provide a more in-depth understanding of how sensory stimulation promotes movement.
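A minimal sketch of one of the locomotion-feature classifiers described above, assuming a treadmill speed trace sampled at a fixed rate; the speed threshold and minimum event duration are illustrative values, not the thesis's parameters:

    import numpy as np

    def motion_events(speed, fs, thresh=0.5, min_dur=0.5):
        """Return (start, end) sample index pairs of motion events:
        runs of speed > thresh lasting at least min_dur seconds."""
        moving = np.asarray(speed) > thresh
        padded = np.concatenate(([False], moving, [False]))
        d = np.diff(padded.astype(int))
        starts = np.flatnonzero(d == 1)
        ends = np.flatnonzero(d == -1)        # exclusive end indices
        keep = (ends - starts) >= min_dur * fs
        return list(zip(starts[keep], ends[keep]))

    fs = 20  # Hz sampling of the treadmill speed signal
    t = np.arange(0, 60, 1 / fs)
    speed = np.where((t > 10) & (t < 25), 3.0, 0.1)  # one 15 s bout
    print(motion_events(speed, fs))  # one event spanning ~samples 200-500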
30

Yu-Cheng Chen and 陳友政. "A Semi-Automatic Biomechanical Analysis System based on K-Means Clustering and Finite Element Method - a Case Study of Dental Implants". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/gksz9q.

31

Yang, Fu-Hua and 楊富樺. "A study of influence of aging and swimming exercise on the changes of dendritic spine morphology by using Matlab based semi-automatic spine analysis software". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/zjv625.

Abstract
Master's thesis, Chung Yuan Christian University, Institute of Biomedical Engineering, 2017 (ROC year 106).
Dendritic spines are the primary recipients of excitatory synaptic input in the brain, and previous research shows that memory and diseases are correlated with dendritic spine morphology. Dendritic spine analysis software is usually very expensive; a free dendritic spine analysis method would benefit many researchers. Our semi-automatic software, NUUspine, is written in Matlab and defines dendritic spine types according to the spine head width, neck length and total length measured manually with the Reconstruct software. The output includes dendritic spine type, density, and area in an Excel file. We used NUUspine to study the influence of aging and swimming exercise on changes in memory and dendritic spine morphology. The experimental groups were: control without exercise, control with exercise, induced aging without exercise, and induced aging with exercise. We used behavioral tests to assess rat aging, and Golgi-Cox staining to visualize spines. The dendritic spines on the dendritic branches of granule cells in hippocampal DG and of pyramidal neurons in hippocampal CA3 and CA1 were analyzed using the NUUspine software. The Morris water maze test showed that aging decreases spatial memory and that continuous swimming improves it. Classification of dendritic spines in hippocampal CA1 showed that the number of filopodium-type spines was larger in the induced-aging-without-exercise group than in the induced-aging-with-exercise group, while the number of mushroom-type spines was larger in the induced-aging-with-exercise group than in the induced-aging-without-exercise group; the total number of spines showed no difference between groups. Altogether, exercise can improve spatial memory and cause dendritic spine morphology to shift toward the mature type. Moreover, we adopted data from a published study to verify the accuracy of the NUUspine software; the results from our software showed no difference from the published ones. We successfully established software that can analyze dendritic spines.
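The decision rules NUUspine applies to the measured head width, neck length and total length are not given in the abstract; the sketch below uses one common morphological convention, with all thresholds being illustrative assumptions only:

    def classify_spine(head_um, neck_um, total_um):
        """Toy dendritic-spine classifier using one common convention:
        filopodium = long and headless; mushroom = large head;
        stubby = essentially no neck; thin = the rest. All thresholds
        here are illustrative, not the thesis's values."""
        if head_um < 0.3 and total_um > 2.0:
            return "filopodium"
        if head_um >= 0.6:
            return "mushroom"
        if neck_um < 0.1:
            return "stubby"
        return "thin"

    # (head width, neck length, total length) in micrometres
    for spine in [(0.2, 1.5, 2.5), (0.8, 0.4, 1.2),
                  (0.4, 0.8, 1.5), (0.3, 0.05, 0.6)]:
        print(spine, "->", classify_spine(*spine))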
32

Mouland, Darrell. "Semi-automated characterization of thin-section petrographic images". 2005.
