Dissertations / Theses on the topic 'Shot segmentation'

To see the other types of publications on this topic, follow the link: Shot segmentation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 45 dissertations / theses for your research on the topic 'Shot segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kayaalp, Isil Burcun. "Video Segmentation Using Partially Decoded Mpeg Bitstream." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1092758/index.pdf.

Full text
Abstract:
In this thesis, a mixed-type video segmentation algorithm is implemented to find scene cuts in MPEG-compressed video data. The main aim is a computationally efficient algorithm for real-time applications; for this reason, partial decoding of the bitstream is used for segmentation. From the partially decoded stream, features such as bitrate, motion vector type, and DC images are extracted and used to find both continuous (gradual) and discontinuous (abrupt) scene cuts in MPEG-2 coded general TV broadcast data. The results are also compared with techniques found in the literature.
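As background for readers new to compressed-domain analysis, the sketch below illustrates the general idea behind DC-image-based cut detection under stated assumptions: the DC coefficients (one per 8x8 block) are assumed to be already extracted by an MPEG parser, and the histogram size and threshold are illustrative rather than the values used in the thesis.

```python
import numpy as np

def dc_histogram(dc_image, bins=32):
    """Normalised histogram of a frame's DC image (one DC value per 8x8 block)."""
    hist, _ = np.histogram(dc_image.ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def detect_cuts(dc_images, threshold=0.4):
    """Flag a cut wherever the L1 distance between consecutive DC histograms
    exceeds a threshold. dc_images: iterable of 2-D arrays taken from a
    partially decoded MPEG stream (e.g. I-frame DC coefficients)."""
    cuts, prev = [], None
    for idx, dc in enumerate(dc_images):
        hist = dc_histogram(dc)
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(idx)
        prev = hist
    return cuts
```

Gradual transitions would need a windowed or cumulative variant of the same distance signal, as the abstract suggests.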
APA, Harvard, Vancouver, ISO, and other styles
2

Naha, Shujon. "Zero-shot Learning for Visual Recognition Problems." IEEE, 2015. http://hdl.handle.net/1993/31806.

Full text
Abstract:
In this thesis we discuss different aspects of zero-shot learning and propose solutions for three challenging visual recognition problems: 1) unknown object recognition from images, 2) novel action recognition from videos, and 3) unseen object segmentation. In all three problems, we have two different sets of classes: the “known classes”, which are used in the training phase, and the “unknown classes”, for which there is no training instance. Our proposed approach exploits the available semantic relationships between known and unknown object classes and uses them to transfer the appearance models from known object classes to unknown object classes in order to recognize unknown objects. We also propose an approach to recognize novel actions from videos by learning a joint model that links videos and text. Finally, we present a ranking-based approach for zero-shot object segmentation. We represent each unknown object class as a semantic ranking of all the known classes and use this semantic relationship to extend the segmentation model of known classes to segment unknown class objects.
October 2016
APA, Harvard, Vancouver, ISO, and other styles
3

Luo, Sai. "Semantic Movie Scene Segmentation Using Bag-of-Words Representation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500375283397255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Volkmer, Timo. "Semantics of Video Shots for Content-based Retrieval." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090220.122213.

Full text
Abstract:
Content-based video retrieval research combines expertise from many different areas, such as signal processing, machine learning, pattern recognition, and computer vision. As video extends into both the spatial and the temporal domain, we require techniques for the temporal decomposition of footage so that specific content can be accessed. This content may then be semantically classified - ideally in an automated process - to enable filtering, browsing, and searching. An important aspect that must be considered is that pictorial representation of information may be interpreted differently by individual users because it is less specific than its textual representation. In this thesis, we address several fundamental issues of content-based video retrieval for effective handling of digital footage. Temporal segmentation, the common first step in handling digital video, is the decomposition of video streams into smaller, semantically coherent entities. This is usually performed by detecting the transitions that separate single camera takes. While abrupt transitions - cuts - can be detected relatively well with existing techniques, effective detection of gradual transitions remains difficult. We present our approach to temporal video segmentation, proposing a novel algorithm that evaluates sets of frames using a relatively simple histogram feature. Our technique has been shown to range among the best existing shot segmentation algorithms in large-scale evaluations. The next step is semantic classification of each video segment to generate an index for content-based retrieval in video databases. Machine learning techniques can be applied effectively to classify video content. However, these techniques require manually classified examples for training before automatic classification of unseen content can be carried out. Manually classifying training examples is not trivial because of the implied ambiguity of visual content. We propose an unsupervised learning approach based on latent class modelling in which we obtain multiple judgements per video shot and model the users' response behaviour over a large collection of shots. This technique yields a more generic classification of the visual content. Moreover, it enables the quality assessment of the classification, and maximises the number of training examples by resolving disagreement. We apply this approach to data from a large-scale, collaborative annotation effort and present ways to improve the effectiveness for manual annotation of visual content by better design and specification of the process. Automatic speech recognition techniques along with semantic classification of video content can be used to implement video search using textual queries. This requires the application of text search techniques to video and the combination of different information sources. We explore several text-based query expansion techniques for speech-based video retrieval, and propose a fusion method to improve overall effectiveness. To combine both text and visual search approaches, we explore a fusion technique that combines spoken information and visual information using semantic keywords automatically assigned to the footage based on the visual content. The techniques that we propose help to facilitate effective content-based video retrieval and highlight the importance of considering different user interpretations of visual content. This allows better understanding of video content and a more holistic approach to multimedia retrieval in the future.
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.

Full text
Abstract:
Recent research approaches in semantics-based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and a finite state machine (FSM). Thirdly, shot detection is implemented using local and global indicators. Fourthly, a context-awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies. It is robust to complicated distortions and capable of locating copied segments inside original videos. Then, objects and events are extracted from MPEG sequences for video highlights indexing and retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
APA, Harvard, Vancouver, ISO, and other styles
6

Ren, Jinchang. "Semantic content analysis for effective video segmentation, summarisation and retrieval." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4251.

Full text
Abstract:
This thesis focuses on four main research themes namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proved to be insensitive to zero-mean-noise, and its gradient-based extension is even robust to non-zero-mean noise and can be used to deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity level measurement, shot and sub-shot are detected for content-adaptive video summarisation. Fourthly, highlights based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. High-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is the achieved high efficiency, which will be useful for many online applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Barbieri, Tamires Tessarolli de Souza. "Representação de tomadas como suporte à segmentação em cenas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13032015-101933/.

Full text
Abstract:
A área de Personalização de Conteúdo tem sido foco de pesquisas recentes em Ciências da Computação, sendo a segmentação automática de vídeos digitais em cenas uma linha importante no suporte à composição de serviços de personalização, tais como recomendação ou sumarização de conteúdo. Uma das principais abordagens de segmentação em cenas se baseia no agrupamento de tomadas relacionadas. Logo, para que esse processo seja bem sucedido, é necessário que as tomadas estejam bem representadas. Porém, percebe-se que esse tópico tem sido deixado em segundo plano pelas pesquisas relacionadas à segmentação. Assim, este trabalho tem o objetivo de desenvolver um método baseado nas características visuais dos quadros, que possibilite aprimorar a representação de tomadas de vídeos digitais e, consequentemente, contribuir para a melhoria do desempenho de técnicas de segmentação em cenas.
The Content Personalization area has been the focus of recent research in Computer Science, and the automatic scene segmentation of digital videos is an important field supporting the composition of personalization services, such as content recommendation or summarization. One of the main approaches to scene segmentation is based on the clustering of related shots. Thus, for this process to be successful, it is necessary to represent shots properly. However, the works reported in the literature have left this topic in the background. Therefore, this work aims to develop a method based on the visual features of frames, which makes it possible to improve the representation of digital video shots and, consequently, the performance of scene segmentation techniques.
APA, Harvard, Vancouver, ISO, and other styles
8

Cámara, Chávez Guillermo. "Analyse du contenu vidéo par apprentissage actif." Cergy-Pontoise, 2007. http://www.theses.fr/2007CERG0380.

Full text
Abstract:
L’objet de cette thèse est de proposer un système d’indexation semi-automatique et de recherche interactive pour la vidéo. Nous avons développé un algorithme de détection des plans automatique sans paramètre, ni seuil. Nous avons choisi un classifieur SVM pour sa capacité à traiter des caractéristiques de grandes dimensions tout en préservant des garanties de généralisation pour peu d’exemples d’apprentissage. Nous avons étudié plusieurs combinaisons de caractéristiques et de fonctions noyaux et présenté des résultats intéressants pour la tâche de détection de plan de TRECVID 2006. Nous avons proposé un système interactif de recherche de contenu vidéo : RETINVID, qui permet de réduire le nombre d’images à annoter par l’utilisateur. Ces images sont sélectionnées pour leur capacité à accroître la connaissance sur les données. Nous avons effectué de nombreuses simulations sur les données de la tâche de concepts haut-niveaux de TRECVID 2005
This thesis presents work towards a unified framework for semi-automated video indexing and interactive retrieval. To create an efficient index, a set of representative key frames is selected from the entire video content. We developed an automatic shot boundary detection algorithm that does away with parameters and thresholds. We adopted an SVM classifier due to its ability to use very high dimensional feature spaces while at the same time keeping strong generalization guarantees from few training examples. We thoroughly evaluated combinations of features and kernels and present the interesting results obtained for the TRECVID 2006 shot extraction task. We then propose an interactive video retrieval system, RETINVID, to significantly reduce the number of key frames annotated by the user. The key frames are selected based on their ability to increase the knowledge of the data. We performed experiments against the TRECVID 2005 high-level concepts task.
APA, Harvard, Vancouver, ISO, and other styles
9

Leibe, Bastian. "Interleaved object categorization and segmentation /." Konstanz : Hartung-Gorre Verlag, 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thompson, Andrew. "Hierarchical Segmentation of Videos into Shots and Scenes using Visual Content." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28827.

Full text
Abstract:
With the large amounts of video data available, it has become increasingly important to be able to quickly search and browse these videos. With that in mind, the objective of this project is to facilitate the process of searching through videos for specific content by creating a video search tool, with an immediate goal of automatically performing a hierarchical segmentation of videos, particularly full-length movies, before carrying out a search for a specific query. We approach the problem by first segmenting the video into its film units. Once the units have been extracted, various similarity measures between features extracted from the film units can be used to locate specific sections in the movie. In order to search through a film properly, we must first have access to its basic units. A movie can be broken down into a hierarchy of three units: frames, shots, and scenes. The important first step in this process is to partition the film into shots. Shot detection, the process of locating the transitions between different cameras, is executed by performing a color reduction, using the 4-Histograms method to calculate the distance between neighboring frames, applying a second-order derivative to the resulting distance vector, and finally using an automatically calculated threshold to locate shot cuts. Scene detection is generally a more difficult task than shot detection. After the shot boundaries of a video have been detected, the next step towards scene detection is to calculate a similarity measure which can then be used to cluster shots into scenes. Various keyframe extraction algorithms and similarity measures from the literature were considered and compared. Frame sampling for obtaining keyframe sets and the Bhattacharyya distance as similarity measure were selected for use in the scene detection algorithm. A binary shot similarity map is then created using the keyframe sets and the Bhattacharyya distance similarity measure. Next, a temporal distance weight and a predetermined threshold are applied to the map to obtain the final binary similarity map. The last step uses the proposed algorithm to locate the shot clusters along the diagonal which correspond to scenes. These methods and measures were successfully implemented in the Video Search Tool to hierarchically segment videos into shots and scenes.
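The shot detection pipeline described above lends itself to a compact sketch. The exact 4-Histograms formulation and the automatic threshold rule are not reproduced here; the quadrant split, the L1 histogram distance, and the mean-plus-k-standard-deviations threshold are assumptions made for illustration only.

```python
import numpy as np

def quadrant_histograms(frame, bins=16):
    """Concatenated histograms of the four quadrants of a (colour-reduced) frame."""
    h, w = frame.shape[:2]
    quads = [frame[:h // 2, :w // 2], frame[:h // 2, w // 2:],
             frame[h // 2:, :w // 2], frame[h // 2:, w // 2:]]
    hists = [np.histogram(q.ravel(), bins=bins, range=(0, 256))[0] for q in quads]
    hists = [hh / max(hh.sum(), 1) for hh in hists]
    return np.concatenate(hists)

def detect_shot_cuts(frames, k=3.0):
    """Distance signal -> second-order derivative -> automatically derived threshold."""
    prev, dists = None, []
    for f in frames:
        hist = quadrant_histograms(f)
        if prev is not None:
            dists.append(np.abs(hist - prev).sum())
        prev = hist
    d2 = np.abs(np.diff(dists, n=2))        # second-order derivative of the distance vector
    thresh = d2.mean() + k * d2.std()       # automatic threshold (illustrative rule)
    return [i + 2 for i, v in enumerate(d2) if v > thresh]
```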
APA, Harvard, Vancouver, ISO, and other styles
11

Hug, Johannes Michael. "Semi-automatic segmentation of medical imagery /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mason, Keith John. "A stakeholder approach to the segmentation of the short haul business air travel market." Thesis, University of Plymouth, 1995. http://hdl.handle.net/10026.1/450.

Full text
Abstract:
The marketing literature deals inadequately with markets which show characteristics of both consumer and industrial markets. In this work such markets are called hybrid markets. The research attempts to find an appropriate research approach for the short haul business related air travel market, which has hybrid market characteristics. Recent studies of the business travel market (Stephenson and Fox, 1987; Toh and Hu, 1988 and 1990) have investigated corporate and traveller attitudes towards frequent flier programmes (see Glossary). However, as yet the airline marketing literature has not investigated the role the purchasing organisation (the employer of the traveller) plays in a decision to purchase business related air travel. Market segmentation is selected as a suitable tool to investigate the business travel market. However, a review of the literature on segmentation for both consumer and industrial products reveals that an approach suited to the characteristics of this market is not available. Consequently, a two stage research approach for hybrid markets is developed. A case study of nine companies in the first stage of the research is used to develop an understanding of corporate involvement in the purchase of business air travel, and identifies three key stakeholder groups in the purchase: the traveller, the travel organiser, and the 'organisation'. The second stage of the research collects data on the stakeholders. Traveller data on the importance of product elements in the purchase are used in a benefit segmentation of the market. The attitude data from 827 business travellers are analysed by factor analysis to identify six principal purchase benefits. These six benefits account for 60.6% of the variance in the data. Six factor scores for each respondent are calculated and then investigated by a k-means iterative partitioning cluster analysis. A robust three-cluster solution is discovered; i.e. three benefit segments are present in the short haul business travel market, based on traveller attitude. Cross-validation tests are carried out to test the stability of this solution. The three segments are investigated to evaluate the influence of other organisational stakeholders on the purchase decision. Differences between segments are found in the travel policy of the employing organisation, the class of travel allowed to travellers, and purchase behaviour. The research indicates that for hybrid markets such as business travel, the role of the employing organisation may be important in purchase decisions. Consequently, it is recommended that future research should assess corporate involvement in purchases of products that have both consumer and industrial elements. The evaluation of the influence of various stakeholder groups on purchase decisions in hybrid markets may reveal previously overlooked marketing opportunities.
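The benefit segmentation procedure in the abstract (factor analysis of importance ratings, then k-means clustering of the factor scores) can be reproduced with standard tools. The sketch below is a modern re-expression under assumptions, not the thesis's original analysis: the rating matrix is random stand-in data, and only the six factors and three clusters come from the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

def benefit_segmentation(ratings, n_factors=6, n_segments=3, seed=0):
    """Two-step benefit segmentation: factor analysis, then k-means on the scores.

    ratings: (n_respondents, n_items) importance ratings of product elements."""
    fa = FactorAnalysis(n_components=n_factors, random_state=seed)
    scores = fa.fit_transform(ratings)       # factor scores per respondent
    km = KMeans(n_clusters=n_segments, n_init=20, random_state=seed)
    segments = km.fit_predict(scores)        # benefit segment per respondent
    return segments, fa, km

# Illustrative call: random data standing in for the 827 traveller responses.
segments, _, _ = benefit_segmentation(np.random.rand(827, 20))
```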
APA, Harvard, Vancouver, ISO, and other styles
13

Rosado-Toro, Jose A. "Right Ventricle Segmentation Using Cardiac Magnetic Resonance Images." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612450.

Full text
Abstract:
The World Health Organization has identified cardiovascular disease as the leading cause of non-accidental deaths in the world. The heart is identified as diseased when it is not operating at peak efficiency. Early diagnosis of heart disease can impact treatment and improve a patient's outcome. An early sign of a diseased heart is a reduction in its pumping ability, which can be measured by performing functional evaluations. These are typically focused on the ability of the ventricles to pump blood to the lungs (right ventricle) or to the rest of the body (left ventricle). Non-invasive imaging modalities such as cardiac magnetic resonance have allowed the use of quantitative methods for ventricular functional evaluation. The evaluation still requires the tracing of the ventricles in the end-diastolic and end-systolic phases. Even though manual tracing is still considered the gold standard, it is prone to intra- and inter-observer variability and is time consuming. Therefore, substantial research work has been focused on the development of semi- and fully automated ventricle segmentation algorithms. In 2009 a medical imaging conference issued a challenge for short-axis left ventricle segmentation. A semi-automated technique using polar dynamic programming generated results that were within human variability. This is because a path in a polar coordinate system yields a circular object in the Cartesian grid, and the left ventricle can be approximated as a circular object. In 2012 there was a right ventricle segmentation challenge, but no polar dynamic programming algorithms were proposed. One reason may be that polar dynamic programming can only segment circular shapes. To use polar dynamic programming for the segmentation of the right ventricle, we first expanded the capability of the technique to segment non-circular shapes. We apply this new polar dynamic programming in a framework that uses user-selected landmarks to segment the right ventricle in the four-chamber view. We also explore the use of four-chamber right ventricular segmentation to segment short-axis views of the right ventricle.
APA, Harvard, Vancouver, ISO, and other styles
14

Harders, Matthias. "Haptically assisted interactive 3D segmentation of the intestinal system /." [S.l.] : [s.n.], 2002. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Choe, Chong Pyo. "The role of pair-rule genes in Tribolium segmentation." Diss., Manhattan, Kan. : Kansas State University, 2006. http://hdl.handle.net/2097/209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lievre, Maryline. "Analyse multi-échelles et modélisation de la croissance foliaire chez Arabidopsis thaliana : mise au point et test d’un pipeline d’analyses permettant une analyse intégrée du développement de la cellule à la pousse entière." Thesis, Montpellier, SupAgro, 2014. http://www.theses.fr/2014NSAM0051/document.

Full text
Abstract:
Ce travail est basé sur le constat du manque de méthodes permettant l'analyse intégrée des processus contrôlant le développement végétatif d'Arabidopsis thaliana dans les études phénotypiques multi-échelles. Un phénotypage préliminaire de la croissance foliaire de 91 génotypes a permis de sélectionner 3 mutants et des variables d'intérêt pour une étude plus poussée du développement de la pousse. Un pipeline de méthodes d'analyses combinant techniques d'analyse d'images et modèles statistiques a été développé pour intégrer les mesures faites à l'échelle de la feuille et de la pousse. Des modèles multi-phasiques à changements de régime semi-markovien ont été estimés pour chaque génotype permettant une caractérisation plus pertinente des mutants. Ces modèles ont validé l'hypothèse selon laquelle le développement de la rosette peut être découpé en une suite de phases de développement, pouvant varier selon les génotypes. Ils ont aussi mis en évidence le rôle structurant de la variable «trichome abaxial», bien que les phases de développement ne puissent être entièrement expliquées par ce trait. Un 2nd pipeline d'analyses combinant une méthode semi-automatique de segmentation d'images de l'épiderme foliaire et l'analyse des surfaces de cellules par un modèle de mélange de lois gamma à paramètres liés par une loi d'échelle a été développé. Ce modèle nous a permis d'estimer la loi du nombre de cycles d'endoréduplication. Nous avons mis en évidence que cette loi dépendait du rang de la feuille.Le cadre d'analyses multi-échelles développé et testé durant cette thèse devrait être assez générique pour être appliqué à d'autres espèces végétales dans diverses conditions environnementales
This study is based on the observation of a lack of methods enabling the integrated analysis of the processes controlling vegetative development in Arabidopsis thaliana during multi-scale phenotypic studies. A preliminary leaf growth phenotyping of 91 genotypes enabled the selection of 3 mutants and different variables of interest for a more in-depth analysis of shoot development. We developed a pipeline of analysis methods combining image analysis techniques and statistical models to integrate the measurements made at the leaf and shoot scales. Semi-Markov switching models were built for each genotype, allowing a more thorough characterization of the studied mutants. These models validated the hypothesis that the rosette can be structured into successive developmental phases that may change depending on the genotype. They also highlighted the structuring role of the ‘abaxial trichomes' variable, although the developmental phases cannot be explained entirely by this trait. We developed a second pipeline of analysis methods combining a semi-automatic method for segmenting leaf epidermis images and the analysis of the obtained cell areas using a gamma mixture model whose component parameters are tied by a scaling rule. This model allowed us to estimate the mean number of endocycles. We highlighted that this mean number of endocycles was a function of leaf rank. The multi-scale pipeline of analysis methods that we developed and tested during this PhD should be sufficiently generic to be applied to other plant species in various environmental conditions.
APA, Harvard, Vancouver, ISO, and other styles
17

Bjurström, Håkan, and Jon Svensson. "Assessment of Grapevine Vigour Using Image Processing." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1342.

Full text
Abstract:

This Master’s thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches on how shoot counting can be done. Within canopy assessment, the emphasis is on methods to estimate canopy density. Other possible assessment areas are also discussed, such as canopy colour and measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.

APA, Harvard, Vancouver, ISO, and other styles
18

Davhana, Shandukani Albert. "The influence of mobile internet on advertising to consumers in the short-term insurance industry / by Shandukani A. Davhana." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4479.

Full text
Abstract:
Marketing and advertisement activities are transforming as new digital media streams emerge. It is believed that the first major digital transition took place when broadcast media such as television and cinema, also called the first screen, gave way to the PC Internet, referred to as the second screen. The last couple of years saw an expanding transition to the third screen, the mobile handset, commonly known as the cellphone in South Africa. The rapid explosion of mobile phones and other mobile devices has created a new marketing channel. The use of Short Messaging Service, Multimedia Message Service, graphic WAP banners, and video clips to communicate with customers through their mobile devices / cellphones has gained popularity, making the mobile phone the ultimate medium for one-to-one or one-to-many marketing. And the more mobile handsets penetrate the mass market, the greater the opportunities for advertising experiences. This exploratory study investigates the impact and effectiveness of mobile advertising to consumers in the short-term insurance industry. The study briefly focuses on whether marketers are reaping the benefits of using this medium to communicate and market their products and services to the identified target market. The findings indicate that mobile advertising has an impact on consumers in the short-term insurance industry. The study also suggests that where mobile advertising seems to have no effect, the root of the problem lies in the mass-marketing approach. Customers are looking for full customisation of mobile marketing messages, based on their individual requirements, tastes, preferences, location and time, and the messages should also add value to consumers. For maximum impact, it is also recommended that marketers build measurement, targeting and optimisation into their campaign processes.
Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2011.
APA, Harvard, Vancouver, ISO, and other styles
19

Hlaváčová, Linda. "Možnosti optimalizace životního cyklu zákazníka e-shopu." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193163.

Full text
Abstract:
If an e-shop wants to have loyal customers, it has to begin actively building relationships with them. This master's thesis discusses the methods by which an e-shop can start working on building strong customer relationships, and puts them in the context of modern trends in online marketing and current advanced communication technologies. Customer lifecycle optimisation, which is closely related to data analysis, communication, online marketing and business process improvement, is not yet a well-known topic in the Czech Republic, even though the methods do not require learning difficult new tools and have no other significant entry barriers. This master's thesis should provide e-shop owners and their marketers with a comprehensive guide on how to apply knowledge about the customer lifecycle to their e-shop.
APA, Harvard, Vancouver, ISO, and other styles
20

Vallet, Félicien. "Structuration automatique de talk shows télévisés." PhD thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00635495.

Full text
Abstract:
Modern concerns about preserving digital heritage have made professional archiving companies eager for new indexing tools, and in particular for automatic structuring methods. In this thesis, we focus on a television genre that has, to our knowledge, been little analysed: the talk show. Inspired by work from the humanities community, and more specifically by semiological studies, we first propose a reflection on the structuring of talk show programmes. Then, having stressed that a structuring scheme is only meaningful if it is part of an approach to solving use cases, we propose an evaluation of the organisation thus identified by means of a user experiment. This experiment highlights the importance of speakers and the advantage of using the speech turn as the atomic entity in place of the shot, traditionally adopted in structuring work. Having underlined the importance of speaker diarization for the structuring of talk show programmes, we devote the second part of this thesis specifically to it. We first present a state of the art of the techniques used in this field of research, in particular unsupervised methods. We then present the results of a first piece of work on speech turn detection and clustering. Finally, an original system that exploits visual information more effectively is proposed. The validity of the presented method is tested on the programme corpora Le Grand Échiquier and On n'a pas tout dit. In view of the results, our final system compares favourably with state-of-the-art work. It supports the idea that visual features can be of great interest, even for tasks that are supposedly exclusively audio such as speaker diarization, and that the use of kernel methods in a multimodal context can prove very effective.
APA, Harvard, Vancouver, ISO, and other styles
21

Guevara, Alvez Pamela Beatriz. "Inference of a human brain fiber bundle atlas from high angular resolution diffusion imaging." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00638766.

Full text
Abstract:
The structure and organisation of the white matter of the human brain are not yet completely known. Diffusion-weighted magnetic resonance imaging (dMRI) offers a unique approach for studying the structure of brain tissue in vivo, allowing non-invasive reconstruction of the trajectories of the brain's fibre bundles using tractography. Today, recent high angular resolution diffusion imaging (HARDI) techniques have greatly improved the quality of tractography compared with standard diffusion tensor imaging (DTI). However, the resulting tractography datasets are very complex and comprise millions of fibres, which calls for a new generation of analysis methods. Beyond mapping the main white matter pathways, this new technology opens the way to the study of short association bundles, which have rarely been studied before and which are the focus of this thesis. The objective is to infer an atlas of the fibre bundles of the human brain and a method that allows this atlas to be mapped to any new brain. To overcome the limitation induced by the size and complexity of tractography datasets, we propose a two-level strategy that chains intra- and inter-subject fibre clustering. The first level, an intra-subject clustering, is composed of several steps that perform a hierarchical and robust clustering of the fibres produced by tractography and can handle datasets containing millions of fibres. The final result is a set of a few thousand homogeneous fibre bundles representing the structure of the whole tractography dataset. This simplified representation of the white matter can be used for studies of the structure of individual bundles or for group analyses. The robustness and the scalability of the method are verified using simulated fibre datasets. The second level, an inter-subject clustering, gathers the bundles obtained at the first level for a population of subjects and performs a clustering after spatial normalisation. Its output is a model composed of a list of generic fibre bundles that can be detected in most of the population. A validation with simulated datasets is applied in order to study the behaviour of the inter-subject clustering on a population of subjects aligned with an affine transformation. The method was applied to fibre datasets computed from HARDI data of twelve adult brains. A new multi-subject HARDI bundle atlas, representing the variability in shape and position of the bundles across subjects, was thus inferred. The atlas comprises 36 deep white matter bundles, some of which represent subdivisions of known bundles, and 94 short association bundles of the superficial white matter. Finally, we propose an automatic segmentation method for mapping this atlas to any new subject.
APA, Harvard, Vancouver, ISO, and other styles
22

Prášil, Zdeněk. "Využití data miningu v řízení podniku." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-150279.

Full text
Abstract:
This thesis is focused on data mining and its use in the management of an enterprise. The thesis is structured into a theoretical and a practical part. The aim of the theoretical part was to find out: 1/ the most used data mining methods, 2/ typical application areas, 3/ typical problems solved in those application areas. The aim of the practical part was: 1/ to demonstrate the use of data mining in a small Czech e-shop to understand the structure of its sales data, 2/ to demonstrate how data mining analysis can help to increase marketing results. In my analysis of the literature, I found decision trees, linear and logistic regression, neural networks, segmentation methods and association rules to be the most used data mining methods. CRM and marketing, financial institutions, insurance and telecommunication companies, retail trade and production are the application areas using data mining the most. The specific data mining tasks focus on the relationships between marketing, sales and customers to make better business. In the analysis of the e-shop data I revealed the types of goods which are bought together. Based on this fact I proposed that a strategy supporting this type of shopping is crucial for business success. In conclusion, I showed that data mining methods are appropriate also for a small e-shop and have the capacity to improve its marketing strategy.
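The "goods bought together" finding is a classic market-basket analysis. A minimal pure-Python sketch of pair-wise association rules is shown below, under the assumption that each order is available as a set of item identifiers; the support and confidence cut-offs are illustrative.

```python
from collections import Counter
from itertools import combinations

def pair_rules(transactions, min_support=0.01, min_confidence=0.3):
    """Find item pairs frequently bought together and derive simple A -> B rules.

    transactions: list of sets of item identifiers, one set per e-shop order."""
    n = len(transactions)
    item_counts, pair_counts = Counter(), Counter()
    for basket in transactions:
        item_counts.update(basket)
        pair_counts.update(combinations(sorted(basket), 2))
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = count / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return sorted(rules, key=lambda r: -r[3])   # strongest rules first
```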
APA, Harvard, Vancouver, ISO, and other styles
23

"Efficient techniques for video shot segmentation and retrieval." Thesis, 2007. http://library.cuhk.edu.hk/record=b6074424.

Full text
Abstract:
Video segmentation is the first step in most content-based video analysis. In this thesis, several methods are proposed to detect shot transitions, including cuts and wipes. In particular, a new cut detection method is proposed that applies multiple adaptive thresholds during a three-step processing of frame-by-frame discontinuity values. A "likelihood value", which measures the possibility of the presence of a cut at each step of processing, is used to reduce the influence of threshold selection on the detection performance. A wipe detection algorithm is also proposed in this thesis to detect various wipe effects with accurate frame ranges. In the algorithm, we carefully model a wipe based on its properties and then use the model to remove possible confusion caused by motion or other transition effects.
With the segmented video shots, video indexing and retrieval systems retrieve video shots using shot-based similarity matching based on the features of shot key-frames. Most shot-based similarity matching methods focus on low-level features such as color and texture. Those methods are often not effective enough for video retrieval due to the large gap between the semantic interpretation of videos and the low-level features. In this thesis, we propose an attention-driven video retrieval method using an efficient spatiotemporal attention detection framework. Within the framework, we propose an efficient method for focus of attention (FOA) detection which adaptively combines spatial and motion attention to form an overall attention map. Without computing motion explicitly, it detects motion attention using the rank deficiency of gray-scale gradient tensors. We also propose an attention-driven shot matching method using primarily FOA. The matching method boosts the attended regions in the respective shots by converting attention values to importance factors in the process of shot similarity matching. Experimental results demonstrate the advantages of the proposed method in shot similarity matching.
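One plausible reading of the motion-attention cue described above (no explicit motion estimation; rank deficiency of grayscale gradient tensors) is sketched below: the spatiotemporal structure tensor is accumulated over a local window, and its smallest eigenvalue, which is near zero when the tensor is rank deficient (no motion), serves as the motion attention. The window size, the eigenvalue-based measure and the fixed fusion weight are all assumptions; the thesis's adaptive fusion rule is not specified in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def motion_attention(volume, window=7):
    """Motion attention from spatiotemporal gradient tensors.

    volume: (T, H, W) grayscale clip. For every pixel, the 3x3 structure tensor
    of (Ix, Iy, It) is accumulated over a spatial window; its smallest
    eigenvalue measures how far the tensor is from rank deficiency."""
    It, Iy, Ix = np.gradient(volume.astype(float))
    g = np.stack([Ix, Iy, It], axis=-1)                         # (T, H, W, 3)
    outer = np.einsum('thwi,thwj->thwij', g, g).reshape(*volume.shape, 9)
    outer = uniform_filter(outer, size=(1, window, window, 1))  # spatial accumulation
    tensors = outer.reshape(*volume.shape, 3, 3)
    eigvals = np.linalg.eigvalsh(tensors)                       # ascending, per pixel
    return eigvals[..., 0]

def overall_attention(spatial, motion, alpha=0.5):
    """Fuse normalised spatial and motion attention maps (fixed weight here)."""
    norm = lambda m: (m - m.min()) / (m.ptp() + 1e-9)
    return alpha * norm(spatial) + (1 - alpha) * norm(motion)
```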
Li, Shan.
"September 2007."
Adviser: Moon-Chuen Lee.
Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (p. 150-168).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract in English and Chinese.
School code: 1307.
APA, Harvard, Vancouver, ISO, and other styles
24

Hsieh, Yu-Ting, and 謝語婷. "Key Frame Detection Based on Video Shot Segmentation using Maximally Stable Extremal Regions." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/15100289314756697636.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
Academic year 98 (ROC calendar)
This thesis presents an approach to key frame detection based on video shot segmentation for video summarization, which helps the user understand and search video content rapidly. Toward semantics-based video summarization, the detection of hidden camera operations remains a challenge. In general, the structure of a video sequence can be divided into four levels: images, shots, scenes and video clips. A scene can involve several shots, where each shot is supposed to contain a consistent human visual perception in appearance and activity. Camera operations can be categorized into pan, zoom in and zoom out, pause, and abrupt shot change. The visual objects in a video shot suffer from geometric distortion, including translation, rotation and scaling, under a specific camera operation. Traditional key frame extraction methods usually ignore the camera operation, which limits the accuracy of video summarization. In this thesis, we propose key frame detection based on video shot segmentation using the detection of maximally stable extremal regions, which is insensitive to camera operation. The region-based approach performs similarity analysis between neighboring frames and achieves invariance to translation, rotation, zoom in and zoom out. We use the first frame of a video shot as the training sample to determine the segmentation parameter that results in maximally stable regions, using a Hough-transform voting scheme. The segmentation parameter is then used to segment consecutive frames. A shot boundary is defined at a frame that has a large distance to its neighboring frame according to the segmented regions. In the experiments, the proposed method is verified on a variety of video sequences. Experimental results demonstrate that the proposed method effectively handles geometric distortion and achieves accurate video summarization.
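A rough sketch of the region-based comparison, using OpenCV's MSER detector, is given below. The signature (a histogram of region areas) and the threshold are illustrative assumptions, not the thesis's Hough-voting parameter selection or its exact inter-frame distance.

```python
import cv2
import numpy as np

def mser_signature(gray, mser, bins=16):
    """Summarise a grayscale frame by the size distribution of its MSER regions."""
    regions, _ = mser.detectRegions(gray)
    areas = np.array([len(r) for r in regions], dtype=float)
    hist, _ = np.histogram(np.log1p(areas), bins=bins, range=(0, 12))
    return hist / max(hist.sum(), 1)

def shot_boundaries(gray_frames, threshold=0.5):
    """Flag a boundary where the region-based signature changes abruptly.

    Region statistics vary little under pan, zoom and rotation, which is why a
    region-based distance is more stable than raw pixel differences."""
    mser = cv2.MSER_create()
    prev, cuts = None, []
    for idx, gray in enumerate(gray_frames):
        sig = mser_signature(gray, mser)
        if prev is not None and np.abs(sig - prev).sum() > threshold:
            cuts.append(idx)
        prev = sig
    return cuts
```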
APA, Harvard, Vancouver, ISO, and other styles
25

Kao, Chun-Yi, and 高俊義. "On the Study of Shot Segmentation in Compressed Domain." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/85629301959803638377.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Electrical Engineering
Academic year 91 (ROC calendar)
In order to support the new functionalities of multimedia applications, the development of techniques for fast and efficient analysis of video streams is essential. Partitioning a video sequence into shots is the first step toward video structure parsing and content-based video indexing and retrieval. Given that video is often stored efficiently in the compressed domain, the costly overhead of decompression can be reduced by analyzing the compressed data directly. In this research, we propose an object-based algorithm for detecting shot boundaries that is directly applicable to the MPEG-2 compressed domain. Inspired by object-based video compression standards, we segment each video frame into two objects: a foreground object and a background object. The foreground object, which is influenced either by fast global motion or by large-scale object motion, is unreliable for shot boundary detection. Therefore, the proposed algorithm first extracts the background region and then detects video cuts, including hard cuts, fades and dissolves, by utilizing the compressed-domain data in these regions. To detect hard cuts, we evaluate the DC block difference and further calculate the cumulative DC block difference to determine candidates for gradual transitions: fades and dissolves. By observing the temporal curve of DC values around a transition, we find that dissolves always occur in downward-concave regions, while fades appear as a downhill slope to zero or an uphill slope from zero. We propose methods to detect these effects and further study the usefulness of other compressed-domain data to support shot detection.
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Chun-Yen, and 林群雁. "The Study of Benefit Segmentation, Market Segmentation and Customer Satisfaction--The Case of Individual Coffee Shop in Kaohsiung." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/26946135088179056995.

Full text
Abstract:
Master's thesis
Kaohsiung Hospitality College
Graduate Institute of Hospitality Management
Academic year 95 (ROC calendar)
The purpose of this research is to determine the market segmentation of individual coffee shops as a way to identify target markets for these businesses. The influence of demographic (population) variables on benefit segmentation and market segmentation was studied, as well as the influence of market segmentation on customer satisfaction. The research used a fixed-quota sampling method, and 330 samples were collected. The findings show that there are three different groups of coffee shop customers in Kaohsiung (environment-courtship, cheap-fresh, and quality-ambiance). They also show that some of the demographic variables affect market segmentation, and that market segmentation affects customer satisfaction.
APA, Harvard, Vancouver, ISO, and other styles
27

Hsieh, Meng-Hua, and 謝孟樺. "Context-Aware Short Text Segmentation based on Word Co-occurrence Model." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/pmthuv.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 106 (ROC calendar)
Text mining plays an extremely important role in the era of information explosion. Before any text mining technique can be applied, the most important prerequisite is word segmentation. The quality of the word segmentation result significantly affects the performance of follow-up text mining applications. The word segmentation problem is more pronounced and important in Chinese text analysis. Existing word segmentation methods achieve good results on formal articles such as news, but their performance on short and informal sentences still needs improvement. In this thesis, we propose a word segmentation method that takes the semantic context into account. The main idea is to create a co-occurrence dictionary from Wikipedia articles and to use this dictionary to improve the performance of word segmentation. Through experimental evaluation, we verify that the accuracy of our proposal is 7.84% higher than that of existing methods.
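To make the idea concrete, here is a minimal sketch of a co-occurrence dictionary built from a tokenised corpus (e.g. Wikipedia dumps) and used to score candidate segmentations against their surrounding context. The window size and the additive scoring rule are assumptions for illustration; the thesis's exact model is not reproduced here.

```python
from collections import Counter

def build_cooccurrence(tokenised_articles, window=10):
    """Count how often two words appear within `window` tokens of each other."""
    cooc = Counter()
    for tokens in tokenised_articles:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + window]:
                if v != w:
                    cooc[frozenset((w, v))] += 1
    return cooc

def context_score(candidate_words, context_words, cooc):
    """Score one candidate segmentation by its co-occurrence with the context;
    the highest-scoring candidate segmentation is kept."""
    return sum(cooc[frozenset((w, c))]
               for w in candidate_words for c in context_words if w != c)
```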
APA, Harvard, Vancouver, ISO, and other styles
28

Huang, Hsiao-Hui, and 黃曉蕙. "Automatic Short-Axis Left Ventricle Segmentation: Application to MOLLI Myocardium T1 Mapping." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/r6k97z.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 103 (ROC calendar)
Researchers widely use the standardized 17-segment myocardial model of the American Heart Association (AHA-17) to measure myocardial perfusion, left ventricular function, and coronary function in clinical investigations. However, to achieve the AHA-17 segmentation, researchers generally select regions of interest (ROIs) manually and calculate the average T1 value of each ROI. The procedure has to be repeated 17 times to reconstruct the AHA-17 diagram, which is a time-consuming task for large-scale databases. This study presents an automatic segmentation method for short-axis cardiac magnetic resonance images in modified Look-Locker inversion recovery (MOLLI) data sets. The automatic segmentation is divided into two parts: the segmentation of the LV blood pool region and of the LV walls. We used an image-synthesis method and a layer-growing method to improve the segmentation accuracy. Results demonstrate that the accuracy of the obtained myocardium mask is significantly improved by the layer-growing method. In summary, this study presents a practical and robust tool for automatic myocardium segmentation in MOLLI data sets.
APA, Harvard, Vancouver, ISO, and other styles
29

Shine, Chen Young, and 陳楊祥. "The Segmentation Research of Male-Leisure-Shoe's Market in Taiwan." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/10566589799983769723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chang, Kai Cheng, and 張凱程. "Using Sequence-to-Sequence with Long Short Term Memory Model for Chinese Word Segmentation." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107CGU05392021%22.&searchmode=basic.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

"Short-term persistence in emerging market closed-end funds performance and capital market segmentation." Tulane University, 2012.

Find full text
Abstract:
In the first chapter of this dissertation, using monthly data from January 1995 to December 2010 as a sample of emerging market closed-end funds (CEFs), I demonstrate that a common factor exists among stock returns, net asset value returns, and the difference between share price (SP) and NAV closed-end fund returns in emerging markets. Using time series and cross-section regressions, I furthermore find persistence in stock price returns and net asset value returns, but not in their differences, in the short term. I demonstrate that these factors suggest that fund managers' skills affect persistence. Additionally, I analyze market factors that explain the predictability of future CEF performance. The results provide support for CEF persistence in emerging markets, whereas CEF returns remain inconsistent with "hot hand" investment strategies. In the second chapter, I use monthly data from January 1980 to May 2009 for a sample of 35 closed-end funds that invest in emerging and developed markets. I find that discounts/premiums in emerging and developed markets forecast both share price (SP) and net asset value (NAV) returns, with the forecasting power of the latter being stronger. Additional tests show that the fund discounts/premiums contain information about future macroeconomic factors of their corresponding emerging and developed markets. I also document a strong association between investors' expectations of future macroeconomic conditions and the difference between SP and NAV returns. The results support a rational market segmentation explanation for the discounts/premiums of emerging market closed-end funds, but they are not consistent with a straightforward investor sentiment explanation.
APA, Harvard, Vancouver, ISO, and other styles
32

Jhang, Ren-jie, and 張荏傑. "The analysis of market segmentation of cultural creative industry - take example for Glove Puppet Show." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27001113997256786442.

Full text
Abstract:
Master's thesis
Fo Guang University
Department of Economics
Academic year 95 (ROC calendar)
In the 21st century, under the operation of the economic system and the trend of valuing leisure, the development of the cultural creative industry is considered an important factor in enhancing national economic development and social quality of life, and the glove puppet show is one of its forms. Glove puppet shows change as the times evolve. In order to maintain their tradition, keep their competitiveness, and conform to current trends, a newly commercialized form of glove puppet show marketing has slowly taken shape, with many changes in puppetry, costumes, plots, narration, acousto-optic effects, photo editing, and so on. Through questionnaires and statistical analyses of reliability, factors, and clusters, this thesis uses market segmentation methods to outline consumers' expectations of, and demand for, glove puppet shows in the Yilan area and to develop the service quality of products that meets customers' expectations, so as to strengthen the competitiveness of the glove puppet show industry. The study found that: (1) Consumers can be divided into 5 groups: independent-calculation, fashionable-enjoyment, traditional-conservation, natural-convenience, and showy-selfhood types. (2) Regarding the characteristics and interests of glove puppet show products, the attributes valued significantly differently across market segments include traditional literature, plot content, quality, acousto-optic effects, convenience, diversity, and message speed. (3) For dubbing language and narration dubbing, the differences are not particularly significant, so entrepreneurs can try different dubbing languages and use different people for narration dubbing; it is not necessary for one person to voice all the roles. (4) Regarding gender, the proportion of women is higher than that of men in the independent-calculation and traditional-conservation groups. (5) Regarding education level, graduate-school backgrounds are concentrated in the independent-calculation and traditional-conservation types, while university and college backgrounds are concentrated in the fashionable-enjoyment and natural-convenience types. (6) Consumers of the natural-convenience type have heavily used the internet to illegally download glove puppet show episodes, a problem the government needs to take more seriously. Key words: cultural creative industry, glove puppet show, market segmentation, reliability analysis, factor analysis, cluster analysis
APA, Harvard, Vancouver, ISO, and other styles
33

Venter, Dewald. "Market segmentation of visitors to two distinct regional tourism events in South Africa." Thesis, 2010. http://hdl.handle.net/10352/124.

Full text
Abstract:
The purpose of this study was to segment the various markets attending the Transvalia Open Air Show (Vaal Region) and the Cherry Festival (Free State). A comparison of the various segments enabled the researcher to identify key success factors with regard to market segmentation for tourism events to be implemented in the Vaal Region. It will also enable organisers to target the correct tourist market segments for both events and provide guidelines for improving the planning and marketing of events in both regions. This study therefore aimed to compare the market segments of two tourism events, the Cherry Festival. held in Ficksburg which is located in the Free State and the Transvalia Open Air Show, held in the Vaal Region. Questionnaires were distributed amongst visitors on the festival grounds as well as in areas surrounding the festival grounds. The study was based on availability sampling since only visitors who were willing to parttcipate in the survey completed the questionnaires. A total of 550 questionnaires was distributed, of which 472 were suitable for use. At the Transvalia Open Air Show 273 questionnaires were completed, of which 260 were usable. Students were trained by the researcher to assist in the survey. The questtonnaires were distributed on the show grounds. The data were used to compile graphs and tables so that a profile of each festival could be designed The variables that were the focal point of this study were gender, occupation, language, visitors' province of ongin, group size, number of days spent at these events and average spend. These results can contribute to better marketing and more targeted markets to create a larger number of attendants. The organisers can determine what type of entertainment, music and activities the attendants favour, so that all the elements of the event can then be marketed as a whole. Feedback also allows the organisers to improve the facilities and services available at the events.
APA, Harvard, Vancouver, ISO, and other styles
34

Jaramillo, Carlos Anthony. "Evolution of the insect segmentation hierarchy : comparative analyses of the patterning process in long and short germ insects /." 2003. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3088750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Chiao-Ning, and 陳巧寧. "Automatic Extracellular Volume Fraction Mapping in the Myocardium: Deformable Image Registration Combined with Short-Axis Left Ventricle Segmentation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5hby4y.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
104
Among clinical cardiovascular magnetic resonance (CMR) imaging techniques, extracellular volume fraction (ECV) mapping has drawn much attention for its applications to focal myocardial infarction, diffuse myocardial fibrosis, and other heart diseases. ECV is estimated from the difference between T1 values measured before and after administration of a contrast agent, together with the hematocrit. Accurate T1 estimation and deformable image registration are both crucial for ECV mapping; however, the change in image contrast between the pre- and post-contrast acquisitions is a challenge for registration. In this thesis, we propose a registration method that combines automatic segmentation of the left-ventricle walls with multiple initial T1 values. Compared with previous methods, the proposed method markedly reduces T1-fitting errors and significantly improves both the overlap between pre- and post-contrast images and the accuracy of the ECV map. In addition, the segmentation results are helpful for ECV quantification and clinical applications.
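ECV is conventionally computed as (1 - hematocrit) times the ratio of the contrast-induced change in myocardial R1 (R1 = 1/T1) to the change in blood-pool R1. The snippet below is a minimal sketch of that standard calculation only; the T1 and hematocrit values are illustrative, and the registration and segmentation steps proposed in the thesis are not reproduced.

def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from native and post-contrast T1 values (in ms)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre        # change in myocardial R1
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre  # change in blood-pool R1
    return (1.0 - hematocrit) * d_r1_myo / d_r1_blood

# Illustrative values (ms); a real pipeline would read them from co-registered T1 maps.
print(round(ecv(t1_myo_pre=1000, t1_myo_post=450,
                t1_blood_pre=1550, t1_blood_post=300, hematocrit=0.42), 3))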
APA, Harvard, Vancouver, ISO, and other styles
36

"Data mining applied to direct marketing and market segmentation." Tese, MAXWELL, 2001. http://www.maxwell.lambda.ele.puc-rio.br/cgi-bin/db2www/PRG_0991.D2W/SHOW?Cont=1891:pt&Mat=&Sys=&Nr=&Fun=&CdLinPrg=pt.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, An-Cheng, and 張安政. "Automated Left Ventricle Segmentation in Cardiac Short-Axis MR Images Using Cost-Volume Filtering and Novel Myocardial Contour Processing Framework." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/08095156356149825986.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
103
Cardiovascular diseases are often associated with abnormal left ventricular (LV) cardiac parameters, such as deviations in ejection fraction (EF) and cardiac output. This information can be extracted from cardiac magnetic resonance (CMR) scans of the heart, which requires image segmentation in CMR images. Previous work on left ventricle segmentation in CMR images is often hindered by complex inner heart-wall geometry or requires more involved operator intervention. In this work, we employ a novel cost-volume filtering (CVF) scheme combined with a novel myocardial contour processing framework to overcome the segmentation difficulties caused by MR imaging artifacts and inner heart-wall irregularities (e.g., papillary muscles and trabeculae carneae). Results show improved accuracy and robustness over previous work. In clinical terms, quantitative analysis shows close agreement between manually and automatically determined cardiac function, with no systematic bias in the EF estimation error.
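In its generic form, cost-volume filtering builds one photometric cost slice per label, smooths each slice with a spatial filter, and assigns each pixel the label with the lowest filtered cost. The Python sketch below illustrates that idea on a synthetic short-axis-like image; a plain box filter stands in for the edge-preserving guided filter of the original CVF formulation, and the myocardial contour processing framework of the thesis is not reproduced.

import numpy as np
from scipy.ndimage import uniform_filter

def cost_volume_label_map(image: np.ndarray, label_means, radius: int = 5) -> np.ndarray:
    """Per-label photometric costs are spatially smoothed, then each pixel
    takes the label with the lowest filtered cost (winner-take-all)."""
    costs = np.stack([(image - m) ** 2 for m in label_means])           # one cost slice per label
    filtered = np.stack([uniform_filter(c, size=2 * radius + 1) for c in costs])
    return filtered.argmin(axis=0)

# Toy short-axis-like slice: bright disc (blood pool) inside a darker ring (myocardium).
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
image = np.where(r < 18, 0.9, np.where(r < 30, 0.4, 0.1))
image = image + 0.05 * np.random.default_rng(0).normal(size=image.shape)

labels = cost_volume_label_map(image, label_means=[0.1, 0.4, 0.9])
print("pixels per label:", np.bincount(labels.ravel()))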
APA, Harvard, Vancouver, ISO, and other styles
38

"The project segmentation of caixa econômica federal: the managers' perception." Tese, MAXWELL, 2004. http://www.maxwell.lambda.ele.puc-rio.br/cgi-bin/db2www/PRG_0991.D2W/SHOW?Cont=5431:pt&Mat=&Sys=&Nr=&Fun=&CdLinPrg=pt.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ansari, Salim. "Head versus tail: germ cell-less initiates axis formation via homeobrain and zen1 in a beetle." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-002E-E48E-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Ya-Lan, and 張雅嵐. "The Study on the Customer Segmentations by Investments of Short or Long term and Expected Return: A Case Study of T Bank." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/fbcdk8.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Master's Program in Applied Statistics, Department of Statistics and Information Science
103
With the deregulation of financial laws and regulations, the products of domestic banks have expanded from time deposits toward a diversified range. However, banks' selling strategies still focus on single products rather than on customers' needs. To maintain long-term customer relationships, providing the most appropriate services to customers improves loyalty and greatly helps in selling products to them in the future. In this study, data were collected from Bank T, and binary logistic regression models were constructed for "customer investment preference by short or long term" and "customer investment by expected return". First, multicollinearity diagnostics were performed and multicollinearity in the data set was reduced by combining independent variables. Stratified random sampling was then used to select 80% of the data as the training set. The training data were oversampled to a 1:2 proportion, and resampling was repeated 30 times to construct 30 binary logistic models. A variable was retained as a final important variable if it was selected in at least 15 of the 30 models and its Cramér's V was at least 0.1; the final important variables were then used to build the best model. The forecasts for "customer investment preference by short or long term" and "customer investment by expected return" were used in a cluster analysis to segment the customers and to understand which financial products each group purchases, so that the bank can sell related products according to customers' investment preferences. We found that customers of the "balanced financial type" mainly purchase investment-oriented insurance products, bond funds, or equity funds; customers of the "conservative investment type" mainly purchase traditional insurance or NTD time deposits; and customers of the "professional investment type" mainly purchase investment-oriented insurance products, equity funds, or exchange-traded funds.
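The repeated-resampling and voting scheme described above can be sketched as follows. Because the abstract does not spell out the per-model selection rule, a p-value threshold of 0.05 stands in for it, the Cramér's V screening is omitted, and the customer table, column names, and class balance are illustrative assumptions rather than Bank T data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def vote_variables(X: pd.DataFrame, y: pd.Series, n_rounds: int = 30, min_votes: int = 15):
    """Repeated oversampling plus logistic fits; keep variables judged significant
    (p < 0.05 here, as a stand-in criterion) in at least min_votes of the rounds."""
    X_tr, _, y_tr, _ = train_test_split(X, y, train_size=0.8, stratify=y, random_state=0)
    votes = pd.Series(0, index=X.columns)
    for seed in range(n_rounds):
        # Oversample the positive class so that positives:negatives is roughly 1:2.
        pos, neg = X_tr[y_tr == 1], X_tr[y_tr == 0]
        pos_up = resample(pos, replace=True, n_samples=len(neg) // 2, random_state=seed)
        Xb = pd.concat([pos_up, neg])
        yb = pd.Series([1] * len(pos_up) + [0] * len(neg), index=Xb.index)
        fit = sm.Logit(yb, sm.add_constant(Xb)).fit(disp=0)
        votes += (fit.pvalues.drop("const") < 0.05).astype(int)
    return votes[votes >= min_votes].index.tolist()

# Hypothetical customer table: two informative behavioural predictors plus noise.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(600, 4)), columns=["tenure", "balance", "n_products", "noise"])
y = pd.Series((X["tenure"] + 0.8 * X["balance"] + rng.normal(scale=0.8, size=600) > 1.0).astype(int))
print("selected variables:", vote_variables(X, y))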
APA, Harvard, Vancouver, ISO, and other styles
41

Tripathy, Srimant P., and H. Ögmen. "Sensory memory is allocated exclusively to the current event-segment." 2018. http://hdl.handle.net/10454/16722.

Full text
Abstract:
The Atkinson-Shiffrin modal model forms the foundation of our understanding of human memory. It consists of three stores (Sensory Memory (SM), also called iconic memory, Short-Term Memory (STM), and Long-Term Memory (LTM)), each tuned to a different time-scale. Since its inception, the STM and LTM components of the modal model have undergone significant modifications, while SM has remained largely unchanged, representing a large capacity system funneling information into STM. In the laboratory, visual memory is usually tested by presenting a brief static stimulus and, after a delay, asking observers to report some aspect of the stimulus. However, under ecological viewing conditions, our visual system receives a continuous stream of inputs, which is segmented into distinct spatio-temporal segments, called events. Events are further segmented into event-segments. Here we show that SM is not an unspecific general funnel to STM but is allocated exclusively to the current event-segment. We used a Multiple-Object Tracking (MOT) paradigm in which observers were presented with disks moving in different directions, along bi-linear trajectories, i.e., linear trajectories, with a single deviation in direction at the mid-point of each trajectory. The synchronized deviation of all of the trajectories produced an event stimulus consisting of two event-segments. Observers reported the pre-deviation or the post-deviation directions of the trajectories. By analyzing observers' responses in partial- and full-report conditions, we investigated the involvement of SM for the two event-segments. The hallmarks of SM hold only for the current event segment. As the large capacity SM stores only items involved in the current event-segment, the need for event-tagging in SM is eliminated, speeding up processing in active vision. By characterizing how memory systems are interfaced with ecological events, this new model extends the Atkinson-Shiffrin model by specifying how events are stored in the first stage of multi-store memory systems.
APA, Harvard, Vancouver, ISO, and other styles
42

Gilbert, Annie. "Le chunking perceptif de la parole : sur la nature du groupement temporel et son effet sur la mémoire immédiate." Thèse, 2012. http://hdl.handle.net/1866/8941.

Full text
Abstract:
In numerous behaviours involving the learning and production of sequences, temporal groups emerge spontaneously, created by delays or by a lengthening of elements. This chunking has been observed in the behaviour of both humans and animals and is taken to reflect a general process of perceptual chunking that conforms to the capacity limits of short-term memory. Yet no research has determined how perceptual chunking applies to speech. We provide a literature review that brings out critical problems which have hampered research on this question. Consideration of these problems motivates a principled demonstration of how perceptual chunking applies to speech and of the effect of this process on immediate memory (or "working memory"). These two themes are presented in separate papers in the format of journal articles. Paper 1: The perceptual chunking of speech: a demonstration using ERPs. To observe perceptual chunking online, we use event-related potentials (ERPs) and refer to the neural component of the Closure Positive Shift (CPS), which is known to capture listeners' responses to marks of prosodic groups. The speech stimuli were utterances and sequences of nonsense syllables, which contained intonation phrases marked by pitch, as well as phrase-internal and phrase-final temporal groups marked by lengthening. Analyses of CPSs show that, across conditions, listeners specifically perceive speech in terms of chunks marked by lengthening. These lengthening marks, which appear universally in languages, create the same type of chunking as that which emerges in sequence learning by humans and animals. This finding supports the view that listeners chunk speech into temporal groups and that this perceptual chunking operates similarly for speech and non-verbal behaviours. Moreover, the results question reports that relate the CPS to intonation phrasing without considering the effects of temporal marks. Paper 2: Perceptual chunking and its effect on memory in speech processing: ERP and behavioral evidence. We examined how the perceptual chunking of utterances into temporal groups of differing size influences immediate memory for heard speech. To weigh these effects, we used behavioural measures and ERPs, especially the N400 component, which served to evaluate the quality of the memory trace for target lexemes heard in the temporal groups. Variations in the amplitude of the N400 showed a better memory trace for lexemes presented in groups of 3 syllables than for those in groups of 4 syllables. Response times along with P300 components revealed effects of the position of the chunk in the utterance. This is the first study to demonstrate the perceptual chunking of speech online and its effects on immediate memory for heard elements. Taken together, the results suggest that a general perceptual chunking process enhances the buffering of sequential information and a processing of speech on a chunk-by-chunk basis.
APA, Harvard, Vancouver, ISO, and other styles
43

Tavares, Francisco Miguel Moreira Miranda Melo. "Identificação de Conteúdos de Vídeo baseada na Análise de Textura e Movimento." Master's thesis, 2017. http://hdl.handle.net/10316/82928.

Full text
Abstract:
Master's dissertation (Integrated Master's in Electrical and Computer Engineering) presented to the Faculty of Sciences and Technology
Nowadays, video content is indispensable in our lives: it is a powerful marketing tool and a source of quick information, entertainment, and culture. In the television domain, major developments have taken place in the digitisation of content through the set-top box (STB), and new functionalities such as video recording (DVR) and other video applications have been added. This thesis proposes to identify video content on television, more precisely to determine whether the content is advertising or a TV programme, using computer vision techniques, image processing, and MPEG-7 descriptors. For advertising detection, the video signatures (channel idents) of each TV channel were identified, filtered by class, and stored in a database. The data are stored not as images but as bins or descriptors for each method considered, which allows efficient indexing and matching. Knowledge about the programme is also required: salient features such as the presence of a logo, with different colours, textures, or shapes, identify the channel and, at times, the type of content. This work proposes to automatically detect and recognise logos and other objects that persist throughout the broadcast, presenting several methods for each type of logo. The system stands out for the versatility of the algorithm in adjusting its input parameters according to the characteristics of the environment and the intended purpose, producing better results for the end user. In all test cases, the results obtained show that the proposed methods correctly identify the video content present on television; the main features and computational performance of each method are presented. All proposed algorithms were developed and tested in MATLAB, and the frames were extracted with FFMPEG.
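The signature-matching idea described above, reducing frames to descriptor bins stored in a database and matching incoming frames against them, can be sketched as follows. Plain per-channel histograms stand in for the MPEG-7 descriptors used in the thesis; the synthetic ident frames and the distance threshold are illustrative, and frame extraction with FFMPEG is not reproduced.

import numpy as np

def frame_signature(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Per-channel intensity histogram, L1-normalised, as a compact frame descriptor."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    sig = np.concatenate(hists).astype(float)
    return sig / sig.sum()

def match(signature: np.ndarray, database: dict, threshold: float = 0.25):
    """Nearest stored signature by L1 distance; None when nothing is close enough."""
    best_label, best_dist = None, np.inf
    for label, stored in database.items():
        dist = np.abs(signature - stored).sum()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < threshold else None

# Hypothetical database of channel-ident signatures built from synthetic frames.
rng = np.random.default_rng(0)
idents = {"channel_A_ident": rng.integers(0, 128, (72, 128, 3)),    # darker ident
          "channel_B_ident": rng.integers(128, 256, (72, 128, 3))}  # brighter ident
db = {name: frame_signature(frame) for name, frame in idents.items()}

query = idents["channel_B_ident"]                 # an incoming frame to classify
print(match(frame_signature(query), db))          # -> channel_B_ident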
APA, Harvard, Vancouver, ISO, and other styles
44

Baxová, Tereza. "Analýza sluchové percepce dětí předškolního věku." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-355938.

Full text
Abstract:
This diploma thesis has a special-education theme and deals with auditory perception in preschool children. The goal of the work is to evaluate the level of auditory perception of children in an ordinary preschool class. We focus on listening, auditory differentiation, short-term auditory memory, auditory analysis and synthesis, and the perception and reproduction of rhythm. In order to answer the research questions, we created a test designed in accordance with auditory perception development tables. The results show that, on average, the children scored 82.8 percent on the test. The most difficult part of the test was listening, with an average score of 61.0 percent; the most successful part was the perception and reproduction of rhythm, with an average score of 89.2 percent. KEYWORDS Preschool Child, Communication, Auditory Perception, Listening, Word Discrimination, Short-term Auditory Memory, Phonological Segmentation and Blending, Perception and Reproduction of Rhythm
APA, Harvard, Vancouver, ISO, and other styles
45

Soares, Antoine Pedro Mendes. "Critérios de segmentação na estratégia digital de aquisição de clientes no mercado de alojamento local." Master's thesis, 2018. http://hdl.handle.net/10400.14/29395.

Full text
Abstract:
This report segments the potential clients of Hostogether, a start-up developing a platform that builds a community of owners of second homes for short-term rental. It also defines the value proposition for each of the company's target segments, making it possible to increase its capacity for digital client acquisition. The method used was action research, drawing on a literature review focused on the new sharing economy, digital platforms, and the B2B market, together with the analysis of data provided by Hostogether. This internship report is intended to contribute to the success of Hostogether in the short-term rental market.
APA, Harvard, Vancouver, ISO, and other styles
