Academic literature on the topic 'Shot segmentation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Shot segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Shot segmentation"

1

Yan, Zhenggang, Yue Yu, and Mohammad Shabaz. "Optimization Research on Deep Learning and Temporal Segmentation Algorithm of Video Shot in Basketball Games." Computational Intelligence and Neuroscience 2021 (September 6, 2021): 1–10. http://dx.doi.org/10.1155/2021/4674140.

Full text
Abstract:
The analysis of the video shot in basketball games and the edge detection of the video shot are the most active and rapid development topics in the field of multimedia research in the world. Video shots’ temporal segmentation is based on video image frame extraction. It is the precondition for video application. Studying the temporal segmentation of basketball game video shots has great practical significance and application prospects. In view of the fact that the current algorithm has long segmentation time for the video shot of basketball games, the deep learning model and temporal segmentation algorithm based on the histogram for the video shot of the basketball game are proposed. The video data is converted from the RGB space to the HSV space by the boundary detection of the video shot of the basketball game using deep learning and processing of the image frames, in which the histogram statistics are used to reduce the dimension of the video image, and the three-color components in the video are combined into a one-dimensional feature vector to obtain the quantization level of the video. The one-dimensional vector is used as the variable to perform histogram statistics and analysis on the video shot and to calculate the continuous frame difference, the accumulated frame difference, the window frame difference, the adaptive window’s mean, and the superaverage ratio of the basketball game video. The calculation results are combined with the set dynamic threshold to optimize the temporal segmentation of the video shot in the basketball game. It can be seen from the comparison results that the effectiveness of the proposed algorithm is verified by the test of the missed detection rate of the video shots. According to the test result of the split time, the optimization algorithm for temporal segmentation of the video shot in the basketball game is efficiently implemented.
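The pipeline this abstract outlines (HSV conversion, histogram quantization into a one-dimensional feature, frame differencing, and a dynamic threshold) follows a pattern that is easy to sketch. The snippet below is a minimal illustration of that general histogram-difference idea, not the authors' algorithm; it assumes OpenCV and NumPy, and the 8x8x8 quantization and the mean-plus-k·std threshold rule are placeholder choices.

```python
import cv2
import numpy as np

def hsv_histogram(frame, bins=(8, 8, 8)):
    """Quantize a BGR frame into a 1-D HSV colour histogram."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def detect_cuts(video_path, k=3.0):
    """Flag frames whose histogram difference exceeds a simple dynamic threshold."""
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = hsv_histogram(frame)
        if prev is not None:
            diffs.append(np.abs(h - prev).sum())   # continuous frame difference
        prev = h
    cap.release()
    diffs = np.array(diffs)
    threshold = diffs.mean() + k * diffs.std()     # stand-in for the dynamic threshold
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Example: print candidate shot boundaries for a (hypothetical) basketball clip
# print(detect_cuts("basketball_game.mp4"))
```

In practice the windowed frame differences and adaptive thresholding the abstract mentions would replace the single global threshold used here.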
APA, Harvard, Vancouver, ISO, and other styles
2

Bak, Hui-Yong, and Seung-Bo Park. "Comparative Study of Movie Shot Classification Based on Semantic Segmentation." Applied Sciences 10, no. 10 (May 14, 2020): 3390. http://dx.doi.org/10.3390/app10103390.

Full text
Abstract:
The shot-type decision is a very important pre-task in movie analysis due to the vast information, such as the emotion, psychology of the characters, and space information, from the shot type chosen. In order to analyze a variety of movies, a technique that automatically classifies shot types is required. Previous shot type classification studies have classified shot types by the proportion of the face on-screen or using a convolutional neural network (CNN). Studies that have classified shot types by the proportion of the face on-screen have not classified the shot if a person is not on the screen. A CNN classifies shot types even in the absence of a person on the screen, but there are certain shots that cannot be classified because instead of semantically analyzing the image, the method classifies them only by the characteristics and patterns of the image. Therefore, additional information is needed to access the image semantically, which can be done through semantic segmentation. Consequently, in the present study, the performance of shot type classification was improved by preprocessing the semantic segmentation of the frame extracted from the movie. Semantic segmentation approaches the images semantically and distinguishes the boundary relationships among objects. The representative technologies of semantic segmentation include Mask R-CNN and Yolact. A study was conducted to compare and evaluate performance using these as pretreatments for shot type classification. As a result, the average accuracy of shot type classification using a frame preprocessed with semantic segmentation increased by 1.9%, from 93% to 94.9%, when compared with shot type classification using the frame without such preprocessing. In particular, when using ResNet-50 and Yolact, the classification of shot type showed a 3% performance improvement (to 96% accuracy from 93%).
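A common way to realize the setup this study compares, classifying shot types from frames preprocessed with semantic segmentation, is to feed the segmentation mask to the classifier as an extra input channel. The sketch below assumes PyTorch/torchvision, a mask already produced by some segmentation model (Mask R-CNN, YOLACT, or similar), and an illustrative set of five shot-type classes; it is not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class ShotTypeClassifier(nn.Module):
    """ResNet-50 classifier over an RGB frame plus a 1-channel segmentation mask."""
    def __init__(self, num_shot_types=5):
        super().__init__()
        backbone = models.resnet50(weights=None)   # use pretrained=False on older torchvision
        # Widen the first convolution from 3 to 4 input channels (RGB + mask).
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_shot_types)
        self.backbone = backbone

    def forward(self, rgb, mask):
        # rgb: (B, 3, H, W); mask: (B, 1, H, W) from any segmentation model
        return self.backbone(torch.cat([rgb, mask], dim=1))

model = ShotTypeClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.rand(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```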
APA, Harvard, Vancouver, ISO, and other styles
3

Tapu, Ruxandra, and Titus Zaharia. "Video Segmentation and Structuring for Indexing Applications." International Journal of Multimedia Data Engineering and Management 2, no. 4 (October 2011): 38–58. http://dx.doi.org/10.4018/jmdem.2011100103.

Full text
Abstract:
This paper introduces a complete framework for temporal video segmentation. First, a computationally efficient shot extraction method is introduced, which adopts the normalized graph partition approach, enriched with a non-linear, multiresolution filtering of the similarity vectors involved. The shot boundary detection technique proposed yields high precision (90%) and recall (95%) rates, for all types of transitions, both abrupt and gradual. Next, for each detected shot, the authors construct a static storyboard by introducing a leap keyframe extraction method. The video abstraction algorithm is 23% faster than existing techniques for similar performances. Finally, the authors propose a shot grouping strategy that iteratively clusters visually similar shots under a set of temporal constraints. Two different types of visual features are exploited: HSV color histograms and interest points. In both cases, the precision and recall rates present average performances of 86%.
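The last stage described here, iteratively clustering visually similar shots under temporal constraints, can be approximated with a single pass that links each shot to a visually similar shot seen within a short temporal window. This is a loose sketch of the general idea rather than the authors' grouping strategy; the histogram-intersection similarity, the 0.6 threshold, and the five-shot window are assumed values.

```python
import numpy as np

def group_shots(shot_histograms, sim_threshold=0.6, temporal_window=5):
    """Assign each shot a scene label by linking it to a similar, temporally close shot.
    shot_histograms: list of L1-normalised colour histograms, one per shot."""
    scene_of = []
    next_scene = 0
    for i, hist in enumerate(shot_histograms):
        assigned = None
        for j in range(max(0, i - temporal_window), i):
            sim = np.minimum(hist, shot_histograms[j]).sum()  # histogram intersection
            if sim >= sim_threshold:
                assigned = scene_of[j]
                break
        if assigned is None:
            assigned = next_scene                             # start a new scene
            next_scene += 1
        scene_of.append(assigned)
    return scene_of
```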
APA, Harvard, Vancouver, ISO, and other styles
4

Boccignone, G., A. Chianese, V. Moscato, and A. Picariello. "Foveated shot detection for video segmentation." IEEE Transactions on Circuits and Systems for Video Technology 15, no. 3 (March 2005): 365–77. http://dx.doi.org/10.1109/tcsvt.2004.842603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tian, Pinzhuo, Zhangkai Wu, Lei Qi, Lei Wang, Yinghuan Shi, and Yang Gao. "Differentiable Meta-Learning Model for Few-Shot Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12087–94. http://dx.doi.org/10.1609/aaai.v34i07.6887.

Full text
Abstract:
To address the annotation scarcity issue in some cases of semantic segmentation, there have been a few attempts to develop the segmentation model in the few-shot learning paradigm. However, most existing methods only focus on the traditional 1-way segmentation setting (i.e., one image only contains a single object). This is far away from practical semantic segmentation tasks where the K-way setting (K > 1) is usually required by performing the accurate multi-object segmentation. To deal with this issue, we formulate the few-shot semantic segmentation task as a learning-based pixel classification problem, and propose a novel framework called MetaSegNet based on meta-learning. In MetaSegNet, an architecture of embedding module consisting of the global and local feature branches is developed to extract the appropriate meta-knowledge for the few-shot segmentation. Moreover, we incorporate a linear model into MetaSegNet as a base learner to directly predict the label of each pixel for the multi-object segmentation. Furthermore, our MetaSegNet can be trained by the episodic training mechanism in an end-to-end manner from scratch. Experiments on two popular semantic segmentation datasets, i.e., PASCAL VOC and COCO, reveal the effectiveness of the proposed MetaSegNet in the K-way few-shot semantic segmentation task.
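MetaSegNet itself combines an embedding network with a linear base learner trained episodically; as a rough point of reference, the few-shot segmentation literature it belongs to often starts from a simpler prototype baseline, sketched below: class prototypes are obtained by masked average pooling over support features, and query pixels are labelled by cosine similarity. The tensor shapes and the K-way handling are illustrative assumptions, and this is not the paper's model.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, masks):
    """Per-class prototypes from K support images.
    features: (K, C, H, W) support features; masks: (K, N_classes, H0, W0) binary masks."""
    masks = F.interpolate(masks.float(), size=features.shape[-2:], mode="nearest")
    num = torch.einsum("kchw,knhw->nc", features, masks)      # feature sum inside each mask
    den = masks.sum(dim=(0, 2, 3)).clamp(min=1e-6).unsqueeze(1)
    return num / den                                          # (N_classes, C) prototypes

def predict_query(query_features, prototypes):
    """Label each query pixel by cosine similarity to the class prototypes."""
    q = F.normalize(query_features, dim=1)                    # (B, C, H, W)
    p = F.normalize(prototypes, dim=1)                        # (N_classes, C)
    scores = torch.einsum("bchw,nc->bnhw", q, p)              # per-class similarity maps
    return scores.argmax(dim=1)                               # (B, H, W) label map
```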
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Guanyi, and He Zhao. "One-Shot Image Segmentation with U-Net." Journal of Physics: Conference Series 1848, no. 1 (April 1, 2021): 012113. http://dx.doi.org/10.1088/1742-6596/1848/1/012113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nascimento, Jacinto C., and Gustavo Carneiro. "One Shot Segmentation: Unifying Rigid Detection and Non-Rigid Segmentation Using Elastic Regularization." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 12 (December 1, 2020): 3054–70. http://dx.doi.org/10.1109/tpami.2019.2922959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Xiaoyu, Xiaotian Lou, Lianfa Bai, and Jing Han. "Residual Pyramid Learning for Single-Shot Semantic Segmentation." IEEE Transactions on Intelligent Transportation Systems 21, no. 7 (July 2020): 2990–3000. http://dx.doi.org/10.1109/tits.2019.2922252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liao, ShengBo, Jingmeng Sun, and Haitao Yang. "Research on Long Shot Segmentation in Basketball Video." International Journal of Multimedia and Ubiquitous Engineering 10, no. 12 (December 31, 2015): 183–94. http://dx.doi.org/10.14257/ijmue.2015.10.12.19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Yang, Jingmeng Sun, Yifei Liu, and Yueqiu Han. "Research on Close Shot Segmentation in Sports Video." International Journal of Multimedia and Ubiquitous Engineering 11, no. 1 (January 31, 2016): 255–66. http://dx.doi.org/10.14257/ijmue.2016.11.1.25.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Shot segmentation"

1

Kayaalp, Isil Burcun. "Video Segmentation Using Partially Decoded MPEG Bitstream." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1092758/index.pdf.

Full text
Abstract:
In this thesis, a mixed type video segmentation algorithm is implemented to find the scene cuts in MPEG compressed video data. The main aim is to have a computationally efficient algorithm for real time applications. Due to this reason partial decoding of the bitstream is used in segmentation. As a result of partial decoding, features such as bitrate, motion vector type, and DC images are implemented to find both continuous and discontinuous scene cuts on a MPEG-2 coded general TV broadcast data. The results are also compared with techniques found in literature.
APA, Harvard, Vancouver, ISO, and other styles
2

Naha, Shujon. "Zero-shot Learning for Visual Recognition Problems." IEEE, 2015. http://hdl.handle.net/1993/31806.

Full text
Abstract:
In this thesis we discuss different aspects of zero-shot learning and propose solutions for three challenging visual recognition problems: 1) unknown object recognition from images 2) novel action recognition from videos and 3) unseen object segmentation. In all of these three problems, we have two different sets of classes, the “known classes”, which are used in the training phase and the “unknown classes” for which there is no training instance. Our proposed approach exploits the available semantic relationships between known and unknown object classes and use them to transfer the appearance models from known object classes to unknown object classes to recognize unknown objects. We also propose an approach to recognize novel actions from videos by learning a joint model that links videos and text. Finally, we present a ranking based approach for zero-shot object segmentation. We represent each unknown object class as a semantic ranking of all the known classes and use this semantic relationship to extend the segmentation model of known classes to segment unknown class objects.
APA, Harvard, Vancouver, ISO, and other styles
3

Luo, Sai. "Semantic Movie Scene Segmentation Using Bag-of-Words Representation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500375283397255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Volkmer, Timo. "Semantics of Video Shots for Content-based Retrieval." RMIT University, Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090220.122213.

Full text
Abstract:
Content-based video retrieval research combines expertise from many different areas, such as signal processing, machine learning, pattern recognition, and computer vision. As video extends into both the spatial and the temporal domain, we require techniques for the temporal decomposition of footage so that specific content can be accessed. This content may then be semantically classified - ideally in an automated process - to enable filtering, browsing, and searching. An important aspect that must be considered is that pictorial representation of information may be interpreted differently by individual users because it is less specific than its textual representation. In this thesis, we address several fundamental issues of content-based video retrieval for effective handling of digital footage. Temporal segmentation, the common first step in handling digital video, is the decomposition of video streams into smaller, semantically coherent entities. This is usually performed by detecting the transitions that separate single camera takes. While abrupt transitions - cuts - can be detected relatively well with existing techniques, effective detection of gradual transitions remains difficult. We present our approach to temporal video segmentation, proposing a novel algorithm that evaluates sets of frames using a relatively simple histogram feature. Our technique has been shown to range among the best existing shot segmentation algorithms in large-scale evaluations. The next step is semantic classification of each video segment to generate an index for content-based retrieval in video databases. Machine learning techniques can be applied effectively to classify video content. However, these techniques require manually classified examples for training before automatic classification of unseen content can be carried out. Manually classifying training examples is not trivial because of the implied ambiguity of visual content. We propose an unsupervised learning approach based on latent class modelling in which we obtain multiple judgements per video shot and model the users' response behaviour over a large collection of shots. This technique yields a more generic classification of the visual content. Moreover, it enables the quality assessment of the classification, and maximises the number of training examples by resolving disagreement. We apply this approach to data from a large-scale, collaborative annotation effort and present ways to improve the effectiveness for manual annotation of visual content by better design and specification of the process. Automatic speech recognition techniques along with semantic classification of video content can be used to implement video search using textual queries. This requires the application of text search techniques to video and the combination of different information sources. We explore several text-based query expansion techniques for speech-based video retrieval, and propose a fusion method to improve overall effectiveness. To combine both text and visual search approaches, we explore a fusion technique that combines spoken information and visual information using semantic keywords automatically assigned to the footage based on the visual content. The techniques that we propose help to facilitate effective content-based video retrieval and highlight the importance of considering different user interpretations of visual content. This allows better understanding of video content and a more holistic approach to multimedia retrieval in the future.
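The temporal segmentation idea described here, evaluating sets of frames with a simple histogram feature so that gradual transitions are caught as well as cuts, can be sketched as a windowed comparison of the frames before and after each position. The snippet below illustrates that general principle rather than the thesis algorithm; the half-window size and threshold are assumed values, and the per-frame histograms are presumed to be L1-normalized.

```python
import numpy as np

def windowed_dissimilarity(frame_histograms, half_window=6):
    """For each position, compare the mean histogram of the preceding half-window
    with that of the following half-window. Sharp spikes suggest cuts; sustained
    high values suggest gradual transitions."""
    H = np.asarray(frame_histograms)                  # (num_frames, bins)
    scores = np.zeros(len(H))
    for t in range(half_window, len(H) - half_window):
        before = H[t - half_window:t].mean(axis=0)
        after = H[t:t + half_window].mean(axis=0)
        scores[t] = 0.5 * np.abs(before - after).sum()   # L1 distance in [0, 1]
    return scores

def candidate_transitions(scores, threshold=0.3):
    return np.flatnonzero(scores > threshold)
```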
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.

Full text
Abstract:
Recent research approaches in semantics based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and FSM (finite state machine). Thirdly, the shot detection is implemented using local and global indicators. Fourthly, a context awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies. It is robust to complicated distortions and capable of locating the copy of segments inside original videos. Then, iv objects and events are extracted from MPEG Sequences for Video Highlights Indexing and Retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
APA, Harvard, Vancouver, ISO, and other styles
6

Ren, Jinchang. "Semantic content analysis for effective video segmentation, summarisation and retrieval." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4251.

Full text
Abstract:
This thesis focuses on four main research themes namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proved to be insensitive to zero-mean-noise, and its gradient-based extension is even robust to non-zero-mean noise and can be used to deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity level measurement, shot and sub-shot are detected for content-adaptive video summarisation. Fourthly, highlights based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. High-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is the achieved high efficiency, which will be useful for many online applications.
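The frame-alignment component builds on phase correlation; the snippet below sketches the classical (non-subspace) phase correlation that SPC extends, estimating an integer-pixel translation between two grayscale frames with NumPy. It is a baseline illustration, not the proposed SPC or its sub-pixel refinement.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Classical phase correlation between two equally sized grayscale frames.
    Returns an integer (dy, dx); the sign convention depends on argument order."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = frame_a.shape
    if dy > h // 2:                                   # wrap large peaks to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```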
APA, Harvard, Vancouver, ISO, and other styles
7

Barbieri, Tamires Tessarolli de Souza. "Representação de tomadas como suporte à segmentação em cenas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13032015-101933/.

Full text
Abstract:
The Content Personalization area has been the focus of recent research in Computer Science, and the automatic segmentation of digital videos into scenes is an important line of work supporting the composition of personalization services such as content recommendation or summarization. One of the main approaches to scene segmentation is based on the clustering of related shots, so for this process to succeed the shots must be well represented. However, this topic has been left in the background by research on segmentation. This work therefore aims to develop a method based on the visual features of frames that improves the representation of shots in digital videos and, consequently, the performance of scene segmentation techniques.
APA, Harvard, Vancouver, ISO, and other styles
8

Cámara, Chávez Guillermo. "Analyse du contenu vidéo par apprentissage actif." Cergy-Pontoise, 2007. http://www.theses.fr/2007CERG0380.

Full text
Abstract:
This thesis presents work towards a unified framework for semi-automated video indexing and interactive retrieval. To create an efficient index, a set of representative keyframes is selected from the entire video content. We developed an automatic shot boundary detection algorithm that requires no parameters or thresholds. We adopted an SVM classifier for its ability to handle very high-dimensional feature spaces while keeping strong generalization guarantees from few training examples. We evaluated several combinations of features and kernel functions and present interesting results for the TRECVID 2006 shot boundary detection task. We then propose an interactive content-based video retrieval system, RETINVID, which significantly reduces the number of keyframes the user must annotate; these keyframes are selected for their ability to increase knowledge of the data. We performed extensive experiments on the TRECVID 2005 high-level feature task data.
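The shot boundary detector described here replaces hand-tuned thresholds with a learned SVM decision over frame-dissimilarity features. A minimal sketch of that idea with scikit-learn is shown below; the three dissimilarity features, the RBF kernel, and the hyperparameters are placeholders rather than the feature and kernel combinations evaluated in the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row describes one frame-to-frame transition with several dissimilarity
# measures (e.g., colour-histogram distance, pixel difference, edge change ratio);
# y marks whether an annotated shot boundary lies there.
X_train = np.random.rand(200, 3)           # placeholder features
y_train = np.random.randint(0, 2, 200)     # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

X_new = np.random.rand(5, 3)
print(clf.predict(X_new))                  # 1 = predicted shot boundary
```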
APA, Harvard, Vancouver, ISO, and other styles
9

Leibe, Bastian. "Interleaved Object Categorization and Segmentation." Konstanz: Hartung-Gorre Verlag, 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thompson, Andrew. "Hierarchical Segmentation of Videos into Shots and Scenes using Visual Content." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28827.

Full text
Abstract:
With the large amounts of video data available, it has become increasingly important to have the ability to quickly search through and browse through these videos. With that in mind, the objective of this project is to facilitate the process of searching through videos for specific content by creating a video search tool, with an immediate goal of automatically performing a hierarchical segmentation of videos, particularly full-length movies, before carrying out a search for a specific query. We approach the problem by first segmenting the video into its film units. Once the units have been extracted, various similarity measures between features, that are extracted from the film units, can be used to locate specific sections in the movie. In order to be able to properly search through a film, we must first have access to its basic units. A movie can be broken down into a hierarchy of three units: frames, shots, and scenes. The important first step in this process is to partition the film into shots. Shot detection, the process of locating the transitions between different cameras, is executed by performing a color reduction, using the 4-Histograms method to calculate the distance between neighboring frames, applying a second order derivative to the resulting distance vector, and finally using the automatically calculated threshold to locate shot cuts. Scene detection is generally a more difficult task when compared to shot detection. After the shot boundaries of a video have been detected, the next step towards scene detection is to calculate a certain similarity measure which can then be used to cluster shots into scenes. Various keyframe extraction algorithms and similarity measures from the literature were considered and compared. Frame sampling for obtaining keyframe sets and Bhattacharya distance for similarity measure were selected for use in the shot detection algorithm. A binary shot similarity map is then created using the keyframe sets and Bhattacharya distance similarity measure. Next, a temporal distance weight and a predetermined threshold are applied to the map to obtain the final binary similarity map. The last step uses the proposed algorithm to locate the shot clusters along the diagonal which correspond to scenes. These methods and measures were successfully implemented in the Video Search Tool to hierarchically segment videos into shots and scenes.
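Two of the building blocks described here, thresholding the second-order derivative of the inter-frame distance signal for cut detection and using the Bhattacharyya distance to compare keyframe histograms, can be sketched compactly. The snippet below is illustrative only: the 4-Histograms block split is omitted, and a mean-plus-k·std rule stands in for the automatically calculated threshold the thesis uses.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two L1-normalised histograms."""
    bc = np.sum(np.sqrt(p * q))                     # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))

def cut_candidates(frame_distances, k=3.0):
    """Shot-cut candidates: peaks of the second-order derivative of the
    inter-frame distance signal above a mean + k*std threshold."""
    d2 = np.abs(np.diff(frame_distances, n=2))
    threshold = d2.mean() + k * d2.std()
    return np.flatnonzero(d2 > threshold) + 1       # offset from double differencing
```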
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Shot segmentation"

1

Bouassida, Ines, and Abdel-Rahmen El Lahga. Public–Private Wage Disparities, Employment, and Labor Market Segmentation in Tunisia. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198799863.003.0004.

Full text
Abstract:
The dysfunction of the Tunisian labor market is exacerbated particularly by the segmentation between public and private sector employment. These different segments differ in terms of returns to human capital, social protection and mobility, affecting career development and the wage structure in the economy. In this chapter, we present the patterns of wage distribution in Tunisia across important socioeconomic groups and a detailed analysis of the wage gap between public and private sectors. Our results show particularly that while in the bottom sector of the wage distribution the positive wage gap between public and private sectors is mainly attributable to the composition or characteristics of workers, the wage gap in the upper sector of the distribution is due to returns to characteristics effect. The public-sector wage premium explains the strong preference in public positions.
APA, Harvard, Vancouver, ISO, and other styles
2

Burton, Derek, and Margaret Burton. The skeleton, support and movement. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198785552.003.0003.

Full text
Abstract:
Buoyancy largely supports fish, reducing the role of the skeleton, which functions as an attachment for muscle involved in movement and in protection, as exoskeleton (scales, scutes, bony plates) and as endoskeleton (vertebral column, skull). The general organization of fish skeletons and their component parts are described, as well as bone and cartilage. The interesting occurrence of acellular bone, additional to cellular bone, in teleosts is considered. Fish show metameric segmentation with myotomes on either side of the vertebral column, the latter acting as a compression strut, preventing shortening. Myotome muscle is organized into linear units named sarcomeres which contract by means of protein fibres, myosin and actin, sliding past each other. Usually fish body wall muscles occur as a thin outer layer of aerobic red muscle, with an inner thick region of anaerobic white muscle. Interspecific variability in the relative roles of myotomes and fin musculature in swimming is discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Hoff, Timothy J. Retail Thinking Comes to Health Care. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190626341.003.0003.

Full text
Abstract:
Retail thinking and tactics are beginning to find their way into health care delivery, further impacting the ability to have strong, dyadic doctor-patient relationships. External forces described in Chapter 2 and poor patient experiences provide fertile soil for their growth. The retail rhetoric consists of heavy emphasis on “value,” “transparency,” “branding,” and “consumer activation.” The implementation of retail tactics into health care shifts the emphasis from relational to transactional forms of exchange, the latter emphasizing short-duration exchanges between buyer and seller, standardized obligations, and economic satisfaction. Retail approaches give large health care organizations greater power given their scale and resources to engage in key retail tactics such as data analytics, market segmentation, marketing, and price competition. There are tangible reasons for bringing some aspects of retail thinking into health care. Their application, however, brings risks for patients and their care, and threatens to undermine doctor-patient relationships further.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Shot segmentation"

1

Li, Zuoxin, Fuqiang Zhou, and Lu Yang. "Fast Single Shot Instance Segmentation." In Computer Vision – ACCV 2018, 257–72. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dawoud, Youssef, Julia Hornauer, Gustavo Carneiro, and Vasileios Belagiannis. "Few-Shot Microscopy Image Cell Segmentation." In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track, 139–54. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67670-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shao, Chenzhi, Haifeng Li, and Lin Ma. "Visual Cognitive Mechanism Guided Video Shot Segmentation." In Cognitive Computing – ICCC 2019, 186–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23407-2_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Wei-Kuang, and Shang-Hong Lai. "A Motion-Aided Video Shot Segmentation Algorithm." In Advances in Multimedia Information Processing — PCM 2002, 336–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Boyu, Chang Liu, Bohao Li, Jianbin Jiao, and Qixiang Ye. "Prototype Mixture Models for Few-Shot Semantic Segmentation." In Computer Vision – ECCV 2020, 763–78. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58598-3_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Haochen, Xudong Zhang, Yutao Hu, Yandan Yang, Xianbin Cao, and Xiantong Zhen. "Few-Shot Semantic Segmentation with Democratic Attention Networks." In Computer Vision – ECCV 2020, 730–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58601-0_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Perry, Jonathan, and Amanda S. Fernandez. "EyeSeg: Fast and Efficient Few-Shot Semantic Segmentation." In Computer Vision – ECCV 2020 Workshops, 570–82. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lyu, Shuchang, Guangliang Cheng, and Qimin Ding. "Deep Similarity Fusion Networks for One-Shot Semantic Segmentation." In Lecture Notes in Computer Science, 181–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41404-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Yuwei, Fanman Meng, Hongliang Li, Qingbo Wu, Xiaolong Xu, and Shuai Chen. "A New Local Transformation Module for Few-Shot Segmentation." In MultiMedia Modeling, 76–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37734-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bu, XiaoQing, Jianming Wang, Jiayu Liang, Kunliang Liu, Yukuan Sun, and Guanghao Jin. "One-Shot Video Object Segmentation Initialized with Referring Expression." In Pattern Recognition and Computer Vision, 416–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31723-2_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Shot segmentation"

1

Ranathunga, L., R. Zainuddin, and N. A. Abdullah. "Conventional video shot segmentation to semantic shot segmentation." In 2011 IEEE 6th International Conference on Industrial and Information Systems (ICIIS). IEEE, 2011. http://dx.doi.org/10.1109/iciinfs.2011.6038064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weber, Mark, Jonathon Luiten, and Bastian Leibe. "Single-Shot Panoptic Segmentation." In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020. http://dx.doi.org/10.1109/iros45743.2020.9341546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Raza, Hasnain, Mahdyar Ravanbakhsh, Tassilo Klein, and Moin Nabi. "Weakly Supervised One Shot Segmentation." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Caelles, S., K. K. Maninis, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. "One-Shot Video Object Segmentation." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.565.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Wei-Kuang, and Shang-Hong Lai. "Integrated video shot segmentation algorithm." In Electronic Imaging 2003, edited by Minerva M. Yeung, Rainer W. Lienhart, and Chung-Sheng Li. SPIE, 2003. http://dx.doi.org/10.1117/12.476299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Kai, Wei Zhai, and Yang Cao. "Self-Supervised Tuning for Few-Shot Segmentation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/142.

Full text
Abstract:
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples. It is a challenging task since the dense prediction can only be achieved under the guidance of latent features defined by sparse annotations. Existing meta-learning based method tends to fail in generating category-specifically discriminative descriptor when the visual features extracted from support images are marginalized in embedding space. To address this issue, this paper presents an adaptive tuning framework, in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme, augmenting category-specific descriptors for label prediction. Specifically, a novel self-supervised inner-loop is firstly devised as the base learner to extract the underlying semantic features from the support image. Then, gradient maps are calculated by back-propagating self-supervised loss through the obtained features, and leveraged as guidance for augmenting the corresponding elements in the embedding space. Finally, with the ability to continuously learn from different episodes, an optimization-based meta-learner is adopted as outer loop of our proposed framework to gradually refine the segmentation results. Extensive experiments on benchmark PASCAL-5i and COCO-20i datasets demonstrate the superiority of our proposed method over state-of-the-art.
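The gradient-guidance mechanism this abstract describes, back-propagating a self-supervised loss through the support features and using the resulting gradient maps to adjust the embedding, can be outlined as below. This is a loose sketch of the general mechanism, not the paper's implementation; the loss function is left abstract, and the re-weighting rule (one plus the normalized gradient magnitude) is an assumption made for illustration.

```python
import torch

def gradient_guided_features(features, self_supervised_loss_fn):
    """Re-weight support features by the gradient magnitude of a self-supervised
    loss, as a crude stand-in for gradient-map guidance of the embedding."""
    features = features.detach().requires_grad_(True)     # (B, C, H, W)
    loss = self_supervised_loss_fn(features)               # any scalar self-supervised loss
    grad, = torch.autograd.grad(loss, features)
    guidance = grad.abs().mean(dim=1, keepdim=True)         # (B, 1, H, W) gradient map
    guidance = guidance / (guidance.amax(dim=(2, 3), keepdim=True) + 1e-6)
    return features * (1.0 + guidance)                      # augmented embedding
```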
APA, Harvard, Vancouver, ISO, and other styles
7

Honbu, Yuma, and Keiji Yanai. "Few-Shot and Zero-Shot Semantic Segmentation for Food Images." In ICMR '21: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3463947.3469234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shaban, Amirreza, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots. "One-Shot Learning for Semantic Segmentation." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qi, Yanjun, A. Hauptmann, and Ting Liu. "Supervised classification for video shot segmentation." In 2003 International Conference on Multimedia and Expo. ICME '03. Proceedings (Cat. No.03TH8698). IEEE, 2003. http://dx.doi.org/10.1109/icme.2003.1221710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dahyot, Rozenn, Niall Rea, and Anil C. Kokaram. "Sport video shot segmentation and classification." In Visual Communications and Image Processing 2003, edited by Touradj Ebrahimi and Thomas Sikora. SPIE, 2003. http://dx.doi.org/10.1117/12.503127.

Full text
APA, Harvard, Vancouver, ISO, and other styles