Dissertations / Theses on the topic 'Videos analytics'


Consult the top 50 dissertations / theses for your research on the topic 'Videos analytics.'


1

Abdallah, Raed. "Intelligent crime detection and behavioral pattern mining : a comprehensive study." Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7031.

Full text
Abstract:
In the face of a rapidly evolving criminal landscape, law enforcement agencies (LEAs) grapple with escalating challenges in contemporary criminal investigations. This PhD thesis embarks on a transformative exploration, encouraged by an urgent need to revolutionize investigative methodologies and arm LEAs with state-of-the-art tools to combat crime effectively. Rooted in this imperative motivation, the research meticulously navigates diverse data sources, including the intricate web of social media networks, omnipresent video surveillance systems, and expansive online platforms, recognizing their fundamental roles in modern crime detection. The contextual backdrop of this research is the pressing demand to empower LEAs with advanced capabilities in intelligent crime detection. The surge in digital interactions necessitates a paradigm shift, compelling researchers to delve deep into the labyrinth of social media, surveillance footage, and online data. This context underscores the urgency to fortify law enforcement strategies with cutting-edge technological solutions. Motivated by urgency, the thesis focuses on three core objectives: firstly, automating suspect identification through the integration of data science, big data tools, and ontological models, streamlining investigations and empowering law enforcement with advanced inference rules; secondly, enabling real-time detection of criminal events within digital noise via intricate ontological models and advanced inference rules, providing actionable intelligence and supporting informed decision-making for law enforcement; and thirdly, enhancing video surveillance by integrating advanced deep learning algorithms for swift and precise detection of knife-related crimes, representing a pioneering advancement in video surveillance technology. Navigating this research terrain poses significant challenges. The integration of heterogeneous data demands robust preprocessing techniques, enabling the harmonious fusion of disparate data types. Real-time analysis of social media intricacies necessitates ontological models adept at discerning subtle criminal nuances within the digital tapestry. Moreover, designing Smart Video Surveillance Systems necessitates the fusion of state-of-the-art deep learning algorithms with real-time video processing, ensuring both speed and precision in crime detection. Against these challenges, the thesis contributes innovative solutions at the forefront of contemporary crime detection technology. The research introduces ICAD, an advanced framework automating suspect identification and revolutionizing investigations. CRI-MEDIA tackles social media crime challenges using a streamlined process and enriched criminal ontology. Additionally, SVSS, a Smart Video Surveillance System, swiftly detects knife-related crimes, enhancing public safety. Integrating ICAD, CRI-MEDIA, and SVSS, this work pioneers intelligent crime detection, empowering law enforcement with unprecedented capabilities in the digital age. Critical to the integrity of the research, the proposed methodologies undergo rigorous experimentation in authentic criminal scenarios. Real-world data gathered from actual investigations form the crucible wherein ICAD, CRI-MEDIA, and SVSS are tested. These experiments serve as a litmus test, affirming not only the viability of the proposed solutions but also offering nuanced insights for further refinement. 
The results underscore the practical applicability of these methodologies, their adaptability in diverse law enforcement contexts, and their role in enhancing public safety and security.
2

Carpani, Valerio. "CNN-based video analytics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
The content of this thesis illustrates the six months of work done during my internship at TKH Security Solutions - Siqura B.V. in Gouda, Netherlands. The aim of this thesis is to investigate possible uses of convolutional neural networks from two different points of view: first, we propose a novel algorithm for person re-identification; second, we propose a deployment chain for bringing research concepts to product-ready solutions. In existing works, the person re-identification task is assumed to be independent of the person detection task. In this thesis, instead, we consider the two tasks as linked. In fact, features produced by an object detection convolutional neural network (CNN) contain useful information which is not being used by current re-identification methods. We propose several solutions for learning a metric on CNN features to distinguish between different identities. The best of these solutions is then compared with state-of-the-art alternatives on the popular Market-1501 dataset. Results show that our method outperforms them in computational efficiency, with only a reasonable loss in accuracy. For this reason, we believe that the proposed method can be more appropriate than current state-of-the-art methods in situations where computational efficiency is critical, such as embedded applications. The deployment chain we propose in this thesis has two main goals: it must be flexible enough to accommodate new advances in network architectures, and it must be able to deploy neural networks on both server and embedded platforms. We tested several frameworks on several platforms and arrived at a deployment chain that relies on the open source ONNX format.
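To make the ONNX-based deployment idea concrete, here is a minimal sketch of exporting a CNN backbone to ONNX with PyTorch; the backbone choice and file name are illustrative assumptions, not the thesis's actual network or pipeline.

```python
import torch
import torchvision

# Stand-in backbone: any CNN whose intermediate features could feed a
# learned re-identification metric (hypothetical choice for illustration).
model = torchvision.models.resnet18(weights=None)
model.eval()

# Export to the framework-neutral ONNX format so the same network can be
# deployed on both server and embedded runtimes.
dummy = torch.randn(1, 3, 224, 224)  # one RGB input of the expected shape
torch.onnx.export(
    model, dummy, "reid_backbone.onnx",  # hypothetical output file
    opset_version=11,
    input_names=["image"],
    output_names=["features"],
)
```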
3

Pettersson, Johan, and Robin Veteläinen. "A comparison of solutions to measure Quality of Service for video streams." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188514.

Full text
Abstract:
More and more people are watching video streams over the Internet, and this has led to an increase in companies competing for viewers. To improve the user experience, these companies can measure how their services are performing. The aim of this thesis was to recommend a way to measure the quality of service for a real-time video streaming service. Three methods were presented: buying the information from a content delivery network, extending existing analytics software, or building a custom solution using packet sniffing. It was decided to extend existing analytics software. An evaluation was made of which software to extend. Four solutions were compared: Google Analytics, Mixpanel, Ooyala IQ and Piwik. The comparison was made using the analytic hierarchy process, rating each alternative on criteria such as API maturity, flexibility, visualization and support. The recommended software to extend when building a real-time video streaming service is Ooyala IQ, which excels at flexibility and is easy to integrate into existing solutions. It also had great capacity, with no limit on how many events it can track per month, and finally it offers dedicated support via telephone or email.
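For readers unfamiliar with the analytic hierarchy process used in the comparison, criterion weights are derived from a pairwise comparison matrix via its principal eigenvector. A minimal sketch with invented comparison values (not the thesis's actual judgements):

```python
import numpy as np

# Pairwise comparisons of four criteria (API maturity, flexibility,
# visualization, support) on Saaty's 1-9 scale; entry [i, j] states how
# much more important criterion i is than criterion j.  Illustrative only.
A = np.array([
    [1.0, 1/3, 2.0, 3.0],
    [3.0, 1.0, 4.0, 5.0],
    [1/2, 1/4, 1.0, 2.0],
    [1/3, 1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()  # normalized priority vector
print(weights)  # relative weight of each criterion
```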
4

Hassan, Waqas. "Video analytics for security systems." Thesis, University of Sussex, 2013. http://sro.sussex.ac.uk/id/eprint/43406/.

Full text
Abstract:
This study has been conducted to develop robust event detection and object tracking algorithms that can be implemented in real-time video surveillance applications. The aim of the research has been to produce an automated video surveillance system that is able to detect and report potential security risks with minimum human intervention. Since the algorithms are designed to be implemented in real-life scenarios, they must be able to cope with strong illumination changes and occlusions. The thesis is divided into two major sections. The first section deals with event detection and edge-based tracking, while the second section describes colour measurement methods developed to track objects in crowded environments. The event detection methods presented in the thesis mainly focus on detection and tracking of objects that become stationary in the scene. Objects such as baggage left in public places or vehicles parked illegally can cause a serious security threat. A new pixel-based classification technique has been developed to detect objects of this type in cluttered scenes. Once detected, edge-based object descriptors are obtained and stored as templates for tracking purposes. The consistency of these descriptors is examined using an adaptive edge-orientation-based technique. Objects are tracked and alarm events are generated if the objects are found to be stationary in the scene after a certain period of time. To evaluate the full capabilities of the pixel-based classification and adaptive edge-orientation-based tracking methods, the model is tested on several hours of real-life video surveillance scenarios recorded at different locations and times of day, from our own and publicly available databases (i-LIDS, PETS, MIT, ViSOR). The performance results demonstrate that the combination of pixel-based classification and adaptive edge-orientation-based tracking achieved a success rate of over 95%, and also yields better detection and tracking results than other available state-of-the-art methods. In the second part of the thesis, colour-based techniques are used to track objects in crowded video sequences under severe occlusion. A novel Adaptive Sample Count Particle Filter (ASCPF) technique is presented that improves the performance of the standard Sample Importance Resampling Particle Filter by up to 80% in terms of computational cost. An appropriate particle range is obtained for each object and the concept of adaptive samples is introduced to keep the computational cost down. The objective is to keep the number of particles to a minimum and only to increase them up to the maximum as and when required. Variable standard deviation values for state vector elements have been exploited to cope with heavy occlusion. The technique has been tested on different video surveillance scenarios with variable object motion, strong occlusion and change in object scale. Experimental results show that the proposed method not only tracks the object with comparable accuracy to existing particle filter techniques but is up to five times faster. Tracking objects in a multi-camera environment is discussed in the final part of the thesis. The ASCPF technique is deployed within a multi-camera environment to track objects across different camera views. Such environments can pose difficult challenges, such as changes in object scale and colour features as the objects move from one camera view to another.
Variable standard deviation values of the ASCPF have been utilized in order to cope with sudden colour and scale changes. As the object moves from one scene to another, the number of particles, together with the spread value, is increased to a maximum to reduce any effects of scale and colour change. Promising results are obtained when the ASCPF technique is tested on live feeds from four different camera views. It was found that not only did the ASCPF method result in the successful tracking of the moving object across different views but also maintained the real time frame rate due to its reduced computational cost thus indicating that the method is a potential practical solution for multi camera tracking applications.
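To illustrate the adaptive-sample-count idea, the following is a minimal bootstrap (SIR) particle filter whose particle budget grows under uncertainty and shrinks when tracking is confident. The motion model, likelihood and thresholds are invented for illustration; this is not the thesis's ASCPF.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(particles, weights, likelihood, n_min=50, n_max=1000):
    """One SIR step with an adaptive particle count (1-D toy state)."""
    # Predict: random-walk motion model (illustrative).
    particles = particles + rng.normal(0.0, 2.0, size=particles.shape)
    # Update: weight each particle by the observation likelihood.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # Effective sample size drops under occlusion or ambiguity.
    ess = 1.0 / np.sum(weights ** 2)
    n = len(particles)
    if ess < 0.5 * n:                 # uncertain: spend more particles
        n_new = min(2 * n, n_max)
    else:                             # confident: save computation
        n_new = max(n // 2, n_min)
    # Resample n_new particles in proportion to their weights.
    idx = rng.choice(n, size=n_new, p=weights)
    return particles[idx], np.full(n_new, 1.0 / n_new)

# Toy usage: track a position near 10.0 with a Gaussian likelihood.
obs = lambda p: np.exp(-0.5 * ((p - 10.0) / 3.0) ** 2)
parts = rng.uniform(0.0, 20.0, 200)
w = np.full(200, 1.0 / 200)
parts, w = track_step(parts, w, obs)
print(len(parts), parts.mean())
```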
5

Asif, Muhammad. "Video analytics for intelligent surveillance systems." Thesis, University of Strathclyde, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.530322.

Full text
6

Höferlin, Benjamin [Verfasser]. "Scalable Visual Analytics for Video Surveillance / Benjamin Höferlin." München : Verlag Dr. Hut, 2014. http://d-nb.info/1050331842/34.

Full text
7

Cheng, Guangchun. "Video Analytics with Spatio-Temporal Characteristics of Activities." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc799541/.

Full text
Abstract:
As video capturing devices become more ubiquitous, from surveillance cameras to smartphones, the demand for automated video analysis is increasing as never before. One obstacle in this process is to efficiently locate where a human operator’s attention should be, and another is to determine the specific types of activities or actions without ambiguity. It is the special interest of this dissertation to locate spatial and temporal regions of interest in videos and to develop a better action representation for video-based activity analysis. This dissertation follows the scheme of “locating then recognizing” activities of interest in videos, i.e., locations of potentially interesting activities are estimated before performing in-depth analysis. Theoretical properties of regions of interest in videos are first exploited, based on which a unifying framework is proposed to locate both spatial and temporal regions of interest with the same settings of parameters. The approach estimates the distribution of motion based on 3D structure tensors, and locates regions of interest according to persistent occurrences of low probability. Two further contributions are made to better represent the actions. The first is a unifying model of spatio-temporal relationships between reusable mid-level actions which bridge low-level pixels and high-level activities. Dense trajectories are clustered to construct mid-level actionlets, and the temporal relationships between actionlets are modeled as Action Graphs based on Allen interval predicates. The second is a novel and efficient representation of action graphs based on a sparse coding framework. Action graphs are first represented using Laplacian matrices and then decomposed as a linear combination of primitive dictionary items following a sparse coding scheme. The optimization is eventually formulated and solved as a determinant maximization problem, and 1-nearest neighbor is used for action classification. The experiments have shown better results than existing approaches for regions-of-interest detection and action recognition.
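As background to the motion estimation step above, a 3D structure tensor summarizes the local spatio-temporal gradient structure of a video volume. A minimal sketch (smoothing scale chosen arbitrarily; not the dissertation's implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_3d(video, sigma=2.0):
    """video: (T, H, W) grayscale array -> tensor field (T, H, W, 3, 3)."""
    gt, gy, gx = np.gradient(video.astype(float))  # gradients along t, y, x
    grads = [gx, gy, gt]
    J = np.empty(video.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # Each entry is a smoothed product of gradients; smoothing
            # aggregates structure over a spatio-temporal neighbourhood.
            J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    return J

# Eigen-analysis of J at each voxel characterizes local motion, from which
# persistently low-probability (unusual) motion regions can be flagged.
video = np.random.rand(16, 64, 64)  # placeholder clip
print(structure_tensor_3d(video).shape)
```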
8

Luo, Ning. "A Wireless Traffic Surveillance System Using Video Analytics." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc68005/.

Full text
Abstract:
Video surveillance systems have been commonly used in transportation systems to support traffic monitoring, speed estimation, and incident detection. However, there are several challenges in developing and deploying such systems, including high development and maintenance costs, the bandwidth bottleneck of long-range links, and a lack of advanced analytics. In this thesis, I leverage current wireless, video camera, and analytics technologies, and present a wireless traffic monitoring system. I first present an overview of the system. Then I describe the site investigation and several test links with different hardware/software configurations to demonstrate the effectiveness of the system. The system development process was documented to provide guidelines for future development. Furthermore, I propose a novel speed-estimation analytics algorithm that takes into consideration roads with slope angles. I prove the correctness of the algorithm theoretically, and validate its effectiveness experimentally. The experimental results on both synthetic and real datasets show that the algorithm is more accurate than the baseline algorithm 80% of the time. On average, the accuracy improvement in speed estimation is over 3.7%, even for very small slope angles.
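The abstract describes the slope-aware speed estimation only at a high level, but the basic geometric correction can be sketched: a displacement measured in plan view underestimates travel along an inclined road by a factor of cos(slope). A hedged illustration under that assumption (not the thesis's actual algorithm):

```python
import math

def speed_kmh(plan_distance_m, dt_s, slope_deg=0.0):
    """Estimate speed, correcting plan-view distance for road slope."""
    # Distance actually travelled along the inclined road surface.
    along_road = plan_distance_m / math.cos(math.radians(slope_deg))
    return along_road / dt_s * 3.6  # m/s -> km/h

print(speed_kmh(25.0, 1.0))               # flat road: 90.0 km/h
print(speed_kmh(25.0, 1.0, slope_deg=5))  # ~90.3 km/h on a 5-degree grade
```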
9

Barracu, Maria Antonietta. "Tecniche, metodologie e strumenti per la Web Analytics, con particolare attenzione sulla Video Analytics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1919/.

Full text
Abstract:
This thesis addresses the topic of video tracking, analysing the main techniques, methodologies and tools for video analytics. The entire work was carried out at the company BitBang, from the gathering of useful information and material through to the writing of the final document. At the same company I completed my internship, during which I explored the practical aspects of web and video analytics, observing industry specialists at work and becoming familiar with data analysis tools through the main web analytics platforms. To fully understand this subject, it was first necessary to cover the basics of web analytics. The classic web analytics methodologies are therefore presented, i.e. how to analyse visitor behaviour on web pages with the metrics best suited to different types of business, up to the newer technique of event tracking. Event tracking emerged soon after multimedia content spread across web pages, changing how users navigate and, consequently, creating the need to track the new actions performed on such content in order to obtain a complete picture of the visitor experience on a site. Data obtained with traditional web analytics methods are no longer sufficient; they must be integrated with new techniques, indispensable for a 360-degree overview of everything that happens on the site. This is where video tracking, called video analytics, comes in. The main metrics for analysis are presented, together with how best to exploit them depending on the type of website and the business purpose for which the video is used. To understand how to exploit video as a marketing tool and analyse visitor behaviour on it, it is first necessary to step back and survey the main aspects related to video: its production, its embedding in web pages, the players used to do so, and its distribution through social network sites and across all the new devices and platforms connected to the network. A more technical overview follows, showing the differences between file formats and video formats, web delivery techniques, how to optimise the embedding of content in pages, descriptions of the most popular players for uploading, and finally a brief look at the current state of the war between open source and proprietary video formats on the web. The final section covers the more practical and experimental part of the work. Chapter 7 describes the main features of two of the most widely used web analytics platforms, one free, Google Analytics, and one commercial, Omniture SiteCatalyst, with particular attention to video tracking metrics and the differences between the two products. The characteristics of some platforms specific to video analytics are also illustrated, analysing their most interesting features, although I was not able to test them in practice. The last chapter illustrates some practical applications of video analytics observed during the internship and thesis period at the company, describing in particular the problems encountered with the products used for tracking, the solutions proposed, and the questions that remain open in this field.
10

Höferlin, Markus Johannes [Verfasser], and Daniel [Akademischer Betreuer] Weiskopf. "Video visual analytics / Markus Johannes Höferlin. Betreuer: Daniel Weiskopf." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2013. http://d-nb.info/1037955935/34.

Full text
11

Ajiboye, Soladoye Oyebowale. "Video big data : an agile architecture for systematic exploration and analytics." Thesis, University of Sussex, 2017. http://sro.sussex.ac.uk/id/eprint/71047/.

Full text
Abstract:
Video is currently at the forefront of most business and natural environments. In surveillance, it is the most important technology, as surveillance systems reveal information and patterns for solving many security problems, including crime prevention. This research investigates technologies that currently drive video surveillance systems with a view to optimization and automated decision support. The investigation reveals features and properties that can be optimised to improve performance and derive further benefits from surveillance systems, including system-wide architecture, metadata generation, metadata persistence, object identification, object tagging, object tracking, and search and querying sub-systems. The current less-than-optimum performance is attributable to many factors, which include the massive volume, variety, and velocity (the speed at which streaming video is transmitted to storage) of video data in surveillance systems. The research contributions are two-fold. First, we propose a system-wide architecture for designing and implementing surveillance systems, based on the authors' system architecture for generating metadata. Secondly, we design a simulation model of a multi-view surveillance system from which we generate simulated video streams in large volumes. From each video sequence in the model, the authors extract metadata and apply a novel algorithm for predicting the location of identifiable objects across a well-connected camera cluster. This research provides evidence that independent surveillance systems (for example, security cameras) can be unified across a geographical location such as a smart city, where each network is administratively owned and managed independently. Our investigation involved two experiments: the first was the implementation of a web-based solution, for which we developed a directory service for managing, cataloguing, and persisting metadata generated by the surveillance networks; the second focused on the set-up, configuration and architecture of the surveillance system. These experiments involved the investigation and demonstration of three loosely coupled service-oriented APIs, which provided the capability to generate the query-able metadata. The results of our investigations answered our research questions, the main question being “to what degree of accuracy can we predict the location of an object in a connected surveillance network”. Our experiments also provided evidence in support of our hypothesis: “it is feasible to ‘explore' unified surveillance data generated from independent surveillance networks”.
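As a flavour of the query-able metadata such a directory service might manage, a record for one observation and a topology lookup for predicting the next camera could look like the following; all field names and values are hypothetical, invented for illustration.

```python
# Hypothetical metadata record for one detection in a camera network.
record = {
    "network_id": "city-north",        # independently managed network
    "camera_id": "cam-012",
    "object_id": "veh-4421",
    "timestamp": "2017-03-02T14:05:31Z",
    "bbox": [312, 88, 402, 190],       # pixel coordinates in the frame
}

def cameras_adjacent_to(camera_id, topology):
    """Candidate next cameras for an object leaving camera_id's view."""
    return topology.get(camera_id, [])

# A well-connected cluster is encoded as an adjacency map.
topology = {"cam-012": ["cam-013", "cam-027"]}
print(cameras_adjacent_to(record["camera_id"], topology))
```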
12

Medler, Ben. "Play with data - an exploration of play analytics and its effect on player experiences." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44888.

Full text
Abstract:
In a time of 'Big Data,' 'Personal Informatics' and 'Infographics,' the definitions of data visualization and data analytics are splintering rapidly. When one compares how Fortune 500 companies are using analytics to optimize their supply chains and lone individuals are visualizing their Twitter messages, we can see how multipurpose these areas are becoming. Visualization and analytics are frequently exhibited as tools for increasing efficiency and informing future decisions. At the same time, they are used to produce artworks that alter our perspectives of how data is represented and analyzed. During this time of turbulent reflection within the fields of data visualization and analytics, digital games have been going through a similar period of data metamorphosis as players are increasingly being connected and tracked through various platform systems and social networks. The amount of game-related data collected and shared today greatly exceeds that of previous gaming eras and, by utilizing the domains of data visualization and analytics, this increased access to data is poised to reshape, and continue to reshape, how players experience games. This dissertation examines how visualization, analytics and games intersect in a domain with a fluctuating identity but a shared overall goal: analyzing game-related data. At this intersection exists play analytics, a blend of digital systems and data analysis methods connecting players, games and their data. Play analytic systems surround the experience of playing a game, visualizing data collected from players and acting as external online hubs where players congregate. As part of this dissertation's examination of play analytics, over eighty systems are analyzed and discussed. Additionally, a user study was conducted to test the effects play analytic systems have on a player's gameplay behavior. Both studies are used to highlight how play analytic systems function and are experienced by players. With millions of players already using play analytics systems, this dissertation provides a chronicle of the current state of play analytics, how the design of play analytics systems may shift in the future and what it means to play with data.
13

Kurzhals, Kuno [Verfasser], and Daniel [Akademischer Betreuer] Weiskopf. "Visual analytics of eye-tracking and video data / Kuno Kurzhals ; Betreuer: Daniel Weiskopf." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2018. http://d-nb.info/1181099412/34.

Full text
14

Godfrey, Jason Michael. "Exploring Video Analytics as a Course Assessment Tool for Online Writing Instruction Stakeholders." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/8802.

Full text
Abstract:
Online Writing Instruction (OWI) programs, like online learning classes in general, are becoming more popular in post-secondary education. Yet few articles discuss how to tailor course assessment methods to an exclusively online environment. This thesis explores video analytics as a possible course assessment tool for online writing classrooms. Video analytics allow instructors, course designers, and writing program administrators to view how many students are engaging in video-based course materials. Additionally, video analytics can provide information about how active students are in their data-finding methods while they watch. By means of example, this thesis examines video analytics from one semester of a large western university’s online first-year writing sections (n=283). This study finds that video analytics afford stakeholders knowledge of patterns in how students interact with video-based course materials. Assuming the end goal of course assessment is to provide meaningful insight that will help improve student and teacher experience, video analytics can be a powerful, dynamic course assessment tool.
15

Goujet, Raphaël. "Hero.coli : a video game empowering stealth learning of synthetic biology : a continuous analytics-driven game design approach." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB175.

Full text
Abstract:
Video games have demonstrated their value as a hobby and as a pedagogic tool, both in academic and professional fields. However, learning games have to integrate pedagogical strategies and be fine-tuned to be efficient and adopted. Synthetic biology is an emerging field focusing on engineering living systems to achieve controlled functions. It shares concepts with crafting and engineering games. We designed the first synthetic biology crafting game, named Hero.Coli, for popularization and learning. In order to engage both forced and voluntary users, i.e., students and citizens, our main pedagogical strategy is stealth learning. This means creating an educational game with no interruption in the experience (such as explicit learning or assessment phases), mimicking successful mainstream games. I used embedded analytics to continuously refine this new pedagogical tool, by spotting the bottlenecks and issues in level design, the misconceptions revealed in posttests, and the learning successes. I validated the usefulness of the game by comparing pre- and posttests of players (n=89), finding an average increase of 32 percentage points in the correct answer rate per question between pretest and posttest. The higher gains stemmed mainly from higher-order thinking questions as compared to lexical questions. This is in line with our expectation from the chosen stealth learning strategy, which prioritizes function (game mechanics) over lexicon. I then correlated different user tracking parameters with posttest scores. Lastly, by analyzing surveys, we also revealed that interest in biology is more critical than education in explaining the variance in learning. These results could lead to future adaptive learning improvements, including user-tailored feedback, in-game or in-class. Overall, the Hero.coli framework facilitates future implementations of game-based learning solutions by exemplifying a methodological approach to game development: design, tracking and analytics, quick iteration and testing, and final evaluation.
16

Arun, Ashutosh. "A novel Road User Safety Field Theory for traffic safety assessment applying video analytics." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/234039/1/Ashutosh_Arun_Thesis.pdf.

Full text
Abstract:
This thesis introduces a new Road User Safety Field Theory to proactively assess traffic safety by studying the interactions of various road users at signalised intersections. The proposed theory combines road traffic environmental factors, vehicle capabilities and personal characteristics to determine the extent and strength of road users’ safety ‘bubble’, or field, across various traffic interactions. By applying Artificial Intelligence-based video data analytics, the proposed Road User Safety Field Theory is found to better estimate crash risks, in terms of crash frequency and severity, than traditional traffic conflict techniques.
17

Mathonat, Romain. "Rule discovery in labeled sequential data : Application to game analytics." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI080.

Full text
Abstract:
It is extremely useful to exploit labeled datasets not only to learn models and perform predictive analytics but also to improve our understanding of a domain and its target classes. The subgroup discovery task has been studied for more than two decades. It concerns the discovery of rules covering sets of objects having interesting properties, e.g., that characterize a given target class. Though many subgroup discovery algorithms have been proposed for both transactional and numerical data, discovering rules within labeled sequential data has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications, and we have to look for heuristic approaches. In this thesis, we propose to apply bandit models and Monte Carlo Tree Search to explore the search space of possible rules using an exploration-exploitation trade-off, on different data types such as sequences of itemsets or time series. For a given budget, they find a collection of the top-k best rules in the search space w.r.t. a chosen quality measure. They require only light configuration and are independent of the quality measure used for pattern scoring. To the best of our knowledge, this is the first time that the Monte Carlo Tree Search framework has been exploited in a sequential data mining setting. We have conducted thorough and comprehensive evaluations of our algorithms on several datasets to illustrate their added value, and we discuss their qualitative and quantitative results. To assess the added value of one of our algorithms, we propose a use case of game analytics, more precisely Rocket League match analysis. Discovering interesting rules in sequences of actions performed by players, and using them in a supervised classification model, shows the efficiency and relevance of our approach in the difficult and realistic context of high-dimensional data. It supports the automatic discovery of skills and can be used to create new game modes, to improve the ranking system, to help e-sport commentators, or to better analyse opposing teams in advance, for example.
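The exploration-exploitation trade-off mentioned above is typically driven by an upper confidence bound. A minimal UCB1 sketch over a toy set of candidate rules (the rules and their quality values are invented; this is not the thesis's algorithm):

```python
import math
import random

random.seed(0)
rules = ["A->target", "B->target", "AB->target"]   # candidate patterns
true_quality = {"A->target": 0.3, "B->target": 0.6, "AB->target": 0.5}

counts = {r: 0 for r in rules}
totals = {r: 0.0 for r in rules}

for t in range(1, 501):
    # UCB1 favours rules with a high mean quality or few evaluations.
    def ucb(r):
        if counts[r] == 0:
            return float("inf")
        return totals[r] / counts[r] + math.sqrt(2 * math.log(t) / counts[r])

    r = max(rules, key=ucb)
    # A noisy evaluation stands in for measuring the rule on sampled data.
    reward = true_quality[r] + random.gauss(0.0, 0.1)
    counts[r] += 1
    totals[r] += reward

best = max(rules, key=lambda r: totals[r] / counts[r])
print(best, counts)  # evaluations concentrate on the best rule
```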
18

Motta, Ricardo J. "An analytical model for the colorimetric characterization of color CRTs /." Online version of thesis, 1991. http://hdl.handle.net/1850/10937.

Full text
19

Whitson, Robert Henry. "The interpretive spiral: an analytical rubric for videogame interpretation." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44698.

Full text
Abstract:
In this work, I propose an analytical rubric called the Interpretive Spiral, designed to examine the process through which players create meaning in videogames by examining their composition in three categories, across four levels of interaction. The most familiar of the categories I propose is the Mechanical, which refers to the rules, logic, software and hardware that compose the core of videogames. My second category, which I call the Thematic, is a combination of Arsenault and Perron's Narrative Spiral of gameplay, proposed in their Magic Cycle of Gameplay model (accounting for embedded text, videos, dialog and voiceovers), and Jason Begy's audio-visual level of his Tripartite Model of gameplay (accounting for graphics, sound effects, music and icons), though it also accounts for oft-neglected features such as interface and menu design. The third category, the Affective, refers to the emotional response and metaphorical parallels inspired by the combination of the other two levels. The first level of interaction I explore, the Pre-Play Level of interpretation, actually precedes gameplay, as it is common for players to begin interpreting games before playing them. Next I examine the Fundamental Level of interpretation, which entails the learning phase of gameplay. The Secondary Level, the longest level of play, describes the shift from learning the game to informed, self-conscious play. The third and final, elective, level of interpretation is where the player forms connections between the gameplay experience and other concepts and experiences that exist outside of the game artifact. To put my model through its paces, I apply it in its entirety to three influential and critically acclaimed videogames, and in part to several other titles.
20

Winblad, Emanuel. "Visualization of web site visit and usage data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110576.

Full text
Abstract:
This report documents the work and results of a master’s thesis in Media Technology that was carried out at the Department of Science and Technology at Linköping University with the support of Sports Editing Sweden AB (SES). Its aim is to create a solution which aids the users of SES’ web CMS products in gaining insight into web site visit and usage statistics. The resulting solution is the concept and initial version of a web-based service. This service has been developed through an agile process with user-centered design in mind, and provides a graphical user interface which makes extensive use of visualizations to achieve the project goal.
21

Deterding, Sebastian [Verfasser], and Uwe [Akademischer Betreuer] Hasebrink. "Modes of Play : A Frame Analytic Account of Video Game Play / Sebastian Deterding. Betreuer: Uwe Hasebrink." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2014. http://d-nb.info/1054422311/34.

Full text
22

Al-Rawahi, Manal N. K. "Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/23359.

Full text
Abstract:
CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelizes execution and handles machine failures, relieving users of the complexity of managing the underlying processing so that they can focus solely on building their application. In a practical deployment, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down on demand, a user such as a CCTV system operator intending to use a public cloud would aspire to know what cloud resources (i.e. number of VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which resource requirements can be effectively and optimally predicted would thus be a useful tool, if available. The literature offers no clear and comprehensive modelling framework that provides proactive resource allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the hope of closing the above research gap, novel research is first initiated by understanding the current legal practices and requirements of implementing video surveillance systems within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital for its applicability in such domains. Subsequently, the thesis presents a comprehensive framework for the performance modelling and optimization of resource allocation when deploying a scalable distributed video analytic application in a Hadoop-based framework running on a virtualized cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, the Multi-Layer Perceptron (MLP) and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, in order to allocate resources under constraints and obtain optimal performance in terms of job execution time, we propose a Genetic Algorithm (GA) based optimization technique. Experimental results are provided to demonstrate the proposed framework's capability to successfully predict the job execution time of a given video analytic task based on infrastructure and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. Given the above, the thesis contributes to the state of the art in distributed video analytics design, implementation, performance analysis and optimisation.
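For intuition about the execution-time modelling described above, here is a sketch on synthetic data; the thesis uses Weka learners such as M5P and RepTree, for which scikit-learn's decision tree regressor serves as a rough stand-in, and the data-generating relationship is invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Synthetic training data: (number of VMs, input video size in GB) against
# observed job execution time in seconds (made-up relationship).
X = rng.uniform([2.0, 1.0], [32.0, 100.0], size=(200, 2))
y = 50.0 + 30.0 * X[:, 1] / X[:, 0] + rng.normal(0.0, 5.0, 200)

model = DecisionTreeRegressor(max_depth=6).fit(X, y)

# Predict execution time for candidate cluster sizes and pick the smallest
# VM count whose predicted time meets a 300 s constraint.
for n_vms in range(2, 33):
    t = model.predict([[n_vms, 60.0]])[0]
    if t <= 300.0:
        print(f"{n_vms} VMs -> predicted {t:.0f} s")
        break
```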
23

Aminmansour, Sina. "Video analytics for the detection of near-miss incidents at railway level crossings and signal passed at danger events." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/112765/1/Sina_Aminmansour_Thesis.pdf.

Full text
Abstract:
Railway collisions remain a significant safety and financial concern for the Australian railway industry. Collecting data about events which could potentially lead to collisions helps to better understand the causal factors of railway collisions. In this thesis, we introduced Artificial Intelligence and Computer Vision algorithms which use cameras installed on trains to automatically detect Near-miss incidents at railway level crossings, and Signal Passed at Danger (SPAD) events. A SPAD is an event when a train passes a red signal without authority due to technical or human errors. Our experimental results demonstrate that it is possible to reliably detect these events.
24

Detton, Alan James. "The Creation of a 3D Interactive Human Neural Development Resource and Its Evaluation Through a Video Analytic Usability Study." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1337966847.

Full text
25

Prantl, Daniel, and Christopher Wallbaum. "Videography on the Way to the Analytical Short Film: Managing the ambiguity in interaction regarding video material." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34614.

Full text
Abstract:
This chapter gives a brief overview of research methods using video material, led by the question of how these methods manage the ambiguity inherent in interaction with such footage. The argument is put forward that, from a perspective of symbolic interactionism, in order to make adequate assertions about video material it is necessary to use video itself as a key statement in scientific discourse.
26

MÁGNO, Carlos. "Avaliação da disponibilidade de video surveillance as service (VSAAS)." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17311.

Full text
Abstract:
CNPq
In the last few years, Video Surveillance as a Service (VSaaS) has seen a significant increase in demand for security mechanisms that ensure higher levels of reliability. In parallel, the Cloud Computing paradigm has become an important tool for remote computing services. VSaaS, for example, requires the storage of large amounts of data: in 2012, 50% of big data storage was surveillance video, and in general videos have high significance for their owners, who do not tolerate long periods of interruption. To avoid low-performance video services and to increase quality, mechanisms that ensure high availability in VSaaS are required. However, this is difficult without a major impact on cost, so this work proposes two VSaaS systems that underwent availability analysis using analytical models (RBD, CTMC, and SPN). The first system, called the domestic system, was characterized by the essential elements of a basic VSaaS structure, for use in homes and small businesses. This system generated three architectures that were modeled to obtain closed-form formulas, which are important for performing analyses. The model of architecture one was validated, and the other architectures are variations of it. Architecture three had the highest availability, as it has the largest number of replicated components; compared with the architecture without replication, its downtime (in hours) was reduced by 36.89%. As it had the highest availability, a sensitivity analysis was performed, which showed the "Node" component to be the most relevant. The second system, a company VSaaS, generated eighteen architectures. For one of them, compared to a baseline, we obtained a significant reduction in downtime (30%) with a small increase in cost (on the order of 7%). If a service requires even less downtime, another analysis pointed to an architecture that reduces downtime by 80% while increasing cost by 30%. We propose and analyze architectures that can help administrators make important decisions in VSaaS implementation.
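For intuition about the availability and downtime figures above: steady-state availability follows from mean time to failure (MTTF) and mean time to repair (MTTR), and replication combines component availabilities as a parallel block in an RBD. A minimal sketch with invented parameters (not the dissertation's models):

```python
HOURS_PER_YEAR = 8760.0

def availability(mttf_h, mttr_h):
    """Steady-state availability of a single component."""
    return mttf_h / (mttf_h + mttr_h)

def parallel(*avails):
    """RBD parallel (replicated) block: fails only if all replicas fail."""
    p_fail = 1.0
    for a in avails:
        p_fail *= (1.0 - a)
    return 1.0 - p_fail

node = availability(mttf_h=1400.0, mttr_h=8.0)  # illustrative numbers
configs = {"single node": node, "replicated node": parallel(node, node)}
for name, a in configs.items():
    downtime_h = (1.0 - a) * HOURS_PER_YEAR
    print(f"{name}: availability={a:.5f}, downtime={downtime_h:.1f} h/year")
```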
APA, Harvard, Vancouver, ISO, and other styles
27

Петрунів, Орест Романович. "Система відеоспостереження розумного дому з інтелектуальним розпізнаванням загроз." Master's thesis, КПІ Ім. Ігоря Сiкорського, 2019. https://ela.kpi.ua/handle/123456789/31704.

Full text
Abstract:
The thesis considers the problem of automated threat recognition, presenting the main features of existing solutions and applications together with their advantages and disadvantages. When a site is covered by video surveillance, it is not enough to simply store the recorded data; the situations unfolding in front of the cameras must be actively recognized. A human operator is prone to mistakes, fatigue and loss of concentration, and when a distributed camera system is involved, the recognition task becomes impossible for a person. This is where intelligent systems capable of automatically detecting potential threats come to the rescue. The tasks for a system of intelligent threat detection in video streams based on neural network technologies are defined, and the networks for the system's main tasks, together with the training methods best suited to the problem, are selected. The architectures of the neural networks are described, and experiments on their training and operation are conducted. The system provides automatic recognition of situations from a video stream in real time, reducing staffing costs and increasing the effectiveness of the security system. The explanatory note is 85 pages long and contains 15 illustrations, 25 tables and 2 appendices.
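The thesis does not publish its code, but the kind of real-time recognition loop it describes can be outlined as follows. This is a minimal sketch assuming a pretrained detector loaded through OpenCV's DNN module; the model file name, label list, threat classes and the YOLO-style output layout are all hypothetical placeholders, not details from the work.

```python
# Minimal real-time threat-recognition loop (sketch; model and labels are placeholders).
import cv2

THREAT_CLASSES = {"knife", "gun"}                # hypothetical label set
CONF_THRESHOLD = 0.5

net = cv2.dnn.readNet("detector.onnx")           # assumed pretrained detector
labels = ["person", "knife", "gun", "bag"]       # assumed label order

cap = cv2.VideoCapture(0)                        # camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(640, 640))
    net.setInput(blob)
    detections = net.forward()
    # Assumes a YOLO-style (N, 5 + classes) output layout; real layouts vary by model.
    for det in detections.reshape(-1, detections.shape[-1]):
        confidence = float(det[4])
        class_id = int(det[5:].argmax())
        if confidence > CONF_THRESHOLD and labels[class_id] in THREAT_CLASSES:
            print("ALERT: potential threat detected")  # raise an alarm, not just store
cap.release()
```

The point of the sketch is the shift the abstract argues for: the stream is classified as it arrives, instead of being archived for a human to review later.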
APA, Harvard, Vancouver, ISO, and other styles
28

Wallbaum, Christopher. "The analytical short film: Form – Functions – Excursus – Criteria." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34615.

Full text
Abstract:
This chapter presents the form of the Analytical Short Film (ASF), its functions for communication, education and research, and criteria for its validity. Embedded is a fundamental reflection on the relations between a whole and its parts, giving reasons for dealing with blurredness in both lesson practice and video practice.
APA, Harvard, Vancouver, ISO, and other styles
29

Wallbaum, Christopher. "Comparing international music lessons on video." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A33770.

Full text
Abstract:
Video-recorded music lessons (on multi-angle DVDs) were used to inspire and improve understanding among experts from different cultures and discourses of music education. To make the process manageable and focused, we developed the Analytical Short Film (2-3 minutes) to address particular areas of interest and starting points for debate. We asked selected music teachers from seven nation-states to allow a typical and (in their opinion) good lesson to be recorded. We also asked the students and their parents for permission. At a symposium, national experts and researchers presented views on "their" lessons through Analytical Short Films. Discussion included implicit and explicit comparisons. The presenters also used a lesson from one of the other countries to stimulate discussion about assumptions in and challenges to their own views. We documented all comparisons made and compared these to derive cross-cultural categories (tertia comparationis). These categories should be relevant for understanding what makes a music lesson "good". The different perspectives and discussions offered by the authors in this book provide rich and diverse material for researchers, teachers and teacher educators.
APA, Harvard, Vancouver, ISO, and other styles
30

Yao, Lijie. "Situated Visualization in Motion." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG093.

Full text
Abstract:
In my thesis, I define visualization in motion and make several contributions on how to visualize and design situated visualizations in motion. In situated data visualization, the data is visualized directly near its data referent, i.e., the physical space, object, or person it refers to. Situated visualizations are often useful in contexts where the data referent or the viewer does not remain stationary but is in relative motion: imagine, for example, a runner looking at a visualization on their fitness band while running, or on a public display as they pass it by. Reading visualizations in such scenarios can be affected by motion factors, so understanding how best to design visualizations for these dynamic contexts is important. That is, effective and visually stable situated data encodings need to be defined and then studied when motion factors are involved. I first define visualization in motion as visual data representations used in contexts that exhibit relative motion between a viewer and an entire visualization, and I classify visualization in motion into three categories: (a) moving viewer and stationary visualization, (b) moving visualization and stationary viewer, and (c) both viewer and visualization in motion. To analyze the opportunities and challenges of designing visualization in motion, I propose a research agenda. To explore to what extent viewers can accurately read visualization in motion, I conduct a series of empirical perception studies on magnitude proportion estimation. My results show that people can extract reliable information from visualizations in motion, even at high speed and under irregular trajectories. Building on these perception results, I turn to the question of how to design and embed visualization in motion in real contexts. I choose swimming as an application scenario because it offers rich, dynamic data, and I implement a technology probe that allows designers to embed visualizations in motion in a live swimming video. Designers can adjust in real time the visual encodings, the movement status and the placement of the visualizations, which encode real race-related data. My evaluation with designers shows that designing visualizations in motion requires more than traditional visualization design tools provide: the visualization needs to be placed in context (e.g., near its data referent, against its background) and also needs to be previewed under its real movement, because the full context with motion effects can affect design decisions. I then continue this work to understand the impact of context on the design of visualizations in motion and on the user experience. I select video games as a test platform, in which visualizations in motion are placed against a busy, dynamic background but must help players make quick decisions to win. My study shows trade-offs between a visualization's readability under motion and its aesthetics: participants seek a balance between the readability of the visualization, its aesthetic fit to the context, the immersive experience it brings, the support it provides for winning, and the harmony between the visualizations' design and their context.
APA, Harvard, Vancouver, ISO, and other styles
31

Rapoport, Robert S. "The iterative frame : algorithmic video editing, participant observation & the black box." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:8339bcb5-79f2-44d1-b78d-7bd28aa1956e.

Full text
Abstract:
Machine learning is increasingly involved in both our production and consumption of video. One symptom of this is the appearance of automated video-editing applications. As this technology spreads rapidly to consumers, the need for substantive research about its social impact grows. To this end, this project maintains a focus on video editing as a microcosm of larger shifts in cultural objects co-authored by artificial intelligence. The window in which this research occurred (2010-2015) saw machine learning move increasingly into the public eye, and with it ethical concerns. What follows is, on the most abstract level, a discussion of why these ethical concerns are particularly urgent in the realm of the moving image. Algorithmic editing consists of software instructions that automate the creation of timelines of moving images; the criteria this software uses to query a database are variable. Algorithmic authorship already exists in other media, but I will argue that the moving image is a separate case insofar as the raw material of text and music software can develop on its own, whereas the performance of a trained actor still cannot be generated by software. Thus, my focus is on the relationship between live embodied performance and the subsequent algorithmic editing of that footage. This is a process that can employ other software, like computer vision (to analyze the content of video) and predictive analytics (to guess what kind of automated film to make for a given user). How is performance altered when it has to communicate to human and non-human alike? The ritual of the iterative frame gives literal form to something that throughout human history has been a projection: the omniscient participant observer, more commonly known as the Divine. We experience black-boxed software (AIs, specifically neural networks, which are intrinsically opaque) as functionally omniscient and tacitly allow it to edit more and more of life (e.g. filtering articles, playlists and even potential spouses). As long as it remains disembodied, we will continue to project the Divine onto the black box, causing cultural anxiety. In other words, predictive analytics alienate us from the source code of our cultural texts. The iterative frame, then, is a space in which these forces can be inscribed on the body, and hence narrated. The algorithmic editing of content is already taken for granted; the editing of moving images, in contrast, still requires a human hand. We need to understand the social power of moving-image editing before it is delegated to automation.
Practice Section: This project is practice-led, meaning that the portfolio of work was produced as it was being theorized. To underscore this, the portfolio comes at the end of the document. Video editors use artificial intelligence (AI) in a number of different applications, from deciding the sequencing of timelines to using facial and language detection to find actors in archives. This changes traditional production workflows on a number of levels. How can the single decision to cut between two frames of video speak to the larger epistemological shifts brought on by predictive analytics and the Big Data upon which they rely? When predictive analytics begin modeling the world of moving images, how will our own understanding of the world change? In the practice-based section of this thesis, I explore how these shifts will change the way in which actors might approach performance. What does a gesture mean to AI, and how will the editor decontextualize it? The set of a video shoot that will employ an element of AI in editing represents a move towards the ritualization of production, summarized in the term 'iterative frame'. The portfolio contains eight works that treat the set as a microcosm of larger shifts in the production of culture. There is, I argue, metaphorical significance in the changing understanding of terms like 'continuity' and 'sync' on the AI-watched set.
Theory Section: In the theoretical section, the approach is broadly comparative. I contextualize the current dynamic by looking at previous shifts in technology that changed the relationship between production and post-production, notably the lightweight recording technology of the 1960s. This section also draws on debates in ethnographic filmmaking about the matching of film and ritual. In this body of literature, there is a focus on how participant observation can be formalized in film. Triangulating between event, participant observer and edit grammar in ethnographic filmmaking provides a useful analogy for understanding how AI as film editor might function in relation to contemporary production. Rituals occur in a frame that is dependent on a spatially/temporally separate observer. This dynamic also exists on sets bound for post-production involving AI. The convergence of film grammar and ritual grammar occurred in the 1960s under the banner of cinéma vérité, in which the relationship between the participant observer/ethnographer and the subject became most transparent. In Rouch and Morin's Chronicle of a Summer (1961), reflexivity became ritualized in the form of on-screen feedback sessions. The edit became transparent: the black box of cinema disappeared. Today, as artificial intelligence enters the film production process, this relationship begins to reverse: feedback, while it exists, becomes less transparent. The weight of the feedback ritual is gradually shifted from presence and production to montage and post-production. Put differently, in cinéma vérité the participant observer was most present in the frame. As participant observation gradually becomes shared with code, it becomes more difficult to give it an embodied representation, and thus its presence is felt more in the edit of the film. The relationship between the ritual actor and the participant observer (the algorithm) is completely mediated by the edit, a reassertion of the black box where once it had been transparent. The crucible for looking at the relationship between algorithmic editing, participant observation and the black box is the subject in trance. In ritual trance the individual is subsumed by collective codes. Long before the advent of automated editing, trance was an epistemological problem posed to film editing. In the iterative frame, for the first time, film grammar can echo ritual grammar and indeed become continuous with it. This occurs through removing the act of cutting from the causal world and projecting this logic of post-production onto performance. Why does this occur? Ritual, and specifically ritual trance, is the moment when a culture gives embodied form to what it could not otherwise articulate. The trance of predictive analytics, the AI that increasingly choreographs our relationship to information, is the ineffable that finds form in the iterative frame. In the iterative frame a gesture never exists in a single instance, but in a potential state.
The performers in this frame begin to understand themselves in terms of how automated indexing processes reconfigure their performance. To the extent that gestures are complicit with this mode of databasing, they can be seen as votive toward the algorithmic. The practice section focuses on the poetics of this position. Chapter One focuses on cinéma vérité as a moment in which the relationship between production and post-production shifted as a function of more agile recording technology, allowing the participant observer to enter the frame. This shift becomes a lens through which to look at changes that AI might bring. Chapter Two treats the work of Pierre Huyghe as a 'liminal phase' in which a new relationship between production and post-production is explored. Finally, Chapter Three looks at a film in which actors perform with the awareness that the footage will be processed by an algorithmic edit. The conclusion looks at how this way of relating to AI (especially commercial AI) through embodied performance could foster a more critical relationship to the proliferating black-boxed modes of production.
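The "algorithmic editing" discussed above can be made concrete with a toy example. This is a minimal sketch, not any system from the thesis: it queries a hypothetical clip database by tag and assembles a timeline ordered by a scoring criterion; the clip metadata and the scoring rule are invented for illustration.

```python
# Toy algorithmic editor: query a clip database and assemble a timeline (illustrative only).
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    tags: set          # labels a computer-vision pass might have produced
    duration_s: float
    score: float       # e.g., a predicted per-user relevance

def assemble_timeline(clips, query_tags, max_length_s):
    """Pick matching clips, highest score first, until the cut reaches max length."""
    matches = sorted(
        (c for c in clips if c.tags & query_tags),
        key=lambda c: c.score,
        reverse=True,
    )
    timeline, total = [], 0.0
    for clip in matches:
        if total + clip.duration_s > max_length_s:
            continue
        timeline.append(clip)
        total += clip.duration_s
    return timeline

clips = [
    Clip("a.mp4", {"face", "smile"}, 4.0, 0.9),
    Clip("b.mp4", {"landscape"}, 6.0, 0.7),
    Clip("c.mp4", {"face"}, 5.0, 0.4),
]
cut = assemble_timeline(clips, {"face"}, max_length_s=8.0)
print([c.path for c in cut])   # -> ['a.mp4']; 'c.mp4' would overflow the length budget
```

The "variable criteria" the abstract mentions live entirely in the query tags and the scoring rule; swapping either changes the film that gets made, which is precisely the opacity the thesis interrogates.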
APA, Harvard, Vancouver, ISO, and other styles
32

Krishnan, Sherly Rishi, Mengwei Guo, and Guanting Liu. "Worldspace Heatmaps." Thesis, Uppsala universitet, Institutionen för speldesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448212.

Full text
Abstract:
Many games are set in 3D worlds and have shifting camera viewpoints. In this study, we create and evaluate a proof-of-concept Worldspace Heatmap System that accounts for shifting camera views in 3D game worlds, with the aim of improving user-testing processes. We test the system by conducting a stimulated-recall user study, in which we examine the areas in a game that drew the participants' attention, with the help of heatmaps placed in the game world. Our results include observations of several behavior patterns and participant evaluations of the Worldspace Heatmap System. The data we gathered gave multiple indications that such a system can be useful for obtaining player-behavior insights and for enhancing user-testing processes, especially if some of the limitations are overcome.
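A world-space heatmap differs from a screen-space one in that attention is accumulated at 3D world coordinates rather than pixel positions, so it survives camera movement. Below is a minimal sketch of that core idea, assuming some engine-side ray cast from the camera returns the world point being looked at each frame; the grid resolution and the per-frame hit data are hypothetical.

```python
# World-space attention heatmap: bin view hits by 3D world cell, not screen pixels.
from collections import defaultdict

CELL_SIZE = 1.0   # metres per heatmap cell (assumed)

heat = defaultdict(float)

def cell_of(x: float, y: float, z: float):
    """Quantize a world position into a grid-cell key."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE), int(z // CELL_SIZE))

def record_view_hit(world_point, dt: float):
    """Accumulate dwell time at the world point the camera ray hit this frame."""
    heat[cell_of(*world_point)] += dt

# Per frame the engine would supply: hit = raycast_from_camera()  # hypothetical helper
for hit, dt in [((2.3, 0.0, 5.1), 0.016), ((2.4, 0.0, 5.0), 0.016), ((9.0, 1.0, 1.0), 0.016)]:
    record_view_hit(hit, dt)

hottest = max(heat.items(), key=lambda kv: kv[1])
print("hottest cell:", hottest)   # cells stay fixed even as the camera moves
```

Because the bins are anchored in the world, two participants looking at the same object from different camera angles contribute to the same cell, which is what makes the technique robust to shifting viewpoints.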
APA, Harvard, Vancouver, ISO, and other styles
33

Prantl, Daniel, and Christopher Wallbaum. "The analytical short film in teacher education: Report of an accompanying research study in university teaching." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34631.

Full text
Abstract:
This chapter presents the application of the Analytical Short Film method in teacher-education seminars and the main results of an accompanying research study. Central findings indicate that using the method increases the students' ability to reason on a scientific basis and improves their level of reflection (Roters 2012).
APA, Harvard, Vancouver, ISO, and other styles
34

Höschel, Friederike. "“Doing Gender” in the music classroom: Analytical short film (ASF) about “Doing Gender”-processes in the Bavaria-Lesson." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34639.

Full text
Abstract:
The chapter shows the phenomenon of "Doing Gender" taking place in a part of the Bavaria-Lesson. What is more, it shows that boys are "doing girl" and girls are "doing boy". The chapter does not offer explicit implications for music educators, but presents an Analytical Short Film (ASF) serving as evidence.
APA, Harvard, Vancouver, ISO, and other styles
35

Silva, Ricardo Moutinho da [UNESP]. "Estudo do comportamento do eletrodo de vidro combinado em etanol anidro e misturas etanol-água." Universidade Estadual Paulista (UNESP), 2009. http://hdl.handle.net/11449/97794.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Universidade Estadual Paulista (UNESP)
Studies of the behavior of a combined glass electrode in anhydrous ethanol and ethanol-water mixtures showed that 60 s is sufficient for pH measurements, since from that time on the readings tend to stabilize. For all ethanol-water mixtures, pH measurements in alkaline medium behaved very differently from those in acidic medium, and the ethanol-water composition had a significant influence on the measured values; the addition of an electrolyte contributed to pH measurements with smaller standard deviations. Studies on the determination of correction factors for the glass electrode gave results close to the predicted ones. However, in the pH region between 6 and 9, the factors obtained cannot be applied to real samples of fuel ethanol. For application to real fuel-ethanol samples, an alternative method is proposed using correction curves that showed linear relationships with correlation coefficients of 0.99 for mixtures containing 0.3, 5 and 10% m/m water in ethanol, proportions close to those of anhydrous and hydrated fuel ethanol.
APA, Harvard, Vancouver, ISO, and other styles
36

Souza, Vinícius Nunes Rocha e. "Análise da imagem visual em videogames." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/149358.

Full text
Abstract:
For thousands of years, visual images have significantly affected the daily life of human beings, serving as a powerful means of communication and expression. With technological advances, significant transformations of visual language become evident as it molds itself to the new contexts in which it appears. Video games, digital artifacts widespread in society that allow the user's immersion in playful, interactive environments, are the target of increasingly sophisticated aesthetic projects. Since they use a predominantly visual language, the premise is that the image plays a fundamental role in allowing them to fulfill their function properly. However, images in video games do not always follow a standard of quality, and studies and methods that support their development and understanding are lacking. Thus, this study aims to develop a method for analyzing the visual image in video games, considering the wide range of functions it performs in artifacts of this nature. To allow the development of the method and ensure its replicability, certain methodological procedures were defined, involving: the creation and evaluation of a first model of the method; the development of a second model; the collection and analysis of data involving expert research subjects in the field; and the development of a final model. As a result, it can be seen that the analysis of visual images in video games can be carried out with a systematic method; however, numerous caveats and considerations were raised regarding how the method could become more efficient.
APA, Harvard, Vancouver, ISO, and other styles
37

Riedel, Jana, Susan Berthold, Marlen Dubrau, and Kathrin Möbius. "Flexibilität und Vielseitigkeit mit digitalen Lehr- und Lernmaterialien erhöhen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-217402.

Full text
Abstract:
This brochure is part of a publication series that offers an overview of various media formats, ranging from digital texts and electronic tests to wikis and digital simulations. This issue focuses on the provision and preparation of materials that can be made available to students for individual and flexible learning. Drawing on the results of a 2016 online survey and on interviews presenting examples from the teaching practice of Saxon university lecturers, it shows which scenarios are currently in use at Saxon universities; these offer inspiration for developing one's own media-supported teaching concepts. Pointers to tools for creating digital teaching materials and answers to frequent questions about the use of the individual media formats provide suggestions and information on how to get started with digitally supported teaching with as little initial effort as possible. Answers to frequently asked questions, practical tips and legal notes offer initial orientation and confidence in the use of digital media. You will also learn how to combine the individual media-supported formats with classic face-to-face teaching and how different usage scenarios can be combined with one another.
APA, Harvard, Vancouver, ISO, and other styles
38

Riedel, Jana, Susan Berthold, Marlen Dubrau, and Kathrin Möbius. "Flexibilität und Vielseitigkeit mit digitalen Lehr- und Lernmaterialien erhöhen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-231688.

Full text
Abstract:
This brochure is part of a publication series that offers an overview of various media formats, ranging from digital texts and electronic tests to wikis and digital simulations. This issue focuses on the provision and preparation of materials that can be made available to students for individual and flexible learning. Drawing on the results of a 2016 online survey and on interviews presenting examples from the teaching practice of Saxon university lecturers, it shows which scenarios are currently in use at Saxon universities; these offer inspiration for developing one's own media-supported teaching concepts. Pointers to tools for creating digital teaching materials and answers to frequent questions about the use of the individual media formats provide suggestions and information on how to get started with digitally supported teaching with as little initial effort as possible. Answers to frequently asked questions, practical tips and legal notes offer initial orientation and confidence in the use of digital media. You will also learn how to combine the individual media-supported formats with classic face-to-face teaching and how different usage scenarios can be combined with one another.
APA, Harvard, Vancouver, ISO, and other styles
39

Riedel, Jana, Marlen Dubrau, Kathrin Möbius, and Susan Berthold. "Digitales Lehren & Lernen in der Hochschule." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-217606.

Full text
Abstract:
Dear teachers, dear readers: digitalization affects more and more areas of everyday life, and university teaching at the Saxon universities is especially affected by it. This is not a recent development: in 2001 the joint project Bildungsportal Sachsen was launched, and it continues to this day as a cross-university strategic initiative together with the associated E-Learning working group of the state rectors' conference. One result of this initiative is the Bildungsportal Sachsen GmbH, overseen by the universities, which provides a solid technical infrastructure at most Saxon universities with the learning management system OPAL, the test suite ONYX and the video service Magma. For 15 years the SMWK has also supported the development of teaching and learning with digital media at the Saxon universities financially, and it will continue to do so in the coming years. Numerous small projects and, by now, cross-university project networks have contributed over the years to the technological and didactic advancement of digitally supported university teaching. Dear teachers, I would like to encourage you to open yourselves to the new requirements and developments in teaching and to use the possibilities that digitalization already offers as support in your daily work. Digital education can only succeed with you, who will stand at the centre of imparting knowledge to the next generation. That digitalization is not merely a short-term phenomenon is also conveyed by the strategies of the Standing Conference of the Ministers of Education and Cultural Affairs, "Bildung in der digitalen Welt", and of the Federal Ministry of Education and Research, "Bildungsoffensive für die digitale Wissensgesellschaft". I wish you many interesting and stimulating insights as you read, every success for the future, motivated students, and a varied delivery of your teaching material. Dr. Eva-Maria Stange, Saxon State Minister for Science and the Arts
APA, Harvard, Vancouver, ISO, and other styles
40

Riedel, Jana, Susan Berthold, Marlen Dubrau, and Kathrin Möbius. "Flexibilität und Vielseitigkeit mit digitalen Lehr- und Lernmaterialien erhöhen." Technische Universität Dresden, 2016. https://tud.qucosa.de/id/qucosa%3A30111.

Full text
Abstract:
This brochure is part of a publication series that offers an overview of various media formats, ranging from digital texts and electronic tests to wikis and digital simulations. This issue focuses on the provision and preparation of materials that can be made available to students for individual and flexible learning. Drawing on the results of a 2016 online survey and on interviews presenting examples from the teaching practice of Saxon university lecturers, it shows which scenarios are currently in use at Saxon universities; these offer inspiration for developing one's own media-supported teaching concepts. Pointers to tools for creating digital teaching materials and answers to frequent questions about the use of the individual media formats provide suggestions and information on how to get started with digitally supported teaching with as little initial effort as possible. Answers to frequently asked questions, practical tips and legal notes offer initial orientation and confidence in the use of digital media. You will also learn how to combine the individual media-supported formats with classic face-to-face teaching and how different usage scenarios can be combined with one another. Contents: Greeting; Digital media for a new teaching and learning culture; Where should I start?; Texts, presentations, graphics, images; Trend: open educational resources (OER); Films, video and audio files; Trend: the flipped classroom model; Digital simulations and business games; Trend: massive open online courses (MOOCs); Trend: open badges; Trend: learning analytics; Support, services, contact.
APA, Harvard, Vancouver, ISO, and other styles
41

Riedel, Jana, Marlen Dubrau, Kathrin Möbius, and Susan Berthold. "Digitales Lehren & Lernen in der Hochschule." Technische Universität Dresden, 2016. https://tud.qucosa.de/id/qucosa%3A30123.

Full text
Abstract:
Dear teachers, dear readers: digitalization affects more and more areas of everyday life, and university teaching at the Saxon universities is especially affected by it. This is not a recent development: in 2001 the joint project Bildungsportal Sachsen was launched, and it continues to this day as a cross-university strategic initiative together with the associated E-Learning working group of the state rectors' conference. One result of this initiative is the Bildungsportal Sachsen GmbH, overseen by the universities, which provides a solid technical infrastructure at most Saxon universities with the learning management system OPAL, the test suite ONYX and the video service Magma. For 15 years the SMWK has also supported the development of teaching and learning with digital media at the Saxon universities financially, and it will continue to do so in the coming years. Numerous small projects and, by now, cross-university project networks have contributed over the years to the technological and didactic advancement of digitally supported university teaching. Dear teachers, I would like to encourage you to open yourselves to the new requirements and developments in teaching and to use the possibilities that digitalization already offers as support in your daily work. Digital education can only succeed with you, who will stand at the centre of imparting knowledge to the next generation. That digitalization is not merely a short-term phenomenon is also conveyed by the strategies of the Standing Conference of the Ministers of Education and Cultural Affairs, "Bildung in der digitalen Welt", and of the Federal Ministry of Education and Research, "Bildungsoffensive für die digitale Wissensgesellschaft". I wish you many interesting and stimulating insights as you read, every success for the future, motivated students, and a varied delivery of your teaching material. Dr. Eva-Maria Stange, Saxon State Minister for Science and the Arts
APA, Harvard, Vancouver, ISO, and other styles
42

Zandén, Olle. "Enacted possibilities for learning in goals- and results-based music teaching." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34628.

Full text
Abstract:
In this chapter, enacted possibilities for learning in a Scottish and a Swedish music lesson are analysed and compared with the intended learning outcomes as defined in the Swedish national curriculum. The Scotland-Lesson proves to place more emphasis on music's auditive aspects, while the Sweden-Lesson focuses on playing as an individual manual skill.
APA, Harvard, Vancouver, ISO, and other styles
43

Wallbaum, Christopher. "RED – A supposedly universal quality as the core of music education." Georg Olms Verlag, 2018. https://slub.qucosa.de/id/qucosa%3A34616.

Full text
Abstract:
The chapter consists of two sections complementing Analytical Short Films. The first is about a supposedly universal atmosphere called RED in the Bavaria-Lesson; the second is about different cultures in voice and posture coming together in the Beijing-Lesson. Both are related to theory as well as to German philosophies of music education.
APA, Harvard, Vancouver, ISO, and other styles
44

Iwazaki, Alexandra. "Estudo da influência da força iônica na medição de H+ utilizando eletrodo de membrana de vidro." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/46/46133/tde-21092018-095836/.

Full text
Abstract:
This work shows the influence of ionic strength on potentiometric pH measurements of aqueous solutions using a glass-membrane electrode. The size and charge of the ions are shown to be important factors. A kind of "memory" effect appears during the pH readings; it was overcome by multipoint calibration. Proposals to correct the influence of ionic strength were studied from both experimental and theoretical standpoints. Cheng's theory of the mechanism of the glass membrane acting as a chemical capacitor was confirmed by studies in which the applied electric field was varied. The concept of the pH quantity is also discussed.
APA, Harvard, Vancouver, ISO, and other styles
45

Santos, Mauro Sergio Ferreira. "Eletroforese capilar com derivatização eletroquímica de compostos neutros: novas aplicações, otimização e miniaturização do sistema em fluxo EC-CE-C4D." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/46/46136/tde-11042017-071834/.

Full text
Abstract:
The direct coupling of an electrochemical cell (EC) with the inlet of a capillary electrophoresis (CE) instrument, although recent, has allowed the determination of radical anions; the electrochemical preconcentration of traces of heavy metals, followed by stripping, injection, separation and detection; and the monitoring of charged species generated by the electrocatalytic oxidation of neutral molecules such as primary alcohols and glycerol. Employing the EC-CE-C4D system developed by our group, the simultaneous determination of cations, anions (in the counter-EOF mode) and neutral species (detected after electrochemical derivatization) was demonstrated for the first time, with a mouthwash (Listerine® Tartar Control) as a real sample. Although constant and reproducible, the conversion of primary alcohols into the corresponding carboxylates had a relatively low yield (~16%) under the previously adopted conditions: 1.6 V vs. Ag/AgCl (3 mol L-1 KCl) on a platinum electrode in acidic medium (5 mmol L-1 HNO3 / 1 mmol L-1 HCl). Thus, the oxidation of straight-chain primary alcohols (C2-C5) was evaluated on different electrode materials (gold and platinum) in different media (acidic, neutral and alkaline). The carboxylates generated were monitored by injecting an aliquot of the derivatized sample into the capillary (50 µm i.d., 45 cm long, 20 cm to the detector) by applying 5 kPa for 5 s; separations were carried out by applying 30 kV between the ends of the capillary filled with 30 mmol L-1 Tris / 10 mmol L-1 HCl as BGE. The results obtained with the EC-CE-C4D system showed higher conversion of the alcohols into the corresponding carboxylic acids in acidic medium, on both gold and platinum. In addition, on the gold electrode the formation of carboxylates showed a selectivity not observed on platinum, favoring the conversion of the shorter-chain alcohols. In another line of work, to meet current needs for methodologies to monitor the electrooxidation of glycerol in electrochemical reactors, a method was developed for the simultaneous determination of glycerol and some of its possible neutral oxidation products, such as glyceraldehyde and dihydroxyacetone, by exploiting the formation of charged borate complexes (provided by a BGE composed of 60 mmol L-1 H3BO3 / 30 mmol L-1 LiOH), together with the ionizable products (carboxylic acids) commonly analyzed by CE. The CE instrument used, fitted with two C4D detectors, also allowed the evaluation of the interaction of some carboxylic acids with the EOF modifiers Polybrene® and CTAB, using 30 mmol L-1 MES / 30 mmol L-1 His as BGE. Following the current trend toward miniaturization of analytical systems, the construction of a miniaturized EC-CE-C4D system was evaluated. For this, a new method for fabricating glass microdevices, based on paraffin-assisted CO2 laser ablation, was developed as an alternative to costly wet-etching methods. The devices obtained by this method presented channels with a semicircular profile, and their dimensions could be controlled by varying the laser power and/or ablation speed. However, given the remaining challenges in building a complete EC-CE-C4D system on a glass substrate by CO2 laser ablation, the miniaturization of the EC-CE-C4D system was begun with a hybrid approach that takes advantage of the better-defined and more favorable characteristics of the fused-silica capillary tubes used in conventional CE.
This system allowed the quantitative determination of methanol in the presence of a high concentration of ethanol, taking advantage of the higher yield of short-chain carboxylic acid formation on gold in acidic medium. As a first application, the amounts of methanol and ethanol were monitored in the initial fractions collected during the fractional distillation process in the production of lab-made corn whiskey (moonshine). The conditions that gave the best results with the hybrid EC-CE-C4D system comprised a 100-fold dilution of the sample in 2 mmol L-1 HNO3, electrooxidation at 1.4 V vs. Ag for 60 s, electrokinetic injection into the capillary by applying 3 kV for 4 s, and separation of the carboxylates by applying 3 kV between the ends of the capillary (50 µm i.d., 15 cm long, 12 cm to the detector) filled with 10 mmol L-1 CHES / 5 mmol L-1 NaOH as BGE. Analysis of the first distilled fractions of the lab-made moonshine showed an increase in ethanol concentration (from ~80% to ~100%) and a simultaneous decrease in methanol concentration (from 4% to ~0.1%). In short, both the range of applications of electrochemical derivatization hyphenated with capillary electrophoresis and the miniaturization of the analytical instrumentation for EC-CE-C4D were advanced, favoring the dissemination of this powerful combination of three electrochemical techniques.
APA, Harvard, Vancouver, ISO, and other styles
46

Blake, Greyory. "Good Game." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5377.

Full text
Abstract:
This thesis and its corresponding art installation, Lessons from Ziggy, attempts to deconstruct the variables prevalent within several complex systems, analyze their transformations, and propose a methodology for reasserting the soap box within the display pedestal. In this text, there are several key and specific examples of the transformation of various signifiers (i.e. media-bred fear’s transformation into a political tactic of surveillance, contemporary freneticism’s transformation into complacency, and community’s transformation into nationalism as a state weapon). In this essay, all of these concepts are contextualized within the exponential growth of new technologies. That is to say, all of these semiotic developments must be framed within the post-Internet sphere.
APA, Harvard, Vancouver, ISO, and other styles
47

Bai, Yannan. "Video analytics system for surveillance videos." Thesis, 2018. https://hdl.handle.net/2144/30739.

Full text
Abstract:
Developing an intelligent inspection system that can enhance public safety is challenging. An efficient video analytics system can help monitor unusual events and mitigate possible damage or loss. This thesis aims to analyze surveillance video data, report abnormal activities and retrieve the corresponding video clips. The surveillance video dataset used in this thesis is derived from the ALERT Dataset, a collection of surveillance videos at airport security checkpoints. The video analytics system in this thesis can be thought of as a pipelined process: it takes the surveillance video as input and passes it through a series of processing stages such as object detection, multi-object tracking, person-bin association and re-identification. In the end, we obtain trajectories of passengers and baggage in the surveillance videos. Abnormal events, like taking away another's belongings, are detected and trigger an alarm automatically. The system can also retrieve the corresponding video clips based on a user-defined query.
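The pipelined structure described above can be summarized in code. The following is a minimal sketch of such a pipeline's control flow, not the thesis's actual implementation; every stage function here (detect, track, associate, reidentify) is a hypothetical placeholder for the corresponding component.

```python
# Skeleton of a surveillance video-analytics pipeline (stage functions are placeholders).

def run_pipeline(frames, detect, track, associate, reidentify):
    """Push frames through detection, tracking, association and re-id; return alerts."""
    detections = [detect(f) for f in frames]   # per-frame person/baggage boxes
    trajectories = track(detections)           # multi-object tracks with ids
    owner_of = associate(trajectories)         # item id -> owning person id
    holder_of = reidentify(trajectories)       # item id -> person last holding it
    alerts = []
    for item, owner in owner_of.items():
        holder = holder_of.get(item)
        if holder is not None and holder != owner:
            # The abnormal event from the abstract: taking away another's belongings.
            alerts.append(f"ALERT: item {item} taken by {holder}, owned by {owner}")
    return alerts

# Tiny smoke test with stub stages (one item, mismatched holder).
alerts = run_pipeline(
    frames=[0, 1],
    detect=lambda f: [],
    track=lambda d: {},
    associate=lambda t: {"bag7": "personA"},
    reidentify=lambda t: {"bag7": "personB"},
)
print(alerts)   # -> ['ALERT: item bag7 taken by personB, owned by personA']
```

The design point is that the alarm logic sits at the end of the pipeline and only reasons over trajectories and associations; the upstream vision stages can be swapped without touching it.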
APA, Harvard, Vancouver, ISO, and other styles
48

Mitra, Adway. "Bayesian Nonparametric Modeling of Temporal Coherence for Entity-Driven Video Analytics." Thesis, 2015. https://etd.iisc.ac.in/handle/2005/3527.

Full text
Abstract:
In recent times there has been an explosion of online user-generated video content. This has generated significant research interest in video analytics. Human users understand videos based on high-level semantic concepts. However, most of the current research in video analytics are driven by low-level features and descriptors, which often lack semantic interpretation. Existing attempts in semantic video analytics are specialized and require additional resources like movie scripts, which are not available for most user-generated videos. There are no general purpose approaches to understanding videos through semantic concepts. In this thesis we attempt to bridge this gap. We view videos as collections of entities which are semantic visual concepts like the persons in a movie, or cars in a F1 race video. We focus on two fundamental tasks in Video Understanding, namely summarization and scene- discovery. Entity-driven Video Summarization and Entity-driven Scene discovery are important open problems. They are challenging due to the spatio-temporal nature of videos, and also due to lack of apriori information about entities. We use Bayesian nonparametric methods to solve these problems. In the absence of external resources like scripts we utilize fundamental structural properties like temporal coherence in videos- which means that adjacent frames should contain the same set of entities and have similar visual features. There have been no focussed attempts to model this important property. This thesis makes several contributions in Computer Vision and Bayesian nonparametrics by addressing Entity-driven Video Understanding through temporal coherence modeling. Temporal Coherence in videos is observed across its frames at the level of features/descriptors, as also at semantic level. We start with an attempt to model TC at the level of features/descriptors. A tracklet is a spatio-temporal fragment of a video- a set of spatial regions in a short sequence (5-20) of consecutive frames, each of which enclose a particular entity. We attempt to find a representation of tracklets to aid tracking of entities. We explore region descriptors like Covari- ance Matrices of spatial features in individual frames. Due to temporal coherence, such matrices from corresponding spatial regions in successive frames have nearly identical eigenvectors. We utilize this property to model a tracklet using a covariance matrix, and use it for region-based entity tracking. We propose a new method to estimate such a matrix. Our method is found to be much more efficient and effective than alternative covariance-based methods for entity tracking. Next, we move to modeling temporal coherence at a semantic level, with special emphasis on videos of movies and TV-series episodes. Each tracklet is associated with an entity (say a particular person). Spatio-temporally close but non-overlapping tracklets are likely to belong to the same entity, while tracklets that overlap in time can never belong to the same entity. Our aim is to cluster the tracklets based on the entities associated with them, with the goal of discovering the entities in a video along with all their occurrences. We argue that Bayesian Nonparametrics is the most convenient way for this task. We propose a temporally coherent version of Chinese Restaurant Process (TC-CRP) that can encode such constraints easily, and results in discovery of pure clusters of tracklets, and also filter out tracklets resulting from false detections. 
TC-CRP shows excellent performance on person discovery from TV-series videos. We also discuss semantic video summarization based on entity discovery. Next, we consider entity-driven temporal segmentation of a video into scenes, where each scene is characterized by the entities present in it. This is a novel application, as existing work on temporal segmentation has focused on low-level features of frames rather than entities. We propose EntScene, a generative model for videos based on entities and scenes, together with an inference algorithm based on blocked Gibbs sampling for simultaneous entity discovery and scene discovery. We compare it to alternative inference algorithms and show significant improvements in terms of segmentation and scene discovery.

Video representation by a low-rank matrix has gained popularity recently and has been used for various tasks in computer vision. In such a representation, each column corresponds to a frame or a single detection. Such matrices are likely to have contiguous sets of identical columns due to temporal coherence, and hence they should be low-rank. However, we find that none of the existing low-rank matrix recovery algorithms preserve such structures. We study regularizers that encourage these structures in low-rank matrix recovery through convex optimization, but note that TC-CRP-like Bayesian modeling is better at enforcing them.

We then focus on modeling temporal coherence in hierarchically grouped sequential data, such as word tokens grouped into sentences, paragraphs, and documents in a text corpus, with application to multi-layer segmentation. We first make a detailed study of existing models for such data and present a taxonomy for them, called Degree-of-Sharing (DoS), based on how the various mixture components are shared by the groups of data. We propose the Layered Dirichlet Process, which generalizes the Hierarchical Dirichlet Process to multiple layers and can also handle sequential information easily through a Markovian approach. This is applied to hierarchical co-segmentation of a set of news transcripts into broad categories (such as politics and sports) and individual stories. We also propose an explicit-duration (semi-Markov) approach for this purpose, and provide an efficient inference algorithm for it. Finally, we discuss generative processes for distribution matrices, where each column is a probability distribution, with an application to inferring the correct answers to questions on online answering forums from the opinions provided by different users.
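The low-rank observation in the abstract is easy to check numerically. The snippet below is an illustrative sketch, not taken from the thesis: it builds a frame matrix whose contiguous blocks of identical columns mimic temporal coherence, verifies the resulting low rank, and defines a fused-lasso-style penalty on adjacent column differences of the kind such convex regularizers might use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frame matrix: three 'shots', each a contiguous block of identical columns.
shots = [rng.standard_normal(64) for _ in range(3)]
X = np.column_stack([shots[0]] * 10 + [shots[1]] * 15 + [shots[2]] * 8)

print(np.linalg.matrix_rank(X))   # 3: rank equals the number of distinct shots

def temporal_penalty(M):
    """Sum of l2 norms of adjacent column differences. Within-shot
    differences are zero, so only shot boundaries contribute; adding
    this term to a nuclear-norm objective favours piecewise-constant,
    temporally coherent columns."""
    return np.sum(np.linalg.norm(np.diff(M, axis=1), axis=0))

print(temporal_penalty(X))        # only the two shot boundaries contribute
```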
APA, Harvard, Vancouver, ISO, and other styles
50

Chada, Sharath. "Human Action Recognition in Videos Using Intermediate Matching Kernel." Thesis, 2014. http://raiith.iith.ac.in/657/1/CS12M1001.pdf.

Full text
Abstract:
Human action recognition can be considered as the process of labelling videos with the corresponding action labels, and it has become an important area of research in computer vision and video sensing. Many factors, such as the recording environment, intra-class and inter-class variations, realistic action ambiguities, and the varying lengths of actions in videos, make this problem challenging. Videos containing human actions can be considered varying-length patterns, because the actions in videos may last for different durations. This thesis addresses the issue of varying-length patterns. To solve it, a paradigm of building an intermediate matching kernel as a dynamic kernel is used, so that the similarity between patterns of varying length can be obtained. The idea of the intermediate matching kernel is to use a generative model as a reference and obtain the similarity between videos through it. A video is a sequence of frames that can be represented as a sequence of feature vectors, so a hidden Markov model (HMM) is used as the generative model, since it captures the stochastic sequential information. The complete idea of this thesis can thus be described as building intermediate matching kernels with an HMM as the generative model, over which an SVM is used as a discriminative model for classifying actions based on the computed kernels. The approach is evaluated on standard datasets such as KTH, UCF50, and HMDB51.
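A minimal sketch of the intermediate matching kernel idea under stated assumptions: a Gaussian mixture stands in for the thesis's HMM as the reference generative model, and the synthetic data, parameter values, and helper names are all illustrative, not from the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def imk(seq_a, seq_b, gmm, gamma=0.1):
    """Intermediate matching kernel between two variable-length sequences
    of frame features: for every mixture component, pick the frame from
    each video that the component explains best, then accumulate a
    Gaussian base kernel between the two selected frames."""
    ra = gmm.predict_proba(seq_a)   # (len_a, n_components) responsibilities
    rb = gmm.predict_proba(seq_b)
    k = 0.0
    for c in range(gmm.n_components):
        xa = seq_a[np.argmax(ra[:, c])]   # frame best explained by component c
        xb = seq_b[np.argmax(rb[:, c])]
        k += np.exp(-gamma * np.sum((xa - xb) ** 2))
    return k

# Illustrative pipeline on synthetic data: two 'actions', varying lengths.
rng = np.random.default_rng(1)
videos = [rng.standard_normal((int(rng.integers(20, 60)), 16)) + (i % 2)
          for i in range(12)]
labels = [i % 2 for i in range(12)]

# Fit the reference generative model on pooled frames, build the kernel
# matrix, and train an SVM on the precomputed kernel.
gmm = GaussianMixture(n_components=4, random_state=0).fit(np.vstack(videos))
K = np.array([[imk(a, b, gmm) for b in videos] for a in videos])
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.score(K, labels))
```

The key property the sketch demonstrates is that the kernel value is well defined regardless of sequence length, since matching is mediated by the fixed set of generative-model components rather than by frame-to-frame alignment.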
APA, Harvard, Vancouver, ISO, and other styles
