Dissertations / Theses on the topic 'Visual optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Visual optimization.'


1

Timm, Richard W. (Richard William). "Visual-based methods in compliant mechanism optimization." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35649.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006.
Includes bibliographical references (p. 103-105).
The purpose of this research is to generate visual-based methods for optimizing compliant mechanisms (CMs). Visual-based optimization methods use graphical representations (3-D plots) of CM performance to convey design information. They have many advantages over traditional optimization methods, such as enabling judgment-based design tradeoffs and ensuring robustness of optimized solutions. This research fulfilled the primary aims of determining (1) how to best convey decision-driving design information, and (2) how to interpret and analyze the results of a visual-based optimization method. Other useful tools resulting from this work are (3) a nondimensional model of a CM (a compliant four-bar mechanism) that may be used to maximize the information density of optimization plots, and (4) a new model of a compliant beam that establishes a link between beam stiffness and instant center location. This work presents designers with an optimization tool that may either be used to augment or replace current optimization methods.
by Richard W. Timm.
S.M.
2

Ahmad, Naeem. "Modelling and optimization of sky surveillance visual sensor network." Licentiate thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17123.

Full text
Abstract:
A Visual Sensor Network (VSN) is a distributed system of a large number of camera sensor nodes. The main components of a camera sensor node are an image sensor, an embedded processor, a wireless transceiver and an energy supply. The major difference between a VSN and an ordinary sensor network is that a VSN generates two-dimensional data in the form of an image, which can be exploited in many useful applications. Potential application examples of VSNs include environment monitoring, surveillance, structural monitoring, traffic monitoring, and industrial automation. However, VSNs also raise new challenges. They generate large amounts of data which require higher processing power, larger bandwidth and more energy, but the main constraint is that the VSN nodes are limited in these resources. This research focuses on the development of a VSN model to track large birds, such as the Golden Eagle, in the sky. The model explores a number of camera sensors along with optics, such as a lens of suitable focal length, which ensures a minimum required resolution of a bird flying at the highest altitude. The combination of a camera sensor and a lens formulates a monitoring node. The camera node model is used to optimize the placement of the nodes for full coverage of a given area above a required lower altitude. The model also presents the solution to minimize the cost (number of sensor nodes) to fully cover a given area between the two required extremes, the higher and lower altitudes, in terms of camera sensor, lens focal length, camera node placement and actual number of nodes for sky surveillance. The area covered by a VSN can be increased by increasing the higher monitoring altitude and/or decreasing the lower monitoring altitude. However, this also increases the cost of the VSN. The desirable objective is to increase the covered area but decrease the cost. This objective is achieved by using optimization techniques to design a heterogeneous VSN. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges of altitudes. The sub-ranges of monitoring altitudes are covered by individual sub-VSNs: VSN1 covers the lower sub-range of altitudes, VSN2 covers the next higher sub-range of altitudes, and so on, such that a minimum cost is used to monitor a given area. To verify the concepts developed to design the VSN model, and the optimization techniques to decrease the VSN cost, measurements are performed with actual cameras and optics. Laptop machines are used with the camera nodes as data storage and analysis platforms. The area coverage is measured at the desired lower altitude limits of homogeneous as well as heterogeneous VSNs and verified for 100% coverage. Similarly, the minimum resolution is measured at the desired higher altitude limits of homogeneous as well as heterogeneous VSNs to ensure that the models are able to track the bird at these highest altitudes.
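To make the resolution-versus-coverage tradeoff concrete, here is a minimal pinhole-camera sketch (an illustration under assumed parameter values, not the thesis's actual design data) that computes the focal length needed to keep a bird above a minimum pixel count at the highest altitude, and the ground footprint covered at a lower altitude.

```python
# Minimal pinhole-camera sketch for a sky-surveillance node (illustrative values,
# not the thesis's actual parameters).

def required_focal_length(bird_size_m, altitude_m, min_pixels, pixel_pitch_m):
    """Focal length (m) so a bird of bird_size_m spans at least min_pixels
    on the sensor when seen at altitude_m (simple pinhole projection)."""
    return min_pixels * pixel_pitch_m * altitude_m / bird_size_m

def ground_footprint(sensor_width_m, focal_length_m, altitude_m):
    """Side length (m) of the area imaged on the ground plane at altitude_m."""
    return sensor_width_m * altitude_m / focal_length_m

if __name__ == "__main__":
    pixel_pitch = 3.45e-6              # assumed sensor pixel pitch [m]
    sensor_width = 2448 * pixel_pitch  # assumed 2448-pixel-wide sensor
    bird_wingspan = 2.0                # approximate Golden Eagle wingspan [m]
    f = required_focal_length(bird_wingspan, altitude_m=1000.0,
                              min_pixels=10, pixel_pitch_m=pixel_pitch)
    area_side = ground_footprint(sensor_width, f, altitude_m=300.0)
    print(f"focal length ~ {f*1e3:.1f} mm, footprint at 300 m ~ {area_side:.1f} m")
```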
3

Chung, Ka Kei. "Interactive visual optimization and analysis for RFID system performance /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20CHUNG.

Full text
4

Gisginis, Alexandros. "Production line optimization featuring cobots and visual inspection system." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21752.

Full text
Abstract:
This study examines the automatization potential for two production lines at the Scania Transmission workshop in Södertälje using Industry 4.0 technologies. To this end, the operational performance and safety features of cobots and an Automated Visual Inspection System are theoretically investigated; these technologies are intended to substitute for CNC operators on certain tasks such as the loading of conveyors and quality controls. The purpose of the study is to generate a realistic approach and give insight into the benefits of a future practical implementation. Previous research around these technologies, as well as the actual data recordings and several interviews that took place during on-site visits, is presented. The results show that a significant amount of time can be saved and allocated differently. Based on the findings of the study, a layout for the cobot and AVIS placement is proposed, aiming to give CNC operators better control over the critical parts of the production lines and thus contributing to a much more manageable workflow.
5

Treptow, André. "Optimization techniques for real time visual object detection and tracking." Berlin Logos-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2938420&prov=M&dok_var=1&dok_ext=htm.

Full text
6

Chen, Zhaozhong. "Visual-Inertial SLAM Extrinsic Parameter Calibration Based on Bayesian Optimization." Thesis, University of Colorado at Boulder, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10789260.

Full text
Abstract:

VI-SLAM (Visual-Inertial Simultaneous Localization and Mapping) is a popular approach for robotic navigation and tracking. With the help of sensor fusion of IMU and camera data, VI-SLAM can give a more accurate solution for navigation. One important problem that needs to be solved in VI-SLAM is that we need to know the accurate relative pose between camera and IMU, called the extrinsic parameter. However, our measurement of the rotation and translation between IMU and camera is noisy. If the measurement is slightly off, the result of the SLAM system will drift far from the ground truth after a long run, so optimization is necessary. This thesis uses a global optimization method called Bayesian optimization to optimize the relative pose between IMU and camera based on the sliding-window residual output from VI-SLAM. The advantage of using Bayesian optimization is that we can get an accurate pose estimate between IMU and camera from a large search range. What is more, thanks to the Gaussian process or Student-t process underlying Bayesian optimization, we can get a result with a known uncertainty, which cannot be done by many other optimization solutions.
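As a rough illustration of the idea (a sketch under assumed interfaces, not the thesis code), the following uses a Gaussian-process surrogate with an expected-improvement acquisition to search a small extrinsic-parameter space for the value that minimizes a SLAM sliding-window residual; `sliding_window_residual` is a hypothetical stand-in for the real VI-SLAM evaluation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def sliding_window_residual(x):
    # Hypothetical stand-in: in practice this would run the VI-SLAM sliding window
    # with candidate IMU-camera extrinsic offset x and return its residual norm.
    true_offset = np.array([0.02, -0.01, 0.03])
    return float(np.linalg.norm(x - true_offset) + 0.01 * np.random.randn())

rng = np.random.default_rng(0)
bounds = np.array([[-0.1, 0.1]] * 3)          # search range for a 3-D translation offset [m]
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 3))   # initial random probes
y = np.array([sliding_window_residual(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(30):                            # Bayesian-optimization loop
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 3))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, sliding_window_residual(x_next))

print("estimated extrinsic offset:", X[np.argmin(y)])
```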

7

Verpers, Felix. "Improving a stereo-based visual odometry prototype with global optimization." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-383268.

Full text
Abstract:
In this degree project, global optimization methods for a previously developed software prototype of a stereo odometry system were studied. The existing software estimates the motion between stereo frames and builds up a map of selected stereo frames, which accumulates increasing error over time. The aim of the project was to study methods to mitigate the error accumulated over time in the step-wise motion estimation. One approach based on relative pose estimates and another approach based on reprojection optimization were implemented and evaluated for the existing platform. The results indicate that optimization based on relative keyframe estimates is promising for real-time usage. The second strategy, based on reprojection of stereo-triangulated points, proved useful as a refinement step, but the relatively small error reduction comes at an increased computational cost. Therefore, this approach requires further improvements to become applicable in situations where corrections are needed in real time, and it is hard to justify the increased computations for the relatively small error reduction. The results also show that the global optimization primarily improves the absolute trajectory error.
8

Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.

Full text
Abstract:
Au sein des systèmes avancés d’aide à la conduite (Advanced Driver Assistance Systems - ADAS) pour les systèmes de transport intelligents (Intelligent Transport Systems - ITS), les systèmes de positionnement, ou de localisation, du véhicule jouent un rôle primordial. Le système GPS (Global Positioning System) largement employé ne peut donner seul un résultat précis à cause de facteurs extérieurs comme un environnement contraint ou l’affaiblissement des signaux. Ces erreurs peuvent être en partie corrigées en fusionnant les données GPS avec des informations supplémentaires provenant d'autres capteurs. La multiplication des systèmes d’aide à la conduite disponibles dans les véhicules nécessite de plus en plus de capteurs installés et augmente le volume de données utilisables. Dans ce cadre, nous nous sommes intéressés à la fusion des données provenant de capteurs bas cout pour améliorer le positionnement du véhicule. Parmi ces sources d’information, en parallèle au GPS, nous avons considérés les caméras disponibles sur les véhicules dans le but de faire de l’odométrie visuelle (Visual Odometry - VO), couplée à une carte de l’environnement. Nous avons étudié les caractéristiques de cette trajectoire reconstituée dans le but d’améliorer la qualité du positionnement latéral et longitudinal du véhicule sur la route, et de détecter les changements de voies possibles. Après avoir été fusionnée avec les données GPS, cette trajectoire générée est couplée avec la carte de l’environnement provenant d’Open-StreetMap (OSM). L'erreur de positionnement latérale est réduite en utilisant les informations de distribution de voie fournies par OSM, tandis que le positionnement longitudinal est optimisé avec une correspondance de courbes entre la trajectoire provenant de l’odométrie visuelle et les routes segmentées décrites dans OSM. Pour vérifier la robustesse du système, la méthode a été validée avec des jeux de données KITTI en considérant des données GPS bruitées par des modèles de bruits usuels. Plusieurs méthodes d’odométrie visuelle ont été utilisées pour comparer l’influence de la méthode sur le niveau d'amélioration du résultat après fusion des données. En utilisant la technique d’appariement des courbes que nous proposons, la précision du positionnement connait une amélioration significative, en particulier pour l’erreur longitudinale. Les performances de localisation sont comparables à celles des techniques SLAM (Simultaneous Localization And Mapping), corrigeant l’erreur d’orientation initiale provenant de l’odométrie visuelle. Nous avons ensuite employé la trajectoire provenant de l’odométrie visuelle dans le cadre de la détection de changement de voie. Cette indication est utile dans pour les systèmes de navigation des véhicules. La détection de changement de voie a été réalisée par une somme cumulative et une technique d’ajustement de courbe et obtient de très bon taux de réussite. Des perspectives de recherche sur la stratégie de détection sont proposées pour déterminer la voie initiale du véhicule. En conclusion, les résultats obtenus lors de ces travaux montrent l’intérêt de l’utilisation de la trajectoire provenant de l’odométrie visuelle comme source d’information pour la fusion de données à faible coût pour la localisation des véhicules. Cette source d’information provenant de la caméra est complémentaire aux données d’images traitées qui pourront par ailleurs être utilisées pour les différentes taches visée par les systèmes d’aides à la conduite
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and is susceptible to positioning error due to factors such as constrained environments that result in signal weakening. This problem can be addressed by integrating the GPS data with additional information from other sensors. Meanwhile, vehicles are nowadays equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a solution to localization improvement with low-cost data fusion. From the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess the system's robustness, the method was validated with KITTI datasets tested with different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy achieved significant improvement, especially for the longitudinal error with the curve matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task in lane-change detection, to assist the routing service with lane-level directions in navigation. The lane-change detection was conducted by a CUSUM and curve fitting technique that resulted in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect of utilizing the VO trajectory with information from OSM to improve the performance. In addition to providing the VO trajectory, the camera mounted on the vehicle can also be used for other image processing applications to complement the system. This research will continue to develop, with future work outlined in the last chapter of this thesis.
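For readers unfamiliar with the detection step, here is a minimal CUSUM sketch (an illustrative reconstruction, not the thesis implementation): it accumulates deviations of a VO-derived lateral offset from its reference level and flags a lane change when either one-sided sum exceeds a threshold; the signal, drift `k`, and threshold `h` are assumptions.

```python
import numpy as np

def cusum_lane_change(lateral_offset, k=0.05, h=1.0):
    """Two-sided CUSUM on a lateral-offset signal (metres).
    Returns indices where an upward or downward shift (lane change) is flagged."""
    x = np.asarray(lateral_offset, dtype=float)
    mu = x[0]                     # running reference level
    s_pos = s_neg = 0.0
    alarms = []
    for i, xi in enumerate(x):
        s_pos = max(0.0, s_pos + (xi - mu) - k)   # accumulate positive deviations
        s_neg = max(0.0, s_neg - (xi - mu) - k)   # accumulate negative deviations
        if s_pos > h or s_neg > h:
            alarms.append(i)
            mu = xi               # re-anchor after a detected change
            s_pos = s_neg = 0.0
    return alarms

# Synthetic example: the vehicle drifts about 3.5 m laterally around sample 100.
t = np.arange(200)
offset = 0.1 * np.random.randn(200) + np.where(t > 100, 3.5, 0.0)
print("lane change detected at samples:", cusum_lane_change(offset))
```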
9

Elsidani, Elariss Haifa. "A new visual query language and query optimization for mobile GPS." Thesis, Kingston University, 2008. http://eprints.kingston.ac.uk/20306/.

Full text
Abstract:
In recent years computer applications have been deployed to manage spatial data with Geographic Information Systems (GIS), to store and analyze data related to domains such as transportation and tourism. Recent developments have shown that there is an urgent need to develop systems for mobile devices and particularly for Location Based Services (LBS), such as proximity analysis that helps in finding the nearest neighbors (for example, restaurants) and the facilities located within a circular area around the user's location, known as a buffer area (for example, all restaurants within 100 meters). The mobile market potential is across geographical and cultural boundaries. Hence the visualization of queries becomes important, especially as existing visual query languages have a number of limitations: they are not tailored for mobile GIS and they do not support dynamic complex queries (DCQ) and visual query formation. Thus, the first aim of this research is to develop a new visual query language (IVQL) for mobile GIS that handles static and dynamic complex queries for proximity analysis. IVQL is designed and implemented using smiley icons that visualize operators, values, and objects. The evaluation results reveal that it has expressive power, an easy-to-use user interface, easy query building, and high user satisfaction. There is also a need for new optimization strategies that consider the scale of mobile user queries. Existing query optimization strategies are based on the sharing and push-down paradigms, and they do not cover multiple DCQs (MDCQ) for proximity analysis. This leads to the second aim of this thesis, which is to develop the query melting processor (QMP) that is responsible for processing MDCQs. QMP is based on the new query melting paradigm, which builds on the sharing paradigm and query optimization, and is implemented by a new strategy, "Melting Ruler". Moreover, with the increase in the volume of cost-sensitive mobile users, the need emerges to develop a time-cost optimizer for processing MDCQs. Thus, the third aim of the thesis is to develop a new decision-making mechanism for time-cost optimization (TCOP) and prove its cost effectiveness. TCOP is based on the new paradigm of sharing global execution plans by MDCQs with similar scenarios. The experimental evaluation results, using a case study based on the map of Paris, proved that significant savings in time can be achieved by employing the newly developed strategies.
10

Hernandez, Herrero Sandra. "Cross-layer optimization for visual-inertial localization on resource-constrained devices." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296834.

Full text
Abstract:
Resource-constrained mobile devices, such as drones and rovers, are increasingly expected to support high-performance cyber-physical applications. However, the gap between the hardware limitations of these devices and application requirements is still prohibitive – conflicting goals such as robust, accurate, and efficient execution must be managed carefully to achieve acceptable operation. This thesis focuses on the exploration of the tradeoff between performance and efficiency in such cyber-physical systems, specifically with respect to localization, a core task for any mobile autonomous device. We perform a design space exploration (DSE) given a number of configurable parameters for both the localization algorithm and the platform layers. Given the configuration space, we formulate a cross-layer multi-objective optimization problem to explore the tradeoff between localization accuracy and power consumption. For our experiments we execute maplab – a visual-inertial localization and mapping framework – monolithically on the Nvidia Jetson AGX and NX platforms. We then propose a predictive model for robust execution that can be used to determine desirable configurations at runtime in the face of environmental changes.
Mobila enheter med begränsade resurser, som drönare och rovers, förväntas stödja mer och mer krävande cyberfysiska applikationer. Glappet mellan enheternas begränsningar i hårdvara och applikationskrav är dock fortfarande stort - motstridiga mål som robust, noggrann och effektiv körning måste uppfyllas för att uppnå acceptabel drift. Detta examensarbete undersöker avvägningen mellan prestanda och effektivitet i cyberfysiska system, särskilt med avseende på lokalisering som är en av de viktigaste uppgifterna för alla mobila autonoma enheter. Vi gör en design space exploration (DSE) genom att variera ett antal parametrar för både lokaliseringsalgoritm och plattformslager. Baserat på konfigurationsrummet formulerar vi ett tvärlageroptimeringsproblem med flera mål för att utforska avvägningen mellan noggrannhet i lokaliseringen och energiåtgång. I våra experiment kör vi maplab – ett visuellt tröghetsramverk för lokalisering och kartläggning – på Nvidia Jetson AGXoch NX-plattformarna. Vi presenterar sedan en robust prediktiv modell som kan användas för att välja konfigurationer vid körning i en föränderlig miljö.
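As a small illustration of the accuracy-versus-power tradeoff explored above (a generic sketch, not taken from the thesis; the configuration names and measurements are made up), the following extracts the Pareto-optimal configurations from a set of measured (power, localization-error) pairs.

```python
from typing import List, Tuple

# Each entry: (configuration name, power draw [W], localization error [m]).
# Values are illustrative placeholders, not measurements from the thesis.
measurements: List[Tuple[str, float, float]] = [
    ("low_freq_cpu",   6.0, 0.45),
    ("mid_freq_cpu",   9.5, 0.30),
    ("high_freq_gpu", 21.0, 0.28),
    ("mid_freq_gpu",  14.0, 0.22),
    ("max_everything", 30.0, 0.21),
]

def pareto_front(points):
    """Keep configurations not dominated in both power and error (lower is better)."""
    front = []
    for name, p, e in points:
        dominated = any(p2 <= p and e2 <= e and (p2 < p or e2 < e)
                        for _, p2, e2 in points)
        if not dominated:
            front.append((name, p, e))
    return sorted(front, key=lambda c: c[1])

for name, p, e in pareto_front(measurements):
    print(f"{name}: {p:.1f} W, {e:.2f} m")
```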
11

Tall, Fredrik. "VR Performance for mobile devices : Optimization vs. visual shift against realism." Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64168.

Full text
Abstract:
This report goes through the performance and the limitations of VR on mobile devices for achieving realism. It starts with a theoretical part covering the device and the VR headset, and continues with a practical part where performance is tested to find the relevant cost of pixel changes in an already created indoor office environment. The ultimate goal of VR is the Holodeck from Star Trek, but without the need for the Starship Enterprise as a number cruncher: only a slick pair of sunglasses and the computer everyone already has in their pocket, the smartphone. With phones and technology moving as fast as they are, and components getting smaller and more powerful every year, it is no longer impossible to think that this is achievable. In this thesis we discuss the limitations of realistic VR graphics and their costs on the performance-limited devices found in today's phones. The results show visual effect versus performance in an office environment created for Unity, and cover two questions: Which optimizations in VR create more authentic pixels versus their cost in FPS when a much stricter performance budget has to be considered? Which parameters are important to work with to get closer to a credible recreation of a real-world environment versus the cost in FPS?
Denna rapport kommer att gå igenom prestanda och begränsningar av VR på mobila enheter för att uppnå realism. Det kommer att börja med en teoretisk del som enheten och VR-headsetet, och sedan en praktisk del där prestanda testas för att hitta relevanta kostnader för pixlar ändras i en redan skapad inomhus kontorsmiljö. Det ultimata målet för VR är Holodeck i Star Trek men utan behov av Starship Enterprise som en stor dator. Använda solglasögon och datorn har alla redan har i fickan, smartmobilen. Tekniken i smartmobiler rör sig fort, komponenter blir mindre och kraftfullare varje år. Det är inte ovanligt att de dubblar sin kapacitet från föregående års modell. I denna avhandling diskuteras begränsningarna av realistisk VR-grafik och deras kostnader för prestationsbegränsade enheter som finns i smartmobiler. Resultaten visar visuell effekt kontra prestanda i en kontorsmiljö skapad för Unity, den kommer att täcka... Vilka optimeringar i VR skapar mer autentiska pixlar, jämfört med deras kostnad i FPS när du måste ta hänsyn till en mycket mer limiterad prestationsbudget? Vilka parametrar är viktiga att arbeta med för att komma närmare en trovärdig rekreation av en verklig världsmiljö jämfört med kostnaden i FPS.
12

Lindstrand, Klas, and Axel Simonsson. "Optimization Workflow for Flat Slab Systems : Using Parametric Design with Visual programming." Thesis, KTH, Mekanik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230892.

Full text
Abstract:
The advancement of IT and technology has enabled the development of boundary-breaking tools such as parametric design and visual programming. Structural engineering has the potential to take advantage of this development by implementing visual programming, which in combination with optimization algorithms can explore design proposals. This opens up new possibilities to work closer with architects in the early stages of projects to create bolder architectural and structural designs. The task of the master thesis was to create a workflow using parametric design with visual programming and including an optimization algorithm. In the workflow, an optimization process should perform structural analysis and optimization operations to find suboptimal flat slab system designs. The idea was that the workflow should be implemented in the early stages of the structural design process, where an architectural model is used as a boundary to generate suboptimal flat slab systems based on user input. Thereafter, the different generated solutions need to be evaluated and verified by an engineer before proceeding further to the final design. The result obtained from the workflow was that an optimized flat slab system with column placements could be created through an optimization process with input data including geometry, loads and element properties. This led to an approach which exploited the capabilities of using parametric design and visual programming for structural design. This meant that the user could alter the optimization process to narrow down the generated solutions and find the optimal flat slab system based on the requirements of the project. The results of the structural analysis in the workflow were not fully satisfactory, meaning they could not be used for final design without verification. The conclusion was that parametric design in combination with visual programming and optimization algorithms can generate multiple alternative designs. These alternatives could be used as inspiration for engineers to create new structural solutions in the early stages.
Framsteg inom IT och teknologi har möjliggjort utveckling av banbrytande verktyg som parametrisk design med visuell programmering. Konstruktörer har möjligheten att utnyttja denna utveckling genom att implementera visuell programmering, vilket i kombination med optimeringsalgoritmer kan generera alternativa konstruktionslösningar. Detta teknikskifte möjliggör ett närmare samarbete med arkitekter i tidiga skeden vilket kan resultera i mer vågade konstruktioner och arkitektur. Syftet med examensarbetet var att skapa ett arbetsflöde som utnyttjade parametrisk design och optimering i en visuell programmeringsmiljö som kunde utföra strukturanalys och optimering, vilket genererade optimala pelardäck med oväntade pelarplaceringar. Idén med detta var att arbetsflödet kunde implementeras i tidiga skeden med arkitekter, när den kan användas för att generera optimala pelardäck baserade på användarens indata. Därefter behöver de genererade lösningarna utvärderas och verifieras av en ingenjör, innan man fortsätter till nästa skede. Resultatet från arbetsflödet är att ett optimerat pelardäck med oväntade pelarplaceringar skapas genom en optimeringsprocess med indata innehållande geometri, laster, randvillkor och materialegenskaper. Detta arbetsflöde leder till ett angreppssätt som utnyttjar möjligheterna med parametrisk design och visuell programmering. Detta innebär att användaren kan påverka optimeringsprocessen för att smalna av resultatet för att hitta optimerade pelardäck baserade på projektets krav. Resultaten från strukturanalysen i arbetsflödet är inte helt tillförlitliga, vilket innebär att resultaten behöver verifieras. Sammanfattningsvis kan parametrisk design i kombination med visuell programmering och optimeringsalgoritmer skapa en mångfald av lösningar. Dessa alternativ kan inspirera ingenjörer att skapa nya konstruktionslösningar i tidiga skeden.
13

Leeds, Daniel Demeny. "Searching for the Visual Components of Object Perception." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/313.

Full text
Abstract:
The nature of visual properties used for object perception in mid- and high-level vision areas of the brain is poorly understood. Past studies have employed simplistic stimuli probing models limited in descriptive power and mathematical underpinnings. Unfortunately, the pursuit of more complex stimuli and properties requires searching through a wide, unknown space of models and of images. The difficulty of this pursuit is exacerbated in brain research by the limited number of stimulus responses that can be collected for a given human subject over the course of an experiment. To more quickly identify complex visual features underlying cortical object perception, I develop, test, and use a novel method in which stimuli for use in the ongoing study are selected in realtime based on fMRI-measured cortical responses to recently selected and displayed stimuli. A variation of the simplex method controls this ongoing selection as part of a search in visual space for images producing maximal activity — measured in realtime — in a pre-determined 1 cm³ brain region. I probe cortical selectivities during this search using photographs of real-world objects and synthetic "Fribble" objects. Real-world objects are used to understand perception of naturally occurring visual properties. These objects are characterized based on feature descriptors computed from the scale invariant feature transform (SIFT), a popular computer vision method that is well established in its utility for aiding computer object recognition and that I recently found to account for intermediate-level representations in the visual object processing pathway in the brain. Fribble objects are used to study object perception in an arena in which visual properties are well defined a priori. They are constructed from multiple well-defined shapes, and variation of each of these component shapes produces a clear space of visual stimuli. I study the behavior of my novel realtime fMRI search method to assess its value in the investigation of cortical visual perception, and I study the complex visual properties my method identifies as highly activating selected brain regions in the visual object processing pathway. While there remain further technical and biological challenges to overcome, my method uncovers reliable and interesting cortical properties for most subjects — though only for selected searches performed for each subject. I identify brain regions selective for holistic and component object shapes and for varying surface properties, providing examples of more precise selectivities within classes of visual properties previously associated with cortical object representation. I also find examples of "surround suppression," in which cortical activity is inhibited upon viewing stimuli that deviate slightly from the visual properties preferred by a brain region, expanding on similar observations at lower levels of vision.
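To sketch the search mechanic (a toy reconstruction under assumed interfaces, not the dissertation's code), the following drives a Nelder-Mead simplex over a low-dimensional stimulus parameterization, where `measure_roi_response` is a hypothetical stand-in for displaying the stimulus and reading back the fMRI response of the chosen region.

```python
import numpy as np
from scipy.optimize import minimize

def measure_roi_response(stim_params):
    # Hypothetical stand-in: in the real experiment this would render/display the
    # stimulus described by stim_params and return the ROI's fMRI response.
    preferred = np.array([0.3, -0.7, 0.1, 0.5])
    return np.exp(-np.sum((np.asarray(stim_params) - preferred) ** 2))

def negative_response(stim_params):
    # Nelder-Mead minimizes, so we negate the measured activation.
    return -measure_roi_response(stim_params)

x0 = np.zeros(4)                       # starting point in a 4-D stimulus space
result = minimize(negative_response, x0, method="Nelder-Mead",
                  options={"maxfev": 60, "xatol": 0.05, "fatol": 1e-3})
print("most activating stimulus parameters:", np.round(result.x, 2))
```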
14

Gembler, Felix [Verfasser]. "Parameter Optimization for Brain-Computer Interfaces based on Visual Evoked Potentials / Felix Gembler." Bielefeld : Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1222672227/34.

Full text
15

Boisard, Olivier. "Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS016/document.

Full text
Abstract:
L'industrie a des besoins croissants en systèmes dits intelligents, capables d'analyser les signaux acquis par des capteurs et de prendre une décision en conséquence. Ces systèmes sont particulièrement utiles pour des applications de vidéo-surveillance ou de contrôle de qualité. Pour des questions de coût et de consommation d'énergie, il est souhaitable que la prise de décision ait lieu au plus près du capteur. Pour répondre à cette problématique, une approche prometteuse est d'utiliser des méthodes dites bio-inspirées, qui consistent en l'application de modèles computationnels issus de la biologie ou des sciences cognitives à des problèmes industriels. Les travaux menés au cours de ce doctorat ont consisté à choisir des méthodes d'extraction de caractéristiques bio-inspirées, et à les optimiser dans le but de les implanter sur des plateformes matérielles dédiées pour des applications en vision par ordinateur. Tout d'abord, nous proposons un algorithme générique pouvant être utilisé dans différents cas d'utilisation, ayant une complexité acceptable et une faible empreinte mémoire. Ensuite, nous proposons des optimisations pour une méthode plus générale, basées essentiellement sur une simplification du codage des données, ainsi qu'une implantation matérielle basée sur ces optimisations. Ces deux contributions peuvent par ailleurs s'appliquer à bien d'autres méthodes que celles étudiées dans ce document.
Industry has growing needs for so-called "intelligent systems", capable not only of acquiring data, but also of analysing it and making decisions accordingly. Such systems are particularly useful for video-surveillance, in which case alarms must be raised in case of an intrusion. For cost saving and power consumption reasons, it is better to perform that processing as close to the sensor as possible. To address that issue, a promising approach is to use bio-inspired frameworks, which consist in applying computational biology models to industrial applications. The work carried out during this thesis consisted in selecting bio-inspired feature extraction frameworks and optimizing them with the aim of implementing them on a dedicated hardware platform, for computer vision applications. First, we propose a generic algorithm, which may be used in several use-case scenarios, having an acceptable complexity and a low memory footprint. Then, we propose optimizations for a more general framework, based on precision degradation in computations, hence easing its implementation on embedded systems. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we proposed for the more complex framework are fully compatible with other optimizations from the literature, and provide encouraging perspectives for future developments. Finally, both contributions have a scope that goes beyond the sole frameworks that we studied, and may be used in other, more widely used frameworks as well.
16

Johnander, Joakim. "Visual Tracking with Deformable Continuous Convolution Operators." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138597.

Full text
Abstract:
Visual Object Tracking is the computer vision problem of estimating a target trajectory in a video given only its initial state. A visual tracker often acts as a component in the intelligent vision systems seen in, for instance, surveillance, autonomous vehicles or robots, and unmanned aerial vehicles. Applications may require robust tracking performance on difficult sequences depicting targets undergoing large changes in appearance, while enforcing a real-time constraint. Discriminative correlation filters have shown promising tracking performance in recent years, and have consistently improved the state of the art. With the advent of deep learning, new robust deep features have improved tracking performance considerably. However, methods based on discriminative correlation filters learn a rigid template describing the target appearance. This implies an assumption of target rigidity which is not fulfilled in practice. This thesis introduces an approach which integrates deformability into a state-of-the-art tracker. The approach is thoroughly tested on three challenging visual tracking benchmarks, achieving state-of-the-art performance.
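As background on the underlying machinery (a minimal MOSSE-style sketch in the frequency domain, not the deformable continuous-convolution formulation developed in the thesis), a discriminative correlation filter can be learned from one training patch and a desired Gaussian response:

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, lam=1e-2):
    """Closed-form MOSSE-style filter from a single grayscale patch."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_response(*patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # filter (conjugate) in Fourier domain

def detect(H_conj, patch):
    """Correlate a new patch with the filter and return the peak location."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: learn on a random patch and relocate its circularly shifted copy.
rng = np.random.default_rng(1)
template = rng.standard_normal((64, 64))
H = train_filter(template)
print("peak at", detect(H, np.roll(template, (5, -3), axis=(0, 1))))
```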
17

Cabral, Ricardo da Silveira. "Unifying Low-Rank Models for Visual Learning." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/506.

Full text
Abstract:
Many problems in signal processing, machine learning and computer vision can be solved by learning low-rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces with fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matrices of fixed rank. Whilst factorization approaches lead to efficient and kernelizable optimization algorithms, they have been shown to be NP-hard in the presence of missing data. Inspired by recent work in compressed sensing, hard-rank constraints have been replaced by soft-rank constraints, such as the nuclear norm regularizer. Vis-à-vis hard-rank approaches, soft-rank models are convex even in the presence of missing data: but how is convex optimization solving an NP-hard problem? This thesis addresses this question by analyzing the relationship between hard and soft rank constraints in the unsupervised factorization with missing data problem. Moreover, we extend soft-rank models to weakly supervised and fully supervised learning problems in computer vision. There are four main contributions of our work: (1) The analysis of a new unified low-rank model for matrix factorization with missing data. Our model subsumes soft- and hard-rank approaches and merges advantages from previous formulations, such as efficient algorithms and kernelization. It also provides justifications on the choice of algorithms and regions that guarantee convergence to global minima. (2) A deterministic "rank continuation" strategy for the NP-hard unsupervised factorization with missing data problem, which is highly competitive with the state of the art and often achieves globally optimal solutions. In preliminary work, we show that this optimization strategy is applicable to other NP-hard problems which are typically relaxed to convex semidefinite programs (e.g., MAX-CUT, the quadratic assignment problem). (3) A new soft-rank fully supervised robust regression model. This convex model is able to deal with noise, outliers and missing data in the input variables. (4) A new soft-rank model for weakly supervised image classification and localization. Unlike existing multiple-instance approaches for this problem, our model is convex.
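To illustrate what a soft-rank (nuclear-norm) model does in practice, here is a small soft-impute-style sketch (a generic textbook algorithm, not the unified model of the thesis): missing entries are iteratively filled in while singular values are soft-thresholded, which shrinks the rank of the completed matrix.

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, n_iters=100):
    """Low-rank matrix completion by iterative singular-value soft-thresholding.
    X: data matrix with arbitrary values where mask == False (missing entries).
    mask: True where X is observed, False where it is missing."""
    Z = np.where(mask, X, 0.0)                # current completed estimate
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        s_shrunk = np.maximum(s - lam, 0.0)   # nuclear-norm proximal step
        Z = (U * s_shrunk) @ Vt
    return Z

# Toy usage: recover a rank-2 matrix with roughly 40% of its entries missing.
rng = np.random.default_rng(0)
ground_truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(ground_truth.shape) > 0.4
completed = soft_impute(ground_truth, mask, lam=0.5, n_iters=200)
err = np.linalg.norm((completed - ground_truth)[~mask]) / np.linalg.norm(ground_truth[~mask])
print(f"relative error on missing entries: {err:.3f}")
```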
18

Gu, Hairong. "Graphic-Processing-Units Based Adaptive Parameter Estimation of a Visual Psychophysical Model." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1350577967.

Full text
19

Dang, Duong Ngoc. "Humanoid manipulation and locomotion with real-time footstep optimization." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0098/document.

Full text
Abstract:
Cette thèse porte sur la réalisation des tâches avec la locomotion sur des robots humanoïdes. Grâce à leurs nombreux degrés de liberté, ces robots possèdent un très haut niveau de redondance. D’autre part, les humanoïdes sont sous-actionnés dans le sens où la position et l’orientation ne sont pas directement contrôlées par un moteur. Ces deux aspects, le plus souvent étudiés séparément dans la littérature, sont envisagés ici dans un même cadre. En outre, la génération d’un mouvement complexe impliquant à la fois des tâches de manipulation et de locomotion, étudiée habituellement sous l’angle de la planification de mouvement, est abordée ici dans sa composante réactivité temps réel. En divisant le processus d’optimisation en deux étapes, un contrôleur basé sur la notion de pile de tâches permet l’adaptation temps réel des empreintes de pas planifiées dans la première étape. Un module de perception est également conçu pour créer une boucle fermée de perception-décision-action. Cette architecture combinant planification et réactivité est validée sur le robot HRP-2. Deux classes d’expériences sont menées. Dans un cas, le robot doit saisir un objet éloigné, posé sur une table ou sur le sol. Dans l’autre, le robot doit franchir un obstacle. Dans les deux cas, les condition d’exécution sont mises à jour en temps réel pour faire face à la dynamique de l’environnement : changement de position de l’objet à saisir ou de l’obstacle à franchir
This thesis focuses on the realization of tasks with locomotion on humanoid robots. Thanks to their numerous degrees of freedom, humanoid robots possess a very high level of redundancy. On the other hand, humanoids are underactuated in the sense that the position and orientation of the base are not directly controlled by any motor. These two aspects, usually studied separately in manipulation and locomotion research, are unified in the same framework in this thesis and are resolved as one unique problem. Moreover, the generation of a complex movement involving both tasks and footsteps is also improved and becomes reactive. By dividing the optimization process into appropriate stages and by feeding the intermediate result directly to a task-based controller, footsteps can be calculated and adapted in real time to deal with changes in the environment. A perception module is also developed to build a closed perception-decision-action loop. This architecture, combining motion planning and reactivity, is validated on the HRP-2 robot. Two classes of experiments are carried out. In one case the robot has to grasp an object far away at different height levels. In the other, the robot has to step over an object on the floor. In both cases, the execution conditions are updated in real time to deal with the dynamics of the environment: changes in the position of the target to be caught or of the obstacle to be stepped over.
20

Lavelle, Jerome Philip. "An optimization of the placement of flexible reflective post delineators from a visual detection point of view." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183139198.

Full text
21

Leung, Raymond, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Scalable video compression with optimized visual performance and random accessibility." Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/24192.

Full text
Abstract:
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
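To make the distortion-scaling idea concrete, here is a toy sketch (illustrative only, not the thesis algorithm nor EBCOT/JPEG2000 code): each code-block's distortion reductions are multiplied by a perceptual weight before computing distortion-length slopes, so perceptually significant blocks move earlier in the embedding order.

```python
# Toy rate-distortion embedding order with perceptual distortion scaling.
# All numbers below are made up for illustration.

# For each code-block: candidate truncation points as (bytes, cumulative distortion reduction),
# plus a perceptual weight from a visual-sensitivity model (>1 means visually important).
code_blocks = {
    "textured_face_region": {"points": [(100, 40.0), (250, 70.0)], "weight": 2.0},
    "flat_background":      {"points": [(100, 50.0), (250, 80.0)], "weight": 0.5},
}

def embedding_order(blocks):
    """Order (block, truncation point) pairs by decreasing weighted distortion-length slope."""
    candidates = []
    for name, blk in blocks.items():
        prev_bytes, prev_d = 0, 0.0
        for nbytes, dist in blk["points"]:
            slope = blk["weight"] * (dist - prev_d) / (nbytes - prev_bytes)
            candidates.append((slope, name, nbytes))
            prev_bytes, prev_d = nbytes, dist
    return sorted(candidates, reverse=True)

for slope, name, nbytes in embedding_order(code_blocks):
    print(f"{name:22s} up to {nbytes:4d} bytes  (weighted slope {slope:.2f})")
```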
22

Keil, Wolfgang [Verfasser], Fred [Akademischer Betreuer] Wolf, and Theo [Akademischer Betreuer] Geisel. "Optimization principles and constraints shaping visual cortical architecture / Wolfgang Keil. Gutachter: Fred Wolf ; Theo Geisel. Betreuer: Fred Wolf." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2012. http://d-nb.info/1042305773/34.

Full text
23

Naser, Karam Adil. "Modeling the Perceptual Similarity of Static and Dynamic Visual Textures : application to the Perceptual Optimization of Video Compression." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4013/document.

Full text
Abstract:
Les textures sont des signaux particuliers dans la scène visuelle, où elles peuvent couvrir de vastes zones. Elles peuvent être classées en deux catégories : statique et dynamique, où les textures dynamiques impliquent des variations temporelles. Plusieurs travaux sur la perception des textures statiques ont permis de définir des mesures de similarité visuelle pour des applications comme la reconnaissance ou la classification de textures. Ces mesures utilisent souvent une représentation inspirée du traitement neuronal du système visuel humain. Cependant de telles approches ont été peu explorées dans le cas de textures dynamiques. Dans cette thèse, un modèle perceptuel généralisé pour la mesure de similarité applicable aux textures statiques et dynamiques, a été développé. Ce modèle est inspiré du traitement effectué dans le cortex visuel primaire. Il s’avère très efficace pour des applications de classification et de reconnaissance de textures. L’application du modèle dans le cadre de l’optimisation perceptuelle de la compression vidéo, a été également étudiée. En particulier, l’intégration de la mesure de similarité entre textures, a été utilisée pour l’optimisation débit-distorsion de l’encodeur. Les résultats expérimentaux avec observateurs humains montrent une qualité visuelle améliorée des vidéos ainsi codés/décodées, avec une réduction significative du débit par rapport aux approches traditionnelles
Textures are special signals in the visual scene, where they can cover large areas. They can be classified into two categories, static and dynamic, where dynamic textures involve temporal variations. Several works on the perception of static textures have made it possible to define visual similarity measures for applications such as the recognition or classification of textures. These measures often use a representation inspired by the neural processing of the human visual system. However, such approaches have been little explored in the case of dynamic textures. In this thesis, a generalized perceptual model for the measurement of similarity, applicable to both static and dynamic textures, has been developed. This model is inspired by the processing performed in the primary visual cortex. It is very effective for texture classification and recognition applications. The application of the model in the context of the perceptual optimization of video compression was also studied. In particular, the integration of the similarity measure between textures was used for the rate-distortion optimization of the encoder. Experimental results with human observers showed an improved visual quality of the decoded videos, with a significant reduction in the bitrate compared to traditional approaches.
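For intuition about V1-inspired texture representations (a generic sketch of Gabor-energy features, assumed here for illustration rather than taken from the thesis), a static texture can be summarized by the mean response energy of a small bank of oriented Gabor filters, and two textures compared by the distance between their feature vectors.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, wavelength=8.0, sigma=3.0, size=21):
    """Real (cosine-phase) Gabor kernel at orientation theta (radians)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def texture_descriptor(image, n_orientations=4):
    """Mean absolute Gabor response per orientation: a crude V1-like energy vector."""
    return np.array([np.abs(convolve(image, gabor_kernel(np.pi * k / n_orientations))).mean()
                     for k in range(n_orientations)])

def texture_distance(img_a, img_b):
    return float(np.linalg.norm(texture_descriptor(img_a) - texture_descriptor(img_b)))

# Toy usage: horizontal stripes are closer to other horizontal stripes than to vertical ones.
y, x = np.mgrid[0:64, 0:64]
horiz_a, horiz_b, vert = np.sin(y / 2.0), np.sin(y / 2.0 + 1.0), np.sin(x / 2.0)
print(texture_distance(horiz_a, horiz_b), "<", texture_distance(horiz_a, vert))
```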
24

Dang, Hieu. "Adaptive multiobjective memetic optimization: algorithms and applications." Journal of Cognitive Informatics and Natural Intelligence, 2012. http://hdl.handle.net/1993/30856.

Full text
Abstract:
The thesis presents research on multiobjective optimization based on memetic computing and its applications in engineering. We have introduced a framework for adaptive multiobjective memetic optimization algorithms (AMMOA) with an information theoretic criterion for guiding the selection, clustering, and local refinements. A robust stopping criterion for AMMOA has also been introduced to solve non-linear and large-scale optimization problems. The framework has been implemented for different benchmark test problems with remarkable results. This thesis also presents two applications of these algorithms. First, an optimal image data hiding technique has been formulated as a multiobjective optimization problem with conflicting objectives. In particular, trade-off factors in designing an optimal image data hiding are investigated to maximize the quality of watermarked images and the robustness of watermark. With the fixed size of a logo watermark, there is a conflict between these two objectives, thus a multiobjective optimization problem is introduced. We propose to use a hybrid between general regression neural networks (GRNN) and the adaptive multiobjective memetic optimization algorithm (AMMOA) to solve this challenging problem. This novel image data hiding approach has been implemented for many different test natural images with remarkable robustness and transparency of the embedded logo watermark. We also introduce a perceptual measure based on the relative Rényi information spectrum to evaluate the quality of watermarked images. The second application is the problem of joint spectrum sensing and power control optimization for a multichannel, multiple-user cognitive radio network. We investigated trade-off factors in designing efficient spectrum sensing techniques to maximize the throughput and minimize the interference. To maximize the throughput of secondary users and minimize the interference to primary users, we propose a joint determination of the sensing and transmission parameters of the secondary users, such as sensing times, decision threshold vectors, and power allocation vectors. There is a conflict between these two objectives, thus a multiobjective optimization problem is used again in the form of AMMOA. This algorithm learns to find optimal spectrum sensing times, decision threshold vectors, and power allocation vectors to maximize the averaged opportunistic throughput and minimize the averaged interference to the cognitive radio network.
February 2016
25

Thuresson, Sofia. "Parametric optimization of reinforced concrete slabs subjected to punching shear." Thesis, KTH, Betongbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279466.

Full text
Abstract:
The construction industry is currently developing and evolving towards more automated and optimized processes in the project design phase. One reason for this development is that computational power is becoming a more precise and accessible tool and its applications are multiplying daily. Complex structural engineering problems are typically time-consuming, with large-scale calculations resulting in a limited number of evaluated solutions. Quality solutions are based on engineering experience, assumptions and previous knowledge of the subject. The use of parametric design within a structural design problem is a way of coping with complex solutions. Its methodology strips down each problem to basic solvable parameters, allowing the structure to be controlled and recombined to achieve an optimal solution. This thesis introduces the concept of parametric design and optimization in structural engineering practice, explaining how the software application works and presenting a case study carried out to evaluate the result. In this thesis a parametric model was built using the Dynamo software to handle a design process involving a common structural engineering problem. The structural problem investigated is a reinforced concrete slab supported by a centre column that is exposed to punching shear failure. The results provided are used for comparisons and as indicators of whether a more effective and better design has been achieved. Such indicators include less material use and therefore lower financial cost and/or smaller environmental impact, while maintaining the structural strength. A parametric model allows the user to easily modify and adapt any type of structure modification, making it the perfect tool to apply to an optimization process. The purpose of this thesis was to find a more effective way to solve a complex problem and to increase the number of solutions and evaluations of the problem compared to a more conventional method. The focus was to develop a parametric model of a reinforced concrete slab subjected to punching shear, which would enable optimization in terms of time spent on the project and thereby also the cost of the structure and its environmental impact. The result of this case study suggests a great potential for cost savings. The created parametric model proved in its current state to be a useful and helpful tool for the designer of a reinforced concrete slab subjected to punching shear. The results showed several solutions that meet both the economic goals and the punching shear requirements and which were optimized using the parametric model. Many solutions were provided and evaluated beyond what could have been done in a project using a conventional method. For a structure of this type, a parametric strategy will help the engineer to achieve more optimal solutions.
Just nu utvecklas Byggbranschen mot mer automatiserade och optimerade processer i projektdesignfasen. Denna utveckling beror till stor del på teknikutveckling i form av bättre datorprogram och tillgänglighet för dessa. Traditionellt sett löses komplexa konstruktionsproblem med hjälp av tidskrävande och storskaliga beräkningar, vilka sedan resulterar i ett begränsat antal utvärderade lösningar. Kvalitets lösningar bygger då på teknisk erfarenhet, antaganden och tidigare kunskaper inom ämnet.Användning av parametrisk design inom ett konstruktionsproblem är ett sätt att hantera komplexa lösningar. Dess metod avgränsar varje problem ner till ett antal lösbara parametrar, vilket gör att strukturen kan kontrolleras och rekombineras för att uppnå en optimal lösning.Denna avhandling introducerar begreppet parametrisk design och optimering i konstruktionsteknik, den förklarar hur programvaran fungerar och presenterar en fallstudie som genomförts för att utvärdera resultatet. I denna avhandling byggdes en parametrisk modell med hjälp av programvaran Dynamo för att hantera en designprocess av ett vanligt konstruktionsproblem. Det strukturella problemet som undersökts är en armerad betongplatta som stöds av en mittpelare, utsatt för genomstansning. Resultaten används för att utvärdera om en bättre design med avseende på materialanvändning har uppnåtts. Minimering av materialanvändning anses vara en bra parameter att undersöka eftersom det ger lägre kostnader och/eller lägre miljöpåverkan, detta undersöks under förutsättning att konstruktionens hållfasthet bibehålls. En parametrisk modell gör det möjligt för användaren att enkelt modifiera en konstruktionslösning med avseende på olika parametrar. Detta gör det till det perfekta verktyget att tillämpa en optimeringsprocess på.Syftet med denna avhandling var att hitta ett mer effektivt sätt att lösa ett komplext problem och att multiplicera antalet lösningar och utvärderingar av problemet jämfört med en mer konventionell metod. Fokus var att utveckla en parametrisk modell av en armerad betongplatta utsatt för genomstansning, som kommer att kunna genomföra optimering med avseende på tid som spenderas på projektet och därmed också kostnaden för konstruktionen och miljöpåverkan.Resultatet av denna fallstudie tyder på att det finns en stor möjlighet till kostnadsbesparingar och anses därför vara ett mycket hjälpsamt verktyg för en konstruktör. Resultatet visade flera lösningar som uppfyllde de konstruktionsmässiga kraven samtidigt som de gav en lägre materialanvändning tack vare optimeringen. Många lösningar tillhandahölls och utvärderades utöver vad som kunde ha gjorts i ett projekt med en konventionell metod. En parametrisk strategi kommer att hjälpa ingenjören att optimera lösningen för en konstruktion av denna typ.
APA, Harvard, Vancouver, ISO, and other styles
26

Marco, Pontus. "Design & optimization of modular tanksystems for vehicle wash facilities." Thesis, Karlstads universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79009.

Full text
Abstract:
Clean and safe water is important for the well-being of all organisms on earth. It is therefore important to reduce harmful emissions from industrial processes that use water in different ways. In vehicle washing processes, water is used in high-pressure processes, as a medium for detergents, and for rinsing of vehicles. The wastewater produced by these functions passes through a water reclamation system. A water reclamation system has two main functions: to produce reusable water for future washing cycles, and to separate contaminants and purify the wastewater so it can be released back into the municipal grid. The reclamation system achieves this by using a combination of different water handling processes, including sludge tanks, an oil-water separator, a water reclamation unit, buffer tanks, and a water purification unit. The two components that perform the more advanced cleaning processes are the water reclamation unit and the water purification unit. In this thesis, carried out in collaboration with the company Westmatic, the water reclamation unit consists of cyclone separators that use centrifugal forces to separate heavy particles and ozone treatment to break up organic substances and combat bad odors. The purification unit of choice is an electrocoagulation unit that, by means of a direct current, creates flocs of impurities that rise to the surface and can be mechanically removed from a water volume inside the unit. This purification process is completely chemical-free, making it more environmentally friendly than purification processes used in other circumstances. This master thesis aimed to develop a dynamic design tool for a modular solution of the different parts in the water reclamation system. The design tool uses specific user input to produce construction information for each instance. As an additional sub-aim, the design tool was linked with a computer-aided design program to produce parametric 3D models with underlying blueprints, in order to obtain a lightweight solution that has a short manufacturing time and is highly customer-adapted. The first course of action was to mathematically define the complete water reclamation system and its components. These sections were described in a flowchart that shows how the different parts interact and operate. From the wash station, wastewater runs through a coarse- and a fine-sludge tank. From the fine-sludge tank, the wastewater is directed in two different directions. Firstly, the water is pumped to the water reclamation unit and to one or multiple buffer tanks, to finally be used in the wash station as reclaimed water. Secondly, the water travels to an oil separator, a pump chamber, and the water purification unit. In the purification unit, 99% of the inlet mass is directed out of the system as purified water. The remaining 1% is directed to a depot that acts as the end stage of the whole system. After all equations were defined and the design was related to the user-defined input flow, the design tool was structured. The program of choice to house the design tool is Microsoft Excel. In this Excel document, a user interface with navigation was constructed, and the intended user is directed through a series of input pages where input data is defined. This data is used in a normally hidden page where constructional dimensions are calculated. The constructional dimensions are displayed to the user on the second last page. At this stage the Excel document can be connected to a CAD program, and 3D models with blueprints that depend on the output from the Excel file can be opened. Additionally, a pipe calculator is provided on the last page of the Excel document, where pipe dimensions for different cases can be found. With this solution, glass fiber tanks are molded according to the resulting customer-specific blueprints. In this way the solution is more adaptive and easier to handle. Additionally, the provided design tool enables an easier and more well-defined methodology when deriving the needed volumes and accompanying constructional dimensions for an arbitrary water reclamation system.
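The core of such a design tool is volume arithmetic driven by the user-specified wash-water flow. The Python sketch below is only a guess at that structure, not Westmatic's Excel/CAD implementation; the 99 % / 1 % purification split comes from the abstract, while the retention times and all other numbers are invented.

    def tank_volumes(flow_m3_per_h: float,
                     sludge_retention_h: float = 2.0,      # assumed retention time
                     buffer_retention_h: float = 1.5) -> dict:
        """Derive indicative tank volumes for a vehicle-wash water reclamation
        system from the inlet wastewater flow (illustrative sizing only)."""
        purified = 0.99 * flow_m3_per_h    # mass leaving the system as clean water
        to_depot = 0.01 * flow_m3_per_h    # reject stream sent to the end depot
        return {
            "coarse_sludge_tank_m3": flow_m3_per_h * sludge_retention_h,
            "fine_sludge_tank_m3":   flow_m3_per_h * sludge_retention_h,
            "buffer_tank_m3":        flow_m3_per_h * buffer_retention_h,
            "purified_out_m3_per_h": purified,
            "depot_in_m3_per_h":     to_depot,
        }

    if __name__ == "__main__":
        for name, value in tank_volumes(flow_m3_per_h=6.0).items():
            print(f"{name:>24s}: {value:6.2f}")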
APA, Harvard, Vancouver, ISO, and other styles
27

Aldahdooh, Ahmed. "Content-aware video transmission in HEVC context : optimization of compression, of error resilience and concealment, and of visual quality." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4011/document.

Full text
Abstract:
Dans cette étude, nous utilisons des caractéristiques locales/globales en vue d’améliorer la chaîne de transmission des séquences de vidéos. Ce travail est divisé en quatre parties principales qui mettent à profit les caractéristiques de contenu vidéo. La première partie introduit un modèle de prédiction de paramètres d’un encodeur basé sur la complexité du contenu. Ce modèle utilise le débit, la distorsion, ainsi que la complexité de différentes configurations de paramètres afin d’obtenir des valeurs souhaitables (recommandées) de paramètres d’encodage. Nous identifions ensuite le lien en les caractéristiques du contenu et ces valeurs recommandées afin de construire le modèle de prédiction. La deuxième partie illustre le schéma de l’encodage à description multiple (Multiple Description Coding ou MDC, en anglais) que nous proposons dans ces travaux. Celui-ci est optimisé pour des MDC d’ordre-hauts. Le décodage correspondant et la procédure de récupération de l’erreur contenu-dépendant sont également étudiés et identifiés. La qualité de la vidéo reçue a été évaluée subjectivement. En analysant les résultats des expériences subjectives, nous introduisons alors un schéma adaptatif, c’est-à-dire adapté à la connaissance du contenu vidéo. Enfin, nous avons simulé un scénario d’application afin d’évaluer un taux de débit réaliste. Dans la troisième partie, nous utilisons une carte de déplacement, calculées au travers des propriétés de mouvement du contenu vidéo, comme entrée pour l’algorithme de masquage d’erreur par recouvrement (inpainting based error concealment algorithm). Une expérience subjective a été conduite afin d’évaluer l’algorithme et d’étudier la perturbation de l’observateur au visionnage de la vidéo traitée. La quatrième partie possèdent deux sous-parties. La première se penche sur les algorithmes de sélections par HRC pour les grandes bases de données de vidéos. La deuxième partie introduit l’évaluation de la qualité vidéo utilisant la connaissance du contenu global non-référencé
In this work, global/local content characteristics are utilized in order to improve the delivery chain of video sequences. The work is divided into four main parts that take advantage of video content features. The first part introduces a joint content-complexity encoder parameter prediction model. This model uses the bitrate, distortion, and complexity of different parameter configurations in order to obtain the recommended encoder parameter values. Then, the links between content features and the recommended values are identified. Finally, the prediction model is built using these features and the recommended encoder parameter values. The second part illustrates the proposed multiple description coding (MDC) scheme that is optimized for high-order MDC. The corresponding decoding and content-dependent error recovery procedures are also identified. The quality of the received videos is evaluated subjectively. By analyzing the subjective experiment results, an adaptive, i.e. content-aware, scheme is introduced. Finally, an application scenario is simulated to study the realistic bitrate consumption. The third part uses the motion properties of a content to introduce a motion map that is used as an input for a modified state-of-the-art inpainting-based error concealment algorithm. A subjective experiment was conducted to evaluate the algorithm and also to study, in a content-aware manner, the observer's disturbance when perceiving the processed videos. The fourth part has two sub-parts: the first is about HRC selection algorithms for large-scale video databases, with improved performance evaluation measures for video quality assessment algorithms using training and validation sets; the second introduces global content-aware no-reference video quality assessment.
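As a rough intuition for multiple description coding, the Python sketch below splits a frame sequence into N descriptions by temporal polyphase subsampling and conceals frames of lost descriptions by repeating the nearest received one. This is a toy illustration only, not the high-order MDC scheme or the content-aware adaptation proposed in the thesis.

    def split_descriptions(frames: list, n: int) -> list:
        """Temporal polyphase split: description k gets frames k, k+n, k+2n, ..."""
        return [frames[k::n] for k in range(n)]

    def reconstruct(descriptions: list, received: set, n: int, total: int) -> list:
        """Rebuild the sequence; frames of lost descriptions are concealed by
        copying the most recent frame that was actually received."""
        out, last = [], None
        for i in range(total):
            desc = i % n
            if desc in received:
                last = descriptions[desc][i // n]
            out.append(last)
        return out

    frames = [f"frame{i}" for i in range(8)]
    descs = split_descriptions(frames, n=2)
    print(reconstruct(descs, received={0}, n=2, total=len(frames)))
    # ['frame0', 'frame0', 'frame2', 'frame2', 'frame4', 'frame4', 'frame6', 'frame6']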
APA, Harvard, Vancouver, ISO, and other styles
28

Simonenko, Ekaterina. "OLAP query optimization and result visualization." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112138.

Full text
Abstract:
Nous explorons différents aspects des entrepôts de données et d’OLAP, le point commun de nos recherches étant le modèle fonctionnel pour l'analyse de données. Notre objectif principal est d'utiliser ce modèle dans l'étude de trois aspects différents, mais liés:- l'optimisation de requêtes par réécriture et la gestion du cache,- la visualisation du résultat d'une requête OLAP,- le mapping d'un schéma relationnel en BCNF vers un schéma fonctionnel. L'optimisation de requêtes et la gestion de cache sont des problèmes cruciaux dans l'évaluation de requêtes en général, et les entrepôts de données en particulier; et la réécriture de requêtes est une des techniques de base pour l'optimisation de requêtes. Nous établissons des conditions d'implication de requêtes analytiques, en utilisant le pré-ordre partiel sur l'ensemble de requêtes, et nous définissons un algorithme sain et complet de réécriture ainsi que une stratégie de gestion de cache optimisée, tous les deux basés sur le modèle fonctionnel.Le deuxième aspect important que nous explorons dans cette thèse est celui de la visualisation du résultat. Nous démontrons l'importance pour la visualisation de reproduire des propriétés essentielles de données qui sont les dépendances fonctionnelles. Nous montrons que la connexion, existante entre les données et leur visualisation, est précisément la connexion entre leurs représentations fonctionnelles. Nous dérivons alors un cadre technique, ayant pour objectif d'établir une telle connexion pour un ensemble de données et un ensemble de visualisations. En plus d'analyse du processus de visualisation, nous utilisons le modèle fonctionnel comme un guide pour la visualisation interactive, et définissons ce qu'on appelle la visualisation paramétrique. Le troisième aspect important de notre travail est l'expérimentation des résultats obtenus dans cette thèse. Les résultats de cette thèse peuvent être utilisés afin d’analyser les données contenues dans une table en Boyce-Codd Normal Form (BCNF), étant donné que le schéma de la table peut être transformé aisément en un schéma fonctionnel. Nous présentons une telle transformation (mapping) dans cette thèse. Une fois le schéma relationnel transformé en un schéma fonctionnel, nous pouvons profiter des résultats sur l'optimisation et la visualisation de requêtes. Nous avons utilisé cette transformation dans l’implémentation de deux prototypes dans le cadre de deux projets différents
In this thesis, we explore different aspects of Data Warehousing and OLAP, the common point of our proposals being the functional model for data analysis. Our main objective is to use that model in studying three different, but related aspects: (1) query optimization through rewriting and cache management, (2) query result visualization, and (3) mapping of a relational BCNF schema to a functional schema. Query optimization and cache management is a crucial issue in query processing in general, and in data warehousing in particular; and query rewriting is one of the basic techniques for query optimization. We establish derivability conditions for analytic functional queries, using a partial pre-order over the set of queries. Then we provide a sound and complete rewriting algorithm, as well as an optimized cache management strategy, both based on the underlying functional model. A second important aspect that we explore in the thesis is that of query result visualization. We show the importance for the visualization to reflect such essential features of the dataset as functional dependencies. We show that the connection existing between data and visualization is precisely the connection between their functional representations. We then define a framework whose objective is to establish such a connection for a given dataset and a set of visualizations. In addition to the analysis of the visualization process, we use the functional data model as a guide for interactive visualization, and define what we call a parametric visualization. A third important aspect of our work is experimentation with the results obtained in the thesis. In order to be able to analyze the data contained in a Boyce-Codd Normal Form (BCNF) table, one can use the results obtained in this thesis, provided that the schema of the table can be mapped to a functional schema. We present such a mapping in this thesis. Once the relational schema has been transformed into a functional schema, we can take advantage of the query optimization and result visualization results presented in the thesis. We have used this transformation in the implementation of two prototypes in the context of two different projects.
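The flavour of query rewriting over cached results can be illustrated by answering a coarse aggregate from a cached finer one: if sales by (city, month) are already cached, a query for sales by city is derivable by summing over months. The Python sketch below shows only this rollup idea with invented data; the partial pre-order on queries, the soundness/completeness argument, and the cache management strategy of the thesis are not represented.

    from collections import defaultdict

    # Cached result of the finer query: sales grouped by (city, month).
    cached_by_city_month = {
        ("Paris", "Jan"): 120, ("Paris", "Feb"): 90,
        ("Lyon",  "Jan"):  60, ("Lyon",  "Feb"): 75,
    }

    def rewrite_rollup(cache: dict, keep: tuple) -> dict:
        """Answer a query on a coarser grouping by aggregating the cached
        finer-grained result instead of touching the base data."""
        out = defaultdict(int)
        for key, measure in cache.items():
            coarse_key = tuple(key[i] for i in keep)   # project away dropped axes
            out[coarse_key] += measure
        return dict(out)

    # "Sales by city" is derivable from the cache: sum over the month axis.
    print(rewrite_rollup(cached_by_city_month, keep=(0,)))
    # {('Paris',): 210, ('Lyon',): 135}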
APA, Harvard, Vancouver, ISO, and other styles
29

Hermann, Frank [Verfasser], and Hartmut [Akademischer Betreuer] Ehrig. "Analysis and Optimization of Visual Enterprise Models : Based on Graph and Model Transformation [[Elektronische Ressource]] / Frank Hermann. Betreuer: Hartmut Ehrig." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1014891507/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

D'Angio, Paul Christopher. "Adaptive and Passive Non-Visual Driver Assistance Technologies for the Blind Driver Challenge®." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27582.

Full text
Abstract:
This work proposes a series of driver assistance technologies that enable blind persons to safely and independently operate an automobile on standard public roads. Such technology could additionally benefit sighted drivers by augmenting vision with suggestive cues during normal and low-visibility driving conditions. This work presents a non-visual human-computer interface system with passive and adaptive controlling software to realize this type of driver assistance technology. The research and development behind this work was made possible through the Blind Driver Challenge® initiative taken by the National Federation of the Blind. The instructional technologies proposed in this work enable blind drivers to operate an automobile through the provision of steering wheel angle and speed cues to the driver in a non-visual manner. This paradigm imposes four principal functionality requirements: Perception, Motion Planning, Reference Transformations, and Communication. The Reference Transformation and Communication requirements are the focus of this work and convert motion planning trajectories into a series of non-visual stimuli that can be communicated to the human driver. This work proposes two separate algorithms to perform the necessary reference transformations described above. The first algorithm, called the Passive Non-Visual Interface Driver, converts the planned trajectory data into a form that can be understood and reliably interacted with by the blind driver. This passive algorithm performs the transformations through a method that is independent of the driver. The second algorithm, called the Adaptive Non-Visual Interface Driver, performs similar trajectory data conversions through methods that adapt to each particular driver. This algorithm uses Model Predictive Control supplemented with Artificial Neural Network driver models to generate non-visual stimuli that are predicted to induce optimal performance from the driver. The driver models are trained online and in real-time with a rapid training approach to continually adapt to changes in the driver's dynamics over time. The communication of calculated non-visual stimuli is subsequently performed through a Non-Visual Interface System proposed by this work. This system comprises two non-visual human-computer interfaces that communicate driving information through haptic stimuli. The DriveGrip interface is a pair of vibro-tactile gloves that communicate steering information through the driver's hands and fingers. The SpeedStrip interface is a vibro-tactile cushion fitted on the driver's seat that communicates speed information through the driver's legs and back. The two interfaces work simultaneously to provide a continuous stream of directions to the driver as he or she navigates the vehicle.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
31

Martínez, Ana Laura, and Natali Arvidsson. "Balance Between Performance and Visual Quality in 3D Game Assets : Appropriateness of Assets for Games and Real-Time Rendering." Thesis, Uppsala universitet, Institutionen för speldesign, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413871.

Full text
Abstract:
This thesis explores the balance between visual quality and the performance of a 3D object for computer games. Additionally, it aims to help new 3D artists to create assets that are both visually adequate and optimized for real-time rendering. It further investigates the differences in the judgement of visual quality between those who know computer graphics and those not familiar with it. Many explanations of 3D art optimization are highly technical and challenging for graphic artists to grasp, and they regularly neglect the effects of optimization on the visual quality of the assets. By testing several 3D assets to measure their render time while using a survey to gather visual assessments, it was discovered that 3D game art is very contextual. No definite or straightforward way was identified to find the balance between art quality and performance universally, neither when it comes to performance nor to visuals. However, some interesting findings regarding the judgement of visual quality were observed and presented.
Den här uppsatsen utforskar balansen mellan visuell kvalité och prestanda i 3D-modeller för spel. Vidare eftersträvar den att utgöra ett stöd för nya 3D-modelleringskonstnärer för att skapa modeller som är både visuellt adekvata och optimerade för att renderas i realtid. Dessutom undersöks skillnaden i omdömet av den visuella kvalitén mellan de som är bekanta med 3D-datorgrafik och de som inte är det. Många förklaringar gällande optimering av 3D-grafik är högst tekniska och utgör en utmaning för grafiker att förstå sig på, och försummar dessutom ofta effekten av hur optimering påverkar resultatet rent visuellt. Genom att testa ett flertal 3D-modeller, mäta tiden det tar för dem att renderas, samt samla in omdömen gällande visuella intryck, drogs slutsatsen att bedömning av 3D-modellering för spel är väldigt kontextuell. Inget definitivt och enkelt sätt att hitta balansen mellan den visuella kvalitén och prestandan upptäcktes, varken gällande prestanda eller visuell kvalité. Däremot gjordes några intressanta upptäckter angående bedömningen av den visuella kvalitén, vilka observerades och presenterades.
APA, Harvard, Vancouver, ISO, and other styles
32

Nielsen, Jerel Bendt. "Robust Visual-Inertial Navigation and Control of Fixed-Wing and Multirotor Aircraft." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7584.

Full text
Abstract:
With the increased performance and reduced cost of cameras, the robotics community has taken great interest in estimation and control algorithms that fuse camera data with other sensor data. In response to this interest, this dissertation investigates the algorithms needed for robust guidance, navigation, and control of fixed-wing and multirotor aircraft applied to target estimation and circumnavigation. This work begins with the development of a method to estimate target position relative to static landmarks, deriving and using a state-of-the-art EKF that estimates static landmarks in its state. Following this estimator, improvements are made to a nonlinear observer solving part of the SLAM problem. These improvements include a moving origin process to keep the coordinate origin within the camera field of view and a sliding window iteration algorithm to drastically improve convergence speed of the observer. Next, observers to directly estimate relative target position are created with a circumnavigation guidance law for a multirotor aircraft. Taking a look at fixed-wing aircraft, a state-dependent LQR controller with inputs based on vector fields is developed, in addition to an EKF derived from error state and Lie group theory to estimate aircraft state and inertial wind velocity. The robustness of this controller/estimator combination is demonstrated through Monte Carlo simulations. Next, the accuracy, robustness, and consistency of a state-of-the-art EKF are improved for multirotors by augmenting the filter with a drag coefficient, partial updates, and keyframe resets. Monte Carlo simulations demonstrate the improved accuracy and consistency of the augmented filter. Lastly, a visual-inertial EKF using image coordinates is derived, as well as an offline calibration tool to estimate the transforms needed for accurate visual-inertial estimation algorithms. The image-based EKF and calibrator are also shown to be robust under various conditions through numerical simulation.
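As generic background for the filtering work described above, the numpy sketch below shows a textbook Kalman measurement update in Joseph form. It is standard material only; the error-state mechanization, the drag-coefficient augmentation, the partial updates, and the keyframe resets developed in the dissertation are not shown here.

    import numpy as np

    def kalman_update(x, P, z, H, R):
        """Standard (extended) Kalman filter measurement update.
        The Joseph-form covariance update is used for numerical symmetry."""
        y = z - H @ x                                  # innovation
        S = H @ P @ H.T + R                            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x_new = x + K @ y
        I_KH = np.eye(len(x)) - K @ H
        P_new = I_KH @ P @ I_KH.T + K @ R @ K.T        # Joseph form
        return x_new, P_new

    # Toy example: 2-state constant-position model observed directly in 1D.
    x = np.array([0.0, 0.0])
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.1]])
    x, P = kalman_update(x, P, z=np.array([0.5]), H=H, R=R)
    print(x, np.diag(P))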
APA, Harvard, Vancouver, ISO, and other styles
33

Seeman, Michal. "Zpracování obrazu pro lepší vjem a interakci." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261263.

Full text
Abstract:
Image reproduction should convey a perception as similar as possible to the one we get when observing the original image. Digital image reproduction comprises capture, processing, and rendering. Many of the procedures in this process are imperfect. This work presents improvements in the speed and accuracy of several of the current methods.
APA, Harvard, Vancouver, ISO, and other styles
34

Granberg, Andreas, and Joel Wahlstein. "Parametric design and optimization of pipe bridges : Automating the design process in early stage of design." Thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277935.

Full text
Abstract:
Parametric design can be used for structural design. This approach has some clear advantages compared to the conventional point-based approach using different Computer Aided Design (CAD) software, especially in the early stage of design. Since the model is parametrically defined, alternative designs that are within the scope of the parametric definition can be explored with little effort from the user compared to point-based models. In this way, optimization routines can be used to make more informed decisions about the design. Pipe bridges usually have a similar design that is suitable to be defined parametrically. The aim of the thesis is to automate the modelling of pipe bridges in the early stages of design, to make an integrated analysis, and to optimize the structure with regard to material cost and carbon dioxide equivalent emissions as well as the mass of the structure, and further to investigate in what way these objectives are correlated. This thesis improves an existing Grasshopper script used to design pipe bridges and implements automatic generation of a Bill of Quantities (BoQ). The results of the thesis case study suggest that there is potential in using optimization with parametric design to minimize the cost of pipe bridges. With a good parametric design definition, alternative designs can be explored with little effort from the user. This benefit of speeding up the design process, and of allowing the designer to work with an adaptable design, could be reasons to turn to a parametric design method. It should also be stressed that this thesis suggests a correlation between the cost of the structure and the carbon dioxide equivalent emissions from the structure, meaning that while minimizing emissions one could also be minimizing the cost.
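The reported correlation between cost and carbon dioxide equivalent emissions is easy to picture when both objectives are roughly proportional to the steel mass of the structure. The Python sketch below makes only that point, with invented unit prices, emission factors, and candidate masses; it is not the thesis's Grasshopper model or its Bill of Quantities generation.

    # Assumed unit figures for structural steel (placeholders, not project data).
    PRICE_PER_KG = 2.5      # EUR per kg
    CO2E_PER_KG = 1.9       # kg CO2e per kg of steel

    def objectives(steel_mass_kg: float):
        """Return (material cost, CO2e emissions) for a candidate pipe bridge."""
        return steel_mass_kg * PRICE_PER_KG, steel_mass_kg * CO2E_PER_KG

    candidates_kg = [5200.0, 4800.0, 4450.0, 4700.0]   # masses of generated designs
    for mass in candidates_kg:
        cost, co2 = objectives(mass)
        print(f"mass {mass:7.0f} kg  ->  cost {cost:8.0f} EUR,  {co2:7.0f} kg CO2e")
    # Both objectives rank the candidates identically, so minimising one also
    # minimises the other whenever they scale with the same mass.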
APA, Harvard, Vancouver, ISO, and other styles
35

Liljengård, Anton. "Filstorleksoptimering för retuscheringsarbete : Enundersökning med fokus på moderetuschering." Thesis, Högskolan Dalarna, Grafisk teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:du-25077.

Full text
Abstract:
Under bearbetning av bilder idag förekommer ofta stora filer. Med den effektiva teknologiska utvecklingen har efterfrågan på kvalitet växt allt mer. I en värld där fotografens kamera har blivit mer högupplöst har även bilders filstorlek blivit större. Målet med detta examensarbete har varit att komma fram med en rekommendation för hur man arbetar mot en liten filstorlek. Rekommendationen är till för retuschörer som arbetar inom modebranschen och med bilder ämnade för print. Arbetet har försökt åskådliggöra vad under retuscheringens arbetsgång som orsakar en större filstorlek. Detta genom att kontakta retuschörer som ofta arbetar med modebilder. Fokus har legat på lager i Photoshop samt editeringsalternativ för retuschören. Det framkom att retuschörer gjorde liknande åtgärder för att få en liten filstorlek, och att en viss likhet kan urskiljas i deras arbetssätt kring vad som ökade filstorlek. Det framkom även att filstorleken påverkas mest av hur pixellager och masker ser ut, till skillnad från justeringslager.
During the processing of pictures today, the file size often becomes large. Effective technological development has made the demand for quality higher. In a world where the photographer's camera has gained a higher resolution, the image's file size has also increased. The aim of this thesis has been to come up with a recommendation for how to work towards a smaller file size. The recommendation is intended for retouchers who work in the fashion industry and with pictures meant for print. The work has dealt with file sizes associated with retouching and has tried to illustrate what in the retouching procedure causes a larger file size. This has been done by contacting retouchers who often work with fashion images. The focus has been on layers in Photoshop and editing options for the retoucher. The results showed that the retouchers took similar measures to obtain a small file size, and a certain similarity is apparent in their ways of working regarding what increased the file size. It also emerged that the file size is most affected by how pixel layers and masks look, as opposed to adjustment layers.
APA, Harvard, Vancouver, ISO, and other styles
36

Keukelaar, J. H. D. "Topics in Soft Computing." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Church, Donald Glen. "Reducing Error Rates in Intelligence, Surveillance, and Reconnaissance (ISR) Anomaly Detection via Information Presentation Optimization." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452858183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lupi, Giacomo. "Modelli e metodi metaeuristici per la gestione operativa di una rete distributiva cross-docking: il caso OneExpress." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The thesis focuses on optimizing the logistics network of the pallet network OneExpress, and in particular on developing a methodology for routing orders within the network. The OneExpress network consists of three hubs and roughly 120 affiliates spread across Italy. Each affiliate is a small logistics operator that delivers goods to a hub; the hub receives the inbound truck with mixed freight, cross-docks it, and consolidates it onto a single truck bound for the destination affiliate. The objective of the work is to build a tool that provides the routes orders should follow within the network while minimizing transport costs. The first part of the thesis presents the mathematical model of the operational problem which, as the number of entities involved grows, quickly becomes unsolvable with commercial solvers. Therefore, in order to solve a large-scale problem such as that of the OneExpress network, a metaheuristic approach based on Ant Colony Optimization was used. This algorithm is based on the natural behaviour of ants, which are able to find the shortest path to a food source and pass it on to the other ants. The construction of the network and of the algorithm was carried out in Excel using the Visual Basic language. The idea of building this algorithm outside of commercial software that solves this kind of problem was to produce a small computation engine that can run in Excel and, given a database built in a certain way, can generate an operational control tool that suggests the best routes for orders across the whole network.
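A very small Ant Colony Optimization loop for shortest-path routing on a toy hub-and-affiliate graph is sketched below in Python (the thesis builds its engine in Excel with Visual Basic). The graph, costs, and parameters are invented for illustration and the sketch does not reproduce the thesis's model.

    import random

    # Toy network: travel costs between nodes (affiliate A, hubs H1/H2, destination D).
    cost = {("A", "H1"): 4, ("A", "H2"): 7, ("H1", "D"): 7, ("H2", "D"): 3,
            ("H1", "H2"): 2}
    edges = {n: [] for n in "A H1 H2 D".split()}
    for (u, v), c in list(cost.items()):
        edges[u].append(v)
        edges[v].append(u)
        cost[(v, u)] = c                                  # make the graph symmetric

    pheromone = {e: 1.0 for e in cost}
    alpha, beta, rho, n_ants = 1.0, 2.0, 0.3, 20

    def build_path(src="A", dst="D"):
        """One ant walks from src to dst, choosing the next hop with probability
        proportional to pheromone^alpha * (1/cost)^beta, without revisiting nodes."""
        path, node = [src], src
        while node != dst:
            options = [v for v in edges[node] if v not in path]
            if not options:
                return None, float("inf")
            weights = [pheromone[(node, v)] ** alpha * (1.0 / cost[(node, v)]) ** beta
                       for v in options]
            node = random.choices(options, weights=weights)[0]
            path.append(node)
        length = sum(cost[(path[i], path[i + 1])] for i in range(len(path) - 1))
        return path, length

    best_path, best_len = None, float("inf")
    for _ in range(50):                                   # iterations
        tours = [build_path() for _ in range(n_ants)]
        for e in pheromone:                               # evaporation
            pheromone[e] *= (1.0 - rho)
        for path, length in tours:
            if path is None:
                continue
            if length < best_len:
                best_path, best_len = path, length
            for i in range(len(path) - 1):                # deposit on used edges
                for e in ((path[i], path[i + 1]), (path[i + 1], path[i])):
                    pheromone[e] += 1.0 / length
    print(best_path, best_len)   # expected: ['A', 'H1', 'H2', 'D'] with cost 9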
APA, Harvard, Vancouver, ISO, and other styles
39

Farkač, Daniel. "Aplikace VBA (Visual Basic for Application) a Maple na problémy procesního inženýrství." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228712.

Full text
Abstract:
The task of the diploma thesis named VBA and Maple Application on Process Engineering Problems is to show the possibilities of using these programming languages for various engineering tasks. In particular, the programming language Visual Basic for Applications (VBA), which is a part of the MS Office package, is very little used in practice. That is why this thesis solves the complex task of a furnace design process; the topic was recommended by the supervisor prof. Ing. Josef Kohoutek, CSc. Specifically, the thesis deals with calculations of heat transfer and optimization of the height of extended surfaces of tubes in the convection section of process furnaces. The entire task is elaborated in VBA and runs in Excel. After entering the input information, the created program first calculates the size and heat output of the convection section, but it can also optimize the height of extended surfaces in different parts of the convection section and thus minimize investment costs.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Hailong. "Analytical Model for Energy Management in Wireless Sensor Networks." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367936881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Costa, Daniel Gouveia. "Otimizações da transmissão de imagens em redes de sensores visuais sem fio explorando a relevância de monitoramento dos nós fontes e codificação DWT." Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15216.

Full text
Abstract:
The development of wireless sensor networks for control and monitoring functions has created a vibrant investigation scenario, ranging from communication aspects to issues related to energy efficiency. When source sensors are endowed with cameras for visual monitoring, a new scope of challenges is raised, as transmission and monitoring requirements are considerably changed. Particularly, visual sensors collect data following a directional sensing model, altering the meaning of concepts such as vicinity and redundancy but allowing the differentiation of source nodes by their sensing relevancies for the application. In such a context, we propose the combined use of two differentiation strategies as a novel QoS parameter, exploring the sensing relevancies of source nodes and DWT image coding. This innovative approach supports a new scope of optimizations to improve the performance of visual sensor networks at the cost of a small reduction in the overall monitoring quality of the application. Besides the definition of a new concept of relevance and the proposition of mechanisms to support its practical exploitation, we propose five different optimizations in the way images are transmitted in wireless visual sensor networks, aiming at energy saving, transmission with low delay and error recovery. Putting all these together, the proposed innovative differentiation strategies and the related optimizations open a relevant research trend, where the application monitoring requirements are used to guide a more efficient operation of sensor networks.
O desenvolvimento de redes de sensores sem fio para funções de controle e monitoramento tem criado um pulsante cenário de investigação, abrangendo desde aspectos da comunicação em rede até questões como eficiência energética. Quando sensores são equipados com câmeras para funções de monitoramento visual, um novo escopo de desafios é lançado, uma vez que há uma mudança significativa nos requisitos de monitoramento e transmissão. Em particular, sensores visuais coletam dados seguindo um modelo direcional de monitoramento, alterando conceitos já estabelecidos de vizinhança e redundância, porém tornando possível a diferenciação de sensores pelas suas relevâncias de monitoramento para a aplicação. Nesse contexto, propomos que a relevância de monitoramento dos sensores fontes seja explorada em conjunto com a codificação de imagens por transformada DWT, unindo assim dois diferentes escopos de relevância para a criação de novos parâmetros de QoS. Essa abordagem inovadora permite uma nova gama de otimizações da operação da rede, possibilitando aumento de desempenho com pequenas perdas na qualidade global de monitoramento. Além da definição de um novo conceito de relevância e a proposição de mecanismos para suportar sua utilização prática, cinco diferentes otimizações da transmissão de imagens em redes de sensores visuais sem fio são propostas, visando economia de energia, transmissão com baixo atraso e recuperação de erros. Em conjunto, as estratégias de diferenciação e as otimizações relacionadas abrem uma importante vertente de pesquisa, onde os requisitos de monitoramento das aplicações são utilizados para guiar uma operação mais eficiente da rede.
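The combination of sensing relevance and DWT coding can be pictured as dropping detail subbands for low-relevance source nodes so that they transmit less data per image. The numpy sketch below applies a single-level Haar transform and keeps only the approximation subband when a node's relevance falls below a threshold; the actual relevance definition, packet prioritization, and error-recovery optimizations are those of the thesis, not of this toy.

    import numpy as np

    def haar_level1(img: np.ndarray):
        """One-level 2-D Haar transform: returns approximation (LL) and details."""
        a = (img[:, 0::2] + img[:, 1::2]) / 2.0     # horizontal average
        d = (img[:, 0::2] - img[:, 1::2]) / 2.0     # horizontal difference
        ll = (a[0::2, :] + a[1::2, :]) / 2.0
        lh = (a[0::2, :] - a[1::2, :]) / 2.0
        hl = (d[0::2, :] + d[1::2, :]) / 2.0
        hh = (d[0::2, :] - d[1::2, :]) / 2.0
        return ll, (lh, hl, hh)

    def payload_for_node(img: np.ndarray, relevance: float, threshold: float = 0.5):
        """High-relevance nodes send all subbands; low-relevance nodes send only LL,
        roughly quartering the amount of data they transmit."""
        ll, details = haar_level1(img)
        return (ll, details) if relevance >= threshold else (ll, None)

    frame = np.random.rand(8, 8)
    full = payload_for_node(frame, relevance=0.9)
    cheap = payload_for_node(frame, relevance=0.2)
    print("high relevance subbands:", 1 + len(full[1]))    # LL + 3 detail subbands
    print("low  relevance subbands:", 1)                    # only LL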
APA, Harvard, Vancouver, ISO, and other styles
42

Sundaramoorthi, Ganesh. "Global Optimizing Flows for Active Contours." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16145.

Full text
Abstract:
This thesis makes significant contributions to the object detection problem in computer vision. The object detection problem is, given a digital image of a scene, to detect the relevant object in the image. One technique for performing object detection, called "active contours", optimizes a constructed energy that is defined on contours (closed curves) and is tailored to image features. An optimization method can be used to perform the optimization of the energy, and thereby deform an initially placed contour to the relevant object. The typical optimization technique used in almost every active contour paper is evolving the contour by the energy's gradient descent flow, i.e., the steepest descent flow, in order to drive the initial contour to (hopefully) the minimum curve. The problem with this technique is that often the contour becomes stuck in a sub-optimal and undesirable local minimum of the energy. This problem can be partially attributed to the fact that the gradient flows of these energies make use of only local image and contour information. By local, we mean that in order to evolve a point on the contour, only information local to that point is used. Therefore, in this thesis, we introduce a new class of flows that are global in that the evolution of a point on the contour depends on global information from the entire curve. These flows help avoid a number of problems with traditional flows, including helping to avoid undesirable local minima. We demonstrate practical applications of these flows for the object detection problem, including applications to both image segmentation and visual object tracking.
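The local flows discussed above move each contour point using only its immediate neighbourhood; the simplest example is the discrete curve-shortening (Laplacian smoothing) step sketched below in Python, which is the gradient descent direction for curve length. This toy is meant only to contrast with the global flows proposed in the thesis, whose point updates depend on the whole curve.

    import numpy as np

    def curve_shortening_step(contour: np.ndarray, dt: float = 0.2) -> np.ndarray:
        """One explicit gradient-descent step on curve length for a closed contour.
        Each point moves toward the average of its two neighbours (a purely local
        update), the discrete analogue of motion by curvature."""
        prev_pts = np.roll(contour, 1, axis=0)
        next_pts = np.roll(contour, -1, axis=0)
        laplacian = prev_pts + next_pts - 2.0 * contour
        return contour + dt * laplacian

    # A noisy circle shrinks and smooths under repeated local steps.
    theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    contour += 0.05 * np.random.randn(*contour.shape)
    for _ in range(100):
        contour = curve_shortening_step(contour)
    print("mean radius after smoothing:", np.linalg.norm(contour, axis=1).mean())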
APA, Harvard, Vancouver, ISO, and other styles
43

Wahlberg, David. "Data Reduction Methods for Deep Images." Thesis, Högskolan i Gävle, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-25473.

Full text
Abstract:
Deep images for use in visual effects work during deep compositing tend to be very large. Quite often the files are larger than needed for their final purpose, which opens up an opportunity for optimizations. This research project is about finding methods for identifying redundant and excessive data use in deep images, and then approximating this data by resampling it and representing it using less data. The focus was on maintaining the final visual quality while optimizing the files, so that the methods can be used in a real production environment. While not very successful for geometric data, the results when optimizing volumetric data were very successful and exceeded expectations.
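One simple data-reduction idea for deep images is to merge samples in a pixel's depth list that lie closer together than a tolerance, compositing their premultiplied colours with the usual "over" rule. The Python sketch below is a generic illustration of that resampling idea, not the specific methods evaluated in the thesis; the sample values and tolerance are invented.

    def merge_deep_samples(samples, z_tol=0.01):
        """samples: list of (depth, alpha, premultiplied_color) sorted by depth.
        Adjacent samples closer than z_tol are combined with the 'over' operator."""
        merged = []
        for z, a, c in samples:
            if merged and z - merged[-1][0] < z_tol:
                z0, a0, c0 = merged[-1]
                merged[-1] = (z0,
                              a0 + a * (1.0 - a0),        # combined coverage
                              c0 + c * (1.0 - a0))        # 'over' on color
            else:
                merged.append((z, a, c))
        return merged

    deep_pixel = [(1.000, 0.30, 0.30), (1.004, 0.40, 0.20), (5.200, 0.50, 0.10)]
    print(merge_deep_samples(deep_pixel))
    # the first two samples collapse into one; the distant sample is kept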
APA, Harvard, Vancouver, ISO, and other styles
44

Rangel, Elivelton Oliveira. "Otimização de Redes de Sensores Visuais sem Fio por Algoritmos Evolutivos Multiobjetivo." Universidade Estadual de Feira de Santana, 2018. http://tede2.uefs.br:8080/handle/tede/675.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Wireless visual sensor networks can provide valuable information for a lot of monitoring and control applications, which has attracted much attention from the academic community in recent years. For some applications, a set of targets has to be covered by visual sensors, and sensing redundancy may be desired in many cases, especially when applications have availability requirements or demands for multiple coverage perspectives of viewed targets. For rotatable visual sensors, the sensing orientations can be adjusted for optimized coverage and redundancy, with different optimization approaches available to address this problem. Particularly, as different optimization parameters may be considered, the redundant coverage maximization issue may be treated as a multi-objective problem, with some potential solutions to be considered. In this context, two different evolutionary algorithms are proposed to compute redundant coverage maximization for target viewing, intended to be more efficient alternatives to greedy-based algorithms. Simulation results reinforce the benefits of employing evolutionary algorithms for adjustments of sensors' orientations, potentially benefiting the deployment and management of wireless visual sensor networks for different applications.
As redes de sensores visuais sem fio podem obter, através de câmeras, informações importantes para aplicações de controle e monitoramento, e têm ganhado atenção da comunidade acadêmica nos últimos anos. Para algumas aplicações, um conjunto de alvos deve ser coberto por sensores visuais, e por vezes com demanda de redundância de cobertura, especialmente quando há requisitos de disponibilidade ou demandas de múltiplas perspectivas de cobertura para os alvos visados. Para sensores visuais rotacionáveis, as orientações de detecção podem ser ajustadas para otimizar cobertura e redundância, existindo diferentes abordagens de otimização disponíveis para solucionar esse problema. Particularmente, como diferentes parâmetros de otimização podem ser considerados, o problema de maximização de cobertura redundante pode ser tratado como um problema multiobjetivo, com algumas soluções potenciais a serem consideradas. Neste contexto, dois algoritmos evolutivos diferentes são propostos para calcular a maximização de cobertura redundante para visualização de alvos, pretendendo ser alternativas mais eficientes para algoritmos gulosos. Os resultados da simulação reforçam os benefícios de empregar algoritmos evolutivos para ajustes das orientações dos sensores, potencialmente beneficiando a implantação e o gerenciamento de redes de sensores visuais sem fio para diferentes aplicações.
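A stripped-down evolutionary loop for the redundant-coverage problem is sketched below in Python: each individual is a vector of pan angles for fixed camera positions, and fitness counts how many targets are seen by at least k cameras. Positions, field of view, and GA settings are invented, and the thesis's two multi-objective algorithms are considerably richer than this single-objective toy.

    import math
    import random

    cameras = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]      # fixed sensor positions
    targets = [(3.0, 2.0), (7.0, 1.5), (5.0, 5.0), (9.0, 4.0)]
    FOV, RANGE, K = math.radians(60), 9.0, 2             # full FOV, range, redundancy

    def sees(cam, pan, tgt):
        dx, dy = tgt[0] - cam[0], tgt[1] - cam[1]
        if math.hypot(dx, dy) > RANGE:
            return False
        diff = (math.atan2(dy, dx) - pan + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= FOV / 2

    def fitness(pans):
        """Number of targets covered by at least K cameras (redundant coverage)."""
        return sum(1 for t in targets
                   if sum(sees(c, p, t) for c, p in zip(cameras, pans)) >= K)

    def mutate(pans, sigma=0.3):
        return [p + random.gauss(0.0, sigma) for p in pans]

    population = [[random.uniform(-math.pi, math.pi) for _ in cameras]
                  for _ in range(30)]
    for _ in range(200):                                  # generations
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                         # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in range(20)]

    best = max(population, key=fitness)
    print("redundantly covered targets:", fitness(best), "of", len(targets))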
APA, Harvard, Vancouver, ISO, and other styles
45

Kim, Jae-Hak, and Jae-Hak Kim@anu edu au. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera is studied [60], and the epipolar geometry constraints of general camera models are theoretically derived. Methods for calibration, including a self-calibration method for general camera models, are studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from the wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution. In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular, in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed, modified SVD method. This is the first study finding linear relations for certain types of generalized cameras and performing experiments using our proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems, where cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast searching method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems, with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an earlier stage, we reduce the computation time of solving the LP. We tested our proposed methods by performing experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimation of the translation of omnidirectional cameras and in the estimation of the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using L∞ to estimate the relative motion of multi-camera systems could be achieved.
APA, Harvard, Vancouver, ISO, and other styles
46

Бабушкин, Б. Д., and B. D. Babushkin. "Разработка web-приложения решения задачи оптимизации состава многокомпонентной плавильной шихты : магистерская диссертация." Master's thesis, б. и, 2021. http://hdl.handle.net/10995/99885.

Full text
Abstract:
Магистерская диссертация посвящена разработке web-приложения решения задачи оптимизации состава многокомпонентной плавильной шихты. В ходе работы рассмотрены основные этапы разработки программного обеспечения: анализ предметной области; создание архитектуры программного обеспечения; разработка алгоритмического обеспечения и справочной документации; подготовка дистрибутива. В процессе выполнения работы реализованы все цели и задачи, стоящие перед проектом. Основными пользователями системы являются специалисты инженерно-технологического персонала доменных цехов, студенты. Научная новизна полученных в работе результатов заключается в применении нового метода эффективной организации и ведения специализированного алгоритмического и программного обеспечения решения задачи оптимизации расчета многокомпонентной плавильной шихты, ориентированного на повышение эффективности управления процессами получения качественных сплавов с использованием современных методов обработки информации: использование гибкой методологии разработки (Agile) и таск-трекера Atlassian JIRA для ведения проекта, взаимодействия с заказчиком во время разработки, отслеживания ошибок, визуального отображения задач и мониторинга процесса их выполнения; функциональное моделирование процессов для реализации web-приложения решения задачи оптимизации затрат на перевозку продукции на основе методологии IDEF0 и средства реализации Ramus Educational; использование методики коллективного владения программным кодом на основе сервиса (удаленного репозитория) Atlassian Bitbucket. Практическая значимость результатов заключается в том, что разработанное программное обеспечение позволит: производить расчёт оптимального состава многокомпонентной плавильной шихты; инженерно-технологическому персоналу литейных цехов металлургических предприятий сократить время на выполнение расчетов состава многокомпонентной плавильной шихты за счет реализации эргономичного web-интерфейса; специалистам отдела сопровождения информационных систем предоставляет условия для снижения трудозатрат на сопровождение, совершенствование и развитие системы с учетом пожеланий пользователей. Результаты работы могут быть использованы также в учебном процессе для обучения бакалавров и магистрантов по направлению «Информационные системы и технологии».
The master's thesis is devoted to the development of a web application for solving the problem of optimizing the composition of a multicomponent melting mixture. In the course of the work, the main stages of software development were considered: analysis of the subject area; creation of software architecture; development of algorithmic support and reference documentation; distribution kit preparation. In the process of performing the work, all the goals and objectives of the project have been realized. The main users of the system are specialists of the engineering and technological personnel of blast-furnace shops, students. The scientific novelty of the results obtained in the work lies in the application of a new method of effective organization and maintenance of specialized algorithmic and software for solving the problem of optimizing the calculation of a multicomponent melting charge, focused on increasing the efficiency of control of the processes of obtaining high-quality alloys using modern information processing methods: use of flexible development methodology (Agile) and the Atlassian JIRA task tracker for project management, interaction with the customer during development, tracking errors, visual display of tasks and monitoring the process of their implementation; functional modeling of processes for the implementation of a web-application for solving the problem of optimizing the costs of transportation of products based on the IDEF0 methodology and Ramus Educational tools; using the method of collective ownership of the program code based on the service (remote repository) Atlassian Bitbucket. The practical significance of the results lies in the fact that the developed software will allow: to calculate the optimal composition of the multicomponent melting mixture; to the engineering and technological personnel of foundries of metallurgical enterprises to reduce the time for performing calculations of the composition of a multicomponent melting mixture by implementing an ergonomic web interface; for specialists of the information systems support department, it provides conditions for reducing labor costs for maintaining, improving and developing the system, taking into account the wishes of users. The results of the work can also be used in the educational process for training bachelors and undergraduates in the direction of "Information systems and technologies".
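At its core, optimizing a multicomponent melting charge is a blending problem that can be posed as a linear program: minimise raw-material cost subject to the target chemistry of the melt. The scipy sketch below shows only that generic formulation with made-up materials, prices, and composition limits; it does not reproduce the web application or the algorithms developed in the thesis.

    from scipy.optimize import linprog

    # Charge materials: cost per kg and carbon / silicon content (mass fractions).
    materials = ["pig iron", "steel scrap", "ferrosilicon"]
    cost =      [0.45,        0.30,          1.20]          # EUR/kg (assumed)
    carbon =    [0.040,       0.002,         0.000]
    silicon =   [0.010,       0.003,         0.750]

    # Produce 1000 kg of melt with at least 3.0 % C and at least 1.8 % Si.
    A_eq = [[1.0, 1.0, 1.0]]
    b_eq = [1000.0]
    A_ub = [[-c for c in carbon],        # -sum(C_i x_i)  <= -0.030 * 1000
            [-s for s in silicon]]       # -sum(Si_i x_i) <= -0.018 * 1000
    b_ub = [-0.030 * 1000.0, -0.018 * 1000.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3, method="highs")
    for name, kg in zip(materials, res.x):
        print(f"{name:>12s}: {kg:7.1f} kg")
    print(f"charge cost: {res.fun:.2f} EUR")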
APA, Harvard, Vancouver, ISO, and other styles
47

Wholey, Leonard N. (Leonard Nathaniel). "Trajectory optimization with detection avoidance for visually identifying an aircraft." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32458.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005.
Includes bibliographical references (p. 115-118).
Unmanned aerial vehicles (UAVs) play an essential role for the US Armed Forces by performing missions deemed as "dull, dirty and dangerous" for a pilot. As the capabilities of UAVs expand, they will perform a broader range of missions such as air-to-air combat. The focus of this thesis is forming trajectories for the closing phase of an air-to-air combat scenario. A UAV should close with the suspected aircraft in a manner that allows a ground operator to visually identify the suspected aircraft while avoiding visual/electronic detection by the other pilot. This thesis applies and compares three methods for producing trajectories which enable a visual identification. The first approach is formulated as a mixed integer linear programming problem which can be solved in real time. However, there are limitations to the accuracy of a radar detection model formed with only linear equations, which might justify using a nonlinear programming formulation. With this approach the interceptor's radar cross section and the range between the suspected aircraft and the interceptor can be incorporated into the problem formulation. The main limitation of this method is that the optimization software might not be able to reach an optimal or even feasible solution online. The third applied method is trajectory interpolation. In this approach, trajectories with specified boundary values and dynamics are formed offline; online, the method interpolates between the given trajectories to obtain similar maneuvers with different initial conditions and end-states. With this method, because the number of calculations required to produce a feasible trajectory is known, the amount of time to calculate a trajectory can be estimated.
by Leonard N. Wholey.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
48

Penin, Bryan. "Contributions to optimal and reactive vision-based trajectory generation for a quadrotor UAV." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S101/document.

Full text
Abstract:
La vision représente un des plus importants signaux en robotique. Une caméra monoculaire peut fournir de riches informations visuelles à une fréquence raisonnable pouvant être utilisées pour la commande, l’estimation d’état ou la navigation dans des environnements inconnus par exemple. Il est cependant nécessaire de respecter des contraintes visuelles spécifiques telles que la visibilité de mesures images et les occultations durant le mouvement afin de garder certaines cibles visuelles dans le champ de vision. Les quadrirotors sont dotés de capacités de mouvement très réactives du fait de leur structure compacte et de la configuration des moteurs. De plus, la vision par une caméra embarquée (fixe) va subir des rotations dues au sous-actionnement du système. Dans cette thèsenous voulons bénéficier de l’agilité du quadrirotor pour réaliser plusieurs tâches de navigation basées vision. Nous supposons que l’estimation d’état repose uniquement sur la fusion capteurs d’une centrale inertielle (IMU) et d’une caméra monoculaire qui fournit des estimations de pose précises. Les contraintes visuelles sont donc critiques et difficiles dans un tel contexte. Dans cette thèse nous exploitons l’optimisation numérique pour générer des trajectoires faisables satisfaisant un certain nombre de contraintes d’état, d’entrées et visuelles non linéaires. A l’aide la platitude différentielle et de la paramétrisation par des B-splines nous proposons une stratégie de replanification performante inspirée de la commande prédictive pour générer des trajectoires lisses et agiles. Enfin, nous présentons un algorithme de planification en temps minimum qui supporte des pertes de visibilité intermittentes afin de naviguer dans des environnements encombrés plus vastes. Cette contribution porte l’incertitude de l’estimation d’état au niveau de la planification pour produire des trajectoires robustes et sûres. Les développements théoriques discutés dans cette thèse sont corroborés par des simulations et expériences en utilisant un quadrirotor. Les résultats reportés montrent l’efficacité des techniques proposées
Vision constitutes one of the most important cues in robotics. A single monocular camera can provide rich visual information at a reasonable rate that can be used as a feedback for control, state estimation of mobile robots or safe navigation in unknown environments for instance. However, it is necessary to satisfy particular visual constraints on the image such as visibility and occlusion constraints during motion to keep some visual targets visible. Quadrotors are endowed with very reactive motion capabilities due to their compact structure and motor configuration. Moreover, vision from a (fixed) on-board camera will suffer from rotation motions due to the system underactuation. In this thesis, we want to benefit from the system aggressiveness to perform several vision-based navigation tasks. We assume state estimation relies solely on sensor fusion of an onboard inertial measurement unit (IMU) and a monocular camera that provides reliable pose estimates. Therefore, visual constraints are challenging and critical in this context. In this thesis we exploit numerical optimization to design feasible trajectories satisfying several state, input and visual nonlinear constraints. With the help of differential flatness and B-spline parametrization we will propose an efficient replanning strategy inspired form Model Predictive Control to generate smooth and agile trajectories. Finally, we propose a minimum-time planning algorithm that handles intermittent visibility losses in order to navigate in larger cluttered environments. This contribution brings state estimation uncertainty at the planning stage to produce robust and safe trajectories. All the theoretical developments discussed in this thesis are corroborated by simulations and experiments run by using a quadrotor UAV. The reported results show the effectiveness of proposed techniques
APA, Harvard, Vancouver, ISO, and other styles
49

Жужгов, А. И., and A. I. Zhuzhgov. "Разработка web-приложения решения задачи оптимизации затрат на перевозку продукции : магистерская диссертация." Master's thesis, б. и, 2021. http://hdl.handle.net/10995/99886.

Full text
Abstract:
The object of the research is the transportation process; the subject is the set of consumption and production points and the automation of calculating the optimal transportation cost. The assigned tasks are: 1. The ability to enter, adjust and save optimization calculation variants. 2. Display of the calculation results in graphical form on the user form. The aim of the work is to create an informational web application that calculates the optimal cost of transporting products and presents the results to the user graphically. The scientific novelty of the results lies in applying a new method for effectively organizing and maintaining the specialized algorithmic and software support for solving the transportation cost optimization problem, aimed at improving the efficiency of cargo transportation management using modern information processing methods: the Agile development methodology and the Atlassian JIRA task tracker for project management, customer interaction during development, error tracking, visual display of tasks and monitoring of their execution; functional modelling of the processes implemented in the web application based on the IDEF0 methodology and the Ramus Educational tool; and collective ownership of the program code based on the Atlassian Bitbucket remote repository service. The practical significance of the results is that the developed software makes it possible to calculate the optimal transportation cost for any number of production points; allows specialists of the transport and logistics operations department to reduce the time spent producing reporting documents and searching for the required factual reporting information thanks to an ergonomic web interface; and gives specialists of the information systems support department conditions for reducing the labour costs of maintaining, improving and developing the system in line with user requests. The results of the work can also be used in the educational process for training bachelor's and master's students in the field of "Information Systems and Technologies".
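The optimization task behind this web application is, in essence, the classical transportation problem. The sketch below is an assumed formulation, not the thesis code: the supply, demand and cost values are illustrative, and scipy.optimize.linprog is used as one possible linear-programming solver.

```python
# Minimal sketch (illustrative data): the classical transportation problem of
# shipping goods from production points to consumption points at minimum cost.
import numpy as np
from scipy.optimize import linprog

supply = np.array([30, 40, 20])          # units available at each production point
demand = np.array([20, 30, 25, 15])      # units required at each consumption point
cost = np.array([[8, 6, 10, 9],          # cost[i, j]: shipping one unit from i to j
                 [9, 12, 13, 7],
                 [14, 9, 16, 5]], float)

m, n = cost.shape
c = cost.ravel()                          # decision variables x[i, j], flattened row-wise

# Supply constraints: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

# Demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
plan = res.x.reshape(m, n)
print("Optimal shipping plan:\n", plan.round(1))
print("Minimum total cost:", res.fun)
```

The resulting shipping plan is exactly the kind of data the described application would render graphically on the user form.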
APA, Harvard, Vancouver, ISO, and other styles
50

Jackson, James Scott. "Enabling Autonomous Operation of Micro Aerial Vehicles Through GPS to GPS-Denied Transitions." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/8709.

Full text
Abstract:
Micro aerial vehicles and other autonomous systems have the potential to truly transform life as we know it; however, much of that potential remains unrealized because reliable navigation is still an unsolved problem with significant challenges. This dissertation presents solutions to many aspects of autonomous navigation. First, it presents ROSflight, a software and hardware architecture that allows for rapid prototyping and experimentation of autonomy algorithms on MAVs with lightweight, efficient flight control. Next, this dissertation presents improvements to the state of the art in optimal control of quadrotors by utilizing the error-state formulation frequently used in state estimation. It is shown that performing optimal control directly over the error state results in a vastly more computationally efficient system than competing methods while also dealing with the non-vector rotation components of the state in a principled way. In addition, real-time robust flight planning is considered with a method to navigate cluttered, potentially unknown scenarios with real-time obstacle avoidance. Robust state estimation is a critical component of reliable operation, and this dissertation focuses on improving the robustness of visual-inertial state estimation in a filtering framework by extending the state of the art to include better modeling and sensor fusion. Further, this dissertation takes concepts from the visual-inertial estimation community and applies them to tightly-coupled GNSS visual-inertial state estimation. This method is shown to demonstrate significantly more reliable state estimation than visual-inertial or GNSS-inertial state estimation alone in a hardware experiment involving a GNSS to GNSS-denied transition, flying under a building and back out into open sky. Finally, this dissertation explores a novel method to combine measurements from multiple agents into a coherent map. Traditional approaches to this problem attempt to solve for the positions of multiple agents at specific times in their trajectories. This dissertation instead solves the problem in a relative context, resulting in a much more robust approach that is able to handle much greater initial error than traditional approaches.
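The error-state idea referred to in this abstract can be illustrated with a small sketch. This is not the dissertation's code: it only shows, under simple assumptions, how rotations are kept on the manifold while corrections are computed in a minimal 3-vector error space via the exponential and logarithm maps, using SciPy's Rotation class for convenience.

```python
# Minimal sketch (illustrative): error-state operators on SO(3). The state is a
# rotation; optimization/correction happens in a local 3-dimensional error space.
import numpy as np
from scipy.spatial.transform import Rotation as R

def boxplus(Rot, delta):
    """Apply a small 3-vector error to a rotation: R [+] delta = R * Exp(delta)."""
    return Rot * R.from_rotvec(delta)

def boxminus(R1, R2):
    """Local error between two rotations: R1 [-] R2 = Log(R2^{-1} * R1)."""
    return (R2.inv() * R1).as_rotvec()

# Example: correct a rotation estimate toward a reference rotation using the
# minimal 3-dimensional error state instead of the 4-dimensional quaternion.
R_true = R.from_euler("xyz", [10, -5, 30], degrees=True)
R_est = R.from_euler("xyz", [12, -7, 27], degrees=True)

for k in range(5):
    err = boxminus(R_true, R_est)      # singularity-free 3-vector error
    R_est = boxplus(R_est, 0.5 * err)  # apply half the correction each iteration
    print(f"iter {k}: |error| = {np.linalg.norm(err):.4f} rad")
```

The same pattern (compose on the manifold, differentiate and correct in the tangent space) is what makes error-state formulations attractive for both filtering and optimal control.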
APA, Harvard, Vancouver, ISO, and other styles
