Academic literature on the topic 'Computer vision-based framework'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer vision-based framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computer vision-based framework"

1

Almaghout, K., and A. Klimchik. "Vision-Based Robotic Comanipulation for Deforming Cables." Nelineinaya Dinamika 18, no. 5 (2022): 0. http://dx.doi.org/10.20537/nd221213.

Full text
Abstract:
Although deformable linear objects (DLOs), such as cables, are widely used across many fields and activities, the robotic manipulation of these objects is considerably more complex than rigid-body manipulation and remains an open challenge. In this paper, we introduce a new framework using two robotic arms cooperatively manipulating a DLO from an initial shape to a desired one. Based on visual servoing and computer vision techniques, a perception approach is proposed to detect and sample the DLO as a set of virtual feature points. Then a manipulation planning approach is introduced to map between the motion of the manipulators' end effectors and the DLO points by a Jacobian matrix. To avoid excessive stretching of the DLO, the planning approach generates a path for each DLO point, forming profiles between the initial and desired shapes. It is guaranteed that all these intershape profiles are reachable and maintain the cable length constraint. The framework and the aforementioned approaches are validated in real-life experiments.
APA, Harvard, Vancouver, ISO, and other styles
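As a rough, hypothetical sketch of the planning idea this abstract describes (feature points stepped along straight-line profiles between shapes, with a Jacobian relating point motion to end-effector motion; the function names and matrix shapes are illustrative, not the authors' code):

```python
import numpy as np

# Illustrative sketch only: step sampled DLO feature points along
# linear profiles toward the desired shape, and map the resulting
# point displacements to end-effector motion via a deformation
# Jacobian (assumed known here, which is a simplification).

def step_toward_shape(points, desired, alpha=0.1):
    """Move each sampled DLO feature point a fraction alpha along the
    straight-line profile from its current to its desired position."""
    return points + alpha * (desired - points)

def end_effector_motion(J, dp):
    """Map stacked feature-point displacements dp to end-effector
    motion using the pseudo-inverse of the deformation Jacobian J."""
    return np.linalg.pinv(J) @ dp
```

Iterating `step_toward_shape` traces the intershape profiles; in a real system the Jacobian would be estimated online rather than assumed.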
2

Ohta, Yuichi. "3D Image Media and Computer Vision -From CV as Robot Technology to CV as Media Technology-." Journal of Robotics and Mechatronics 9, no. 2 (April 20, 1997): 92–97. http://dx.doi.org/10.20965/jrm.1997.p0092.

Full text
Abstract:
The possibility of applying computer vision technology to the development of a new image medium is discussed. Computer vision has been studied as a sensor technology between the real world and computers. On the other hand, computer graphics is the interface technology between computers and human beings. The invention of "3D photography" based on computer vision technology will realize a new 3D image medium which connects the real world and human beings via computer. In such a framework, computer vision should be studied as a media technology rather than a robot technology.
3

Morley, Terence, Tim Morris, and Martin Turner. "A Computer Vision Encyclopedia-Based Framework with Illustrative UAV Applications." Computers 10, no. 3 (March 4, 2021): 29. http://dx.doi.org/10.3390/computers10030029.

Full text
Abstract:
This paper presents the structure of an encyclopedia-based framework (EbF) in which to develop computer vision systems that incorporate the principles of agile development with focussed knowledge-enhancing information. The novelty of the EbF is that it both specifies the use of drop-in modules, to enable the speedy implementation and modification of systems by the operator, and incorporates knowledge of the input image-capture devices and presentation preferences. This means that the system includes automated parameter selection and operator advice and guidance. Central to this knowledge-enhanced framework is an encyclopedia that is used to store all information pertaining to the current system operation and can be used by all of the imaging modules and computational runtime components. This ensures that they can adapt to changes within the system or its environment. We demonstrate the implementation of this system over three use cases in computer vision for unmanned aerial vehicles (UAVs), showing how easy it is to control and set up by novice operators utilising simple computational wrapper scripts.
4

SHA, Liang, Guijin WANG, Xinggang LIN, and Kongqiao WANG. "A Framework of Real Time Hand Gesture Vision Based Human-Computer Interaction." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A, no. 3 (2011): 979–89. http://dx.doi.org/10.1587/transfun.e94.a.979.

Full text
5

Ivorra, Eugenio, Mario Ortega, José Catalán, Santiago Ezquerro, Luis Lledó, Nicolás Garcia-Aracil, and Mariano Alcañiz. "Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms." Sensors 18, no. 8 (July 24, 2018): 2408. http://dx.doi.org/10.3390/s18082408.

Full text
Abstract:
Assistive technologies help persons with disabilities to improve their accessibility in all aspects of their lives. The AIDE European project contributes to the improvement of current assistive technologies by developing and testing a modular and adaptive multimodal interface customizable to the individual needs of people with disabilities. This paper describes the computer vision algorithms of the multimodal interface developed within the AIDE European project. The main contribution of this computer vision part is its integration with the robotic system and with the other sensory systems (electrooculography (EOG) and electroencephalography (EEG)). The technical challenges solved herein are the algorithm for the selection of objects using the gaze, and especially the state-of-the-art algorithm for the efficient detection and pose estimation of textureless objects. These algorithms were tested in real conditions and thoroughly evaluated both qualitatively and quantitatively. The experimental results of the object selection algorithm were excellent (object selection rates over 90%) in less than 12 s. The detection and pose estimation algorithms, evaluated using the LINEMOD database, performed similarly to the state-of-the-art method and were the most computationally efficient.
6

Ataş, Musa. "Open Cezeri Library: A novel java based matrix and computer vision framework." Computer Applications in Engineering Education 24, no. 5 (May 17, 2016): 736–43. http://dx.doi.org/10.1002/cae.21745.

Full text
7

Sharma, Rajeev, and Jose Molineros. "Computer Vision-Based Augmented Reality for Guiding Manual Assembly." Presence: Teleoperators and Virtual Environments 6, no. 3 (June 1997): 292–317. http://dx.doi.org/10.1162/pres.1997.6.3.292.

Full text
Abstract:
Augmented reality (AR) has the goal of enhancing a person's perception of the surrounding world, unlike virtual reality (VR), which aims at replacing the perception of the world with an artificial one. An important issue in AR is making the virtual world sensitive to the current state of the surrounding real world as the user interacts with it. To provide the appropriate augmentation stimulus at the right position and time, the system needs some sensor to interpret the surrounding scene. Computer vision holds great potential in providing the necessary interpretation of the scene. While a computer vision-based general interpretation of a scene is extremely difficult, constraints from the assembly domain and a specific marker-based coding scheme are used to develop an efficient and practical solution. We consider the problem of scene augmentation in the context of a human engaged in assembling a mechanical object from its components. Concepts from robot assembly planning are used to develop a systematic framework for presenting augmentation stimuli for this assembly domain. An experimental prototype system, VEGAS (Visual Enhancement for Guiding Assembly Sequences), is described that implements some of the AR concepts for guiding assembly using computer vision.
8

Saha, Sourav, Sahibjot Kaur, Jayanta Basak, and Priya Ranjan Sinha Mahapatra. "A Computer Vision Framework for Automated Shape Retrieval." American Journal of Advanced Computing 1, no. 1 (January 1, 2020): 1–15. http://dx.doi.org/10.15864/ajac.1108.

Full text
Abstract:
With the increasing number of images generated every day, textual annotation of images for image mining becomes impractical and inefficient. Thus, computer vision-based image retrieval has received considerable interest in recent years. One of the fundamental characteristics of any image representation of an object is its shape, which plays a vital role in recognizing the object at a primitive level. Keeping this view as the primary motivational focus, we propose a shape-descriptive framework using a multilevel tree-structured representation called Hierarchical Convex Polygonal Decomposition (HCPD). Such a framework explores different degrees of convexity of an object's contour segments in the course of its construction. The convex and non-convex segments of an object's contour are discovered at every level of the HCPD-tree generation by repetitive convex-polygonal approximation of contour segments. We have also presented a novel shape-string-encoding scheme for representing the HCPD-tree which allows us to use the popular concept of string-edit distance to compute a shape similarity score between two objects. The proposed framework, when deployed for the similar-shape retrieval task, demonstrates reasonably good performance in comparison with other popular shape-retrieval algorithms.
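The string-edit-distance comparison this abstract mentions can be illustrated with the standard Levenshtein dynamic program (the shape-string alphabet below is hypothetical; the HCPD encoding itself is not reproduced here):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming; two
    encoded shape strings could be compared this way."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # cost of deleting i characters
    for j in range(n + 1):
        dp[0][j] = j          # cost of inserting j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def shape_similarity(a, b):
    """Normalise the edit distance into a [0, 1] similarity score."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

For example, `edit_distance("kitten", "sitting")` is 3, and identical shape strings score a similarity of 1.0.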
9

Farahbakhsh, Ehsan, Rohitash Chandra, Hugo K. H. Olierook, Richard Scalzo, Chris Clark, Steven M. Reddy, and R. Dietmar Müller. "Computer vision-based framework for extracting tectonic lineaments from optical remote sensing data." International Journal of Remote Sensing 41, no. 5 (October 11, 2019): 1760–87. http://dx.doi.org/10.1080/01431161.2019.1674462.

Full text
10

Zhuang, Yizhou, Weimin Chen, Tao Jin, Bin Chen, He Zhang, and Wen Zhang. "A Review of Computer Vision-Based Structural Deformation Monitoring in Field Environments." Sensors 22, no. 10 (May 16, 2022): 3789. http://dx.doi.org/10.3390/s22103789.

Full text
Abstract:
Computer vision-based structural deformation monitoring techniques have been studied in a large number of applications in the field of structural health monitoring (SHM). Numerous laboratory tests and short-term field applications have contributed to the formation of the basic framework of computer vision deformation monitoring systems, moving towards long-term stable monitoring in field environments. The major contribution of this paper is to analyze the mechanisms influencing the measuring accuracy of computer vision deformation monitoring systems from two perspectives, the physical impact and the target-tracking algorithm impact, and to present existing solutions. The physical impact includes hardware and environmental impacts, while the target-tracking algorithm impact covers image preprocessing, measurement efficiency and accuracy. The applicability and limitations of computer vision monitoring algorithms are summarized.
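A minimal example of the kind of target-tracking building block such monitoring systems rely on is exhaustive normalized cross-correlation (NCC) template matching over a small search window; this is a generic sketch, not an algorithm taken from the review:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of two equal-sized grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def track(frame, template, top_left_guess, search=5):
    """Exhaustive NCC search in a window around the previous target
    location; returns the best-matching top-left corner (y, x)."""
    th, tw = template.shape
    y0, x0 = top_left_guess
    best, best_pos = -2.0, top_left_guess
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue  # skip windows that fall outside the frame
            score = ncc(frame[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Tracking the target across frames and converting its pixel displacement to physical units is where the accuracy factors discussed in the review come into play.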

Dissertations / Theses on the topic "Computer vision-based framework"

1

Çelik, Turgay. "A multiresolution framework for computer vision-based autonomous navigation." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/36782/.

Full text
Abstract:
Autonomous navigation, e.g., for mobile robots and vehicle driver assistance, relies on intelligent processing of data acquired from different resources such as sensor networks, laser scanners, and video cameras. Owing to their low cost and easy installation, video cameras are the most feasible. Thus, there is a need for robust computer vision algorithms for autonomous navigation. This dissertation investigates the use of multiresolution image analysis and proposes a framework for autonomous navigation. Multiresolution image representation is achieved via the complex wavelet transform to benefit from its limited data redundancy, approximate shift invariance and improved directionality. Image enhancement is developed to enhance image features for navigation and other applications. Colour constancy is developed to correct colour aberrations so that colour information can be utilized as a robust feature to identify drivable regions. A novel algorithm which combines multiscale edge information with contextual information through colour similarity is developed for unsupervised image segmentation. Texture analysis is accomplished through a novel multiresolution texture classifier. Each component of the framework is initially evaluated independently of the other components and on various, more general applications. The framework as a whole is applied to drivable region identification and obstacle detection. The drivable regions are identified using colour information. An obstacle is defined as a vehicle on the road or any other object that cannot be part of a road. The multiresolution texture classifier and machine learning algorithms are applied to learn the appearance of vehicles for the purpose of vehicle detection.
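The colour-based drivable-region idea can be sketched very simply: label a pixel drivable when its colour lies close to a reference road colour. This is an illustrative stand-in for the thesis's colour-constancy-based method, with a hypothetical distance threshold:

```python
import numpy as np

def drivable_mask(image, road_color, threshold=30.0):
    """Boolean mask of pixels whose colour is within a Euclidean
    distance threshold of a reference road colour.
    image: H x W x 3 float array; road_color: length-3 sequence."""
    diff = image - np.asarray(road_color, dtype=float)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist < threshold
```

In practice the reference colour would be sampled from a region known to be road (e.g. directly ahead of the vehicle) after colour-constancy correction.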
2

Berry, David T. "A knowledge-based framework for machine vision." Thesis, Heriot-Watt University, 1987. http://hdl.handle.net/10399/1022.

Full text
3

Caudle, Eric Weaver. "An evaluation framework for designing a night vision, computer-based trainer." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA278005.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, December 1993.
Thesis advisor(s): Kishore Sengupta ; Carl R. Jones. "December 1993." Includes bibliographical references. Also available online.
4

Abusaleh, Sumaya. "A Novel Computer Vision-Based Framework for Supervised Classification of Energy Outbreak Phenomena." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10746723.

Full text
Abstract:

Today, there is a need to implement a proper design of an adequate surveillance system that detects and categorizes explosion phenomena in order to identify the explosion risk and reduce its impact through mitigation and preparedness. This dissertation introduces state-of-the-art classification of explosion phenomena through pattern recognition techniques on color images. Consequently, we present a novel taxonomy for explosion phenomena. In particular, we demonstrate different aspects of volcanic eruptions and nuclear explosions of the proposed taxonomy, including scientific formation, real examples, existing monitoring methodologies, and their limitations. In addition, we propose a novel framework designed to categorize explosion phenomena against non-explosion phenomena. Moreover, a new dataset, Volcanic and Nuclear Explosions (VNEX), was collected. VNEX totals 10,654 samples and includes the following patterns: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds.

In order to achieve high reliability in the proposed explosion classification framework, we propose to employ various feature extraction approaches. Thus, we calculated the intensity levels to extract the texture features. Moreover, we utilized the YCbCr color model to calculate the amplitude features. We also employed the Radix-2 Fast Fourier Transform to compute the frequency features. Furthermore, we used the uniform local binary patterns technique to compute the histogram features. Additionally, these discriminative features were combined into a single input vector that provides valuable insight into the images, and then fed into the following classification techniques: Euclidean distance, correlation, k-nearest neighbors, one-against-one multiclass support vector machines with different kernels, and the multilayer perceptron model. Evaluation results show the design of the proposed framework is effective and robust. Furthermore, a trade-off between the computation time and the classification rate was achieved.

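One of the feature types listed, the YCbCr amplitude features, starts from an RGB-to-YCbCr conversion. A minimal sketch using the standard full-range JPEG coefficients (the dissertation's exact pipeline may differ):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range JPEG RGB -> YCbCr conversion for an H x W x 3 float
    array; the Cb/Cr planes could then feed amplitude features."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

A neutral gray pixel maps to Y equal to its intensity with Cb and Cr both at the 128 midpoint, which is a quick sanity check for the coefficients.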
5

Fang, Bing. "A Framework for Human Body Tracking Using an Agent-based Architecture." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77135.

Full text
Abstract:
The purpose of this dissertation is to present our agent-based human tracking framework, and to evaluate the results of our work in light of the previous research in the same field. Our agent-based approach departs from a process-centric model where the agents are bound to specific processes, and introduces a novel model by which agents are bound to the objects or sub-objects being recognized or tracked. The hierarchical agent-based model allows the system to handle a variety of cases, such as single people or multiple people in front of single or stereo cameras. We employ the job-market model for agents' communication. In this dissertation, we will present several experiments in detail, which demonstrate the effectiveness of the agent-based tracking system. Per our research, the agents are designed to be autonomous, self-aware entities that are capable of communicating with other agents to perform tracking within agent coalitions. Each agent with high-level abstracted knowledge seeks evidence for its existence from the low-level features (e.g. motion vector fields, color blobs) and its peers (other agents representing body-parts with which it is compatible). The power of the agent-based approach is its flexibility by which the domain information may be encoded within each agent to produce an overall tracking solution.
Ph. D.
6

Basso, Maik. "A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/179536.

Full text
Abstract:
Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and developing a perception of the environment from image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is a growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally take three steps in their operation: data acquisition in numerical form, data processing, and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After data acquisition, the embedded computer processes the data by running algorithms for measurement (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The resulting processed data are analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. To integrate computer vision systems with the different UAV platforms, this work proposes the development of a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs intended for application in precision agriculture. The first algorithm calculates a reflectance coefficient used to perform the punctual, self-regulated and efficient application of agrochemicals. The second algorithm identifies crop lines to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results in the embedded hardware implementation.
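The crop-line identification step can be approximated with a common vegetation index; the excess-green index (ExG = 2G - R - B) used below is a well-known stand-in, not necessarily the reflectance coefficient the thesis defines:

```python
import numpy as np

def excess_green(rgb):
    """Per-pixel ExG vegetation index for an H x W x 3 float image
    with channel values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def crop_line_columns(rgb, threshold=0.1):
    """Columns whose mean ExG exceeds the threshold are treated as
    candidate crop lines for guidance (threshold is illustrative)."""
    exg = excess_green(rgb)
    return np.where(exg.mean(axis=0) > threshold)[0]
```

On a nadir image of row crops, the detected columns approximate the plant rows, and their centers could serve as a lateral guidance reference for the UAV.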
7

Sanders, Nathaniel. "A CAMERA-BASED ENERGY RELAXATION FRAMEWORK TO MINIMIZE COLOR ARTIFACTS IN A PROJECTED DISPLAY." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/431.

Full text
Abstract:
We introduce a technique to automatically correct color inconsistencies in a display composed of one or more digital light projectors (DLPs). The method is agnostic to the source of error and can detect and address color problems from a number of sources. Examples include inter- and intra-projector color differences, display surface markings, and environmental lighting differences on the display. In contrast to methods that discover and map all colors into the greatest common color space, we minimize local color discontinuities to create color seamlessness while remaining tolerant to significant color error. The technique makes use of a commodity camera and high-dynamic-range sensing to measure color gamuts at many different spatial locations. A differentiable energy function is defined that combines both a smoothness and a data term. This energy function is globally minimized through the successive application of projective warps defined using gradient descent. After convergence the warps can be applied at runtime to minimize color defects in the display. The framework is demonstrated on displays that suffer from several sources of color error.
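The smoothness-plus-data energy minimized by gradient descent can be illustrated in one dimension (a toy analogue, not the thesis's projective-warp formulation): E(u) = lam * sum((u_i - d_i)^2) + sum((u_{i+1} - u_i)^2), relaxed by explicit gradient steps.

```python
import numpy as np

def relax(data, lam=0.5, step=0.05, iters=2000):
    """Gradient descent on a 1-D smoothness-plus-data energy:
    E(u) = lam * ||u - data||^2 + sum of squared finite differences."""
    u = data.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * lam * (u - data)                          # data term
        grad[1:-1] += 2.0 * (2.0 * u[1:-1] - u[:-2] - u[2:])   # smoothness, interior
        grad[0] += 2.0 * (u[0] - u[1])                         # smoothness, left edge
        grad[-1] += 2.0 * (u[-1] - u[-2])                      # smoothness, right edge
        u -= step * grad
    return u
```

Relaxing a step-shaped signal spreads the discontinuity over neighboring samples while staying anchored to the data, which is the 1-D analogue of trading local color seamlessness against fidelity to the measured gamuts.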
8

Gongbo, Liang. "Pedestrian Detection Using Basic Polyline: A Geometric Framework for Pedestrian Detection." TopSCHOLAR®, 2016. http://digitalcommons.wku.edu/theses/1582.

Full text
Abstract:
Pedestrian detection has been an active research area in computer vision in recent years. It has many applications that could improve our lives, such as video surveillance security, auto-driving assistance systems, etc. Approaches to pedestrian detection can be roughly divided into two categories: shape-based approaches and appearance-based approaches. In the literature, most approaches are appearance-based. Shape-based approaches are usually integrated with an appearance-based approach to speed up the detection process. In this thesis, I propose a shape-based pedestrian detection framework using the geometric features of humans to detect pedestrians. This framework includes three main steps. Given a static image, i) generate the edge image of the given image, ii) according to the edge image, extract the basic polylines, and iii) use the geometric relationships among the polylines to detect pedestrians. The detection result obtained by the proposed framework is promising. The framework was compared with the algorithm introduced by Dalal and Triggs [7]: the proposed algorithm increased the true-positive detections by 47.67% and reduced the false-positive detections by 41.42%.
9

Hoke, Jaclyn Ann. "A wavelet-based framework for efficient processing of digital imagery with an application to helmet-mounted vision systems." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6435.

Full text
Abstract:
Image acquisition devices, as well as image processing theory, algorithms, and hardware have advanced to the point that low Size-Weight-and-Power, real-time embedded imaging systems have become a reality. To be practical in a fielded application, an image processing sub-system must be able to conduct multiple, often highly complex tasks, in real-time. The design and construction of such systems have to address technical challenges, including real-time, low-latency processing and fixed-point algorithms in order to leverage lowest-power computing platforms. Further design complications stem from the reality that state-of-the-art image processing algorithms take very different forms, greatly complicating low-latency implementations. This dissertation presents the design and preliminary implementation of an image processing sub-system that minimizes computational complexity and power consumption by eliminating repeated transformations between processing domains. Specifically, this processing chain utilizes the LeGall 5/3 wavelet as the basis for applying multiple algorithms within a single domain. The wavelet processing chain is compared, in terms of image quality, computational cost, and power consumption, to a benchmark processing chain comprised of algorithms intended to produce high quality image results. Image quality is assessed through a subject matter expert evaluation. Computational cost is analyzed theoretically and empirically, and the power consumption is derived from the execution times and characteristics of the processing devices. The results demonstrate significant promise, but several areas for additional work have been identified.
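The LeGall 5/3 wavelet at the core of this processing chain has a simple reversible integer lifting implementation; the sketch below shows one decomposition level with symmetric boundary handling (an illustration, not the dissertation's code):

```python
def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform on an
    even-length integer signal; returns (lowpass, highpass)."""
    assert len(x) % 2 == 0
    half = len(x) // 2
    d = []  # highpass: predict odd samples from even neighbours
    for n in range(half):
        right = x[2 * n + 2] if 2 * n + 2 < len(x) else x[2 * n]
        d.append(x[2 * n + 1] - (x[2 * n] + right) // 2)
    s = []  # lowpass: update even samples from highpass neighbours
    for n in range(half):
        left = d[n - 1] if n > 0 else d[0]
        s.append(x[2 * n] + (left + d[n] + 2) // 4)
    return s, d

def legall53_inverse(s, d):
    """Exactly invert legall53_forward (perfect reconstruction)."""
    x = [0] * (2 * len(s))
    for n in range(len(s)):        # undo the update step first
        left = d[n - 1] if n > 0 else d[0]
        x[2 * n] = s[n] - (left + d[n] + 2) // 4
    for n in range(len(d)):        # then undo the prediction step
        right = x[2 * n + 2] if 2 * n + 2 < len(x) else x[2 * n]
        x[2 * n + 1] = d[n] + (x[2 * n] + right) // 2
    return x
```

Because every lifting step is exactly mirrored by the inverse, the transform is lossless in integer arithmetic, which is what makes it attractive for low-power, fixed-point imaging pipelines.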
10

Strand, Mattias. "A Software Framework for Facial Modelling and Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54563.

Full text
Abstract:

The WinCandide application, a platform for face tracking and model-based coding, had become outdated and needed to be upgraded. This report is based on the work of investigating open-source GUI and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUIs are of special interest.


Books on the topic "Computer vision-based framework"

1

Panin, Giorgio. Model-based visual tracking: The OpenTL framework. Hoboken, N.J: Wiley, 2011.

Find full text
2

Panin, Giorgio. Model-Based Visual Tracking: The OpenTL Framework. John Wiley & Sons, Incorporated, 2011.

Find full text

Book chapters on the topic "Computer vision-based framework"

1

Chaudhary, Rashmi, and Manoj Kumar. "Computer Vision-Based Framework for Anomaly Detection." In Lecture Notes in Networks and Systems, 549–56. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0666-3_45.

Full text
2

Ye, Guangqi, Jason Corso, Darius Burschka, and Gregory D. Hager. "VICs: A Modular Vision-Based HCI Framework." In Lecture Notes in Computer Science, 257–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36592-3_25.

Full text
3

Cormier, Michael, Robin Cohen, Richard Mann, Kamal Rahim, and Donglin Wang. "A Robust Vision-Based Framework for Screen Readers." In Computer Vision - ECCV 2014 Workshops, 555–69. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16199-0_39.

Full text
4

Nunes, Urbano Miguel, and Yiannis Demiris. "Entropy Minimisation Framework for Event-Based Vision Model Estimation." In Computer Vision – ECCV 2020, 161–76. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58558-7_10.

Full text
5

Gupta, Savyasachi, Dhananjai Chand, and Ilaiah Kavati. "Computer Vision based Animal Collision Avoidance Framework for Autonomous Vehicles." In Communications in Computer and Information Science, 237–48. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1103-2_21.

Full text
6

Lam, Meng Chun, Anton Satria Prabuwono, Haslina Arshad, and Chee Seng Chan. "A Real-Time Vision-Based Framework for Human-Robot Interaction." In Lecture Notes in Computer Science, 257–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25191-7_25.

Full text
7

Crispim-Junior, Carlos Fernando, and Francois Bremond. "Uncertainty Modeling Framework for Constraint-Based Elementary Scenario Detection in Vision Systems." In Computer Vision - ECCV 2014 Workshops, 269–82. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16181-5_19.

Full text
8

Santa Cruz, Ulices, and Yasser Shoukry. "NNLander-VeriF: A Neural Network Formal Verification Framework for Vision-Based Autonomous Aircraft Landing." In Lecture Notes in Computer Science, 213–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06773-0_11.

9. Chen, Xiaohong, Zhengyao Lin, Minh-Thai Trinh, and Grigore Roşu. "Towards a Trustworthy Semantics-Based Language Framework via Proof Generation." In Computer Aided Verification, 477–99. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_23.

Abstract: We pursue the vision of an ideal language framework, where programming language designers only need to define the formal syntax and semantics of their languages, and all language tools are generated automatically by the framework. Because such a framework is highly complex, ensuring its trustworthiness and establishing the correctness of the autogenerated language tools is a major challenge. In this paper, we propose an innovative approach based on proof generation. The key idea is to generate proof objects as correctness certificates for each individual task that the language tools conduct, on a case-by-case basis, and to use a trustworthy proof checker to check the proof objects. This way, we avoid formally verifying the entire framework, which is practically impossible, and thus can make the language framework both practical and trustworthy. As a first step, we formalize program execution as mathematical proofs and generate their complete proof objects. Experimental results show that the performance of our proof-object generation and proof checking is very promising.
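The certificate-per-task idea in this abstract can be illustrated with a toy sketch. This is not the matching-logic formalism the paper uses; the rule names and the counter "language" are purely hypothetical. The point is only the division of trust: the (untrusted) interpreter emits one proof step per rewrite, and a small independent checker re-validates every step.

```python
# Hypothetical rewrite rules for a toy counter language (illustrative only).
RULES = {
    "inc": lambda s: s + 1,
    "dec": lambda s: s - 1,
    "double": lambda s: s * 2,
}

def execute(state, program):
    """Untrusted interpreter: runs the program and emits a proof object,
    one (rule, before, after) certificate per rewrite step."""
    proof = []
    for rule in program:
        new_state = RULES[rule](state)
        proof.append((rule, state, new_state))
        state = new_state
    return state, proof

def check(initial, final, proof):
    """Trusted checker: small and independent of the interpreter. Accepts
    only if every step follows from a rule and the steps chain correctly."""
    state = initial
    for rule, before, after in proof:
        if before != state or RULES[rule](before) != after:
            return False
        state = after
    return state == final

final, proof = execute(3, ["inc", "double", "dec"])
assert final == 7                    # 3 -> 4 -> 8 -> 7
assert check(3, final, proof)        # the genuine certificate is accepted
# A tampered certificate is rejected:
assert not check(3, final, [("inc", 3, 5)] + proof[1:])
```

Only `check` needs to be trusted; bugs in `execute` surface as rejected certificates rather than silently wrong results, which is the "case-by-case" trust argument the paper makes at full scale.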
10. Ghanshala, Tejasvi, Vikas Tripathi, and Bhaskar Pant. "An Effective Vision Based Framework for the Identification of Tuberculosis in Chest X-Ray Images." In Communications in Computer and Information Science, 36–45. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6634-9_4.


Conference papers on the topic "Computer vision-based framework"

1. Cernica, Ionut, and Nirvana Popescu. "Computer Vision Based Framework For Detecting Phishing Webpages." In 2020 19th RoEduNet Conference: Networking in Education and Research (RoEduNet). IEEE, 2020. http://dx.doi.org/10.1109/roedunet51892.2020.9324850.

2. Shahid, Aasma, Alina Tayyab, Musfira Mehmood, Rida Anum, Abdul Jalil, Ahmad Ali, Haider Ali, and Javed Ahmed. "Computer vision based intruder detection framework (CV-IDF)." In 2017 2nd International Conference on Computer and Communication Systems (ICCCS). IEEE, 2017. http://dx.doi.org/10.1109/ccoms.2017.8075263.

3. Mathews, Mary Shaji, M. Prabu, Arish Pitchai, Derin Ben Roberts, and G. Rahul. "Improved Computer Vision-based Framework for Electronic Toll Collection." In 2022 12th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2022. http://dx.doi.org/10.1109/confluence52989.2022.9734219.

4. Chen, Jiujun, Gang Xiao, Fei Gao, Hongbin Zhou, and Xiaofang Ying. "Vision-Based Perceptive Framework for Fish Motion." In 2009 International Conference on Information Engineering and Computer Science. IEEE, 2009. http://dx.doi.org/10.1109/iciecs.2009.5364666.

5. "A YARP-BASED ARCHITECTURAL FRAMEWORK FOR ROBOTIC VISION APPLICATIONS." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2009. http://dx.doi.org/10.5220/0001773600650068.

6. Yang, Zixuan, Huaiyuan Teng, Jeremy Goldhawk, Ilya Kovalenko, Efe C. Balta, Felipe Lopez, Dawn Tilbury, and Kira Barton. "A Vision-Based Framework for Enhanced Quality Control in a Smart Manufacturing System." In ASME 2019 14th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/msec2019-2966.

Abstract: Dimensional metrology is an integral part of quality control in manufacturing systems. Most existing manufacturing systems use contact-based metrology, which is time-consuming and not flexible to design changes. There have been recent applications of computer vision for performing dimensional metrology in manufacturing systems, but existing computer vision metrology techniques need repeated calibration and are not combined with data-analysis methods to improve decision making. In this work, we propose a robust non-contact computer vision metrology pipeline integrated with Computer Aided Design (CAD) that can enable control of smart manufacturing systems. The pipeline uses CAD data to extract nominal dimensions and tolerances, which are compared to the measured dimensions computed from camera images by computer vision algorithms. A quality-check module evaluates whether the measurements are within admissible bounds and informs a central controller. If a part does not meet a tolerance, the central controller changes a program running on a specific machine to ensure that parts meet the necessary specifications. Results from an implementation of the proposed pipeline on a manufacturing research testbed are given at the end.
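The tolerance-comparison step this abstract describes can be sketched as follows. All feature names and values here are illustrative, and the sketch deliberately omits the image-capture, calibration, and measurement stages of the paper's actual pipeline; it shows only the quality-check logic that feeds the central controller.

```python
# (feature) -> (nominal value in mm, +/- tolerance in mm), as would be
# extracted from the CAD model.
nominal = {
    "hole_diameter": (10.0, 0.05),
    "slot_width": (4.0, 0.10),
}

# Values as computed from camera images by the vision algorithms
# (hypothetical numbers for illustration).
measured = {
    "hole_diameter": 10.02,
    "slot_width": 4.18,
}

def quality_check(nominal, measured):
    """Return the features whose measurement falls outside nominal +/- tol."""
    failures = []
    for feature, (value, tol) in nominal.items():
        if abs(measured[feature] - value) > tol:
            failures.append(feature)
    return failures

failures = quality_check(nominal, measured)
print(failures)  # prints ['slot_width']: 0.18 mm deviation > 0.10 mm tolerance
# In the paper's pipeline, the central controller would now adjust the
# machine program responsible for the failing features.
```

The design point is that the admissible bounds come from the CAD data rather than being hard-coded, so the same check survives design changes without re-engineering the inspection step.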
7. Tiwari, Rohit Kumar, and Gyanendra K. Verma. "A computer vision based framework for visual gun detection using SURF." In 2015 International Conference on Electrical, Electronics, Signals, Communication and Optimization (EESCO). IEEE, 2015. http://dx.doi.org/10.1109/eesco.2015.7253863.

8. Gutierrez, Julian, Shi Dong, and David Kaeli. "Vega: A Computer Vision Processing Enhancement Framework with Graph-based Acceleration." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2020. http://dx.doi.org/10.24251/hicss.2020.818.

9. De, Oyndrila, Puskar Deb, Sagnik Mukherjee, Sayantan Nandy, Tamal Chakraborty, and Sourav Saha. "Computer vision based framework for digit recognition by hand gesture analysis." In 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). IEEE, 2016. http://dx.doi.org/10.1109/iemcon.2016.7746361.

10. Zhu, Xianglei, Sen Liu, Peng Zhang, and Yihai Duan. "A Unified Framework of Intelligent Vehicle Damage Assessment based on Computer Vision Technology." In 2019 IEEE 2nd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE). IEEE, 2019. http://dx.doi.org/10.1109/auteee48671.2019.9033150.


Reports on the topic "Computer vision-based framework"

1. Alhasson, Haifa F., and Shuaa S. Alharbi. New Trends in image-based Diabetic Foot Ucler Diagnosis Using Machine Learning Approaches: A Systematic Review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, November 2022. http://dx.doi.org/10.37766/inplasy2022.11.0128.

Abstract: Review question / Objective: A significant amount of research has been conducted to detect and recognize diabetic foot ulcers (DFUs) using computer vision methods, but a number of challenges remain, and DFU detection frameworks based on machine learning and deep learning lack systematic reviews. This review aims to show how Machine Learning (ML) and Deep Learning (DL) can improve care for individuals at risk of DFUs, to identify and synthesize evidence about their use in interventional care and management of DFUs, and to suggest future research directions. Information sources: A thorough search of electronic databases such as Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Google Scholar, Scopus, and Wiley Online Library was conducted to identify and select the literature for this study (January 2010 - January 01, 2023), based on the most popular image-based diagnosis targets in DFU: segmentation, detection, and classification. Various keywords were used during the identification process, including artificial intelligence in DFU, deep learning, machine learning, ANNs, CNNs, DFU detection, DFU segmentation, DFU classification, and computer-aided diagnosis.