Journal articles on the topic 'Image guide'

Consult the top 50 journal articles for your research on the topic 'Image guide.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Berry, E., M. Cuppone, S. Porada, P. A. Millner, A. Rao, N. Chiverton, and B. B. Seedhom. "Personalised image-based templates for intra-operative guidance." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 219, no. 2 (February 1, 2005): 111–18. http://dx.doi.org/10.1243/095441105x9273.

Abstract:
The high rate of unplanned perforation, poor fixation, and nerve injury with freehand pedicle screw insertion has led to the use of image-guided navigation systems. Although these improve accuracy, they have several drawbacks that could be overcome by using image-based drilling guide templates. The accuracy of such templates was tested in a cadaveric study of screw placement in the lumbar, thoracic, and cervical regions of the spine. The dimensional stability with autoclaving of duraform polyamide, to be used for manufacturing the guides, was first determined using test specimens. Computed tomography (CT) images were acquired of 4 cadaveric spines, and placement of 4 cervical, 32 thoracic, and 14 lumbar screws was planned. Eighteen personalized drilling guide templates, in four different designs, were built. Orthopaedic surgeons experienced in the freehand techniques used the templates. CT images were acquired to assess placement position with respect to the pedicle. Duraform polyamide was found to be unaffected by sterilization. Two of the template designs facilitated the placement of 20/20 screws without error. Templates can lead to successful screw placement, even in small pedicles, providing their design is optimized for the application area, e.g. with enhanced rotational stabilization.
2

Li, Zhan, Jianhang Zhang, Ruibin Zhong, Bir Bhanu, Yuling Chen, Qingfeng Zhang, and Haoqing Tang. "Lightweight and Efficient Image Dehazing Network Guided by Transmission Estimation from Real-World Hazy Scenes." Sensors 21, no. 3 (February 1, 2021): 960. http://dx.doi.org/10.3390/s21030960.

Abstract:
In this paper, a transmission-guided lightweight neural network called TGL-Net is proposed for efficient image dehazing. Unlike most current dehazing methods that produce simulated transmission maps from depth data and haze-free images, in the proposed work, guided transmission maps are computed automatically using a filter-refined dark-channel-prior (F-DCP) method from real-world hazy images as a regularizer, which facilitates network training not only on synthetic data, but also on natural images. A double-error loss function that combines the errors of a transmission map with the errors of a dehazed image is used to guide network training. The method provides a feasible solution for introducing priors obtained from traditional non-learning-based image processing techniques as a guide for training deep neural networks. Extensive experimental results demonstrate that, in terms of several reference and non-reference evaluation criteria for real-world images, the proposed method can achieve state-of-the-art performance with a much smaller network size and with significant improvements in efficiency resulting from the training guidance.
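A minimal sketch of the dark-channel-prior transmission estimate that the F-DCP regularizer above refines; the patch size and omega are common defaults from the DCP literature, not values taken from this paper.

import cv2
import numpy as np

def dcp_transmission(hazy_bgr, patch=15, omega=0.95):
    """Transmission map t = 1 - omega * dark_channel(I / A) from a hazy image."""
    img = hazy_bgr.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)            # dark channel prior
    n = max(1, dark.size // 1000)                        # brightest 0.1% of pixels
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)                            # atmospheric light estimate
    dark_norm = cv2.erode((img / A).min(axis=2), kernel)
    return 1.0 - omega * dark_norm                       # raw map, before filter refinement

In the paper's pipeline this raw map would then be filter-refined before serving as the guidance for network training.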
3

Ishizaka, Hiroshi. "Directable Needle Guide: Efficacy for Image-Guided Percutaneous Interventions." ISRN Radiology 2013 (November 27, 2013): 1–3. http://dx.doi.org/10.5402/2013/516941.

Abstract:
Diagnostic and therapeutic image-guided percutaneous interventions have become increasingly important in the clinical management of various conditions. Though precise needle placement via a safe route is essential for successful percutaneous interventions, it is often difficult in cases of deeply situated, small lesions. The present paper describes the efficacy of the directable needle guide (DNG), which allows manipulation of the direction of a fine needle within organs. The DNG was used in patients for needle biopsy of hepatic (n=26) and splenic (n=1) lesions and for percutaneous ethanol injection therapy for liver tumors (n=33) under sonographic or computed tomography guidance. The DNG enabled the direction of a 21- or 22-gauge needle to be successfully changed during needle advancement in all cases, allowing adjustment of the location of the needle tip or needle access route to avoid vessels, the gallbladder, and the lungs. We conclude that the DNG increases the safety and ease of percutaneous interventions.
4

Zhang, Yuan, Peng Wang, Ji Xiang Li, Shi Wei Yin, and Jing Lei Xin. "Path Segmentation Algorithm for Automatic Guided Vehicle Based on Machine Vision." Key Engineering Materials 431-432 (March 2010): 330–33. http://dx.doi.org/10.4028/www.scientific.net/kem.431-432.330.

Abstract:
A novel path segmentation algorithm for automatic guided vehicles is designed using machine vision technology. First, a Wiener filter based on parameter estimation is used to restore the path image in the HSI color model. Second, a multivariate stepwise fitted regression method is adopted to smooth the guide-line edges of the path image. It can be concluded that the path guide line of an automatic guided vehicle in industrial surroundings can be recognized accurately in this way.
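A hedged sketch of the restoration step described above, with SciPy's adaptive Wiener filter standing in for the parameter-estimated filter and HSV standing in for the HSI color model; the file name and window size are illustrative assumptions.

import cv2
import numpy as np
from scipy.signal import wiener

bgr = cv2.imread("path.jpg")                             # hypothetical AGV camera frame
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)               # HSV as a stand-in for HSI
intensity = hsv[:, :, 2].astype(np.float64)
restored = wiener(intensity, mysize=(5, 5))              # locally adaptive Wiener restoration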
5

Karthikesh, Madhumithra S., and Xinmai Yang. "Photoacoustic image-guided interventions." Experimental Biology and Medicine 245, no. 4 (November 20, 2019): 330–41. http://dx.doi.org/10.1177/1535370219889323.

Abstract:
Photoacoustic imaging has demonstrated its potential for diagnosis over the last few decades. In recent years, its unique imaging capabilities, such as detecting structural, functional and molecular information in deep regions with optical contrast and ultrasound resolution, have opened up many opportunities for photoacoustic imaging to be used during image-guided interventions. Numerous studies have investigated the capability of photoacoustic imaging to guide various interventions such as drug delivery, therapies, surgeries, and biopsies. These studies have demonstrated that photoacoustic imaging can guide these interventions effectively and non-invasively in real-time. In this minireview, we will elucidate the potential of photoacoustic imaging in guiding active and passive drug deliveries, photothermal therapy, and other surgeries and therapies using endogenous and exogenous contrast agents including organic, inorganic, and hybrid nanoparticles, as well as needle-based biopsy procedures. The advantages of photoacoustic imaging in guided interventions will be discussed. It will, therefore, show that photoacoustic imaging has great potential in real-time interventions due to its advantages over current imaging modalities like computed tomography, magnetic resonance imaging, and ultrasound imaging.

Impact statement: Photoacoustic imaging is an emerging modality for use in image-guided interventional procedures. This imaging technology has a unique ability to offer real-time, non-invasive, cost-effective, and radiation-free guidance in a real-world operating environment. This is substantiated in this article, which sums up the current state and underlines promising results of research using photoacoustic imaging in guiding drug delivery, therapy, surgery, and biopsy. Hence, this minireview facilitates future research and real-world application of photoacoustic image-guided interventions.
6

Ji, Jingyu, Yuhua Zhang, Zhilong Lin, Yongke Li, Changlong Wang, Yongjiang Hu, Fuyu Huang, and Jiangyi Yao. "Fusion of Infrared and Visible Images Based on Optimized Low-Rank Matrix Factorization with Guided Filtering." Electronics 11, no. 13 (June 26, 2022): 2003. http://dx.doi.org/10.3390/electronics11132003.

Abstract:
In recent years, image fusion has been a research hotspot. However, it is still a big challenge to balance the problems of noiseless image fusion and noisy image fusion. In order to improve the weak performance and low robustness of existing image fusion algorithms in noisy images, an infrared and visible image fusion algorithm based on optimized low-rank matrix factorization with guided filtering is proposed. First, the minimized error reconstruction factorization is introduced into the low-rank matrix, which effectively enhances the optimization performance, and obtains the base image with good filtering performance. Then using the base image as the guide image, the source image is decomposed into the high-frequency layer containing detail information and noise, and the low-frequency layer containing energy information through guided filtering. According to the noise intensity, the sparse reconstruction error is adaptively obtained to fuse the high-frequency layers, and the weighted average strategy is utilized to fuse the low-frequency layers. Finally, the fusion image is obtained by reconstructing the pre-fused high-frequency layer and the pre-fused low-frequency layer. The comparative experiments show that the proposed algorithm not only has good performance for noise-free images, but more importantly, it can effectively deal with the fusion of noisy images.
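A minimal single-channel guided filter (He et al.), the building block the algorithm above uses to split each source image into low-frequency and high-frequency layers; the radius and eps values are illustrative, not the authors' settings.

import cv2
import numpy as np

def guided_filter(guide, src, r=8, eps=1e-2):
    I = guide.astype(np.float64)
    p = src.astype(np.float64)
    box = lambda x: cv2.blur(x, (2 * r + 1, 2 * r + 1))  # mean filter over the window
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                           # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)                           # edge-preserving output

# Layer split used before fusion: low = guided_filter(base, src); high = src - low.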
7

Li, Jinfeng. "Low-light image enhancement with contrast regularization." Frontiers in Computing and Intelligent Systems 1, no. 3 (October 19, 2022): 25–28. http://dx.doi.org/10.54097/fcis.v1i3.2022.

Abstract:
Because existing low-light image processing involves multiple sampling operations that cause serious information degradation, and because only clear images are used as positive samples to guide network training, low-light image enhancement remains a challenging and unsettled problem. Therefore, a multi-scale contrastive-learning low-light image enhancement network is proposed. First, the input module generates rich features from the image; the features are then fed into a multi-scale enhancement network with dense residual blocks, using positive and negative samples to guide the training; finally, a refinement module enriches the image details. Experimental results on the dataset show that this method can reduce noise and artifacts in low-light images and can improve contrast and brightness, demonstrating its advantages.
8

Chen, Ying, Dihe Huang, Shang Xu, Jianlin Liu, and Yong Liu. "Guide Local Feature Matching by Overlap Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 365–73. http://dx.doi.org/10.1609/aaai.v36i1.19913.

Abstract:
Local image feature matching under large appearance, viewpoint, and distance changes is challenging yet important. Conventional methods detect and match tentative local features across whole images, with heuristic consistency checks to guarantee reliable matches. In this paper, we introduce a novel Overlap Estimation method conditioned on image pairs with TRansformer, named OETR, to constrain local feature matching to the commonly visible region. OETR performs overlap estimation in a two-step process of feature correlation followed by overlap regression. As a preprocessing module, OETR can be plugged into any existing local feature detection and matching pipeline to mitigate potential view-angle or scale variance. Intensive experiments show that OETR can boost state-of-the-art local feature matching performance substantially, especially for image pairs with small shared regions. The code will be publicly available at https://github.com/AbyssGaze/OETR.
9

Gao, Guxue, Huicheng Lai, and Zhenhong Jia. "Unsupervised Image Dedusting via a Cycle-Consistent Generative Adversarial Network." Remote Sensing 15, no. 5 (February 27, 2023): 1311. http://dx.doi.org/10.3390/rs15051311.

Abstract:
In sand-dust weather, image quality is seriously degraded, which hurts the performance of advanced applications that use remote sensing images. To improve image quality and enhance image dedusting performance, we propose an end-to-end cyclic generative adversarial network (D-CycleGAN) for image dedusting that does not require pairs of sand-dust images and corresponding ground truth images for training; in other words, we train the network in an unpaired way. Specifically, we designed a jointly optimized guided module (JOGM), comprising the sandy guided synthesis module (SGSM) and the clean guided synthesis module (CGSM), which jointly guide the generator through adversarial training with the corresponding discriminators to reduce color distortion and artifacts. JOGM can significantly improve image quality. We also propose a hidden-layer adversarial branch that applies adversarial supervision from inside the network, which better supervises the hidden layers to further improve the quality of the generated images. In addition, we improved the original CycleGAN loss function and propose a dual-scale semantic perception loss in feature space and a color identity-preserving loss in pixel space to constrain the network. Extensive experiments demonstrate that our network effectively removes sand dust, yields better clarity and image quality, and outperforms state-of-the-art techniques. The proposed method also helps a target detection algorithm improve its detection accuracy, and it generalizes well to the enhancement of underwater and hazy images.
10

Safirin Karis, Mohd, Nursabillilah Mohd Ali, Mohd Azamuddin Ali, Muhamad Raimi Sadiq Samsudin, Nurasmiza Selamat, Wira Hidayat Mohd Saad, Amar Faiz Zainal Abidin, and Zairi Ismael Rizman. "Guide Sign Analysis of Traffic Sign Data-Set Using Supervised Spiking Neuron Technique." International Journal of Engineering & Technology 7, no. 3.14 (July 25, 2018): 221. http://dx.doi.org/10.14419/ijet.v7i3.14.16897.

Abstract:
In this paper, 20 guide traffic signs, mostly displayed around the Malacca area, were selected as the project database. An early hypothesis was that the error for each usable image would increase as more interference was introduced into the original image. Three conditions, hidden region, image brightness, and image rotation, were selected to analyze the performance of each sign. Each condition produces a specific error from which a mean value is generated, and image recognition takes place in the match-up process. Focusing on the results, the hidden-region condition produces a critically ascending mean error value of 0.07 at 62.5% occlusion and average values at the other points. For the image brightness effect, higher mean error values are collected at lower brightness points, with an unstable pattern from 10% to 60% brightness. For the rotation effect, the error value ascends critically at 22.5% and increases slightly at the 2% to 5% rotation points. In the recognition process, at a 6.25% hidden region almost 70% of images are correctly matched to their own classes, while at a 62.5% hidden region only 40% of images are correctly matched, leaving 2 images unmatched. At -40% brightness, 45% of images are correctly matched to their own classes, while at 60% brightness 65% are correctly matched, leaving 1 image unmatched. Lastly, at 2.5° rotation, 85% of images are correctly matched to their own classes, while at 25° rotation 45% are correctly matched, leaving 2 images unmatched. Finally, these error forms affect the final output response of the detected traffic signs.
11

Ariav, Ido, and Israel Cohen. "Fully Cross-Attention Transformer for Guided Depth Super-Resolution." Sensors 23, no. 5 (March 2, 2023): 2723. http://dx.doi.org/10.3390/s23052723.

Abstract:
Modern depth sensors are often characterized by low spatial resolution, which hinders their use in real-world applications. However, the depth map in many scenarios is accompanied by a corresponding high-resolution color image. In light of this, learning-based methods have been extensively used for guided super-resolution of depth maps. A guided super-resolution scheme uses a corresponding high-resolution color image to infer high-resolution depth maps from low-resolution ones. Unfortunately, these methods still have texture copying problems due to improper guidance from color images. Specifically, in most existing methods, guidance from the color image is achieved by a naive concatenation of color and depth features. In this paper, we propose a fully transformer-based network for depth map super-resolution. A cascaded transformer module extracts deep features from a low-resolution depth. It incorporates a novel cross-attention mechanism to seamlessly and continuously guide the color image into the depth upsampling process. Using a window partitioning scheme, linear complexity in image resolution can be achieved, so it can be applied to high-resolution images. The proposed method of guided depth super-resolution outperforms other state-of-the-art methods through extensive experiments.
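A minimal sketch of the cross-attention idea described above, with depth features as queries and color features as keys and values; the dimensions, head count, and residual arrangement are assumptions, not the paper's exact module.

import torch
import torch.nn as nn

class ColorGuidedCrossAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_tokens, color_tokens):
        # Tokens of shape (batch, n, dim), e.g. pixels of one window partition.
        guided, _ = self.attn(depth_tokens, color_tokens, color_tokens)
        return self.norm(depth_tokens + guided)          # residual guidance injection

out = ColorGuidedCrossAttention()(torch.randn(2, 256, 64), torch.randn(2, 256, 64))

Restricting the tokens to local windows, as the abstract's window partitioning suggests, is what keeps the attention cost linear in image resolution.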
12

Enam, Rabia Noor, Muhammad Tahir, Syed Muhammad Nabeel Mustafa, Rehan Qureshi, and Hasan Shahid. "Tourist Guide via Image Processing Techniques." Pakistan Journal of Engineering and Technology 4, no. 2 (June 30, 2021): 144–48. http://dx.doi.org/10.51846/vol4iss2pp144-148.

Abstract:
To achieve the goal of identification, image processing and recognition are performed on the transformed picture. The amount of information included in an image is enormous because it is a two-dimensional space. Neural network image recognition is a new type of picture recognition technology developed through modern computing innovation. In this paper we use a neural network to implement image-based location identification. Using the image data, we evaluate our proposed model's predictive performance. By including more hidden layers in the convolutional neural network, it is feasible to improve the network's training speed and reliability given a large amount of data. In our studies, 80% of the dataset is used for training and the remaining 20% for testing. The preliminary findings show that using a neural network to detect photos is both successful and practical. The proposed methods serve as a guide for travelers or tourists who are unfamiliar with a country: they simply need to point the camera at any historical or well-known location to learn more about it.
13

Murray, Richard. "What image features guide lightness perception?" Journal of Vision 17, no. 15 (December 1, 2017): 5. http://dx.doi.org/10.1167/17.15.5a.

14

Kim, Minjung, Jason M. Gold, and Richard F. Murray. "What image features guide lightness perception?" Journal of Vision 18, no. 13 (December 3, 2018): 1. http://dx.doi.org/10.1167/18.13.1.

15

Ghaemi, H. F., Yao Li, Tineke Thio, and T. Wang. "Fiber image guide with subwavelength resolution." Applied Physics Letters 72, no. 10 (March 9, 1998): 1137–39. http://dx.doi.org/10.1063/1.120994.

16

Lavin, Michael R. "A librarian's guide to Image Comics." Serials Review 24, no. 2 (June 1998): 79–92. http://dx.doi.org/10.1080/00987913.1998.10764447.

17

Evans, D., and J. Romer. "A Guide to Guernsey image rights." Journal of Intellectual Property Law & Practice 8, no. 10 (August 29, 2013): 761–63. http://dx.doi.org/10.1093/jiplp/jpt153.

18

Earle, Greville. "A user's guide to image processing." Physics World 5, no. 8 (August 1992): 31–36. http://dx.doi.org/10.1088/2058-7058/5/8/30.

19

Tiwari, A. K., and R. P. Singh. "An efficient image guide mode launcher." International Journal of Infrared and Millimeter Waves 7, no. 6 (June 1986): 845–56. http://dx.doi.org/10.1007/bf01013031.

20

Bevilaqua, Marcus. "Guide to image editing and production of figures for scientific publications with an emphasis on taxonomy." Zoosystematics and Evolution 96, no. 1 (May 4, 2020): 139–58. http://dx.doi.org/10.3897/zse.96.49225.

Abstract:
Figures for scientific publications go through various stages from the planning, to the capturing of images, to the production of finished figures for publication. This guide is meant to familiarise the reader with the main image-editing software used by professional photographers. The guide’s focus is on digital photo editing and the production of figures using Adobe Photoshop to produce publication-quality figures for scientific publications. This guide will be of fundamental use for the academic public, especially taxonomists and others who work with images. Besides, it should be useful for anyone interested in becoming familiar with the basic tools of image editing.
21

Aspalli, Pooja, and Prakash Pattan. "Image Registration and Fusion using Moving Frame based Decomposition Framework Algorithm." International Journal of Innovative Technology and Exploring Engineering 10, no. 5 (March 30, 2021): 57–63. http://dx.doi.org/10.35940/ijitee.e8669.0310521.

Abstract:
Image fusion is an important process in medical image diagnostics. Fusing images by obtaining information from different sources and different types (modalities) of images is called multi-modal image fusion. This paper implements an effective and fast spatial-domain multimodal image fusion using the moving frame based decomposition framework (MFDF) method. Images from two different modalities are taken and decomposed into texture and approximation components. A weight-mapping strategy with guided filtering is applied to fuse the approximation components using the final map. MATLAB is used for the algorithm implementation. The results obtained are competitive with a recent publication [11]. The multimodal image fusion thus implemented gives promising results compared to the moving frame decomposition framework method. The size and the blurring variable of the guiding filter are optimized to obtain a better Structural Similarity Index Measure (SSIM).
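A hedged sketch of the SSIM check used to tune the guiding filter's size and blurring variable, using scikit-image's implementation rather than the authors' MATLAB code.

from skimage.metrics import structural_similarity

def fusion_quality(fused, reference):
    # 2-D float images scaled to [0, 1]; a higher SSIM indicates better fusion.
    return structural_similarity(fused, reference, data_range=1.0)

Sweeping the filter radius and blurring variable over a small grid and keeping the setting with the highest SSIM reproduces the optimization the abstract mentions.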
22

Li, Yang, Haitao Yang, and Yuge Gao. "An Image Fusion Algorithm Based on Improved RGF and Visual Saliency Map." Emergency Medicine International 2022 (August 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/1693531.

Abstract:
To solve the artifact problem in fused images and the insufficient generalization of existing fusion algorithms across different scenarios, this paper proposes an image fusion algorithm based on an improved RGF and a visual saliency map to fuse infrared and visible light images as well as multimodal medical images. First, the paper uses RGF (rolling guidance filter) and a Gaussian filter to decompose the image into base, inter, and detail layers at different scales. Second, a visual weight map is computed from the source image, and a guided filter is used to better guide the base-layer fusion. Interlayer fusion is then realized through maximum local variance, and detail-layer fusion through the maximum absolute pixel value. Finally, the fused image is obtained through weighted fusion. Experiments demonstrate that the proposed method shows better comprehensive performance and obtains better results than the contrast methods in fusing infrared and visible light images and medical images.
23

You, Jane, Qin Li, and Jinghua Wang. "On Hierarchical Content-Based Image Retrieval by Dynamic Indexing and Guided Search." International Journal of Cognitive Informatics and Natural Intelligence 4, no. 4 (October 2010): 18–36. http://dx.doi.org/10.4018/jcini.2010100102.

Abstract:
This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing. It also provides an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features. Experimental results confirm that the new approach is feasible for content-based image retrieval.
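A minimal sketch of wavelet-based multi-feature extraction of the kind the indexing scheme above relies on; the wavelet family, decomposition level, and band statistics are illustrative assumptions.

import numpy as np
import pywt

def wavelet_signature(gray, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    feats = [coeffs[0].mean(), coeffs[0].std()]          # approximation band statistics
    for bands in coeffs[1:]:                             # (horizontal, vertical, diagonal)
        feats += [float(np.abs(b).mean()) for b in bands]
    return np.asarray(feats)                             # compact vector for dynamic indexing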
24

Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion." Journal of Imaging 8, no. 9 (September 5, 2022): 240. http://dx.doi.org/10.3390/jimaging8090240.

Abstract:
Multi-focus image fusion is of great importance in order to cope with the limited depth of field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion; however, they are likely to produce artifacts. To cope with these issues, we introduce CRF-Guided fusion, a Conditional Random Field (CRF) based method. A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis (ICA) transform is applied to high-frequency components, and a CRF model is created from the low frequency and the transform coefficients. The CRF model is solved efficiently with the α-expansion method. The estimated labels are used to guide the fusion of the low-frequency components and the transform coefficients. Inverse ICA is then applied to the fused transform coefficients. Finally, the fused image is the sum of the fused low frequency and the fused high frequency. CRF-Guided fusion does not introduce artifacts during fusion and supports image denoising during fusion by applying transform-domain coefficient shrinkage. Quantitative and qualitative evaluation demonstrate the superior performance of CRF-Guided fusion compared to state-of-the-art multi-focus image fusion methods.
25

LAY, YUN-LONG, PEI-WEN CHEN, and HUI-JEN YANG. "THE MACHINE VISION BLIND GUIDE SYSTEM." Biomedical Engineering: Applications, Basis and Communications 14, no. 02 (April 25, 2002): 81–85. http://dx.doi.org/10.4015/s1016237202000127.

Abstract:
The available orientation and mobility guide tools for the blind are the cane, the guide dog, and electronic guide devices. A cane easily detects an obstacle in front of the user, but not one above the user's waist, which is why a blind cane user is sometimes hit by overhead obstacles. The guide dog is a very powerful mobility guide, but dogs are expensive and their training and care are difficult; hence, guide dogs are not popular in many countries. Electronic guide devices for the blind, such as the laser cane, sonic glasses, and the sonic guide, can only detect a single point at a time rather than a whole view. In our system, a machine vision blind guide system is proposed. A CCD grabs the image of the front view and divides it into nine blocks. A distance message is calculated for each block, providing multipoint data that guide the blind through a converted voice signal.
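A minimal sketch of the nine-block partition the abstract describes; the per-block mean stands in for whatever distance statistic the original system computed before voice conversion.

import numpy as np

def nine_block_stats(frame):
    h, w = frame.shape[:2]
    stats = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            block = frame[i * h // 3:(i + 1) * h // 3,
                          j * w // 3:(j + 1) * w // 3]
            stats[i, j] = block.mean()                   # placeholder distance cue per block
    return stats                                         # multipoint data for voice guidance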
26

Rastogi, Alok Kumar, and A. K. Tiwari. "Metal wave-guide to dielectric image guide transition using exponential taper." International Journal of Infrared and Millimeter Waves 13, no. 12 (December 1992): 1895–914. http://dx.doi.org/10.1007/bf01011330.

27

Zou, Huanxin, Shitian He, Xu Cao, Li Sun, Juan Wei, Shuo Liu, and Jian Liu. "Rescaling-Assisted Super-Resolution for Medium-Low Resolution Remote Sensing Ship Detection." Remote Sensing 14, no. 11 (May 27, 2022): 2566. http://dx.doi.org/10.3390/rs14112566.

Abstract:
Medium-low resolution (M-LR) remote sensing ship detection is a challenging problem due to the small target sizes and insufficient appearance information. Although image super resolution (SR) has become a popular solution in recent years, the ability of image SR is limited since much information is lost in input images. Inspired by the powerful information embedding ability of the encoder in image rescaling, in this paper, we introduce image rescaling to guide the training of image SR. Specifically, we add an adaption module before the SR network, and use the pre-trained rescaling network to guide the optimization of the adaption module. In this way, more information is embedded in the adapted M-LR images, and the subsequent SR module can utilize more information to achieve better performance. Extensive experimental results demonstrate the effectiveness of our method on image SR. More importantly, our method can be used as a pre-processing approach to improve the detection performance.
28

Shirai, Keiichiro, Tatsuya Baba, Shunsuke Ono, Masahiro Okuda, Yusuke Tatesumi, and Paul Perrotin. "Guided Facial Skin Color Correction." Signals 2, no. 3 (August 24, 2021): 540–58. http://dx.doi.org/10.3390/signals2030033.

Abstract:
This paper proposes an automatic image correction method for portrait photographs that promotes consistency of facial skin color by suppressing skin color changes caused by background colors. In portrait photographs, skin color is often distorted by the lighting environment (e.g., light reflected from a colored background wall or over-exposure from a camera strobe). This color distortion is emphasized when the portrait is artificially composited onto another background color, and the appearance becomes unnatural. In our framework, we first roughly extract the face region and rectify the skin color distribution in a color space. Then we perform color and brightness correction around the face in the original image to achieve a proper color balance of the facial image that is not affected by luminance and background colors. Our color correction process attains natural results by using a guide image, unlike conventional algorithms. In particular, our guided image filtering for color correction does not require the perfectly aligned guide image needed by the original guided image filtering method proposed by He et al. Experimental results show that our method generates more natural results than conventional methods on not only headshot photographs but also natural scene photographs. We also show automatic yearbook-style photo generation as another application.
29

Ribera-Navarro, Aida, Ravikiran Shenoy, Gregory Cunningham, Vejay Vakharia, Alexander Gibson, Mehran Moazen, and Deepak M. Kalaskar. "Patient-specific 3D-printed surgical guides for pedicle screw insertion: comparison of different guide design approaches." Journal of 3D Printing in Medicine 5, no. 2 (June 2021): 71–82. http://dx.doi.org/10.2217/3dp-2021-0002.

Abstract:
Aim: Patient-specific 3D-printed guides for pedicle screw insertion in spinal deformity surgery offer an alternative to image-guided, robotic and free-hand methods. Different design features can impact their accuracy and clinical applicability. The aim of this study was to compare the performance of three different guide designs with the nonguided free-hand technique. Materials & methods: 3D-printed guides were designed and tested using anatomical models of human spines and porcine cadaveric specimens. Three guided groups (low, medium and full contact) and one nonguided group were formed. Results & conclusion: The design approach affected the accuracy of screw placement. Variability in screw-insertion accuracy with the surgeon's experience using nonguided/guided techniques was also observed, suggesting a benefit for junior surgeons in improving surgical accuracy.
30

Zhang, Xue, Xiushan Nie, Ziruo Sun, Xiaofeng Li, Chuntao Wang, Peng Tao, and Sumaira Hussain. "Re-ranking vehicle re-identification with orientation-guide query expansion." International Journal of Distributed Sensor Networks 18, no. 3 (March 2022): 155014772110663. http://dx.doi.org/10.1177/15501477211066305.

Abstract:
Vehicle re-identification, which aims to retrieve information regarding a vehicle from different cameras with non-overlapping views, has recently attracted extensive attention in the field of computer vision owing to the development of smart cities. This task can be regarded as a type of retrieval problem, where re-ranking is important for performance enhancement. In the vehicle re-identification ranking list, images whose orientations are dissimilar to that of the query image should be optimized with priority. However, traditional methods are incompatible with such samples, resulting in unsatisfactory vehicle re-identification performance. Therefore, in this study, we propose a vehicle re-identification re-ranking method with orientation-guide query expansion to optimize the initial ranking list obtained by a re-identification model. In the proposed method, we first find the nearest-neighbor image whose orientation is dissimilar to the query image and then fuse the features of the query and neighbor images to obtain new features for retrieval. Experiments are performed on two public data sets, VeRi-776 and VehicleID, and the effectiveness of the proposed method is confirmed.
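A hedged sketch of orientation-guide query expansion as described above: find the most similar gallery feature whose orientation label differs from the query's, then fuse the two vectors; the feature vectors and orientation labels are assumed to come from the re-identification model.

import numpy as np

def expand_query(q_feat, q_orient, gal_feats, gal_orients):
    sims = gal_feats @ q_feat / (
        np.linalg.norm(gal_feats, axis=1) * np.linalg.norm(q_feat) + 1e-12)
    sims[gal_orients == q_orient] = -np.inf              # keep orientation-dissimilar images
    fused = (q_feat + gal_feats[np.argmax(sims)]) / 2    # new query for re-ranking
    return fused / np.linalg.norm(fused)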
31

Qiang, Xiwen. "Remote Sensing of Atmospheric Turbulence Profiles by Laser Guide Stars." EPJ Web of Conferences 237 (2020): 06014. http://dx.doi.org/10.1051/epjconf/202023706014.

Abstract:
Range-resolved profiles of atmospheric turbulence are necessary and important for many applications in the astronomical and adaptive optics communities. In order to characterize the vertical atmospheric structure in the field, a technique is put forward for remote sensing of range-resolved atmospheric turbulence profiles that combines laser guide stars with the differential image motion method. Laser guide stars are formed at several successive altitudes by projecting a pulsed laser; the returned image signals are received by an optical system with two receiving telescopes, and the variance of the centroids' distance is derived from the images with two spots at the same altitude. Based on an inversion algorithm, atmospheric turbulence profiles are then retrieved from the differential image motion variance of the centroid distances at various altitudes. Structure constants of the atmospheric refractive index ranging from 10^-14 m^(-2/3) at lower altitudes to 10^-16 m^(-2/3) at higher altitudes were sensed remotely in experiments. The results show that combining laser guide stars with the differential image motion method is effective and can sense atmospheric turbulence profiles remotely in real time.
32

Sivakumar, V. "Review Made on the Content-Based Image Recovery." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 4911–17. http://dx.doi.org/10.22214/ijraset.2022.45973.

Abstract:
Image recovery means recovering the authentic images from the feature database. This paper surveys the latest methods and techniques in the field of image recovery and image processing. Content-Based Image Recovery (CBIR) is one of the fastest-developing research areas in graphic processing. The methods reviewed facilitate faster image recovery and better queries for image recovery systems, combining content-based image processing with multimedia encoded features, among the many image recovery techniques available. Recent research is evaluated here to assess the state of the field of image recovery.
33

Ogama, Takeo. "A beginner’s guide to improving image acquisition in fluorescence microscopy." Biochemist 42, no. 6 (December 7, 2020): 22–27. http://dx.doi.org/10.1042/bio20200075.

Abstract:
This article presents an overview of optical microscopy and digs into the details of fluorescence microscopy, exploring the link between the signals in biological samples and the digital data from microscope cameras. Understanding this relationship can help you set the ideal image acquisition conditions to achieve the highest quality images and data.
34

Broderick, Lynn S., Kenyon K. Kopecky, and Harvey Cramer. "Image-Guided Coaxial Fine Needle Aspiration Biopsy With a Side-Exiting Guide." Journal of Computer Assisted Tomography 26, no. 2 (March 2002): 292–97. http://dx.doi.org/10.1097/00004728-200203000-00023.

35

Hata, Nobuhiko, Sang-Eun Song, Olutayo Olubiyi, Yasumichi Arimitsu, Kosuke Fujimoto, Takahisa Kato, Kemal Tuncali, Soichiro Tani, and Junichi Tokuda. "Body-mounted robotic instrument guide for image-guided cryotherapy of renal cancer." Medical Physics 43, no. 2 (January 20, 2016): 843–53. http://dx.doi.org/10.1118/1.4939875.

36

Kikuma, Tomohiro, Nozomu Ishii, and Kiyohiko Itoh. "A new transition from image NRD guide to dielectric image line." Electronics and Communications in Japan (Part II: Electronics) 83, no. 2 (February 2000): 1–8. http://dx.doi.org/10.1002/(sici)1520-6432(200002)83:2<1::aid-ecjb1>3.0.co;2-n.

37

Sim, Jae Jun, Woo Jin Jung, Hyeon Seok Yang, Bok Gyu Han, Yong Chae Cho, and Young Shik Moon. "Nuclei Segmentation utilizing CNN and Guide image." Journal of the Institute of Electronics and Information Engineers 55, no. 9 (September 30, 2018): 83–91. http://dx.doi.org/10.5573/ieie.2018.55.9.83.

38

Paul, D. L., M. Habibi, J. Castrillo, P. Gelin, and S. Toutain. "Analysis of coupling in image guide technology." IEEE Transactions on Microwave Theory and Techniques 40, no. 5 (May 1992): 1025–28. http://dx.doi.org/10.1109/22.137415.

39

Bug, William, and Jonathan Nissanov. "A Guide to Building Image-Centric Databases." Neuroinformatics 1, no. 4 (2003): 359–78. http://dx.doi.org/10.1385/ni:1:4:359.

40

Kwan, P., and C. Vittoria. "Propagation characteristic of a ferrite image guide." Journal of Applied Physics 73, no. 10 (May 15, 1993): 6466–68. http://dx.doi.org/10.1063/1.352581.

41

Li, Deren. "Remote sensing image fusion: a practical guide." Geo-spatial Information Science 20, no. 1 (January 2, 2017): 56. http://dx.doi.org/10.1080/10095020.2017.1288843.

42

Rastogi, A. K., S. Mishra, M. Nigam, and A. K. Tiwari. "Directional couplers in trapped image guide configuration." International Journal of Infrared and Millimeter Waves 15, no. 10 (October 1994): 1709–20. http://dx.doi.org/10.1007/bf02096898.

43

Ozan, Oguz, Emre Seker, Sevcan Kurtulmus-Yilmaz, and Ahmet Ersan Ersoy. "Clinical Application of Stereolithographic Surgical Guide With a Handpiece Guidance Apparatus: A Case Report." Journal of Oral Implantology 38, no. 5 (October 20, 2012): 603–9. http://dx.doi.org/10.1563/aaid-joi-d-11-00010.

Abstract:
The success of implant-supported restorations depends on the treatment planning and the transfer of planning through the surgical field. Recently, new computer-aided design and manufacturing (CAD/CAM) techniques, such as stereolithographic (SLA) rapid prototyping, have been developed to fabricate surgical guides to improve the precision of implant placement. The objective of the present case is to introduce a recently developed SLA surgical guide system into the rehabilitation of a 62-year-old male patient with mandibular edentulism. After obtaining a cone-beam computerized tomography (CBCT) scan of the mandible with a radiographic template, the images were transferred into a 3-dimensional (3D) image-based software for implant planning. The StentCad Beyond SLA surgical guide system, which is a combination of a currently used surgical template with pilot hollows and a surgical handpiece guidance apparatus, was designed to transfer a preoperatively defined implant position onto the surgical site without any drill-surgical guide contact. For the fabrication of this system, a surgical handpiece was scanned by a laser optical scanner and a mucosa-supported surgical guide was designed according to the patient's 3D model, which was attained from the CBCT images. Four dental implants were inserted through the SLA surgical guide system by a torque-controlled surgical handpiece to the interforaminal region via a flapless surgical procedure. Implants were assessed 3 months after surgery, and an implant-retained mandibular overdenture was fabricated. The present case emphasizes that CAD/CAM SLA surgical guides, along with CBCT images and scanning data, may help clinicians plan and place dental implants.
44

Guo, Di, Xiaobo Qu, Xiaofeng Du, Keshou Wu, and Xuhui Chen. "Salt and Pepper Noise Removal with Noise Detection and a Patch-Based Sparse Representation." Advances in Multimedia 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/682747.

Abstract:
Images may be corrupted by salt and pepper impulse noise due to noisy sensors or channel transmission errors. A denoising method is proposed that detects noise candidates and enforces image sparsity with a patch-based sparse representation. First, noise candidates are detected and an initial guide image is obtained via adaptive median filtering; second, a patch-based sparse representation is learnt from this guide image; third, a weighted l1-l1 regularization method is proposed to penalize the noise candidates more heavily than the rest of the pixels. An alternating direction minimization algorithm is derived to solve the regularization model. Experiments are conducted for 30% to 90% impulse noise levels, and the simulation results demonstrate that the proposed method outperforms total variation and wavelet methods in terms of preserving edges and structural similarity to the noise-free images.
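A minimal sketch of the first stage described above, detecting impulse candidates and building the initial guide image with a median filter; the fixed window is a simplification of the paper's adaptive median filtering, and 8-bit images are assumed.

import numpy as np
from scipy.ndimage import median_filter

def detect_and_guide(img, win=3):
    med = median_filter(img, size=win)
    # Salt-and-pepper suspects: extreme-valued pixels that differ from the local median.
    candidates = ((img == 0) | (img == 255)) & (img != med)
    guide = img.copy()
    guide[candidates] = med[candidates]                  # starting point for the sparse model
    return guide, candidates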
45

Qiang, Xi Wen, Jun Wei Zhao, Shuang Lian Feng, Min Wu, Jing Yong Chang, Fei Zong, and Jian Ping Song. "Review on Remote Sensing of Atmospheric Turbulence Profiles by Laser Guide Stars." Applied Mechanics and Materials 303-306 (February 2013): 823–26. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.823.

Abstract:
Range-resolved profiles of atmospheric turbulence are necessary and important for many applications in the astronomical and adaptive optics communities. In order to characterize the vertical atmospheric structure in the field, a technique is put forward for remote sensing of range-resolved atmospheric turbulence profiles that combines laser guide stars with the differential image motion method. Laser guide stars are formed at several successive altitudes by projecting a pulsed laser; the returned image signals are received by an optical system with two receiving telescopes, and the variance of the centroids' distance is derived from the images with two spots at the same altitude. Based on an inversion algorithm, atmospheric turbulence profiles are then retrieved from the differential image motion variance of the centroid distances at various altitudes. Structure constants of the atmospheric refractive index ranging from 10^-14 m^(-2/3) at lower altitudes to 10^-16 m^(-2/3) at higher altitudes were sensed remotely in experiments. The results show that combining laser guide stars with the differential image motion method is effective and can sense atmospheric turbulence profiles remotely in real time.
46

Mohd Yusoff, Mohd Soyapi, R. Sulaiman, K. Shafinah, R. Fatihah, and J. Abdullah. "Development of Computed Tomography Scanner Application: Prototype Model Approach." International Journal of Software Engineering and Technologies (IJSET) 1, no. 2 (August 1, 2016): 93. http://dx.doi.org/10.11591/ijset.v1i2.4573.

Abstract:
The development of a software application for image reconstruction is essential in the research study of computed tomography (CT). An experimental prototype was the research tool used in developing a series of useful prototypes to evaluate the reconstruction methods. A CT scanner software prototype was created using the MATLAB image processing toolbox together with the Graphical User Interface (GUI) and the Graphical User Interface Development Environment (GUIDE). The prototype model was selected to guide the development of the application and mainly to provide a platform to update a system from the available knowledge. The prototype was capable of creating synthetic CT scanner datasets, displaying a sinogram image from the synthetic datasets, constructing images using the back projection technique, and displaying cross-sectional images. Two types of images were acquired from this research: the first was the unfiltered, blurred image and the second was the filtered, clear image. This research also focused on two-dimensional parallel-beam and fan-beam geometry configurations; the transformation of fan-beam datasets into parallel-beam datasets was carried out using the classical rebinning concept.
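A hedged sketch of the same parallel-beam pipeline (synthetic sinogram, then unfiltered and filtered back projection), using scikit-image in place of the authors' MATLAB toolbox code.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)                      # synthetic scanner dataset
blurry = iradon(sinogram, theta=angles, filter_name=None)    # unfiltered, blurred image
sharp = iradon(sinogram, theta=angles, filter_name="ramp")   # filtered, clear image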
47

Suhartono, Derwin, William Surya Permana, Antoni Wiguna, and Ferlan Gisman Putra. "Aplikasi E-Tour Guide dengan Fitur Pengenalan Image Menggunakan Metode Haar Classifier." ComTech: Computer, Mathematics and Engineering Applications 4, no. 2 (December 1, 2013): 1190. http://dx.doi.org/10.21512/comtech.v4i2.2593.

Abstract:
The smartphone has become an important instrument in modern society, as it is used for entertainment and information searching in addition to communication. Given this, applications need to be developed to improve smartphone functionality. The objective of this research is to create an application named E-Tour Guide as a tool to help plan and manage tourism activity, equipped with an image recognition feature. The image recognition method used is the Haar classifier method, and the feature is used to recognize historical objects. In testing on a sample of 20 images, 85% accuracy was achieved for the image recognition feature.
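A minimal sketch of Haar-classifier detection with OpenCV; the cascade file for historical objects is hypothetical, since such a cascade would have to be trained on landmark images first.

import cv2

cascade = cv2.CascadeClassifier("landmarks_cascade.xml")     # hypothetical trained cascade
gray = cv2.cvtColor(cv2.imread("camera_frame.jpg"), cv2.COLOR_BGR2GRAY)
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in hits:
    print(f"landmark candidate at ({x}, {y}), size {w}x{h}")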
48

Tan, Xin, Shiming Lai, Yu Liu, and Maojun Zhang. "Green Channel Guiding Denoising on Bayer Image." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/979081.

Abstract:
Denoising is an indispensable function for digital cameras. Given that noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it had not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer pattern is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue channels. The green channel can therefore be used to guide denoising, and this kind of guidance integrates the different color channels together. Experiments on both actual and simulated Bayer images indicate that the green channel acts well as the guidance signal and that the proposed method is competitive with other popular filter-kernel denoising methods.
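A hedged sketch of the core idea on an RGGB mosaic: lift the half-sampled green channel onto the red sample grid and use it as the guidance signal (for example with the guided filter sketched under entry 6); the interpolation here is deliberately crude and not the paper's scheme.

import cv2
import numpy as np

def green_guide_for_red(bayer):                          # bayer: 2-D RGGB mosaic
    g = np.zeros(bayer.shape, np.float32)
    g[0::2, 1::2] = bayer[0::2, 1::2]                    # G sites on red rows
    g[1::2, 0::2] = bayer[1::2, 0::2]                    # G sites on blue rows
    g = cv2.blur(g, (3, 3)) * 2.0                        # rough fill-in: half the sites are G
    red = bayer[0::2, 0::2].astype(np.float32)           # noisy red samples
    return red, g[0::2, 0::2]                            # red plus its green guidance image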
49

Liu, Baodi, Lifei Zhao, Jiaoyue Li, Hengle Zhao, Weifeng Liu, Ye Li, Yanjiang Wang, Honglong Chen, and Weijia Cao. "Saliency-Guided Remote Sensing Image Super-Resolution." Remote Sensing 13, no. 24 (December 17, 2021): 5144. http://dx.doi.org/10.3390/rs13245144.

Abstract:
Deep learning has recently attracted extensive attention and developed significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods consider each part equally. These methods ignore the salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention in recovery processing. This paper proposes a saliency-guided remote sensing image super-resolution (SG-GAN) method to alleviate the above issue while maintaining the merits of GAN-based methods for the generation of perceptual-pleasant details. More specifically, we exploit the salient maps of images to guide the recovery in two aspects: On the one hand, the saliency detection network in SG-GAN learns more high-resolution saliency maps to provide additional structure priors. On the other hand, the well-designed saliency loss imposes a second-order restriction on the super-resolution process, which helps SG-GAN concentrate more on the salient objects of remote sensing images. Experimental results show that SG-GAN achieves competitive PSNR and SSIM compared with the advanced super-resolution methods. Visual results demonstrate our superiority in restoring structures while generating remote sensing super-resolution images.
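A hedged sketch of a saliency-weighted reconstruction term in the spirit of SG-GAN's saliency loss; the exact weighting and the second-order restriction described in the paper may differ.

import torch

def saliency_weighted_l1(sr, hr, sal):
    # sal: saliency map in [0, 1]; salient objects (buildings, airplanes,
    # vehicles) are up-weighted relative to the background.
    return ((1.0 + sal) * (sr - hr).abs()).mean()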
50

Goh, Jia Yin, and Tsung Fei Khang. "On the classification of simple and complex biological images using Krawtchouk moments and Generalized pseudo-Zernike moments: a case study with fly wing images and breast cancer mammograms." PeerJ Computer Science 7 (September 9, 2021): e698. http://dx.doi.org/10.7717/peerj-cs.698.

Abstract:
In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations. Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n = 74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n = 1,151), we found that image features generated from the Generalized pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.
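Krawtchouk and generalized pseudo-Zernike moments need full implementations, so as a plainly labeled stand-in this sketch computes the classical Hu moment invariants, which illustrate the same idea of translation-, rotation-, and scale-resistant image features; the file name is hypothetical.

import cv2

wing = cv2.imread("fly_wing.png", cv2.IMREAD_GRAYSCALE)  # hypothetical wing image
features = cv2.HuMoments(cv2.moments(wing)).ravel()      # seven invariant features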