Selection of scholarly literature on the topic "Image cropping"

Cite a source in APA, MLA, Chicago, Harvard, or another citation style

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Image cropping".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the work's metadata.

Journal articles on the topic "Image cropping"

1

Li, Ya Feng, and Ying Lin. "Adaptive Image Cropping Based Depth of Field". Advanced Engineering Forum 6-7 (September 2012): 895–99. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.895.

Abstract:
The communication and sharing of visual media have a strongly cross-platform character: huge numbers of images are displayed on devices with different resolutions. Automatic image cropping, usually used to adapt an image to a target resolution, often discards important content. This paper proposes a novel adaptive photo cropping method whose main idea is to exploit the aesthetic characteristics of photographic works. The algorithm infers the photographer's intention from the depth of field in the image; the effects produced by focused and unfocused regions are used to extract importance information. As a result, the method agrees better with subjective evaluation and has an advantage in computational speed. Experiments are presented to demonstrate the validity of the proposed method.
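The core idea in this abstract, that in-focus regions mark the photographer's subject, can be sketched with a simple focus measure. The snippet below is an illustrative approximation, not the authors' algorithm: it scores per-pixel sharpness with an absolute Laplacian response and crops a fixed-size window around the focus-weighted centroid (all function names are invented for this sketch).

```python
import numpy as np

def focus_map(gray: np.ndarray) -> np.ndarray:
    """Score per-pixel sharpness as the absolute 4-neighbour Laplacian response."""
    lap = (
        -4.0 * gray
        + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
        + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
    )
    return np.abs(lap)

def crop_to_focus(gray: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Crop an out_h x out_w window centred on the focus-weighted centroid."""
    fm = focus_map(gray)
    ys, xs = np.indices(fm.shape)
    total = fm.sum() or 1.0  # avoid division by zero on perfectly flat images
    cy = int(round(float((fm * ys).sum() / total)))
    cx = int(round(float((fm * xs).sum() / total)))
    top = min(max(cy - out_h // 2, 0), gray.shape[0] - out_h)
    left = min(max(cx - out_w // 2, 0), gray.shape[1] - out_w)
    return gray[top:top + out_h, left:left + out_w]

# A sharp checkerboard patch in an otherwise flat image attracts the crop window.
img = np.zeros((100, 100))
img[10:30, 60:80] = (np.indices((20, 20)).sum(axis=0) % 2).astype(float)
crop = crop_to_focus(img, 40, 40)
```

A real depth-of-field estimator would of course be more elaborate than a single Laplacian response, but the crop-around-the-sharp-region mechanics are the same.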
2

Gao, Shangbing, Youdong Zhang, Wanli Feng, and Dashan Chen. "Image Cropping by Patches Dissimilarities". International Journal of Signal Processing, Image Processing and Pattern Recognition 8, no. 8 (August 31, 2015): 79–88. http://dx.doi.org/10.14257/ijsip.2015.8.8.09.

3

Yuhandri. "Perbandingan Metode Cropping pada Sebuah Citra untuk Pengambilan Motif Tertentu pada Kain Songket Sumatera Barat". Jurnal KomtekInfo 6, no. 1 (June 1, 2019): 97–107. http://dx.doi.org/10.35134/komtekinfo.v6i1.45.

Abstract:
In image processing we often need only a certain part of an image, the Region of Interest (ROI); to obtain it, a cropping process is carried out. Cropping is widely used by image-processing researchers to restrict processing to the part of an image that is actually needed. This study compares existing cropping methods for extracting a particular motif from an image of West Sumatra songket fabric. Rectangle, square, circle, ellipse, and polygon cropping were implemented and tested in the Matlab programming language. A comparison of the five cropping methods on five different songket image samples shows that the best results are obtained with the polygon method: because it can reach arbitrary coordinate points in the songket image, the cropped region is more precise, and fewer unwanted neighbouring motifs are carried along during the cropping process.
4

Darwis, Dedi, Akmal Junaidi, Dewi Asiah Shofiana, and Wamiliana. "A New Digital Image Steganography Based on Center Embedded Pixel Positioning". Cybernetics and Information Technologies 21, no. 2 (June 1, 2021): 89–104. http://dx.doi.org/10.2478/cait-2021-0021.

Abstract:
In this study we propose a new approach to the cropping problem in steganography, called Center Embedded Pixel Positioning (CEPP), which is based on Least Significant Bit (LSB) Matching and places the secret image in the center of the cover image. The experimental evaluation indicated that the secret image can be retrieved after sequential cropping of up to a total of 40% on the left, right, top, and bottom of the cover image. The secret image can also be retrieved if the total asymmetric cropping area is 25% and covers two sides (left-right, left-top, or right-top), or up to 70% if the bottom is included; for cropping of the bottom alone, the secret image can be extracted with up to 70% removed. If the asymmetric cropping area covers three sides, the algorithm fails to retrieve the secret image.
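The center-embedding idea behind CEPP, placing the payload in the middle of the cover so that symmetric cropping leaves it untouched, can be illustrated with plain LSB replacement. This is a minimal sketch of the placement idea only (the paper uses LSB matching; this substitutes simpler LSB substitution, and all names are hypothetical):

```python
import numpy as np

def embed_center_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write secret bits (0/1 values, uint8) into the LSBs of a centred square block."""
    side = int(np.ceil(np.sqrt(bits.size)))
    h, w = cover.shape
    top, left = (h - side) // 2, (w - side) // 2
    stego = cover.copy()
    block = stego[top:top + side, left:left + side].ravel()  # flattened copy of the block
    block[:bits.size] = (block[:bits.size] & 0xFE) | bits    # clear LSB, then set it
    stego[top:top + side, left:left + side] = block.reshape(side, side)
    return stego

def extract_center_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read n_bits back from the LSBs of the centred square block."""
    side = int(np.ceil(np.sqrt(n_bits)))
    h, w = stego.shape
    top, left = (h - side) // 2, (w - side) // 2
    block = stego[top:top + side, left:left + side].ravel()
    return block[:n_bits] & 1
```

Because the payload sits at the centre, a symmetric crop of the stego image keeps the block centred, so extraction still succeeds on the cropped image; asymmetric crops shift the centre and break this naive scheme, which is the problem the paper's positioning analysis addresses.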
5

Lu, Weirui, Xiaofen Xing, Bolun Cai, and Xiangmin Xu. "Listwise View Ranking for Image Cropping". IEEE Access 7 (2019): 91904–11. http://dx.doi.org/10.1109/access.2019.2925430.

6

Khalid, Shamsul Kamal Ahmad, Mustafa Mat Deris, and Kamaruddin Malik Mohamad. "Anti-cropping digital image watermarking using Sudoku". International Journal of Grid and Utility Computing 4, no. 2/3 (2013): 169. http://dx.doi.org/10.1504/ijguc.2013.056253.

7

Ciocca, Gianluigi, Claudio Cusano, Francesca Gasparini, and Raimondo Schettini. "Self-Adaptive Image Cropping for Small Displays". IEEE Transactions on Consumer Electronics 53, no. 4 (November 2007): 1622–27. http://dx.doi.org/10.1109/tce.2007.4429261.

8

Chang, Chin Chen, I. Ta Lee, Tsung Ta Ke, and Wen Kai Tai. "An Object-Based Image Reducing Approach". Advanced Materials Research 1044-1045 (October 2014): 1049–52. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1049.

Abstract:
Common methods for reducing image size include scaling and cropping. However, both approaches have quality problems for the reduced images. In this paper, we propose an image reducing algorithm that separates the main objects from the background. First, we extract two feature maps from an input image: an enhanced visual saliency map and an improved gradient map. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
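The fusion step this abstract describes, combining a saliency map and a gradient map into one importance map, can be illustrated with a weighted sum of min-max-normalised maps. This is a generic sketch under that assumption; the paper's actual integration rule may differ, and `alpha` is an assumed blending parameter:

```python
import numpy as np

def importance_map(saliency: np.ndarray, gradient: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Fuse min-max-normalised saliency and gradient maps by a weighted sum."""
    def norm(m: np.ndarray) -> np.ndarray:
        m = m.astype(float)
        span = m.max() - m.min()
        # A constant map carries no information; normalise it to zeros.
        return (m - m.min()) / span if span else np.zeros_like(m)
    return alpha * norm(saliency) + (1.0 - alpha) * norm(gradient)
```

The resulting map stays in [0, 1] and can be thresholded or integrated over candidate windows to pick the region that the reduced image should preserve.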
9

Yang, Rong, Robert Wang, Yunkai Deng, Xiaoxue Jia, and Heng Zhang. "Rethinking the Random Cropping Data Augmentation Method Used in the Training of CNN-Based SAR Image Ship Detector". Remote Sensing 13, no. 1 (December 23, 2020): 34. http://dx.doi.org/10.3390/rs13010034.

Abstract:
The random cropping data augmentation method is widely used to train convolutional neural network (CNN)-based target detectors on optical images (e.g., the COCO dataset). It can expand the scale of the dataset dozens of times while adding only a small amount of computation when training the detector. Random cropping also greatly enhances the spatial robustness of the model, because it makes the same target appear at different positions in the sample image. Random cropping and random flipping have become the standard configuration for tasks with limited training data, which makes it natural to introduce them into the training of CNN-based synthetic aperture radar (SAR) image ship detectors. However, in this paper we show that directly introducing traditional random cropping into the training of a CNN-based SAR image ship detector may generate considerable noise in the gradient during back-propagation, which hurts detection performance. To eliminate this noise in the training gradient, a simple and effective training method based on a feature-map mask is proposed. Experiments prove that the proposed method can effectively eliminate the gradient noise introduced by random cropping and significantly improve detection performance under a variety of evaluation metrics without increasing inference cost.
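The baseline augmentation this abstract builds on fits in a few lines. The sketch below is the generic optical-image version of random cropping, not the paper's feature-map-mask method, and its names are hypothetical:

```python
import numpy as np

def random_crop(image: np.ndarray, crop_h: int, crop_w: int,
                rng: np.random.Generator) -> np.ndarray:
    """Return one uniformly sampled crop_h x crop_w window of an H x W (x C) image."""
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - crop_h + 1))   # upper bound is exclusive
    left = int(rng.integers(0, w - crop_w + 1))
    return image[top:top + crop_h, left:left + crop_w]
```

Each call can place the same target at a different position within the crop, which is exactly the spatial-robustness effect the abstract credits to this augmentation, and also the source of the truncated-target gradient noise the paper analyses.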
10

DHARWADKAR, NAGARAJ V., and B. B. AMBERKER. "STEGANOGRAPHIC SCHEME FOR GRAY-LEVEL IMAGE USING PIXEL NEIGHBORHOOD AND LSB SUBSTITUTION". International Journal of Image and Graphics 10, no. 04 (October 2010): 589–607. http://dx.doi.org/10.1142/s0219467810003901.

Abstract:
The exchange of secret messages using images is of vital importance in covert communication, and steganographic schemes are employed to achieve it. Existing schemes based on pixel value differencing (PVD) with sequential least significant bit (LSB) substitution suffer from low embedding capacity. The embedding capacity is increased by using the edge regions of the image obtained from the neighborhood connectivity of pixels. We propose an adaptive steganographic scheme for gray-level images that relies on the neighborhood connectivity of pixels to estimate the embedding capacity and resolves the problem of sequential substitution by jumbling the bits of the secret message. The effect of cropping and filtering attacks on the stego-image is minimized by embedding copies of the secret message into four different regions of the cover image. The performance of the scheme is analyzed for various image processing attacks such as cropping, blurring, filtering, noise addition, and sharpening, and the proposed scheme is found to be robust against these attacks.

Dissertations on the topic "Image cropping"

1

Deigmoeller, Joerg. "Intelligent image cropping and scaling". Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/4745.

Abstract:
Nowadays there exists a huge number of end devices with different screen properties for watching television content, which is either broadcast or transmitted over the internet. To allow the best viewing conditions on each of these devices, different image formats have to be provided by the broadcaster. Producing content for every single format is, however, not practicable for the broadcaster, as it is much too laborious and costly. The most obvious solution for providing multiple image formats is to produce one high-resolution format and prepare formats of lower resolution from it. One possibility is simply to scale video images to the resolution of the target image format. Two significant drawbacks are the loss of image detail through downscaling and possibly unused image areas due to letter- or pillarboxes. A preferable solution is first to find the contextually most important region in the high-resolution format and then crop this area with the aspect ratio of the target image format. On the other hand, defining the contextually most important region manually is very time consuming, and applying it to live productions would be nearly impossible. Therefore, some approaches exist that define cropping areas automatically. To do so, they extract visual features, such as moving areas in a video, and define regions of interest (ROIs) based on them; the ROIs are finally used to define an enclosing cropping area. The extraction of features is done without any knowledge about the type of content, so these approaches cannot distinguish between features that might be important in a given context and those that are not. The work presented in this thesis tackles the problem of extracting visual features based on prior knowledge about the content. Such knowledge is fed into the system in the form of metadata that is available from TV production environments.
Based on the extracted features, ROIs are then defined and filtered depending on the analysed content. As proof of concept, the application adapts SDTV (Standard Definition Television) sports productions automatically to image formats of lower resolution through intelligent cropping and scaling. If no content information is available, the system can still be applied to any type of content through a default mode. The presented approach is based on the principle of a plug-in system: each plug-in represents a method for analysing video content information, either on a low level by extracting image features or on a higher level by processing extracted ROIs. The combination of plug-ins is determined by the incoming descriptive production metadata and can therefore be adapted to each type of sport individually. The application has been comprehensively evaluated by comparing the results of the system against alternative cropping methods, using videos that were manually cropped by a professional video editor, statically cropped videos, and simply scaled, non-cropped videos. In addition to purely subjective evaluations, the gaze positions of subjects watching sports videos were measured and compared to the region-of-interest positions extracted by the system.
2

Abdulla, Ghaleb. "An image processing tool for cropping and enhancing images". Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12232009-020207/.

3

Li, Yuxia. "Traffic and tillage effects on dryland cropping systems in north-east Australia". [St. Lucia, Qld.], 2001. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16335.pdf.

4

Ling, Haibin. "Techniques for image retrieval: deformation insensitivity and automatic thumbnail cropping". College Park, Md.: University of Maryland, 2006. http://hdl.handle.net/1903/3859.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Mennborg, Alexander. "AI-Driven Image Manipulation: Image Outpainting Applied on Fashion Images". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85148.

Abstract:
The e-commerce industry frequently has to display product images on a website where the images are provided by selling partners. The images in question can have drastically different aspect ratios and resolutions, which makes it harder to present them while maintaining a coherent user experience. Manipulating images by cropping can sometimes cut off parts of the foreground (i.e., the product or person within the image). Image outpainting is a technique that allows images to be extended past their boundaries and can be used to alter their aspect ratio; combined with object detection for locating the foreground, it makes it possible to manipulate images without sacrificing parts of the foreground. For image outpainting, a deep learning model that can extend images by at least 25% was trained on product images. The model achieves an 8.29 FID score, a 44.29 PSNR score, and a 39.95 BRISQUE score. To test this solution in practice, a simple image manipulation pipeline was created that uses image outpainting when needed, and it shows promising results. Images can be manipulated in under a second on a ZOTAC GeForce RTX 3060 (12 GB) GPU and in a few seconds on an Intel Core i7-8700K (16 GB) CPU. There is also a special case of images where the background has been digitally replaced with a solid color; these can be outpainted even faster without deep learning.
6

Fredericks, Erin P. K. "Preferred color correction for mixed taking-illuminant placement and cropping". Online version of thesis, 2009. http://hdl.handle.net/1850/11350.

7

Retsas, Ioannis. "A DCT-based image watermarking algorithm robust to cropping and compression". Thesis, Monterey California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/6032.

Abstract:
Approved for public release; distribution is unlimited.
Digital watermarking is a rapidly evolving field that involves embedding a certain kind of information within a digital object (image, video, audio) for the purpose of copyright protection. Both the image and the watermark are most frequently translated into a transform domain where the embedding takes place. The selection of both the transform domain and the particular embedding algorithm depends heavily on the application. One of the most widely used transform domains for watermarking still digital images is the Discrete Cosine Transform (DCT) domain, because the DCT is part of the JPEG standard, which in turn is widely used for the storage of digital images. In our research we propose a unique method for DCT-based image watermarking. In an effort to achieve robustness to cropping and JPEG compression, we have developed an algorithm for rating the 8×8 blocks of the image DCT coefficients, taking into account their embedding capacity and their spatial location within the image. Our experiments show that the proposed scheme offers adequate transparency and works exceptionally well against cropping, while at the same time maintaining sufficient robustness to JPEG compression.
8

Swathanthira Kumar Murali Murugavel M. "Magnetic Resonance Image segmentation using Pulse Coupled Neural Networks". Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-dissertations/280.

Abstract:
The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this dissertation, three novel PCNN-based automatic segmentation algorithms were developed to segment Magnetic Resonance Imaging (MRI) data: (a) PCNN image 'signature' based single-region cropping; (b) PCNN - Kittler-Illingworth minimum error thresholding; and (c) PCNN - Gaussian Mixture Model - Expectation Maximization (GMM-EM) based multiple-material segmentation. Among other control tests, the proposed algorithms were tested on three T2-weighted acquisition configurations comprising a total of 42 rat brain volumes, 20 T1-weighted MR human brain volumes from Harvard's Internet Brain Segmentation Repository, and 5 human MR breast volumes. The results were compared against manually segmented gold standards, Brain Extraction Tool (BET) V2.1 results, published results, and single-threshold methods. The Jaccard similarity index was used for numerical evaluation of the proposed algorithms. Our quantitative results demonstrate conclusively that PCNN-based multiple-material segmentation strategies can approach a human eye's intensity delineation capability in grayscale image segmentation tasks.
9

Chea, Sareth. "Economics of rice double-cropping in rainfed lowland areas of Cambodia: a farm-level analysis". [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16913.pdf.

10

Janurberg, Norman, and Christian Luksitch. "Exploring Deep Learning Frameworks for Multiclass Segmentation of 4D Cardiac Computed Tomography". Thesis, Linköpings universitet, Institutionen för hälsa, medicin och vård, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178648.

Abstract:
By combining computed tomography data with computational fluid dynamics, the cardiac hemodynamics of a patient can be assessed for the diagnosis and treatment of cardiac disease. The advantage of computed tomography over other medical imaging modalities is its ability to produce detailed, high-resolution images containing the geometric measurements relevant to the simulation of cardiac blood flow. To extract these geometries from computed tomography data, segmentation of 4D cardiac computed tomography (CT) data was performed using two deep learning frameworks that combine methods which have previously shown success in other research. The aim of this thesis work was to develop and evaluate a deep learning based technique to segment the left ventricle, ascending aorta, left atrium, left atrial appendage, and the proximal pulmonary vein inlets. Two frameworks were studied; both use a 2D multi-axis implementation to segment a single CT volume by examining it in three perpendicular planes, while one of them also employs a 3D binary model to extract and crop the foreground from the surrounding background. Both frameworks determine a segmentation prediction by reconstructing three volumes after 2D segmentation in each plane and combining their probabilities in an ensemble for a 3D output. The results of the two frameworks show similar performance and ability to properly segment 3D CT data. While the framework that examines 2D slices of full-size volumes produces an overall higher Dice score, it is less successful than the cropping framework at segmenting the smaller left atrial appendage. The full-size 2D slices contain background information in each slice, which is believed to be the main reason for their better overall segmentation performance, while the cropping framework provides a higher proportion of each foreground label, making it easier for the model to identify smaller structures.
Both frameworks show promise for use in 3D cardiac CT segmentation, and with further research and tuning of each network even better results can be achieved.

Books on the topic "Image cropping"

1

Paint Shop Pro 9 and Studio in Easy Steps: Edit Photos like a Pro! Southam: Computer Step, 2005.


Book chapters on the topic "Image cropping"

1

Ardizzone, Edoardo, Alessandro Bruno, and Giuseppe Mazzola. "Saliency Based Image Cropping". In Image Analysis and Processing – ICIAP 2013, 773–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41181-6_78.

2

Chen, Haixia, Shangpeng Wang, Hongyan Zhang, and Wei Wu. "Image Authentication for Permissible Cropping". In Information Security and Cryptology, 308–25. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14234-6_17.

3

Smith, Jan, and Roman Joost. "Image Straightening, Cropping, Scaling, and Perspective". In GIMP for Absolute Beginners, 61–83. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-3169-1_4.

4

Qu, Zhan, Jinqiao Wang, Min Xu, and Hanqing Lu. "Fusing Warping, Cropping, and Scaling for Optimal Image Thumbnail Generation". In Computer Vision – ACCV 2012, 445–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37447-0_34.

5

Lim, Bryan, and Richard LaFranchi. "Building an Image-Cropping Tool with Vue and Active Storage". In Vue on Rails, 135–51. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5116-4_6.

6

Yang, Yang, Linjun Yang, and Gangshan Wu. "Smart Thumbnail: Automatic Image Cropping by Mining Canonical Query Objects". In Lecture Notes in Computer Science, 337–49. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03731-8_32.

7

Rajabi, Mohammad Javad, Shahidan M. Abdullah, Majid Bakhtiari, and Saeid Bakhtiari. "A Robust DCT Based Technique for Image Watermarking Against Cropping Attacks". In Recent Trends in Information and Communication Technology, 747–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59427-9_77.

8

Sebé, Francesc, Josep Domingo-Ferrer, and Jordi Herrera. "Spatial-Domain Image Watermarking Robust against Compression, Filtering, Cropping, and Scaling". In Lecture Notes in Computer Science, 44–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44456-4_4.

9

Koh, Sung Shik, and Chung Hwa Kim. "Cropping, Rotation and Scaling Invariant LBX Interleaved Voice-in-Image Watermarking". In Lecture Notes in Computer Science, 498–507. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30583-5_53.

10

Lu, JunFeng, and MingXue Liao. "Target Cropping: A New Data Augmentation Method of Fine-Grained Image Classification". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 343–51. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32456-8_37.


Conference papers on the topic "Image cropping"

1

Cheatle, Phil. "Automatic image cropping for republishing". In IS&T/SPIE Electronic Imaging, edited by Qian Lin, Zhigang Z. Fan, Theo Gevers, Raimondo Schettini, and Cees Snoek. SPIE, 2010. http://dx.doi.org/10.1117/12.838452.

2

Jieying She, Duo Wang, and Mingli Song. "Automatic image cropping using sparse coding". In 2011 First Asian Conference on Pattern Recognition (ACPR 2011). IEEE, 2011. http://dx.doi.org/10.1109/acpr.2011.6166623.

3

Jaiswal, Nehal, and Yogesh K. Meghrajani. "Automatic image cropping using saliency map". In 2015 International Conference on Industrial Instrumentation and Control (ICIC). IEEE, 2015. http://dx.doi.org/10.1109/iic.2015.7150885.

4

Steinebach, Martin, Huajian Liu, and York Yannikos. "Efficient Cropping-Resistant Robust Image Hashing". In 2014 Ninth International Conference on Availability, Reliability and Security (ARES). IEEE, 2014. http://dx.doi.org/10.1109/ares.2014.85.

5

Lian, Tianpei, Zhiguo Cao, Ke Xian, Zhiyu Pan, and Weicai Zhong. "Context-Aware Candidates for Image Cropping". In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506111.

6

Yan, Jianzhou, Stephen Lin, Sing Bing Kang, and Xiaoou Tang. "Learning the Change for Automatic Image Cropping". In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013. http://dx.doi.org/10.1109/cvpr.2013.130.

7

Chen, Jiansheng, Gaocheng Bai, Shaoheng Liang, and Zhengqin Li. "Automatic Image Cropping: A Computational Complexity Study". In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.61.

8

Li, Zhuopeng, and Xiaoyan Zhang. "Collaborative Deep Reinforcement Learning for Image Cropping". In 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019. http://dx.doi.org/10.1109/icme.2019.00052.

9

Ciocca, G., C. Cusano, F. Gasparini, and R. Schettini. "Self-Adaptive Image Cropping for Small Displays". In 2007 Digest of Technical Papers International Conference on Consumer Electronics. IEEE, 2007. http://dx.doi.org/10.1109/icce.2007.341331.

10

Ziabari, Seyed Sahand Mohamadi. "Intelligent image watermarking robust against cropping attack". In 2015 2nd International Conference on Knowledge-Based Engineering and Innovation (KBEI). IEEE, 2015. http://dx.doi.org/10.1109/kbei.2015.7436220.

