A selection of scholarly literature on the topic "REAL IMAGE PREDICTION"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "REAL IMAGE PREDICTION".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the corresponding data are available in the record's metadata.

Journal articles on the topic "REAL IMAGE PREDICTION"

1

Takezawa, Takuma, and Yukihiko Yamashita. "Wavelet Based Image Coding via Image Component Prediction Using Neural Networks." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 137–42. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1026.

Abstract:
In wavelet-based image coding, performance can be enhanced by applying prediction. However, it is difficult to apply prediction from a decoded image to the 2D DWT used in JPEG2000, because the already-decoded pixels lie far from the pixels to be predicted; for this reason, DWT coefficients rather than images have traditionally been predicted. To address this problem, predictive coding is applied to the one-dimensional transform part of the 2D DWT. Zhou and Yamashita proposed half-pixel line-segment matching for this kind of wavelet-based predictive coding. In this research, convolutional neural networks are used as the predictor, estimating a pair of target pixels from the values of already-decoded pixels adjacent to the target row. This reduces redundancy, since only the error between the real value and its predicted value is transmitted. Experimental results demonstrate the advantage of the approach.
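To make the prediction-plus-residual idea above concrete, here is a minimal, hedged sketch in Python/PyTorch: a small convolutional predictor estimates a pair of target pixels per column from already-decoded rows, and only the prediction residual would be passed on to the entropy coder. The class name RowPredictor and all shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class RowPredictor(nn.Module):
    """Toy 1D CNN that predicts a pair of target pixels per column
    from K already-decoded rows above the target row (illustrative only)."""
    def __init__(self, context_rows: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(context_rows, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 2, kernel_size=3, padding=1),  # 2 target pixels per column
        )

    def forward(self, decoded_rows: torch.Tensor) -> torch.Tensor:
        # decoded_rows: (batch, context_rows, width)
        return self.net(decoded_rows)

predictor = RowPredictor()
decoded_rows = torch.rand(1, 4, 256)   # already-decoded context rows
target = torch.rand(1, 2, 256)         # the two rows that should be coded next
prediction = predictor(decoded_rows)
residual = target - prediction         # only this residual would be entropy-coded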
2

Hong, Yan, Li Niu, and Jianfu Zhang. "Shadow Generation for Composite Image in Real-World Scenes." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 914–22. http://dx.doi.org/10.1609/aaai.v36i1.19974.

Abstract:
Image composition aims to insert a foreground object into a background image. Most previous image composition methods focus on adjusting the foreground to make it compatible with the background, while ignoring the shadow that the foreground should cast on the background. In this work, we focus on generating a plausible shadow for the foreground object in the composite image. First, we contribute a real-world shadow generation dataset, DESOBA, built by generating synthetic composite images from paired real and deshadowed images. Then, we propose a novel shadow generation network, SGRNet, which consists of a shadow mask prediction stage and a shadow filling stage. In the shadow mask prediction stage, foreground and background information interact thoroughly to generate the foreground shadow mask. In the shadow filling stage, shadow parameters are predicted to fill the shadow area. Extensive experiments on our DESOBA dataset and on real composite images demonstrate the effectiveness of the proposed method. Our dataset and code are available at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBA.
3

Sather, A. P., S. D. M. Jones, and D. R. C. Bailey. "Real-time ultrasound image analysis for the estimation of carcass yield and pork quality." Canadian Journal of Animal Science 76, no. 1 (March 1, 1996): 55–62. http://dx.doi.org/10.4141/cjas96-008.

Abstract:
Average backfat thickness measurements (liveweight of 92.5 kg) were made on 276 pigs using the Krautkramer USK7 ultrasonic machine. Immediately preceding and 1 h after slaughter, real-time ultrasonic images were made between the 3rd and 4th last ribs with the Tokyo Keiki LS-1000 (n = 149) and/or CS-3000 (n = 240) machines. Image analysis software was used to measure fat thickness (FT), muscle depth (MD) and area (MA), as well as to score the number of objects, object area and percentage object area of the loin for predicting meat quality. Carcasses were also graded by the Hennessy Grading Probe (HGP). Prediction equations for lean in the primal cuts based on USK7 and LS-1000 animal fat measurements had R2-values (residual standard deviations, RSD) of 0.62 (27.0) and 0.66 (25.7). Adding MD or MA to LS-1000 FT measurements increased the R2-values to 0.68 and 0.66. Prediction equations using animal fat measurements made by the USK7 and CS-3000 had R2-values (RSD) of 0.66 (26.5) and 0.76 (22.4). Adding MD or MA to CS-3000 FT measurements made no further improvement in the R2-values. Estimation of commercial lean yield from carcass FT and MD measurements made by the HGP and LS-1000 had R2-values (RSD) of 0.58 (1.72) and 0.65 (1.56). Adding MA to LS-1000 measurements made no further improvement in the R2-values. Prediction equations based on carcass FT and MD measurements made by the HGP and CS-3000 had R2-values (RSD) of 0.68 (1.65) and 0.72 (1.54). Adding MA to CS-3000 measurements made no further improvement in the prediction equation. It was concluded that RTU has most value for predicting carcass lean content, and that further improvements in precision will come from more accurate FT measurements from RTU images made by image analysis software. Correlations of the number of objects, object area and percent object area from RTU images with intramuscular fat or marbling score measured on the live pig or carcass were low, and these measures do not presently appear suitable for predicting intramuscular fat. Key words: Carcass composition, meat quality, marbling, intramuscular fat, sex, pigs
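As a hedged illustration of the kind of prediction equation evaluated in this study (reported as R2-values and RSD), the sketch below fits a linear model of lean yield from fat thickness (FT) and muscle depth (MD) using scikit-learn. All numbers are synthetic placeholders, not the study's data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
ft = rng.normal(20.0, 4.0, 200)           # fat thickness, mm (synthetic)
md = rng.normal(55.0, 6.0, 200)           # muscle depth, mm (synthetic)
lean_yield = 62.0 - 0.6 * ft + 0.1 * md + rng.normal(0, 1.5, 200)

X = np.column_stack([ft, md])
model = LinearRegression().fit(X, lean_yield)
pred = model.predict(X)

r2 = r2_score(lean_yield, pred)
rsd = np.sqrt(np.mean((lean_yield - pred) ** 2))  # residual standard deviation (approx.)
print(f"R^2 = {r2:.2f}, RSD = {rsd:.2f}")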
4

Tham, Hwee Sheng, Razaidi Hussin, and Rizalafande Che Ismail. "A Real-Time Distance Prediction via Deep Learning and Microsoft Kinect." IOP Conference Series: Earth and Environmental Science 1064, no. 1 (July 1, 2022): 012048. http://dx.doi.org/10.1088/1755-1315/1064/1/012048.

Abstract:
3D (three-dimensional) understanding has become a herald of computer vision and graphics research in the era of technology. It benefits many applications such as autonomous cars, robotics, and medical image processing. Compared with 2D detection, 3D detection brings additional convenience to the human community. 3D detection combines RGB (red, green and blue) colour images with depth images and is therefore able to perform better than 2D detection in real scenes. Current technology relies on costly light detection and ranging (LiDAR) sensors; however, the Microsoft Kinect has gradually been replacing LiDAR systems for 3D detection. In this project, a Kinect camera is used to extract depth information from images. From the depth images, the distance can be determined easily: in the colour scale, red is the nearest and blue is the farthest, and the depth image turns black when the limit of the Kinect camera's measuring range is reached. The collected depth information is used to train a deep learning architecture that performs real-time distance prediction.
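A rough sketch, under our own assumptions rather than the paper's code, of how a small CNN could be trained to regress a distance value from a single-channel Kinect-style depth image; the DepthToDistance name, sizes, and training loop are illustrative only.

import torch
import torch.nn as nn

class DepthToDistance(nn.Module):
    """Tiny CNN that maps a 1-channel depth image to a scalar distance (metres)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, depth):
        x = self.features(depth).flatten(1)
        return self.head(x).squeeze(1)

model = DepthToDistance()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

depth_batch = torch.rand(8, 1, 120, 160)   # synthetic depth frames
true_distance = torch.rand(8) * 4.0        # synthetic ground-truth distances, metres
for _ in range(10):                        # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(depth_batch), true_distance)
    loss.backward()
    optimizer.step()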
5

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (May 28, 2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain in which deep convolutional neural networks have mainly emerged for such applications. These networks achieve remarkable prediction accuracy, but they are considered black-box models since they lack the ability to interpret their inner working mechanism and to explain the main reasoning behind their predictions. There is a variety of real-world tasks, such as medical applications, in which interpretability and explainability play a significant role. When decisions on critical issues such as cancer prediction are made with black-box models that achieve high prediction accuracy but provide no explanation for their predictions, accuracy alone cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, defining and validating the quality of a prediction model's explanation is, in general, extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework is proposed for image classification problems, able to produce high-quality explanations. For this task, a feature extraction and explanation extraction framework is developed, together with three basic general conditions which validate the quality of any model's prediction explanation in any application domain. The feature extraction framework extracts and creates transparent and meaningful high-level features for images, while the explanation extraction framework is responsible for creating good explanations relying on these extracted features and on the prediction model's inner function, with respect to the proposed conditions. As a case study, brain tumor magnetic resonance images were utilized for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model, since it achieves sufficient prediction accuracy while also being interpretable and explainable in simple human terms.
6

Snider, Eric J., Sofia I. Hernandez-Torres, and Ryan Hennessey. "Using Ultrasound Image Augmentation and Ensemble Predictions to Prevent Machine-Learning Model Overfitting." Diagnostics 13, no. 3 (January 23, 2023): 417. http://dx.doi.org/10.3390/diagnostics13030417.

Abstract:
Deep learning predictive models have the potential to simplify and automate medical imaging diagnostics by lowering the skill threshold for image interpretation. However, this requires predictive models that are generalized to handle subject variability as seen clinically. Here, we highlight methods to improve the test accuracy of an image classifier model for shrapnel identification using tissue phantom image sets. Using a previously developed image classifier neural network, termed ShrapML, blind test accuracy was less than 70% and varied depending on the training/test data setup, as determined by a leave-one-subject-out (LOSO) holdout methodology. Introducing affine transformations for image augmentation or MixUp methodologies to generate additional training sets improved model performance, and overall accuracy rose to 75%. Further improvements were made by aggregating predictions across five LOSO holdouts, by bagging confidences or predictions from all LOSOs or from the top-3 LOSO confidence models for each image prediction. Top-3 LOSO confidence bagging performed best, with test accuracy improving to greater than 85% for two different blind tissue phantoms. This was confirmed by gradient-weighted class activation mapping, which showed that the image classifier was tracking shrapnel in the image sets. Overall, data augmentation and ensemble prediction approaches were suitable for creating more generalized predictive models for ultrasound image analysis, a critical step for real-time diagnostic deployment.
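Below is a hedged sketch of the two ingredients highlighted in the abstract, MixUp-style augmentation and top-3 confidence bagging across hold-out models. It is written from the abstract's description, not from the published ShrapML code, and the function names are invented for illustration.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=np.random.default_rng()):
    """Blend two images and their one-hot labels (MixUp augmentation)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def top3_confidence_bagging(confidences):
    """confidences: (n_models, n_classes) softmax outputs for one image.
    Average only the 3 models that are most confident in their top class."""
    peak = confidences.max(axis=1)                  # each model's top-class confidence
    top3 = np.argsort(peak)[-3:]                    # indices of the 3 most confident models
    return confidences[top3].mean(axis=0).argmax()  # bagged class decision

# toy usage
img_a, img_b = np.random.rand(64, 64), np.random.rand(64, 64)
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_aug, y_aug = mixup(img_a, lab_a, img_b, lab_b)

model_outputs = np.random.dirichlet([1, 1], size=5)  # stand-in for 5 LOSO models, 2 classes
predicted_class = top3_confidence_bagging(model_outputs)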
7

Froning, Dieter, Eugen Hoppe, and Ralf Peters. "The Applicability of Machine Learning Methods to the Characterization of Fibrous Gas Diffusion Layers." Applied Sciences 13, no. 12 (June 9, 2023): 6981. http://dx.doi.org/10.3390/app13126981.

Abstract:
Porous materials can be characterized by well-trained neural networks. In this study, neural networks were trained on artificial data representing fibrous paper-type gas diffusion layers, created by a stochastic geometry model. The features of the data were calculated by means of transport simulations using the Lattice-Boltzmann method based on stochastic micro-structures. A convolutional neural network was developed that can predict the permeability and tortuosity of the material, both through-plane and in-plane. The characteristics of real data, both uncompressed and compressed, were predicted. The data were represented by reconstructed images of different sizes and image resolutions. Image artifacts are also a source of potential errors in the prediction. The Kozeny-Carman trend was used to evaluate the prediction of permeability and tortuosity of compressed real data. Using this method, it was possible to decide whether the predictions on compressed data were appropriate.
8

Moskolaï, Waytehad Rose, Wahabou Abdou, Albert Dipanda, and Kolyang. "Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review." Remote Sensing 13, no. 23 (November 27, 2021): 4822. http://dx.doi.org/10.3390/rs13234822.

Abstract:
Satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to use not only spatial information but also the temporal dimension of the data, which is used for multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series for predictions. However, these methods have limitations in some situations, thus deep learning (DL) techniques have been introduced to achieve the best performance. Reviews of machine learning and DL methods for time series prediction problems have been conducted in previous studies. However, to the best of our knowledge, none of these surveys have addressed the specific case of works using DL techniques and satellite images as datasets for predictions. Therefore, this paper concentrates on the DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories, namely recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptron). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented in this article. These applications include weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and proposed workable solutions related to the use of DL for SITS prediction are also highlighted.
9

Rajesh, E., Shajahan Basheer, Rajesh Kumar Dhanaraj, Soni Yadav, Seifedine Kadry, Muhammad Attique Khan, Ye Jin Kim, and Jae-Hyuk Cha. "Machine Learning for Online Automatic Prediction of Common Disease Attributes Using Never-Ending Image Learner." Diagnostics 13, no. 1 (December 28, 2022): 95. http://dx.doi.org/10.3390/diagnostics13010095.

Abstract:
The rapid increase in Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, getting medical assistance or healthcare advice online is easier to understand than getting it in person. For mild symptoms, people frequently feel reluctant to visit the hospital or a doctor; instead, they express their questions on numerous healthcare forums. However, predictions may not always be accurate, and there is no assurance that users will always receive a reply to their posts. In addition, some posts are made up, which can misdirect the patient. To address these issues, automatic online prediction (OAP) is proposed. OAP clarifies the idea of employing machine learning to predict the common attributes of disease using Never-Ending Image Learner with an intelligent analysis of disease factors. Never-Ending Image Learner predicts disease factors by selecting from finite data images with minimum structural risk and efficiently predicting efficient real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with the machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. Using a Never-Ending Image Learner based on Machine Learning, common disease attributes may be predicted online automatically. This method has deeper storage of images, and their data are stored per the isotropic positioning. The proposed method was compared with existing approaches, such as Multiple-Instance Learning for automated image indexing and hyper-spectrum image classification. Regarding the machine learning of multiple images with the application of isotropic positioning, the operating efficiency is improved, and the results are predicted with better accuracy. In this paper, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this survey, the proposed method is shown to achieve higher accuracy, proving its efficiency compared to the existing methods.
10

Bhimte, Sumit, Hrishikesh Hasabnis, Rohit Shirsath, Saurabh Sonar, and Mahendra Salunke. "Severity Prediction System for Real Time Pothole Detection." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 29, 2021): 1328–34. http://dx.doi.org/10.51201/jusst/21/07356.

Abstract:
Pothole detection using image processing or an accelerometer is not new, but there is no real-time application that combines both techniques into an efficient solution. We present a system that helps drivers determine the severity of a pothole using both image processing and an accelerometer-based algorithm. The challenge in building this system was to efficiently detect potholes in roads, to analyze their severity, and to provide users with information such as road quality and the best possible route. We used various algorithms for frequency-based pothole detection, compared the results, and selected the approach best suited to the project goals. For image processing we used a simple differentiation-based edge detection algorithm. The system has been built on map interfaces for Android devices using Android Studio and uses Python frameworks for the image processing and machine learning components, backed by a powerful DBMS. The project combines efficient technology tools to provide a good user experience, real-time operation, reliability, and improved efficiency.
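A minimal sketch of a differentiation-based edge detector of the kind mentioned above (finite-difference gradients plus a magnitude threshold); this is a generic stand-in, not the project's code.

import numpy as np

def simple_edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Differentiation-based edge detection: finite differences + magnitude threshold.
    gray: 2D array with values in [0, 1]; returns a binary edge map."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:] = np.diff(gray, axis=1)   # horizontal derivative
    gy[1:, :] = np.diff(gray, axis=0)   # vertical derivative
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

frame = np.random.rand(240, 320)        # stand-in for a grayscale road-image frame
edges = simple_edge_map(frame)
print("edge pixels:", int(edges.sum()))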

Dissertations on the topic "REAL IMAGE PREDICTION"

1

Raykhel, Ilya. "Real-time automatic price prediction for eBay online trading /." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2697.pdf.

2

Yin, Ling. "Automatic Stereoscopic 3D Chroma-Key Matting Using Perceptual Analysis and Prediction." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31851.

Abstract:
This research presents a novel framework for automatic chroma keying, along with optimizations for real-time and stereoscopic 3D processing. It first simulates the process by which human perception isolates foreground elements in a given scene via perceptual analysis, and then predicts foreground colours and the alpha map based on the analysis results and a restored clean background plate rather than direct sampling. In addition, an object-level depth map is generated through stereo matching on a carefully determined feature map. Three prototypes on different platforms have been implemented according to their hardware capability based on the proposed framework. To achieve real-time performance, the entire procedure is optimized for parallel processing and data paths on the GPU, as well as heterogeneous computing between GPU and CPU. Qualitative comparisons between results generated by the proposed algorithm and other existing algorithms show that the proposed one generates more acceptable alpha maps and foreground colours, especially in regions that contain translucencies and details. The quantitative evaluations also validate its advantages in both quality and speed.
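As a rough, hedged illustration of the classical chroma-key matting idea that this framework extends (an alpha matte derived from per-pixel distance to a key colour, rather than the thesis's perceptual-analysis pipeline), one could write:

import numpy as np

def chroma_key_alpha(image: np.ndarray, key_rgb=(0.0, 1.0, 0.0),
                     tol: float = 0.35, soft: float = 0.15) -> np.ndarray:
    """Estimate an alpha matte from per-pixel distance to a key colour.
    image: (H, W, 3) floats in [0, 1]; returns alpha in [0, 1] (1 = foreground)."""
    dist = np.linalg.norm(image - np.asarray(key_rgb), axis=-1)
    alpha = np.clip((dist - tol) / soft, 0.0, 1.0)  # soft transition band
    return alpha

frame = np.random.rand(480, 640, 3)        # stand-in for a green-screen frame
alpha = chroma_key_alpha(frame)
foreground = frame * alpha[..., None]      # premultiplied foreground colours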
3

Kaneva, Biliana K. "Large databases of real and synthetic images for feature evaluation and prediction." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71478.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 157-167).
Image features are widely used in computer vision applications from stereo matching to panorama stitching to object and scene recognition. They exploit image regularities to capture structure in images both locally, using a patch around an interest point, and globally, over the entire image. Image features need to be distinctive and robust toward variations in scene content, camera viewpoint and illumination conditions. Common tasks are matching local features across images and finding semantically meaningful matches amongst a large set of images. If there is enough structure or regularity in the images, we should be able not only to find good matches but also to predict parts of the objects or the scene that were not directly captured by the camera. One of the difficulties in evaluating the performance of image features in both the prediction and matching tasks is the availability of ground truth data. In this dissertation, we take two different approaches. First, we propose using a photorealistic virtual world for evaluating local feature descriptors and learning new feature detectors. Acquiring ground truth data and, in particular, pixel-to-pixel correspondences between images, in complex 3D scenes under different viewpoint and illumination conditions in a controlled way is nearly impossible in a real world setting. Instead, we use a high-resolution 3D model of a city to gain complete and repeatable control of the environment. We calibrate our virtual world evaluations by comparing against feature rankings made from photographic data of the same subject matter (the Statue of Liberty). We then use our virtual world to study the effects on descriptor performance of controlled changes in viewpoint and illumination. We further employ machine learning techniques to train a model that would recognize visually rich interest points and optimize the performance of a given descriptor. In the latter part of the thesis, we take advantage of the large amounts of image data available on the Internet to explore the regularities in outdoor scenes and, more specifically, the matching and prediction tasks in street level images. Generally, people are very adept at predicting what they might encounter as they navigate through the world. They use all of their prior experience to make such predictions even when placed in an unfamiliar environment. We propose a system that can predict what lies just beyond the boundaries of the image using a large photo collection of images of the same class, but not from the same location in the real world. We evaluate the performance of the system using different global or quantized densely extracted local features. We demonstrate how to build seamless transitions between the query and prediction images, thus creating a photorealistic virtual space from real world images.
by Biliana K. Kaneva.
Ph.D.
4

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affects the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios shows that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
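The thesis contrasts causal filtering with non-causal smoothing for offline reference generation. As a hedged, minimal illustration of that distinction, the sketch below runs a 1D constant-velocity Kalman filter and then a Rauch-Tung-Striebel smoother over the same measurements; it is not the thesis's multi-sensor setup, and all noise parameters are assumed.

import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # only position is measured
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

def kalman_filter(zs, x0=np.zeros(2), P0=np.eye(2)):
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                  # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)   # Kalman gain
        x = xp + K @ (np.atleast_1d(z) - H @ xp)         # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return xs, Ps, xps, Pps

def rts_smoother(xs, Ps, xps, Pps):
    """Non-causal pass: refine each causal estimate using future information."""
    n = len(xs)
    xs_s, Ps_s = xs[:], Ps[:]
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s[k] = xs[k] + C @ (xs_s[k + 1] - xps[k + 1])
        Ps_s[k] = Ps[k] + C @ (Ps_s[k + 1] - Pps[k + 1]) @ C.T
    return xs_s, Ps_s

true_pos = np.cumsum(np.ones(50) * 0.5)                   # target moving at 0.5 units/step
measurements = true_pos + np.random.normal(0, 0.7, 50)    # noisy sensor readings
filtered = kalman_filter(measurements)
smoothed, _ = rts_smoother(*filtered)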
5

KHICHI, MANISH. "DEEPFAKE OR REAL IMAGE PREDICTION USING MESONET." Thesis, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18952.

Abstract:
Advances in machine learning, deep learning, and artificial intelligence (AI) allow people to exchange the faces and voices of other people in videos, so that it looks as if those people did or said what is shown. These videos and photos are called "deepfakes", and each day they become more sophisticated, which worries legislators. The technology uses machine learning to provide the computer with real data about an image so that it can be falsified. Creators of deepfakes use artificial intelligence and machine learning algorithms to mimic the behaviour and characteristics of real humans. Deepfakes differ from traditional fake media because they are difficult to identify. As the 2020 election approached, AI-generated deepfakes entered the news cycle. Deepfakes threaten facial recognition and online content, and this kind of hoax can be dangerous because the technique can be abused. Fake video, voice, and audio clips can cause enormous damage. We use MesoNet to make predictions on image data and examine four sets of images: correctly identified deepfakes, correctly identified reals, misidentified deepfakes, and misidentified reals, to see whether the human eye can pick up any insights into the world of deepfakes. We use the Meso-4 model trained on the deepfake and real data set.
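A hedged sketch of a Meso-4-style classifier (four small convolution blocks followed by a compact dense head), written from the general public description of MesoNet rather than from this thesis's code; the layer sizes and the Meso4Like name are assumptions.

import torch
import torch.nn as nn

class Meso4Like(nn.Module):
    """Simplified Meso-4-style binary classifier (real vs. deepfake), for illustration."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, k):
            return nn.Sequential(
                nn.Conv2d(cin, cout, k, padding=k // 2),
                nn.BatchNorm2d(cout), nn.ReLU(), nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(3, 8, 3), block(8, 8, 5), block(8, 16, 5), block(16, 16, 5),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(16 * 16 * 16, 16), nn.LeakyReLU(0.1),
            nn.Linear(16, 1),
        )

    def forward(self, x):              # x: (batch, 3, 256, 256)
        return torch.sigmoid(self.classifier(self.features(x)))

model = Meso4Like()
faces = torch.rand(4, 3, 256, 256)     # stand-in face crops
scores = model(faces)                  # values near 1 -> predicted deepfake (by this sketch's convention)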
6

Lee, Hsua-Yun, and 李炫運. "Real-Time Multi-Object Tracking Algorithm Using Improved Object-Image-Completeness and Prediction-Search." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/8n9yym.

Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
101
Apart from systems using a general camera, most current robust techniques for multi-object tracking use additional sensors such as IR, stereo image sensors, or multiple cameras to acquire extra information (e.g., depth data) for segmenting objects from the background. There are also many studies based on a single image sequence; however, most of them cannot accurately segment objects from a complex background, cannot track overlapping objects, and usually rely on complex algorithms. This thesis presents a method that avoids the common practice of using a complex algorithm for multi-object tracking in a single image sequence, in order to achieve low cost and real-time operation. We propose a "Prediction-Search" scheme that lowers the computation needed to meet the real-time requirement, and an improved "Object-Image-Completeness" measure that mitigates broken-image issues for target objects. In addition to Prediction-Search, we add distance and colour comparison algorithms as tracking aids to make the tracking robust. Prediction-Search requires very little computation, so tracking is very fast: in general, most tracks are completed by Prediction-Search alone, a minority need the distance comparison, and only a few of those need the colour comparison. On our test platform the system sustains a tracking speed of 30 frames/s on a 30 frames/s input image sequence, meeting the real-time requirement. In terms of tracking performance, the proposed algorithm can track each object even when objects overlap each other. We have demonstrated tracking of 18 objects, 3 overlapping objects, objects of different shapes and sizes, objects following irregular paths, and walking and running people.
7

Harding, G., and M. Bloj. "Real and predicted influence of image manipulations on eye movements during scene recognition." 2010. http://hdl.handle.net/10454/6004.

Abstract:
In this paper, we investigate how controlled changes to image properties and orientation affect eye movements for repeated viewings of images of natural scenes. We make changes to images by manipulating low-level image content (such as luminance or chromaticity) and/or inverting the image. We measure the effects of these manipulations on human scanpaths (the spatial and chronological path of fixations), additionally comparing these effects to those predicted by a widely used saliency model (L. Itti & C. Koch, 2000). Firstly we find that repeated viewing of a natural image does not significantly modify the previously known repeatability (S. A. Brandt & L. W. Stark, 1997; D. Noton & L. Stark, 1971) of scanpaths. Secondly we find that manipulating image features does not necessarily change the repeatability of scanpaths, but the removal of luminance information has a measurable effect. We also find that image inversion appears to affect scene perception and recognition and may alter fixation selection (although we only find an effect on scanpaths with the additional removal of luminance information). Additionally we confirm that visual saliency as defined by L. Itti and C. Koch's (2000) model is a poor predictor of real observer scanpaths and does not predict the small effects of our image manipulations on scanpaths.

Books on the topic "REAL IMAGE PREDICTION"

1

Carr, Michael William. International Marine's weather predicting simplified: How to read weather charts and satellite images. Camden, Me: International Marine, 1999.


Book chapters on the topic "REAL IMAGE PREDICTION"

1

Robinson, Robert, Ozan Oktay, Wenjia Bai, Vanya V. Valindria, Mihir M. Sanghvi, Nay Aung, José M. Paiva, et al. "Real-Time Prediction of Segmentation Quality." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 578–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00937-3_66.

2

Bertini, M., A. Del Bimbo, and W. Nunziati. "Soccer Videos Highlight Prediction and Annotation in Real Time." In Image Analysis and Processing – ICIAP 2005, 637–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11553595_78.

3

Joldes, Grand Roman, Adam Wittek, Mathieu Couton, Simon K. Warfield, and Karol Miller. "Real-Time Prediction of Brain Shift Using Nonlinear Finite Element Algorithms." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009, 300–307. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04271-3_37.

4

Zuo, Fei, and Peter H. N. de With. "Real-Time Facial Feature Extraction by Cascaded Parameter Prediction and Image Optimization." In Lecture Notes in Computer Science, 651–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30126-4_79.

5

Zhong, Lijun, Qifeng Yu, Jiexin Zhou, Xiaohu Zhang, and Yani Lu. "Real-Time Interpretation Method for Shooting-Range Image Based on Position Prediction." In Lecture Notes in Computer Science, 68–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34120-6_6.

6

Ding, Yukun, Dewen Zeng, Mingqi Li, Hongwen Fei, Haiyun Yuan, Meiping Huang, Jian Zhuang, and Yiyu Shi. "Towards Efficient Human-Machine Collaboration: Real-Time Correction Effort Prediction for Ultrasound Data Acquisition." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 461–70. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87193-2_44.

7

Muthukumar, Pratyush, Emmanuel Cocom, Jeanne Holm, Dawn Comer, Anthony Lyons, Irene Burga, Christa Hasenkopf, and Mohammad Pourhomayoun. "Real-Time Spatiotemporal Air Pollution Prediction with Deep Convolutional LSTM Through Satellite Image Analysis." In Advances in Data Science and Information Engineering, 315–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71704-9_20.

8

Singh, Chandan, Wooseok Ha, and Bin Yu. "Interpreting and Improving Deep-Learning Models with Reality Checks." In xxAI - Beyond Explainable AI, 229–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_12.

Abstract:
Recent deep-learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. This chapter covers recent work aiming to interpret models by attributing importance to features and feature groups for a single prediction. Importantly, the proposed attributions assign importance to interactions between features, in addition to features in isolation. These attributions are shown to yield insights across real-world domains, including bio-imaging, cosmology imaging, and natural-language processing. We then show how these attributions can be used to directly improve the generalization of a neural network or to distill it into a simple model. Throughout the chapter, we emphasize the use of reality checks to scrutinize the proposed interpretation techniques. (Code for all methods in this chapter is available at github.com/csinva and github.com/Yu-Group, implemented in PyTorch [54].)
9

Klingner, Marvin, and Tim Fingscheidt. "Improved DNN Robustness by Multi-task Training with an Auxiliary Self-Supervised Task." In Deep Neural Networks and Data for Automated Driving, 149–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_5.

Abstract:
While deep neural networks for environment perception tasks in autonomous driving systems often achieve impressive performance on clean and well-prepared images, their robustness under real conditions, i.e., on images perturbed with noise patterns or adversarial attacks, often suffers a significant decrease in performance. In this chapter, we address this problem for the task of semantic segmentation by proposing multi-task training with the additional task of depth estimation, with the goal of improving DNN robustness. This method has very wide potential applicability, as the additional depth estimation task can be trained in a self-supervised fashion, relying only on unlabeled image sequences during training. The final trained segmentation DNN is, however, still applicable on a single-image basis during inference, without additional computational overhead compared to the single-task model. Additionally, our evaluation introduces a measure which allows for a meaningful comparison between different noise and attack types. We show the effectiveness of our approach on the Cityscapes and KITTI datasets, where our method improves the DNN performance w.r.t. the single-task baseline in terms of robustness against multiple noise and adversarial attack types, which is supplemented by an improved absolute prediction performance of the resulting DNN.
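As a hedged sketch of the multi-task training idea summarized above, the snippet below combines a supervised segmentation loss with an auxiliary depth term computed from a shared encoder. The 0.1 weighting and the L1 depth loss are placeholders, not the chapter's exact self-supervised photometric formulation.

import torch
import torch.nn as nn

class SharedEncoderModel(nn.Module):
    """Toy shared encoder with a segmentation head and an auxiliary depth head."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(16, num_classes, 1)
        self.depth_head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.depth_head(f)

model = SharedEncoderModel()
images = torch.rand(2, 3, 128, 256)
seg_labels = torch.randint(0, 19, (2, 128, 256))
pseudo_depth = torch.rand(2, 1, 128, 256)       # stand-in for the self-supervised depth signal

seg_pred, depth_pred = model(images)
seg_loss = nn.CrossEntropyLoss()(seg_pred, seg_labels)
depth_loss = nn.L1Loss()(depth_pred, pseudo_depth)  # placeholder for the photometric loss
total_loss = seg_loss + 0.1 * depth_loss            # joint multi-task objective
total_loss.backward()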
10

Schwan, Constanze, and Wolfram Schenck. "Design of Interpretable Machine Learning Tasks for the Application to Industrial Order Picking." In Technologien für die intelligente Automation, 291–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-662-64283-2_21.

Abstract:
State-of-the-art methods in image-based robotic grasping use deep convolutional neural networks to determine the robot parameters that maximize the probability of a stable grasp given an image of an object. Despite the high accuracy of these models, they are not yet applied in industrial order picking tasks. One reason is that generating the training data for these models is expensive. Even though this could be solved by using a physics simulation for training data generation, another, even more important reason is that the features that lead to the prediction made by the model are not human-readable. This lack of interpretability is the crucial factor why deep networks are not found in critical industrial applications. In this study we suggest reformulating the task of robotic grasping as three tasks that are easy to assess from human experience. For each of the three steps we discuss the accuracy and interpretability. We outline how the proposed three-step model can be extended to depth images. Furthermore, we discuss how interpretable machine learning models can be chosen for the three steps in order to be applied in a real-world industrial environment.

Conference papers on the topic "REAL IMAGE PREDICTION"

1

Venkataswamy, Prashanth, M. Omair Ahmad, and M. N. S. Swamy. "Real-time Image Aesthetic Score Prediction for Portable Devices." In 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2020. http://dx.doi.org/10.1109/mwscas48704.2020.9184491.

2

Sarvan, N. "Analysis Prediction Template Toolkit (APTT) for real-time image processing." In 7th International Conference on Image Processing and its Applications. IEE, 1999. http://dx.doi.org/10.1049/cp:19990293.

3

Hussain, Akhtar, Nitin Afzulpur, Muhammad Waseem Ashraf, Shahzadi Tayyaba, and Abdul Rehman Abbasi. "Detecting unintended gesture in real-time video for mental state prediction." In 2011 International Conference on Graphic and Image Processing. SPIE, 2011. http://dx.doi.org/10.1117/12.913576.

4

Chen, Xiaokang, Yajie Xing, and Gang Zeng. "Real-Time Semantic Scene Completion Via Feature Aggregation And Conditioned Prediction." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191318.

5

"Online and Real-time Network for Video Pedestrian Intent Prediction." In 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2022. http://dx.doi.org/10.1109/dicta56598.2022.10034602.

6

Lv, Mingming, Yuanlong Hou, Rongzhong Liu, and Runmin Hou. "Fast template matching based on grey prediction for real-time object tracking." In Eighth International Conference on Graphic and Image Processing, edited by Yulin Wang, Tuan D. Pham, Vit Vozenilek, David Zhang, and Yi Xie. SPIE, 2017. http://dx.doi.org/10.1117/12.2266225.

7

Handrich, Sebastian, Laslo Dinges, Frerk Saxen, Ayoub Al-Hamadi, and Sven Wachmuth. "Simultaneous Prediction of Valence / Arousal and Emotion Categories in Real-time." In 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA). IEEE, 2019. http://dx.doi.org/10.1109/icsipa45851.2019.8977743.

8

Zhaoxia, Xu, Xing Renpeng, Lin Yong, and Shan Tiecheng. "Real-time prediction of urban traffic flow based on data mining." In 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). IEEE, 2021. http://dx.doi.org/10.1109/ipec51340.2021.9421212.

9

Rahman, A. K. M. Mahbubur, Md Iftekhar Tanveer, Asm Iftekhar Anam, and Mohammed Yeasin. "IMAPS: A smart phone based real-time framework for prediction of affect in natural dyadic conversation." In 2012 Visual Communications and Image Processing (VCIP). IEEE, 2012. http://dx.doi.org/10.1109/vcip.2012.6410828.

10

Tong, Wei, Yubing Gao, Edmond Q. Wu, and Li-Min Zhu. "Self-Supervised Depth Estimation Based on the Consistency of Synthetic-real Image Prediction." In 2023 International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2023. http://dx.doi.org/10.1109/icarm58088.2023.10218857.


Reports of organizations on the topic "REAL IMAGE PREDICTION"

1

Gur, Amit, Edward Buckler, Joseph Burger, Yaakov Tadmor, and Iftach Klapp. Characterization of genetic variation and yield heterosis in Cucumis melo. United States Department of Agriculture, January 2016. http://dx.doi.org/10.32747/2016.7600047.bard.

Abstract:
Project objectives: 1) Characterization of variation for yield heterosis in melon using a Half-Diallele (HDA) design. 2) Development and implementation of image-based yield phenotyping in melon. 3) Characterization of genetic, epigenetic and transcriptional variation across 25 founder lines and selected hybrids. The epigenetic part of this objective was modified during the course of the project: instead of characterization of chromatin structure in a single melon line through genome-wide mapping of nucleosomes using an MNase-seq approach, we took advantage of rapid advancements in single-molecule sequencing and shifted the focus to Nanopore long-read sequencing of all 25 founder lines. This analysis provides invaluable information on genome-wide structural variation across our diversity panel. 4) Integrated analyses and development of prediction models. Agricultural heterosis relates to hybrids that outperform their inbred parents for yield. First generation (F1) hybrids are produced in many crop species and it is estimated that heterosis increases yield by 15-30% globally. Melon (Cucumis melo) is an economically important species of the Cucurbitaceae family and is among the most important fleshy fruits for fresh consumption worldwide. The major goal of this project was to explore the patterns and magnitude of yield heterosis in melon and link it to whole-genome sequence variation. A core subset of 25 diverse lines was selected from the Newe-Yaar melon diversity panel for whole-genome re-sequencing (WGS) and test-crosses, to produce a structured half-diallele design of 300 F1 hybrids (MelHDA25). Yield variation was measured in replicated yield trials at the whole-plant and rootstock levels (through common-scion grafting experiments), across the F1s and parental lines. As part of this project we also developed an algorithmic pipeline for detection and yield estimation of melons from aerial images, towards future implementation of such a high-throughput, cost-effective method for remote yield evaluation in open-field melons. We found extensive, highly heritable root-derived yield variation across the diallele population that was characterized by prominent best-parent heterosis (BPH), where hybrid rootstocks outperformed their parents by 38% and 56% under optimal irrigation and drought stress, respectively. Through integration of the genotypic data (~4,000,000 SNPs) and yield analyses we show that root-derived hybrid yield is independent of parental genetic distance. However, we mapped novel root-derived yield QTLs through genome-wide association (GWA) analysis, and a multi-QTL model explained more than 45% of the hybrid yield variation, providing a potential route for marker-assisted hybrid rootstock breeding. Four selected hybrid rootstocks are being further studied under multiple scion varieties, and their validated positive effect on yield performance is now leading to ongoing evaluation of their commercial potential. On the genomic level, this project resulted in three layers of data: 1) whole-genome short-read Illumina sequencing (30X) of the 25 founder lines provided us with 25 genome alignments and a high-density melon HapMap that has already been shown to be an effective resource for QTL annotation and candidate gene analysis in melon.
2) Fast advancements in long-read single-molecule sequencing allowed us to shift focus towards this technology and generate ~50X Nanopore sequencing of the 25 founders, which, in combination with the short-read data, now enables de novo assembly of the 25 genomes and will soon lead to construction of the first melon pan-genome. 3) Transcriptomic (3' RNA-Seq) analysis of several selected hybrids and their parents provides preliminary information on differentially expressed genes that can be further used to explain the root-derived yield variation. Taken together, this project expanded our view of yield heterosis in melon with novel specific insights on root-derived yield heterosis. To our knowledge, this is thus far the largest systematic genetic analysis of rootstock effects on yield heterosis in cucurbits or any other crop plant, and our results are now being translated into potential breeding applications. The genomic resources developed as part of this project are putting melon at the forefront of genomic research and will continue to be a useful tool for the cucurbits community in years to come.