Academic literature on the topic "AI-based compression"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "AI-based compression".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "AI-based compression"

1

Sarinova, Assiya, and Alexander Zamyatin. "Methodology for Developing Algorithms for Compressing Hyperspectral Aerospace Images used on Board Spacecraft". E3S Web of Conferences 223 (2020): 02007. http://dx.doi.org/10.1051/e3sconf/202022302007.

Abstract
The paper describes a method for constructing and developing algorithms for compressing hyperspectral aerospace images (AI) for hardware implementation and subsequent use in remote sensing systems (RSS). The developed compression methods based on differential and discrete transformations are proposed as compression algorithms necessary for reducing the amount of transmitted information. The paper considers a method for developing compression algorithms, which is used to develop an adaptive algorithm for compressing hyperspectral AI using programmable devices. Studies have shown that the proposed algorithms are efficient enough for practical use and can be applied on board spacecraft when transmitting hyperspectral remote sensing data under conditions of limited buffer memory capacity and communication channel bandwidth.
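As a rough illustration of the differential preprocessing mentioned above, the sketch below applies simple band-to-band differencing to a synthetic hyperspectral cube. It is not code from the paper; the function names, data shape, and bit depth are assumptions made purely for the example.

```python
import numpy as np

def band_difference_residuals(cube: np.ndarray) -> np.ndarray:
    """Illustrative inter-band differencing for a hyperspectral cube of
    shape (bands, rows, cols): band 0 is kept as-is, every later band is
    stored as its difference from the previous band, which narrows the
    value range seen by a subsequent entropy coder."""
    residuals = cube.astype(np.int32).copy()
    residuals[1:] -= cube[:-1].astype(np.int32)
    return residuals

def reconstruct_from_residuals(residuals: np.ndarray) -> np.ndarray:
    """Lossless inverse: cumulative sum along the band axis."""
    return np.cumsum(residuals, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    steps = rng.integers(-8, 9, size=(50, 64, 64))
    cube = np.clip(2048 + np.cumsum(steps, axis=0), 0, 4095)  # correlated synthetic bands
    res = band_difference_residuals(cube)
    assert np.array_equal(reconstruct_from_residuals(res), cube)
    print("residual range (bands 1..):", res[1:].min(), res[1:].max())
```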
2

Sarinova, Assiya. "Development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations". E3S Web of Conferences 333 (2021): 01011. http://dx.doi.org/10.1051/e3sconf/202133301011.

Abstract
The paper describes the development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations for subsequent compression in Earth remote sensing systems. As compression algorithms needed to reduce the amount of transmitted information, it proposes the developed compression methods based on the Walsh-Hadamard transform and the discrete cosine transform. The paper considers a methodology for developing lossy compression algorithms with high reconstruction quality, on the basis of which an adaptive algorithm for compressing hyperspectral AI and a corresponding quantization table have been developed. The studies conducted show that the proposed lossy algorithms are efficient enough for practical use and can be applied when transmitting hyperspectral remote sensing data under conditions of limited buffer memory capacity and communication channel bandwidth.
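To make the transform step concrete, here is a minimal sketch of an orthonormal Walsh-Hadamard block transform with a toy uniform quantizer. It illustrates the general technique only, not the authors' algorithm; the 8x8 block size and the quantization step are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def wht_2d(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D Walsh-Hadamard transform of a square block whose
    side is a power of two (8x8 here)."""
    n = block.shape[0]
    h = hadamard(n) / np.sqrt(n)   # symmetric, orthonormal
    return h @ block @ h

def iwht_2d(coeffs: np.ndarray) -> np.ndarray:
    """Inverse transform; the orthonormal Hadamard matrix is its own inverse."""
    n = coeffs.shape[0]
    h = hadamard(n) / np.sqrt(n)
    return h @ coeffs @ h

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.integers(0, 4096, size=(8, 8)).astype(float)
    coeffs = wht_2d(block)
    quantized = np.round(coeffs / 32) * 32      # toy uniform quantization step
    error = np.abs(iwht_2d(quantized) - block).max()
    print(f"max reconstruction error after quantization: {error:.1f}")
```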
3

Kim, Myung-Jun, and Yung-Lyul Lee. "Object Detection-Based Video Compression". Applied Sciences 12, no. 9 (April 29, 2022): 4525. http://dx.doi.org/10.3390/app12094525.

Abstract
Video compression is designed to provide good subjective image quality, even at a high-compression ratio. In addition, video quality metrics have been used to show the results can maintain a high Peak Signal-to-Noise Ratio (PSNR), even at high compression. However, there are many difficulties in object recognition on the decoder side due to the low image quality caused by high compression. Accordingly, providing good image quality for the detected objects is necessary for the given total bitrate for utilizing object detection in a video decoder. In this paper, object detection-based video compression by the encoder and decoder is proposed that allocates lower quantization parameters to the detected-object regions and higher quantization parameters to the background. Therefore, better image quality is obtained for the detected objects on the decoder side. Object detection-based video compression consists of two types: Versatile Video Coding (VVC) and object detection. In this paper, the decoder performs the decompression process by receiving the bitstreams in the object-detection decoder and the VVC decoder. In the proposed method, the VVC encoder and decoder are processed based on the information obtained from object detection. In a random access (RA) configuration, the average Bjøntegaard Delta (BD)-rates of Y, Cb, and Cr increased by 2.33%, 2.67%, and 2.78%, respectively. In an All Intra (AI) configuration, the average BD-rates of Y, Cb, and Cr increased by 0.59%, 1.66%, and 1.42%, respectively. In an RA configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved to 0.17%, 0.23%, and 0.04%, respectively. In an AI configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved to 0.71%, 0.30%, and 0.30%, respectively. Subjective image quality was also improved in the object-detected areas.
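The core idea, spending more bits on detected objects than on the background, can be sketched as a per-block quantization-parameter (QP) map. The snippet below is only a hedged illustration of that allocation; the paper integrates the decision into a VVC encoder and decoder, whereas here the block size, QP values, and function name are assumptions.

```python
import numpy as np

def qp_map_from_detections(frame_h, frame_w, boxes,
                           base_qp=37, object_qp_offset=-6, block=64):
    """Build an illustrative per-block QP map: blocks overlapping any
    detected-object bounding box get a lower QP (finer quantization),
    background blocks keep the base QP. Boxes are (x0, y0, x1, y1) pixels."""
    rows = (frame_h + block - 1) // block
    cols = (frame_w + block - 1) // block
    qp = np.full((rows, cols), base_qp, dtype=int)
    for x0, y0, x1, y1 in boxes:
        r0, r1 = y0 // block, (y1 - 1) // block
        c0, c1 = x0 // block, (x1 - 1) // block
        qp[r0:r1 + 1, c0:c1 + 1] = base_qp + object_qp_offset
    return qp

if __name__ == "__main__":
    # one detected object covering pixels x=256..511, y=128..383 in a 720p frame
    print(qp_map_from_detections(720, 1280, [(256, 128, 512, 384)]))
```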
4

Pinheiro, Antonio. "JPEG column: 89th JPEG meeting". ACM SIGMultimedia Records 12, no. 4 (December 2020): 1. http://dx.doi.org/10.1145/3548580.3548583.

Abstract
JPEG initiates standardisation of image compression based on AI. The 89th JPEG meeting was held online from 5 to 9 October 2020. During this meeting, multiple JPEG standardisation activities and explorations were discussed and progressed. Notably, the call for evidence on learning-based image coding was successfully completed and evidence was found that this technology promises several new functionalities while offering at the same time superior compression efficiency, beyond the state of the art. A new work item, JPEG AI, that will use learning-based image coding as core technology has been proposed, enlarging the already wide families of JPEG standards.
5

Sarinova, Assiya, Pavel Dunayev, Aigul Bekbayeva, Ali Mekhtiyev, and Yermek Sarsikeyev. "Development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations". Eastern-European Journal of Enterprise Technologies 1, no. 2 (115) (February 25, 2022): 22–30. http://dx.doi.org/10.15587/1729-4061.2022.251404.

Abstract
The work describes the development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations for subsequent compression in Earth remote sensing systems. As compression algorithms needed to reduce the amount of transmitted information, it proposes the developed compression methods based on the Walsh-Hadamard transform and the discrete cosine transform. The paper considers a methodology for developing lossy compression algorithms with a reconstruction quality of 85% or more, on the basis of which an adaptive algorithm for compressing hyperspectral AI and a corresponding quantization table have been developed. The existing solutions to the lossless compression problem for hyperspectral aerospace images are analyzed. Based on them, a compression algorithm is proposed that takes into account inter-channel correlation and the Walsh-Hadamard transform, characterized by data transformation with a reduced range of initial values achieved by forming channel groups [10–15] with high intra-group correlation [0.9–1] of the corresponding pairs and selecting optimal parameters. The results obtained in the course of the research allow the optimal compression parameters to be determined: the compression ratio improved by more than 30% as the channel-group size parameter increased. This is because the more values there are to be converted, the fewer bits are required to store them. The best compression ratios [8–12] are achieved by choosing the number of channels in an ordered group with high correlation.
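The channel-grouping step described above (groups of roughly 10-15 channels with intra-group correlation of 0.9-1) can be approximated with a simple greedy rule. The sketch below is an assumed implementation for illustration, not the authors' code; the threshold and group-size limit simply mirror the figures quoted in the abstract.

```python
import numpy as np

def group_channels_by_correlation(cube, corr_threshold=0.9, max_group=15):
    """Greedy grouping of spectral channels: a channel joins the current
    group while its correlation with the previous channel stays above the
    threshold and the group has fewer than max_group members; otherwise a
    new group is started. cube has shape (bands, rows, cols)."""
    bands = cube.shape[0]
    flat = cube.reshape(bands, -1).astype(float)
    groups, current = [], [0]
    for b in range(1, bands):
        corr = np.corrcoef(flat[b - 1], flat[b])[0, 1]
        if corr >= corr_threshold and len(current) < max_group:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.normal(size=(1, 32, 32))
    cube = base + 0.05 * rng.normal(size=(60, 32, 32))  # strongly correlated bands
    print([len(g) for g in group_channels_by_correlation(cube)])
```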
6

Nagarsenker, Anish, Prasad Khandekar, and Minal Deshmukh. "JPEG2000-Based Semantic Image Compression using CNN". International Journal of Electrical and Computer Engineering Systems 14, no. 5 (June 5, 2023): 527–34. http://dx.doi.org/10.32985/ijeces.14.5.4.

Abstract
Computer vision applications such as understanding and recognition, as well as image processing, are areas where AI techniques like convolutional neural networks (CNNs) have attained great success. AI techniques are less frequently used in applications like image compression, which belong to low-level vision. Improving the visual quality of lossy video/image compression has been a longstanding challenge. Image processing and image recognition tasks can be addressed with deep CNNs thanks to the availability of large training datasets and recent advances in computing power. This paper presents a novel CNN-based compression framework comprising a Compact CNN (ComCNN) and a Reconstruction CNN (RecCNN), which are trained concurrently and ideally consolidated into a single compression framework, along with MS-ROI (Multi Structure-Region of Interest) mapping, which highlights the semantically salient portions of the image. The framework attains a mean PSNR of 32.9 dB (a gain of 3.52 dB) and a mean SSIM of 0.9262 (a gain of 0.0723) over the other methods when compared on the 6 main test images. Experimental results validate that the architecture substantially surpasses image compression frameworks that use deblocking or denoising post-processing, evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), with a mean PSNR, SSIM, and compression ratio of 38.45, 0.9602, and 1.75x respectively for the 50 test images, thus obtaining state-of-the-art performance for Quality Factor (QF) = 5.
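Because the reported gains are expressed in PSNR and SSIM, a small evaluation helper makes the metric concrete. The function below is a standard PSNR computation rather than code from the paper; for SSIM, scikit-image's structural_similarity is a commonly used reference implementation.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    original = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    noisy = np.clip(original + rng.normal(0, 5, original.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(original, noisy):.2f} dB")
```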
7

Petraikin, A. V., Zh E. Belaya, A. N. Kiseleva, Z. R. Artyukova, M. G. Belyaev, V. A. Kondratenko, M. E. Pisov et al. "Artificial intelligence for diagnosis of vertebral compression fractures using a morphometric analysis model, based on convolutional neural networks". Problems of Endocrinology 66, no. 5 (December 25, 2020): 48–60. http://dx.doi.org/10.14341/probl12605.

Abstract
BACKGROUND: Pathological low-energy (LE) vertebral compression fractures (VFs) are common complications of osteoporosis and predictors of subsequent LE fractures. In 84% of cases, VFs are not reported on chest CT (CCT), which calls for the development of an artificial intelligence (AI)-based assistant that would help radiology specialists improve the diagnosis of osteoporosis complications and prevent new LE fractures. AIMS: To develop an AI model for automated diagnosis of compression fractures of the thoracic spine based on chest CT images. MATERIALS AND METHODS: Between September 2019 and May 2020 the authors performed a retrospective sampling study of CCT images; 160 studies were selected and anonymized. The data were labeled by seven readers. Using morphometric analysis, the investigators obtained the ventral, medial, and dorsal dimensions of the vertebral bodies, followed by a semiquantitative assessment of VF grade. These data were used to develop the CNN-based Comprise-G AI model, which measures the vertebral body dimensions and then calculates the degree of compression. The model was evaluated with ROC curve analysis and by calculating sensitivity and specificity. RESULTS: The resulting dataset comprised 160 patients (training group: 100 patients; test group: 60 patients), with a total of 2,066 annotated vertebrae. When detecting Grade 2 and 3 maximum VFs per patient, the Comprise-G model demonstrated a sensitivity of 90.7%, specificity of 90.7%, and ROC AUC of 0.974 on 5-fold cross-validation of the training dataset, and a sensitivity of 83.2%, specificity of 90.0%, and ROC AUC of 0.956 on the test data; per vertebra, it demonstrated a sensitivity of 91.5%, specificity of 95.2%, and ROC AUC of 0.981 on the cross-validation data, and a sensitivity of 79.3%, specificity of 98.7%, and ROC AUC of 0.978 on the test data. CONCLUSIONS: The Comprise-G model demonstrated high diagnostic capability in detecting VFs on CCT images and can be recommended for further validation.
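For readers unfamiliar with the morphometric step, a semiquantitative grade can be derived from the three measured vertebral-body heights using Genant-style thresholds. The snippet below is a generic illustration of that calculation (thresholds of 20/25/40% height loss are assumed), not the Comprise-G model itself.

```python
def compression_grade(ventral, medial, dorsal):
    """Genant-style semiquantitative grade from the three measured heights:
    height loss is taken relative to the largest of the three dimensions."""
    heights = (ventral, medial, dorsal)
    loss = 1.0 - min(heights) / max(heights)
    if loss < 0.20:
        grade = 0   # normal
    elif loss < 0.25:
        grade = 1   # mild deformity
    elif loss < 0.40:
        grade = 2   # moderate deformity
    else:
        grade = 3   # severe deformity
    return loss, grade

if __name__ == "__main__":
    loss, grade = compression_grade(18.0, 21.0, 24.0)  # heights in mm
    print(f"height loss {loss:.0%}, grade {grade}")    # -> height loss 25%, grade 2
```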
8

Jane, Robert, Tae Young Kim, Samantha Rose, Emily Glass, Emilee Mossman, and Corey James. "Developing AI/ML Based Predictive Capabilities for a Compression Ignition Engine Using Pseudo Dynamometer Data". Energies 15, no. 21 (October 28, 2022): 8035. http://dx.doi.org/10.3390/en15218035.

Abstract
Energy and power demands for military operations continue to rise as autonomous air, land, and sea platforms are developed and deployed with increasingly energetic weapon systems. The primary limiting capability hindering full integration of such systems is the need to effectively and efficiently manage, generate, and transmit energy across the battlefield. Energy efficiency is primarily dictated by the number of dissimilar energy conversion processes in the system. After combustion, a Compression Ignition (CI) engine must periodically continue to inject fuel to produce mechanical energy, simultaneously generating thermal, acoustic, and fluid energy (in the form of unburnt hydrocarbons, engine coolant, and engine oil). In this paper, we present multiple sets of Shallow Artificial Neural Networks (SANNs), Convolutional Neural Network (CNNs), and K-th Nearest Neighbor (KNN) classifiers, capable of approximating the in-cylinder conditions and informing future optimization and control efforts. The neural networks provide outstanding predictive capabilities of the variables of interest and improve understanding of the energy and power management of a CI engine, leading to improved awareness, efficiency, and resilience at the device and system level.
9

Ma, Xiaoqian. "Analysis on the Application of Multimedia-Assisted Music Teaching Based on AI Technology". Advances in Multimedia 2021 (December 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/5728595.

Abstract
In order to improve the effect of modern music teaching, this paper uses AI technology to construct a multimedia-assisted music teaching system, improves the algorithm according to the data processing requirements of music teaching, proposes appropriate music data filtering algorithms, and performs appropriate data compression. Moreover, the functional structure of the intelligent music teaching system is analyzed with the support of the improved algorithm, and the widely used three-tier framework is adopted for the music multimedia teaching system. Finally, in order to realize the complex functions of the system, a layered approach is adopted. The experimental results show that the multimedia-assisted music teaching system based on AI technology proposed in this paper can effectively improve the effect of modern music teaching.
10

Bai, Ye, Fei Bo, Wencan Ma, Hongwei Xu, and Dawei Liu. "Effect of Interventional Therapy on Iliac Venous Compression Syndrome Evaluated and Diagnosed by Artificial Intelligence Algorithm-Based Ultrasound Images". Journal of Healthcare Engineering 2021 (July 22, 2021): 1–8. http://dx.doi.org/10.1155/2021/5755671.

Abstract
In order to explore the efficacy of using artificial intelligence (AI) algorithm-based ultrasound images to diagnose iliac vein compression syndrome (IVCS) and to assist clinicians in the diagnosis of the disease, the characteristics of vein imaging in patients with IVCS were summarized. After ultrasound image acquisition, the image data were preprocessed to construct a deep learning model that detects the position of venous compression and recognizes benign and malignant lesions. In addition, a dataset was built for model evaluation; the data came from hospital patients with thrombotic chronic venous disease (CVD) and deep vein thrombosis (DVT). The IVCS image features extracted by dilated convolution formed the AI algorithm imaging group, and unprocessed ultrasound images were taken directly as the control group. Digital subtraction angiography (DSA) was performed to examine the patients' veins one week in advance. The patients were then assigned to the AI algorithm imaging group or the control group, and the correlation between May-Thurner syndrome (MTS) and AI algorithm imaging was analyzed based on the DSA and ultrasound results. Venous stenosis (or occlusion) or the formation of collateral circulation was used as the diagnostic index for MTS. Ultrasound showed that the AI algorithm imaging group had a higher percentage of good treatment outcomes than the control group. The recall, precision, and accuracy of the DMRF-convolutional neural network (CNN) were all superior to those of the control group. In addition, the degree of venous swelling in the AI algorithm imaging group was lower, the degree of pain relief after treatment was higher, and the difference between the two groups was statistically significant (p < 0.005). The grouped experiments showed that the constructed AI imaging model was effective for the detection and recognition of lower-extremity vein lesions in ultrasound images. In summary, ultrasound image evaluation and analysis using the AI algorithm during MTS treatment was accurate and efficient, laying a good foundation for future research, diagnosis, and treatment.

Theses on the topic "AI-based compression"

1

Berthet, Alexandre. "Deep learning methods and advancements in digital image forensics". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS252.

Abstract
The volume of digital visual data is increasing dramatically year after year. At the same time, image editing has become easier and more precise, making malicious modifications more accessible. Image forensics provides solutions to ensure the authenticity of digital visual data; recognition of the source camera and detection of falsified images are among its main tasks. At first, the solutions were classical methods based on the artifacts produced during the creation of a digital image. Then, as in other areas of image processing, the methods moved to deep learning. First, we present a state-of-the-art survey of deep learning methods for image forensics. Our survey highlights the need to apply pre-processing modules to extract artifacts hidden by image content, and points out problems with image recognition evaluation protocols. Furthermore, we address counter-forensics and present compression based on artificial intelligence, which could be considered an attack. In a second step, this thesis details three progressive evaluation protocols that address camera recognition problems. The final protocol, more reliable and reproducible, highlights the inability of state-of-the-art methods to recognize cameras in a challenging context. In a third step, we study the impact of AI-based compression on two tasks that analyze compression artifacts: tamper detection and social network recognition. The results show, on the one hand, that this compression must be taken into account as an attack and, on the other, that it causes a larger performance drop than other manipulations for an equivalent level of image degradation.
2

Desai, Ujjaval Y., Marcelo M. Mizuki, Ichiro Masaki, and Berthold K. P. Horn. "Edge and Mean Based Image Compression". 1996. http://hdl.handle.net/1721.1/5943.

Abstract
In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured, 256 × 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low power, portable video telephones.
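To make the edge-and-mean idea tangible, the toy decomposition below extracts a binary edge map from gradient magnitudes together with per-block means, the two kinds of information such a codec would encode. It is a sketch under assumed parameters, not the thesis' algorithm, and it omits the entropy coding and reconstruction stages.

```python
import numpy as np

def edge_and_mean_features(image: np.ndarray, block: int = 8,
                           edge_threshold: float = 30.0):
    """Toy edge/mean decomposition: a binary edge map from simple gradient
    magnitudes plus the mean of each block x block tile of the image."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > edge_threshold
    h, w = img.shape
    means = img[: h - h % block, : w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return edges, means

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    image = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    edges, means = edge_and_mean_features(image)
    print(edges.shape, means.shape)  # (256, 256) (32, 32)
```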

Book chapters on the topic "AI-based compression"

1

Li, Ge, Wei Gao, and Wen Gao. "MPEG AI-Based 3D Graphics Coding Standard". In Point Cloud Compression, 219–41. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1957-0_10.

2

Falk, Eric. "AI to Solve the Data Deluge: AI-Based Data Compression". In Future of Business and Finance, 271–85. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41309-5_18.

3

Becking, Daniel, Maximilian Dreyer, Wojciech Samek, Karsten Müller, and Sebastian Lapuschkin. "ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs". In xxAI - Beyond Explainable AI, 271–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_14.

Abstract
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads, while preserving model performance as much as possible, and include parameter reduction techniques, parameter quantization, and lossless compression techniques. In this chapter, we develop and describe a novel quantization paradigm for DNNs: Our method leverages concepts of explainable AI (XAI) and concepts of information theory: Instead of assigning weight values based on their distances to the quantization clusters, the assignment function additionally considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). The ultimate goal is to preserve the most relevant weights in quantization clusters of highest information content. Experimental results show that this novel Entropy-Constrained and XAI-adjusted Quantization (ECQˣ) method generates ultra low-precision (2–5 bit) and simultaneously sparse neural networks while maintaining or even improving model performance. Due to reduced parameter precision and high number of zero-elements, the rendered networks are highly compressible in terms of file size, up to 103× compared to the full-precision unquantized DNN model. Our approach was evaluated on different types of models and datasets (including Google Speech Commands, CIFAR-10 and Pascal VOC) and compared with previous work.
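A minimal way to see what an entropy-constrained, relevance-aware assignment can look like is the sketch below. It is not the exact ECQˣ objective, only an assumed cost of similar shape in which the squared distance to each cluster center is weighted by an LRP-style relevance and penalized by the cluster's code length.

```python
import numpy as np

def relevance_aware_assignment(weights, relevances, centers, cluster_probs, lam=0.1):
    """Assign each weight to the cluster minimizing
    relevance * squared distance + lam * (-log2 cluster probability).
    Highly relevant weights thus resist being absorbed into the dominant
    (typically zero-centered) cluster. Purely illustrative."""
    dist = (weights[:, None] - centers[None, :]) ** 2          # (N, K)
    rate = -np.log2(cluster_probs)[None, :]                    # (1, K)
    cost = relevances[:, None] * dist + lam * rate
    return np.argmin(cost, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    w = rng.normal(scale=0.2, size=1000)
    rel = np.abs(rng.normal(size=1000))            # stand-in for LRP relevances
    centers = np.array([-0.3, 0.0, 0.3])
    probs = np.array([0.2, 0.6, 0.2])              # current cluster occupancy
    assignment = relevance_aware_assignment(w, rel, centers, probs)
    print(np.bincount(assignment, minlength=3))
```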
4

Oni-Orisan, Oluwatomiwa. "Viability of Knowledge Management Practices for a Successful Digital Transformation in Small- and Medium-Sized Enterprises". In Informatik aktuell, 129–39. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-43705-3_10.

Abstract
Digital innovations and technologies, particularly artificial intelligence, offer a unique opportunity to fundamentally transform business processes. Small and medium-sized enterprises (SMEs), having a significant impact on the German economy, are encouraged to fully embrace this opportunity for digital transformation. Inspired by the usage of knowledge management as a mediation mechanism for effective AI application in [5], this case study examines its practical implications. Wiewald, an SME specializing in compressor systems, serves as an application partner in the KMI project, exploring the implementation of AI in SMEs. By comparing academic literature on Industry 4.0’s impact on knowledge management with industry experts’ perspectives, we develop an appropriate digitalization strategy based on knowledge management. Its potential implementation is discussed using Wiewald as a practical example.
5

Sandhya, Mandha, and G. Mallikarjuna Rao. "Prediction of Compressive Strength of Fly Ash-Based Geopolymer Concrete Using AI Approach". In Lecture Notes in Civil Engineering, 9–20. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8496-8_2.

6

Dayal, Sankalp. "AI on Edge: A Mass Accessible Tool for Decision Support Systems". In Decision Support Systems (DSS) and Tools [Working Title]. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1003945.

Abstract
Artificial Intelligence (AI) advancements in the last decade have been explosive, and AI is now considered capable of potentially surpassing human intelligence on many decision tasks such as detecting objects, answering specific questions, or decoding sound to speech. This also means a human can offload raw information processing to an AI-enabled machine and use its outcome in their Decision Support System (DSS). To make such an AI-based DSS mass accessible, it has to be pervasive, cheap or free, and available locally where the decision is being made, such as in a home or an industrial site. This requires the Machine Learning (ML) models powering the AI to be deployed on edge hardware like phones, security cameras, and automobiles. Deployment on the edge requires compressing these ML models and, in some cases, tailoring them to the hardware. This chapter explains how AI is becoming an important tool for DSS and then discusses the state-of-the-art (SOTA) model compression techniques used for edge deployment while ensuring no loss in performance.
7

Oommen, B. John, and Luis Rueda. "Stochastic Learning-based Weak Estimation and Its Applications". In Knowledge-Based Intelligent System Advancements, 1–29. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61692-811-7.ch001.

Abstract
Although the field of Artificial Intelligence (AI) has been researched for more than five decades, researchers, scientists and practitioners are constantly seeking superior methods that are applicable to increasingly difficult problems. In this chapter, our aim is to consider knowledge-based novel and intelligent cybernetic approaches for problems in which the environment (or medium) is time-varying. While problems of this sort can be approached from various perspectives, including those that appropriately model the time-varying nature of the environment, in this chapter, we shall concentrate on using new estimation or “training” methods. The new methods that we propose are based on the principles of stochastic learning, and are referred to as the Stochastic Learning Weak Estimators (SLWE). An empirical analysis on synthetic data shows the advantages of the scheme for non-stationary distributions, which is where we claim to advance the state-of-the-art. We also examine how these new methods can be applicable to learning and intelligent systems, and to Pattern Recognition (PR). The chapter briefly reports conclusive results that demonstrate the superiority of the SLWE in two applications, namely in PR and data compression. The application in PR involves artificial data and real-life news reports from the Canadian Broadcasting Corporation (CBC). We also demonstrate its applicability in data compression, where the underlying distribution of the files being compressed is, again, modeled as being non-stationary. The superiority of the SLWE in both these cases is demonstrated.
8

Chakraborty, Sanjay, and Lopamudra Dey. "Image Representation, Filtering, and Natural Computing in a Multivalued Quantum System". In Handbook of Research on Natural Computing for Optimization Problems, 689–717. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0058-2.ch028.

Abstract
Image processing on quantum platforms is a hot topic for researchers nowadays. Inspired by the ideas of quantum physics, researchers are trying to shift their focus from classical image processing towards quantum image processing. Storing and representing images in binary and ternary quantum systems is one of the major issues in quantum image processing. This chapter mainly deals with several issues regarding various types of image representation and storage techniques in binary as well as ternary quantum systems. How image pixels can be organized and retrieved based on their positions and intensity values in 2-state and 3-state quantum systems is explained here in detail. Besides that, it also deals with the filtering of images in a quantum system to remove unwanted noise. The chapter also covers important applications (such as quantum image compression, quantum edge detection, and quantum histograms) where quantum image processing is combined with natural computing techniques (like AI, ANN, ACO, etc.).
9

Fratrič, Peter, Giovanni Sileno, Tom van Engers, and Sander Klous. "A Compression and Simulation-Based Approach to Fraud Discovery". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220463.

Abstract
With the uptake of digital services in public and private sectors, the formalization of laws is attracting increasing attention. Yet, non-compliant fraudulent behaviours (money laundering, tax evasion, etc.)—practical realizations of violations of law—remain very difficult to formalize, as one does not know the exact formal rules that define such violations. The present work introduces a methodological framework aiming to discover non-compliance through compressed representations of behaviour, considering a fraudulent agent that explores via simulation the space of possible non-compliant behaviours in a given social domain. The framework is founded on a combination of utility maximization and active learning. We illustrate its application on a simple social domain. The results are promising, and seemingly reduce the gap on fundamental questions in AI and Law, although this comes at the cost of developing complex models of the simulation environment, and sophisticated reasoning models of the fraudulent agent.

Conference papers on the topic "AI-based compression"

1

Bergmann, Sandra, Denise Moussa, Fabian Brand, André Kaup, and Christian Riess. "Frequency-Domain Analysis of Traces for the Detection of AI-based Compression". In 2023 11th International Workshop on Biometrics and Forensics (IWBF). IEEE, 2023. http://dx.doi.org/10.1109/iwbf57495.2023.10157489.

2

Stroot, Markus, Stefan Seiler, Philipp Lutat, and Andreas Ulbig. "Comparative Analysis of Modern, AI-based Data Compression on Power Quality Disturbance Data". In 2023 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). IEEE, 2023. http://dx.doi.org/10.1109/smartgridcomm57358.2023.10333901.

3

Berthet, Alexandre, Chiara Galdi, and Jean-Luc Dugelay. "On the Impact of AI-Based Compression on Deep Learning-Based Source Social Network Identification". In 2023 IEEE 25th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2023. http://dx.doi.org/10.1109/mmsp59012.2023.10337726.

4

Berthet, Alexandre, and Jean-Luc Dugelay. "AI-Based Compression: A New Unintended Counter Attack on JPEG-Related Image Forensic Detectors?" In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897697.

5

Jane, Robert, Corey James, Samantha Rose, and Tae Kim. "Developing Artificial Intelligence (AI) and Machine Learning (ML) Based Soft Sensors for In-Cylinder Predictions with a Real-Time Simulator and a Crank Angle Resolved Engine Model". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0102.

Abstract
Currently, there are no safe and suitable fuel sources with comparable power density to traditional combustible fuels capable of replacing Internal Combustion Engines (ICEs). For the foreseeable future, civilian and military systems are likely to be reliant on traditional combustible fuels. Hybridization of the vehicle powertrains is the most likely avenue which can reduce emissions, minimize system inefficiencies, and build more sustainable vehicle systems that support the United States Army modernization priorities. Vehicle systems may further be improved by the creation and implementation of artificial intelligence and machine learning (AI/ML) in the form of advanced predictive capabilities and more robust control policies. AI/ML requires numerous characterized and complete datasets; given the sensitive nature of military systems, such data is unlikely to be known or accessible, limiting the ability to develop and deploy AI/ML to military systems. In the absence of data, AI/ML may still be developed and deployed to military systems if supported by near-real-time or real-time computationally efficient and effective hardware and software or cloud-based computing. In this research, an OPAL real-time (OPAL-RT) simulator was used to emulate a compression ignition (CI) engine simulation architecture capable of developing and deploying advanced AI/ML predictive algorithms. The simulation architecture could be used for developing online predictive capabilities required to maximize the effectiveness or efficiency of a vehicle. The architecture includes a real-time simulator (RTS), a host PC, and a secondary PC. The RTS simulates a crank angle resolved engine model which utilizes pseudo engine dynamometer data in the form of multi-dimensional matrices to emulate quasi-steady state conditions of the engine. The host PC was used to monitor and control the engine, while the secondary PC was used to train the AI/ML to predict the per-cylinder generated torque from the crank shaft torque, which was then used to predict the in-cylinder temperature and pressure. The results indicate that, using minimal sensor data and pretrained predictive algorithms, in-cylinder characterizations for unobserved engine variables may be achievable, providing an approximate characterization of quasi-steady state in-cylinder conditions.
6

Misra, Siddharth, Jungang Chen, Polina Churilova, and Yusuf Falola. "Generative Artificial Intelligence for Geomodeling". In International Petroleum Technology Conference. IPTC, 2024. http://dx.doi.org/10.2523/iptc-23477-ms.

Abstract
Subsurface earth models, also known as geomodels, are essential for characterizing and developing complex subsurface systems. Traditional geomodel generation methods, such as multiple-point statistics, can be time-consuming and computationally expensive. Generative Artificial Intelligence (AI) offers a promising alternative, with the potential to generate high-quality geomodels more quickly and efficiently. This paper proposes a deep-learning-based generative AI for geomodeling that comprises two deep learning models: a hierarchical vector-quantized variational autoencoder (VQ-VAE-2) and a PixelSNAIL autoregressive model. The VQ-VAE-2 learns to massively compress geomodels into a low-dimensional, discrete latent representation. The PixelSNAIL then learns the prior distribution of the latent codes. To generate a geomodel, the PixelSNAIL samples from the prior distribution of latent codes and the decoder of the VQ-VAE-2 converts the sampled latent code to a newly constructed geomodel. The PixelSNAIL can be used for unconditional or conditional geomodel generation. In unconditional generation, the generative workflow generates an ensemble of geomodels without any constraint. In conditional geomodel generation, the generative workflow generates an ensemble of geomodels similar to a user-defined source geomodel. This facilitates the control and manipulation of the generated geomodels. To improve the generation of fluvial channels in the geomodels, we use perceptual loss instead of the traditional mean absolute error loss in the VQ-VAE-2 model. At a specific compression ratio, the proposed Generative AI method generates multi-attribute geomodels of higher quality than single-attribute geomodels.
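The discrete bottleneck that makes this two-stage workflow possible is the vector-quantization step of the VQ-VAE: continuous latent vectors are snapped to their nearest codebook entries, and the resulting integer codes are what the autoregressive prior (PixelSNAIL in the paper) learns to model. The snippet below sketches only that nearest-codebook lookup with made-up dimensions; it is not the authors' implementation.

```python
import numpy as np

def vector_quantize(latents: np.ndarray, codebook: np.ndarray):
    """Replace each continuous latent vector (rows of latents, shape (N, D))
    with the index of its nearest codebook vector (codebook shape (K, D))
    and return both the integer codes and the quantized vectors."""
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    codes = d2.argmin(axis=1)
    return codes, codebook[codes]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    latents = rng.normal(size=(6, 8))      # 6 latent vectors of dimension 8
    codebook = rng.normal(size=(16, 8))    # 16-entry codebook
    codes, quantized = vector_quantize(latents, codebook)
    print("discrete codes:", codes)
```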
7

Ma, Jialin, Hang Hu, and Yuetong Zhang. "Reconfigurable pulse-compression signal generation based on DP-QPSK modulator". In AI in Optics and Photonics, edited by Juejun Hu and Jianji Dong. SPIE, 2023. http://dx.doi.org/10.1117/12.3007024.

8

Kang, Haoyan, Jiachi Ye, Behrouz Movahhed Nouri, Belal Jahannia, Salem Altaleb, Hao Wang, Elham Heidari, Volker Sorger, and Hamed Dalir. "Reconfigurable complex convolution module based optical data compression and hashing algorithm". In AI and Optical Data Sciences V, edited by Volker J. Sorger and Ken-ichi Kitayama. SPIE, 2024. http://dx.doi.org/10.1117/12.3003411.

9

Thambi, Joel Luther, Subhransu Sekhar Mohapatra, Vinod Jose Kavalakkat, Subhransu S. Mohapatra, Ullas U, and Saibal Kanchan Barik. "A Combined Data Science and Simulation-Based Methodology for Efficient and Economic Prediction of Thermoplastic Performance for Automotive Industry". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0936.

Abstract
There is significant use of predictive tools by design engineers in the automotive industry to capture material composition and manufacturing process-induced variables. In particular, accurate modeling of material behavior to predict the mechanical performance of a thermoplastic part is an evolving subject in this field, as one needs to consider multiple factors and steps to achieve the right prediction accuracy. The variability in prediction comes from different factors such as polymer type (filled vs. unfilled, amorphous vs. semi-crystalline, etc.), design and manufacturing features (weld lines, gate locations, thickness, notches, etc.), operating conditions (temperature, moisture, etc.), and finally load states (tension, compression, flexural, impact, etc.). Using traditional numerical simulation-based modeling to study and validate all these factors requires significant computational time and effort. An alternative method using data science and AI-ML models is proposed to reduce the overall validation time needed for simulation. To validate this methodology, extensive part-level experiments were done on a representative cylindrical geometry to accommodate all these factors using different ULTEM™ Resin materials (PEI). The results show that by using a neural network ML model it is possible to accurately predict structural responses such as maximum displacement and force. The ML model results were compared to the CAE-based approaches, and the results overlapped well within the 95% scatter band. By combining CAE modeling and ML modeling, it is possible to predict the critical structural response of applications more efficiently and economically.
10

Akita, Eiji, Shin Gomi, Scott Cloyd, Michael Nakhamkin, and Madhukar Chiruvolu. "The Air Injection Power Augmentation Technology Provides Additional Significant Operational Benefits". In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-28336.

Abstract
The Air Injection (AI) Power Augmentation technology (HAI for humid air injection and DAI for dry air injection) has the primary benefits of increasing the power of combustion turbine/combined cycle (CT/CC) power plants by 15–30% at a fraction of the cost of a new plant, with coincidental significant heat rate reductions (10–15%) and NOx emissions reductions (up to 60% for diffusion-type combustors) (see References 1, 2, 3). Figure 1A is a simplified heat and mass balance for the PG7241 (FA) combustion turbine with HAI. The auxiliary compressor supplies the additional airflow that is mixed with the steam produced by the HRSG and injected upstream of the combustors. Figure 1B presents the heat and mass balance for the PG7142 CT based combined cycle power plant with HAI. It is similar to that presented in Figure 1A except that the humid air is created by mixing steam, extracted from the steam turbine, with the supplementary airflow from the auxiliary compressor. The maximum acceptable injection rates are evaluated with proper margins against a number of factors established by OEMs: compressor surge limitations, maximum torque, generator capacity, maximum moisture levels upstream of the combustors, etc.