Journal articles on the topic "AI-based compression"

Follow this link to see other types of publications on the topic: AI-based compression.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "AI-based compression".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read the abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of academic disciplines and compile your bibliography correctly.

1

Sarinova, Assiya, and Alexander Zamyatin. "Methodology for Developing Algorithms for Compressing Hyperspectral Aerospace Images used on Board Spacecraft". E3S Web of Conferences 223 (2020): 02007. http://dx.doi.org/10.1051/e3sconf/202022302007.

Abstract:
The paper describes a method for constructing and developing algorithms for compressing hyperspectral aerospace images (AI) for hardware implementation and subsequent use in remote sensing systems (RSS). The developed compression methods, based on differential and discrete transformations, are proposed as compression algorithms for reducing the amount of transmitted information. The paper considers a method for developing compression algorithms that is used to build an adaptive algorithm for compressing hyperspectral AI using programmable devices. Studies have shown that the proposed algorithms are efficient enough for practical use and can be applied on board spacecraft when transmitting hyperspectral remote sensing data under limited buffer memory capacity and communication channel bandwidth.
2

Sarinova, Assiya. "Development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations". E3S Web of Conferences 333 (2021): 01011. http://dx.doi.org/10.1051/e3sconf/202133301011.

Abstract:
The paper describes the development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations for the purpose of subsequent compression in Earth remote sensing systems. As compression algorithms needed to reduce the amount of transmitted information, it is proposed to use the developed compression methods based on Walsh-Hadamard transformations and the discrete cosine transform. The paper considers a methodology for developing lossy compression algorithms with high reconstruction quality, on the basis of which an adaptive algorithm for compressing hyperspectral AI and the corresponding quantization table have been developed. The conducted studies have shown that the proposed lossy algorithms are efficient enough for practical use and can be applied when transmitting hyperspectral remote sensing data under limited buffer memory capacity and communication channel bandwidth.
3

Kim, Myung-Jun, and Yung-Lyul Lee. "Object Detection-Based Video Compression". Applied Sciences 12, no. 9 (April 29, 2022): 4525. http://dx.doi.org/10.3390/app12094525.

Abstract:
Video compression is designed to provide good subjective image quality, even at a high compression ratio. In addition, video quality metrics have been used to show that the results can maintain a high Peak Signal-to-Noise Ratio (PSNR), even at high compression. However, object recognition on the decoder side is difficult because of the low image quality caused by high compression. Accordingly, to utilize object detection in a video decoder, good image quality must be provided for the detected objects within the given total bitrate. This paper proposes object detection-based video compression in which the encoder and decoder allocate lower quantization parameters to the detected-object regions and higher quantization parameters to the background, so that better image quality is obtained for the detected objects on the decoder side. Object detection-based video compression combines two components: Versatile Video Coding (VVC) and object detection. In this paper, the decoder performs the decompression process by receiving the bitstreams in the object-detection decoder and the VVC decoder. In the proposed method, the VVC encoder and decoder are operated based on the information obtained from object detection. In a random access (RA) configuration, the average Bjøntegaard Delta (BD)-rates of Y, Cb, and Cr increased by 2.33%, 2.67%, and 2.78%, respectively. In an All Intra (AI) configuration, the average BD-rates of Y, Cb, and Cr increased by 0.59%, 1.66%, and 1.42%, respectively. In an RA configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved by 0.17%, 0.23%, and 0.04%, respectively. In an AI configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved by 0.71%, 0.30%, and 0.30%, respectively. Subjective image quality was also improved in the object-detected areas.
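To make the bit-allocation idea above concrete, here is a minimal sketch (not the authors' implementation; the box format, QP values, and CTU size are illustrative assumptions) of how detected-object regions can receive a lower, finer quantization parameter than the background:

    import numpy as np

    def qp_map(frame_h, frame_w, boxes, base_qp=37, obj_offset=-6, ctu=128):
        """Per-CTU QP grid: base_qp for background, base_qp + obj_offset for
        any CTU overlapped by a detected-object box (x0, y0, x1, y1)."""
        rows, cols = -(-frame_h // ctu), -(-frame_w // ctu)  # ceiling division
        qps = np.full((rows, cols), base_qp, dtype=int)
        for x0, y0, x1, y1 in boxes:
            r0, r1 = y0 // ctu, min(rows - 1, (y1 - 1) // ctu)
            c0, c1 = x0 // ctu, min(cols - 1, (x1 - 1) // ctu)
            qps[r0:r1 + 1, c0:c1 + 1] = base_qp + obj_offset  # finer QP on objects
        return qps

    # One detected object in a 1080p frame: its CTUs get QP 31, the rest QP 37.
    print(qp_map(1080, 1920, [(640, 300, 960, 700)]))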
4

Pinheiro, Antonio. "JPEG column: 89th JPEG meeting". ACM SIGMultimedia Records 12, no. 4 (December 2020): 1. http://dx.doi.org/10.1145/3548580.3548583.

Abstract:
JPEG initiates standardisation of image compression based on AI. The 89th JPEG meeting was held online from 5 to 9 October 2020. During this meeting, multiple JPEG standardisation activities and explorations were discussed and progressed. Notably, the call for evidence on learning-based image coding was successfully completed and evidence was found that this technology promises several new functionalities while offering at the same time superior compression efficiency, beyond the state of the art. A new work item, JPEG AI, that will use learning-based image coding as core technology has been proposed, enlarging the already wide families of JPEG standards.
5

Sarinova, Assiya, Pavel Dunayev, Aigul Bekbayeva, Ali Mekhtiyev, and Yermek Sarsikeyev. "Development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations". Eastern-European Journal of Enterprise Technologies 1, no. 2(115) (February 25, 2022): 22–30. http://dx.doi.org/10.15587/1729-4061.2022.251404.

Abstract:
The work describes the development of compression algorithms for hyperspectral aerospace images based on discrete orthogonal transformations for the purpose of subsequent compression in Earth remote sensing systems. As compression algorithms needed to reduce the amount of transmitted information, it is proposed to use the developed compression methods based on Walsh-Hadamard transformations and the discrete cosine transform. The paper considers a methodology for developing lossy compression algorithms with a reconstruction quality of 85% or more, on the basis of which an adaptive algorithm for compressing hyperspectral AI and the corresponding quantization table have been developed. Existing solutions to the lossless compression problem for hyperspectral aerospace images are analyzed. Based on them, a compression algorithm is proposed that takes into account inter-channel correlation and the Walsh-Hadamard transformation, characterized by a data transformation that reduces the range of the initial values by forming channel groups [10–15] with high intra-group correlation [0.9–1] of the corresponding pairs, with the selection of optimal parameters. The results obtained in the course of the research make it possible to determine the optimal compression parameters: compression ratio indicators improved by more than 30% as the channel-group size parameter increased. This is because the more values are to be converted, the fewer bits are required to store them. The best compression ratios [8–12] are achieved by choosing the number of channels in an ordered group with high correlation.
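Two ingredients of the pipeline described above, grouping highly correlated spectral channels and decorrelating each group with a Walsh-Hadamard transform, can be sketched as follows. This is an illustration under assumed parameters (a 0.9 correlation floor and groups of up to 16 bands), not the authors' tuned algorithm:

    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform of a 1-D array whose length is a power of two."""
        a = a.astype(np.float64).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), h * 2):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    def group_channels(cube, min_corr=0.9, max_group=16):
        """Greedily group adjacent bands (cube: bands x pixels) while the
        correlation with the group's first band stays above min_corr."""
        groups, start = [], 0
        for b in range(1, cube.shape[0] + 1):
            end = b == cube.shape[0]
            if end or np.corrcoef(cube[start], cube[b])[0, 1] < min_corr \
                   or b - start >= max_group:
                groups.append(list(range(start, b)))
                start = b
        return groups

Each group would then be transformed band by band with fwht and the reduced-range coefficients passed to the entropy coder.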
6

Nagarsenker, Anish, Prasad Khandekar, and Minal Deshmukh. "JPEG2000-Based Semantic Image Compression using CNN". International journal of electrical and computer engineering systems 14, no. 5 (June 5, 2023): 527–34. http://dx.doi.org/10.32985/ijeces.14.5.4.

Abstract:
Some computer vision applications, such as image understanding, recognition, and processing, are areas where AI techniques like the convolutional neural network (CNN) have attained great success. AI techniques are used far less frequently in applications like image compression, which belong to low-level vision. Improving the visual quality of lossy video/image compression has been a major obstacle for a very long time. Image processing and image recognition tasks can be addressed with deep learning CNNs as a result of the availability of large training datasets and recent advances in computing power. This paper presents a novel CNN-based compression framework comprising a Compact CNN (ComCNN) and a Reconstruction CNN (RecCNN), which are trained concurrently and ideally consolidated into a compression framework, along with MS-ROI (Multi Structure-Region of Interest) mapping, which highlights the semantically notable portions of the image. The framework attains a mean PSNR of 32.9 dB, a gain of 3.52 dB, and a mean SSIM of 0.9262, a gain of 0.0723, over the other methods when compared on the 6 main test images. Experimental results in the proposed study validate that the architecture substantially surpasses image compression frameworks that utilize deblocking or denoising post-processing techniques, evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), with a mean PSNR, SSIM, and compression ratio of 38.45, 0.9602, and 1.75x, respectively, for the 50 test images, thus obtaining state-of-the-art performance for Quality Factor (QF) = 5.
7

Petraikin, A. V., Zh E. Belaya, A. N. Kiseleva, Z. R. Artyukova, M. G. Belyaev, V. A. Kondratenko, M. E. Pisov et al. "Artificial intelligence for diagnosis of vertebral compression fractures using a morphometric analysis model, based on convolutional neural networks". Problems of Endocrinology 66, no. 5 (December 25, 2020): 48–60. http://dx.doi.org/10.14341/probl12605.

Abstract:
BACKGROUND: Pathological low-energy (LE) vertebral compression fractures (VFs) are common complications of osteoporosis and predictors of subsequent LE fractures. In 84% of cases, VFs are not reported on chest CT (CCT), which calls for the development of an artificial intelligence-based (AI) assistant that would help radiology specialists improve the diagnosis of osteoporosis complications and prevent new LE fractures. AIMS: To develop an AI model for automated diagnosis of compression fractures of the thoracic spine based on chest CT images. MATERIALS AND METHODS: Between September 2019 and May 2020, the authors performed a retrospective sampling study of CCT images. A total of 160 results were selected and anonymized. The data were labeled by seven readers. Using morphometric analysis, the investigators obtained the following metric data: ventral, medial, and dorsal dimensions. This was followed by a semiquantitative assessment of VF degree. The data were used to develop the Comprise-G AI model based on a CNN, which measures the size of the vertebral bodies and then calculates the compression degree. The model was evaluated with ROC curve analysis and by calculating sensitivity and specificity values. RESULTS: The dataset comprised 160 patients (training group: 100 patients; test group: 60 patients). A total of 2,066 vertebrae were annotated. When detecting Grade 2 and 3 maximum VFs in patients, the Comprise-G model demonstrated a sensitivity of 90.7%, specificity of 90.7%, and ROC AUC of 0.974 on 5-fold cross-validation of the training dataset, and a sensitivity of 83.2%, specificity of 90.0%, and ROC AUC of 0.956 on the test data; per vertebra, it demonstrated a sensitivity of 91.5%, specificity of 95.2%, and ROC AUC of 0.981 on the cross-validation data, and a sensitivity of 79.3%, specificity of 98.7%, and ROC AUC of 0.978 on the test data. CONCLUSIONS: The Comprise-G model demonstrated high diagnostic capabilities in detecting VFs on CCT images and can be recommended for further validation.
8

Jane, Robert, Tae Young Kim, Samantha Rose, Emily Glass, Emilee Mossman, and Corey James. "Developing AI/ML Based Predictive Capabilities for a Compression Ignition Engine Using Pseudo Dynamometer Data". Energies 15, no. 21 (October 28, 2022): 8035. http://dx.doi.org/10.3390/en15218035.

Abstract:
Energy and power demands for military operations continue to rise as autonomous air, land, and sea platforms are developed and deployed with increasingly energetic weapon systems. The primary limiting capability hindering full integration of such systems is the need to effectively and efficiently manage, generate, and transmit energy across the battlefield. Energy efficiency is primarily dictated by the number of dissimilar energy conversion processes in the system. After combustion, a Compression Ignition (CI) engine must periodically continue to inject fuel to produce mechanical energy, simultaneously generating thermal, acoustic, and fluid energy (in the form of unburnt hydrocarbons, engine coolant, and engine oil). In this paper, we present multiple sets of Shallow Artificial Neural Networks (SANNs), Convolutional Neural Networks (CNNs), and K-Nearest Neighbor (KNN) classifiers capable of approximating the in-cylinder conditions and informing future optimization and control efforts. The neural networks provide outstanding predictive capability for the variables of interest and improve understanding of the energy and power management of a CI engine, leading to improved awareness, efficiency, and resilience at the device and system level.
9

Ma, Xiaoqian. "Analysis on the Application of Multimedia-Assisted Music Teaching Based on AI Technology". Advances in Multimedia 2021 (December 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/5728595.

Abstract:
In order to improve the effect of modern music teaching, this paper uses AI technology to construct a multimedia-assisted music teaching system, improves the algorithm to meet the data processing requirements of music teaching, proposes suitable music data filtering algorithms, and applies appropriate data compression. Moreover, the functional structure of the intelligent music teaching system is analyzed with the support of the improved algorithm, and the widely used three-tier framework is adopted for the music multimedia teaching system. Finally, in order to realize the complex functions of the system, a layered approach is adopted. The experimental results show that the multimedia-assisted music teaching system based on AI technology proposed in this paper can effectively improve the effect of modern music teaching.
10

Bai, Ye, Fei Bo, Wencan Ma, Hongwei Xu, and Dawei Liu. "Effect of Interventional Therapy on Iliac Venous Compression Syndrome Evaluated and Diagnosed by Artificial Intelligence Algorithm-Based Ultrasound Images". Journal of Healthcare Engineering 2021 (July 22, 2021): 1–8. http://dx.doi.org/10.1155/2021/5755671.

Abstract:
In order to explore the efficacy of using artificial intelligence (AI) algorithm-based ultrasound images to diagnose iliac vein compression syndrome (IVCS) and assist clinicians in the diagnosis of the disease, the characteristics of vein imaging in patients with IVCS were summarized. After ultrasound image acquisition, the image data were preprocessed to construct a deep learning model that detects the position of venous compression and recognizes benign and malignant lesions. In addition, a dataset was built for model evaluation, with data from hospital patients with thrombotic chronic venous disease (CVD) and deep vein thrombosis (DVT). The IVCS image features extracted by dilated (atrous) convolution formed the AI algorithm imaging group, and unprocessed ultrasound images served as the control group. Digital subtraction angiography (DSA) was performed to examine each patient's veins one week in advance. The patients were then assigned to the AI algorithm imaging group or the control group, and the correlation between May–Thurner syndrome (MTS) and AI algorithm imaging was analyzed based on the DSA and ultrasound results. Iliac venous stenosis (or occlusion) or the formation of collateral circulation was used as a diagnostic index for MTS. Ultrasound showed that the AI algorithm imaging group had a higher percentage of good treatment outcomes than the control group. The recall, precision, and accuracy of the DMRF-convolutional neural network (CNN) were all superior to those of the control group. In addition, venous swelling in the AI algorithm imaging group was milder and pain relief after treatment was greater, and the difference between the AI algorithm imaging group and the control group was statistically significant (p < 0.005). The grouped experiments showed that the constructed AI imaging model was effective for detecting and recognizing lower extremity vein lesions in ultrasound images. To sum up, ultrasound image evaluation and analysis using the AI algorithm during MTS treatment was accurate and efficient, laying a good foundation for future research, diagnosis, and treatment.
11

Kumbhare, Pratiksha R., and U. M. Gokhale. "Design and Implementation of 2D-DCT by Using Arai Algorithm for Image Compression". Journal of Advance Research in Electrical & Electronics Engineering (ISSN: 2208-2395) 2, no. 3 (March 31, 2015): 08–14. http://dx.doi.org/10.53555/nneee.v2i3.212.

Abstract:
The Discrete Cosine Transform (DCT) exploits cosine functions; it transforms a signal from the spatial representation into the frequency domain. It is one of the most widely used techniques for image compression. The main goal of image compression using the DCT is the reduction or elimination of redundancy in the data representation in order to reduce storage and communication costs. In this work, we propose a low-complexity architecture for the computation of an algebraic integer (AI) based 8-point DCT. The proposed approach is fast and has low complexity.
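For context, the computation that the Arai algorithm accelerates is the 8-point DCT-II shown below in its direct form; the Arai-Agui-Nakajima factorization reorders it into 5 multiplications and 29 additions by folding the remaining scale factors into the quantization step. This direct version is a reference sketch, not the low-complexity design itself:

    import math

    def dct8(x):
        """Direct 8-point DCT-II with orthonormal scaling (reference, O(n^2))."""
        out = []
        for k in range(8):
            s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / 16) for n in range(8))
            c = math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)
            out.append(c * s)
        return out

    print([round(v, 2) for v in dct8([52, 55, 61, 66, 70, 61, 64, 73])])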
12

Huang, Chen-Hsiu, and Ja-Ling Wu. "Unveiling the Future of Human and Machine Coding: A Survey of End-to-End Learned Image Compression". Entropy 26, no. 5 (April 24, 2024): 357. http://dx.doi.org/10.3390/e26050357.

Abstract:
End-to-end learned image compression codecs have notably emerged in recent years. These codecs have demonstrated superiority over conventional methods, showcasing remarkable flexibility and adaptability across diverse data domains while supporting new distortion losses. Despite challenges such as computational complexity, learned image compression methods inherently align with learning-based data processing and analytic pipelines due to their well-suited internal representations. The concept of Video Coding for Machines has garnered significant attention from both academic researchers and industry practitioners. This concept reflects the growing need to integrate data compression with computer vision applications. In light of these developments, we present a comprehensive survey and review of lossy image compression methods. Additionally, we provide a concise overview of two prominent international standards, MPEG Video Coding for Machines and JPEG AI. These standards are designed to bridge the gap between data compression and computer vision, catering to practical industry use cases.
13

Ly, Hai-Bang, Lu Minh Le, Huan Thanh Duong, Thong Chung Nguyen, Tuan Anh Pham, Tien-Thinh Le, Vuong Minh Le, Long Nguyen-Ngoc, and Binh Thai Pham. "Hybrid Artificial Intelligence Approaches for Predicting Critical Buckling Load of Structural Members under Compression Considering the Influence of Initial Geometric Imperfections". Applied Sciences 9, no. 11 (May 31, 2019): 2258. http://dx.doi.org/10.3390/app9112258.

Abstract:
The main aim of this study is to develop different hybrid artificial intelligence (AI) approaches, such as an adaptive neuro-fuzzy inference system (ANFIS) and two ANFISs optimized by metaheuristic techniques, namely simulated annealing (SA) and biogeography-based optimization (BBO), for predicting the critical buckling load of structural members under compression, taking into account the influence of initial geometric imperfections. With this aim, the existing results of compression tests on steel columns were collected and used as a dataset. Eleven input parameters, representing the slenderness ratios and initial geometric imperfections, were considered. The predicted target was the critical buckling load of the columns. Statistical criteria, namely the correlation coefficient (R), the root mean squared error (RMSE), and the mean absolute error (MAE), were used to evaluate and validate the three proposed AI models. The results showed that SA and BBO were able to improve the prediction performance of the original ANFIS. Excellent results were achieved using the BBO optimization technique (an improvement in R of 7.15%, in RMSE of 40.48%, and in MAE of 38.45%), and those using the SA technique were not far behind (an improvement in R of 5.03%, in RMSE of 26.68%, and in MAE of 20.40%). Finally, sensitivity analysis was performed, and the most important imperfection affecting column buckling capacity was found to be the initial in-plane loading eccentricity at the top and bottom ends of the columns. The methodology and the AI models developed herein could pave the way to an advanced approach to forecasting damage of columns under compression.
14

Saffarini, Mohammed H., and George Z. Voyiadjis. "Atomistic-Continuum Constitutive Modeling Connection for Gold Foams under Compression at High Strain Rates: The Dislocation Density Effect". Metals 13, no. 4 (March 25, 2023): 652. http://dx.doi.org/10.3390/met13040652.

Abstract:
Constitutive description of the plastic flow in metallic foams has rarely been explored in the literature. Even though the material is of great interest to researchers, its plasticity remains a topic with much room for exploration. With the help of the rich literature that explored the material's deformation mechanism, it is possible to introduce a connection between the results of atomistic simulations and the well-established continuum constitutive models developed for various loading scenarios. In this work, we perform large-scale atomistic simulations of metallic gold foams of two different sizes over a wide range of strain rates (10⁷–10⁹ s⁻¹) under uniaxial compression. By utilizing the results of those simulations, as well as the results we reported in our previous works, a physical atomistic-continuum dislocation-based constitutive modeling connection is proposed to capture the compressive plastic flow in gold foams for a wide range of sizes, strain rates, temperatures, and porosities. The results reported in this work present curated datasets that can be of great value for the data-driven AI design of metallic foams with tunable nanoscale properties. Eventually, we aim to produce an optimal physical description to improve integrated physics-based and AI-enabled design, manufacture, and validation of hierarchical architected metallic foams that deliver tailored mechanical responses and precision failure patterns at different scales.
15

Zhang, Weiguo, and Chenggang Zhao. "Exposing Face-Swap Images Based on Deep Learning and ELA Detection". Proceedings 46, no. 1 (November 17, 2019): 29. http://dx.doi.org/10.3390/ecea-5-06684.

Abstract:
New developments in artificial intelligence (AI) have significantly improved the quality and efficiency of generating fake face images; for example, the face manipulations produced by DeepFake are so realistic that it is difficult to determine their authenticity, either automatically or by humans. In order to distinguish facial images generated by AI from real facial images more efficiently, a novel model has been developed based on deep learning and error level analysis (ELA) detection, which is related to entropy and information theory through, for example, the cross-entropy loss function in the final Softmax layer, normalized mutual information in image preprocessing, and applications of an encoder based on information theory. Due to the limitations of computing resources and production time, the DeepFake algorithm can only generate images of limited resolution, resulting in two different image compression ratios between the fake face area (the foreground) and the original area (the background), which leaves distinctive artifacts. By using the error level analysis detection method, we can detect the presence or absence of different image compression ratios and then use a convolutional neural network (CNN) to detect whether the image is fake. Experiments show that the training efficiency of the CNN model can be significantly improved by using the ELA method, and the detection accuracy can reach more than 97% with this CNN architecture. Compared to state-of-the-art models, the proposed model has advantages such as fewer layers, shorter training time, and higher efficiency.
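The ELA step described in this abstract is straightforward to reproduce. Below is a minimal sketch using Pillow; the resave quality and amplification factor are illustrative choices, not the paper's parameters:

    import io
    from PIL import Image, ImageChops

    def ela(path, quality=90, scale=15):
        """Resave the image as JPEG at a known quality and amplify the residual;
        regions compressed at a different ratio (e.g., a spliced face) stand out."""
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, "JPEG", quality=quality)  # recompress at known quality
        resaved = Image.open(buf)
        diff = ImageChops.difference(original, resaved)
        return diff.point(lambda p: min(255, p * scale))  # amplified error levels

    # ela("face.jpg").save("face_ela.png")  # the ELA map is what feeds the CNN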
16

Artyukova, Z. R., N. D. Kudryavtsev, A. V. Petraikin, L. R. Abuladze, A. K. Smorchkova, E. S. Akhmad, D. S. Semenov et al. "Using an artificial intelligence algorithm to assess the bone mineral density of the vertebral bodies based on computed tomography data". Medical Visualization 27, no. 2 (May 13, 2023): 125–37. http://dx.doi.org/10.24835/1607-0763-1257.

Abstract:
Goal: To develop a method for automated assessment of the volumetric bone mineral density (BMD) of the vertebral bodies using an artificial intelligence (AI) algorithm and a phantom modeling method. Materials and Methods: Evaluation of the effectiveness of the AI algorithm designed to assess the BMD of the vertebral bodies based on chest CT data. The test data set contains 100 patients aged over 50 years; the ratio between subjects with/without compression fractures (Cfr) is 48/52. The X-ray density (XRD) of the vertebral bodies at T11-L3 was measured by the experts and by the AI algorithm for 83 patients (205 vertebrae). We used the recently developed QCT PK (Quantitative Computed Tomography Phantom Kalium) method to convert XRD into BMD, followed by building calibration lines for seven 64-slice CT scanners. Images were taken from 1,853 patients and then processed by the AI algorithm after the calibration. The male to female ratio was 718/1135. Results: The experts and the AI algorithm reached strong agreement when comparing the XRD measurements. The coefficient of determination was R² = 0.945 for individual vertebrae (T11-L3) and 0.943 for patients (p = 0.000). Once the subjects from the test sample had been separated into groups with/without Cfr, the XRD data yielded similar ROC AUC values for both the experts (0.880) and the AI algorithm (0.875). When calibrating CT scanners using a phantom containing BMD samples made of potassium hydrogen phosphate, the averaged dependence BMD = 0.77*HU - 1.343 was obtained. Taking into account the American College of Radiology criteria, the cut-off value for osteoporosis (BMD < 80 mg/ml) was 105.6 HU, and for osteopenia (BMD < 120 mg/ml) it was 157.6 HU. During opportunistic assessment of BMD in patients aged over 50 years using the AI algorithm, osteoporosis was detected in 31.72% of female and 18.66% of male subjects. Conclusions: This paper demonstrates good agreement between the measurements of the vertebral bodies' XRD performed by the AI morphometric algorithm and by the experts. We presented a method and demonstrated the effectiveness of opportunistic assessment of the vertebral bodies' BMD based on computed tomography data using the AI algorithm and phantom modeling.
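The averaged calibration quoted in the abstract, BMD = 0.77*HU - 1.343 (mg/ml), together with the stated cut-offs, reduces to a few lines of code. The coefficients are scanner-averaged values from this particular study, not universal constants:

    def hu_to_bmd(hu):
        """Convert CT attenuation (HU) to volumetric BMD (mg/ml), per the
        study's averaged calibration line."""
        return 0.77 * hu - 1.343

    def classify(hu):
        bmd = hu_to_bmd(hu)
        if bmd < 80:    # BMD < 80 mg/ml, i.e., roughly HU < 105.6
            return "osteoporosis"
        if bmd < 120:   # BMD < 120 mg/ml, i.e., roughly HU < 157.6
            return "osteopenia"
        return "normal"

    print(classify(100.0))  # -> osteoporosis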
17

Ashfaq, Mohammed, Mudassir Iqbal, Mohsin Ali Khan, Fazal E. Jalal, Majed Alzara, M. Hamad, and Ahmed M. Yosri. "GEP tree-based computational AI approach to evaluate unconfined compression strength characteristics of Fly ash treated alkali contaminated soils". Case Studies in Construction Materials 17 (December 2022): e01446. http://dx.doi.org/10.1016/j.cscm.2022.e01446.

18

Goderdzishvili, Teimuraz. "Artificial Intelligence and Creative Thinking, the Future of Idea Generation". Economics 105, no. 3-4 (May 15, 2023): 63–73. http://dx.doi.org/10.36962/ecs105/3-4/2023-63.

Abstract:
Artificial intelligence has been the most successful modern technology created in recent years. Its development began in the 1960s and was based on simulating the human brain. The scientific community's interest has focused on developing machine-learning neural networks and their potential applications in music, art, architecture, and other fields. AI is increasingly capable of generating creative ideas and assisting humans in decision-making processes. With advanced algorithms, machine learning, and natural language processing, AI has the potential to revolutionize creative thinking. This article discusses whether AI can generate creative ideas and how expressive and creative these ideas can be. AI technologies can be used in creative industries for content creation, information analysis, information retrieval, and processing. The purpose of new technologies is to automate specific processes, simplify them, maintain accuracy, and reduce production costs. In some cases, they enable us to complete tasks or solve problems that were previously impossible. AI is a set of codes, methods, algorithms, and other data that allow a computer system to imitate human behavior and make human-like decisions. AI systems make wide use of data compression and storage capabilities, which enable them to process information by observing human actions and behaviors. To generate creative ideas, AI requires a different level of training and expertise, a faster rate of information processing, and high-quality analysis. Advances in AI largely depend on data relevance. Human creative thinking often uses imagination to generate original ideas that may not follow general rules. Creative tasks usually require originality of thought and innovation, which can be challenging to replicate with AI systems. Artificial intelligence has shown mixed results in both original creative and automated work processes. However, it is increasingly being associated with human creativity and original thinking. Over the decades, numerous studies have been conducted on the use of artificial intelligence in the creative field. Keywords: creative idea, digital technologies, artificial intelligence, machine learning, natural language processing (NLP).
19

Pinheiro, Antonio. "JPEG Column: 94th JPEG Meeting". ACM SIGMultimedia Records 14, no. 1 (March 2022): 1. http://dx.doi.org/10.1145/3630646.3630650.

Abstract:
The 94th JPEG meeting was held online from 17 to 21 January 2022. A major milestone was reached at this meeting with the release of the final call for proposals under the JPEG AI project. This standard aims at the joint standardization, by IEC, ISO, and ITU, of the first image coding standard based on machine learning, offering a single-stream, compact compressed-domain representation that targets both human visualization, with significant compression efficiency improvement over image coding standards in common use at equivalent subjective quality, and effective performance for image processing and computer vision tasks.
20

Armand, Atiampo Kodjo, Gokou Hervé Fabrice Diédié, and N’Takpé Tchimou Euloge. "Super-tokens Auto-encoders for image compression and reconstruction in IoT applications". International Journal of Advances in Scientific Research and Engineering 10, no. 01 (2024): 29–46. http://dx.doi.org/10.31695/ijasre.2024.1.4.

Abstract:
New telecommunications networks are enabling powerful AI applications for smart cities and transport. These applications require real-time processing of large amounts of media data. Sending data to the cloud for processing is very difficult due to latency and energy constraints. Lossy compression can help, but traditional codecs may not provide enough quality or be efficient enough for resource-constrained devices. This paper proposes a new image compression and processing approach based on variational auto-encoders (VAEs). The VAE-based method aims to efficiently compress images while still allowing high-quality reconstruction and object detection tasks. The encoder is designed to be lightweight and suitable for devices with limited computing power. The decoder is more complex and uses multi-level vector quantization to reconstruct high-resolution images. This approach allows for a simple encoder on edge devices and a powerful decoder on cloud servers. Key contributions include a low-complexity encoder, a new VAE model based on vector quantization, and a framework for using VAEs in IoT. First experiments on reconstructed images from the CelebA and ImageNet100 datasets show promising results in terms of MS-SSIM, PSNR, MSE, and rFID compared to the literature, as well as the suitability of our approach for IoT applications. Our approach yields results similar to complex compression algorithms like BPG in terms of the rate-distortion trade-off, and to the hierarchical auto-encoder (HQA) in terms of image reconstruction quality.
21

Sudars, Kaspars, Ivars Namatēvs, and Kaspars Ozols. "Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach". Journal of Imaging 8, no. 2 (January 30, 2022): 30. http://dx.doi.org/10.3390/jimaging8020030.

Abstract:
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the traffic sign classifier of the Deep Neural Network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project for explainability. The results of the explanations were further used to compress the vague kernels of the PRYSTINE CNN classifier. Then, the precision of the classifier was evaluated in different pruning scenarios. The proposed classifier performance methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the kernels of the network was evaluated for explainability. For this task, a post-hoc, local, meaningful perturbation-based forward explainable method was integrated into the model to evaluate the status of each kernel of the network. This method enabled distinguishing high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer of the classifier before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that by using the XAI approach for network kernel compression, pruning 5% of the kernels leads to a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is crucial where execution time and processing capacity constraints prevail.
22

Pinheiro, Antonio. "JPEG Column: 93rd JPEG Meeting". ACM SIGMultimedia Records 13, no. 4 (December 2021): 1. http://dx.doi.org/10.1145/3578508.3578512.

Abstract:
The 93rd JPEG meeting was held online from 18 to 22 October 2021. The JPEG Committee continued its work on the development of new standardised solutions for the representation of visual information. Notably, the JPEG Committee has decided to release a new call for proposals on point cloud coding based on machine learning technologies that targets both compression efficiency and effective performance for 3D processing as well as machine and computer vision tasks. This activity will be conducted in parallel with JPEG AI standardization. Furthermore, it was also decided to pursue the development of a new standard in the context of the exploration on JPEG Fake News activity.
23

Shkalenko, Anna, and Ekaterina Fadeeva. "Impact of Artificial Intelligence on Creative Industries: Trends and Prospects". Vestnik Volgogradskogo gosudarstvennogo universiteta. Ekonomika, no. 3 (October 2022): 44–59. http://dx.doi.org/10.15688/ek.jvolsu.2022.3.4.

Abstract:
This study is based on the elements of the innovative methodology of post-institutional analysis including interdisciplinary synthesis, the core of which is the evolutionary-genetic concept of production factors and the model of the “development core” of economic systems, which involves overcoming the mono-aspect, dichotomy and dogmatism of many concepts of orthodox neo-institutionalism. The main idea of this study is to apply an interdisciplinary approach to study the impact of artificial intelligence on creative industries. The assessment of the current problems under study and the conceptual framework of the study was carried out on the basis of studying and rethinking the results of numerous works by European and Russian scientists, as well as the legislation of the Russian Federation. As a result of the study, it was found that AI and its technologies are now used and can be used in applications related to the creative industries. An overview of the current state of AI and its technologies was provided, as well as examples of applications for creative industries. The main categories of industries in which end-to-end AI technology is involved are highlighted: content creation, analysis of information, content enhancement and post-production workflows, information extraction and enhancement, and data compression. This study considers two main categories that characterize the economic activity of economic entities in detail: content creation and information analysis. The role of using AI for creative industries is determined, which can improve the process of using responsible innovation for sustainable business development in the period of digital transformation of society. Problems are identified and a forecast is made for the future potential of AI associated with the creative industries.
24

Strantzalis, Konstantinos, Fotios Gioulekas, Panagiotis Katsaros, and Andreas Symeonidis. "Operational State Recognition of a DC Motor Using Edge Artificial Intelligence". Sensors 22, no. 24 (December 9, 2022): 9658. http://dx.doi.org/10.3390/s22249658.

Abstract:
Edge artificial intelligence (EDGE-AI) refers to the execution of artificial intelligence algorithms on hardware devices while processing sensor data/signals in order to extract information and identify patterns, without utilizing the cloud. In the field of predictive maintenance for industrial applications, EDGE-AI systems can provide operational state recognition for machines and production chains, almost in real time. This work presents two methodological approaches for the detection of the operational states of a DC motor, based on sound data. Initially, features were extracted using an audio dataset. Two different Convolutional Neural Network (CNN) models were trained for the particular classification problem. These two models are subject to post-training quantization and an appropriate conversion/compression in order to be deployed to microcontroller units (MCUs) through utilizing appropriate software tools. A real-time validation experiment was conducted, including the simulation of a custom stress test environment, to check the deployed models’ performance on the recognition of the engine’s operational states and the response time for the transition between the engine’s states. Finally, the two implementations were compared in terms of classification accuracy, latency, and resource utilization, leading to promising results.
25

Karras, Aristeidis, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas, and Spyros Sioutas. "TinyML Algorithms for Big Data Management in Large-Scale IoT Systems". Future Internet 16, no. 2 (January 25, 2024): 42. http://dx.doi.org/10.3390/fi16020042.

Abstract:
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms in various numbers of Raspberry Pi devices ranging from one to ten. The experimental results are promising as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI.
26

Pantoli, Leonardo, Vincenzo Stornelli, and Giorgio Leuzzi. "Tunable Active Filters for RF and Microwave Applications". Journal of Circuits, Systems and Computers 23, no. 06 (May 14, 2014): 1450088. http://dx.doi.org/10.1142/s0218126614500881.

Abstract:
In this paper, we present a low-voltage tunable active filter for microwave applications. The proposed filter is based on a single-transistor active inductor (AI), that allows the reduction of circuit area and power consumption. The three active-cell bandpass filter has a 1950 MHz center frequency with a -1 dB flat bandwidth of 10 MHz (Q ≈ 200), a shape factor (30–3 dB) of 2.5, and can be tuned in the range 1800–2050 MHz, with constant insertion loss. A dynamic range of about 75 dB is obtained, with a P1dB compression point of -5 dBm. The prototype board, fabricated on a TLX-8 substrate, has a 4 mW power consumption with a 1.2 V power supply voltage.
27

Asad, Muhammad, Ahmed Moustafa, and Takayuki Ito. "FedOpt: Towards Communication Efficiency and Privacy Preservation in Federated Learning". Applied Sciences 10, no. 8 (April 21, 2020): 2864. http://dx.doi.org/10.3390/app10082864.

Abstract:
Artificial Intelligence (AI) has been applied to solve various challenges of real-world problems in recent years. However, the emergence of new AI technologies has brought several problems, especially with regard to communication efficiency, security threats, and privacy violations. Towards this end, Federated Learning (FL) has received widespread attention due to its ability to facilitate the collaborative training of local learning models without compromising the privacy of data. However, recent studies have shown that FL still consumes considerable amounts of communication resources. These communication resources are vital for updating the learning models. In addition, the privacy of data could still be compromised once the parameters of the local learning models are shared in order to update the global model. Towards this end, we propose a new approach, namely Federated Optimisation (FedOpt), to promote communication efficiency and privacy preservation in FL. In order to implement FedOpt, we design a novel compression algorithm, namely the Sparse Compression Algorithm (SCA), for efficient communication, and then integrate additively homomorphic encryption with differential privacy to prevent data from being leaked. Thus, the proposed FedOpt smoothly trades off communication efficiency and privacy preservation in order to carry out the learning task. The experimental results demonstrate that FedOpt outperforms the state-of-the-art FL approaches. In particular, we consider three different evaluation criteria: model accuracy, communication efficiency, and computation overhead. We compare the proposed FedOpt with the baseline configurations and the state-of-the-art approaches, i.e., Federated Averaging (FedAvg) and Paillier-encryption-based privacy-preserving deep learning (PPDL), on all three evaluation criteria. The experimental results show that FedOpt is able to converge within fewer training epochs and a smaller privacy budget.
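The abstract names a Sparse Compression Algorithm (SCA) without spelling out its details, so the sketch below shows generic top-k gradient sparsification, a building block that such communication-efficient schemes typically rest on: only the k largest-magnitude entries of an update (values plus indices) go over the wire:

    import numpy as np

    def sparsify(update, ratio=0.01):
        """Keep the top `ratio` fraction of entries by magnitude."""
        flat = update.ravel()
        k = max(1, int(flat.size * ratio))
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return idx, flat[idx], update.shape  # what is actually transmitted

    def densify(idx, vals, shape):
        """Rebuild the dense update on the server side."""
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        return flat.reshape(shape)

    g = np.random.randn(256, 128)
    idx, vals, shape = sparsify(g)
    print(f"transmitting {vals.size} of {g.size} values")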
28

Cai, Han, Ji Lin, Yujun Lin, Zhijian Liu, Haotian Tang, Hanrui Wang, Ligeng Zhu, and Song Han. "Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications". ACM Transactions on Design Automation of Electronic Systems 27, no. 3 (May 31, 2022): 1–50. http://dx.doi.org/10.1145/3486618.

Abstract:
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand to enable numerous edge AI applications. This article provides an overview of efficient deep learning methods, systems, and applications. We start from introducing popular model compression methods, including pruning, factorization, quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.
29

Garbowski, Tomasz, Anna Knitter-Piątkowska, and Jakub Krzysztof Grabski. "Estimation of the Edge Crush Resistance of Corrugated Board Using Artificial Intelligence". Materials 16, no. 4 (February 15, 2023): 1631. http://dx.doi.org/10.3390/ma16041631.

Abstract:
Recently, AI has been used in industry for very precise quality control of various products and in the automation of production processes through trained artificial neural networks (ANNs), which allow a human to be replaced entirely in often tedious work or in hard-to-reach locations. Although the search for analytical formulas is often desirable and leads to accurate descriptions of various phenomena, when the problem is very complex or when it is impossible to obtain a complete set of data, methods based on artificial intelligence perfectly complement the engineering and scientific toolbox. In this article, different AI algorithms were used to build a relationship between the mechanical parameters of the papers used to produce corrugated board, the board's geometry, and the resistance of a cardboard sample to edge crushing. There are many analytical, empirical, and advanced numerical models in the literature for estimating the compression resistance of cardboard across the flute. The approach presented here is not only much less demanding to implement than other models, but is just as accurate and precise. In addition, the methodology and example presented in this article show the great potential of machine learning algorithms in such practical applications.
30

Pascual, Alexander M., Erick C. Valverde, Jeong-in Kim, Jin-Woo Jeong, Yuchul Jung, Sang-Ho Kim, and Wansu Lim. "Light-FER: A Lightweight Facial Emotion Recognition System on Edge Devices". Sensors 22, no. 23 (December 6, 2022): 9524. http://dx.doi.org/10.3390/s22239524.

Abstract:
Facial emotion recognition (FER) systems are imperative in recent advanced artificial intelligence (AI) applications to realize better human–computer interactions. Most deep learning-based FER systems have issues with low accuracy and high resource requirements, especially when deployed on edge devices with limited computing resources and memory. To tackle these problems, a lightweight FER system, called Light-FER, is proposed in this paper, which is obtained from the Xception model through model compression. First, pruning is performed during the network training to remove the less important connections within the architecture of Xception. Second, the model is quantized to half-precision format, which could significantly reduce its memory consumption. Third, different deep learning compilers performing several advanced optimization techniques are benchmarked to further accelerate the inference speed of the FER system. Lastly, to experimentally demonstrate the objectives of the proposed system on edge devices, Light-FER is deployed on NVIDIA Jetson Nano.
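The first two compression steps described above, pruning during training and half-precision quantization, map directly onto standard PyTorch utilities. Here is a condensed sketch on a stand-in convolutional layer (the actual system prunes Xception; the layer and pruning amount are illustrative):

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Conv2d(32, 64, kernel_size=3)
    prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the 50% smallest weights
    prune.remove(layer, "weight")                            # bake the mask into the tensor

    layer = layer.half()  # FP16 storage roughly halves memory consumption
    sparsity = (layer.weight == 0).float().mean().item()
    print(layer.weight.dtype, f"sparsity: {sparsity:.2f}")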
31

Chhaily, Zuhair A., Ahmed I. Joda, Ahmed S. Abd Ali, and Zaid H. Ali. "Is Locked Compression Plate Better than Limited Contact Dynamic Compression Plate in Treatment of Closed Middle Third Radius and Ulnar Fractures in Adults: A Short-Term Comparative Study". Iraqi Journal of Medical Sciences 20, no. 1 (June 30, 2022): 146–53. http://dx.doi.org/10.22578/ijms.20.1.19.

Abstract:
Background: Forearm bone fractures are commonly encountered. The introduction of the locking compression plate (LCP) has revolutionized fracture management. Given their dramatic success for articular fractures, there is speculation that they might be more appropriate for diaphyseal fractures as well. Objective: To compare internal fixation of closed, middle third forearm fractures with LCP and limited contact dynamic compression plate (LC-DCP) in adults with respect to union rate, implant failure, functional outcome, and infection rate. Methods: Twenty-two patients with closed, middle third fractures of both forearm bones were enrolled in this prospective, randomized, controlled study, which took place between February 2019 and January 2021. They were divided into two groups based on open reduction and internal fixation with LCP (n=11) or with LC-DCP (n=11). Postoperative follow-up took place at 1, 2, and 6 weeks and at 3 and 6 months. The patients were assessed for implant failure and fracture union, with Anderson's criteria used to assess union, forearm rotation, and wrist flexion-extension, and the Disabilities of the Arm, Shoulder and Hand (DASH) score used for patient-related outcome at the latest follow-up. Results: The mean age of the patients was 30.9 years (range 19-47 years), with a mean follow-up of about 2 years. The union rate in the LCP group was 100%, whereas in the LC-DCP group it was 81.8% (p = 0.4), which is not statistically significant. The p values for the Quick DASH score and Anderson's criteria were 0.8 and 0.43, respectively, which are also not statistically significant. There was no incidence of implant failure in either group. Conclusion: Although LCP is an effective treatment alternative and may have a subtle edge over LC-DCP in the management of these fractures, their supremacy could not be certified. We deduce that surgical planning and expertise, rather than the choice of implant, are more pivotal for outstanding results. Keywords: limited contact dynamic compression plate, locking compression plate, closed middle third fractures, both bones of forearm. Citation: Chhaily ZA, Joda AI, Abd Ali AS, Ali ZH. Is locked compression plate better than limited contact dynamic compression plate in treatment of closed middle third radius and ulnar fractures in adults: A short-term comparative study. Iraqi JMS. 2022; 20(1): 146-153. doi: 10.22578/IJMS.20.1.19
32

Gaspar-Cunha, António, Paulo Costa, Alexandre Delbem, Francisco Monaco, Maria José Ferreira, and José Covas. "Evolutionary Multi-Objective Optimization of Extrusion Barrier Screws: Data Mining and Decision Making". Polymers 15, no. 9 (May 7, 2023): 2212. http://dx.doi.org/10.3390/polym15092212.

Abstract:
Polymer single-screw extrusion is a major industrial processing technique used to obtain plastic products. To assure high outputs, tight dimensional tolerances, and excellent product performance, extruder screws may show different design characteristics. Barrier screws, which contain a second flight in the compression zone, have become quite popular as they promote and stabilize polymer melting. Therefore, it is important to design efficient extruder screws and decide whether a conventional screw will perform the job efficiently, or a barrier screw should be considered instead. This work uses multi-objective evolutionary algorithms to design conventional and barrier screws (Maillefer screws will be studied) with optimized geometry. The processing of two polymers, low-density polyethylene and polypropylene, is analyzed. A methodology based on the use of artificial intelligence (AI) techniques, namely, data mining, decision making, and evolutionary algorithms, is presented and utilized to obtain results with practical significance, based on relevant performance measures (objectives) used in the optimization. For the various case studies selected, Maillefer screws were generally advantageous for processing LDPE, while for PP, the use of both types of screws would be feasible.
33

Volyanska, Larysa, Maryna Pikul, and Volodymyr Otroshchenko. "Analysis of Methods for Increasing the Efficiency of Gas Turbine Unit Operation". Science-based technologies 51, no. 3 (October 28, 2021): 255–64. http://dx.doi.org/10.18372/2310-5461.51.15996.

Texto completo da fonte
Resumo:
A promising direction in the development of the energy sector is the use of energy-saving technologies based on gas turbine units, which can significantly increase the efficiency of fossil fuel use. One of the promising ways to improve the fuel efficiency of gas turbine plants is the use of complex thermodynamic cycles. The article presents a study of increasing the efficiency of gas turbine plants by improving their thermal and technological schemes. Schemes with intermediate cooling during air compression and with heat recovery of exhaust gases are considered. The article discusses the problem of optimizing and selecting rational parameters of the working process of a gas turbine plant. Integrated optimization of the parameters of the thermodynamic cycle of a gas turbine unit, such as the gas temperature in front of the turbine and the compressor pressure ratio, as well as the parameters that determine the working process of additional units of the plant, plays an important role in increasing its efficiency. The AI–336–1 / 2–10 gas turbine drive, designed to drive gas-pumping units and other industrial plants with a capacity of 10 MW, was chosen as the object of the study. Selecting workflow parameters that provide maximum efficiency is a major challenge in the design of complex-cycle engines. The results of a numerical study of the influence of the main operating-process parameters on efficiency are presented, and the influence of the pressure ratio and the maximum cycle temperature on the cycle parameters is analyzed. The values of the pressure ratio that are optimal for effective efficiency are compared with those optimal for specific effective work, for cycles of a simple scheme and a complex one. It is shown that, owing to multistage compression with intercooling and exhaust gas regeneration, it is possible to achieve high values of effective efficiency, which for a gas turbine plant of a simple scheme can be obtained only at high cycle parameters.
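For orientation, the trade-off studied above can be seen in the textbook relations for an ideal regenerative Brayton cycle with multistage intercooled compression; these are standard simplifications for illustration, not the model used in the article.

```latex
% Ideal Brayton cycle with a perfect regenerator (standard textbook result):
\[
\eta_{\mathrm{reg}} \;=\; 1 - \frac{T_{1}}{T_{3}}\,
\pi_c^{\frac{\gamma-1}{\gamma}},
\]
% where $T_1$ is the compressor-inlet temperature, $T_3$ the turbine-inlet
% temperature, $\pi_c$ the overall pressure ratio, and $\gamma$ the ratio of
% specific heats. Unlike the simple cycle, $\eta_{\mathrm{reg}}$ decreases as
% $\pi_c$ grows, so the pressure ratio optimal for efficiency differs from
% the one optimal for specific work. With $n$ compression stages and ideal
% intercooling back to $T_1$, the specific compression work becomes
\[
w_c \;=\; n\,c_p T_{1}\left(\pi_c^{\frac{\gamma-1}{n\gamma}} - 1\right),
\]
% which is smaller than the single-stage value for any $n > 1$.
```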
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Baek, Myunghun, Taeho An, Seungkuk Kuk e Kyongtae Park. "14‐3: Drop Resistance Optimization Through Post‐Hoc Analysis of Chemically Strengthened Glass". SID Symposium Digest of Technical Papers 54, n.º 1 (junho de 2023): 174–77. http://dx.doi.org/10.1002/sdtp.16517.

Texto completo da fonte
Resumo:
In the market for mobile cover glass, the development of chemically strengthened glass is focused on improving drop resistance. The cover glass that protects the display panel is a typically brittle material in which micro cracks tend to occur underneath the glass surface due to physical impacts. The micro cracks tend to be propagated by tensile stress, which is the generally accepted mechanism of glass breakage. To improve the strength of cover glass, compressive stress is applied to the glass surface by chemical strengthening through ion exchange. However, since the central region of the glass is subjected to relative tensile stress, the glass breaks instantly when propagating cracks reach the area under tensile stress. Glass specifications have been managed to maintain strength using the parameters of compressive stress (CS), depth of compression (DOC), and central tension (CT). However, there was no method to judge which factor is most effective for impact drop resistance. To solve this problem, we developed a machine learning program for cover glass that can predict the breakage height based on stress, using mapped evaluation data built from measured stress profile data and a simulated drop breakage evaluation. In particular, an eXplainable AI (XAI) method is used to identify the factors that influence drop breakage height, and the test and measurement data were applied to various chemically strengthened glasses for analysis. In this study, it was possible to predict the drop breakage height solely by measuring the stress profile of chemically strengthened glass. For alkali aluminosilicate glass, the feature of chemical strengthening with the greatest effect on drop breakage resistance on rough surfaces was identified as the stress value at a depth of 30 µm. It was also demonstrated that the drop breakage height was improved by 25% simply by increasing the compressive stress at 30 µm depth.
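The feature-attribution step described above can be approximated with permutation importance. The sketch below is a minimal illustration; the dataset file and column names (stress sampled at several depths, plus DOC and CT) are assumptions, not the authors' program or data.

```python
# Minimal sketch: which stress-profile feature best predicts drop breakage
# height? Dataset file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("strengthened_glass.csv")          # hypothetical dataset
features = ["cs_surface", "cs_30um", "cs_50um", "doc", "ct"]
X, y = df[features], df["breakage_height_cm"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in R^2:
# the larger the drop, the more that part of the profile matters.
imp = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```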
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Lucan Orășan, Ioan, Ciprian Seiculescu e Cătălin Daniel Căleanu. "A Brief Review of Deep Neural Network Implementations for ARM Cortex-M Processor". Electronics 11, n.º 16 (14 de agosto de 2022): 2545. http://dx.doi.org/10.3390/electronics11162545.

Texto completo da fonte
Resumo:
Deep neural networks have recently become increasingly used for a wide range of applications (e.g., image and video processing). The demand for edge inference is growing, especially in areas of relevance to the Internet-of-Things. Low-cost microcontrollers as edge devices are a promising solution for building optimal application systems from several points of view, such as cost, power consumption, latency, and real-time execution. The implementation of these systems has become feasible due to the advanced development of hardware architectures and DSP capabilities, while cost and power consumption have been kept low. The aim of the paper is to provide a literature review on the implementation of deep neural networks using ARM Cortex-M core-based low-cost microcontrollers. As an emerging research direction, there are only a limited number of publications that address this topic at the moment. Therefore, the research papers that stand out have been analyzed in greater detail, to encourage researchers to bring AI techniques to low-power standard ARM Cortex-M microcontrollers. The article addresses a niche research domain. Despite the increasing interest in both (1) edge AI applications and (2) theoretical contributions in DNN optimization and compression, the number of existing publications dedicated to the topic is rather limited; a comprehensive literature survey using systematic mapping is therefore not possible. The presentation focuses on systems that have shown increased efficiency in resource-constrained applications, as well as the predominant impediments that still hinder their implementation. The reader will take away the following from this paper: (1) an overview of applications, DNN architectures, and results obtained using ARM Cortex-M core-based microcontrollers, (2) an overview of low-cost hardware devices and software development solutions, and (3) an understanding of recent trends and opportunities.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Zhu, Mengyun, Ximin Fan, Weijing Liu, Jianying Shen, Wei Chen, Yawei Xu e Xuejing Yu. "Artificial Intelligence-Based Echocardiographic Left Atrial Volume Measurement with Pulmonary Vein Comparison". Journal of Healthcare Engineering 2021 (6 de dezembro de 2021): 1–11. http://dx.doi.org/10.1155/2021/1336762.

Texto completo da fonte
Resumo:
This paper combines echocardiographic signal processing and artificial intelligence technology to propose a deep neural network model adapted to echocardiographic signals, achieving left atrial volume measurement and automatic assessment of the pulmonary veins efficiently and quickly. Based on the echocardiographic signal generation mechanism and detection method, an experimental scheme for echocardiographic signal acquisition was designed. Echocardiographic signals of healthy subjects were measured in four different experimental states, and a database of left atrial volume measurements and pulmonary veins was constructed. Using the correspondence between ECG signals and echocardiographic signals in the time domain, preprocessing steps such as denoising, feature point localization, and segmentation of the cardiac cycle were implemented with the wavelet transform and a threshold method to complete the data collection. The paper proposes an artificial-intelligence-based comparative model adapted to the characteristics of one-dimensional time-series echocardiographic signals; it automatically extracts deep features of the echocardiographic signal, effectively reducing the subjective influence of manual feature selection, and realizes automatic classification and evaluation of left atrial volume measurement and pulmonary veins under different states. The experimental results show that the proposed BP neural network model has good adaptability and classification performance in the tasks of left atrial volume measurement and automatic pulmonary vein classification, achieving an average test accuracy of over 96.58%. By extracting coding features of the original echocardiographic signal through a convolutional autoencoder, the average root-mean-square error of signal compression is only 0.65%, completing the compression with low loss. Comparing the LSTM network's training time and classification accuracy on the original signal versus the encoded features shows that the AI model can greatly reduce the training time cost while achieving an average accuracy of 97.97% on the test set, and it increases the real-time performance of left atrial volume measurement and pulmonary vein evaluation as well as the security of the data transmission process, which is of great practical importance for comparing left atrial volume measurements with the pulmonary veins.
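A minimal sketch of a 1-D convolutional autoencoder of the kind described is shown below; the 1024-sample window and the layer sizes (giving a 16x latent compression) are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a 1-D convolutional autoencoder for echo signals.
# Window length and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class EchoAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 1024 samples -> 64-value latent code (16x compression)
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(4, 1, kernel_size=3, stride=1, padding=1),
        )
        # Decoder mirrors the encoder back to 1024 samples
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, 4, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(4, 8, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):                # x: (batch, 1, 1024)
        code = self.encoder(x)           # (batch, 1, 64) coding features
        return self.decoder(code), code

model = EchoAutoencoder()
x = torch.randn(2, 1, 1024)              # two fake signal windows
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction error to minimize
print(recon.shape, code.shape, float(loss))
```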
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Lingyu, Tian, He Dongpo, Zhao Jianing e Wang Hongguang. "Durability of geopolymers and geopolymer concretes: A review". REVIEWS ON ADVANCED MATERIALS SCIENCE 60, n.º 1 (1 de janeiro de 2021): 1–14. http://dx.doi.org/10.1515/rams-2021-0002.

Texto completo da fonte
Resumo:
Geopolymers are green materials with three-dimensional silicon and aluminum tetrahedral structures that can serve as environmentally friendly construction materials and therefore have the potential to contribute to sustainable development. In this paper, the mechanisms and research progress regarding the carbonation resistance, structural fire resistance, corrosion resistance, permeation properties, and frost resistance of geopolymer concretes are reviewed, and the main problems with the durability of geopolymer concretes are discussed. Geopolymers possess superb mechanical properties, and their compressive strengths can exceed 100 MPa. Generally, the higher the GPC strength, the better its carbonation resistance. GPC has excellent fire resistance because geopolymers have an inorganic skeleton, which is affected by the alkali content, the alkali cation, and the Si/Al ratio. There are a large number of Al-O and Si-O structures in geopolymers. Geopolymers do not react with acids at room temperature and can be used to make acid-resistant materials. In addition, GPC, with its low porosity, shows good resistance to permeation. The freeze-thaw failure mechanism of geopolymer concretes is mainly explained by hydrostatic and osmotic pressure theory. GPC has poor frost resistance, with a freeze-thaw limit of fewer than 75 cycles.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Talib, Hanadi Saad Bin, Afnan Abdaljabbar Almurashi e Khames T. Alzahrani. "Cardiopulmonary Resuscitation CPR Quality Outcome of Patients with Cardiac Arrest by Using Robotics/ Artificial Intelligence in Hospitals". International Journal of Innovative Research in Medical Science 8, n.º 09 (4 de setembro de 2022): 440–44. http://dx.doi.org/10.23958/ijirms/vol07-i09/1502.

Texto completo da fonte
Resumo:
Background: Life-saving procedures require rapid and effective methods to keep patients alive. Some literature reviews show that robotic medical systems, such as chest compression manipulators, have been applied to enhance quality during cardiopulmonary resuscitation in hospitals. Because few studies have addressed the importance of using robotics, this paper discusses the use of robotics and artificial intelligence (AI) during cardiopulmonary resuscitation in hospitals, focusing on the efficiency of the outcomes, based on systematic review studies. Methods: This systematic review is based on collecting all previous articles on the cardiopulmonary resuscitation (CPR) quality outcomes of patients with cardiac arrest treated using robotics/artificial intelligence in hospitals. Results: The review process involved determining the suitability of 46 publications; 22 papers passed the full-text screening into the final review. The purpose of this research was to assess and improve outcomes for patients with cardiac arrest using robotics/artificial intelligence during CPR in hospitals. Conclusion: Robotics and artificial intelligence have produced devices that can reduce the risks in saving patients. They are safe, affordable, and available in real time, and more precise instrumentation and controls can adapt the massage to the rigidity of the patient's rib cage or to the clinical presentation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Alzaidy, F., e A. H. K. Albayati. "A Comparison between Static and Repeated Load Test to Predict Asphalt Concrete Rut Depth". Engineering, Technology & Applied Science Research 11, n.º 4 (21 de agosto de 2021): 7363–69. http://dx.doi.org/10.48084/etasr.4236.

Texto completo da fonte
Resumo:
Rutting has a significant impact on the pavements' performance. Rutting depth is often used as a parameter to assess the quality of pavements. The Asphalt Institute (AI) design method prescribes a maximum allowable rutting depth of 13mm, whereas the AASHTO design method stipulates a critical serviceability index of 2.5 which is equivalent to an average rutting depth of 15mm. In this research, static and repeated compression tests were performed to evaluate the permanent strain based on (1) the relationship between mix properties (asphalt content and type), and (2) testing temperature. The results indicated that the accumulated plastic strain was higher during the repeated load test than that during the static load tests. Notably, temperature played a major role. The power-law model was used to describe the relationship between the accumulated permanent strain and the number of load repetitions. Furthermore, graphical analysis was performed using VESYS 5W to predict the rut depth for the asphalt concrete layer. The α and µ parameters affected the predicted rut depth significantly. The results show a substantial difference between the two tests, indicating that the repeated load test is more adequate, useful, and accurate when compared with the static load test for the evaluation of the rut depth.
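For reference, the power-law model named in the abstract, and its usual link to the VESYS α and µ layer parameters, have the following textbook form; the calibration used in the article may differ.

```latex
% Accumulated permanent strain after N load repetitions (log-log linear fit):
\[
\varepsilon_p(N) \;=\; a\,N^{b},
\]
% with intercept a and slope b fitted from the repeated-load test. The VESYS
% rutting parameters are then commonly taken as
\[
\alpha \;=\; 1 - b, \qquad \mu \;=\; \frac{a\,b}{\varepsilon_r},
\]
% where \varepsilon_r is the resilient strain measured in the same test.
```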
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Liu, Jia, Jianjian Xiang, Yongjun Jin, Renhua Liu, Jining Yan e Lizhe Wang. "Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey". Remote Sensing 13, n.º 21 (30 de outubro de 2021): 4387. http://dx.doi.org/10.3390/rs13214387.

Texto completo da fonte
Resumo:
In recent years, unmanned aerial vehicles (UAVs) have emerged as a popular and cost-effective technology to capture high spatial and temporal resolution remote sensing (RS) images for a wide range of precision agriculture applications, which can help reduce costs and environmental impacts by providing detailed agricultural information to optimize field practices. Furthermore, deep learning (DL) has been successfully applied as an intelligent tool in agricultural applications such as weed detection and crop pest and disease detection. However, most DL-based methods place high computation, memory, and network demands on resources. Cloud computing can increase processing efficiency with high scalability and low cost, but it results in high latency and great pressure on the network bandwidth. Edge intelligence, although still in its early stages, provides a promising solution for artificial intelligence (AI) applications on intelligent edge devices, located at the edge of the network close to data sources. These devices have built-in processors enabling onboard analytics or AI (e.g., UAVs and Internet of Things gateways). Therefore, in this paper, a comprehensive survey on the latest developments in precision agriculture with UAV RS and edge intelligence is conducted for the first time. The major insights are as follows: (a) in terms of UAV systems, small or light, fixed-wing or industrial rotor-wing UAVs are widely used in precision agriculture; (b) sensors on UAVs can provide multi-source datasets, but there are only a few public UAV datasets for intelligent precision agriculture, mainly from RGB sensors and a few from multispectral and hyperspectral sensors; (c) DL-based UAV RS methods can be categorized into classification, object detection, and segmentation tasks, with convolutional neural networks and recurrent neural networks being the most commonly used architectures; (d) cloud computing is a common solution for UAV RS data processing, while edge computing brings the computing close to data sources; (e) edge intelligence is the convergence of artificial intelligence and edge computing, in which model compression, especially parameter pruning and quantization, is currently the most important and widely used technique, and typical edge resources include central processing units, graphics processing units, and field programmable gate arrays.
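As a concrete illustration of insight (e), the sketch below shows magnitude-based parameter pruning and uniform 8-bit quantization in their simplest form; real edge toolchains (e.g., TensorFlow Lite) add calibration, per-channel scales, and sparse storage formats.

```python
# Minimal sketch of the two compression techniques highlighted in (e):
# magnitude pruning and uniform affine 8-bit quantization.
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_uint8(w: np.ndarray):
    """Affine-quantize float weights to uint8 with a scale and zero point."""
    scale = max((w.max() - w.min()) / 255.0, 1e-12)
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1000).astype(np.float32)    # stand-in for a weight tensor
w_pruned = prune_by_magnitude(w, sparsity=0.8)  # 80% of weights become zero
q, s, z = quantize_uint8(w_pruned)              # 4x smaller than float32
err = np.abs(dequantize(q, s, z) - w_pruned).max()
print(f"max dequantization error: {err:.4f}")
```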
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Park, Hyung Joo. "Technical Innovations in the Minimally Invasive Approach for Treating Pectus Excavatum: A Paradigm Shift through Six Years’ Experience with 630 Patients". Innovations: Technology and Techniques in Cardiothoracic and Vascular Surgery 2, n.º 1 (janeiro de 2007): 25–28. http://dx.doi.org/10.1097/imi.0b013e3180313a19.

Texto completo da fonte
Resumo:
Objective The multiple-momentum (MM) based multitarget (MT) approach has been developed through a single surgeon's experience to overcome the limits of the conventional Nuss procedure, which is a single-target, single-momentum approach that corrects only symmetric pectus excavatum (PE). The new techniques, devised on a morphologic basis according to the Terrain Contour Matching (TERCOM) system, have made this a comprehensive approach that can cover all types of pectus deformity. The aim of this study was to elucidate the difference between the conventional technique and the new approach. Methods The data of 630 PE patients who received the modified Nuss procedure, based on the MM-MT-TERCOM approach, between 1999 and 2005 were retrospectively studied. The conceptual differences between the new approach and the conventional one were determined. The techniques of the new paradigm, for treating asymmetry, adults, and complex morphology, as well as the bar fixation technique, were analyzed. The results of the repair were assessed with a new CT index, the Asymmetry Index (AI). Results According to the morphologic classification, 269 patients were asymmetric (42.7%): 138 were eccentric (53 Grand Canyon type), 88 were unbalanced, and 36 were combined. On the basis of the MM-MT-TERCOM concept for repairing complex morphology, multiple targets were selected in 224 patients (35.6%). To correct targets simultaneously, positive momentum (630 patients, 100%) and negative momentum (124 patients, 19.7%) were applied as appropriate. The techniques used were an asymmetric bar (250 patients, 39.7%), a seagull bar (107 patients, 17%), a complex bar via TERCOM (126 patients, 20%), the crest compression technique (59 patients, 9.4%), and a compound bar (84 patients, 13.3%). The postoperative changes of the AI were from 1.03 ± 0.06 to 1.02 ± 0.13 (P = 0.117) in the symmetric group and from 1.1 ± 0.05 to 1.02 ± 0.02 (P < 0.001) in the asymmetric group. Conclusions Refinement of techniques in accordance with the morphology and the cause-effect basis of the bar action provided reproducible results for achieving postrepair symmetry in complex PE. Therefore, the new approach with techniques that use multiple momentums (MM-MT-TERCOM) supports a new paradigm of the Nuss procedure that is effective in repairing all morphologic types of PE.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Bharany, Salil, Sandeep Sharma, Surbhi Bhatia, Mohammad Khalid Imam Rahmani, Mohammed Shuaib e Saima Anwar Lashari. "Energy Efficient Clustering Protocol for FANETS Using Moth Flame Optimization". Sustainability 14, n.º 10 (19 de maio de 2022): 6159. http://dx.doi.org/10.3390/su14106159.

Texto completo da fonte
Resumo:
FANETs (flying ad-hoc networks) are currently a trending research topic. Unmanned aerial vehicles (UAVs) face two significant challenges: short flight times and inefficient routing due to low battery power and high mobility. Due to these topological restrictions, FANET routing is considered more complicated than in MANETs or VANETs. Clustering approaches based on artificial intelligence (AI) can be used to solve complex routing issues when static and dynamic routing fail. Evolutionary clustering techniques, such as moth flame optimization and ant colony optimization, can be used to solve such routing problems. Moth flame optimization gives excellent coverage while consuming little energy and requiring a minimum number of cluster heads (CHs) for routing. This paper employs a moth flame optimization algorithm for network building and node deployment. We then employ a variation of the K-Means Density clustering approach to choose the cluster head; choosing the right cluster heads increases the cluster's lifespan and reduces routing traffic, and it also lowers routing overhead. This step is followed by MRCQ image-based compression to reduce the amount of data that must be transmitted. Finally, the reference point group mobility model is used to send data along the most optimal path. Particle swarm optimization (PSO), ant colony optimization (ACO), and grey wolf optimization (GWO) were tested against our proposed EECP-MFO. Several metrics are used to gauge the efficiency of the proposed method, including the number of clusters, cluster construction time, cluster lifespan, consistency of cluster heads, and energy consumption. Experimental results demonstrate that the proposed algorithm outperforms current state-of-the-art approaches.
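The moth flame optimization at the heart of the proposed protocol follows Mirjalili's logarithmic-spiral update. The simplified sketch below optimizes a toy objective; a FANET deployment would substitute a fitness combining residual node energy, coverage, and distances to candidate cluster heads (all parameters here are illustrative).

```python
# Simplified moth flame optimization (MFO): moths fly around the best
# solutions found so far (flames) along a logarithmic spiral.
import numpy as np

def mfo(fitness, dim=2, n_moths=20, iters=100, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    for it in range(iters):
        # Simplified elitism: flames are the current moths sorted by fitness
        flames = moths[np.argsort([fitness(m) for m in moths])]
        n_flames = round(n_moths - it * (n_moths - 1) / iters)  # shrinks to 1
        a = -1 - it / iters                                     # -1 -> -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)            # moth i orbits flame j
            d = np.abs(flames[j] - moths[i])    # distance to the flame
            t = (a - 1) * rng.random(dim) + 1   # t in (a, 1]
            # Logarithmic spiral: small t converges on the flame,
            # t near 1 explores around it
            moths[i] = d * np.exp(t) * np.cos(2 * np.pi * t) + flames[j]
            moths[i] = np.clip(moths[i], lb, ub)
    return min(moths, key=fitness)

best = mfo(lambda x: np.sum(x ** 2))            # toy sphere objective
print("best position found:", best)
```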
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Rao, Praveena, Hemaraju Pollayi e Madhuri Rao. "Machine learning based design of reinforced concrete shear walls subjected to earthquakes". Journal of Physics: Conference Series 2327, n.º 1 (1 de agosto de 2022): 012068. http://dx.doi.org/10.1088/1742-6596/2327/1/012068.

Texto completo da fonte
Resumo:
Civil engineering structural components are classified according to their projected structural performance in present building code regulations and design standards. These building design codes are largely based on previous experimental results from thousands of samples tested to failure and validated with analytical solutions. Machine learning (ML) is a subset of artificial intelligence (AI) that facilitates the classification and prediction of structural performance for a broad spectrum of complex structures with greater accuracy. Machine learning models have the potential to make reliable predictions with the help of algorithms, thereby saving a tremendous amount of the time and resources invested in experimental investigations of large structural components such as shear walls and columns. ML algorithms can learn from the available data, deduce underlying inter-relationships, make inferences, and detect patterns based on previous experience. In the present work, various ML algorithms were implemented to identify the influence of geometrical as well as mechanical characteristics. A database of 393 reinforced concrete shear wall specimens with rectangular (R), flanged (F), and barbell (B) cross-sections is adopted for the analysis. Shear walls are fundamentally classified into four failure categories: flexure (bending), shear, intermediate flexure-shear, and sliding shear. The objective of this paper is to classify and predict the shear strength, the flexural strength as per the Indian standard code provisions, and the failure modes of shear walls with the help of ML techniques. Algorithms such as K-Nearest Neighbors, Naive Bayes, Decision Tree, Random Forest, AdaBoost, LightGBM, XGBoost, and CatBoost are implemented using Python. The highest test-set accuracy of 85% is achieved by Random Forest, followed by 83% for CatBoost and 81% for LightGBM. It is observed that input variables such as the aspect ratio (l_w/t_w), the characteristic compressive strength of concrete (f_ck), the characteristic yield strength of steel (f_y), the percentage of steel (ρ), the web vertical reinforcement, the horizontal reinforcement, and the boundary element reinforcement play a vital role in governing the shear strength (V_u) and flexural strength (M_u) of shear walls.
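A minimal sketch of the failure-mode classification pipeline follows; the feature names mirror the inputs listed above, but the file layout and hyperparameters are assumptions, and the published database is not reproduced here.

```python
# Minimal sketch: classify shear wall failure modes with a Random Forest.
# CSV layout is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("shear_walls.csv")   # hypothetical file with 393 specimens
features = ["aspect_ratio", "f_ck", "f_y", "rho_vertical",
            "rho_horizontal", "rho_boundary", "cross_section"]
X = pd.get_dummies(df[features], columns=["cross_section"])  # R/F/B one-hot
y = df["failure_mode"]   # flexure / shear / flexure-shear / sliding

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```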
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Miquel, Jonathan, Laurent Latorre e Simon Chamaillé-Jammes. "Energy-Efficient Audio Processing at the Edge for Biologging Applications". Journal of Low Power Electronics and Applications 13, n.º 2 (27 de abril de 2023): 30. http://dx.doi.org/10.3390/jlpea13020030.

Texto completo da fonte
Resumo:
Biologging refers to the use of animal-borne recording devices to study wildlife behavior. In the case of audio recording, such devices generate large amounts of data over several months, and thus require some level of processing automation for the raw data collected. Academics have widely adopted offline deep-learning-classification algorithms to extract meaningful information from large datasets, mainly using time-frequency signal representations such as spectrograms. Because of the high deployment costs of animal-borne devices, the autonomy/weight ratio remains by far the fundamental concern. Basically, power consumption is addressed using onboard mass storage (no wireless transmission), yet the energy cost associated with data storage activity is far from negligible. In this paper, we evaluate various strategies to reduce the amount of stored data, making the fair assumption that audio will be categorized using a deep-learning classifier at some point of the process. This assumption opens up several scenarios, from straightforward raw audio storage paired with further offline classification on one side, to a fully embedded AI engine on the other side, with embedded audio compression or feature extraction in between. This paper investigates three approaches focusing on data-dimension reduction: (i) traditional inline audio compression, namely ADPCM and MP3, (ii) full deep-learning classification at the edge, and (iii) embedded pre-processing that only computes and stores spectrograms for later offline classification. We characterized each approach in terms of total (sensor + CPU + mass-storage) edge power consumption (i.e., recorder autonomy) and classification accuracy. Our results demonstrate that ADPCM encoding brings 17.6% energy savings compared to the baseline system (i.e., uncompressed raw audio samples). Using such compressed data, a state-of-the-art spectrogram-based classification model still achieves 91.25% accuracy on open speech datasets. Performing inline data-preparation can significantly reduce the amount of stored data allowing for a 19.8% energy saving compared to the baseline system, while still achieving 89% accuracy during classification. These results show that while massive data reduction can be achieved through the use of inline computation of spectrograms, it translates to little benefit on device autonomy when compared to ADPCM encoding, with the added downside of losing original audio information.
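Approach (iii), storing spectrograms instead of raw audio, can be sketched as follows. The window length, hop, and float16 storage are illustrative assumptions; an embedded implementation would use fixed-point DSP routines, and real pipelines typically add mel filterbanks and log compression for much larger reductions.

```python
# Minimal sketch: on-device spectrogram computation as data reduction.
import numpy as np

def spectrogram(signal, n_fft=256, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).astype(np.float16)

fs = 16_000
audio = np.random.randn(fs * 10).astype(np.float32)  # 10 s of fake audio
spec = spectrogram(audio)

raw_bytes = audio.size * 2    # as 16-bit PCM samples
spec_bytes = spec.size * 2    # float16 spectrogram bins
print(f"raw: {raw_bytes} B, spectrogram: {spec_bytes} B, "
      f"reduction: {raw_bytes / spec_bytes:.1f}x")
```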
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Wolff, J. Gerard. "How the SP System May Promote Sustainability in Energy Consumption in IT Systems". Sustainability 13, n.º 8 (20 de abril de 2021): 4565. http://dx.doi.org/10.3390/su13084565.

Texto completo da fonte
Resumo:
The SP System (SPS), referring to the SP Theory of Intelligence and its realisation as the SP Computer Model, has the potential to reduce demands for energy from IT, especially in AI applications and in the processing of big data, in addition to reductions in CO2 emissions when the energy comes from the burning of fossil fuels. The biological foundations of the SPS suggest that with further development, the SPS may approach the extraordinarily low (20 W) energy demands of the human brain. Some of these savings may arise in the SPS because, like people, the SPS may learn usable knowledge from a single exposure or experience. As a comparison, deep neural networks (DNNs) need many repetitions, with much consumption of energy, for the learning of one concept. Another potential saving with the SPS is that like people, it can incorporate old learning in new. This contrasts with DNNs where new learning wipes out old learning (‘catastrophic forgetting’). Other ways in which the mature SPS is likely to prove relatively parsimonious in its demands for energy arise from the central role of information compression (IC) in the organisation and workings of the system: by making data smaller, there is less to process; because the efficiency of searching for matches between patterns can be improved by exploiting probabilities that arise from the intimate connection between IC and probabilities; and because, with SPS-derived ‘Model-Based Codings’ of data, there can be substantial reductions in the demand for energy in transmitting data from one place to another.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Fanariotis, Anastasios, Theofanis Orphanoudakis, Konstantinos Kotrotsios, Vassilis Fotopoulos, George Keramidas e Panagiotis Karkazis. "Power Efficient Machine Learning Models Deployment on Edge IoT Devices". Sensors 23, n.º 3 (1 de fevereiro de 2023): 1595. http://dx.doi.org/10.3390/s23031595.

Texto completo da fonte
Resumo:
Computing has undergone a significant transformation over the past two decades, shifting from a machine-based approach to a human-centric, virtually invisible service known as ubiquitous or pervasive computing. This change has been achieved by incorporating small embedded devices into a larger computational system, connected through networking and referred to as edge devices. When these devices are also connected to the Internet, they are generally named Internet-of-Things (IoT) devices. Developing machine learning (ML) algorithms on these types of devices allows them to provide artificial intelligence (AI) inference functions such as computer vision and pattern recognition. However, this capability is severely limited by the devices' resource scarcity: embedded devices have limited computational and power resources available while they must maintain a high degree of autonomy. While there are several published studies that address the computational weakness of these small systems, mostly through optimization and compression of neural networks, they often neglect the power consumption and efficiency implications of these techniques. This study presents experimental power-efficiency results from the application of well-known and proven optimization methods to a set of well-known ML models. The results are presented in a meaningful manner considering the “real world” functionality of devices, and they are compared with the basic “idle” power consumption of each of the selected systems. Two different systems with completely different architectures and capabilities were used, providing results that led to interesting conclusions about the power efficiency of each architecture.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Bicchi, Marco, Davide Biliotti, Michele Marconcini, Lorenzo Toni, Francesco Cangioli e Andrea Arnone. "An AI-Based Fast Design Method for New Centrifugal Compressor Families". Machines 10, n.º 6 (8 de junho de 2022): 458. http://dx.doi.org/10.3390/machines10060458.

Texto completo da fonte
Resumo:
Limiting the effects of global warming requires a rapid reduction of greenhouse gas emissions to pursue net-zero carbon growth in the coming decades. Along with this energy transition, drastic and rapid changes in demand are expected in many sectors, including that for centrifugal compressors. In this context, new aerodynamic design processes exploiting the know-how embodied in existing impeller families to generate novel centrifugal compressors could react quickly to demand variations and ensure companies' success. Modifying the characteristics of existing compressors using a 1D single-zone model is a fast way to exploit this know-how. In addition, artificial intelligence can help highlight relationships between geometrical parameters and performance, thus facilitating the achievement of optimized machines for new applications. Although the scientific literature shows several studies on mono-dimensional approaches, the joint use of a 1D single-zone model with an artificial neural network for designing new impellers from pre-engineered ones remains understudied. Such a model is provided in this paper. An application to the case study of an expander-compressor impeller family derived from an existing natural gas liquefaction family is presented. Results show that the proposed model enables the development of a new family from an existing one, improving performance while containing design time and computational effort.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

How, Meng-Leong, e Wei Loong David Hung. "Educing AI-Thinking in Science, Technology, Engineering, Arts, and Mathematics (STEAM) Education". Education Sciences 9, n.º 3 (15 de julho de 2019): 184. http://dx.doi.org/10.3390/educsci9030184.

Texto completo da fonte
Resumo:
In science, technology, engineering, arts, and mathematics (STEAM) education, artificial intelligence (AI) analytics are useful as educational scaffolds to educe (draw out) the students’ AI-Thinking skills in the form of AI-assisted human-centric reasoning for the development of knowledge and competencies. This paper demonstrates how STEAM learners, rather than computer scientists, can use AI to predictively simulate how concrete mixture inputs might affect the output of compressive strength under different conditions (e.g., lack of water and/or cement, or different concrete compressive strengths required for art creations). To help STEAM learners envision how AI can assist them in human-centric reasoning, two AI-based approaches will be illustrated: first, a Naïve Bayes approach for supervised machine-learning of the dataset, which assumes no direct relations between the mixture components; and second, a semi-supervised Bayesian approach to machine-learn the same dataset for possible relations between the mixture components. These AI-based approaches enable controlled experiments to be conducted in-silico, where selected parameters could be held constant, while others could be changed to simulate hypothetical “what-if” scenarios. In applying AI to think discursively, AI-Thinking can be educed from the STEAM learners, thereby improving their AI literacy, which in turn enables them to ask better questions to solve problems.
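The "what-if" simulation style described above can be sketched with a Gaussian Naïve Bayes classifier; the dataset file, column names, and strength bins below are assumptions for illustration, not the article's setup.

```python
# Minimal sketch: Naive Bayes "what-if" on concrete mixtures.
import pandas as pd
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("concrete.csv")   # hypothetical mixtures dataset
X = df[["cement", "water", "coarse_agg", "fine_agg", "age_days"]]
y = pd.cut(df["strength_mpa"], bins=[0, 20, 40, 100],
           labels=["low", "medium", "high"])  # discretize for classification

model = GaussianNB().fit(X, y)

# What-if: hold the rest of the mix constant and sweep the water content
for water in (140, 160, 180, 200):
    mix = pd.DataFrame([[350, water, 1000, 800, 28]], columns=X.columns)
    print(water, "kg/m3 water ->", model.predict(mix)[0])
```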
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Swati, Jitendra Khatti e Kamaldeep Singh Grover. "Computation of Compressive Strength of GGBS Mixed Concrete using Machine Learning". International Journal of Recent Technology and Engineering (IJRTE) 10, n.º 4 (30 de novembro de 2021): 241–50. http://dx.doi.org/10.35940/ijrte.d6631.1110421.

Texto completo da fonte
Resumo:
Concrete is a composite material formed from cement, water, and aggregate, and it is an important material for any civil engineering project. Several concretes are produced to meet functional requirements using waste materials or by-products. Many researchers have reported that these waste materials or by-products enhance concrete properties, but the laboratory procedures for determining those properties are time-consuming. Therefore, numerous researchers have used statistical and artificial intelligence methods for predicting concrete properties. In the present research work, the compressive strength of GGBS mixed concrete is computed using AI technologies, namely regression analysis (RA), Support Vector Machine (SVM), Decision Tree (DT), and Artificial Neural Networks (ANNs). The cement content (CC), C/F ratio, w/c ratio, GGBS content (in kg and %), admixture, and age (days) are selected as input parameters to construct the RA, SVM, DT, and ANN models for computing the compressive strength of GGBS mixed concrete. The CS_MLR, Link_CS_SVM, 20LF_CS_DT, and GDM_CS_ANN models are identified as the best architectural AI models based on model performance, and the performance of these best architectural models is then compared to determine the optimum-performance model. The correlation coefficient is computed for the input and output variables; the compressive strength of GGBS mixed concrete is most strongly influenced by age (curing days). A comparison with models available in the literature shows that the optimum-performance AI model outperformed the published models.
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Yahaya, Bashir Hussein, Abdullahi Alhassan Ahmed e Bibiana Ometere Anikajogun. "Economic Sustainability of Building and Construction Projects Based on Artificial Intelligence Techniques". Asian Review of Civil Engineering 12, n.º 1 (22 de junho de 2023): 34–40. http://dx.doi.org/10.51983/tarce-2023.12.1.3677.

Texto completo da fonte
Resumo:
Artificial intelligence (AI) has been shown to be an effective replacement for conventional modelling approaches. AI is a subfield of computer science that develops software and tools that mimic human intelligence, and it offers advantages over traditional methods for handling ambiguous circumstances. In addition, AI-based solutions can successfully replace testing when identifying engineering design parameters, saving considerable time and resources. AI can also increase computational efficiency, decrease error rates, and speed up decision-making. Recently, there has been much interest in machine learning (ML), a new area of cutting-edge intelligent methods for use in structural engineering. Consequently, this work presents a study on the economic management of building and construction projects based on ML techniques. It begins with an overview of the value of applying AI techniques in the building and construction industry. Empirical data from a case study are then used to analyze the prediction of reinforced concrete's compressive strength while taking cost into account. The findings showed that the support vector regression (SVR) and k-Nearest Neighbour (KNN) intelligence techniques are helpful in the construction business for controlling concrete strength with sustainable cost reduction.
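A minimal sketch of the SVR versus KNN comparison reported as helpful is given below; the dataset columns and hyperparameters are assumptions, and the scores depend entirely on the data used.

```python
# Minimal sketch: compare SVR and KNN for compressive-strength prediction.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("mix_designs.csv")   # hypothetical cost-annotated mix data
X = df[["cement", "water", "aggregate", "admixture", "age_days", "unit_cost"]]
y = df["compressive_strength_mpa"]

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f} (+/- {r2.std():.3f})")
```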
Estilos ABNT, Harvard, Vancouver, APA, etc.