To view the other types of publications on this topic, follow this link: Explicable Machine Learning.

Journal articles on the topic "Explicable Machine Learning"


Familiarize yourself with the top 44 journal articles for research on the topic "Explicable Machine Learning".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

FOMICHEVA, S. G. "INFLUENCE OF ATTACK INDICATOR RANKING ON THE QUALITY OF MACHINE LEARNING MODELS IN AGENT-BASED CONTINUOUS AUTHENTICATION SYSTEMS". T-Comm 17, No. 8 (2023): 45–55. http://dx.doi.org/10.36724/2072-8735-2023-17-8-45-55.

Annotation:
Security agents of authentication systems function in automatic mode and control the behavior of subjects, analyzing their dynamics using both traditional (statistical) methods and methods based on machine learning. The expansion of the cybersecurity fabric paradigm motivates the improvement of adaptive, explicable methods and machine learning models. Purpose: the purpose of the study was to assess the impact of ranking methods for compromise indicators, attack indicators, and other features on the quality of detecting network traffic anomalies as part of a security fabric with continuous authentication. Probabilistic and explicable methods of binary classification were used, as well as nonlinear regressors based on decision trees. The results of the study showed that preliminary ranking methods increase the F1-Score and processing speed of supervised ML models by an average of 7%. In unsupervised models, preliminary ranking does not significantly affect training time, but increases the F1-Score by 2-10%, which justifies its expediency in agent-based systems of continuous authentication. Practical relevance: the models developed in the work substantiate the feasibility of mechanisms for preliminary ranking of compromise and attack indicators, creating pattern prototypes of attack indicators in automatic mode. In general, unsupervised models are not as accurate as supervised ones, which motivates the improvement of either explicable unsupervised approaches to detecting anomalies or approaches based on reinforcement learning.
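The effect of ranking indicators before training, as described in the abstract above, can be illustrated with a small hedged sketch (not taken from the paper): features are scored with mutual information, only the top-ranked ones are kept, and the F1-score of a tree-based classifier is compared with and without the ranking step. The synthetic data set and the cut-off of 20 features are assumptions made purely for illustration.

```python
# Minimal sketch of indicator ranking before classification (illustrative only;
# the data set, the ranking criterion and the cut-off are assumptions, not the paper's setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=60, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def fit_and_score(X_train, X_test):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_tr)
    return f1_score(y_te, clf.predict(X_test))

# Baseline: all indicators.
f1_all = fit_and_score(X_tr, X_te)

# Rank indicators by mutual information and keep the top 20.
scores = mutual_info_classif(X_tr, y_tr, random_state=0)
top = np.argsort(scores)[::-1][:20]
f1_ranked = fit_and_score(X_tr[:, top], X_te[:, top])

print(f"F1 with all features:    {f1_all:.3f}")
print(f"F1 with ranked features: {f1_ranked:.3f}")
```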
2

Abrahamsen, Nils-Gunnar Birkeland, Emil Nylén-Forthun, Mats Møller, Petter Eilif de Lange, and Morten Risstad. "Financial Distress Prediction in the Nordics: Early Warnings from Machine Learning Models". Journal of Risk and Financial Management 17, No. 10 (27.09.2024): 432. http://dx.doi.org/10.3390/jrfm17100432.

Annotation:
This paper proposes an explicable early warning machine learning model for predicting financial distress, which generalizes across listed Nordic corporations. We develop a novel dataset, covering the period from Q1 2001 to Q2 2022, in which we combine idiosyncratic quarterly financial statement data, information from financial markets, and indicators of macroeconomic trends. The preferred LightGBM model, whose features are selected by applying explainable artificial intelligence, outperforms the benchmark models by a notable margin across evaluation metrics. We find that features related to liquidity, solvency, and size are highly important indicators of financial health and thus crucial variables for forecasting financial distress. Furthermore, we show that explicitly accounting for seasonality, in combination with entity, market, and macro information, improves model performance.
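As a loose illustration of the workflow sketched above (a gradient-boosting classifier whose features are screened with an explainability method), the following hedged sketch trains a LightGBM model and ranks features by mean absolute SHAP value. The data, the cut-off of ten features, and the model settings are assumptions, not those of the cited study.

```python
# Hedged sketch: LightGBM + SHAP-based feature screening (synthetic data, not the paper's).
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=1)
model = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=1).fit(X, y)

# Mean |SHAP| per feature as an importance score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):          # binary classification may return one array per class
    shap_values = shap_values[1]
importance = np.abs(shap_values).mean(axis=0)

# Keep, say, the ten most influential features and refit the final model on them.
selected = np.argsort(importance)[::-1][:10]
final_model = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=1).fit(X[:, selected], y)
```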
3

Fomicheva, Svetlana, and Sergey Bezzateev. "Modification of the Berlekamp-Massey algorithm for explicable knowledge extraction by SIEM-agents". Journal of Physics: Conference Series 2373, No. 5 (01.12.2022): 052033. http://dx.doi.org/10.1088/1742-6596/2373/5/052033.

Annotation:
The article discusses the problems of applying self-explanatory machine learning models in Security Information Event Management systems. We prove the possibility of using information processing methods in finite fields for extracting knowledge from security event repositories by mobile agents. Based on the isomorphism of fuzzy production and fuzzy relational knowledge bases, a constructive method for identifying patterns based on the modified Berlekamp-Massey algorithm is proposed. This allows security agents, while solving their typical cryptanalysis tasks, to use the existing built-in tools to extract knowledge and detect previously unknown anomalies. Experimental characteristics of the application of the proposed algorithm are given.
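For readers unfamiliar with the algorithm named in the abstract, here is a plain textbook Berlekamp-Massey routine over GF(2) that recovers the shortest LFSR generating a bit sequence; it is a generic sketch, not the modified variant the authors propose.

```python
def berlekamp_massey_gf2(bits):
    """Return (L, taps) of the shortest LFSR over GF(2) that generates `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the LFSR prediction and the observed bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]

# Example: an m-sequence of the polynomial 1 + x + x^3 has linear complexity 3.
print(berlekamp_massey_gf2([1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]))
# -> (3, [1, 1, 0, 1]), i.e. C(x) = 1 + x + x^3
```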
4

Alharbi, Abdulrahman, Ivan Petrunin, and Dimitrios Panagiotakopoulos. "Assuring Safe and Efficient Operation of UAV Using Explainable Machine Learning". Drones 7, No. 5 (19.05.2023): 327. http://dx.doi.org/10.3390/drones7050327.

Annotation:
The accurate estimation of airspace capacity in unmanned traffic management (UTM) operations is critical for a safe, efficient, and equitable allocation of airspace system resources. While conventional approaches for assessing airspace complexity certainly exist, these methods fail to capture true airspace capacity, since they fail to address several important variables (such as weather). Meanwhile, existing AI-based decision-support systems evince opacity and inexplicability, and this restricts their practical application. With these challenges in mind, the authors propose a solution tailored to the needs of demand and capacity management (DCM) services. This solution, by deploying a synthesized fuzzy rule-based model and deep learning, will address the trade-off between explicability and performance. In doing so, it will generate an intelligent system that will be explicable and reasonably comprehensible. The results show that this advisory system will be able to indicate the most appropriate regions for unmanned aerial vehicle (UAV) operation, and it will also increase UTM airspace availability by more than 23%. Moreover, the proposed system demonstrates a maximum capacity gain of 65% and a minimum safety gain of 35%, while possessing an explainability attribute of 70%. This will assist UTM authorities through more effective airspace capacity estimation and the formulation of new operational regulations and performance requirements.
5

Fujii, Keisuke. "Understanding of social behaviour in human collective motions with non-trivial rule of control". Impact 2019, No. 10 (30.12.2019): 84–86. http://dx.doi.org/10.21820/23987073.2019.10.84.

Annotation:
The coordination and movement of people in large crowds, during sports games or when socialising, seems readily explicable. Sometimes this occurs according to specific rules or instructions, such as in a sport or game; at other times the motivations for movement may be more focused around an individual's needs or fears. Over the last decade, the computational ability to identify and track a given individual in video footage has increased. The conventional methods of gathering and interpreting data in biology rely on fitting statistical results to particular models or hypotheses. However, data from tracking movements in social groups or team sports are so complex that the vast amounts of information and highly varied patterns cannot easily be analysed. The author is an expert in human behaviour and machine learning who is based at the Graduate School of Informatics at Nagoya University. His challenge is to bridge the gap between rule-based theoretical modelling and data-driven modelling. He is employing machine learning techniques to attempt to solve this problem as a visiting scientist at the RIKEN Center for Advanced Intelligence Project.
6

Wang, Chen, Lin Liu, Chengcheng Xu, and Weitao Lv. "Predicting Future Driving Risk of Crash-Involved Drivers Based on a Systematic Machine Learning Framework". International Journal of Environmental Research and Public Health 16, No. 3 (25.01.2019): 334. http://dx.doi.org/10.3390/ijerph16030334.

Annotation:
The objective of this paper is to predict the future driving risk of crash-involved drivers in Kunshan, China. A systematic machine learning framework is proposed to deal with three critical technical issues: 1. defining driving risk; 2. developing risky driving factors; 3. developing a reliable and explicable machine learning model. High-risk (HR) and low-risk (LR) drivers were defined by five different scenarios. A number of features were extracted from seven-year crash/violation records. Drivers' two-year prior crash/violation information was used to predict their driving risk in the subsequent two years. Using a one-year rolling time window, prediction models were developed for four consecutive time periods: 2013–2014, 2014–2015, 2015–2016, and 2016–2017. Four tree-based ensemble learning techniques were attempted, including random forest (RF), AdaBoost with decision tree, gradient boosting decision tree (GBDT), and extreme gradient boosting decision tree (XGBoost). A temporal transferability test and a follow-up study were applied to validate the trained models. The best scenario defining driving risk was multi-dimensional, encompassing crash recurrence, severity, and fault commitment. GBDT appeared to be the best model choice across all time periods, with an acceptable average precision (AP) of 0.68 on the most recent datasets (i.e., 2016–2017). Seven of the nine top features were related to risky driving behaviors, which presented non-linear relationships with driving risk. Model transferability held within relatively short time intervals (1–2 years). Appropriate risk definition, complicated violation/crash features, and advanced machine learning techniques need to be considered for the risk prediction task. The proposed machine learning approach is promising, so that safety interventions can be launched more effectively.
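A stripped-down version of the evaluation pattern described above (train on one period, test on the next, score with average precision) might look like the following; the feature matrix and period labels are placeholders rather than the Kunshan crash/violation data.

```python
# Hedged sketch of a rolling temporal evaluation with GBDT and average precision
# (synthetic data standing in for the crash/violation features).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
periods = {p: (rng.normal(size=(1500, 12)), rng.integers(0, 2, 1500)) for p in range(2013, 2018)}

for train_year in range(2013, 2017):
    X_tr, y_tr = periods[train_year]
    X_te, y_te = periods[train_year + 1]          # one-year rolling window
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    ap = average_precision_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"train {train_year} -> test {train_year + 1}: AP = {ap:.2f}")
```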
7

Valladares-Rodríguez, Sonia, Manuel J. Fernández-Iglesias, Luis E. Anido-Rifón, and Moisés Pacheco-Lorenzo. "Evaluation of the Predictive Ability and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment". Electronics 11, No. 21 (22.10.2022): 3424. http://dx.doi.org/10.3390/electronics11213424.

Annotation:
The high prevalence of Alzheimer-type dementia and the limitations of traditional neuropsychological tests motivate the introduction of new cognitive assessment methods. We discuss the validation of an all-digital, ecological and non-intrusive e-health application for the early detection of cognitive impairment, based on artificial intelligence for patient classification, and more specifically on machine learning algorithms. To evaluate the discrimination power of this application, a cross-sectional pilot study was carried out involving 30 subjects: 10 healthy control subjects (mean age: 75.62 years), 14 individuals with mild cognitive impairment (mean age: 81.24 years) and 6 early-stage Alzheimer's patients (mean age: 80.44 years). The study was carried out in two separate sessions in November 2021 and January 2022. All participants completed the study, and no concerns were raised about the acceptability of the test. Analysis including socio-demographics and game data supports the prediction of participants' cognitive status using machine learning algorithms. According to the performance metrics computed, the best classification results are obtained with a Multilayer Perceptron classifier, Support Vector Machines and Random Forest, respectively, with weighted recall values >= 0.9784 ± 0.0265 and F1-score = 0.9764 ± 0.0291. Furthermore, thanks to hyper-parameter optimization, false negative rates were dramatically reduced. SHapley Additive exPlanations (SHAP), applied as an eXplainable AI (XAI) method, made it possible to visually and quantitatively evaluate the importance of the different features in the final classification. This is a relevant step ahead towards the use of machine learning and gamification to detect cognitive impairment early. In addition, this tool was designed to support self-administration, which could be a relevant aspect in confinement situations with limited access to health professionals. However, further research is required to identify patterns that may help to predict or estimate future cognitive damage and normative data.
8

Hermitaño Castro, Juler Anderson. "Aplicación de Machine Learning en la Gestión de Riesgo de Crédito Financiero: Una revisión sistemática". Interfases, No. 015 (11.08.2022): e5898. http://dx.doi.org/10.26439/interfases2022.n015.5898.

Annotation:
Banking risk management can be divided into the following types: credit risk, market risk, operational risk, and liquidity risk, with credit risk being the most important type for the financial sector. This article aims to show the advantages and disadvantages of implementing machine learning algorithms in credit risk management and, on that basis, to show which algorithm performs best, while also pointing out the drawbacks each may present. To achieve this objective, a systematic literature review was conducted using the PICo search strategy, and 12 articles were selected. The results reflect that credit risk is the most relevant risk type. In addition, some machine learning algorithms have already begun to be implemented; however, some exhibit notable disadvantages, such as the inability to explain how the model works, and are therefore considered black boxes. This hinders their adoption, because regulatory bodies require a model to be explainable, interpretable, and transparent. In response, hybrid models have been built that combine algorithms that are not easy to explain with traditional models such as logistic regression. Using methods such as SHapley Additive exPlanations (SHAP), which help with the interpretation of these models, is also presented as an alternative.
9

Umar, Muhammad, Ashish Shiwlani, Fiza Saeed, Ahsan Ahmad, Masoomi Hifazat Ali Shah, and Anoosha Tahir. "Role of Deep Learning in Diagnosis, Treatment, and Prognosis of Oncological Conditions". International Journal of Membrane Science and Technology 10, No. 5 (15.11.2023): 1059–71. http://dx.doi.org/10.15379/ijmst.v10i5.3695.

Annotation:
Deep learning, a branch of artificial intelligence, excavates massive data sets for patterns and predictions using a machine learning method known as artificial neural networks. Research on the potential applications of deep learning in understanding the intricate biology of cancer has intensified due to its increasing applications among healthcare domains and the accessibility of extensively characterized cancer datasets. Although preliminary findings are encouraging, this is a fast-moving sector where novel insights into deep learning and cancer biology are being discovered. We give a framework for new deep learning methods and their applications in oncology in this review. Our attention was directed towards its applications for DNA methylation, transcriptomic, and genomic data, along with histopathological inferences. We offer insights into how these disparate data sets can be combined for the creation of decision support systems. Specific instances of learning applications in cancer prognosis, diagnosis, and therapy planning are presented. Additionally, the present barriers and difficulties in deep learning applications in the field of precision oncology, such as the dearth of phenotypical data and the requirement for more explicable deep learning techniques have been elaborated. We wrap up by talking about ways to get beyond the existing challenges so that deep learning can be used in healthcare settings in the future.
10

Valdivieso-Ros, Carmen, Francisco Alonso-Sarria, and Francisco Gomariz-Castillo. "Effect of the Synergetic Use of Sentinel-1, Sentinel-2, LiDAR and Derived Data in Land Cover Classification of a Semiarid Mediterranean Area Using Machine Learning Algorithms". Remote Sensing 15, No. 2 (05.01.2023): 312. http://dx.doi.org/10.3390/rs15020312.

Annotation:
Land cover classification in semiarid areas is a difficult task that has been tackled using different strategies, such as the use of normalized indices, texture metrics, and the combination of images from different dates or different sensors. In this paper we present the results of an experiment using three sensors (Sentinel-1 SAR, Sentinel-2 MSI and LiDAR), four dates and different normalized indices and texture metrics to classify a semiarid area. Three machine learning algorithms were used: Random Forest, Support Vector Machines and Multilayer Perceptron; Maximum Likelihood was used as a baseline classifier. The synergetic use of all these sources resulted in a significant increase in accuracy, Random Forest being the model reaching the highest accuracy. However, the large amount of features (126) advises the use of feature selection to reduce this figure. After using Variance Inflation Factor and Random Forest feature importance, the amount of features was reduced to 62. The final overall accuracy obtained was 0.91 ± 0.005 (α = 0.05) and kappa index 0.898 ± 0.006 (α = 0.05). Most of the observed confusions are easily explicable and do not represent a significant difference in agronomic terms.
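The two-stage feature reduction mentioned above (Variance Inflation Factor followed by Random Forest importance) can be sketched roughly as below; the thresholds and the synthetic data frame are assumptions for illustration, not the values or imagery used in the study.

```python
# Rough sketch of VIF filtering followed by Random Forest importance ranking.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(df, threshold=10.0):
    """Iteratively drop the column with the highest VIF until all VIFs fall below the threshold."""
    cols = list(df.columns)
    while True:
        vifs = [variance_inflation_factor(df[cols].values, i) for i in range(len(cols))]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold or len(cols) <= 2:
            return cols
        cols.pop(worst)

# X: band/index/texture features, y: land-cover labels (placeholders, not Sentinel/LiDAR data).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 20)), columns=[f"f{i}" for i in range(20)])
y = rng.integers(0, 5, 500)

kept = drop_high_vif(X)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[kept], y)
ranked = sorted(zip(kept, rf.feature_importances_), key=lambda t: t[1], reverse=True)
print(ranked[:10])   # keep the most important of the low-collinearity features
```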
11

Pai, Kai-Chih, Wen-Cheng Chao, Yu-Len Huang, Ruey-Kai Sheu, Lun-Chi Chen, Min-Shian Wang, Shau-Hung Lin, Yu-Yi Yu, Chieh-Liang Wu, and Ming-Cheng Chan. "Artificial intelligence–aided diagnosis model for acute respiratory distress syndrome combining clinical data and chest radiographs". DIGITAL HEALTH 8 (January 2022): 205520762211203. http://dx.doi.org/10.1177/20552076221120317.

Annotation:
Objective: The aim of this study was to develop an artificial intelligence–based model to detect the presence of acute respiratory distress syndrome (ARDS) using clinical data and chest X-ray (CXR) data. Method: The transfer learning method was used to train a convolutional neural network (CNN) model with an external image dataset to extract the image features. Then, the last layer of the model was fine-tuned to determine the probability of ARDS. The clinical data were trained using three machine learning algorithms—eXtreme Gradient Boosting (XGB), random forest (RF), and logistic regression (LR)—to estimate the probability of ARDS. Finally, ensemble-weighted methods were proposed that combined the image model and the clinical data model to estimate the probability of ARDS. An analysis of the importance of clinical features was performed to explore the most important features in detecting ARDS. A gradient-weighted class activation mapping (Grad-CAM) model was used to explain what our CNN sees and understands when making a decision. Results: The proposed ensemble-weighted methods improved the performances of the ARDS classifiers (XGB + CNN, area under the curve [AUC] = 0.916; RF + CNN, AUC = 0.920; LR + CNN, AUC = 0.920; XGB + RF + LR + CNN, AUC = 0.925). In addition, the ML model using clinical data presents the top 15 important features for identifying the risk factors of ARDS. Conclusion: This study developed combined machine learning models with clinical data and CXR images to detect ARDS. According to the results of the Shapley Additive exPlanations values and the Grad-CAM techniques, an explicable ARDS diagnosis model is suitable for a real-life scenario.
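The ensemble-weighting idea described above (combining the probability from an image model with the probability from a clinical-data model) reduces to a simple weighted average; the sketch below assumes the two sets of predicted probabilities (p_img, p_clin) already exist and picks the mixing weight on a validation set by AUC. All names and the validation data are placeholders, not the study's models.

```python
# Minimal sketch of an ensemble-weighted combination of an image model and a clinical-data model.
# p_img and p_clin stand in for predicted ARDS probabilities from the two separate models.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 300)                               # validation labels (placeholder)
p_img = np.clip(y_val * 0.6 + rng.random(300) * 0.5, 0, 1)    # stand-in CNN probabilities
p_clin = np.clip(y_val * 0.5 + rng.random(300) * 0.6, 0, 1)   # stand-in XGB/RF/LR probabilities

best_w, best_auc = 0.0, 0.0
for w in np.linspace(0, 1, 21):                               # grid-search the mixing weight
    auc = roc_auc_score(y_val, w * p_img + (1 - w) * p_clin)
    if auc > best_auc:
        best_w, best_auc = w, auc

print(f"chosen weight for the image model: {best_w:.2f} (val AUC = {best_auc:.3f})")
```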
12

Zhao, Ziting, Tong Liu, and Xudong Zhao. "Variable Selection from Image Texture Feature for Automatic Classification of Concrete Surface Voids". Computational Intelligence and Neuroscience 2021 (06.03.2021): 1–10. http://dx.doi.org/10.1155/2021/5538573.

Annotation:
Machine learning plays an important role in computational intelligence and has been widely used in many engineering fields. Surface voids or bugholes frequently appearing on concrete surfaces after the casting process make the corresponding manual inspection time-consuming, costly, labor-intensive, and inconsistent. In order to make a better inspection of the concrete surface, automatic classification of concrete bugholes is needed. In this paper, a variable selection strategy is proposed for pursuing feature interpretability, together with an automatic ensemble classification designed for getting a better accuracy of the bughole classification. A texture feature deriving from the Gabor filter and gray-level run lengths is extracted from concrete surface images. Interpretable variables, which are also the components of the feature, are selected according to a presented cumulative voting strategy. An ensemble classifier with its base classifier automatically assigned is provided to detect whether a surface void exists in an image or not. Experimental results on 1000 image samples indicate the effectiveness of our method, with comparable prediction accuracy and an explicable model.
13

Rudas, Imre J. "Intelligent Engineering Systems". Journal of Advanced Computational Intelligence and Intelligent Informatics 4, No. 4 (20.07.2000): 237–39. http://dx.doi.org/10.20965/jaciii.2000.p0237.

Annotation:
The "information revolution" of our time affects our entire generation. While a vision of the "Information Society," with its financial, legal, business, privacy, and other aspects has emerged in the past few years, the "traditional scene" of information technology, that is, industrial automation, maintained its significance as a field of unceasing development. Since the old-fashioned concept of "Hard Automation" applicable only to industrial processes of fixed, repetitive nature and manufacturing large batches of the same product 1) was thrust to the background by keen market competition, the key element of this development remained the improvement of "Machine Intelligence". In spite of the fact that L. A. Zadeh already introduced the concept of "Machine Intelligence Quotient" in 1996 to measure machine intelligence 2), this term remained more or less mysterious in meaning, best explicable on the basis of practical needs. The weak point of hard automation is that the system configuration and operations are fixed and cannot be changed without incurring considerable cost and downtime. Mainly it can be used in applications that call for fast and accurate operation in large batch production. Whenever a variety of products must be manufactured in small batches and consequently the work-cells of a production line should be quickly reconfigured to accommodate a change in products, hard automation becomes inefficient and fails due to economic reasons. In these cases, new, more flexible ways of automation, so-called "Soft Automation," are expedient and suitable. The most important "ingredient" of soft automation is its adaptive ability for efficiently coping with changing, unexpected or previously unknown conditions, and working with a high degree of uncertainty and imprecision, since in practice increasing precision can be very costly. This adaptation must be realized without or within limited human interference: this is one essential component of machine intelligence. Another important factor is that engineering practice often must deal with complex systems of multiple variable and multiple parameter models almost always with strong nonlinear coupling. Conventional analysis-based approaches for describing and predicting the behavior of such systems in many cases are doomed to failure from the outset, even in the phase of the construction of a more or less appropriate mathematical model. These approaches normally are too categorical in the sense that in the name of "modeling accuracy," they try to describe all structural details of the real physical system to be modeled. This significantly increases the intricacy of the model and may result in a huge computational burden without considerably improving precision. The best paradigm exemplifying this situation may be the classic perturbation theory: the less significant the achievable correction is, the more work must be invested for obtaining it. Another important component of machine intelligence is a kind of "structural uniformity" giving room and possibility to model arbitrary particular details a priori not specified and unknown. This idea is similar to that of the ready-to-wear industry, whose products can later be slightly modified in contrast to the custom-tailors' made-to-measure creations aiming at maximum accuracy from the beginning. Machines carry out these later corrections automatically. This "learning ability" is another key element of machine intelligence. To realize the above philosophy in a mathematically correct way, L. A. Zadeh separated Hard Computing from Soft Computing.
This revelation immediately resulted in distinguishing between two essential complementary branches of machine intelligence: Hard Computing based Artificial Intelligence and Soft Computing based Computational Intelligence. In the last decades, it became generally known that fuzzy logic, artificial neural networks, and probabilistic reasoning based Soft Computing is a fruitful orientation in designing intelligent systems. Moreover, it became generally accepted that soft computing rather than hard computing should be viewed as the foundation of real machine intelligence via exploiting the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality. Further research in the past decade confirmed the view that typical components of present soft computing such as fuzzy logic, neurocomputing, evolutionary computation and probabilistic reasoning are complementary and best results can be obtained by their combined application. These complementary branches of Machine Intelligence, Artificial Intelligence and Computational Intelligence, serve as the basis of Intelligent Engineering Systems. The huge number of scientific results published in journals and conference proceedings worldwide substantiates this statement. Three years ago, a new series of conferences in this direction was initiated and launched with the support of several organizations including the IEEE Industrial Electronics Society and IEEE Hungary Section in technical cooperation with the IEEE Robotics & Automation Society. The first event of the series, hosted by Bánki Donát Polytechnic, Budapest, Hungary, was called the "1997 IEEE International Conference on Intelligent Engineering Systems" (INES'97). The Technical University of Vienna, Austria hosted the next event of the series in 1998, followed by INES'99 held by the Technical University of Kosice, Slovakia. The present special issue consists of the extended and revised versions of the most interesting papers selected from the presentations at this conference. The papers exemplify recent development trends of intelligent engineering systems. The first paper pertains to the wider class of neural network applications. It is an interesting report of applying a special Adaptive Resonance Theory network for identifying objects in multispectral images. It is called "Extended Gaussian ARTMAP". The authors conclude that this network is especially advantageous for classification of large, low dimensional data sets. The second paper's subject belongs to the realm of fuzzy systems. It reports successful application of fundamental similarity relations in diagnostic systems. As an example, failure detection of a rolling-mill transmission is considered. The next paper represents the AI branch of machine intelligence. The paper is a report on an EU-funded project focusing on the storage of knowledge in a corporate organizational memory used for storing and retrieving knowledge chunks. The flexible structure of the system makes it possible to adapt it to different SMEs by using company-specific conceptual terms rather than traditional keywords. The fourth selected paper's contribution is to the field of knowledge discovery. For this purpose, in the first step, cluster analysis is done. The method is found to be helpful whenever little or no information on the characteristics of a given data set is available.
The next paper approaches scheduling problems by the application of multiagent systems (MAS). It is concluded that due to the great number of interactions between components, MAS seems to be well suited for manufacturing scheduling problems. The sixth selected paper's topic is emerging intelligent technologies in computer-aided engineering. It discusses key issues of present-day CAD/CAM technology. The conclusion is that further development of CAD/CAM methods probably will serve companies on the competitive edge. The seventh paper of the selection is a report on seeking a special tradeoff between classical analytical modeling and traditional soft computing. It nonconventionally integrates uniform structures obtained from Lagrangian Classical Mechanics with other simple elements of machine intelligence such as saturated sigmoid transition functions borrowed from neural nets, fuzzy rules with classical PID/ST, and a simplified version of regression analysis. It is concluded that these different components can successfully cooperate in adaptive robot control. The last paper focuses on the complexity problem of fuzzy and neural network approaches. A fuzzy rule base, be it generated from expert operators or by some learning or identification schemes, may contain redundant, weakly contributing, or outright inconsistent components. Moreover, in pursuit of good approximation, one may be tempted to overly assign the number of antecedent sets, thereby resulting in large fuzzy rule bases and many problems in computation time and storage space. Engineers using neural networks have to face the same complexity problem with the number of neurons and layers. A fuzzy rule base and neural network design, hence, have two important objectives. One is to achieve a good approximation. The other is to reduce the complexity. The main difficulty is that these two objectives are contradictory. A formal approach to extracting the more pertinent elements of a given rule set or set of neurons is, hence, highly desirable. The last paper is an attempt in this direction. References: 1) C. W. De Silva. Automation Intelligence. Engineering Application of Artificial Intelligence. Vol. 7, No. 5, 471-477 (1994). 2) L. A. Zadeh. Fuzzy Logic, Neural Networks and Soft Computing. NATO Advanced Studies Institute on Soft Computing and Its Application. Antalya, Turkey (1996). 3) L. A. Zadeh. Berkeley Initiative in Soft Computing. IEEE Industrial Electronics Society Newsletter, 41 (3), 8-10 (1994).
14

Fazelpour, Sina, and Maria De-Arteaga. "Diversity in sociotechnical machine learning systems". Big Data & Society 9, No. 1 (January 2022): 205395172210820. http://dx.doi.org/10.1177/20539517221082027.

Annotation:
There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct rationales that should inform how we measure diversity in a given context. Similarly, the lack of specificity about the precise mechanisms underpinning diversity’s potential benefits can result in uninformative generalities, invalid experimental designs, and illicit interpretations of findings. In this work, we draw on research in philosophy, psychology, and social and organizational sciences to make three contributions: First, we introduce a taxonomy of different diversity concepts from philosophy of science, and explicate the distinct epistemic and political rationales underlying these concepts. Second, we provide an overview of mechanisms by which diversity can benefit group performance. Third, we situate these taxonomies of concepts and mechanisms in the lifecycle of sociotechnical machine learning systems and make a case for their usefulness in fair and accountable machine learning. We do so by illustrating how they clarify the discourse around diversity in the context of machine learning systems, promote the formulation of more precise research questions about diversity’s impact, and provide conceptual tools to further advance research and practice.
15

Saladi, Saritha, Yepuganti Karuna, Srinivas Koppu, Gudheti Ramachandra Reddy, Senthilkumar Mohan, Saurav Mallik, and Hong Qin. "Segmentation and Analysis Emphasizing Neonatal MRI Brain Images Using Machine Learning Techniques". Mathematics 11, No. 2 (05.01.2023): 285. http://dx.doi.org/10.3390/math11020285.

Annotation:
MRI scanning has shown significant growth in the detection of brain tumors in the recent decade among various methods such as MRA, X-ray, CT, PET, SPECT, etc. Brain tumor identification requires high exactness because a minor error can be life-threatening. Brain tumor detection remains a challenging job in medical image processing. This paper aims to explicate a method that is more precise and accurate in brain tumor detection and focuses on tumors in neonatal brains. The infant brain varies from the adult brain in some aspects, and a proper preprocessing technique proves fruitful in avoiding miscues in the results. This paper is divided into two parts: In the first half, preprocessing was accomplished using HE, CLAHE, and BPDFHE enhancement techniques. An analysis follows these methods to determine the best one based on performance metrics, i.e., MSE, PSNR, RMSE, and AMBE. The second half deals with the segmentation process. We propose a novel ARKFCM to use for segmentation. Finally, the trends in the performance metrics (dice similarity and Jaccard similarity) as well as the segmentation results are discussed in comparison with the conventional FCM method.
16

Munk, Anders Kristian, Asger Gehrt Olesen, and Mathieu Jacomy. "The Thick Machine: Anthropological AI between explanation and explication". Big Data & Society 9, No. 1 (January 2022): 205395172110698. http://dx.doi.org/10.1177/20539517211069891.

Annotation:
According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a neural network to predict the emoji reaction associated with a comment and asked a group of human players to compete against the machine. We show that a) the machine can reach the same (poor) accuracy as the players (51%), b) it fails in roughly the same ways as the players, and c) easily predictable emoji reactions tend to reflect unambiguous situations where interpretation is easy. We therefore repurpose the failures of the neural network to point us to deeper and more ambiguous situations where interpretation is hard and explication becomes both necessary and interesting. We use this experiment as a point of departure for discussing how experiences from anthropology, and in particular the tension between formalist ethnoscience and interpretive thick description, might contribute to debates about explainable AI.
17

Parker, J. Clint. "Below the Surface of Clinical Ethics". Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine 48, No. 1 (01.02.2023): 1–11. http://dx.doi.org/10.1093/jmp/jhac041.

Annotation:
Often lurking below the surface of many clinical ethical issues are questions regarding background metaphysical, epistemological, meta-ethical, and political beliefs. In this issue, authors critically examine the effects of background beliefs on conscientious objection, explore ethical issues through the lenses of particular theoretical approaches like pragmatism and intersectional theory, rigorously explore the basic concepts at play within the patient safety movement, offer new theoretical approaches to old problems involving decision making for patients with dementia, explicate and explore the problems and promises of machine learning in medicine, and offer us a non-rights-based argument for the just distribution of healthcare resources.
18

Tay, Louis, Sang Eun Woo, Louis Hickman, and Rachel M. Saef. "Psychometric and Validity Issues in Machine Learning Approaches to Personality Assessment: A Focus on Social Media Text Mining". European Journal of Personality 34, No. 5 (September 2020): 826–44. http://dx.doi.org/10.1002/per.2290.

Annotation:
In the age of big data, substantial research is now moving toward using digital footprints like social media text data to assess personality. Nevertheless, there are concerns and questions regarding the psychometric and validity evidence of such approaches. We seek to address this issue by focusing on social media text data and (i) conducting a review of psychometric validation efforts in social media text mining (SMTM) for personality assessment and discussing additional work that needs to be done; (ii) considering additional validity issues from the standpoint of reference (i.e. ‘ground truth’) and causality (i.e. how personality determines variations in scores derived from SMTM); and (iii) discussing the unique issues of generalizability when validating SMTM for personality assessment across different social media platforms and populations. In doing so, we explicate the key validity and validation issues that need to be considered as a field to advance SMTM for personality assessment, and, more generally, machine learning personality assessment methods. © 2020 European Association of Personality Psychology
19

Hussain, Iqram, Rafsan Jany, Richard Boyer, AKM Azad, Salem A. Alyami, Se Jin Park, Md Mehedi Hasan, and Md Azam Hossain. "An Explainable EEG-Based Human Activity Recognition Model Using Machine-Learning Approach and LIME". Sensors 23, No. 17 (27.08.2023): 7452. http://dx.doi.org/10.3390/s23177452.

Annotation:
Electroencephalography (EEG) is a non-invasive method employed to discern human behaviors by monitoring the neurological responses during cognitive and motor tasks. Machine learning (ML) represents a promising tool for the recognition of human activities (HAR), and eXplainable artificial intelligence (XAI) can elucidate the role of EEG features in ML-based HAR models. The primary objective of this investigation is to investigate the feasibility of an EEG-based ML model for categorizing everyday activities, such as resting, motor, and cognitive tasks, and interpreting models clinically through XAI techniques to explicate the EEG features that contribute the most to different HAR states. The study involved an examination of 75 healthy individuals with no prior diagnosis of neurological disorders. EEG recordings were obtained during the resting state, as well as two motor control states (walking and working tasks), and a cognition state (reading task). Electrodes were placed in specific regions of the brain, including the frontal, central, temporal, and occipital lobes (Fz, C1, C2, T7, T8, Oz). Several ML models were trained using EEG data for activity recognition and LIME (Local Interpretable Model-Agnostic Explanations) was employed for interpreting clinically the most influential EEG spectral features in HAR models. The classification results of the HAR models, particularly the Random Forest and Gradient Boosting models, demonstrated outstanding performances in distinguishing the analyzed human activities. The ML models exhibited alignment with EEG spectral bands in the recognition of human activity, a finding supported by the XAI explanations. To sum up, incorporating eXplainable Artificial Intelligence (XAI) into Human Activity Recognition (HAR) studies may improve activity monitoring for patient recovery, motor imagery, the healthcare metaverse, and clinical virtual reality settings.
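For the interpretation step described above, a generic LIME call on tabular features would look roughly like the sketch below. The random placeholder data, the electrode/band feature names, and the four activity labels are invented for illustration; they are not the study's recordings or model.

```python
# Hedged sketch: explaining one prediction of a Random Forest HAR model with LIME.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = [f"{ch}_{band}" for ch in ("Fz", "C1", "C2", "T7", "T8", "Oz")
                 for band in ("theta", "alpha", "beta")]
X = rng.normal(size=(600, len(feature_names)))        # placeholder EEG spectral features
y = rng.integers(0, 4, 600)                           # rest / walk / work / read (assumed labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["rest", "walk", "work", "read"],
                                 discretize_continuous=True)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())   # top spectral features driving this single prediction
```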
20

Mucha, Tomasz, Sijia Ma, and Kaveh Abhari. "Riding a bicycle while building its wheels: the process of machine learning-based capability development and IT-business alignment practices". Internet Research 33, No. 7 (18.07.2023): 168–205. http://dx.doi.org/10.1108/intr-10-2022-0769.

Annotation:
Purpose: Recent advancements in Artificial Intelligence (AI) and, at its core, Machine Learning (ML) offer opportunities for organizations to develop new or enhance existing capabilities. Despite the endless possibilities, organizations face operational challenges in harvesting the value of ML-based capabilities (MLbC), and current research has yet to explicate these challenges and theorize their remedies. To bridge the gap, this study explored the current practices to propose a systematic way of orchestrating MLbC development, which is an extension of ongoing digitalization of organizations. Design/methodology/approach: Data were collected from Finland's Artificial Intelligence Accelerator (FAIA) and complemented by follow-up interviews with experts outside FAIA in Europe, China and the United States over four years. Data were analyzed through open coding, thematic analysis and cross-comparison to develop a comprehensive understanding of the MLbC development process. Findings: The analysis identified the main components of MLbC development, its three phases (development, release and operation) and two major MLbC development challenges: Temporal Complexity and Context Sensitivity. The study then introduced Fostering Temporal Congruence and Cultivating Organizational Meta-learning as strategic practices addressing these challenges. Originality/value: This study offers a better theoretical explanation for the MLbC development process beyond MLOps (Machine Learning Operations) and its hindrances. It also proposes a practical way to align ML-based applications with business needs while accounting for their structural limitations. Beyond the MLbC context, this study offers a strategic framework that can be adapted for different cases of digital transformation that include automation and augmentation of work.
21

Calabuig, J. M., L. M. Garcia-Raffi, and E. A. Sánchez-Pérez. "Aprender como una máquina: introduciendo la Inteligencia Artificial en la enseñanza secundaria". Modelling in Science Education and Learning 14, No. 1 (27.01.2021): 5. http://dx.doi.org/10.4995/msel.2021.15022.

Annotation:
Artificial intelligence is present in the everyday environment of all secondary school students. However, the general population, and students in particular, do not know how these algorithmic techniques work, even though they often rely on very simple mechanisms that can be explained at an elementary level in mathematics or technology classes at secondary schools (Institutos de Enseñanza Secundaria, IES). These contents will probably take many years to become part of the curricula of those subjects, but they can be introduced as part of the algebra content covered in mathematics, or of the content related to algorithms in computer science classes, especially if they are presented as a game in which different groups of students can compete, as we propose in this article. Accordingly, we present a very simple example of a reinforcement learning algorithm (Machine Learning-Reinforcement Learning) that condenses into a playful activity the fundamental elements of an artificial intelligence algorithm.
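In the spirit of the classroom activity described above, a reinforcement learning example can be kept extremely small. The following tabular Q-learning sketch on a five-cell corridor is an assumption for illustration, not the game from the article, but it conveys the same learn-by-reward idea.

```python
# Tiny tabular Q-learning example: an agent on a 5-cell corridor earns +1 for reaching the right end.
import numpy as np

n_states, actions = 5, (-1, +1)               # move left or right
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(500):                          # episodes
    s = 0
    while s != n_states - 1:
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))   # learned policy: 1 ("move right") in every non-terminal cell
```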
22

Ahn, Yongsu, Muheng Yan, Yu-Ru Lin, Wen-Ting Chung, and Rebecca Hwa. "Tribe or Not? Critical Inspection of Group Differences Using TribalGram". ACM Transactions on Interactive Intelligent Systems 12, No. 1 (31.03.2022): 1–34. http://dx.doi.org/10.1145/3484509.

Annotation:
With the rise of AI and data mining techniques, group profiling and group-level analysis have been increasingly used in many domains, including policy making and direct marketing. In some cases, the statistics extracted from data may provide insights to a group's shared characteristics; in others, the group-level analysis can lead to problems, including stereotyping and systematic oppression. How can analytic tools facilitate a more conscientious process in group analysis? In this work, we identify a set of accountable group analytics design guidelines to explicate the needs for group differentiation and preventing overgeneralization of a group. Following the design guidelines, we develop TribalGram, a visual analytic suite that leverages interpretable machine learning algorithms and visualization to offer inference assessment, model explanation, data corroboration, and sense-making. Through the interviews with domain experts, we showcase how our design and tools can bring a richer understanding of "groups" mined from the data.
23

Topper, Noah, George Atia, Ashutosh Trivedi, and Alvaro Velasquez. "Active Grammatical Inference for Non-Markovian Planning". Proceedings of the International Conference on Automated Planning and Scheduling 32 (13.06.2022): 647–51. http://dx.doi.org/10.1609/icaps.v32i1.19853.

Annotation:
Planning in finite stochastic environments is canonically posed as a Markov decision process where the transition and reward structures are explicitly known. Reinforcement learning (RL) lifts the explicitness assumption by working with sampling models instead. Further, with the advent of reward machines, we can relax the Markovian assumption on the reward. Angluin's active grammatical inference algorithm L* has found novel application in explicating reward machines for non-Markovian RL. We propose maintaining the assumption of explicit transition dynamics, but with an implicit non-Markovian reward signal, which must be inferred from experiments. We call this setting non-Markovian planning, as opposed to non-Markovian RL. The proposed approach leverages L* to explicate an automaton structure for the underlying planning objective. We exploit the environment model to learn an automaton faster and integrate it with value iteration to accelerate the planning. We compare against recent non-Markovian RL solutions which leverage grammatical inference, and establish complexity results that illustrate the difference in runtime between grammatical inference in planning and RL settings.
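The planning half of the approach above ultimately relies on ordinary value iteration over an explicit model (run, in the paper, together with the automaton learned by L*). A bare-bones value iteration sketch over a small explicit MDP is shown below; the transition and reward structure is an arbitrary toy model chosen for illustration, not the paper's benchmark or its automaton product construction.

```python
# Bare-bones value iteration on a tiny explicit MDP (arbitrary toy model).
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
P = np.zeros((n_actions, n_states, n_states))    # P[a, s, s'] = transition probability
P[0] = np.eye(n_states)                          # action 0: stay put
P[1] = np.roll(np.eye(n_states), 1, axis=1)      # action 1: move to the next state (cyclically)
R = np.zeros((n_actions, n_states))
R[1, n_states - 2] = 1.0                         # reward for stepping into the last state

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * (P @ V)                      # Q[a, s] backup over the explicit model
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V, Q.argmax(axis=0))                       # state values and greedy policy
```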
24

Cantillo Romero, Janer Rafael, Javier Javier Estrada Romero, and Carlos Henríquez Miranda. "APLICACIÓN DE ALGORITMOS DE APRENDIZAJE AUTOMÁTICO EN GEOCIENCIA: REVISIÓN INTEGRAL Y DESAFÍO FUTURO". REVISTA AMBIENTAL AGUA, AIRE Y SUELO 14, No. 2 (30.11.2023): 9–18. http://dx.doi.org/10.24054/raaas.v14i2.2783.

Annotation:
This article addresses the application of machine learning (ML) techniques in geoengineering and geoscience, highlighting their relevance for predicting and understanding natural phenomena. Although they dispense with explicit physical laws, ML models offer the flexibility to adapt and discover complex patterns. In particular, the ability of machine learning to improve the accuracy and efficiency of landslide susceptibility prediction is highlighted, with approaches such as supervised and unsupervised learning. The importance of understanding why a model classifies certain classes is noted, offering explainable tools that allow results to be aligned with the physical understanding of geological processes. In addition, crucial ML applications in geotechnical engineering are explored, with models based on algorithms such as support vector machines, artificial neural networks, and Bayes classifiers. The need to investigate the coupling of physics-based models and data-driven AI models for a more complete understanding and reliable predictions is emphasized. The integration of ML techniques into geoengineering emerges as a key strategy for addressing current climatic and anthropogenic challenges, offering new perspectives in the investigation of landslides and other geological hazards. This article is part of research carried out within the framework of a Master's programme in Environmental Engineering, which seeks to explore the potential of machine learning for geological risk management.
25

Bhattacharyya, Som Sekhar, and Srikant Nair. "Explicating the future of work: perspectives from India". Journal of Management Development 38, No. 3 (08.04.2019): 175–94. http://dx.doi.org/10.1108/jmd-01-2019-0032.

Annotation:
Purpose: The world is witnessing the advent of a wide range of technologies like machine learning, big data analytics, artificial intelligence, blockchain technology, robotics, additive manufacturing, augmented and virtual reality, cloud computing, the Internet of Things and others. Amidst this concoction of diverse technologies, the future of work is getting redefined. Thus, the purpose of this paper is to understand the future of work in the context of an emerging economy like India. Design/methodology/approach: The authors undertook qualitative research with a positivist approach. The authors undertook expert interviews with 26 respondents. The respondents were interviewed with a semi-structured open-ended questionnaire. The responses were content analyzed for themes. System dynamics was applied to explicate the phenomenon studied. Findings: The authors found that the future of work has multiple facets. The authors found that in future, organizations would not only use automation for lower-end routine manual jobs, but also for moderate knowledge-centric tasks. Future jobs would have significant data dependency, and employees would be expected to analyze and synthesize data for sense making. Another finding pointed out that in future, individuals would be constantly required to upgrade their skills and thus learning would become a continuous lifelong process. In future, individuals would get short-term tasks rather than long-term secured jobs. Thus, job flexibility would be high as freelancing would be a dominant way of work. Organizations would reduce dedicated workspaces and would use co-working spaces to reduce office space investments. In future, jobs that are impregnated with novelty and creativity would remain. A finding of concern was that with the advent of automated technologies a larger portion of the workforce would lose jobs and there could be widespread unemployment that might lead to social unrest. The provision of universal basic income has been advocated by some experts to handle a social crisis. Research limitations/implications: This research is based on an organization-centric view that is anchored in the resource-based view and dynamic capabilities. The research contributes to the conversation of human resource co-existence with automated technologies for organizations of tomorrow. Thus, this work specifically contributes to strategic human resources with technology capabilities in organizations. Practical implications: These research findings would help organizational design and development practitioners to comprehend what kind of interventions would be required to be future ready to accommodate both technology and human resources. For policy makers, the results of this study would help them design policy interventions that could keep the nation's workforce job ready in the age of automated technologies through investments in automated technology education. Originality/value: India is bestowed with one of the largest English-speaking, technically qualified young workforces working at lower salary levels than their developed country counterparts. The advent of automated technologies ushers in challenges and opportunities for this young qualified workforce to step into the future. This is the first study from India that deliberates on the "future of work" in India.
26

Ge, Hanwen, Yuekun Bai, Rui Zhou, Yaoze Liu, Jiahui Wei, Shenglin Wang, Bin Li, and Huanfei Xu. "Explicable Machine Learning for Predicting High-Efficiency Lignocellulose Pretreatment Solvents Based on Kamlet–Taft and Polarity Parameters". ACS Sustainable Chemistry & Engineering, 29.04.2024. http://dx.doi.org/10.1021/acssuschemeng.4c01563.

27

Sun, Kun, and Jiayi Pan. "Model of Storm Surge Maximum Water Level Increase in a Coastal Area Using Ensemble Machine Learning and Explicable Algorithm". Earth and Space Science 10, No. 12 (December 2023). http://dx.doi.org/10.1029/2023ea003243.

Annotation:
This study proposes a new ensemble model (NEM) designed to simulate the maximum water level increases caused by storm surges in frequently cyclone-affected coastal waters of Hong Kong, China. The model relies on storm and water level data spanning 1978–2022. The NEM amalgamates three machine learning algorithms: Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and XGBoost (XGB), employing a stacking technique for integration. Six parameters, determined using the Random Forest and Recursive Feature Elimination algorithms (RF-RFE), are used as input features for the NEM. These parameters are the nearest wind speed, gale distance, nearest air pressure, minimum distance, maximum pressure drop within 24 hr, and large wind radius. Model assessment results suggest that the NEM exhibits superior performance over RF, GBDT, and XGB, delivering high stability and precision. It reaches a coefficient of determination (R2) of up to 0.95 and a mean absolute error (MAE) that fluctuates between 0.08 and 0.20 m for the test data set. An interpretability analysis conducted using the SHapley Additive exPlanations (SHAP) method shows that gale distance and nearest wind speed are the most significant features for predicting peak water level increases during storm surges. The results of this study could provide practical implications for predictive models concerning storm surges. These findings present essential tools for the mitigation of coastal disasters and the improvement of marine disaster warning systems.
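The two building blocks named above, RF-based recursive feature elimination and a stacked RF/GBDT/XGB ensemble, map onto standard library components. The sketch below is a generic assembly with made-up data and an assumed Ridge meta-learner, not a reproduction of the NEM or its storm data.

```python
# Generic sketch of RF-RFE feature selection followed by a stacked RF/GBDT/XGB regressor.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=15, noise=0.3, random_state=0)

# RF-RFE: keep six predictors, mirroring the six storm features mentioned in the abstract.
selector = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
               n_features_to_select=6).fit(X, y)
X_sel = selector.transform(X)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
                ("gbdt", GradientBoostingRegressor(random_state=0)),
                ("xgb", XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X_sel, y)
print(stack.predict(X_sel[:3]))
```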
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Kim, Ho Heon, Dong-Wook Kim, Junwoo Woo und Kyoungyeul Lee. „Explicable prioritization of genetic variants by integration of rule-based and machine learning algorithms for diagnosis of rare Mendelian disorders“. Human Genomics 18, Nr. 1 (21.03.2024). http://dx.doi.org/10.1186/s40246-024-00595-8.

Der volle Inhalt der Quelle
Annotation:
Background: In the process of finding the causative variant of rare diseases, accurate assessment and prioritization of genetic variants is essential. Previous variant prioritization tools mainly depend on the in-silico prediction of the pathogenicity of variants, which results in low sensitivity and difficulty in interpreting the prioritization result. In this study, we propose an explainable algorithm for variant prioritization, named 3ASC, with higher sensitivity and the ability to annotate the evidence used for prioritization. 3ASC annotates each variant with the 28 criteria defined by the ACMG/AMP genome interpretation guidelines and with features related to the clinical interpretation of the variants. The system can explain the result based on annotated evidence and feature contributions. Results: We trained various machine learning algorithms using in-house patient data. The performance of variant ranking was assessed using the recall rate of identifying causative variants among the top-ranked variants. The best-performing model was a random forest classifier that showed a top-1 recall of 85.6% and a top-3 recall of 94.4%. 3ASC annotates the ACMG/AMP criteria for each genetic variant of a patient so that clinical geneticists can interpret the result, as in the CAGI6 SickKids challenge. In the challenge, 3ASC identified causal genes for 10 out of 14 patient cases, with evidence of decreased gene expression for 6 cases. Among them, two genes (HDAC8 and CASK) had decreased gene expression profiles confirmed by transcriptome data. Conclusions: 3ASC can prioritize genetic variants with higher sensitivity compared to previous methods by integrating various features related to clinical interpretation, including features related to false positive risk such as quality control and disease inheritance pattern. The system allows interpretation of each variant based on the ACMG/AMP criteria and feature contributions assessed using explainable AI techniques.
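The top-k recall metric reported above can be sketched in a few lines of Python. Everything below (patient IDs, scores, column names) is a hypothetical example; in practice the scores would come from a classifier such as a random forest, as the abstract describes.

```python
import pandas as pd

def top_k_recall(candidates: pd.DataFrame, k: int) -> float:
    """Fraction of patients whose causative variant is among the k highest-scored candidates."""
    hits = 0
    patients = candidates["patient_id"].unique()
    for pid in patients:
        ranked = candidates[candidates["patient_id"] == pid].sort_values(
            "model_score", ascending=False)
        hits += int(ranked.head(k)["is_causative"].any())
    return hits / len(patients)

# Tiny synthetic example; "model_score" could be, e.g.,
# RandomForestClassifier.predict_proba(X)[:, 1] over ACMG/AMP-derived features.
candidates = pd.DataFrame({
    "patient_id":   [1, 1, 1, 2, 2, 2],
    "variant":      ["v1", "v2", "v3", "v4", "v5", "v6"],
    "model_score":  [0.91, 0.40, 0.12, 0.55, 0.80, 0.30],
    "is_causative": [1, 0, 0, 1, 0, 0],
})
print(top_k_recall(candidates, k=1), top_k_recall(candidates, k=3))
```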
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Clarke, Gerald P., und Adam Kapelner. „The Bayesian Additive Regression Trees Formula for Safe Machine Learning-Based Intraocular Lens Predictions“. Frontiers in Big Data 3 (18.12.2020). http://dx.doi.org/10.3389/fdata.2020.572134.

Der volle Inhalt der Quelle
Annotation:
Purpose: Our work introduces a highly accurate, safe, and sufficiently explicable machine-learning (artificial intelligence) model of intraocular lens (IOL) power, translating into better post-surgical outcomes for patients with cataracts. We also demonstrate its improved predictive accuracy over previous formulas. Methods: We collected retrospective eye measurement data on 5,331 eyes from 3,276 patients across multiple centers who received a lens implantation during cataract surgery. The dependent measure is the post-operative manifest spherical equivalent error from intended, and the independent variables are the patient- and eye-specific characteristics. This dataset was split so that one subset was used for formula construction and the other for validating our new formula. Data excluded fellow eyes, so as not to confound the prediction with bilateral eyes. Results: Our formula is three times more precise than reported studies, with a median absolute IOL error of 0.204 diopters (D). When converted to absolute predictive refraction errors on the cornea, the median error is 0.137 D, which is close to the IOL manufacturer tolerance. These estimates are validated out-of-sample and thus are expected to reflect the future performance of our prediction formula, especially since our data were collected from a wide variety of patients, clinics, and manufacturers. Conclusion: The increased precision of IOL power calculations has the potential to optimize positive refractive outcomes for patients. Our model also provides uncertainty plots that can be used in tandem with the clinician's expertise and previous formula output, further enhancing safety. Translational relevance: Our new machine learning process has the potential to significantly improve patient IOL refractive outcomes safely.
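For readers unfamiliar with the headline metric, the sketch below shows how a median absolute refraction error in diopters would be computed on a held-out validation set. The numbers are placeholders, not the study's data, and the code is not the authors' BART formula itself.

```python
import numpy as np

# Placeholder post-operative spherical-equivalent errors in diopters.
predicted_se = np.array([0.10, -0.25, 0.05, 0.30, -0.15])
observed_se  = np.array([0.20, -0.10, 0.00, 0.45, -0.40])

median_abs_refraction_error = np.median(np.abs(predicted_se - observed_se))
print(f"median absolute refraction error: {median_abs_refraction_error:.3f} D")
```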
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Khan, Ijaz, Abdul Rahim Ahmad, Nafaa Jabeur und Mohammed Najah Mahdi. „An artificial intelligence approach to monitor student performance and devise preventive measures“. Smart Learning Environments 8, Nr. 1 (08.09.2021). http://dx.doi.org/10.1186/s40561-021-00161-y.

Der volle Inhalt der Quelle
Annotation:
A major problem an instructor experiences is the systematic monitoring of students' academic progress in a course. The moment students with unsatisfactory academic progress are identified, the instructor can take measures to offer additional support to those struggling. Modern-day educational institutes collect enormous amounts of data concerning their students from various sources; however, they still seek novel procedures to utilize these data to enhance their prestige and improve education quality. This research evaluates the effectiveness of machine learning algorithms in monitoring students' academic progress and informing the instructor about students at risk of ending up with an unsatisfactory result in a course. In addition, the prediction model is transformed into a transparent form to make it easy for the instructor to prepare the necessary precautionary procedures. We developed a set of prediction models with distinct machine learning algorithms. The decision tree outperformed the other models and was therefore further transformed into an easily explicable format. The final output of the research is a set of supportive measures for carefully monitoring students' performance from the very start of the course and a set of preventive measures for offering additional attention to struggling students.
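A minimal sketch of turning a trained decision tree into instructor-readable rules, in the spirit of the paper, is shown below. The feature names and the synthetic data are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic course data; feature names are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "quiz_avg":         rng.uniform(0, 100, 500),
    "assignments_done": rng.integers(0, 10, 500),
    "forum_posts":      rng.integers(0, 30, 500),
    "attendance_rate":  rng.uniform(0, 1, 500),
})
df["at_risk"] = ((df["quiz_avg"] < 50) & (df["attendance_rate"] < 0.6)).astype(int)

features = ["quiz_avg", "assignments_done", "forum_posts", "attendance_rate"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["at_risk"], random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {tree.score(X_te, y_te):.2f}")
print(export_text(tree, feature_names=features))   # plain-text rules the instructor can act on
```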
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Siddique, Abu Bokkar, Eliyas Rayhan, Faisal Sobhan, Nabanita Das, Md Azizul Fazal, Shashowti Chowdhury Riya und Subrata Sarker. „Spatio-temporal analysis of land use and land cover changes in a wetland ecosystem of Bangladesh using a machine-learning approach“. Frontiers in Water 6 (10.07.2024). http://dx.doi.org/10.3389/frwa.2024.1394863.

Der volle Inhalt der Quelle
Annotation:
This study investigates quantifiable and explicable changes in Land Use and Land Cover (LULC) within the context of a freshwater wetland, Hakaluki Haor, in Bangladesh. The haor is a vital Ramsar site and Ecologically Critical Area (ECA), which needs to be monitored to investigate LULC change patterns for future management interventions. Leveraging Landsat satellite data, the Google Earth Engine database, the CART algorithm, ArcGIS 10.8 and the R programming language, this study analyses LULC dynamics from 2000 to 2023. It focuses explicitly on seasonal transitions between the rainy and dry seasons, unveiling substantial transformations in cumulative LULC change patterns over the study period. Noteworthy changes include an overall reduction (~51%) in Water Bodies and a concurrent, significant increase (~353%) in Settlement areas. Moreover, Vegetation declines substantially (~71%), while Crop Land shows varying coverage. These identified changes underscore the dynamic nature of LULC alterations and their potential implications for the environmental, hydrological, and agricultural aspects of the Hakaluki Haor region. The outcomes of this study aim to provide valuable insights to policymakers for formulating appropriate land-use strategies in the area.
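The per-class change percentages quoted above can be derived from two classified rasters in a few lines. The sketch below uses random placeholder arrays and assumes 30 m Landsat pixels and a simple four-class legend; in practice the inputs would be CART-classified scenes exported from Google Earth Engine for the two dates.

```python
import numpy as np

PIXEL_AREA_KM2 = (30 * 30) / 1e6     # Landsat pixel (30 m x 30 m); an assumption here
CLASSES = {1: "Water Bodies", 2: "Settlement", 3: "Vegetation", 4: "Crop Land"}

def class_area_km2(classified: np.ndarray) -> dict:
    return {name: np.count_nonzero(classified == code) * PIXEL_AREA_KM2
            for code, name in CLASSES.items()}

def percent_change(start: dict, end: dict) -> dict:
    return {name: 100.0 * (end[name] - start[name]) / start[name] for name in start}

# Random placeholders standing in for the 2000 and 2023 classification maps.
rng = np.random.default_rng(0)
lulc_2000 = rng.integers(1, 5, size=(1000, 1000))
lulc_2023 = rng.integers(1, 5, size=(1000, 1000))
print(percent_change(class_area_km2(lulc_2000), class_area_km2(lulc_2023)))
```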
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Funer, Florian. „Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship“. Philosophy & Technology 35, Nr. 1 (29.01.2022). http://dx.doi.org/10.1007/s13347-022-00505-7.

Der volle Inhalt der Quelle
Annotation:
The initial successes in recent years in harnessing machine learning (ML) technologies to improve medical practice and benefit patients have attracted attention across a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, even though their results seem convincing in terms of accuracy and reliability. Skepticism arises when the physician, as the agent responsible for diagnosis, therapy, and care, is unable to access how findings and recommendations are generated. There is widespread agreement that, generally, complete traceability is preferable to opaque recommendations; however, there are differences over how to address ML-based systems whose functioning seems to remain opaque to some degree, even as so-called explicable or interpretable systems attract increasing interest. This essay approaches the epistemic foundations of ML-generated information specifically, and of medical knowledge generally, to advocate differentiating decision-making situations in clinical contexts according to the depth of insight they require into the process of information generation. Empirically accurate or reliable outcomes are sufficient for some decision situations in healthcare, whereas other clinical decisions require extensive insight into ML-generated outcomes because of their inherently normative implications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Zhang, Jiahui, Wenjie Du, Xiaoting Yang, Di Wu, Jiahe Li, Kun Wang und Yang Wang. „SMG-BERT: integrating stereoscopic information and chemical representation for molecular property prediction“. Frontiers in Molecular Biosciences 10 (30.06.2023). http://dx.doi.org/10.3389/fmolb.2023.1216765.

Der volle Inhalt der Quelle
Annotation:
Molecular property prediction is a crucial task in various fields and has recently garnered significant attention. To achieve accurate and fast prediction of molecular properties, machine learning (ML) models have been widely employed due to their superior performance compared to traditional trial-and-error methods. However, most existing ML models that do not incorporate 3D molecular information are still in need of improvement, as they are mostly poor at differentiating stereoisomers of certain types, particularly chiral ones. In addition, routine featurization methods that rely on incomplete features make it hard to obtain explicable molecular representations. In this paper, we propose the Stereo Molecular Graph BERT (SMG-BERT), which integrates 3D space geometric parameters, 2D topological information, and the 1D SMILES string into the self-attention-based BERT model. In addition, nuclear magnetic resonance (NMR) spectroscopy results and bond dissociation energies (BDE) are integrated as extra atomic and bond features to improve the model's performance and interpretability analysis. The comprehensive integration of 1D, 2D, and 3D information could establish a unified and unambiguous molecular characterization system to distinguish conformations, such as those of chiral molecules. Intuitively integrated chemical information enables the model to possess interpretability that is consistent with chemical logic. Experimental results on 12 benchmark molecular datasets show that SMG-BERT consistently outperforms existing methods. At the same time, the experimental results demonstrate that SMG-BERT is generalizable and reliable.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Hu, Chang, Chao Gao, Tianlong Li, Chang Liu und Zhiyong Peng. „Explainable artificial intelligence model for mortality risk prediction in the intensive care unit: a derivation and validation study“. Postgraduate Medical Journal, 19.01.2024. http://dx.doi.org/10.1093/postmj/qgad144.

Der volle Inhalt der Quelle
Annotation:
Background: A lack of transparency is a prevalent issue among the current machine-learning (ML) algorithms utilized for predicting mortality risk. Here, we aimed to improve transparency by utilizing the latest explicable ML technology, SHapley Additive exPlanations (SHAP), to develop a predictive model for critically ill patients. Methods: We extracted data from the Medical Information Mart for Intensive Care IV database, encompassing all intensive care unit admissions. We employed nine different methods to develop the models. The most accurate model, with the highest area under the receiver operating characteristic curve, was selected as the optimal model. Additionally, we used SHAP to explain the workings of the ML model. Results: The study included 21,395 critically ill patients, with a median age of 68 years (interquartile range, 56–79 years), and most patients were male (56.9%). The cohort was randomly split into a training set (N = 16,046) and a validation set (N = 5,349). Among the nine models developed, the Random Forest model had the highest accuracy (87.62%) and the best area under the receiver operating characteristic curve value (0.89). The SHAP summary analysis showed that the Glasgow Coma Scale, urine output, and blood urea nitrogen were the top three risk factors for outcome prediction. Furthermore, SHAP dependency analysis and SHAP force analysis were used to interpret the Random Forest model at the factor level and the individual level, respectively. Conclusion: A transparent ML model for predicting outcomes in critically ill patients using the SHAP methodology is feasible and effective. SHAP values significantly improve the explainability of ML models.
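A minimal sketch of the SHAP workflow described above (a global summary plot plus a factor-level dependence plot for a fitted random forest) follows. The synthetic data and feature names are stand-ins for the MIMIC-IV variables named in the abstract.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ICU features (GCS, urine output, BUN, ...).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
cols = ["gcs_min", "urine_output", "bun", "age", "lactate", "heart_rate"]
X = pd.DataFrame(X, columns=cols)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_va)
# Depending on the shap version, a binary classifier yields either a list with one
# array per class or a single 3-D array; select the positive class either way.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

shap.summary_plot(sv_pos, X_va)                  # global feature ranking
shap.dependence_plot("gcs_min", sv_pos, X_va)    # factor-level interpretation
```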
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Marey, Ahmed, Parisa Arjmand, Ameerh Dana Sabe Alerab, Mohammad Javad Eslami, Abdelrahman M. Saad, Nicole Sanchez und Muhammad Umair. „Explainability, transparency and black box challenges of AI in radiology: impact on patient care in cardiovascular radiology“. Egyptian Journal of Radiology and Nuclear Medicine 55, Nr. 1 (13.09.2024). http://dx.doi.org/10.1186/s43055-024-01356-2.

Der volle Inhalt der Quelle
Annotation:
The integration of artificial intelligence (AI) in cardiovascular imaging has revolutionized the field, offering significant advancements in diagnostic accuracy and clinical efficiency. However, the complexity and opacity of AI models, particularly those involving machine learning (ML) and deep learning (DL), raise critical legal and ethical concerns due to their "black box" nature. This manuscript addresses these concerns by providing a comprehensive review of AI technologies in cardiovascular imaging, focusing on the challenges and implications of the black box phenomenon. We begin by outlining the foundational concepts of AI, including ML and DL, and their applications in cardiovascular imaging. The manuscript delves into the "black box" issue, highlighting the difficulty in understanding and explaining AI decision-making processes. This lack of transparency poses significant challenges for clinical acceptance and ethical deployment. The discussion then extends to the legal and ethical implications of AI's opacity. The need for explicable AI systems is underscored, with an emphasis on the ethical principles of beneficence and non-maleficence. The manuscript explores potential solutions such as explainable AI (XAI) techniques, which aim to provide insights into AI decision-making without sacrificing performance. Moreover, the impact of AI explainability on clinical decision-making and patient outcomes is examined. The manuscript argues for the development of hybrid models that combine interpretability with the advanced capabilities of black box systems. It also advocates for enhanced education and training programs for healthcare professionals to equip them with the necessary skills to utilize AI effectively. Patient involvement and informed consent are identified as critical components for the ethical deployment of AI in healthcare. Strategies for improving patient understanding and engagement with AI technologies are discussed, emphasizing the importance of transparent communication and education. Finally, the manuscript calls for the establishment of standardized regulatory frameworks and policies to address the unique challenges posed by AI in healthcare. By fostering interdisciplinary collaboration and continuous monitoring, the medical community can ensure the responsible integration of AI into cardiovascular imaging, ultimately enhancing patient care and clinical outcomes.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Smith, Matthew G., Jack Radford, Eky Febrianto, Jorge Ramírez, Helen O’Mahony, Andrew B. Matheson, Graham M. Gibson, Daniele Faccio und Manlio Tassieri. „Machine learning opens a doorway for microrheology with optical tweezers in living systems“. AIP Advances 13, Nr. 7 (01.07.2023). http://dx.doi.org/10.1063/5.0161014.

Der volle Inhalt der Quelle
Annotation:
It has been argued that linear microrheology with optical tweezers (MOT) of living systems "is not an option" because of the wide gap between the observation time required to collect statistically valid data and the mutational times of the organisms under study. Here, we have explored modern machine learning (ML) methods to reduce the duration of MOT measurements from tens of minutes down to one second by focusing on the analysis of computer-simulated experiments. For the first time in the literature, we explicate the relationship between the required duration of MOT measurements (T_m) and the fluid's relative viscosity (η_r) needed to achieve an uncertainty as low as 1% by means of conventional analytical methods, i.e., T_m ≅ 17 η_r³ minutes, thus revealing why conventional MOT measurements commonly underestimate materials' viscoelastic properties, especially in the case of highly viscous fluids or soft solids. Finally, by means of real experimental data, we have developed and corroborated an ML algorithm to determine the viscosity of Newtonian fluids from trajectories of only one second in duration, yet capable of returning viscosity values carrying an error as low as ∼0.3% at best, hence opening a doorway for MOT in living systems.
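A quick numeric reading of the reported scaling relation, T_m ≅ 17 η_r³ minutes for ~1% uncertainty, shows how sharply the required measurement time grows with relative viscosity; the loop below is purely illustrative.

```python
# T_m ≅ 17 * eta_r**3 (minutes): e.g. eta_r = 10 implies roughly 17,000 min (~12 days).
for eta_r in (1, 2, 5, 10):
    t_min = 17 * eta_r ** 3
    print(f"eta_r = {eta_r:>2} -> T_m ≈ {t_min:>6} min (~{t_min / 60:.1f} h)")
```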
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Sun, Deliang, Yuekai Ding, Haijia Wen und Fengtai Zhang. „A novel QLattice‐based whitening machine learning model of landslide susceptibility mapping“. Earth Surface Processes and Landforms, 06.08.2023. http://dx.doi.org/10.1002/esp.5675.

Der volle Inhalt der Quelle
Annotation:
Landslide susceptibility mapping (LSM) enables the prediction of landslide occurrences, thereby offering a scientific foundation for disaster prevention and control. In recent years, numerous studies have applied machine learning techniques to LSM. However, the majority of machine learning models are considered "black box" models due to their lack of transparent explanations. In contrast, the QLattice model serves as a white-box model, as it can elucidate the decision-making mechanism, representing a novel approach to whitening machine learning models. QLattice can automatically select and scale data features. In this study, Fengjie County in China was selected as the research area, with slope units serving as evaluation units. A geospatial database was constructed using 12 conditioning factors, including elevation, slope, and annual average rainfall. LSM models were built using both the QLattice and random forest (RF) algorithms. The findings demonstrate that the QLattice model achieved an area-under-curve value of 0.868, while the RF model attained 0.849 on the test datasets. These results highlight the superior predictive ability and stability of the QLattice model compared with RF. Furthermore, QLattice can explicate and clarify the change processes of the conditioning factors, thereby revealing the internal decision-making mechanism and the causes behind the LSM model's decisions. The innovative QLattice-based model provides new ideas and methodologies for LSM research.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ahn, Sungyong. „On That Toy-Being of Generative Art Toys“. M/C Journal 26, Nr. 2 (25.04.2023). http://dx.doi.org/10.5204/mcj.2947.

Der volle Inhalt der Quelle
Annotation:
Exhibiting Procedural Generation Generative art toys are software applications that create aesthetically pleasing visual patterns in response to the users toying with various input devices, from keyboard and mouse to more intuitive and tactile devices for motion tracking. The “art” part of these toy objects might relate to the fact that they are often installed in art galleries or festivals as a spectacle for non-players that exhibits the unlimited generation of new patterns from a limited source code. However, the features that used to characterise generative arts as a new meditative genre, such as the autonomy of the algorithmic system and its self-organisation (Galanter 151), do not explain the pleasure of fiddling with these playthings, which feel sticky like their toy relatives, slime, rather than meditative, like mathematical sublime. Generative algorithms are more than software tools to serve human purposes now. While humans are still responsible for the algorithmically generated content, this is either to the extent of the simple generation rules the artists design for their artworks or only to the extent that our everyday conversations and behaviours serve as raw material to train machine learning-powered generation algorithms, such as ChatGPT, to interpret the world they explore stochastically, extrapolating it in an equivalently statistical way. Yet, as the algorithms become more responsive to the contingency of human behaviours, and so the trained generation rules become too complex, it becomes almost impossible for humans to understand how they translate all contingencies in the real world into machine-learnable correlations. In turn, the way we are entangled with the generated content comes to far exceed our responsibility. One disturbing future scenario of this hyper-responsiveness of the algorithms, for which we could never be fully responsible, is when machine-generated content replaces the ground truth sampled from the real world, leading to the other machine learning-powered software tools that govern human behaviour being trained on these “synthetic data” (Steinhoff). The multiplicities of human worlds are substituted for their algorithmically generated proxies, and the AIs trained instead on the proxies’ stochastic complexities would tell us how to design our future behaviours. As one aesthetic way to demonstrate the creativity of the machines, generative arts have exhibited generative algorithms in a somewhat decontextualised and thus less threatening manner by “emphasizing the circularity of autopoietic processes” of content generation (Hayles 156). Their current toy conversion playfully re-contextualises how these algorithms in real life, incarnated into toy-like gadgets, both enact and are enacted by human users. These interactions not only form random seeds for content generation but also constantly re-entangle generated contents with contingent human behaviors. The toy-being of generative algorithms I conceptualise here is illustrative of this changed mode of their exhibition. They move from displaying generative algorithms as speculative objects at a distance to sticky toy objects up close and personal: from emphasising their autopoietic closure to “more open-ended and transformative” engagement with their surroundings (Hayles 156). 
(Katherine Hayles says this changed focus in the research of artificial life/intelligence from the systems’ auto-poietic self-closure to their active engagement with environments characterises “the transition from the second to the third wave” of cybernetics; 17.) Their toy-being also reflects how the current software industry repurposes these algorithms, once developed for automation of content creation with no human intervention, as machines that enact commercially promising entanglements between contingent human behaviors and a mixed-reality that is algorithmically generated. Tool-Being and Toy-Being of Generative Algorithms What I mean by toy-being is a certain mode of existence in which a thing appears when our habitual sensorimotor relations with it are temporarily suspended. It is comparable to what Graham Harman calls a thing’s tool-being in his object-oriented rereading of Heidegger’s tool analysis. In that case, this thing’s becoming either a toy or tool pertains to how our hands are entangled with its ungraspable aspects. According to Heidegger a hammer, for instance, is ready-to-hand when its reactions to our grip, and swinging, and to the response from the nail, are fully integrated into our habitual action of hammering to the extent that its stand-alone existence is almost unnoticeable (Tool-Being). On the other hand, it is when the hammer breaks down, or slips out of our grasp, that it begins to feel present-at-hand. For Harman, this is the moment the hammer reveals its own way to be in the world, outside of our instrumentalist concern. It is the hint of the hammer’s “subterranean reality”, which is inexhaustible by any practical and theoretical concerns we have of it (“Well-Wrought” 186). It is unconstrained by the pragmatic maxim that any conception of an object should be grounded in the consequences of what it does or what can be done with it (Peirce). In Harman’s object-oriented ontology, neither the hammer’s being ready to serve any purpose of human and nonhuman others – nor its being present as an object with its own social, economic, and material histories – explicate its tool-being exhaustively. Instead, it always preserves more than the sum of the relations it has ever built with others throughout its lifetime. So, the mode of existence that describes best this elusive tool-being for him is withdrawing-from-hand. Generative art toys are noteworthy regarding this ever-switching and withdrawing mode of things on which Harman and other speculative realists focus. In the Procedural Content Generation (PCG) community, the current epicentre of generative art toys, which consists of videogame developers and researchers, these software applications are repurposed from the development tools they aim to popularise through this toy conversion. More importantly, procedural algorithms are not ordinary tools ready to be an extension of a developer’s hands, just as traditional level design tools follow Ivan Suntherland’s 1963 Sketchpad archetype. Rather, procedural generation is an autopoietic process through which the algorithm organises its own representation of the world from recursively generated geographies, characters, events, and other stuff. And this representation does not need to be a truthful interpretation of its environments, which are no other than generation parameters and other input data from the developer. Indeed, they “have only a triggering role in the release of the internally-determined activity” of content generation. 
The representation it generates suffices to be just “structurally coupled” with these developer-generated data (Hayles 136, 138). In other words, procedural algorithms do not break down to be felt present-at-hand because they always feel as though their operations are closed against their environments-developers. Furthermore, considered as the solution to the ever-increasing demand for the more expansive and interactive sandbox design of videogames, they not only promise developers unlimited regeneration of content for another project but promise players a virtual reality, which constantly changes its shape while always appearing perfectly coupled with different decisions made by avatars, and thus promise unlimited replayability of the videogame. So, it is a common feeling of playing a videogame with procedurally generated content or a story that evolves in real time that something is constantly withdrawing from the things the player just grasped. (The most vicious way to exploit this gamer feeling would be the in-game sale of procedurally generated items, such as weapons with many re-combinable parts, instead of the notorious loot-box that sells a random item from the box, but with the same effect of leading gamers to a gambling addiction by letting them believe there is still something more.) In this respect, it is not surprising that Harman terms his object-oriented ontology after object-oriented programming in computer science. Both look for an inexhaustible resource for the creative generation of the universe and algorithmic systems from the objects infinitely relatable to one another thanks ironically to the secret inner realities they enclose against each other. Fig. 1: Kate Compton, Idle Hands. http://galaxykate.com/apps/idlehands/ However, the toy-being of the algorithms, which I rediscover from the PCG community’s playful conversion of their development tools and which Harman could not pay due attention to while holding on to the self-identical tool-being, is another mode of existence that all tools, or all things before they were instrumentalised, including even the hammer, had used to be in children’s hands. For instance, in Kate Compton’s generative art toy Idle Hands (fig. 1), what a player experiences is her hand avatar, every finger and joint of which is infinitely extended into the space, even as they also serve as lines into which the space is infinitely folded. So, as the player clenches and unclenches her physical hands, scanned in real-time by the motion tracking device Leapmotion, and interpreted into linear input for the generation algorithm, the space is constantly folded and refolded everywhere even by the tiniest movement of a single joint. There is nothing for her hands to grasp onto because nothing is ready to respond consistently to her repeated hand gestures. It is almost impossible to replicate the exact same gesture but, even if she does, the way the surrounding area is folded by this would be always unpredictable. Put differently, in this generative art toy, the player cannot functionally close her sensorimotor activity. This is not so much because of the lack of response, but because it is Compton’s intention to render the whole “fields of the performer” as hyperresponsive to “a body in motion” as if “the dancer wades through water or smoke or tall grass, if they disturb [the] curtain as they move” (Compton and Mateas). 
At the same time, the constant re-generation of the space as a manifold is no longer felt like an autonomous self-creation of the machine but arouses the feeling that “all of these phenomena ‘listen’ to the movement of the [hands] and respond in some way” (Compton and Mateas). Let me call this fourth mode of things, neither ready-to-hand nor present-at-hand, nor withdrawing-from-hand, but sticky-to-hand: describing a thing’s toy-being. This is so entangled with the hands that its response to our grasp is felt immediately, on every surface and joint, so that it is impossible to anticipate exactly how it would respond to further grasping or releasing. It is a typical feeling of the hand toying with a chunk of clay or slime. It characterises the hypersensitivity of the autistic perception that some neurodiverse people may have, even to ordinary tools, not because they have closed their minds against the world as the common misunderstanding says, but because even the tiniest pulsations that things exert to their moving bodies are too overwhelming to be functionally integrated into their habitual sensorimotor activities let alone to be unentangled as present-at-hand (Manning). In other words, whereas Heideggerian tool-being, for Harman, draws our attention to the things outside of our instrumentalist concern, their toyfication puts the things that were once under our grip back into our somewhat animistic interests of childhood. If our agency as tool-users presupposes our body’s optimal grip on the world that Hubert Dreyfus defines as “the body’s tendency to refine its responses so as to bring the current situation closer to an optimal gestalt” (367), our becoming toy-players is when we feel everything is responsive to each other until that responsiveness is trivialised as the functional inputs for habitual activities. We all once felt things like these animistic others, before we were trained to be tool-users, and we may consequently recall a forgotten genealogy of toy-being in the humanities. This genealogy may begin with a cotton reel in Freud’s fort-da game, while also including such things as jubilant mirror doubles and their toy projections in Lacanian psychoanalysis, various playthings in Piaget’s development theory, and all non-tool-beings in Merleau-Ponty’s phenomenology. To trace this genealogy is not this article’s goal but the family resemblance that groups these things under the term toy-being is noteworthy. First, they all pertain to a person’s individuation processes at different stages, whether it be for the symbolic and tactile re-staging of a baby’s separation from her mother, her formation of a unified self-image from the movements of different body parts, the child’s organisation of object concepts from tactile and visual feedbacks of touching and manipulating hands, the subsequent “projection of such ‘symbolic schemas’” as social norms, as Barbie’s and Ken’s, onto these objects (Piaget 165-166), or a re-doing of all these developmental processes through aesthetic assimilation of objects as the flesh of the worlds (Merleau-Ponty). And these individuations through toys seem to approach the zero-degree of human cognition in which a body (either human or nonhuman) is no other than a set of loosely interconnected sensors and motors. In this zero-degree, the body’s perception or optimal grip on things is achieved as the ways each thing responds to the body’s motor activities are registered on its sensors as something retraceable, repeatable, and thus graspable. 
In other words, there is no predefined subject/object boundary here but just multiplicities of actions and sensations until a group of sensors and motors are folded together to assemble a reflex arc, or what Merleau-Ponty calls intention arc (Dreyfus), or what I term sensor-actuator arc in current smart spaces (Ahn). And it is when some groups of sensations are distinguished as those consistently correlated with and thus retraceable by certain operations of the body that this fold creates an optimal grip on the rest of the field. Let me call this enfolding of the multiplicities whereby “the marking of the ‘measuring agencies’ by the ‘measured object’” emerges prior to the interaction between two, following Karen Barad, intra-action (177). Contrary to the experience of tool-being present-at-hand as no longer consistently contributing to our habitually formed reflex arc of hammering or to any socially constructed measuring agencies for normative behaviors of things, what we experience with this toy-being sticky-to-hand is our bodies’ folding into the multiplicities of actions and sensations, to discover yet unexplored boundaries and grasping between our bodies and the flesh of the world. Generative Art Toys as the Machine Learning’s Daydream Then, can I say even the feeling I have on my hands while I am folding and refolding the slime is intra-action? I truly think so, but the multiplicities in this case are so sticky. They join to every surface of my hands whereas the motility under my conscious control is restricted only to several joints of my fingers. The real-life multiplicities unfolded from toying with the slime are too overwhelming to be relatable to my actions with the restricted degree of freedom. On the other hand, in Compton’s Idle Hands, thanks to the manifold generated procedurally in virtual reality, a player experiences these multiplicities so neatly entangled with all the joints on the avatar hands. Rather than simulating a meaty body enfolded within “water or smoke or tall grass,” or the flesh of the world, the physical hands scanned by Leapmotion and abstracted into “3D vector positions for all finger joints” are embedded in the paper-like virtual space of Idle Hands (Compton and Mateas). And rather than delineating a boundary of the controlling hands, they are just the joints on this immanent plane, through which it is folded into itself in so many fantastic ways impossible on a sheet of paper in Euclidean geometry. Another toy relative which Idle Hands reminds us of is, in this respect, Cat’s Cradle (fig. 2). This play of folding a string entangled around the fingers into itself over and over again to unfold each new pattern is, for Donna Haraway, a metaphor for our creative cohabitation of the world with nonhuman others. Feeling the tension the fingers exchange with each other across the string is thus, for her, compared to “our task” in the Anthropocene “to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places” (Haraway 1). Fig. 2: Nasser Mufti, Multispecies Cat's Cradle, 2011. https://www.kit.ntnu.no/sites/www.kit.ntnu.no/files/39a8af529d52b3c35ded2aa1b5b0cb0013806720.jpg In the alternative, in Idle Hands, each new pattern is easily unfolded even from idle and careless finger movements without any troubled feeling, because its procedural generation is to guarantee that every second of the player’s engagement is productive and wasteless relation-making. 
In Compton’s terms, the pleasure of generative art toys is relevant to the players’ decision to trade the control they once enjoyed as tool users for power. And this tricky kind of power that the players are supposed to experience is not because of their strong grip, but because they give up this strong grip. It is explicable as the experience of being re-embedded as a fold within this intra-active field of procedural generation: the feeling that even seemingly purposeless activities can make new agential cuts as the triggers for some artistic creations (“Generative Art Toys” 164-165), even though none of these creations are graspable or traceable by the players. The procedural algorithm as the new toy-being is, therefore, distinguishable from its non-digital toy relatives by this easy feeling of engagement that all generated patterns are wastelessly correlated with the players’ sensorimotor activities in some ungraspable ways. And given the machine learning community’s current interest in procedural generation as the method to “create more training data or training situations” and “to facilitate the transfer of policies trained in a simulator to the real world” (Risi and Togelius 428, 430), the pleasure of generative art toys can be interpreted as revealing the ideal picture of the mixed-reality dreamed of by machine learning algorithms. As the solution to circumvent the issue of data privacy in surveillance capitalism, and to augment the lack of diversity in existing training data, the procedurally generated synthetic data are now considered as the new benchmarks for machine learning instead of those sampled from the real world. This is not just about a game-like object for a robot to handle, or geographies of fictional terrains for a smart vehicle to navigate (Risi and Togelius), but is more about “little procedural people” (“Little Procedural People”), “synthetic data for banking, insurance, and telecommunications companies” (Steinhoff 8). In the near future, as the AIs trained solely on these synthetic data begin to guide our everyday decision-making, the mixed-reality will thus be more than just a virtual layer of the Internet superimposed on the real world but haunted by so many procedurally generated places, things, and people. Compared to the real world, still too sticky like slime, machine learning could achieve an optimal grip on this virtual layer because things are already generated there under the assumption that they are all entangled with one another by some as yet unknown correlations that machine learning is supposed to unfold. Then the question recalled by this future scenario of machine learning would be again Philip K. Dick’s: Do the machines dream of (procedurally generated) electronic sheep? Do they rather dream of this easy wish fulfillment in place of playing an arduous Cat’s Cradle with humans to discover more patterns to commodify between what our eyes attend to and what our fingers drag and click? Incarnated into toy-like gadgets on mobile devices, machine learning algorithms relocate their users to the zero-degree of social profiles, which is no other than yet-unstructured personal data supposedly responsive to (and responsible for regenerating) invisible arcs, or correlations, between things they watch and things they click. 
In the meanwhile, what the generative art toys really generate might be the self-fulfilling hope of the software industry that machines could generate their mixed-reality, so neatly and wastelessly engaged with the idle hands of human users, the dream of electronic sheep under the maximal grip of Android (as well as iOS). References Ahn, Sungyong. “Stream Your Brain! Speculative Economy of the IoT and Its Pan-Kinetic Dataveillance.” Big Data & Society 8.2 (2021). Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke UP, 2007. Compton, Kate. “Generative Art Toys.” Procedural Generation in Game Design, eds. Tanya Short and Tarn Adams. New York: CRC Press, 2017. 161-173. Compton, Kate. “Little Procedural People: Playing Politics with Generators.” Proceedings of the 12th International Conference on the Foundations of Digital Games, eds. Alessandro Canossa, Casper Harteveld, Jichen Zhu, Miguel Sicart, and Sebastian Deterding. New York: ACM, 2017. Compton, Kate, and Michael Mateas. “Freedom of Movement: Generative Responses to Motion Control.” CEUR Workshop Proceedings, 2282, ed. Jichen Zhu. Aachen: CEUR-WS, 2018. Dreyfus, Hubert L. “Intelligence without Representation: Merleau-Ponty’s Critique of Mental Representation.” Phenomenology and the Cognitive Sciences 1 (2002): 367-383. Galanter, Philip. “Generative Art Theory.” A Companion to Digital Art, ed. Christiane Paul. Hoboken, NJ: Wiley-Blackwell, 2016. 146-180. Haraway, Donna J. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke UP, 2016. Harman, Graham. Tool-Being: Heidegger and the Metaphysics of Objects. Chicago: Open Court, 2002. ———. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43 (2012): 183-203. Hayles, Katherine N. How We Become Posthuman: Virtual Bodies in Cybernetics, Literatures, and Informatics. Chicago: U of Chicago P, 1999. Manning, Erin. The Minor Gesture. Durham: Duke UP, 2016. Merleau-Ponty, Maurice. The Visible and the Invisible. Ed. Claude Lefort. Trans. Alphonso Lingis. Evanston, IL: Northwestern UP, 1968. Peirce, Charles S. “How to Make Our Ideas Clear.” Popular Science Monthly 12 (1878): 286-302. Piaget, Jean. Play, Dreams and Imitation in Childhood. Trans. C. Gattegno and F.M. Hodgson. New York: W.W. Norton, 1962. Risi, Sebastian, and Julian Togelius. “Increasing Generality in Machine Learning through Procedural Content Generation.” Nature Machine Intelligence 2 (2020): 428-436. Steinhoff, James. “Toward a Political Economy of Synthetic Data: A Data-Intensive Capitalism That Is Not a Surveillance Capitalism?” New Media and Society, 2022.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Luo, Hong, Jisong Yan, Dingyu Zhang und Xia Zhou. „Identification of cuproptosis-related molecular subtypes and a novel predictive model of COVID-19 based on machine learning“. Frontiers in Immunology 14 (17.07.2023). http://dx.doi.org/10.3389/fimmu.2023.1152223.

Der volle Inhalt der Quelle
Annotation:
Background: To explicate the pathogenic mechanisms of cuproptosis, a newly observed copper-induced cell death pattern, in Coronavirus disease 2019 (COVID-19). Methods: Cuproptosis-related subtypes were distinguished in COVID-19 patients and the associations between subtypes and the immune microenvironment were probed. Three machine learning algorithms, LASSO, random forest, and support vector machine, were employed to identify genes differentially expressed between subtypes, which were subsequently used to construct a cuproptosis-related risk score model in the GSE157103 cohort to predict the occurrence of COVID-19. The predictive value of the cuproptosis-related risk score was verified in the GSE163151, GSE152418 and GSE171110 cohorts. A nomogram was created to facilitate the clinical use of this risk score, and its validity was validated through a calibration plot. Finally, the model genes were validated using lung proteomics data from COVID-19 cases and single-cell data. Results: Patients with COVID-19 had significantly higher cuproptosis levels in blood leukocytes compared to patients without COVID-19. Two cuproptosis clusters were identified by an unsupervised clustering approach, and cuproptosis cluster A, characterized by the T cell receptor signaling pathway, had a better prognosis than cuproptosis cluster B. We constructed a cuproptosis-related risk score based on PDHA1, PDHB, MTF1 and CDKN2A, and created a nomogram; both showed excellent predictive value for COVID-19. The proteomics results showed that the expression levels of PDHA1 and PDHB were significantly increased in COVID-19 patient samples. Conclusion: Our study constructed and validated a cuproptosis-associated risk model, and the risk score can be used as a powerful biomarker for predicting SARS-CoV-2 infection.
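A hedged sketch of a gene-expression risk score of the kind described above, fitted from the four reported genes with a logistic model, is shown below. The synthetic expression matrix and labels are placeholders; the paper's actual pipeline and coefficients are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic expression values for the four reported model genes.
rng = np.random.default_rng(0)
genes = ["PDHA1", "PDHB", "MTF1", "CDKN2A"]
expr = pd.DataFrame(rng.normal(size=(300, 4)), columns=genes)
label = (0.8 * expr["PDHA1"] + 0.6 * expr["PDHB"] + rng.normal(size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(expr, label, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk_score = clf.decision_function(X_te)          # linear combination of the four genes
print(f"validation AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```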
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour und Kristian Lum. „Algorithmic Fairness: Choices, Assumptions, and Definitions“. Annual Review of Statistics and Its Application 8, Nr. 1 (09.11.2020). http://dx.doi.org/10.1146/annurev-statistics-042720-125902.

Der volle Inhalt der Quelle
Annotation:
A recent wave of research has attempted to define fairness quantitatively. In particular, this work has explored what fairness might mean in the context of decisions based on the predictions of statistical and machine learning models. The rapid growth of this new field has led to wildly inconsistent motivations, terminology, and notation, presenting a serious challenge for cataloging and comparing definitions. This article attempts to bring much-needed order. First, we explicate the various choices and assumptions made—often implicitly—to justify the use of prediction-based decision-making. Next, we show how such choices and assumptions can raise fairness concerns and we present a notationally consistent catalog of fairness definitions from the literature. In doing so, we offer a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision-making.
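As a toy illustration of two of the fairness definitions catalogued in the article, the snippet below computes a demographic-parity gap and an equal-opportunity gap from binary predictions and a protected attribute; all data are hypothetical.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def selection_rate(pred, g, label):
    return pred[g == label].mean()

# Demographic parity: equal selection rates across groups.
dp_gap = abs(selection_rate(y_pred, group, "a") - selection_rate(y_pred, group, "b"))

def tpr(true, pred, g, label):
    mask = (g == label) & (true == 1)
    return pred[mask].mean()

# Equal opportunity: equal true-positive rates across groups.
eo_gap = abs(tpr(y_true, y_pred, group, "a") - tpr(y_true, y_pred, group, "b"))
print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```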
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Tobing, Margaret BR, Fizri Ismaliana SNA, Nadya Risky Hayrunnisa, Nur Indah Tika Haswuri, Cucu Sutarsyah und Feni Munifatullah. „An Exploration of Artificial Intelligence in English Language Teaching As a Foreign Language“. International Journal of Social Science and Human Research 06, Nr. 06 (30.06.2023). http://dx.doi.org/10.47191/ijsshr/v6-i6-78.

Der volle Inhalt der Quelle
Annotation:
The aim of this study is to analyze the technologies currently used in teaching and learning English as a foreign language at the university level, based on the findings of the identified experimental studies. The methodology followed the PRISMA criteria for systematic reviews and meta-analyses. The findings of the experimental studies showed a lack of innovative technologies, such as chatbots or virtual reality (VR) devices, in common use in foreign language (FL) education. Furthermore, mobile apps are primarily concerned with the acquisition of foreign language vocabulary. The findings also showed that although foreign language teachers may be aware of the latest technological devices, such as neural machine translation, they do not integrate them effectively into their teaching. As a result, this study indicates that teachers should be educated in how to employ such tools in their foreign language lessons alongside traditional instruction, in order to determine which language skills or structures could be developed through their use. Additionally, it is suggested that further experimental studies be conducted to explicate the evidence and how useful these technologies are in teaching a foreign language.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Guest, Olivia. „What Makes a Good Theory, and How Do We Make a Theory Good?“ Computational Brain & Behavior, 24.01.2024. http://dx.doi.org/10.1007/s42113-023-00193-2.

Der volle Inhalt der Quelle
Annotation:
I present an ontology of criteria for evaluating theory to answer the titular question from the perspective of a scientist practitioner. Set inside a formal account of our adjudication over theories, a metatheoretical calculus, this ontology comprises the following: (a) metaphysical commitment, the need to highlight what parts of theory are not under investigation, but are assumed, asserted, or essential; (b) discursive survival, the ability to be understood by interested non-bad actors, to withstand scrutiny within the intended (sub)field(s), and to negotiate the dialectical landscape thereof; (c) empirical interface, the potential to explicate the relationship between theory and observation, i.e., how observations relate to, and affect, theory and vice versa; (d) minimising harm, the reckoning with how theory is forged in a fire of historical, if not ongoing, abuses—from past crimes against humanity, to current exploitation, turbocharged or hyped by machine learning, to historical and present internal academic marginalisation. This work hopes to serve as a possible beginning for scientists who want to examine the properties and characteristics of theories, to propose additional virtues and vices, and to engage in further dialogue. Finally, I appeal to practitioners to iterate frequently over such criteria, by building and sharing the metatheoretical calculi used to adjudicate over theories.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Maity, Sourav, und Karan Veer. „An Approach for Evaluation and Recognition of Facial Emotions Using EMG Signal“. International Journal of Sensors, Wireless Communications and Control 14 (05.01.2024). http://dx.doi.org/10.2174/0122103279260571231213053403.

Der volle Inhalt der Quelle
Annotation:
Background: Facial electromyography (fEMG) records muscular activity from the facial muscles, providing details about facial muscle stimulation patterns during experimentation. Objectives: Principal Component Analysis (PCA) is applied so that the raw, unprocessed fEMG data are rendered into a low-dimensional space while minimizing data redundancy. Methods: The facial EMG signal was acquired using a BIOPAC MP150 instrument. Four electrodes were fixed on the face of each participant to capture four different emotions: happiness, anger, sadness and fear. Two electrodes were placed on the arm for grounding purposes. Results: The aim of this paper is to demonstrate the use of PCA in conjunction with subjective fEMG analysis and to give a thorough account of PCA in the context of machine learning. It describes the arithmetical characteristics of PCA, which is estimated using the covariance matrix. Larger datasets are increasingly common, and their interpretation often becomes difficult; it is therefore necessary to reduce the number of variables and work with linear combinations of the data so that a large number of variables can be explained with a relevant approach. PCA is applied because it is an unsupervised method that uses established statistical concepts to reduce the dimensionality of huge datasets. Conclusion: This work further analyses the fEMG signals acquired for four different facial expressions using Analysis of Variance (ANOVA) to clarify the variation of the extracted features.
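A minimal sketch of the analysis pipeline outlined above, PCA to reduce fEMG features followed by one-way ANOVA across the four emotion conditions, is given below. The data shapes, random values, and feature count are assumptions, not the study's recordings.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))        # 120 trials x 16 fEMG features (placeholder)
emotion = np.repeat(["happy", "anger", "sad", "fear"], 30)

# Standardize, project onto principal components, and test PC1 across emotions.
pc1 = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))[:, 0]
groups = [pc1[emotion == e] for e in ("happy", "anger", "sad", "fear")]
f_stat, p_val = f_oneway(*groups)
print(f"ANOVA on PC1: F = {f_stat:.2f}, p = {p_val:.3f}")
```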
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

P., Naachimuthu K. „Sustainable Agriculture - The Indian Way“. Journal of Rural and Industrial Development 3, Nr. 1 (2015). http://dx.doi.org/10.21863/jrid/2015.3.1.002.

Der volle Inhalt der Quelle
Annotation:
The five natural elements (earth, water, fire, air, and sky), the sun and the moon, plants, trees, birds, and animals came into existence long before human beings. In fact, man, as a part of nature, was the last creation in the universe. Though we human beings have been created with the superlative degree of intellect, there is much that can be learnt from nature; traditions of wisdom from around the world teach us that a divine essence flows through all creation. Together with nature, man can co-create groundbreaking ideas that help create wealth and well-being, for nature offers solutions for inclusive growth and sustainable development. Food scarcity is a major issue confronting developing countries: one out of every eight people in the world goes to bed without food (FAO, 2012), and of the several hundred million hungry people in the world, 98 percent are in developing countries. Several measures have been taken to alleviate this problem, but the consequences of those actions have proved even more costly. The use of heavy machinery, pesticides and chemical fertilizers has had a lasting impact, causing imbalance in ecosystems, soil degradation, soil erosion and land degradation. Natural farming is an ancient form of agriculture that follows the principles of nature to develop self-sustaining systems for raising crops and livestock. The present paper attempts to explicate the sustainable nature of natural farming, as against the quick-fix agriculture of fertilizers and chemicals. This holistic outlook also tries to bring out the role of farm animals (and the remains of farm animals and farm produce) and of microorganisms in the soil in creating food abundance, as well as concerns about food loss and food wastage and their global impact.
APA, Harvard, Vancouver, ISO und andere Zitierweisen