Journal articles on the topic 'Machine theory of collective intelligence'

Consult the top 50 journal articles for your research on the topic 'Machine theory of collective intelligence.'


1

Canonico, Lorenzo Barberis, Christopher Flathmann, and Nathan McNeese. "The Wisdom of the Market: Using Human Factors to Design Prediction Markets for Collective Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 1471–75. http://dx.doi.org/10.1177/1071181319631282.

Full text
Abstract:
There is an ever-growing literature on the power of prediction markets to harness “the wisdom of the crowd” from large groups of people. However, traditional prediction markets are not designed in a human-centered way, often restricting their own potential. This creates the opportunity to apply a cognitive science perspective to enhancing the collective intelligence of the participants. Thus, we propose a new model for prediction markets that integrates human factors, cognitive science, game theory and machine learning to maximize collective intelligence. We do this by first identifying the connections between prediction markets and collective intelligence, then using human factors techniques to analyze our design, culminating in the practical ways in which our design enables artificial intelligence to complement human intelligence.
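The abstract does not specify a market mechanism; a common automated market maker in prediction-market designs is Hanson's logarithmic market scoring rule (LMSR). The sketch below is illustrative only and is not the design proposed in the paper; the class and parameter names are invented for the example.

```python
import math

class LMSRMarket:
    """Logarithmic Market Scoring Rule market maker (Hanson).

    Illustrative sketch only. The liquidity parameter b controls how
    much prices move per share traded: larger b means smaller moves.
    """

    def __init__(self, n_outcomes, b=100.0):
        self.b = b
        self.q = [0.0] * n_outcomes      # outstanding shares per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current price of an outcome (= the market's probability estimate)."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns the cost charged to the trader."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(n_outcomes=2)
cost = market.buy(0, 50.0)            # a trader backs outcome 0
print(round(market.price(0), 3))      # price rises above the initial 0.5
```

With b = 100, buying 50 shares moves the price of that outcome from 0.50 to about 0.62; tuning b is one of the human-factors levers such a design could expose.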
APA, Harvard, Vancouver, ISO, and other styles
2

Dzyaloshinsky, I. M. "Artificial Intelligence: A Humanitarian Perspective." Vestnik NSU. Series: History and Philology 21, no. 6 (June 17, 2022): 20–29. http://dx.doi.org/10.25205/1818-7919-2022-21-6-20-29.

Full text
Abstract:
The article is devoted to the study of the features of human intelligence and of the intelligence of complex computer systems, usually referred to as artificial intelligence (AI). As a hypothesis, a statement was formulated about a significant difference between human and artificial intelligence. Human intelligence is a product of a multi-thousand-year history of the development and interaction of three interrelated processes: 1) the formation and development of the human personality; 2) the formation of complex network relationships between members of the social community; 3) collective activity as the basis for the existence and development of communities and individuals. AI is a complex of technological solutions that imitate human cognitive processes. Because of this, under all options for technical development (acceleration of processes for collecting and processing data and finding solutions, use of computer vision, speech recognition and synthesis, etc.), AI will always be associated with human activity. In other words, only people (not machines) are the ultimate source and determinant of the values on which any artificial intelligence depends. No mind (human or machine) will ever be truly autonomous: everything we do depends on the social context created by other people, who determine the meaning of what we want to achieve. This means that people are responsible for everything that AI does.
APA, Harvard, Vancouver, ISO, and other styles
3

Barberis Canonico, Lorenzo, Nathan J. McNeese, and Chris Duncan. "Machine Learning as Grounded Theory: Human-Centered Interfaces for Social Network Research through Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1252–56. http://dx.doi.org/10.1177/1541931218621287.

Full text
Abstract:
Internet technologies have created unprecedented opportunities for people to come together and, through their collective effort, generate large amounts of data about human behavior. With the increased popularity of grounded theory, many researchers have sought to use ever-larger datasets to analyze and draw patterns about social dynamics. However, the data is simply too big for a single human to derive effective models for many complex social phenomena. Computational methods offer a unique opportunity to analyze a wide spectrum of sociological events by leveraging the power of artificial intelligence. Within the human factors community, machine learning has emerged as the dominant AI approach to dealing with big data. However, along with its many benefits, machine learning has introduced a unique challenge: interpretability. The models of macro-social behavior generated by AI are so complex that they can rarely be translated into human understanding. We propose a new method for conducting grounded theory research that leverages the power of machine learning to analyze complex social phenomena through social network analysis while retaining interpretability as a core feature.
APA, Harvard, Vancouver, ISO, and other styles
4

Hernández-Orallo, José. "Measuring (machine) intelligence universally: An interdisciplinary challenge." Acta Europeana Systemica 4 (July 13, 2020): 37–40. http://dx.doi.org/10.14428/aes.v4i1.57073.

Full text
Abstract:
Artificial intelligence (AI) is having a deep impact on the way humans work, communicate and enjoy their leisure time. AI systems have traditionally been devised to solve specific tasks, such as playing chess, diagnosing a disease or driving a car. However, more and more AI systems are now being devised to be generally adaptable, and to learn to solve a variety of tasks or to assist humans and organisations in their everyday tasks. As a result, an increasing number of robots, bots, avatars and 'smart' devices are enhancing our capabilities as individuals, collectives and humanity as a whole. What are these systems capable of doing? What is their global intelligence? How can we tell whether they are meeting their specifications? Are organisations that include AI systems becoming less predictable and more difficult to govern? The truth is that we lack proper measurement tools to evaluate the cognitive abilities and expected behaviour of this variety of systems, including hybrids (e.g., machine-enhanced humans) and collectives. Having established the relevance and difficulty of AI evaluation, we survey what has been done in this area in the past twenty years, focussing on approaches based on algorithmic information theory and Kolmogorov complexity, and their relation to other disciplines concerned with intelligence evaluation in humans and animals, such as psychometrics and comparative cognition. This leads us to the notion of the universal intelligence test and the new endeavour of universal psychometrics.
APA, Harvard, Vancouver, ISO, and other styles
5

Liang, Thow Yick. "The inherent structure and dynamic of intelligent human organizations." Human Systems Management 21, no. 1 (February 16, 2002): 9–19. http://dx.doi.org/10.3233/hsm-2002-21102.

Full text
Abstract:
As humankind ventures deeper into the intelligence era, a totally re-defined mindset is essential to ensure its continuity. In the emerging new environment, human organizations must behave as intelligent beings, in the same manner as biological entities compete for survival in an ecological system. They must learn, self-organize, adapt, compete and evolve. Thus, human systems can no longer be like machines, and the structures and characteristics of the industrial era will have to be dismantled. This shift in paradigm requires all human organizations to re-design their structure and operations around intelligence. Therefore, to strategize for the future, the first initiative human organizations need to adopt is to establish an intelligent structure, and to nurture an orgmind and its collective intelligence. A significant component of the orgmind is an intelligence enhancer comprising three entities, namely intelligence, knowledge structure and theory. These entities interact continuously among themselves, supported by at least one physical symbol system. Ultimately, the accuracy and appropriateness of the language used helps to enhance the engagement of the interacting agents in the organization. In this respect, the ability to learn continuously, to adapt quickly, and to evolve effectively is sustained by the intelligence enhancer.
APA, Harvard, Vancouver, ISO, and other styles
6

Liang, Thow Yick. "The new intelligence leadership strategy for iCAS." Human Systems Management 26, no. 2 (July 13, 2007): 111–22. http://dx.doi.org/10.3233/hsm-2007-26204.

Full text
Abstract:
As humanity becomes more dependent on information and knowledge, the current concepts, theories and practices associated with leadership strategy have to be transformed. Fundamentally, the influence of the knowledge-intensive, fast-changing and more complex environment has initiated a shift in the mindset, strategic thinking, ability and style of the new generation of leaders. In addition, in all categories of human organizations (economic, business, social, educational and political), members are becoming better educated and informed, and consequently they are more sophisticated interacting agents with modified expectations. Leading these new intelligent human organizations is drastically different from leading a traditional setup. Consequently, the introduction of a new leadership strategy is inevitable. Concurrently, in the new context, it is also highly significant to recognize that all human thinking systems and human organizations are indeed complex adaptive systems. In such systems, order and complexity co-exist, and they learn, adapt and evolve with the changing environment, similar to the behavior of any biological species in an ecological system. The complex and nonlinear evolving dynamic is driven by the intrinsic intelligence of the individuals and the collective intelligence of the group. Therefore, focusing on and exploiting the bio-logic rather than the machine-logic perspective is definitely more appropriate. In this respect, a better comprehension of leadership strategy and organizational dynamics can be acquired by “bisociating” complexity theory and the concept of organizing around intelligence. The resulting evolutionary model of this analysis is the intelligence leadership strategy.
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Zhengxin. "Learning about Learners: System Learning in Virtual Learning Environment." International Journal of Computers Communications & Control 3, no. 1 (March 1, 2008): 33. http://dx.doi.org/10.15837/ijccc.2008.1.2372.

Full text
Abstract:
Virtual learning is not just a set of useful IT tools for learning. From an examination of where virtual learning stands in the overall learning spectrum, we point out the important impact of natural computing on virtual learning. We survey and analyze selected literature on the important role of natural computing aspects, such as emergence (using swarm intelligence to achieve collective intelligence) and emotion, in virtual learning. In addition, in order to incorporate these aspects into virtual learning effectively, we propose infrastructural support for virtual learning through system learning: the virtual learning environment not only provides facilities for learners, but also observes the behavior of learners and takes actions so that its own performance can be improved (i.e., to better serve the learners). In this sense, system learning is concerned with learning about learners. Consequently, a virtual learning environment is a true human-machine symbiosis, pairing human learning with system learning.
APA, Harvard, Vancouver, ISO, and other styles
8

Moore, Phoebe V., Kendra Briken, and Frank Engster. "Machines and measure." Capital & Class 44, no. 2 (February 6, 2020): 139–44. http://dx.doi.org/10.1177/0309816820902016.

Full text
Abstract:
This Special Issue, entitled ‘Machines & Measure’, largely disseminates work from a workshop held at the University of Leicester School of Business, organised by editor Phoebe V Moore for the Conference of Socialist Economists South Group in February 2018 and hosted by the School's Philosophy and Political Economy Centre. Not all the authors in the Special Issue were speakers at the event, but this collection provides a carefully selected, representative set of articles and essays addressing the questions and disturbances that drove the event's concept, as articulated in its description: How are machines being used in contemporary capitalism to perpetuate control and to intensify power relations at work? Theorising how this occurs through discussions of the physical machine, the calculation machine and the social machine, the workshop was designed to revisit how quantification and measure, both human and machinic, become entangled in the social, and how workers are incorporated and absorbed as appendages within the machine, as Marx identified, at a time when artificial intelligence and the platform economy dominate discussions in digitalised work research. Stemming from Marxist critical theory, questions of money, time and space are also revisited in the Special Issue's articles, as are less debated concepts such as rhythmanalysis and a revival of historically familiar issues on the shop floor, where a whole range of semi-automated and fully automated methods continue to manage work through numeration without, necessarily, remuneration. The articles ask the most important questions of today and begin to identify possible solutions from a self-consciously Marxist perspective.
APA, Harvard, Vancouver, ISO, and other styles
9

KAMYSHOVA, G. N. "MODELING OF NEURAL PREDICTIVE CONTROL OF IRRIGATION MACHINES." Prirodoobustrojstvo, no. 1 (2021): 14–22. http://dx.doi.org/10.26897/1997-6011-2021-1-14-22.

Full text
Abstract:
The purpose of the study is to develop new scientific approaches to improving the efficiency of irrigation machines. Modern digital technologies allow the collection of data, their analysis, and the operational management of equipment and technological processes, often in real time. All this makes it possible, on the one hand, to apply new approaches to modeling technical systems and processes (so-called “data-driven models”), and on the other hand, requires the development of fundamentally new models based on artificial intelligence methods (artificial neural networks, fuzzy logic, machine learning algorithms, etc.). The analysis of the tracks and the actual speeds of the irrigation machines in real time showed significant deviations from the specified speed, which leads to a deterioration in the irrigation parameters. We have developed an irrigation machine control model based on predictive control approaches and the theory of artificial neural networks. Applying the model makes it possible to implement control algorithms that predict the response of the irrigation machine to the control signal. A diagram of an algorithm for constructing predictive control, the structure of a neuroregulator and tools for its synthesis using modern software are proposed. The versatility of the model makes it possible to use it both to improve the management of existing irrigation machines and to develop new ones with integrated intelligent control systems.
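The abstract describes a controller that predicts the machine's response to a control signal using a neural model. As a rough, self-contained sketch of that idea (with invented first-order toy dynamics, not the paper's irrigation-machine model or neuroregulator), one can train a small network on observed speed transitions and then pick, at each step, the input whose predicted outcome lands closest to the reference speed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in plant: a noisy first-order speed response.  This toy dynamic
# is an assumption for illustration, not the paper's machine model.
def plant_step(v, u):
    return v + 0.3 * (u - v) + rng.normal(0, 0.01)

# Collect transitions and fit a small network v_next = f(v, u).
X = rng.uniform(0, 1, size=(500, 2))              # columns: [speed, input]
y = np.array([plant_step(v, u) for v, u in X])

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(Z):
    h = np.tanh(Z @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

for _ in range(2000):                             # full-batch gradient descent
    h, pred = forward(X)
    err = (pred - y)[:, None] / len(X)
    dh = err @ W2.T * (1 - h ** 2)                # backprop through tanh
    W2 -= 0.5 * (h.T @ err); b2 -= 0.5 * err.sum(0)
    W1 -= 0.5 * (X.T @ dh);  b1 -= 0.5 * dh.sum(0)

# Predictive controller: choose the input whose PREDICTED next speed is
# closest to the reference (a one-step prediction horizon).
def control(v, v_ref, candidates=np.linspace(0, 1, 101)):
    Zc = np.column_stack([np.full_like(candidates, v), candidates])
    _, pred = forward(Zc)
    return candidates[np.argmin((pred - v_ref) ** 2)]

v, v_ref = 0.2, 0.8
for _ in range(30):
    v = plant_step(v, control(v, v_ref))
print(round(v, 2))  # should settle near the 0.8 reference
```

A longer prediction horizon (optimizing a sequence of inputs against the model) turns this into full model-predictive control; the one-step version keeps the sketch short.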
APA, Harvard, Vancouver, ISO, and other styles
10

Sen, Wang, Zhu Xiaomei, and Deng Lin. "Impact of Job Demands on Employee Learning: The Moderating Role of Human–Machine Cooperation Relationship." Computational Intelligence and Neuroscience 2022 (December 6, 2022): 1–11. http://dx.doi.org/10.1155/2022/7406716.

Full text
Abstract:
New artificial intelligence (AI) technologies are being applied to work scenarios, which may change job demands and affect employees’ learning. Based on conservation of resources theory, the impact of job demands on employee learning was evaluated in the context of AI. The study further explores the moderating effect of the human–machine cooperation relationship on this link. Using 500 valid questionnaires, a hierarchical regression analysis was performed. Results indicate that, in the AI application scenario, a U-shaped relationship exists between job demands and employee learning, and that the human–machine cooperation relationship moderates this U-shaped curvilinear relationship. This study introduces AI into the field of employee psychology and behavior, enriching research into the relationship between job demands and employee learning.
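A U-shape and its moderation are typically tested by entering a quadratic term, and then its interaction with the moderator, in successive regression steps. The sketch below illustrates that hierarchical procedure on simulated data; the variable names and effect sizes are invented for the example, not taken from the study's questionnaire data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated stand-in data (illustrative only).
demand = rng.uniform(-2, 2, n)          # job demands (centred)
coop = rng.uniform(-1, 1, n)            # human-machine cooperation quality
# True U-shape whose curvature strengthens with cooperation quality.
learning = (0.5 * demand ** 2
            + 0.3 * coop * demand ** 2
            + rng.normal(0, 0.3, n))

def fit(X, y):
    """OLS via least squares; returns coefficients and in-sample R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return beta, 1 - resid.var() / y.var()

# Step 1: linear term only.  Step 2: add the quadratic term (U-shape).
# Step 3: add the interaction coop x demand^2 (moderation).
_, r2_lin = fit(np.column_stack([demand]), learning)
b_quad, r2_quad = fit(np.column_stack([demand, demand ** 2]), learning)
b_mod, r2_mod = fit(np.column_stack([demand, demand ** 2, coop,
                                     coop * demand ** 2]), learning)

print(round(r2_lin, 2), round(r2_quad, 2), round(r2_mod, 2))
# A positive demand^2 coefficient indicates the U shape; the R^2 jump at
# each step mirrors the hierarchical test described in the abstract.
```

In a real analysis one would also report significance tests for the incremental R² at each step; the sketch only shows the model-building order.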
APA, Harvard, Vancouver, ISO, and other styles
11

Kabanda, Gabriel. "An Evaluation of Big Data Analytics Projects and the Project Predictive Analytics Approach." Oriental journal of computer science and technology 12, no. 4 (January 27, 2020): 132–46. http://dx.doi.org/10.13005/ojcst12.04.01.

Full text
Abstract:
Big Data is the process of managing large volumes of data obtained from several heterogeneous data types (e.g., internal, external, structured and unstructured) that can be used for collecting and analyzing enterprise data. The purpose of the paper is to conduct an evaluation of Big Data Analytics projects, discussing why such projects fail and explaining why and how the Project Predictive Analytics (PPA) approach may make a difference with respect to future methods based on data mining, machine learning, and artificial intelligence. A qualitative research methodology was used. The research design was discourse analysis supported by document analysis, adopting Laclau and Mouffe’s discourse theory as the most thoroughly poststructuralist approach.
APA, Harvard, Vancouver, ISO, and other styles
12

Silva, Matheus Alencar da, Bonfim Amaro Junior, Ramon Rudá Brito Medeiros, and Plácido Rogério Pinheiro. "A Neuroevolutionary Model to Estimate the Tensile Strength of Manufactured Parts Made by 3D Printing." Algorithms 15, no. 8 (July 28, 2022): 263. http://dx.doi.org/10.3390/a15080263.

Full text
Abstract:
Three-dimensional printing has advantages, such as excellent flexibility in producing parts from a digital model, enabling the fabrication of different geometries, both simple and complex, using low-cost materials and generating little residue. Many technologies have gained ground, notably artificial intelligence (AI), which has applications in many areas of knowledge and can be defined as any technology that allows a system to demonstrate human-like intelligence. In this context, machine learning uses artificial intelligence to develop computational techniques aimed at building knowledge automatically; such a system makes decisions based on experience accumulated through successful solutions. This work therefore develops a neuroevolutionary model using artificial intelligence techniques, specifically neural networks and genetic algorithms, to predict the tensile strength of materials manufactured by fused filament fabrication (FFF)-type 3D printing. To reach this objective, we consider the collection and construction of a database of three-dimensional instances and train the model under a set of chosen parameters. The model algorithm was developed in the Python programming language. Analysis of the data and graphics generated by the test runs shows that the model performed well, with a coefficient of determination above 90%, indicating high predictive accuracy.
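A neuroevolutionary model in this sense evolves a neural network's weights with a genetic algorithm instead of backpropagation. The following is a minimal sketch of that technique on synthetic data; the network size, GA settings and toy target function are assumptions for illustration, not the paper's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: printing parameters -> tensile strength.
# Feature meanings and the target function are invented for the example.
X = rng.uniform(0, 1, (200, 3))                 # e.g. infill, temp., speed
y = 20 + 30 * X[:, 0] + 10 * np.sin(3 * X[:, 1]) - 5 * X[:, 2]

N_IN, N_HID = 3, 6
N_W = N_IN * N_HID + N_HID + N_HID + 1          # genome length (all weights)

def predict(genome, X):
    """Decode a flat genome into a one-hidden-layer network and run it."""
    i = N_IN * N_HID
    W1 = genome[:i].reshape(N_IN, N_HID)
    b1 = genome[i:i + N_HID]
    W2 = genome[i + N_HID:i + 2 * N_HID]
    b2 = genome[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(genome):
    return -np.mean((predict(genome, X) - y) ** 2)   # negative MSE

pop = rng.normal(0, 1, (60, N_W))
for gen in range(300):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    parents = elite[rng.integers(0, 10, (50, 2))]
    mask = rng.random((50, N_W)) < 0.5               # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    sigma = 0.5 * 0.99 ** gen                        # decaying mutation
    children = children + rng.normal(0, sigma, children.shape)
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
r2 = 1 - np.sum((predict(best, X) - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))  # coefficient of determination of the evolved network
```

The appeal of the approach is that fitness needs no gradients, so the same loop works for non-differentiable objectives; the cost is far more function evaluations than backpropagation would need.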
APA, Harvard, Vancouver, ISO, and other styles
13

Hedar, Abdel-Rahman, Majid Almaraashi, Alaa E. Abdel-Hakim, and Mahmoud Abdulrahim. "Hybrid Machine Learning for Solar Radiation Prediction in Reduced Feature Spaces." Energies 14, no. 23 (November 29, 2021): 7970. http://dx.doi.org/10.3390/en14237970.

Full text
Abstract:
Solar radiation prediction is an important process in ensuring optimal exploitation of solar energy power. Numerous models have been applied to this problem, such as numerical weather prediction models and artificial intelligence models. However, well-designed hybridization approaches that combine numerical models with artificial intelligence models to yield a more powerful model can provide a significant improvement in prediction accuracy. In this paper, novel hybrid machine learning approaches that exploit auxiliary numerical data are proposed. The proposed hybrid methods invoke different machine learning paradigms, including feature selection, classification, and regression. Additionally, numerical weather prediction (NWP) models are used in the proposed hybrid models. Feature selection is used for feature space dimension reduction to reduce the large number of recorded parameters that affect estimation and prediction processes. The rough set theory is applied for attribute reduction and the dependency degree is used as a fitness function. The effect of the attribute reduction process is investigated using thirty different classification and prediction models in addition to the proposed hybrid model. Then, different machine learning models are constructed based on classification and regression techniques to predict solar radiation. Moreover, other hybrid prediction models are formulated to use the output of the numerical model of Weather Research and Forecasting (WRF) as learning elements in order to improve the prediction accuracy. The proposed methodologies are evaluated using a data set that is collected from different regions in Saudi Arabia. The feature-reduction has achieved higher classification rates up to 8.5% for the best classifiers and up to 15% for other classifiers, for the different data collection regions. 
Additionally, in the regression task, it achieved improvements in average root mean square error of up to 5.6% and in mean absolute error of up to 8.3%. The hybrid models reduced the root mean square errors by 70.2% and 4.3% compared with the numerical and machine learning models, respectively, when applied to some datasets. For some reduced-feature data, the hybrid models reduced the root mean square errors by 47.3% and 14.4% compared with the numerical and machine learning models, respectively.
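The rough-set attribute reduction described in the abstract scores a candidate attribute subset B by its dependency degree γ_B(D): the fraction of records whose values on B determine the decision D unambiguously (the positive region). A minimal sketch, using an invented toy table rather than the paper's Saudi solar dataset:

```python
from collections import defaultdict

def dependency_degree(rows, attrs, decision):
    """gamma_B(D): fraction of objects whose values on `attrs` determine
    the decision unambiguously (the rough-set positive region)."""
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[a] for a in attrs)
        groups[key].add(row[decision])
    consistent = {k for k, ds in groups.items() if len(ds) == 1}
    pos = sum(1 for row in rows
              if tuple(row[a] for a in attrs) in consistent)
    return pos / len(rows)

# Tiny hypothetical discretised weather table (illustrative only).
table = [
    {"sky": "clear",  "humidity": "low",  "radiation": "high"},
    {"sky": "clear",  "humidity": "high", "radiation": "med"},
    {"sky": "cloudy", "humidity": "low",  "radiation": "med"},
    {"sky": "cloudy", "humidity": "high", "radiation": "low"},
    {"sky": "clear",  "humidity": "low",  "radiation": "high"},
    {"sky": "cloudy", "humidity": "low",  "radiation": "low"},
]

full = dependency_degree(table, ["sky", "humidity"], "radiation")
reduced = dependency_degree(table, ["sky"], "radiation")
print(full, reduced)
```

Used as a fitness function, attribute reduction searches for the smallest subset whose γ stays (close to) that of the full attribute set; here dropping "humidity" collapses γ from 4/6 to 0, so it cannot be removed.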
APA, Harvard, Vancouver, ISO, and other styles
14

Petrov, Ivan, and Toni Janevski. "5G Mobile Technologies and Early 6G Viewpoints." European Journal of Engineering Research and Science 5, no. 10 (October 14, 2020): 1240–46. http://dx.doi.org/10.24018/ejers.2020.5.10.2169.

Full text
Abstract:
The design of each successor mobile technology assures improved and advanced functionality compared with its predecessor. Machine Learning, and Artificial Intelligence (AI) more generally, are becoming a necessity for further expansion of the beyond-5G mobile world. AI-assisted IoT services, data collection, analytics and storage should become native in the beyond-5G era. 5G introduces New Radio (NR) in sub-6 GHz bands and in mmWave bands above 24 GHz, together with network virtualization and softwarization, which means that the Next Generation Core and the 5G NR access network are built from different functions in split user and control planes, introducing the network slicing approach. Enhanced Mobile Broadband (eMBB), massive Machine Type Communication (mMTC) and Ultra-Reliable Low-Latency Communication (URLLC), provided via separate network slices as logically separated network partitions, are the key 5G services that will constantly increase traffic volume and the number of connected devices. Terahertz and visible light communication, fundamental technologies such as compressed sensing theory, new channel coding, large-scale antennas, flexible spectrum usage and AI-based wireless communication, and special technical features such as Space-Air-Ground-Sea integrated communication and wireless tactile networks are a few of the novelties expected to become a common network standard beyond 2030.
APA, Harvard, Vancouver, ISO, and other styles
15

Petrov, Ivan, and Toni Janevski. "5G Mobile Technologies and Early 6G Viewpoints." European Journal of Engineering and Technology Research 5, no. 10 (October 14, 2020): 1240–46. http://dx.doi.org/10.24018/ejeng.2020.5.10.2169.

Full text
Abstract:
The design of each successor mobile technology assures improved and advanced functionality compared with its predecessor. Machine Learning, and Artificial Intelligence (AI) more generally, are becoming a necessity for further expansion of the beyond-5G mobile world. AI-assisted IoT services, data collection, analytics and storage should become native in the beyond-5G era. 5G introduces New Radio (NR) in sub-6 GHz bands and in mmWave bands above 24 GHz, together with network virtualization and softwarization, which means that the Next Generation Core and the 5G NR access network are built from different functions in split user and control planes, introducing the network slicing approach. Enhanced Mobile Broadband (eMBB), massive Machine Type Communication (mMTC) and Ultra-Reliable Low-Latency Communication (URLLC), provided via separate network slices as logically separated network partitions, are the key 5G services that will constantly increase traffic volume and the number of connected devices. Terahertz and visible light communication, fundamental technologies such as compressed sensing theory, new channel coding, large-scale antennas, flexible spectrum usage and AI-based wireless communication, and special technical features such as Space-Air-Ground-Sea integrated communication and wireless tactile networks are a few of the novelties expected to become a common network standard beyond 2030.
APA, Harvard, Vancouver, ISO, and other styles
16

West, Adam, John Clifford, and David Atkinson. "“Alexa, Build Me a Brand” — An Investigation into the Impact of Artificial Intelligence on Branding." Journal of Business and Economics 9, no. 10 (October 22, 2018): 877–87. http://dx.doi.org/10.15341/jbe(2155-7950)/10.09.2018/005.

Full text
Abstract:
Brands are built by “wrapping mediocre products in emotive and social associations” (Galloway, 2016). Nike and Coca-Cola differentiate through the emotional benefits associated with their brands, not their products’ functional benefits, with the latter long considered the world’s most valuable brand (Interbrand, 2016). This brand-building model has not been scrutinised in an environment where technology is a primary driver of organisational success, not merely a support function (E&Y, 2011). Artificial Intelligence (AI) has made “giant leaps” (Hosea, 2016): algorithms fly our planes and beat us at chess. Organisational spending on AI is set to reach $47 billion by 2020 (Ismail, 2017), with many (32%) claiming its biggest impact will be in marketing. Marketing communities conjecture that AI will “revolutionise” marketing (John, 2015), and while companies like Amazon appear to use a different model, utilizing AI to fulfil customers’ functional needs (commerce), AI’s impact on brand has seldom been explored in an academic context. This paper aims to establish the implementation of AI as a source of brand success, recommending to marketing professionals how to allocate resources to sustain brand effectiveness. Grounded theory research was used; semi-structured interviews were conducted and data collection/analysis was done concurrently. There were three major findings: AI can improve operational efficiency, improving the consistency with which a brand delivers its promise; Natural Language Processing (NLP) can improve elements of customer service; and Machine Learning enables personalized offerings, though organizations are limited by data quality/quantity and knowledge of the technology’s applications.
APA, Harvard, Vancouver, ISO, and other styles
17

Schwan, Constanze, and Wolfram Schenck. "A three-step model for the detection of stable grasp points with machine learning." Integrated Computer-Aided Engineering 28, no. 4 (August 27, 2021): 349–67. http://dx.doi.org/10.3233/ica-210659.

Full text
Abstract:
Robotic grasping in dynamic environments is still one of the main challenges in automation tasks. Advances in deep learning methods and computational power suggest that the problem of robotic grasping can be solved by using a huge amount of training data and deep networks. Despite these huge accomplishments, the acceptance and usage in real-world scenarios is still limited. This is mainly due to the fact that the collection of the training data is expensive, and that the trained network is a black box. While the collection of the training data can sometimes be facilitated by carrying it out in simulation, the trained networks, however, remain a black box. In this study, a three-step model is presented that profits both from the advantages of using a simulation approach and deep neural networks to identify and evaluate grasp points. In addition, it even offers an explanation for failed grasp attempts. The first step is to find all grasp points where the gripper can be lowered onto the table without colliding with the object. The second step is to determine, for the grasp points and gripper parameters from the first step, how the object moves while the gripper is closed. Finally, in the third step, for all grasp points from the second step, it is predicted whether the object slips out of the gripper during lifting. By this simplification, it is possible to understand for each grasp point why it is stable and – just as important – why others are unstable or not feasible. All of the models employed in each of the three steps and the resulting Overall Model are evaluated. The predicted grasp points from the Overall Model are compared to the grasp points determined analytically by a force-closure algorithm, to validate the stability of the predicted grasps.
APA, Harvard, Vancouver, ISO, and other styles
18

Dos Reis Tomás, Cecília Cristina, and António Moreira Teixeira. "Ethical Challenges in the Use of Iot in Education: On the Path to Personalization." EDEN Conference Proceedings, no. 1 (October 21, 2020): 217–26. http://dx.doi.org/10.38069/edenconf-2020-rw-0024.

Full text
Abstract:
In research on the ethical challenges related to the Internet of Things (IoT) and the personalisation of the learning process, four key categories have been identified: Security, Privacy, Automation, and Interaction. Based on this framework, using Constructivist Grounded Theory (CGT), we conducted a study with twenty-one actors in the field, who reflected on the advantages, risks and challenges, creating and developing theoretical solutions from technological, pedagogical, and ethical-philosophical perspectives. Coupled with the challenge of interoperability on IoT highways, the educational process generates disadvantages associated with access, use, monitoring and ownership of data, as well as a standardization that amounts to “profiling” rather than personalization. This leads to problems such as exclusion, and to a redundancy of the human being in education through homogenization and determinism, which bring a loss of the sense of freedom, control and choice. The consequence is surveillance associated with corporativism and the loss of the notion of the Common Good in general and in education in particular. In this paper we discuss how IoT, algorithms and Artificial Intelligence (AI) linked to automation fall within profiling, and whether more artisanal solutions linked to human language, communication and relationships that enhance collaboration among multitudes lead to a stigmergic learning that supports a personalization of proximity. In this way we are invited to think of a symbiosis between the human being and the machine without the threat of its control, but with openness and access in education as advantages, the expansion of interaction and communication enhanced by automated processes in pursuit of personalization, and a distinction between the cost and the value of data, and between the value of collective data and that of personal data, among other challenges.
In the paper, we suggest the idea of a new social contract whose ethical dimension necessarily rests on the value of the Common Good, associated with justice, equity, equality, and inclusion.
APA, Harvard, Vancouver, ISO, and other styles
19

Kovacheva, Diana. "The New Subjects of Law – Are Artificial Intelligence Systems Already among Us?" De Jure 13, no. 2 (December 21, 2022): 227–49. http://dx.doi.org/10.54664/brem6290.

Full text
Abstract:
The study explores the issue of the legal personality and liability of artificial intelligence (AI) systems. A real AI should have a will and self-awareness, but, at this point, there are mainly systems with a collective “cloud” intelligence that is located outside of them and supported by people (Sofia, the chatbot Miraya, the chatbot Tai, the xenobots). It is important to clarify whether robots are still only a “means” or a “tool” that facilitates human life, or whether they already have qualities that make them independent entities. Currently, AI systems are treated as objects of law. Granting them legal personality similar to that of legal entities is not a solution either, because of their specific nature. If, in the future, intelligent systems become independent and emancipated from the human beings that created them, they could be considered a new specific subject – a legal person sui generis. The regulatory framework of international organizations in this area already places robots in the category of an “electronic person” (EU) and binds their legal status to the protection of basic human rights. At this point, a number of practical issues are yet to be resolved – identifiability, the establishment of a register, and keeping the data in it up to date. The possible granting of legal personality to AI systems, even a specific or limited one, raises the question of the rights of robots themselves (procedural legal capacity, property rights, labour rights, tax legal personality), as well as of the responsibility for damages and their compensation. One of the most important issues in the development of intelligent machines is the extent to which we should allow them to make autonomous or automated decisions. Algorithms that are initially set and related to the protection of fundamental human rights should be stable, or “locked” against changes by artificial intelligence systems in the course of their improvement and self-learning.
The issue of human control is important, especially in cases where decisions might affect human life, health, and social support. The rapid development of digital technologies should make us think about a future in which AI systems can deviate so much from the basic algorithms set by humans that joint and individual financial liability can be reached. The theory also discusses the issue of the applicability of criminal liability to robots.
APA, Harvard, Vancouver, ISO, and other styles
20

S Neves, Fabio, and Marc Timme. "Bio-inspired computing by nonlinear network dynamics—a brief introduction." Journal of Physics: Complexity 2, no. 4 (December 1, 2021): 045019. http://dx.doi.org/10.1088/2632-072x/ac3ad4.

Full text
Abstract:
The field of bio-inspired computing has established a new frontier for conceptualizing information processing, aggregating knowledge from disciplines as different as neuroscience, physics, computer science, and dynamical systems theory. The study of the animal brain has shown that no single neuron or neural circuit motif is responsible for intelligence or other higher-order capabilities. Instead, complex functions are created through a broad variety of circuits, each exhibiting an equally varied repertoire of emergent dynamics. How collective dynamics may contribute to computations is still not fully understood, even at the most elementary level. Here we provide a concise introduction to bio-inspired computing via nonlinear dynamical systems. We first provide a coarse overview of how the study of biological systems has catalyzed the development of artificial systems in several broad directions. Second, we discuss how understanding the collective dynamics of spiking neural circuits, and model classes thereof, may contribute to and inspire new forms of ‘bio-inspired’ computational paradigms. Finally, as a specific set of examples, we analyze in more detail bio-inspired approaches to computing discrete decisions based on multi-dimensional analogue input signals, via k-winners-take-all functions. This article may thus serve as a brief introduction to the qualitative variety and richness of dynamical bio-inspired computing models, starting broadly and focusing on a general example of computation from current research. We believe that understanding basic aspects of the variety of bio-inspired approaches to computation at the coarse level of first principles (instead of details about specific simulation models), and how they relate to each other, may provide an important step toward catalyzing novel approaches to autonomous and computing machines in general.
APA, Harvard, Vancouver, ISO, and other styles
21

Naghizadeh, Alireza, Wei-chung Tsao, Jong Hyun Cho, Hongye Xu, Mohab Mohamed, Dali Li, Wei Xiong, Dimitri Metaxas, Carlos A. Ramos, and Dongfang Liu. "In vitro machine learning-based CAR T immunological synapse quality measurements correlate with patient clinical outcomes." PLOS Computational Biology 18, no. 3 (March 18, 2022): e1009883. http://dx.doi.org/10.1371/journal.pcbi.1009883.

Full text
Abstract:
The human immune system consists of a highly intelligent network of billions of independent, self-organized cells that interact with each other. Machine learning (ML) is an artificial intelligence (AI) tool that automatically processes huge amounts of image data. Immunotherapies have revolutionized the treatment of blood cancer. Specifically, one such therapy involves engineering immune cells to express chimeric antigen receptors (CAR), which combine tumor antigen specificity with immune cell activation in a single receptor. To improve their efficacy and expand their applicability to solid tumors, scientists optimize different CARs with different modifications. However, predicting and ranking the efficacy of different "off-the-shelf" immune products (e.g., CAR or Bispecific T-cell Engager [BiTE]) and selecting clinical responders are challenging in clinical practice. Meanwhile, identifying the optimal CAR construct for a researcher to further develop toward a potential clinical application is limited by the current time-consuming, costly, and labor-intensive conventional tools used to evaluate efficacy. In particular, more than 30 years of immunological synapse (IS) research data demonstrate that T cell efficacy is not only controlled by the specificity and avidity of the tumor antigen and T cell interaction, but also depends on a collective process involving multiple adhesion and regulatory molecules, as well as the tumor microenvironment, spatially and temporally organized at the IS formed by cytotoxic T lymphocytes (CTL) and natural killer (NK) cells. The optimal function of cytotoxic lymphocytes (including CTL and NK) depends on IS quality. Recognizing the inadequacy of conventional tools and the importance of the IS in immune cell functions, we investigate a new strategy for assessing CAR-T efficacy by quantifying CAR IS quality using the glass-support planar lipid bilayer system combined with ML-based data analysis.
Previous studies by our group show that CAR-T IS quality correlates with antitumor activities in vitro and in vivo. However, the current manual quantification of IS quality is time-consuming and labor-intensive, with low accuracy, reproducibility, and repeatability. In this study, we develop a novel ML-based method to quantify thousands of CAR cell IS images with enhanced accuracy and speed. Specifically, we used artificial neural networks (ANN) to incorporate object detection into segmentation. The proposed ANN model extracts the most useful information to differentiate different IS datasets. The network output is flexible and produces bounding boxes, instance segmentations, contour outlines (borders), border intensities, and segmentations without borders. Depending on the requirements, one or a combination of these outputs is used in statistical analysis. The CAR-T IS data quantified by the ML-based automated algorithm correlate with clinical responders and non-responders treated with Kappa-CAR-T cells directly from patients. The results suggest that CAR cell IS quality can serve as a potential composite biomarker that correlates with antitumor activities in patients, and is sufficiently discriminative to warrant further testing of CAR IS quality as a clinical biomarker for predicting response to CAR immunotherapy in cancer. For translational research, the method developed here can also provide guidelines for designing and optimizing numerous CAR constructs for potential clinical development. Trial Registration: ClinicalTrials.gov NCT00881920.
APA, Harvard, Vancouver, ISO, and other styles
22

Peng, Zikang. "New Media Marketing Strategy Optimization in the Catering Industry Based on Deep Machine Learning Algorithms." Journal of Mathematics 2022 (February 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/5780549.

Full text
Abstract:
With the in-depth development of new-generation network technologies such as the Internet, big data, and cloud intelligence, people can obtain massive amounts of information on mobile phones or mobile platforms. The era of big data has arrived, which raises questions for the development of corporate marketing. With the development of Internet technology, people spend longer and longer periods of time on mobile terminals, and new media has gradually become the mainstream of the media arena. It has distinctive features such as freedom in finding audiences, diverse content forms, and timeliness of information release, which have changed the traditional marketing model and have a profound impact on the development of the market. Drawing on theories of new media, marketing, and catering-industry marketing strategy, this article studies the related concepts and characteristics of new media, clarifies the impact of its development on the catering industry and its audience groups, and examines the industry's development factors in the new media environment from multiple dimensions. Combined with marketing theory, it puts forward suggestions for catering companies to use new media for marketing planning in product innovation, improving information channels, creating network events and topics, and promoting innovation and health in the catering industry. A marketing strategy based on deep machine learning algorithms is also proposed. It includes a cloud server that communicates with the e-commerce software platform and records the input of physical sales. The cloud server is connected to data collection, data processing, and communication modules; the communication module is connected to a deep machine learning algorithm system, which in turn communicates with a sales platform.
The sales platform is connected to advertising settings and advertisement delivery, and the delivery component is connected to an algorithm for selecting advertising delivery methods, which in turn communicates with the cloud server. The article uses deep machine learning algorithms to process the data so that the information is clear and easy to view; the advertisement delivery algorithm then computes the best way of advertising and determines which advertisements to deliver.
APA, Harvard, Vancouver, ISO, and other styles
23

Oruganti, Yagna. "Technology Focus: Data Analytics (October 2021)." Journal of Petroleum Technology 73, no. 10 (October 1, 2021): 60. http://dx.doi.org/10.2118/1021-0060-jpt.

Full text
Abstract:
With a moderate- to low-oil-price environment being the new normal, improving process efficiency, thereby leading to hydrocarbon recovery at reduced costs, is becoming the need of the hour. The oil and gas industry generates vast amounts of data that, if properly leveraged, can generate insights that lead to recovering hydrocarbons with reduced costs, better safety records, lower costs associated with equipment downtime, and reduced environmental footprint. Data analytics and machine-learning techniques offer tremendous potential in leveraging the data. An analysis of papers in OnePetro from 2014 to 2020 illustrates the steep increase in the number of machine-learning-related papers year after year. The analysis also reveals reservoir characterization, formation evaluation, and drilling as domains that have seen the highest number of papers on the application of machine-learning techniques. Reservoir characterization in particular is a field that has seen an explosion of papers on machine learning, with the use of convolutional neural networks for fault detection, seismic imaging and inversion, and the use of classical machine-learning algorithms such as random forests for lithofacies classification. Formation evaluation is another area that has gained a lot of traction with applications such as the use of classical machine-learning techniques such as support vector regression to predict rock mechanical properties and the use of deep-learning techniques such as long short-term memory to predict synthetic logs in unconventional reservoirs. Drilling is another domain where a tremendous amount of work has been done with papers on optimizing drilling parameters using techniques such as genetic algorithms, using automated machine-learning frameworks for bit dull grade prediction, and application of natural language processing for stuck-pipe prevention and reduction of nonproductive time. 
As the application of machine learning toward solving various problems in the upstream oil and gas industry proliferates, explainable artificial intelligence, or machine-learning interpretability, becomes critical for data scientists and business decision-makers alike. Data scientists need the ability to explain machine-learning models to executives and stakeholders to verify hypotheses and build trust in the models. One of the three highlighted papers used Shapley additive explanations, a game-theory-based approach to explaining machine-learning outputs, to provide a layer of interpretability to their machine-learning model for identification of geomechanical facies along horizontal wells. A cautionary note: While there is significant promise in applying these techniques, there remain many challenges in capitalizing on the data—lack of common data models in the industry, data silos, data stored in on-premises resources, slow migration of data to the cloud, legacy databases and systems, lack of digitization of older/legacy reports and well logs, and lack of standardization in data-collection methodologies across different facilities and geomarkets, to name a few. I would like to invite readers to review the selection of papers to get an idea of various applications in the upstream oil and gas space where machine-learning methods have been leveraged. The highlighted papers cover the topics of fatigue damage of marine risers, well-performance optimization, and identification of frackable, brittle, and producible rock along horizontal wells using drilling data. Recommended additional reading at OnePetro: www.onepetro.org. SPE 201597 - Improved Robustness in Long-Term Pressure-Data Analysis Using Wavelets and Deep Learning by Dante Orta Alemán, Stanford University, et al.
SPE 202379 - A Network Data Analytics Approach to Assessing Reservoir Uncertainty and Identification of Characteristic Reservoir Models by Eugene Tan, the University of Western Australia, et al. OTC 30936 - Data-Driven Performance Optimization in Section Milling by Shantanu Neema, Chevron, et al.
APA, Harvard, Vancouver, ISO, and other styles
24

Sugiono, Sugiono, Renaldi P. Prasetya, Angga A. Fanani, and Amanda N. Cahyawati. "PREDICTING THE MENTAL STRESS LEVEL OF DRIVERS IN A BRAKING CAR PROCESS USING ARTIFICIAL INTELLIGENCE." Acta Neuropsychologica 20, no. 1 (February 23, 2022): 1–15. http://dx.doi.org/10.5604/01.3001.0015.7716.

Full text
Abstract:
Reducing the physical and mental weariness of drivers is significant in improving healthy and safe driving. This paper aims to predict the stress level of drivers while braking under various track conditions. By discovering drivers’ mental stress levels, the distance to the vehicle ahead can be adjusted safely and comfortably. The initial step was a study of Artificial Intelligence (AI), electroencephalography (EEG), safe braking distance, and the theory of mental stress. The data were collected by directly measuring drivers’ stress levels with an EEG tool. The respondents were five drivers, around 30–50 years old, each with more than 5 years of driving experience. The research assembled 400 records of braking data, including velocity before braking, track variety (city road, rural road, residential road, and toll road), braking distance, stress level (EEG), and focus (EEG). The resulting database served as input to a machine learning (AI) Back Propagation Neural Network (BPNN) in order to predict drivers’ mental stress levels. According to the collected data, each road type yielded different values of mental stress and focus. City road drivers used an average velocity of 23.24 km/h with an average braking distance of 11.17 m, which generated an average stress level of 53.44 and a focus value of 45.76. Under other conditions, city road drivers generated a stress level of 52.11, rural road drivers 48.65, and toll road drivers 50.23. BPNN training with one hidden layer, 17 neurons, transfer functions (linear and sigmoid), and optimization with a Genetic Algorithm (GA) obtained a Mean Square Error (MSE) of 0.00537. Road infrastructure, driving behavior, and emerging hazards all contributed to increasing drivers’ stress levels and concentration demands.
It may be concluded that the available data and the chosen BPNN structure were appropriate for training and can be used to predict drivers’ focus and mental stress levels. This AI module is useful for feeding data into a car's braking safety system, complementing the existing technical factors with these mental factors.
APA, Harvard, Vancouver, ISO, and other styles
25

Nabi, Rebwar M. "Multiclass Classifier for Stock Price Prediction." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 4157–69. http://dx.doi.org/10.17762/turcomat.v12i3.1707.

Full text
Abstract:
The stock market has been a crucial factor of investments in the financial domain. Risk modeling and profit generation rely heavily on the sophisticated and intricate task of stock-movement prediction. Stock price forecasting is a complex task that can have a significant influence on the financial market. Machine learning (ML), a branch of artificial intelligence (AI), provides more accurate forecasts for binary and multiclass classification. Various effective methods have been recommended for the binary classification case, but the multiclass case is a more delicate one. This paper discusses the application of multiclass classifier mappings such as One-vs-All (OvA) and One-vs-One (OvO) to stock-movement prediction. The proposed approach comprises four main steps: data collection; assignment of a multiclass label (up, down, or same); discovery of the best classifier methods; and comparison of classifiers on evaluation metrics under 10-fold cross-validation for stock price movement. In this study, a daily NASDAQ stock dataset covering about ten years for ten companies, obtained from Yahoo Finance, is used. The resulting stock price prediction reveals that the Neural Network classifier performs well in some cases, whereas Multiclass (One-vs-One) and (One-vs-All) have better overall performance than all other classifiers, including AdaBoost, Support Vector Machine, OneR, Bagging, Simple Logistic, Hoeffding trees, PART, Decision Tree, and Random Forest. Comparison of Precision, Recall, F-Measure, and ROC area shows that Multiclass (One-vs-All) is better than Multiclass (One-vs-One). The proposed Multiclass (One-vs-All) method yields an average prediction accuracy of 97.63% across all ten stock companies, with the highest accuracy, 98.7%, achieved for QCOM.
In the individual stock-wise evaluation, the Multiclass (One-vs-All) classifier achieves the highest accuracy among all classifiers, outperforming all recent proposals.
APA, Harvard, Vancouver, ISO, and other styles
26

Smirnov, Alexander, Andrew Ponomarev, Tatiana Levashova, and Nikolay Shilov. "Conceptual Framework of a Human-Machine Collective Intelligence Environment for Decision Support." Proceedings of the Bulgarian Academy of Sciences 75, no. 1 (February 2, 2022): 102–9. http://dx.doi.org/10.7546/crabs.2022.01.12.

Full text
Abstract:
The paper extends the understanding of collective intelligence to the problem-solving abilities of heterogeneous groups consisting of human participants and software services. It describes a conceptual framework for a new computational environment supporting such heterogeneous teams working on decision support problems. In particular, the paper discusses the most acute problems related to such heterogeneous collective intelligence – interoperability and self-organization. To address interoperability issues, the environment relies on multi-aspect ontologies and smart space-based interaction. To provide the necessary degree of self-organization, a guided self-organization approach is proposed. The proposed human-machine collective intelligence environment can improve decision-making in many complex areas requiring collective effort and dynamic adaptation to changing situations.
APA, Harvard, Vancouver, ISO, and other styles
27

Smirnov, Alexander, Tatiana Levashova, and Andrew Ponomarev. "Decision support based on human-machine collective intelligence: state-of-the-art and conceptual model." Information and Control Systems, no. 2 (April 20, 2020): 60–70. http://dx.doi.org/10.31799/1684-8853-2020-2-60-70.

Full text
Abstract:
Introduction: Due to the development of information and communication technologies and artificial intelligence, human-machine computing systems are becoming more widely used. However, in the vast majority of developments in this area, the human in fact plays the role of a “computing device” who can only handle requests of a certain kind; human creativity and the ability to (self-)organize are thus largely discarded. Purpose: To develop a decision support concept based on the use of human-machine collective intelligence; to analyze the current state of the problem in the field of constructing flexible human-machine systems; and to propose a conceptual model of the environment on which such decision support systems can be built. Results: A conceptual model of decision support based on human-machine collective intelligence is proposed. Its central concepts are: a) the problem at which the human-machine collective activity is aimed; b) the collective of machines and people interacting through the environment to solve the problem; and c) the process model, which describes the decision support process in terms of information collection and the development and evaluation of alternatives. Practical relevance: The developed model can serve as a basis for creating a new class of decision support systems that leverage the self-organization potential of human-machine collectives.
APA, Harvard, Vancouver, ISO, and other styles
28

Krafft, Peter M. "A Simple Computational Theory of General Collective Intelligence." Topics in Cognitive Science 11, no. 2 (June 13, 2018): 374–92. http://dx.doi.org/10.1111/tops.12341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Król, Dariusz, and Heitor Silvério Lopes. "Nature-inspired collective intelligence in theory and practice." Information Sciences 182, no. 1 (January 2012): 1–2. http://dx.doi.org/10.1016/j.ins.2011.10.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Shmalko, Elizaveta, Yuri Rumyantsev, Ruslan Baynazarov, and Konstantin Yamshanov. "Identification of Neural Network Model of Robot to Solve the Optimal Control Problem." Informatics and Automation 20, no. 6 (November 18, 2021): 1254–78. http://dx.doi.org/10.15622/ia.20.6.3.

Full text
Abstract:
To calculate the optimal control, a satisfactory mathematical model of the control object is required. Further, when the calculated controls are implemented on a real object, the same model can be used in robot navigation to predict the robot's position and correct sensor data; it is therefore important that the model adequately reflects the dynamics of the object. Model derivation is often time-consuming, and sometimes even impossible, using traditional methods. In view of the increasing diversity and extremely complex nature of control objects, including the variety of modern robotic systems, the identification problem, which allows a mathematical model of the control object to be built from input and output data about the system, is becoming increasingly important. The identification of nonlinear systems is of particular interest, since most real systems have nonlinear dynamics. Whereas system identification formerly consisted of selecting optimal parameters for a chosen structure, modern machine learning methods open up broader prospects and allow the identification process itself to be automated. In this paper, a wheeled robot with a differential drive is considered as a control object in the Gazebo simulation environment, currently the most popular software package for the development and simulation of robotic systems. The mathematical model of the robot is unknown in advance. The main problem is that the existing mathematical models do not correspond to the real dynamics of the robot in the simulator. The paper solves the problem of identifying a mathematical model of the control object using the machine learning technique of neural networks. A new mixed approach is proposed, based on the use of well-known simple models of the object and the identification of unaccounted-for dynamic properties of the object with a neural network trained on a collected sample.
To generate training data, a software package was written that automates the collection process using two ROS nodes. The neural network was trained with the PyTorch framework, and an open-source software package was created. The identified object model is then used to calculate the optimal control. The results of the computational experiment demonstrate the adequacy and performance of the resulting model. The presented approach, combining a well-known mathematical model with an additional identified neural network model, makes it possible to exploit the advantages of the accumulated physical apparatus while increasing its efficiency and accuracy through modern machine learning tools.
APA, Harvard, Vancouver, ISO, and other styles
31

Ruoslahti, Harri, and Bríd Davis. "Societal Impacts of Cyber Security Assets of Project ECHO." WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT 17 (January 11, 2022): 1274–83. http://dx.doi.org/10.37394/232015.2021.17.116.

Full text
Abstract:
Solutions at both the consumer and state levels have become increasingly vulnerable to sophisticated cyberattacks, e.g., malware, phishing, and attacks leveraging machine learning and artificial intelligence. As the adoption and integration of information technologies increase and solutions develop, the need to invest in cybersecurity is at an all-time high. Investment in cybersecurity is a chief priority within the European Union, and Project ECHO is one initiative that puts emphasis on devising, elaborating, implementing, and enhancing a series of technological solutions (assets) to counteract cyberattacks. The research problem of this study is what societal impacts the ECHO assets have as products, as knowledge use, and as benefits to society. The literature review includes theory and practice from academic papers, EU innovation project and professional reports, and some ECHO project workflows. Relevant academic theoretical approaches that provide a basis for this task are e-skills and training, Organisational Learning (OL), Societal Impact (SI), and Societal Impact Assessment (SIA). This is a qualitative pilot study that evaluates the usefulness of a Product/Knowledge/Benefit Societal Impact framework for assessing societal impacts. Data collection involved qualitative participatory observation of a co-creative expert hackathon workshop. The pilot study shows that the societal impact of ICT and AI solutions (e.g., the ECHO assets) can be examined through these three elements (product, knowledge use, societal benefit). It serves as a step toward validating this path and toward designing and selecting practical, rigorous, and relevant quantitative methodology to further the understanding of societal impact assessment of cyber-, e-, and AI-based solutions and services.
To incorporate societal impacts with cyber and e-skills, this study recommends developing and refining actual key performance indicators (KPIs) to provide a basis for rigorous and relevant qualitative and quantitative questionnaire-based inquiry into cyber-, e-, and AI-based solutions and services.
APA, Harvard, Vancouver, ISO, and other styles
32

De Liddo, Anna, Ágnes Sándor, and Simon Buckingham Shum. "Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study." Computer Supported Cooperative Work (CSCW) 21, no. 4-5 (December 17, 2011): 417–48. http://dx.doi.org/10.1007/s10606-011-9155-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sulis, William. "Contextuality in Neurobehavioural and Collective Intelligence Systems." Quantum Reports 3, no. 4 (September 25, 2021): 592–614. http://dx.doi.org/10.3390/quantum3040038.

Full text
Abstract:
Contextuality is often described as a unique feature of the quantum realm that distinguishes it fundamentally from the classical realm. This is not strictly true; the belief stems from decades of misapplication of Kolmogorov probability. Contextuality appears in Kolmogorov theory (observed in the inability to form joint distributions) and in non-Kolmogorov theory (observed in the violation of inequalities of correlations). Both forms of contextuality have been observed in psychological experiments, although the first form, known for decades, has mostly been ignored. The complex dynamics of neural systems (neurobehavioural regulatory systems) and of collective intelligence systems (social insect colonies) are described. These systems are contextual in the first sense, and possibly in the second as well. Process algebra, based on the Process Theory of Whitehead, describes systems that are generated, transient, open, interactive, and primarily information-driven, and seems ideally suited to modeling these systems. It is argued that these dynamical characteristics give rise to contextuality and non-Kolmogorov probability even though these are entirely classical systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Gavriushenko, Mariia, Olena Kaikova, and Vagan Terziyan. "Bridging human and machine learning for the needs of collective intelligence development." Procedia Manufacturing 42 (2020): 302–6. http://dx.doi.org/10.1016/j.promfg.2020.02.092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Moradi, Morteza, Mohammad Moradi, Farhad Bayat, and Adel Nadjaran Toosi. "Collective hybrid intelligence: towards a conceptual framework." International Journal of Crowd Science 3, no. 2 (August 30, 2019): 198–220. http://dx.doi.org/10.1108/ijcs-03-2019-0012.

Full text
Abstract:
Purpose Human or machine, which one is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in intellectual tasks is not yet a reality. Instead, to compensate for the weakness of machines in some (mostly cognitive) tasks, the idea of putting the human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new computing framework and comprehensively outlined. Design/methodology/approach According to the extensive acceptance and efficiency of crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the (complementary) idea of collective hybrid intelligence. In this regard, besides providing a brief review of the efforts made in the related contexts, conceptual foundations and building blocks of the proposed framework are delineated. Moreover, some discussion of architectural and realization issues is presented. Findings The paper describes the conceptual architecture, workflow and schematic representation of a new hybrid computing concept. Moreover, by introducing three sample scenarios, its benefits, requirements, practical roadmap and architectural notes are explained. Originality/value The major contribution of this work is introducing the conceptual foundations to combine and integrate the collective intelligence of humans and machines to achieve higher efficiency and (computing) performance. To the best of the authors’ knowledge, this is the first study in which such a promising integration is considered. Therefore, it is believed that the proposed computing concept could inspire researchers toward realizing such unprecedented possibilities in practical and theoretical contexts.
APA, Harvard, Vancouver, ISO, and other styles
36

Wolpert, D. H., and K. Tumer. "Collective Intelligence, Data Routing and Braess' Paradox." Journal of Artificial Intelligence Research 16 (June 1, 2002): 359–87. http://dx.doi.org/10.1613/jair.995.

Full text
Abstract:
We consider the problem of designing the utility functions of the utility-maximizing agents in a multi-agent system so that they work synergistically to maximize a global utility. The particular problem domain we explore is the control of network routing by placing agents on all the routers in the network. Conventional approaches to this task have the agents all use the Ideal Shortest Path routing Algorithm (ISPA). We demonstrate that in many cases, due to the side-effects of one agent's actions on another agent's performance, having agents use ISPAs is suboptimal as far as global aggregate cost is concerned, even when they are only used to route infinitesimally small amounts of traffic. The utility functions of the individual agents are not "aligned" with the global utility, intuitively speaking. As a particular example of this we present an instance of Braess' paradox in which adding new links to a network whose agents all use the ISPA results in a decrease in overall throughput. We also demonstrate that load-balancing, in which the agents' decisions are collectively made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to "side-effects", in this case of current routing decisions on future traffic. The mathematics of Collective Intelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects in multi-agent systems, both over time and space. We present key concepts from that mathematics and use them to derive an algorithm whose ideal version should have better performance than that of having all agents use the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated via empirical means (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm almost always avoids Braess' paradox.
APA, Harvard, Vancouver, ISO, and other styles
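The Braess' paradox instance described in the abstract above can be made concrete with a small numeric sketch. This uses the standard textbook four-node network rather than the paper's own routing setup, so the network and costs here are illustrative assumptions, not the authors' experiments:

```python
# Classic Braess network: one unit of traffic travels from s to t.
# Two routes, where x is the fraction of traffic on a congestion-sensitive edge:
#   route 1: s->a (cost x) then a->t (cost 1)
#   route 2: s->b (cost 1) then b->t (cost x)

def equilibrium_cost_without_link():
    # At user equilibrium the traffic splits evenly (x = 0.5 on each
    # congestion-sensitive edge), so each driver pays 0.5 + 1.
    x = 0.5
    return x + 1

def equilibrium_cost_with_link():
    # Add a zero-cost shortcut a->b. The route s->a->b->t (cost x + 0 + x)
    # is always individually no worse than the alternatives, so every
    # driver takes it; both congestion-sensitive edges then carry all
    # traffic (x = 1) and each driver pays 1 + 0 + 1.
    return 1 + 0 + 1

print(equilibrium_cost_without_link())  # 1.5
print(equilibrium_cost_with_link())     # 2
```

Adding the free link raises every driver's equilibrium cost from 1.5 to 2, mirroring the throughput decrease the paper demonstrates for ISPA-based routing.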
37

Bolognini, Maurizio. "The SMSMS Project: Collective Intelligence Machines in the Digital City." Leonardo 37, no. 2 (April 2004): 147–49. http://dx.doi.org/10.1162/0024094041139247.

Full text
Abstract:
The SMSMS project is a computer-based interactive installation that derives from the author's previous work, Computer sigillati, in which 200 machines have been programmed to produce an endless flow of random images and left to work indefinitely without being connected to a monitor. In SMSMS, one of the Computer sigillati programs is employed to create images that are visible and can be modified by the public using cell phones. SMSMS could be considered as either an exercise in collective intelligence or, in contrast, as a disturbance to the unpredictable working of the machine. Some implications concerning art and new technologies are discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Wamba-Taguimdje, Serge-Lopez, Samuel Fosso Wamba, Jean Robert Kala Kamdjoug, and Chris Emmanuel Tchatchouang Wanko. "Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects." Business Process Management Journal 26, no. 7 (May 12, 2020): 1893–924. http://dx.doi.org/10.1108/bpmj-10-2019-0411.

Full text
Abstract:
Purpose The main purpose of our study is to analyze the influence of Artificial Intelligence (AI) on firm performance, notably by building on the business value of AI-based transformation projects. This study was conducted using a four-step sequential approach: (1) analysis of AI and AI concepts/technologies; (2) in-depth exploration of case studies from a great number of industrial sectors; (3) data collection from the databases (websites) of AI-based solution providers; and (4) a review of AI literature to identify their impact on the performance of organizations while highlighting the business value of AI-enabled transformation projects within organizations. Design/methodology/approach This study has called on the theory of IT capabilities to seize the influence of AI business value on firm performance (at the organizational and process levels). The research process (responding to the research question, making discussions, interpretations and comparisons, and formulating recommendations) was based on a review of 500 case studies from IBM, AWS, Cloudera, Nvidia, Conversica, Universal Robots websites, etc. Studying the influence of AI on the performance of organizations, and more specifically, of the business value of such organizations’ AI-enabled transformation projects, required us to make an archival data analysis following three steps, namely the conceptual phase, the refinement and development phase, and the assessment phase. Findings AI covers a wide range of technologies, including machine translation, chatbots and self-learning algorithms, all of which can allow individuals to better understand their environment and act accordingly. Organizations have been adopting AI technological innovations with a view to adapting to or disrupting their ecosystem while developing and optimizing their strategic and competitive advantages.
AI fully expresses its potential through its ability to optimize existing processes and improve automation, information and transformation effects, but also to detect, predict and interact with humans. Thus, the results of our study have highlighted such AI benefits in organizations, and more specifically, its ability to improve on performance at both the organizational (financial, marketing and administrative) and process levels. By building on these AI attributes, organizations can, therefore, enhance the business value of their transformed projects. The same results also showed that organizations achieve performance through AI capabilities only when they use their features/technologies to reconfigure their processes. Research limitations/implications AI obviously influences the way businesses are done today. Therefore, practitioners and researchers need to consider AI as a valuable support or even a pilot for a new business model. For the purpose of our study, we adopted a research framework geared toward a more inclusive and comprehensive approach so as to better account for the intangible benefits of AI within organizations. In terms of interest, this study nurtures a scientific interest, which aims at proposing a model for analyzing the influence of AI on the performance of organizations, and at the same time, filling the associated gap in the literature. As for the managerial interest, our study aims to provide managers with elements to be reconfigured or added in order to take advantage of the full benefits of AI, and therefore improve organizations’ performance, the profitability of their investments in AI transformation projects, and some competitive advantage.
This study also allows managers to consider AI not as a single technology but as a set/combination of several different configurations of IT in the various company’s business areas because multiple key elements must be brought together to ensure the success of AI: data, talent mix, domain knowledge, key decisions, external partnerships and scalable infrastructure. Originality/value This article analyses case studies on the reuse of secondary data from AI deployment reports in organizations. The transformation of projects based on the use of AI focuses mainly on business process innovations and indirectly on those occurring at the organizational level. Thus, 500 case studies are examined to provide significant and tangible evidence about the business value of AI-based projects and the impact of AI on firm performance. More specifically, this article, through these case studies, exposes the influence of AI at both the organizational and process performance levels, while considering it not as a single technology but as a set/combination of several different configurations of IT in various industries.
APA, Harvard, Vancouver, ISO, and other styles
39

Hartmann, Douglas. "The Method of Democracy: John Dewey’s Theory of Collective Intelligence." Contemporary Sociology: A Journal of Reviews 51, no. 5 (August 23, 2022): 407–9. http://dx.doi.org/10.1177/00943061221116416w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sokolov, I. A. "Theory and practice in artificial intelligence." Вестник Российской академии наук 89, no. 4 (April 24, 2019): 365–70. http://dx.doi.org/10.31857/s0869-5873894365-370.

Full text
Abstract:
Artificial Intelligence is an interdisciplinary field that formed about 60 years ago at the intersection of mathematical methods, computer science, psychology, and linguistics. Artificial Intelligence is an experimental science and today features a number of internally designed theoretical methods: knowledge representation, modeling of reasoning and behavior, textual analysis, and data mining. Within the framework of Artificial Intelligence, novel scientific domains have arisen: non-monotonic logic, description logic, heuristic programming, expert systems, and knowledge-based software engineering. Increasing interest in Artificial Intelligence in recent years is related to the development of promising new technologies based on specific methods like knowledge discovery (or machine learning), natural language processing, autonomous unmanned intelligent systems, and hybrid human-machine intelligence.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Yingxu, Fakhri Karray, Sam Kwong, Konstantinos N. Plataniotis, Henry Leung, Ming Hou, Edward Tunstel, et al. "On the philosophical, cognitive and mathematical foundations of symbiotic autonomous systems." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2207 (August 16, 2021): 20200362. http://dx.doi.org/10.1098/rsta.2020.0362.

Full text
Abstract:
Symbiotic autonomous systems (SAS) are advanced intelligent and cognitive systems that exhibit autonomous collective intelligence enabled by coherent symbiosis of human–machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general-AI technologies that either function without human intervention or synergize humans and intelligent machines in coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviours. This paper explores the cognitive and mathematical foundations of SAS. The challenges to seamless human–machine interactions in a hybrid environment are addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, cognitive computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via autonomous knowledge learning systems that symbiotically work between humans and cognitive robots. This article is part of the theme issue ‘Towards symbiotic autonomous systems'.
APA, Harvard, Vancouver, ISO, and other styles
42

Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. "An Active Inference Model of Collective Intelligence." Entropy 23, no. 7 (June 29, 2021): 830. http://dx.doi.org/10.3390/e23070830.

Full text
Abstract:
Collective intelligence, an emergent phenomenon in which a composite system of multiple interacting agents performs at levels greater than the sum of its parts, has long compelled research efforts in social and behavioral sciences. To date, however, formal models of collective intelligence have lacked a plausible mathematical description of the relationship between local-scale interactions between autonomous sub-system components (individuals) and global-scale behavior of the composite system (the collective). In this paper we use the Active Inference Formulation (AIF), a framework for explaining the behavior of any non-equilibrium steady state system at any scale, to posit a minimal agent-based model that simulates the relationship between local individual-level interaction and collective intelligence. We explore the effects of providing baseline AIF agents (Model 1) with specific cognitive capabilities: Theory of Mind (Model 2), Goal Alignment (Model 3), and Theory of Mind with Goal Alignment (Model 4). These stepwise transitions in sophistication of cognitive ability are motivated by the types of advancements plausibly required for an AIF agent to persist and flourish in an environment populated by other highly autonomous AIF agents, and have also recently been shown to map naturally to canonical steps in human cognitive ability. Illustrative results show that stepwise cognitive transitions increase system performance by providing complementary mechanisms for alignment between agents’ local and global optima. Alignment emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives to agents’ behaviors (contra existing computational models of collective intelligence) or top-down priors for collective behavior (contra existing multiscale simulations of AIF). 
These results shed light on the types of generic information-theoretic patterns conducive to collective intelligence in human and other complex adaptive systems.
APA, Harvard, Vancouver, ISO, and other styles
43

Bartashevich, Palina, and Sanaz Mostaghim. "Multi-featured collective perception with Evidence Theory: tackling spatial correlations." Swarm Intelligence 15, no. 1-2 (May 22, 2021): 83–110. http://dx.doi.org/10.1007/s11721-021-00192-8.

Full text
Abstract:
Collective perception allows sparsely distributed agents to form a global view on a common spatially distributed problem without any direct access to global knowledge and only based on a combination of locally perceived information. However, the evidence gathered from the environment is often subject to spatial correlations and depends on the movements of the agents. The latter is not always easy to control, and the main question is how to share and to combine the estimated information to achieve the most precise global estimate in the least possible time. The current article aims at answering this question with the help of evidence theory, also known as Dempster–Shafer theory, applied to the collective perception scenario as a collective decision-making problem. We study the eight most common belief combination operators to address the arising conflict between different sources of evidence in a highly dynamic multi-agent setting, driven by modulation of positive feedback. In comparison with existing approaches, such as voter models, the presented framework operates on quantitative belief assignments of the agents based on the observation time of the options according to the agents’ opinions. The evaluated results on an extended benchmark set for multiple options (n > 2) indicate that the proportional conflict redistribution (PCR) principle allows a collective of small size (N = 20), occupying 3.5% of the surface, to successfully resolve the conflict between clustered areas of features and reach a consensus with almost 100% certainty up to n = 5.
APA, Harvard, Vancouver, ISO, and other styles
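The belief-combination step discussed in the abstract above can be illustrated with a minimal sketch of Dempster's classical rule of combination. Note that the paper evaluates eight operators, including PCR alternatives to this classical normalization; the two mass functions below are made-up examples, not data from the study:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, pool the products of
    their masses, and renormalize by the non-conflicting mass (1 - K)."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to the empty intersection
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B  # ignorance: mass on the whole frame {A, B}
m1 = {A: 0.6, AB: 0.4}  # source 1 leans toward option A
m2 = {B: 0.7, AB: 0.3}  # source 2 leans toward option B
m12 = combine(m1, m2)   # roughly {A: 0.31, B: 0.48, {A,B}: 0.21}
```

Here the conflicting mass (K = 0.6 × 0.7 = 0.42) is discarded and the rest rescaled; the PCR principle studied in the paper instead redistributes that conflicting mass proportionally back to the involved focal elements.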
44

Pescetelli, Niccolo. "A Brief Taxonomy of Hybrid Intelligence." Forecasting 3, no. 3 (September 1, 2021): 633–43. http://dx.doi.org/10.3390/forecast3030039.

Full text
Abstract:
As artificial intelligence becomes ubiquitous in our lives, so do the opportunities to combine machine and human intelligence to obtain more accurate and more resilient prediction models across a wide range of domains. Hybrid intelligence can be designed in many ways, depending on the role of the human and the algorithm in the hybrid system. This paper offers a brief taxonomy of hybrid intelligence, which describes possible relationships between human and machine intelligence for robust forecasting. In this taxonomy, biological intelligence represents one axis of variation, going from individual intelligence (one individual in isolation) to collective intelligence (several connected individuals). The second axis of variation represents increasingly sophisticated algorithms that can take into account more aspects of the forecasting system, from information to task to human problem-solvers. The novelty of the paper lies in the interpretation of recent studies in hybrid intelligence as precursors of a set of algorithms that are expected to be more prominent in the future. These algorithms promise to increase hybrid system’s resilience across a wide range of human errors and biases thanks to greater human-machine understanding. This work ends with a short overview for future research in this field.
APA, Harvard, Vancouver, ISO, and other styles
45

Qi, Xiaoya, Chuang Liu, Chen Fu, and Zhongxue Gan. "Theory of Collective Intelligence Evolution and Its Applications in Intelligent Robots." Chinese Journal of Engineering Science 20, no. 4 (2018): 101. http://dx.doi.org/10.15302/j-sscae-2018.04.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Remagnino, Paolo, Andrea Prati, Gian Luca Foresti, and Rita Cucchiara. "Expert environments: machine intelligence methods for ambient intelligence." Expert Systems 24, no. 5 (November 2007): 293–94. http://dx.doi.org/10.1111/j.1468-0394.2007.00434.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Atreides, Kyrtin. "Philosophy 2.0: Applying Collective Intelligence Systems and Iterative Degrees of Scientific Validation." Filozofia i Nauka Zeszyt specjalny, no. 10 (May 10, 2022): 49–70. http://dx.doi.org/10.37240/fin.2022.10.zs.3.

Full text
Abstract:
Methods of improving the state and rate of progress within the domain of philosophy using collective intelligence systems are considered. Through mASI systems, superintelligence, debiasing, and humanity’s current sum of knowledge may be applied to this domain in novel ways. Such systems may also serve to strongly facilitate new forms and degrees of cooperation and understanding between different philosophies and cultures. The integration of these philosophies directly into their own machine intelligence seeds as cornerstones could further serve to reduce existential risk while improving both ethical quality and performance.
APA, Harvard, Vancouver, ISO, and other styles
48

Toncic, Jason Christopher. "Advancing a critical artificial intelligence theory for schooling." Teknokultura. Revista de Cultura Digital y Movimientos Sociales 19, no. 1 (December 10, 2021): 13–24. http://dx.doi.org/10.5209/tekn.71136.

Full text
Abstract:
Artificial intelligence (AI) has gradually been integrated into major aspects of schooling and academic learning following breakthroughs in algorithmic machine learning over the past decade. Interestingly, history shows us that as new technologies become perceived as ‘normal’ they fade into uncritical aspects of institutions. Considering that schools produce and reproduce social practices and normative behavior through both explicit and implicit codes, the introduction of AI to classrooms can reveal much about schooling. Nevertheless, artificial intelligence technology (specifically new machine learning applications) has yet to be properly framed as a lens with which to critically analyze and interpret school-based inequities. Recent education discourse focuses more on practical applications of technology than on the institutional inequalities that are revealed when analyzing artificial intelligence technology in the classroom. Accordingly, this paper advances the case for a critical artificial intelligence theory as a valuable lens through which to examine institutions, particularly schools. On the cusp of machine learning artificial intelligence becoming widespread in schools’ academic and hidden curricula, establishing a practical epistemology of artificial intelligence may be particularly useful for researchers and scholars who are interested in what artificial intelligence says about school institutions and beyond.
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Fangzhen. "Machine Theorem Discovery." AI Magazine 39, no. 2 (July 1, 2018): 53–59. http://dx.doi.org/10.1609/aimag.v39i2.2794.

Full text
Abstract:
In this article, I propose a framework for machine theorem discovery and illustrate its use in discovering state invariants in planning domains and properties about Nash equilibria in game theory. I also discuss its potential use in program verification in software engineering. The main message of the article is that many AI problems can and should be formulated as machine theorem discovery tasks.
APA, Harvard, Vancouver, ISO, and other styles
50

Seralina, N. "INFORMATION SYSTEMS FOR MACHINE INTELLIGENCE TO AUTOMATED SOFTWARE TESTING." Herald of Kazakh-British technical university 18, no. 1 (March 1, 2021): 157–61. http://dx.doi.org/10.55452/1998-6688-2021-18-1-157-161.

Full text
Abstract:
Software development methods evolve rapidly, and testing plays a major role in building a good product. Many technologies are assembled into all aspects of performance based on software testing, and many advanced automation tools for test design and test validation build on artificial intelligence. The important thing is to focus on changes and to work on the basis of the collective reasoning of the test team and analogous teams. Quality-testing methods rely on the information available in the modern digital world, and businesses depend on new, fast processes to provide automatic testing of software. Applying such solutions in a financial organization increases the transparency of all steps of software development. These steps can raise the test-case pass rate and save time and money, while also effectively solving the problems of process scaling and errors. In this paper, we research information systems for machine intelligence for automated software testing. The aim is divided into tasks: the importance of artificial intelligence, the necessary software development stage of testing and quality control, and research on the main automation tools. We conclude that the use of artificial intelligence and machine learning automates repetitive processes and database usage, delivers a superb intellectual product, adapts to progressive learning algorithms, adds deeper analysis of multiple objects, and allows retrieving the maximum amount of data from databases.
APA, Harvard, Vancouver, ISO, and other styles
