To see the other types of publications on this topic, follow the link: Software-Driven medical technologies.

Journal articles on the topic 'Software-Driven medical technologies'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 45 journal articles for your research on the topic 'Software-Driven medical technologies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Sherif, Suheib, Wooi Haw Tan, Chee Pun Ooi, Abubaker Sherif, and Sarina Mansor. "LoRa driven medical adherence system." Bulletin of Electrical Engineering and Informatics 9, no. 6 (December 1, 2020): 2294–301. http://dx.doi.org/10.11591/eei.v9i6.2195.

Abstract:
Recently developed technologies have opened up many new possibilities for improving our standard of living. Medical assistance has long been a major research topic, and many efforts have been made to simplify the process of following treatment prescriptions. This paper summarizes the work done in developing a LoRa-driven medical adherence system to improve medicine adherence among the elderly. The designed system is composed of two parts: an embedded hardware device for patients to use at home, and a web application to manage all patients along with their medicines and keep track of their medicine intake history. LoRa wireless communication technology is used to connect all embedded devices to a central gateway that manages the network. Hardware and software tests have been conducted and showed strong performance in terms of LoRa network range and latency. In short, the proposed system offers a promising method of improving medicine adherence.
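The abstract does not specify the device-to-gateway payload format, so the following is purely a hypothetical sketch of how such an adherence device might pack an intake event into the small payloads LoRa favours. The field layout, event codes, and sizes are assumptions, not the paper's protocol.

```python
# Hypothetical payload for a medicine-intake event sent over LoRa.
import struct
import time

# little-endian: u16 device_id, u32 unix_time, u8 pill_slot, u8 event_code
EVENT_FORMAT = "<HIBB"
EVENT_TAKEN, EVENT_MISSED = 1, 2

def encode_event(device_id: int, pill_slot: int, event_code: int) -> bytes:
    """Pack an adherence event into an 8-byte LoRa payload."""
    return struct.pack(EVENT_FORMAT, device_id, int(time.time()), pill_slot, event_code)

def decode_event(payload: bytes) -> dict:
    """Unpack a payload on the gateway / web-application side."""
    device_id, unix_time, pill_slot, event_code = struct.unpack(EVENT_FORMAT, payload)
    return {"device_id": device_id, "time": unix_time,
            "slot": pill_slot, "taken": event_code == EVENT_TAKEN}

payload = encode_event(device_id=42, pill_slot=3, event_code=EVENT_TAKEN)
print(len(payload), decode_event(payload))   # 8 bytes -> decoded event dict
```

Keeping the payload this small matters because LoRa airtime and duty-cycle limits reward short frames.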
2

Amaral, Carolina, Maria Paiva, Ana Rita Rodrigues, Francisco Veiga, and Victoria Bell. "Global Regulatory Challenges for Medical Devices: Impact on Innovation and Market Access." Applied Sciences 14, no. 20 (October 12, 2024): 9304. http://dx.doi.org/10.3390/app14209304.

Abstract:
Medical devices play a crucial role in human health. These are instruments, machines or even software programs used to diagnose, treat, monitor or prevent health conditions. They are designed to help improve patients’ quality of life and range from simple items, such as thermometers, to more advanced technologies, such as pacemakers. In order to guarantee the safety and efficacy of medical devices intended for use on patients, the establishment of appropriate regulatory frameworks is crucial to ascertain whether devices function as intended, comply with safety standards and offer benefits that outweigh the associated risks. Depending on the country, different regulatory agencies are responsible for the evaluation of these products. The regulatory landscape for medical devices varies significantly across major markets, including the European Union, the United States of America and Japan, reflecting diverse approaches aimed at ensuring the safety and efficacy of medical technologies. However, these regulatory differences can contribute to a “medical device lag,” where disparities in approval processes and market entry timelines driven by strict regulatory requirements, increasing device complexity and the lack of global harmonization, result in delays in accessing innovative technologies. These delays impact patient access to cutting-edge medical devices and competitiveness in the market. This review aims to address the regulatory framework of medical devices and the approval requirements by the European Commission (EC), the Food and Drug Administration (FDA) and Pharmaceuticals and Medical Device Agency (PMDA).
3

Mani Padmanabhan, et al. "Topological Data Analysis for Software Test Cases Generation." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (November 5, 2023): 2046–53. http://dx.doi.org/10.17762/ijritcc.v11i9.9203.

Abstract:
The escalating development of digital technologies over the last several decades has given rise to an accompanying surge in data analysis. Software-based smart applications increase the importance of behaviour analysis. Software testing of expert systems such as electronic health records, health information systems, and software as a medical device (SaMD) aims to detect differences between expected behaviour and actual outcomes during healthcare expert system development. Test cases are the core resource of effective software testing, but generating them for expert systems raises the challenge of identifying the expected behaviour of a system whose decision logic is obtained via a data-driven paradigm. In traditional object-oriented software, the expected behaviour provides clear test cases; this does not hold for systems with changing flows, such as healthcare systems, so an intelligent test case generation approach is required for smart health systems. This research elaborates key performance indicators from massive data sets, node-link diagrams from decision trees, and test cases from adjacency matrices. The experimental results for healthcare expert systems provide empirical evidence that topological data analysis makes a compact contribution to software requirement validation.
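As a toy illustration of the pipeline the abstract sketches (decision tree, node-link diagram, adjacency matrix, test cases), the snippet below enumerates root-to-leaf paths of an invented triage tree encoded as an adjacency matrix; each path corresponds to one candidate test case. The tree, labels, and encoding are assumptions, not the paper's data.

```python
# Invented triage tree. Nodes: 0=fever? 1=rash? 2=refer 3=monitor 4=home-care
adj = [
    [0, 1, 0, 1, 0],   # fever? -> rash? (yes), monitor (no)
    [0, 0, 1, 0, 1],   # rash?  -> refer (yes), home-care (no)
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
labels = ["fever?", "rash?", "refer", "monitor", "home-care"]

def root_to_leaf_paths(adj, node=0, path=None):
    """Each root-to-leaf path in the decision tree is one candidate test case."""
    path = (path or []) + [node]
    children = [j for j, edge in enumerate(adj[node]) if edge]
    if not children:          # leaf: emit the accumulated decision path
        yield path
        return
    for child in children:
        yield from root_to_leaf_paths(adj, child, path)

for i, path in enumerate(root_to_leaf_paths(adj), 1):
    print(f"test case {i}: " + " -> ".join(labels[n] for n in path))
```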
4

Kricka, Larry J. "History of disruptions in laboratory medicine: what have we learned from predictions?" Clinical Chemistry and Laboratory Medicine (CCLM) 57, no. 3 (February 25, 2019): 308–11. http://dx.doi.org/10.1515/cclm-2018-0518.

Abstract:
Predictions about the future of laboratory medicine have had mixed success, and in some instances they have been overambitious and have incorrectly assessed the future impact of emerging technologies. Current predictions suggest a more highly automated and connected future for diagnostic testing. The central laboratory of the future may be dominated by more robotics and more connectivity in order to take advantage of the benefits of the Internet of Things and artificial intelligence (AI)-based systems (e.g. decision support software and imaging analytics). For point-of-care testing, mobile health (mHealth) may be in the ascendancy, driven by healthcare initiatives from technology companies such as Amazon, Apple, Facebook, Google, IBM, Microsoft and Uber.
5

Parak, Roman, and Martin Juricek. "Intelligent Sampling of Anterior Human Nasal Swabs using a Collaborative Robotic Arm." MENDEL 28, no. 1 (June 30, 2022): 32–40. http://dx.doi.org/10.13164/mendel.2022.1.032.

Abstract:
Advanced robotics does not always have to be associated with Industry 4.0; it can also be applied, for example, in the Smart Hospital concept. Developments in this field have been driven by the coronavirus disease (COVID-19), and any improvement in the work of medical staff is welcome. In this paper, an experimental robotic platform was designed and implemented whose main function is swabbing samples from the nasal vestibule. The robotic platform represents a complete integration of software and hardware, where the operator has access to a web-based application and can control a number of functions. The increased safety and collaborative approach cannot be overlooked. The result of this work is a functional prototype of the robotic platform that can be further extended, for example, by using alternative technologies, extending patient safety, or conducting clinical tests and studies. Code is available at https://github.com/Steigner/Robo_Medicinae_I
6

Chmielewski, Mariusz, Damian Frąszczak, and Dawid Bugajewski. "Architectural concepts for managing biomedical sensor data utilised for medical diagnosis and patient remote care." MATEC Web of Conferences 210 (2018): 05016. http://dx.doi.org/10.1051/matecconf/201821005016.

Abstract:
This paper discusses experiences and architectural concepts, developed and tested, aimed at the acquisition and processing of biomedical data in a large-scale system for monitoring elderly patients. Major assumptions for the research included the utilisation of wearable and mobile technologies supporting the maximum amount of inertial and biomedical data for decision algorithms. Although medical diagnostics and decision algorithms have not been the main aim of the research, this preliminary phase was crucial for testing the capabilities of existing off-the-shelf technologies and the functional responsibilities of the system's logic components. The architecture variants contained several data-processing schemes, moving the responsibility for signal feature extraction, data classification and pattern recognition from wearable to mobile and up to server facilities. Analysis of transmission and processing delays provided the pros and cons of the architecture variants and, most of all, knowledge about their applicability in the medical, military and fitness domains. To evaluate and construct the architecture, a set of alternative technology stacks and quantitative measures was defined. The major architecture characteristics (high availability, scalability, reliability) were defined, imposing asynchronous processing of sensor data, efficient data representation, iterative reporting, event-driven processing, and restricted pulling operations. Sensor data processing persists the original data on handhelds but is mainly aimed at extracting a chosen set of signal features calculated for specific time windows, which vary with the analysed signals and the sensor data acquisition rates. Long-term monitoring of patients also requires mechanisms that probe the patient and, upon detecting anomalies or drastic changes in characteristics, tune the data acquisition process. This paper describes experiences connected with the design of a scalable decision support tool and evaluation techniques for the architectural concepts implemented within the mobile and server software.
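The windowed feature extraction described above can be made concrete with a minimal sketch (not the authors' implementation): features computed over fixed-length windows whose sample count follows from the acquisition rate. The feature set and parameters are assumptions.

```python
# Minimal sketch of per-window signal-feature extraction.
import numpy as np

def window_features(signal: np.ndarray, rate_hz: float, window_s: float) -> list:
    """Split a 1-D sensor signal into fixed windows and extract simple features."""
    n = int(rate_hz * window_s)          # samples per window, from the rate
    features = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        features.append({
            "t_start_s": start / rate_hz,
            "mean": float(w.mean()),
            "std": float(w.std()),
            "rms": float(np.sqrt(np.mean(w ** 2))),
        })
    return features

# e.g. 10 s of a 50 Hz accelerometer axis, 2 s windows -> 5 feature rows
sig = np.random.default_rng(0).normal(size=500)
print(len(window_features(sig, rate_hz=50, window_s=2.0)))   # 5
```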
7

Apanisile, Temitope, and Joshua Ayobami Ayeni. "Development of an Extended Medical Diagnostic System for Typhoid and Malaria Fever." Artificial Intelligence Advances 5, no. 1 (September 26, 2023): 28–40. http://dx.doi.org/10.30564/aia.v5i1.5505.

Abstract:
In developing countries like Nigeria, malaria and typhoid fever are major health challenges in society today. Their symptoms vary and include prolonged fever, fatigue, nausea, and headaches; they can lead to other illnesses, and there is a risk of both infections occurring concurrently if they are not properly diagnosed and treated. There is a strong need for cost-effective technologies to manage disease processes and reduce morbidity and mortality in developing countries. Some of the challenging issues confronting healthcare are the lack of proper data processing and delays in the dissemination of health information, which often cause delays in the provision of results and poor quality of service delivery. This paper addresses the weaknesses of the existing system through the development of an Artificial Intelligence (AI) driven extended diagnostic system (EDS). The dataset was obtained from patients' historical records at the Lagos University Teaching Hospital (LUTH) and contained two hundred and fifty (250) records with five (5) attributes: risk level, gender, symptom 1, symptom 2, and ailment type. The malaria and typhoid dataset was pre-processed and cleansed to remove unwanted data and information. The EDS was developed using the Naive Bayes technique and implemented using software development tools. The performance of the system was evaluated using standard metrics based on true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts. The performance of the EDS was substantially significant for both malaria and typhoid fevers.
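Since the paper names its technique (Naive Bayes over risk level, gender, symptom 1, symptom 2) and its metrics (TP/TN/FP/FN), a compact sketch of that setup is possible; the toy records and integer encodings below are invented stand-ins for the 250 LUTH records.

```python
# Hedged sketch of a categorical Naive Bayes diagnostic classifier.
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import confusion_matrix

# columns: risk (0=low,1=high), gender (0/1), symptom1, symptom2 (encoded)
X = [[1, 0, 2, 1], [0, 1, 1, 0], [1, 1, 2, 2], [0, 0, 0, 1],
     [1, 0, 2, 2], [0, 1, 1, 1], [1, 1, 0, 2], [0, 0, 1, 0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = typhoid, 0 = malaria (illustrative)

model = CategoricalNB().fit(X, y)
pred = model.predict(X)        # evaluated on training data, for illustration only
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")   # the counts behind the paper's metrics
```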
8

Mazumder, Rajib, Muhammad Anwar Hossain, and Aparna Chakraborty. "Smart Defense: How Self-Learning AI Can Shield Bangladeshi Medical Records." International Journal of Scientific Research and Management (IJSRM) 12, no. 05 (May 8, 2024): 1174–80. http://dx.doi.org/10.18535/ijsrm/v12i05.ec02.

Abstract:
The digitalization of healthcare records in Bangladesh presents both opportunities and challenges, particularly concerning the security and protection of sensitive patient information. As electronic health records (EHRs) become increasingly prevalent, the threat of cyberattacks targeting medical data escalates, necessitating innovative solutions to fortify the country's healthcare cybersecurity infrastructure. This paper investigates the efficacy of self-learning artificial intelligence (AI) systems in safeguarding Bangladeshi medical records against cyber threats. Traditional methods of securing medical records, such as firewalls and antivirus software, are proving inadequate against the evolving tactics of cybercriminals. Bangladesh faces unique challenges in this regard, including limited resources, a lack of cybersecurity awareness among healthcare professionals, technological fragmentation, and an increasingly sophisticated threat landscape. To address these challenges, there is a growing imperative to explore novel approaches that can adapt and evolve in real time to counter emerging cyber threats. Self-learning AI systems represent a promising frontier in healthcare cybersecurity. By leveraging advanced machine learning algorithms, these systems can analyze vast amounts of data to detect patterns indicative of cyber threats. Unlike static security measures, self-learning AI systems continuously learn from new information and adjust their defense strategies accordingly, enabling them to stay ahead of evolving threats. Key functionalities of self-learning AI include anomaly detection, threat prediction, and adaptive defense mechanisms, all of which are essential for safeguarding medical records in Bangladesh's healthcare landscape. The implications of integrating self-learning AI into Bangladesh's healthcare cybersecurity framework are significant. Not only can these technologies enhance the detection and prevention of cyber threats, but they can also alleviate the resource constraints and technical challenges faced by healthcare organizations. However, successful implementation requires comprehensive training, adherence to data privacy regulations, and ongoing monitoring to ensure the effectiveness and reliability of AI-driven security measures. The protection of medical records is paramount as Bangladesh continues its digital transformation in healthcare. Self-learning AI offers a dynamic and proactive approach to cybersecurity, empowering healthcare organizations to mitigate risks and preserve patient privacy in an increasingly digitized landscape. Embracing these innovative technologies is crucial for building a resilient healthcare ecosystem that prioritizes data security and patient trust.
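Anomaly detection is the first functionality the abstract names; one concrete (assumed) realisation is an Isolation Forest trained on routine EHR access events and used to flag outliers. The features, parameters, and data below are illustrative, not the paper's system.

```python
# Sketch: flagging unusual EHR access patterns with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# features per access event: [hour of day, records touched, failed logins]
normal = np.column_stack([rng.normal(11, 2, 300),      # daytime access
                          rng.poisson(3, 300),         # a few records each
                          rng.binomial(1, 0.02, 300)]) # rare login failures
suspicious = np.array([[3, 180, 4]])                   # 3 a.m. bulk export

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))   # [-1] -> flagged as anomalous
print(detector.predict(normal[:3]))   # mostly [1 1 1] -> treated as normal
```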
9

Venudhar Rao Hajari, Abhip Dilip Chawda, Punit Goel, A. Renuka, and Lagan Goel. "Embedded Systems Design for High-Performance Medical Applications." Journal of Quantum Science and Technology 1, no. 2 (August 31, 2024): 70–84. http://dx.doi.org/10.36676/jqst.v1.i3.28.

Abstract:
Advancements in patient care and diagnostic accuracy have been driven by the emergence of embedded systems, which has had a dramatic influence on the design and implementation of high-performance medical applications. This abstract explores the fundamental features of embedded system design that are specifically designed for medical applications. Particular attention is paid to the optimization of performance, the dependability of the system, and its integration within demanding healthcare contexts. When it comes to real-time processing, precision, and safety, embedded systems in medical devices are required to fulfill several severe standards. For the purpose of improving the performance of embedded systems that are used in high-performance medical applications, this article provides an overview of the important design considerations and tactics that are involved. Embedded systems are specialized computer systems that are situated inside a larger device and are responsible for performing certain duties. When it comes to the realm of medicine, these systems are an essential component of many technologies, including imaging machines, patient monitoring systems, and diagnostic instruments. The necessity for high dependability and real-time processing poses a unique set of issues when it comes to the design of embedded systems for use in medical applications. To guarantee the operation of the system, it is necessary to address concerns such as the amount of power used, the integrity of the data, and the capability to function in a variety of environments. Real-time processing is one of the most important characteristics to take into account when designing embedded systems for use in medical applications. In order to give rapid diagnosis and treatments, medical devices often need to respond immediately to sensor inputs or patient data. In order to achieve real-time processing, it is necessary to first optimize both the hardware and the software in order to reduce latency and guarantee correct data processing. It is very necessary to make use of sophisticated methods such as parallel processing and efficient algorithms in order to fulfill these criteria.
10

Ruiu, Pietro, Michele Nitti, Virginia Pilloni, Marinella Cadoni, Enrico Grosso, and Mauro Fadda. "Metaverse & Human Digital Twin: Digital Identity, Biometrics, and Privacy in the Future Virtual Worlds." Multimodal Technologies and Interaction 8, no. 6 (June 5, 2024): 48. http://dx.doi.org/10.3390/mti8060048.

Abstract:
Driven by technological advances in various fields (AI, 5G, VR, IoT, etc.) together with the emergence of digital twin technologies (HDT, HAL, BIM, etc.), the Metaverse has attracted growing attention from scientific and industrial communities. This interest is due to its potential impact on people's lives in different sectors such as education or medicine. Specific solutions can also increase the inclusion of people whose disabilities are an impediment to a fulfilled life. However, security and privacy concerns remain the main obstacles to its development. In particular, the data involved in the Metaverse can be comprehensive, with enough granularity to build a highly detailed digital copy of the real world, including a Human Digital Twin of a person. Existing security countermeasures are largely ineffective and lack adaptability to the specific needs of Metaverse applications. Furthermore, the virtual worlds in a large-scale Metaverse can be highly varied in terms of hardware implementation, communication interfaces, and software, which poses huge interoperability difficulties. This paper aims to analyse the risks and opportunities associated with adopting digital replicas of humans (HDTs) within the Metaverse and the challenges related to managing digital identities in this context. By examining the current technological landscape, we identify several open technological challenges that currently limit the adoption of HDTs and the Metaverse. Additionally, this paper explores a range of promising technologies and methodologies to assess their suitability within the Metaverse context. Finally, two example scenarios are presented in the medical and education fields.
11

Izonin, Ivan, and Nataliya Shakhovska. "Special issue: Informatics & data-driven medicine." Mathematical Biosciences and Engineering 18, no. 5 (2021): 6430–33. http://dx.doi.org/10.3934/mbe.2021319.

Abstract:
The current state of the development of medicine is changing dramatically. Previously, data on a patient's health were collected only during a visit to the clinic. These were small chunks of information obtained from observations or experimental studies by clinicians, and they were recorded on paper or in small electronic files. Advances in computing power and in hardware and software tools, and the consequent emergence of miniature smart devices for various purposes (flexible electronic devices, medical tattoos, stick-on sensors, biochips, etc.), make it possible to monitor various vital signs of patients in real time and collect such data comprehensively. There is steady growth of such technologies in various fields of medicine for disease prevention, diagnosis, and therapy. Because of this, clinicians have begun to face problems similar to those of data scientists: they need to perform many different tasks based on huge amounts of data, in some cases incomplete and uncertain, and in most others with complex, non-obvious connections that differ for each individual patient (observation), as well as a lack of time to solve them effectively. These factors significantly decrease the quality of decision making, which usually affects the effectiveness of diagnosis or therapy. That is why a new concept in medicine, widely known as Data-Driven Medicine, is arising. This approach, based on the IoT and Artificial Intelligence, makes it possible to efficiently process huge amounts of data of various types, stimulates new discoveries, and provides the integration and management of such information needed to enable precision medical care. Such an approach could create a new wave in health care: it will provide effective management of a huge amount of comprehensive information about the patient's condition, increase the speed of clinicians' assessments, and maintain highly accurate analysis based on digital tools and machine learning. The combined use of different digital devices and artificial intelligence tools will provide an opportunity to deeply understand disease, boost the accuracy and speed of its detection at early stages, and improve the modes of diagnosis. Such invaluable information stimulates new ways to choose patient-oriented preventions and interventions for each individual case.
12

Cavaleiro de Ferreira, Beatriz, Tiago Coutinho, Miguel Ayala Botto, and Susana Cardoso. "Development of an Inkjet Setup for Printing and Monitoring Microdroplets." Micromachines 13, no. 11 (October 31, 2022): 1878. http://dx.doi.org/10.3390/mi13111878.

Abstract:
Inkjet printing is a digitally controlled additive technology that allows the precise deposition of droplets. Because it is additive, it enables geometries usually unattainable by other technologies. Because it is digitally controlled, its output is easily modulated, even during operation. Combined with the development of functional materials and their micrometer precision, it can be applicable in a wide range of fields beyond the traditional graphic industry, such as medical diagnosis, electronics manufacturing, and the fabrication of microlenses. In this work, a solution based on open-source hardware and software was implemented instead of choosing a commercial alternative, making the most of inkjet flexibility in terms of inks, substrates, and actuation signal. First, a piezoelectric printhead from MicroFab, driven by an ArduinoDue, was mounted in a 3D printer adapted to ensure precise movement in three dimensions. Then, a monitoring system using a USB digital microscope and a computational algorithm was integrated. Both systems combined allow the printing and measurement of microdroplets by digital regulation of a unipolar signal. Finally, based on a theoretical model and a set of experimentally collected samples, the curve that relates the unipolar signal amplitude to the size of the microdroplets was estimated with an acceptable range of prediction uncertainty.
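The final step the abstract describes, estimating the curve relating drive-signal amplitude to droplet size from collected samples, can be sketched with an ordinary least-squares fit; the quadratic form and all data points below are assumptions for illustration.

```python
# Sketch: fitting droplet diameter vs. unipolar pulse amplitude.
import numpy as np

amplitude_v = np.array([18, 20, 22, 24, 26, 28, 30])        # pulse amplitude, V
diameter_um = np.array([31.0, 34.2, 37.1, 39.5, 42.2, 44.0, 46.3])

coeffs = np.polyfit(amplitude_v, diameter_um, deg=2)   # quadratic least squares
fit = np.poly1d(coeffs)

residuals = diameter_um - fit(amplitude_v)
sigma = residuals.std(ddof=3)          # rough scale of prediction uncertainty
print(f"d(25 V) ≈ {fit(25):.1f} ± {2 * sigma:.1f} µm")
```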
13

Kalabina, Elena G., Ekaterina A. Esina, and Svetlana N. Smirnykh. "Prototyping digitalization models of regional healthcare systems." Herald of Omsk University. Series: Economics 22, no. 2 (2024): 105–15. http://dx.doi.org/10.24147/1812-3988.2024.22(2).105-115.

Abstract:
To achieve the strategic goals of digital transformation, it is necessary to consider the digital maturity of organizations, industries, and regions. The initial level and potential for digitalization of different industries, regions, and organizations vary significantly. Recent global macroeconomic and social shocks have driven the development of digital services, technologies, and platforms in healthcare. These changes have led to a transformation in the operating models of public and private medical organizations. However, there are negative consequences to these changes, including a decrease in the availability of certain technologies, equipment, and software, as well as an increase in the time and financial costs of digital transformation. The interregional differentiation in digital health maturity depends not only on spatial differences in socio-economic, innovative, technological, and digital development, but also on other factors that need to be taken into account. We believe that the use of various healthcare digitalization models in different regions affects the level of digital maturity in this sector. The aim of the study is to identify typical models of digitalization in regional healthcare systems. Various approaches were used in the research, including the index method, statistical and comparative analysis, content analysis, expert assessments (both in-depth and free interviews), and others. The study utilized data from the Federal State Statistics Service, the Ministry of Digital Development of Russia, and the Unified Interdepartmental Information Statistics System. The focus of the study was on the Sverdlovsk and Tyumen regions. A comparative analysis of digitalization practices in these regions revealed two main models: "centralized" (the case of Tyumen) and "decentralized" (Sverdlovsk). We found that intermediate results for the "centralized" model are generally higher than for the "decentralized" one. A promising area of research is evaluating the final outcomes of the digital transformation of regional healthcare systems.
14

Csáki, Csilla. "Social Sustainability in Medicine: The Role of Artificial Intelligence in Future Doctor–Patient Communication. A Methodological Experiment." Acta Universitatis Sapientiae, Communicatio 9, no. 1 (December 1, 2022): 90–107. http://dx.doi.org/10.2478/auscom-2022-0007.

Abstract:
Social sustainability is a development alternative that focuses on preserving and sustaining opportunities and resources for future generations rather than exploiting them. In addition to resource management, it is important to emphasize the focus on human well-being, in which the provision of a healthy life is a key factor. One possible alternative to improve the quality, safety, and affordability of universal healthcare is to integrate artificial intelligence into the health system. The development of AI in healthcare has brought a paradigm shift, as big-data-driven analytics can enable AI itself to identify symptom complexities and communicate with patients. In this process, it is important to explore the attitudes of healthcare professionals towards AI-based technologies, as doctor–patient communication is moving away from authoritarianism towards partnership medicine, in which AI will be an integral part of communication. In my research, I investigate the attitudes towards AI of future doctors, i.e. medical students, and of doctors already in practice, using a hybrid research method of semi-structured interviews, photo collage techniques, and a questionnaire survey. The photo collage technique, due to its projective nature, can be used to reveal the respondent's underlying evoked memories and attitudes. The new image network (collage) can be used to model the doctor–patient–AI relationship envisioned by the doctors. The results highlight aspects of the application of AI in medicine and point out that it is not only the capabilities of the software but also the attitudes of the entire health stakeholder community that influence the uptake of innovation. The exploration of issues of authority and trust in the field provides an opportunity for the creation of educational and outreach programmes.
15

Zimmerman, William Denney, Melissa B. Pergakis, Emily F. Gorman, Melissa Motta, Peter H. Jin, Rachel Marie E. Salas, and Nicholas A. Morris. "Scoping Review: Innovations in Clinical Neurology Education." Neurology: Education 2, no. 1 (February 21, 2023): e200048. http://dx.doi.org/10.1212/ne9.0000000000200048.

Abstract:
Advances in adult learning theory and instructional technologies provide opportunities to improve neurology knowledge acquisition. This scoping review aimed to survey the emerging landscape of educational innovation in clinical neurology. With the assistance of a research librarian, we conducted a literature search on November 4, 2021, using the following databases: PubMed, Embase, Scopus, Cochrane Library, Education Resources Information Center, and PsycINFO. We included studies of innovative teaching methods for medical students through attending physician-level learners and excluded interventions for undergraduate students and established methods of teaching, as well as those published before 2010. Two authors independently reviewed all abstracts and full-text articles to determine inclusion. In the case of disagreement, a third author acted as arbiter. Study evaluation consisted of grading the level of outcomes using the Kirkpatrick model, assessing for the presence of key components of education innovation literature, and applying an author-driven global innovation rating. Among 3,830 identified publications, 350 (175 full texts and 175 abstracts) studies were selected for analysis. Only 13 studies were included from 2010 to 2011, with 98 from 2020 to 2021. The most common innovations were simulation (142), eLearning, including web-based software and video-based learning (78), 3-dimensional modeling/printing (34), virtual/augmented reality (26), podcasts/smartphone applications/social media (24), team-based learning (17), flipped classroom (17), problem-based learning (10), and gamification (9). Ninety-eight (28.0%) articles included a study design with a comparison group, but only 23 of those randomized learners to an intervention. Most studies relied on Kirkpatrick Level 1 and 2 outcomes—the perceptions of training by learners and acquisition of knowledge. The sustainability of the innovation, transferability of the innovation to a new context, and the explanation of the novel nature of the innovations were some of the least represented features. We rated most innovations as only slightly innovative. There has been an explosion of reports on educational methods in clinical neurology over the last decade, especially in simulation and eLearning. Unfortunately, most reports lack adequate assessment of the validity and effect of the respective innovation's merits, as well as details regarding sustainability and transferability to new contexts.
16

Prasser, Fabian, Oliver Kohlbacher, Ulrich Mansmann, Bernhard Bauer, and Klaus Kuhn. "Data Integration for Future Medicine (DIFUTURE)." Methods of Information in Medicine 57, S 01 (July 2018): e57-e65. http://dx.doi.org/10.3414/me17-02-0022.

Abstract:
Summary Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on the German Medical Informatics Initiative. Future medicine will be predictive, preventive, personalized, participatory and digital. Data and knowledge at comprehensive depth and breadth need to be available for research and at the point of care as a basis for targeted diagnosis and therapy. Data integration and data sharing will be essential to achieve these goals. For this purpose, the consortium Data Integration for Future Medicine (DIFUTURE) will establish Data Integration Centers (DICs) at university medical centers. Objectives: The infrastructure envisioned by DIFUTURE will provide researchers with cross-site access to data and support physicians by innovative views on integrated data as well as by decision support components for personalized treatments. The aim of our use cases is to show that this accelerates innovation, improves health care processes and results in tangible benefits for our patients. To realize our vision, numerous challenges have to be addressed. The objective of this article is to describe our concepts and solutions on the technical and the organizational level with a specific focus on data integration and sharing. Governance and Policies: Data sharing implies significant security and privacy challenges. Therefore, state-of-the-art data protection, modern IT security concepts and patient trust play a central role in our approach. We have established governance structures and policies safeguarding data use and sharing by technical and organizational measures providing highest levels of data protection. One of our central policies is that adequate methods of data sharing for each use case and project will be selected based on rigorous risk and threat analyses. Interdisciplinary groups have been installed in order to manage change. Architectural Framework and Methodology: The DIFUTURE Data Integration Centers will implement a three-step approach to integrating, harmonizing and sharing structured, unstructured and omics data as well as images from clinical and research environments. First, data is imported and technically harmonized using common data and interface standards (including various IHE profiles, DICOM and HL7 FHIR). Second, data is preprocessed, transformed, harmonized and enriched within a staging and working environment. Third, data is imported into common analytics platforms and data models (including i2b2 and tranSMART) and made accessible in a form compliant with the interoperability requirements defined on the national level. Secure data access and sharing will be implemented with innovative combinations of privacy-enhancing technologies (safe data, safe settings, safe outputs) and methods of distributed computing. Use Cases: From the perspective of health care and medical research, our approach is disease-oriented and use-case driven, i.e. following the needs of physicians and researchers and aiming at measurable benefits for our patients. We will work on early diagnosis, tailored therapies and therapy decision tools with focuses on neurology, oncology and further disease entities. Our early uses cases will serve as blueprints for the following ones, verifying that the infrastructure developed by DIFUTURE is able to support a variety of application scenarios. Discussion: Own previous work, the use of internationally successful open source systems and a state-of-the-art software architecture are cornerstones of our approach. 
In the conceptual phase of the initiative, we have already prototypically implemented and tested the most important components of our architecture.
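As a small illustration of the first, standards-based import step (and not DIFUTURE's actual code), the snippet below assembles a minimal HL7 FHIR R4 Patient resource as JSON; the identifier system URI and all values are placeholders.

```python
# Minimal HL7 FHIR R4 "Patient" resource built as plain JSON.
import json

patient = {
    "resourceType": "Patient",
    "id": "example-0001",                          # site-local, pseudonymized ID
    "gender": "female",
    "birthDate": "1956-04-12",
    "identifier": [{
        "system": "https://example.org/fhir/mrn",  # placeholder system URI
        "value": "MRN-48213",
    }],
}

print(json.dumps(patient, indent=2))   # payload for a FHIR server's /Patient endpoint
```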
17

Fontana, Filippo, Christoph Klahn, and Mirko Meboldt. "Value-driven clustering of industrial additive manufacturing applications." Journal of Manufacturing Technology Management 30, no. 2 (February 28, 2019): 366–90. http://dx.doi.org/10.1108/jmtm-06-2018-0167.

Abstract:
Purpose A prerequisite for the successful adoption of additive manufacturing (AM) technologies in industry is the identification of areas, where such technologies could offer a clear competitive advantage. The purpose of this paper is to investigate the unique value-adding characteristics of AM, define areas of viable application in a firm value chain and discuss common implications of AM adoption for companies and their processes. Design/methodology/approach The research leverages a multi-case-study approach and considers interviews with AM adopting companies from the Swiss and central European region in the medical and industrial manufacturing industries. The authors rely on a value chain model comprising a new product development process and an order fulfillment process (OFP) to analyze the benefits of AM technologies. Findings The research identifies and defines seven clusters within a firm value chain, where the application of AM could create benefits for the adopting company and its customers. The authors suggest that understanding the AM process chain and the design experience are key to explaining the heterogeneous industrial maturity of the presented clusters. The authors further examine the suitability of AM technologies with agile development techniques to pursue incremental product launches in hardware. It is clearly a field requiring the attention of scholars. Originality/value This paper presents a value-driven approach for use-case identification and reveals implications of the industrial implementation of AM technologies. The resultant clustering model provides guidance to new AM adopters.
18

Maté, Alejandro, Jesús Peral, Juan Trujillo, Carlos Blanco, Diego García-Saiz, and Eduardo Fernández-Medina. "Improving security in NoSQL document databases through model-driven modernization." Knowledge and Information Systems 63, no. 8 (July 13, 2021): 2209–30. http://dx.doi.org/10.1007/s10115-021-01589-x.

Abstract:
NoSQL technologies have become a common component in many information systems and software applications. These technologies are focused on performance, enabling scalable processing of large volumes of structured and unstructured data. Unfortunately, most developments over NoSQL technologies consider security as an afterthought, putting at risk personal data of individuals and potentially causing severe economic losses as well as reputation crises. In order to avoid these situations, companies require an approach that introduces security mechanisms into their systems without scrapping already in-place solutions to restart all over again the design process. Therefore, in this paper we propose the first modernization approach for introducing security in NoSQL databases, focusing on access control and thereby improving the security of their associated information systems and applications. Our approach analyzes the existing NoSQL solution of the organization, using a domain ontology to detect sensitive information and creating a conceptual model of the database. Together with this model, a series of security issues related to access control are listed, allowing database designers to identify the security mechanisms that must be incorporated into their existing solution. For each security issue, our approach automatically generates a proposed solution, consisting of a combination of privilege modifications, new roles and views to improve access control. In order to test our approach, we apply our process to a medical database implemented using the popular document-oriented NoSQL database, MongoDB. The great advantages of our approach are that: (1) it takes into account the context of the system thanks to the introduction of domain ontologies, (2) it helps to avoid missing critical access control issues since the analysis is performed automatically, (3) it reduces the effort and costs of the modernization process thanks to the automated steps in the process, (4) it can be used with different NoSQL document-based technologies in a successful way by adjusting the metamodel, and (5) it is lined up with known standards, hence allowing the application of guidelines and best practices.
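The generated fixes combine privilege modifications, new roles, and views; a hedged pymongo sketch of that pattern against a MongoDB medical database follows. The database, collection, field, and role names are invented for illustration, and a running MongoDB instance is assumed.

```python
# Sketch: a de-identified view plus a role restricted to reading only that view.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["clinic"]

# 1. A view over the raw collection that projects away sensitive fields.
db.command({
    "create": "patients_deidentified",
    "viewOn": "patients",
    "pipeline": [{"$project": {"ssn": 0, "address": 0}}],
})

# 2. A role whose only privilege is reading the view, never the raw data.
db.command({
    "createRole": "researchReader",
    "privileges": [{
        "resource": {"db": "clinic", "collection": "patients_deidentified"},
        "actions": ["find"],
    }],
    "roles": [],
})
```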
19

Sapci, A. Hasan, and H. Aylin Sapci. "Innovative Assisted Living Tools, Remote Monitoring Technologies, Artificial Intelligence-Driven Solutions, and Robotic Systems for Aging Societies: Systematic Review." JMIR Aging 2, no. 2 (November 29, 2019): e15429. http://dx.doi.org/10.2196/15429.

Abstract:
Background The increase in life expectancy and recent advancements in technology and medical science have changed the way we deliver health services to the aging societies. Evidence suggests that home telemonitoring can significantly decrease the number of readmissions, and continuous monitoring of older adults’ daily activities and health-related issues might prevent medical emergencies. Objective The primary objective of this review was to identify advances in assistive technology devices for seniors and aging-in-place technology and to determine the level of evidence for research on remote patient monitoring, smart homes, telecare, and artificially intelligent monitoring systems. Methods A literature review was conducted using Cumulative Index to Nursing and Allied Health Literature Plus, MEDLINE, EMBASE, Institute of Electrical and Electronics Engineers Xplore, ProQuest Central, Scopus, and Science Direct. Publications related to older people’s care, independent living, and novel assistive technologies were included in the study. Results A total of 91 publications met the inclusion criteria. In total, four themes emerged from the data: technology acceptance and readiness, novel patient monitoring and smart home technologies, intelligent algorithm and software engineering, and robotics technologies. The results revealed that most studies had poor reference standards without an explicit critical appraisal. Conclusions The use of ubiquitous in-home monitoring and smart technologies for aged people’s care will increase their independence and the health care services available to them as well as improve frail elderly people’s health care outcomes. This review identified four different themes that require different conceptual approaches to solution development. Although the engineering teams were focused on prototype and algorithm development, the medical science teams were concentrated on outcome research. We also identified the need to develop custom technology solutions for different aging societies. The convergence of medicine and informatics could lead to the development of new interdisciplinary research models and new assistive products for the care of older adults.
20

Baldassarre, Maria Teresa, Danilo Caivano, Simone Romano, Francesco Cagnetta, Victor Fernandez-Cervantes, and Eleni Stroulia. "PhyDSLK: a model-driven framework for generating exergames." Multimedia Tools and Applications 80, no. 18 (May 27, 2021): 27947–71. http://dx.doi.org/10.1007/s11042-021-10980-3.

Abstract:
In recent years, we have been witnessing a rapid increase of research on exergames—i.e., computer games that require users to move during gameplay as a form of physical activity and rehabilitation. Properly balancing the need to develop an effective exercise activity with the requirements for a smooth interaction with the software system and an engaging game experience is a challenge. Model-driven software engineering enables the fast prototyping of multiple system variants, which can be very useful for exergame development. In this paper, we propose a framework, PhyDSLK, which eases the development process of personalized and engaging Kinect-based exergames for rehabilitation purposes, providing high-level tools that abstract the technical details of using the Kinect sensor and allow developers to focus on the game design and user experience. The system relies on model-driven software engineering technologies and is made of two main components: (i) an authoring environment relying on a domain-specific language to define the exergame model encapsulating the gameplay that the exergame designer has envisioned and (ii) a code generator that transforms the exergame model into executable code. To validate our approach, we performed a preliminary empirical evaluation addressing the development effort and usability of the PhyDSLK framework. The results are promising and provide evidence that people with no experience in game development are able to create exergames with different complexity levels in one hour, after a less-than-two-hour training on PhyDSLK. Also, they consider PhyDSLK usable regardless of the exergame complexity.
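As a toy analogue of PhyDSLK's model-to-code idea (not the framework itself, which uses a DSL and targets the Kinect sensor), the sketch below renders a declarative exergame model into executable source; the model schema and template are invented.

```python
# Toy model-to-text transformation for an exergame skeleton.
EXERGAME_MODEL = {
    "name": "ReachTheStars",
    "target_joint": "right_hand",
    "targets": 5,
    "seconds_per_target": 8,
}

TEMPLATE = '''\
# generated exergame skeleton -- do not edit by hand
GAME_NAME = {name!r}
TRACKED_JOINT = {target_joint!r}

def run_session(read_joint_position):
    """Show {targets} targets; the player has {seconds_per_target}s each."""
    for target in range({targets}):
        print(f"target {{target + 1}}: move your {target_joint} to the marker")
        # ... poll read_joint_position() for up to {seconds_per_target} s ...
'''

def generate(model: dict) -> str:
    """Render the declarative model into source code."""
    return TEMPLATE.format(**model)

print(generate(EXERGAME_MODEL))
```

Changing one model field and regenerating yields a new game variant, which is the fast-prototyping benefit the abstract attributes to model-driven engineering.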
21

Richardson, Diana Olivia, and Thomas Oliver Kellerton. "Relationship between analytics and innovation in software based firms in Europe." Business & IT XII, no. 1 (2022): 125–33. http://dx.doi.org/10.14311/bit.2022.01.15.

Abstract:
Purpose: Data availability has increased enormously due to the widespread adoption of electronic technologies. Companies are coping with huge amounts of information waiting to be exploited, while scholars provide methods and techniques to help companies capture the value embedded in their data, encourage innovation, and improve the efficiency of existing processes. Methodology/design: The research relies on an exploratory multiple-case-study analysis based on three instances used as illustrations for new ideas. In particular, the collected data are examined against models previously provided in the literature, which are built upon and expanded. Findings: The research takes a data-driven approach to development and offers an unusual view of the development process: the trigger point is a need for information, which sets the entire development process of a complex system in motion. In this perspective, the application of data is a by-product of the whole innovation process rather than its primary outcome; this is unusual because nearly all the literature regards data as a by-product of the key product. Practical implications: The results describe a development process to inspire innovation that relies on the need for data as a trigger point, guiding business people and managers through the construction of the entire digital system.
22

Cordeiro, Natália, Gil Facina, Afonso Nazário, Vanessa Sanvido, Joaquim Araujo Neto, Morgana Silva, and Simone Elias. "Abstract PO2-29-02: Towards Precision Medicine in Breast Imaging: A Novel Open Mammography Database with Tailor-Made 3D Image Retrieval for Artificial Intelligence and Teaching." Cancer Research 84, no. 9_Supplement (May 2, 2024): PO2–29–02—PO2–29–02. http://dx.doi.org/10.1158/1538-7445.sabcs23-po2-29-02.

Abstract:
Objectives: According to the World Health Organization (WHO), breast cancer is the most frequent malignant neoplasia and the leading cause of cancer death among women worldwide. Low- and middle-income countries have the worst survival rates, mainly owing to a lack of access to appropriate diagnosis- and treatment-related resources. For proper early diagnosis, it is established that besides the physical structure itself (e.g., mammography units), there is a need for adequate interpretation of imaging, and that may be a particularly major problem in low-income societies, since there is a tendency toward greater educational setbacks. Mammography datasets can help close this resource-driven gap by enabling the development of artificial intelligence (AI) technologies that can make breast cancer diagnosis more accurate in a cost-effective and scalable way. We aim to create a new database of high-quality digital mammography images suitable for AI development and education. Methods: Our mammography database was developed by means of a retrospective selection of 100 exams performed at Hospital São Paulo - Federal University of São Paulo from 2019 to 2023. The project needed to be safe, versatile, and usable, which required an extensive search for the appropriate tool. Ambra Health, an American company, has developed cloud-based software for medical image management and stood out as a viable alternative. Their platform meets international data security criteria; they also made the intended careful customization possible, in addition to the possibility of associating image and text attachments. The categories were created in accordance with the BI-RADS® descriptors, a wide range of clinical scenarios, and the additional materials available, and they served as the basis for the advanced search feature, which intuitively filters exams that meet the selected criteria simultaneously. The platform was integrated with an automatic anonymization system upon upload, ensuring data privacy. After submission, the exams are retained in a restricted area for anonymization verification, categorization, and attachment management, before being released to the end-user. To broaden geographic coverage, the descriptors were entered in American English, respecting the origin of the BI-RADS® lexicon; as for the website structure, automatic translation into the accessing browser's default language was selected. Results: Our website is active and available at http://mamografia.unifesp.br, with access granted upon a simple registration process. 941 mammography images from 100 anonymized cases, 62% of which include 3D images, can be filtered based on the combination of 113 clinical and imaging variables, as well as attachment availability. The language adapts to the user's native language, and categorized searches can be accessed directly from the browser or downloaded as customized datasets. Additionally, features such as saved searches and starred exams are also available. Conclusion: We have developed an online, free mammography database that is completely innovative in integrating various resources into a single platform. We provide high-resolution and 3D digital images that can be searched using an advanced search system. Moreover, we offer supplementary clinical information in various attachment formats, favoring rich clinical correlation. In this way, we have achieved both parts of our goal: to promote education and research.
*"images speak louder than words" Database: https://mamografia.unifesp.br Tutorials: https://www.youtube.com/@Mamografiaunifesp e-mail: acesso@mamografia.unifesp.br password: acesso@mamografia12 (valid until dec/23) Citation Format: Natália Cordeiro, Gil Facina, Afonso Nazário, Vanessa Sanvido, Joaquim Araujo Neto, Morgana Silva, Simone Elias. Towards Precision Medicine in Breast Imaging: A Novel Open Mammography Database with Tailor-Made 3D Image Retrieval for Artificial Intelligence and Teaching [abstract]. In: Proceedings of the 2023 San Antonio Breast Cancer Symposium; 2023 Dec 5-9; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2024;84(9 Suppl):Abstract nr PO2-29-02.
23

Hnatchuk, Yelyzaveta, Tetiana Hovorushchenko, and Olga Pavlova. "Methodology for the development and application of clinical decisions support information technologies with consideration of civil-legal grounds." Radioelectronic and Computer Systems, no. 1 (March 7, 2023): 33–44. http://dx.doi.org/10.32620/reks.2023.1.03.

Abstract:
Currently, there are no clinical decision support information technologies (CDSIT) that consider civil-legal grounds when forming a decision for clinicians. Therefore, the design, development, and implementation of a CDSIT that considers civil-legal grounds when forming decisions are pressing problems. A methodology for the development and application of knowledge-driven, rule-based clinical decision support information technologies with consideration of civil-legal grounds has been developed; it provides a theoretical basis for developing clinical decision support information technology with consideration of civil-legal grounds and partial CDSITs regarding the possibility of providing medical services of a certain type. In addition to the conclusion about the possibility or impossibility of providing a certain medical service, the developed methodology ensures the presence of all essential terms (from the viewpoint of civil law regulation) in the contract for the provision of the given medical service and/or the data on potential patients for the provision of such a service, as well as minimization of the influence of the human factor when making clinical decisions. It is advisable to evaluate the CDSITs with consideration of civil-legal grounds, developed according to the proposed methodology, from the viewpoint of the correctness of the decisions they generate, as well as from the viewpoint of their usefulness for clinics. In this paper, experiments with the methodology-based CDSIT regarding the possibility of performing a surrogate motherhood procedure with consideration of civil-legal grounds were conducted. These experiments showed the correctness of the generated decisions at the level of 97 %. The experiments also demonstrated the usefulness of such IT for clinics from the viewpoint of eliminating adverse legal consequences, which might otherwise arise due to violation or disregard of legal, moral, and ethical norms.
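A schematic, assumed rendering of the rule-based check at the heart of such a CDSIT: before a decision is issued, verify that every essential civil-law term is present in the service contract. The term list and contract fields below are illustrative only, not the paper's rule base.

```python
# Sketch: essential-terms check before a clinical decision is approved.
ESSENTIAL_TERMS = {"parties", "subject_of_service", "price",
                   "informed_consent", "liability"}

def review_contract(contract: dict) -> tuple:
    """Return (service may proceed, essential terms still missing)."""
    missing = {t for t in ESSENTIAL_TERMS if not contract.get(t)}
    return (not missing, missing)

contract = {
    "parties": "clinic X / patient Y",
    "subject_of_service": "surrogate motherhood programme",
    "price": "per annex 1",
    "informed_consent": True,
    # "liability" clause absent
}
ok, missing = review_contract(contract)
print(ok, missing)   # False, {'liability'} -> service not possible yet
```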
24

Mamta Kale, et al. "Telemedicine Revolution: Bridging Gaps in Access to Healthcare." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 7 (September 1, 2023): 394–99. http://dx.doi.org/10.17762/ijritcc.v11i7.9424.

Abstract:
A rapidly developing area of contemporary healthcare called telemedicine uses digital technologies to provide distant medical treatments. This overview examines the complex field of telemedicine, covering its history, effects on patient outcomes, access to healthcare, problems, and potential future developments. The rapid expansion of telemedicine, driven by technical improvements, is seen in its current vast repertory of services, which includes tele-diagnosis, tele-surgery, and tele-rehabilitation, from its early beginnings of allowing remote consultations. These developments—which have been made possible by the widespread use of mobile devices and internet connectivity—have transformed the way healthcare is delivered, bridging geographic divides and providing access to a wide range of medical specialties. The potential of telemedicine to close gaps, particularly in rural and underserved regions, demonstrates the significant influence it has on healthcare access. Telemedicine addresses gaps in healthcare accessibility and strengthens marginalised communities by enabling timely access to medical services through virtual consultations and remote monitoring. Additionally, telemedicine shows promise in improving patient outcomes by facilitating ongoing monitoring, tailored treatment, and early interventions. It is essential to the management of chronic illnesses, mental health services, and post-operative care, improving patient well-being and health results. However, there are obstacles to telemedicine, including differences in technology, worries about data security, unclear regulations, and ethical issues. In order to provide fair access, protect patient privacy, and create uniform standards for its moral and efficient integration into healthcare systems, these issues must be resolved.
25

Wu, Meijie, Xuefeng Huang, Baona Jiang, Zhihong Li, Yuanyuan Zhang, and Bo Gao. "AI in medical education: the moderating role of the chilling effect and STARA awareness." BMC Medical Education 24, no. 1 (June 7, 2024). http://dx.doi.org/10.1186/s12909-024-05627-4.

Abstract:
Background: The rapid growth of artificial intelligence (AI) technologies has been driven by the latest advances in computing power. However, there is a dearth of research on the application of AI in medical education. Methods: This study is based on the TAM-ISSM-UTAUT model and introduces STARA awareness and the chilling effect as moderating variables. A total of 657 valid questionnaires were collected from students of a medical university in Dalian, China. Data were statistically described using SPSS version 26, Amos 3.0 software was used to validate the research model, and moderated effects were analysed using Process (3.3.1) and Origin (2021) software. Results: The findings reveal that both information quality and perceived usefulness are pivotal factors that positively influence the willingness to use AI products. The study also uncovers the moderating influence of the chilling effect and STARA awareness. Conclusions: This suggests that enhancing information quality can be a key strategy to encourage the widespread use of AI products. Furthermore, this investigation offers valuable insights into the intersection of medical education and AI use from the standpoint of medical students. This research may prove pertinent in shaping the promotion of Medical Education Intelligence in the future.
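A moderation analysis of this kind can be approximated with an interaction term in OLS; the sketch below uses statsmodels rather than the study's SPSS/Process tooling, the variable names and simulated data are assumptions, and only the sample size (657) is taken from the abstract.

```python
# Sketch: moderation via an interaction term in an OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 657   # matches the study's sample size
df = pd.DataFrame({
    "info_quality": rng.normal(0, 1, n),
    "stara": rng.normal(0, 1, n),          # STARA awareness (moderator)
})
# simulated: willingness rises with information quality, less so at high STARA
df["use_intention"] = (0.5 * df.info_quality + 0.2 * df.stara
                       - 0.3 * df.info_quality * df.stara
                       + rng.normal(0, 1, n))

model = smf.ols("use_intention ~ info_quality * stara", data=df).fit()
print(model.params)   # the info_quality:stara coefficient captures the moderation
```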
APA, Harvard, Vancouver, ISO, and other styles
26

Armeni, Patrizio, Irem Polat, Leonardo Maria De Rossi, Lorenzo Diaferia, Severino Meregalli, and Anna Gatti. "Exploring the potential of digital therapeutics: An assessment of progress and promise." DIGITAL HEALTH 10 (January 2024). http://dx.doi.org/10.1177/20552076241277441.

Full text
Abstract:
Digital therapeutics (DTx), a burgeoning subset of digital health solutions, has garnered considerable attention in recent times. These cutting-edge therapeutic interventions employ diverse technologies, powered by software algorithms, to treat, manage, and prevent a wide array of diseases and disorders. Although DTx shows significant promise as an integral component of medical care, its widespread integration is still in the preliminary stages. This limited adoption can be largely attributed to the scarcity of comprehensive research that delves into DTx's scope, including its technological underpinnings, potential application areas, and challenges—namely, regulatory hurdles and modest physician uptake. This review aims to bridge this knowledge gap by offering an in-depth overview of DTx products’ value to both patients and clinicians. It evaluates the current state of maturity of DTx applications driven by digital technologies and investigates the obstacles that developers and regulators encounter in the market introduction phase.
APA, Harvard, Vancouver, ISO, and other styles
27

Chai, Slyvester Yew Wang, Frederick Jit Fook Phang, Lip Siang Yeo, Lock Hei Ngu, and Bing Shen How. "Future era of techno-economic analysis: Insights from review." Frontiers in Sustainability 3 (August 10, 2022). http://dx.doi.org/10.3389/frsus.2022.924047.

Full text
Abstract:
Techno-economic analysis (TEA) is an important tool for evaluating the economic performance of industrial processes. Recently, the application of TEA has grown exponentially due to increasing competition among businesses across various industries. This review therefore presents a deliberate overview of TEA to underscore its importance and relevance. To support these points, the article starts with a bibliometric analysis that evaluates the applicability of TEA within the research community. Conventional TEA is typically conducted via software modeling (e.g., Python, AMIS, MATLAB, Aspen HYSYS, Aspen Plus, HOMER Pro, FORTRAN, R, SysML and Microsoft Excel) without any correlation or optimization between process and economic performance. In addition, with the arrival of the fourth industrial revolution (IR 4.0), industrial processes are being transformed into smart industries. To retain the integrity of TEA, a similar evolution is deemed necessary. Studies have begun to incorporate data-driven technologies (i.e., artificial intelligence (AI) and blockchain) into TEA to optimize process and economic parameters simultaneously. Accordingly, this review explores the integration of data-driven technologies into the TEA framework, as illustrated in the sketch below. The literature review found that the genetic algorithm (GA) is the most widely applied data-driven technology in TEA, while applications of blockchain, machine learning (ML), and artificial neural networks (ANN) in TEA remain considerably scarce. Other advanced technologies, such as cyber-physical systems (CPS), IoT, cloud computing, big data analytics, digital twins (DT), and the metaverse, are yet to be incorporated into existing TEA. The inclusion of set-up costs for these technologies is also crucial if TEA is to represent smart-industry deployment accurately. Overall, this review serves as a reference for future process engineers and industry stakeholders who wish to perform TEA that covers the new state-of-the-art elements of the modern era.
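The coupling of a genetic algorithm to an economic objective that the review identifies can be reduced to a toy sketch. Everything below is invented for illustration: the cost and revenue figures, the decision variables, and their bounds do not come from any cited TEA study.

```python
# Toy illustration of GA-driven TEA: evolve process design variables to
# maximise a placeholder net-present-value (NPV) objective.
import random

def npv(capacity, temperature):
    """Invented techno-economic model: 10-year discounted profit minus capex."""
    capex = 2.0e6 + 1.5e4 * capacity
    conversion = min(1.0, 0.4 + 0.002 * temperature)   # process performance
    annual_profit = 350.0 * capacity * conversion - 8.0e4
    return sum(annual_profit / (1.1 ** t) for t in range(1, 11)) - capex

BOUNDS = {"capacity": (100, 2000), "temperature": (50, 250)}

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(ind):
    child = dict(ind)
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

population = [random_individual() for _ in range(40)]
for generation in range(100):
    population.sort(key=lambda ind: npv(**ind), reverse=True)
    parents = population[:10]                          # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(30)]

best = population[0]
print(best, npv(**best))
```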
APA, Harvard, Vancouver, ISO, and other styles
28

"In-Vivo Microvascular Imaging, a Non-Invasive Diagnostic Method for Oral Surgery and Oral Medicine." Surgical Case Reports and Images, September 5, 2021, 1–3. http://dx.doi.org/10.36879/scri.2021.000102.

Full text
Abstract:
There is a trend in the medical and dental fields to pursue alternative diagnostic and treatment strategies that are less invasive than traditional open surgical methods, while achieving equal, if not better, clinical outcomes. Emerging technologies are helping to develop new diagnostic and treatment methods, many centred on the philosophy of minimally invasive medical procedures, bridging the gap between prevention and full formal surgical procedures. In-vivo microvascular imaging is gaining popularity, with increasing recognition of the morphological changes that occur in the superficial mesodermal tissues as a direct response to the presence of certain diseases. The Centre for Oral, Clinical and Translational Sciences, Faculty of Dentistry, Oral and Craniofacial Sciences at King’s College London has developed a dedicated intra-oral video-capillaroscopy instrument, named Real Time Optical Vascular Imaging. When combined with its dedicated software, the Customized Angiogenesis Analyzer, this technology has the potential to offer a real-time, non-invasive diagnostic method for oral diseases, particularly oral cancer.
APA, Harvard, Vancouver, ISO, and other styles
29

Seth, Ishith, Aram Cox, Yi Xie, Gabriella Bulloch, David J. Hunter-Smith, Warren M. Rozen, and Richard Ross. "Evaluating Chatbot Efficacy for Answering Frequently Asked Questions in Plastic Surgery: A ChatGPT Case Study Focused on Breast Augmentation." Aesthetic Surgery Journal, May 9, 2023. http://dx.doi.org/10.1093/asj/sjad140.

Full text
Abstract:
Abstract Background The integration of artificial intelligence (AI) and machine learning (ML) technologies into healthcare is transforming patient-practitioner interaction and could offer an additional platform for patient education and support. Objectives This study investigates whether ChatGPT-4 can provide safe and up-to-date medical information about breast augmentation that is comparable to other patient information sources. Methods ChatGPT-4 was asked to generate six commonly asked questions regarding breast augmentation and respond to them. Its responses were qualitatively evaluated by a panel of specialist plastic and reconstructive surgeons and reconciled with a literature search of two large medical databases for accuracy, informativeness, and accessibility. Results ChatGPT-4 provided well-structured, grammatically accurate, and comprehensive responses to the questions posed; however, it was limited in providing personalised advice and sometimes generated inappropriate or outdated references. ChatGPT consistently encouraged engagement with a specialist for specific information. Conclusions Although ChatGPT-4 shows promise as an adjunct tool in patient education regarding breast augmentation, there are areas requiring improvement. Additional advancements and software engineering are needed to enhance the reliability and applicability of AI-driven chatbots in patient education and support systems.
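A minimal sketch of the kind of prompting workflow such a study implies, using the OpenAI Python client: the model name, system prompt, and question list below are assumptions for illustration, not the study's protocol.

```python
# Hedged sketch: generate candidate patient-education answers for expert
# review. Requires OPENAI_API_KEY in the environment; the model name and
# prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "What are the risks of breast augmentation?",          # invented examples
    "How long is recovery after breast augmentation?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer as general patient education material and "
                        "advise consulting a specialist for personalised advice."},
            {"role": "user", "content": q},
        ],
    )
    # Each answer would then go to the specialist panel for evaluation.
    print(q, "->", response.choices[0].message.content[:200], "...")
```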
APA, Harvard, Vancouver, ISO, and other styles
30

Naqvi, Waqar M., Habiba Sundus, Gaurav Mishra, Ramprasad Muthukrishnan, and Praveen K. Kandakurti. "AI in Medical Education Curriculum: The Future of Healthcare Learning." European Journal of Therapeutics, January 30, 2024. http://dx.doi.org/10.58600/eurjther1995.

Full text
Abstract:
To address the evolving, quantitative nature of healthcare in the twenty-first century, it is imperative to integrate artificial intelligence (AI) with healthcare education. Bridging this educational gap requires imparting practical skills for the use and interpretation of AI in healthcare settings, integrating technology into clinical operations, developing AI technologies, and enhancing human competencies [1]. The swift rise of AI in contemporary society can be ascribed to the progress of intricate algorithms, cost-effective graphics processors, and huge annotated databases. AI has become a crucial component of healthcare education in recent years and has been implemented by numerous medical institutions globally. AI is widely prevalent in medical education in Western countries, in contrast to developing countries. This disparity could be mitigated through more infrastructural assistance from medical institutions in developing nations. It is crucial to raise awareness among medical educators and students regarding AI tools to facilitate the development and integration of AI-based technologies in medical education [2]. AI can impact the student learning process through three methods: direct instruction (transferring knowledge to the student in a teacher-like role), instructional support (assisting students as they learn), and learner empowerment (facilitating collaboration among multiple students to solve complex problems based on teacher feedback). Incorporating AI tools into education can augment students' knowledge, foster skill acquisition, and deepen comprehension of intricate medical topics [2,3]. Virtual reality (VR) can enhance the immersion of learning sessions with virtual patients. VR is a software-driven technology that generates a three-dimensional virtual environment, using a head-mounted display or glasses to build a computer-simulated setting that provides a convincing and lifelike experience for the user. Conversely, augmented reality (AR) enhances the real-world environment by superimposing virtual elements onto the user's perspective of the actual world through a smartphone or similar device. By integrating these technologies, learners are able to investigate and actively participate in intricate clinical situations, resulting in a more enjoyable and efficient learning experience [4,5]. AI-powered games utilise data-mining methodologies to examine the data gathered during gameplay and enhance the player's knowledge and abilities. In addition, they provide a personalised and engaging encounter that adapts the speed and level of challenge according to the player's achievements. Incorporating game components such as points, badges, and leaderboards enhances the enjoyment and engagement of the learning process. Gamification of the learning process boosts student engagement, fosters collaborative effort, and optimises learning results. Games also offer opportunities for clinical decision-making without risk and provide instant feedback to students, making them an essential component of undergraduate medical education [6]. By incorporating AI techniques into learning management systems (LMS), learners are equipped with the resources to achieve mastery at their own individualised pace. These computer algorithms assess the learner's level of understanding and deliver personalised educational material to help them achieve mastery of the content. AI-powered platforms guide learners by effectively organising and arranging learning experiences, and then implementing targeted remedial actions. These customised and adaptable teaching techniques enhance the effectiveness and efficiency of learning. Virtual patients are computer-based simulations that replicate real-life clinical events and are used for training and education in the health professions. Virtual patients are built to simulate authentic symptoms, react to students' treatments, and create dynamic therapeutic encounters. The student assumes the role of a healthcare provider and engages in activities such as gathering information, proposing potential diagnoses, implementing medical treatment, and monitoring the patient's progress. These simulations can accurately reproduce a range of medical settings and expose trainees to the problems they might encounter in real-world situations. Medical students can enhance their communication and clinical reasoning skills by engaging with virtual patients in a simulated environment that closely resembles real life [6,7]. Furthermore, AI-driven solutions can be advantageous for educational purposes in diagnostic fields such as radiology, pathology, and microbiology. Content-based image retrieval (CBIR) is a highly promising method utilised in the field of radiology for educational and research purposes: CBIR facilitates the search for images whose content is similar to a reference image, utilising information extracted from the images [8]. Moreover, AI integrated with machine learning techniques is currently being employed to accurately diagnose microbial illnesses, an application with significant potential for training and educating specialists in the field of microbiology. Likewise, recent progress in AI-driven deep learning technologies that target cellular imaging has the potential to revolutionise education in diagnostic pathology [9]. Ultimately, incorporating AI training into the medical education curriculum is a transformative step that will shape the future of healthcare practitioners. This integration offers enhanced diagnostic precision, personalised learning prospects, and heightened ethical awareness. These potential benefits surpass the obstacles, initiating a new era in medical education in which humans and technology collaborate to deliver optimal patient care. The purposeful and calculated integration of AI into medical education will have a pivotal impact on shaping the future of healthcare as we navigate this unexplored territory.
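The CBIR idea mentioned above can be illustrated with a toy example: each image is reduced to a feature vector, and retrieval ranks a library by similarity to a query. This is a minimal sketch under stated assumptions; the grey-level histogram stands in for the learned descriptors a real radiology CBIR system would use, and all data here are synthetic.

```python
# Toy content-based image retrieval: rank library images by feature
# similarity to a query image.
import numpy as np

def histogram_feature(image, bins=64):
    """Normalised intensity histogram as a crude image descriptor."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / (hist.sum() + 1e-9)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_similar(query_image, library, top_k=5):
    """library: dict of case-id -> 2-D pixel array (e.g., loaded radiographs)."""
    q = histogram_feature(query_image)
    scored = [(case_id, cosine_similarity(q, histogram_feature(img)))
              for case_id, img in library.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Demo with random "images"; real use would load DICOM/PNG pixel data.
rng = np.random.default_rng(0)
library = {f"case-{i}": rng.integers(0, 256, size=(128, 128)) for i in range(20)}
print(retrieve_similar(rng.integers(0, 256, size=(128, 128)), library))
```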
APA, Harvard, Vancouver, ISO, and other styles
31

Mykolaienko, V. O. "Code generation for large commercial projects: how commercial development will look like in the future." Connectivity 166, no. 6 (2023). http://dx.doi.org/10.31673/2412-9070.2023.061315.

Full text
Abstract:
In the rapidly evolving landscape of software development, code generation stands at the forefront as a transformative force, especially within the domain of large-scale commercial projects. This paradigm-shifting technique is fundamental to agile adaptation in response to ever-changing business imperatives, all while upholding stringent code quality standards and reducing error rates. The article initiates a discourse by methodically analyzing the current methodologies and tools of code generation, emphasizing their critical importance for development teams. It delves into the complexities these tools mitigate, such as abstraction of repetitive coding tasks and acceleration of the development lifecycle. The discussion progresses to address the intricacies of scaling code generation processes, including issues of standardization, integration with legacy systems, and ensuring the maintained quality of automatically generated code segments. A particular focus is laid on the prospective role of artificial intelligence in the genesis of business logic, postulating a future where AI extends beyond mere automation, becoming a vital contributor to the creation of self-adapting, precision-targeted business functionalities. I introduce a structured approach to template code generation that underscores the importance of quick development setups and outline a set of principles for the AI-driven derivation of business logic. Moreover, the article offers an insightful prognosis on the potential reformation of software development paradigms by code generation technologies, looking into the crystal ball of this field’s evolutionary path. It concludes with a visionary examination of the implications for the software industry, charting out the roadmap for practitioners to navigate and adapt to these upcoming shifts in commercial development methodologies.
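The template-driven code generation the article describes can be reduced to a toy example: a declarative entity model is expanded into repetitive source code. The template and the Invoice entity below are invented for illustration, not drawn from the article.

```python
# Toy template-based code generator: one declarative model entry yields
# one repetitive code unit, abstracting away boilerplate authoring.
from string import Template

CLASS_TEMPLATE = Template('''\
class ${name}:
    """Auto-generated data holder for the ${name} entity."""
    def __init__(self, ${args}):
${assignments}
''')

def generate_class(name, fields):
    args = ", ".join(fields)
    assignments = "\n".join(f"        self.{f} = {f}" for f in fields)
    return CLASS_TEMPLATE.substitute(name=name, args=args,
                                     assignments=assignments)

print(generate_class("Invoice", ["number", "customer", "total"]))
```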
APA, Harvard, Vancouver, ISO, and other styles
32

Kussel, Tobias, Torben Brenner, Galina Tremper, Josef Schepers, Martin Lablans, and Kay Hamacher. "Record linkage based patient intersection cardinality for rare disease studies using Mainzelliste and secure multi-party computation." Journal of Translational Medicine 20, no. 1 (October 8, 2022). http://dx.doi.org/10.1186/s12967-022-03671-6.

Full text
Abstract:
Abstract Background The low number of patients suffering from any given rare disease poses a difficult problem for medical research: with the exception of some specialized biobanks and disease registries, potential study participants’ information is disjoint and distributed over many medical institutions. Whenever some of those facilities are in close proximity, a significant overlap of patients can reasonably be expected, further complicating statistical study feasibility assessments and data gathering. Due to the sensitive nature of medical records and identifying data, data transfer and joint computations are often forbidden by law or associated with prohibitive amounts of effort. To alleviate this problem and to support rare disease research, we developed the Mainzelliste Secure EpiLinker (MainSEL) record linkage framework, a secure multi-party computation based application using trusted-third-party-less cryptographic protocols to perform privacy-preserving record linkage with high security guarantees. In this work, we extend MainSEL to allow the record linkage based calculation of the number of common patients between institutions. This allows privacy-preserving statistical feasibility estimations for further analyses and data consolidation. Additionally, we created easy-to-deploy software packages using microservice containerization and continuous deployment/continuous integration. We performed tests with medical researchers using MainSEL in real-world medical IT environments, using synthetic patient data. Results We show that MainSEL achieves practical runtimes, performing 10,000 comparisons in approximately 5 minutes. Our approach proved to be feasible in a wide range of network settings and use cases. The “lessons learned” from the real-world testing show the need to explicitly support and document usage and deployment for both analysis pipeline integration and researcher-driven ad hoc analysis use cases, thus clarifying the wide applicability of our software. MainSEL is freely available under: https://github.com/medicalinformatics/MainSEL Conclusions MainSEL performs well in real-world settings and is a useful tool not only for rare disease research, but for medical research in general. It achieves practical runtimes, improved security guarantees compared to existing solutions, and is simple to deploy in strict clinical IT environments. Based on the “lessons learned” from the real-world testing, we hope to enable a wide range of medical researchers to meet their needs and requirements using modern privacy-preserving technologies.
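To make the goal of the computation concrete, the sketch below derives an intersection cardinality from keyed hashes of identifying data. This is emphatically not MainSEL's secure multi-party protocol, which avoids revealing even pseudonymised identifiers to the other party; it only illustrates what quantity is being computed. The key, field choices, and records are invented.

```python
# Naive illustration of patient-intersection cardinality between two sites.
# Real MainSEL uses secure multi-party computation instead of exchanging
# keyed hashes; this sketch only shows the quantity of interest.
import hashlib
import hmac

SHARED_KEY = b"per-study secret agreed out of band"   # assumption

def pseudonymise(records, key=SHARED_KEY):
    """Map normalised identifying data to keyed hashes before any exchange."""
    return {
        hmac.new(key, f"{r['first']}|{r['last']}|{r['dob']}".lower().encode(),
                 hashlib.sha256).hexdigest()
        for r in records
    }

site_a = [{"first": "Anna", "last": "Meier", "dob": "1990-01-02"},
          {"first": "Jan", "last": "Koch", "dob": "1985-07-30"}]
site_b = [{"first": "Anna", "last": "Meier", "dob": "1990-01-02"}]

# Only the size of the overlap is reported, not who the shared patients are.
print(len(pseudonymise(site_a) & pseudonymise(site_b)))  # -> 1
```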
APA, Harvard, Vancouver, ISO, and other styles
33

Leo, Stefano, Abdessalam Cherkaoui, Gesuele Renzi, and Jacques Schrenzel. "Mini Review: Clinical Routine Microbiology in the Era of Automation and Digital Health." Frontiers in Cellular and Infection Microbiology 10 (November 30, 2020). http://dx.doi.org/10.3389/fcimb.2020.582028.

Full text
Abstract:
Clinical microbiology laboratories are the first line to combat and handle infectious diseases and antibiotic resistance, including newly emerging ones. Although most clinical laboratories still rely on conventional methods, a cascade of technological changes, driven by digital imaging and high-throughput sequencing, will revolutionize the management of clinical diagnostics for direct detection of bacteria and swift antimicrobial susceptibility testing. Importantly, such technological advancements occur in the golden age of machine learning, where computers no longer act passively in data mining but, once trained, can also help physicians make decisions on diagnostics and optimal treatment administration. The further potential of physically integrating new technologies into an automation chain, combined with machine-learning-based software for data analysis, is appealing and would indeed lead to faster management of infectious diseases. However, while technological advancement may achieve better performance than conventional methods, this evolution also challenges clinicians in terms of data interpretation and impacts the entire hospital personnel organization and management. In this mini review, we discuss these technological achievements, offering practical examples of their operability, but also their limitations and the potential issues that their implementation could raise in clinical microbiology laboratories.
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Julian P., Andre Grujovski, Tim Wright, Tzu-Ching Wu, DaiWai M. Olson, and Brad J. Kolls. "Abstract P147: Telestroke Software Offers Improved and Novel Methods of Studying Physician Decision Making." Circulation 125, suppl_10 (March 13, 2012). http://dx.doi.org/10.1161/circ.125.suppl_10.ap147.

Full text
Abstract:
Introduction: Data quality in stroke registries is typically dependent upon some form of chart review and manual data abstraction. The retrospective nature of this process is inherently prone to incomplete and inaccurate data collection, with limited insight into the process of physician decision making. Hypothesis: New software packages accompanying telestroke systems will dramatically improve the quality of data by automating the abstraction process and providing real-time access to electronic databases. Methods: Telestroke systems provide web-based programs that record various levels of data. InTouch Technologies, Inc. currently provides StrokeRESPOND v3.0, a web-based program that facilitates telestroke consultation by organizing elements of the physician-patient encounter, including history, vitals, physical exam, laboratory results, and radiographs, and by generating a consultation note. Many data elements captured in the user interface mirror traditional metrics of acute stroke care research and can be de-identified and then directly transferred into an electronic database. The “forced choice” (aka hard-stop) design of data entry and the elimination of secondhand abstraction can minimize data corruption and loss. Further, because each point of data entry and manipulation is time-stamped, powerful metadata, or “data about data”, can be explored. By analyzing the sequence and patterns of clinical information entry and utilization, the actual thought process of the physician user can be investigated, providing new insights into stroke treatment. Optimization of acute stroke management, a complicated protocol, can be driven by identification of physician decision-making patterns associated with multiple outcomes, including higher rates of treatment and faster treatment times. Conclusions: Specialized software programs will improve registry data collection, completeness, and accuracy. The generation of metadata offers exciting new avenues of research. Prospective stroke research using this methodology will require the collaboration of multiple academic institutions and industry partners.
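The metadata idea in this abstract, that time-stamped entry events expose the sequence and tempo of physician decisions, can be sketched as follows. The event names and log format are invented for illustration, not StrokeRESPOND's actual schema.

```python
# Sketch: compute inter-step latencies from time-stamped data-entry events,
# the raw material for studying physician decision-making patterns.
from datetime import datetime

events = [  # (timestamp, field entered) as a telestroke session might log them
    ("2012-03-01T10:02:11", "last_known_well"),
    ("2012-03-01T10:03:05", "nihss_score"),
    ("2012-03-01T10:09:40", "ct_reviewed"),
    ("2012-03-01T10:11:02", "tpa_decision"),
]

parsed = [(datetime.fromisoformat(ts), field) for ts, field in events]
for (t0, f0), (t1, f1) in zip(parsed, parsed[1:]):
    print(f"{f0} -> {f1}: {(t1 - t0).total_seconds():.0f} s")

# Aggregating such step latencies across many consults would expose
# entry patterns associated with faster treatment times.
```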
APA, Harvard, Vancouver, ISO, and other styles
35

Vyas, Amber, Tanu Bhargava, Surendra Saraf, Vishal Jain, and Darshan Dubey. "A Review on digital medicine and its implications in drug development process." Asian Journal of Pharmacy and Technology, November 22, 2023, 263–69. http://dx.doi.org/10.52711/2231-5713.2023.00047.

Full text
Abstract:
A field known as “digital medicine” is concerned with using technology as an aid to assessment and intervention in the interest of better public health. Digital medicine solutions are built on high-quality technology and software that supports the practice of medicine broadly, including treatment, rehabilitation, illness prevention, and health promotion for individuals and across groups. Digital medicine products can be used independently or in conjunction with pharmaceuticals, biologics, devices, and other products to enhance patient care and health outcomes. With the use of smart, easily accessible tools, digital medicine equips patients and healthcare professionals to treat a variety of illnesses with high-quality, safe, and efficient measures and data-driven therapies. The discipline of digital medicine encompasses both considerable professional knowledge and the responsibilities linked to the usage of these digital tools, and the application of these technologies is supported by the development of evidence. Technology is changing medicine: wearables and sensors are becoming more compact and affordable, and algorithms are becoming powerful enough to forecast medical outcomes. Nevertheless, despite rapid advancements, the healthcare sector lags behind other sectors in effectively utilizing new technology. The cross-disciplinary approach necessary to develop such tools, requiring knowledge from many experts across many professions, is a significant barrier to entry. Participation in digital medicine programs is optional, complies with all legal requirements and standards, and protects patient data in line with relevant state and federal privacy legislation, just like other data created and maintained in electronic medical records. Aside from helping doctors titrate dosages more accurately and assess how well a treatment works, experts say digital medicine programs hold promise as a solution to the problem of medication adherence.
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Xuesen, Haiyin Deng, Shiyun Jian, Huian Chen, Qing Li, Ruiyu Gong, and Jingsong Wu. "Global trends and hotspots in the digital therapeutics of autism spectrum disorders: a bibliometric analysis from 2002 to 2022." Frontiers in Psychiatry 14 (May 15, 2023). http://dx.doi.org/10.3389/fpsyt.2023.1126404.

Full text
Abstract:
Introduction: Autism spectrum disorder (ASD) is a severe neurodevelopmental disorder that has become a major cause of disability in children. Digital therapeutics (DTx) delivers software-driven, evidence-based therapeutic interventions to patients to prevent, manage, or treat a medical disorder or disease. This study objectively analyzed the current state of global research on DTx in ASD from 2002 to 2022, aiming to explore the current global research status and trends in the field. Methods: The Web of Science database was searched for articles about DTx in ASD from January 2002 to October 2022. CiteSpace was used to analyze the co-occurrence of keywords in the literature, partnerships between authors, institutions, and countries, keyword bursts, the clustering of keywords over time, and cited references, authors, and journals. Results: A total of 509 articles were included. The most productive country and institution were the United States and Vanderbilt University, respectively. The largest contributing authors were Zachary Warren and Nilanjan Sarkar. The most-cited journal was the Journal of Autism and Developmental Disorders. The most-cited and co-cited articles were by Brian Scassellati (Robots for Use in Autism Research, 2012) and Ralph Adolphs (Abnormal processing of social information from faces in autism, 2001). “Artificial intelligence”, “machine learning”, “virtual reality”, and “eye tracking” were common new and cutting-edge trends in research on DTx in ASD. Discussion: The use of DTx in ASD is developing rapidly and gaining the attention of researchers worldwide. Publications in this field have increased year by year, concentrated mainly in developed countries, especially the United States. Vanderbilt University and Yale University are both important institutions in the field, and the work of Zachary Warren at Vanderbilt University deserves particular attention. The application of new technologies such as virtual reality, machine learning, and eye tracking in this field has driven the development of DTx for ASD and is currently a popular research topic. More cross-regional and cross-disciplinary collaborations are recommended to advance the development and availability of DTx.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, James, Victor Lu, and Vikas Khanduja. "The impact of extended reality on surgery: a scoping review." International Orthopaedics, January 16, 2023. http://dx.doi.org/10.1007/s00264-022-05663-z.

Full text
Abstract:
Abstract Purpose Extended reality (XR) is defined as a spectrum of technologies that range from purely virtual environments to enhanced real-world environments. In the past two decades, XR-assisted surgery has seen an increase in its use and also in research and development. This scoping review aims to map out the historical trends in these technologies and their future prospects, with an emphasis on the reported outcomes and the ethical considerations on the use of these technologies. Methods A systematic search of PubMed, Scopus, and Embase for literature related to XR-assisted surgery and telesurgery was performed using the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) guidelines. Primary studies and peer-reviewed articles that described procedures performed by surgeons on human subjects and cadavers, as well as studies describing general surgical education, were included. Non-surgical procedures, bedside procedures, veterinary procedures, procedures performed by medical students, and review articles were excluded. Studies were classified into the following categories: impact on surgery (pre-operative planning and intra-operative navigation/guidance), impact on the patient (pain and anxiety), and impact on the surgeon (surgical training and surgeon confidence). Results One hundred and sixty-eight studies were included for analysis. Thirty-one studies investigated the use of XR for pre-operative planning and concluded that virtual reality (VR) enhanced the surgeon’s spatial awareness of important anatomical landmarks, leading to shorter operating sessions and decreased surgical insult. Forty-nine studies explored the use of XR for intra-operative planning. They noted that augmented reality (AR) headsets highlight key landmarks, as well as important structures to avoid, which lowers the chance of accidental surgical trauma. Eleven studies investigated patients’ pain and noted that VR is able to generate a meditative state. This is beneficial for patients, as it reduces the need for analgesics. Ten studies commented on patient anxiety, suggesting that VR is unsuccessful at altering patients’ physiological parameters such as mean arterial blood pressure or cortisol levels. Sixty studies investigated surgical training, whilst seven studies suggested that the use of XR-assisted technology increased surgeon confidence. Conclusion The growth of XR-assisted surgery is driven by advances in hardware and software. Whilst augmented virtuality and mixed reality are underexplored, the use of VR is growing, especially in the fields of surgical training and pre-operative planning. Real-time intra-operative guidance is key for surgical precision and is being supplemented with AR technology. XR-assisted surgery is likely to undertake a greater role in the near future, given the effect of COVID-19 limiting physical presence and the increasing complexity of surgical procedures.
APA, Harvard, Vancouver, ISO, and other styles
38

Patel, Shraddha, Miles Stewart, and Martina Siwek. "Building Electronic Disease Surveillance Capacity in the Peruvian Navy with SAGES." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9833.

Full text
Abstract:
Objective: To introduce SMS-based data collection into the Peruvian Navy’s public health surveillance system for increased reporting rates and timeliness, particularly from remote areas, as well as to improve capabilities for analysis of surveillance data by decision makers. Introduction: In the past 15 years, public health surveillance has undergone a revolution driven by advances in information technology (IT), with vast improvements in the collection, analysis, visualization, and reporting of health data. Mobile technologies and open source software have played a key role in advancing surveillance techniques, particularly in resource-limited settings. The Johns Hopkins University Applied Physics Laboratory (JHU/APL) is an internationally recognized leader in the area of electronic disease surveillance. In addition to the Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE), used by several state and local jurisdictions and the CDC in the U.S., JHU/APL has also developed the Suite for Automated Global Electronic bioSurveillance (SAGES). SAGES is a collection of modular, open-source software tools designed to meet the challenges of electronic disease surveillance in resource-limited settings. JHU/APL is working with the Peruvian Navy health system to improve its electronic disease surveillance capabilities. The Peruvian Navy currently uses a SAGES-based system called Alerta DISAMAR, implemented several years ago in an effort supported by the Armed Forces Health Surveillance Branch and in collaboration with the Naval Medical Research Unit No. 6 (NAMRU-6). The system uses both web-based and IVR-based (interactive voice response) data collection from several Navy health facilities in Peru. For the present effort, JHU/APL is implementing a new SMS-based data collection capability for the Peruvian Navy. Methods: JHU/APL is engaged with the Peruvian Navy Health System to upgrade the existing SAGES-based Alerta DISAMAR surveillance system, which relies on remote data collection using IVR technology, to a SAGES-based system that uses SMS (short message service) text messages for remote data collection. Based on Peruvian Navy requirements, JHU/APL created mobile data entry forms for Android smartphones using the SAGES mCollect application. SAGES mCollect is built using Open Data Kit open source tools along with added features such as 128-bit encryption and quality checks. The JHU/APL team engages closely with end users and other stakeholders to determine system requirements, to deploy the system, and to train end users and the system administrators who will need to maintain the system once it is deployed. The JHU/APL team, which combines information technology and public health expertise, conducts a country-level capabilities and needs assessment to address design considerations and operational end-user requirements. This assessment takes into account the requirements and objectives of the Peruvian Navy, while keeping in mind infrastructure, cost, and personnel constraints. A pilot test of SMS-based data collection is currently underway with 10 health clinics within the Navy. Results: Many challenges exist when implementing electronic disease surveillance tools in resource-limited settings, but a tailored approach to implementation, in which specific needs, constraints, and expectations are identified with stakeholders, helps increase the overall adoption and sustainment of the system. JHU/APL believes SMS-based data collection will be more sustainable than IVR-based data collection for the Peruvian Navy. Conclusions: JHU/APL is deploying a SAGES-based electronic disease surveillance system for the Peruvian Navy that has great potential to increase reporting rates from its health facilities as well as improve data quality and timeliness, thus resulting in greater awareness and enhanced public health decision making.
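A hypothetical sketch of the kind of SMS-based report ingestion described above: a clinic texts a compact keyword/value report, and the server parses it into a record. The message grammar here is invented, not the actual SAGES mCollect format.

```python
# Parse a toy syndromic-surveillance SMS of the form
# "<clinic> FEV=<fever cases> DIA=<diarrhoea cases>" into a record.
import re
from datetime import date

PATTERN = re.compile(r"(?P<clinic>\w+)\s+FEV=(?P<fever>\d+)\s+DIA=(?P<diarrhoea>\d+)")

def parse_report(sms_text: str) -> dict:
    m = PATTERN.match(sms_text.strip().upper())
    if not m:
        raise ValueError("unrecognised report format")
    return {
        "date": date.today().isoformat(),   # receipt date stands in for report date
        "clinic": m["clinic"],
        "fever_cases": int(m["fever"]),
        "diarrhoea_cases": int(m["diarrhoea"]),
    }

print(parse_report("IQUITOS fev=3 dia=1"))
```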
APA, Harvard, Vancouver, ISO, and other styles
39

Santoro, E., L. Boscherini, and E. G. Caiani. "Digital therapeutics: a systematic review of clinical trials characteristics." European Heart Journal 42, Supplement_1 (October 1, 2021). http://dx.doi.org/10.1093/eurheartj/ehab724.3115.

Full text
Abstract:
Abstract Introduction Digital therapeutics (DTx) are a subset of digital health tools delivering evidence-based therapeutic interventions that are driven by high-quality software programs to prevent, manage, or treat a medical disorder or disease. They are studied using randomized clinical trial methodology and reviewed, cleared, or certified by regulatory bodies as required to support product claims regarding risk, efficacy, and intended use. Purpose To perform a systematic review of clinical research conducted in the field of DTx, with the aim of describing studies where DTx were used, classifying them by digital intervention and condition, and analysing and reporting the characteristics of the clinical trials. Methods The U.S. National Library of Medicine ClinicalTrials.gov registry was searched using the terms “digital therapeutics”, “digital therapeutic”, “digital therapy”, and “digital therapies” within the fields “Intervention/treatment” and “Title/Acronym”, and the resulting trial characteristics were extracted and analysed. Results In total, 560 clinical trials were retrieved on January 10, 2021. Most of them (n=424, 75.7%) were excluded because they were observational studies (n=82), non-randomized/single-arm assignment studies (n=123), not involving any digital health tool (n=181), or involving digital health tools not classified as DTx (n=38). Of the remaining 136 trials, the DTx intervention was delivered through apps (n=57, 41.9%), web-based systems (n=35, 25.7%), videogames (n=12, 8.8%), virtual reality (n=6, 4.4%), text messages (n=5, 3.7%), social media platforms (n=4, 2.9%), computer-based systems (n=3, 2.2%), or other means (n=14, 10.3%), and applied to the following clinical scenarios: mental health (n=47, 34.6%), chronic pain and chronic diseases (n=26, 19.1%), smoking and other substance abuse or addiction (n=17, 12.5%), insomnia and sleeping disorders (n=12, 8.8%), obesity and physical activity (n=11, 8.1%), cardiovascular diseases (n=10, 7.3%), and other conditions (n=13, 9.6%). Apps were used more frequently for chronic pain (54%) and sleeping disorders (67%), while videogames and web-based systems were adopted for mental health in 21.3% and 38% of the trials, respectively, and text messages were preferred in 18.2% of obesity trials and 17.6% of addiction trials. Sixty-eight trials (50%) started in the last three years (2019, 2020, and 2021), while 54.4% were ongoing, 33.8% completed, 2.9% stopped early, and 8.8% of unknown status. Conclusions The term “digital therapeutics” was very often used incorrectly by researchers when registering their trials in ClinicalTrials.gov, improperly including studies that used digital health tools to support drug intake or monitor a condition. Mobile apps, web-based systems, and videogames were the most frequently adopted technologies for delivering DTx, while mental health diseases, chronic pain, and addiction were the conditions in which they were most frequently studied. Funding Acknowledgement Type of funding sources: None.
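The descriptive tabulation behind results like these can be sketched with pandas. This is an illustration only: it assumes the retrieved trial records were exported to a CSV with invented column names (delivery_channel, condition_group), which is not the study's actual extraction pipeline.

```python
# Tabulate hypothetical exported trial records by delivery technology and
# clinical condition, mirroring the kind of percentages reported above.
import pandas as pd

trials = pd.read_csv("dtx_trials.csv")  # assumed export, one row per trial

# Share of trials per delivery technology (apps, web, videogames, ...).
print(trials["delivery_channel"].value_counts(normalize=True).mul(100).round(1))

# Cross-tabulation supports statements like "apps were used more
# frequently for chronic pain".
print(pd.crosstab(trials["condition_group"], trials["delivery_channel"],
                  normalize="index").mul(100).round(1))
```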
APA, Harvard, Vancouver, ISO, and other styles
40

Radanliev, Petar, and David De Roure. "New and emerging forms of data and technologies: literature and bibliometric review." Multimedia Tools and Applications, July 30, 2022. http://dx.doi.org/10.1007/s11042-022-13451-5.

Full text
Abstract:
Abstract With the increased digitalisation of our society, new and emerging forms of data present new values and opportunities for improved data-driven multimedia services, and even new solutions for managing future global pandemics (i.e., Disease X). This article conducts a literature review and bibliometric analysis of existing research records on new and emerging forms of multimedia data. The literature review comprises a qualitative search of the most prominent journal and conference publications on this topic, while the bibliometric analysis uses statistical software (R) to analyse Web of Science data records. The results are somewhat unexpected. Despite the special relationship between the US and the UK, there is not much evidence of collaboration in research on this topic. Similarly, despite the negative media publicity on the current relationship between the US and China (and the US sanctions on China), joint research on this topic seems to be growing strongly. It would be interesting to repeat this exercise after a few years and compare the results, as it is possible that the current US sanctions on China have not yet taken full effect.
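One of the bibliometric computations mentioned, counting cross-country research collaborations, reduces to a small co-occurrence count. The sketch below (in Python, although the study itself used R) assumes a pre-processed export with one semicolon-separated country list per record; that input format is an assumption, not the raw Web of Science file format.

```python
# Count country-pair co-authorship from per-record affiliation country lists.
from collections import Counter
from itertools import combinations

records = [            # invented pre-processed affiliation data
    "USA; China",
    "UK; Germany; UK",
    "USA; China; Australia",
]

pair_counts = Counter()
for rec in records:
    countries = sorted(set(c.strip() for c in rec.split(";")))
    pair_counts.update(combinations(countries, 2))     # deduplicated pairs

for pair, n in pair_counts.most_common():
    print(pair, n)   # e.g. ('China', 'USA') 2 -> evidence of collaboration
```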
APA, Harvard, Vancouver, ISO, and other styles
41

Pedram, Shiva, Grace Kennedy, and Sal Sanzone. "Assessing the validity of VR as a training tool for medical students." Virtual Reality 28, no. 1 (January 12, 2024). http://dx.doi.org/10.1007/s10055-023-00912-x.

Full text
Abstract:
Abstract The advances in virtual reality technologies, increased availability, and reduced hardware costs have diminished many of the early challenges in the adoption of VR. However, a commonly identified gap in immersive virtual reality head-mounted display (VR-HMD) training for medical education is confidence in the long-term validity of the applications: in particular, the acceleration of the learning curve, the efficacy of learning outcomes over time, and the actual translation of skills into real environments. Research shows a wide range of ad hoc applications, with superficial evaluations often conducted by technology vendors, based on assumed environments and tasks, envisaged (as opposed to actual) users, and effectiveness of learning outcomes underpinned by little or no research focusing on a requirements-driven validation approach. This presents decision-making challenges for those seeking to adopt, implement, and embed such systems in teaching practice. The current paper aims to (i) determine whether medical VR training improves the skill acquisition of training candidates, (ii) determine the factors affecting the acquisition of skills, and (iii) validate the VR-based training using a requirements-driven approach. In this paper, we used within- and between-subject design approaches to assess the validity of a VR-based surgical training platform developed by Vantari VR against requirements that have been identified as affecting learning processes and outcomes in VR-based training. First, the study and control groups were compared based on their levels of skill acquisition. Then, by tailoring a requirements framework, the system was validated against the appropriate requirements. In total, 74 out of 109 requirements were investigated and evaluated against survey, observer, and stakeholder workshop data. The training scenario covered the topic of arterial blood gas (ABG) collection for second-year university medical students. In total, 44 students volunteered to participate in this study and were randomly assigned to either the study or the control group. Students exposed to VR training (the study group) outperformed the control group in practical clinical skills training tasks and also adhered to better safety and hygiene practices. The study group also had a greater procedural completion rate than the control group. Students showed increased self-efficacy and knowledge scores immediately post-VR training. Prior ABG training did not impact VR training outcomes. Low levels of simulation sickness, physical strain, and stress, coupled with high levels of enjoyability, engagement, presence, and fidelity, were identified as factors affecting the overall training experience. In terms of learning, high scores were recorded for active learning, cognitive benefit, and reflective thinking. Lastly, by validating the system against 74 system requirements, the study found a user acceptance level of 75%. This enabled the identification of weaknesses in the current system and possible future directions.
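The between-subject comparison at the core of this design can be sketched as a two-sample test. The score arrays below are invented placeholders; the study itself used richer instruments and requirement-level evaluation.

```python
# Compare skill-acquisition scores of a VR-trained group and a control group
# with a Welch two-sample t-test (does not assume equal variances).
from scipy import stats

vr_group = [78, 85, 90, 82, 88, 91, 79, 86]       # hypothetical task scores
control_group = [70, 74, 81, 69, 77, 72, 75, 71]

t_stat, p_value = stats.ttest_ind(vr_group, control_group, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would support a finding that the VR-trained group
# outperformed controls on practical clinical skills tasks.
```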
APA, Harvard, Vancouver, ISO, and other styles
42

Pace, Steven. "Revisiting Mackay Online." M/C Journal 22, no. 3 (June 19, 2019). http://dx.doi.org/10.5204/mcj.1527.

Full text
Abstract:
Introduction: In July 1997, the Mackay campus of Central Queensland University hosted a conference with the theme Regional Australia: Visions of Mackay. It was the first academic conference to be held at the young campus, and its aim was to provide an opportunity for academics, business people, government officials, and other interested parties to discuss their visions for the development of Mackay, a regional community of 75,000 people situated on the Central Queensland coast (Danaher). I delivered a presentation at that conference and authored a chapter in the book that emerged from its proceedings. The chapter entitled “Mackay Online” explored the potential impact that the Internet could have on the Mackay region, particularly in the areas of regional business, education, health, and entertainment (Pace). Two decades later, how does the reality compare with that vision? Broadband Blues: At the time of the Visions of Mackay conference, public commercial use of the Internet was in its infancy. Many Internet services and technologies that users take for granted today were uncommon or non-existent then. Examples include online video, video-conferencing, Voice over Internet Protocol (VoIP), blogs, social media, peer-to-peer file sharing, payment gateways, content management systems, wireless data communications, smartphones, mobile applications, and tablet computers. In 1997, most users connected to the Internet using slow dial-up modems with speeds ranging from 28.8 Kbps to 33.6 Kbps. 56 Kbps modems had just become available. Lamenting these slow data transmission speeds, I looked forward to a time when widespread availability of high-bandwidth networks would allow the Internet’s services to “expand to include electronic commerce, home entertainment and desktop video-conferencing” (Pace 103). Although that future eventually arrived, I incorrectly anticipated how it would arrive. In 1997, Optus and Telstra were engaged in the rollout of hybrid fibre coaxial (HFC) networks in Sydney, Melbourne, and Brisbane for the Optus Vision and Foxtel pay TV services (Meredith). These HFC networks had a large amount of unused bandwidth, which both Telstra and Optus planned to use to provide broadband Internet services. Telstra's Big Pond Cable broadband service was already available to approximately one million households in Sydney and Melbourne (Taylor), and Optus was considering extending its cable network into regional Australia through partnerships with smaller regional telecommunications companies (Lewis). These promising developments seemed to point the way forward to a future high-bandwidth network, but that was not the case. A short time after the Visions of Mackay conference, Telstra and Optus ceased the rollout of their HFC networks in response to the invention of Asymmetric Digital Subscriber Line (ADSL) technology, which increases the bandwidth of copper wire and enables Internet connections of up to 6 Mbps over the existing phone network. ADSL was significantly faster than a dial-up service, it was broadly available to homes and businesses across the country, and it did not require enormous investment in infrastructure. However, ADSL could not offer speeds anywhere near the 27 Mbps of the HFC networks. When it came to broadband provision, Australia seemed destined to continue playing catch-up with the rest of the world.
According to data from the Organisation for Economic Cooperation and Development (OECD), in 2009 Australia ranked 18th in the world for broadband penetration, with 24.1 percent of Australians having a fixed-line broadband subscription. Statistics like these eventually prompted the federal government to commit to the deployment of a National Broadband Network (NBN). In 2009, the Kevin Rudd Government announced that the NBN would combine fibre-to-the-premises (FTTP), fixed wireless, and satellite technologies to deliver Internet speeds of up to 100 Mbps to 90 percent of Australian homes, schools, and workplaces (Rudd). The rollout of the NBN in Mackay commenced in 2013 and continued, suburb by suburb, until its completion in 2017 (Frost, “Mackay”; Garvey). The rollout was anything but smooth. After a change of government in 2013, the NBN was redesigned to reduce costs. A mixed copper/optical technology known as fibre-to-the-node (FTTN) replaced FTTP as the preferred approach for providing most NBN connections. The resulting connection speeds were significantly slower than the 100 Mbps that was originally proposed. Many Mackay premises could only achieve a maximum speed of 40 Mbps, which led to some overcharging by Internet service providers, and subsequent compensation for failing to deliver services they had promised (“Optus”). Some Mackay residents even complained that their new NBN connections were slower than their former ADSL connections. NBN Co representatives claimed that the problems were due to “service providers not buying enough space in the network to provide the service they had promised to customers” (“Telcos”). Unsurprisingly, the number of complaints about the NBN that were lodged with the Telecommunications Industry Ombudsman skyrocketed during the last six months of 2017. Queensland complaints increased by approximately 40 percent when compared with the same period during the previous year (“Qld”). Despite the challenges presented by infrastructure limitations, the rollout of the NBN was a boost for the Mackay region. For some rural residents, it meant having reliable Internet access for the first time. Frost, for example, reports on the experiences of a Mackay couple who could not get an ADSL service at their rural home because it was too far away from the nearest telephone exchange. Unreliable 3G mobile broadband was the only option for operating their air-conditioning business. All of that changed with the arrival of the NBN. “It’s so fast we can run a number of things at the same time”, the couple reported (“NBN”). Networking the Nation: One factor that contributed to the uptake of Internet services in the Mackay region after the Visions of Mackay conference was the Australian Government’s Networking the Nation (NTN) program. When the national telecommunications carrier Telstra was partially privatised in 1997, and further sold in 1999, proceeds from the sale were used to fund an ambitious communications infrastructure program named Networking the Nation (Department of Communications, Information Technology and the Arts). The program funded projects that improved the availability, accessibility, affordability, and use of communications facilities and services throughout regional Australia.
Eligibility for funding was limited to not-for-profit organisations, including local councils, regional development organisations, community groups, local government associations, and state and territory governments. In 1998, the Mackay region received $930,000 in Networking the Nation funding for Mackay Regionlink, a project that aimed to provide equitable community access to online services, skills development for local residents, an affordable online presence for local business and community organisations, and increased external awareness of the Mackay region (Jewell et al.). One element of the project was a training program that provided basic Internet skills to 2,168 people across the region over a period of two years. A second element of the project involved the establishment of 20 public Internet access centres in locations throughout the region, such as libraries, community centres, and tourist information centres. The centres provided free Internet access to users and encouraged local participation and skill development. More than 9,200 users were recorded in these centres during the first year of the project, and the facilities remained active until 2006. A third element of the project was a regional web portal that provided a free, easily updated online presence for community organisations. The project aimed to have every business and community group in the Mackay region represented on the website, with hosting fees for the business web pages funding its ongoing operation and development. More than 6,000 organisations were listed on the site, and the project remained financially viable until 2005. The availability, affordability and use of communications facilities and services in Mackay increased significantly during the period of the Regionlink project. Changes in technology, services, markets, competition, and many other factors contributed to this increase, so it is difficult to ascertain the extent to which Mackay Regionlink fostered those outcomes. However, the large number of people who participated in the Regionlink training program and made use of the public Internet access centres suggests that the project had a positive influence on digital literacy in the Mackay region. The Impact on Business: The Internet has transformed regional business for both consumers and business owners alike since the Visions of Mackay conference. When Mackay residents made a purchase in 1997, their choice of suppliers was limited to a few local businesses. Today they can shop online in a global market. Security concerns were initially a major obstacle to the growth of electronic commerce. Consumers were slow to adopt the Internet as a place for doing business, fearing that their credit card details would be vulnerable to hackers once they were placed online. After observing the efforts that finance and software companies were making to eliminate those obstacles, I anticipated that it would only be a matter of time before online transactions became commonplace: Consumers seeking a particular product will be able to quickly find the names of suitable suppliers around the world, compare their prices, and place an order with the one that can deliver the product at the cheapest price. (Pace 106) This expectation was soon fulfilled by the arrival of online payment systems such as PayPal in 1998, and online shopping services such as eBay in 1997. eBay is a global online auction and shopping website where individuals and businesses buy and sell goods and services worldwide.
The eBay service is free to use for buyers, but sellers are charged modest fees when they make a sale. It exemplifies the notion of “friction-free capitalism” articulated by Gates (157). In 1997, regional Australian business owners were largely sceptical about the potential benefits the Internet could bring to their businesses. Only 11 percent of Australian businesses had some form of web presence, and less than 35 percent of those early adopters felt that their website was significant to their business (Department of Industry, Science and Tourism). Anticipating the significant opportunities that the Internet offered Mackay businesses to compete in new markets, I recommended that they work “towards the goal of providing products and services that meet the needs of international consumers as well as local ones” (107). In the two decades that have passed since that time, many Mackay businesses have been doing just that. One prime example is Big on Shoes (bigonshoes.com.au), a retailer of ladies’ shoes from sizes five to fifteen (Plane). Big on Shoes has physical shopfronts in Mackay and Moranbah, an online store that has been operating since 2009, and more than 12,000 followers on Facebook. This speciality store caters for women who have traditionally been unable to find shoes in their size. As the store’s customer base has grown within Australia and internationally, an unexpected transgender market has also emerged. In 2018 Big on Shoes was one of 30 regional businesses featured in the first Facebook and Instagram Annual Gift Guide, and it continues to build on its strengths (Cureton). The Impact on Health: The growth of the Internet has improved the availability of specialist health services for people in the Mackay region. Traditionally, access to surgical services in Mackay has been much more limited than in metropolitan areas because of the shortage of specialists willing to practise in regional areas (Green). In 2003, a senior informant from the Royal Australasian College of Surgeons bluntly described the Central Queensland region from Mackay to Gladstone as “a black hole in terms of surgery” (Birrell et al. 15). In 1997 I anticipated that, although the Internet would never completely replace a visit to a local doctor or hospital, it would provide tools that improve the availability of specialist medical services for people living in regional areas. Using these tools, doctors would be able to “analyse medical images captured from patients living in remote locations” and “diagnose patients at a distance” (Pace 108). These expectations have been realised in the form of Queensland Health’s Telehealth initiative, which permits medical specialists in Brisbane and Townsville to conduct consultations with patients at the Mackay Base Hospital using video-conference technology. Telehealth reduces the need for patients to travel for specialist advice, and it provides health professionals with access to peer support. Averill (7), for example, reports on the experience of a breast cancer patient at the Mackay Base Hospital who was able to participate in a drug trial with a Townsville oncologist through the Telehealth network. Mackay health professionals organised the patient’s scans, administered blood tests, and checked her lymph nodes, blood pressure and weight. Townsville health professionals then used this information to advise the Mackay team about her ongoing treatment. The patient expressed appreciation that the service allowed her to avoid the lengthy round-trip to Townsville.
Prior to being offered the Telehealth option, she had refused to participate in the trial because "the trip was just too much of a stumbling block" (Averill 7). The Impact on Media and Entertainment The field of media and entertainment is another aspect of regional life that has been reshaped by the Internet since the Visions of Mackay conference. Most of these changes have been equally apparent in both regional and metropolitan areas. Over the past decade, the way individuals consume media has been transformed by new online services offering user-generated video, video-on-demand, and catch-up TV. These developments were among the changes I anticipated in 1997: The convergence of television and the Internet will stimulate the creation of new services such as video-on-demand. Today television is a synchronous media—programs are usually viewed while they are being broadcast. When high-quality video can be transmitted over the information superhighway, users will be able to watch what they want, when and where they like. […] Newly released movies will continue to be rented, but probably not from stores. Instead, consumers will shop on the information superhighway for movies that can be delivered on demand. In the mid-2000s, free online video-sharing services such as YouTube and Vimeo began to emerge. These websites allow users to freely upload, view, share, comment on, and curate online videos. Subscription-based streaming services such as Netflix and Amazon Prime have also become increasingly popular since that time. These services offer online streaming of a library of films and television programs for a fee of less than 20 dollars per month. Computers, smart TVs, Blu-ray players, game consoles, mobile phones, tablets, and other devices provide a multitude of ways of accessing streaming services. Some of these devices cost less than 100 dollars, while higher-end electronic devices include the capability as a bundled feature. Netflix became available in Mackay at the time of its Australian launch in 2015. The growth of streaming services greatly reduced the demand for video rental shops in the region, and all of them eventually closed as a result. The last remaining video rental store in Mackay closed its doors in 2018 after trading for 26 years ("Last"). Some of the most dramatic transformations that have occurred in the field of media and entertainment were not anticipated in 1997. The rise of mobile technology, including wireless data communications, smartphones, mobile applications, and tablet computers, was largely unforeseen at that time. Some Internet luminaries such as Vinton Cerf expected that mobile access to the Internet via laptop computers would become commonplace (Lange), but this view did not encompass the evolution of smartphones, and it was not widely held. Similarly, the rise of social media services and the impact they have had on the way people share content and communicate was generally unexpected. In some respects, these phenomena resemble the Black Swan events described by Nassim Nicholas Taleb (xvii)—surprising events with a major effect that are often inappropriately rationalised after the fact.
They remind us of how difficult it is to predict the future media landscape by extrapolating from things we know, while failing to take into consideration what we do not know. The Challenge for Mackay In 1997, when exploring the potential impact that the Internet could have on the Mackay region, I identified a special challenge that the community faced if it wanted to be competitive in this new environment: The region has traditionally prospered from industries that control physical resources such as coal, sugar and tourism, but over the last two decades there has been a global 'shift away from physical assets and towards information as the principal driver of wealth creation' (Petre and Harrington 1996). The risk for Mackay is that its residents may be inclined to believe that wealth can only be created by means of industries that control physical assets. The community must realise that its value-added information is at least as precious as its abundant natural resources. (110) The Mackay region has not responded well to this challenge, as evidenced by measures such as the Knowledge City Index (KCI), a collection of six indicators that assess how well a city is positioned to grow and advance in today's technology-driven, knowledge-based economy. A 2017 study used the KCI to conduct a comparative analysis of 25 Australian cities (Pratchett, Hu, Walsh, and Tuli). Mackay rated reasonably well in the areas of Income and Digital Access. But the city's ratings were "very limited across all the other measures of the KCI": Knowledge Capacity, Knowledge Mobility, Knowledge Industries and Smart Work (44). The need to be competitive in a technology-driven, knowledge-based economy is likely to become even more pressing in the years ahead. The 2017 World Energy Outlook Report estimated that China's coal use is likely to have peaked in 2013 amid a rapid shift toward renewable energy, which means that demand for Mackay's coal will continue to decline (International Energy Agency). The sugar industry is in crisis, finding itself unable to diversify its revenue base or increase production enough to offset falling global sugar prices (Rynne). The region's biggest tourism drawcard, the Great Barrier Reef, continues to be degraded by mass coral bleaching events and ongoing threats posed by climate change and poor water quality (Great Barrier Reef Marine Park Authority). All of these developments have disturbing implications for Mackay's regional economy and its reliance on coal, sugar, and tourism. Diversifying the local economy through the introduction of new knowledge industries would be one way of preparing the Mackay region for the impact of new technologies and the economic challenges that lie ahead. References Averill, Zizi. “Webcam Consultations.” Daily Mercury 22 Nov. 2018: 7. Birrell, Bob, Lesleyanne Hawthorne, and Virginia Rapson. The Outlook for Surgical Services in Australasia. Melbourne: Monash University Centre for Population and Urban Research, 2003. Cureton, Aidan. “Big Shoes, Big Ideas.” Daily Mercury 8 Dec. 2018: 12. Danaher, Geoff, ed. Visions of Mackay: Conference Papers. Rockhampton: Central Queensland UP, 1998. Department of Communications, Information Technology and the Arts. Networking the Nation: Evaluation of Outcomes and Impacts. Canberra: Australian Government, 2005. Department of Industry, Science and Tourism. Electronic Commerce in Australia. Canberra: Australian Government, 1998. Frost, Pamela. “Mackay Is Up with Switch to Speed to NBN.” Daily Mercury 15 Aug. 2013: 8. ———.
“NBN Boost to Business.” Daily Mercury 29 Oct. 2013: 3. Garvey, Cas. “NBN Rollout Hit, Miss in Mackay.” Daily Mercury 11 Jul. 2017: 6. Gates, Bill. The Road Ahead. New York: Viking Penguin, 1995. Great Barrier Reef Marine Park Authority. Reef Blueprint: Great Barrier Reef Blueprint for Resilience. Townsville: Great Barrier Reef Marine Park Authority, 2017. Green, Anthony. “Surgical Services and Referrals in Rural and Remote Australia.” Medical Journal of Australia 177.2 (2002): 110–11. International Energy Agency. World Energy Outlook 2017. France: IEA Publications, 2017. Jewell, Roderick, Mary O’Flynn, Fiorella De Cindio, and Margaret Cameron. “RCM and MRL—A Reflection on Two Approaches to Constructing Communication Memory.” Constructing and Sharing Memory: Community Informatics, Identity and Empowerment. Eds. Larry Stillman and Graeme Johanson. Newcastle: Cambridge Scholars Publishing, 2007. 73–86. Lange, Larry. “The Internet: Where’s It All Going?” Information Week 17 Jul. 1995: 30. “Last Man Standing Shuts Doors after 26 Years of Trade.” Daily Mercury 28 Aug. 2018: 7. Lewis, Steve. “Optus Plans to Share Cost Burden.” Australian Financial Review 22 May 1997: 26. Meredith, Helen. “Time Short for Cable Modem.” Australian Financial Review 10 Apr. 1997: 42. “Optus Offers Comp for Slow NBN.” Daily Mercury 10 Nov. 2017: 15. Organisation for Economic Cooperation and Development. “Fixed Broadband Subscriptions.” OECD Data, n.d. <https://data.oecd.org/broadband/fixed-broadband-subscriptions.htm>. Pace, Steven. “Mackay Online.” Visions of Mackay: Conference Papers. Ed. Geoff Danaher. Rockhampton: Central Queensland UP, 1998. 111–19. Petre, Daniel, and David Harrington. The Clever Country? Australia’s Digital Future. Sydney: Lansdown Publishing, 1996. Plane, Melanie. “A Shoe-In for Big Success.” Daily Mercury 9 Sep. 2017: 6. Pratchett, Lawrence, Richard Hu, Michael Walsh, and Sajeda Tuli. The Knowledge City Index: A Tale of 25 Cities in Australia. Canberra: University of Canberra neXus Research Centre, 2017. “Qld Customers NB-uN Happy: Complaints about NBN Service Double in 12 Months.” Daily Mercury 17 Apr. 2018: 1. Rudd, Kevin. “Media Release: New National Broadband Network.” Parliament of Australia Press Release, 7 Apr. 2009 <https://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id:"media/pressrel/PS8T6">. Rynne, David. “Revitalising the Sugar Industry.” Sugar Policy Insights Feb. 2019: 2–3. Taleb, Nassim Nicholas. The Black Swan: The Impact of the Highly Improbable. New York: Random House, 2007. Taylor, Emma. “A Dip in the Pond.” Sydney Morning Herald 16 Aug. 1997: 12. “Telcos and NBN Co in a Crisis.” Daily Mercury 27 Jul. 2017: 6.
APA, Harvard, Vancouver, ISO, and other styles
43

Wolbring, Gregor. "Is There an End to Out-Able? Is There an End to the Rat Race for Abilities?" M/C Journal 11, no. 3 (July 2, 2008). http://dx.doi.org/10.5204/mcj.57.

Full text
Abstract:
Introduction The purpose of this paper is to explore discourses of ‘ability’ and ‘ableism’. Terms such as abled, dis-abled, en-abled, dis-enabled, diff-abled and transable assume different meanings as we eliminate ‘species-typical’ as the norm and make beyond ‘species-typical’ the norm. This paper contends that there is a pressing need for society to deal with ableism in all of its forms and its consequences. The discourses around ‘able’ and ‘ableism’ fall into two main categories. The discourse around species-typical versus sub-species-typical, as identified by certain powerful members of the species, is one category. This discourse has a long history and is linked to the discourse around health, disease and medicine. This discourse is about people (Harris, "One Principle"; Watson; Duke) who portray disabled people within a medical model of disability (Finkelstein; Penney; Malhotra; British Film Institute; Oliver), a model that classifies disabled people as having an intrinsic defect, an impairment that leads to ‘subnormal’ functioning. Disability Studies is an academic field that questions the medical model and the issue of ‘who defines whom’ as sub-species-typical (Taylor, Shoultz, and Walker; Centre for Disability Studies; Disability and Human Development Department; Disabilitystudies.net; Society for Disability Studies; Campbell). The other category is the discourse around the claim that one has, as a species or a social group, superior abilities compared to other species or other segments of one’s species, whereby this superiority is seen as species-typical. Science and technology research and development and different forms of ableism have always been, and will continue to be, interrelated. The desire and expectation for certain abilities has led to science and technology research and development that promise the fulfillment of these desires and expectations. In turn, science and technology research and development have led to products that enabled new abilities, and with them new expectations and desires for new forms of abilities and ableism. Emerging forms of science and technology, in particular the converging of nanotechnology, biotechnology, information technology, cognitive sciences and synthetic biology (NBICS), increasingly enable the modification of the appearance and functioning of biological structures, including the human body and the bodies of other species, beyond existing norms and inter- and intra-species-typical boundaries. This leads to a changed understanding of the self, the body, relationships with others of the species, and with other species and the environment. There are also accompanying changes in anticipated, desired and rejected abilities, and the transhumanisation of the two ableism categories. A transhumanised form of ableism is a network of beliefs, processes and practices that perceives the improvement of biological structures, including the human body and functioning, beyond species-typical boundaries as the norm, as essential. It judges an unenhanced biological structure, including the human body, as a diminished state of existence (Wolbring, "Triangle"; Wolbring, "Why"; Wolbring, "Glossary").
A by-product of this emerging form of ableism is the appearance of the ‘Techno Poor impaired and disabled people’ (Wolbring, "Glossary"): people who don’t want or who can’t afford beyond-species-typical body ability enhancements and who are, in accordance with the transhumanised form of ableism, perceived as people in a diminished state of being human, and who experience negative treatment as ‘disabled’ accordingly (Miller). Ableism Today: The First Category Ableism (Campbell; Carlson; Overboe) privileges ‘species-typical abilities’ while labelling ‘sub-species-typical abilities’ as deficient, as impaired and undesirable, often with the accompanying disablism (Miller): the discriminatory, oppressive, or abusive behaviour arising from the belief that sub-species-typical people are inferior to others. To quote the UK bioethicist John Harris: I do define disability as “a physical or mental condition we have a strong [rational] preference not to be in” and that it is more importantly a condition which is in some sense a “‘harmed condition’”. So for me the essential elements are that a disabling condition is harmful to the person in that condition and that consequently that person has a strong rational preference not to be in such a condition. (Harris, "Is There") Harris’s quote highlights the non-acceptance of sub-species-typical abilities as variations. Indeed, the term “disabled” is mostly used to describe a person who is perceived as having an intrinsic defect, an impairment, disease, or chronic illness that leads to ‘subnormal’ functioning. A low quality of life and other negative consequences are often seen as the inevitable, unavoidable consequence of such ‘disability’. However, many disabled people do not perceive themselves as suffering entities with a poor quality of life, in need of cure and fixing. As troubling as it is that there is a difference in perception between the ‘afflicted’ and the ‘non-afflicted’ (Wolbring, "Triangle"; also see references in Wolbring, "Science"), even more troubling is the fact that the ‘non-afflicted’ for the most part do not accept the self-perception of the ‘afflicted’ if the self-perception does not fit the agenda of the ‘non-afflicted’ (Wolbring, "Triangle"; Wolbring, "Science"). The views of disabled people who do not see themselves within the patient/medical model are rarely heard (see for example the positive non-medical description of Down Syndrome — Canadian Down Syndrome Society), blatantly ignored — a fact that was recognised in the final documents of the 1999 UNESCO World Conference on Sciences (UNESCO, "Declaration on Science"; UNESCO, "Science Agenda") — or rejected, as shown by the Harris quote (Wolbring, "Science"). The non-acceptance of ‘sub-species-typical functioning’ as a variation, as evident in the Harris quote, also plays itself out in the case that a species-typical person wants to become sub-species-typical. Such behaviour is classified as a disorder, the sentiment being that no one of sound mind would seek to become sub-species-typical. Furthermore, many of the so-called sub-species-typical who accept their body structure and its way of functioning use the ability language and measures employed by species-typical people to gain social acceptance and environmental accommodations. One can often hear ‘sub-species-typical people’ stating that “they can be as ‘able’ as the species-typical people if they receive the right accommodations”. Ableism Today: The Second Category The first category of ableism is only part of the ableism story.
Ableism is much broader and more pervasive, and is not limited to the species-typical/sub-species-typical dichotomy. The second category of ableism is a set of beliefs, processes and practices that produce a particular understanding of the self, the body, relationships with others of the species, and with other species and the environment, based on abilities that are exhibited or cherished (Wolbring, "Why"; Wolbring, "NBICS"). This form of ableism has been used historically, and still is used, by various social groups to justify their elevated level of rights and status in relation to other social groups, other species and the environment they live in (Wolbring, "Why"; Wolbring, "NBICS"). In these cases the claim is not about species-typical versus sub-species-typical, but that one has, as a species or a social group, superior abilities compared to other species or other segments of one’s species. Ableism reflects the sentiment of certain social groups and social structures to cherish and promote certain abilities, such as productivity and competitiveness, over others, such as empathy, compassion and kindness (favouritism of abilities). This favouritism for certain abilities over others leads to the labelling of those who exhibit real or perceived differences from these ‘essential’ abilities as deficient, and can lead to or justify other isms such as racism (it is often stated that the favoured race has superior cognitive abilities over other races), sexism (at the end of the 19th century, women were viewed as biologically fragile, lacking strength, and emotional (exhibiting an undesirable ability), and thus incapable of bearing the responsibility of voting, owning property, and retaining custody of their own children) (Wolbring, "Science"; Silvers), caste-ism, ageism (missing the ability one has as a youth), speciesism (the elevated status of the species Homo sapiens is often justified by stating that Homo sapiens has superior cognitive abilities), anti-environmentalism, and GDP-ism and consumerism (Wolbring, "Why"; Wolbring, "NBICS"), whereby this superiority is seen as species-typical. This flavour of ableism is rarely questioned, even as the group classified as less able tries to show that it is as able as the other group. That ability is used as a measure of worthiness and judgement in the first place is not questioned (Wolbring, "Why"). Science and Technology and Ableism The direction and governance of science and technology and ableism are becoming increasingly interrelated. How we judge and deal with abilities, and what abilities we cherish, influences the direction and governance of science and technology processes, products and research and development. The increasing ability, demand for, and acceptance of changing, improving, modifying and enhancing the human body and other biological organisms, including animals and microbes, in terms of their structure, function or capabilities beyond their species-typical boundaries, and the emerging capability to synthesise, generate and design new genomes and new species from scratch (synthetic biology), lead to a changed understanding of oneself, one’s body, and one’s relationship with others of the species, other species and the environment, and to new forms of ableism and disablism. I have outlined so far the dynamics and characteristics of the existing ableism discourses. The story does not stop here.
Advances in science and technology enable transhumanised forms of the two categories of ableism, exhibiting similar dynamics and characteristics to those seen with the non-transhumanised forms of ableism. Transhumanisation of the First Category of Ableism The transhumanised form of the first category of ableism is a network of beliefs, processes and practices that perceives the constant improvement of biological structures, including the human body and functioning, beyond species-typical boundaries as the norm, as essential, and judges an unenhanced biological structure — species-typical and sub-species-typical — including the human body, as limited, defective, as a diminished state of existence (Wolbring, "Triangle"; Wolbring, "Why"; Wolbring, "Glossary"). It follows the same ideas and dynamics as its non-transhumanised counterpart. It just moves the level of expected abilities from species-typical to beyond-species-typical. It follows a transhumanist model of health (43) where "health" is no longer the endpoint of biological systems functioning within species-typical, normative frameworks. In this model, all Homo sapiens — no matter how conventionally "medically healthy" — are defined as limited, defective, and in need of constant improvement made possible by new technologies (a little bit like the constant software upgrades we do on our computers). "Health" in this model means having obtained, at any given time, maximum enhancement (improvement) of abilities, functioning and body structure. The transhumanist model of health sees enhancement beyond species-typical body structures and functioning as therapeutic interventions (transhumanisation of medicalisation; 2, 43). The transhumanisation of health and ableism could lead to a shift in priorities away from curing sub-species-typical people towards species-typical functioning — which might increasingly be seen as futile and a waste of healthcare and medical resources — and towards using healthcare dollars first to enhance species-typical bodies towards beyond-species-typical functioning, and later to further enhance human bodies that already have beyond-species-typical structures and functioning (enhancement medicine). As in the discourse of its non-transhumanised counterpart, there might not be a choice in the future to reject the enhancements. An earlier quote by Harris (Harris, "Is There") highlighted the non-acceptance of the sub-species-typical as a state one can be in. In his 2007 book Enhancing Evolution: The Ethical Case for Making Better People, Harris makes the case that it is moral to pursue enhancement, if not immoral not to do so (Harris, "One Principle"). Keeping in mind the disablement faced by people who are labelled as subnormative, it is reasonable to expect that those who cannot afford or do not want certain enhancements will be perceived as impaired (techno poor impaired) and will experience disablement (techno poor disabled), in tune with how people labelled as ‘impaired’ are treated today. Transhumanisation of the Second Category of Ableism The second category of ableism is less about the species-typical than about the arbitrary flagging of certain abilities as indicators of rights. The hierarchy of worthiness and superiority is also transhumanised. Cognition: Moving from Human to Sentient Rights Cognition is one ability used to justify many hierarchies within and between species.
If it comes to pass, whether through advances in artificial intelligence or through the cognitive enhancement of non-human biological entities, that other cognitively able sentient species appear, one can expect that rights will eventually shift towards cognition as the measure of rights entitlement (sentient rights) and away from belonging to a given species such as Homo sapiens as a prerequisite of rights. If species-typical abilities are no longer important but certain abilities are (abilities that can be added to all kinds of species), one can expect that species as a concept might become obsolete, or that we will see a reinterpretation of species as one that exhibits certain abilities (given or natural). The Climate Change Link: Ableism and Transhumanism The disregard for nature reflects another form of ableism: humans are here to use nature as they see fit, since they see themselves as superior to it because of their abilities. We might see a climate change-driven appeal for a transhuman version of ableism, where the transhumanisation of humans is seen as a solution for coping with climate change. This could become especially popular if we reach a ‘point of no return’, where severe climate change consequences can no longer be prevented. Other Developments One Can Anticipate under a Transhumanised Form of Ableism The Olympics would see only beyond-species-typical enhanced athletes compete (it doesn’t matter whether they were species-typical before or seen as sub-species-typical), and the transhumanised version of the Paralympics would host species-typical and sub-species-typical athletes (Wolbring, "Oscar Pistorius"). Transhumanised versions of abled, dis-abled, en-abled, dis-enabled, diff-abled, transable and out-able will appear, where the goal is to have the newest upgrades (abled); where one tries to out-able others by having better enhancements; where access to enhancements is seen as en-ablement and the lack of access as dis-enablement; where differently abled is used not just for the sub-species-typical but for the species-typical and sub-species-typical alike; and where transable describes not the species-typical who want to be sub-species-typical but the beyond-species-typical who want to be species-typical. A Final Word To answer the questions posed in the title: with the fall of the species-typical barrier, it is unlikely that there will be an endpoint to the race for abilities and the sentiment of out-able-ing others (on an individual or collective level). The question remaining is who will have access to which abilities, and which abilities are sought after for which purpose. I leave the reader with an exchange between two characters in Deus Ex: Invisible War, a PC and Xbox videogame released in 2003. It is another indicator of the embeddedness of ableism in society’s fabric that the exchange below is the only hit in Google for the term ‘commodification of ability’, despite the widespread societal commodification of abilities, as this paper has hopefully shown. Conversation between Alex D and Paul Denton Paul Denton: If you want to even out the social order, you have to change the nature of power itself. Right? And what creates power? Wealth, physical strength, legislation — maybe — but none of those is the root principle of power. Alex D: I’m listening. Paul Denton: Ability is the ideal that drives the modern state.
It's a synonym for one's worth, one's social reach, one's "election," in the Biblical sense, and it's the ideal that needs to be changed if people are to begin living as equals. Alex D: And you think you can equalise humanity with biomodification? Paul Denton: The commodification of ability — tuition, of course, but, increasingly, genetic treatments, cybernetic protocols, now biomods — has had the side effect of creating a self-perpetuating aristocracy in all advanced societies. When ability becomes a public resource, what will distinguish people will be what they do with it. Intention. Dedication. Integrity. The qualities we would choose as the bedrock of the social order. (Deus Ex: Invisible War) References British Film Institute. "Ways of Thinking about Disability." 2008. 25 June 2008 ‹http://www.bfi.org.uk/education/teaching/disability/thinking/›. Campbell, Fiona A.K. "Inciting Legal Fictions: 'Disability's' Date with Ontology and the Ableist Body of the Law." Griffith Law Review 10.1 (2001): 42. Canadian Down Syndrome Society. "Down Syndrome Redefined." 2007. 25 June 2008 ‹http://www.cdss.ca/site/about_us/policies_and_statements/down_syndrome.php›. Carlson, Licia. "Cognitive Ableism and Disability Studies: Feminist Reflections on the History of Mental Retardation." Hypatia 16.4 (2001): 124-46. Centre for Disability Studies. "What is the Centre for Disability Studies (CDS)?" Leeds: Leeds University, 2008. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/what.htm›. Deus Ex: Invisible War. "The Commodification of Ability." Wikiquote, 2008 (2003). 25 June 2008 ‹http://en.wikiquote.org/wiki/Deus_Ex:_Invisible_War›. Disability and Human Development Department. "PhD in Disability Studies." Chicago: University of Illinois at Chicago, 2008. 25 June 2008 ‹http://www.ahs.uic.edu/dhd/academics/phd.php›, ‹http://www.ahs.uic.edu/dhd/academics/phd_objectives.php›. Disabilitystudies.net. "About the disabilitystudies.net." 2008. 25 June 2008 ‹http://www.disabilitystudies.net/index.php›. Duke, Winston D. "The New Biology." Reason 1972. 25 June 2008 ‹http://www.lifeissues.net/writers/irvi/irvi_34winstonduke.html›. Finkelstein, Vic. "Modelling Disability." Leeds: Disability Studies Program, Leeds University, 1996. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/archiveuk/finkelstein/models/models.htm›. Harris, J. Enhancing Evolution: The Ethical Case for Making Better People. Princeton: Princeton University Press, 2007. 25 June 2008 ‹http://www.studia.no/vare.php?ean=9780691128443›. Harris, J. "Is There a Coherent Social Conception of Disability?" Journal of Medical Ethics 26.2 (2000): 95-100. Harris, J. "One Principle and Three Fallacies of Disability Studies." Journal of Medical Ethics 27.6 (2001): 383-87. Malhotra, Ravi. "The Politics of the Disability Rights Movements." New Politics 8.3 (2001). 25 June 2008 ‹http://www.wpunj.edu/newpol/issue31/malhot31.htm›. Miller, Paul, Sophia Parker, and Sarah Gillinson. "Disablism: How to Tackle the Last Prejudice." London: Demos, 2004. 25 June 2008 ‹http://www.demos.co.uk/files/disablism.pdf›. Oliver, Mike. "The Politics of Disablement." Leeds: Disability Studies Program, Leeds University, 1990. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/archiveuk/Oliver/p%20of%20d%20Oliver%20contents.pdf›, ‹http://www.leeds.ac.uk/disability-studies/archiveuk/Oliver/p%20of%20d%20Oliver1.pdf›. Overboe, James. "Vitalism: Subjectivity Exceeding Racism, Sexism, and (Psychiatric) Ableism." Wagadu: A Journal of Transnational Women's and Gender Studies 4 (2007). 25 June 2008 ‹http://web.cortland.edu/wagadu/Volume%204/Articles%20Volume%204/Chapter2.htm›, ‹http://web.cortland.edu/wagadu/Volume%204/Vol4pdfs/Chapter%202.pdf›.
Penney, Jonathan. "A Constitution for the Disabled or a Disabled Constitution? Toward a New Approach to Disability for the Purposes of Section 15(1)." Journal of Law and Equality 1.1 (2002): 84-115. 25 June 2008 ‹http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID876878_code574775.pdf?abstractid=876878&mirid=1›. Silvers, A., D. Wasserman, and M.B. Mahowald. Disability, Difference, Discrimination: Perspectives on Justice in Bioethics and Public Policy. Lanham: Rowman & Littlefield, 1998. Society for Disability Studies (USA). "General Guidelines for Disability Studies Program." 2004. 25 June 2008 ‹http://www.uic.edu/orgs/sds/generalinfo.html#4›, ‹http://www.uic.edu/orgs/sds/Guidelines%20for%20DS%20Program.doc›. Taylor, Steven, Bonnie Shoultz, and Pamela Walker. "Disability Studies: Information and Resources." Syracuse: The Center on Human Policy, Law, and Disability Studies, Syracuse University, 2003. 25 June 2008 ‹http://thechp.syr.edu//Disability_Studies_2003_current.html#Introduction›. UNESCO. "UNESCO World Conference on Sciences Declaration on Science and the Use of Scientific Knowledge." 1999. 25 June 2008 ‹http://www.unesco.org/science/wcs/eng/declaration_e.htm›. UNESCO. "UNESCO World Conference on Sciences Science Agenda-Framework for Action." 1999. 25 June 2008 ‹http://www.unesco.org/science/wcs/eng/framework.htm›. Watson, James D. "Genes and Politics." Journal of Molecular Medicine 75.9 (1997): 624-36. Wolbring, G. "Science and Technology and the Triple D (Disease, Disability, Defect)." Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. Eds. Mihail C. Roco and William Sims Bainbridge. Dordrecht: Kluwer Academic, 2003. 232-43. 25 June 2008 ‹http://www.wtec.org/ConvergingTechnologies/›, ‹http://www.bioethicsanddisability.org/nbic.html›. Wolbring, G. "The Triangle of Enhancement Medicine, Disabled People, and the Concept of Health: A New Challenge for HTA, Health Research, and Health Policy." Edmonton: Alberta Heritage Foundation for Medical Research, Health Technology Assessment Unit, 2005. 25 June 2008 ‹http://www.ihe.ca/documents/hta/HTA-FR23.pdf›. Wolbring, G. "Glossary for the 21st Century." International Center for Bioethics, Culture and Disability, 2007. 25 June 2008 ‹http://www.bioethicsanddisability.org/glossary.htm›. Wolbring, G. "NBICS, Other Convergences, Ableism and the Culture of Peace." Innovationwatch.com, 2007. 25 June 2008 ‹http://www.innovationwatch.com/choiceisyours/choiceisyours-2007-04-15.htm›. Wolbring, G. "Oscar Pistorius and the Future Nature of Olympic, Paralympic and Other Sports." SCRIPTed — A Journal of Law, Technology & Society 5.1 (2008): 139-60. 25 June 2008 ‹http://www.law.ed.ac.uk/ahrc/script-ed/vol5-1/wolbring.pdf›. Wolbring, G. "Why NBIC? Why Human Performance Enhancement?" Innovation: The European Journal of Social Science Research 21.1 (2008): 25-40.
APA, Harvard, Vancouver, ISO, and other styles
44

Jethani, Suneel, and Robbie Fordyce. "Darkness, Datafication, and Provenance as an Illuminating Methodology." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2758.

Full text
Abstract:
Data are generated and employed for many ends, including governing societies, managing organisations, leveraging profit, and regulating places. In all these cases, data are key inputs into systems that paradoxically are implemented in the name of making societies more secure, safe, competitive, productive, efficient, transparent and accountable, yet do so through processes that monitor, discipline, repress, coerce, and exploit people. (Kitchin, 165) Introduction Provenance refers to the place of origin or earliest known history of a thing. It refers to the custodial history of objects. It is a term that is commonly used in the art world but has also come into the language of other disciplines such as computer science. It has also been applied in reference to the transactional nature of objects in supply chains and circular economies. In an interview with Scotland’s Institute for Public Policy Research, Adam Greenfield suggests that provenance has a role to play in the “establishment of reliability”, given that if a “transaction or artifact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties” (Lawrence). Recent debates on the unrecognised effects of digital media have convincingly argued that data is fully embroiled within capitalism, but it is necessary to remember that data is more than just a transactable commodity. One challenge in bringing processes of datafication into critical light is how we understand what happens to data from its point of acquisition to the point where it becomes instrumental in the production of outcomes that are of ethical concern. All data gather their meaning through relationality, whether acting as a representation of an exterior world or representing relations between other data points. Data objectifies relations, and despite any higher-order complexities, at its core, data is involved in factualising a relation into a binary. Assumptions like these about data shape reasoning, decision-making and evidence-based practice in private, personal and economic contexts. If processes of datafication are to be better understood, then we need to seek out conceptual frameworks that are adequate to the way that data is used and understood by its users. Deborah Lupton suggests that often we give data “other vital capacities because they are about human life itself, have implications for human life opportunities and livelihoods, [and] can have recursive effects on human lives (shaping action and concepts of embodiment ... selfhood [and subjectivity]) and generate economic value”. But when data are afforded such capacities, the analysis of their politics also calls for us to “consider context” and work towards “making the labour [of datafication] visible” (D’Ignazio and Klein). For Jenny L. Davis, getting beyond simply thinking about what data affords involves bringing to light how data continually and dynamically requests, demands, encourages, discourages, and refuses certain operations and interpretations. It is in this re-orientation of the question from what to how that “practical analytical tool[s]” (Davis) can be found. Davis writes: requests and demands are bids placed by technological objects, on user-subjects. Encourage, discourage and refuse are the ways technologies respond to bids user-subjects place upon them. Allow pertains equally to bids from technological objects and the object’s response to user-subjects.
(Davis) Building on Lupton, Davis, and D’Ignazio and Klein, we see three principles that we consider crucial for work on data, darkness and light: data is not simply a technological object that exists within sociotechnical systems without having undergone any priming or processing, so as a consequence the data-collecting entity imposes standards and ways of imagining data before it comes into contact with user-subjects; data is not neutral and does not possess qualities that make it equivalent to the things that it comes to represent; data is partial, situated, and contingent on technical processes, but the outcomes of its use afford it properties beyond those that are purely informational. This article builds from these principles and traces a framework for investigating the complications arising when data moves from one context to another. We draw on “data provenance” as it is applied in the computing and informational sciences, where it is used to query the location and accuracy of data in databases. In developing “data provenance”, we adapt provenance from an approach that solely focuses on the technical infrastructures and material processes that move data from one place to another, and turn to the sociotechnical, institutional, and discursive forces that bring about data acquisition, sharing, interpretation, and re-use. As data passes through open, opaque, and darkened spaces within sociotechnical systems, we argue that provenance can shed light on gaps and overlaps in technical, legal, ethical, and ideological forms of data governance. Whether data becomes exclusive by moving from light to dark (as has happened with the removal of many pages and links from Facebook around the Australian news revenue-sharing bill), or is publicised by shifting from dark to light (such as the Australian government releasing investigative journalist Andie Fox’s welfare history to the press), or even recontextualised from one dark space to another (as with genetic data shifting from medical to legal contexts, or the theft of personal financial data), there is still a process of transmission here that we can assess and critique through provenance. These different modalities, which guide data acquisition, sharing, interpretation, and re-use, cascade and influence different elements and apparatuses within data-driven sociotechnical systems to different extents depending on context. Attempts to illuminate and make sense of these complex forces, we argue, expose data-driven practices as inherently political in terms of whose interests they serve. Provenance in Darkness and in Light When processes of data capture, sharing, interpretation, and re-use are obscured, it impacts on the extent to which we might retrospectively examine cases where malpractice in responsible data custodianship and stewardship has occurred, because it makes it difficult to see how things have been rendered real and knowable, changed over time, had causality ascribed to them, and to what degree of confidence a decision has been made based on a given dataset. To borrow from this issue’s concerns, the paradigm of dark spaces covers a range of different kinds of valences on the idea of private, secret, or exclusive contexts. We can parallel it with the idea of ‘light’ spaces, which equally holds a range of different concepts about what is open, public, or accessible.
For instance, in the use of social data garnered from online platforms, the practices of academic researchers and analysts working in the private sector often fall within a grey zone when it comes to consent and transparency. Here the binary notion of public and private is complicated by the passage of data from light to dark (and back to light). Writing in a different context, Michael Warner complicates the notion of publicness. He observes that the idea of something being public is in and of itself always sectioned off, divorced from being fully generalisable, and it is “just whatever people in a given context think it is” (11). Michael Hardt and Antonio Negri argue that publicness is already shadowed by an idea of state ownership, leaving us in a situation where public and private already both sit on the same side of the propertied/commons divide as if the “only alternative to the private is the public, that is, what is managed and regulated by states and other governmental authorities” (vii). The same can be said about the way data is conceived as a public good or common asset. These ideas of light and dark are useful categorisations for deliberately moving past the tensions that arise when trying to qualify different subspecies of privacy and openness. The problem with specific linguistic dyads of private vs. public, or open vs. closed, and so on, is that they are embedded within legal, moral, technical, economic, or rhetorical distinctions that already involve normative judgements on whether such categories are appropriate or valid. Data may be located in a dark space for legal reasons that fall under the legal domain of ‘private’ or it may be dark because it has been stolen. It may simply be inaccessible, encrypted away behind a lost password on a forgotten external drive. Equally, there are distinctions around lightness that can be glossed – the openness of Open Data (see: theodi.org) is of an entirely separate category to the AACS encryption key, which was illegally but enthusiastically shared across the internet in 2007 to the point where it is now accessible on Wikipedia. The language of light and dark spaces allows us to cut across these distinctions and discuss in deliberately loose terms the degree to which something is accessed, with any normative judgments reserved for the cases themselves. Data provenance, in this sense, can be used as a methodology to critique the way that data is recontextualised from light to dark, dark to light, and even within these distinctions. Data provenance critiques the way that data is presented as if it were “there for the taking”. This also suggests that when data is used for some or another secondary purpose – generally for value creation – some form of closure or darkening is to be expected. Data in the public domain is more than simply a specific informational thing: there is always context, and this contextual specificity, we argue, extends far beyond anything that can be captured in a metadata schema or a licensing model. Even the transfer of data from one open, public, or light context to another will evoke new degrees of openness and luminosity that should not be assumed to be straightforward. And with this a new set of relations between data-user-subjects and stewards emerges. 
The movement of data between public and private contexts, by virtue of the growing amount of personal information generated through the traces people leave behind as they make use of increasingly digitised services in their everyday lives, means that data-motile processes are constantly occurring behind the scenes – in darkness – where data comes into the view, or possession, of third parties without obvious mechanisms of consent, disclosure, or justification. Given that there are “many hands” (D’Ignazio and Klein) involved in making data portable between light and dark spaces, there can equally be diversity in the approaches taken to generate critical literacies of these relations. There are two complexities that we argue are important for considering the ethics of data motility from light to dark, and this differs from the concerns that we might have when we think about other illuminating tactics such as open data publishing, freedom-of-information requests, or when data is anonymously leaked in the public interest. The first is that the terms of ethics must be communicable to individuals and groups whose data literacy may be low, effectively non-existent, or not oriented around the objective of upholding or generating data-luminosity as an element of a wider, more general form of responsible data stewardship. Historically, a productive approach to data literacy has been finding appropriate metaphors from adjacent fields that can help add depth – by way of analogy – to understanding data motility. Here we return to our earlier assertion that data is more than simply a transactable commodity. Consider the notion of “giving” and “taking” in the context of darkness and light. The analogy of giving and taking is deeply embedded in the notion of data acquisition and sharing by virtue of the etymology of the word data itself: in Latin, “things having been given”, whereby in French données, a natural gift, perhaps one that is given to those that attempt capture for the purposes of empiricism – representation in quantitative form is a quality that is given to phenomena being brought into the light. However, in the contemporary parlance of “analytics”, data is “taken” in the form of recording, measuring, and tracking. Data is considered to be something valuable enough to give or take because of its capacity to stand in for real things. The empiricist’s preferred method is to take rather than to accept what is given (Kitchin, 2); the data-capitalist’s is to incentivise the act of giving or to take what is already given (or yet to be taken). Because data-motile processes are not simply passive forms of reading what is contained within a dataset, the materiality and subjectivity of data extraction and interpretation is something that should not be ignored. These processes represent the recontextualisation of data from one space to another and are expressed in the landmark case of Cambridge Analytica, where a private research company extracted data from Facebook and used it to engage in psychometric analysis of unknowing users. Table 1 summarises the mechanisms of data capture at play in such cases.

Table 1: Mechanisms of Data Capture
Historical: Information created, recorded, or gathered about people or things directly from the source or a delegate, but accessed for secondary purposes.
Observational: Represents patterns and realities of everyday life, collected by subjects by their own choice and with some degree of discretion over the methods. Third parties access this data through reciprocal arrangement with the subject (e.g., in exchange for providing a digital service such as online shopping, banking, healthcare, or social networking).
Purposeful: Data gathered with a specific purpose in mind and collected with the objective to manipulate its analysis to achieve certain ends.
Integrative: Places less emphasis on specific data types but rather looks towards social and cultural factors that afford access to and facilitate the integration and linkage of disparate datasets.

There are ethical challenges associated with data that has been sourced from pre-existing sets or that has been extracted from websites and online platforms through scraping, and then enriched through cleaning, annotation, de-identification, aggregation, or linking to other data sources (tab. 1). As a way to address this challenge, our suggestion of “data provenance” can be defined as where a data point comes from, how it came into being, and how it became valuable for some or another purpose. In developing this idea, we borrow from both the computational and biological sciences (Buneman et al.), where provenance, as a form of qualitative inquiry into data-motile processes, centres around understanding the origin of a data point as part of a broader, almost forensic, analysis of quality and error-potential in datasets. Provenance is an evaluation of a priori computational inputs and outputs from the results of database queries and audits. Provenance can also be applied to other contexts where data passes through sociotechnical systems, such as behavioural analytics, targeted advertising, machine learning, and algorithmic decision-making. Conventionally, data provenance is based on understanding where data has come from and why it was collected. Both these questions are concerned with the evaluation of the nature of a data point within the wider context of a database that is itself situated within a larger sociotechnical system where the data is made available for use. In its conventional sense, provenance is a means of ensuring that a data point is maintained as a single source of truth (Buneman, 89), and by way of a reproducible mechanism which allows for its path through a set of technical processes, it affords the assessment of how reliable a system’s output might be by sheer virtue of the ability for one to retrace the steps from point A to B. “Where” and “why” questions are illuminating because they offer an ends-and-means view of the relation between the origins and ultimate uses of a given data point or set. Provenance is interesting when studying data luminosity because means and ends have much to tell us about the origins and uses of data in ways that gesture towards a more accurate and structured research agenda for data ethics that takes the emphasis away from individual moral patients and reorients it towards practices that occur within information management environments. Provenance offers researchers seeking to study data-driven practices a similar heuristic to a journalist’s line of questioning: who, what, when, where, why, and how? This last question of how is something that can be incorporated into conventional models of provenance that make it useful in data ethics. The question of how data comes into being extends questions of power, legality, literacy, permission-seeking, and harm in an entangled way, and notes how these factors shape the nature of personal data as it moves between contexts.
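To make the computational sense of provenance a little more concrete, the following is a minimal illustrative sketch in Python. It is our own hypothetical illustration rather than anything specified by Buneman et al. or by the article, and every name in it is invented: a dataset carries an append-only provenance trail so that the steps from point A to B can be retraced.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    # One entry in the custodial history of a dataset.
    actor: str       # who performed the operation
    operation: str   # e.g. "collected", "cleaned", "linked", "sold"
    purpose: str     # the stated reason for the operation
    timestamp: str   # when the operation was logged

@dataclass
class Dataset:
    name: str
    provenance: list = field(default_factory=list)

    def log(self, actor: str, operation: str, purpose: str) -> None:
        # Append rather than overwrite, so the trail accumulates and
        # earlier recontextualisations remain visible for later audit.
        self.provenance.append(ProvenanceRecord(
            actor, operation, purpose,
            datetime.now(timezone.utc).isoformat()))

# Usage: the trail records data moving from a light context to a darker one.
ds = Dataset("survey_responses")
ds.log("research_team", "collected", "opt-in survey on media use")
ds.log("analytics_vendor", "linked", "joined with purchase histories")
for record in ds.provenance:
    print(record.actor, record.operation, record.purpose, record.timestamp)

The design point of the sketch is the append-only trail: each recontextualisation adds a record rather than replacing the last, which is what makes a retrospective audit of a dataset's path possible.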
Forms of provenance accumulate from transaction to transaction, cascading along as a dataset ‘picks up’ the types of provenance that have led to its creation. This may involve multiple forms of overlapping provenance – methodological and epistemological, legal and illegal – which modulate different elements and apparatuses. Provenance, we argue, is an important methodological consideration for workers in the humanities and social sciences. Provenance provides a set of shared questions on which models of transparency, accountability, and trust may be established. It points us towards tactics that might help data-subjects understand privacy in a contextual manner (Nissenbaum) and even establish practices of obfuscation and “informational self-defence” against regimes of datafication (Brunton and Nissenbaum). Here provenance is not just a declaration of what the means and ends of data capture, sharing, linkage, and analysis are. We sketch the outlines of a provenance model in Table 2 below.

Table 2: Forms of Data Provenance
What? Metaphorical frame: The epistemological structure of a database determines the accuracy of subsequent decisions. Data must be consistent. Dark: What data is asked of a person beyond what is strictly needed for service delivery. Light: Data that is collected for a specific stated purpose with informed consent from the data-subject. Guiding questions: How does the decision about what to collect disrupt existing polities and communities? What demands for conformity does the database make of its subjects?
Where? Metaphorical frame: The contents of a database are important for making informed decisions. Data must be represented. Dark: The parameters of inclusion/exclusion that create unjust risks or costs to people because of their inclusion or exclusion in a dataset. Light: The parameters of inclusion or exclusion that afford individuals representation or acknowledgement by being included or excluded from a dataset. Guiding questions: How are populations recruited into a dataset? What divides exist that systematically exclude individuals?
Who? Metaphorical frame: Who has access to data, and how privacy is framed, is important for the security of data-subjects. Data access is political. Dark: Access to the data by parties not disclosed to the data-subject. Light: Who has collected the data and who has or will access it. Guiding questions: How is the data made available to those beyond the data subjects?
How? Metaphorical frame: Data is created with a purpose and is never neutral. Data is instrumental. Dark: How the data is used, to what ends, discursively, practically, instrumentally. Is it a private record, a source of value creation, the subject of extortion or blackmail? Light: How the data was intended to be used at the time that it was collected.
Why? Metaphorical frame: Data is created by people who are shaped by ideological factors. Data has potential. Dark: The political rationality that shapes data governance with regard to technological innovation. Light: The trade-offs that are made known to individuals when they contribute data into sociotechnical systems over which they have limited control.
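Read programmatically, Table 2 can be treated as an audit checklist: a data practice is ‘dark’ along whichever of the five questions it leaves undisclosed. The sketch below is again our own hypothetical illustration, not the authors' model.

PROVENANCE_QUESTIONS = ("what", "where", "who", "how", "why")

def dark_spots(disclosures: dict) -> list:
    # Return the Table 2 questions that a data practice leaves
    # unanswered: its undisclosed, 'dark' dimensions.
    return [q for q in PROVENANCE_QUESTIONS if not disclosures.get(q)]

# Usage: a partial disclosure leaves three questions in darkness.
print(dark_spots({
    "what": "email address and browsing history",
    "who": "collected by the platform; shared with an analytics vendor",
}))
# prints: ['where', 'how', 'why']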
Conclusion As an illuminating methodology, provenance offers a specific line of questioning for practices that take information through darkness and light. The emphasis that it places on a narrative for data assets themselves (asking what, when, who, how, and why) offers a mechanism for traceability, and it has potential for application across contexts and cases, allowing us to see data malpractice as something that can be productively generalised and understood as a series of ideologically driven technical events with social and political consequences, without being marred by perceptions of the exceptionality of individual, localised cases of data harm or data violence. References Brunton, Finn, and Helen Nissenbaum. "Political and Ethical Perspectives on Data Obfuscation." Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. Eds. Mireille Hildebrandt and Katja de Vries. New York: Routledge, 2013. 171-195. Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Data Provenance: Some Basic Issues." International Conference on Foundations of Software Technology and Theoretical Computer Science. Berlin: Springer, 2000. Davis, Jenny L. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge: MIT Press, 2020. D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge: MIT Press, 2020. Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge: Harvard UP, 2009. Kitchin, Rob. "Big Data, New Epistemologies and Paradigm Shifts." Big Data & Society 1.1 (2014). Lawrence, Matthew. "Emerging Technology: An Interview with Adam Greenfield. 'God Forbid That Anyone Stopped to Ask What Harm This Might Do to Us'." Institute for Public Policy Research, 13 Oct. 2017. <https://www.ippr.org/juncture-item/emerging-technology-an-interview-with-adam-greenfield-god-forbid-that-anyone-stopped-to-ask-what-harm-this-might-do-us>. Lupton, Deborah. "Vital Materialism and the Thing-Power of Lively Digital Data." Social Theory, Health and Education. Eds. Deana Leahy, Katie Fitzpatrick, and Jan Wright. London: Routledge, 2018. Nissenbaum, Helen F. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books, 2020. Warner, Michael. "Publics and Counterpublics." Public Culture 14.1 (2002): 49-90.
APA, Harvard, Vancouver, ISO, and other styles
45

Brabazon, Tara. "Freedom from Choice." M/C Journal 7, no. 6 (January 1, 2005). http://dx.doi.org/10.5204/mcj.2461.

Full text
Abstract:
On May 18, 2003, the Australian Minister for Education, Brendon Nelson, appeared on the Channel Nine Sunday programme. The Yoda of political journalism, Laurie Oakes, attacked him personally and professionally. He disclosed to viewers that the Minister for Education, Science and Training had suffered a false start in his education, enrolling in one semester of an economics degree that was never completed. The following year, he commenced a medical qualification and went on to become a practising doctor. He did not pay fees for any of his University courses. When reminded of these events, Dr Nelson became agitated and revealed information not included in the public presentation of that year's budget, including a 'cap' on HECS-funded places of five years for each student. He justified such a decision with the cliché that Australia's taxpayers do not want "professional students completing degree after degree." The Minister confirmed that the primary – and perhaps the only – task for university academics was to 'train' young people for the workforce. The fact that nearly 50% of students in some Australian Universities are over the age of twenty-five has not entered his vision. He wanted young people to complete a rapid degree and enter the workforce, to commence paying taxes and the debt or loan required to fund a full fee-paying place. Now – nearly two years after this interview and with the Howard government blessed with a new mandate – it is time to ask how this administration will order education and value teaching and learning. The curbing, during its last term in office, of the time available to complete undergraduate courses makes plain the Australian Liberal Government's stance on formal, publicly funded lifelong learning. The notion that a student/worker can attain all required competencies, skills, attributes, motivations and ambitions from a single degree is an assumption of the new funding model. It is also significant to note that while attention is placed on the changing sources of income for universities, there have also been major shifts in the pattern of expenditure within universities, focusing on branding, marketing, recruitment, 'regional' campuses and off-shore courses. Similarly, the short-term funding goals of university research agendas encourage projects required by industry, rather than socially inflected concerns. There is little inevitable about teaching, research and education in Australia, except that the Federal Government will not create a fully funded model for lifelong learning. The task for those of us involved in – and committed to – education in this environment is to probe the form and rationale for a (post) publicly funded University. This short paper for the 'order' issue of M/C explores learning and teaching within our current political and economic order. In particular, I place attention on the synergies with such an order via phrases like the knowledge economy and the creative industries. To move beyond the empty promises of just-in-time learning, on-the-job training, graduate attributes and generic skills, we must reorder our assumptions and ask difficult questions of those who frame the context in which education takes place. For the term of your natural life Learning is a big business. Whether discussing the University of the Third Age, personal development courses, self-help bestsellers or hard-edged vocational qualifications, definitions of learning – let alone education – are expanding.
Concurrent with this growth, governments are reducing centralized funding and promoting alternative revenue streams. The diversity of student interests – or, to use the language of the time, clients’ learning goals – is transforming higher education into more than the provision of undergraduate and postgraduate degrees. The expansion of the student body beyond the 18-25 age group and the desire to ‘service industry’ has reordered the form and purpose of formal education. The number of potential students has expanded extraordinarily. As Lee Bash realized: “Today, some estimates suggest that as many as 47 percent of all students enrolled in higher education are over 25 years old. In the future, as lifelong learning becomes more integrated into the fabric of our culture, the proportion of adult students is expected to increase. And while we may not yet realize it, the academy is already being transformed as a result.” (35)

Lifelong learning is the major phrase and trope that initiates and justifies these changes. Such expansive economic opportunities trigger the entrepreneurial directives within universities. If lifelong learning is taken seriously, then the goals, entry standards, curriculum, information management policies and assessments need to be challenged and changed. Attention must be placed on words and phrases like ‘access’ and ‘alternative entry.’ Even more consideration must be placed on ‘outcomes’ and ‘accountability.’ Lifelong learning is a catchphrase for a change in purpose and agenda. Courses are developed from a wide range of education providers so that citizens can function in, or at least survive, the agitation of the post-work world. Both neo-liberal and third way models of capitalism require the labeling and development of an aspirational class, a group who desires to move ‘above’ their current context. Such an ambiguous economic and social goal always involves more than the vocational education and training sector or universities, with the aim being to seamlessly slot education into a ‘lifestyle.’ The difficulties with this discourse are two-fold. Firstly, how effectively can these aspirational notions be applied and translated into a real family and a real workplace? Secondly, does this scheme increase the information divide between rich and poor?

There are many characteristics of an effective lifelong learner, including great personal motivation, self-esteem, confidence and intellectual curiosity. In a double-shifting, change-fatigued population, the enthusiasm for perpetual learning may be difficult to summon. With the casualization of the post-Fordist workplace, it is no surprise that policy makers and employers are placing the economic and personal responsibility for retraining on individual workers. Instead of funding a training scheme in the workplace, there has been a devolving of skill acquisition and personal development. Through the twentieth century, and particularly after 1945, education was the track to social mobility. The difficulty now – with degree inflation and the loss of stable, secure, long-term employment – is that new modes of exclusion and disempowerment are being perpetuated through the education system. Field recognized that “the new adult education has been embraced most enthusiastically by those who are already relatively well qualified.” (105) This is a significant realization. Motivation, meta-learning skills and curiosity are increasingly being rewarded when found in the already credentialed, empowered workforce.
Those already in work undertake lifelong learning. Adult education operates well for members of the middle class who are doing well and wish to do better. If success is individualized, then failure is also cast on the self, not the social system or policy. The disempowered are blamed for their own conditions and ‘failures.’ The concern, through the internationalization of the workforce, technological change and privatization of national assets, is that failure in formal education results in social exclusion and immobility. Besides being forced into classrooms, there are few options for those who do not wish to learn, in a learning society. Those who ‘choose’ not to be a part of the national project of individual improvement, increased market share, company competitiveness and international standards are not relevant to the economy. But there is a personal benefit – one that may have long-term political consequences – from being ‘outside’ society. Perhaps the best theorist of the excluded is sourced not from a University, but from the realm of fictional writing. Irvine Welsh, author of the landmark Trainspotting, has stated that “What we really need is freedom from choice … People who are in work have no time for anything else but work. They have no mental space to accommodate anything else but work. Whereas people who are outside the system will always find ways of amusing themselves. Even if they are materially disadvantaged they’ll still find ways of coping, getting by and making their own entertainment.” (145-6)

A blurring of work and learning, and work and leisure, may seem to create a borderless education, a learning framework uninhibited by curriculum, assessment or power structures. But lifelong learning aims to place as many (national) citizens as possible in ‘the system,’ striving for success or at least a pay increase which will facilitate the purchase of more consumer goods. Through any discussion of work-place training and vocationalism, it is important to remember those who choose not to choose life, who choose something else, who will not follow orders.

Everybody wants to work

The great imponderable for complex economic systems is how to manage fluctuations in labour and the market. The unstable relationship between need and supply necessitates flexibility in staffing solutions, and short-term supplementary labour options. When productivity and profit are the primary variables through which to judge successful management, then the alignments of education and employment are viewed and skewed through specific ideological imperatives. The library profession is an obvious occupation that has confronted these contradictions. It is ironic that the occupation that orders knowledge is experiencing a volatile and disordered workplace. In the past, it had been assumed that librarians hold a degree while technicians do not, and that technicians would not be asked to perform – unsupervised – the same duties as librarians. Obviously, such distinctions are increasingly redundant. Training packages, structured through competency-based training principles, have ensured technicians and librarians share knowledge systems which are taught through incremental stages. Mary Carroll recognized the primary questions raised through this change: “If it is now the case that these distinctions have disappeared do we need to continue to draw them between professional and para-professional education?
Does this mean that all sectors of the education community are in fact learning/teaching the same skills but at different levels so that no unique set of skills exist?” (122)

With education reduced to skills, thereby discrediting generalist degrees, the needs of industry have corroded the professional standards and stature of librarians. Certainly, the abilities of library technicians are finally being valued, but it is too convenient that one of the few professions dominated by women has suffered a demeaning of knowledge into competency. Lifelong learning, in this context, has collapsed high-level abilities in information management into bite-sized chunks of ‘skills.’ The ideology of lifelong learning – which is rarely discussed – is that it serves to devalue prior abilities and knowledges into an ever-expanding imperative for ‘new’ skills and software competencies. For example, ponder the consequences of Hitendra Pillay and Robert Elliott’s words: “The expectations inherent in new roles, confounded by uncertainty of the environment and the explosion of information technology, now challenge us to reconceptualise human cognition and develop education and training in a way that resonates with current knowledge and skills.” (95)

Neophiliac urges jut from their prose. The stress on ‘new roles’ and ‘uncertain environments,’ the ‘explosion of information technology,’ ‘challenges,’ ‘reconceptualisations’ and ‘current knowledge’ all affirm the present, the contemporary, and the now. Knowledge and expertise that have taken years to develop, nurture and apply are not validated through this educational brief. The demands of family, work, leisure, lifestyle, class and sexuality stretch the skin taut over economic and social contradictions. To ease these paradoxes, lifelong learning should stress pedagogy rather than applications, and context rather than content. Put another way, instead of stressing the link between (gee wizz) technological change and (inevitable) workplace restructuring and redundancies, emphasis needs to be placed on the relationship between professional development and verifiable technological outcomes, rather than spruiks and promises. Short-term vocationalism in educational policy speaks to the ordering of our public culture, requiring immediate profits and a tight dialogue between education and work. Furthering this logic, if education ‘creates’ employment, then it also ‘creates’ unemployment. Ironically, in an environment that focuses on the multiple identities and roles of citizens, students are reduced to one label – ‘future workers.’

Obviously, education has always been marinated in the political directives of the day. The industrial revolution introduced a range of technical complexities to the workforce. Fordism necessitated that a worker complete a task with precision and speed, requiring a high tolerance of stress and boredom. Now, more skills are ‘assumed’ by employers at the time that workplaces are off-loading their training expectations to the post-compulsory education sector. Therefore ‘lifelong learning’ is a political mask to empower the already empowered and create a low-level skill base for low-paid workers, with the promise of competency-based training. Such ideologies never need to be stated overtly. A celebration of ‘the new’ masks this task. Not surprisingly, therefore, lifelong learning has a rich new life in ordering creative industries strategies and frameworks.
Codifying the creative

The last twenty years have witnessed an expanding jurisdiction and justification of the market. As part of Tony Blair’s third way, the creative industries and the knowledge economy became catchwords to demonstrate that cultural concerns are not only economically viable but a necessity in the digital, post-Fordist, information age. Concerns with intellectual property rights, copyright, patents, and ownership of creative productions predominate in such a discourse. Described by Charles Leadbeater as Living on Thin Air, this new economy is “driven by new actors of production and sources of competitive advantage – innovation, design, branding, know-how – which are at work on all industries.” (10) Such market imperatives offer both challenges and opportunity for educationalists and students. Lifelong learning is a necessary accoutrement to the creative industries project. Learning cities and communities are the foundations for design, music, architecture and journalism. In British policy, and increasingly in Queensland, attention is placed on industry-based research funding to address this changing environment. In 2000, Stuart Cunningham and others listed the eight trends that order education, teaching and learning in this new environment.

The Changes to the Provision of Education:
1. Globalization
2. The arrival of new information and communication technologies
3. The development of a knowledge economy, shortening the time between the development of new ideas and their application
4. The formation of learning organizations
5. User-pays education
6. The distribution of knowledge through interactive communication technologies (ICT)
7. Increasing demand for education and training
8. Scarcity of an experienced and trained workforce

Source: S. Cunningham, Y. Ryan, L. Stedman, S. Tapsall, K. Bagdon, T. Flew and P. Coaldrake. The Business of Borderless Education. Canberra: DETYA Evaluation and Investigations Program [EIP], 2000.

This table reverberates with the current challenges confronting education. Mobilizing such changes requires the lubrication of lifelong learning tropes in university mission statements and the promotion of a learning culture, while also acknowledging the limited financial conditions in which the educational sector is placed. For university scholars facilitating the creative industries approach, education is “supplying high value-added inputs to other enterprises” (Hartley and Cunningham 5), rather than having value or purpose beyond the immediately and applicably economic. The assumption behind this table is that the areas of expansion in the workforce are the creative and service industries. In fact, the creative industries are the new service sector. This new economy makes specific demands of education.

Education in the ‘old economy’ and the ‘new economy’ (Old Economy → New Economy):
- Four-year degree → Forty-year degree
- Training as a cost → Training as a source of competitive advantage
- Learner mobility → Content mobility
- Distance education → Distributed learning
- Correspondence materials with video → Multimedia centre
- Fordist training (one size fits all) → Tailored programmes
- Geographically fixed institutions → Brand-named universities and celebrity professors
- Just-in-case → Just-in-time
- Isolated learners → Virtual learning communities

Source: T. Flew. “Educational Media in Transition: Broadcasting, Digital Media and Lifelong Learning in the Knowledge Economy.” International Journal of Instructional Media 29.1 (2002): 20.

There are myriad assumptions lurking in Flew’s fascinating table.
The imperative is short courses on the web, servicing the needs of industry. He described the product of this system as a “learner-earner.” (50) This ‘forty-year degree’ is based on lifelong learning ideologies. However, Flew’s ideas are undermined by the current government higher education agenda, through the capping – through time – of courses. The effect on the ‘learner-earner’ of having to earn more to privately fund a continuance of learning – to ensure that they keep on earning – needs to be addressed. There will be consequences for the housing market, family structures and leisure time. The costs of education will impact on other sectors of the economy and private lives. Also, there is little attention to the groups who are outside this taken-for-granted commitment to learning. Flew noted that “barriers to greater participation in education and training at all levels, which is a fundamental requirement of lifelong learning in the knowledge economy, arise in part out of the lack of provision of quality technology-mediated learning, and also from inequalities of access to ICTs, or the ‘digital divide.’” (51) In such a statement, there is a misreading of teaching and learning. Such confusion is fuelled by the untheorised gap between ‘student’ and ‘consumer.’ The notion that technology (which in this context too often means computer-mediated platforms) is a barrier to education does not explain why conventional distance education courses, utilizing paper, ink and postage, were also unable to welcome or encourage groups disengaged from formal learning. Flew and others do not confront the issue of motivation, or the reason why citizens choose to add or remove the label of ‘student’ from their bag of identity labels. The stress on technology as both a panacea and a problem for lifelong learning may justify theories of convergence and the integration of financial, retail, community, health and education provision into a services sector, but it does not explain why students desire to learn, beyond economic necessity and employer expectations.

Based on these assumptions of expanding creative industries and lifelong learning, the shape of education is warping. An ageing population requires educational expenditure to be reallocated from primary and secondary schooling and towards post-compulsory learning and training. This cost will also be privatized. When coupled with immigration flows, technological changes and alterations to market and labour structures, lifelong learning presents a profound and personal cost. An instrument for economic and social progress has been individualized, customized and privatized. The consequence of the ageing population in many nations, including Australia, is that there will be fewer young people in schools or employment. Such a shift will have consequences for the workplace and the taxation system. Similarly, those young workers who remain will be far more entrepreneurial and less loyal to their employers. Public education is now publicly-assisted education. Jane Jenson and Denis Saint-Martin realized the impact of this change: “The 1980s ideological shift in economic and social policy thinking towards policies and programmes inspired by neo-liberalism provoked serious social strains, especially income polarization and persistent poverty.
An increasing reliance on market forces and the family for generating life-chances, a discourse of ‘responsibility,’ an enthusiasm for off-loading to the voluntary sector and other altered visions of the welfare architecture inspired by neo-liberalism have prompted a reaction. There has been a wide-ranging conversation in the 1990s and the first years of the new century in policy communities in Europe as in Canada, among policy makers who fear the high political, social and economic costs of failing to tend to social cohesion.” (78)

There are dense social reorderings initiated by neo-liberalism, changing the notions of learning, teaching and education. There are yet-to-be-tracked costs to citizenship. The legacy of the 1980s and 1990s is that all organizations must behave like businesses. In such an environment, there are problems establishing social cohesion, let alone social justice. To stress the product – and not the process – of education contradicts the point of lifelong learning. Compliance and complicity replace critique.

(Post) learning

“The Cold War has ended. The great ideological battle between communism and Western liberal democracy is over. Most countries believe both in markets and in a necessary role for Government. There will be thunderous debates inside nations about the balance, but the struggle for world hegemony by political ideology is gone. What preoccupies decision-makers now is a different danger. It is extremism driven by fanaticism, personified either in terrorist groups or rogue states.” – Tony Blair (http://www.number-10.gov.uk/output/Page6535.asp)

Tony Blair, summoning his best Francis Fukuyama impersonation, signaled the triumph of liberal democracy over other political and economic systems. His third way is unrecognizable from the Labour party ideals of Clement Attlee. Probably his policies need to be. Yet in his second term, he is not focused on probing the specificities of the market-orientation of education, health and social welfare. Instead, decision makers are preoccupied with a war on terror. Such a conflict seemingly justifies large defense budgets, which must come at the expense of social programmes. There is no recognition by Prime Ministers Blair or Howard that ‘high-tech’ armory and warfare are generally impotent against the terrorist’s weaponry of cars, bodies and bombs. This obvious lesson is present for them to see. After the rapid and successful ‘shock and awe’ tactics of Iraq War II, terrorism was neither annihilated nor slowed by the Coalition’s victory. Instead, suicide bombers in Saudi Arabia, Morocco, Indonesia and Israel have snuck through defenses, requiring little more than a car and explosives. More Americans have been killed since the war ended than during the conflict. Wars are useful when establishing a political order. They sort out good and evil, the just and the unjust. Education policy will never provide the ‘big win’ or the visible success of toppling Saddam Hussein’s statue. The victories of retraining, literacy, competency and knowledge can never succeed on this scale. As Blair offered, “these are new times. New threats need new measures.” (http://www.number-10.gov.uk/output/Page6535.asp) These new measures include – by default – a user-pays education system. In such an environment, lifelong learning cannot succeed. It requires a dense financial commitment in the long term. A learning society requires a new sort of war, using ideas, not bullets.

References
Bash, Lee. “What Serving Adult Learners Can Teach Us: The Entrepreneurial Response.” Change January/February 2003: 32-7.
Blair, Tony. “Full Text of the Prime Minister’s Speech at the Lord Mayor’s Banquet.” November 12, 2002. http://www.number-10.gov.uk/output/Page6535.asp.
Carroll, Mary. “The Well-Worn Path.” The Australian Library Journal May 2002: 117-22.
Field, J. Lifelong Learning and the New Educational Order. Stoke on Trent: Trentham Books, 2000.
Flew, Terry. “Educational Media in Transition: Broadcasting, Digital Media and Lifelong Learning in the Knowledge Economy.” International Journal of Instructional Media 29.1 (2002): 47-60.
Hartley, John, and Stuart Cunningham. “Creative Industries – from Blue Poles to Fat Pipes.” Department of Education, Science and Training, Commonwealth of Australia, 2002.
Jenson, Jane, and Denis Saint-Martin. “New Routes to Social Cohesion? Citizenship and the Social Investment State.” Canadian Journal of Sociology 28.1 (2003): 77-99.
Leadbeater, Charles. Living on Thin Air. London: Viking, 1999.
Pillay, Hitendra, and Robert Elliott. “Distributed Learning: Understanding the Emerging Workplace Knowledge.” Journal of Interactive Learning Research 13.1-2 (2002): 93-107.
Welsh, Irvine, quoted in Redhead, Steve. “Post-Punk Junk.” Repetitive Beat Generation. Glasgow: Rebel Inc, 2000: 138-50.

Citation reference for this article

MLA Style: Brabazon, Tara. “Freedom from Choice: Who Pays for Customer Service in the Knowledge Economy?” M/C Journal 7.6 (2005). <http://journal.media-culture.org.au/0501/02-brabazon.php>.

APA Style: Brabazon, T. (Jan. 2005). “Freedom from Choice: Who Pays for Customer Service in the Knowledge Economy?” M/C Journal, 7(6). Retrieved from <http://journal.media-culture.org.au/0501/02-brabazon.php>.
APA, Harvard, Vancouver, ISO, and other styles
