Journal articles on the topic 'IBM Cognos Business Intelligence'

Consult the top 40 journal articles for your research on the topic 'IBM Cognos Business Intelligence.'

1

Hayman, I. "Review: Business Intelligence: The IBM Solution." Computer Bulletin 42, no. 1 (January 1, 2000): 30–31. http://dx.doi.org/10.1093/combul/42.1.30-c.

2

Sousa, Maria José, and Ivo Dias. "Business Intelligence for Human Capital Management." International Journal of Business Intelligence Research 11, no. 1 (January 2020): 38–49. http://dx.doi.org/10.4018/ijbir.2020010103.

Abstract:
This article presents the results of an exploratory study of the use of business intelligence (BI) tools to support decisions about human resources management in Portuguese organizations. The purpose of this article is to analyze the effective use of BI tools in integrating reports, analytics, dashboards, and metrics, and how this affects the decision-making process of human resource managers. The approach was quantitative, based on a survey of 43 human resource managers and technicians. Data were analyzed with correlation coefficients and regression analysis in IBM SPSS. A qualitative analysis based on a focus group was also applied to identify the impacts of business intelligence on the human resources strategies of Portuguese companies. The findings are that business intelligence is positively associated with, and significantly predicts, HRM decision-making. The research also examines how information gathered with BI tools from the human resources information system feeds the decisions of human resource managers and, in turn, the performance of organizations. The study also gives indications about practices and gaps, both in terms of human resources management and in processes related to BI tools, and points out the different factors that must work together to facilitate effective decision-making. The article is structured as follows: a literature review concerning the business intelligence concept and tools and the link between BI and human resources management, methodology, and the main findings and conclusions.
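The correlation-and-regression workflow this abstract describes (run in IBM SPSS in the original study) can be sketched in a few lines. The snippet below is a minimal illustration with pandas, SciPy and statsmodels; the file name and column names (`bi_use`, `hrm_decision_quality`) are assumptions, not the study's actual data.

```python
# Minimal sketch of the correlation + regression analysis described above,
# using Python instead of IBM SPSS. The CSV file and column names are
# hypothetical placeholders, not the study's data.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

survey = pd.read_csv("hr_bi_survey.csv")   # e.g. 43 responses with Likert-scale items

# Association between BI tool use and HRM decision-making quality
r, p = pearsonr(survey["bi_use"], survey["hrm_decision_quality"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Simple regression: does BI use predict HRM decision-making quality?
X = sm.add_constant(survey["bi_use"])
print(sm.OLS(survey["hrm_decision_quality"], X).fit().summary())
```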
3

Seyal, Afzaal H., Taha Afzaal, and Susan Chin T. Saun. "Assessing Emotional Intelligence and Organizational Citizenship Behavior among Executives: Examples from Bruneian SMEs." International Business Management 6, no. 4 (April 1, 2012): 476–86. http://dx.doi.org/10.3923/ibm.2012.476.486.

4

Chornous, Galyna O., and Viktoriya L. Gura. "Integration of Information Systems for Predictive Workforce Analytics: Models, Synergy, Security of Entrepreneurship." European Journal of Sustainable Development 9, no. 1 (February 1, 2020): 83. http://dx.doi.org/10.14207/ejsd.2020.v9n1p83.

Abstract:
The era of the information economy leads not only to redesigning the business models of organizations but also to rethinking the human resources paradigm to harness the power of state-of-the-art technology for Human Capital Management (HCM) optimization. Predictive analytics and computational intelligence will bring transformative change to HCM. This paper deals with issues of HCM optimization based on models of predictive workforce analytics (WFA) and Business Intelligence (BI). The main trends in the implementation of predictive WFA in the world and in Ukraine, the need to protect business data for the security of entrepreneurship, and the tasks of predictive analysis in the context of proactive HCM were examined. Some models of effective integration of information systems for predictive WFA were proposed, and their advantages and disadvantages were analyzed. These models combine ERP, HCM, BI, predictive analytics, and security systems. As an example, the integration of an HCM system, an analytics platform (IBM SPSS Modeler), a BI system (IBM Planning Analytics), and a security platform (IBM QRadar Security Intelligence Platform) for predicting employee attrition was shown. This integration provides a ‘prediction – planning – performance review – causal analysis’ cycle to support protected, data-driven decision making in proactive HCM. The results of the research support the effective management of the full spectrum of risks associated with the collection, storage and use of data. Keywords: Workforce Analytics (WFA), Human Capital Management (HCM), Predictive Analytics, Proactive Management, BI, Information Systems (IS), Integration, Security of Entrepreneurship
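The prediction stage of that cycle can be illustrated without the IBM stack. The snippet below is a minimal stand-in that assumes a hypothetical HR extract (`hr_extract.csv` with a 0/1 `attrition` column) and substitutes scikit-learn for IBM SPSS Modeler; it is not the authors' pipeline.

```python
# Minimal stand-in for the "prediction" step of the WFA cycle described above.
# scikit-learn substitutes for IBM SPSS Modeler; the HR extract is hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

hr = pd.read_csv("hr_extract.csv")                    # assumed: one row per employee
y = hr["attrition"]                                   # assumed 0/1 label
X = pd.get_dummies(hr.drop(columns=["attrition"]))    # one-hot encode categoricals

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted attrition risk would feed the planning and performance-review stages
print(classification_report(y_test, model.predict(X_test)))
```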
5

Fathi Aldiabat, Bassam. "The Impact of Emotional Intelligence in the Leadership Styles from the Employees Point View in Jordanian Banks." International Business Management 13, no. 1 (October 20, 2019): 14–20. http://dx.doi.org/10.36478/ibm.2019.14.20.

6

Hart, David. "Red, White, and “Big Blue”: IBM and the Business-Government Interface in the United States, 1956–2000." Enterprise & Society 8, no. 1 (March 2007): 1–34. http://dx.doi.org/10.1017/s1467222700008764.

Abstract:
This article describes the evolution of IBM's effort to manage its relationships with the U.S. government from the time that Thomas Watson, Jr. became CEO. While the Watson family controlled the firm, the family members served as the main bridges between IBM and the government. This personalized approach began to give way in the 1960s, as the intensity and scope of pressure from the firm's political environment grew beyond the capability of any individual to handle. During the 1970s and 1980s, IBM constructed a managerial hierarchy, with a newly opened Washington office at its center, which could gather more detailed intelligence and execute more sophisticated political strategies. The firm's crisis in the early 1990s provoked a second major restructuring of the interface, as IBM became more of a Washington “special interest.” Yet, some traces of the Watson imprint remained, even in the Gerstner era. Tracing IBM's evolution helps us to understand better the broader interactions between U.S. firms and their environments in this period. These interactions entailed adaptation by firms to environmental change but also efforts by firms to exert control over external forces, including public policy.
7

Ramalu, Subramaniam Sri, Faridahwati Mohd Shamsudin, and Chandrakantan Subramania. "The Mediating Effect of Cultural Intelligence on the Relationship Between Openness Personality and Job Performance among Expatriates on International Assignments." International Business Management 6, no. 5 (May 1, 2012): 601–10. http://dx.doi.org/10.3923/ibm.2012.601.610.

8

Silicon Graphics, Inc. "IBM DB2 Universal Database and SGI Altix: Identity Resolution and Business Intelligence for an Uncertain World." Computational Methods in Science and Technology Special Issue, no. 1 (2006): 93–99. http://dx.doi.org/10.12921/cmst.2006.si.01.93-99.

9

Singh, N. P., and Mohammad Jaffer Nayeem M. "Critical Analysis of Expansion Strategies of SAP, IBM, Oracle and Microsoft in the area of Business Intelligence." International Journal of Strategic Information Technology and Applications 2, no. 2 (April 2011): 23–43. http://dx.doi.org/10.4018/jsita.2011040103.

Abstract:
The Business Intelligence (BI) industry has emerged as a high-growth area. Recognizing this fact, both pure-play vendors and the big application and infrastructure vendors are trying different strategies to increase their market shares. The paper starts with a discussion of the BI industry since 2002 to understand its underlying dynamics. This is followed by an analysis of the BI growth strategies of the big four application and infrastructure vendors—SAP, IBM, Oracle, and Microsoft—to capture a bigger share of the market. The paper also analyzes customer reactions to the new BI paradigm and the impact of the acquisitions on the BI market, and offers concluding remarks on the acquisition of pure-play vendors by the big four.
10

Gupta, Arvind, Biplav Srivastava, Daby Sow, Ching-Hua Chen, and Oshani Seneviratne. "Reflections on the Ingredients for Success in AI Research: An Interview with Arvind Gupta." AI Magazine 40, no. 4 (November 20, 2019): 24–27. http://dx.doi.org/10.1609/aimag.v40i4.5187.

Abstract:
This article contains the observations of Arvind Gupta, who has over 22 years of experience in leadership, policy, and entrepreneurial roles, in both the Silicon Valley and India. Gupta was recently interviewed about the factors that could influence successful artificial intelligence research. At the time of the interview, Gupta was the chief executive officer of MyGov, India. During our interview, he shared with the editorial team his perspectives on investing in artificial intelligence innovations for business and society, in India. The interviewers included members of the special track editorial team from IBM (Biplav Srivastava, Daby Sow, and Ching-Hua Chen) and Rensselaer Polytechnic Institute (Oshani Seneviratne).
11

Haake, David. "Integrated operations program with Statoil." APPEA Journal 51, no. 2 (2011): 682. http://dx.doi.org/10.1071/aj10062.

Abstract:
Several years ago, IBM established its Smarter Planet vision: to bring a new level of intelligence to how the world works—to how every person, business, organisation, government, natural system, and man-made system interacts. Mr Haake will present a case study from our collaboration with Statoil on integrated operations. Statoil defines integrated operations (IO) as: collaboration across disciplines, companies, organisational and geographical boundaries—made possible by real-time data and new work processes—to reach safer and better decisions faster. To help identify the methods, technologies and work processes necessary to integrate its operations, Statoil appointed a research and development consortium consisting of ABB, IBM, SKF and Aker Kvaerner. The Statoil TAIL IO project was aimed at improving operations at fields approaching the end of their life-spans—the stage where the production rate is declining, the facilities are aging, and the cost of operation is high. The result is a set of integrated operations solutions based on industry standards that offer great promise. “Our efforts to bring more integration and collaboration to our production processes are critical to the future of the offshore industry. IBM has shown a strong commitment to helping us achieve this goal.”—Adolfo Henriquez, head of Integrated Operations, Statoil. IBM Research is the world’s largest private research institution, with an annual research and development budget of nearly $6 billion. IBM is also an active member and participant in the development and leadership of multiple petroleum industry standards bodies: Mimosa and Open Operations & Maintenance; Integrated Operations of the High North (IOHN); and Energistics, coordinating WITSML and PRODML.
12

Potapinski, Russell. "Woodside Energy Ltd: pioneer in cognitive computing, artificial intelligence and robotics." APPEA Journal 57, no. 2 (2017): 523. http://dx.doi.org/10.1071/aj16142.

Abstract:
Cognitive computing is a new disruptive technology with the potential to reshape the oil and gas industry across the entire value chain. For Woodside Energy Ltd (Woodside), embracing this technology is an opportunity to save time, drive efficiency and reduce costs. In 2015, Woodside collaborated with IBM and deployed a cognitive computing system (IBM’s Watson) into its business. The system focuses on capturing the vast proprietary database of knowledge on Woodside’s major capital projects. Today, the Watson proof-of-concept has been successfully deployed by the science function into the business and is now under the care of the projects function. Moreover, it is undergoing continuous advances through further machine learning, additional ingestion of documentation and features linking the cognitive computer system with existing subject matter experts. Given the success of the first pilot program, Woodside is continuing to rapidly leverage cognitive technologies in other areas of the business. In mid-2016, Woodside deployed Watson for drilling events using IBM’s Watson Explorer – Advanced Edition. This program identifies and classifies a wide variety of geological drilling events, allowing Woodside’s geoscience team to provide more timely and accurate assessment of potential risks for well design. Woodside continues to develop several other business solutions using these platforms in areas as diverse as continuous improvement, business management, maintenance campaigns, legal advice, general management and robotics. This presentation shares Woodside’s lessons and insights derived from its journey across multiple forms of cognitive technology and provides insights as to the state-of-the-art and adaptation of these systems to achieve specific goals.
13

Berman, Saul, and Anthony Marshall. "The next digital transformation: from an individual-centered to an everyone-to-everyone economy." Strategy & Leadership 42, no. 5 (September 9, 2014): 9–17. http://dx.doi.org/10.1108/sl-07-2014-0048.

Abstract:
Purpose – Based on extensive interviews and analysis, the article aims to explain how the next digital transformation, which is already underway, will result in a paradigm shift from customer-centricity to an everyone-to-everyone (E2E) economy. Design/methodology/approach – For the 2013 IBM Digital Reinvention Study that this article is based on, IBM researchers surveyed approximately 1,100 business and government executives and 5,000 consumers across 15 countries. Thirty leading futurists were also interviewed. Of those interviewed for the executive study, 42 percent are C-level executives, with Chief Executive Officers comprising 10 percent of that group. More than three-fourths of the consumer study participants are university graduates, with 68 percent between the ages of 25 and 54. Findings – E2E is characterized by hyper-connectedness and collaboration of consumers and organizations across the gamut of value chain activities: co-design, co-creation, co-production, co-marketing, co-distribution and co-funding. Prospering in an E2E setting demands disruptive innovation that challenges established norms and blurs organizational boundaries. Practical implications – In the future, organizations will operate in ecosystems of converging products, services and industries. Originality/value – The four key dimensions of the new E2E business models are: connectivity, interactivity, awareness and intelligence.
14

Auth, Gunnar, Oliver Jokisch, and Christian Dürk. "Revisiting automated project management in the digital age – a survey of AI approaches." Online Journal of Applied Knowledge Management 7, no. 1 (May 21, 2019): 27–39. http://dx.doi.org/10.36965/ojakm.2019.7(1)27-39.

Abstract:
In this decade, remarkable progress has been made in the field of artificial intelligence (AI). Inspired by well-known services of cognitive assistance systems such as IBM Watson, Apple's Siri or Google Duplex, AI concepts and algorithms are widely discussed regarding their automation potentials in business, politics and society. At first glance, project management (PM) seems to be less suitable for automation due to the inherent uniqueness of projects by definition. However, AI is also creating new application possibilities in the PM area, which will be explored in this contribution by involving an extensive literature review as well as real-world examples. The objective of this article is to provide a current overview of AI approaches and available tools that can be used for automating tasks in business project management.
15

Inupakutika, D., M. Nadim, G. R. Gunnam, S. Kaghyan, D. Akopian, P. Chalela, and A. G. Ramirez. "Integration of NLP and Speech-to-text Applications with Chatbots." Electronic Imaging 2021, no. 3 (June 18, 2021): 35–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.3.mobmu-035.

Abstract:
With evolving artificial intelligence technology, chatbots are becoming smarter and faster. Chatbots are typically available round the clock, providing continuous support and services. A chatbot, or conversational agent, is a program or software that can communicate with humans using natural language. The challenge of developing an intelligent chatbot has existed since the onset of artificial intelligence. The functionality of chatbots can range from short business-oriented conversations to longer healthcare-intervention conversations. However, the primary role that chatbots have to play is understanding human utterances in order to respond appropriately. To that end, there is an increasing emergence of Natural Language Understanding (NLU) engines from popular cloud service providers. The NLU services identify entities and intents from the user utterances provided as input. Thus, in order to integrate such understanding into a chatbot, this paper presents a study of the existing major NLU platforms. We then present a case-study chatbot integrated with Google DialogFlow and IBM Watson NLU services and discuss their intent recognition performance.
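The intent-recognition task the paper evaluates can be illustrated without any vendor API. The sketch below is a generic local stand-in (TF-IDF plus a linear classifier), not the DialogFlow or Watson NLU interface; the utterances and intent labels are invented.

```python
# Generic local stand-in for the intent-recognition task evaluated above.
# This is NOT the DialogFlow or Watson NLU API; utterances and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "book me an appointment", "schedule a visit for tomorrow",
    "cancel my appointment", "please cancel the booking",
    "what are your opening hours", "when are you open",
]
intents = ["book", "book", "cancel", "cancel", "hours", "hours"]

intent_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_clf.fit(utterances, intents)

print(intent_clf.predict(["are you open on Sunday?"]))   # expected: ['hours']
```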
16

Zurita-Ortega, Félix, Eva María Olmedo-Moreno, Ramón Chacón-Cuberos, Jorge Expósito López, and Asunción Martínez-Martínez. "Relationship between Leadership and Emotional Intelligence in Teachers in Universities and Other Educational Centres: A Structural Equation Model." International Journal of Environmental Research and Public Health 17, no. 1 (December 31, 2019): 293. http://dx.doi.org/10.3390/ijerph17010293.

Abstract:
This study uses an explanatory model of the dimensions of leadership and emotional intelligence according to the methods used in particular teaching environments (universities and other educational institutions). The effect of different kinds of leadership on emotional intelligence dimensions is also established using an explanatory model. A total of 954 teachers participated in this cross-sectional study, teaching in 137 different schools/universities. The instruments used for the data collection were the Multifactor Leadership Questionnaire (MLQ-5) and the Trait Meta Mood Scale (TMMS-24). Data analysis was performed with the software IBM AMOS 23.0. (International Business Machines Corporation, Armonk, NY, USA) using multi-group analysis and structural equations. Results showed that the structural equation model had a good fit. Transformational leadership depends mainly on intellectual stimulation in university teachers, whereas intrinsic motivation is more relevant at the lower educational levels. In relation to transactional leadership, contingency reward has a greater regression weight in non-university education, whereas passive leadership is governed more by passive exception in university teachers. There was a positive and direct relationship between levels of emotional intelligence and transformational leadership in non-university teachers, which reveals the need for effective understanding and management of both one’s own and students’ emotions in order to act effectively as a leader. Transactional leadership was negatively related to some emotional intelligence dimensions, given the relevance of obtaining power in this dimension.
17

Cheng, Cheng, Kevin Hayes, Kristy Lee, Jill Locascio, and Colleen Lougen. "Big picture in statistical frame – a statistical analysis and data visualization project of price change for electronic resources in academic libraries." Library Hi Tech News 35, no. 6 (August 6, 2018): 12–16. http://dx.doi.org/10.1108/lhtn-09-2017-0071.

Abstract:
Purpose The purposes of this paper are as follows: first, analyze and visualize the price-changing pattern of common electronic resources; second, provide predictions for future price changes at the vendor level; third, discover any potential causes of such price changes; and fourth, assess the skills and techniques used for statistical analysis and data visualization. Design/methodology/approach Statistical analysis and data visualization of the library’s expenditure data were conducted using business intelligence tools, in this case Microsoft Excel and IBM SPSS. Findings This study reports the price changes of electronic resources over the past few years, as well as predictions through 2018. Originality/value Overall, this research combines statistical analysis and data visualization to unveil current price-changing trends of e-resources, provides price predictions for the near future, and offers a unique and valuable reference for future evidence-based acquisition decisions.
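The trend-fit-and-forecast step described here (done in Excel and IBM SPSS in the study) reduces to fitting a line to yearly prices and extrapolating. A minimal sketch with numpy, using made-up figures rather than the library's expenditure data:

```python
# Sketch of the trend fit / 2018 forecast idea, with made-up yearly prices.
import numpy as np

years  = np.array([2012, 2013, 2014, 2015, 2016, 2017])
prices = np.array([1000.0, 1055.0, 1110.0, 1180.0, 1240.0, 1310.0])  # hypothetical package cost

slope, intercept = np.polyfit(years, prices, deg=1)   # linear trend
forecast_2018 = slope * 2018 + intercept
print(f"average annual increase: {slope:.1f}, predicted 2018 price: {forecast_2018:.0f}")
```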
18

Matyushok, Vladimir M., Vera A. Krasavina, and Sergey V. Matyushok. "Global artificial intelligence systems and technology market: formation and development trends." RUDN Journal of Economics 28, no. 3 (December 15, 2020): 505–21. http://dx.doi.org/10.22363/2313-2329-2020-28-3-505-521.

Abstract:
Every day more and more companies rely on artificial intelligence, from small startups to large corporations: not only the IT giants Google, Microsoft, Facebook and IBM, but even those seemingly far from this topic; for example, General Motors and Boeing created a joint laboratory for AI research. It has become obvious that AI technology is the real mainstream of our time. The article examines the global market for artificial intelligence systems and technologies. The authors described the peculiarities of the formation of this market and the main trends and segments in its development. The goal of the research is to identify the dynamics, features and trends of the global market for artificial intelligence systems and technologies. The methodology of system analysis, the dialectical method of scientific cognition, and methods of historical, logical and comparative analysis are used. The concept of artificial intelligence has been systematized, and the dynamics of the global market for artificial intelligence systems and technologies have been revealed, including in the regional context. A relationship has been detected between these dynamics and the sharp jump in the performance of information-processing algorithms, made possible by fast GPU-based computing, avalanche-like data growth and the emergence of almost unlimited possibilities for storage and access to technology. It is shown that the global market for artificial intelligence technologies is in a phase of inflated expectations, with a rather high level of risk for investors. The main trends and segments in the development of the global market for artificial intelligence systems and technologies have been identified. These include deep learning technologies; the convergence of AI technologies with other technologies such as analytics, ERP, the Internet of Things, blockchain and even quantum computing, which has the greatest impact; the development of cognitive intelligence systems; and the creation of a cognitive computer. It is shown that business leaders consider AI fundamental and absolutely necessary for the development of future business opportunities. It has been proven that the rapid development of AI systems and technologies is not just another technological innovation but the technological platform of the Fourth Industrial Revolution, which is associated with hopes of accelerating the growth of the world economy and increasing the competitiveness of countries and companies.
19

Abdullahi, Ahmed Zakaria, Ebenezer Bugri Anarfo, and Hod Anyigba. "The impact of leadership style on organizational citizenship behavior: does leaders' emotional intelligence play a moderating role?" Journal of Management Development 39, no. 9/10 (November 24, 2020): 963–87. http://dx.doi.org/10.1108/jmd-01-2020-0012.

Abstract:
Purpose – The study investigates the effect of autocratic, democratic and transformational leadership styles on employees' organizational citizenship behavior (OCB). The study further examines the moderating role of leaders' emotional intelligence between leadership styles and OCB. Design/methodology/approach – Questionnaires were used to collect data from 618 small and medium-sized enterprises' (SMEs) employees in Ghana. For this study, both simple random and convenient sampling were adopted in selecting respondents. Regression was used to test the hypotheses in the research model using IBM–Statistical Package for the Social Sciences (SPSS). Findings – The results show that democratic and transformational leadership styles both positively predicted the OCB of SME employees, although transformational leadership has a more significant influence. On the contrary, autocratic leadership style was found to have an insignificant relationship with OCB of SME employees when the interactive effect of the various leadership styles and emotional intelligence were introduced into the model. The results also show that whereas leaders' emotional intelligence positively moderate the relationship between autocratic leadership style and OCB, the relationships between democratic leadership style and OCB and between transformational leadership style and OCB are not significantly moderated by leaders' emotional intelligence. Research limitations/implications – An examination of other prominent leadership styles (for example, the transactional leadership style and the laissez faire leadership style) could be key areas for future research as it is a potential limitation of this study. Similarly, the use of a Western leadership instrument could also be a potential limitation in the Ghanaian context, although these instruments and scales may be applicable. Future studies could also consider a longitudinal approach to give a more holistic picture of the effect of the leadership styles on OCB. Practical implications – In general, the findings of the study support the idea that the autocratic leadership style affects SME employees' OCB both directly and indirectly through leaders' emotional intelligence. This study recommends that leaders of SMEs should focus on leadership styles that combine both result-oriented and people-centric behaviors to encourage SMEs' employees to engage in OCB. Originality/value – This study provides firsthand information on the impact of autocratic leadership style, democratic leadership style and transformational leadership style on an employee's OCB from the Ghanaian SME perspective.
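The moderation effect summarised in the findings (leaders' emotional intelligence moderating the leadership-style to OCB relationship) is, in regression terms, an interaction term. A minimal sketch with statsmodels standing in for IBM SPSS; the data file and column names are hypothetical.

```python
# Moderation as an interaction term, sketched with statsmodels instead of SPSS.
# The survey file and column names (ocb, autocratic, leader_ei) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sme_survey.csv")

# A significant 'autocratic:leader_ei' coefficient would indicate moderation.
model = smf.ols("ocb ~ autocratic * leader_ei", data=df).fit()
print(model.summary())
```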
20

Tsai, Cheng-Hung, Tsun Ku, and Wu-Fan Chien. "Object Architected Design and Efficient Dynamic Adjustment Mechanism of Distributed Web Crawlers." International Journal of Interdisciplinary Telecommunications and Networking 7, no. 1 (January 2015): 57–71. http://dx.doi.org/10.4018/ijitn.2015010105.

Abstract:
As global socialnomics rises, big data confronts enterprises with a tremendous tide of data at any time. How to efficiently process and analyze these unstructured data and dig useful information from them has become an issue for enterprises at every level to face and settle. Gartner conducted a survey (Gartner CIO Agenda) of over 2,300 CIOs worldwide and found that business intelligence based on big data has been the primary issue (IBM, 2013). Hence, understanding the above developing trend of social media, this research builds on the authors' previously proposed paper, 'Design and Implementation of a Web Crawlers Based in Social Networks', and presents an improved architecture of web crawler. This new architecture adds the concept of object structure to the design and implementation of the whole system. The authors also investigate the improved object structure, which brings convenience to system maintenance.
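As a point of reference for the fetch-parse-enqueue cycle that such a distributed crawler parallelises, here is a minimal single-process sketch using requests and BeautifulSoup; it is an illustration only, not the authors' object-structured system.

```python
# Minimal single-process illustration of the fetch-parse-enqueue cycle that a
# distributed crawler parallelises. Not the authors' object-structured system.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed, max_pages=20):
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue                      # skip unreachable pages
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(url, link["href"])
            if target.startswith("http") and target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(len(crawl("https://example.com")))
```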
21

De Pauw, Wim, and Henrique Andrade. "Visualizing Large-Scale Streaming Applications." Information Visualization 8, no. 2 (January 22, 2009): 87–106. http://dx.doi.org/10.1057/ivs.2009.5.

Abstract:
Stream processing is a new and important computing paradigm. Innovative streaming applications are being developed in areas ranging from scientific applications (for example, environment monitoring), to business intelligence (for example, fraud detection and trend analysis), to financial markets (for example, algorithmic trading systems). In this paper we describe Streamsight, a new visualization tool built to examine, monitor and help understand the dynamic behavior of streaming applications. Streamsight can handle the complex, distributed and large-scale nature of stream processing applications by using hierarchical graphs, multi-perspective visualizations, and de-cluttering strategies. To address the dynamic and adaptive nature of these applications, Streamsight also provides real-time visualization as well as the capability to record and replay. All these features are used for debugging, for performance optimization, and for management of resources, including capacity planning. More than 100 developers, both inside and outside IBM, have been using Streamsight to help design and implement large-scale stream processing applications.
22

Anagnoste, Sorin. "Robotic Automation Process – The operating system for the digital enterprise." Proceedings of the International Conference on Business Excellence 12, no. 1 (May 1, 2018): 54–69. http://dx.doi.org/10.2478/picbe-2018-0007.

Abstract:
Robotic Process Automation (RPA) is entering a “maturity market”. The main vendors have surpassed USD 1 billion in valuation, and the research they are launching on the market these days will again radically change the business landscape. It can already be seen what is coming next to RPA: intelligent optical character recognition (IOCR), chatbots, machine learning, big data analytics, cognitive platforms, anomaly detection, pattern analysis, voice recognition, data classification and many more. As a result, the top vendors have developed partnerships with the leading artificial intelligence providers, such as IBM Watson, Microsoft Artificial Intelligence, Microsoft Cognitive Services, blockchain, Google, etc. On the business side, the consulting companies implementing RPA solutions are moving from developing Proof-of-Concepts (POCs) and pilots to helping clients with RPA global roll-outs and developing Centres of Excellence (CoE). The experience gathered so far by the author on this kind of project is therefore also tackled in this paper. In this article we also present some data related to automation for different business areas (e.g., Accounts Payable, Accounts Receivable) and how an assessment can be done correctly in order to decide whether a process can be automated and, if so, to what extent (i.e., percentage). Moreover, through the case studies we show (1) how RPA is now integrated with Artificial Intelligence and Cloud, (2) how it can be scaled in order to face hypes, (3) how it can interpret data and (4) what savings these technologies can bring to organizations. All the aforementioned services have made Robotic Process Automation a much more powerful tool than a year ago, when the author did the previous research. A process that was previously not recommended for automation, or was only partially automated, can now be fully automated with more advantages, such as money, non-FTE savings and fulfillment time.
23

Santoso, Halim Budi, Priyanka Ayu Sonia Putri, and Budi Sutedjo Dharma Oetomo. "Implementation of Sales Executive Dashboard for A Multistore Company in Yogyakarta." International Journal of New Media Technology 4, no. 1 (June 14, 2017): 59–68. http://dx.doi.org/10.31937/ijnmt.v4i1.540.

Abstract:
Information Technology is a strategic part of enterprise planning and can help the enterprise determine its strategy. Through data from the past, the company can learn and support strategic decisions. A multistore company in Yogyakarta has more than five stores. The problem is generating real-time sales reporting: the sales manager and owner do not have access to real-time sales conditions. To ease management analysis and reporting of sales conditions, a dimensional model of the sales data needs to be built. This dimensional model helps produce executive reports from the dimensions defined in the data warehouse. Sales data pass through Extract, Transform, and Load (ETL) processes to prepare the data warehouse; this preprocessing takes place before the dimensional model is built. In this research, multi-dimensional modelling was applied to data from three stores covering 1 February 2014 to 31 January 2015. Implementing the sales executive dashboard helps monitor and analyze sales conditions. The dashboard shows graphics that let users, especially the sales manager and owner, see current and updated sales conditions along the dimensions of time, outlet/store, and product. Reports give detailed information, and the multidimensional model helps analyze data from different perspectives. Index Terms—Dashboard, Multidimensional Model, ETL, Executive Reporting
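The ETL and dimensional-modelling step described here can be sketched with pandas: extract raw sales rows, derive the time dimension, and aggregate along the time, outlet and product dimensions that feed the dashboard. File and column names below are illustrative, not the company's schema.

```python
# Sketch of the extract-transform-load step behind the sales dashboard above.
# File and column names are illustrative, not the company's actual schema.
import pandas as pd

# Extract: raw transactions (outlet, product, qty, amount, sold_at)
sales = pd.read_csv("raw_sales.csv", parse_dates=["sold_at"])

# Transform: derive the time dimension used for slicing
sales["month"] = sales["sold_at"].dt.to_period("M").astype(str)

# Load: an aggregated fact table along the time / outlet / product dimensions
fact = (sales.groupby(["month", "outlet", "product"], as_index=False)
             .agg(units=("qty", "sum"), revenue=("amount", "sum")))
fact.to_csv("fact_sales.csv", index=False)   # consumed by the dashboard layer
print(fact.head())
```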
24

Jurásek, Miroslav, and Tomislav Potocký. "How to Improve Communication within an Organization? The Relationship between Cultural Intelligence and Language Competence." Journal of Intercultural Management 12, no. 2 (June 1, 2020): 53–81. http://dx.doi.org/10.2478/joim-2020-0038.

Abstract:
Objective: This article deals with the effective functioning of an organization in the international context. It focuses on two key aspects of communication in this respect: cultural intelligence (CQ), the capacity to operate successfully in a multicultural setting, and the quality of internal communication; it is investigated whether CQ (and its metacognitive, cognitive, motivational and behavioural components) is related to the number of foreign languages a person (or an employee) knows or rather to their language proficiency. Methodology: A sample of 132 undergraduate students of the English and Czech study programs at one private business university in the Czech Republic was used. The Spearman correlation coefficient, the chi-square test for independence and the one-way ANOVA test (all conducted in IBM SPSS Statistics 21) are calculated in the paper. Findings: CQ depends on the quality (the level of proficiency) rather than the quantity (the number) of foreign language skills. This conclusion applies regardless of gender: our data did not confirm that language skills were gender-dependent. Value Added: Recently the very fashionable cultural intelligence (CQ) construct has been explored in relation to a variety of variables and outputs. Nevertheless, insufficient attention has so far been given to the relationship between cultural intelligence and language competence; moreover, the research has brought contradictory results up to now. This study fills that knowledge gap. Recommendations: It is shown that, in terms of effective functioning in a culturally unknown environment and with restricted time to learn foreign languages, it is preferable to continuously develop one's skills in a lingua franca rather than to pursue parallel, more superficial studies of several languages.
25

Kikuchi, Satoru, Kota Kadama, and Shintaro Sengoku. "Characteristics and Classification of Technology Sector Companies in Digital Health for Diabetes." Sustainability 13, no. 9 (April 26, 2021): 4839. http://dx.doi.org/10.3390/su13094839.

Abstract:
In recent years, technological progress in smart devices and artificial intelligence has also led to advancements in digital health. Digital health tools are especially prevalent in diabetes treatment and improving lifestyle. In digital health’s innovation ecosystem, new alliance networks are formed not only by medical device companies and pharmaceutical companies but also by information and communications technology (ICT) companies and start-ups. Therefore, while focusing on digital health for diabetes, this study explored the characteristics of companies with high network centralities. Our analysis of the changes in degree, betweenness, and eigenvector centralities of the sample companies from 2011 to 2020 found drastic changes in the company rankings of those with high network centrality during this period. Accordingly, the following eight companies were identified and investigated as the top-ranking technology sector companies: IBM Watson Health, Glooko, DarioHealth, Welldoc, OneDrop, Fitbit, Voluntis, and Noom. Lastly, we characterized these cases into three business models: (i) intermediary model, (ii) substitute model, and (iii) direct-to-consumer model, and we analyzed their customer value.
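The centrality measures used to rank companies in the alliance network (degree, betweenness and eigenvector centrality) can be computed with networkx. The toy graph below uses invented nodes and edges purely to show the calls; only the measures match the study.

```python
# The three centrality measures used in the study above, computed with networkx
# on a toy alliance graph. Nodes and edges are invented for illustration only.
import networkx as nx

alliances = nx.Graph([
    ("TechCo", "DeviceMaker"), ("TechCo", "PharmaCo"),
    ("DeviceMaker", "StartupA"), ("StartupA", "PharmaCo"),
    ("StartupA", "StartupB"),
])

for name, scores in [("degree", nx.degree_centrality(alliances)),
                     ("betweenness", nx.betweenness_centrality(alliances)),
                     ("eigenvector", nx.eigenvector_centrality(alliances, max_iter=1000))]:
    top = max(scores, key=scores.get)
    print(f"{name:12s} most central: {top} ({scores[top]:.2f})")
```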
26

Wamba-Taguimdje, Serge-Lopez, Samuel Fosso Wamba, Jean Robert Kala Kamdjoug, and Chris Emmanuel Tchatchouang Wanko. "Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects." Business Process Management Journal 26, no. 7 (May 12, 2020): 1893–924. http://dx.doi.org/10.1108/bpmj-10-2019-0411.

Abstract:
Purpose – The main purpose of our study is to analyze the influence of Artificial Intelligence (AI) on firm performance, notably by building on the business value of AI-based transformation projects. This study was conducted using a four-step sequential approach: (1) analysis of AI and AI concepts/technologies; (2) in-depth exploration of case studies from a great number of industrial sectors; (3) data collection from the databases (websites) of AI-based solution providers; and (4) a review of AI literature to identify their impact on the performance of organizations while highlighting the business value of AI-enabled projects transformation within organizations. Design/methodology/approach – This study has called on the theory of IT capabilities to seize the influence of AI business value on firm performance (at the organizational and process levels). The research process (responding to the research question, making discussions, interpretations and comparisons, and formulating recommendations) was based on a review of 500 case studies from IBM, AWS, Cloudera, Nvidia, Conversica, Universal Robots websites, etc. Studying the influence of AI on the performance of organizations, and more specifically, of the business value of such organizations’ AI-enabled transformation projects, required us to make an archival data analysis following the three steps, namely the conceptual phase, the refinement and development phase, and the assessment phase. Findings – AI covers a wide range of technologies, including machine translation, chatbots and self-learning algorithms, all of which can allow individuals to better understand their environment and act accordingly. Organizations have been adopting AI technological innovations with a view to adapting to or disrupting their ecosystem while developing and optimizing their strategic and competitive advantages. AI fully expresses its potential through its ability to optimize existing processes and improve automation, information and transformation effects, but also to detect, predict and interact with humans. Thus, the results of our study have highlighted such AI benefits in organizations, and more specifically, its ability to improve on performance at both the organizational (financial, marketing and administrative) and process levels. By building on these AI attributes, organizations can, therefore, enhance the business value of their transformed projects. The same results also showed that organizations achieve performance through AI capabilities only when they use their features/technologies to reconfigure their processes. Research limitations/implications – AI obviously influences the way businesses are done today. Therefore, practitioners and researchers need to consider AI as a valuable support or even a pilot for a new business model. For the purpose of our study, we adopted a research framework geared toward a more inclusive and comprehensive approach so as to better account for the intangible benefits of AI within organizations. In terms of interest, this study nurtures a scientific interest, which aims at proposing a model for analyzing the influence of AI on the performance of organizations, and at the same time, filling the associated gap in the literature. As for the managerial interest, our study aims to provide managers with elements to be reconfigured or added in order to take advantage of the full benefits of AI, and therefore improve organizations’ performance, the profitability of their investments in AI transformation projects, and some competitive advantage. This study also allows managers to consider AI not as a single technology but as a set/combination of several different configurations of IT in the various company’s business areas because multiple key elements must be brought together to ensure the success of AI: data, talent mix, domain knowledge, key decisions, external partnerships and scalable infrastructure. Originality/value – This article analyses case studies on the reuse of secondary data from AI deployment reports in organizations. The transformation of projects based on the use of AI focuses mainly on business process innovations and indirectly on those occurring at the organizational level. Thus, 500 case studies are being examined to provide significant and tangible evidence about the business value of AI-based projects and the impact of AI on firm performance. More specifically, this article, through these case studies, exposes the influence of AI at both the organizational and process performance levels, while considering it not as a single technology but as a set/combination of the several different configurations of IT in various industries.
27

Fallucchi, Francesca, Marco Coladangelo, Romeo Giuliano, and Ernesto William De Luca. "Predicting Employee Attrition Using Machine Learning Techniques." Computers 9, no. 4 (November 3, 2020): 86. http://dx.doi.org/10.3390/computers9040086.

Abstract:
There are several areas in which organisations can adopt technologies that will support decision-making: artificial intelligence is one of the most innovative technologies that is widely used to assist organisations in business strategies, organisational aspects and people management. In recent years, attention has increasingly been paid to human resources (HR), since worker quality and skills represent a growth factor and a real competitive advantage for companies. After having been introduced to sales and marketing departments, artificial intelligence is also starting to guide employee-related decisions within HR management. The purpose is to support decisions that are based not on subjective aspects but on objective data analysis. The goal of this work is to analyse how objective factors influence employee attrition, in order to identify the main causes that contribute to a worker’s decision to leave a company, and to be able to predict whether a particular employee will leave the company. After training, the obtained model for predicting employee attrition is tested on a real dataset provided by IBM analytics, which includes 35 features and about 1500 samples. Results are expressed in terms of classical metrics, and the algorithm that produced the best results for the available dataset is the Gaussian Naïve Bayes classifier. It achieves the best recall rate (0.54), a metric that measures the ability of a classifier to find all the positive instances, and an overall false negative rate equal to 4.5% of the total observations.
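In outline, the best-performing model reported here (Gaussian Naive Bayes evaluated on recall) is straightforward to reproduce with scikit-learn on the public IBM HR attrition sample data; the file name and the minimal preprocessing below are simplifying assumptions, not the authors' exact pipeline.

```python
# Outline of the best-performing model reported above: Gaussian Naive Bayes on
# the public IBM HR attrition sample data, scored on recall. The file name and
# minimal preprocessing are simplifying assumptions, not the authors' pipeline.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score

data = pd.read_csv("ibm_hr_attrition.csv")            # ~1500 rows, 35 features
y = (data["Attrition"] == "Yes").astype(int)          # 1 = employee left
X = pd.get_dummies(data.drop(columns=["Attrition"]))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

gnb = GaussianNB().fit(X_train, y_train)
print("recall:", recall_score(y_test, gnb.predict(X_test)))
```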
28

Lichtenthaler, Ulrich. "The world’s most innovative companies: a meta-ranking." Journal of Strategy and Management 11, no. 4 (November 19, 2018): 497–511. http://dx.doi.org/10.1108/jsma-07-2018-0065.

Abstract:
Purpose The purpose of this paper is to develop a meta-ranking of the world’s most innovative firms, which underscores the importance of external perceptions of innovativeness and of an innovation-based view on firm performance, including product, service, process, business model, management and organizational innovation. Design/methodology/approach This is an exploratory empirical paper, which integrates the results of five rankings of the world’s most innovative companies. Findings The five innovation rankings include a variety of companies based on different methods and strategic focus. This variety underscores the importance of a meta-ranking, whose multiple aggregation methods lead to consistent results. Only the following 11 companies are mentioned in at least three rankings, leading to a list of the 11 most innovative companies in the world: Amazon, Apple, Tencent, Google/Alphabet, Netflix, SpaceX, Tesla, Microsoft, IBM, Intel and General Electric. Overall, the meta-ranking is dominated by US companies from various industries with firms from China gaining importance. Originality/value The paper contributes to research into innovation antecedents and consequences by illustrating the importance of innovation perceptions. The meta-ranking highlights the need for pursuing different types of innovation, following the innovation-based view on firm performance with first-order and second-order innovations. Moreover, the results deepen our understanding of digital transformation and of capturing value from innovation in the digital economy because a considerable portion of the leading innovators has a business model emphasizing artificial intelligence and digital platforms, which have led to the generation of new and to the disruption of established markets.
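The aggregation behind the meta-ranking (keep the firms named in at least three of the five source rankings) reduces to counting mentions. A minimal sketch with collections.Counter; the ranking contents below are placeholders, not the actual source lists.

```python
# Core of the meta-ranking: count how many source rankings mention each firm
# and keep those named at least three times. The lists below are placeholders.
from collections import Counter

rankings = [
    ["Amazon", "Apple", "Tesla", "Netflix"],
    ["Apple", "Google/Alphabet", "Amazon", "IBM"],
    ["Tesla", "Amazon", "Microsoft", "Apple"],
    ["Netflix", "Apple", "Intel", "Amazon"],
    ["Tencent", "SpaceX", "Amazon", "Apple"],
]

mentions = Counter(firm for ranking in rankings for firm in set(ranking))
meta_ranking = [firm for firm, n in mentions.most_common() if n >= 3]
print(meta_ranking)   # with these placeholders: ['Amazon', 'Apple']
```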
29

Carrero, Justin, Anna Krzeminska, and Charmine E. J. Härtel. "The DXC technology work experience program: disability-inclusive recruitment and selection in action." Journal of Management & Organization 25, no. 04 (July 2019): 535–42. http://dx.doi.org/10.1017/jmo.2019.23.

Abstract:
With the rapid advancement of innovative technology, coupled with IT being a core function in contemporary business, there has been an upward trend of multi-national companies (MNCs) reporting a skill deficit in areas such as data analytics and cybersecurity (Columbus, 2017. IBM predicts demand for data scientists will soar 28% By 2020. Forbes; NeSmith, 2018. The cybersecurity gap is an industry crisis. Forbes). In a recent survey with over 3,000 CIOs, 65% indicated their organizations were unable to maintain par with the progression of technology in areas such as data analytics and security due to a lack of adequate talent (Harvey Nash & KPMG, 2018. CIO survey 2018). However, organizations have recently started to expand their talent pipeline following a neurological breakthrough: research as well as anecdotal evidence suggests adults with mild forms of autism display above-average intelligence, increased attention focus, and high visual–spatial abilities; a combination in high market demand for roles such as software testing, data analysis, cybersecurity, and engineering due to their uncanny ability with pattern recognition, information processing, analytics, and attention to detail. These auspicious developments come at the helm of an increasing rate of governments around the world implementing provisions to their labour regulations towards equitable hiring of people with disabilities (Myors et al., 2017. Perspectives from 22 countries on the legal environment for selection. Handbook of Employee Selection. 659–677. Research Collection Lee Kong Chian School of Business.). Some, such as France, Japan, Kenya, Korea, and Taiwan, have gone so far as to set quota targets (Myors et al., 2017. Perspectives from 22 countries on the legal environment for selection. Handbook of Employee Selection. 659–677. Research Collection Lee Kong Chian School of Business.). The implication for organizations is that they need to develop disability-inclusive recruitment and selection systems along with work designs and environments that are disability friendly. But what does this mean in practice? What does a disability-inclusive recruitment and selection system look like? Enter DXC Technology (DXC): born out of a merger between global conglomerate Computer Science Corporation and Hewlett Packard Enterprise, generating close to $25 billion annually in revenue, with clients across more than 70 countries, they strategically became a pioneer in the digital transformation that was taking place globally. In the wake of the breakthrough in employment diversity, DXC recognized this as an opportunity to gain a critical edge within the increasingly competitive talent pool market. First, design a program of their own for recruiting and selecting adults with high functioning autism. Next, through a collaboration with various universities including the University of Queensland and Macquarie University, Neurodiversity Hubs were established; an initiative designed to assist neurodivergent students with obtaining work experience and internships. In doing so, they faced the following key challenges: How could they design a recruitment and selection strategy for neurodivergent individuals that was equitable, ethical, and efficient? In particular, where could they find suitable neurodivergent candidates, what criteria should they use to select them, and how should they handle unsuccessful candidates to ensure beneficial outcomes for all stakeholders?
30

Gabrilove, Janice Lynn, Peter Backeris, Louise Lammers, Anthony Costa, Layla Fattah, Caroline Eden, Jason Rogers, and Kevin Costa. "2527 Mount Sinai health hackathon: Harnessing the power of collaboration to advance experiential team science education." Journal of Clinical and Translational Science 2, S1 (June 2018): 58. http://dx.doi.org/10.1017/cts.2018.218.

Abstract:
OBJECTIVES/SPECIFIC AIMS: Innovation in healthcare is increasingly dependent on technology and teamwork, requiring effective collaboration between disciplines. Through an intensive team-based competition event, the Mount Sinai Health Hackathon 2017 aimed to harness the power of multidisciplinary and transdisciplinary collaboration to foster innovation in the field of cancer. Participants were immersed in an intensive weekend working in teams to develop technology solutions to important problems affecting patients and care providers in the field of cancer. The learning objectives were to enable participants to: Identify cancer-related healthcare problems which lend themselves to technology-based solutions. Delineate key behaviors critical to multidisciplinary team success. Identify optimal strategies for communicating in multidisciplinary teams. Engage and inspire participants to apply knowledge of technology to meaningfully impact clinical care and well-being. METHODS/STUDY POPULATION: The Mount Sinai Health Hackathon is an annual 48-hour team-based competition, using a format adapted from guidelines provided by MIT Hacking Medicine. The 2017 event gathered a total of 87 participants (120 registered), representing 17 organizations from as far away as California, with a diverse range of backgrounds in bioinformatics, software and hardware, product design, business, digital health and clinical practice. The overall participation model included: Phase 0: Health Hackathon 101 summer workshops; Phase 1: pre-Hackathon priming activities using online forums Trello and Slack; Phase 2: a 48-hour onsite hackathon to catalyze innovation through problem sharing, solution pitches, team formation and development of prototype solutions; Phase 3: competitive presentations to judges and prize awards; Phase 4: a suite of post-hackathon support to stimulate continued development of innovations. The event, sponsored by ConduITS, was also co-sponsored by Persistent Systems, IBM Watson, Tisch Cancer Institute, Sinai AppLab, Sinai Biodesign and other ISMMS Institutes. Mentors circulated throughout the event to support the teams in the technical, clinical, and business development aspects of their solutions. In total, the 14 teams formed during the Hackathon created innovations including diagnostic devices, networking apps, artificial intelligence tools, and others. The top 3 teams were each awarded $2500 to support their projects’ future development. RESULTS/ANTICIPATED RESULTS: Qualitative and quantitative post-event survey data revealed the Hackathon experience fostered collaborative attitudes and a positive experience for participants, providing insight into the potential benefits of team science. In the post-event survey (n=24), 92% of participants reported that the experience increased their ability to solve problems and 96% made new professional or personal connections. In addition, 96% of respondents would attend future Hackathon events and 75% reported they were likely to continue working on their project after the Hackathon.
Qualitative feedback from 1 participant reported it was: “a wonderful event that really highlighted how much interdisciplinary team science can achieve.” Along with intermediate support interactions, including the winning teams participating in a Shark Tank style event with pitches to external entrepreneurs and investors, all teams will be followed up in 6 months time to determine if participants continue to work on projects, file new patents, create new companies, or leverage the new connections made through the Health Hackathon experience. DISCUSSION/SIGNIFICANCE OF IMPACT: Our experience indicates that a Health Hackathon is a compelling and productive forum to bring together students, trainees, faculty, and other stakeholders to explore tech-based solutions to problems in cancer and other areas of biomedicine. It is a valuable tool to foster collaboration and transdisciplinary team science and education. Follow-up analysis will determine to what extent the Mount Sinai Health Hackathon is contributing to an ecosystem that encourages professionals and trainees in healthcare and in technology development to work together to address unmet needs in healthcare with innovative technology solutions.
APA, Harvard, Vancouver, ISO, and other styles
31

Magoma, Tshepo, Sithembiso Khumalo, and Tanya Du Plessis. "Affordability of IBM Cognos business intelligence tool features suitable for small- and medium-sized enterprises’ decision-making." SA Journal of Information Management 23, no. 1 (March 29, 2021). http://dx.doi.org/10.4102/sajim.v23i1.1291.

Full text
Abstract:
Background: Business intelligence (BI) tools are generally associated with organisations that have resources to purchase and implement these tools. Evidence abounds regarding the correlation between BI tools and improved business decision-making. This study’s unit of analysis is affordability as a feature of IBM Cognos making it suitable for small- and medium-sized enterprises (SMEs). Objective: The research aim was to identify the fundamental features of IBM Cognos which would address the decision-making needs of SMEs. The objective was to determine the significance of BI tool features by means of identifying affordable features suitable for SMEs’ decision-making. Method: A quantitative research design and a deductive approach were best suited for assessing the fundamental features of IBM Cognos for SMEs’ decision-making needs. The signification framework variables, such as presumed, prized and perceived value of BI tool features, were quantified and measured using statistical analysis tools. A non-probability convenience sampling technique was used with a sample size of 200, that is, 80 BI consultants, 60 SME BI developers and 60 SME managers. Results: Affordable key features of BI tools in the context of SMEs’ business decision-making include consistency and comfort, intuitive interface, avoiding impulsivity, cost effectiveness, availability of information, best programmed visualisations, reporting quickly and easily and financial decision-making. Conclusion: The signification framework’s presumed, prized and perceived value indicators link affordable BI tool features to the consistency of the decision-making process and present an alternative view of affordability. An intuitive interface relates to convenience and ease of authoring content, designing, building and securing reports to the SME, which helps in improving consistent decision-making.
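As an illustrative aside on the kind of analysis the Method section above describes, the short sketch below shows one way presumed, prized and perceived value ratings could be summarised per respondent group (BI consultants, SME BI developers, SME managers). It is a minimal, hypothetical example: the 5-point scale, column names and sample values are assumptions made for illustration, not the authors' instrument or data.

```python
# Hypothetical sketch of summarising signification-framework ratings by group.
# The 5-point scale, column names and rows are illustrative assumptions only.
import pandas as pd

responses = pd.DataFrame(
    [
        # group, presumed_value, prized_value, perceived_value (1 = low, 5 = high)
        ("BI consultant", 4, 5, 4),
        ("SME BI developer", 3, 4, 4),
        ("SME manager", 4, 4, 5),
        ("SME manager", 3, 4, 4),
    ],
    columns=["group", "presumed_value", "prized_value", "perceived_value"],
)

# Mean rating per group for each value indicator.
summary = responses.groupby("group").mean(numeric_only=True).round(2)
print(summary)
```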
APA, Harvard, Vancouver, ISO, and other styles
32

"Business intelligence: The IBM solution." Computers & Mathematics with Applications 38, no. 11-12 (December 1999): 284. http://dx.doi.org/10.1016/s0898-1221(99)91209-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Deshpande, Nishad, Shabib Ahmeda, and Alok Khodea. "Business intelligence through patinformatics: A study of energy efficient data centres using patent data." Journal of Intelligence Studies in Business 6, no. 3 (December 30, 2016). http://dx.doi.org/10.37380/jisib.v6i3.193.

Full text
Abstract:
The advent of cloud computing has nurtured an unprecedented growth of data centres. With its growth, the main concern for service providers and data centre owners is to efficiently manage the energy of the data centres without compromising their computing capabilities. This concern is genuine as data centres utilise 10-30 times more energy than office spaces and also generate immense heat. As cooling accounts for half of the total power consumption in data centres, efficient cooling systems have become a vital need for data centres. This has resulted in increased research and innovation in the field of efficient cooling of data centres, which in turn has led to growth in filing of patents in this domain. Patents are techno-legal documents that contain different kinds of information that is accessible to all. In the present study, patents are used as source of information for competitive/business intelligence to highlight the technological trends in the field of energy efficient cooling of data centres. The study reveals that IBM, HP, Schneider and Hon Hai Industries are the major players working in this technological area. Contrary to the notion that air conditioning would be the most researched area for cooling data centres, the study reveals that there is also interest in the hardware of the servers and racks to produce less heat or to have built-in cooling mechanisms. The main technologies for which patents are being filed include ventilation using gaseous coolant, technologies related to rack design as well as liquid cooling. Original equipment manufacturers and other vendors have increased filings, along with cloud service providers. Most of these technologies originate from Asia-Pacific and this region is a strong market, following the USA.
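To make the patinformatics workflow described above concrete, the sketch below shows the kind of aggregation such a landscape study rests on: counting filings per assignee and per cooling-technology category. The records, field names and categories here are invented for illustration and are not the study's dataset.

```python
# Minimal patent-landscape aggregation: filings per assignee and per technology
# category. Records and field names are illustrative assumptions only.
from collections import Counter

patents = [
    {"assignee": "IBM", "category": "liquid cooling", "year": 2012},
    {"assignee": "HP", "category": "rack design", "year": 2013},
    {"assignee": "Schneider", "category": "gaseous-coolant ventilation", "year": 2011},
    {"assignee": "IBM", "category": "rack design", "year": 2014},
    {"assignee": "Hon Hai", "category": "liquid cooling", "year": 2014},
]

filings_by_assignee = Counter(p["assignee"] for p in patents)
filings_by_category = Counter(p["category"] for p in patents)

print("Filings per assignee:", filings_by_assignee.most_common())
print("Filings per technology:", filings_by_category.most_common())
```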
APA, Harvard, Vancouver, ISO, and other styles
34

Gutiérrez B., Wilson H. "Acxiom, Red Hat, Actifio, & IBM: A Strategic Partnership on Virtualization and Marketing." Virtu@lmente 6, no. 2 (August 1, 2019). http://dx.doi.org/10.21158/2357514x.v6.n2.2018.2269.

Full text
Abstract:
This document contains an analysis of strategic alliances between «big» players in information technology (IT) and software applications in server virtualization (SV) and marketing alliances. The main objective is to analyze the commercial advantages, benefits, and challenges of business partners, using the SV for information management, data storage, and collaboration in marketing programs. We analyze cases of alliances in companies like IBM, Acxiom, Red Hat, and Actifio, who have identified the competitive advantages of having a main «partner», which has the ideal product or service to complement one or several secondary brands as a commercial strategy and brand positioning. The SV brings positive transformations such as the reduction of hardware costs, the improvement of the provisioning and implementation of the server, disaster recovery solutions, efficient and economical use of energy, and increased productivity. This strategy invites a change in the way data centers are being formed and becomes a preferred solution not only for the reduction of IT costs but also for making a company more flexible, productive, and efficient, generating better results. The ability to capture and store more and more detailed information about customer needs and behavior, taking advantage of the technology and intelligence of business partners, creates better opportunities in services, products, and more effective marketing campaigns, attracting new customers and positioning brands.
APA, Harvard, Vancouver, ISO, and other styles
35

Scuotto, Veronica, Alberto Ferraris, and Stefano Bresciani. "Internet of Things: applications and challenges in smart cities. A case study of IBM smart city projects." Business Process Management Journal 22, no. 2 (March 4, 2016). http://dx.doi.org/10.1108/bpmj-05-2015-0074.

Full text
Abstract:
Purpose: Empirical testing on IBM Smart Cities projects was applied to demonstrate that the combination of the use of IoT and the implementation of the Open Innovation (OI) model within smart cities has changed the development of urban areas and affected firms’ innovativeness. Design/methodology/approach: A case study methodology on a leading multinational firm deeply involved in smart cities projects has been chosen. Findings: From this study it emerged that IBM: a) has a clear vision of Smart Cities and IoT; b) adopts a worldwide OI approach to Smart Cities; c) delineates specific strategies and creates ad hoc Open Innovation Units for Smart Cities projects. Research limitations/implications: The major limitation of this work is that the analysis presented has been developed on only one case of a multinational firm that operates in Smart Cities contexts. Practical implications: Recommendations will be made to both public and private actors in order to plan and implement efficient strategies to improve their performance. Originality/value: The concept of the smart city has become quite popular among scholars and practitioners in the era of the digital economy. Cities become smart by developing new urban areas using new Information and Communication Technologies (ICTs) such as mobile devices, the semantic web, cloud computing, and the internet of things (IoT). Smart cities create innovation ecosystems, joining together different forces such as knowledge-intensive activities, institutions for cooperation and learning, and the collective intelligence of web-based applications. This research is of importance and significance to scholars, governments, and firms who need to understand the relevance of smart cities in the current economy.
APA, Harvard, Vancouver, ISO, and other styles
36

Boni, Arthur A. "Challenges for Transformative Innovation in Emerging Digital Health Organizations: advocating service design to address the multifaceted healthcare ecosystem." Journal of Commercial Biotechnology 25, no. 4 (December 11, 2020). http://dx.doi.org/10.5912/jcb957.

Full text
Abstract:
This article uses mini-case studies of three early-stage organizations that pursued different pathways or models for bringing emerging, transformative digital technologies to the healthcare market. These organizations were each focused on different applications of digital health: Stentor was a venture capital-backed university spinoff focused on the field of digital radiology; Omnyx was formed as a joint venture (JV) by an academic medical center and an industrial partner to transform the field of digital pathology; and IBM Watson, operating as an IBM unit, focused on the promise of artificial intelligence and machine learning for broad uses in cancer diagnosis and treatment. Each took a different organizational and business model path that resulted in mixed outcomes. While there are always many reasons for success or failure, we observe that these digital healthcare markets are more complex than typical consumer or technology markets. While any solution in healthcare demands patient centricity, healthcare markets additionally require a strong understanding and appreciation of the supporting ecosystem or network consisting of physicians and providers, and of constraints from payers and regulators. The value propositions of each member of the ecosystem must be understood and addressed. To meet this challenge, we advocate the formation of an integrated multidisciplinary commercialization team that addresses the multidimensional value proposition across the company life cycle. And importantly, that team should work collaboratively and include service design as a key team member - along with the technology, business, marketing, reimbursement, and regulatory components.
APA, Harvard, Vancouver, ISO, and other styles
37

Al-Ayed, Sura I., and Ahmad Adnan Al-Tit. "Factors affecting the adoption of blended learning strategy." International Journal of Data and Network Science, 2021, 267–74. http://dx.doi.org/10.5267/j.ijdns.2021.6.007.

Full text
Abstract:
The aim of this study is to explore factors affecting the adoption of a blended learning strategy. Data was collected using a questionnaire consisting of 42 items, distributed to a random sample of 174 faculty members of Saudi Electronic University and Qassim University. IBM SPSS was used to conduct the data analysis. Support for the research hypotheses indicates that student, institutional and learning variables had significant influences on the adoption of a blended learning strategy. Considering the findings, it was concluded that the adoption of a blended learning strategy depends not only on the technological aspect of the learning process but also on people, i.e., students who are engaged in the process and motivated teachers who possess the required knowledge and skills. The most important implication of this research is that policy and decision makers in business education schools are urged to consider factors that had a significant effect on the adoption of blended learning. In doing so, the research contributes to blended learning knowledge by highlighting the key variables that encourage or hinder the adoption of a blended learning strategy.
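As a hedged illustration of the kind of analysis the abstract reports (predictors of blended-learning adoption tested for significance), the sketch below fits an ordinary least squares regression on synthetic data. The predictor names, simulated values and coefficients are assumptions for the example; they do not reproduce the study's SPSS output.

```python
# Illustrative OLS regression on synthetic data: do student, institutional and
# learning-related scores predict adoption of a blended learning strategy?
# Values are simulated; only the sample size mirrors the study (n = 174).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 174

student = rng.normal(3.5, 0.6, n)         # hypothetical mean of student items
institutional = rng.normal(3.2, 0.7, n)   # hypothetical mean of institutional items
learning = rng.normal(3.8, 0.5, n)        # hypothetical mean of learning items
adoption = (0.4 * student + 0.3 * institutional + 0.3 * learning
            + rng.normal(0.0, 0.4, n))    # simulated adoption score

X = sm.add_constant(np.column_stack([student, institutional, learning]))
model = sm.OLS(adoption, X).fit()
print(model.summary())  # coefficients, t-statistics and p-values per predictor
```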
APA, Harvard, Vancouver, ISO, and other styles
38

Degabriele, Maria. "Business as Usual." M/C Journal 3, no. 2 (May 1, 2000). http://dx.doi.org/10.5204/mcj.1834.

Full text
Abstract:
As a specialist in culture and communication studies, teaching in a school of business, I realised that the notion of interdisciplinarity is usually explored in the comfort of one's own discipline. Meanwhile, the practice of interdisciplinarity is something else. The very notion of disciplinarity implies a regime of discursive practices, but in the zone between disciplines, there is often no adequate language. This piece of writing is a brief analysis of an example of the language of business studies when business studies thinks about culture. It looks at how business studies approaches cultural difference in context of intercultural contact. Geert Hofstede's Cultures and Organizations: Software of the Mind (1991) This article is a brief and very selective critique of Geert Hofstede's notion of culture in Cultures and Organizations: Software of the Mind. Hofstede has been publishing his work on cross-cultural management since the 1960s. His work is routinely used in reference to cross/multi/intercultural issues in business studies (a term I use to include commerce, finance, management, and marketing). Before I begin, I must insist that Hofstede's Cultures and Organizations: Software of the Mind is a very useful text for business studies students, as it introduces them to useful concepts in relation to culture, like culture shock, acculturation (not enculturation -- I suppose managers are repatriated before that happens), and training for successful cross-cultural communication. It is worth including here a brief note on the subtitle of Cultures and Organizations: Software of the Mind. This "software of the mind" is clearly analogous to computer programming. However, Hofstede disavows the analogy, which is central to his thesis, saying that people are not programmed the way computers are. So they are, but not really. Hofstede claims that in order to learn something different, one "must unlearn ... (the) ... patterns of thinking, feeling, and potential acting which were learned throughout (one's) lifetime". And it is this thinking/feeling/acting function he calls the "software of the mind" (4). So, is the body the hardware? Thinking and feeling are abstract and could, with a flight of fancy, be seen as "software". However, acting is visible, tangible, and often visceral. I am suggesting that "acting" either represents or is just about all we have as culture. Acting (in the fullest sense, including speech, gesture, manners, textual production, etc.) is not evidence of culture, it is culture. Also, computer technology, like every other technology, is part of culture, as evident in this journal. Culture I share Clifford Geertz's concept of culture as a semiotic one, where interpretation is a search for meaning, and where meaning lies in social relations. Geertz writes that to claim that culture consists in brute patterns of behaviour in some identifiable community is to reduce it (the community and the notion of culture). Human behaviour is symbolic action. Culture is not just patterned conduct, a frame of mind which points to some sort of ontological status. Culture is public, social, relational, and contextual. To quote Geertz: "culture is not a power, something to which social events, behaviours, institutions, or processes can be causally attributed; it is a context" (14). Culture is not an ontological essence or set of behaviours. Culture is made up of webs of relationships. That Hofstede locates culture in the mind is probably the most problematic aspect of his writing. 
Culture is difficult for any discipline to describe because different disciplines have their own view of social reality. They operate in their own paradigms. Hofstede uses a behaviourist psychological approach to culture, which looks at what he calls national character and typical behaviours. Even though Hofstede is aware of being, as an observer of human behaviour, an integral part of his object of analysis (other cultures), he nevertheless continuously equates the observed behaviour to particular kinds of national thinking and feeling where national is often collapsed into cultural. Hofstede uses an empirical behaviourist paradigm which measures certain behaviours, as if the observer is outside the cultural significance attributed to behaviours, and attributes them to culture. Hofstede's Notion of Culture Hofstede's work is based on quantitative data gathered from questionnaires administered to IBM corporation employees in various countries. He looked at 72 national subsidiaries, 38 occupations, 20 languages, and at two points in time (1968 and 1972), and continued his commentary on that data into the 1990s. He claims that because the entire sample has a common corporate culture, the only thing that can account for systematic and consistent differences between national groups within a homogeneous multinational organisation is nationality itself. It is as if corporate culture is outside, has nothing to do with, national culture (itself a complex and dynamic concept). Hofstede's work does not account for the fact that IBM is an American multinational corporation and, as such, whatever attributes are used to measure cultural difference, those found in American corporate culture will set the benchmark for whatever other cultures are measured. This view is supported in business studies in general where American management practices are seen as universal and normal, even when they are described as 'Western'. The areas Hofstede's IBM survey looked at are: 1. Social inequality, including the relationship with authority (also described as power distance); 2. The relationship between the individual and the group (also described as individualism versus collectivism); 3. Concepts of masculinity and femininity: the social implications of having been born as a boy or a girl (also described as masculinity versus femininity); 4. Ways of dealing with uncertainty, relating to the control of aggression and the expression of emotions (also described as uncertainty avoidance). These concepts are in themselves culturally specific and have become structurally embedded in organisational theory. Hofstede writes that these four dimensions of culture are aspects of culture that can be measured relative to other cultures. What these four dimensions actually do is not to combine to give us a four-dimensional (complex?) appreciation of culture. Rather, they map onto each other and reinforce a politically conservative, Eurocentric view of culture. Hofstede does admit to having had "a 'Western' way of thinking", but he inevitably goes back to "the mind" as a place or goal. He refers to a questionnaire composted by "Eastern', in this case Chinese minds ... [which] ... are programmed according to their own particular cultural framework" (171). So there is this constant reference to culturally programmed minds that determine certain behaviours. In his justification of using typologies to categorise people and their behaviour (minds?) Hofstede also admits that most people / cultures are hybrids. 
And he admits that rules are made arbitrarily in order to classify people / cultures (minds?). However, he insists that the statistical clusters he ends up with are an empirical typology. Such a reduction of "culture" to this kind of radical realism is absolutely anatomical and enumerative. And, the more Hofstede is quoted as an authority on doing business across cultures, the more truth value his work accrues. The sort of language Hofstede uses to describe culture attributes intrinsic meanings and, as a result, points to difference rather than diversity. Languages of difference are based on binaristic notions of masculine/feminine, East/West, active/passive, collective/individual, and so on. In this opposition of activity and passivity, the East (feminine, collectivist) is the weaker partner of the West (masculine, individualist). There is a nexus of knowledge and power that constructs cultural difference along such binaristic lines. While a language of diversity take multiplicity as a starting point, or the norm, Hofstede's hegemonic and instrumentalist language of difference sees multiplicity as problematic. This problem is flagged at the very start of Cultures and Organizations. 12 Angry Men: Hofstede Interprets Culture and Ignores Gender In the opening page of Cultures and Organizations there is a brief passage from Reginald Rose's play 12 Angry Men (1955). (For a good review of the film see http://www.film.u- net.com/Movies/Reviews/Twelve_Angry.html. The film was recently remade.) Hofstede uses it as an example of how twelve different people with different cultural backgrounds "think, feel and act differently". The passage describes a confrontation between what Hofstede refers as "a garage owner" and "a European-born, probably Austrian, watchmaker". Such a comparison flags, right from the start, a particular way of categorising and distinguishing between two people, in terms of visible and audible signs and symbols. Both parties are described in terms of their occupation. But then the added qualification of one of the parties as being "European-born, probably Austrian" clearly indicates that the unqualified party places him in the broad category "American". In other words, the garage owner's apparently neutral ethnicity implies a normative "American", against which all markers of cultural difference are measured. Hofstede is aware of this problem. He writes that "cultural relativism does not imply normlessness for oneself, nor for one's society" (7). However, he still uses the syntax of binaristic classification which repeats and perpetuates the very problems he is apparently addressing. One of the main factors that makes 12 Angry Men such a powerful drama is that each man carries / inscribes different aspects of American culture. And American culture is idealised in the justice system, where rationality and consensus overcomes prejudice and social pressure. Each man has a unique make-up, which includes class, occupation, ethnicity, personality, intelligence, style and experience. But 12 Angry Men is also an interesting exploration of masculinity. Because Hofstede has included a category of "masculine/feminine" in his study of national culture, it is an interesting oversight that he does not comment on this powerful element of the drama. People identify along various lines, in terms of ethnicities, languages, histories, sexuality, politics and nationalism. Most people do have multiple and varied aspects to their identity. 
However, Hofstede sees multiple lines of identification as causing "conflicting mental programs". Hofstede claims that identification on the gender level of his hierarchy is determined "according to whether a person was born as a girl or as a boy" (10). Hofstede misses the crucial point that whilst whether one is born female or male determines one's sex, whether one is enculturated as and identifies as feminine or masculine indicates one's gender. Sex and gender are not the same thing. Sex is biological (natural) and gender is ideological (socially constructed and naturalised). This sort of blindness to the ideological component of identity is a fundamental flaw in Hofstede's thesis. Hofstede takes ideological constructions as given, as natural. For example, in endnote 1 of Chapter 4, "He, she, and (s)he", he writes "My choice of the terms (soft feminine and hard masculine) is based on what is in virtually all societies, not on what anybody thinks should be (107, his italics). He reinforces the notion of gendered essences, or essences which constitute national identity. Indeed, the world is not made up of entities or essences that are masculine or feminine, Western or Eastern, active or passive. And the question is not so much about empirical accuracy along such lines, but rather what are the effects of always reinscribing cultures as Western or Eastern, masculine or feminine, collectivist or individualist. In an era of globalism and mass, interconnected communication, identities are multiple, and terms like East and West, masculine and feminine, active and passive, should be used as undecidable codes that, at the most, flag fragments of histories and ideologies. Identity East and West are concepts that did not come out of a political or cultural vacuum. They are categories, or concepts, that originated and flourished with European expansionism from the 17th century. They underwrote imperialism and colonisation. They are not inert labels that merely point to something "out there". East and West, like masculine and feminine or any other binary pair, indicate an imaginary relationship that prioritises one of the pair over the other. People and cultures cannot be separated into static Western and Eastern essences. Culture itself is always diverse and dynamic. It is marked by migration, diaspora, and exile, not to mention historical change. There are no "original" cultures. The sort of discourse Hofstede uses to describe cultures is based on an ontological and epistemological distinction made between East and West. Culture is not something invisible or intangible. Culture is not something obscure that is in the mind (whatever or wherever that is) which manifests itself in peculiar behaviours. Culture is what and how we communicate, whether that takes the form of speech, gestures, novels, plays, architecture, style, or art. And, as such, communication includes the objects we produce and exchange and the symbols to which we give meaning. So, when Hofstede writes that the Austrian watchmaker acts the way he does because he cannot behave otherwise. After many years in his new home country, he still behaves the way he was raised. He carries within himself an indelible pattern of behaviour he is attributing a whole range of qualities which are frequently given by dominant cultures to their cultural "others" (1). Hofstede attributes politeness, tradition, and, above all, stasis, to the European-Austrian watchmaker. The phrase "after many years in his new home country" is contradictory. 
If so many years have passed, why is "home" still "new"? And, indeed, the watchmaker might still behave the way he was raised, but it would be safe to assume that the garage owner also behaves the way he was raised. One of the main points made in 12 Angry Men is that twelve American men are all very different to each other in terms of values and behaviour. All this is represented in the dialogue and behaviour of twelve men in a closed room. If we are concerned with different kinds of social behaviour, and we are not concerned with pathological behaviour, then how can we know what anyone carries within themselves? Why do we want to know what anyone carries within themselves? From a cultural studies perspective, the last question is political. However, from a business studies perspective, that question is naïve. The radical economic rationalist would want to know as much as possible about cultural differences so that we can better target consumer groups and be more successful in cross-cultural negotiations. In colonial days, foreigners often wielded absolute power in other societies and they could impose their rules on it [sic]. In these postcolonial days, foreigners who want to change something in another society will have to negotiate their interventions. (7) Those who wielded absolute power in the colonies were the non-indigenous colonisers. It was precisely the self-legitimating step of making a place a colony that ensured an ongoing presence of the colonising power. The impetus behind learning about the Other in the colonial times was a combination of spiritual salvation (as in the "mission civilisatrice") and economic exploitation (colonies were seen as resources for the benefit of the European and later American centres). And now, the impetus behind learning about cultural difference is that "negotiation is more likely to succeed when the parties concerned understand the reasons for the differences in viewpoints" (7). Culture as Commerce What, in fact, happens, is that business studies simultaneously wants to "do" components of cross-cultural studies, as it is clearly profitable, while shunning the theoretical discipline of cultural studies. A fundamental flaw in a business studies perspective, which is based on Hofstede's work, is a blindness to the ideological and historical component of identity. Business studies has picked up just enough orientalism, feminism, marxism, deconstruction and postcolonialism to thinly disavow any complicity with dominant (and dominating) discourses, while getting on with business-as-usual. Multiculturalism and gender are seen as modern categories to which one must pay lip service, only to be able to get on with business-as-usual. Negotiation, compromise and consensus are desired not for the sake of success in civil processes, but for the material value of global market presence, acceptance and share. However, civil process and commercial interests are not easily separable. To refer to a cultural economy is not just to use a metaphor. The materiality of business, in the various forms of commercial transactions, is itself part of one's culture. That is, culture is the production, consumption and circulation of objects (including less easily definable objects, like performance, language, style and manners). Also, culture is produced and consumed socially (in the realm of the civil) and circulates through official and unofficial social and commercial mechanisms. Culture is a material and social phenomenon. 
It's not something hidden from view that only reveals itself in behaviours. Hofstede rightly asserts that culture is learned and not inherited. Human nature is inherited. However, it is very difficult to determine exactly what human nature is. Most of what we consider to be human nature turns out to be, upon close inspection, ideological, naturalised. Hofstede writes that what one does with one's human nature is "modified by culture" (5). I would argue that whatever one does is cultural. And this includes taking part in commercial transactions. Even though commercial transactions (including the buying and selling of services) are material, they are also highly ritualistic and highly symbolic, involving complex forms of communication (verbal and nonverbal language). Culture as Mental Programming Hofstede's insistent ontological reference to 'the sources of one's mental programs' is problematic for many reasons. There is the constant ontological as well as epistemological distinction being made between cultures, as if there is a static core to each culture and that we can identify it, know what it is, and deal with it. It is as if culture itself is a knowable essence. Even though Hofstede pays lip service to culture as a social phenomenon, saying that "the sources of one's mental programs lie within the social environments in which one grew up and collected one's life experiences" (4), and that past theories of race have been largely responsible for massive genocides, he nevertheless implies a kind of biologism simply by turning the mind (a radical abstraction) into something as crude as computer software, where data can be stored, erased or reconfigured. In explaining how culture is socially constructed and not biologically determined, Hofstede says that one's mental programming starts with the family and goes on through the neighbourhood, school, social groups, the work place, and the community. He says that "mental programs vary as much as the social environments in which they were acquired", which is nothing whatsoever like computer software (4-5). But he carries on to claim that "a customary term for such mental software is culture" (4, my italics). Before the large-scale changes which took place in the second half of the twentieth century in disciplines like anthropology, history, linguistics, and psychology, culture was seen to be a recognisable, determined, contained, consistent way of living which had deep psychic roots. Today, any link between mental processes and culture (formerly referred to as "race") cannot be sustained. We must be cautious against presuming to understand the relationship between mental process and social life and also against concluding that the content of the mind in each racial (or, if you like, ethnic or cultural) group is of a peculiar kind, because it is this kind of reductionism that feeds stereotypes. And it is the accumulation of knowledge about cultural types that implies power over the very types that are thus created. Conclusion A genuinely interdisciplinary approach to communication, commerce and culture would make business studies more theoretical and more challenging. And it would make cultural studies take commerce more seriously, beyond a mere celebration of shopping. This article has attempted to reveal some of the cracks in how business studies accounts for cultural diversity in an age of global commercial ambitions. 
It has also looked at how Hofstede's writings, as exemplary of the business studies perspective, papers over those cracks with a very thin layer of pluralist cultural relativism. This article is an invitation to open up a critical dialogue which dares to go beyond disciplinary traditionalisms in order to examine how meaning, communication, culture, language and commerce are embedded in each other. References Carothers, J.C. Mind of Man in Africa. London: Tom Stacey, 1972. Degabriele, Maria. Postorientalism: Orientalism since "Orientalism". Ph.D. Thesis. Perth: Murdoch University, 1997. Geertz, Clifford. The Interpretation of Cultures: Selected Essays. New York: Basic Books, 1973. Hofstede, Geert. Cultures and Organisations: Software of the Mind. Sydney: McGraw-Hill, 1991. Moore, Charles A., ed. The Japanese Mind: Essentials of Japanese Philosophy and Culture. Honolulu: East-West Centre, U of Hawaii, 1967. Patai, Raphael. The Arab Mind. New York: Scribner, 1983. Toffler, Alvin. Future Shock: A Study of Mass Bewildernment in the Face of Accelerating Change. Sydney: Bodley Head, 1970. 12 Angry Men. Dir. Sidney Lumet. Orion-Nova, USA. 1957. Citation reference for this article MLA style: Maria Degabriele. "Business as Usual: How Business Studies Thinks Culture." M/C: A Journal of Media and Culture 3.2 (2000). [your date of access] Chicago style: Maria Degabriele, "Business as Usual: How Business Studies Thinks Culture," M/C: A Journal of Media and Culture 3, no. 2 (2000), ([your date of access]). APA style: Maria Degabriele. (2000) Business as usual: how business studies thinks culture. M/C: A Journal of Media and Culture 3(2). ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
39

Sampson, Tony. "A Virus in Info-Space." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2368.

Full text
Abstract:
‘We are faced today with an entire system of communication technology which is the perfect medium to host and transfer the very programs designed to destroy the functionality of the system.’ (IBM Researcher: Sarah Gordon, 1995) Despite renewed interest in open source code, the openness of the information space is nothing new in terms of the free flow of information. The transitive and nonlinear configuration of data flow has ceaselessly facilitated the sharing of code. The openness of the info-space encourages a free distribution model, which has become central to numerous developments through the abundant supply of freeware, shareware and source code. Key moments in open source history include the release in 1998 of Netscape’s Communicator source code, a clear attempt to stimulate browser development. More recently in February 2004 the ‘partial leaking’ of Microsoft Windows 2000 and NT 4.0 source code demonstrated the often-hostile disposition of open culture and the potential threat it poses to existing corporate business models. However, the leading exponents of the open source ethic predate these events by more than a decade. As an extension of the hacker, the virus writer has managed, since the 1980s, to bend the shape of info-space beyond recognition. By freely spreading viruses, worms and hacker programs across the globe, virus writers have provided researchers with a remarkable set of digital footprints to follow. The virus has, as IBM researcher Sarah Gordon points out, exposed the info-space as a ‘perfect medium’ rife for malicious viral infection. This paper argues that viral technologies can hold info-space hostage to the uncertain undercurrents of information itself. As such, despite mercantile efforts to capture the spirit of openness, the info-space finds itself frequently in a state far-from-equilibrium. It is open to often-unmanageable viral fluctuations, which produce levels of spontaneity, uncertainty and emergent order. So while corporations look to capture the perpetual, flexible and friction-free income streams from centralised information flows, viral code acts as an anarchic, acentred Deleuzian rhizome. It thrives on the openness of info-space, producing a paradoxical counterpoint to a corporatised information society and its attempt to steer the info-machine. The Virus in the Open System Fred Cohen’s 1984 doctoral thesis on the computer virus locates three key features of openness that makes viral propagation possible (see Louw and Duffy, 1992 pp. 13-14) and predicts a condition common to everyday user experience of info-space. Firstly, the virus flourishes because of the computer’s capacity for information sharing; transitive flows of code between nodes via discs, connected media, network links, user input and software use. In the process of information transfer the ‘witting and unwitting’ cooperation of users and computers is a necessary determinant of viral infection. Secondly, information flow must be interpreted. Before execution computers interpret incoming information as a series of instructions (strings of bits). However, before execution, there is no fundamental distinction between information received, and as such, information has no meaning until it has been executed. Thus, the interpretation of information does not differentiate between a program and a virus. Thirdly, the alterability or manipulability of the information process allows the virus to modify information.
For example, advanced polymorphic viruses avoid detection by using non-significant, or redundant code, to randomly encrypt and decrypt themselves. Cohen concludes that the only defence available to combat viral spread is the ‘limited transitivity of information flow’. However, a reduction in flow is contrary to the needs of the system and leads ultimately to the unacceptable limitation of sharing (Cohen, 1991). As Cohen states ‘To be perfectly secure against viral attacks, a system must protect against incoming information flow, while to be secure against leakage of information a system must protect against outgoing information flow. In order for systems to allow sharing, there must be some information flow. It is therefore the major conclusion of this paper that the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible.’ Cohen’s research does not simply end with the eradication of the virus via the limitation of openness, but instead leads to a contentious idea concerning the benevolent properties of viral computing and the potential legitimacy of ‘friendly contagion’. Cohen looks beyond the malevolent enemy of the open network to a benevolent solution. The viral ecosystem is an alternative to Turing-von Neumann capability. Key to this system is a benevolent virus, which epitomises the ethic of open culture. Drawing upon a biological analogy, benevolent viral computing reproduces in order to accomplish its goals; the computing environment evolving rather than being ‘designed every step of the way’ (see Zetter, 2000). The viral ecosystem demonstrates how the spread of viruses can purposely evolve through the computational space using the shared processing power of all host machines. Information enters the host machine via infection and a translator program alerts the user. The benevolent virus passes through the host machine with any additional modifications made by the infected user. The End of Empirical Virus Research? Cohen claims that his research into ‘friendly contagion’ has been thwarted by network administrators and policy makers (See Levy, 1992 in Spiller, 2002) whose ‘apparent fear reaction’ to early experiments resulted in trying to solve technical problems with policy solutions. However, following a significant increase in malicious viral attacks, with estimated costs to the IT industry of $13 billion in 2001 (Pipkin, 2003 p. 41), research into legitimate viruses has not surprisingly shifted from the centre to the fringes of the computer science community (see Dibbell, 1995). Current reputable and subsequently funded research tends to focus on efforts by the anti-virus community to develop computer hygiene. Nevertheless, malevolent or benevolent viral technology provides researchers with a valuable recourse. The virus draws analysis towards specific questions concerning the nature of information and the culture of openness. What follows is a delineation of a range of approaches, which endeavour to provide some answers. Virus as a Cultural Metaphor Sean Cubitt (in Dovey, 1996 pp. 31-58) positions the virus as a contradictory cultural element, lodged between the effective management of info-space and the potential for spontaneous transformation. However, distinct from Cohen’s aspectual analogy, Cubitt’s often-frivolous viral metaphor overflows with political meaning.
He replaces the concept of information with a space of representation, which elevates the virus from empirical experience to a linguistic construct of reality. The invasive and contagious properties of the biological parasite are metaphorically transferred to viral technology; the computer virus is thus imbued with an alien otherness. Cubitt’s cultural discourse typically reflects humanist fears of being subjected to increasing levels of technological autonomy. The openness of info-space is determined by a managed society aiming to ‘provide the grounds for mutation’ (p. 46) necessary for profitable production. Yet the virus, as a possible consequence of that desire, becomes a potential opposition to ‘ideological formations’. Like Cohen, Cubitt concludes that the virus will always exist if the paths of sharing remain open to information flow. ‘Somehow’, Cubitt argues, ‘the net must be managed in such a way as to be both open and closed. Therefore, openness is obligatory and although, from the point of view of the administrator, it is a recipe for ‘anarchy, for chaos, for breakdown, for abjection’, the ‘closure’ of the network, despite eradicating the virus, ‘means that no benefits can accrue’ (p.55). Virus as a Bodily Extension From a virus writing perspective it is, arguably, the potential for free movement in the openness of info-space that that motivates the spread of viruses. As one writer infamously stated it is ‘the idea of making a program that would travel on its own, and go to places its creator could never go’ that inspires the spreading of viruses (see Gordon, 1993). In a defiant stand against the physical limitations of bodily movement from Eastern Europe to the US, the Bulgarian virus writer, the Dark Avenger, contended that ‘the American government can stop me from going to the US, but they can’t stop my virus’. This McLuhanesque conception of the virus, as a bodily extension (see McLuhan, 1964), is picked up on by Baudrillard in Cool Memories_ _(1990). He considers the computer virus as an ‘ultra-modern form of communication which does not distinguish, according to McLuhan, between the information itself and its carrier.’ To Baudrillard the prosperous proliferation of the virus is the result of its ability to be both the medium and the message. As such the virus is a pure form of information. The Virus as Information Like Cohen, Claude Shannon looks to the biological analogy, but argues that we have the potential to learn more about information transmission in artificial and natural systems by looking at difference rather than resemblance (see Campbell, 1982). One of the key aspects of this approach is the concept of redundancy. The theory of information argues that the patterns produced by the transmission of information are likely to travel in an entropic mode, from the unmixed to the mixed – from information to noise. Shannon’s concept of redundancy ensures that noise is diminished in a system of communication. Redundancy encodes information so that the receiver can successfully decode the message, holding back the entropic tide. Shannon considers the transmission of messages in the brain as highly redundant since it manages to obtain ‘overall reliability using unreliable components’ (in Campbell, 1982 p. 191). While computing uses redundancy to encode messages, compared to transmissions of biological information, it is fairly primitive. Unlike the brain, Turing-von-Neumann computation is inflexible and literal minded. 
In the brain information transmission relies not only on deterministic external input, but also self-directed spontaneity and uncertain electro-chemical pulses. Nevertheless, while Shannon’s binary code is constrained to a finite set of syntactic rules, it can produce an infinite number of possibilities. Indeed, the virus makes good use of redundancy to ensure its successful propagation. The polymorphic virus is not simply a chaotic, delinquent noise, but a decidedly redundant form of communication, which uses non-significant code to randomly flip itself over to avoid detection. Viral code thrives on the infinite potential of algorithmic computing; the open, flexible and undecidable grammar of the algorithm allows the virus to spread, infect and evolve. The polymorphic virus can encrypt and decrypt itself so as to avoid anti-viral scanners checking for known viral signatures from the phylum of code known to anti-virus researchers. As such, it is a raw form of Artificial Intelligence, relying on redundant inflexible_ _code programmed to act randomly, ignore or even forget information. Towards a Concept of Rhizomatic Viral Computation Using the concept of the rhizome Deleuze and Guattari (1987 p. 79) challenge the relation between noise and pattern established in information theory. They suggest that redundancy is not merely a ‘limitative condition’, but is key to the transmission of the message itself. Measuring up the efficiency of a highly redundant viral transmission against the ‘splendour’ of the short-term memory of a rhizomatic message, it is possible to draw some conclusions from their intervention. On the surface, the entropic tendency appears to be towards the mixed and the running down of the system’s energy. However, entropy is not the answer since information is not energy; it cannot be conserved, it can be created and destroyed. By definition information is something new, something that adds to existing information (see Campbell, 1982 p. 231), yet efficient information transmission creates invariance in a variant environment. In this sense, the pseudo-randomness of viral code, which pre-programs elements of uncertainty and free action into its propagation, challenges the efforts to make information centralised, structured and ordered. It does this by placing redundant noise within its message pattern. The virus readily ruptures the patterned symmetry of info-space and in terms of information produces something new. Viral transmission is pure information as its objective is to replicate itself throughout info-space; it mutates the space as well as itself. In a rhizomatic mode the anarchic virus is without a central agency; it is a profound rejection of all Generals and power centres. Viral infection, like the rhizomatic network, is made up of ‘finite networks of automata in which communication runs from any neighbour to any other’. Viral spread flows along non-pre-existent ‘channels of communication’ (1987 p. 17). Furthermore, while efforts are made to striate the virus using anti-viral techniques, there is growing evidence that viral information not only wants to be free, but is free to do as it likes. About the Author Tony Sampson is a Senior Lecturer and Course Tutor in Multimedia & Digital Culture, School of Cultural and Innovation Studies at the University of East London, UK Email: t.d.sampson@uel.ac.uk Citation reference for this article MLA Style Sampson, Tony. 
"A Virus in Info-Space" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/07_Sampson.php>. APA Style Sampson, T. (2004, Jul1). A Virus in Info-Space. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/07_Sampson.php>
APA, Harvard, Vancouver, ISO, and other styles
40

Pedersen, Isabel, and Kirsten Ellison. "Startling Starts: Smart Contact Lenses and Technogenesis." M/C Journal 18, no. 5 (October 14, 2015). http://dx.doi.org/10.5204/mcj.1018.

Full text
Abstract:
On 17 January 2013, Wired chose the smart contact lens as one of “7 Massive Ideas That Could Change the World” describing a Google-led research project. Wired explains that the inventor, Dr. Babak Parviz, wants to build a microsystem on a contact lens: “Using radios no wider than a few human hairs, he thinks these lenses can augment reality and incidentally eliminate the need for displays on phones, PCs, and widescreen TVs”. Explained further in other sources, the technology entails an antenna, circuits embedded into a contact lens, GPS, and an LED to project images on the eye, creating a virtual display (Solve for X). Wi-Fi would stream content through a transparent screen over the eye. One patent describes a camera embedded in the lens (Etherington). Another mentions medical sensing, such as glucose monitoring of tears (Goldman). In other words, Google proposes an imagined future when we use contact lenses to search the Internet (and be searched by it), shop online, communicate with friends, work, navigate maps, swipe through Tinder, monitor our health, watch television, and, by that time, probably engage in a host of activities not yet invented. Often referred to as a bionic contact, the smart contact lens would signal a weighty shift in the way we work, socialize, and frame our online identities. However, speculative discussion over this radical shift in personal computing, rarely if ever, includes consideration of how the body, acting as a host to digital information, will manage to assimilate not only significant affordances, but also significant constraints and vulnerabilities. At this point, for most people, the smart contact lens is just an idea. Is a new medium of communication started when it is launched in an advertising campaign? When we Like it on Facebook? If we chat about it during a party amongst friends? Or, do a critical mass of people actually have to be using it to say it has started? One might say that Apple’s Macintosh computer started as a media platform when the world heard about the famous 1984 television advertisement aired during the American NFL Super Bowl of that year. Directed by Ridley Scott, the ad entails an athlete running down a passageway and hurling a hammer at a massive screen depicting cold war style rulers expounding state propaganda. The screen explodes freeing those imprisoned from their concentration camp existence. The direct reference to Orwell’s 1984 serves as a metaphor for IBM in 1984. PC users were made analogous to political prisoners and IBM served to represent the totalitarian government. The Mac became a something that, at the time, challenged IBM, and suggested an alternative use for the desktop computer that had previously been relegated for work rather than life. Not everyone bought a Mac, but the polemical ad fostered the idea that Mac was certainly the start of new expectations, civic identities, value-systems, and personal uses for computers. The smart contact lens is another startling start. News of it shocks us, initiates social media clicks and forwards, and instigates dialogue. But, it also indicates the start of a new media paradigm that is already undergoing popular adoption as it is announced in mainstream news and circulated algorithmically across media channels. Since 2008, news outlets like CNN, The New York Times, The Globe and Mail, Asian International News, United News of India, The Times of London and The Washington Post have carried it, feeding the buzz in circulation that Google intends. 
Attached to the wave of current popular interest generated around any technology claiming to be “wearable,” a smart contact lens also seems surreptitious. We would no longer hold smartphones, but hide all of that digital functionality beneath our eyelids. Its emergence reveals the way commercial models have dramatically changed. The smart contact lens is a futuristic invention imagined for us and about us, but also a sensationalized idea socializing us to a future that includes it. It is also a real device that Parviz (with Google) has been inventing, promoting, and patenting for commercial applications. All of these workings speak to a broader digital culture phenomenon. We argue that the smart contact lens discloses a process of nascent posthuman adaptation, launched in an era that celebrates wearable media as simultaneously astonishing and banal. More specifically, we adopt technology based on our adaptation to it within our personal, political, medial, social, and biological contexts, which also function in a state of flux. N. Katherine Hayles writes that “Contemporary technogenesis, like evolution in general, is not about progress ... rather, contemporary technogenesis is about adaptation, the fit between organisms and their environments, recognizing that both sides of the engagement (human and technologies) are undergoing coordinated transformations” (81). This article attends to the idea that in these early stages, symbolic acts of adaptation signal an emergent medium through rhetorical processes that society both draws from and contributes to. In terms of project scope, this article contributes a focused analysis to a much larger ongoing digital rhetoric project. For the larger project, we conducted a discourse analysis on a collection of international publications concerning Babak Parviz and the invention. We searched for and collected newspaper stories, news broadcasts, YouTube videos from various sources, academic journal publications, inventors’ conference presentations, and advertising, all published between January 2008 and May 2014, generating a corpus of more than 600 relevant artifacts. Shortly after this time, Dr. Parviz, a Professor at the University of Washington, left the secretive GoogleX lab and joined Amazon.com (Mac). For this article we focus specifically on the idea of beginnings or genesis and how digital spaces increasingly serve as the grounds for emergent digital cultural phenomena that are rarely recognized as starting points. We searched through the corpus to identify a few exemplary international mainstream news stories to foreground predominant tropes in support of the claim we make that smart contacts lenses are a startling idea. Content producers deliberately use astonishment as a persuasive device. We characterize the idea of a smart contact lens cast in rhetorical terms in order to reveal how its allure works as a process of adaptation. Rhetorician and philosopher, Kenneth Burke writes that “rhetorical language is inducement to action (or to attitude)” (42). A rhetorical approach is instrumental because it offers a model to explain how we deploy, often times, manipulative meaning as senders and receivers while negotiating highly complex constellations of resources and contexts. Burke’s rhetorical theory can show how messages influence and become influenced by powerful hierarchies in discourse that seem transparent or neutral, ones that seem to fade into the background of our consciousness. 
For this article, we also concentrate on rhetorical devices such as ethos and the inventor’s own appeals through different modes of communication. Ethos was originally proposed by Aristotle to identify speaker credibility as a persuasive tactic. Addressed by scholars of rhetoric for centuries, ethos has been reconfigured by many critical theorists (Burke; Baumlin Ethos; Hyde). Baumlin and Baumlin suggest that “ethos describes an audience’s projection of authority and trustworthiness onto the speaker ... ethos suggests that the ethical appeal to be a radically psychological event situated in the mental processes of the audience – as belonging as much to the audience as to the actual character of a speaker” (Psychology 99). Discussed in the next section, our impression of Parviz and his position as inventor plays a dramatic role in the surfacing of the smart contact lens.

Digital Rhetoric is an “emerging scholarly discipline concerned with the interpretation of computer-generated media as objects of study” (Losh 48). In an era when machine-learning algorithms become the messengers for our messages, which have become commodity items operating across globalized, capitalist networks, digital rhetoric provides a stable model for our approach. It leads us to demonstrate how this emergent medium and invention, the smart contact lens, is born amid new digital genres of speculative communication circulated in the everyday forums we engage on a daily basis.

Smart Contact Lenses, Sensationalism, and Identity

One relevant site for exploration into how an invention gains ethos is writing or video penned or produced by the inventor. An article authored by Parviz in 2009 discusses his invention and the technical advancements that need to be made before the smart contact lens could work. He opens the article using a fictional and sensationalized analogy to encourage the adoption of his invention:

The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection. But why stop there? In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes.

Identity building is made to correlate with smart contact lenses in a manner that frames them as exciting. Coming to terms with them often involves casting us as superhumans, wielding abilities that we do not currently possess. One reason for embellishment is that we do not need digital displays on the eyes, so the motive to use them must always be geared to transcending our assumed present condition as humans and society members. Consequently, imagination is used to justify a shift in human identity along a future trajectory.

This passage above also instantiates a transformation from humanist to posthumanist posturing (i.e. “the cyborg”) in order to incent the adoption of smart contact lenses. It begins with the bold declarative statement, “The human eye is a perceptual powerhouse,” which is a comforting claim about our seemingly human superiority.
Indexing abstract humanist values, Parviz emphasizes skills we already possess, including seeing a plethora of colours, adjusting to light on the fly, and thinking fast, indeed faster than “a high-speed Internet connection”. However, the text goes on to summon the Terminator character and his optic feats from the franchise of films. Filmic cyborg characters fulfill the excitement that posthuman rhetoric often seems to demand, but there is more here than sensationalism. Parviz raises the issue of augmenting human vision using science fiction as his contextualizing vehicle because he lacks another way to imbricate the idea. Most interesting in this passage is the inventor’s query “But why stop there?” to yoke the two claims, one biological (i.e., “The human eye is a perceptual powerhouse”) and one fictional (i.e., Terminator, Vernor Vinge characters). The query suggests: why stop with human superiority when we may as well progress to the next level and embrace a smart contact lens, just as fictional cyborgs do. The non-threatening use of fiction makes the concept seem simultaneously exciting and banal, especially because the inventor follows with a clear description of the necessary scientific engineering in the rest of the article.

This rhetorical act signifies the voice of a technoelite, a heavily funded cohort responding to global capitalist imperatives, armed with a team of technologists who can access technological advancements and imbue comments with an authority that may extend beyond their fields of expertise into areas such as communication studies, sociology, psychology, or medicine. The result is a powerful ethos. The idea behind the smart contact lens maintains a degree of respectability long before a public is invited to use it.

Parviz exhumes much cultural baggage when he brings to life the Terminator character to pitch smart contact lenses. The Terminator series of films has established the “Arnold Schwarzenegger” character as a cultural mainstay. Each new film reinvented him, but ultimately promoted him within a convincing dystopian future across the whole series: The Terminator (Cameron), Terminator 2: Judgment Day (Cameron), Terminator 3: Rise of the Machines (Mostow), Terminator Salvation (McG), and Terminator Genisys (Taylor) (which appeared in 2015 after Parviz’s article). Recently, several writers have addressed how cyborg characters figure significantly in our cultural psyche (Haraway; Bukatman; Leaver). Tama Leaver’s Artificial Culture explores the way popular, contemporary, cinematic, science fiction depictions of embodied Artificial Intelligence, such as the Terminator cyborgs, “can act as a matrix which, rather than separating or demarcating minds and bodies or humanity and the digital, reinforce the symbiotic connection between people, bodies, and technologies” (31). Pointing out the violent and ultimately technophobic motive of The Terminator films, Leaver reads across them to conclude nevertheless that science fiction “proves an extremely fertile context in which to address the significance of representations of Artificial Intelligence” (63).

Posthumanism and Technogenesis

One reason this invention enters the public’s consciousness is its announcement alongside a host of other technologies, which seem like parts of a whole. We argue that this constant grouping of technologies in the news is one process indicative of technogenesis.
For example, City A.M., London’s largest free commuter daily newspaper, reports on the future of business technology as a hodgepodge of what ifs:

As Facebook turns ten, and with Bill Gates stepping down as Microsoft chairman, it feels like something is drawing to an end. But if so, it is only the end of the technological revolution’s beginning ... Try to look ahead ten years from now and the future is dark. Not because it is bleak, but because the sheer profusion of potential is blinding. Smartphones are set to outnumber PCs within months. After just a few more years, there are likely to be 3bn in use across the planet. In ten years, who knows – wearables? smart contact lenses? implants? And that’s just the start. The Internet of Things is projected to be a $300bn (£183bn) industry by 2020. (Sidwell)

This reporting is a common means to frame the commodification of technology in globalized business news that seeks circulation as much as it does readership. But as a text, it also posits how individuals frame the future and their participation with it (Pedersen). Smart contacts appear to move along this exciting, unstoppable trajectory where the “potential is blinding”. The motive is to excite and scare. However, simultaneously, the effect is predictable. We are quite accustomed to this march of innovations that appears every day in the morning paper. We are asked to adapt rather than question; consequently, we never separate the parts from the whole (e.g., “wearables? smart contact lenses? implants”) in order to look at them critically.

In coming to terms with Cary Wolfe’s definition of posthumanism, Greg Pollock writes that posthumanism is the questioning that goes on “when we can no longer rely on ‘the human’ as an autonomous, rational being who provides an Archimedean point for knowing about the world (in contrast to “humanism,” which uses such a figure to ground further claims)” (208). With similar intent, N. Katherine Hayles, in formulating the term technogenesis, suggests that we are not really progressing to another level of autonomous human existence when we adopt media; we are, in effect, adapting to media, and media are also in a process of adapting to us. She writes:

As digital media, including networked and programmable desktop stations, mobile devices, and other computational media embedded in the environment, become more pervasive, they push us in the direction of faster communication, more intense and varied information streams, more integration of humans and intelligent machines, and more interactions of language with code. These environmental changes have significant neurological consequences, many of which are now becoming evident in young people and to a lesser degree in almost everyone who interacts with digital media on a regular basis. (11)

Following Hayles, three actions or traits characterize adaptation in a manner germane to the technogenesis of media like smart contact lenses. The first is “media embedded in the environment”. The trait of embedding technology, in the form of sensors and chips, into external spaces evokes the foundations of the Internet of Things (IoT). Extensive data-gathering sensors, wireless technologies, and mobile and wearable components integrated with the Internet all contribute to the IoT. Emerging from cloud computing infrastructures and data models, the IoT, in its most extreme form, involves a scenario whereby people, places, animals, and objects are given unique “embedded” identifiers so that they can embark on constant data transfer over a network.
In a sense, the lenses are adapted artifacts responding to a world that expects ubiquitous networked access for both humans and machines. Smart contact lenses will essentially be attached to the user, who must adapt to these dynamic and heavily mediated contexts.

Following closely on the first, the second point Hayles makes is “integration of humans and intelligent machines”. The camera embedded in the smart contact lens, really an adapted smartphone camera, turns the eye itself into an image capture device. Incorporated under the eyelids, smart contact lenses signify integration in complex ways. Human-machine amalgamation follows biological, cognitive, and social contexts.

Third, Hayles points to “more interactions of language with code.” We assert that with smart contact lenses, code will eventually govern interaction between countless agents in accordance with other smart devices, such as: (1) exchanges of code between people and external nonhuman networks of actors through machine algorithms and massive amalgamations of big data distributed on the Internet; (2) exchanges of code amongst people, human social actors in direct communication with each other over social media; and (3) exchanges of coding and decoding between people and their own biological processes (e.g., monitoring breathing, consuming nutrients, translating brainwaves) and phenomenological (but no less material) practices (e.g., remembering, grieving, or celebrating). The allure of the smart contact lens is the quietly pressing proposition that communication models such as these will be radically transformed because they will have to be adapted for use with the human eye as the method of input and output of information.

Focusing on genetic engineering, Eugene Thacker fittingly defines biomedia as “entail[ing] the informatic recontextualization of biological components and processes, for ends that may be medical or nonmedical (economic, technical) and with effects that are as much cultural, social, and political as they are scientific” (123). He specifies, “biomedia are not computers that simply work on or manipulate biological compounds. Rather, the aim is to provide the right conditions, such that biological life is able to demonstrate or express itself in a particular way” (123). Smart contact lenses sit on the cusp of emergence as a biomedia device that will enable us to decode bodily processes in significant new ways. The bold, technical discourse that announces it, however, has not yet begun to attend to the seemingly dramatic “cultural, social, and political” effects percolating under the surface. Through technogenesis, media acclimatize rapidly to change without establishing a logic of the consequences or a design plan for emergence.

Following from this, we should mention the issues that this invention risks, such as the intrusion of surveillance algorithms deployed by corporations, governments, and other hegemonic entities. If smart contact lenses are biomedia devices inspiring us to decode bodily processes and communicate that data for analysis, for ourselves and others in our trust (e.g., doctors, family, friends), we also need to be wary of them. David Lyon warns:

Surveillance has spilled out of its old nation-state containers to become a feature of everyday life, at work, at home, at play, on the move. So far from the single all-seeing eye of Big Brother, myriad agencies now trace and track mundane activities for a plethora of purposes.
Abstract data, now including video, biometric, and genetic as well as computerized administrative files, are manipulated to produce profiles and risk categories in a liquid, networked system. The point is to plan, predict, and prevent by classifying and assessing those profiles and risks. (13)

In simple terms, the smart contact lens might disclose the most intimate information we possess and leave us vulnerable to profiling, tracking, and theft. Irma van der Ploeg presupposed this predicament when she wrote: “The capacity of certain technologies to change the boundary, not just between what is public and private information but, on top of that, between what is inside and outside the human body, appears to leave our normative concepts wanting” (71). The smart contact lens, with its implied motive to encode and disclose internal bodily information, needs consideration on many levels.

Conclusion

The smart contact lens has made a digital beginning. We accept it through the mass consumption of the idea, which acts as a rhetorical motivator for media adoption, taking place long before the device materializes in the marketplace. This occurrence may also be a sign of our “posthuman predicament” (Braidotti). We have argued that the smart contact lens concept reveals our posthuman adaptation to media rather than our reasoned acceptance or agreement with it as a logical proposition. By the time we actually squabble over the price, express fears for our privacy, and buy them, smart contact lenses will long be part of our everyday culture.

References

Baumlin, James S., and Tita F. Baumlin. “On the Psychology of the Pisteis: Mapping the Terrains of Mind and Rhetoric.” Ethos: New Essays in Rhetorical and Critical Theory. Eds. James S. Baumlin and Tita F. Baumlin. Dallas: Southern Methodist University Press, 1994. 91-112.
Baumlin, James S., and Tita F. Baumlin, eds. Ethos: New Essays in Rhetorical and Critical Theory. Dallas: Southern Methodist University Press, 1994.
Bilton, Nick. “A Rose-Colored View May Come Standard.” The New York Times, 4 Apr. 2012.
Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013.
Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science Fiction. Durham: Duke University Press, 1993.
Burke, Kenneth. A Rhetoric of Motives. Berkeley: University of California Press, 1950.
Cameron, James, dir. The Terminator. Orion Pictures, 1984. DVD.
Cameron, James, dir. Terminator 2: Judgment Day. Artisan Home Entertainment, 2003. DVD.
Etherington, Darrell. “Google Patents Tiny Cameras Embedded in Contact Lenses.” TechCrunch, 14 Apr. 2014.
Goldman, David. “Google to Make Smart Contact Lenses.” CNN Money, 17 Jan. 2014.
Haraway, Donna. Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association Books, 1991.
Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press, 2012.
Hyde, Michael. The Ethos of Rhetoric. Columbia: University of South Carolina Press, 2004.
Leaver, Tama. Artificial Culture: Identity, Technology, and Bodies. New York: Routledge, 2012.
Losh, Elizabeth. Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes. Boston: MIT Press, 2009.
Lyon, David, ed. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. New York: Routledge, 2003.
Mac, Ryan. “Amazon Lures Google Glass Creator Following Phone Launch.” Forbes.com, 14 July 2014.
McG, dir. Terminator Salvation. Warner Brothers, 2009. DVD.
Mostow, Jonathan, dir. Terminator 3: Rise of the Machines. Warner Brothers, 2003. DVD.
Parviz, Babak A. “Augmented Reality in a Contact Lens.” IEEE Spectrum, 1 Sep. 2009.
Pedersen, Isabel. Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media. Anderson, South Carolina: Parlor Press, 2013.
Pollock, Greg. “What Is Posthumanism by Cary Wolfe (2009).” Rev. of What Is Posthumanism?, by Cary Wolfe. Journal for Critical Animal Studies 9.1/2 (2011): 235-241.
Sidwell, Marc. “The Long View: Bill Gates Is Gone and the Dot-com Era Is Over: It’s Only the End of the Beginning.” City A.M., 7 Feb. 2014.
“Solve for X: Babak Parviz on Building Microsystems on the Eye.” YouTube, 7 Feb. 2012.
Taylor, Alan, dir. Terminator Genisys. Paramount Pictures, 2015. DVD.
Thacker, Eugene. “Biomedia.” Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark Hansen. Chicago: University of Chicago Press, 2010. 117-130.
Van der Ploeg, Irma. “Biometrics and the Body as Information.” Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. Ed. David Lyon. New York: Routledge, 2003. 57-73.
Wired Staff. “7 Massive Ideas That Could Change the World.” Wired.com, 17 Jan. 2013.