To view other types of publications on this topic, follow the link: AUTOMATED CONTEXTS MANAGEMENT.

Journal articles on the topic "AUTOMATED CONTEXTS MANAGEMENT"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "AUTOMATED CONTEXTS MANAGEMENT".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such data are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Grefen, Paul. "Networked Business Process Management." International Journal of IT/Business Alignment and Governance 4, no. 2 (July 2013): 54–82. http://dx.doi.org/10.4018/ijitbag.2013070104.

Abstract:
In the current economy, a shift can be seen from stand-alone business organizations to networks of tightly collaborating business organizations. To allow this tight collaboration, business process management in these collaborative networks is becoming increasingly important. This paper discusses automated support for this networked business process management: automated means to manage business processes that span multiple autonomous organizations. The author starts this paper with a treatment of intra- and inter-organizational business processes to provide a conceptual background for business process management in business networks. The author describes a number of research approaches in this area, including the context of these approaches and the architectures of the automated systems proposed by them. The approaches are described from early developments in the field relying on dedicated technology to current designs based on standardized technology in a service-oriented context. The paper thereby provides an overview of developments in the area of inter-organizational business process management in the spectrum from simple, static business networks to complex, dynamic networks. The author observes that the described BPM research efforts move from pushing new BPM technology into application domains to using BPM to realize business-IT alignment in complex application contexts.
2

Benabbou, Amel, and Safia Nait-Bahloul. "Automated Context Formalization for Context-aware Specification Approach." International Journal of Information System Modeling and Design 9, no. 3 (July 2018): 23–47. http://dx.doi.org/10.4018/ijismd.2018070102.

Abstract:
Requirement specification is a key element in model-checking verification. The context-aware approach is an effective technique for automating the specification of requirements under specific environmental conditions. Most existing approaches offer no support for this crucial task and rely mainly on the considerable effort and expertise of engineers. A domain-specific language, called CDL, has been proposed to facilitate requirement specification by formalizing contexts. However, feedback has shown that manually writing CDL is hard, error-prone and difficult to manage for complex systems. In this article, the authors propose an approach to automatically generate CDL models using interaction overview diagrams (IODs) elaborated through transformation chains from textual use cases. They offer an intermediate formalism between informal use-case scenarios and CDL models, allowing engineers to work with familiar artifacts. Thanks to such a high-level formalism, the gap between informal and formal requirements is reduced; consequently, requirement specification is facilitated.
3

Grefen, Paul, and Oktay Turetken. "Advanced Business Process Management in Networked E-Business Scenarios." International Journal of E-Business Research 13, no. 4 (October 2017): 70–104. http://dx.doi.org/10.4018/ijebr.2017100105.

Abstract:
In the modern economy, we see a shift towards networked business scenarios. In many contemporary situations, the operation of multiple organizations is tightly coupled in collaborative business networks. To allow this tightly coupled collaboration, business process management (BPM) in these collaborative networks is becoming increasingly important. We discuss automated support for this networked BPM: automated means to manage business processes that span multiple autonomous organizations - thereby combining aspects of process management and e-business. We first provide a conceptual background for networked BPM. We describe a number of research approaches in this area, ranging from early developments to contemporary designs in a service-oriented context. This provides an overview of developments in which we observe several major trends. Firstly, we see a development from support for static business processes to support for highly dynamic processes. Secondly, we see how approaches move from addressing simple business collaboration networks to addressing complex networks. Thirdly, we find a move from the use of dedicated information technology to the use of standard technology. Finally, we observe that the BPM research efforts move through time from pushing new BPM technology into application domains to using BPM to realize business-IT alignment in application contexts.
4

Langer, Markus, Cornelius J. König, Diana Ruth-Pelipez Sanchez, and Sören Samadi. "Highly automated interviews: applicant reactions and the organizational context." Journal of Managerial Psychology 35, no. 4 (December 31, 2019): 301–14. http://dx.doi.org/10.1108/jmp-09-2018-0402.

Abstract:
Purpose: The technological evolution of job interviews continues as highly automated interviews emerge as alternative approaches. Initial evidence shows that applicants react negatively to such interviews. Additionally, there is emerging evidence that contextual influences matter when investigating applicant reactions to highly automated interviews. However, previous research has ignored higher-level organizational contexts (i.e. which kind of organization uses the selection procedure) and individual differences (e.g. work experience) regarding applicant reactions. The purpose of this paper is to investigate applicant reactions to highly automated interviews for students and employees and the role of the organizational context when using such interviews. Design/methodology/approach: In a 2 × 2 online study, participants read organizational descriptions of either an innovative or an established organization and watched a video displaying a highly automated or a videoconference interview. Afterwards, participants responded to applicant reaction items. Findings: Participants (n = 148) perceived highly automated interviews as more consistent but as conveying less social presence. The negative effect on social presence diminished organizational attractiveness. The organizational context did not affect applicant reactions to the interview approaches, whereas differences between students and employees emerged but only affected privacy concerns regarding the interview approaches. Research limitations/implications: The organizational context seems to have negligible effects on applicant reactions to technology-enhanced interviews. There were only small differences between students and employees regarding applicant reactions. Practical implications: In a tense labor market, hiring managers need to be aware of a trade-off between efficiency and applicant reactions regarding technology-enhanced interviews. Originality/value: This study investigates high-level contextual influences and individual differences regarding applicant reactions to highly automated interviews.
5

Ye, Jun, and Guoxin Liu. "Analysis on the Development of Automation and Intelligence in China’s Manufacturing Industry—Taking R & D Collaboration among Automobile Enterprises." Mobile Information Systems 2022 (October 11, 2022): 1–14. http://dx.doi.org/10.1155/2022/6811605.

Abstract:
With the development of intelligence for the automated automobile industry in China, the traditional R&D mode of enterprises has evolved from integrated internal innovation within a single enterprise to collaborative innovation among enterprises, which pushes enterprises to break through organizational boundaries to seek the right partners and cooperation. However, the R&D environment among business partners and the mismatch between R&D and management have greatly affected the development of intelligence for automated industries. Based on synergetics, and from the perspective of collaborative R&D among automobile companies, this paper takes the collaborative R&D between the Renault-Nissan Alliance and Dongfeng Motor Group as the research object, conducting quantitative analysis using regression models and structural equation models with 423 valid questionnaires collected in three months (from October 2017 to May 2018), and attempting to reveal the situational context in the distributed innovation network as well as how the dynamic collaborative behaviors among R&D enterprises act on “collaborative R&D performance.” The results show that, first, the four “collaborative contexts”–strategic context, cultural context, institutional context, and network context–have a significant positive correlation with the level of collaborative R&D; second, the three “collaborative behaviors”—knowledge sharing, information sharing, and specific asset investment—have a significant positive correlation with the level of knowledge growth; and third, the level of collaborative R&D and knowledge growth have a significant positive correlation with R&D performance. Thus, it is revealed that there are not only situational contexts but also dynamic collaborative behaviors among the R&D and innovation activities of the core enterprises. This creates a new perspective for research on the development of intelligence for the automated automobile industry in China among automobile companies.
6

Derakhshanfar, Hossein, J. Jorge Ochoa, Konstantinos Kirytopoulos, Wolfgang Mayer, and Vivian W. Y. Tam. "Construction delay risk taxonomy, associations and regional contexts." Engineering, Construction and Architectural Management 26, no. 10 (November 18, 2019): 2364–88. http://dx.doi.org/10.1108/ecam-07-2018-0307.

Abstract:
Purpose: The purpose of this paper is to systematically develop a delay risk terminology and taxonomy. This research also explores two external and internal dimensions of the taxonomy to determine how much the taxonomy as a whole, or combinations of its elements, are generalisable. Design/methodology/approach: Using mixed methods research, this systematic literature review incorporated data from 46 articles to establish delay risk terminology and taxonomy. Qualitative data on the top 10 delay risks identified in each article were coded based on grounded theory and constant comparative analysis using a three-stage coding approach. Word frequency analysis and cross-tabulation were used to develop the terminology and taxonomy. Association rules within the taxonomy were also explored to define risk paths and to unmask associations among the risks. Findings: In total, 26 delay risks were identified and grouped into ten categories to form the risk breakdown structure. The universal delay risks, and the other delay risks that depend more or less on the project location, were determined. It is also shown that delays connected to equipment, sub-contractors and design drawings are highly connected to project planning, finance and slow owner decision-making, respectively. Originality/value: The established terminology and taxonomy may be used in manual or automated risk management systems as a baseline for delay risk identification, management and communication. In addition, the association rules assist the risk management process by enabling mitigation of a combination of risks together.
7

Azzi, Anna, Daria Battini, Maurizio Faccio, Alessandro Persona, and Fabio Sgarbossa. "Inventory holding costs measurement: a multi-case study." International Journal of Logistics Management 25, no. 1 (May 6, 2014): 109–32. http://dx.doi.org/10.1108/ijlm-01-2012-0004.

Abstract:
Purpose – Logisticians in the worldwide industry are frequently faced with the problem of measuring the total cost of holding inventories with simple and easy-to-use methodologies. The purpose of this paper is to look at the problem, and in particular illustrate the inventory holding cost rate computation, when different kind of warehousing systems are applied. Design/methodology/approach – A multiple case study analysis is here developed and supported by a methodological framework directly derived from the working group discussions and brainstorming activities. Two different field of application are considered: one related to five companies with manual warehousing systems operating with traditional fork lift trucks; the other is among five companies operating with automated storage/retrieval systems (AS/RS) to store inventories. Findings – The multi-case study helps to understand how the holding cost parameter is currently computed by industrial managers and how much the difference between manual and automated/automatic warehousing systems impacts on the inventory cost structure definition. The insights from the ten case studies provide evidence that the kind of storage system adopted inside the factory can impact on the holding cost rate computation and permit to derive important considerations. Practical implications – The final aim of this work is to help industrial engineers and logisticians in correctly understanding the inventory costs involved in their systems and their cost structure. In addition, the multi-case analysis leads to considerations, to be applied in different industrial contexts. As other industrial applications are identified, they may be analyzed by using the presented methodology, and with aid from the data from this paper. Originality/value – The relevance of this work is to help industrial engineers and logisticians in understanding correctly the inventory costs involved in their logistics systems and their cost structure. In addition, the multi-case analysis lead to interesting final considerations, easily to be applied in different industrial contexts. As other industrial applications are identified, they may be analyzed by using the methodology and extrapolating the data from this paper.
8

Dzieduszyński, Tomasz. "Convolutional Neural Networks as Context-Scraping Tools in Architecture and Urban Planning." BUILDER 296, no. 3 (February 25, 2022): 79–81. http://dx.doi.org/10.5604/01.3001.0015.7566.

Abstract:
“Data scraping” is a term usually used in Web browsing to refer to the automated process of data extraction from websites or interfaces designed for human use. Currently, nearly two thirds of Net traffic are generated by bots rather than humans. Similarly, Deep Convolutional Neural Networks (CNNs) can be used as artificial agents scraping cities for relevant contexts. The convolutional filters, which distinguish CNNs from Fully-connected Neural Networks (FNNs), make them very promising candidates for feature detection in the abundant and easily accessible smart-city data consisting of GIS and BIM models, as well as satellite imagery and sensory outputs. These new, convolutional city users could roam the abstract, digitized spaces of our cities to provide insight into the architectural and urban contexts relevant to design and management processes. This article presents the results of a review of state-of-the-art applications of Convolutional Neural Networks as architectural “city scrapers” and proposes a new, experimental framework for the utilization of CNNs in context scraping at the urban scale.
9

Michellod, Julien Lancelot, Declan Kuch, Christian Winzer, Martin K. Patel, and Selin Yilmaz. "Building Social License for Automated Demand-Side Management—Case Study Research in the Swiss Residential Sector." Energies 15, no. 20 (October 20, 2022): 7759. http://dx.doi.org/10.3390/en15207759.

Abstract:
Demand-side management (DSM) is increasingly needed for answering electricity flexibility needs in the upcoming transformation of energy systems. Use of automation leads to better efficiency, but its acceptance is problematic since it is linked with several issues, such as privacy or loss of control. Different approaches investigate what should be done for building community support for automation for the purpose of DSM, but it is only recently that literature has shown interest in the application of social license as a concept merging several issues traditionally treated separately. The social license concept emerged in the mining sector before being adopted for other problematic resources. It serves to identify different levels of community support for a project/company as well as various factors that influence it, such as economic and socio-political legitimacy and interactional trust. This paper investigates, through empirical evidence from eight case studies, what has been done in different contexts to build trust and legitimacy for an automated DSM project. Our findings suggest that patterns exist in respect of benefits, risks and rationale presented, the retention of control, information gathered, and inclusion and that these factors differ according to appliances/devices automated, operators of automation, and end-users targeted.
10

Bonnet, Pierre, Christophe Botella, François Munoz, Pascal Monestiez, Mathias Chouet, Hervé Goëau, and Alexis Joly. "Automated Identification of Citizen Science Observations for Ecological Studies." Biodiversity Information Science and Standards 2 (May 17, 2018): e25450. http://dx.doi.org/10.3897/biss.2.25450.

Abstract:
Pl@ntNet is an international initiative which was the first to attempt to combine the force of citizen networks with automated identification tools based on machine learning technologies (Joly et al. 2014). Launched in 2009 by a consortium involving research institutes in computer science, ecology and agriculture, it was the starting point of several scientific and technological productions (Goëau et al. 2012) which finally led to the first release of the Pl@ntNet app (iOS in February 2013 (Goëau et al. 2013) and Android (Goëau et al. 2014) the following year). Initially based on 800 plant species, the app was progressively enlarged to thousands of species of the European, North American and tropical regions. Nowadays, the app covers more than 15 000 species and is adapted to 22 regional and thematic contexts, such as the Andean plant species, the wild salads of southern Europe, the indigenous tree species of South Africa, the flora of the Indian Ocean Islands, the New Caledonian flora, etc. The app is translated into 11 languages and is being used by more than 3 million end-users all over the world, mostly in Europe and the US. The analysis of the data collected by Pl@ntNet users, which represent more than 24 million observations up to now, has a high potential for different ecological and management questions. A recent work (Botella et al. 2018), in particular, showed that the stream of Pl@ntNet observations could allow a fine-grained and regular monitoring of some species of interest, such as invasive ones. However, this requires cautious consideration of the contexts in which the application is used. In this talk, we will synthesize the results of this study and present another one related to phenology. Indeed, as the phenological stage of the observed plants is also recorded, these data offer a rich and unique material for phenological studies at large geographical or taxonomical scales. We will share preliminary results obtained on some important pantropical species (such as Melia azedarach L. and Lantana camara L.), for which we have detected significant intercontinental phenological patterns among the project data.
11

Knight, Simon, and Karen Littleton. "Dialogue as Data in Learning Analytics for Productive Educational Dialogue." Journal of Learning Analytics 2, no. 3 (February 18, 2016): 111–43. http://dx.doi.org/10.18608/jla.2015.23.7.

Abstract:
Accounts of the nature and role of productive dialogue in fostering educational outcomes are now well established in the learning sciences and are underpinned by bodies of strong empirical research and theorising. Allied to this, there has been longstanding interest in fostering computer-supported collaborative learning (CSCL) in support of such dialogue. Learning analytic environments such as massive open online courses (MOOCs) and online learning environments (such as virtual learning environments, VLEs, and learning management systems, LMSs) provide ripe potential spaces for learning dialogue. In prior research, preliminary steps have been taken to detect occurrences of productive dialogue automatically through the use of automated analysis techniques. Such advances have the potential to foster effective dialogue through the use of learning analytic techniques that scaffold, give feedback on, and provide pedagogic contexts promoting such dialogue. However, the translation of learning science research to the online context is complex, requiring the operationalization of constructs theorized in different contexts (often face to face) and based on different data-sets and structures (often spoken dialogue). In this paper we explore what could constitute the effective analysis of this kind of productive dialogue, arguing that it requires consideration of three key facets of the dialogue: features indicative of productive dialogue; the unit of segmentation; and the interplay of features and segmentation with the temporal underpinning of learning contexts. We begin by outlining what we mean by ‘productive educational dialogue’, before going on to discuss prior work that has been undertaken to date on its manual and automated analysis. We then highlight ongoing challenges for the development of computational analytic approaches to such data, discussing the representation of features, segments, and temporality in computational modelling. The paper thus foregrounds, to both learning-science-oriented and computationally-oriented researchers, key considerations in respect of the analysis of dialogue data in emerging learning analytics environments. The paper provides a novel, conceptually driven stance on the contemporary analytic challenges faced in the treatment of dialogue as a form of data across online and offline sites of learning.
12

K A, Shirien, Neethu George, and Surekha Mariam Varghese. "Descriptive Answer Script Grading System using CNN-BiLSTM Network." International Journal of Recent Technology and Engineering 9, no. 5 (January 30, 2021): 139–44. http://dx.doi.org/10.35940/ijrte.e5212.019521.

Abstract:
A descriptive answer-script assessment and rating program is an automated framework to evaluate answer scripts correctly. There are several classification schemes in which a piece of text is evaluated on the basis of spelling, semantics and meaning, but many of them are not successful. Models available for rating answer scripts include simple Long Short-Term Memory (LSTM) and deep LSTM networks. In addition, a Convolutional Neural Network combined with a bidirectional LSTM is considered here to refine the result. The model uses convolutional neural networks and bidirectional LSTM networks to learn local information about words and to capture long-term dependency information of contexts, implemented on the TensorFlow and Keras deep learning frameworks. The embedded semantic representation of texts can be used for computing semantic similarities between pieces of text and for grading them based on the similarity score. The experiment used methods for data optimization, such as data normalization and dropout, and tested the model on the Automated Student Evaluation Short Response Scoring dataset, a commonly used public dataset. Compared with existing systems, the proposed model achieves state-of-the-art performance and better accuracy on the test dataset.
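The abstract does not give implementation details, but the kind of CNN-BiLSTM scorer it describes can be sketched with Keras. Everything below (vocabulary size, sequence length, layer widths, the regression head) is an illustrative assumption, not the authors' configuration:

```python
# Illustrative sketch only: a CNN + BiLSTM regressor for scoring answer texts,
# in the spirit of the architecture described above. All sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 300  # assumed preprocessing parameters

def build_cnn_bilstm_scorer() -> tf.keras.Model:
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, 128)(inputs)                     # word embeddings
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)    # local n-gram features
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)                      # long-range context, both directions
    x = layers.Dropout(0.5)(x)                                        # dropout, as mentioned in the abstract
    outputs = layers.Dense(1, activation="linear")(x)                 # predicted grade / similarity score
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_cnn_bilstm_scorer()
model.summary()
```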
13

Vos, Tanja E. J., Peter M. Kruse, Nelly Condori-Fernández, Sebastian Bauersfeld, and Joachim Wegener. "TESTAR." International Journal of Information System Modeling and Design 6, no. 3 (July 2015): 46–83. http://dx.doi.org/10.4018/ijismd.2015070103.

Abstract:
Testing applications with a graphical user interface (GUI) is an important, though challenging and time-consuming, task. The state of the art in industry is still capture-and-replay tools, which may simplify the recording and execution of input sequences but do not support the tester in finding fault-sensitive test cases and lead to a huge overhead on maintenance of the test cases when the GUI changes. In earlier works the authors presented the TESTAR tool, an automated approach to testing applications at the GUI level whose objective is to solve part of the maintenance problem by automatically generating test cases based on a structure that is automatically derived from the GUI. In this paper they report on the experiences obtained when transferring TESTAR into three different industrial contexts, with decreasing involvement of the TESTAR developers and increasing participation of the companies in deploying and using TESTAR during testing. The studies were successful in that they achieved practical impact and research impact, give insight into ways of carrying out innovation transfer, and define a possible strategy for taking automated testing tools to the market.
14

O’Leary, Niall, Lorenzo Leso, Frank Buckley, Jonathon Kenneally, Diarmuid McSweeney, and Laurence Shalloo. "Validation of an Automated Body Condition Scoring System Using 3D Imaging." Agriculture 10, no. 6 (June 26, 2020): 246. http://dx.doi.org/10.3390/agriculture10060246.

Abstract:
Body condition scores (BCS) measure a cow’s fat reserves and are important for management and research. Manual BCS assessment is subjective, time-consuming, and requires trained personnel. The BodyMat F (BMF, Ingenera SA, Cureglia, Switzerland) is an automated body condition scoring system using a 3D sensor to estimate BCS. This study assesses the BMF. One hundred and three Holstein Friesian cows were assessed by the BMF and two assessors throughout a lactation. The BMF output is on the 0–5 scale commonly used in France. We develop and report the first equation to convert these scores to the 1–5 scale used by the assessors in Ireland in this study ((0–5 scale × 0.38) + 1.67 → 1–5 scale). Inter-assessor agreement, as measured by Lin’s concordance correlation, was 0.67. BMF agreement with the mean of the two assessors was the same as between assessors (0.67). However, agreement was lower for extreme values, particularly in over-conditioned cows, where the BMF underestimated BCS relative to the mean of the two human observers. The BMF outperformed human assessors in terms of reproducibility and thus is likely to be especially useful in research contexts. This is the second independent validation of a commercially marketed body condition scoring system as far as the authors are aware. Comparing the results here with the published evaluation of the other system, we conclude that the BMF performed as well or better.
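For readers who want to apply the reported conversion, the helper below simply restates the equation quoted in the abstract, (0–5 scale × 0.38) + 1.67; the example input value is ours, not taken from the paper:

```python
def bmf_to_irish_scale(bcs_0_to_5: float) -> float:
    """Convert a BodyMat F score on the French 0-5 scale to the Irish 1-5 scale,
    using the conversion equation quoted in the abstract."""
    return bcs_0_to_5 * 0.38 + 1.67

# Illustrative example only: a BMF output of 3.0 maps to 0.38 * 3.0 + 1.67 = 2.81
print(bmf_to_irish_scale(3.0))  # 2.81
```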
15

Zaniolo, Marta, Matteo Giuliani, Andrea Francesco Castelletti, and Manuel Pulido-Velazquez. "Automatic design of basin-specific drought indexes for highly regulated water systems." Hydrology and Earth System Sciences 22, no. 4 (April 20, 2018): 2409–24. http://dx.doi.org/10.5194/hess-22-2409-2018.

Abstract:
Abstract. Socio-economic costs of drought are progressively increasing worldwide due to undergoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc “state index” is used for triggering drought management measures. The state index was constructed empirically with a trial-and-error process begun in the 1980s and finalized in 2007, guided by the experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resultant FRIDA index outperforms the official State Index in terms of accuracy in reproducing the target variable and cardinality of the selected inputs set.
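The W-QEISS step described above can be pictured as keeping only the Pareto-efficient variable subsets when each candidate is scored on wrapper accuracy and on the number of variables it uses. The toy sketch below shows only that dominance filter; it is not the authors' evolutionary search, and the candidate variables and the accuracy oracle are invented stand-ins:

```python
# Toy illustration of Pareto-efficient subset filtering (maximize accuracy,
# minimize number of variables). This is NOT the W-QEISS evolutionary algorithm;
# it only shows the dominance test used to keep efficient candidates.
from itertools import combinations
from typing import Callable, Dict, FrozenSet, List, Tuple

def pareto_efficient(scored: Dict[FrozenSet[str], float]) -> List[Tuple[FrozenSet[str], float]]:
    """Keep subsets not dominated by another subset with >= accuracy and <= size."""
    items = list(scored.items())
    efficient = []
    for subset, acc in items:
        dominated = any(
            other_acc >= acc and len(other) <= len(subset)
            and (other_acc > acc or len(other) < len(subset))
            for other, other_acc in items
        )
        if not dominated:
            efficient.append((subset, acc))
    return efficient

def enumerate_subsets(variables: List[str], score: Callable[[FrozenSet[str]], float],
                      max_size: int = 3) -> Dict[FrozenSet[str], float]:
    """Exhaustively score small subsets (feasible only for toy problems)."""
    return {frozenset(c): score(frozenset(c))
            for k in range(1, max_size + 1)
            for c in combinations(variables, k)}

if __name__ == "__main__":
    # Hypothetical hydro-meteorological candidates and a fake accuracy oracle.
    candidates = ["precip", "inflow", "storage", "snowpack"]

    def fake_accuracy(s: FrozenSet[str]) -> float:
        # Invented stand-in for a wrapper model's cross-validated accuracy.
        return 0.5 + 0.1 * len(s) + (0.15 if "storage" in s else 0.0)

    for subset, acc in pareto_efficient(enumerate_subsets(candidates, fake_accuracy)):
        print(sorted(subset), round(acc, 2))
```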
16

Rafique, Muhammad Zeeshan, Mustafa Haider, Abdul Raheem, Mohd Nizam Ab Rahman, and Muhammad Saad Amjad. "Essential Elements for Radio Frequency Identification (RFID) adoption for Industry 4.0 Smart Manufacturing in Context of Technology-Organization-Environment (TOE) Framework – A Review." Jurnal Kejuruteraan 34, no. 1 (January 30, 2022): 1–10. http://dx.doi.org/10.17576/jkukm-2021-34(1)-01.

Abstract:
Automatic identification and data collection provides an ideal basis for Industry 4.0 Smart Manufacturing. The manufacturing sectors, spanning a wide spectrum of the physical and digital worlds, are functioning in an extremely challenging environment. To optimize production efficiency, the incorporation of automated data collection technologies such as Bar Code and Radio Frequency Identification (RFID) is essential. Both technologies overlap considerably in terms of industrial applications, yet no study reviews the existing literature in this regard. Therefore, to address this gap, a systematic literature review has been conducted in which the technologies are studied and compared, followed by a detailed discussion under the various contexts of the Technology-Organization-Environment (TOE) framework. Both technologies have been employed in various manufacturing domains such as lean manufacturing, inventory management and production planning; however, RFID technology carries technological superiority over Bar Code technology. Systems utilizing RFID are highly reliable, highly capable and perform excellently in automated settings. However, issues such as high capital costs and an increased level of technical complexity are among the dilemmas in adopting RFID-based systems. In addition, the implementation of RFID systems is complemented by certain essential elements of the TOE framework, which can help to elevate an organization's competitiveness and efficiency in the tracking and identification of assets and inventory.
17

Snider, Eric J., David Berard, Saul J. Vega, Evan Ross, Zechariah J. Knowlton, Guy Avital, and Emily N. Boice. "Hardware-in-Loop Comparison of Physiological Closed-Loop Controllers for the Autonomous Management of Hypotension." Bioengineering 9, no. 9 (August 27, 2022): 420. http://dx.doi.org/10.3390/bioengineering9090420.

Abstract:
Trauma and hemorrhage are leading causes of death and disability worldwide in both civilian and military contexts. The delivery of life-saving goal-directed fluid resuscitation can be difficult to provide in resource-constrained settings, such as in forward military positions or mass-casualty scenarios. Automated solutions for fluid resuscitation could bridge resource gaps in these austere settings. While multiple physiological closed-loop controllers for the management of hypotension have been proposed, to date there is no consensus on controller design. Here, we compare the performance of four controller types (decision table, single-input fuzzy logic, dual-input fuzzy logic, and proportional–integral–derivative) using a previously developed hardware-in-loop test platform where a range of hemorrhage scenarios can be programmed. Controllers were compared using traditional controller performance metrics, but conclusions were difficult to draw due to inconsistencies across the metrics. Instead, we propose three aggregate metrics that reflect the target intensity, stability, and resource efficiency of a controller, with the goal of selecting controllers for further development. These aggregate metrics identify a dual-input, fuzzy-logic-based controller as the preferred combination of intensity, stability, and resource efficiency within this use case. Based on these results, the aggressively tuned dual-input fuzzy logic controller should be considered a priority for further development.
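One of the four designs compared is a proportional–integral–derivative (PID) controller. As background for readers unfamiliar with that baseline, here is a generic discrete PID loop driving an infusion rate towards a target pressure; the gains, limits and the 65 mmHg target are illustrative assumptions and are unrelated to the tuned controllers evaluated in the study:

```python
# Generic discrete-time PID controller sketch (illustrative only; gains, limits
# and the target value are assumptions, not values from the study).
from dataclasses import dataclass, field

@dataclass
class PIDController:
    kp: float
    ki: float
    kd: float
    dt: float                      # control interval in seconds
    out_min: float = 0.0           # infusion rate limits (assumed units)
    out_max: float = 50.0
    _integral: float = field(default=0.0, init=False)
    _prev_error: float = field(default=0.0, init=False)

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        output = self.kp * error + self.ki * self._integral + self.kd * derivative
        return min(max(output, self.out_min), self.out_max)  # clamp to actuator range

# Example loop: drive mean arterial pressure (MAP) toward 65 mmHg.
pid = PIDController(kp=0.8, ki=0.05, kd=0.1, dt=1.0)
map_reading = 48.0
for _ in range(5):
    rate = pid.update(setpoint=65.0, measurement=map_reading)
    map_reading += 0.2 * rate      # crude stand-in for the hardware-in-loop plant
    print(round(rate, 2), round(map_reading, 1))
```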
18

Chui, Michelle A., Richard J. Holden, Alissa L. Russ, Olufunmilola Abraham, Preethi Srinivas, Jamie A. Stone, Michelle A. Jahn, and Mustafa Ozkaynak. "Human Factors in Pharmacy." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 666–70. http://dx.doi.org/10.1177/1541931213601653.

Abstract:
Medication errors in the ambulatory setting are common and contribute to significant morbidity and mortality. Given the Institute of Medicine’s recommendation of adopting a systems-based approach to improving medication safety, research has been conducted utilizing human factors and ergonomics conceptual frameworks, approaches, and methods to study pharmacies and pharmacists. This panel will focus on how human factors principles and models have been adapted for contexts where medications are managed. Individual projects address pediatric patients’ medication-related needs, over-the-counter medication safety for older adults, anticoagulation management, automated prescription tracking, and medication safety-related decision making by healthcare professionals. These studies span settings from community pharmacies to inpatient pharmacies to specialty clinics and patients’ homes. By presenting a sample of the growing body of human factors work in pharmacy, this panel will offer unique implications for human factors theory, methods, and application in this important domain.
19

Sboev, Alexander, Anton Selivanov, Ivan Moloshnikov, Roman Rybka, Artem Gryaznov, Sanna Sboeva, and Gleb Rylkov. "Extraction of the Relations among Significant Pharmacological Entities in Russian-Language Reviews of Internet Users on Medications." Big Data and Cognitive Computing 6, no. 1 (January 17, 2022): 10. http://dx.doi.org/10.3390/bdcc6010010.

Abstract:
Nowadays, the analysis of digital media aimed at predicting society's reaction to particular events and processes is a task of great significance. Internet sources contain a large amount of meaningful information for a set of domains, such as marketing, author profiling, social situation analysis, healthcare, etc. In the case of healthcare, this information is useful for pharmacovigilance purposes, including the re-profiling of medications. The analysis of the mentioned sources requires the development of automatic natural language processing methods. These methods, in turn, require text datasets with complex annotation including information about named entities and relations between them. As the analysis of the relevant literature shows, there is a scarcity of datasets in the Russian language with annotated entity relations, and none have existed so far in the medical domain. This paper presents the first Russian-language textual corpus where entities have labels of different contexts within a single text, so that related entities share a common context; the corpus is therefore suitable for the relation extraction task and belongs to the medical domain. Our second contribution is a method for the automated extraction of entity relations in Russian-language texts using the XLM-RoBERTa language model preliminarily trained on Russian drug review texts. A comparison with other machine learning methods is performed to estimate the efficiency of the proposed method. The method yields state-of-the-art accuracy in extracting the following relationship types: ADR–Drugname, Drugname–Diseasename, Drugname–SourceInfoDrug, Diseasename–Indication. As shown on the presented subcorpus from the Russian Drug Review Corpus, the method developed achieves a mean F1-score of 80.4% (estimated with cross-validation, averaged over the four relationship types). This result is 3.6% higher compared to the existing language model RuBERT, and 21.77% higher compared to basic ML classifiers.
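A typical way to set up relation classification with a pretrained XLM-RoBERTa encoder is to classify entity-marked text into relation types, as in the sketch below using the Hugging Face transformers API. The label set mirrors the four relation types listed above, but the marker scheme, example sentence and hyperparameters are assumptions rather than the authors' pipeline, and the model would still need fine-tuning on the annotated corpus before its outputs mean anything:

```python
# Illustrative setup for relation classification with XLM-RoBERTa via the
# Hugging Face transformers library. Markers and the example are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RELATION_LABELS = ["ADR-Drugname", "Drugname-Diseasename",
                   "Drugname-SourceInfoDrug", "Diseasename-Indication"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(RELATION_LABELS)
)

# A review sentence with the two candidate entities wrapped in marker tokens.
text = "After taking [E1] ibuprofen [/E1] I developed a strong [E2] headache [/E2]."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted = RELATION_LABELS[int(logits.argmax(dim=-1))]
print(predicted)  # meaningful only after fine-tuning on the annotated corpus
```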
20

Al-Shammari, Minwir M. "Production Value Chain Model for Sustainable Competitive Advantage." Management Systems in Production Engineering 31, no. 1 (February 18, 2023): 27–32. http://dx.doi.org/10.2478/mspe-2023-0004.

Abstract:
In the Fourth Industrial Revolution (4IR), manufacturing firms face more competitive environments, faster-changing information and communication technologies (ICTs), and more rapidly shifting customer preferences than ever before. This paper analyzes relevant literature and proposes a systemic customer-centric knowledge-based production value chain (KPVC) to leverage distinctive core competencies (DCCs) and create sustainable competitive advantage (SCA) in manufacturing contexts. The paper introduces an integrative customer-centric KPVC model that enables companies to respond to environmental drivers, leverage DCCs, and create SCA. It adopts an exploratory approach to developing a unified and inherently interdisciplinary model based on a review of relevant scholarly literature. The integrated KPVC model comprises production value chain (PVC), knowledge management (KM) processes, and business process re-engineering (BPR) enabling activities. A successful move to KPVC requires a fully integrated and automated system allowing firms to define, track, and manage their work processes. Effective KPVC is a principal approach for leveraging DCCs in the quest for achieving SCA in today's competitive business world and generating better value for customers and companies.
21

Snyder, Eric, Thomas Rivers, Lisa Smith, Scott Paoni, Scott Cunliffe, Arpan Patel, and Erika Ramsdale. "From months to minutes: Creating Hyperion, a novel data management system expediting data insights for oncology research and patient care." PLOS Digital Health 1, no. 11 (November 1, 2022): e0000036. http://dx.doi.org/10.1371/journal.pdig.0000036.

Abstract:
Here we describe the design and implementation of a novel data management platform for an academic cancer center which meets the needs of multiple stakeholders. A small, cross-functional technical team identified key challenges to creating a broad data management and access software solution: lowering the technical skill floor, reducing cost, enhancing user autonomy, optimizing data governance, and reimagining technical team structures in academia. The Hyperion data management platform was designed to meet these challenges in addition to usual considerations of data quality, security, access, stability, and scalability. Implemented between May 2019 and December 2020 at the Wilmot Cancer Institute, Hyperion includes a sophisticated custom validation and interface engine to process data from multiple sources, storing it in a database. Graphical user interfaces and custom wizards permit users to directly interact with data across operational, clinical, research, and administrative contexts. The use of multi-threaded processing, open-source programming languages, and automated system tasks (normally requiring technical expertise) minimizes costs. An integrated ticketing system and active stakeholder committee support data governance and project management. A co-directed, cross-functional team with flattened hierarchy and integration of industry software management practices enhances problem solving and responsiveness to user needs. Access to validated, organized, and current data is critical to the functioning of multiple domains in medicine. Although there are downsides to developing in-house customized software, we describe a successful implementation of custom data management software in an academic cancer center.
22

Marzi, David, and Fabio Dell’Acqua. "Mapping European Rice Paddy Fields Using Yearly Sequences of Spaceborne Radar Reflectivity: A Case Study in Italy." Earth 2, no. 3 (July 2, 2021): 387–404. http://dx.doi.org/10.3390/earth2030023.

Abstract:
Although a vast literature exists on satellite-based mapping of rice paddy fields in Asia, where most of the global production takes place, little has been produced so far that focuses on the European context. Detection and mapping methods that work well in the Asian context will not offer the same performance in Europe, where different seasonal cycles, environmental contexts, and rice varieties make distinctive features dissimilar to the Asian case. In this context, water management is a key clue; watering practices are distinctive for rice with respect to other crops, and within rice there exist diverse cultivation practices, including organic and non-organic approaches. In this paper, we focus on satellite-observed water management to identify rice paddy fields cultivated with a traditional agricultural approach. Building on established research results, and guided by the output of experiments on real-world cases, a new method for analyzing time series of Sentinel-1 data has been developed, which can identify traditional rice fields with a high degree of reliability. Typical watering practices for traditional rice cultivation leave distinctive marks on the yearly sequence of spaceborne radar reflectivity that are identified by the proposed classifier. The method is tested on a small sample of rice paddy fields, built by direct collection of ground reference information. Automated setting of parameters was sufficient to achieve accuracy values beyond 90%, and scanning a range of parameter values reached a full score on an independent test set. This work is part of a broader initiative to build space-based tools for collecting additional pieces of evidence to support food chain traceability; the whole system will consider various parameters, whose analysis procedures are still at their early stages of development.
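The core clue described above, flooding for traditional rice leaving a characteristic dip in the yearly backscatter sequence, can be illustrated with a very simple per-field threshold rule; the window, threshold and dB values below are invented for demonstration and are not the paper's calibrated parameters or its actual classifier:

```python
# Toy illustration of flagging a flooding-induced backscatter dip in a yearly
# Sentinel-1 VH time series for one field. All values here are invented.
from statistics import mean

def looks_like_flooded_rice(backscatter_db: list[float],
                            sowing_window: slice = slice(8, 14),
                            dip_db: float = 3.0) -> bool:
    """Return True if the sowing-window mean drops well below the yearly mean,
    as expected when a paddy is flooded before transplanting."""
    yearly_mean = mean(backscatter_db)
    window_mean = mean(backscatter_db[sowing_window])
    return (yearly_mean - window_mean) >= dip_db

# One synthetic year of roughly weekly VH backscatter (dB) with a springtime dip.
series = [-13, -13, -12, -13, -12, -13, -13, -12,      # winter / early spring
          -19, -20, -21, -20, -19, -18,                # flooding dip (assumed)
          -12, -11, -10, -10, -11, -12, -12, -13]      # canopy growth and harvest
print(looks_like_flooded_rice(series))  # True for this synthetic example
```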
23

Moreira, Eugênio, Daniel Ribeiro Cardoso, and Paulo Jorge Alcobia Simões. "Tecnologias da informação aplicadas à avaliação da visibilidade de bens tombados em contextos urbanos | Information technologies applied to the assessment of the visibility of historical heritage in urban contexts." InfoDesign - Revista Brasileira de Design da Informação 15, no. 1 (August 28, 2018): 1–16. http://dx.doi.org/10.51358/id.v15i1.579.

Abstract:
This article describes and conceptualizes a device based on information technologies capable of evaluating aspects of the environment and visibility of built cultural heritage, creating automated representations of these readings. In order to do so, it starts from a discussion about the management of the surrounding areas of these buildings, from its origins to the experiences of the national protection agency (IPHAN), where the terms environment and visibility are related. Then we introduce theories of space perception that use a quantitative approach and bring important insights to the problem. This knowledge serves as the basis for the formalization of a general approach to analysis, described through diagrams. As results, we present the first tests of implementation in a computational environment and, as this is research in progress, we conclude with a discussion about the limits and potentialities observed.
24

Molnar, Petra. "Technology on the margins: AI and global migration management from a human rights perspective." Cambridge International Law Journal 8, no. 2 (December 2019): 305–30. http://dx.doi.org/10.4337/cilj.2019.02.07.

Abstract:
Experiments with new technologies in migration management are increasing. From Big Data predictions about population movements in the Mediterranean, to Canada's use of automated decision-making in immigration and refugee applications, to artificial-intelligence lie detectors deployed at European borders, States are keen to explore the use of new technologies, yet often fail to take into account profound human rights ramifications and real impacts on human lives. These technologies are largely unregulated, developed and deployed in opaque spaces with little oversight and accountability. This paper examines how technologies used in the management of migration impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate, as States single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development. The way that technology operates is a useful lens that highlights State practices, democracy, notions of power, and accountability. Technology is not inherently democratic and human rights impacts are particularly important to consider in humanitarian and forced migration contexts. An international human rights law framework is particularly useful for codifying and recognising potential harms, because technology and its development is inherently global and transnational. More oversight and issue-specific accountability mechanisms are needed to safeguard fundamental rights of migrants such as freedom from discrimination, privacy rights and procedural justice safeguards such as the right to a fair decision-maker and the rights of appeal.
25

Morano, Rosato, Tajani, Manganelli, and Liddo. "Contextualized Property Market Models vs. Generalized Mass Appraisals: An Innovative Approach." Sustainability 11, no. 18 (September 6, 2019): 4896. http://dx.doi.org/10.3390/su11184896.

Abstract:
The present research takes into account the current and widespread need for rational valuation methodologies able to correctly interpret the available market data. An innovative automated valuation model has been applied simultaneously to three Italian study samples, each consisting of two hundred residential units sold in the years 2016–2017. The main contribution of the proposed methodology is its ability to generate a “unique” functional form for the three different territorial contexts considered, in which the relationships between the influencing factors and the selling prices are specified by different multiplicative coefficients that appropriately represent the market phenomena of each case study analyzed. The method can support private operators in assessing the convenience of territorial investments, and public entities in the decision-making phases regarding future tax and urban planning policies.
26

Iohov, Olexandr, Victor Maliuk, Olexandr Salnikov, and Olena Novykova. "Application of the ontology of the choice of selection to describe the processes of interaction of subjects of military communication management." Advanced Information Systems 5, no. 1 (June 22, 2021): 54–61. http://dx.doi.org/10.20998/2522-9052.2021.1.07.

Abstract:
Ways of improving the mechanisms of information and analytical support for the command-and-control system in a state of emergency are analyzed. An approach is used that applies the ontology of the choice problem to decision-making in law-enforcement management, with a procedure for integrating information resources based on a binary partial-order relation. The purpose of the article is to increase the efficiency of decision-making in the military command's management system in a state of emergency by applying the ontology of the choice problem to a set of semantically significant results. Results of the research: the analysis and processing of large arrays of information in the field of military command management in a state of emergency should be carried out in an automated mode within a distributed software environment built on the principles of ontologies. Ontological systems, as a result of the inverse mapping of natural systems, provide the correct aggregation of various thematic processes through the formation of a structured set of information objects (concepts) of the subject area, which are defined as a single data type. The ontological representation of the contexts of these concepts enables their integrated use when the command's governing bodies solve complex tasks in a state of emergency. One constructive way to integrate information resources, as passive knowledge systems, is to activate their concepts by forming thematic ontologies and combining these ontologies by building an ontology of the choice problem over them. The uniqueness of the ontology of the choice problem up to homotopy type makes it possible to build the procedure for integrating information resources on the basis of a binary partial-order relation. The partial-order relation makes it possible to reflect, in an integrated way, the interaction of the contexts of the concepts that define the subjects of the information resources. The contradiction between the growing amount of information needed for decision-making in the management of interdepartmental critical systems and the constant requirement to reduce the time for its processing in information-analytical systems is thereby resolved.
27

Motz, Gary, Alexander Zimmerman, Kimberly Cook, and Alyssa Bancroft. "Collections Management and High-Throughput Digitization using Distributed Cyberinfrastructure Resources." Biodiversity Information Science and Standards 2 (July 5, 2018): e25643. http://dx.doi.org/10.3897/biss.2.25643.

Abstract:
Collections digitization relies increasingly upon computational and data management resources that occasionally exceed the capacity of natural history collections and their managers and curators. Digitization of many tens of thousands of micropaleontological specimen slides, as evidenced by the effort presented here by the Indiana University Paleontology Collection, has been a concerted effort in adherence to the recommended practices of multifaceted aspects of collections management for both physical and digital collections resources. This presentation highlights the contributions of distributed cyberinfrastructure from the National Science Foundation-supported Extreme Science and Engineering Discovery Environment (XSEDE) for web-hosting of collections management system resources and distributed processing of millions of digital images and metadata records of specimens from our collections. The Indiana University Center for Biological Research Collections is currently hosting its instance of the Specify collections management system (CMS) on a virtual server hosted on Jetstream, the cloud service for on-demand computational resources as provisioned by XSEDE. This web-service allows the CMS to be flexibly hosted on the cloud with additional services that can be provisioned on an as-needed basis for generating and integrating digitized collections objects in both web-friendly and digital preservation contexts. On-demand computing resources can be used for the manipulation of digital images for automated file I/O, scripted renaming of files for adherence to file naming conventions, derivative generation, and backup to our local tape archive for digital disaster preparedness and long-term storage. Here, we will present our strategies for facilitating reproducible workflows for general collections digitization of the IUPC nomenclatorial types and figured specimens in addition to the gigapixel resolution photographs of our large collection of microfossils using our GIGAmacro system (e.g., this slide of conodonts). We aim to demonstrate the flexibility and nimbleness of cloud computing resources for replicating this, and other, workflows to enhance the findability, accessibility, interoperability, and reproducibility of the data and metadata contained within our collections.
28

Tilley, Alexander, Joctan Dos Reis Lopes, and Shaun P. Wilkinson. "PeskAAS: A near-real-time, open-source monitoring and analytics system for small-scale fisheries." PLOS ONE 15, no. 11 (November 13, 2020): e0234760. http://dx.doi.org/10.1371/journal.pone.0234760.

Abstract:
Small-scale fisheries are responsible for landing half of the world’s fish catch, yet there are very sparse data on these fishing activities and the associated fisheries production in time and space. Fisheries-dependent data underpin scientific guidance of management and conservation of fisheries systems, but it is inherently difficult to generate robust and comprehensive data for small-scale fisheries, particularly given their dispersed and diverse nature. In tackling this challenge, we use open-source software components, including the Shiny R package, to build PeskAAS, an adaptable and scalable digital application that enables the collation, classification, analysis and visualisation of small-scale fisheries catch and effort data. We piloted and refined this system in Timor-Leste, a small island developing nation. The features that make PeskAAS fit for purpose are that it is: (i) fully open-source and free to use; (ii) component-based, flexible and able to integrate vessel tracking data with catch records; (iii) able to perform spatial and temporal filtering of fishing productivity by fishing method and habitat; (iv) integrated with species-specific length-weight parameters from FishBase; (v) controlled through a click-button dashboard, co-designed with fisheries scientists and government managers, that enables easy-to-read data summaries and interpretation of context-specific fisheries data. With limited training and code adaptation, the PeskAAS workflow has been used as a framework on which to build and adapt systematic, standardised data collection for small-scale fisheries in other contexts. Automated analytics of these data can provide fishers, managers and researchers with insights into a fisher’s experience of fishing efforts, fisheries status, catch rates, economic efficiency and geographic preferences and limits that can potentially guide management and livelihood investments.
29

Savova, G. K., K. C. Kipper-Schuler, J. F. Hurdle, and S. M. Meystre. "Extracting Information from Textual Documents in the Electronic Health Record: A Review of Recent Research." Yearbook of Medical Informatics 17, no. 01 (August 2008): 128–44. http://dx.doi.org/10.1055/s-0038-1638592.

Abstract:
Objectives: We examine recent published research on the extraction of information from textual documents in the Electronic Health Record (EHR). Methods: Literature review of the research published after 1995, based on PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers already included. Results: 174 publications were selected and are discussed in this review in terms of: the methods used; pre-processing of textual documents; contextual feature detection and analysis; extraction of information in general; extraction of codes and of information for decision support and enrichment of the EHR; information extraction for surveillance, research, automated terminology management, and data mining; and de-identification of clinical text. Conclusions: The performance of information extraction systems with clinical text has improved since the last systematic review in 1995, but such systems are still rarely applied outside of the laboratory in which they were developed. Competitive challenges for information extraction from clinical text, along with the availability of annotated clinical text corpora and further improvements in system performance, are important factors to stimulate advances in this field and to increase the acceptance and usage of these systems in concrete clinical and biomedical research contexts.
30

Heldal, Frode, Endre Sjøvold, and Kenneth Stålsett. "Shared cognition in intercultural teams: collaborating without understanding each other." Team Performance Management: An International Journal 26, no. 3/4 (April 13, 2020): 211–26. http://dx.doi.org/10.1108/tpm-06-2019-0051.

Abstract:
Purpose: Severe misunderstandings have been shown to cause significant delays and financial overruns in large engineering projects whose teams consist of people from Western and Asian cultures. The purpose of this study was to determine whether differences in shared cognition may explain some of the crucial misunderstandings in intercultural production teams. Design/methodology/approach: The study used the systematizing the person–group relationship (SPGR) survey methodology, supported by interviews, to study mental models in six South Korean teams that also include Norwegian engineers (52 individuals). In doing so, the study uses the theoretical framework of Healey et al. (2015), where X-mental representations involve actions that are automated and subconscious and C-mental representations involve actions that are verbalized reasonings and conscious. People may share mental models on the X-level without sharing on the C-level, depicting a situation where teams are coordinated without understanding why (surface discordance). Findings: The findings of the study are that people with different cultural backgrounds in an intercultural team may learn to adapt to each other when the context is standardized, without necessarily understanding the underlying meanings and intentions behind actions (surface discordance). This may create a perception that team members do not need to explicate opinions (sharing at the C-level). This in turn may create challenges in anomalous situations, where deliberate sharing of C-mental models is required to find new solutions and/or admit errors so that they may be adjusted. The findings indicate that the non-sharing of explicated reasonings (C-mental models) between Norwegians and Koreans contributed to these misunderstandings, despite an implicit agreement on how to perform standard tasks (shared X-mental models). Research limitations/implications: The study is limited to Norwegians and Koreans working in production teams. Future studies could benefit from more cultures and/or different team contexts. The authors believe that the findings may also apply to other standardized environments and corroborate previous perspectives on intercultural teams needing both to train (develop similar X-mental representations) and to reflect together (develop similar C-mental representations). Practical implications: Based on the findings, the authors suggest cross-cultural training at a deeper level than previously suggested: training in social interaction patterns as well as in verbalizing logical reasoning together. This entails reaching a shared and joint understanding not only of actions but also of values, feelings and teamwork functions, which can be enabled by group conversations and training in dynamic team patterns. Importantly, however, standardized contexts may dampen the perception of the need to do both. Originality/value: The study contributes to current research on intercultural teams by focusing on a dual-mode perspective on shared cognition, relating it to contextual factors. In this, the authors answer the call in previous research for more information on contextual matters and a focus on interaction in intercultural teams. The study also shows how the differences between X-mental and C-mental shared mental models play out in a practical setting.
31

Westerlund, Parvaneh, Ingemar Andersson, Tero Päivärinta, and Jörgen Nilsson. "Towards automated pre-ingest workflow for bridging information systems and digital preservation services." Records Management Journal 29, no. 3 (November 18, 2019): 289–304. http://dx.doi.org/10.1108/rmj-05-2018-0011.

Abstract:
Purpose: This paper aims to automate the pre-ingest workflow for preserving digital content, such as records, through middleware that integrates potentially many information systems with potentially several alternative digital preservation services. Design/methodology/approach: This design research approach resulted in a design for model- and component-based software for such a workflow. A proof-of-concept prototype was implemented and demonstrated in the context of a European research project, ForgetIT. Findings: The study identifies design issues of automated pre-ingest for digital preservation while using middleware as a design choice for this purpose. The resulting model and solution suggest functionalities and interaction patterns based on open interface protocols between the source systems of digital content, the middleware, and digital preservation services. The resulting workflow automates the tasks of fetching digital objects from the source system with metadata extraction, preservation preparation, and transfer to a selected preservation service. The proof of concept verified that the suggested model for the pre-ingest workflow and the suggested component architecture were technologically implementable. Future research and development needs to include new solutions to support context-aware preservation management, with increased support for configuring submission agreements as a basis for dynamic automation of pre-ingest and more automated error handling. Originality/value: The paper addresses design issues for middleware as a design choice to support automated pre-ingest in digital preservation. The suggested middleware architecture supports many-to-many relationships between source information systems and digital preservation services through open interface protocols, thus enabling dynamic digital preservation solutions for records management.
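A minimal Python sketch of the pre-ingest steps named in the Findings (fetch, metadata extraction, preservation preparation, hand-over to a transfer step). The field names and package layout are invented for illustration and do not reproduce the ForgetIT middleware interfaces.

    import hashlib
    import json
    from pathlib import Path

    def pre_ingest(source_path: Path, outbox: Path) -> Path:
        """Fetch a digital object, extract minimal technical metadata, and stage
        a submission package for a preservation service (illustrative layout)."""
        payload = source_path.read_bytes()                      # fetch from source system
        metadata = {                                            # metadata extraction
            "original_name": source_path.name,
            "size_bytes": len(payload),
            "sha256": hashlib.sha256(payload).hexdigest(),
        }
        package_dir = outbox / metadata["sha256"][:12]          # preservation preparation
        package_dir.mkdir(parents=True, exist_ok=True)
        (package_dir / source_path.name).write_bytes(payload)
        (package_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
        return package_dir                                      # handed to the transfer step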
32

Darr, Asaf. "Automatons, sales-floor control and the constitution of authority." Human Relations 72, no. 5 (July 25, 2018): 889–909. http://dx.doi.org/10.1177/0018726718783818.

Abstract:
Workplace authority in contemporary contexts is increasingly being constituted through online automatons, internet platforms whose logic is diametrically opposed to the notion of hierarchical knowledge. They govern the organization of work and derive legitimacy from three principles: (1) the streaming of information into a network composed of all workers; (2) the transparency of the information and measurements they provide to workers; and (3) their automatic self-regulation, which obscures the role of management in their design. Via interviews and on-site observation in a large computer chain store, I examined how one automaton controls workers through a complex system of sales contests. To lure workers into active engagement with the automaton, management offers hefty prizes to contest winners and also strives to legitimate the automaton’s operation by presenting the contests as fair and just. Through the behavioural scripts inscribed into it, the automaton fosters belief in markets as efficient means of resource allocation and promotes self-interested behaviour and arm’s-length social ties. Smart artefacts like this automaton, which foster belief and generate authority through workers’ prescribed engagement with them, are, I argue, emerging as effective managerial tools in a variety of work contexts, part of a pattern of increasing automation of workplace authority.
33

Beck, A., T. Ganslandt, M. Hummel, M. Kiehntopf, U. Sax, F. Ückert, S. Semler, and H. U. Prokosch. "IT Infrastructure Components for Biobanking." Applied Clinical Informatics 01, no. 04 (2010): 419–29. http://dx.doi.org/10.4338/aci-2010-05-ra-0034.

Abstract:
Objective: Within translational research projects in recent years, large biobanks have been established, mostly supported by homegrown, proprietary software solutions. No general requirements for biobanking IT infrastructures have been published yet. This paper presents an exemplary biobanking IT architecture, a requirements specification for a biorepository management tool, and exemplary illustrations of three major types of requirements. Methods: We pursued a comprehensive literature review of biobanking IT solutions and established an interdisciplinary expert panel to create the requirements specification. The exemplary illustrations were derived from a requirements analysis within two university hospitals. Results: The requirements specification comprises a catalog with more than 130 detailed requirements grouped into 3 major categories and 20 subcategories. Special attention is given to multitenancy capabilities in order to support the project-specific definition of varying research and biobanking contexts, the definition of workflows to track sample processing, sample transportation and sample storage, and the automated integration of preanalytic handling and storage robots. Conclusion: IT support for biobanking projects can be based on a federated architectural framework comprising primary data sources for clinical annotations, a pseudonymization service, a clinical data warehouse with a flexible and user-friendly query interface, and a biorepository management system. Flexibility and scalability of all such components are vital, since large medical facilities such as university hospitals will have to support biobanking for varying monocentric and multicentric research scenarios and multiple medical clients.
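The pseudonymization service mentioned in the Conclusion can be illustrated with a generic HMAC-based sketch in Python; the key handling and project-scoped derivation shown here are assumptions for illustration, not the architecture specified in the paper.

    import hmac
    import hashlib

    SECRET_KEY = b"key-held-by-a-trusted-third-party"  # illustrative only

    def pseudonym(patient_id: str, project: str) -> str:
        """Derive a project-specific pseudonym so that samples from the same
        patient cannot be linked across biobanking contexts (multitenancy)."""
        msg = f"{project}:{patient_id}".encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

    print(pseudonym("PAT-000123", "oncology-cohort-A"))
    print(pseudonym("PAT-000123", "cardio-cohort-B"))  # different pseudonym per project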
34

Morrison, Clare, Michelle Beattie, Joseph Wherton, Cameron Stark, Julie Anderson, Carolyn Hunter-Rowe, and Nicola M. Gray. "Testing and implementing video consulting for outpatient appointments: using quality improvement system thinking and codesign principles." BMJ Open Quality 10, no. 1 (March 2021): e001259. http://dx.doi.org/10.1136/bmjoq-2020-001259.

Abstract:
Increasing demand for outpatient appointments (OPA) is a global challenge for healthcare providers. Non-attendance rates are high, not least because of the challenges of attending hospital OPAs due to transport difficulties, cost, poor health, caring and work responsibilities. Digital solutions may help ameliorate these challenges. This project aimed to implement codesigned outpatient video consultations across National Health Service (NHS) Highland using system-wide quality improvement approaches to implementation, involving patients, carers, clinical and non-clinical staff, national and local strategic leads. System mapping; an intensive codesign process involving extensive stakeholder engagement and real-time testing; Plan, Do, Study, Act cycles; and collection of clinician and patient feedback were used to optimise the service. Standardised processes were developed and implemented, which made video consulting easy to use for patients, embedded video into routine health service systems for clinicians and non-clinical staff, and automated much of the administrative burden. All clinicians and staff are using the system and both groups identified benefits in terms of travel time and costs saved. Transferable lessons for other services are identified, providing a practical blueprint for others to adapt and use in their own contexts to help implement and sustain video consultation services now and in the future.
35

Shkrygun, Yu. "Management of Logistics Activities of Enterprises in the Context of Industry 4.0." Economic Herald of the Donbas, no. 4 (66) (2021): 53–61. http://dx.doi.org/10.12958/1817-3772-2021-4(66)-53-61.

Abstract:
In the modern digital economy, with its many economic and social force majeure events, it is especially important to provide production with the necessary material and digital resources and to use them efficiently, and to improve the operational and strategic management of warehousing, inventories, differentiated transport flows, sales activities and customer experience. To operate effectively, enterprises should organize their activities so as to avoid the risks, losses and costs associated with organizing logistics processes, production, transportation and marketing, to maintain customer focus, and to maximize revenue. These goals can be achieved, first of all, by improving the management of the logistics activities of enterprises, taking into account the components of the concept, their interrelationships, and the accelerated but uneven process of digitization. It is established that management decisions should be developed and implemented in the following key areas: procurement and supply of material resources (calculating the optimal volume of supply of material resources, optimizing the procurement strategy, and improving procurement management using multicriteria assessment); traffic flow management (introducing cargo flow management information systems, using automated document processing when designing the cargo transportation process, developing proposals for optimizing transport loading, and using Internet technologies to automate transport processes); customer experience management (analysing shipment volumes, forecasting shipment volumes to consumers, developing proposals to raise the level of logistics services, forming a system of contractual relations with consumers, and improving the customer-oriented approach to serving different categories of consumers in the context of relationship marketing); and sales management (justifying the network approach to organizing the sales activities of enterprises, improving the mechanism of public-private partnership in sales management based on the organizational and legal form of the syndicate, a methodological approach to choosing the optimal sales channel for finished products, and commerce as an effective tool for promoting products on the market).
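One way to read "calculating the optimal volume of supply of material resources" is the classic economic order quantity model; the short Python sketch below shows that textbook formula with invented numbers, without implying that the article prescribes this specific model.

    from math import sqrt

    def eoq(annual_demand_units: float, order_cost: float, holding_cost_per_unit: float) -> float:
        """Classic economic order quantity: Q* = sqrt(2 * D * S / H)."""
        return sqrt(2 * annual_demand_units * order_cost / holding_cost_per_unit)

    # Illustrative numbers: 12,000 units/year, 150 per order, 2.4 per unit-year holding cost
    print(round(eoq(12_000, 150.0, 2.4)))  # ~1225 units per order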
36

Fernández-Isabel, Alberto, and Rubén Fuentes-Fernández. "Social-Aware Driver Assistance Systems for City Traffic in Shared Spaces." Sensors 19, no. 2 (January 9, 2019): 221. http://dx.doi.org/10.3390/s19020221.

Abstract:
Shared spaces are gaining presence in cities, where a variety of players and mobility types (pedestrians, bicycles, motorcycles, and cars) move without specifically delimited areas. This makes the traffic they comprise challenging for automated systems. The information traditionally considered (e.g., streets, and obstacle positions and speeds) is not enough to build suitable models of the environment. The required explanatory and anticipation capabilities need additional information to improve them. Social aspects (e.g., goal of the displacement, companion, or available time) should be considered, as they have a strong influence on how people move and interact with the environment. This paper presents the Social-Aware Driver Assistance System (SADAS) approach to integrate this information into traffic systems. It relies on a domain-specific modelling language for social contexts and their changes. Specifications compliant with it describe social and system information, their links, and how to process them. Traffic social properties are the formalization within the language of relevant knowledge extracted from literature to interpret information. A multi-agent system architecture manages these specifications and additional processing resources. A SADAS can be connected to other parts of traffic systems by means of subscription-notification mechanisms. The case study to illustrate the approach applies social knowledge to predict people’s movements. It considers a distributed system for obstacle detection and tracking, and the intelligent management of traffic signals.
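The subscription-notification mechanism by which a SADAS connects to other parts of a traffic system can be illustrated with a toy publish/subscribe bus in Python; the topic name and event fields below are invented, not taken from the paper.

    from collections import defaultdict
    from typing import Callable, Dict, List

    class SocialContextBus:
        """Toy subscription-notification mechanism (illustrative only)."""
        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: dict) -> None:
            for handler in self._subscribers[topic]:
                handler(event)

    bus = SocialContextBus()
    bus.subscribe("pedestrian.predicted_path",
                  lambda e: print("signal controller notified:", e))
    bus.publish("pedestrian.predicted_path",
                {"agent_id": 17, "goal": "bus_stop", "eta_s": 12})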
37

Yang, Xihua. "Deriving RUSLE cover factor from time-series fractional vegetation cover for hillslope erosion modelling in New South Wales." Soil Research 52, no. 3 (2014): 253. http://dx.doi.org/10.1071/sr13297.

Abstract:
Soil loss due to water erosion, in particular hillslope erosion, can be estimated using predictive models such as the Revised Universal Soil Loss Equation (RUSLE). One of the important and dynamic elements in the RUSLE model is the cover and management factor (C-factor), which represents effects of vegetation canopy and ground cover in reducing soil loss. This study explores the potential for using fractional vegetation cover, rather than traditional green vegetation indices (e.g. NDVI), to estimate C-factor and consequently hillslope erosion hazard across New South Wales (NSW), Australia. Values of the C-factor were estimated from the emerging time-series fractional cover products derived from Moderate Resolution Imaging Spectroradiometer (MODIS). Time-series C-factor and hillslope erosion maps were produced for NSW on monthly and annual bases for a 13-year period from 2000 to 2012 using automated scripts in a geographic information system. The estimated C-factor time-series values were compared with previous study and field measurements in NSW revealing good consistency in both spatial and temporal contexts. Using these time-series maps, the relationship was analysed between ground cover and hillslope erosion and their temporal variation across NSW. Outcomes from this time-series study are being used to assess hillslope erosion hazard, sediment and water quality (particularly after severe bushfires) across NSW at local, catchment and regional scales.
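For readers unfamiliar with RUSLE, soil loss is A = R * K * LS * C * P, and cover-based C-factor estimates often take an exponential form. The Python sketch below shows that arithmetic with placeholder coefficients; it does not reproduce the calibration used in the NSW study.

    from math import exp

    def c_factor_from_cover(ground_cover_fraction: float, b: float = 0.035) -> float:
        """Illustrative C-factor from fractional ground cover (0-1), using an
        exponential decay form, C = exp(-b * cover%); b is a placeholder value."""
        return exp(-b * ground_cover_fraction * 100)

    def rusle_soil_loss(R: float, K: float, LS: float, C: float, P: float = 1.0) -> float:
        """RUSLE: A = R * K * LS * C * P (with consistent factor units)."""
        return R * K * LS * C * P

    C = c_factor_from_cover(0.70)  # 70% ground cover
    print(round(C, 3), round(rusle_soil_loss(R=2000, K=0.04, LS=1.2, C=C), 2))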
38

Huynh, Nghi, Marc Frappier, Herman Pooda, Amel Mammar, and Régine Laleau. "SGAC: A Multi-Layered Access Control Model with Conflict Resolution Strategy." Computer Journal 62, no. 12 (May 13, 2019): 1707–33. http://dx.doi.org/10.1093/comjnl/bxz039.

Abstract:
This paper presents SGAC (Solution de Gestion Automatisée du Consentement / automated consent management solution), a new healthcare access control model and its support tool, which manages patient wishes regarding access to their electronic health records (EHR). This paper also presents the verification of access control policies for SGAC using two first-order-logic model checkers based on distinct technologies, Alloy and ProB. The development of SGAC has been achieved within the scope of a project with the University of Sherbrooke Hospital (CHUS), and has thus been adapted to take into account the regional laws and regulations applicable in Québec and Canada, as they set bounds on patient wishes: for safety reasons, under strictly defined contexts, patient consent can be overridden to protect the patient's life (break-the-glass rules). Since patient wishes and those regulations can be in conflict, SGAC provides a mechanism to address this problem based on priority, specificity and modality. In order to protect patient privacy while ensuring effective caregiving in safety-critical situations, we check four types of properties: accessibility, availability, contextuality and rule effectivity. We conducted performance comparisons of an implementation of SGAC versus an implementation of another access control model, XACML, and of property verification with Alloy versus ProB. The performance results show that SGAC performs better than XACML and that ProB outperforms Alloy by two orders of magnitude, thanks to its programmable approach to constraint solving.
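The conflict-resolution idea (priority, then specificity, with prohibitions winning ties) can be sketched in a few lines of Python; this is a toy illustration in the spirit of SGAC, not its actual rule semantics or syntax.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Rule:
        """Toy access rule (not SGAC's actual rule structure)."""
        priority: int        # higher wins first
        specificity: int     # e.g. depth of the targeted subject/document group
        modality: str        # "permit" or "deny"

    def resolve(applicable: List[Rule]) -> Optional[str]:
        """Order candidate rules by priority, then specificity; deny wins remaining ties."""
        if not applicable:
            return None
        best = sorted(applicable,
                      key=lambda r: (-r.priority, -r.specificity, r.modality != "deny"))[0]
        return best.modality

    rules = [Rule(priority=1, specificity=2, modality="permit"),
             Rule(priority=1, specificity=2, modality="deny"),   # tie -> deny wins
             Rule(priority=0, specificity=5, modality="permit")]
    print(resolve(rules))  # deny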
39

Pääkkönen, Juho, Salla-Maaria Laaksonen, and Mikko Jauho. "Credibility by automation: Expectations of future knowledge production in social media analytics." Convergence: The International Journal of Research into New Media Technologies 26, no. 4 (February 5, 2020): 790–807. http://dx.doi.org/10.1177/1354856520901839.

Abstract:
Social media analytics is a burgeoning new field associated with high promises of societal relevance and business value but also methodological and practical problems. In this article, we build on the sociology of expectations literature and research on expertise in the interaction between humans and machines to examine how analysts and clients make their expectations about social media analytics credible in the face of recognized problems. To investigate how this happens in different contexts, we draw on thematic interviews with 10 social media analytics and client companies. In our material, social media analytics appears as a field facing both hopes and skepticism – toward data, analysis methods, or the users of analytics – from both the clients and the analysts. In this setting, the idea of automated analysis through algorithmic methods emerges as a central notion that lends credibility to expectations about social media analytics. Automation is thought to, first, extend and make expert interpretation of messy social media data more rigorous; second, eliminate subjective judgments from measurement on social media; and, third, allow for coordination of knowledge management inside organizations. Thus, ideas of automation importantly work to uphold the expectations of the value of analytics. Simultaneously, they shape what kinds of expertise, tools, and practices come to be involved in the future of analytics as knowledge production.
40

Tuan, Annamaria, Daniele Dalli, Alessandro Gandolfo, and Anastasia Gravina. "Theories and methods in CSRC research: a systematic literature review." Corporate Communications: An International Journal 24, no. 2 (April 1, 2019): 212–31. http://dx.doi.org/10.1108/ccij-11-2017-0112.

Abstract:
Purpose: The authors systematically reviewed 534 corporate social responsibility communication (CSRC) papers, updating the current debate about the ontological and epistemological paradigms that characterize the field and providing evidence of the interactions between these paradigms and the related methodological choices. The purpose of this paper is to provide theoretical and methodological implications for future research in the CSRC research domain. Design/methodology/approach: The authors used the Scopus database to search for titles, abstracts and related keywords with two query sets relating to corporate social responsibility (e.g. corporate ethical, corporate environmental, social responsibility, corporate accountability) and CSRC (e.g. reporting, disclosure, dialogue, sensemaking). The authors identified 534 empirical papers (2000–2016), which they coded manually to identify the research methods and research designs (Creswell, 2013). The authors then developed an ad hoc dictionary whose keywords relate to the three primary CSRC approaches (instrumental, normative and constitutive). Using the software Linguistic Inquiry and Word Count, the authors undertook an automated content analysis in order to measure these approaches’ relative popularity and compare the methods employed in empirical research. Findings: The instrumental approach, which belongs to the functionalist paradigm, dominates the CSRC literature, and its relative weight has been constant over time. The normative approach also belongs to the functionalist paradigm but plays a minor yet enduring role. The constitutive approach belongs to the interpretive paradigm and grew slightly over time, but still remains far behind the instrumental approach. In the instrumental approach, many papers report descriptive empirical analyses. In the constitutive approach, theory-method relationships are in line with the various paradigmatic traits, while the normative approach presents critical issues. Regarding methodology, the literature review underlines three major limitations that characterize the existing empirical evidence and provides avenues for future research. While multi-paradigmatic research is promoted in the CSRC literature (Crane and Glozer, 2016; Morsing, 2017; Schoeneborn and Trittin, 2013), the authors found no empirical evidence of it. Originality/value: This is the first paper to systematically review empirical research in the CSRC field and the first to address the relationship between research paradigms, theoretical approaches, and methods. Further, the authors suggest a novel way to develop systematic reviews (i.e. via quantitative, automated content analysis), which can now also be applied in other literature streams and in other contexts.
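A dictionary-based automated content analysis of the kind described (the study used Linguistic Inquiry and Word Count) can be sketched as simple keyword counting per approach; the category terms below are invented for illustration and are not the authors' dictionary.

    import re
    from collections import Counter

    # Toy category dictionaries; the study's actual dictionary is not reproduced here.
    DICTIONARY = {
        "instrumental": {"reputation", "performance", "stakeholder", "disclosure"},
        "normative": {"ethics", "duty", "accountability", "legitimacy"},
        "constitutive": {"sensemaking", "dialogue", "co-creation", "negotiation"},
    }

    def classify_approaches(abstract: str) -> Counter:
        """Count dictionary hits per CSRC approach in one abstract."""
        tokens = re.findall(r"[a-z-]+", abstract.lower())
        hits = Counter()
        for approach, terms in DICTIONARY.items():
            hits[approach] = sum(tok in terms for tok in tokens)
        return hits

    print(classify_approaches(
        "Disclosure improves reputation with each stakeholder and supports dialogue."))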
41

Cybulski, Jacob L., and Rens Scheepers. "Data science in organizations: Conceptualizing its breakthroughs and blind spots." Journal of Information Technology 36, no. 2 (February 26, 2021): 154–75. http://dx.doi.org/10.1177/0268396220988539.

Abstract:
The field of data science emerged in recent years, building on advances in computational statistics, machine learning, artificial intelligence, and big data. Modern organizations are immersed in data and are turning toward data science to address a variety of business problems. While numerous complex problems in science have become solvable through data science, not all scientific solutions are equally applicable to business. Many data-intensive business problems are situated in complex socio-political and behavioral contexts that still elude commonly used scientific methods. To what extent can such problems be addressed through data science? Does data science have any inherent blind spots in this regard? What types of business problems are likely to be addressed by data science in the near future, which will not, and why? We develop a conceptual framework to inform the application of data science in business. The framework draws on an extensive review of data science literature across four domains: data, method, interfaces, and cognition. We draw on Ashby’s Law of Requisite Variety as theoretical principle. We conclude that data-scientific advances across the four domains, in aggregate, could constitute requisite variety for particular types of business problems. This explains why such problems can be fully or only partially addressed, solved, or automated through data science. We distinguish between situations that can be improved due to cross-domain compensatory effects, and problems where data science, at best, only contributes merely to better understanding of complex phenomena.
42

Miksa, Tomasz, Simon Oblasser, and Andreas Rauber. "Automating Research Data Management Using Machine-Actionable Data Management Plans." ACM Transactions on Management Information Systems 13, no. 2 (June 30, 2022): 1–22. http://dx.doi.org/10.1145/3490396.

Abstract:
Many research funders mandate researchers to create and maintain data management plans (DMPs) for research projects that describe how research data is managed to ensure its reusability. A DMP, being a static textual document, is difficult to act upon and can quickly become obsolete and impractical to maintain. A new generation of machine-actionable DMPs (maDMPs) was therefore proposed by the Research Data Alliance to enable automated integration of information and updates. maDMPs open up a variety of use cases enabling interoperability of research systems and automation of data management tasks. In this article, we describe a system for machine-actionable data management planning in an institutional context. We identify common use cases within research that can be automated to benefit from machine-actionability of DMPs. We propose a reference architecture of an maDMP support system that can be embedded into an institutional research data management infrastructure. The system semi-automates creation and maintenance of DMPs, and thus eases the burden for the stakeholders responsible for various DMP elements. We evaluate the proposed system in a case study conducted at the largest technical university in Austria and quantify to what extent the DMP templates provided by the European Commission and a national funding body can be pre-filled. The proof-of-concept implementation shows that maDMP workflows can be semi-automated, thus workload on involved parties can be reduced and quality of information increased. The results are especially relevant to decision makers and infrastructure operators who want to design information systems in a systematic way that can utilize the full potential of maDMPs.
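A minimal sketch of what semi-automated maDMP pre-filling from institutional systems might look like; the JSON structure loosely echoes the idea of the RDA machine-actionable DMP model, but the field names and values here are simplified illustrations, not the normative standard or the authors' implementation.

    import json
    from datetime import date

    def prefill_dmp(project: dict, storage: dict) -> dict:
        """Pre-fill a simplified, machine-actionable DMP from project and storage records."""
        return {
            "dmp": {
                "title": f"DMP for {project['title']}",
                "created": date.today().isoformat(),
                "contact": {"name": project["pi"], "mbox": project["pi_email"]},
                "dataset": [{
                    "title": d["title"],
                    "distribution": [{"host": storage["name"],
                                      "byte_size": d["expected_bytes"]}],
                } for d in project["datasets"]],
            }
        }

    project = {"title": "Soil Moisture Survey", "pi": "A. Researcher",
               "pi_email": "a.researcher@example.org",
               "datasets": [{"title": "Sensor readings", "expected_bytes": 5_000_000_000}]}
    print(json.dumps(prefill_dmp(project, {"name": "Institutional repository"}), indent=2))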
43

Maxant, Jérôme, Rémi Braun, Mathilde Caspard, and Stephen Clandillon. "ExtractEO, a Pipeline for Disaster Extent Mapping in the Context of Emergency Management." Remote Sensing 14, no. 20 (October 20, 2022): 5253. http://dx.doi.org/10.3390/rs14205253.

Abstract:
Rapid mapping of disasters using any kind of satellite imagery is a challenge. The faster the response, the better the service for the end users who are managing the emergency activities. Indeed, production rapidity is crucial whatever satellite data are used as input. However, the speed of delivery must not come at the expense of crisis information quality. The automated flood and fire extraction pipelines presented in this technical note make it possible to take full advantage of advanced algorithms within short timeframes, while leaving enough time for an expert operator to validate the results and correct any unmanaged thematic errors. Although automated algorithms are not flawless, they greatly facilitate and accelerate the detection and mapping of crisis information, especially for floods and fires. ExtractEO is a pipeline developed by SERTIT and dedicated to disaster mapping. It brings together automatic data download and pre-processing, along with highly accurate flood and fire detection chains. Indeed, the thematic quality assessment revealed F1-score values of 0.91 and 0.88 for burnt-area and flooded-area detection, respectively, from various kinds of high- and very-high-resolution data (optical and SAR).
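The reported thematic quality uses the F1 score, F1 = 2PR / (P + R). The short Python sketch below shows the arithmetic with invented pixel counts chosen to land near the reported 0.91 for burnt areas.

    def f1_score(tp: int, fp: int, fn: int) -> float:
        """F1 = 2 * precision * recall / (precision + recall)."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Example: a burnt-area mask with 910 true-positive, 80 false-positive
    # and 100 false-negative pixels gives an F1 close to the reported 0.91.
    print(round(f1_score(910, 80, 100), 3))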
44

Omonov, F. A. "ADAPTATION OF SITUATIONAL MANAGEMENT PRINCIPLES FOR USE IN AUTOMATED DISPATCHING PROCESSES IN PUBLIC TRANSPORT." International Journal of Advance Scientific Research 02, no. 03 (March 1, 2022): 59–66. http://dx.doi.org/10.37547/ijasr-02-03-09.

Abstract:
The article presents a schematic algorithm showing the sequence of the dispatcher's work in analysing the automated dispatching management of city public transport: studying the state of traffic and eliminating problems such as increased waiting times for passengers, inefficient movement procedures, and traffic safety issues.
45

Ivanova, I. A., O. S. Osipova, and V. N. Pulyaeva. "Automating Processes of Personnel Motivation Management in the Context of Implementing a Principle of Social Justice." Management Science 9, no. 4 (January 30, 2020): 63–74. http://dx.doi.org/10.26794/2404-022x-2019-9-4-63-74.

Abstract:
The paper studies the experience of Russian organizations in automating one of the main HR functions: motivation and work incentives for employees. After personnel administration, HR motivation management has become the second HR function to be successfully automated in Russian organizations. The methodological basis of the study is a modern understanding of the theory of human capital. The research methodology involves reconstructing the labor incentive process through the development and implementation of key performance indicators in conjunction with the automation of operational HR processes. The results of the work are a systematization of software that automates the basic processes of staff motivation and recommendations for HR departments on developing and implementing an automated system of key performance indicators. The authors show that correctly implemented automation of the HR function for calculating compensation and benefits, during the transition to a more complex remuneration system, makes it possible in practice not just to improve the economic results of the organization but also to implement the fundamental principle of effective management: a fair assessment of the results of each employee's work.
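A toy Python sketch of a KPI-based bonus calculation of the kind such systems automate; the weights, the 120% cap and the bonus share are invented for illustration and are not taken from the paper.

    def kpi_bonus(base_salary: float, achievements: dict, weights: dict,
                  bonus_share: float = 0.3) -> float:
        """Weighted KPI achievement (capped at 120% per KPI) scales the variable pay."""
        score = sum(weights[k] * min(achievements[k], 1.2) for k in weights)
        return base_salary * bonus_share * score

    weights = {"sales_plan": 0.5, "customer_satisfaction": 0.3, "on_time_reporting": 0.2}
    achievements = {"sales_plan": 1.10, "customer_satisfaction": 0.95, "on_time_reporting": 1.0}
    print(round(kpi_bonus(1000.0, achievements, weights), 2))  # 310.5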
46

Sizov, Pavel. "Supply Chain Management in the Context of Digitalization." SHS Web of Conferences 93 (2021): 03009. http://dx.doi.org/10.1051/shsconf/20219303009.

Abstract:
The article deals with topical problems of supply chain management in modern conditions. A review of the literature on the use of digital technologies in supply chain management is carried out, and the relevance of automated systems in supply chain management is shown. The author proposes a methodological approach to building information support for the network structure of supply chains, demonstrates its practical approbation according to the logical scheme for the selected research object, and describes the blocks of an automated control system for the information support of the network structure of supply chains.
47

Makhlouf Shabou, Basma. "Digital diplomatics and measurement of electronic public data qualities." Records Management Journal 25, no. 1 (March 16, 2015): 56–77. http://dx.doi.org/10.1108/rmj-01-2015-0006.

Abstract:
Purpose – This paper aims to present a recent study on the definition and measurement of quality dimensions of public electronic records and archives (QADEPs: Qualités des archives et documents électroniques publics). It develops an original model and a complete method with tools to define and measure electronic public data qualities within public institutions. It highlights also the relationship between diplomatics principles and the measurement of trustworthiness of electronic data in particular. This paper presents a general overview of the main results of this study, with also illustrative examples to demonstrate the feasibility of measuring the qualities of electronic archives in the context of public institutions. Design/methodology/approach – This research was conducted in two phases. The first one was the conceptual phase in which the quality dimensions were identified and defined with specific sets of indicators and variables. The second phase was the empirical phase which involved the testing of the model on real electronic documents belonging to several public institutions to validate its relevance and applicability. These tests were performed at the Archives of the State of Wallis and the Archives of the State of Geneva, thanks to different measurement tools designed especially for this stage of the research. Findings – The QADEPs model analyzes the qualities of electronic records in public institutions through three dimensions: trustworthiness, exploitability and representativeness. These dimensions were divided into eight sub-dimensions comprising 17 indicators for a total of 46 variables. These dimensions and their variables tried to cover the main aspects of quality standards for electronic data and public documents. The study demonstrates that nearly 60 per cent of the measured variables could be automated. Research limitations/implications – The QADEPs model was defined and tested in a Swiss context on a limited sample of electronic public data to validate, essentially, its feasibility. It would be useful to extend this approach and test it on a broader sample in different contexts abroad. Practical implications – The decisionmaking of records retention in organizations and public institutions in particular is difficult to establish and justify because it is based generally on subjective and non-defendable practices. The QADEPs model offers specific metrics with their related measuring tools to evaluate and identify what is valuable and what is eliminable within the whole set of institutional electronic information. The model should reinforce the information governance of those institutions and help them control the risks related to information management. Originality/value – The current practice of archival appraisal does not yet invest in a meticulous examination of the nature of documents that should be preserved permanently. The lack of studies on the definition and measurement of the qualities of electronic and public electronic records prevents verification as to whether archival materials are significant. This paper fills in some of the gaps.
48

Kazakova, Elena O. "“MAKE THE MOST OF EVERYTHING”: THE WORLD AS A RESOURCE IN 1920s CHILDREN’S NON-FICTION." Practices & Interpretations: A Journal of Philology, Teaching and Cultural Studies 7, no. 4 (December 22, 2022): 22–43. http://dx.doi.org/10.18522/2415-8852-2022-4-21-43.

Abstract:
The article discusses the construction of a "resource approach" to nature in early Soviet children's non-fiction. Using examples from books on energy, technology, minerals, flora and fauna, it shows how the sentimentalization of nature was abandoned and redirected towards consumer discourse. Methodologically, the article lies at the intersection of literary approaches, adhering to the tradition of rhetoric studies and attempting to contextualize it with the toolkit of the digital humanities: an automated concordance was built for the lemmas "benefit", "useful" and "use" with the corpus tool DetKorpus. Analysis of the identified contexts of word usage demonstrates how, with the help of the concept of "benefit", authors create a new model of nature management in readers' minds, in which "taking advantage" becomes a key condition of interaction with the outside world. With regard to the extraction of raw materials and energy resources, the development of territories, and wildlife, the authors of these books use a rhetoric of "metaphorical violence": the suppression and coercion of free natural forces that should be put at the service of man. The article considers such ideological components of this phenomenon as the campaign to promote the first five-year plan and M. Gorky's texts, which conceptualize the project of an anthropocentric world in which a person acts as a source of power, reorganizing "first nature" into culture, a "second nature". It concludes that a special trend exists within non-fiction, formed by a specific authorial rhetoric with a double pragmatics: texts that not only transmit ideas but also involve children in economic and production processes.
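A crude keyword-in-context concordance can be sketched in a few lines of Python; real lemma search is what DetKorpus provides, so the prefix matching and the English sample sentence below are only a stand-in to illustrate the method.

    import re

    def kwic(text: str, stems: tuple, window: int = 4):
        """Yield keyword-in-context lines, matching tokens by stem prefix
        as a rough stand-in for lemma search."""
        tokens = re.findall(r"\w+", text.lower())
        for i, tok in enumerate(tokens):
            if tok.startswith(stems):
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                yield f"{left} [{tok}] {right}"

    sample = "Every forest is useful: people use its timber for their benefit."
    for line in kwic(sample, stems=("use", "benefit")):
        print(line)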
49

Lapshov, Yuriy A., Denis G. Lyamkin, Viktor N. Negoda, and Andrey A. Pertsev. "THE ONTOLOGICAL MODELING IN PLANNING THE STAFFING SUPPORT PERFORMING A CODE REVIEW FOR AUTOMATED SYSTEMS." Автоматизация процессов управления 2, no. 68 (2022): 108–17. http://dx.doi.org/10.35752/1991-2927-2022-2-68-108-117.

Abstract:
The paper deals with automating ontological modeling in order to automate the formation of a team of performers for code review of automated systems. It proposes defining the roles and responsibilities of team members on the basis of integrated automated information and analytical processes built on ontological modeling, value engineering, the active accumulation and use of design experience, and a role-based approach to personnel management combined with an aspect-oriented approach to the design of automated systems. Ontological modeling is treated as an incremental process of sequentially expanding the basic ontological model of end-to-end design of automated systems in the context of three target aspects: staffing support of the project, automated code review, and value engineering. The first two aspects follow from the very content of this process; the third is due to the dependence of all work on the budget and the widespread practice of saving on code review. The incremental process in these aspects is ensured by including concepts and relations in the basic ontological model by means of aggregation facilities.
50

Azevedo-Sa, Hebert, Suresh Kumaar Jayaraman, X. Jessie Yang, Lionel P. Robert, and Dawn M. Tilbury. "Context-Adaptive Management of Drivers’ Trust in Automated Vehicles." IEEE Robotics and Automation Letters 5, no. 4 (October 2020): 6908–15. http://dx.doi.org/10.1109/lra.2020.3025736.
