Journal articles on the topic "Numeric traces and learning indicators"

To see the other types of publications on this topic, follow the link: Numeric traces and learning indicators.

Consult the top 49 journal articles for your research on the topic "Numeric traces and learning indicators".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gregory, Peter, and Alan Lindsay. "Domain Model Acquisition in Domains with Action Costs." Proceedings of the International Conference on Automated Planning and Scheduling 26 (March 30, 2016): 149–57. http://dx.doi.org/10.1609/icaps.v26i1.13762.

Abstract:
This paper addresses the challenge of automated numeric domain model acquisition from observations. Many industrial and commercial applications of planning technology rely on numeric planning models. For example, in the area of autonomous systems and robotics, an autonomous robot often has to reason about its position in space, power levels and storage capacities. It is essential for these models to be easy to construct. Ideally, they should be automatically constructed. Learning the structure of planning domains from observations of action traces has produced successful results in classical planning. In this work, we present the first results in generalising approaches from classical planning to numeric planning. We restrict the numeric domains to those that include fixed action costs. Taking the finite state automata generated by the LOCM family of algorithms, we learn the costs associated with these machines, specifically with the object transitions and the state parameters. We learn action costs from action traces (with only the final cost of the plans as extra information) using a constraint programming approach. We demonstrate the effectiveness of this approach on standard benchmarks.
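The cost-learning setting the abstract describes can be illustrated with a toy sketch: when only each plan's total cost is observed, every trace yields one constraint over the unknown per-action costs. The actions, traces, and cost range below are invented for illustration; the paper itself works from LOCM-generated finite state automata and a full constraint programming solver, not this brute-force search.

```python
from itertools import product

# Invented toy domain: three actions with unknown fixed integer costs.
actions = ["move", "load", "unload"]
# Observed action traces, each paired only with the plan's total cost.
traces = [
    (["move", "move", "load"], 7),
    (["move", "load", "unload"], 6),
    (["move", "move", "move", "unload"], 7),
]

def consistent(costs):
    # A candidate cost assignment must reproduce every observed total.
    return all(sum(costs[a] for a in plan) == total for plan, total in traces)

# Exhaustive search over small integer costs stands in for the CP solver.
solutions = [dict(zip(actions, vals))
             for vals in product(range(1, 6), repeat=len(actions))
             if consistent(dict(zip(actions, vals)))]
print(solutions)
```

With these traces the constraints pin down a single assignment (move = 2, load = 3, unload = 1), mirroring how additional traces shrink the space of cost hypotheses.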
2

Batchakui, Bernabé, Thomas Djotio, Ibrahim Moukouop, and Alex Ndouna. "Object-Based Trace Model for Automatic Indicator Computation in the Human Learning Environments." International Journal of Emerging Technologies in Learning (iJET) 16, no. 21 (November 15, 2021): 26. http://dx.doi.org/10.3991/ijet.v16i21.25033.

Abstract:
This paper proposes a trace model in the form of an object or class model (in the UML sense) which allows the automatic calculation of indicators of various kinds, independently of the computer environment for human learning (CEHL). The model is based on the establishment of a trace-based system that encompasses all the logic of trace collection and indicator calculation. It is implemented in the form of a trace database. It is an important contribution to the field of exploiting learning traces in a CEHL, because it provides a general formalism for modeling the traces and allows the calculation of several indicators at the same time. Also, by including calculated indicators as potential learning traces, our model provides a formalism for classifying the various indicators in the form of inheritance relationships, which promotes the reuse of indicators already calculated. Economically, the model can allow organizations with different learning platforms to invest in only one trace management system. At the social level, it can allow better sharing of trace databases between the various research institutions in the field of CEHL.
3

Jędrzejec, Bartosz, and Krzysztof Świder. "Automatically conducted learning from textually expressed vacationers’ opinions." ITM Web of Conferences 21 (2018): 00024. http://dx.doi.org/10.1051/itmconf/20182100024.

Abstract:
The automatic investigation of consumers' opinions is one of the most interesting potential applications of text analytics. In our study we perform a two-step procedure of learning from textually expressed reviews concerning hotel services offered by a travel company. In the first stage we accomplish the necessary Extract-Transform-Load process utilizing one of the available web portals and the required language resources. In the second stage each of the suitably pre-processed opinions is "linguistically evaluated", which results in a vector of numeric indicators characterizing its sentiment.
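The "vector of numeric indicators" produced in the second stage can be sketched with a minimal lexicon-based scorer. The word lists and the three indicators below are invented for illustration; the study relies on dedicated language resources rather than this toy lexicon.

```python
# Toy sentiment lexicon; the study's actual language resources are far richer.
POSITIVE = {"clean", "friendly", "great", "comfortable"}
NEGATIVE = {"dirty", "noisy", "rude", "broken"}

def sentiment_vector(review):
    """Map one review to numeric indicators: (positive hits, negative hits, polarity)."""
    tokens = [t.strip(".,!?") for t in review.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    polarity = (pos - neg) / max(pos + neg, 1)  # in [-1, 1]; 0 when no hits
    return (pos, neg, polarity)

print(sentiment_vector("Great location, friendly staff, but the room was noisy."))
```

Each review thus maps to a fixed-length numeric vector, which is the form later analysis stages typically require.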
4

Ozerova, G. P. "Usage of Learning Management System Web Analytics in Blended Learning Self-Study Evaluation." Vysshee Obrazovanie v Rossii = Higher Education in Russia 29, no. 8-9 (September 9, 2020): 117–26. http://dx.doi.org/10.31992/0869-3617-2020-29-8-9-117-126.

Abstract:
Learning Management System (LMS) analytics data is proposed to be used in developing algorithms for evaluating students' self-study. Development of such algorithms is relevant considering the annual growth in the number of disciplines that apply blended learning. In the blended learning model, self-study can be done online in the LMS, which makes it possible to analyze patterns of how students interact with learning materials and perform exercises of various complexity. Different criteria and indicators are aggregated into numeric metrics that, following the designed methodology, evaluate the self-study performance of each student. The methodology uses algorithms that evaluate self-study results on the basis of empirical LMS analytics data. The developed algorithms allow us, on the one hand, to interpret empirical data for self-study evaluation and, on the other hand, to correct and improve students' learning paths. This paper presents the results of applying the developed methodology, deployed in the LMS Blackboard, on the example of an Information Technology blended learning course at Far Eastern Federal University.
5

Yang, Linyi, Jiazheng Li, Ruihai Dong, Yue Zhang, and Barry Smyth. "NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-Task Financial Forecasting." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11604–12. http://dx.doi.org/10.1609/aaai.v36i10.21414.

Abstract:
Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy or forecasting may entail. Traditionally, financial forecasting has heavily relied on quantitative indicators and metrics derived from structured financial statements. Earnings conference call data, including text and audio, is an important source of unstructured data that has been used for various prediction tasks using deep learning and related approaches. However, current deep learning-based methods are limited in the way that they deal with numeric data; numbers are typically treated as plain-text tokens without taking advantage of their underlying numeric structure. This paper describes a numeric-oriented hierarchical transformer model (NumHTML) to predict stock returns and financial risk using multi-modal aligned earnings calls data by taking advantage of the different categories of numbers (monetary, temporal, percentages etc.) and their magnitude. We present the results of a comprehensive evaluation of NumHTML against several state-of-the-art baselines using a real-world publicly available dataset. The results indicate that NumHTML significantly outperforms the current state-of-the-art across a variety of evaluation metrics and that it has the potential to offer significant financial gains in a practical trading context.
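The idea of exploiting a number's category and magnitude, rather than treating it as an opaque token, can be sketched as a small feature extractor. The categories and parsing rules below are assumptions for illustration only, not the NumHTML model's actual preprocessing.

```python
import math
import re

def number_features(token):
    """Classify a numeric token and expose its order of magnitude,
    instead of treating it as a plain-text word."""
    if token.endswith("%"):
        cat, value = "percentage", float(token[:-1])
    elif token.startswith("$"):
        cat, value = "monetary", float(token[1:].replace(",", ""))
    elif re.fullmatch(r"\d{4}", token):      # crude year heuristic
        cat, value = "temporal", float(token)
    else:
        cat, value = "other", float(token.replace(",", ""))
    magnitude = int(math.log10(abs(value))) if value else 0
    return cat, magnitude

print(number_features("$1,200"))  # ('monetary', 3)
print(number_features("7.5%"))    # ('percentage', 0)
```

Features of this kind let a model distinguish, say, a $1,200 charge from the year 2021, where a plain-text tokenizer would see two similar digit strings.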
6

Mohssine, Bentaib, Aitdaoud Mohammed, Namir Abdelwahed, and Talbi Mohammed. "Adaptive Help System Based on Learners ‘Digital Traces’ and Learning Styles." International Journal of Emerging Technologies in Learning (iJET) 16, no. 10 (May 25, 2021): 288. http://dx.doi.org/10.3991/ijet.v16i10.19839.

Abstract:
Learning management systems (LMS) such as Claroline, Ganesha, Chamilo, and Moodle are commonly used in e-education (e-learning). Most Technology Enhanced Learning (TEL) systems focus on supporting teachers in the creation and organization of online courses. However, in general, they do not consider the individual differences of each learner. In addition, they do not provide enough indicators to help track the learners. In this paper, we investigate the benefits of integrating learning styles into web-based educational systems. We are also interested in the use of interaction traces in order to address the lack of feedback between the learner and the teacher. Generally, we aim to offer a tool that allows the tutor and the instructional designer to interpret learners' courses, in order to provide help as needed for each individual.
7

Juhaňák, Libor, Karla Brücknerová, Barbora Nekardová, and Jiří Zounek. "Goal Setting and Goal Orientation as Predictors of Learning Satisfaction and Online Learning Behavior in Higher Education Blended Courses." Studia paedagogica 28, no. 3 (April 2, 2024): 39–58. http://dx.doi.org/10.5817/sp2023-3-2.

Abstract:
This study investigated how goal setting and goal orientation are related to student learning behavior and engagement in an online learning environment, and how learning behavior, goal setting, and goal orientation are related to student satisfaction with the course they are studying. A total of 882 students from 76 different courses participated in this study, which used both self-reported data from a questionnaire and indicators based on digital traces in an online learning environment. The results of multilevel regression analyses showed that student ability to set learning goals (i.e., goal setting) was positively related to both student learning satisfaction and student learning behavior. Intrinsic goal orientation positively predicted student satisfaction with the course. Extrinsic goal orientation did not show a significant effect in any of the observed relationships. The analyzed indicators of student learning behavior showed no statistically significant association with learning satisfaction. Possible explanations for these findings are discussed, and limitations and directions for future research are suggested.
8

Salihoun, Mohammed, Fatima Guerouate, Naoual Berbiche, and Mohamed Sbihi. "How to Assist Tutors to Rebuild Groups Within an ITS by Exploiting Traces. Case of a Closed Forum." International Journal of Emerging Technologies in Learning (iJET) 12, no. 03 (March 27, 2017): 169. http://dx.doi.org/10.3991/ijet.v12i03.6506.

Abstract:
Computer Supported Collaborative Learning (CSCL) is a new mode of teaching and one of the popular approaches to the learning process. It allows virtual interactions between groups by providing tools such as chat, internal email, and discussion forums. One of the major problems in this learning process is the neglect and isolation of learners within groups, often caused by group heterogeneity along social, cognitive, or emotional dimensions. The method used here is based on the exploitation of traces left on the online learning platform by learners and groups. The data collected from the environment can be observed and exploited in order to build social and cognitive indicators. Our approach is to design a model that assists the tutor in rebuilding groups that are not homogeneous, in order to prevent their isolation and abandonment. Our model offers the tutor the opportunity to rebuild the groups automatically, based on the quantitative indicator values of all learners. Our work allowed us to test our algorithm from a functional and technical point of view and to identify real variables from collaborative online learning. It also allowed us to evaluate the six indicators proposed for this experiment, showing that they can assist the tutor in rebuilding groups. The results show that after the groups were rebuilt, participation in the forum increased considerably, as did the number of shares and documents deposited in the forum by each group. This high frequency of interaction between learners led them to fruitful collaboration and good-quality work in the end. The integration of other, more advanced indicators may give the tutor better visibility for rebuilding groups that face difficulties.
9

Yudha, Firma, and Alex Haris Fauzi. "Efektivitas Penggunaan Media Kartu Numerik pada Siswa Jenjang Prasekolah." Indonesian Journal of Mathematics and Natural Science Education 2, no. 1 (June 30, 2021): 28–33. http://dx.doi.org/10.35719/mass.v2i1.56.

Abstract:
Learning at the preschool level largely takes the form of playing while learning. Mathematics can also be introduced in preschool education, namely through recognition of the numerals 1-10, which here was done with numeric card media in a classical (whole-class) setting. The purpose of this study was to describe the effectiveness of numeric card media for preschool students in terms of their attitudes and their ability to understand the material. This is a qualitative descriptive study whose subjects were 18 preschool children at Dhiva School. The study found that the preschool students were able to recognize, write, and order the numerals 1-10. The effectiveness of the numeric card media was measured with four indicators: students' willingness to learn fell into the very good category, with an average of 84.4; students' skills also met the very good criteria, with an average score of 83.27; recognizing the numbers 1-10 correctly and precisely likewise reached the very good criteria, with an average of 83; and the last indicator obtained an average score of 80.94, rounded to 81, also in the very good category. Thus, numeric card media are highly effective in improving preschool students' ability to learn to count (numbers).
10

Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations." Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.

Abstract:
Reinforcement learning is one of the most promising machine learning techniques to get intelligent behaviors for embodied agents in simulations. The output of the classic Temporal Difference family of Reinforcement Learning algorithms adopts the form of a value function expressed as a numeric table or a function approximator. The learned behavior is then derived using a greedy policy with respect to this value function. Nevertheless, sometimes the learned policy does not meet expectations, and the task of authoring is difficult and unsafe because the modification of one value or parameter in the learned value function has unpredictable consequences in the space of the policies it represents. This invalidates direct manipulation of the learned value function as a method to modify the derived behaviors. In this paper, we propose the use of Inverse Reinforcement Learning to incorporate real behavior traces in the learning process to shape the learned behaviors, thus increasing their trustworthiness (in terms of conformance to reality). To do so, we adapt the Inverse Reinforcement Learning framework to the navigation problem domain. Specifically, we use Soft Q-learning, an algorithm based on the maximum causal entropy principle, with MARL-Ped (a Reinforcement Learning-based pedestrian simulator) to include information from trajectories of real pedestrians in the process of learning how to navigate inside a virtual 3D space that represents the real environment. A comparison with the behaviors learned using a Reinforcement Learning classic algorithm (Sarsa(λ)) shows that the Inverse Reinforcement Learning behaviors adjust significantly better to the real trajectories.
11

Simorangkir, Frida Marta Argareta, and Dyan Wulan Sari HS. "LITERASI NUMERIK DI SD SWASTA PKMI EFESUS AEK BATU." JS (JURNAL SEKOLAH) 5, no. 4 (September 20, 2021): 32. http://dx.doi.org/10.24114/js.v5i4.28198.

Abstract:
Abstract: Numeric Literacy in the Elementary School PKMI Efesus Aek Batu. The purpose of this study is to describe (1) the implementation of numerical literacy in learning, (2) efforts to overcome obstacles in numerical literacy, (3) supporting factors, and (4) inhibiting factors of numerical literacy. This is a descriptive qualitative study. The results are that (1) the implementation of numerical literacy in learning is based on three stages, namely habituation, development, and learning, and accords with the five indicators of numerical literacy; (2) the efforts made to overcome the obstacles in numerical literacy are in accordance with the literacy goals in schools; (3) the supporting factors are all school members who are the targets of implementing numerical literacy; and (4) the inhibiting factors are reviewed on the basis of the class, school culture, and society. Keywords: literacy, numeric, elementary school
12

Uvarov, A. Yu, and G. M. Vodopian. "About two indicators of the school digital renewal process." Informatics and education 38, no. 5 (November 17, 2023): 5–15. http://dx.doi.org/10.32517/0234-0453-2023-38-5-5-15.

Abstract:
The indicators of the process of digital technologies introduction (school digital renewal indicators) widely used today successfully capture the innovative processes that occur at the initial stages (levels) of digital renewal. At the stages of mature informatization and digital transformation, the changes associated with the expansion of the traditional classroom system and the transition to a personalized and competency-based learning organization come to the fore. The authors propose two indicators that capture changes in the chronotope of study work and the level of teaching individualization. The proposed indicators make it possible to reflect the dynamics of changes at the final stages of digital renewal. The use of school portals, which are introduced at the final stages of digital renewal, provides conditions for assessing changes in the proposed indicators using digital traces, without contacting the educational process participants through surveys or questionnaires. The development of bases, methods, and tools for calculating the proposed indicators is of theoretical and practical interest for solving the problems of managing the digital renewal of education at the final stages of this process.
13

Park, Hyunjae, and Young-June Choi. "Frequency-Based Representation of Massive Alerts and Combination of Indicators by Heterogeneous Intrusion Detection Systems for Anomaly Detection." Sensors 22, no. 12 (June 10, 2022): 4417. http://dx.doi.org/10.3390/s22124417.

Abstract:
Although the application of a wide range of sensors has been generalized through the development of technology, the processing of massive alerts generated through data analysis and monitoring remains a challenge. This problem is also found in cyber security because the intrusion detection system (IDS) produces a tremendous number of alerts. Massive alerts not only significantly increase the resources required for analysis, but also make it difficult to analyze the overall situation of the system. In order to handle massive alerts, we propose using an indicator as a frequency-based representation. The proposed indicator is generated from the categorical parameters of alerts that occur within a unit time, utilizing frequency, and is used for situational awareness with machine learning to detect whether there is a threat or not. The advantage of using indicators is that they can characterize the situation over a period without analyzing individual alerts, which helps security experts recognize the situation in the system and focus on targets that require in-depth analysis. In addition, converting the categorical parameters, which are highly relevant to analysis, into numeric parameters makes it possible to apply machine learning. For performance evaluation, we collect data from an HAI testbed similar to real critical infrastructure and conduct experiments using the indicators and XGBoost, a classification machine learning algorithm, against five well-known vulnerability attacks. Consequently, we show that the proposed method can detect attacks with more than 90 percent accuracy, and that performance is enhanced by using heterogeneous intrusion detection systems.
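The frequency-based indicator described above can be sketched as counting alert categories per unit time window; each window then becomes one numeric feature vector for a classifier such as XGBoost. The alert stream, category set, and window size below are invented for illustration.

```python
from collections import Counter

# Invented alert stream: (timestamp in seconds, alert category).
alerts = [
    (1, "port_scan"), (2, "port_scan"), (2, "auth_fail"),
    (61, "auth_fail"), (62, "auth_fail"), (65, "port_scan"),
]
CATEGORIES = ["port_scan", "auth_fail", "malware"]

def indicators(alerts, window=60):
    """Aggregate alerts into one frequency vector per time window."""
    counts = {}
    for ts, cat in alerts:
        counts.setdefault(ts // window, Counter())[cat] += 1
    # A fixed category order gives every window the same numeric feature layout.
    return {w: [c[cat] for cat in CATEGORIES] for w, c in sorted(counts.items())}

print(indicators(alerts))  # {0: [2, 1, 0], 1: [1, 2, 0]}
```

Because every window collapses to a fixed-length numeric vector, the analyst (or a classifier) can assess a period at a glance instead of reading each alert.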
14

Chemsi, Ghizlane, Mounir Sadiq, Mohamed Radid, and Mohammed Talbi. "Formative E-Assessment and Behavioral Commitment of Students: Case of the Faculty of Science Ben M’sik." International Journal of Emerging Technologies in Learning (iJET) 14, no. 12 (June 27, 2019): 4. http://dx.doi.org/10.3991/ijet.v14i12.10389.

Abstract:
Digital technology contributes to the development of new teaching practices and a new assessment culture [1,2,3]. Various studies [4,5] show that digital technology facilitates the formative assessment process, an assessment mode that favors the learner's commitment and encourages the learner to adopt an efficient learning method [6]. Moreover, delivering this type of assessment through digital media generates a large number of indicators enabling the analysis of students' behavioral commitment in their learning process. The objective of this study is to analyze the impact of formative e-assessment on students' behavioral commitment based on the digital traces of the students' work; this traceability is a very important source of information about the students' behavioral commitment. The assessment was carried out via the interactive Moodle platform and tested on a group of students from the Faculty of Sciences Ben M'sik in Casablanca, Morocco. The results of the experiment, carried out over a period of two years (2015-2016 and 2017-2018), revealed that the majority of the students were committed to taking the assessment test. We observed real involvement on their part, and the action traces of the students, in terms of participation and contribution, could be collected and analyzed by the teacher for feedback purposes.
15

Supriana, I. Wayan Supriana, Made Agung Raharja, and I. Made Satria Bimantara. "Pengembangan Sistem Prediksi Bantuan Program Keluarga Harapan (PKH) Berbasis Machine Learning." SINTECH (Science and Information Technology) Journal 6, no. 1 (April 30, 2023): 26–36. http://dx.doi.org/10.31598/sintechjournal.v6i1.1297.

Abstract:
The Family Hope Program (PKH) is a poverty alleviation program and one of the government's strategies for reducing the poverty line. This program provides cash social assistance to poor families who are included in the list of beneficiary families, with a focus on education and health. The purpose of implementing the PKH program is not only to reduce poverty and develop human resources but to break the chain of poverty. In practice, the implementation of PKH experienced many obstacles that caused the program not to be on target, because the data verification process was not yet effective and was still carried out manually. A process is needed to digitize the distribution and realization of the Family Hope Program. Through this research, a system was developed that can predict the value of PKH beneficiary assistance. The system is based on machine learning, with a prediction model using an Artificial Neural Network (ANN) and the backpropagation learning algorithm. The learning system uses eight PKH assessment indicators as parameters, drawn from data on PKH beneficiaries in Tabanan Regency. The prediction model was tested under two data treatments, with and without data preprocessing. Preprocessing the numeric and categorical attributes gave the best results, with an R2 score of 0.695824 using 500 hidden-layer units and a maximum of 375 epochs.
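The R2 score reported above is the standard coefficient of determination; a minimal sketch of its computation, with invented values rather than the paper's data:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative values only: predictions close to the targets give a score near 1.
print(round(r2_score([3.0, 5.0, 7.0, 9.0], [2.8, 5.2, 6.9, 9.3]), 3))
```

A score of 1 means the model explains all the variance in the targets, so the paper's 0.695824 indicates a moderately good fit.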
16

Begimbetova, Guldana, Ultusyn Zauirbek, and Akbota Seitova. "EVALUATION OF INDIVIDUAL WORK OF STUDENTS IN DISTANCE LEARNING BASED ON A DIGITAL EDUCATIONAL TRACK." Bulletin Series of Physics & Mathematical Sciences 75, no. 3 (September 15, 2021): 173–81. http://dx.doi.org/10.51889/2021-3.1728-7901.21.

Abstract:
The article considers the assessment of students' independent work in distance learning on the basis of digital educational traces. In blended learning, independent work can be performed on an online LMS platform, and educational digital data make it possible to study how students interact with course materials and solve problems. To assess the timing and quality of students' independent work, a set of criteria and indicators is defined, quantitative metrics are selected, and a methodology is proposed that can be used to assess the progress of each student. The method includes algorithms for assessing the success of independent work on the basis of empirical learning analytics data. The developed algorithms make it possible to interpret information on the performance of independent work, evaluate its success, and correct the student's learning trajectory.
17

Cruz-Iglesias, Esther, Pilar Gil-Molina, and Itziar Rekalde-Rodríguez. "A Navigation Chart for Sustainability for the Ocean i3 Educational Project." Sustainability 14, no. 8 (April 15, 2022): 4764. http://dx.doi.org/10.3390/su14084764.

Abstract:
The complex nature of sustainability challenges implies the need to provide students with interdisciplinary learning experiences and environments based on active and reflective learning. To know whether these experiences result in real learning, there must be a way of capturing and measuring the competences required to promote sustainable development using suitable indicators. This paper presents the process of building a competence map that is used as a navigation chart to monitor the sustainable education competences in the Ocean i3 experience. An action-research methodological approach is used involving participant observation, field notes, informal interviews, and documentary analysis. The participants were 38 students, 23 teachers, 3 project coordinators, and 2 researchers, and the context of the study is the five workshops carried out in the Ocean i3 project. The result is a navigation chart that traces the students’ learning journey through dialogue between the competences, learning outcomes, and activities. In conclusion, this approach to curricular planning can serve to inspire other learning environments and experiences on how to tackle the challenge of envisioning their sustainability competence development pathway. Above all, it can serve to improve competence development training schedules for sustainability.
18

Kuntariati, Utik, Putu Dian Yuliani Paramita, and Gede Eka Wahyu. "IMPLEMENTATION OF HIGHER ORDER THINKING SKILLS (HOTS)-BASED EDUCATION TO ENHANCE THE QUALITY OF ENGLISH LANGUAGE FOR STUDENTS AT SDN 1 TIMPAG." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 3 (May 31, 2023): 671. http://dx.doi.org/10.33578/pjr.v7i3.9441.

Abstract:
Education 4.0 is aimed at producing competent and high-quality human resources to carry out activities in the industrial world (Pohan, 2019). In the current implementation of education, human resources are required to obtain information on a very broad scale and to be able to understand various matters, one of which is understanding English. In this study, the researcher used a qualitative descriptive method to describe the HOTS concept to be implemented in English learning activities at SDN 1 Timpag. Qualitative research generally describes the dominant data using words rather than describing phenomena through numbers. Based on the results of the research, some conclusions can be drawn: the HOTS learning method is developed around six indicators, namely Creating, Evaluating, Analyzing, Applying, Remembering, and Understanding. At SDN 1 Timpag there are several implementation ideas that can be improved in applying HOTS to English materials, including storytelling planning, the introduction of animal objects and body anatomy, simple greetings, and the use of grammar.
19

COHEN, WILLIAM W., and PREMKUMAR T. DEVANBU. "AUTOMATICALLY EXPLORING HYPOTHESES ABOUT FAULT PREDICTION: A COMPARATIVE STUDY OF INDUCTIVE LOGIC PROGRAMMING METHODS." International Journal of Software Engineering and Knowledge Engineering 09, no. 05 (October 1999): 519–46. http://dx.doi.org/10.1142/s0218194099000292.

Abstract:
We evaluate a class of learning algorithms known as inductive logic programming (ILP) methods on the task of predicting fault density in C++ classes. Using these methods, a large space of possible hypotheses is searched in an automated fashion; further, the hypotheses are based directly on an abstract logical representation of the software, eliminating the need to manually propose numerical metrics that predict fault density. We compare two ILP systems, FOIL and FLIPPER, and conclude that FLIPPER generally outperforms FOIL on this problem. We analyze the reasons for the differing performance of these two systems, and based on the analysis, propose two extensions to FLIPPER: a user-directed bias towards easy-to-evaluate clauses, and an extension that allows FLIPPER to learn "counting clauses". Counting clauses augment logic programs with a variation of the "number restrictions" used in description logics, and significantly improve performance on this problem when prior knowledge is used. We also evaluate the use of ILP techniques for automatic generation of Boolean indicators and numeric metrics from the calling tree representation.
20

NEVREDINOV, Aleksandr R. "An approach to neural network analysis of text information in the economic assessment of companies." Economic Analysis: Theory and Practice 20, no. 8 (August 30, 2021): 1574–94. http://dx.doi.org/10.24891/ea.20.8.1574.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Subject. When evaluating enterprises, maximum accuracy and comprehensiveness of analysis are important, and the use of various indicators of an organization's financial condition and of external factors provides sufficiently high forecasting accuracy. Many researchers are increasingly focusing on natural language processing to analyze various text sources. This subject is extremely relevant given the need of companies to analyze their activities quickly and extensively. Objectives. The study aims to explore natural language processing methods and sources of textual information about companies that can be used in the analysis, and to develop an approach to the analysis of textual information. Methods. The study draws on methods of analysis and synthesis, systematization, formalization, and comparative analysis, as well as theoretical and methodological provisions contained in domestic and foreign scientific works on text analysis, including for purposes of company evaluation. Results. I offer and test an approach to using non-numeric indicators for company analysis. The paper presents a unique model built on existing developments that have shown their effectiveness. I also substantiate the use of this approach to analyze a company's condition and to include the analysis results in models for the overall assessment of the state of companies. Conclusions. The findings improve scientific and practical understanding of techniques for the analysis of companies and of ways to apply text analysis using machine learning. They can be used to support management decision-making by automating the analysis of a company's own activities and of the other companies in the market with which it interacts.
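The idea of turning company texts into numeric features that can sit alongside financial indicators can be sketched in a few lines. The firm names, documents, lexicon, and feature set below are hypothetical illustrations, not the model from the article:

```python
from collections import Counter

# Toy company disclosures (invented text, for illustration only).
docs = {
    "firm_a": "revenue growth strong demand new contracts",
    "firm_b": "losses litigation risk declining demand",
}
# Hypothetical negative-tone lexicon.
neg_lexicon = {"losses", "litigation", "risk", "declining", "weak"}

def text_features(text):
    """Derive simple numeric features from free text: token count and the
    share of tokens drawn from the negative lexicon."""
    tokens = text.split()
    tf = Counter(tokens)
    neg_share = sum(tf[w] for w in neg_lexicon) / len(tokens)
    return {"n_tokens": len(tokens), "neg_share": round(neg_share, 3)}

feats = {name: text_features(t) for name, t in docs.items()}
print(feats)
```

The resulting numbers could then feed any downstream scoring model alongside conventional financial ratios.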
21

Legashev, L. V., I. P. Bolodurina, L. S. Grishina, I. A. Lashneva, and A. A. Sermyagin. "Prediction of milk quantitative traits based on infrared spectroscopy using machine-learning methods." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 22, no. 3 (2022): 47–56. http://dx.doi.org/10.14529/ctcr220305.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fourier transform mid-infrared spectroscopy is a fast and cheap way to analyze cow's milk samples to determine fat, protein, lactose, and other quantitative and qualitative indicators of milk quality. Modern tools for data analysis can reveal the relationship between different pairs of quantitative and qualitative characteristics of milk. Purpose of the study. To predict some key milk quality traits from infrared spectroscopy data and to study the accuracy of the developed mathematical model. Methods. The work was carried out in the winter of 2022 on an experimental herd of Holsteinized black-and-white cattle (Krasnodar Territory). The analysis of milk traits was carried out with an automatic MilkoScan analyzer (FOSS) using infrared spectroscopy, exporting the spectra obtained when analyzing the composition of raw milk. 23 indicators of quantitative milk traits were studied: mass fraction of fat, protein (true and total), lactose, DSMR (dry skimmed milk residue), dry matter, casein, traces of acetone and beta-hydroxybutyrate, urea, freezing point, acidity of milk, myristic, palmitic, stearic, and oleic fatty acids (FA), long-chain, medium-chain, and short-chain fatty acids, monounsaturated and polyunsaturated fatty acids, saturated fatty acids, and trans fatty acids. Methods based on linear regression, regularized linear regression models (Ridge, Lasso, and ElasticNet), polynomial regression, partial least squares regression (PLSRegression), and Bayesian regression were considered for the problem of predicting key milk traits. A method for reducing the dimensionality of the infrared spectroscopy data was implemented based on a random search over readings within a window, and the most significant features were identified. Results.
Models have been developed for predicting six main indicators of milk quality – mass fraction of fat ('Fat'), mass fraction of casein ('Cas.B'), myristic ('C14:0') and oleic ('C18:1') fatty acids, and monounsaturated ('MUFA') and polyunsaturated ('PUFA') fatty acids – with a mean absolute error not exceeding 0.016. Conclusion. The results obtained in the course of the study will further improve the predictive ability of the equation for determining the quality and composition of milk according to new breeding traits of milk productivity, reduce analysis costs, and allow monitoring the health of animals at an early stage.
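One of the regularization approaches named in the abstract, Ridge regression, has a simple closed form and can be sketched on synthetic stand-in spectra. The data shapes, informative wavelength bins, and noise level below are invented for illustration and are not the MilkoScan data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FTIR spectra: 200 samples x 50 wavelength bins
# (the real spectra are much wider; this is illustrative only).
X = rng.normal(size=(200, 50))
true_w = np.zeros(50)
true_w[[3, 17, 31]] = [0.8, -0.5, 0.3]          # a few informative bins
y = X @ true_w + rng.normal(scale=0.05, size=200)  # e.g. mass fraction of fat

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Train on 150 samples, evaluate mean absolute error on the held-out 50.
w = ridge_fit(X[:150], y[:150], alpha=1.0)
mae = np.mean(np.abs(X[150:] @ w - y[150:]))
print(f"hold-out MAE: {mae:.4f}")
```

Lasso and ElasticNet differ only in the penalty term and need iterative solvers rather than this closed form.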
22

Balanev, Dmitry Yu, and Viktor A. Shamakov. "Diagnostic markers of human motor activity in a digital learning environment." Vestnik Tomskogo gosudarstvennogo universiteta, no. 485 (2022): 138–44. http://dx.doi.org/10.17223/15617793/485/15.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article aims to study the dynamics of psychological and psychophysiological indicators of students' cognitive activity in a digital learning environment. As a research platform, we used the well-known speed–accuracy tradeoff task, adapted for work in LMS Moodle using a set of plug-ins that implement both the stimulus situation itself and the ability to monitor the characteristics of the user's motor activity displayed in the track of the cursor control device (mouse tracking). As part of the study, LMS users performed a cognitive test, included in the digital learning environment and demonstrating the manifestation of Fitts's law, in two versions: with instructions emphasizing either the accuracy or the speed of searching for the center of a reference figure appearing on the computer screen. The results were obtained in the form of computer-mouse movement tracks; the main characteristics of the subjects' motor activity – the time to complete both parts of the test, the distance traveled, the accuracy of hitting targets, and speed and acceleration – were analyzed and evaluated. All participants' indicators differed significantly depending on the stage of the experiment. The results were presented as a set of formal statistical generalizations and visualizations reflecting the characteristic features of the subjects' actions, the distance traveled, and the speed and acceleration on all segments of the path. As a result of the data analysis, 6 groups of users were identified, differing in the degree of expression of indicators of speed, accuracy, and amplitude of movement. Of particular interest were such traces of a person's actions as were, on the one hand, an expression of their skills or experiences and, on the other hand, available not only for subjective but also for objective analysis.
As an example, we can point to handwriting, which became one of the promising objects of study for psychologists of the early twentieth century. The concept of “handwriting” was easily transferred to other ways of transmitting information, for example, to the work of an operator with a telegraph key when transmitting messages in Morse code. Thus, the use of a computer-mouse monitoring system in a digital learning environment provides an opportunity to quickly and adaptively support interaction with users, taking into account their psychophysiological characteristics and emotional state. The proposed diagnostic system allows us not only to determine the quantitative characteristics of the path traveled and the click map of the computer mouse, but also to assign the user to one or another group, taking into account their psychological characteristics.
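The motor-activity characteristics described (distance traveled, speed, acceleration along a mouse track) reduce to simple finite differences over timestamped cursor samples. This sketch uses a synthetic track, not the Moodle plug-in's data format:

```python
import numpy as np

# Hypothetical mouse track: (t, x, y) samples at roughly 60 Hz.
t = np.linspace(0.0, 1.0, 61)
x = 300 * t                        # straight horizontal sweep, px
y = 20 * np.sin(2 * np.pi * t)     # small vertical wobble, px

dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
step = np.hypot(dx, dy)            # per-segment displacement, px
path_length = step.sum()           # total distance traveled, px
speed = step / dt                  # px/s on each segment
accel = np.diff(speed) / dt[1:]    # px/s^2 between segments

print(f"path length: {path_length:.1f} px, mean speed: {speed.mean():.1f} px/s")
```

Summaries of these per-segment series (means, extrema, variability) are the kind of indicators on which users could then be grouped.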
23

Chang, Lipeng, Yuechuan Wei, Shuiyu He, and Xiaozhong Pan. "Research on Side-Channel Analysis Based on Deep Learning with Different Sample Data." Applied Sciences 12, no. 16 (August 18, 2022): 8246. http://dx.doi.org/10.3390/app12168246.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the in-depth integration of deep learning and side-channel analysis (SCA) technology, the security threats faced by embedded devices based on the Internet of Things (IoT) have become increasingly prominent. By building a neural network model as a discriminator, the correlation between the side information leaked by the cryptographic device, the key of the cryptographic algorithm, and other sensitive data can be explored. Then, the security of cryptographic products can be evaluated and analyzed. For the AES-128 cryptographic algorithm, combined with the CW308T-STM32F3 demo board on the ChipWhisperer experimental platform, a Correlation Power Analysis (CPA) is performed using the four most common deep learning methods: the multilayer perceptron (MLP), the convolutional neural network (CNN), the recurrent neural network (RNN), and the long short-term memory network (LSTM) model. The performance of each model is analyzed in turn when the samples are small data sets, sufficient data sets, and data sets of different scales. Finally, each model is comprehensively evaluated by indicators such as classifier accuracy, network loss, training time, and rank of side-channel attacks. The experimental results show that the convolutional neural network CNN classifier has higher accuracy, lower loss, better robustness, stronger generalization ability, and shorter training time. The rank value is 2, that is, only two traces can recover the correct key byte information. The comprehensive performance effect is better.
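The rank indicator used to evaluate the attacks can be illustrated independently of any particular network: accumulate per-trace scores for all 256 key-byte hypotheses and count how many rivals outscore the true key. The scores below are synthetic stand-ins, not the output of the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a trained discriminator's output: a log-probability
# score for each of the 256 key-byte hypotheses on every attack trace.
n_traces, n_keys, true_key = 50, 256, 0x3C
scores = rng.normal(size=(n_traces, n_keys))
scores[:, true_key] += 1.0   # the correct hypothesis scores higher on average

# Rank of the true key after accumulating evidence over the first k traces;
# rank 0 means the key byte is uniquely recovered.
cumulative = np.cumsum(scores, axis=0)
ranks = [int((row > row[true_key]).sum()) for row in cumulative]
print("rank after all 50 traces:", ranks[-1])
```

The number of traces needed for the rank to reach and stay at 0 is the figure of merit the abstract reports (a rank of 2 after two traces, in the paper's case).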
24

Oh, Jiwon, Heesu Hwang, Yoonmi Nam, Myeong-Il Lee, Myeong-Jin Lee, Wonseok Ku, Hye-Won Song, et al. "Machine Learning-Assisted Gas-Specific Fingerprint Detection/Classification Strategy Based on Mutually Interactive Features of Semiconductor Gas Sensor Arrays." Electronics 11, no. 23 (November 24, 2022): 3884. http://dx.doi.org/10.3390/electronics11233884.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A high-performance machine learning-assisted gas sensor strategy based on the integration of supervised and unsupervised learning with a gas-sensitive semiconductor metal oxide (SMO) gas sensor array is introduced. A 4-SMO sensor array was chosen as a test sensor system for detecting carbon monoxide (CO) and ethyl alcohol (C2H5OH) mixtures using 15 different combinations. Gas sensing detection/classification was performed with different numbers of gas sensors and machine learning algorithms. K-means clustering was successfully employed to rationally identify the similarity features of targeted gases among 4 different groups, i.e., matrix gas, two single-component gases, and one two-gas mixture, based only on unlabeled voltage-based gas sensing information. Detailed classification was performed through a multitude of supervised algorithms, i.e., 2-layer artificial neural networks (ANNs), 4-layer deep neural networks (DNNs), 1-dimensional convolutional neural networks (1D CNNs), and 2-dimensional CNNs (2D CNNs). The numerical-based DNNs and image-based CNNs are shown to be excellent approaches for gas detection and classification, as indicated by the highest accuracy and lowest loss indicators. Through the analysis of the influence of the number of sensors on the arrayed gas sensor system, the application of machine learning methodology to an arrayed gas sensor system demonstrates several unique features, i.e., a data augmentation methodology, a machine learning approach combining K-means clustering and neural networks, and a systematic approach to optimized sensor combinations, potentially leading to practical sensor networks based on chemical sensors. Even two-SMO sensor combinations are shown to be highly effective in gas discrimination against diverse gas environments, assisted through numeric-based DNNs and image-based 1D CNNs, overcoming the simple clustering proposed through unsupervised K-means clustering.
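A minimal version of the unsupervised step, grouping arrayed-sensor voltage vectors with Lloyd's k-means, can be sketched as follows. The sensor responses are synthetic and the implementation is an illustrative sketch, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy voltage responses of a 4-sensor array for three hypothetical gas groups
# (invented numbers, not the paper's measurements).
centers = np.array([[0.2, 0.8, 0.1, 0.5],
                    [0.7, 0.3, 0.9, 0.2],
                    [0.5, 0.5, 0.4, 0.9]])
X = np.vstack([c + rng.normal(scale=0.05, size=(40, 4)) for c in centers])

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: assign points to the nearest centroid,
    then recompute each centroid as the mean of its members."""
    r = np.random.default_rng(seed)
    mu = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Keep the old centroid if a cluster empties out.
        mu = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                       else mu[j] for j in range(k)])
    return labels, mu

labels, mu = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

In the combined scheme, the cluster assignments found this way would then be refined by a supervised classifier such as a DNN or CNN.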
25

Ulumudin, Ikhya, Asma Aisha, and Ferdi Widiputera. "The Implementation of Knowledge Assessment In Curriculum 2013 in Elementary Schools." Technium Social Sciences Journal 7 (May 5, 2020): 86–97. http://dx.doi.org/10.47577/tssj.v7i1.442.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this study was to determine the understanding and implementation of the knowledge assessment carried out by teachers at the elementary school level, covering planning, implementation, processing of results, and utilization of results, and to identify problems in implementing knowledge assessment. This study used a qualitative approach. Research took place in Sleman Regency, Yogyakarta Province. The research subjects were public and private elementary school teachers who had taught in classes applying Curriculum 2013. The research instruments consisted of questionnaires and FGD (focus group discussion) guidelines. The results showed that, in assessment planning, teachers lack understanding of the formulation of instrument outlines, especially in describing the Basic Competency in terms of question indicators, writing questions on the knowledge aspects based on the indicators, and making scoring guidelines. In the implementation of the assessment, some teachers did not understand the use of various assessment techniques for the knowledge aspect. The frequency of daily knowledge assessments varies: for every KD (basic competency), every day, for every sub-theme, for several sub-themes, or for every theme. In processing assessment results, some elementary school teachers still lack understanding of processing and reporting knowledge assessment, whether in the form of numeric scores, predicates, or descriptions. In the utilization of the assessment, all elementary school teachers in Sleman Regency have applied the assessment-of-learning function, as each daily assessment result is used by the teacher to review student achievement in each KD. In the aspect of assessment for learning, some teachers have made use of assessments for remedial activities and lesson enrichment and to evaluate the learning process.
Lastly, regarding the use of assessment as learning, some teachers have provided students with their assessment results and used them to provide guidance and advice to students.
26

Muhtadi, Muhtadi, Sutama Sutama, Sofyan Anif, and Harun Joko Prayitno. "PENGEMBANGAN KEMITRAAN KAWASAN MATEMATIKA (MATH MASTER ZONE) DI KECAMATAN SAWAHAN KABUPATEN NGANJUK." Warta LPM 18, no. 1 (March 1, 2015): 20–28. http://dx.doi.org/10.23917/warta.v18i1.1163.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Through the PEMITRA community service activities in Kecamatan Sawahan, Kabupaten Nganjuk, education, training, and mentoring were conducted for elementary school teachers, principals, supervisors, and the persons in charge of the Math Masters Zone program, to improve their grasp of basic mathematical concepts and teaching methods. Training in the Math Masters learning methods was carried out so that teachers had a correct understanding of basic mathematical concepts and could teach them to their students in an easy and enjoyable way. The activities produced very pleasing results: in the 2013/2014 school year, the mathematics test scores of elementary school students in Kecamatan Sawahan averaged 8.5, with the highest score reaching 10.0. Math Olympiad activities were also quite lively, as students became more enthusiastic and excited. These indicators of progress show that the Math Master method introduced by the PEMITRA executive team has helped solve the problem of Numeric Phobia among elementary school students in Kecamatan Sawahan; mathematics came to be considered an easy and very enjoyable subject by the students. The remaining constraint is that the support of the Nganjuk government has not been consistent in developing Kecamatan Sawahan as a Math Master Zone.
27

Silva-Filho, José Humberto, Sonia Regina Pasian, and Francisco de Assis Carvalho do Vale. "Typical performance of elderly patients with Alzheimer disease on the Wisconsin Card Sorting Test (WCST)." Dementia & Neuropsychologia 1, no. 2 (June 2007): 181–89. http://dx.doi.org/10.1590/s1980-57642008dn10200011.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract The Wisconsin Card Sorting Test (WCST) is a neuropsychological assessment tool designed to assess executive functions, frequently used in cases of cognitive disorders. However, Brazilian neuroscientific settings lack standardization studies of psychological assessment instruments, especially in the neuropsychological area. Thus, the assessment of clinical groups including dementias, and particularly Alzheimer's disease (AD) patients, may be compromised by the lack of analytical references. Objective: To characterize the performance of elderly patients with Alzheimer's disease on the WCST, aiming at establishing preliminary evaluative norms. Method: Thirty-six elderly patients (mean age of 75.8 years) with mild AD from a teaching hospital were assessed using the printed form of the WCST. Results: The elderly patients with AD had impaired performance on the various WCST technical indicators, highlighting cognitive deficit with traces of stereotyped behavior and failures in working memory, conceptualization and learning. The results allowed preliminary norms to be defined for elderly AD patients on the various WCST indicators, grading their performance in eight diagnostic areas and yielding the identification of different levels of impairment of executive functions in these elderly patients. Conclusions: The results demonstrated specific aspects of performance on the WCST by elderly people with AD, highlighting the effect of the disease on cognitive performance and executive functioning. These normative references, although preliminary, make a significant contribution to the neuropsychological assessment of AD patients in the Brazilian context, within the informative scope of the WCST.
28

Gonzalez Romero, Alicia. "Un Hypertexto para la enseñanza de la Estadística No Paramétrica: Propuesta didáctica para el proceso de enseñanza-aprendizaje de las Ciencias Sociales / A Hypertext for the Teaching of Nonparametric Statistics: Didactic Proffer for the Teaching-Learning Process of the Social Sciences." Revista Internacional de Aprendizaje en Ciencia, Matemáticas y Tecnología 3, no. 2 (October 28, 2016): 121–30. http://dx.doi.org/10.37467/gka-revedumat.v3.249.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
ABSTRACT: A hypertext was created as a didactic guide to construct and explain the different hypothesis tests of nonparametric statistics, drawing on Raimond Duval's theory of semiotic registers, Donald Shon's problem-based learning, and exercises of a social nature. Microsoft Excel was used as didactic software, and the tests were developed in line with constructivism. The explanation of what a hypothesis test consists of began with the development of the binomial and continued with the analysis and solution of exercises on the hypothesis tests included in the nonparametric statistics syllabus. Working with a hypertext and Excel saved time on data entry and numeric calculations. The indicators used to measure learning agree in showing that working with the hypertext favors the learning of nonparametric statistics. Nevertheless, the students recommended adding more theory to explain each topic and adding more exercises, from which we conclude that the method is perfectible.
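The progression the hypertext follows, from the binomial distribution to a nonparametric test, can be illustrated with an exact two-sided sign test in a few lines of stdlib Python (a sketch, not the Excel implementation used in the article):

```python
from math import comb

def sign_test_p(n_pos, n, p=0.5):
    """Exact two-sided sign test: sum the Bin(n, p) probabilities of all
    outcomes at least as unlikely as the observed count of positives."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    observed = pmf[n_pos]
    return min(1.0, sum(q for q in pmf if q <= observed + 1e-12))

# Example: 9 of 10 paired differences are positive.
p_val = sign_test_p(9, 10)
print(f"two-sided p = {p_val:.4f}")  # → two-sided p = 0.0215
```

Because the null distribution is just Bin(n, 0.5), the same table of binomial probabilities a student builds in Excel is all the test needs.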
29

Zhang, Huan, Yibin Yao, Chaoqian Xu, Wei Xu, and Junbo Shi. "Transformer-Based Global Zenith Tropospheric Delay Forecasting Model." Remote Sensing 14, no. 14 (July 11, 2022): 3335. http://dx.doi.org/10.3390/rs14143335.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Zenith tropospheric delay (ZTD) plays an important role in high-precision global navigation satellite system (GNSS) positioning and meteorology. At present, commonly used ZTD forecasting models comprise empirical models, meteorological parameter models, and neural network models. The empirical model can only fit approximate periodic variations, and its accuracy is relatively low. The accuracy of the meteorological parameter model depends heavily on the accuracy of the meteorological parameters. The recurrent neural network (RNN) is suitable for short-term series prediction, but for long-term series its ZTD prediction accuracy is clearly reduced. Long short-term memory (LSTM) has superior forecasting accuracy for long-term ZTD series; however, the LSTM model is complex, cannot be parallelized, and is time-consuming. In this study, we propose a novel ZTD time-series forecasting approach utilizing transformer-based machine-learning methods that are popular in natural language processing (NLP), forecasting global ZTD with training data provided by the Global Geodetic Observing System (GGOS). The proposed transformer model leverages self-attention mechanisms in its encoder and decoder modules to learn complex patterns and dynamics from long ZTD time series. The numeric results showed that the root mean square error (RMSE) of the forecast ZTD was 1.8 cm, and the mean bias, STD, and MAE were 0.0, 1.7, and 1.3 cm, respectively, with R equal to 0.95, which is superior to the LSTM, RNN, convolutional neural network (CNN), and GPT3 series models. We investigated the global distribution of these accuracy indicators, and the results demonstrated that the accuracy of the transformer ZTD forecasting model over continents was superior to that over maritime areas, and the accuracy at high latitudes was superior to that at low latitudes. In addition to the overall accuracy improvement, the proposed transformer ZTD forecast model also mitigates accuracy variations in space and time, thereby guaranteeing high accuracy globally.
This study provides a novel method to estimate the ZTD, which could potentially contribute to precise GNSS positioning and meteorology.
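The accuracy indicators reported (RMSE, bias, STD, MAE, R) are straightforward to compute from paired forecast and reference series. The series below are synthetic, chosen only to mimic the reported error scale:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reference ZTD series (cm) and a forecast with ~1.8 cm errors.
ztd_ref = 240.0 + 5.0 * np.sin(np.linspace(0, 6 * np.pi, 500))
ztd_fc = ztd_ref + rng.normal(scale=1.8, size=500)

err = ztd_fc - ztd_ref
rmse = np.sqrt(np.mean(err ** 2))          # root mean square error
bias = err.mean()                          # mean bias
std = err.std()                            # standard deviation of errors
mae = np.abs(err).mean()                   # mean absolute error
r = np.corrcoef(ztd_fc, ztd_ref)[0, 1]     # Pearson correlation

print(f"RMSE {rmse:.2f} cm, bias {bias:.2f}, STD {std:.2f}, "
      f"MAE {mae:.2f}, R {r:.3f}")
```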
30

Sota, Jani. "Contribution to the General History of School Health Service in Albania (From Origins to Nowadays)." Interdisciplinary Journal of Research and Development 9, no. 1 (March 20, 2022): 13. http://dx.doi.org/10.56345/ijrdv9n103.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We are all witnesses of the great traces left by the school health service in Albania, especially after the Second World War, in protecting the health of preschool- and school-age children, when health and education bodies and personalities defined the role of health care in kindergartens and schools. The purpose of this study is to highlight the common features and characteristics of the health service in preschool and school institutions, in its care for the physical and mental health of children and students, hygienic care and the prevention of infectious diseases, as well as health education in the learning process. Methodology: study of source materials, such as newspapers, magazines, and monographs; a comparison has been made between the school health service in Albania and those of western countries over the years, in order to derive their common features and goals. Procedure: analysing in detail some aspects of the identity of the school service and contemporary features of our country and of western countries, we consider: the historical development of the school health service in Albania, compared to that of western countries; Albanian government policies to protect the health of children and students; the organization of the health service in kindergartens and schools of towns and villages; the regulations of the school health service, which determined the character of this service, both prophylactic and medical, according to the recommendations of the World Health Organization (WHO), to ensure the development of physical and psychomotor norms of children and students; indicators of the improvement of the hygienic-sanitary conditions of preschool and school institutions after 1990; etc. Received: 4 January 2022 / Accepted: 24 February 2022 / Published: 20 March 2022
31

Kryukov, V. V., I. A. Ryzhova, and I. N. Emelianova. "Dynamics of Psychosocial, Psychopathological and Neuropsychological Characteristics in Clean-up Workers of the Consequences of Chernobyl Disaster: Results of 30-years Study." Doctor.Ru 21, no. 8 (2022): 52–59. http://dx.doi.org/10.31550/1727-2378-2022-21-8-52-59.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Study Objective: To assess the possibilities of social adaptation in persistent psychopathological, including cognitive, impairments. Study Design: Longitudinal study. Materials and Methods. A 30-year observation, treatment, and rehabilitation of clean-up workers of the accident at the Chernobyl nuclear power plant, who underwent in-depth clinical, psychopathological, and neuropsychological examination at the Moscow Research Institute of Psychiatry, was carried out. 373 people were selected for the study. A dynamic psychopathological and neuropsychological assessment of their condition is given, as well as a description of their social status and adaptive capabilities. In the neuropsychological section of the study, the results of systematic cognitive training were evaluated. The neuropsychological test battery included methods for assessing memory, attention, and thinking. Study Results. The subjects had modally non-specific memory impairments, in particular due to insufficient fixation of mnestic traces, a limited ability to analyze and rethink information in the process of memorization, a decrease in the overall level of activity, exhaustion of voluntary attention, general mental fatigue and stress in the process of mental activity, and difficulties in learning new skills. However, the quantitative characteristics of cognitive functions did not change from hospitalization to hospitalization, which means that there was no progressive cognitive decline in the cohort. Psychocorrectional measures consisted of cognitive training that stimulates the development of cognitions and taps unused cognitive resources.
The level of somatic and mental well-being of Chernobyl disaster consequences participants correlates with that of patients with organic, cerebrovascular diseases and differs statistically significantly from the indicators of healthy individuals, the same relationships were noted when assessing microsocial support and the level and nature of self-perception. The results of the assessment of social functioning showed a fairly stable social adaptation of former clean-up workers and positive personality attitudes. Conclusion. Systematic medical and rehabilitation care, the formation of therapeutic partnerships based on the preserved personal qualities of patients made it possible to significantly compensate for psychopathological disorders, slow down cognitive decline and ensure fairly stable social adaptation. Keywords: participants of the elimination of Chernobyl disaster consequences, longitudinal study, long-term medical and rehabilitation assistance, the possibility of correcting mental disorders, social adaptation.
32

Olsen, A. L., L. H. Magnussen, L. H. Skjaerven, J. Assmus, M. A. Sundal, O. Furnes, G. Hallan, and L. I. Strand. "FRI0644-HPR PATIENT EDUCATION AND BASIC BODY AWARENESS THERAPY VERSUS PATIENT EDUCATION ONLY IN PATIENTS WITH HIP OSTEOARTHRITIS: A RANDOMIZED CONTROLLED TRIAL." Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 926.1–926. http://dx.doi.org/10.1136/annrheumdis-2020-eular.2540.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Patients with hip osteoarthritis tend to develop stereotyped and energy-demanding movement strategies with potential negative effects on disease progression and daily life functioning. A multi-perspective view on movement quality is applied in the physiotherapy modality Basic Body Awareness Therapy (BBAT), with its movement awareness learning pedagogy. BBAT has been found beneficial for functional movement quality, symptoms, and psychological aspects of health in patients with various long-lasting conditions. Objectives: To investigate the short-term (6 months) effects of BBAT, added to Patient Education (PE), compared with PE only in patients with hip osteoarthritis. Methods: A block-randomized controlled trial with 6 months follow-up was conducted. Patients were allocated to 3.5 hours of PE plus 12 weekly sessions of BBAT, each lasting 90 minutes (intervention group), or to PE only (comparison group). Primary outcomes: Numeric Rating Scale (NRS) for pain during walking and Hip Osteoarthritis Outcome Score, subscale Activities of Daily Life (HOOS A). Secondary outcomes included physical capacity tests: Chair test, Stairs test, six-minute walking test (6MWT); movement quality evaluation: Body Awareness Rating Scale – Movement Quality and Experience (BARS-MQE); and self-reported measures: activity level (UCLA), function (HOOS subscales P, S, SP, QL and Harris Hip Score, HHS), self-efficacy (Arthritis Self-Efficacy Scale, ASES), and health (EuroQol, EQ-5D-5L). Patient Global Impression of Change (PGIC) on pain and function was registered at 6 months. ANCOVA of change was used in intention-to-treat and per-protocol analyses. Results: 101 patients were included, average age 63 years, 80% female. There was no difference in change between the groups on the primary outcomes at 6 months. However, movement quality (BARS-MQE) improved more (p<0.001) in the intervention group, and the patients reported more improvement in pain (PGIC) than the comparison patients (p=0.031).
In per-protocol analysis, including 30 patients who attended at least 10 BBAT sessions, intervention patients had statistically significantly better scores on self-efficacy (ASES pain, p=0.049), health (EQ-5D-5L VAS, p=0.037) and function (HHS, p=0.029) than the comparison patients. Conclusion: Patients with hip osteoarthritis were not found by the primary outcome measures to improve more with BBAT added to PE than with PE alone. Movement quality, however, improved significantly more in the intervention group. With sufficient compliance to BBAT, significantly more improvement in additional health indicators was demonstrated. References: [1] Egloff C, Hugle T, Valderrabano V. Biomechanics and pathomechanisms of osteoarthritis. Swiss Med Wkly. 2012;142:w13583. [2] Smith TO, Purdy R, Lister S, Salter C, Fleetcroft R, Conaghan P. Living with osteoarthritis: Systematic review and meta-ethnography. Scand J Rheumatol. 2014;43(6):441-452. [3] Skjaerven LH, Kristoffersen K, Gard G. An eye for movement quality: a phenomenological study of movement quality reflecting a group of physiotherapists' understanding of the phenomenon. Physiother Theory Pract. 2008;24(1):13-27. Acknowledgments: The authors thank the funding institution, the Norwegian Fund for Post-graduate Training in Physiotherapy. Disclosure of Interests: None declared.
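The "ANCOVA of change" analysis named above can be illustrated with a small sketch on invented data: the change score is regressed on a group indicator while adjusting for the baseline value. All numbers, effect sizes and variable names here are hypothetical and are not the trial's data; plain least squares stands in for a statistics package.

```python
# Minimal sketch of an ANCOVA of change on hypothetical data:
# change score ~ group + baseline covariate.
import numpy as np

rng = np.random.default_rng(0)
n = 50
baseline = rng.normal(6.0, 1.5, n)        # e.g. NRS pain at baseline (invented)
group = np.repeat([0, 1], n // 2)         # 0 = comparison, 1 = intervention
change = -1.0 * group - 0.3 * baseline + rng.normal(0, 1.0, n)

# Design matrix: intercept, group indicator, baseline covariate
X = np.column_stack([np.ones(n), group, baseline])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)

# Baseline-adjusted between-group difference in change
adjusted_group_effect = beta[1]
print(round(adjusted_group_effect, 2))
```

The group coefficient is the baseline-adjusted effect that the trial's ANCOVA would test; a real analysis would add standard errors and p-values on top of this.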
33

Tomczyk, Łukasz, Darwin Muñoz, Julio Perier, Magali Arteaga, Gabriel Barros, Mariana Porta, and Enzo Puglia. "ICT AND PRESERVICE TEACHERS. SHORT CASE STUDY ABOUT CONDITIONS OF TEACHER PREPARATION IN: DOMINICAN REPUBLIC, ECUADOR, URUGUAY AND POLAND." Knowledge International Journal 32, no. 1 (July 26, 2019): 15–24. http://dx.doi.org/10.35120/kij320115t.

Abstract:
The paper aims at presenting the most important indicators of teacher preparation in the context of the developing information society. The text was written as part of the SELI project, an international study which seeks to answer the question about the factors determining the efficient use of ICT among pedagogy students. This theoretical study joins the debate on the curricula and the local, national and global conditions related to the education of teachers of the future. The text presents data from three countries from Latin America and the Caribbean region and one from Europe. The development of Information and Communication Technologies (ICTs) is key for any society that wishes to develop and face the local and global challenges that arise every day. However, teachers play a fundamental role in ensuring that these technologies are taught and reach the entire population adequately. During this process, an important number of challenges and problems must be faced, as a result of the current context in which the Dominican Republic, as a developing country, finds itself. In the present work we give a brief description of the main challenges faced by ICT teachers. In the section referring to Ecuador, a general outline of the academic education of preservice teachers is presented. Numeric data are presented about the education system and the preparation of future teachers. The legal grounds are also described. While discussing the conditions in Ecuador, the authors focus on the technical aspects of education, like the use of e-learning technologies. They also refer briefly to the need for lifelong learning. The purpose of this paper is to provide an overview of the way the challenges of ICT are approached in preservice teacher education in Uruguay. Initially, some background information is provided about how preservice teacher education is organized at the different levels.
Secondly, the focus is on the way Information and Communication Technologies (ICT) have been included in the preservice teacher education and training curricula and the changes that this inclusion is undergoing. The overview shows a transitioning process, from a rather dispersed and fragmented approach with a variety of courses, projects and programs, to a more consistent and centralized one. The article finishes with a few conclusions and ponders some perspectives for Uruguay, joining an ongoing debate around unanswered questions and identified challenges. The Polish section presents several important changes associated with the reform of the education system, which affect professional teacher preparation. The authors also present examples of the academic curricula for the Information Technologies and Media in Education courses carried out at the Pedagogical University of Cracow. Based on these short analyses, we have noticed that despite the geographical, language and cultural differences, teacher preparation in the area of ICT use shows many common features, which constitute global challenges. These shared elements include: legal systems preparing teachers to perform their profession, the development of digital literacy, modernisation of the academic curricula and technical infrastructure, and motivation to use ICT solutions among preservice teachers.
34

Pichler, Axel, and Nils Reiter. "Zur Operationalisierung literaturwissenschaftlicher Begriffe in der algorithmischen Textanalyse. Eine Annäherung über Norbert Altenhofers hermeneutische Modellinterpretation von Kleists Das Erdbeben in Chili." Journal of Literary Theory 15, no. 1-2 (November 6, 2021): 1–29. http://dx.doi.org/10.1515/jlt-2021-2008.

Abstract:
Abstract The present article discusses and reflects on possible ways of operationalizing the terminology of traditional literary studies for use in computational literary studies. By »operationalization«, we mean the development of a method for tracing a (theoretical) term back to text-surface phenomena; this is done explicitly and in a rule-based manner, involving a series of substeps. This procedure is presented in detail using as a concrete example Norbert Altenhofer’s »model interpretation« (Modellinterpretation) of Heinrich von Kleist’s The Earthquake in Chile. In the process, we develop a multi-stage operation – reflected upon throughout in terms of its epistemological implications – that is based on a rational-hermeneutic reconstruction of Altenhofer’s interpretation, which focuses on »mysteriousness« (Rätselhaftigkeit), a concept from everyday language. As we go on to demonstrate, when trying to operationalize this term, one encounters numerous difficulties, which is owing to the fact that Altenhofer’s use of it is underspecified in a number of ways. Thus, for instance, and contrary to Altenhofer’s suggestion, Kleist’s sentences containing »relativizing or perspectivizing phrases such as ›it seemed‹ or ›it was as if‹« (Altenhofer 2007, 45) do by no means, when analyzed linguistically, suggest a questioning or challenge of the events narrated, since the unreal quality of those German sentences only relates to the comparison in the subordinate clause, not to the respective main clause. Another indicator central to Altenhofer’s ascription of »mysteriousness« is his concept of a »complete facticity« (lückenlose Faktizität) which »does not seem to leave anything ›open‹« (Altenhofer 2007, 45). Again, the precise designation of what exactly qualifies facticity as »complete« is left open, since Kleist’s novella does indeed select for portrayal certain phenomena and actions within the narrated world (and not others). 
The degree of factuality in Kleist’s text may be higher than it is in other texts, but it is by no means »complete«. In the context of Altenhofer’s interpretation, »complete facticity« may be taken to mean a narrative mode in which terrible events are reported using conspicuously sober and at times drastic language. Following the critical reconstruction of Altenhofer’s use of terminology, the central terms and their relationship to one another are first explicated (in natural language), which already necessitates intensive conceptual work. We do so implementing a hierarchical understanding of the terms discussed: the definition of one term uses other terms which also need to be defined and operationalized. In accordance with the requirements of computational text analysis, this hierarchy of terms should end in »directly measurable« terms – i. e., in terms that can be clearly identified on the surface of the text. This, however, leads to the question of whether (and, if so, on the basis of which theoretical assumptions) the terminology of literary studies may be traced back in this way to text-surface phenomena. Following the pragmatic as well as the theoretical discussion of this complex of questions, we indicate ways by which such definitions may be converted into manual or automatic recognition. In the case of manual recognition, the paradigm of annotation – as established and methodologically reflected in (computational) linguistics – will be useful, and a well-controlled annotation process will help to further clarify the terms in question. The primary goal, however, is to establish a recognition rule by which individuals may intersubjectively and reliably identify instances of the term in question in a given text. 
While it is true that in applying this method to literary studies, new challenges arise – such as the question of the validity and reliability of the annotations –, these challenges are at present being researched intensively in the field of computational literary studies, which has resulted in a large and growing body of research to draw on. In terms of computer-aided recognition, we examine, by way of example, two distinct approaches: 1) The kind of operationalization which is guided by precedent definitions and annotation rules benefits from the fact that each of its steps is transparent, may be validated and interpreted, and that existing tools from computational linguistics can be integrated into the process. In the scenario used here, these would be tools for recognizing and assigning character speech, for the resolution of coreference and the assessment of events; all of these, in turn, may be based on either machine learning, prescribed rules or dictionaries. 2) In recent years, so-called end-to-end systems have become popular which, with the help of neural networks, »infer« target terms directly from a numerical representation of the data. These systems achieve superior results in many areas. However, their lack of transparency also raises new questions, especially with regard to the interpretation of results. Finally, we discuss options for quality assurance and draw a first conclusion. Since numerous decisions have to be made in the course of operationalization, and these, in practice, are often pragmatically justified, the question quickly arises as to how »good« a given operationalization actually is. 
And since the tools borrowed from computational linguistics (especially the so-called inter-annotator agreement) can only partially be transferred to computational literary studies and, moreover, objective standards for the quality of a given implementation will be difficult to find, it ultimately falls to the community of researchers and scholars to decide, based on their research standards, which operationalizations they accept. At the same time, operationalization is the central link between the computer sciences and literary studies, as well as being a necessary component for a large part of the research done in computational literary studies. The advantage of a conscious, deliberate and reflective operationalization practice lies not only in the fact that it can be used to achieve reliable quantitative results (or that a certain lack of reliability at least is a known factor); it also lies in its facilitation of interdisciplinary cooperation: in the course of operationalization, concrete sets of data are discussed, as are the methods for analysing them, which taken together minimizes the risk of misunderstandings, »false friends« and of an unproductive exchange more generally.
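The inter-annotator agreement mentioned above is typically quantified with chance-corrected coefficients such as Cohen's kappa. A minimal sketch, with invented labels for a hypothetical "mysteriousness" annotation task:

```python
# Cohen's kappa: observed agreement corrected for agreement expected by chance.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each annotator's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] / n * cb[label] / n for label in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling eight sentences
ann1 = ["myst", "plain", "myst", "plain", "plain", "myst", "plain", "plain"]
ann2 = ["myst", "plain", "plain", "plain", "plain", "myst", "plain", "myst"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.467
```

Values near 1 indicate reliable annotation guidelines; values near 0 mean the agreement is no better than chance, which for literary categories often signals that the recognition rule needs further clarification.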
35

Siembieda, William. "Toward an Enhanced Concept of Disaster Resilience: A Commentary on Behalf of the Editorial Committee." Journal of Disaster Research 5, no. 5 (October 1, 2010): 487–93. http://dx.doi.org/10.20965/jdr.2010.p0487.

Abstract:
1. Introduction This Special Issue (Part 2) expands upon the theme “Building Local Capacity for Long-term Disaster Resilience” presented in Special Issue Part 1 (JDR Volume 5, Number 2, April 2010) by examining the evolving concept of disaster resilience and providing additional reflections upon various aspects of its meaning. Part 1 provided a mixed set of examples of resiliency efforts, ranging from administrative challenges of integrating resilience into recovery to the analysis of hazard mitigation plans directed toward guiding local capability for developing resiliency. Resilience was broadly defined in the opening editorial of Special Issue Part 1 as “the capacity of a community to: 1) survive a major disaster, 2) retain essential structure and functions, and 3) adapt to post-disaster opportunities for transforming community structure and functions to meet new challenges.” In this editorial essay we first explore in Section 2 the history of resilience and then locate it within current academic and policy debates. Section 3 presents summaries of the papers in this issue. 2. Why is Resilience a Contemporary Theme? There is growing scholarly and policy interest in disaster resilience. In recent years, engineers [1], sociologists [2], geographers [3], economists [4], public policy analysts [5, 6], urban planners [7], hazards researchers [8], governments [9], and international organizations [10] have all contributed to the literature about this concept. Some authors view resilience as a mechanism for mitigating disaster impacts, with framework objectives such as resistance, absorption, and restoration [5]. Others, who focus on resiliency indicators, see it as an early warning system to assess community resiliency status [3, 8]. Recently, it has emerged as a component of social risk management that seeks to minimize social welfare loss from catastrophic disasters [6]. 
Manyena [11] traces scholarly exploration of resilience as an operational concept back at least five decades. Interest in resilience began in the 1940s with studies of children and trauma in the family and in the 1970s in the ecology literature as a useful framework to examine and measure the impact of assault or trauma on a defined eco-system component [12]. This led to modeling resilience measures for a variety of components within a defined ecosystem, leading to the realization that the systems approach to resiliency is attractive as a cross-disciplinary construct. The ecosystem analogy however, has limits when applied to disaster studies in that, historically, all catastrophic events have changed the place in which they occurred and a “return to normalcy” does not occur. This is true for modern urban societies as well as traditional agrarian societies. The adoption of “The Hyogo Framework for Action 2005-2015” (also known as The Hyogo Declaration) provides a global linkage and follows the United Nations 1990s International Decade for Natural Disaster Reduction effort. The 2005 Hyogo Declaration’s definition of resilience is: “The capacity of a system, community or society potentially exposed to hazards to adapt by resisting or changing in order to reach and maintain an acceptable level of functioning and structure.” The proposed measurement of resilience in the Hyogo Declaration is determined by “the degree to which the social system is capable of organizing itself to increase this capacity for learning from past disasters for better future protection and to improve risk reduction measures.” While very broad, this definition contains two key concepts: 1) adaptation, and 2) maintaining acceptable levels of functioning and structure. While adaptation requires certain capacities, maintaining acceptable levels of functioning and structure requires resources, forethought, and normative action. 
Some of these attributes are now reflected in the 2010 National Disaster Recovery Framework published by the U.S. Federal Emergency Management Agency (FEMA) [13]. With the emergence of this new thinking on resilience related to disasters, it is now a good time to reflect on the concept and assess what has recently been said in the literature. Bruneau et al. [1] offer an engineering sciences definition for community seismic resilience: “The ability of social units (e.g., organizations, communities) to mitigate hazards, contain the effects of disasters when they occur, and carry out recovery activities in ways that minimize social disruption and mitigate the effects of future earthquakes.” Rose [4] writes that resiliency is the ability of a system to recover from a severe shock. He distinguishes two types of resilience: (1) inherent – ability under normal circumstances and (2) adaptive – ability in crisis situations due to ingenuity or extra effort. By opening up resilience to categorization he provides a pathway to establish multi-disciplinary approaches, something that is presently lacking in practice. Rose is most concerned with business disruption which can take extensive periods of time to correct. In order to make resource decisions that lower overall societal costs (economic, social, governmental and physical), Rose calls for the establishment of measurements that function as resource decision allocation guides. This has been done in part through risk transfer tools such as private insurance. However, it has not been well-adopted by governments in deciding how to allocate mitigation resources. We need to ask why the interest in resilience has grown? Manyena [11] argues that the concept of resilience has gained currency without obtaining clarity of understanding, definition, substance, philosophical dimensions, or applicability to disaster management and sustainable development theory and practice. 
It is evident that the “emergency management model” does not itself provide sufficient guidance for policymakers since it is too command-and-control-oriented and does not adequately address mitigation and recovery. Also, large disasters are increasingly viewed as major disruptions of the economic and social conditions of a country, state/province, or city. Lowering post-disaster costs (human life, property loss, economic advancement and government disruption) is being taken more seriously by government and civil society. The lessening of costs is not something the traditional “preparedness” stage of emergency management has concerned itself with; this is an existing void in meeting the expanding interests of government and civil society. The concept of resilience helps further clarify the relationship between risk and vulnerability. If risk is defined as “the probability of an event or condition occurring” [14], then it can be reduced through physical, social, governmental, or economic means, thereby reducing the likelihood of damage and loss. Nothing can be done to stop an earthquake, volcanic eruption, cyclone, hurricane, or other natural event, but the probability of damage and loss from natural and technological hazards can be addressed through structural and non-structural strategies. Vulnerability is the absence of capacity to resist or absorb a disaster impact. Changes in vulnerability can then be achieved by changes in these capacities. In this regard, Franco and Siembieda describe in this issue how coastal cities in Chile had low resilience and high vulnerability to the tsunami generated by the February 2010 earthquake, whereas modern buildings had high resilience and, therefore, were much less vulnerable to the powerful earthquake. We also see how the framework for policy development can change through differing perspectives.
Eisner discusses in this issue how local non-governmental social service agencies are building their resilience capabilities to serve target populations after a disaster occurs, becoming self-renewing social organizations and demonstrating what Leonard and Howett [6] term “social resilience.” All of the contributions to this issue illustrate the lowering of disaster impacts and strengthening of capacity (at the household, community or governmental level) for what Alesch [15] terms “post-event viability” – a term reflecting how well a person, business, community, or government functions after a disaster in addition to what they might do prior to a disaster to lessen its impact. Viability might become the definition of recovery if it can be measured or agreed upon. 3. Contents of This Issue The insights provided by the papers in this issue contribute greater clarity to an understanding of resilience, together with its applicability to disaster management. In these papers we find tools and methods, process strategies, and planning approaches. There are five papers focused on local experiences, three on state (prefecture) experiences, and two on national experiences. The papers in this issue reinforce the concept of resilience as a process, not a product, because it is the sum of many actions. The resiliency outcome is the result of multiple inputs from the level of the individual and, at times, continuing up to the national or international organizational level. Through this exploration we see that the “resiliency” concept accepts that people will come into conflict with natural or anthropogenic hazards. The policy question then becomes how to lower the impact(s) of the conflict through “hard or soft” measures (see the Special Issue Part 1 editorial for a discussion of “hard” vs. “soft” resilience). 
Local level Go Urakawa and Haruo Hayashi illustrate how post-disaster operations for public utilities can be problematic because many practitioners have no direct experience in such operations, noting that the formats and methods normally used in recovery depend on personal skills and effort. They describe how these problems are addressed by creating manuals on measures for effectively implementing post-disaster operations. They develop a method to extract priority operations using business impact analysis (BIA) and project management based business flow diagrams (BFD). Their article effectively illustrates the practical aspects of strengthening the resiliency of public organizations. Richard Eisner presents the framework used to initiate the development and implementation of a process to create disaster resilience in faith-based and community-based organizations that provide services to vulnerable populations in San Francisco, California. A major project outcome is the Disaster Resilience Standard for Community- and Faith-Based Service Providers. This “standard” has general applicability for use by social service agencies in the public and non-profit sectors. Alejandro Linayo addresses the growing issue of technological risk in cities. He argues for the need to understand an inherent conflict between how we occupy urban space and the technological risks created by hazardous chemicals, radiation, oil and gas, and other hazardous materials storage and movement. The paper points out that information and procedural gaps exist in terms of citizen knowledge (the right to know) and local administrative knowledge (missing expertise). Advances and experience accumulated by the Venezuela Disaster Risk Management Research Center in identifying and integrating technological risk treatment for the city of Merida, Venezuela, are highlighted as a way to move forward.
L. Teresa Guevara-Perez presents the case that certain urban zoning requirements in contemporary cities encourage and, in some cases, enforce the use of building configurations that have been long recognized by earthquake engineering as seismically vulnerable. Using Western Europe and the Modernist architectural movement, she develops the historical case for understanding discrepancies between urban zoning regulations and seismic codes that have led to vulnerable modern building configurations, and traces the international dissemination of architectural and urban planning concepts that have generated vulnerability in contemporary cities around the world. Jung Eun Kang, Walter Gillis Peacock, and Rahmawati Husein discuss an assessment protocol for Hazard Mitigation Plans applied to 12 coastal hazard zone plans in the state of Texas in the U.S. The components of these plans are systematically examined in order to highlight their respective strengths and weaknesses. The authors describe an assessment tool, the plan quality score (PQS), composed of seven primary components (vision statement, planning process, fact basis, goals and objectives, inter-organizational coordination, policies & actions, and implementation), as well as a component quality score (CQS). State (Prefecture) level Charles Real presents the Natural Hazard Zonation Policies for Land Use Planning and Development in California in the U.S. California has established state-level policies that utilize knowledge of where natural hazards are more likely to occur to enhance the effectiveness of land use planning as a tool for risk mitigation. Experience in California demonstrates that a combination of education, outreach, and mutually supporting policies that are linked to state-designated natural hazard zones can form an effective framework for enhancing the role of land use planning in reducing future losses from natural disasters.
Norio Maki, Keiko Tamura, and Haruo Hayashi present a method for local government stakeholders involved in pre-disaster plan making to describe performance measures through the formulation of desired outcomes. Through a case study approach, Nara and Kyoto Prefectures’ separate experiences demonstrate how to conduct Strategic Earthquake Disaster Reduction Plans and Action Plans that have deep stakeholder buy-in and outcome measurability. Nara’s plan was prepared from 2,015 stakeholder ideas and Kyoto’s plan was prepared from 1,613 stakeholder ideas. Having a quantitative target for individual objectives ensures the measurability of plan progress. Both jurisdictions have undertaken evaluations of plan outcomes. Sandy Meyer, Eugene Henry, Roy E. Wright and Cynthia A. Palmer present the State of Florida in the U.S. and its experience with pre-disaster planning for post-disaster redevelopment. Drawing upon the lessons learned from the impacts of the 2004 and 2005 hurricane seasons, local governments and state leaders in Florida sought to find a way to encourage behavior that would create greater community resiliency in 2006. The paper presents initial efforts to develop a post-disaster redevelopment plan (PDRP), including the experience of a pilot county. National level Bo-Yao Lee provides a national perspective: New Zealand’s approach to emergency management, where all hazard risks are addressed through devolved accountability. This contemporary approach advocates collaboration and coordination, aiming to address all hazard risks through the “4Rs” – reduction, readiness, response, and recovery. Lee presents the impact of the Resource Management Act (1991), the Civil Defence Emergency Management Act (2002), and the Building Act (2004) that comprise the key legislation influencing and promoting integrated management for environment and hazard risk management. 
Guillermo Franco and William Siembieda provide a field assessment of the February 27, 2010, M8.8 earthquake and tsunami event in Chile. The paper presents an initial damage and life-loss review and an assessment of seismic building resiliency, along with the country’s rapid updating of building codes, which have undergone continuous improvement over the past 60 years. The country’s land use planning system and its emergency management system are also described. The role of insurance coverage reveals problems in seismic coverage for homeowners. The unique role of the Catholic Church in providing temporary shelter and the central government’s five-point housing recovery plan are presented. A weakness in the government’s emergency management system’s early tsunami response is noted. Acknowledgements The Editorial Committee extends its sincere appreciation to both the contributors and the JDR staff for their patience and determination in making Part 2 of this special issue possible. Thanks also to the reviewers for their insightful analytic comments and suggestions. Finally, the Committee wishes to again thank Bayete Henderson for his keen and thorough editorial assistance and copy editing support.
36

Li, Zhou. "Consumer behavior analysis model based on machine learning." Journal of Intelligent & Fuzzy Systems, December 7, 2020, 1–11. http://dx.doi.org/10.3233/jifs-189483.

Abstract:
Accurate demand information enables retailers to respond better to consumers and to manage inventory effectively. However, the precise connection and interaction between information collection and inventory management is difficult to measure. In view of this, this paper proposes an inventory model based on consumer web search. Centering on the two main actors in the online search environment, consumers and retailers, the paper takes their characteristics and situations fully into account to construct an inventory model for the online search environment and analyzes the resulting ordering strategy. Based on the digital traces left by consumers in the decision-making process, it uses general and specific search indicators to measure consumer web search and to explore the relationship between these indicators and the demand conversion rate proposed in the model. Finally, the model is analyzed with numerical examples, and the results are in line with expectations from the model construction.
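One plausible way to sketch the abstract's idea, a demand forecast derived from a web-search signal via a conversion rate and fed into an ordering rule, is shown below. The numbers, the conversion rate, and the newsvendor-style critical-fractile ordering rule are illustrative assumptions, not the paper's actual model:

```python
# Search signal -> conversion rate -> demand estimate -> order quantity.
import statistics

searches = [1200, 950, 1100, 1300, 1050]   # hypothetical weekly search counts
conversion_rate = 0.04                      # assumed share of searches becoming orders
demand = [s * conversion_rate for s in searches]

mu = statistics.mean(demand)
sigma = statistics.stdev(demand)

# Newsvendor critical fractile: underage cost / (underage + overage cost)
cu, co = 5.0, 2.0
fractile = cu / (cu + co)

# Order up to the demand quantile matching the critical fractile
order_qty = statistics.NormalDist(mu, sigma).inv_cdf(fractile)
print(round(order_qty, 1))
```

Under these assumptions a stronger search signal raises the demand estimate and with it the order quantity, which is the kind of search-to-inventory linkage the paper formalizes.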
37

Gassais, Robin, Naser Ezzati-Jivan, Jose M. Fernandez, Daniel Aloise, and Michel R. Dagenais. "Multi-level host-based intrusion detection system for Internet of things." Journal of Cloud Computing 9, no. 1 (November 23, 2020). http://dx.doi.org/10.1186/s13677-020-00206-6.

Abstract:
The growth of the Internet of Things (IoT) has ushered in a new era of inter-connectivity and innovation in the home. Many devices, once separate, can now be interacted with remotely, improving efficiency and organization. This, however, comes at the cost of rising security vulnerabilities. Vendors are competing to quickly create and release innovative connected objects, without focusing on security issues. As a consequence, attacks involving or targeting smart devices are proliferating, creating threats to users' privacy and even their physical security. Additionally, the heterogeneous technologies involved in IoT make it much harder to develop protection for smart devices. Most of the intrusion detection systems developed for these platforms are based on network activity. However, on many systems, intrusions cannot easily or reliably be detected from network traces. We propose a novel host-based automated framework for intrusion detection. Our work combines user-space and kernel-space information with machine learning techniques to detect various kinds of intrusions in smart devices. Our solution uses tracing techniques to automatically capture device behavior, processes this data into numeric arrays to train several machine learning algorithms, and raises alerts whenever an intrusion is found. We implemented several machine learning algorithms, including deep learning ones, to achieve high detection capabilities while adding little overhead on the monitored devices. We tested our solution within a realistic home automation system with actual threats.
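The pipeline shape described, from trace events to numeric arrays to a trained baseline to alerts, can be sketched as follows. The trace format, the syscall list, and the simple z-score detector are invented stand-ins for the paper's actual tracing tooling and machine learning algorithms:

```python
# Host-based detection sketch: trace window -> numeric vector -> outlier alert.
import numpy as np

SYSCALLS = ["open", "read", "write", "connect", "exec"]

def to_vector(trace):
    """Count each syscall of interest in one observation window."""
    return np.array([trace.count(s) for s in SYSCALLS], dtype=float)

# Hypothetical windows of normal device behaviour
normal_traces = [
    ["open", "read", "read", "write"],
    ["open", "read", "write", "write"],
    ["open", "read", "read", "read", "write"],
]
X = np.stack([to_vector(t) for t in normal_traces])
mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9   # per-feature baseline

def alert(trace, threshold=3.0):
    """Raise an alert when any feature deviates strongly from the baseline."""
    z = np.abs((to_vector(trace) - mu) / sigma)
    return bool(z.max() > threshold)

print(alert(["open", "read", "write"]))                       # benign-looking window
print(alert(["connect", "connect", "exec", "exec", "exec"]))  # suspicious window
```

A real system would replace the z-score rule with the trained classifiers the paper mentions, but the conversion of raw traces into fixed-length numeric arrays is the same step.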
38

Jeppu, Vineeth, Ayan A. Singh, and Alex Gonzalez. "Improvement of Key Financial Performance Indicators in the Insurance Industry Using Machine Learning – A Quantitative Analysis." International Journal of Smart Sensor and Adhoc Network, January 2023, 1–6. http://dx.doi.org/10.47893/ijssan.2023.1225.

Abstract:
AI and machine learning are playing a vital role in the financial domain in predicting future growth and risk and identifying key performance areas. We look at how machine learning and artificial intelligence (AI) directly or indirectly alter financial management in the banking and insurance industries. First, a non-technical review of prior machine learning and AI methodologies beneficial to KPI management is provided. This paper will analyze and improve key financial performance indicators in insurance using machine learning (ML) algorithms. Before applying an ML algorithm, we must determine the attributes that directly impact the business, as well as the target attributes. To apply these specific features in an ML model, the string values must be mapped to the datatypes the model requires. We propose hashing to convert string values to numeric values for data analysis within our model. After the string values are hashed, we can introduce our model; in our case, we have chosen a decision tree model. Decision trees are beneficial for this use case, as the algorithm generates rulesets that govern the target value output. These rulesets can then be applied to the financial dataset to infer the "best fit" value where data might be wrong or missing. Finally, with the resulting model, we can use this most accurate version of the data to detect patterns in general-ledger transactional data.
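The hashing step the authors propose (string attributes converted to numeric values before fitting the tree) can be sketched as below. The bucket count, the field names, and the use of MD5 are illustrative assumptions; the essential property is that the mapping is deterministic across runs, unlike Python's salted built-in hash().

```python
import hashlib

def hash_feature(value, buckets=1024):
    """Deterministically map an arbitrary string to an integer in [0, buckets)."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

# Hypothetical insurance record: every string column becomes a numeric column
# that a decision tree (or any other model) can split on.
row = {"region": "Northeast", "policy_type": "Term Life", "agent": "A-1042"}
numeric_row = {key: hash_feature(val) for key, val in row.items()}
print(numeric_row)
```

Hash collisions are possible with any bucketing scheme; with modest category cardinality and 1024 buckets they are rare, and tree models tolerate them reasonably well.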
39

Caspari-Sadeghi, Sima. "Applying Learning Analytics in Online Environments: Measuring Learners’ Engagement Unobtrusively." Frontiers in Education 7 (January 25, 2022). http://dx.doi.org/10.3389/feduc.2022.840947.

Abstract:
Prior to the emergence of Big Data and technologies such as Learning Analytics (LA), classroom research focused mainly on measuring the learning outcomes of small samples through tests. Research on online environments shows that learners' engagement is a critical precondition for successful learning, and that lack of engagement is associated with failure and dropout. LA helps instructors to track, measure and visualize students' online behavior and to use such digital traces to improve instruction and provide individualized support, i.e., feedback. This paper examines 1) metrics or indicators of learners' engagement as extracted and displayed by LA, 2) their relationship with academic achievement and performance, and 3) some freely available LA tools for instructors and their usability. The paper concludes by making recommendations for practice and further research, considering the challenges associated with using LA in classrooms.
40

Boekaerts, Monique, Mariel F. Musso, and Eduardo C. Cascallar. "Predicting attribution of letter writing performance in secondary school: A machine learning approach." Frontiers in Education 7 (November 23, 2022). http://dx.doi.org/10.3389/feduc.2022.1007803.

Abstract:
The learning research literature has identified the complex and multidimensional nature of learning tasks, involving not only (meta)cognitive processes but also affective, linguistic, and behavioral contextualized aspects. The present study aims to analyze the interactions among activated domain-specific information, context-sensitive appraisals, and emotions, and their impact on task engagement as well as task satisfaction and attribution of the perceived learning outcome, using a machine learning approach. Data were collected from 1130 vocational high-school students of both genders, between 15 and 20 years of age. Prospective questionnaires were used to collect information about the students' home environment and domain-specific variables. Motivation processes activated during the learning episode were measured with Boekaerts' on-line motivation questionnaire. The traces that students left behind were also inspected (e.g., time spent, use of provided tools, content, and technical aspects of writing). Artificial neural networks (ANN) were used to provide information on the multiple interactions between the measured domain-specific variables, situation-specific appraisals and emotions, trace data, and background variables. The ANN could identify with high precision the students who made writing-skill, affect, and self-regulation-strategy attributions on the basis of domain variables, appraisals, emotions, and performance indicators. The ANN also detected important differences in the factors that seem to underlie the students' causal attributions.
41

Wilms, Lisa, Martin Komainda, Dina Hamidi, Friederike Riesch, Juliane Horn, and Johannes Isselstein. "How do grazing beef and dairy cattle respond to virtual fences? A review." Journal of Animal Science, April 15, 2024. http://dx.doi.org/10.1093/jas/skae108.

Abstract Virtual fencing (VF) is a modern fencing technology that requires the animal to wear a device (e.g. a collar) that emits acoustic signals to replace the visual cue of traditional physical fences (PF) and, if necessary, mild electric signals. The use of devices that provide electric signals leads to concerns regarding the welfare of virtually fenced animals. The objective of this review is to give an overview of the current state of VF research into the welfare and learning behavior of cattle. Therefore, a systematic literature search was conducted using two online databases and reference lists of relevant articles. Studies included were peer-reviewed and written in English, used beef or dairy cattle, and tested neck-mounted VF devices. Further inclusion criteria were a combination of audio and electrical signals and a set-up as a pasture trial, which implied that animals grazed in groups on grassland for four hours minimum while at least one fence side was virtually fenced. The eligible studies (n = 13) were assigned to one or two of the following categories: animal welfare (n studies = 8) or learning behavior (n studies = 9). As data availability for conducting a meta-analysis was not sufficient, a comparison of the means of welfare indicators (daily weight gain, daily lying time, steps per hour, daily number of lying bouts, fecal cortisol metabolites (FCM)) for virtually and physically fenced animals was done instead. In an additional qualitative approach, the results from the welfare-related studies were assembled and discussed. For the learning behavior, the number of acoustic and electric signals and their ratio were used in a linear regression model with duration in days as a numeric predictor to assess the learning trends over time. There were no significant differences between VF and PF for most welfare indicators (except FCM with lower values for VF; P = 0.0165). 
The duration in days did not have a significant effect on the number of acoustic and electric signals. However, a significant effect of trial duration on the ratio of electric to acoustic signals (P = 0.0014) could be detected, resulting in a decreasing trend of the ratio over time, which suggests successful learning. Overall, we conclude that the VF research done so far is promising, but it is not yet sufficient to rule out impacts of the technology on the welfare of certain cattle types. More research is necessary to investigate, in particular, possible long-term effects of VF.
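The learning-trend analysis described above (the ratio of electric to acoustic signals regressed on trial day) can be sketched with ordinary least squares. The daily counts below are invented for illustration and are not data from the reviewed studies.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical daily counts: acoustic cues stay frequent, electric pulses fade.
days     = [1, 2, 3, 4, 5, 6]
acoustic = [40, 38, 35, 33, 30, 28]
electric = [20, 14, 9, 6, 3, 2]
ratios = [e / a for e, a in zip(electric, acoustic)]
slope, intercept = fit_line(days, ratios)
print(f"slope = {slope:.3f}")  # a negative slope suggests the animals learn the audio cue
```

A declining ratio over days, as the review reports, corresponds to a negative fitted slope.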
42

Ruiz Nakashima, Rosária Helena, Daniela Melaré Vieira Barros, and Sergio Ferreira do Amaral. "O USO PEDAGÓGICO DA LOUSA DIGITAL ASSOCIADO À TEORIA DOS ESTILOS DE APRENDIZAGEM." Revista de Estilos de Aprendizaje 2, no. 4 (October 1, 2009). http://dx.doi.org/10.55777/rea.v2i4.897.

Abstract:
The objective of this paper is to present a proposal for using the digital whiteboard as an interactive technology that enables the development of educational activities associated with learning styles theory. The digital whiteboard incorporates all the features a computer offers, with the added advantage of allowing interaction between the teacher and students, fostering the collective construction of knowledge. The learning-styles framework used is based on investigations conducted by the researchers Catalina Alonso, Domingo Gallego, Peter Honey, José Luis García Cué, and Daniela Melaré Vieira Barros, who argue that learning styles are the cognitive, affective, and physiological traits that serve as relatively stable indicators of how students perceive, interact with, and respond to their learning environments.
43

Ristow Hadlich, Rodrigo, Jason Loprete, and Dimitris Assanis. "A Deep Learning Approach to Predict In-Cylinder Pressure of a Compression Ignition Engine." Journal of Engineering for Gas Turbines and Power, January 13, 2024, 1–12. http://dx.doi.org/10.1115/1.4064480.

Abstract: As emissions regulations for greenhouse gases become stricter, it is important to increase the efficiency of engines by improving their design and operation. Current optimization methods involve performing large numbers of experimental investigations on physical engines or making use of detailed Computational Fluid Dynamics modeling efforts to provide visual and statistical insights on in-cylinder behavior. The latter still requires experimental data for model validation. Both of these methods share a common set of problems: they are monetarily expensive and time consuming. Previous work has proposed an alternative method for engine optimization using machine learning (ML) models and experimental validation data to predict scalar values representing different parameters. With such models developed, one can then quickly iterate on operating conditions to find the point that maximizes an application-dependent reward function. While these ML methods provide information on individual performance parameters, they lack key information on in-cylinder indicators such as cylinder pressure traces and heat release curves that are traditionally used for performance analysis. This work details the process of implementing a Multilayer Perceptron (MLP) model capable of accurately predicting crank-angle-resolved high-speed in-cylinder pressure using equivalence ratio, fuel injection pressure and injection timing as input features. It was demonstrated that the model was able to approximate engine behavior with mean squared error lower than 0.05 on a 1-55 range in the test set. This approach shows potential for greatly accelerating the optimization process in engine applications.
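As a shape-only sketch, the architecture the abstract describes is a function from three operating parameters to a crank-angle-resolved vector. The 360-bin output resolution, the input scaling ranges, and the hidden-layer width below are all assumptions, and the weights are random rather than trained; the paper's model is trained until the test-set MSE falls below 0.05.

```python
import math
import random

random.seed(0)

def scale(x, lo, hi):
    """Min-max scale an input to roughly [0, 1] (assumed operating ranges)."""
    return (x - lo) / (hi - lo)

def make_mlp(n_in, n_hidden, n_out):
    """Random (untrained) weights for a one-hidden-layer perceptron."""
    w1 = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[random.gauss(0, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    return w1, [0.0] * n_hidden, w2, [0.0] * n_out

def forward(params, x):
    w1, b1, w2, b2 = params
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b for row, b in zip(w2, b2)]

# Inputs: equivalence ratio, rail pressure [bar], injection timing [deg bTDC].
x = [scale(0.6, 0.3, 1.0), scale(800.0, 400.0, 1600.0), scale(10.0, 0.0, 20.0)]
mlp = make_mlp(n_in=3, n_hidden=32, n_out=360)  # 360 crank-angle bins (assumed)
trace = forward(mlp, x)
```

The point is the dimensionality: one forward pass yields an entire pressure trace rather than a single scalar performance parameter.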
44

Akhmetova, Zh. B. "Monitoring and Evaluation of Students' Independent Work in Distance Learning Based on Eutagogy" [in Kazakh and Russian]. BULLETIN Series Physical and Mathematical Sciences 80, no. 4(2022) (September 25, 2023). http://dx.doi.org/10.51889/6223.2022.69.28.023.

Abstract:
The article monitors and evaluates the independent work of distance-learning students on the basis of eutagogy. Eutagogy theory is defined as a creative approach to self-learning that allows the learner to determine what, when, and how to learn. In order to monitor and evaluate the time and quality of students' independent work on the basis of their digital traces in distance learning, a set of eutagogy-based criteria and indicators was defined, quantitative indicators were selected, and a methodology was proposed for assessing each student's progress. The article presents algorithms for assessing the success of independent work based on empirical data and learning analytics. The proposed algorithms make it possible to interpret the digital traces of independent work in distance learning, evaluate its success, and improve the quality of education by correcting the student's learning path. Keywords: distance learning, independent work, eutagogy, digital footprint, control, digital technologies.
45

Clearkin, Louis. "P055 Giant Cell Arteritis Diagnosis: Metadiagnosis by robust Bayesian estimate of strength of belief." Rheumatology 62, Supplement_2 (April 1, 2023). http://dx.doi.org/10.1093/rheumatology/kead104.096.

Abstract: Background/Aims: Metadiagnosis expresses diagnostic belief as (i) a numeric probability, (ii) a degree of confidence in the probability estimate, and (iii) decision thresholds derived from that information. We use this approach to stratify suspected GCA patients, by disease probability, into three classes: (a) likely to have GCA, (b) unlikely to have GCA, and (c) uncertain GCA status. Methods: We developed a Bayesian algorithm that computes a patient's disease probability based on their signs, symptoms and reference-standard tests. This algorithm forms the computational engine of the first iteration of an online app. The computational engine incorporates statistical weights (likelihood ratios) from a meta-analysis of 68 unique studies (14,037 unique patients with suspected GCA). The algorithm requires the specification of a disease prevalence: the "prior" probability that any patient has GCA. We then update this probability incrementally with each additional item of information elicited from the questionnaire proforma (e.g. female? headache? visual abnormality? …) by incremental evidence aggregation. Finally, the outcome of the patient's reference standard test is incorporated (e.g. positive, negative, indeterminate) and a final posterior GCA probability is computed for each patient. Results: The quantitative GCA risk estimate for each individual patient, and how it was altered by the result of the reference standard test, produces three distinct groups: (a) Low probability group: the estimated risk of GCA is significantly lower than the prevalence in the cohort; the reference standard test in this group was uniformly negative. (b) Intermediate probability group: the estimated risk of GCA is around the same as the prevalence in the cohort; the reference standard test (TAB) is almost always negative in this group, but not exclusively. (c) High probability group: the estimated risk of GCA far exceeds the base or population risk level, and the reference standard test (TAB) is typically confirmatory in this group. Conclusion: Applying Bayesian algorithmic metadiagnosis, based on established values of how indicators of disease impact diagnosis, allows numerical disease prediction, and communicating the precision of diagnosis non-mathematically may enhance informed patient choice, leading to personalised, targeted intervention. An inbuilt artificial neural network (a deep-learning feedback loop) trains the algorithm to identify the value of all relevant indicators and so improves predictive accuracy. We plan further studies to establish how this approach could impact patient well-being and resource utilisation. Disclosure: L. Clearkin: None.
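The incremental evidence aggregation the Methods section describes is the standard odds-form Bayes update: convert the prior prevalence to odds, multiply by one likelihood ratio per finding, and convert back to a probability. The numeric LR values below are illustrative placeholders, not the weights from the 68-study meta-analysis.

```python
def update_probability(prior, likelihood_ratios):
    """Posterior probability after incremental evidence aggregation:
    posterior odds = prior odds * product of likelihood ratios."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical findings for a suspected-GCA patient: a strongly positive
# clinical sign (LR 4.0) followed by a confirmatory reference test (LR 30.0).
posterior = update_probability(prior=0.25, likelihood_ratios=[4.0, 30.0])
print(f"posterior = {posterior:.3f}")  # 40/41, about 0.976
```

Findings with LR > 1 push the probability up and findings with LR < 1 pull it down, which is exactly how the questionnaire items stratify patients into the low, intermediate and high probability groups.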
46

Wang, Di, Yihui Guo, Qian Yin, Hanzhong Cao, Xiaohong Chen, Hua Qian, Muhuo Ji, and Jianfeng Zhang. "Analgesia quality index improves the quality of postoperative pain management: a retrospective observational study of 14,747 patients between 2014 and 2021." BMC Anesthesiology 23, no. 1 (August 19, 2023). http://dx.doi.org/10.1186/s12871-023-02240-8.

Abstract: Background: The application of artificial intelligence patient-controlled analgesia (AI-PCA) facilitates the remote monitoring of analgesia management, the implementation of mobile ward rounds, and the automatic recording of all types of key data in the clinical setting. However, it cannot quantify the quality of postoperative analgesia management. This study aimed to establish an index, the analgesia quality index (AQI), to re-monitor and re-evaluate the system, equipment, medical staff and degree of patient matching, in order to quantify the quality of postoperative pain management through machine learning. Methods: Utilizing the wireless analgesic pump system database of the Cancer Hospital Affiliated with Nantong University, this retrospective observational study recruited consecutive patients who underwent postoperative analgesia using AI-PCA from June 1, 2014, to August 31, 2021. All patients were grouped according to whether or not the AQI was used to guide the management of postoperative analgesia: the control group did not receive AQI guidance for postoperative analgesia, and the experimental group did. The primary outcome was the incidence of moderate-to-severe pain (numeric rating scale (NRS) score ≥ 4) and the secondary outcome was the incidence of total adverse reactions. Furthermore, indicators of the AQI were recorded. Results: A total of 14,747 patients were included in this study. The incidence of moderate-to-severe pain was 26.3% in the control group and 21.7% in the experimental group. The estimated ratio difference was 4.6% between the two groups (95% confidence interval [CI], 3.2% to 6.0%; P < 0.001). There were significant differences between groups. Otherwise, the differences in the incidence of total adverse reactions between the two groups were nonsignificant.
Conclusions: Compared to traditional management of postoperative analgesia, application of the AQI decreased the incidence of moderate-to-severe pain. Clinical application of the AQI contributes to improving the quality of postoperative analgesia management and may provide guidance for optimal pain management in the postoperative setting.
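The headline comparison (26.3% vs. 21.7% moderate-to-severe pain, difference 4.6%, 95% CI 3.2% to 6.0%) is a difference of two proportions, and a Wald-style interval reproduces its form. The group counts below are invented to approximately match the reported rates, since the abstract gives percentages rather than raw group sizes.

```python
import math

def proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 between two incidence proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts: events/total in the control vs. AQI-guided groups.
diff, lo, hi = proportion_diff_ci(1840, 7000, 1680, 7747)
print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

A CI that excludes zero, as the paper reports, corresponds to a significant reduction in incidence.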
47

Dlouhá, Jana. "Editorial 10 (1)." Envigogika 10, no. 1 (June 30, 2015). http://dx.doi.org/10.14712/18023061.486.

Abstract:
Dear readers,

We offer you a new English issue of Envigogika which thematically focuses on case studies of regional sustainable development where social actors play specific roles in communication processes – it documents the promotion of positive changes at regional level and also provides evidence to illuminate seemingly unresolvable conflicts. The concept of social learning from an educational point of view frames this thematic edition – as with any other learning process, stakeholder dialogue has a transformative aspect, the opportunity to confront and possibly change opinions and act on the basis of agreed emergent standpoints. In particular, this collection of case studies specifically tries to illuminate the role of science and education in regional development, and attempts to introduce methods of analysis of diverse social relationships as well as practical ways of facilitating communication processes.

In this issue of Envigogika two types of case studies are presented – regional development and regional conflicts. Progress in both is highly dependent on the involvement of actors who shape discussions and consequently frame the issue. Analysis of social aspects is hence highly desirable, and the first steps undertaken here show some interesting results.

The first area of interest (development issues) is a traditional focus of Actor Analysis (AA), and this method is widely used abroad in the envisioning stage and helps to facilitate negotiation processes. In the Czech context, however, deliberation processes take place rather spontaneously and without a proper analytical stage, and reflections on negotiations in specific cases illustrate exactly this. A hypothesis about the need for continuous cultivation of democratic conditions in the Czech Republic (with the help of sound scientific analytical methods), specifically concerning deliberation processes, was posed as a result of a collaborative research process. This hypothesis was explored in different ways by our invited authors.

To provide a brief overview of the issue:

Simon Burandt, Fabienne Gralla and Beatrice John in their article Actor Analysis in Case Studies for (regional) Sustainable Development introduce the Actor Analysis analytical tool used to reflect regional (sustainable) development challenges throughout several articles in this issue. This method can be used with the aim not only of studying social capital, but also of having an impact on decision making and community choices. Its role in describing social players and their interactions, assisting in understanding regional development processes and potential conflicts, and providing information for strategy development is demonstrated through a specific case (the Ore Mountains). The steps of an actor analysis described in the article can be read as a guideline for implementing this analysis, and an analytical perspective on this process is also provided.

An outstanding Czech sustainability-oriented local economy project is presented in an article by Jan Labohý, Yvonna Gaillyová and Radim Machů: A sustainability assessment of the Hostětín cider house project. The authors assess the sustainability of the project in relation to different kinds of capital using complex indicators that uncover different aspects of the production process and its local cultural characteristics; moreover, effects on the local economy are measured using the local multiplier effect indicator. From this assessment it is clear that the cider house project meets the primary goals of regional sustainable development in a long-term perspective.

Another – opposite, negative – case is described by Jan Skalík, who analyses the debate about the Šumava National Park in the Czech Chamber of Deputies. The article demonstrates a persisting conflict and its roots with the help of the text analysis method applied to the transcripts of parliamentary debates about National Park Šumava (ŠNP) in the Chamber of Deputies between 1990 and 2013. The relationship between politicians and local people within the decision-making process, which is depicted as a consequence of this conflict, is then discussed. Interesting conclusions concern the plurality of dialogue and the roles of the actors within it; the influence of scientists on the solutions; and the inflammatory and emotional characteristics of recent debate.

As a contrast, which serves as a counterargument to show the power of civic society, Vendula Zahumenská refers to a case in Hradec Králové where environmentalists and local developers have been in conflict concerning the development and commercial use of the Na Plachtě natural monument. This case study shows the role of public participation in environmental protection and describes the specific opportunities for influencing environmental decision-making.

But there are cases in the Czech Republic where declared economic interests are so strong that they eliminate dialogue with civic society – for example, as a result of brown coal mining and its associated industrial development, 106 towns and villages were obliterated in North Bohemia and their population was resettled to newly built prefabricated housing estates. A case study analysing biographic interviews with the displaced people of Tuchomyšl is presented by Ivana Hermová. The author shows that the former Tuchomyšlers continue to identify strongly with the social space of the obliterated village, and discovers how they reflect on their forced eviction 35 years after the physical destruction of the village.

How these conclusions concerning the involvement of social actors might be reflected (and used) in the practice of school education is described by Alois Hynek, Břetislav Svozil, Jakub Trojan and Jan Trávníček. In a reflection on the Deblínsko landscape project these authors refer to the roles of stakeholders including a university, a primary school and a kindergarten, and also owners, users, decision-makers, shareholders and stakeholders within public administration. The project is driven by Masaryk University, which applies sustainability/security concepts in practice while closely relating these activities with research and teaching. This experience shows that social learning processes can start early among children/pupils/students.

A brief analytical overview of the cases in this special issue, as well as an overview of information and experiences from a database of case studies from different regions of the Czech Republic and from abroad (compiled by authors beyond the scope of this issue), is provided in the article Potential for social learning in sustainable regional development: analysis of stakeholder interaction … by Jana Dlouhá and Martin Zahradník. The conditions for the success or failure of environmental or sustainable development strategies from a social point of view are analysed here with a focus on the roles of actors in a dialogue about regional sustainability issues within cooperative or conflict situations, and with concern for the communication processes among actors, scientists included. As a result of this analysis, interesting hypotheses were formulated, related to the role of future visioning as a ground for discussion, communication frameworks which involve all concerned actors, and the (non)existence of facilitation practices. These findings highlight the importance of reflecting on the social aspects of development issues to help understand and promote democratic decision-making processes at regional level.

The case studies which follow the research section of the issue provide a colourful depiction of local sustainable development conditions. Old industrial regions in Europe and the potential for their transformation are described by Joern Harfst and David Osebik, who stress social learning as an important transformative factor. In particular, the involvement of research partners may support joint learning effects and knowledge transfer between all actors. Establishing trusting working relationships may be crucial to overcoming certain reservations on all sides before innovative approaches can be pursued successfully.

The Vulkanland case study, written by Michael Ober, traces the first glimpses of a sustainable development vision for a border region with little hope for economic prosperity to the successful development of a new identity which has reinforced local people's self-confidence. The initiators of the project first imagined a future built on different standards than the past and consequently managed to substantially transform this region within a period of 15 years. The 'Steirisches Vulkanland' region now includes 79 municipalities which together promote local, green, self-sustaining businesses and continue to be ambitious about their future visions, including achieving energy independence.

Part of the theme illustrated in this special issue, and mentioned also within the analysis of the cases, is the text Discovery of a supposed extinct settlement species made at Königsmühle in the Ore Mountains (published previously in Envigogika 9/1 last year but worth republishing in English in the context of this thematic issue). Author Petr Mikšíček pays attention to the footprints left in the landscape by bygone generations of inhabitants (and also to present-day footprints left by our generation) and struggles to retain this memory for future generations. Clashes with the interests of some of the actors (land owners in this case) are necessary to preserve the footprints that are on the brink of being wiped out.

A brief introduction to the new publication Analysis and support for participatory decision-making processes aimed at regional sustainable development strategies through the use of actor analysis methodology, which is available fully online, is presented in the Information section of the issue.

From this overview, some general conclusions can be derived. Conflict situations described in this issue emerged when traditional concepts were enforced by strong actors (without joint envisioning and planning with the others); these circumstances usually do not allow for balanced discussions about the future. However, the important role of minor actors such as scientists was also revealed. Experiences with their involvement provided a chance to highlight the role of scientists in policy-making. Based on the findings of this and other related research, the role of scientists can be framed not only as providing the (rather technical) expertise to reach the goals set within environment- or SD-oriented decision-making, but also as entering policy negotiations and providing an insight into the processes they undergo. If invited at an early stage of decision-making, they can have a considerable impact on its results (their involvement can then be described as action research). This finding might be used in the planning of similar practical and scientific projects.

As we can see, several interesting ideas resulted from a comparative meta-analysis of the case studies and were outlined in this issue of Envigogika. In general, it is a social point of view that provides an insight into the nature of the examples presented from the Czech Republic and the good practices from abroad. A scientific method of description is used here to reflect policy mechanisms as well as to indicate a way forward for integrating decision-making practice into very sensitive, local or regional sustainability contexts. We sincerely hope that this will precipitate a broad process of public dialogue among experts as well as other actors – beyond the realm of academic discussions only, but nevertheless with substantial academic input.

We wish you an enjoyable read and a pleasant and relaxing summer!

On behalf of the Envigogika editorial team,
Jana and Jiří Dlouhý

Acknowledgement: Research in several articles of this issue was supported by the following projects: Interdisciplinary network of cooperation for policy development in the field of sustainable development (Mezioborová síť spolupráce pro policy development v oblasti udržitelného rozvoje – MOSUR, 2011‑2014) CZ.1.07_2.4.00_17.0130 from the OPVK program of the Ministry of Education, Youth and Sports; and TD020120 (TAČR) and 14/36005S (GAČR).
48

Egliston, Ben. "Building Skill in Videogames: A Play of Bodies, Controllers and Game-Guides." M/C Journal 20, no. 2 (April 26, 2017). http://dx.doi.org/10.5204/mcj.1218.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction In his now-seminal book, Pilgrim in the Microworld (1983), David Sudnow details his process of learning to play the game Breakout on the Atari 2600. Sudnow develops an account of his graduation from a novice (having never played a videogame before, and middle-aged at the time of writing) to being able to fluidly perform the various configurative processes involved in an acclimated Breakout player’s repertoire. Sudnow’s account of videogame skill-development is not at odds with common-sense views on the matter: people become competent at videogames by playing them—we get used to how controllers work and feel, and to the timings of the game and those required of our bodies, through exposure. We learn by playing, failing, repeating, and ultimately internalising the game’s rhythms—allowing us to perform requisite actions. While he does not put it in as many words, Sudnow’s account affords parity to various human and nonhuman stakeholders involved in videogame-play: technical, temporal, and corporeal. Essentially, his point is that intertwined technical systems like software and human-interface devices—with their respective temporal rhythms, which coalesce and conflict with those of the human player—require management to play skilfully. The perspective Sudnow develops here is no doubt important, but modes of building competency cannot be strictly fixed around a player-videogame relationship; a relatively noncontroversial view in game studies. Videogame scholars have shown that there is currency in understanding how competencies in gameplay arise from engaging with ancillary objects beyond the thresholds of player-game relations; the literature to date casting a long shadow across a broad spectrum of materials and practices.
Pursuing this thread, this article addresses the enterprise (and conceptualisation) of ‘skill building’ in videogames (taken as the ability to ‘beat games’, or to cultivate the various competencies to do so) via the invocation of peripheral objects or practices. More precisely, this article develops the perspective that we need to attend to the impacts of ancillary objects on play—positioned as a hybrid assemblage, as described in the work of writers like Sudnow. In doing so, I first survey how the intervention of peripheral game material has been researched and theorised in game studies, suggesting that many accounts deal too simply with how players build skill through these means—eliding the fact that play works as an engine of many moving parts. We do not simply become ‘better’ at videogames by engaging peripheral material. Furthering this view, I visit recent literature broadly associated with disciplines like post-phenomenology, which handles the hybridity of play and its extension across bodies, game systems, and other gaming material—attending to how skill building occurs; that is, through the recalibration of perceptual faculties operating in the bodily and temporal dimensions of videogame play. We become ‘better’ at videogames by drawing on peripheral gaming material to augment how we negotiate the rhythms of play. Following on from this, I conclude by mobilising post-phenomenological thinking to further consider skill-building through peripheral material, showing how such approaches can generate insights into important and emerging areas of this practice.
Following recent games research, such as the work of James Ash, I adopt Bernard Stiegler’s formulation of technicity—pointing toward the conditioning of play through ancillary gaming objects, and focusing particularly on the relationship between game skill, game guides, and embodied processes of memory and perception. In short, this article considers videogame skill-building, through means beyond the game, as a significant recalibration of the embodied, temporal, and technical entanglements involved in play. Building Skill: From Guides to Bodies There is a handsome literature that has sought to conceptualise the influence of ancillary game material, which can be traced to earlier theories of media convergence (Jenkins). More incisive accounts (pointing directly at game-skill) have been developed since, through theoretical rubrics such as paratext and metagaming. A point of congruence is the theme of relation: the idea that the locus of understanding and meaning can be specified through things outside the game. For scholars like Mia Consalvo (who popularised the notion of paratext in game studies), paratexts are a central motor in play. As Consalvo suggests, paratexts are quite often primed to condition how we do things in and around videogames; there is a great instructive potential in material like walkthrough guides, gaming magazines, and cheating devices. Subsequent work has since made productive use of the concept to investigate game-skill and peripheral material and practice. Worth noting is Chris Paul’s research on World of Warcraft (WoW).
Paul suggests that players disseminate high-level strategies through a practice known as ‘Theorycraft’ in the game’s community: one involving the use of paratextual statistics applications to optimise play—the results then disseminated across Web-forums (see also: Nardi). Metagaming (Salen and Zimmerman 482) is another concept that is often used to position the various extrinsic objects or practices installed in play—a concept deployed by scholars to conceptualise skill building through both games and the things at their thresholds (Donaldson). Moreover, the ability to negotiate out-of-game material has been positioned as a form of skill in its own right (see also: Donaldson). Becoming familiar with paratextual resources, and being able to parse this information, could then constitute skill-building. Ancillary gaming objects are important and, as some have argued, central in gaming culture (Consalvo). However, critical areas are left unexamined with respect to skill-building, because scholars often fail to place paratexts or metagaming in the contexts in which they operate; that is, amongst the complex technical, embodied, and temporal conjunctures of play—such as those described by Sudnow. Conceptually, much of what Sudnow says in Microworld undergirds the post-human, object-oriented, or post-phenomenological literature that has begun to populate game studies (and indeed media studies more broadly). This materially-inflected writing takes seriously the fact that technical objects (like videogames) and human subjects are caught up in the rhythms of each other; digital media exists “as a mode or cluster of operations in consort with matter”, as Anna Munster tells us (330). To return to videogames, Patrick Crogan and Helen Kennedy argue that gameplay is about a “technicity” between human and nonhuman things, irreducible to any sole actor. Play is a confluence of metastable forces and conditions, a network of distributed agencies (see also Taylor, Assemblage).
Others like Brendan Keogh forward post-phenomenological approaches (operating under scholars like Don Ihde)—looking past the subject-centred nature of videogame research. Ultimately, these theorists situate play as an ‘exploded diagram’, challenging anthropocentric accounts. This position has proven productive in research on ‘skilled’ or ‘high-level’ play (fertile ground for considering competency-development). Emma Witkowski, T.L. Taylor (Raising), and Todd Harper have suggested that skilled play in games emerges from the management of complex embodied and technical rhythms (echoing the points raised prior by Sudnow). Placing Paratexts in Play While we have these varying accounts of how skill develops within and beyond player-game relationships, the two perspectives are rarely consolidated. That said, I address some of the limited body of work that has sought to place the paratext in the complex and distributed conjunctures of play; building a vocabulary and framework via encounters with what could loosely be called post-phenomenological thinking (not dissimilar to the accounts just surveyed). The strength of this work lies in its development of a more precise view of the operational reality of playing ‘with’ paratexts. The recent work of Darshana Jayemanne, Bjorn Nansen, and Thomas Apperley theorises the outward expansion of games and play, into diverse material, social, and spatial dimensions (147), as an ‘aesthetics of recruitment’. Consideration is given to ‘paratextual’ play and skill. For instance, they provide the example of players invoking the expertise they have witnessed broadcast through Websites like Twitch.tv or YouTube—skill-building operating here across various fronts, and through various modalities (155).
Players are ‘recruited’, in different capacities, through expanded interfaces, which ultimately contour phenomenological encounters with games. Ash provides a fine-grained account in research on spatiotemporal perception and videogames—one much more focused on game-skill. Ash examines how high-level communities of players cultivate ‘spatiotemporal sensitivity’ in the game Street Fighter IV through—in Stiegler’s terms—‘exteriorising’ (Fault) game information into various data sets—producing what he calls ‘technicity’. In this way, Ash suggests that these paratextual materials don’t merely ‘influence play’ (Technology 200), but rather direct how players perceive time, and habituate exteriorised temporal rhythms into their embodied facility (a translation of high-level play). By doing so, the game can be played more proficiently. Following the broadly post-phenomenological direction of these works, I develop a brief account of two paratextual practices. Like Ash, I deploy the work of Stiegler (drawing also on Ash’s usage). I utilise Stiegler’s theoretical schema of technicity to roughly sketch how some other areas of skill-building via peripheral material can be placed within the context of play—looking particularly at the conditioning of the embodied faculties of player anticipation, memory, and perception through play and paratext alike. A Technicity of Paratext The general premise of Stiegler’s technicity is that the human cannot be thought of independently from their technical supplements—that is, ‘exterior’ technical objects which could include, but are not limited to, technologies (Fault). Stiegler argues that the human, and their fundamental memory structure, is finite, and as such is reliant on technical prostheses, which register and transmit experience (Fault 17). This technical supplement is what Stiegler terms ‘tertiary retention’.
In short, for Stiegler, technicity can be understood as the interweaving of ‘lived’ consciousness (Cinematic 21) with the tertiary retentional apparatus—which is palpably felt in our orientations in and toward time (Fault) and space (including the ‘space’ of our bodies; see New Critique 11). To be more precise, tertiary retention conditions the relationship between perception, anticipation, and subjective memory (or what Stiegler—by way of phenomenologist Edmund Husserl, whose work he renovates—calls primary retention, protention, and secondary retention respectively). As Ash demonstrates (Technology), Stiegler’s framework is rich with potential for investigating the relationship between videogames and their peripheral materials. Invoking technicity, we can rethink—and expand on—commonly encountered forms of paratext, such as game guides or walkthroughs (an example Consalvo gives in Cheating). Stiegler’s framework provides a means to assess the technical organisation (through both games and paratexts) of the embodied and temporal conditions of ‘skilled play’. Following Stiegler, Consalvo’s example of a game guide is a kind of ‘exteriorisation of play’ (to the guide) that adjusts the embodied and temporal conditions of anticipation and memory (which Sudnow would tell us are key in skill-development). To work through an example, if I were playing a hard game (such as Dark Souls [From Software]), the general idea is that I would be playing from memories of the just-experienced, and with expectations of what’s to come based on everything that has happened prior (following Stiegler). There is a technicity in the game’s design here, as Ash would tell us (Technology 190-91). By way of Stiegler (and his reading of Heidegger), Ash argues that a popular trend in game design is to force a technologically-mediated interplay between memory, anticipation, and perception by making videogames ‘about’ “a future outside of present experience” (Technology 191), but hinging this on past memory.
Players, then, to be ‘skilful’ and move forward through the game environment without dying, need to manage cognitive and somatic memory (which, in Dark Souls, is conventionally accrued through trial-and-error play; learning through error is incentivised through punitive game mechanics, such as item-loss). So, if I were playing against one of the game’s ‘bosses’ (powerful enemies), I would generally only be familiar with the way they manoeuvre, the speed with which they do so, and where and when to attack, based on prior encounter. For instance, my past experience (of having died numerous times) would generally inform me that using a two-handed sword allows me to get in two attacks on a boss before needing to retreat to avoid fatal damage. Following Stiegler, we can understand the inscription of videogame experience in objects like game guides as giving rise to anticipation and memory—albeit based on a “past that I have not lived but rather inherited as tertiary retentions” (Cinematic 60). Tertiary retentions trigger processes of selection in our anticipations, memories, and perceptions. Where videogame technologies are traditionally the tertiary retentions in play (Ash, Technology), the use of game-guides refracts anticipation, memory, and perception through joint systems of tertiary retention—resulting in the outcome of more efficiently beating a game. To return to my previous example of navigating Dark Souls: where I might have died otherwise, via the guide I’d be cognisant of the timings within which I can attack the boss without sustaining damage, and of when to dodge its crushing blows—allowing me to eventually defeat it and move toward the stage’s end (prompting somatic and cognitive memory shifts, which influence my anticipation in-game).
Through ‘neurological’ accounts of technology—such as Stiegler’s technicity—we can think more closely about how playing with a skill-building apparatus (like a game guide) works in practice; allowing us to identify how various situations in-game can be managed by deferring functions of the player (such as memory) to exteriorised objects—shifting the conditions of skill building. The prism of technicity is also useful in conceptualising some of the new ways players are building skill beyond the game. In recent years, gaming paratexts have transformed in scope and scale. Gaming has shifted into an age of quantification—with analytics platforms which harvest, aggregate, and present player data gaining significant traction, particularly in competitive and multiplayer videogames. These platforms perform numerous operations that assist players in developing skill—and are marketed as tools for players to improve by reflecting on their own practices and the practices of others (functioning similarly to the previously noted practice of TheoryCraft, but operating at a wider scale). To focus on one example, the WarCraftLogs application in WoW (Image 1) is a highly sophisticated form of videogame analytics; the perspective of technicity provides insights into its functionality as a skill-building apparatus. Image 1: WarCraftLogs. Image credit: Ben Egliston. Following Ash’s use of Stiegler (Technology), quantifying the operations that go into playing WoW can be conceptualised as what Stiegler calls a system of traces (Technology 196). Because of his central thesis of ‘technical existence’, Stiegler maintains that ‘interiority’ is coincident with technical support. As such, there is no calculation, no mental phenomenon, that does not arise from the internal manipulation of exteriorised symbols (Cinematic 52-54).
Following on with his discussion of videogames, Ash suggests that in the exteriorisation of gameplay there is “no opposition between gesture, calculation and the representation of symbols” (Technology 196); the symbols working as an ‘abbreviation’ of gameplay that can be read as such. Drawing influence from this view, I show that ‘Big Data’ analytics platforms like WarCraftLogs similarly allow users to ‘read’ play as a set of exteriorised symbols—with significant outcomes for skill-building; allowing users to exteriorise their own play, examine the exteriorised play of others, and compare exteriorisations of their own play with those of others. Image 2: WarCraftLogs Gameplay Breakdown. Image credit: Ben Egliston. Image 2 shows a screenshot of the WarCraftLogs interface. Here we can see the exteriorisation of gameplay, and how the platform breaks down player inputs and in-game occurrences (written and numeric, like Ash’s game data). The screenshot shows a ‘raid boss’ encounter (where players team up to defeat powerful computer-controlled enemies)—atomising the sequence of inputs a player has made over the course of the encounter. This is an accurate ledger of play—a readout that can speak to mechanical performance (specific in-game events occurred at a specific time), as well as caching and providing parses of somatic inputs and execution (e.g. the ability to trace the rates at which players expend in-game resources can provide insights into the rapidity of button presses). If information falls outside what is presented, players can work with an Application Programming Interface to develop customised readouts (this is encouraged through other game-data platforms, like OpenDota in Dota 2). Through this system, players can exteriorise their own input and output, or view the play of others—both useful in building skill.
The first point here—of exteriorising one’s own experience—resonates with Stiegler’s renovation of Husserl’s ‘temporal object’—that is, an object that exists in, and is formed through, time—through temporal fluxes of what appears, what happens, and what manifests itself in disappearing (Cinematic 14). Stiegler suggests that tertiary retentional apparatus (e.g. a gramophone) allow us to re-experience a temporal object (e.g. a melody), which would otherwise not be possible due to the finitude of human memory. To elaborate, Stiegler argues that primary memories recede into secondary memory (which is the selective reactivation of perception), but through technologies of recording (such as game-data) we can re-experience these things verbatim. So ultimately, games analytics platforms—as exteriorised technologies of recording—facilitate this after-the-fact interplay between primary and secondary memory, where players can ‘audit’ their past performance, reflecting on well-played encounters or revising error. These platforms allow the detailed examination of responses to game mechanics, and provide readouts of the technical and embodied rhythms of play (which can be incorporated into future play via reading the data). Beyond self-reflection, these platforms allow the examination of others’ play. The aggregation and sorting of game-data makes expertise both visible and legible. To elaborate, players are ranked on their performance based on all submitted log-data, offering a view of how expertise ‘works’. Image 3: Top-Ranking Players in WarCraftLogs. Image credit: Ben Egliston. Image 3 shows the top-ranked players on an encounter (the top 10 of over 100,000 logs), which means that these players have performed most competently out of all gameplay parses (the metric being most damage dealt per second in defeating a boss).
Users of the platform can look in detail at the actions performed by top players in that encounter—reading and mobilising data in a similar manner to game-guides; markedly different, however, in terms of the scope (i.e. there are many available logs to draw from) and richness of the data (more detailed and current—with log rankings recalibrated regularly). Conceptually, we can also draw parallels with previous work (see: Ash, Technology)—where the habituation of expert game data can produce new videogame technicities; ways of ‘experiencing’ play as a ‘higher-level’ organisation of space and time (Ash, Technology). So, if a player wanted to ‘learn from the experts’, they would restructure their own rhythms of play around high-level logs, which provide an ordered readout of the various sequences of inputs involved in playing well. Moreover, the platform allows players to compare their logs to those of others—so these various introspective and outward-facing uses can work together, conditioning anticipations with inscriptions of past play and ‘prosthetic’ memories through others’ log-data. In my experience as a WoW player, I often performed better (or built skill) by comparing and contrasting my own detailed readouts of play with the inputs and outputs of the best players in the world. To summarise, through technicity I have briefly shown how exteriorising play shifts the conditions of skill-building from recalibrating mnesic and anticipatory processes through ‘firsthand’ play, to reworking these functions through engaging both games and extrinsic objects, like game guides and analytics platforms. Additionally, by reviewing and adopting various usages of technicity, I have pointed out how we might more holistically situate the gaming paratext in skill building. Conclusion There is little doubt—as exemplified through both scholarly and popular interest—that paratextual videogame material reframes modes of building game skill.
Following recent work, and by providing a brief account of two paratextual practices (venturing the framework of technicity, via Stiegler and Ash—showing the complication of memory, perception, and anticipation in skill-building), I have contended that videogame skill-building—via paratextual material—can be rendered a process of operating outside of, but still caught up in, the complex assemblages of time, bodies, and technical architectures described by Sudnow at this article’s outset. Additionally, by reviewing and adopting ideas associated with technics and post-phenomenology, this article has aimed to contribute to the development of more ‘complete’ accounts of the processes and practices comprising the skill-building regimens of contemporary videogame players. References Ash, James. “Technology, Technicity and Emerging Practices of Temporal Sensitivity in Videogames.” Environment and Planning A 44.1 (2012): 187-201. ———. “Technologies of Captivation: Videogames and the Attunement of Affect.” Body and Society 19.1 (2013): 27-51. Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge: Massachusetts Institute of Technology P, 2007. Crogan, Patrick, and Helen Kennedy. “Technologies between Games and Culture.” Games and Culture 4.2 (2009): 107-14. Donaldson, Scott. “Mechanics and Metagame: Exploring Binary Expertise in League of Legends.” Games and Culture (2015). 4 Jun. 2015 <http://journals.sagepub.com/doi/abs/10.1177/1555412015590063>. From Software. Dark Souls. PlayStation 3 game. 2011. Harper, Todd. The Culture of Digital Fighting Games: Performance and Practice. New York: Routledge, 2014. Jayemanne, Darshana, Bjorn Nansen, and Thomas H. Apperley. “Postdigital Interfaces and the Aesthetics of Recruitment.” Transactions of the Digital Games Research Association 2.3 (2016): 145-72. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Keogh, Brendan. “Across Worlds and Bodies.” Journal of Games Criticism 1.1 (2014).
Jan. 2014 <http://gamescriticism.org/articles/keogh-1-1/>. Munster, Anna. “Materiality.” The Johns Hopkins Guide to Digital Media. Eds. Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson. Baltimore: Johns Hopkins UP, 2014. 327-30. Nardi, Bonnie. My Life as Night Elf Priest: An Anthropological Account of World of Warcraft. Ann Arbor: Michigan UP, 2010. OpenDota. OpenDota. Web browser application. 2017. Paul, Christopher A. “Optimizing Play: How Theory Craft Changes Gameplay and Design.” Game Studies: The International Journal of Computer Game Research 11.2 (2011). May 2011 <http://gamestudies.org/1102/articles/paul>. Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge: Massachusetts Institute of Technology P, 2004. Stiegler, Bernard. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford UP, 1998. ———. For a New Critique of Political Economy. Cambridge: Polity, 2010. ———. Technics and Time, 3: Cinematic Time and the Question of Malaise. Stanford: Stanford UP, 2011. Sudnow, David. Pilgrim in the Microworld. New York: Warner Books, 1983. Taylor, T.L. “The Assemblage of Play.” Games and Culture 4.4 (2009): 331-39. ———. Raising the Stakes: E-Sports and the Professionalization of Computer Gaming. Cambridge: Massachusetts Institute of Technology P, 2012. WarCraftLogs. WarCraftLogs. Web browser application. 2016. Witkowski, Emma. “On the Digital Playing Field: How We ‘Do Sport’ with Networked Computer Games.” Games and Culture 7.5 (2012): 349-74.
49

McQuillan, Dan. "The Countercultural Potential of Citizen Science." M/C Journal 17, no. 6 (October 12, 2014). http://dx.doi.org/10.5204/mcj.919.

Full text of source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
What is the countercultural potential of citizen science? As a participant in the wider citizen science movement, I can attest that contemporary citizen science initiatives rarely characterise themselves as countercultural. Rather, the goal of most citizen science projects is to be seen as producing orthodox scientific knowledge: the ethos is respectability rather than rebellion (NERC). I will suggest instead that there are resonances with the counterculture that emerged in the 1960s, most visibly through an emphasis on participatory experimentation and the principles of environmental sustainability and social justice. This will be illustrated by example, through two citizen science projects that have a commitment to combining social values with scientific practice. I will then describe the explicitly countercultural organisation, Science for the People, which arose from within the scientific community itself, out of opposition to the Vietnam War. Methodological and conceptual weaknesses in the authoritative model of science are explored, suggesting that there is an opportunity for citizen science to become anti-hegemonic by challenging the hegemony of science itself. This reformulation will be expressed through Deleuze and Guattari's notion of nomadic science, the means through which citizen science could become countercultural. Counterculture Before examining the countercultural potential of citizen science, I set out some of the grounds for identifying a counterculture drawing on the ideas of Theodore Roszak, who invented the term counterculture to describe the new forms of youth movements that emerged in the 1960s (Roszak). This was a perspective that allowed the carnivalesque procession of beatniks, hippies and the New Left to be seen as a single paradigm shift combining psychic and social revolution. 
But just as striking and more often forgotten is the way Roszak characterised the role of the counterculture as mobilising a vital critique of the scientific worldview (Roszak 273-274). The concept of counterculture has been taken up in diverse ways since its original formation. We can draw, for example, on Lawrence Grossberg's more contemporary analysis of counterculture (Grossberg) to clarify the main concepts and contrast them with a scientific approach. Firstly, a counterculture works on and through cultural formations. This positions it as something the scientific community would see as the other, as the opposite to the objective, repeatable and quantitative truth-seeking of science. Secondly, a counterculture is a diverse and hybrid space without a unitary identity. Again, scientists would often see science as a singular activity applied in modulated forms depending on the context, although in practice the different sciences can experience each other as different tribes. Thirdly, a counterculture is lived as a transformative experience where the participant is fundamentally changed at a psychic level through participation in unique events. Contrast this with the scientific idea of the separation of observer and observed, and the objective repeatability of the experiment irrespective of the experimenter. Fourthly, a counterculture is associated with a unique moment in time, a point of shift from the old to the new. For the counterculture of the 1960s this was the Age of Aquarius. In general, the aim of science and scientists is to contribute to a form of truth that is essentially timeless, in that a physical law is assumed to hold across all time (and space), although science also has moments of radical change with regard to scientific paradigms. Finally, and significantly for the conclusions of this paper, according to Roszak a counterculture stands against the mainstream. 
It offers a challenge not at the level of detail but to the fundamental assumptions of the status quo. This is what “science” cannot do, inasmuch as science itself has become the mainstream. It was the character of science as the bedrock of all values that Roszak himself opposed and for which he named and welcomed the counterculture. Although critical of some of the more shallow aspects of its psychedelic experimentation or political militancy, he shared its criticism of the technocratic society (the technocracy) and the egocentric mode of consciousness. His hope was that the counterculture could help restore a visionary imagination along with a more human sense of community. What Is Citizen Science? In recent years the concept of citizen science has grown massively in popularity, but it is still an open and unstable term with many variants. Current moves towards institutionalisation (Citizen Science Association) are attempting to marry growth and stabilisation, with the first Annual General Meeting of the European Citizen Science Association securing a tentative agreement on the common principles of citizen science (Haklay, "European"). Key papers and presentations in the mainstream of the movement emphasise that citizen science is not a new activity (Bonney et al.), with much being made of the fact that the National Audubon Society started its annual Christmas Bird Count in 1900 (National Audubon Society). However, this elides the key role of the Internet in the current surge, which takes two distinct forms: the organisation of distributed fieldwork, and the online crowdsourcing of data analysis. To scientists, the appeal of citizen science fieldwork follows from its distributed character; they can research patterns over large scales and across latitudes in ways that would be impossible for a researcher at a single study site (Toomey). Gathering together the volunteer observations is made possible by an infrastructure of web tools.
The role of the citizen in this is to be a careful observer; the eyes and ears of the scientist in cyberspace. In online crowdsourcing, the Internet is used to present pattern recognition tasks, enrolling users in searching images for signs of new planets or the jets of material from black holes. The growth of science crowdsourcing is exponential; one of the largest sites facilitating this kind of citizen science now has well in excess of a million registered users (Zooniverse). Such is the force of the technological aura around crowdsourced science that mainstream publications often conflate it with the whole of citizen science (Parr). There are projects within citizen science which share core values with the counterculture as originally defined by Roszak, in particular open participation and social justice. These projects also show characteristics from Grossberg's analysis of counterculture; they are diverse and hybrid spaces, carry a sense of moving from an old era to a new one, and have cultural forms of their own. They open up the full range of the scientific method to participation, including problem definition, research design, analysis and action. Citizen science projects that aim for participation in all these areas include the Extreme Citizen Science research group (ExCiteS) at University College London (UCL), the associated social enterprise Mapping for Change (Mapping for Change), and the Public Laboratory for Open Technology and Science (Public Lab). ExCiteS sees its version of citizen science as "a situated, bottom-up practice" that "takes into account local needs, practices and culture". Public Lab, meanwhile, argue that many citizen science projects only offer non-scientists token forms of participation in scientific inquiry that rarely amount to more than data collection and record keeping.
They counter this through an open process which tries to involve communities all the way from framing the research questions, to prototyping tools, to collating and interpreting the measurements. ExCiteS and Public Lab also share an implicit commitment to social justice through scientific activity. The Public Lab mission is to "put scientific inquiry at the heart of civic life" and the UCL research group strive for "new devices and knowledge creation processes that can transform the world". All of their work is framed by environmental sustainability and care for the planet, whether it's enabling environmental monitoring by indigenous communities in the Congo (ExCiteS) or developing do-it-yourself spectrometry kits to detect crude oil pollution (Public Lab, "Homebrew"). Having provided a case for elements of countercultural DNA being present in bottom-up and problem-driven citizen science, we can contrast this with Science for the People, a scientific movement that was born out of the counterculture.

Countercultural Science from the 1970s: Science for the People

Science for the People (SftP) was a scientific movement seeded by a rebellion of young physicists against the role of US science in the Vietnam War. Young members of the American Physical Society (APS) lobbied for it to take a position against the war but were heavily criticised by other members, whose written complaints in the communications of the APS focused on the importance of scientific neutrality and the need to maintain the association's purely scientific nature rather than allowing science to become contaminated by politics (Sarah Bridger, in Plenary 2, 0:46 to 1:04). The counter-narrative from the dissidents argued that science is not neutral, invoking the example of Nazi science as a justification for taking a stand. After losing the internal vote the young radicals left to form Scientists and Engineers for Social and Political Action (SESPA), which later became Science for the People (SftP).
As well as opposition to the Vietnam War, SftP embodied from the start other key themes of the counterculture, such as civil rights and feminism. For example, the first edition of Science for the People magazine (appearing as Vol. 2, No. 2 of the SESPA Newsletter) included an article about leading Black Panther, Bobby Seale, alongside a piece entitled “Women Demand Equality in Science.” The final articles in the same issue are indicators of SftP's dual approach to science and change; both the radicalisation of professionals (“Computer Professionals for Peace”) and the demystification of technical practices (“Statistics for the People”) (Science for the People). Science for the People was by no means just a magazine. For example, their technical assistance programme provided practical support to street health clinics run by the Black Panthers, and brought SftP under FBI surveillance (Herb Fox, in Plenary 1, 0:25 to 0:35). Both as a magazine and as a movement, SftP showed a tenacious longevity, with the publication being produced every two months between August 1970 and May/June 1989. It mutated through a network of affiliated local groups and international links, and was deeply involved in constructing early critiques of nuclear power and genetic determinism. SftP itself seems to have had a consistent commitment to non-hierarchical processes and, as one of the founders expressed it, a “shit kicking” approach to putting its principles into practice (Al Weinrub, in Plenary 1, 0:25 to 0:35). SftP criticised power, front and centre. It is this opposition to hegemony that puts the “counter” into counterculture, and is missing from citizen science as currently practised. The next section traces cracks in the authority of orthodox science, in both its methodologies and its basic concepts. These cracks can be seen as an opportunity for citizen science to directly challenge orthodox science and thus establish an anti-hegemonic stance of its own.
Weaknesses of Scientific Hegemony

In this section I argue that the weaknesses of scientific hegemony are in proportion to its claims to authority (Feyerabend). Through my scientific training as an experimental particle physicist I have participated in many discussions about the ontological and epistemological grounds for scientific authority. While most scientists choose to present their practice publicly as an infallible machine for the production of truths, the opinions behind the curtain are far more mixed. Physicist Lee Smolin has written a devastating critique of science-in-practice that focuses on the capture of the institutional economy of science by an ideological grouping of string theorists (Smolin), and his account is replete with questions about science itself and ethnographic details that bring to life the messy behind-the-scenes conflicts in scientific knowledge-making. Knowledge of this messiness has prompted some citizen science advocates to take science to task, for example for demanding higher standards in data consistency from citizen science than is often the case in orthodox science (Haklay, "Assertions"; Freitag, "Good Science"). Scientists also invariably refer to reproducibility as the basis for the authority of scientific truths. The principle that the same experiments always get the same results, irrespective of who is doing the experiment, as long as they follow the same method, is a foundation of scientific objectivity. However, a 2012 study of landmark results in cancer science was able to reproduce only 11 per cent of the original findings (Begley and Ellis). While this may be an outlier case, there are broader issues with statistics and falsification, a bias towards positive results, weaknesses in peer review and the “publish or perish” academic culture (The Economist). While the pressures are all-too-human, the resulting distortions are rarely acknowledged in public by scientists themselves.
On the other hand, citizen science has been slow to pick up the gauntlet. For example, while some scientists involved in citizen science have commented on the inequality and inappropriateness of orthodox peer review for citizen science papers (Freitag, “What Is the Role”), there has been no direct challenge to any significant part of the scientific edifice. I argue that the nearest thing to a real challenge to orthodox science is the proposal for a post-normal science, which pre-dates the current wave of citizen science. Post-normal science tries to accommodate the philosophical implications of post-structuralism and at the same time position science to tackle problems, such as climate change, that are intractable to reproducibility (Funtowicz and Ravetz). It accomplishes this by extending the domains in which science can provide meaningful answers to include issues such as global warming, which involve high decision stakes and high uncertainty. It extends traditional peer review into an extended peer community, which includes all the stakeholders in an issue and may involve active research as well as quality assessment. The idea of extended peer review has obvious overlaps with community-oriented citizen science, but has yet to be widely mobilised as a theoretical buttress for citizen-led science. Prior even to post-normal science are the potential cracks in the core philosophy of science. In her book Cosmopolitics, Isabelle Stengers characterises the essential nature of scientific truth as the ability to disqualify and exclude other truth claims. This, she asserts, is the hegemony of physics and its singular claim to decide what is real and what is true. Stengers traces this, in part, to the confrontation more than one hundred years ago between Max Planck and Ernst Mach, in which the latter argued that claims to an absolute truth should be replaced by formulations that tied physical laws to the human practices that produced them.
Planck stood firmly for knowledge forms that were unbounded by time, space or specific social-material procedures (Stengers). Although contemporary understandings of science are based on Planck's version, citizen science has the potential to re-open these questions in a productive manner for its own practices, if it can re-conceive of itself as what Deleuze and Guattari would call nomadic science (Deleuze; Deleuze and Guattari).

Citizen Science as Nomadic Science

Deleuze and Guattari referred to orthodox science as Royal Science or Striated Science, referring in part to its state-like form of authority and practice, as well as its psycho-social character. Their alternative is a smooth or nomadic science that, importantly for citizen science, does not have the ambition to totalise knowledge. Nomadic science is a form of empirical investigation that has no need to be hooked up to a grand narrative. The concept of nomadic science is a natural fit for bottom-up citizen science because it can valorise truths that are non-dual and that go beyond objectivity to include the experiential. In this sense it is like the extended peer review of post-normal science but without the need to be limited to high-risk high-stakes questions. As there is no a priori problem with provisional knowledges, it naturally inclines towards the local, the situated and the culturally reflective. The apparent unreliability of citizen science in terms of participants and tools, which is usually a source of anxiety, can become heuristic for nomadic science when re-cast through forgotten alternatives like Mach's formulation; that truths are never separated from the specifics of the context and process that produced them (Stengers 6-18; 223). Nomadic science, I believe, will start to emerge through projects that are prepared to tackle toxic epistemology as much as toxic pollutants.
For example, the Community Based Auditing (CBA) developed by environmental activists in Tasmania (Tattersall) challenges local alliances of state and extractive industries by undermining their own truth claims with regards to environmental impact, a process described in the CBA Toolbox as disconfirmation. In CBA, this mixture of post-normal science and Stengers's critique is combined with forms of data collection and analysis known as Community Based Sampling (Tattersall et al.), which would be recognisable to any citizen science project. The change from citizen science to nomadic science is not a total rupture but a shift in the starting point: it is based on an overt critique of power. One way to bring this about is being tested in the “Kosovo Science for Change” project (Science for Change Kosovo), where I am a researcher and where we have adopted the critical pedagogy of Paulo Freire as the starting point for our empirical investigations (Freire). Critical pedagogy is learning as the co-operative activity of understanding how our lived experience is constructed by power, and how to make a difference in the world. Taking a position such as nomadic science, openly critical of Royal Science, is the anti-hegemonic stance that could qualify citizen science as properly countercultural.

Citizen Science and Counterculture

Counterculture, as I have expressed it, stands against or rejects the hegemonic culture. However, there is a strong tendency in contemporary social movements to take a stance not only against the dominant structures but against hegemony itself. They contest what Richard Day calls the hegemony of hegemony (Day). I witnessed this during the counter-G8 mobilisation of 2001. Having been an activist in the 1980s and 1990s, I was wearily familiar with the sectarian competitiveness of various radical narratives, each seeking to establish itself as the correct path.
So it was a strongly affective experience to stand in the convergence centre and listen to so many divergent social groups and movements agree to support each other's tactics, expressing a solidarity based on a non-judgemental pluralism. Since then we have seen the emergence of similarly anti-hegemonic countercultures around the Occupy and Anonymous movements. It is in this context of counterculture that I will try to summarise and evaluate the countercultural potential of citizen science and what being countercultural might offer to citizen science itself. To be countercultural it is not enough for citizen science to counterpose participation against the institutional and hierarchical aspects of professional science. As an activity defined purely by engagement it offers to plug the legitimacy gap for science while still being wholly dependent on it. A countercultural citizen science must pose a strong challenge to the status quo, and I have suggested that a route to this would be to develop as nomadic science. This does not mean replacing or overthrowing science but constructing an other to science with its own claim to empirical methods. It is fair to ask what this would offer citizen science that it does not already have. At an abstract level it would gain a freedom of movement; an ability to occupy Deleuzian smooth spaces rather than be constrained by the striation of established science. The founders of Science for the People are clear that it could never have existed if it had not been able to draw on the mass movements of its time. Being countercultural would give citizen science an affinity with the bottom-up, local and community-based issues where empirical methods are likely to have the most social impact. One of many examples is the movement against fracking (the hydraulic fracturing of deep rock formations to release shale gas). 
Together, these benefits of being countercultural open up the possibility for forms of citizen science to spread rhizomatically, in a way that is not about immaterial virtual labour but is itself part of a wider cultural change. The possibility of a nomadic science stands as a doorway to the change that Roszak saw at the heart of the counterculture, a renewal of the visionary imagination.

References

Begley, C. Glenn, and Lee M. Ellis. "Drug Development: Raise Standards for Preclinical Cancer Research." Nature 483.7391 (2012): 531–533. 8 Oct. 2014 ‹http://www.nature.com/nature/journal/v483/n7391/full/483531a.html›.
Bonney, Rick, et al. "Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy." BioScience 59.11 (2009): 977–984. 6 Oct. 2014 ‹http://bioscience.oxfordjournals.org/content/59/11/977›.
Citizen Science Association. "Citizen Science Association." 2014. 6 Oct. 2014 ‹http://citizenscienceassociation.org/›.
Day, Richard J.F. Gramsci Is Dead: Anarchist Currents in the Newest Social Movements. London: Pluto Press, 2005.
Deleuze, Gilles. Nomadology: The War Machine. New York, NY: MIT Press, 1986.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. London: Bloomsbury Academic, 2013.
ExCiteS. "From Non-Literate Data Collection to Intelligent Maps." 26 Aug. 2013. 8 Oct. 2014 ‹http://www.ucl.ac.uk/excites/projects/excites-projects/intelligent-maps/intelligent-maps›.
Feyerabend, Paul K. Against Method. 4th ed. London: Verso, 2010.
Freire, Paulo. Pedagogy of the Oppressed. Continuum International Publishing Group, 2000.
Freitag, Amy. "Good Science and Bad Science in Democratized Science." Oceanspaces 22 Jan. 2014. 9 Oct. 2014 ‹http://oceanspaces.org/blog/good-science-and-bad-science-democratized-science›.
---. "What Is the Role of Peer-Reviewed Literature in Citizen Science?" Oceanspaces 29 Jan. 2014. 10 Oct. 2014 ‹http://oceanspaces.org/blog/what-role-peer-reviewed-literature-citizen-science›.
Funtowicz, Silvio O., and Jerome R.
Ravetz. "Science for the Post-Normal Age." Futures 25.7 (1993): 739–755. 8 Oct. 2014 ‹http://www.sciencedirect.com/science/article/pii/001632879390022L›.
Grossberg, Lawrence. "Some Preliminary Conjunctural Thoughts on Countercultures." Journal of Gender and Power 1.1 (2014). 3 Nov. 2014 ‹http://gender-power.amu.edu.pl/?page_id=20›.
Haklay, Muki. "Assertions on Crowdsourced Geographic Information & Citizen Science #2." Po Ve Sham - Muki Haklay’s Personal Blog 16 Jan. 2014. 8 Oct. 2014 ‹http://povesham.wordpress.com/2014/01/16/assertions-on-crowdsourced-geographic-information-citizen-science-2/›.
---. "European Citizen Science Association Suggestion for 10 Principles of Citizen Science." Po Ve Sham - Muki Haklay’s Personal Blog 14 May 2014. 6 Oct. 2014 ‹http://povesham.wordpress.com/2014/05/14/european-citizen-science-association-suggestion-for-10-principles-of-citizen-science/›.
Mapping for Change. "Mapping for Change." 2014. 6 June 2014 ‹http://www.mappingforchange.org.uk/›.
National Audubon Society. "Christmas Bird Count." 2014. 6 Oct. 2014 ‹http://birds.audubon.org/christmas-bird-count›.
NERC. "Best Practice Guides to Choosing and Using Citizen Science for Environmental Projects." Centre for Ecology & Hydrology May 2014. 9 Oct. 2014 ‹http://www.ceh.ac.uk/products/publications/understanding-citizen-science.html›.
Parr, Chris. "Why Citizen Scientists Help and How to Keep Them Hooked." Times Higher Education 6 June 2013. 6 Oct. 2014 ‹http://www.timeshighereducation.co.uk/news/why-citizen-scientists-help-and-how-to-keep-them-hooked/2004321.article›.
Plenary 1: Stories from the Movement. Film. Science for the People, 2014.
Plenary 2: The History and Lasting Significance of Science for the People. Film. Science for the People, 2014.
Public Lab. "Public Lab: A DIY Environmental Science Community." 2014. 6 June 2014 ‹http://publiclab.org/›.
---. "The Homebrew Oil Testing Kit." Kickstarter 24 Sep. 2014. 8 Oct.
2014 ‹https://www.kickstarter.com/projects/publiclab/the-homebrew-oil-testing-kit›.
Roszak, Theodore. The Making of a Counter Culture. Garden City, N.Y.: Anchor Books/Doubleday, 1969.
Science for Change Kosovo. "Citizen Science Kosovo." Facebook, n.d. 17 Aug. 2014 ‹https://www.facebook.com/CitSciKS›.
Science for the People. "SftP Magazine." 2013. 8 Oct. 2014 ‹http://science-for-the-people.org/sftp-resources/magazine/›.
Smolin, Lee. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Reprint ed. Boston: Mariner Books, 2007.
Stengers, Isabelle. Cosmopolitics I. Trans. Robert Bononno. Minneapolis: U of Minnesota P, 2010.
Tattersall, Philip J. "What Is Community Based Auditing and How Does It Work?" Futures 42.5 (2010): 466–474. 9 Oct. 2014 ‹http://www.sciencedirect.com/science/article/pii/S0016328709002055›.
---, Kim Eastman, and Tasmanian Community Resource Auditors. Community Based Auditing: Tool Boxes: Training and Support Guides. Beauty Point, Tas.: Resource Publications, 2010.
The Economist. "Trouble at the Lab." 19 Oct. 2013. 8 Oct. 2014 ‹http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble›.
Toomey, Diane. "How Rise of Citizen Science Is Democratizing Research." 28 Jan. 2014. 6 Oct. 2014 ‹http://e360.yale.edu/feature/interview_caren_cooper_how_rise_of_citizen_science_is_democratizing_research/2733/›.
UCL. "Extreme Citizen Science (ExCiteS)." July 2013. 6 June 2014 ‹http://www.ucl.ac.uk/excites/›.
Zooniverse. "The Ever-Expanding Zooniverse - Updated." Daily Zooniverse 3 Feb. 2014. 6 Oct. 2014 ‹http://daily.zooniverse.org/2014/02/03/the-ever-expanding-zooniverse-updated/›.
