Journal articles on the topic "Extranets (Computer networks)"

Consult the 47 best scholarly journal articles on the topic "Extranets (Computer networks)".


You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the work's metadata.

Browse journal articles from various disciplines and compile appropriate bibliographies.

1

Morgan, David. "Deploying extranets?" Network Security 2004, no. 12 (December 2004): 12–14. http://dx.doi.org/10.1016/s1353-4858(04)00170-9.

2

Liu, Nan, Zhi Zeng, and Ruoyu Jin. "A Survey on Users' Perspectives to Functionalities of Web-Based Construction Collaboration Extranets". International Journal of e-Collaboration 15, no. 4 (October 2019): 1–17. http://dx.doi.org/10.4018/ijec.2019100101.

Abstract:
Construction collaboration extranets (CCEs) provide various functionalities depending on the vendors' origins, history, experience, and financial status. Previous research has listed and described the functionalities that extranet systems are capable of providing. However, no publication so far has systematically analyzed users' perspectives on these functionalities. This article bridges that gap through a questionnaire survey of users, aiming to examine users' attitudes toward the functionalities of CCEs. The results may be useful to information system vendors, end users, and researchers involved in CCE development and implementation.
3

Phaltankar, Kaustubh. "Practical Guide for Implementing Secure Intranets and Extranets". EDPACS 27, no. 12 (June 2000): 1–2. http://dx.doi.org/10.1201/1079/43258.27.12.20000601/30345.5.

4

Guo, Jie, Xihao Fu, Liqiang Lin, Hengjun Ma, Yanwen Guo, Shiqiu Liu, and Ling-Qi Yan. "ExtraNet". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–16. http://dx.doi.org/10.1145/3478513.3480531.

Abstract:
Both the frame rate and the latency are crucial to the performance of real-time rendering applications such as video games. Spatial supersampling methods, such as Deep Learning Super Sampling (DLSS), have proven successful at decreasing the rendering time of each frame by rendering at a lower resolution. But temporal supersampling methods that directly aim at producing more frames on the fly are still not practically available. This is mainly due to both their own computational cost and the latency introduced by interpolating frames from the future. In this paper, we present ExtraNet, an efficient neural network that predicts accurate shading results on an extrapolated frame, to minimize both the performance overhead and the latency. With the help of the rendered auxiliary geometry buffers of the extrapolated frame, and the temporally reliable motion vectors, we train our ExtraNet to perform two tasks simultaneously: irradiance in-painting for regions that cannot find historical correspondences, and accurate ghosting-free shading prediction for regions where temporal information is available. We present a robust hole-marking strategy to automate the classification of these tasks, as well as the data generation from a series of high-quality production-ready scenes. Finally, we use lightweight gated convolutions to enable fast inference. As a result, our ExtraNet is able to produce plausibly extrapolated frames without easily noticeable artifacts, delivering a 1.5× to near 2× increase in frame rates with minimized latency in practice.
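The "lightweight gated convolutions" mentioned above are a standard building block rather than something specific to ExtraNet. The following PyTorch sketch (illustrative only, not the authors' code; layer sizes are arbitrary) shows the usual formulation, where a sigmoid-activated gate modulates the feature branch element-wise:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Generic gated convolution: features are modulated by a learned sigmoid gate."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.gate = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, x):
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

# Example: a batch of 4 three-channel feature maps of size 64x64.
x = torch.randn(4, 3, 64, 64)
layer = GatedConv2d(3, 16)
print(layer(x).shape)  # torch.Size([4, 16, 64, 64])
```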
5

Ahmed Ali, Adel, and Ahmed M. Al-Naamany. "Converged Networking: A Review of Concepts and Technologies". Sultan Qaboos University Journal for Science [SQUJS] 5 (1.12.2000): 209. http://dx.doi.org/10.24200/squjs.vol5iss0pp209-225.

Abstract:
Converged networking is an emerging technology thrust that integrates voice, video, and data traffic on a single network. Converged networking encompasses several aspects, all of which are related to the aggregation of networking activity. Such aspects include payload convergence, protocol convergence, physical convergence, device convergence, application convergence, technology convergence, etc. In recent years the Internet has developed into a global data network that is widely accepted as a multimedia information platform and has the potential to develop into an alternative carrier network in the future. Several convergence scenarios have recently been proposed, ranging from integrating communication services and computer applications into two separate networks, to building a seamless multimedia network which converges the central-office-based network and the Internet into a single network, thereby enabling the tremendous investment of telecommunications operators and service providers in existing network infrastructure to be fully utilized. This paper offers an introduction to and review of networking technologies. The paper organizes the existing multiple networks into two infrastructures: an ATM/Frame Relay (Ethernet)-based corporate network with integrated voice, video, and data traffic, and an Internet-based network for secure intranet, extranet and remote access. This work aims at summarizing the internetworking basics and technologies that are essential for emerging converged networking systems. The specific areas addressed here are networking basics, networking technologies, types of traffic, and the convergence of computer and communication networks.
6

Hu, Xiujian, Guanglei Sheng, Piao Shi, and Yuanyuan Ding. "TbsNet: the importance of thin-branch structures in CNNs". PeerJ Computer Science 9 (16.06.2023): e1429. http://dx.doi.org/10.7717/peerj-cs.1429.

Abstract:
The performance of a convolutional neural network (CNN) model is influenced by several factors, such as depth, width, network structure, size of the receptive field, and feature map scaling. Finding the best combination of these factors is the main difficulty in designing a viable architecture. This article presents an analysis of key factors influencing network performance, offers several strategies for constructing an efficient convolutional network, and introduces a novel architecture named TbsNet (thin-branch structure network). In order to minimize computation costs and feature redundancy, lightweight operators such as asymmetric convolution, pointwise convolution, depthwise convolution, and group convolution are implemented to further reduce the network's weight. Unlike previous studies, the TbsNet architecture design rejects the reparameterization method and adopts a plain, simplified structure which eliminates extraneous branches. We conduct extensive experiments covering, among other factors, network depth and width. TbsNet performs well on benchmark platforms: Top-1 accuracy is 97.02% on CIFAR-10, 83.56% on CIFAR-100, and 86.17% on ImageNet-1K. Tbs-UNet's DSC on the Synapse dataset is 78.39%, 0.91% higher than TransUNet's. TbsNet can be competent for downstream tasks in computer vision, such as medical image segmentation, and thus is competitive with prior state-of-the-art deep networks such as ResNet, ResNeXt, RepVgg, ParNet, ConvNeXt, and MobileNet.
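As a point of reference for the lightweight operators listed in this abstract, the PyTorch sketch below shows a generic depthwise-separable block (a depthwise convolution followed by a pointwise convolution). It is a textbook construction, not TbsNet's actual code, and the channel counts are arbitrary:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (groups == in_channels) followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(2, 32, 56, 56)
block = DepthwiseSeparableConv(32, 64)
print(block(x).shape)  # torch.Size([2, 64, 56, 56])
```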
7

Ababneh, Nedal. "Performance Evaluation of a Topology Control Algorithm for Wireless Sensor Networks". International Journal of Distributed Sensor Networks 6, no. 1 (1.01.2010): 671385. http://dx.doi.org/10.1155/2010/671385.

Abstract:
A main design challenge in the area of sensor networks is energy efficiency to prolong the network's operable lifetime. Since most of the energy is spent on radio communication, an effective approach to energy conservation is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. Assuming that node position information is unavailable, we present a topology control algorithm, termed OTC, for sensor networks. It uses two-hop neighborhood information to select a subset of nodes to be active among all nodes in the neighborhood. Each node in the network selects its own set of active neighbors from among its one-hop neighbors. This set is determined such that it covers all two-hop neighbors. OTC does not assume the network graph to be a unit disk graph; it also works well on general weighted network graphs. OTC is evaluated against two well-known algorithms from the literature, namely Span and GAF, through realistic simulations using TOSSIM. In terms of operational lifetime, load balancing, and the spanner property, OTC shows promising results. Apart from being symmetric and connected, the resulting graph when employing OTC shows good spanner properties.
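The neighbor-selection rule described above (pick one-hop neighbors until every two-hop neighbor is covered) can be approximated with a simple greedy set cover. The Python sketch below only illustrates that idea under the assumption of an undirected adjacency list; it is not the OTC algorithm as published:

```python
def select_active_neighbors(node, adj):
    """Greedily pick one-hop neighbors of `node` until all two-hop neighbors are covered.

    adj: dict mapping each node to the set of its one-hop neighbors (undirected graph).
    """
    one_hop = adj[node]
    # Two-hop neighbors: reachable via a one-hop neighbor, excluding node and its one-hop set.
    two_hop = set().union(*(adj[n] for n in one_hop)) - one_hop - {node}
    active, uncovered = set(), set(two_hop)
    while uncovered:
        # Pick the one-hop neighbor covering the most still-uncovered two-hop neighbors.
        best = max(one_hop - active, key=lambda n: len(adj[n] & uncovered))
        active.add(best)
        uncovered -= adj[best]
    return active

adj = {
    "a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d", "e"},
    "d": {"b", "c"}, "e": {"c"},
}
print(select_active_neighbors("a", adj))  # {'c'}: node c alone covers both two-hop neighbors d and e
```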
8

Karamysheva, N. S., D. S. Svishchev, K. V. Popov, and S. A. Zinkin. "Implementation of Agent-Based Metacomputersystems and Applications". Proceedings of the Southwest State University 26, no. 1 (28.06.2022): 148–71. http://dx.doi.org/10.21869/2223-1560-2022-26-1-148-171.

Abstract:
Purpose of research. Creation of a methodology for designing a prototype of a metacomputer distributed computing system, taking into account the current stage of the evolution of hardware and cloud-network software to provide users with the means to create applications with inter-program parallelism and the ability of components to work together. Methods. Logical models of artificial intelligence, semantic networks and conceptual graphs, agent-based technology, virtualization of network resources. The method of conducting a full-scale experiment was that when the application was launched for execution in a virtual agent-based metacomputer, a network infrastructure was used with remote access to the Fast Ethernet laboratory network via the Internet, and then time characteristics were measured. Results. A technique for designing cloud-network metacomputer systems and applications is proposed, and prototype middleware software based on multi-agent technology is created. The goal of the study has been achieved, since the developed agent-based environment allows the implementation of universal programming control structures - transition by one or more conditions, cycle, sequence, parallelization, for which executable conceptual specifications have been introduced. Conclusion. An approach to the implementation of a distributed metacomputer application in a computer network environment based on conceptual graphs describing the exchange of messages and data processing by software agents is proposed. The performance of the application under conditions of extraneous load on the network was demonstrated.
9

Malik, Najeeb ur Rehman, Usman Ullah Sheikh, Syed Abdul Rahman Abu-Bakar, and Asma Channa. "Multi-View Human Action Recognition Using Skeleton Based-FineKNN with Extraneous Frame Scrapping Technique". Sensors 23, no. 5 (2.03.2023): 2745. http://dx.doi.org/10.3390/s23052745.

Abstract:
Human action recognition (HAR) is one of the most active research topics in the field of computer vision. Even though this area is well researched, HAR algorithms such as 3D Convolutional Neural Networks (CNNs), two-stream networks, and CNN-LSTM (Long Short-Term Memory) suffer from highly complex models. These algorithms involve a huge number of weight adjustments during the training phase and, as a consequence, require high-end machines for real-time HAR applications. Therefore, this paper presents an extraneous frame scrapping technique that employs 2D skeleton features with a Fine-KNN classifier-based HAR system to overcome the dimensionality problem. To illustrate the efficacy of our proposed method, two contemporary datasets, i.e., the Multi-Camera Action Dataset (MCAD) and the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset, were used in the experiments. We used the OpenPose technique to extract the 2D information. The proposed method was compared with CNN-LSTM and other state-of-the-art methods. The results obtained confirm the potential of our technique: the proposed OpenPose-FineKNN with extraneous frame scrapping achieved an accuracy of 89.75% on the MCAD dataset and 90.97% on the IXMAS dataset, better than existing techniques.
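A "fine" KNN classifier usually just means a nearest-neighbor classifier with a very small k (often k = 1). As a rough, self-contained illustration of that classification stage only, with random vectors standing in for OpenPose skeleton features and nothing taken from the cited paper, one could write:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: 300 samples of 50-dimensional "skeleton feature" vectors, 6 action classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 6, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "Fine" KNN: a single nearest neighbor decides the predicted action class.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)
print("accuracy on held-out split:", clf.score(X_test, y_test))
```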
10

Molla, D., R. Schwitter, F. Rinaldi, J. Dowdall, and M. Hess. "ExtrAns: Extracting answers from technical texts". IEEE Intelligent Systems 18, no. 4 (July 2003): 12–17. http://dx.doi.org/10.1109/mis.2003.1217623.

11

Hameed, Marwah. "Modified Fuzzy Neural Network Approach for Academic Performance Prediction of Students in Early Childhood Education". Bonfring International Journal of Networking Technologies and Applications 11, no. 1 (13.02.2024): 17–20. http://dx.doi.org/10.9756/bijnta/v11i1/bij24007.

Abstract:
Modern education relies heavily on educational technology, which provides students with unique learning opportunities and enhances their ability to learn. For many years now, computers and other technological tools have been an integral part of education. However, compared to other educational levels, the incorporation of educational technology in early childhood education is a more recent trend. It is because of this that materials and procedures tailored to young children must be created, implemented, and studied. The use of artificial intelligence techniques in educational technology resources has resulted in better engagement for students. Early childhood special education students' academic achievement is predicted using a Modified Fuzzy Neural Network (MFNN). Before constructing the classifier, the dataset had to be preprocessed to remove any extraneous information. As a follow-up, this study will put to the test an organized approach to the implementation of customized fuzzy neural networks for the prediction of academic achievement in early childhood settings. Considerations for the analysis of academic achievement in early childhood education are discussed in this article, including recommendations for the implementation of proposed modified fuzzy neural networks. In terms of evaluation metrics such as Precision, recall, accuracy, and the F1 coefficient, the proposed model outperforms conventional machine-learning (ML) techniques.
12

Volodin, Ilya V., Michael M. Putyato, Alexander S. Makaryan, and Vyacheslav Yu. Evglevsky. "Classification of attack mechanisms and research of protection methods for systems using machine learning and artificial intelligence algorithms". CASPIAN JOURNAL: Control and High Technologies 54, no. 2 (2021): 90–98. http://dx.doi.org/10.21672/2074-1707.2021.53.1.090-098.

Abstract:
This article provides a complete classification of attacks involving artificial intelligence. Three main sections were considered: attacks on information systems and computer networks, attacks on artificial intelligence models (poisoning attacks, evasion attacks, extraction attacks, privacy attacks), and attacks on human consciousness and opinion (all types of deepfake). In each of these sections, the attack mechanisms were identified and studied, and corresponding protection methods were established. In conclusion, a specific example of an attack using a pretrained model was analyzed and defended against by modifying the input data, namely applying image compression in order to remove extraneous noise.
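Input compression as a defense, as mentioned at the end of this abstract, is commonly implemented by simply re-encoding the image at a lower JPEG quality before it is fed to the model. A minimal Pillow-based sketch of that preprocessing step (illustrative only; the quality setting and file names are arbitrary) might look like this:

```python
import io
from PIL import Image

def compress_input(image_path, quality=30):
    """Re-encode an image as lossy JPEG to smooth out small adversarial perturbations."""
    img = Image.open(image_path).convert("RGB")
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=quality)  # lossy step discards high-frequency noise
    buffer.seek(0)
    return Image.open(buffer)

# Usage (hypothetical file name): cleaned = compress_input("suspicious_input.png")
# The returned PIL image would then be resized/normalized and passed to the classifier.
```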
13

Styles, K., and M. Goldsworthy. "GI extranet: a case study in applying technology for competitive advantage". International Journal of Networking and Virtual Organisations 1, no. 1 (2002): 82. http://dx.doi.org/10.1504/ijnvo.2002.001464.

14

Liu, Zhen, Yi-Liang Han, and Xiao-Yuan Yang. "A compressive sensing–based adaptable secure data collection scheme for distributed wireless sensor networks". International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985651. http://dx.doi.org/10.1177/1550147719856516.

Abstract:
Toward the goal of high security and efficiency for data collection in wireless sensor networks, this article proposes an adaptable secure compressive sensing–based data collection scheme for distributed wireless sensor networks. It adopts public-key cryptography to solve the key distribution problem, and compressive sensing over finite fields to reduce the communication cost of data collection. Under the hardness of the decisional learning-with-errors problem on lattices, it ensures indistinguishability against chosen-ciphertext attack (IND-CCA1) security for collected data on the extranet and indistinguishability against chosen-plaintext attack security for data during the process of distributed collection on the intranet. Owing to the similar linear structure of lattices and compressive sensing, data encryption and collection can be carried out entirely with efficient linear operations, and inter-node data aggregation can take the form of an addition operation.
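For readers unfamiliar with compressive sensing, the core idea is that a sparse signal x can be recovered from far fewer linear measurements y = Φx than its length. The sketch below illustrates only that generic measurement-and-recovery step over the reals, using scikit-learn's orthogonal matching pursuit; the cited scheme additionally works over finite fields and wraps the measurements in lattice-based encryption, none of which is shown:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(42)
n, m, k = 256, 64, 5          # signal length, number of measurements, sparsity

# Sparse "sensor reading" vector: only k nonzero entries.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Random Gaussian measurement matrix and compressed measurements y = Phi @ x.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Recover the sparse vector from the short measurement vector.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(Phi, y)
x_hat = omp.coef_
print("max reconstruction error (should be near zero):", np.max(np.abs(x - x_hat)))
```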
15

Yong, Chern Han, and Limsoon Wong. "From the static interactome to dynamic protein complexes: Three challenges". Journal of Bioinformatics and Computational Biology 13, no. 02 (April 2015): 1571001. http://dx.doi.org/10.1142/s0219720015710018.

Abstract:
Protein interactions and complexes behave in a dynamic fashion, but this dynamism is not captured by interaction screening technologies, and not preserved in protein–protein interaction (PPI) networks. The analysis of static interaction data to derive dynamic protein complexes leads to several challenges, of which we identify three. First, many proteins participate in multiple complexes, leading to overlapping complexes embedded within highly-connected regions of the PPI network. This makes it difficult to accurately delimit the boundaries of such complexes. Second, many condition- and location-specific PPIs are not detected, leading to sparsely-connected complexes that cannot be picked out by clustering algorithms. Third, the majority of complexes are small complexes (made up of two or three proteins), which are extra sensitive to the effects of extraneous edges and missing co-complex edges. We show that many existing complex-discovery algorithms have trouble predicting such complexes, and show that our insight into the disparity between the static interactome and dynamic protein complexes can be used to improve the performance of complex discovery.
16

Nuli Namassivaya, Sunkari Nithigna, Sindhu Kovilala, and MD Sibli Hussain. "Encrypted chat application using RSA Algorithm". International Journal of Engineering Technology and Management Sciences 7, no. 2 (2023): 854–59. http://dx.doi.org/10.46647/ijetms.2023.v07i02.095.

Abstract:
The efficiency and effectiveness of information systems depend in part on their design and on the way information is transmitted among different parties. Likewise, a crucial aspect of software development is the security of the information that flows through open communication channels. One of the most widespread designs is the client/server architecture, which centralizes data storage and processing and provides flexibility for applying authentication methods and encryption algorithms within information systems. As the number of users increases, authentication and encryption levels need to be raised as high as possible. Client/server is a technology that allows an interactive session to be opened between the user's browser and the server. In this study, we use a client/server design to accomplish secure messaging/chat between users without the server being able to decrypt the messages. In this manner, a server-cryptography-based secure messaging system is developed as a Java web application using RSA (Rivest-Shamir-Adleman), a widely used public-key cryptography and authentication system for encrypting digital messaging transactions such as email over the intranet, extranet, and Internet, to encrypt and decrypt messages.
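As background for the RSA scheme the paper relies on, the following self-contained Python toy (tiny primes and no padding, so not secure, and not the authors' Java implementation) shows the key-generation, encryption, and decryption arithmetic:

```python
from math import gcd

def make_keys(p, q, e=65537):
    """Toy RSA key generation from two primes (real keys use ~2048-bit primes and padding)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)          # modular inverse of e modulo phi(n), Python 3.8+
    return (n, e), (n, d)        # public key, private key

def encrypt(message: int, public_key):
    n, e = public_key
    return pow(message, e, n)    # c = m^e mod n

def decrypt(ciphertext: int, private_key):
    n, d = private_key
    return pow(ciphertext, d, n) # m = c^d mod n

# Demonstration with small primes; a chat message would first be encoded as integers.
public, private = make_keys(p=1009, q=1013)
cipher = encrypt(42, public)
print(cipher, decrypt(cipher, private))  # prints the ciphertext and then 42
```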
17

Sychugov, Anton, Vadim Miheychikov, and Maksim Chernyshov. "Application of Neural Networks for Object Recognition in Railway Transportation". Proceedings of Petersburg Transport University 20, no. 2 (20.06.2023): 478–91. http://dx.doi.org/10.20295/1815-588x-2023-2-478-491.

Abstract:
Purpose: With the help of vision systems and neural networks, such as YOLOv8 and MASK R-CNN, it is possible to quickly and accurately detect objects that can lead to an accident or delay trains. YOLOv8 is one of the most popular real-time object detection algorithms that uses deep neural networks to classify and localize objects. YOLOv8 can detect objects in images and videos with high speed and accuracy. This model can work on various hardware platforms, including mobile devices and computers. MASK R-CNN is an even more advanced object detection algorithm that has the ability to highlight objects and their contours with high accuracy. MASK R-CNN uses convolutional neural networks and mask segmentation techniques to detect objects. It can work both in real time and on static images. When vision systems are equipped with YOLOv8 and MASK R-CNN neural networks, they can quickly respond to extraneous objects that appear on the rails. The purpose of the article is to develop algorithms for detecting railway transport objects and obstacles using technical vision and neural networks, and to evaluate the effectiveness of algorithms. Methods: The YOLOv8 algorithm is based on the architecture of convolutional neural networks and uses supervised learning methods. This model takes an image as input and provides estimates of the probability that a certain object is present in the image in real time. To achieve this, YOLOv8 employs region of interest (ROI) detection methods, allowing to determine the areas of the image on which objects may be located. The MASK R-CNN algorithm uses more sophisticated methods, such as mask segmentation methods and proportional resizing of the area of interest (RoIAlign) to achieve more accurate results of object detecting in images and videos. It is also based on convolutional neural networks and uses supervised learning methods. MASK R-CNN uses mask segmentation methods to determine the contour of an object in an image, as well as the RoIAlign method, which allows for superior quality when processing various image sizes. Common mathematical methods that are used in YOLOv8 and MASK R-CNN are methods of convolutional neural network, supervised learning and optimization of the loss function. They are based on deep learning algorithms such as stochastic gradient descent and backward propagation of errors. Results: An algorithm for detecting foreign objects on the route of rolling stock using a technical vision system, calculation of the evaluation of the quality of neural networks performance, error matrices have been formed, the results of neural network processing have been obtained. Practical significance: An algorithm for detecting foreign objects on the route of the moving rolling stock using a technical vision system has been developed, two neural networks have been trained to detect railway transport objects and obstacles on the way.
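Running a pretrained YOLOv8 detector of the kind discussed in this abstract takes only a few lines with the ultralytics package. The sketch below is a generic usage example; the weights file, image path, and confidence threshold are placeholders, and this is not the railway-specific model trained by the authors:

```python
from ultralytics import YOLO  # pip install ultralytics

# Load pretrained YOLOv8 weights (a railway-specific model would be trained on custom data).
model = YOLO("yolov8n.pt")

# Run detection on a single frame; conf filters out low-confidence boxes.
results = model.predict("track_frame.jpg", conf=0.5)

for box in results[0].boxes:
    cls_id = int(box.cls[0])               # predicted class index
    score = float(box.conf[0])             # confidence score
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners in pixels
    print(model.names[cls_id], round(score, 2), (x1, y1, x2, y2))
```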
18

Laksana, Eugene, Melissa Aczon, Long Ho, Cameron Carlin, David Ledbetter, and Randall Wetzel. "The impact of extraneous features on the performance of recurrent neural network models in clinical tasks". Journal of Biomedical Informatics 102 (February 2020): 103351. http://dx.doi.org/10.1016/j.jbi.2019.103351.

19

Gall, Harald C., René R. Klösch, and Roland T. Mittermeir. "Using Domain Knowledge to Improve Reverse Engineering". International Journal of Software Engineering and Knowledge Engineering 06, no. 03 (September 1996): 477–505. http://dx.doi.org/10.1142/s021819409600020x.

Abstract:
Integrating application domain knowledge into reverse engineering is an important step to overcome the shortcomings of conventional reverse engineering approaches that are based exclusively on information derivable from source code. In this paper, we show the basic concepts of a program transformation process from a conventional to an object-oriented architecture which incorporates extraneous higher-level knowledge in its process. To which degree this knowledge might stem from some general domain knowledge, and to which extent it needs to be introduced as application dependent knowledge by a human expert is discussed. The paper discusses these issues in the context of the architectural transformation of legacy systems to an object-oriented architecture.
20

M. Sujitha, N. Leela, and B. Kanimozhi. "Computer Aided Diagnosis (CAD) and Classification of Microbial and other Wide Range of Dermal Diseases using AI and Medical Image Processing". Journal of Innovative Image Processing 5, no. 3 (September 2023): 307–22. http://dx.doi.org/10.36548/jiip.2023.3.006.

Abstract:
The skin is the human body's largest and most protective organ. It covers the internal organs and shields the body from extraneous substances outside it. Numerous illnesses caused by microbes such as bacteria, fungi, and viruses can harm a person's skin; examples include MRSA (methicillin-resistant Staphylococcus aureus) infection, herpes zoster, acne vulgaris, warts, eczema, psoriasis, and fifth disease. The skin can also be damaged by carcinogenic and tumor-inducing agents, leading to skin cancers such as melanoma, which are more fatal and life-threatening. Skin diseases can be diagnosed by blood tests, tissue sample collection (biopsy), and skin examination by dermatologists and experts. If non-expert doctors or laboratory technicians examine the skin, it can lead to medical errors and misdiagnosis. A proper and precise diagnosis is required to treat the specific disease. This research aims to detect dermal diseases from sample images and to classify and identify the cause of disease with greater accuracy and precision in a time- and cost-efficient way. It uses medical image processing algorithms, such as pre-processing and segmentation of the diseased image, and image classification algorithms based on deep learning, a branch of neural networks, to classify the diseased medical images.
21

Пальчикова, И. Г., И. А. Будаева, and Е. С. Смирнов. "Generation of a digital passport for gunshot residues using the computer vision techniques". Вычислительные технологии 29, no. 1 (24.02.2024): 93–106. http://dx.doi.org/10.25743/ict.2024.29.1.009.

Abstract:
The ImgOpinion software, capable of performing optical and structural analysis of a digital image of an object by computer vision methods, was developed. It can be employed in expert laboratories as part of the MS-Unit computer vision hardware-software complex, together with the specialized illuminator "Photobox 3138" and a digital recording camera. The output data is constructed in the form of a digital passport of the gunshot residue, which contains forensically significant characteristics of the object under investigation, as well as of the gunshot residue detected on it. Purpose. The research solves the problem of creating a specialised hardware and software complex for forensic examination, the use of which partially automates the processes of identification and quantitative characterisation of gunshot traces and products. Methodology. Non-destructive research methods are implemented in computer vision systems, which include two main stages: digital image acquisition and its mathematical processing. The hardware and software complex of a computer vision system usually consists of an illuminator, a digital recording video camera, and a specialised program for processing the raw data. Findings. The ImgOpinion software allows performing optical-structural analyses of digital images using computer vision methods and is intended for use in expert laboratories as part of the MS-Unit computer vision hardware and software complex, which includes the specialized illuminator "Photobox 3138" with a digital recording camera. The autonomous and portable device has a mobile design and provides white light with a colour temperature of 5000 K (CIE D50) and a colour rendering index CRI = 97+. Six independently switched monochrome LED illuminators with narrow spectral bands from 365 to 880 nm (UV to IR) provide uniform illumination without extraneous stray light on the working field (light intensity drop below 2% at the edges of the working field). Value. The ImgOpinion output is constructed in the form of a digital passport of a gunshot injury, which contains forensically relevant characteristics of the object of investigation. The digital passport adapts the forensically significant information on firearm traces for integration into specialized databases, supporting further automation of expert evaluation processes, automated trace comparison, and the determination of incident circumstances.
22

Sun, Jiuyun, Huanhe Dong, Ya Gao, Yong Fang, and Yuan Kong. "The Short-Term Load Forecasting Using an Artificial Neural Network Approach with Periodic and Nonperiodic Factors: A Case Study of Tai'an, Shandong Province, China". Computational Intelligence and Neuroscience 2021 (26.10.2021): 1–8. http://dx.doi.org/10.1155/2021/1502932.

Abstract:
Accurate electricity load forecasting is an important prerequisite for stable electricity system operation. In this paper, power spectrum analysis of historical loads collected hourly in Tai'an, Shandong Province, China, shows that daily and weekly variations are prominent. In addition, the influence of extraneous variables is also very obvious; for example, the load dropped significantly for a long period of time during the Chinese Lunar Spring Festival. Therefore, an artificial neural network model is constructed with six periodic and three nonperiodic factors. The load from January 2016 to August 2018 was divided into a training set and a test set in the ratio of 9:1. The experimental results indicate that the daily prediction model with the selected factors can achieve higher forecasting accuracy.
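A model of the kind described here (a feed-forward network fed with periodic calendar features plus a nonperiodic flag) can be prototyped in a few lines of scikit-learn. The sketch below uses synthetic data and arbitrary feature choices purely to illustrate the setup; it does not reproduce the authors' six-plus-three factor model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
hours = np.arange(24 * 365)                      # one synthetic year of hourly data

# Periodic features: hour-of-day and day-of-week encoded as sine/cosine pairs.
hod, dow = hours % 24, (hours // 24) % 7
X = np.column_stack([
    np.sin(2 * np.pi * hod / 24), np.cos(2 * np.pi * hod / 24),
    np.sin(2 * np.pi * dow / 7),  np.cos(2 * np.pi * dow / 7),
    (dow >= 5).astype(float),                    # nonperiodic flag, e.g. weekend/holiday
])

# Synthetic "load" with daily and weekly cycles plus noise.
y = 100 + 20 * np.sin(2 * np.pi * hod / 24) + 10 * (dow < 5) + rng.normal(0, 2, hours.size)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on the held-out 10% (illustrative):", round(model.score(X_test, y_test), 3))
```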
23

Peeters, Hendrik, Sebastian Habig, and Sabine Fechner. "Does Augmented Reality Help to Understand Chemical Phenomena during Hands-On Experiments?–Implications for Cognitive Load and Learning". Multimodal Technologies and Interaction 7, no. 2 (19.01.2023): 9. http://dx.doi.org/10.3390/mti7020009.

Abstract:
Chemical phenomena are only observable on a macroscopic level, whereas they are explained by entities on a non-visible level. Students often demonstrate limited ability to link these different levels. Augmented reality (AR) offers the possibility to increase contiguity by embedding virtual models into hands-on experiments. Therefore, this paper presents a pre- and post-test study investigating how learning and cognitive load are influenced by AR during hands-on experiments. Three comparison groups (AR, animation and filmstrip), with a total of N = 104 German secondary school students, conducted and explained two hands-on experiments. Whereas the AR group was allowed to use an AR app showing virtual models of the processes on the submicroscopic level during the experiments, the two other groups were provided with the same dynamic or static models after experimenting. Results indicate no significant learning gain for the AR group in contrast to the two other groups. The perceived intrinsic cognitive load was higher for the AR group in both experiments as well as the extraneous load in the second experiment. It can be concluded that AR could not unleash its theoretically derived potential in the present study.
24

Jones, R. Kenny, Paul Guerrero, Niloy J. Mitra, and Daniel Ritchie. "ShapeCoder: Discovering Abstractions for Visual Programs from Unstructured Primitives". ACM Transactions on Graphics 42, no. 4 (26.07.2023): 1–17. http://dx.doi.org/10.1145/3592416.

Abstract:
We introduce ShapeCoder, the first system capable of taking a dataset of shapes, represented with unstructured primitives, and jointly discovering (i) useful abstraction functions and (ii) programs that use these abstractions to explain the input shapes. The discovered abstractions capture common patterns (both structural and parametric) across a dataset, so that programs rewritten with these abstractions are more compact, and suppress spurious degrees of freedom. ShapeCoder improves upon previous abstraction discovery methods, finding better abstractions, for more complex inputs, under less stringent input assumptions. This is principally made possible by two methodological advancements: (a) a shape-to-program recognition network that learns to solve sub-problems and (b) the use of e-graphs, augmented with a conditional rewrite scheme, to determine when abstractions with complex parametric expressions can be applied, in a tractable manner. We evaluate ShapeCoder on multiple datasets of 3D shapes, where primitive decompositions are either parsed from manual annotations or produced by an unsupervised cuboid abstraction method. In all domains, ShapeCoder discovers a library of abstractions that captures high-level relationships, removes extraneous degrees of freedom, and achieves better dataset compression compared with alternative approaches. Finally, we investigate how programs rewritten to use discovered abstractions prove useful for downstream tasks.
25

Chen, Xinyue, Shuo Li, Shipeng Liu, Robin Fowler, and Xu Wang. "MeetScript: Designing Transcript-based Interactions to Support Active Participation in Group Video Meetings". Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (28.09.2023): 1–32. http://dx.doi.org/10.1145/3610196.

Abstract:
While videoconferencing is prevalent, concurrent participation channels are limited. People experience challenges keeping up with the discussion, and misunderstandings frequently occur. Through a formative study, we probed the design space of providing real-time transcripts as an extra communication space for video meeting attendees. We then present MeetScript, a system that provides parallel participation channels through real-time interactive transcripts. MeetScript visualizes the discussion through a chat-like interface and allows meeting attendees to make real-time collaborative annotations. Over time, MeetScript gradually hides extraneous content to retain the most essential information on the transcript, with the goal of reducing the cognitive load required of users to process the information in real time. In an experiment with 80 users in 22 teams, we compared MeetScript with two baseline conditions where participants used Zoom alone (business-as-usual), or Zoom with an add-on transcription service (Otter.ai). We found that MeetScript significantly enhanced people's non-verbal participation and recollection of their teams' decision-making processes compared to the baselines. Users liked that MeetScript allowed them to easily navigate the transcript and contextualize feedback and new ideas with existing ones.
26

Wang, Lei, and Wenqi He. "Analysis of Community Outdoor Public Spaces Based on Computer Vision Behavior Detection Algorithm". Applied Sciences 13, no. 19 (2.10.2023): 10922. http://dx.doi.org/10.3390/app131910922.

Abstract:
Community outdoor public spaces are indispensable to urban residents’ daily lives. Analyzing community outdoor public spaces from a behavioral perspective is crucial and an effective way to support human-centered development in urban areas. Traditional behavioral analysis often relies on manually collected behavioral data, which is time-consuming, labor-intensive, and lacks data breadth. With the use of sensors, the breadth of behavioral data has greatly increased, but its accuracy is still insufficient, especially in the fine-grained differentiation of populations and behaviors. Computer vision is more efficient in distinguishing populations and recognizing behaviors. However, most existing computer vision applications face some challenges. For example, behavior recognition is limited to pedestrian trajectory recognition, and there are few that recognize the diverse behaviors of crowds. In view of these gaps, this paper proposes a more efficient approach that employs computer vision tools to examine different populations and different behaviors, obtain important statistical measures of spatial behavior, taking the Bajiao Cultural Square in Beijing as a test bed. This population and behavior recognition model presents several improvement strategies: Firstly, by leveraging an attention mechanism, which emulates the human selective cognitive mechanism, it is capable of accentuating pertinent information while disregarding extraneous data, and the ResNet backbone network can be refined by integrating channel attention. This enables the amplification of critical feature channels or the suppression of irrelevant feature channels, thereby enhancing the efficacy of population and behavior recognition. Secondly, it uses public datasets and self-made data to construct the dataset required by this model to improve the robustness of the detection model in specific scenarios. This model can distinguish five types of people and six kinds of behaviors, with an identification accuracy of 83%, achieving fine-grained behavior detection for different populations. To a certain extent, it solves the problem that traditional data face of large-scale behavioral data being difficult to refine. The population and behavior recognition model was adapted and applied in conjunction with spatial typology analysis, and we can conclude that different crowds have different behavioral preferences. There is inconsistency in the use of space by different crowds, there is inconsistency between behavioral and spatial function, and behavior is concentrated over time. This provides more comprehensive and reliable decision support for fine-grained planning and design.
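Channel attention of the kind integrated into the ResNet backbone above is most often the squeeze-and-excitation pattern: globally pool each channel, pass the result through a small bottleneck MLP, and rescale the channels with the resulting sigmoid weights. The PyTorch sketch below shows that generic block only; it is not the authors' model, and the reduction ratio is arbitrary:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention block."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # "squeeze": global average per channel
        self.fc = nn.Sequential(                     # "excitation": bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                           # amplify useful channels, suppress others

x = torch.randn(8, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([8, 64, 32, 32])
```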
27

Lin, Hsien-I., and C. S. George Lee. "Neuro-fuzzy-based skill learning for robots". Robotica 30, no. 6 (8.12.2011): 1013–27. http://dx.doi.org/10.1017/s026357471100124x.

Abstract:
Endowing robots with the ability of skill learning enables them to be versatile and skillful in performing various tasks. This paper proposes a neuro-fuzzy-based, self-organizing skill-learning framework, which differs from previous work in its capability of decomposing a skill by self-categorizing it into significant stimulus-response units (SRU, a fundamental unit of our skill representation), and self-organizing learned skills into a new skill. The proposed neuro-fuzzy-based, self-organizing skill-learning framework can be realized by skill decomposition and skill synthesis. Skill decomposition aims at representing a skill and acquiring it by SRUs, and is implemented by stages with a five-layer neuro-fuzzy network with supervised learning, resolution control, and reinforcement learning to enable robots to identify a sufficient number of significant SRUs for accomplishing a given task without extraneous actions. Skill synthesis aims at organizing a new skill by sequentially planning learned skills composed of SRUs, and is realized by stages, which establish common SRUs between two similar skills and self-organize a new skill from these common SRUs and additional new SRUs by reinforcement learning. Computer simulations and experiments with a Pioneer 3-DX mobile robot were conducted to validate the self-organizing capability of the proposed skill-learning framework in identifying significant SRUs from task examples and in common SRUs between similar skills and learning new skills from learned skills.
28

Nguyen, Viet Hung, Ngoc Nam Pham, Cong Thang Truong, Duy Tien Bui, Huu Thanh Nguyen, and Thu Huong Truong. "Retina-based quality assessment of tile-coded 360-degree videos". EAI Endorsed Transactions on Industrial Networks and Intelligent Systems 9, no. 32 (21.06.2022): e2. http://dx.doi.org/10.4108/eetinis.v9i32.1058.

Abstract:
Nowadays, omnidirectional content, which delivers 360-degree views of scenes, is a significant aspect of virtual reality systems. While 360 video requires a lot of bandwidth, users only see the visible tiles, so a large amount of bitrate can be saved without affecting the user's experience of the service. This leads current video adaptation solutions to filter out superfluous parts and extraneous bandwidth. To form a good basis for these adaptations, it is necessary to understand human video quality perception. In our research, we contribute an effective omnidirectional video database that can be used to study the effects of the five zones of the human retina. We also design a new video quality assessment method to analyze the impacts of those zones of a 360 video according to the human retina. The proposed scheme is found to outperform 22 current objective quality measures by 11 to 31% in terms of the PCC parameter.
29

S. Saravana Kumar, et al. "Automated Detection of Autism Spectrum Disorder Using Bio-Inspired Swarm Intelligence Based Feature Selection and Classification Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (5.11.2023): 2342–50. http://dx.doi.org/10.17762/ijritcc.v11i9.9242.

Abstract:
Autism spectrum disorders (ASDs) are neurological conditions that affect humans. ASDs typically come with sensory issues such as sensitivity to touch, sound, or odour. Although genetics are the main cause, early discovery and treatment are imperative. In recent years, intelligent diagnosis using machine learning techniques (MLTs) has been developed to support conventional clinical methods in healthcare. Feature selection from healthcare databases is a computationally hard task in which MLTs have again been of great use. Adaptive Grey Wolf Optimizations (AGWOs) were used in this study to determine the most significant features and efficient classification strategies for ASD datasets. Initially, pre-processing based on SMOTE (Synthetic Minority Oversampling Technique) removed extraneous data from the ASD datasets, and subsequently AGWO iterated to find the smallest feature subsets with maximum recall and accuracy. Finally, Kernel Support Vector Machines (KSVMs) classify instances of ASD from the input datasets. The experimental results of the suggested method are evaluated for classifying ASD from dataset instances of toddlers, children, adolescents, and adults in terms of recall, precision, F-measure, and classification error.
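The preprocessing-plus-classification backbone of such a pipeline (SMOTE oversampling followed by an RBF-kernel SVM) can be sketched with imbalanced-learn and scikit-learn as below; the swarm-based AGWO feature selection step is not reproduced here, and the data are synthetic placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# Synthetic, imbalanced stand-in for an ASD screening dataset.
X, y = make_classification(n_samples=600, n_features=20, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training classes with SMOTE before fitting the kernel SVM.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_bal, y_bal)
print("accuracy on the untouched test split:", round(clf.score(X_test, y_test), 3))
```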
30

Mirzazade, Ali, Cosmin Popescu, Thomas Blanksvärd, and Björn Täljsten. "Workflow for Off-Site Bridge Inspection Using Automatic Damage Detection-Case Study of the Pahtajokk Bridge". Remote Sensing 13, no. 14 (7.07.2021): 2665. http://dx.doi.org/10.3390/rs13142665.

Abstract:
For the inspection of structures, particularly bridges, it is becoming common to replace humans with autonomous systems that use unmanned aerial vehicles (UAV). In this paper, a framework for autonomous bridge inspection using a UAV is proposed with a four-step workflow: (a) data acquisition with an efficient UAV flight path, (b) computer vision comprising training, testing and validation of convolutional neural networks (ConvNets), (c) point cloud generation using intelligent hierarchical dense structure from motion (DSfM), and (d) damage quantification. This workflow starts with planning the most efficient flight path that allows for capturing of the minimum number of images required to achieve the maximum accuracy for the desired defect size, then followed by bridge and damage recognition. Three types of autonomous detection are used: masking the background of the images, detecting areas of potential damage, and pixel-wise damage segmentation. Detection of bridge components by masking extraneous parts of the image, such as vegetation, sky, roads or rivers, can improve the 3D reconstruction in the feature detection and matching stages. In addition, detecting damaged areas involves the UAV capturing close-range images of these critical regions, and damage segmentation facilitates damage quantification using 2D images. By application of DSfM, a denser and more accurate point cloud can be generated for these detected areas, and aligned to the overall point cloud to create a digital model of the bridge. Then, this generated point cloud is evaluated in terms of outlier noise, and surface deviation. Finally, damage that has been detected is quantified and verified, based on the point cloud generated using the Terrestrial Laser Scanning (TLS) method. The results indicate this workflow for autonomous bridge inspection has potential.
31

Paudel, Niroj, Bishnu Maya Kharel, Ramesh Prasad Sah, and Rajesh Shrestha. "Comparative Study of E-Learning Frameworks: Recommendation for Nepal". International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (30.08.2017): 273. http://dx.doi.org/10.23956/ijarcsse.v7i8.66.

Abstract:
E-learning is essentially the network-enabled transfer of skills and knowledge: it refers to using electronic applications and processes to learn. E-learning applications and processes include web-based learning, computer-based learning, virtual classrooms, and digital collaboration. Content is delivered via the Internet, intranet/extranet, audio or video tape, satellite TV, and CD-ROM. This study reflects the current scenario of how ICT is being used in the education sector in Nepal and depicts how e-learning frameworks aid different districts in rural areas of Nepal [8]. Since this is fundamental research, it is focused on enhancing the knowledge of the researcher. Through e-learning, better ICT-based education services can be delivered to communities and populations that are still not familiar with ICT education and services. For instance, e-learning would be a high priority for delivering education in rural areas due to their remoteness and the unavailability of new technology and other resources; moreover, it may enable researchers to carry out further detailed study on the subject matter and students to carry out similar tasks related to this study. The overall objective of this study is to explore the practicability of e-learning applications in the context of Nepal, with special reference to the aid provided by such frameworks and to the practical consequences observed after e-learning services were provided in various parts of Nepal, in order to assess the practice of e-learning frameworks. The study also recommends a suitable e-learning framework for Nepal [9].
32

Mohammed Saleh Al Ansari, et al. "Nanoparticle Incorporation to Enhance Titanium Alloy Electric Discharge Machining Capabilities". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (5.11.2023): 3722–30. http://dx.doi.org/10.17762/ijritcc.v11i9.9596.

Abstract:
The primary objective of this study was to examine the effects of the powder-mixed micro electric discharge machining (PMEDM) technique on micromachining applications, specifically when sea water is employed as the dielectric medium. Tiny apertures with a width of 200 µm were drilled in Ti-6Al-4V plates. In the initial round of experimentation, machining performance was evaluated by subjecting sea water to various process variables without the inclusion of any other ingredients. The effects of the input variables, including electrode material, hole voltage, current, pulse-on-time, and duty factor, on the material removal rate (MRR), tool wear rate (TWR), overcut (OC), circularity error (CE), and taper ratio (TR) were analysed by conducting tests in accordance with Taguchi's L18 plan design. The optimal parametric configuration for multi-objective enhancement was determined using a preference-ordering strategy based on similarity to an ideal solution. The study then investigated the effects of foreign materials on the sea-water dielectric micro-EDM process by utilising powders with varying weight concentrations and particle sizes, including non-conductive (Al2O3), semi-conductive (SiC), and conductive (Al) powders, with the variable boundaries maintained at their optimal settings. The results indicate that the choice of tool has a notable influence on micro-EDM performance when sea water is employed as the dielectric without the inclusion of extraneous particles, and that the multi-objective performance of PMEDM is influenced by the conductivity of the added compounds. An 83.18 percent increase in MRR was observed when SiC compounds were added in PMEDM, while TWR dropped by 36.42 percent, OC by 21.48 percent, CE by 45.15 percent, and TR by 22.87 percent.
33

Halteh, Khaled, and Hakem Sharari. "Employing Artificial Neural Networks and Multiple Discriminant Analysis to Evaluate the Impact of the COVID-19 Pandemic on the Financial Status of Jordanian Companies". Interdisciplinary Journal of Information, Knowledge, and Management 18 (2023): 251–67. http://dx.doi.org/10.28945/5112.

Abstract:
Aim/Purpose: This paper aims to empirically quantify the financial distress caused by the COVID-19 pandemic on companies listed on Amman Stock Exchange (ASE). The paper also aims to identify the most important predictors of financial distress pre- and mid-pandemic. Background: The COVID-19 pandemic has had a huge toll, not only on human lives but also on many businesses. This provided the impetus to assess the impact of the pandemic on the financial status of Jordanian companies. Methodology: The initial sample comprised 165 companies, which was cleansed and reduced to 84 companies as per data availability. Financial data pertaining to the 84 companies were collected over a two-year period, 2019 and 2020, to empirically quantify the impact of the pandemic on companies in the dataset. Two approaches were employed. The first approach involved using Multiple Discriminant Analysis (MDA) based on Altman’s (1968) model to obtain the Z-score of each company over the investigation period. The second approach involved developing models using Artificial Neural Networks (ANNs) with 15 standard financial ratios to find out the most important variables in predicting financial distress and create an accurate Financial Distress Prediction (FDP) model. Contribution: This research contributes by providing a better understanding of how financial distress predictors perform during dynamic and risky times. The research confirmed that in spite of the negative impact of COVID-19 on the financial health of companies, the main predictors of financial distress remained relatively steadfast. This indicates that standard financial distress predictors can be regarded as being impervious to extraneous financial and/or health calamities. Findings: Results using MDA indicated that more than 63% of companies in the dataset have a lower Z-score in 2020 when compared to 2019. There was also an 8% increase in distressed companies in 2020, and around 6% of companies came to be no longer healthy. As for the models built using ANNs, results show that the most important variable in predicting financial distress is the Return on Capital. The predictive accuracy for the 2019 and 2020 models measured using the area under the Receiver Operating Characteristic (ROC) graph was 87.5% and 97.6%, respectively. Recommendations for Practitioners: Decision makers and top management are encouraged to focus on the identified highly liquid ratios to make thoughtful decisions and initiate preemptive actions to avoid organizational failure. Recommendation for Researchers: This research can be considered a stepping stone to investigating the impact of COVID-19 on the financial status of companies. Researchers are recommended to replicate the methods used in this research across various business sectors to understand the financial dynamics of companies during uncertain times. Impact on Society: Stakeholders in Jordanian-listed companies should concentrate on the list of most important predictors of financial distress as presented in this study. Future Research: Future research may focus on expanding the scope of this study by including other geographical locations to check for the generalisability of the results. Future research may also include post-COVID-19 data to check for changes in results.
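For reference, Altman's (1968) Z-score used in the MDA step above is a fixed linear combination of five financial ratios. The small Python helper below computes it and applies the conventional distress zones; it shows only the general published formula, not the authors' dataset, discriminant re-estimation, or ANN models, and the input figures are illustrative:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, total_liabilities, sales, total_assets):
    """Altman (1968) Z-score for publicly traded manufacturing firms."""
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    """Conventional interpretation bands for the original Z-score."""
    if z > 2.99:
        return "safe"
    if z >= 1.81:
        return "grey"
    return "distress"

# Illustrative numbers only (millions of any currency).
z = altman_z(working_capital=12, retained_earnings=30, ebit=18,
             market_value_equity=90, total_liabilities=60, sales=150, total_assets=200)
print(round(z, 2), zone(z))
```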
34

Kornilova, Alla A., Sergey N. Gaydamaka, Marina A. Gladchenko, Dmitry Ya Agasarov, Igor V. Kornilov, and Maxim A. Gerasimov. "Thermal Waves and Flow Features of Pulsed Thermally Stimulated Biochemical Reactions in Viral Particles Interaction with Cells". Radioelectronics. Nanosystems. Information Technologies. 14, no. 1 (12.04.2022): 87–96. http://dx.doi.org/10.17725/rensit.2022.14.087.

Abstract:
This review considers the features of excitation, propagation over a long distance, and the action of non-dissipative high-frequency temperature waves in relation to their influence on the efficiency of the system for remote recognition of critical cells by viruses. It is shown that the action of such waves leads to screening of critical cells due to a change in their surface atomic and molecular structure, which leads to a significant change in the dispersion and other electromagnetic characteristics of these cells. This leads to a very strong weakening of the efficiency of the system of remote recognition of such cells by viruses, which corresponds to the effective "passive" self-defense of the body and blocking the activity of viruses. It is also shown that the effect of such temperature waves can be an "active" method of self-defense of the body, which reconfigures the virus recognition system to extraneous (non-critical) cells or other macrocomplexes. In this case, the result of the attack by the virus will be the mutual destruction of the "false target" and the virus due to the natural apoptosis of this non-critical object when the virus penetrates it.
Style APA, Harvard, Vancouver, ISO itp.
35

Shimada, Kunio. "Elucidation of Response and Electrochemical Mechanisms of Bio-Inspired Rubber Sensors with Supercapacitor Paradigm". Electronics 12, nr 10 (19.05.2023): 2304. http://dx.doi.org/10.3390/electronics12102304.

Pełny tekst źródła
Streszczenie:
The electrochemical paradigm of a supercapacitor (SC) is effective for investigating cutting-edge deformable and haptic materials made of magnetic compound fluid (MCF) rubber in order to advance the production of bio-inspired sensors as artificial haptic sensors mimicking human tissues. In the present study, we measure the cyclic voltammetry (CV) profiles and electric properties with electrochemical impedance spectroscopy (EIS) to morphologically evaluate the intrinsic structure of MCF rubber containing fillers and agents. In addition, the electrochemical mechanisms of molecule and particle behavior are theorized using the SC physical framework. The solid-doped fillers in the MCF rubber characterized the behavior of the electrical double-layer capacitor (EDLC). Meanwhile, the liquid agents showed the characteristics of a pseudocapacitor (PC) due to the redox response among the molecules and particles. The potential responses to extraneous stimuli relevant to the EIS properties, categorized as slow adaption (SA), fast adaption (FA), and other type (OT), were also analyzed in terms of the sensory response of the bio-inspired sensor. The categories were based on how the response was induced from the EIS properties. By controlling the EIS properties with different types of doping agents, sensors with various sensory responses become feasible.
Style APA, Harvard, Vancouver, ISO itp.
36

Do, Quang-Huy, Rémi Antony, Bernard Ratier i Johann Bouclé. "Improving Device-to-Device Reproducibility of Light-Emitting Diodes Based on Layered Halide Perovskites". Electronics 13, nr 6 (11.03.2024): 1039. http://dx.doi.org/10.3390/electronics13061039.

Pełny tekst źródła
Streszczenie:
Layered halide perovskites have emerged as promising contenders in solid-state lighting; however, the fabrication of perovskite light-emitting devices in laboratories usually suffers from low device-to-device reproducibility because perovskite crystallization is highly sensitive to ambient conditions. Although device processing inside gloveboxes is primarily used to reduce the influence of oxygen and moisture, several extraneous variables, including thermal fluctuations in the inert atmosphere or contamination from residual solvents, can destabilize the crystallization process and alter the properties of the emissive layers. Here, we examine typical experimental configurations used in research laboratories to deposit layered perovskite films in inert atmospheres and discuss their crucial influences on the formation of polycrystalline thin films. Our results demonstrate that fluctuations in the glovebox properties (concentrations of residual O2 and H2O or solvent traces), even on very short timescales, can negatively impact the consistency of perovskite film formation, while thermal variation plays a relatively minor role in this phenomenon. Furthermore, the careful storage of chemical species inside the workstation is critical for reproducing high-quality perovskite layers. Consequently, when applying our most controlled environment for perovskite deposition, the photoluminescence lifetime of perovskite thin films shows a standard deviation of only 3%, whereas the reference set-up yields a 15% standard deviation. Regarding complete perovskite light-emitting diodes, the uncertainties in statistical luminance and EQE data are significantly reduced from 230% and 140% to 38% and 42%, respectively.
Style APA, Harvard, Vancouver, ISO itp.
37

Michel, Janet, David Evans, Marcel Tanner i Thomas C. Sauter. "Identifying Policy Gaps in a COVID-19 Online Tool Using the Five-Factor Framework". Systems 10, nr 6 (15.12.2022): 257. http://dx.doi.org/10.3390/systems10060257.

Pełny tekst źródła
Streszczenie:
Introduction: Health systems worldwide are facing unprecedented COVID-19-related challenges, ranging from the problems of a novel condition and a shortage of personal protective equipment to frequently changing medical guidelines. Many institutions were forced to innovate, and many hospitals, as well as telehealth providers, set up online forward triage tools (OFTTs). Using an OFTT before visiting the emergency department or a doctor's practice became common practice. A policy can be defined as what an institution or government chooses to do or not to do. An OFTT, in this case, has become both a policy and a practice. Methods: The study was part of a broader multiphase sequential explanatory design. First, an online survey questionnaire was administered to n = 176 patients who consented during OFTT usage. Descriptive analysis was carried out to identify who used the tool, for what purpose, and whether the participant followed the recommendations. The quantitative results shaped the interview guide's development. Second, in-depth interviews were held with a purposeful sample of n = 19, selected from the OFTT users who had consented to a further qualitative study. The qualitative findings were meant to explain the quantitative results. Third, in-depth interviews were held with healthcare providers and authorities (n = 5) who were privy to the tool. Framework analysis was adopted, using the five-factor framework as a lens with which to analyze the qualitative data only. Results: The five-factor framework proved useful in identifying gaps that affected the utility of the COVID-19 OFTT. The identified gaps could be represented by five factors: primary, secondary, tertiary, and extraneous factors, along with a lack of systems thinking. Conclusion: A theory or framework provides a road map to systematically identify the factors affecting policy implementation. Knowing how and why policy-practice gaps come about in a COVID-19 OFTT context facilitates better future OFTTs. The framework in this study, although developed in a universal health coverage (UHC) context in South Africa, proved useful in a telehealth context in Switzerland, in Europe. The importance of systems thinking in developing digital tools cannot be overemphasized.
Style APA, Harvard, Vancouver, ISO itp.
38

Lande-Marghade, Pallavi. "Anaesthesia TV: Beginning of a New Revolution!" Journal of Anaesthesia and Critical Care Reports 4, nr 1 (2018): 1–2. http://dx.doi.org/10.13107/jaccr.2018.v04i01.075.

Pełny tekst źródła
Streszczenie:
How about attending a live-streamed conference happening in Hawaii from the comfort of your home? Doesn't that sound exciting? Honestly speaking, we cannot deny the invasion of technology into our day-to-day lives, and the amalgamation of technology and social media in disseminating vital information is very evident. On 27th April, our sister concern, Anaesthesia TV, live-streamed the regional anaesthesia conference PRAC 2018 (Pune Regional Anaesthesia Conference). This was the first time in the history of anaesthesia conferences in India that a conference was streamed online, and it received a tremendous response, with over 7,430 viewers across the globe over two days. Viewers came from more than 30 countries, namely India, Indonesia, Bangladesh, Pakistan, the USA, the UK, Brazil, Egypt, Iran, Iraq, the UAE, Muscat, Australia, Nigeria, Saudi Arabia, Palestine, Somalia, Bhutan, China, Syria, the Maldives, Sudan, Sri Lanka, Malaysia and Russia, giving it a truly global outreach. It was an excellent opportunity for the digital generation to disseminate content more broadly, even to the remotest areas. We received excellent feedback from viewers on the quality of the audiovisual transmission. Viewers also enjoyed replaying sessions they had missed or were particularly interested in. Anaesthesia TV relies on the concept of academic philanthropy and technology. It retains presentations on the website, creating a record for posterity for both the speakers and the conference organizers. It is a joint effort by Dr Ashok Shyam, who founded Ortho TV, and myself. Conference details will remain available for a long time. Currently, details of conferences and organisers are lost once the conference website goes offline [which typically happens within a year]. By putting conference details on Anaesthesia TV, they remain available online on our website, and the entire program can be uploaded in PDF format. Anaesthesia TV will also post the details of the conference and links to the program and conference websites on our portals [Anaesthesia TV, Facebook, Twitter, etc.]. This will help popularize the conference and attract more delegates for the event. Information about the conference can be put up on our website well before the conference. All videos will be organized on Anaesthesia TV under the banner of the conference; this itself will work as a marketing tool for the society and the conference and will add to the society's reputation. The primary aim of every speaker is to showcase their work and share their knowledge with peers. Anaesthesia TV provides an open-access forum where this knowledge can be showcased to the world, giving speakers a chance of worldwide recognition. Since the portal is a global platform, it will also invite comments and suggestions from peers across the globe and help develop new connections and networks. Every video will be in a journal-article format with a proper scientific citation. Anyone who wishes to refer to the video can use that citation in their own publication or presentation. This will be indexed primarily with Google Scholar, and possibly with other indexing bodies, and will add to the citation list and H-index of every speaker-author. It will let the speaker earn academic credit for a conference presentation, similar to a paper, which is a great benefit in building one's academic credentials.
One of the major benefits is the record that will exist for posterity in the author's name. Even after years, people can listen to your talk and benefit from it. One of the many advantages of an online conference is its cost-effectiveness for the audience: it saves the extraneous costs of travel, accommodation and food. The authors, however, do not encourage this every time, as it forgoes the chance of face-to-face networking with other professionals. One would also lose out on social opportunities to interact and to put questions to the speakers themselves. Nevertheless, it is very convenient and accessible, and you can attend educational sessions right from the comfort of your home. If a session you are keen on attending in hall A clashes with a session of interest in hall B, you can catch up later through the recordings. On a lighter note, there is no need to stress about the wardrobe. A few tips to make the most of online conferences: 1. Put it on the calendar. 2. Make attendance a priority: avoid distractions at home and work. 3. Engage in the live and social events, which can enrich discussions. 4. Buy full access whenever possible. As long as you have an Internet connection and a smartphone, tablet or computer, use them wisely and you can garner much of the same value from the experience. Make the most of it! If you have tried one and loved it, tell more people about it! Spread the word, and more online opportunities will arise. It's time for us to go social! Visit us at www.anaesthesiatv.com
Style APA, Harvard, Vancouver, ISO itp.
39

"Equant provides extranet". Network Security 2000, nr 10 (październik 2000): 3–4. http://dx.doi.org/10.1016/s1353-4858(00)10008-x.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

"National extranet further secured". Network Security 2000, nr 3 (marzec 2000): 4. http://dx.doi.org/10.1016/s1353-4858(00)03008-7.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Shi, Yinghui, Huiyun Yang, Zongkai Yang, Wei Liu, Di Wu i Harrison Hao Yang. "Examining the effects of note-taking styles on college students’ learning achievement and cognitive load". Australasian Journal of Educational Technology, 12.08.2022, 1–11. http://dx.doi.org/10.14742/ajet.6688.

Pełny tekst źródła
Streszczenie:
This study investigated the effects of note-taking styles on college students' learning achievement and cognitive load in a 6-week lecture-based computer network course. Forty-two students were randomly assigned to one of three groups: collaborative note-taking, laptop note-taking, or traditional longhand note-taking. The results showed that students in the collaborative note-taking group achieved higher learning achievement and reported more favourable cognitive load than students in the other two groups. In particular, students in the collaborative note-taking group had significantly higher learning achievement and a significantly lower level of extraneous load than students in the longhand note-taking group. Implications for practice or policy: College students can improve their learning achievement more effectively through a collaborative note-taking style than through an individual note-taking style. College students can reduce extraneous load and improve germane load levels through collaborative note-taking. Instructors and administrators should encourage college students to take more collaborative notes during classroom instruction.
Style APA, Harvard, Vancouver, ISO itp.
42

Faulconer, Emily Kaye, Darryl Chamberlain i Beverly L. Wood. "A Case Study of Community of Inquiry Presences and Cognitive Load in Asynchronous Online STEM Courses". Online Learning 26, nr 3 (1.09.2022). http://dx.doi.org/10.24059/olj.v26i3.3386.

Pełny tekst źródła
Streszczenie:
The design and facilitation of asynchronous online courses can have notable impacts on students' persistence, performance, and perspectives. This case study presents current conditions for cognitive load and Community of Inquiry (CoI) presences in an asynchronous online introductory undergraduate STEM course. The researchers present the novel use of a Python script to clean and organize data, and a simplification of the instructional efficiency calculation for use with anonymous data. Key relationships between cognitive load and CoI presences are found through validated use of the NASA-TLX instrument and transcript analysis of discussion posts. The data show that student presences are not consistent throughout a course but are consistent across sections. Instructor presences are not consistent throughout a course or across sections. The study also explored predominant factors within each presence, confirming previous reports of low cognitive presence in discussions. The highest extraneous cognitive load was reported for understanding expectations and preparing an initial post. These results provide support for improvements to course design and instructor professional development to promote Community of Inquiry and reduce extraneous cognitive load.
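The instructional efficiency calculation mentioned here has a well-known classical form due to Paas and van Merriënboer, combining standardized performance and mental-effort scores. The sketch below shows only that classical formula; the paper's own simplification for anonymous data is not reproduced, and the sample scores are invented.

```python
# Minimal sketch of the classical instructional efficiency measure
# E = (z_performance - z_effort) / sqrt(2), following Paas & van Merrienboer.
# The scores below are invented; the study's simplified variant for
# anonymous data is not reproduced here.
import math
import statistics

performance = [72, 85, 90, 64, 78]   # e.g., assessment scores
mental_effort = [6, 4, 3, 7, 5]      # e.g., self-reported effort ratings

def z_scores(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

zp = z_scores(performance)
ze = z_scores(mental_effort)

efficiency = [(p - e) / math.sqrt(2) for p, e in zip(zp, ze)]
for i, eff in enumerate(efficiency, start=1):
    print(f"learner {i}: E = {eff:+.2f}")  # positive = relatively efficient
```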
Style APA, Harvard, Vancouver, ISO itp.
43

Bäuerle, Alex, Patrick Albus, Raphael Störk, Tina Seufert i Timo Ropinski. "exploRNN: teaching recurrent neural networks through visual exploration". Visual Computer, 14.07.2022. http://dx.doi.org/10.1007/s00371-022-02593-0.

Pełny tekst źródła
Streszczenie:
Due to the success and growing job market of deep learning (DL), students and researchers from many areas are interested in learning about DL technologies. Visualization has been used as a modern medium during this learning process. However, despite the fact that sequential data tasks, such as text and function analysis, are at the forefront of DL research, there does not yet exist an educational visualization that covers recurrent neural networks (RNNs). Additionally, the benefits and trade-offs between using visualization environments and conventional learning material for DL have not yet been evaluated. To address these gaps, we propose exploRNN, the first interactively explorable educational visualization for RNNs. exploRNN is accessible online and provides an overview of the training process of RNNs at a coarse level, as well as detailed tools for the inspection of data flow within LSTM cells. In an empirical between-subjects study with 37 participants, we investigate the learning outcomes and cognitive load of exploRNN compared to a classic text-based learning environment. While learners in the text group are ahead in superficial knowledge acquisition, exploRNN is particularly helpful for deeper understanding. Additionally, learning with exploRNN is perceived as significantly easier and causes less extraneous load. In conclusion, for difficult learning material, such as neural networks that require deep understanding, interactive visualizations such as exploRNN can be helpful.
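The LSTM internals that exploRNN lets learners inspect follow the standard gate formulation; as a reference point, a generic single-cell forward step can be sketched in NumPy as below. This is illustrative code, not taken from exploRNN, and the weight shapes and inputs are arbitrary.

```python
# Generic single time-step of a standard LSTM cell (not exploRNN's code);
# shapes and inputs are illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W: (4*hidden, input+hidden) stacked gate weights; b: (4*hidden,)."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i = sigmoid(z[0*hidden:1*hidden])   # input gate
    f = sigmoid(z[1*hidden:2*hidden])   # forget gate
    o = sigmoid(z[2*hidden:3*hidden])   # output gate
    g = np.tanh(z[3*hidden:4*hidden])   # candidate cell state
    c_t = f * c_prev + i * g            # new cell state
    h_t = o * np.tanh(c_t)              # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
input_dim, hidden = 3, 4
W = rng.normal(scale=0.1, size=(4 * hidden, input_dim + hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(rng.normal(size=input_dim), h, c, W, b)
print(h, c)
```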
Style APA, Harvard, Vancouver, ISO itp.
44

"A Secure Electronic Messaging System in Client Server Cryptography-RSA Algorithm". International Journal of Engineering and Advanced Technology 8, nr 6S3 (22.11.2019): 1804–8. http://dx.doi.org/10.35940/ijeat.f1343.0986s319.

Pełny tekst źródła
Streszczenie:
The potency and effectiveness of information systems depend, in part, on their design and on the way data are transmitted among different parties. Likewise, a crucial aspect of software development is the security of the data that flow through open communication channels. One of the most popular architectures is the client/server architecture, which enables the centralization of data storage and processing and provides flexibility for applying authentication methods and encryption algorithms within information systems. As the number of clients increases, the level of authentication and encryption must be raised as high as possible. Client/server is a technology that allows an interactive session to be opened between the user's browser and the server. In this study, we used a client/server architecture to accomplish secure messaging/chat between clients without the server being able to decrypt the messages, by applying two layers of security: one layer of encryption between the clients and the server, and a second layer of encryption between the clients within the chat room. In this manner, a Client/Server Cryptography-based Secure Messaging System using RSA (Rivest-Shamir-Adleman), a widely used public-key cryptography and authentication system for encrypting digital messaging transactions such as email over computer networks, extranets, and the Internet, is developed to encrypt and decrypt messages in a terminal window.
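To make the two-layer idea concrete, the toy sketch below nests textbook RSA: the sender encrypts first with the receiving client's public key and then with the server's public key, so the server can strip only its own layer and never sees the plaintext. This is a hypothetical, insecure illustration (tiny primes, no padding), not the system implemented in the paper.

```python
# Toy illustration of layered ("onion") encryption with textbook RSA.
# Tiny primes and no padding: for illustration only, not the paper's system
# and absolutely not secure for real messaging.

def make_keypair(p, q, e):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (e, n), (d, n)        # public key, private key

# Hypothetical participants: a chat peer and the relay server.
peer_pub, peer_priv = make_keypair(61, 53, 17)      # n = 3233
server_pub, server_priv = make_keypair(89, 97, 5)   # n = 8633 (> 3233, so the
                                                    # inner ciphertext fits)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

message = 1234                                  # must be < peer modulus
inner = encrypt(message, peer_pub)              # layer 2: only the peer can read
outer = encrypt(inner, server_pub)              # layer 1: transport to the server

relayed = decrypt(outer, server_priv)           # server strips its layer only...
assert relayed == inner                         # ...and still sees ciphertext
assert decrypt(relayed, peer_priv) == message   # the peer recovers the plaintext
```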
Style APA, Harvard, Vancouver, ISO itp.
45

Fatima, Mah Noor, Mohammad S. Obaidat, Khalid Mahmood, Salman Shamshad, Muhammad Asad Saleem i Muhammad Faizan Ayub. "Privacy-Preserving Three-Factor Authentication Protocol for Wireless Sensor Networks Deployed in Agricultural Field". ACM Transactions on Sensor Networks, 3.07.2023. http://dx.doi.org/10.1145/3607142.

Pełny tekst źródła
Streszczenie:
Agriculture is the backbone of the economic system and plays an essential part in a nation's prosperity. In addition to providing raw materials and food, it also offers numerous employment opportunities. Consequently, agriculture requires modern technology to increase productivity. In these circumstances, Wireless Sensor Networks (WSNs) are used to sense climatic parameters such as light, humidity, carbon dioxide, acidity, soil moisture, and temperature in an agricultural field. However, current research has been unable to resolve the tension between security and efficiency. Several studies employ computationally expensive cryptographic structures, whereas the majority of lightweight schemes are designed without considering certain security properties, such as resistance to ephemeral secret leakage (ESL) attacks and perfect forward secrecy. In our view, this issue can be overcome by using lightweight cryptographic primitives, paying particular attention to protocol weaknesses, and keeping in mind users' ever-changing security needs. We present a lightweight three-factor authentication protocol with diverse security properties and adaptive privacy preservation that is suited to user-friendly scenarios in the WSN environment. This is accomplished by removing all extraneous cryptographic structures. The proposed protocol is shown to be stronger in terms of privacy and security through a security analysis, a proof in the real-or-random (ROR) model, experimental validation using automated tools for the validation of Internet security protocols and applications, and a comparison with the security features of other protocols. The performance analysis shows that the proposed protocol outperforms competing protocols in terms of communication and computational overheads, with respective efficiency gains of 53% and 39%. Moreover, the ROR-model analysis confirms the performance benefits of this study compared with competing research.
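As a rough, generic illustration of the three-factor idea (something you know, have, and are) built from lightweight primitives, the sketch below derives a verifier from all three factors and uses a fresh challenge-response so captured traffic cannot be replayed. The names and message flow are invented; this is not the protocol proposed in the paper and it omits that protocol's privacy-preservation and ESL-resistance mechanisms.

```python
# Generic illustration of combining three authentication factors with
# lightweight primitives (hashing and HMAC). This is NOT the protocol
# proposed in the paper; names and message flow are invented for clarity.
import hashlib
import hmac
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# --- Registration (out of band): the user's three factors -------------------
password = b"correct horse battery staple"      # factor 1: something you know
smart_card_secret = secrets.token_bytes(16)     # factor 2: something you have
biometric_digest = h(b"fingerprint-template")   # factor 3: something you are

# The gateway stores only a verifier derived from all three factors.
user_id = b"farm-user-01"
verifier = h(user_id, password, smart_card_secret, biometric_digest)

# --- Login: fresh challenge-response so captured traffic cannot be replayed --
challenge = secrets.token_bytes(16)                                # from gateway
response = hmac.new(verifier, challenge, hashlib.sha256).digest()  # from user

# Gateway recomputes the expected response from its stored verifier.
expected = hmac.new(verifier, challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))
```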
Style APA, Harvard, Vancouver, ISO itp.
46

Rao, Prahalad, Satish Bukkapatnam, Omer Beyca, Zhenyu (James) Kong i Ranga Komanduri. "Real-Time Identification of Incipient Surface Morphology Variations in Ultraprecision Machining Process". Journal of Manufacturing Science and Engineering 136, nr 2 (16.01.2014). http://dx.doi.org/10.1115/1.4026210.

Pełny tekst źródła
Streszczenie:
Real-time monitoring and control of surface morphology variations in their incipient stages are vital for assuring nanometric-range finish in the ultraprecision machining (UPM) process. A real-time monitoring approach, based on predicting and updating the process states from sensor signals (using advanced neural networks (NNs) and Bayesian analysis), is reported for detecting incipient surface variations in UPM. An ultraprecision diamond turning machine is instrumented with three miniature accelerometers, a three-axis piezoelectric dynamometer, and an acoustic emission (AE) sensor for process monitoring. The machine tool is used for face-turning aluminum 6061 discs to a surface finish (Ra) in the range of 15–25 nm. While the sensor signals (especially the vibration signal in the feed direction) are sensitive to surface variations, the extraneous noise from the environment, machine elements, and sensing system prevents direct use of raw signal patterns for early detection of surface variations. Also, the nonlinear and time-varying nature of the process dynamics makes conventional statistical process monitoring techniques unsuitable for characterizing UPM-machined surfaces. Consequently, instead of just monitoring the raw sensor signal patterns, the nonlinear process dynamics from which the signal evolves are more effectively captured using a recurrent predictor neural network (RPNN). The parameters of the RPNN (weights and biases) serve as surrogates of the process states, which are updated in real time, based on measured sensor signals, using a Bayesian particle filter (PF) technique. We show that the PF-updated RPNN can effectively capture the complex signal evolution patterns. We use a mean-shift statistic, estimated from the PF-estimated surrogate states, to detect surface variation-induced changes in the process dynamics. Experimental investigations show that variations in surface characteristics can be detected within 15 ms of their inception using the present approach, as opposed to 30 ms or higher with the conventional statistical change detection methods tested.
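The monitoring chain described here (Bayesian particle filtering of surrogate states followed by a mean-shift change statistic) can be illustrated, in a greatly simplified form, by the sketch below: a bootstrap particle filter tracks a single latent level in a synthetic sensor signal, and a windowed mean-shift statistic flags the change. The signal and all parameters are invented, and the authors' RPNN-based state representation is not reproduced.

```python
# Greatly simplified illustration: a bootstrap particle filter tracks a latent
# "surrogate state" from a noisy sensor signal, and a windowed mean-shift
# statistic flags a change in that state. Not the authors' RPNN formulation;
# the signal and every parameter are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sensor signal: the latent level shifts at t = 300.
T = 600
latent = np.where(np.arange(T) < 300, 0.0, 0.6)
signal = latent + rng.normal(scale=0.3, size=T)

N = 500                                    # number of particles
particles = rng.normal(scale=0.1, size=N)  # initial guess of the state
estimates = np.empty(T)

for t in range(T):
    particles += rng.normal(scale=0.05, size=N)           # random-walk predict
    weights = np.exp(-0.5 * ((signal[t] - particles) / 0.3) ** 2)
    weights /= weights.sum()                               # Gaussian likelihood
    idx = rng.choice(N, size=N, p=weights)                 # multinomial resample
    particles = particles[idx]
    estimates[t] = particles.mean()

# Mean-shift statistic: difference between two adjacent windows of estimates.
w = 50
shift = np.array([estimates[t - w:t].mean() - estimates[t - 2 * w:t - w].mean()
                  for t in range(2 * w, T)])
alarm = np.argmax(np.abs(shift) > 0.25) + 2 * w  # first crossing of the
print(f"change flagged around t = {alarm}")      # (arbitrary) 0.25 threshold
```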
Style APA, Harvard, Vancouver, ISO itp.
47

Ruggill, Judd, i Ken McAllister. "The Wicked Problem of Collaboration". M/C Journal 9, nr 2 (1.05.2006). http://dx.doi.org/10.5204/mcj.2606.

Pełny tekst źródła
Streszczenie:
In “Dilemmas in a General Theory of Planning,” urban planners Horst Rittel and Melvin Webber outline what they term “wicked problems.” According to Rittel and Webber, wicked problems are unavoidably “ill-defined,” that is, unlike “problems in the natural sciences, which are definable and separable and may have solutions that are findable…[wicked problems] are never solved. At best they are only re-solved—over and over again” (160). Rittel and Webber were thinking specifically of the challenges involved in making decisions within immensely complex social circumstances—building highways through cities and designing low income housing projects, for example—but public policy-making and urban design are not the only fields rife with wicked problems. Indeed, the nub of Rittel and Webber’s articulation of wicked problems concerns a phenomenon common to many disciplines: interdisciplinary collaboration. As anyone who has collaborated with people outside her area of expertise will acknowledge, interdisciplinary collaboration itself is among the wickedest problems of all. By way of introduction, we direct the Learning Games Initiative (LGI), a transdisciplinary, inter-institutional research group that studies, teaches with, and builds computer games. In the seven years since LGI was inaugurated, we have undertaken many productive and well-received collaborations, including: 1) leading workshops at national and international conferences; 2) presenting numerous academic talks; 3) editing academic journals; 4) writing books, book chapters, journal articles, and other scholarly materials; 5) exhibiting creative and archival work in museums, galleries, and libraries; and 6) building one of the largest academic research archives of computer games, systems, paraphernalia, and print-, video-, and audio-scholarship in the world. We thus have a fair bit of experience with the wicked problem of collaboration. The purpose of this article is to share some of that experience with readers and to describe candidly some of the challenges we have faced—and sometimes overcome—working collaboratively across disciplinary, institutional, and even international boundaries. Collaborative Circle? Michael Farrell, whose illuminating analysis of “collaborative circles” has lent much to scholars’ understandings of group dynamics within creative contexts, succinctly describes how many such groups form: “A collaborative circle is a set of peers in the same discipline who, through open exchange of support, ideas, and criticism develop into an interdependent group with a common vision that guides their creative work” (266). Farrell’s model, while applicable to several of the smaller projects LGI has nurtured over the years, does not capture the idiosyncratic organizational method that has evolved more broadly within our collective. Rather, LGI has always tended to function according to a model more akin to that found in used car dealerships, one where “no reasonable offer will be refused.” LGI is open to anyone willing to think hard and get their hands dirty, which of course has molded the organization and its projects in remarkable ways. Unlike Farrell’s collaborative circles, for example, LGI’s collaborative model actually decentralizes the group’s study and production of culture. Any member from anywhere—not just “peers in the same discipline”—can initiate or join a project provided she or he is willing to trade in the coin of the realm: sweat equity. 
Much like the programmers of the open source software movement, LGI’s members work only on what excites them, and with other similarly motivated people. The “buy-in,” simply, is interest and a readiness to assume some level of responsibility for the successes and failures of a given project. In addition to decentralizing the group, LGI’s collaborative model has emerged such that it naturally encourages diversity, swelling our ranks with all kinds of interesting folks, from fine artists to clergy members to librarians. In large part this is because our members view “peers” in the most expansive way possible; sure, optical scientists can help us understand how virtual cameras simulate the real properties of lenses and research linguists can help us design more effective language-in-context tools for our games. However, in an organization that always tries to understand the layers of meaning-making that constitute computer games, such technical expertise is only one stratum. For a game about the cultural politics of ancient Greece that LGI has been working on for the past year, our members invited a musical instrument maker, a potter, and a school teacher to join the development team. These new additions—all experts and peers as far as LGI is concerned—were not merely consultants but became part of the development team, often working in areas of the project completely outside their own specialties. While some outsiders have criticized this project—currently known as “Aristotle’s Assassins”—for being too slow in development, the learning taking place as it moves forward is thrilling to those on the inside, where everyone is learning from everyone else. One common consequence of this dynamic is, as Farrell points out, that the work of the individual members is transformed: “Those who are merely good at their discipline become masters, and, working together, very ordinary people make extraordinary advances in their field” (2). Additionally, the diversity that gives LGI its true interdisciplinarity also makes for praxical as well as innovative projects. The varying social and intellectual concerns of the LGI’s membership means that every collaboration is also an exploration of ethics, responsibility, epistemology, and ideology. This is part of what makes LGI so special: there are multiple levels of learning that underpin every project every day. In LGI we are fond of saying that games teach multiple things in multiple ways. So too, in fact, does collaborating on one of LGI’s projects because members are constantly forced to reevaluate their ways of seeing in order to work with one another. This has been particularly rewarding in our international projects, such as our recently initiated project investigating the relationships among the mass media, new media, and cultural resource management practices. This project, which is building collaborative relationships among a team of archaeologists, game designers, media historians, folklorists, and grave repatriation experts from Cambodia, the Philippines, Australia, and the U.S., is flourishing, not because its members are of the same discipline nor because they share the same ideology. Rather, the team is maturing as a collaborative and productive entity because the focus of its work raises an extraordinary number of questions that have yet to be addressed by national and international researchers. 
In LGI, much of the sweat equity we contribute involves trying to answer questions like these in ways that are meaningful for our international research teams. In our experience, it is in the process of investigating such questions that effective collaborative relationships are cemented and within which investigators end up learning about more than just the subject matter at hand. They also learn about the micro-cultures, histories, and economies that provide the usually invisible rhetorical infrastructures that ground the subject matter and to which each team member is differently attuned. It is precisely because of this sometimes slow, sometimes tense learning/teaching dynamic—a dynamic too often invoked in both academic and industry settings to discourage collaboration—that François Chesnais calls attention to the fact that collaborative projects frequently yield more benefits than the sum of their parts suggests possible. This fact, says Chesnais, should lead institutions to value collaborative projects more highly as “resource-creating, value-creating and surplus-creating potentialities” (22). Such work is always risky, of course, and Jitendra Mohan, a scholar specializing in cross-cultural collaborations within the field of psychology, writes that international collaboration “raises methodological problems in terms of the selection of culturally-coloured items and their historical as well as semantic meaning…” (314). Mohan means this as a warning and it is heeded as such by LGI members; at the same time, however, it is precisely the identification and sorting out of such methodological problems that seems to excite our best collaborations and most innovative work. Given such promise, it is easy to see why LGI is quite happy to adopt the used car dealer’s slogan “no reasonable offer refused.” In fact, in LGI we see our open-door policy for projects as mirroring our primary object of study: games. This is another factor that we believe contributes to the success of our members’ collaborations. Commercial computer game development is a notoriously interdisciplinary and collaborative endeavor. By collaborating in a fashion similar to professional game developers, LGI members are constantly fashioning more complex understandings of the kinds of production practices and social interactions involved in game development; these practices and interactions are crucial to game studies precisely because they shape what games consist of, how they mean, and the ways in which they are consumed. For this reason, we think it foolish to refuse any reasonable offer to help us explore and understand these meaning-making processes. Wicked Problem Backlash Among the striking points that Rittel and Webber make about wicked problems is that solutions to them are usually created with great care and planning, and yet inevitably suffer severe criticism (at least) or utter annihilation (at worst). Far from being indicative of a bad solution, this backlash against a wicked problem’s solution is an integral element of what we call the “wicked problem dialectic.” The backlash against attempts to establish and nurture transdisciplinary collaboration is easy to document at multiple levels. For example, although our used car dealership model has created a rich research environment, it has also made the quotidian work of doing projects difficult. For one thing, organizing something as simple as a project meeting can take Herculean efforts. 
The wage earners are on a different schedule than the academics, who are on a different schedule from the artists, who are on a different schedule from the librarians. Getting everyone together in the same room at the same time (even virtually) is like herding cats. As co-directors of LGI, we have done our best to provide the membership with both synchronous and asynchronous resources to facilitate communication (e.g., conference-call enabled phones, online forums, chat clients, file-sharing software, and so on), but nothing beats face-to-face meetings, especially when projects grow complex or deadlines impend mercilessly. Nonetheless, our members routinely fight the meeting scheduling battle, despite the various communication options we have made available through our group’s website and in our physical offices. Most recently we have found that an organizational wiki makes the process of collecting and sharing notes, drawings, videos, segments of code, and drafts of writing decidedly easier than it had been, especially when the projects involve people who do not live a short distance (or a cheap phone call) away from each other. Similarly, not every member has the same amount of time to devote to LGI and its projects despite their considerable and demonstrated interest in them. Some folks are simply busier than others, and cannot contribute to projects as much as they might like. This can be a real problem when a project requires a particular skill set, and the owner of those skills is busy doing other things like working at a paying job or spending time with family. LGI’s projects are always done in addition to members’ regular workload, and it is understandable when that workload has to take precedence. Like regular exercise and eating right, the organization’s projects are the first things to go when life’s demands intrude. Different projects handle this challenge in a variety of ways, but the solutions always tend to reflect the general structure of the project itself. In projects that follow what Andrea Lunsford and Lisa Ede refer to as “hierarchical collaborations”—projects that are clearly structured, goal-oriented, and define clear roles for its participants—milestones and deadlines are set at the beginning of the project and are often tied to professional rewards that stand-in for a paycheck: recommendation letters, all-expenses-paid conference trips, guest speaking invitations, and so forth (133). Less organized projects—what Lunsford and Ede call “dialogic collaborations”—deal with time scheduling challenges differently. Inherently, dialogic collaborations such as these tend to be less hampered by time because they are loosely structured, accept and often encourage members to shift roles, and often value the process of working toward the project’s goals as highly as actually attaining them (134). The most common adaptive strategy used in these cases is simply for the most experienced members of the team to keep the project in motion. As long as something is happening, dialogic collaborations can be kept fruitful for a very long time, even when collaborators are only able to contribute once or twice a month. In our experience, as long as each project’s collaborators understand its operative expectations—which can, by the way, be a combination of hierarchical and dialogical modes—their work proceeds smoothly. Finally, there is the matter of expenses. 
As an institutionally unaffiliated collective, the LGI has no established revenue stream, which means project funding is either grant-based or comes out of the membership’s pockets. As anyone who has ever applied for a grant knows, it is one thing to write a grant, and another thing entirely to get it. Things are especially tough when grant monies are scarce, as they have been (at least on this side of the pond) since the U.S. economy started its downward spiral several years ago. Tapping the membership’s pockets is not really a viable funding option either. Even modest projects can be expensive, and most folks do not have a lot of spare cash to throw around. What this means, ultimately, is that even though our group’s members have carte blanche to do as they will, they must do so in a resource-starved environment. While it is sometimes disappointing that we are not able to fund certain projects despite their artistic and scholarly merit, LGI members learned long ago that such hardships rarely foreclose all opportunities. As Anne O’Meara and Nancy MacKenzie pointed out several years ago, many “seemingly extraneous features” of collaborative projects—not only financial limitations, but also such innocuous phenomena as where collaborators meet, the dance of their work and play patterns, their conflicting responsibilities, geographic separations, and the ways they talk to each other—emerge as influential factors in all collaborations (210). Thus, we understand in LGI that while our intermittent funding has influenced the dimension and direction of our group, it has also led to some outcomes that in hindsight we are glad we were led to. For example, while LGI originally began studying games in order to discover where production-side innovations might be possible, a series of funding shortfalls and serendipitous academic conversations led us to favor scholarly writing, which has now taken precedence over other kinds of projects. At the most practical level, this works out well because writing costs nothing but time, plus there is a rather desperate shortage of good game scholarship. Moreover, we have discovered that as LGI members have refined their scholarship and begun turning out books, chapters, and articles on a consistent basis, both they and the organization accrue publicity and credibility. Add to this the fact that for many of the group’s academics, traditional print-based work is more valued in the tenure and promotion economy than is, say, an educational game, an online teachers’ resource, or a workshop for a local parent-teacher association, and you have a pretty clear research path blazed by what Kathleen Clark and Rhunette Diggs have called “dialectical collaboration,” that is, collaboration marked by “struggle and opposition, where tension can be creative, productive, clarifying, as well as difficult” (10). Conclusion In sketching out our experience directing a highly collaborative digital media research collective, we hope we have given readers a sense of why collaboration is almost always a “wicked problem.” Collaborators negotiate different schedules, work demands, and ways of seeing, as well as resource pinches that hinder the process by which innovative digital media collaborations come to fruition. And yet, it is precisely because collaboration can be so wicked that it is so valuable. 
In constantly requiring collaborators to assess and reassess their rationales, artistic visions, and project objectives, collaboration makes for reflexive, complex, and innovative projects, which (at least to us) are the most satisfying and useful of all. References Chesnais, François. “Technological Agreements, Networks and Selected Issues in Economic Theory.” In Technological Collaboration: The Dynamics of Cooperation in Industrial Innovation. Rod Coombs, Albert Richards, Vivien Walsh, and Pier Paolo Saviotti, eds. Northampton, MA: Edward Elgar, 1996. 18-33. Clark, Kathleen D., and Rhunette C. Diggs. “Connected or Separated?: Toward a Dialectical View of Interethnic Relationships.” In Building Diverse Communities: Applications of Communication Research. McDonald, Trevy A., Mark P. Orbe, and Trevellya Ford-Ahmed, eds. Cresskill, NJ: Hampton Press, 2002. 3-25. Farrell, Michael P. Collaborative Circles: Friendship Dynamics & Creative Work. Chicago: U of Chicago P, 2001. Lunsford, Andrea, and Lisa Ede. Singular Texts/Plural Authors: Perspectives on Collaborative Writing. Carbondale: Southern Illinois UP, 1990. Mohan, Jitendra. “Cross-Cultural Experience of Collaboration in Personality Research.” Personality across Cultures: Recent Developments and Debates. Jitendra Mohan, ed. Oxford: Oxford UP, 2000. 313-335. O’Meara, Anne, and Nancy R. MacKenzie. “Reflections on Scholarly Collaboration.” In Common Ground: Feminist Collaboration in the Academy. Elizabeth G. Peck and JoAnna Stephens Mink, eds. Albany: State U of New York P, 1998. 209-226. Rittel, Horst W. J., and Melvin M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4 (1973): 155-169. Citation reference for this article: MLA Style: Ruggill, Judd, and Ken McAllister. "The Wicked Problem of Collaboration." M/C Journal 9.2 (2006). <http://journal.media-culture.org.au/0605/07-ruggillmcallister.php>. APA Style: Ruggill, J., and K. McAllister. (May 2006) "The Wicked Problem of Collaboration," M/C Journal, 9(2). Retrieved from <http://journal.media-culture.org.au/0605/07-ruggillmcallister.php>.
Style APA, Harvard, Vancouver, ISO itp.
