Journal articles on the topic 'Void search algorithm'

To see the other types of publications on this topic, follow the link: Void search algorithm.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 45 journal articles for your research on the topic 'Void search algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Yalong, Wei Yu, Xuan Ma, Hisakazu Ogura, and Dongfen Ye. "Multi-Objective Optimization for High-Dimensional Maximal Frequent Itemset Mining." Applied Sciences 11, no. 19 (September 26, 2021): 8971. http://dx.doi.org/10.3390/app11198971.

Abstract:
The solution space of frequent itemsets generally grows explosively because of the high-dimensional attributes of big data, yet mining the frequent itemsets of high-dimensional transaction sets is the premise of big data association rule analysis. Traditional and classical algorithms such as Apriori and FP-Growth, as well as their derivatives, are unacceptable for practical big data analysis in such an explosive solution space because of their huge consumption of storage space and running time. A multi-objective optimization algorithm is proposed to mine the frequent itemsets of high-dimensional data. First, all frequent 2-itemsets are generated by scanning the transaction set, and new items are then added to them as the population evolves. The algorithm searches for maximal frequent itemsets, which gather the most frequent subsets, because every non-void subset of a frequent itemset is itself frequent. During the run, lethal gene fragments in individuals are recorded and eliminated so that individuals can recover. Finally, the Pareto-optimal set of frequent itemsets is obtained: all non-void subsets of these solutions are frequent itemsets, and all of their supersets are non-frequent. Experiments demonstrate the practicality and validity of the proposed algorithm on big data.
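The first step described above, generating all frequent 2-itemsets with a scan of the transaction set, is easy to illustrate in isolation. The sketch below is a minimal illustration of that step only, not the authors' multi-objective evolutionary algorithm; the toy transactions and the min_support threshold are invented for the example.

    from itertools import combinations
    from collections import Counter

    def frequent_2_itemsets(transactions, min_support):
        """Scan the transactions once and keep every 2-itemset whose
        support (fraction of transactions containing it) >= min_support."""
        counts = Counter()
        for t in transactions:
            for pair in combinations(sorted(set(t)), 2):
                counts[pair] += 1
        n = len(transactions)
        return {p: c / n for p, c in counts.items() if c / n >= min_support}

    # Toy data; in the paper the transaction sets are high-dimensional.
    transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
    print(frequent_2_itemsets(transactions, min_support=0.5))

In the evolutionary phase, these pairs would seed the population, with longer candidate itemsets built by adding items and checked against the same support counting.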
2

Zhong, Deyun, Benyu Li, Tiandong Shi, Zhaopeng Li, Liguan Wang, and Lin Bi. "Repair of Voids in Multi-Labeled Triangular Mesh." Applied Sciences 11, no. 19 (October 6, 2021): 9275. http://dx.doi.org/10.3390/app11199275.

Abstract:
In this paper, we propose a novel mesh repairing method for repairing voids from several meshes to ensure the desired topological correctness. The input to our method is several closed and manifold meshes without labels. The basic idea of the method is to search for and repair voids based on a multi-labeled mesh data structure and ideas from graph theory. We propose judgment rules for voids between the input meshes and a method of void repairing based on specified model priorities. The method consists of three steps: (a) converting the input meshes into a multi-labeled mesh; (b) searching for quasi-voids using the breadth-first search algorithm and determining true voids via the judgment rules; (c) repairing voids by modifying mesh labels. The method repairs the voids accurately, and only a few invalid triangular facets are removed. In general, the method can repair meshes with one hundred thousand facets in approximately one second on very modest hardware. Moreover, it can easily be extended to process large-scale polygon models with millions of polygons. The experimental results on several data sets show the reliability and performance of the void repairing method based on the multi-labeled triangular mesh.
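Step (b), finding candidate (quasi-)voids with breadth-first search, can be conveyed on a simplified structure. Here the multi-labeled mesh is abstracted as a facet-adjacency graph in which unlabeled facets form potential voids; the adjacency encoding and the labels dictionary are invented stand-ins for the paper's multi-labeled mesh data structure.

    from collections import deque

    def find_quasi_voids(adjacency, labels):
        """Return connected components of unlabeled facets (label None).
        adjacency: facet id -> iterable of neighboring facet ids.
        Each component is a quasi-void; judgment rules would then decide
        which components are true voids."""
        seen, voids = set(), []
        for start in adjacency:
            if labels.get(start) is not None or start in seen:
                continue
            component, queue = [], deque([start])
            seen.add(start)
            while queue:                      # breadth-first traversal
                f = queue.popleft()
                component.append(f)
                for g in adjacency[f]:
                    if g not in seen and labels.get(g) is None:
                        seen.add(g)
                        queue.append(g)
            voids.append(component)
        return voids

    # Facets 2 and 3 are unlabeled and adjacent: one quasi-void [2, 3].
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    labels = {0: "A", 1: "A", 2: None, 3: None, 4: "B"}
    print(find_quasi_voids(adjacency, labels))

Repairing, step (c), would then amount to rewriting the labels of each confirmed void component according to the model priorities.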
3

Frommholz, D. "IMAGE INTERPOLATION ON THE CPU AND GPU USING LINE RUN SEQUENCES." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (May 17, 2022): 53–60. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-53-2022.

Abstract:
Abstract. This paper describes an efficient implementation of an image interpolation algorithm based on inverse distance weighting (IDW). The time-consuming search for support pixels bordering the voids to be filled is facilitated through gapless sweeps of different directions over the image. The scanlines needed for the sweeps are constructed from a path prototype per orientation whose regular substructures are reused and shifted to produce aligned duplicates covering the entire input bitmap. The line set is followed concurrently to detect existing samples around nodata patches and to compute the distance to the pixels to be newly set. Since the algorithm relies on integer line rasterization only and does not need auxiliary data structures beyond the output image and a weight aggregation bitmap for intensity normalization, it runs on multi-core central and graphics processing units (CPUs and GPUs). Also, occluded support pixels of non-convex void patches are ignored, and over- or undersampling of close-by and distant valid neighbors is compensated for. Runtime and accuracy compared to generated IDW ground truth are evaluated for the CPU and GPU implementations of the algorithm on single-channel and multispectral bitmaps of various filling degrees.
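The IDW weighting that the line-run sweeps accelerate can be shown in a deliberately naive form. The sketch below fills nodata pixels from all valid pixels with inverse-distance weights; the paper's contribution is doing this efficiently with line run sequences and occlusion handling, neither of which this brute-force version attempts. The power parameter and the NaN-as-nodata convention are assumptions for the example.

    import numpy as np

    def idw_fill(img, power=2.0):
        """Brute-force inverse distance weighting: O(void pixels * support pixels)."""
        out = img.copy()
        ys, xs = np.nonzero(~np.isnan(img))           # support pixels
        vals = img[ys, xs]
        for y, x in zip(*np.nonzero(np.isnan(img))):  # void pixels
            d2 = (ys - y) ** 2 + (xs - x) ** 2
            w = 1.0 / np.power(d2, power / 2.0)
            out[y, x] = np.sum(w * vals) / np.sum(w)
        return out

    img = np.array([[1.0, np.nan, 3.0],
                    [np.nan, np.nan, 4.0],
                    [2.0, 3.0, np.nan]])
    print(idw_fill(img))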
4

Chaitanya, G. V. A., and G. S. Gupta. "Liquid flow in heap leaching using the discrete liquid flow model and graph-based void search algorithm." Hydrometallurgy 221 (August 2023): 106151. http://dx.doi.org/10.1016/j.hydromet.2023.106151.

5

Zhu, Xiaoyuan, and Changhe Yuan. "Exact Algorithms for MRE Inference." Journal of Artificial Intelligence Research 55 (March 22, 2016): 653–83. http://dx.doi.org/10.1613/jair.4867.

Abstract:
Most Relevant Explanation (MRE) is an inference task in Bayesian networks that finds the most relevant partial instantiation of target variables as an explanation for given evidence by maximizing the Generalized Bayes Factor (GBF). No exact MRE algorithm has been developed previously except exhaustive search. This paper fills the void by introducing two Breadth-First Branch-and-Bound (BFBnB) algorithms for solving MRE based on novel upper bounds of GBF. One upper bound is created by decomposing the computation of GBF using a target blanket decomposition of evidence variables. The other upper bound improves the first bound in two ways. One is to split the target blankets that are too large by converting auxiliary nodes into pseudo-targets so as to scale to large problems. The other is to perform summations instead of maximizations on some of the target variables in each target blanket. Our empirical evaluations show that the proposed BFBnB algorithms make exact MRE inference tractable in Bayesian networks that could not be solved previously.
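As a baseline for what the BFBnB algorithms improve on, exhaustive MRE search is easy to state: enumerate the partial instantiations x of the target variables and maximize GBF(x; e) = P(e | x) / P(e | not-x). The sketch below uses a tiny explicit joint distribution instead of a Bayesian network, so it conveys only the objective, not the bound-based search; the variables and probabilities are invented, and all conditioning events are assumed to have positive probability.

    from itertools import combinations, product

    # Invented joint P(T1, T2, E) as {assignment tuple: probability};
    # tuple positions: 0 = target T1, 1 = target T2, 2 = evidence E.
    joint = {(0, 0, 0): .10, (0, 0, 1): .05, (0, 1, 0): .10, (0, 1, 1): .15,
             (1, 0, 0): .05, (1, 0, 1): .20, (1, 1, 0): .10, (1, 1, 1): .25}

    def prob(pred):
        return sum(p for a, p in joint.items() if pred(a))

    def gbf(inst, e_val):
        """GBF(x; e) = P(e | x) / P(e | not-x) for partial instantiation x."""
        holds = lambda a: all(a[i] == v for i, v in inst.items())
        p_e_x = prob(lambda a: holds(a) and a[2] == e_val) / prob(holds)
        p_e_nx = prob(lambda a: not holds(a) and a[2] == e_val) / prob(lambda a: not holds(a))
        return p_e_x / p_e_nx

    def mre(e_val=1):
        best = None
        for k in (1, 2):                  # all non-empty partial instantiations
            for idxs in combinations((0, 1), k):
                for vals in product((0, 1), repeat=k):
                    inst = dict(zip(idxs, vals))
                    score = gbf(inst, e_val)
                    if best is None or score > best[0]:
                        best = (score, inst)
        return best

    print(mre())   # here {0: 1} (T1 = 1) wins with GBF = 1.5

Note how the shorter explanation T1 = 1 beats the full instantiations, which is exactly the behavior that makes MRE attractive and its search space (all partial instantiations) so large.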
6

Okamoto, Yoshifumi, Yusuke Tominaga, Shinji Wakao, and Shuji Sato. "Topology optimization of magnetostatic shielding using multistep evolutionary algorithms with additional searches in a restricted design space." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 33, no. 3 (April 29, 2014): 894–913. http://dx.doi.org/10.1108/compel-10-2012-0202.

Abstract:
Purpose – The purpose of this paper is to improve the multistep algorithm using an evolutionary algorithm (EA) for the topology optimization of magnetostatic shielding, and to reveal the effectiveness of the methodology by comparison with a conventional optimization method. Furthermore, the design target is to obtain a novel shape of magnetostatic shielding. Design/methodology/approach – EAs based on random search allow engineers to define general-purpose objectives with various constraint conditions; however, many iterations of the FEA are required to evaluate the objective function, and it is difficult to reach a practical solution without island and void distributions. The authors therefore proposed the multistep algorithm with design space restriction, and here improve it in order to get a better solution than the previous one. Findings – A variant model of the optimized topology derived from the improved multistep algorithm is defined to clarify the effectiveness of the optimized topology. The upper curvature of the inner shielding contributed to the reduction of magnetic flux density in the target domain. Research limitations/implications – Because the converged topology has much pixel-element unevenness, a special smoother to remove the unevenness will play an important role in the realization of practical magnetostatic shielding. Practical implications – The optimized topology gives a useful detailed structure of magnetostatic shielding. Originality/value – First, while the conventional algorithm could not find a reasonable shape, the improved multistep optimization can capture one. Second, an additional search is attached to the multistep optimization procedure, and the improved multistep algorithm is shown to perform better than the conventional algorithm.
7

Savulionienė, Loreta, and Leonidas Sakalauskas. "Statistinis dažnų posekių paieškos algoritmas." Informacijos mokslai 58 (January 1, 2011): 126–43. http://dx.doi.org/10.15388/im.2011.0.3118.

Abstract:
Modern life involves large amounts of data and information, and search is one of the basic operations a computer performs. The goal of a search is to find a particular element or sequence of elements in a large amount of data, or to confirm that it is absent. The amounts of data in databases have reached terabytes, so data retrieval, analysis, and rapid decision-making become increasingly complicated, and large quantities of information contain both important and void information. The main goal of data mining is to find meaning in data, i.e., relationships between data, their recurrence, and the like; this technology applies to business, medicine, and other areas where large amounts of information are processed and relationships among data are detected, i.e., new information is obtained from large amounts of data. The paper proposes a new statistical algorithm for mining frequent subsequences, together with experimental results and conclusions. The essence of the statistical frequent-subsequence search algorithm is to identify frequent subsequences quickly: it does not check the entire contents of the file several times, but scans the file once, examining records according to a chosen probability p. The algorithm is inexact, but its execution time is much shorter than that of exact algorithms. It can be applied to structure-search problems where what matters is which subsequence is the most frequent, and the exact number of frequent subsequences is not very important. Keywords: subsequence, candidate sequence, data set, frequent element, itemset generation, hash function, type I error, type II error, confidence interval.
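The core idea, a single pass over the file in which each record is examined according to a probability p, can be sketched as follows. The records and p are invented, and a real implementation would add the confidence-interval machinery the keywords mention; scaling sampled counts by 1/p gives the (inexact) support estimates.

    import random
    from collections import Counter

    def sampled_frequent_items(records, p, min_est_support, seed=0):
        """One scan: sample each record with probability p, count items,
        and scale by 1/p to estimate true supports (inexact by design)."""
        rng = random.Random(seed)
        counts, n = Counter(), 0
        for rec in records:                  # single pass over the file
            n += 1
            if rng.random() < p:
                counts.update(set(rec))
        est = {item: c / (p * n) for item, c in counts.items()}
        return {i: s for i, s in est.items() if s >= min_est_support}

    records = [["a", "b"], ["a"], ["a", "c"], ["b", "a"], ["c"]] * 200
    print(sampled_frequent_items(records, p=0.1, min_est_support=0.5))

The same sampling trick extends from single items to subsequences by counting sampled candidate subsequences instead of items.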
8

Fiorini, Bartolomeo, Kazuya Koyama, and Albert Izard. "Studying large-scale structure probes of modified gravity with COLA." Journal of Cosmology and Astroparticle Physics 2022, no. 12 (December 1, 2022): 028. http://dx.doi.org/10.1088/1475-7516/2022/12/028.

Abstract:
Abstract We study the effect of two Modified Gravity (MG) theories, f(R) and nDGP, on three probes of large-scale structure, the real space power spectrum estimator Q0, bispectrum and voids, and validate fast approximate COLA simulations against full N-body simulations for the prediction of these probes. We find that using the first three even multipoles of the redshift space power spectrum to estimate Q0 is enough to reproduce the MG boost factors of the real space power spectrum for both halo and galaxy catalogues. By analysing the bispectrum and reduced bispectrum of Dark Matter (DM), we show that the strong MG signal present in the DM bispectrum is mainly due to the enhanced power spectrum. We warn about adopting screening approximations in simulations as this neglects non-linear contributions that can source a significant component of the MG bispectrum signal at the DM level, but we argue that this is not a problem for the bispectrum of galaxies in redshift space where the signal is dominated by the non-linear galaxy bias. Finally, we search for voids in our mock galaxy catalogues using the ZOBOV watershed algorithm. To apply a linear model for Redshift-Space Distortion (RSD) in the void-galaxy cross-correlation function, we first examine the effects of MG on the void profiles entering into the RSD model. We find relevant MG signals in the integrated-density, velocity dispersion and radial velocity profiles in the nDGP theory. Fitting the RSD model for the linear growth rate, we recover the linear theory prediction in an nDGP model, which is larger than the ΛCDM prediction at the 3σ level. In f(R) theory we cannot naively compare the results of the fit with the linear theory prediction as this is scale-dependent, but we obtain results that are consistent with the ΛCDM prediction.
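For orientation, the Q0 statistic referred to here is commonly defined as the redshift-space power spectrum evaluated at mu = 0, assembled from the even multipoles via the Legendre polynomial values L0(0) = 1, L2(0) = -1/2, L4(0) = 3/8. This background formula is supplied as a reading aid, not quoted from the paper:

    Q_0(k) = P_0(k) - (1/2) P_2(k) + (3/8) P_4(k)

Truncating at l = 4 is what "using the first three even multipoles" refers to; because only mu = 0 is kept, redshift-space distortions largely drop out and Q_0 tracks the real-space power spectrum.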
9

Zhao, Zhiyue, Baohui Wang, Jing Wang, Lide Fang, Xiaoting Li, Fan Wang, and Ning Zhao. "Liquid film characteristics measurement based on NIR in gas–liquid vertical annular upward flow." Measurement Science and Technology 33, no. 6 (March 10, 2022): 065014. http://dx.doi.org/10.1088/1361-6501/ac57ed.

Abstract:
Abstract Liquid film plays a crucial role in the void fraction, friction pressure drop, and momentum and heat transfer of two-phase flow. Film thickness measurement experiments on annular flow at four pressure conditions have been conducted using a near-infrared sensor. The signal is processed by variational mode decomposition, whose parameters are optimized using the sparrow search algorithm. The envelope spectrum and Pearson correlation coefficient judgment criteria were adopted for signal reconstruction, and the value of the liquid film thickness is obtained. The effects of factors such as flow rate, pressure, and entrainment on the liquid film thickness are analyzed theoretically. The characterization parameters Weg″, Wel, Nμl and Xmod have been extracted and optimized, and a new average liquid film thickness correlation is proposed. The laboratory results indicate that the mean absolute percentage error of the predictive correlation is 4.35% (current data) and 12.02% (literature data), respectively.
10

Chida, Nariyoshi, and Tachio Terauchi. "Repairing Regular Expressions for Extraction." Proceedings of the ACM on Programming Languages 7, PLDI (June 6, 2023): 1633–56. http://dx.doi.org/10.1145/3591287.

Abstract:
While synthesizing and repairing regular expressions (regexes) based on Programming-by-Examples (PBE) methods have seen rapid progress in recent years, all existing works only support synthesizing or repairing regexes for membership testing, and the support for extraction is still an open problem. This paper fills the void by proposing the first PBE-based method for synthesizing and repairing regexes for extraction. Our work supports regexes that have real-world extensions such as backreferences and lookarounds. The extensions significantly affect the PBE-based synthesis and repair problem. In fact, we show that there are unsolvable instances of the problem if the synthesized regexes are not allowed to use the extensions, i.e., there is no regex without the extensions that correctly classifies the given set of examples, whereas every problem instance is solvable if the extensions are allowed. This is in stark contrast to the case of membership, where every instance is guaranteed to have a solution expressible by a pure regex without the extensions. The main contribution of the paper is an algorithm to solve the PBE-based synthesis and repair problem for extraction. Our algorithm builds on existing methods for synthesizing and repairing regexes for membership testing, i.e., the enumerative search algorithms with SMT constraint solving. However, significant extensions are needed because the SMT constraints in the previous works are based on a non-deterministic semantics of regexes. Non-deterministic semantics is sound for membership but not for extraction, because which substrings are extracted depends on the deterministic behavior of actual regex engines. To address the issue, we propose a new SMT constraint generation method that respects the deterministic behavior of regex engines. For this, we first define a novel formal semantics of an actual regex engine as a deterministic big-step operational semantics, and use it as a basis to design the new SMT constraint generation method. The key idea to simulate the determinism in the formal semantics and the constraints is to consider continuations of regex matching and use them for disambiguation. We also propose two new search space pruning techniques called approximation-by-pure-regex and approximation-by-backreferences that make use of the extraction information in the examples. We have implemented the synthesis and repair algorithm in a tool called R3 (Repairing Regex for extRaction) and evaluated it on 50 regexes that contain real-world extensions. Our evaluation shows the effectiveness of the algorithm and that our new pruning techniques substantially prune the search space.
11

Islam, Rashedul, Md Nasim Akhtar, Badlishah R. Ahmad, Utpal Kanti Das, Mostafijur Rahman, and Zahereel Ishwar Abdul Khalib. "An approach to building energy clusters using particle swarm optimization algorithm for allocating the tasks in computational grid." Indonesian Journal of Electrical Engineering and Computer Science 14, no. 2 (May 1, 2019): 826. http://dx.doi.org/10.11591/ijeecs.v14.i2.pp826-833.

Abstract:
Properly mapping available tasks to particles for allocation is a challenging job to accomplish. It requires a proper procedural approach and an effective algorithm or strategy. No proper and exact deterministic polynomial-time approach to the task allocation problem exists. However, for the survival of the grid and the execution of the assigned tasks, the reserved tasks need to be allocated equally among the particles of the grid space. At the same time, the applied model for task allocation must not consume unnecessary time and memory. We apply Particle Swarm Optimization (PSO) to allocate the tasks. Additionally, the particles are divided into three clusters based on their energy level; each cluster has its own cluster header, and the cluster headers are used to search for tasks in the space. In a single cluster, the member particles have the same energy-level status: full energy, half energy, or no energy. As a result, the system uses only limited time to search for the remaining tasks when a particular task requires allocating half a task to a particle.
12

Takubo, Tomohito, Erika Miyake, Atsushi Ueno, and Masaki Kubo. "Welding Line Detection Using Point Clouds from Optimal Shooting Position." Journal of Robotics and Mechatronics 35, no. 2 (April 20, 2023): 492–500. http://dx.doi.org/10.20965/jrm.2023.p0492.

Abstract:
A method for welding line detection using point cloud data is proposed to automate welding operations in combination with a contact sensor. The proposed system targets a fillet weld, in which the joint line between two metal plates attached vertically is welded. In the proposed method, after the position and orientation of the two flat plates are detected from a single viewpoint as a rough measurement, each flat plate is measured in detail from the optimal shooting position for its plane to detect a precise weld line. When a flat plate is measured from an angle, the 3D point cloud obtained by a depth camera contains measurement errors; for example, a point cloud measuring a plane has a wavy shape or voids owing to light reflection. By shooting the plane vertically, however, the point cloud has fewer errors. Using these characteristics, a two-step measurement algorithm for determining weld lines is proposed. The weld line detection results show an improvement of 5 mm from the rough to the precise measurement. Furthermore, the average measurement error was less than 2.5 mm, making it possible to narrow the search range of the contact sensor for welding automation.
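Geometrically, the fillet weld line is the intersection of the two fitted plate planes. One compact way to obtain it (a sketch, not necessarily the authors' exact formulation) fits each plane by SVD and takes the cross product of the normals as the line direction; the synthetic point clouds below are invented.

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through a cloud: returns (centroid, unit normal)."""
        c = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - c)
        return c, vt[-1]                      # normal = least-varying direction

    def plane_intersection(c1, n1, c2, n2):
        """Intersection line of two planes: a point on it and its direction."""
        d = np.cross(n1, n2)                  # line direction
        # Point satisfying both plane equations plus the gauge p . d = 0.
        A = np.vstack([n1, n2, d])
        b = np.array([n1 @ c1, n2 @ c2, 0.0])
        p = np.linalg.solve(A, b)
        return p, d / np.linalg.norm(d)

    rng = np.random.default_rng(0)
    # Plate 1 lies in z = 0, plate 2 in y = 0: the weld line is the x axis.
    plate1 = np.c_[rng.uniform(0, 1, 100), rng.uniform(0, 1, 100), np.zeros(100)]
    plate2 = np.c_[rng.uniform(0, 1, 100), np.zeros(100), rng.uniform(0, 1, 100)]
    p, d = plane_intersection(*fit_plane(plate1), *fit_plane(plate2))
    print(p.round(6), d.round(6))

On real depth-camera data, the wavy-plane errors the abstract describes would perturb the fitted normals, which is why measuring each plane from its optimal (perpendicular) viewpoint tightens the recovered line.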
13

Butt, Suhail, Kamalrulnizam Bakar, Nadeem Javaid, Niayesh Gharaei, Farruh Ishmanov, Muhammad Afzal, Muhammad Mehmood, and Muhammad Mujahid. "Exploiting Layered Multi-Path Routing Protocols to Avoid Void Hole Regions for Reliable Data Delivery and Efficient Energy Management for IoT-Enabled Underwater WSNs." Sensors 19, no. 3 (January 26, 2019): 510. http://dx.doi.org/10.3390/s19030510.

Abstract:
The key concerns in enhancing the lifetime of IoT-enabled Underwater Wireless Sensor Networks (IoT-UWSNs) are energy efficiency and reliable data delivery under constrained resources. Traditional transmission approaches increase the communication overhead, which results in congestion and affects reliable data delivery. Currently, many routing protocols have been proposed for UWSNs to ensure reliable data delivery and to conserve the nodes' batteries with minimum communication overhead (by avoiding void holes in the network). In this paper, adaptive energy-efficient routing protocols are proposed to tackle the aforementioned problems using the Shortest Path First (SPF) with least number of active nodes strategy. These novel protocols have been developed by integrating the prominent features of the Forward Layered Multi-path Power Control One (FLMPC-One) routing protocol, which uses 2-hop neighbor information, the Forward Layered Multi-path Power Control Two (FLMPC-Two) routing protocol, which uses 3-hop neighbor information, and the 'Dijkstra' algorithm (for shortest path selection). Different Packet Sizes (PSs) with different Data Rates (DRs) are also taken into consideration to check the dynamicity of the proposed protocols. The achieved outcomes clearly validate the proposed protocols, namely: Shortest Path First using 3-hop neighbors information (SPF-Three) and Breadth First Search with Shortest Path First using 3-hop neighbors information (BFS-SPF-Three). Simulation results show the effectiveness of the proposed protocols in terms of minimum Energy Consumption (EC) and Required Packet Error Rate (RPER) with a minimum number of active nodes at the cost of affordable delay.
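The 'Dijkstra' component used for shortest-path selection is the standard algorithm; a minimal version over a weighted adjacency dict is shown below. The graph is invented, and hop counts or per-link energy costs would play the role of the weights in this setting; the protocols additionally use 2-/3-hop neighbor information and active-node minimization, which the sketch does not model.

    import heapq

    def dijkstra(graph, src, dst):
        """graph: node -> {neighbor: edge_cost}. Returns (cost, path)."""
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph[node].items():
                if nxt not in seen:
                    heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
        return float("inf"), []              # destination unreachable (void region)

    graph = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "D": 5},
             "B": {"D": 1}, "D": {}}
    print(dijkstra(graph, "S", "D"))         # (3.0, ['S', 'A', 'B', 'D'])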
14

Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 570–76. http://dx.doi.org/10.1609/aaai.v26i1.8126.

Abstract:
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final "arm pull" (the actual move selection) that collects a reward, rather than all "arm pulls". Therefore, it makes more sense to minimize the simple regret, as opposed to the cumulative regret. We begin by introducing policies for multi-armed bandits with lower finite-time and asymptotic simple regret than UCB, using them to develop a two-stage scheme (SR+CR) for MCTS which outperforms UCT empirically. Optimizing the sampling process is itself a metareasoning problem, a solution of which can use value of information (VOI) techniques. Although the theory of VOI for search exists, applying it to MCTS is non-trivial, as typical myopic assumptions fail. Lacking a complete working VOI theory for MCTS, we nevertheless propose a sampling scheme that is "aware" of VOI, achieving an algorithm that in empirical evaluation outperforms both UCT and the other proposed algorithms.
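The SR+CR idea, a simple-regret-oriented policy at the root with a cumulative-regret policy (UCB1, as in UCT) below it, can be conveyed with a bandit-level sketch. The epsilon-greedy root policy below is just one simple-regret-friendly stand-in for the paper's policies, and the Gaussian reward model is invented.

    import math, random

    def ucb1(counts, means, t):
        """Cumulative-regret policy, used below the root in UCT-style search."""
        return max(range(len(means)),
                   key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

    def root_policy(counts, means, eps=0.5):
        """Simple-regret-oriented root sampling: keep exploring heavily."""
        if random.random() < eps:
            return random.randrange(len(means))
        return max(range(len(means)), key=lambda a: means[a])

    def run(true_means, n=10000, seed=1):
        random.seed(seed)
        k = len(true_means)
        counts = [1] * k
        sums = [random.gauss(m, 1) for m in true_means]   # one initial pull each
        for t in range(k + 1, n):
            means = [s / c for s, c in zip(sums, counts)]
            a = root_policy(counts, means)   # swap in ucb1(counts, means, t) to compare
            counts[a] += 1
            sums[a] += random.gauss(true_means[a], 1)
        means = [s / c for s, c in zip(sums, counts)]
        return max(range(k), key=lambda a: means[a])      # final recommendation

    print(run([0.2, 0.5, 0.52]))   # index of the recommended arm

Only the final recommendation is rewarded here, which is why a policy that keeps exploring (lower simple regret) tends to identify the best arm more reliably than UCB1, whose exploration concentrates on the incumbent to protect cumulative reward.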
15

Hedjazian, N., Y. Capdeville, and T. Bodin. "Multiscale seismic imaging with inverse homogenization." Geophysical Journal International 226, no. 1 (March 27, 2021): 676–91. http://dx.doi.org/10.1093/gji/ggab121.

Abstract:
Summary Seismic imaging techniques such as elastic full waveform inversion (FWI) have their spatial resolution limited by the maximum frequency present in the observed waveforms. Scales smaller than a fraction of the minimum wavelength cannot be resolved, and only a smoothed, effective version of the true underlying medium can be recovered. These finite-frequency effects are revealed by the upscaling or homogenization theory of wave propagation. Homogenization aims at computing larger scale effective properties of a medium containing small-scale heterogeneities. We study how this theory can be used in the context of FWI. The seismic imaging problem is broken down in a two-stage multiscale approach. In the first step, called homogenized FWI (HFWI), observed waveforms are inverted for a smooth, fully anisotropic effective medium, that does not contain scales smaller than the shortest wavelength present in the wavefield. The solution being an effective medium, it is difficult to directly interpret it. It requires a second step, called downscaling or inverse homogenization, where the smooth image is used as data, and the goal is to recover small-scale parameters. All the information contained in the observed waveforms is extracted in the HFWI step. The solution of the downscaling step is highly non-unique as many small-scale models may share the same long wavelength effective properties. We therefore rely on the introduction of external a priori information, and cast the problem in a Bayesian formulation. The ensemble of potential fine-scale models sharing the same long wavelength effective properties is explored with a Markov chain Monte Carlo algorithm. We illustrate the method with a synthetic cavity detection problem: we search for the position, size and shape of void inclusions in a homogeneous elastic medium, where the size of cavities is smaller than the resolving length of the seismic data. We illustrate the advantages of introducing the homogenization theory at both stages. In HFWI, homogenization acts as a natural regularization helping convergence towards meaningful solution models. Working with fully anisotropic effective media prevents the leakage of anisotropy induced by the fine scales into isotropic macroparameters estimates. In the downscaling step, the forward theory is the homogenization itself. It is computationally cheap, allowing us to consider geological models with more complexity (e.g. including discontinuities) and use stochastic inversion techniques.
16

Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.

Abstract:
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS), is based on UCB, a policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final "arm pull" that collects a reward, rather than all "arm pulls". Therefore, it makes more sense to minimize the simple, rather than cumulative, regret. We introduce policies for multi-armed bandits with lower simple regret than UCB and develop a two-stage scheme (SR+CR) for MCTS which outperforms UCT empirically. We also propose a sampling scheme based on value of information (VOI), achieving an algorithm that empirically outperforms other proposed algorithms.
17

Liu, Xinyu, Peiwen Hao, Aihui Wang, Liangqi Zhang, Bo Gu, and Xinyan Lu. "Non-destructive detection of highway hidden layer defects using a ground-penetrating radar and adaptive particle swarm support vector machine." PeerJ Computer Science 7 (March 30, 2021): e417. http://dx.doi.org/10.7717/peerj-cs.417.

Abstract:
In this paper, a method that uses ground-penetrating radar (GPR) and an adaptive particle swarm support vector machine (SVM) is proposed for detecting and recognizing hidden-layer defects in highways. Images of three common road defects, namely cracks, voids, and subsidence, were collected using ground-penetrating imaging. Image segmentation was performed on the acquired images. Original features were extracted from the thresholded binary images and compressed using the K-L algorithm. The SVM classification algorithm was used for condition classification. For parameter optimization of the SVM, the grid search method and the particle swarm optimization (PSO) algorithm were used. The recognition rate using the grid search method was 88.333%; the plain PSO approach often became trapped in local maxima, and its recognition rate was 86.667%; the improved adaptive PSO algorithm avoided local maxima and increased the recognition rate to 91.667%.
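The grid-search baseline is routine with scikit-learn; below is a sketch under the assumption of a precomputed feature matrix X and label vector y (random stand-in data here, not real GPR features). A PSO variant would replace the fixed grid with particle positions moving over the (C, gamma) plane.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    # Stand-in data; in the paper X would hold features extracted from
    # thresholded GPR images and y the defect class (crack/void/subsidence).
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(120, 8)), rng.integers(0, 3, size=120)

    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)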
18

Storch, Tobias. "Finding Mount Everest and Handling Voids." Evolutionary Computation 19, no. 2 (June 2011): 325–44. http://dx.doi.org/10.1162/evco_a_00032.

Abstract:
Evolutionary algorithms (EAs) are randomized search heuristics that solve problems successfully in many cases. Their behavior is often described in terms of strategies to find a high location on Earth's surface. Unfortunately, many digital elevation models describing it contain void elements. These are elements not assigned an elevation. Therefore, we design and analyze simple EAs with different strategies to handle such partially defined functions. They are experimentally investigated on a dataset describing the elevation of Earth's surface. The largest value found by an EA within a certain runtime is measured, and the median over a few runs is computed and compared for the different EAs. For the dataset, the distribution of void elements seems to be neither random nor adversarial. They are so-called semirandomly distributed. To deepen our understanding of the behavior of the different EAs, they are theoretically considered on well-known pseudo-Boolean functions transferred to partially defined ones. These modifications are also performed in a semirandom way. The typical runtime until an optimum is found by an EA is analyzed, namely bounded from above and below, and compared for the different EAs. We figure out that for the random model it is a good strategy to assume that a void element has a worse function value than all previous elements. Whereas for the adversary model it is a good strategy to assume that a void element has the best function value of all previous elements.
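The two void-handling strategies analyzed here, treating a void element as worse than everything seen so far (the good choice in the random model) or as the best value seen so far (the good choice in the adversary model), slot naturally into a (1+1) EA. The sketch below applies them to a OneMax-style function with pseudo-randomly voided inputs; the voiding rule and all parameters are invented for illustration.

    import random

    def one_plus_one_ea(f, n, steps, void_is_worse, seed=0):
        """(1+1) EA with standard bit mutation on a partially defined f.
        f returns None on void elements; void_is_worse picks the strategy."""
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        fx, best_seen = f(x), float("-inf")
        if fx is not None:
            best_seen = fx
        for _ in range(steps):
            y = [b ^ (rng.random() < 1.0 / n) for b in x]   # flip each bit w.p. 1/n
            fy = f(y)
            sub = float("-inf") if void_is_worse else best_seen
            if (fy if fy is not None else sub) >= (fx if fx is not None else sub):
                x, fx = y, fy
            if fy is not None:
                best_seen = max(best_seen, fy)
        return x, fx

    def partial_onemax(x):
        """OneMax with a deterministic pseudo-random 20% of inputs voided."""
        if random.Random(hash(tuple(x))).random() < 0.2:
            return None
        return sum(x)

    print(one_plus_one_ea(partial_onemax, n=20, steps=3000, void_is_worse=True))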
19

Asming, Vladimir, Andrey Fedorov, and Anzhelika Prokudina. "The program LOS for interactive seismic and infrasonic data processing." Russian Journal of Seismology 3, no. 1 (March 30, 2021): 27–40. http://dx.doi.org/10.35540/2686-7907.2021.1.02.

Abstract:
For many years, the Kola Division (KoD) of the Geophysical Survey of the Russian Academy of Sciences has been testing and implementing modern techniques and algorithms for seismic and infrasonic data processing and event location. The KoD staff has developed several original algorithms that have proved useful for seismic and infrasonic event location and discrimination. In 2020, the LOS program was created, uniting the most efficient tools for data processing, analysis, and event location. The program also contains a modern mapping system and a database. The following tools have been implemented: bandpass and adaptive filtering, polarization analysis and backazimuth computation for 3C stations, and computation of backazimuths and apparent velocities for seismic and infrasonic arrays (beamforming). To analyze records of infrasonic arrays, the program has a cross-correlation tool that makes it possible to observe changes of a signal's backazimuth and apparent velocity in time. For seismic event location, the program uses two basic algorithms: minimization of the origin-time estimation residual and grid search based on a generalized beamforming approach. These algorithms can be used in different combinations depending on the location scenario selected by the user. In addition, a new location algorithm has been implemented based on representing the seismic medium as a random graph whose vertices correspond to points in the medium and whose edges are wave paths between the points. It can be useful for locating events in substantially heterogeneous media, possibly with voids and cavities, as well as for taking the relief into account; in particular, it can be applied when locating events in mines using local mine seismic networks. The LOS program has been put into practice at the Kola Division.
20

Borodin, Kirill, and Nurlan Zhangabayuly Zhangabay. "Mechanical characteristics, as well as physical-and-chemical properties of the slag-filled concretes, and investigation of the predictive power of the metaheuristic approach." Curved and Layered Structures 6, no. 1 (January 1, 2019): 236–44. http://dx.doi.org/10.1515/cls-2019-0020.

Abstract:
Our article is devoted to the development and verification of a metaheuristic optimisation algorithm for selecting the compositional proportions of slag-filled concretes. Experimentally selecting the various compositions and working modes that ensure one and the same fixed level of a basic property is a very labour-intensive and time-consuming process. In many situations it is not feasible in practice, for example where it is necessary to investigate composite materials with equal indicators of resistance to aggressive environments or with an equal share of voids in a certain range of dimensions. There are many similar articles in the scientific literature, so the problem described above can be considered topical. In this article, we develop a methodology for the automated experimental-and-statistical determination of optimal compositions of slag-filled concretes. To optimise the search for local extrema of the complicated functions of the multi-factor analysis, we utilise a metaheuristic optimisation algorithm based on the concept of swarm intelligence. The motivation for using a swarm intelligence algorithm is the absence of a training pattern: unlike neural-network algorithms, no learning scheme has to be established. In the course of this investigation, we propose this algorithm, together with a procedure for verifying it by direct experiment.
21

Muhammad, Shamsuddeen Hassan, and Abdulrasheed Mustapha. "A Form of List Viterbi Algorithm for Decoding Convolutional Codes." U.Porto Journal of Engineering 4, no. 2 (October 31, 2018): 42–48. http://dx.doi.org/10.24840/2183-6493_004.002_0004.

Abstract:
The Viterbi algorithm is a maximum likelihood decoding algorithm. It is used to decode convolutional codes in several wireless communication systems, including Wi-Fi. The standard Viterbi algorithm gives just one decoded output, which may be correct or incorrect. Incorrect packets are normally discarded, thereby necessitating retransmission and hence resulting in considerable energy loss and delay. Some real-time applications, such as Voice over Internet Protocol (VoIP) telephony, do not tolerate excessive delay. This makes the conventional Viterbi decoding strategy sub-optimal. In this regard, a modified approach, which involves a form of List Viterbi decoding of the convolutional code, is investigated. The technique employed combines the bit-error correction capabilities of both the Viterbi algorithm and Cyclic Redundancy Check (CRC) procedures. It first uses a form of 'List Viterbi Algorithm' (LVA), which generates a list of possible decoded output candidates after the trellis search. The CRC check is then used to determine the presence of a correct outcome. Results of simulation experiments show considerable improvement in bit-error performance when compared to the classical approach.
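Once the List Viterbi search has produced its likelihood-ranked candidates, the LVA+CRC combination reduces to a simple selection loop: accept the first candidate whose CRC checks, otherwise fall back to retransmission. The sketch assumes the candidate list already exists (producing it is the LVA's job) and uses zlib's CRC-32 purely as a stand-in for the standard's actual CRC.

    import zlib

    def crc_ok(payload: bytes, received_crc: int) -> bool:
        return zlib.crc32(payload) == received_crc

    def pick_candidate(candidates):
        """candidates: (payload, crc) pairs in decreasing likelihood order,
        as produced by a List Viterbi search of the trellis."""
        for payload, crc in candidates:
            if crc_ok(payload, crc):
                return payload      # best-ranked candidate that passes the CRC
        return None                 # all failed: request retransmission

    good = b"hello"
    candidates = [(b"hellp", zlib.crc32(good)),   # most likely path, corrupted
                  (good, zlib.crc32(good))]       # runner-up path, correct
    print(pick_candidate(candidates))             # b'hello'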
22

Semin, Mikhail, Evgenii Grishin, Lev Levin, and Artem Zaitsev. "Automated ventilation control in mines. Challenges, state of the art, areas for improvement." Journal of Mining Institute 246 (January 23, 2021): 623–32. http://dx.doi.org/10.31897/pmi.2020.6.4.

Abstract:
The article is divided into three main parts. The first part provides an overview of the existing literature on theoretical methods for calculating the optimal air distribution in mines according to the criteria of energy efficiency and of providing all sections of a mine with the required amount of air. It is shown that, to date, many different formulations of the optimal air distribution problem exist, and many different approaches and methods for optimizing air distribution have been developed. The case of a single (main) fan is the most fully investigated, while for many fans a number of issues still remain unresolved. The second part reviews existing methods and examples of the implementation of automated mine ventilation control systems in Russia and abroad. The two best-known concepts for the development of such systems are automated ventilation control systems (AVCS) in Russia and the CIS countries and ventilation on demand (VOD) abroad. The main ventilation management strategies within the AVCS and VOD concepts are described, and the key differences between them are shown. One of the key differences between AVCS and VOD today is the automatic determination of the operating parameters of fan units and ventilation doors using the optimal control algorithm, which is an integral part of the AVCS. The third part of the article describes the optimal control algorithm developed by the team of the Mining Institute of the Ural Branch of the Russian Academy of Sciences with the participation of the authors. In this algorithm, the search for optimal air distribution is carried out by the system in a fully automated mode in real time using algorithms programmed into the microcontrollers of fans and ventilation doors. Minimization of energy consumption is achieved through the most efficient selection of the fan speed and the rate of ventilation door opening, and also through air distribution shift control and the introduction of partial air recirculation systems. It is noted that the available literature currently gives poor coverage to the emergency operation modes of mine ventilation systems and to the adaptation of automated control systems to different mining methods. According to the authors, further development of automated ventilation control systems should be carried out, in particular, in these two areas.
23

Mayor, Vicente, Rafael Estepa, Antonio Estepa, and German Madinabeitia. "Deploying a Reliable UAV-Aided Communication Service in Disaster Areas." Wireless Communications and Mobile Computing 2019 (April 8, 2019): 1–20. http://dx.doi.org/10.1155/2019/7521513.

Abstract:
When telecommunication infrastructure is damaged by natural disasters, creating a network that can handle voice channels can be vital for search and rescue missions. Unmanned Aerial Vehicles (UAVs) equipped with WiFi access points could be rapidly deployed to provide wireless coverage to ground users. This WiFi access network can in turn be used to provide a reliable communication service for search and rescue missions. We formulate a new problem for optimal UAV deployment which considers not only WiFi coverage but also the MAC sublayer (i.e., quality of service). Our goal is to dispatch the minimum number of UAVs for provisioning a WiFi network that enables reliable VoIP communications in disaster scenarios. Among valid solutions, we choose the one that minimizes energy expenditure at the user's WiFi interface card in order to extend the ground user's smartphone battery life as much as possible. Solutions are found using well-known heuristics such as K-means clustering and genetic algorithms. Via numerical results, we show that the IEEE 802.11 standard revision has a decisive impact on the number of UAVs required to cover large areas, and that the user's average energy expenditure (attributable to communications) can be reduced by limiting the maximum altitude for drones or by increasing the VoIP speech quality.
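At its simplest, the placement heuristic can be sketched as: cluster the ground users with K-means and hover one UAV over each centroid, increasing the number of clusters until every user lies within the access point's coverage radius. The radius and user positions below are invented, and the paper's full problem additionally constrains solutions by 802.11 QoS and energy, which this sketch omits.

    import numpy as np
    from sklearn.cluster import KMeans

    def place_uavs(users, radius, k_max=20, seed=0):
        """Smallest K such that every user is within `radius` of its centroid."""
        for k in range(1, k_max + 1):
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(users)
            d = np.linalg.norm(users - km.cluster_centers_[km.labels_], axis=1)
            if d.max() <= radius:
                return km.cluster_centers_    # candidate UAV hover positions
        return None                           # no feasible layout within k_max

    rng = np.random.default_rng(1)
    users = rng.uniform(0, 100, size=(50, 2))  # ground users in a 100 m x 100 m area
    print(place_uavs(users, radius=30.0))

A genetic algorithm, as used in the paper, would then refine such a layout against the full objective (coverage, speech quality, and user-side energy) rather than the pure distance constraint used here.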
24

Fathurrahman, Fathurrahman, and Yupi Kuspandi Putra. "Analisis Perbandingan Pengaruh Pertumbuhan Ekonomi Terhadap Tingkat Kesejahteraan Masyarakat Pada Desa Suralaga Dengan Menggunakan Algoritma Naive Bayes Dan Support Vector Machine (Svm)." Infotek : Jurnal Informatika dan Teknologi 4, no. 1 (January 28, 2021): 1–10. http://dx.doi.org/10.29408/jit.v4i1.2961.

Abstract:
Data is sometimes dismissed as an inanimate object, meaningless and useless for anything; that statement is not based on the existing facts and realities. In principle, data is an inanimate object whose collection can be very influential in all aspects of human life. Data can shock the world if processed and published, and because data is so influential, humans can speak with an authority that is unlikely to be disputed. Data is able to influence the development and progress of a nation in all respects: economy, health, policy, security, and so on. Therefore, data obtained by means of surveys and the like must be treated carefully in order to contribute as much as possible to decision making. The search for stable economic growth of environmentally sustainable quality is fast becoming a topical issue among governments, international agencies, and other stakeholders interested in sustainable development. The highest accuracy values were shown by the experiments using 8-fold and 10-fold cross-validation, while the tolerance obtained with 8 folds (0.49%) is smaller than with 10 folds (0.58%). This means that 8-fold cross-validation is tighter than 10-fold cross-validation, so the configuration best used in decision making is 8-fold cross-validation, at 99.62% accuracy with a tolerance of 0.49%. The results of data processing using the Naive Bayes algorithm and the Support Vector Machine both illustrate that the influence of the economy on the level of welfare of the Suralaga Village community is very large, and it can be concluded that the average Suralaga Village household falls into the not-prosperous category. This is indicated by the fact that many people still depend for their livelihoods on working as laborers and as migrant workers.
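The comparison reported, accuracy under 8-fold versus 10-fold cross-validation for Naive Bayes and an SVM, takes only a few lines with scikit-learn. The random stand-in data below replaces the village welfare survey, which is not available here, so the printed numbers will not match the paper's.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))                    # stand-in survey features
    y = X[:, 0] + 0.5 * rng.normal(size=300) > 0     # stand-in welfare label

    for name, model in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
        for k in (8, 10):
            scores = cross_val_score(model, X, y, cv=k)
            # mean accuracy and spread, analogous to 99.62% +/- 0.49%
            print(f"{name}, {k}-fold: {scores.mean():.4f} +/- {scores.std():.4f}")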
25

Dalpe, Allisa J., May-Win L. Thein, and Martin Renken. "PERFORM: A Metric for Evaluating Autonomous System Performance in Marine Testbed Environments Using Interval Type-2 Fuzzy Logic." Applied Sciences 11, no. 24 (December 15, 2021): 11940. http://dx.doi.org/10.3390/app112411940.

Abstract:
Trust and confidence in autonomous behavior is required to send autonomous vehicles into operational missions. The authors introduce the Performance Evaluation and Review Framework Of Robotic Missions (PERFORM), a framework to enable a rigorous and replicable autonomy test environment, thereby filling the void between that of merely simulating autonomy and that of completing true field missions. A generic architecture for defining the missions under test is proposed and a unique Interval Type-2 Fuzzy Logic approach is used as the foundation for the mathematically rigorous autonomy evaluation framework. The test environment is designed to aid in (1) new technology development (i.e., providing direct comparisons and quantitative evaluations between autonomy algorithms), (2) the validation of the performance of specific autonomous platforms, and (3) the selection of the appropriate robotic platform(s) for a given mission type (e.g., for surveying, surveillance, search and rescue). Three case studies are presented to apply the metric to various test scenarios. Results demonstrate the flexibility of the technique with the ability to tailor tests to the user’s design requirements accounting for different priorities related to acceptable risks and goals of a given mission.
26

Konovalov, V. A. "GENERALIZED MATHEMATICAL MODEL OF MONEY LAUNDERING AND TERRORISM FINANCING RISK TYPOLOGY IN BIG DATA OF SOCIO-ECONOMIC SYSTEMS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 214 (April 2022): 42–51. http://dx.doi.org/10.14489/vkit.2022.04.pp.042-051.

Abstract:
The big data of socio-economic systems is investigated in order to synthesize a generalized model of the typology of money laundering risk. The actual requirements for the typology model are analyzed. A model for the input, output, and classification of objects, sets, and categories in big data, represented by a topos, has been developed. It is shown that classified objects can be entered into an artificial intelligence database and used to search for and analyze typologies of money laundering risks. The concept of a money laundering risk event is introduced and its category-theoretic formula is synthesized. In addition, a retrospective event is singled out and theoretically justified. A method for fragmenting chains of Markov occurrences is proposed and analyzed; within the category-theoretic approach, chains are represented by compositions of categories. A new scheme, developed in place of the well-known scheme of the normal Markov algorithm, is analyzed. In this scheme, locally final occurrences, which Markov neither took into account nor used, are discovered and theoretically substantiated; they are found in parallel chains of occurrences and voids. It is established that locally final substitutions play an important role in the search for and analysis of typologies of money laundering risks in the big data of socio-economic systems. It is determined that typologies of money laundering risks can be represented as categories of sets and n-categories, and a formula for an incomplete risk typology has been developed. It is concluded that the synthesized model of the typology of money laundering risks provides for the detection and analysis of local typologies of individual risks.
27

Moews, Ben, Morgan A. Schmitz, Andrew J. Lawler, Joe Zuntz, Alex I. Malz, Rafael S. de Souza, Ricardo Vilalta, Alberto Krone-Martins, and Emille E. O. Ishida. "Ridges in the Dark Energy Survey for cosmic trough identification." Monthly Notices of the Royal Astronomical Society 500, no. 1 (October 15, 2020): 859–70. http://dx.doi.org/10.1093/mnras/staa3204.

Abstract:
ABSTRACT Cosmic voids and their corresponding redshift-projected mass densities, known as troughs, play an important role in our attempt to model the large-scale structure of the Universe. Understanding these structures enables us to compare the standard model with alternative cosmologies, constrain the dark energy equation of state, and distinguish between different gravitational theories. In this paper, we extend the subspace-constrained mean shift algorithm, a recently introduced method to estimate density ridges, and apply it to 2D weak lensing mass density maps from the Dark Energy Survey Y1 data release to identify curvilinear filamentary structures. We compare the obtained ridges with previous approaches to extract trough structure in the same data, and apply curvelets as an alternative wavelet-based method to constrain densities. We then invoke the Wasserstein distance between noisy and noiseless simulations to validate the denoising capabilities of our method. Our results demonstrate the viability of ridge estimation as a precursor for denoising weak lensing observables to recover the large-scale structure, paving the way for a more versatile and effective search for troughs.
28

Rajesh Sharma, R., and P. Marikkannu. "Hybrid RGSA and Support Vector Machine Framework for Three-Dimensional Magnetic Resonance Brain Tumor Classification." Scientific World Journal 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/184350.

Abstract:
A novel hybrid approach for the identification of brain regions responsible for brain tumors using magnetic resonance images is presented in this paper. Classification of medical images is substantial in both clinical and research areas, and magnetic resonance imaging (MRI) outperforms other modalities in diagnosing brain abnormalities such as brain tumors, multiple sclerosis, and hemorrhage. The primary objective of this work is to propose a three-dimensional (3D) brain tumor classification model using MRI images with both micro- and macroscale textures, designed to differentiate brain MRI between two classes of lesion, benign and malignant. The images were initially preprocessed using a 3D Gaussian filter. Based on the VOI (volume of interest) of the image, features were extracted using the 3D volumetric Square Centroid Lines Gray Level Distribution Method (SCLGM) along with 3D run-length and co-occurrence matrices. The optimal features were selected using the proposed refined gravitational search algorithm (RGSA). Support vector machines, a backpropagation network, and k-nearest neighbor classifiers were used to evaluate the goodness of the classifier approach. The preliminary evaluation of the system was performed using 320 real brain MRI images; the system was trained and tested with a leave-one-case-out method. The performance of the classifier, measured by the receiver operating characteristic curve, is 0.986 (±0.002). The experimental results demonstrate systematic and efficient feature extraction and selection, with performance comparable to state-of-the-art feature classification methods.
29

Boccaletti, A., P. Thébault, N. Pawellek, A. M. Lagrange, R. Galicher, S. Desidera, J. Milli, et al. "Two cold belts in the debris disk around the G-type star NZ Lupi." Astronomy & Astrophysics 625 (May 2019): A21. http://dx.doi.org/10.1051/0004-6361/201935135.

Abstract:
Context. Planetary systems hold the imprint of the formation and evolution of planets, especially at young ages, and in particular at the stage when the gas has dissipated, leaving mostly secondary dust grains. The dynamical perturbation of planets on the dust distribution can be revealed with high-contrast imaging in a variety of structures. Aims. SPHERE, the high-contrast imaging device installed at the VLT, was designed to search for young giant planets on long-period orbits, but it is also able to resolve fine details of planetary systems at the scale of astronomical units in the scattered-light regime. As a young and nearby star, NZ Lup was observed in the course of the SPHERE survey. A debris disk had formerly been identified with HST/NICMOS. Methods. We observed this system in the near-infrared with the camera in narrow- and broad-band filters and with the integral field spectrograph. High contrasts are achieved by means of pupil tracking combined with angular differential imaging algorithms. Results. The high angular resolution provided by SPHERE allows us to reveal a new feature in the disk, which is interpreted as a superimposition of two belts of planetesimals located at stellocentric distances of ~85 and ~115 au and with a mutual inclination of about 5°. Despite the very high inclination of the disk with respect to the line of sight, we conclude that the presence of a gap, that is, a void in the dust distribution between the belts, is likely. Conclusions. We discuss the implications of the existence of two belts and of their relative inclination with respect to the presence of planets.
30

Alwan, Ghanim M. "Enhancement of Uniformity of Solid Particles in Spouted Bed Using Stochastic Optimization." Iraqi Journal of Chemical and Petroleum Engineering 16, no. 3 (September 30, 2015): 23–33. http://dx.doi.org/10.31699/ijcpe.2015.3.3.

Abstract:
The performance of a gas–solid spouted bed benefits from a uniform solids structure, quantified by a uniformity index (UI). The focus of this work is therefore to maximize the UI across the bed as a function of the process variables, so the UI is taken as the objective of the optimization. Three selected process variables affect the objective function: gas velocity, particle density, and particle diameter. Steady-state solids concentration measurements were carried out in a narrow 3-inch cylindrical spouted bed made of Plexiglas with a 60° conical base. The radial concentration of particles (glass and steel beads) at various bed heights and different flow patterns was measured using sophisticated optical probes. A stochastic genetic algorithm (GA) was found to be better than deterministic search for studying the variation of the process variables of the non-linear bed, which behaved as a hybrid system. The global GA provided reliable data and selected the best operating conditions. The optimization technique can guide the experimental work and reduce the risk and cost of operation, and the optimum results can keep the bed operating at high performance under stable conditions. Maximum uniformity was found at high density, small bead size, and low gas velocity. The density of the solids was an influential variable on the UI, while the gas velocity and the diameter of the solid particles were observed to be the decision variables to which the UI was most sensitive. Uniformity of the solid particles enhances the hydrodynamic parameters and the heat and mass transfer in the bed by improving the hold-up and void distributions of the solids. The optimization results were compared with experimental data obtained using a sophisticated optical probe and the Computed Tomography technique.
31

Mayor, Vicente, Rafael Estepa, Antonio Estepa, and Germán Madinabeitia. "Energy-Efficient UAVs Deployment for QoS-Guaranteed VoWiFi Service." Sensors 20, no. 16 (August 10, 2020): 4455. http://dx.doi.org/10.3390/s20164455.

Abstract:
This paper formulates a new problem for the optimal placement of Unmanned Aerial Vehicles (UAVs) geared towards providing wireless coverage for Voice over WiFi (VoWiFi) service to a set of ground users confined in an open area. Our objective function is constrained by coverage and by VoIP speech quality and minimizes the ratio between the number of UAVs deployed and energy efficiency in UAVs, hence providing the layout that requires fewer UAVs per hour of service. Solutions provide the number and position of UAVs to be deployed, and are found using well-known heuristic search methods such as genetic algorithms (used for the initial deployment of UAVs) and particle swarm optimization (used for the periodic update of the positions). We examine two communication services: (a) one bidirectional VoWiFi channel per user; (b) a single broadcast VoWiFi channel for announcements. For these services, we study the results obtained for an increasing number of users confined in a small area of 100 m² as well as in a large area of 10,000 m². Results show that the drone turnover rate is related to both user sparsity and the number of users served by each UAV. For the unicast service, the ratio of UAVs per hour of service tends to increase with user sparsity, and the power of radio communication represents 14–16% of the total UAV energy consumption depending on ground user density. In large areas, solutions tend to locate UAVs at higher altitudes seeking increased coverage, which increases energy consumption due to hovering. However, in the VoWiFi broadcast communication service, the traffic is scarce, and solutions are mostly constrained only by coverage. This results in fewer UAVs deployed, lower total power consumption (between 20% and 75%), and less sensitivity to the number of served users.
APA, Harvard, Vancouver, ISO, and other styles
32

Cox, Louis. "Information Structures for Causally Explainable Decisions." Entropy 23, no. 5 (May 13, 2021): 601. http://dx.doi.org/10.3390/e23050601.

Full text
Abstract:
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
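One of the ingredients mentioned above, value of information, can be made concrete with a worked example. The sketch below computes the expected value of perfect information (EVPI) for a toy two-state, two-action decision problem; the probabilities and utilities are invented for illustration, not drawn from the paper.

```python
# States of the world and their prior probabilities (illustrative numbers).
p = {"good": 0.7, "bad": 0.3}

# Payoff table: utility[action][state]
utility = {
    "act":  {"good": 100.0, "bad": -50.0},
    "wait": {"good": 20.0,  "bad": 10.0},
}

def expected_utility(action):
    return sum(p[s] * utility[action][s] for s in p)

# Best achievable expected utility under current uncertainty.
eu_no_info = max(expected_utility(a) for a in utility)

# With perfect information, the best action is chosen per state first.
eu_perfect = sum(p[s] * max(utility[a][s] for a in utility) for s in p)

evpi = eu_perfect - eu_no_info   # upper bound on what any observation is worth
print(f"EU without info: {eu_no_info:.1f}")   # 55.0
print(f"EU with perfect info: {eu_perfect:.1f}")  # 73.0
print(f"EVPI: {evpi:.1f}")                    # 18.0
```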
APA, Harvard, Vancouver, ISO, and other styles
33

Ankit, Kumar. "Can A Faraday’s Cage In Today’s Digital Age Safeguard Our Privacy?" Advanced Computing and Communications, December 30, 2019. http://dx.doi.org/10.34048/2019.4.f1.

Full text
Abstract:
We all grew up learning from experiments and observations, making sure that we learn and understand things through interaction. In India, we are more familiar with cricket and football than lawn tennis or golf, and we all learn our mother tongue by being exposed to it. Had I been exposed only to indoor games, would I have developed a taste for outdoor games like cricket or football? This learning by exposure applies equally to food, religion, language and other habits, which define a person, his thoughts and his behaviour. The larger question is whether we can weave a new habit without exposure. How do we know what we don't know when our choices to know are restricted? In the growing digital market of trans-boundary content, goods and services, our choices are often made for us by a super-intelligent, biased algorithm, backed by immense computing power and specifically designed to detect our tastes in sport, food, music and lifestyle from the way we have used our smartphones, connected devices, location shares, search strings and photographs. We are aware that our browsers use cookies and trackers, and that they have the ability to smartly direct us toward algorithm-chosen options. I would like to imagine a situation where our actions, emotions and urge to know are not used against us by directing us to content that fills the void of our search with material tailored to make us feel happy. This leads to the question of whether we exercise the choices we actually want to exercise, or whether we simply go with the flow of algorithms and let them determine what we should experience. This is the question for today, with implications for how the global economy sustains and caters to its users.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhou, Hong, and Rutesh B. Patil. "The Discrete Topology Optimization of Structures Using the Improved Hybrid Discretization Model." Journal of Mechanical Design 134, no. 12 (November 15, 2012). http://dx.doi.org/10.1115/1.4007841.

Full text
Abstract:
In the discrete topology optimization, material state is either solid or void and there is no topology uncertainty caused by any intermediate material state. In this paper, the improved hybrid discretization model is introduced for the discrete topology optimization of structures. The design domain is discretized into quadrilateral design cells and each quadrilateral design cell is further subdivided into triangular analysis cells. The dangling and redundant solid design cells are completely eliminated from topology solutions in the improved hybrid discretization model to avoid sharp protrusions. The local stress constraint is directly imposed on each triangular analysis cell to make the designed structure safe. The binary bit-array genetic algorithm is used to search for the optimal topology to circumvent the geometrical bias against the vertical design cells. The presented discrete topology optimization procedure is illustrated by two topology optimization examples of structures.
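The binary encoding the abstract refers to can be illustrated with a short sketch: each quadrilateral design cell maps to one bit of the chromosome, and a genetic algorithm evolves the bit array. The fitness function here is a stub that only penalizes volume and dangling solid cells; a faithful implementation would run finite element analysis over the triangular analysis cells and enforce the local stress constraint.

```python
import random

ROWS, COLS = 8, 16                      # quadrilateral design cells
N = ROWS * COLS

def neighbors(i):
    r, c = divmod(i, COLS)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < ROWS and 0 <= cc < COLS:
            yield rr * COLS + cc

def fitness(bits):
    """Stub objective: penalize material volume and 'dangling' solid
    cells with no solid edge-neighbor. A real code would evaluate the
    structure by FEA and reject stress-constraint violations."""
    volume = sum(bits)
    dangling = sum(1 for i, b in enumerate(bits)
                   if b and not any(bits[j] for j in neighbors(i)))
    return -(volume + 10 * dangling)

def evolve(pop_size=30, generations=200, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```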
APA, Harvard, Vancouver, ISO, and other styles
35

Zhou, Hong, and Pranjal P. Killekar. "The Modified Quadrilateral Discretization Model for the Topology Optimization of Compliant Mechanisms." Journal of Mechanical Design 133, no. 11 (November 1, 2011). http://dx.doi.org/10.1115/1.4004986.

Full text
Abstract:
The modified quadrilateral discretization model for the topology optimization of compliant mechanisms is introduced in this paper. The design domain is discretized into quadrilateral design cells. There is a certain location shift between two neighboring rows of quadrilateral design cells. This modified quadrilateral discretization model allows any two contiguous design cells to share an edge whether they are in the horizontal, vertical, or diagonal direction. Point connection is completely eliminated. In the proposed topology optimization method, design variables are all binary, and every design cell is either solid or void to prevent gray cell problem that is usually caused by intermediate material states. Local stress constraint is directly imposed on each analysis cell to make the synthesized compliant mechanism safe. Genetic algorithm is used to search the optimum. No postprocessing is required for topology uncertainty caused by either point connection or gray cell. The presented modified quadrilateral discretization model and the proposed topology optimization procedure are demonstrated by two synthesis examples of compliant mechanisms.
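The half-cell row shift is the key geometric idea, and its effect on adjacency can be shown in a few lines. In the sketch below the shift direction for even rows is an assumed convention; the point is that every cell shares a true edge with two cells in each neighbouring row, so point connections cannot arise.

```python
def staggered_neighbors(r, c, rows, cols):
    """Edge-sharing neighbours of design cell (r, c) in a grid whose
    odd rows are shifted half a cell (brick-wall layout), so each cell
    overlaps two cells in the rows above and below. This removes the
    corner-only contacts a regular grid allows along diagonals."""
    out = [(r, c - 1), (r, c + 1)]                 # horizontal neighbours
    shift = -1 if r % 2 == 0 else 1                # assumed shift direction
    for rr in (r - 1, r + 1):
        out += [(rr, c), (rr, c + shift)]          # two overlapped cells
    return [(rr, cc) for rr, cc in out
            if 0 <= rr < rows and 0 <= cc < cols]

# Example: an interior cell in an even row of a 6x8 grid has six
# edge-sharing neighbours instead of four.
print(staggered_neighbors(2, 3, 6, 8))
```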
APA, Harvard, Vancouver, ISO, and other styles
36

Zhou, Hong, and Avinash R. Mandala. "Topology Optimization of Compliant Mechanisms Using the Improved Quadrilateral Discretization Model." Journal of Mechanisms and Robotics 4, no. 2 (April 12, 2012). http://dx.doi.org/10.1115/1.4006194.

Full text
Abstract:
The improved quadrilateral discretization model for the topology optimization of compliant mechanisms is introduced in this paper. The design domain is discretized into quadrilateral design cells and each quadrilateral design cell is further subdivided into triangular analysis cells. All kinds of dangling quadrilateral design cells and sharp-corner triangular analysis cells are removed in the improved quadrilateral discretization model to promote the material utilization. Every quadrilateral design cell or triangular analysis cell is either solid or void to implement the discrete topology optimization and eradicate the topology uncertainty caused by intermediate material states. The local stress constraint is directly imposed on each triangular analysis cell to make the synthesized compliant mechanism safe. The binary bit-array genetic algorithm is used to search for the optimal topology to circumvent the geometrical bias against the vertical design cells. Two topology optimization examples of compliant mechanisms are solved based on the proposed improved quadrilateral discretization model to verify its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Hong. "Topology Optimization of Compliant Mechanisms Using Hybrid Discretization Model." Journal of Mechanical Design 132, no. 11 (October 20, 2010). http://dx.doi.org/10.1115/1.4002663.

Full text
Abstract:
The hybrid discretization model for topology optimization of compliant mechanisms is introduced in this paper. The design domain is discretized into quadrilateral design cells. Each design cell is further subdivided into triangular analysis cells. This hybrid discretization model allows any two contiguous design cells to be connected by four triangular analysis cells whether they are in the horizontal, vertical, or diagonal direction. Topological anomalies such as checkerboard patterns, diagonal element chains, and de facto hinges are completely eliminated. In the proposed topology optimization method, design variables are all binary, and every analysis cell is either solid or void to prevent the gray cell problem that is usually caused by intermediate material states. Stress constraint is directly imposed on each analysis cell to make the synthesized compliant mechanism safe. Genetic algorithm is used to search the optimum and to avoid the need to choose the initial guess solution and conduct sensitivity analysis. The obtained topology solutions have no point connection, unsmooth boundary, and zigzag member. No post-processing is needed for topology uncertainty caused by point connection or a gray cell. The introduced hybrid discretization model and the proposed topology optimization procedure are illustrated by two classical synthesis examples of compliant mechanisms.
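The quadrilateral-to-triangle subdivision at the heart of the hybrid model is easy to illustrate. The helper below splits one design cell into four triangular analysis cells meeting at the cell centroid; the coordinate and ordering conventions are assumptions made for the sketch.

```python
def subdivide_quad(quad):
    """Split one quadrilateral design cell (four corner points in CCW
    order) into four triangular analysis cells meeting at the centroid,
    mirroring the hybrid discretization's quad-to-triangle subdivision."""
    cx = sum(x for x, _ in quad) / 4.0
    cy = sum(y for _, y in quad) / 4.0
    centroid = (cx, cy)
    return [(quad[i], quad[(i + 1) % 4], centroid) for i in range(4)]

# A unit design cell becomes four triangles sharing centroid (0.5, 0.5).
# Triangles of edge-adjacent cells share a full edge, which is what lets
# any two contiguous design cells connect through triangular analysis
# cells in the horizontal, vertical, or diagonal direction.
for tri in subdivide_quad([(0, 0), (1, 0), (1, 1), (0, 1)]):
    print(tri)
```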
APA, Harvard, Vancouver, ISO, and other styles
38

Zhu, Xiaoe, Rita Yi Man Li, M. James C. Crabbe, and Khunanan Sukpascharoen. "Can a chatbot enhance hazard awareness in the construction industry?" Frontiers in Public Health 10 (November 30, 2022). http://dx.doi.org/10.3389/fpubh.2022.993700.

Full text
Abstract:
Safety training enhances hazard awareness in the construction industry, and its effectiveness is a component of occupational safety and health. While face-to-face safety training has dominated in the past, the frequent lockdowns during COVID-19 have led us to seek new solutions. A chatbot is messaging software that allows people to interact, obtain answers, and handle sales and inquiries through a computer algorithm. While chatbots have been used for language education, no study has investigated their usefulness for enhancing hazard awareness after chatbot training. In this regard, we developed four Telegram chatbots for construction safety training and used them as the treatment factor in a designed experiment. Previous researchers have utilized eye-tracking in the laboratory for construction safety research, mostly for qualitative analyses such as heat maps or gaze plots to study visual paths or search strategies, and typically examined the impact of only one factor. Our research utilized an artificial intelligence-based eye-tracking tool. As hazard awareness can be affected by several factors, we filled this research void by including two-way interaction terms in a design of experiments (DOE) model. We designed an eye-tracking experiment, the first of its kind, to study the impact of site experience, Telegram chatbot safety training, and task complexity on hazard awareness. The results showed that Telegram chatbot training enhanced the hazard awareness of participants with less onsite experience and in less complex scenarios. Low-cost chatbot safety training could improve site workers' danger awareness, but the design needs to be adjusted according to participants' experience. Our results offer insights to construction safety managers for safety knowledge sharing and safety training.
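A model of the kind the abstract describes, with main effects and all two-way interactions over three two-level factors, can be fitted in a few lines. The sketch below uses invented data purely to show the model form; the factor coding and response values are not from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Two replicates of a full 2^3 factorial; the awareness scores are
# invented purely to illustrate the model form.
df = pd.DataFrame({
    "training":   [0, 1, 0, 1, 0, 1, 0, 1] * 2,   # chatbot training (0/1)
    "experience": [0, 0, 1, 1, 0, 0, 1, 1] * 2,   # onsite experience (0/1)
    "complexity": ([0] * 4 + [1] * 4) * 2,        # task complexity (0/1)
    "awareness":  [0.62, 0.71, 0.55, 0.80, 0.58, 0.75, 0.50, 0.78,
                   0.60, 0.73, 0.53, 0.79, 0.56, 0.74, 0.52, 0.77],
})

# Main effects plus all two-way interaction terms, as in a two-level
# design-of-experiments analysis.
model = smf.ols("awareness ~ (training + experience + complexity) ** 2",
                data=df).fit()
print(model.params)   # coefficients for main effects and 2-way interactions
```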
APA, Harvard, Vancouver, ISO, and other styles
39

Saxena, Anupam. "An Adaptive Material Mask Overlay Method: Modifications and Investigations on Binary, Well Connected Robust Compliant Continua." Journal of Mechanical Design 133, no. 4 (April 1, 2011). http://dx.doi.org/10.1115/1.4003804.

Full text
Abstract:
The material mask overlay strategy employs negative masks to create material voids within the design region to synthesize perfectly binary (0-1), well connected continua. Previous implementations use either a constant number of circular masks or increase the latter via a sequence of subsearches making the procedure computationally expensive. Here, a modified algorithm is presented wherein the number of masks is adaptively varied within a single search, in addition to their positions and sizes, thereby generating material voids, both efficiently and effectively. A stochastic, mutation-only search with different mutation strategies is employed. The honeycomb parameterization naturally eliminates all subregion connectivity anomalies without requiring additional suppression methods. Boundary smoothening as a new preprocessing step further facilitates accurate evaluations of intermediate and final designs with moderated notches. Thus, both material and contour boundary interpretation steps, that can alter the synthesized solutions, are avoided during postprocessing. Various features, e.g., (i) effective use of the negative masks, (ii) convergence, (iii) mesh dependency, (iv) solution dependence on the reaction force, and (v) parallel search are investigated through the synthesis of small deformation fully compliant mechanisms that are designed to be robust under the specified loads. The proposed topology search algorithm shows promise for design of single-material large deformation continua as well.
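The adaptive, mutation-only search over negative masks can be sketched compactly: the design is a variable-length list of circular masks, and a single mutation may move or resize a mask, or add or delete one, so the mask count varies within one search. The fitness function below is a toy stand-in; a real implementation would rasterize the voids on the honeycomb parameterization and evaluate the compliant continuum by finite element analysis.

```python
import random

def fitness(masks):
    """Toy stand-in: prefers a moderate total void area. A real code
    would cut the voids from the design region and score the resulting
    continuum by FEA."""
    area = sum(r * r for _, _, r in masks)
    return -abs(area - 0.15)

def mutate(masks):
    masks = [m[:] for m in masks]                 # copy before editing
    op = random.choice(["move", "resize", "add", "remove"])
    if op == "add" or not masks:
        masks.append([random.random(), random.random(),
                      random.uniform(0.02, 0.2)])
    elif op == "remove" and len(masks) > 1:
        masks.pop(random.randrange(len(masks)))
    else:
        m = random.choice(masks)
        if op == "move":
            m[0] = min(1.0, max(0.0, m[0] + random.gauss(0.0, 0.05)))
            m[1] = min(1.0, max(0.0, m[1] + random.gauss(0.0, 0.05)))
        else:                                      # resize
            m[2] = min(0.3, max(0.01, m[2] + random.gauss(0.0, 0.02)))
    return masks

def mutation_only_search(iters=2000):
    best = [[0.5, 0.5, 0.1]]                       # start with one mask
    for _ in range(iters):
        candidate = mutate(best)
        if fitness(candidate) >= fitness(best):    # accept non-worsening moves
            best = candidate
    return best

print(mutation_only_search())
```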
APA, Harvard, Vancouver, ISO, and other styles
40

Lu, Shujie, Haoyu Jiang, Chengwei Li, Baoyu Hong, Pu Zhang, and Wenli Liu. "Genetic Algorithm for TMS Coil Position Optimization in Stroke Treatment." Frontiers in Public Health 9 (March 11, 2022). http://dx.doi.org/10.3389/fpubh.2021.794167.

Full text
Abstract:
Transcranial magnetic stimulation (TMS), a non-invasive technique for stimulating the human brain, has been widely used in stroke treatment for its capability of regulating synaptic plasticity and promoting cortical functional reconstruction. As shown in previous studies, a high electric field (E-field) intensity around the lesion helps in the recovery of brain function; thus, the spatial location and angle of the coil correlate significantly with the therapeutic effect of TMS. However, the error caused by coil placement in current clinical settings is still non-negligible, and a more precise coil positioning method is needed. In this study, two kinds of realistic stroke models, ischemic and hemorrhagic, were established by inserting the corresponding lesions into three human head models. A coil position optimization algorithm based on the genetic algorithm (GA) was developed to search the spatial location and rotation angle of the coil in four 4 × 4 cm search domains around the lesion, maximizing the average intensity of the E-field in the voxel of interest (VOI). In this way, an E-field intensity up to 17.48% higher than that of clinical TMS stimulation was obtained. Our method also shows the potential to avoid unnecessary exposure of non-target regions. The proposed algorithm was verified to provide an optimal position after nine iterations and displayed good robustness for coil location optimization across different stroke models. In conclusion, the optimized spatial location and rotation angle of the coil for TMS stroke treatment can be obtained through our algorithm, reducing the intensity and duration of human electromagnetic exposure and demonstrating the significant therapeutic potential of TMS for stroke.
APA, Harvard, Vancouver, ISO, and other styles
41

"Optimal Cellular Automata Technique for Image Segmentation." International Journal of Innovative Technology and Exploring Engineering 9, no. 3 (January 10, 2020): 1474–78. http://dx.doi.org/10.35940/ijitee.c8037.019320.

Full text
Abstract:
Leukemia ranks tenth among the deadliest causes of death in the world. The main reason is the delay in diagnosis, which in turn delays treatment; hence it is an exigent requirement to diagnose leukemia in its early stage. Segmentation of white blood cells (WBC) is the initial phase of leukemia detection using image processing. This paper aims to extract WBC from the image background. Various techniques for WBC segmentation exist in the literature, yet they provide inaccurate results. Cellular automata can be effectively applied to image processing, and in this paper we propose an Optimal Cellular Automata approach for image segmentation. In this approach, the optimal value for alive cells is obtained through Particle Swarm Optimization with the Gravitational Search Algorithm (PSOGSA). The optimal value is fed into the cellular automata model to obtain the segmented image. The results are validated using the Rand Index (RI), Global Consistency Error (GCE), and Variation of Information (VOI). The experimental results of the proposed technique are better than those of previously proposed techniques, namely Hybrid K-Means with Cluster Center Estimation, the Region Splitting and Clustering Technique, and Cellular Automata; the proposed technique outperformed all of them.
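A minimal cellular-automaton segmentation rule of the general kind described can be sketched as follows. This is an illustrative hysteresis-style rule, not the paper's exact automaton: strong pixels seed the foreground and weaker pixels join when enough neighbours are foreground, with the seed threshold playing the role of the "optimal value for alive cells" that PSOGSA would supply.

```python
import numpy as np

def ca_segment(img, alive_thresh, grow_ratio=0.8, neighbor_min=4, iters=10):
    """Illustrative CA rule: pixels above alive_thresh seed the
    foreground; pixels above grow_ratio * alive_thresh join once at
    least neighbor_min of their 8 neighbours are foreground."""
    state = (img > alive_thresh).astype(np.uint8)
    weak = img > grow_ratio * alive_thresh
    h, w = state.shape
    for _ in range(iters):
        p = np.pad(state, 1)
        # Count foreground cells in the 8-neighbourhood via padded shifts.
        nbrs = sum(p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
        state = np.where(weak & (nbrs >= neighbor_min),
                         1, state).astype(np.uint8)
    return state

img = np.random.randint(0, 256, (64, 64))   # toy 8-bit image
mask = ca_segment(img, alive_thresh=170)    # threshold from the optimizer
print(mask.sum(), "foreground pixels")
```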
APA, Harvard, Vancouver, ISO, and other styles
42

Ferdous, Javedul, Hae-Na Lee, Sampath Jayarathna, and Vikas Ashok. "Enabling Efficient Web Data-Record Interaction for People with Visual Impairments via Proxy Interfaces." ACM Transactions on Interactive Intelligent Systems, January 10, 2023. http://dx.doi.org/10.1145/3579364.

Full text
Abstract:
Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search form, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit the auxiliary segments like their sighted peers, since these segments are scattered all across the screen, and as such assistive technologies used by BVI users, i.e., screen reader and screen magnifier, are not geared for efficient interaction with such scattered content. Specifically, for blind screen reader users, content navigation is predominantly one-dimensional despite the support for skipping content, and therefore navigating to-and-fro between different parts of the webpage is tedious and frustrating. Similarly, low vision screen magnifier users have to continuously pan back-and-forth between different portions of a webpage, given that only a portion of the screen is viewable at any instant due to content enlargement. The extant techniques to overcome inefficient web interaction for BVI users have mostly focused on general web-browsing activities, and as such they provide little to no support for data record-specific interaction activities such as filtering and sorting – activities that are equally important for facilitating quick and easy access to desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom machine learning-based algorithms to automatically extract auxiliary segments on any webpage containing data records; and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted auxiliary segments using either basic keyboard shortcuts or mouse actions. Evaluation studies with 14 blind participants and 16 low vision participants showed significant improvement in web usability with InSupport, driven by increased reduction in interaction time and user effort, compared to the state-of-the-art solutions.
APA, Harvard, Vancouver, ISO, and other styles
43

Leaver, Tama. "Going Dark." M/C Journal 24, no. 2 (April 28, 2021). http://dx.doi.org/10.5204/mcj.2774.

Full text
Abstract:
The first two months of 2021 saw Google and Facebook ‘go dark’ in terms of news content on the Australian versions of their platforms. In January, Google ran a so-called “experiment” which removed or demoted current news in the search results available to a segment of Australian users. While Google was only darkened for some, in February news on Facebook went completely dark, with the company banning all news content and news sharing for users within Australia. Both of these instances of going dark occurred because of the imminent threat these platforms faced from the News Media Bargaining Code legislation that was due to be finalised by the Australian parliament. This article examines how both Google and Facebook responded to the draft Code, focussing on their threats to go dark, and the extent to which those threats were carried out. After exploring the context which produced the threats of going dark, this article looks at their impact, and how the Code was reshaped in light of those threats before it was finally legislated in early March 2021. Most importantly, this article outlines why Google and Facebook were prepared to go dark in Australia, and whether they succeeded in trying to prevent Australia setting the precedent of national governments dictating the terms by which digital platforms should pay for news content. From the Digital Platforms Inquiry to the Draft Code In July 2019, the Australian Treasurer released the Digital Platforms Inquiry Final Report which had been prepared by the Australian Competition and Consumer Commission (ACCC). It outlined a range of areas where Australian law, policies and practices were not keeping pace with the realities of a digital world of search giants, social networks, and streaming media. Analysis of the submissions made as part of the Digital Platforms Inquiry found that the final report was “primarily framed around the concerns of media companies, particularly News Corp Australia, about the impact of platform companies’ market dominance of content distribution and advertising share, leading to unequal economic bargaining relationships and the gradual disappearance of journalism jobs and news media publishers” (Flew et al. 13). As such, one of the most provocative recommendations made was the establishment of a new code that would “address the imbalance in the bargaining relationship between leading digital platforms and news media businesses” (Australian Competition and Consumer Commission, Digital Platforms Inquiry 16). The ACCC suggested such a code would assist Australian news organisations of any size in negotiating with Facebook, Google and others for some form of payment for news content. The report was released at a time when there was a greatly increased global appetite for regulating digital platforms. Thus the battle over the Code was watched across the world as legislation that had the potential to open the door for similar laws in other countries (Flew and Wilding). Initially the report suggested that the digital giants should be asked to develop their own codes of conduct for negotiating with news organisations. These codes would have then been enforced within Australia if suitably robust. However, after months of the big digital platforms failing to produce meaningful codes of their own, the Australian government decided to commission their own rules in this arena. The ACCC thus prepared the draft legislation that was tabled in July 2020 as the Australian News Media Bargaining Code.
According to the ACCC the Code, in essence, tried to create a level playing field where Australian news companies could force Google and Facebook to negotiate a ‘fair’ payment for linking to, or showing previews of, their news content. Of course, many commentators, and the platforms themselves, retorted that they already bring significant value to news companies by referring readers to news websites. While there were earlier examples of Google and Facebook paying for news, these were largely framed as philanthropy: benevolent digital giants supporting journalism for the good of democracy. News companies and the ACCC argued this approach completely ignored the fact that Google and Facebook commanded more than 80% of the online advertising market in Australia at that time (Meade, “Google, Facebook and YouTube”). Nor did the digital giants acknowledge their disruptive power given the bulk of that advertising revenue used to flow to news companies. Some of the key features of this draft of the Code included (Australian Competition and Consumer Commission, “News Media Bargaining Code”): Facebook and Google would be the (only) companies initially ‘designated’ by the Code (i.e. specific companies that must abide by the Code), with Instagram included as part of Facebook. The Code applied to all Australian news organisations, and specifically mentioned how small, regional, and rural news media would now be able to meaningfully bargain with digital platforms. Platforms would have 11 weeks after first being contacted by a news organisation to reach a mutually negotiated agreement. Failure to reach agreements would result in arbitration (using a style of arbitration called final party arbitration which has both parties present a final offer or position, with an Australian arbiter simply choosing between the two offers in most cases). Platforms were required to give 28 days notice of any change to their algorithms that would impact on the ways Australian news was ranked and appeared on their platform. Penalties for not following the Code could be ten million dollars, or 10% of the platform’s annual turnover in Australia (whichever was greater). Unsurprisingly, Facebook, Google and a number of other platforms and companies reacted very negatively to the draft Code, with their formal submissions arguing: that the algorithm change notifications would give certain news companies an unfair advantage while disrupting the platforms’ core business; that charging for linking would break the underlying free nature of the internet; that the Code overstated the importance and reach of news on each platform; and many other objections were presented, including strong rejections of the proposed model of arbitration which, they argued, completely favoured news companies without providing any real or reasonable limit on how much news organisations could ask to be paid (Google; Facebook). Google extended their argument by making a second submission in the form of a report with the title ‘The Financial Woes of News Publishers in Australia’ (Shapiro et al.) that argued Australian journalism and news was financially unsustainable long before digital platforms came along. 
However, in stark contrast the Digital News Report: Australia 2020 found that Google and Facebook were where many Australians found their news; in 2020, 52% of Australians accessed news on social media (up from 46% the year before), with 39% of Australians getting news from Facebook, and that number jumping to 49% when specifically focusing on news seeking during the first COVID-19 pandemic peak in April 2021 (Park et al.). The same report highlighted that 43% of people distrust news found on social media (with a further 29% neutral, and only 28% of people explicitly trusting news found via social media). Moreover, 64% of Australians were concerned about misinformation online, and of all the platforms mentioned in the survey, respondents were most concerned about Facebook as a source of misinformation, with 36% explicitly indicating this was the place they were most concerned about encountering ‘fake news’. In this context Facebook and Google battled the Code by launching a public relations campaigns, appealing directly to Australian consumers. Google Drives a Bus Across Australia Google’s initial response to the draft Code was a substantial public relations campaign which saw the technology company advocating against the Code but not necessarily the ideas behind it. Google instead posited their own alternative way of paying for journalism in Australia. On the main Google search landing page, the usually very white surrounds of the search bar included the text “Supporting Australian journalism: a constructive path forward” which linked to a Google page outlining their version of a ‘Fair Code’. Popup windows appeared across many of Google’s services and apps, noting Google “are willing to pay to support journalism”, with a button labelled ‘Hear our proposal’. Figure 1: Popup notification on Google Australia directing users to Google’s ‘A Fair Code’ proposal rebutting the draft Code. (Screen capture by author, 29 January 2021) Google’s popups and landing page links were visible for more than six months as the Code was debated. In September 2020, a Google blog post about the Code was accompanied by a YouTube video campaign featuring Australia comedian Greta Lee Jackson (Google Australia, Google Explains Arbitration). Jackson used the analogy of Google as a bus driver, who is forced to pay restaurants for delivering customers to them, and then pay part of the running costs of restaurants, too. The video reinforced Google’s argument that the draft Code was asking digital platforms to pay potentially enormous costs for news content without acknowledging the value of Google bringing readers to the news sites. However, the video opened with the line that “proposed laws can be confusing, so I'll use an analogy to break it down”, setting a tone that would seem patronising to many people. Moreover, the video, and Google’s main argument, completely ignored the personal data Google receives every time a user searches for, or clicks on, a news story via Google Search or any other Google service. If Google’s analogy was accurate, then the bus driver would be going through every passenger’s bag while they were on the bus, taking copies of all their documents from drivers licenses to loyalty cards, keeping a record of every time they use the bus, and then using this information to get advertisers to pay for a tailored advertisement on the back of the seat in front of every passenger, every time they rode the bus. 
Notably, by the end of March 2021, the video had only received 10,399 views, which suggests relatively few people actually clicked on it to watch. In early January 2021, at the height of the debate about the Code, Google ran what they called “an experiment” which saw around 1% of Australian users suddenly only receive “older or less relevant content” when searching for news (Barnet, “Google’s ‘Experiment’”). While ostensibly about testing options for when the Code became law, the unannounced experiment also served as a warning shot. Google very effectively reminded users and politicians about their important role in determining which news Australian users find, and what might happen if Google darkened what they returned as news results. On 21 January 2021, Mel Silva, the Managing Director and public face of Google in Australia and New Zealand gave public testimony about the company’s position before a Senate inquiry. Silva confirmed that Google were indeed considering removing Google Search in Australia altogether if the draft Code was not amended to address their key concerns (Silva, “Supporting Australian Journalism: A Constructive Path Forward An Update on the News Media Bargaining Code”). Google’s seemingly sudden escalation in their threat to go dark led to articles such as a New York Times piece entitled ‘An Australia with No Google? The Bitter Fight behind a Drastic Threat’ (Cave). Google also greatly amplified their appeal to the Australian public, with a video featuring Mel Silva appearing frequently on all Google sites in Australia to argue their position (Google Australia, An Update). By the end of March 2021, Silva’s video had been watched more than 2.2 million times on YouTube. Silva’s testimony, video and related posts from Google all characterised the Code as: breaking “how Google search works in Australia”; creating a world where links online are paid for and thus both breaking Google and “undermin[ing] how the web works”; and saw Google offer their News Showcase as a viable alternative that, in Google’s view, was “a fair one” (Silva, “Supporting Australian Journalism”). Google emphasised submissions about the Code which backed their position, including World Wide Web inventor Tim Berners-Lee who agreed that the idea of charging for links could have a more wide-reaching impact, challenging the idea of a free web (Leaver). Google also continued to release their News Showcase product in other parts of the world. They emphasised that there were existing arrangements for Showcase in Australia, but the current regulatory uncertainty meant it was paused in Australia until the debates about the Code were resolved. In the interim, news media across Australia, and the globe, were filled with stories speculating what an Australia would look like if Google went completely dark (e.g. Cave; Smyth). Even Microsoft weighed in to supporting the Code and offer their search engine Bing as a viable alternative to fill the void if Google really did go dark (Meade, “Microsoft’s Bing”). In mid-February, the draft Code was tabled in Australian parliament. Many politicians jumped at the chance to sing the Code’s praises and lament the power that Google and Facebook have across various spheres of Australian life. Yet as these speeches were happening, the Australian Treasurer Josh Frydenberg was holding weekend meetings with executives from Google and Facebook, trying to smooth the path toward the Code (Massola). 
In these meetings, a number of amendments were agreed to, including the Code more clearly taking into account any existing deals already on the table before it became law. In these meetings the Treasurer made it clear to Google that if the deals done prior to the Code were big enough, he would consider not designating Google under the Code, which in effect would mean Google is not immediately subject to it (Samios and Visentin). With that concession in hand Google swiftly signed deals with over 50 Australian news publishers, including Seven West Media, Nine, News Corp, The Guardian, the ABC, and some smaller publishers such as Junkee Media (Taylor; Meade, “ABC Journalism”). While the specific details of these deals were not made public, the deals with Seven West Media and Nine were both reported to be worth around $30 million Australian dollars (Dudley-Nicholson). In reacting to Google's deals Frydenberg described them as “generous deals, these are fair deals, these are good deals for the Australian media businesses, deals that they are making off their own bat with the digital giants” (Snape, “‘These Are Good Deals’”). During the debates about the Code, Google had ultimately ensured that every Australian user was well aware that Google was, in their words, asking for a “fair” Code, and before the Code became law even the Treasurer was conceding that Google was offering a “fair deal” to Australian news companies. Facebook Goes Dark on News While Google never followed through on their threat to go completely dark, Facebook took a very different path, with a lot less warning. Facebook’s threat to remove all news from the platform for users in Australia was not made explicit in their formal submissions on the draft of the Code. However, to be fair, Facebook’s Managing Director in Australia and New Zealand Will Easton did make a blog post at the end of August 2020 in which he clearly stated: “assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia from sharing local and international news on Facebook and Instagram” (Easton). During the negotiations in late 2020 Instagram was removed as an initial target of the Code (just as YouTube was not included as part of Google) along with a number of other concessions, but Facebook were not sated. Yet Easton’s post about removing news received very little attention after it was made, and certainly Facebook made no obvious attempt to inform their millions of Australian users that news might be completely blocked. Hence most Australians were shocked when that was exactly what Facebook did. Facebook’s power has, in many ways, always been exercised by what the platform’s algorithms display to users, what content is most visible and equally what content is made invisible (Bucher). The morning of Wednesday, 17 February 2021, Australian Facebook users awoke to find that all traditional news and journalism had been removed from the platform. Almost all pages associated with news organisations were similarly either disabled or wiped clean, and any attempt to share links to news stories was met with a notification: “this post can’t be shared”. The Australian Prime Minister Scott Morrison reacted angrily, publicly lamenting Facebook’s choice to “unfriend Australia”, adding their actions were “as arrogant as they were disappointing”, vowing that Australia would “not be intimidated by big tech” (Snape, “Facebook Unrepentant”).
Figure 2: Facebook notification appearing when Australians attempted to share news articles on the platform. (Screen capture by author, 20 February 2021) Facebook’s news ban in Australia was not limited to official news pages and news content. Instead, their ban initially included a range of pages and services such as the Australian Bureau of Meteorology, emergency services pages, health care pages, hospital pages, services providing vital information about the COVID-19 pandemic, and so forth. The breadth of the ban may have been purposeful, as one of Facebook’s biggest complaints was that the Code defined news too broadly (Facebook). Yet in the Australian context, where the country was wrestling with periodic lockdowns and the Coronavirus pandemic on one hand, and bushfires and floods on the other, the removal of these vital sources of information showed a complete lack of care or interest in Australian Facebook users. Beyond the immediate inconvenience of not being able to read or share news on Facebook, there were a range of other, immediate, consequences. As Barnet, amongst others, warned, a Facebook with all credible journalism banned would almost certainly open the floodgates to a tide of misinformation, with nothing left to fill the void; it made Facebook’s “public commitment to fighting misinformation look farcical” (Barnet, “Blocking Australian News”). Moreover, Bossio noted, “reputational damage from blocking important sites that serve Australia’s public interest overnight – and yet taking years to get on top of user privacy breaches and misinformation – undermines the legitimacy of the platform and its claimed civic intentions” (Bossio). If going dark and turning off news in Australia was supposed to win the sympathy of Australian Facebook users, then the plan largely backfired. Yet as with Google, the Australian Treasurer was meeting with Mark Zuckerberg and Facebook executives behind closed doors, which did eventually lead to changes before the Code was finally legislated (Massola). Facebook gained a number of concessions, including: a longer warning period before Facebook could be designated by the Code; a longer period before news organisations would be able to expect negotiations to be concluded; an acknowledgement that existing deals would be taken into account during negotiations; and, most importantly, a clarification that if Facebook was to once again block news this would both prevent them being subject to the Code and was not something the platform could be punished for. Like Google, though, Facebook’s biggest gain was again the Treasurer making it clear that by making deals in advance of the Code becoming law, it was likely that Facebook would not be designated, and thus not subject to the Code at all (Samios and Visentin). After these concessions the news standoff ended and on 23 February the Australian Treasurer declared that after tense negotiations Facebook had “refriended Australia”; the company had “committed to entering into good-faith negotiations with Australian news media businesses and seeking to reach agreements to pay for content” (Visentin). Over the next month there were some concerns voiced about slow progress, but then major deals were announced between Facebook and News Corp Australia, and with Nine, with other deals following closely (Meade, “Rupert Murdoch”). Just over a week after the ban began, Facebook returned news to their platform in Australia.
Facebook obviously felt they had won the battle, but Australian Facebook users were clearly cannon fodder, with their interests and wellbeing ignored. Who Won? The Immediate Aftermath of the Code After the showdowns with Google and Facebook, the final amendments to the Code were made and it was legislated as the News Media and Digital Platforms Mandatory Bargaining Code (Australian Treasury), going into effect on 2 March 2021. However, when it became legally binding, not one single company was ‘designated’, meaning that the Code did not immediately apply to anyone. Yet deals had been struck, money would flow to Australian news companies, and Facebook had returned news to its platform in Australia. At the outset, Google, Facebook, news companies in Australia and the Australian government all claimed to have won the battle over the Code. Having talked up their tough stance on big tech platforms when the Digital Platforms Inquiry landed in 2019, the Australian Government was under public pressure to deliver on that rhetoric. The debates and media coverage surrounding the Code involved a great deal of political posturing and gained much public attention. The Treasurer was delighted to see deals being struck that meant Facebook and Google would pay Australian news companies. He actively portrayed this as the government protecting Australia’s interest and democracy. The fact that the Code was leveraged as a threat does mean that the nuances of the Code are unlikely to be tested in a courtroom in the near future. Yet as a threat it was an effective one, and it does remain in the Treasurer’s toolkit, with the potential to be deployed in the future. While mostly outside the scope of this article, it should definitely be noted that the biggest winner in the Code debate was Rupert Murdoch, executive chairman of News Corp. They were the strongest advocates of regulation forcing the digital giants to pay for news in the first place, and had the most to gain and least to lose in the process. Most large news organisations in Australia have fared well, too, with new revenue flowing in from Google and Facebook. However, one of the most important facets of the Code was the inclusion of mechanisms to ensure that regional and small news publishers in Australia would be able to negotiate with Facebook and Google. While some might be able to band together and strike terms (and some already have), it is likely that many smaller news companies in Australia will miss out, since the deals being struck with the bigger news companies appear to be big enough to ensure they are not designated, and thus not subject to the Code (Purtill). A few weeks after the Code became law ACCC Chair Rod Sims stated that the “problem we’re addressing with the news media code is simply that we wanted to arrest the decline in money going to journalism” (Kohler). On that front the Code succeeded. However, there is no guarantee the deals will mean money will support actual journalists, rather than disappearing as extra corporate profits. Nor is there any onus on Facebook or Google to inform news organisations about changes to their algorithms that might impact on news rankings. Also, as many Australian news companies are now receiving payments from Google and Facebook, there is a danger the news media will become dependent on that revenue, which may make it harder for journalists to report on the big tech giants without some perceptions of a conflict of interest.
In a diplomatic post about the Code, Google thanked everyone who had voiced concerns with the initial drafts of the legislation, thanked Australian users, and celebrated that their newly launched Google News Showcase had “two million views of content” with more than 70 news partners signed up within Australia (Silva, “An Update”). Given that News Showcase had already begun rolling out elsewhere in the world, it is likely Google were already aware they were going to have to contribute to the production of journalism across the globe. The cost of paying for news in Australia may well have fallen within the parameters Google had already decided were acceptable and inevitable before the debate about the Code even began (Purtill). In the aftermath of the Code becoming legislation, Google also posted a cutting critique of Microsoft, arguing they were “making self-serving claims and are even willing to break the way the open web works in an effort to undercut a rival” (Walker). In doing so, Google implicitly claimed that the concessions and changes to the Code they had managed to negotiate effectively positioned them as having championed the free and open web. At the end of February 2021, in a much more self-congratulatory post-mortem of the Code entitled “The Real Story of What Happened with News on Facebook in Australia”, Facebook reiterated their assertion that they bring significant value to news publishers and that the platform receives no real value in return, stating that in 2020 Facebook provided “approximately 5.1 billion free referrals to Australian publishers worth an estimated AU$407 million to the news industry” (Clegg). Deploying one last confused metaphor, Facebook argued the original draft of the Code was “like forcing car makers to fund radio stations because people might listen to them in the car — and letting the stations set the price.” Of course, there was no mention that following that metaphor, Facebook would have bugged the car and used that information to plaster the internal surfaces with personalised advertising. Facebook also touted the success of their Facebook News product in the UK, albeit without setting a date for the rollout of the product in Australia. While Facebook did concede that “the decision to stop the sharing of news in Australia appeared to come out of nowhere”, what the company failed to do was apologise to Australian Facebook users for the confusion and inconvenience they experienced. Nevertheless, on Facebook’s own terms, they certainly positioned themselves as having come out winners. Future research will need to determine whether Facebook’s actions damaged their reputation or encouraged significant numbers of Australians to leave the platform permanently, but in the wake of a number of high-profile scandals, including Cambridge Analytica (Vaidhyanathan), it is hard to see how Facebook’s actions would not have further undermined consumer trust in the company and their main platform (Park et al.). In fighting the Code, Google and Facebook were not just battling the Australian government, but also the implication that if they paid for news in Australia, they likely would also have to do so in other countries. The Code was thus seen as a dangerous precedent far more than just a mechanism to compel payment in Australia. Since both companies ensured they made deals prior to the Code becoming law, neither was initially ‘designated’, and thus neither were actually subject to the Code at the time of writing. 
The value of the Code has been as a threat and a means to force action from the digital giants. How effective it is as a piece of legislation remains to be seen in the future if, indeed, any company is ever designated. For other countries, the exact wording of the Code might not be as useful as a template, but its utility to force action has surely been noted. Like the inquiry which initiated it, the Code set “the largest digital platforms, Google and Facebook, up against the giants of traditional media, most notably Rupert Murdoch’s News Corporation” (Flew and Wilding 50). Yet in a relatively unusual turn of events, both sides of that battle claim to have won. At the same time, EU legislators watched the battle closely as they considered an “Australian-style code” of their own (Dillon). Moreover, in the month immediately following the Code being legislated, both the US and Canada were actively pursuing similar regulation (Baier) with Facebook already threatening to remove news and go dark for Canadian Facebook users (van Boom). For Facebook, and Google, the battle continues, but fighting the Code has meant the genie of paying for news content is well and truly out of the bottle. References Australian Competition and Consumer Commission. Digital Platforms Inquiry: Final Report. 25 July 2019. <https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/final-report-executive-summary>. ———. “News Media Bargaining Code: Draft Legislation.” Australian Competition and Consumer Commission, 22 July 2020. <https://www.accc.gov.au/focus-areas/digital-platforms/news-media-bargaining-code/draft-legislation>. Australian Treasury. Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Act 2021. Attorney-General’s Department, 2 Mar. 2021. <https://www.legislation.gov.au/Details/C2021A00021/Html/Text>. Baier, Jansen. “US Could Allow News Distribution Fees for Google, Facebook.” MediaFile, 31 Mar. 2021. <http://www.mediafiledc.com/us-could-allow-news-distribution-fees-for-google-facebook/>. Barnet, Belinda. “Blocking Australian News Shows Facebook’s Pledge to Fight Misinformation Is Farcical.” The Guardian, 18 Feb. 2021. <http://www.theguardian.com/commentisfree/2021/feb/18/blocking-australian-news-shows-facebooks-pledge-to-fight-misinformation-is-farcical>. ———. “Google’s ‘Experiment’ Hiding Australian News Just Shows Its Inordinate Power.” The Guardian, 14 Jan. 2021. <http://www.theguardian.com/commentisfree/2021/jan/14/googles-experiment-hiding-australian-news-just-shows-its-inordinate-power>. Bossio, Diana. “Facebook Has Pulled the Trigger on News Content — and Possibly Shot Itself in the Foot.” The Conversation, 18 Feb. 2021. <http://theconversation.com/facebook-has-pulled-the-trigger-on-news-content-and-possibly-shot-itself-in-the-foot-155547>. Bucher, Taina. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14.7 (2012): 1164–80. DOI:10.1177/1461444812440159. Cave, Damien. “An Australia with No Google? The Bitter Fight behind a Drastic Threat.” The New York Times, 22 Jan. 2021. <https://www.nytimes.com/2021/01/22/business/australia-google-facebook-news-media.html>. Clegg, Nick. “The Real Story of What Happened with News on Facebook in Australia.” About Facebook, 24 Feb. 2021. <https://about.fb.com/news/2021/02/the-real-story-of-what-happened-with-news-on-facebook-in-australia/>. Dillon, Grace. 
“EU Contemplates Australia-Style Media Bargaining Code; China Imposes New Antitrust Rules.” ExchangeWire.com, 9 Feb. 2021. <https://www.exchangewire.com/blog/2021/02/09/eu-contemplates-australia-style-media-bargaining-code-china-imposes-new-antitrust-rules/>. Dudley-Nicholson, Jennifer. “Google May Escape Laws after Spending Spree.” The Daily Telegraph, 17 Feb. 2021. <https://www.dailytelegraph.com.au/news/national/google-may-escape-tough-australian-news-laws-after-a-lastminute-spending-spree/news-story/d3b37406bf279ff6982287d281d1fbdd>. Easton, Will. “An Update about Changes to Facebook’s Services in Australia.” About Facebook, 1 Sep. 2020. <https://about.fb.com/news/2020/08/changes-to-facebooks-services-in-australia/>. Facebook. Facebook Response to the Australian Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Facebook_0.pdf>. Flew, Terry, et al. “Return of the Regulatory State: A Stakeholder Analysis of Australia’s Digital Platforms Inquiry and Online News Policy.” The Information Society 37.2 (2021): 128–45. DOI:10.1080/01972243.2020.1870597. Flew, Terry, and Derek Wilding. “The Turn to Regulation in Digital Communication: The ACCC’s Digital Platforms Inquiry and Australian Media Policy.” Media, Culture & Society 43.1 (2021): 48–65. DOI:10.1177/0163443720926044. Google. Draft News Media and Platforms Mandatory Bargaining Code: Submissions in Response. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Google_0.pdf>. Google Australia. An Update from Google on the News Media Bargaining Code. 2021. YouTube. <https://www.youtube.com/watch?v=dHypeuHePEI>. ———. Google Explains Arbitration under the News Media Bargaining Code. 2020. YouTube. <https://www.youtube.com/watch?v=6Io01W3migk>. Kohler, Alan. “The News Bargaining Code Is Officially Dead.” The New Daily, 16 Mar. 2021. <https://thenewdaily.com.au/news/2021/03/17/alan-kohler-news-bargaining-code-dead/>. Leaver, Tama. “Web’s Inventor Says News Media Bargaining Code Could Break the Internet. He’s Right — but There’s a Fix.” The Conversation, 21 Jan. 2021. <http://theconversation.com/webs-inventor-says-news-media-bargaining-code-could-break-the-internet-hes-right-but-theres-a-fix-153630>. Massola, James. “Frydenberg, Facebook Negotiating through the Weekend.” The Sydney Morning Herald, 20 Feb. 2021. <https://www.smh.com.au/politics/federal/frydenberg-facebook-negotiating-through-the-weekend-on-new-media-laws-20210219-p573zp.html>. Meade, Amanda. “ABC Journalism to Appear on Google’s News Showcase in Lucrative Deal.” The Guardian, 22 Feb. 2021. <http://www.theguardian.com/media/2021/feb/23/abc-journalism-to-appear-on-googles-showcase-in-lucrative-deal>. ———. “Google, Facebook and YouTube Found to Make Up More than 80% of Australian Digital Advertising.” The Guardian, 23 Oct. 2020. <http://www.theguardian.com/media/2020/oct/23/google-facebook-and-youtube-found-to-make-up-more-than-80-of-australian-digital-advertising>. ———. “Microsoft’s Bing Ready to Step in If Google Pulls Search from Australia, Minister Says.” The Guardian, 1 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/01/microsofts-bing-ready-to-step-in-if-google-pulls-search-from-australia-minister-says>. ———. “Rupert Murdoch’s News Corp Strikes Deal as Facebook Agrees to Pay for Australian Content.” The Guardian, 15 Mar. 2021. <http://www.theguardian.com/media/2021/mar/16/rupert-murdochs-news-corp-strikes-deal-as-facebook-agrees-to-pay-for-australian-content>. 
Park, Sora, et al. Digital News Report: Australia 2020. Canberra: News and Media Research Centre, 16 June 2020. DOI:10.25916/5ec32f8502ef0. Purtill, James. “Facebook Thinks It Won the Battle of the Media Bargaining Code — but So Does the Government.” ABC News, 25 Feb. 2021. <https://www.abc.net.au/news/science/2021-02-26/facebook-google-who-won-battle-news-media-bargaining-code/13193106>. Samios, Zoe, and Lisa Visentin. “‘Historic Moment’: Treasurer Josh Frydenberg Hails Google’s News Content Deals.” The Sydney Morning Herald, 17 Feb. 2021. <https://www.smh.com.au/business/companies/historic-moment-treasurer-josh-frydenberg-hails-google-s-news-content-deals-20210217-p573eu.html>. Shapiro, Carl, et al. The Financial Woes of News Publishers in Australia. 27 Aug. 2020. <https://www.accc.gov.au/system/files/Google%20Annex.PDF>. Silva, Mel. “An Update on the News Media Bargaining Code.” Google Australia, 1 Mar. 2021. <http://www.google.com.au/google-in-australia/an-open-letter/>. ———. “Supporting Australian Journalism: A Constructive Path Forward – An Update on the News Media Bargaining Code.” Google Australia, 22 Jan. 2021. <https://about.google/intl/ALL_au/google-in-australia/jan-6-letter/>. Smyth, Jamie. “Australian Companies Forced to Imagine Life without Google.” Financial Times, 9 Feb. 2021. <https://www.ft.com/content/fa66e8dc-afb1-4a50-8dfa-338a599ad82d>. Snape, Jack. “Facebook Unrepentant as Prime Minister Dubs Emergency Services Block ‘Arrogant.’” ABC News, 18 Feb. 2021. <https://www.abc.net.au/news/2021-02-18/facebook-unrepentant-scott-morrison-dubs-move-arrogant/13169340>. ———. “‘These Are Good Deals’: Treasurer Praises Google News Deals amid Pressure from Government Legislation.” ABC News, 17 Feb. 2021. <https://www.abc.net.au/news/2021-02-17/treasurer-praises-good-deals-between-google-news-seven/13163676>. Taylor, Josh. “Guardian Australia Strikes Deal with Google to Join News Showcase.” The Guardian, 20 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/20/guardian-australia-strikes-deal-with-google-to-join-news-showcase>. Vaidhyanathan, Siva. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford UP, 2018. Van Boom, Daniel. “Facebook Could Block News in Canada like It Did in Australia.” CNET, 29 Mar. 2021. <https://www.cnet.com/news/facebook-could-block-news-in-canada-like-it-did-in-australia/>. Visentin, Lisa. “Facebook Refriends Australia after Last-Minute Changes to Media Code.” The Sydney Morning Herald, 23 Feb. 2021. <https://www.smh.com.au/politics/federal/government-agrees-to-last-minute-amendments-to-media-code-20210222-p574kc.html>. Walker, Kent. “Our Ongoing Commitment to Supporting Journalism.” Google, 12 Mar. 2021. <https://blog.google/products/news/google-commitment-supporting-journalism/>.
APA, Harvard, Vancouver, ISO, and other styles
44

Голик, В. И., О. Г. Бурдзиева, and Б. В. Дзеранов. "Management of massif geomechanics through optimization of development technologies." Геология и геофизика Юга России, no. 1 (April 9, 2020). http://dx.doi.org/10.23671/vnc.2020.1.59070.

Full text
Abstract:
The relevance of the article stems from the need to find new resources for increasing the efficiency of subsoil use, including through the rational use of the geomechanical features of rock massifs under technogenic impact and of the mechanism of interaction of the rocks composing the massifs of hard-rock deposits. The object of study is the structurally stressed rock massifs of the Sadon ore cluster. The aim of the research is to assess the prospects of deposit development technologies when extracting reserves under conditions of technogenic weakening of the massifs. The methods used to solve the main research problem form a complex that includes systematising the information related to massif management, developing criteria for the efficiency of ore mining, and formulating the concept of a resource-saving technology for developing the deposits. Results and discussion. A concept of deposit development technology is formulated on the basis of methods for controlling the state of the massif by assigning an optimal level of stresses, formed by the combination of seismotectonic impacts and technogenic seismicity. A typification of methods for calculating stable spans of workings is given, and examples of solving mining engineering problems with the recommended calculation methods are provided. A scheme-algorithm for the interaction of the massif control parameters is proposed. It is determined that the prospects of technologies for developing the deposits of the Sadon group are tied to implementing the concept of managing rock masses by regulating the magnitude of stresses. It is shown that taking geomechanical factors into account makes it possible to adjust the development parameters at all stages of mining a deposit, increasing the quality of the mined ores and reducing the hazard to workers. Under these conditions, satisfactory performance can be achieved only at the first stage of development, during the extraction of the primary chambers. Mining the pillars in the second stage raises stresses to a critical state, which is accompanied by a loss of reserves or a decline in ore quality below profitable cut-off limits. It is recommended that new reserves be developed, and the remaining reserves extracted, according to a combined scheme: valuable ores are mined with the backfilling of technological voids with hardening mixtures; ores with a lower metal content are extracted by leaching, with underground leaching tailings used to control stresses; and the leached ores serve as artificial pillars, redistributing technogenic and natural stresses.
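The scheme-algorithm mentioned in the abstract is not reproduced there, so the following Python sketch is only a hypothetical illustration of the kind of stress-based decision rule the recommended combined scheme implies; the class, the 0.9 safety margin, the grade cut-off, and the stress ratio itself are all assumptions for illustration, not the authors' method.

# Hypothetical sketch of a stress-based development decision rule, loosely
# modelled on the combined scheme in the abstract. All names, units, and
# thresholds are illustrative assumptions, not values from the article.
from dataclasses import dataclass

@dataclass
class Block:
    """One block of the deposit (fields and units are illustrative)."""
    name: str
    metal_grade: float       # percent metal content
    stress: float            # acting stress in the massif, MPa
    critical_stress: float   # stress at which the massif loses stability, MPa

def choose_scheme(block: Block, grade_cutoff: float = 2.0) -> str:
    """Pick a development scheme for one block.

    High-grade ore is mined conventionally and the voids are backfilled with
    hardening mixtures; lower-grade ore is leached in place, with the leached
    residue left as an artificial pillar that redistributes stress. If stress
    is already near critical, extraction is deferred.
    """
    ratio = block.stress / block.critical_stress
    if ratio >= 0.9:  # illustrative safety margin, not from the article
        return "defer: regulate stresses before extraction"
    if block.metal_grade >= grade_cutoff:
        return "mine and backfill voids with a hardening mixture"
    return "leach in place; leached ore serves as an artificial pillar"

if __name__ == "__main__":
    blocks = [
        Block("primary chamber", metal_grade=3.5, stress=40.0, critical_stress=80.0),
        Block("second-stage pillar", metal_grade=1.2, stress=76.0, critical_stress=80.0),
    ]
    for b in blocks:
        print(f"{b.name}: {choose_scheme(b)}")

The ordering of the checks carries the abstract's argument: stability, not ore grade, is treated as the binding constraint once second-stage stresses approach critical values.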
APA, Harvard, Vancouver, ISO, and other styles
45

Robinson, Jessica Yarin. "Fungible Citizenship." M/C Journal 25, no. 2 (April 25, 2022). http://dx.doi.org/10.5204/mcj.2883.

Full text
Abstract:
Social media companies like to claim the world. Mark Zuckerberg says Facebook is “building a global community”. Twitter promises to show you “what’s happening in the world right now”. Even Parler claims to be the “global town square”. Indeed, among the fungible aspects of digital culture is the promise of geographic fungibility—the interchangeability of location and national provenance. The taglines of social media platforms tap into the social imagination of the Internet erasing distance—Marshall McLuhan’s global village on a touch screen (see fig. 1).

Fig. 1: Platform taglines: YouTube, Twitter, Parler, and Facebook have made globality part of their pitch to users.

Yet users’ perceptions of geographic fungibility remain unclear. Scholars have proposed forms of cosmopolitan and global citizenship in which national borders play less of a role in how people engage with political ideas (Delanty; Sassen). Others suggest the potential erasure of location may be disorienting (Calhoun). “Nobody lives globally”, as Hugh Dyer writes (64). In this article, I interrogate popular and academic assumptions about global political spaces, looking at geographic fungibility as a condition experienced by users. The article draws on interviews conducted with Twitter users in the Scandinavian region. Norway, Sweden, and Denmark offer an interesting contrast to online spaces because of their small and highly cohesive political cultures; yet these countries also have high Internet penetration rates and English proficiency levels, making them potentially highly globally connected (Syvertsen et al.). Based on a thematic analysis of these interviews, I find fungibility emerges as a key feature of how users interact with politics at a global level in three ways: invisibility (fungibility as disconnection); efficacy (fungibility as empowerment); and antagonism (non-fungibility as strategy). Finally, in contrast to currently available models, I propose that online practices are characterised not so much by cosmopolitan norms as by what I describe as fungible citizenship.

Geographic Fungibility and Cosmopolitan Hopes

Let’s back up and take a real-life example that highlights what it means for geography to be fungible. In March 2017, at a high-stakes meeting of the US House Intelligence Committee, a congressman suddenly noticed that President Donald Trump was not only following the hearing on television, but was live-tweeting incorrect information about it on Twitter. “This tweet has gone out to millions of Americans”, said Congressman Jim Himes, noting Donald Trump’s follower count. “16.1 million to be exact” (C-SPAN). Only, those followers weren’t just Americans; Trump was tweeting to 16.1 million followers worldwide (see Sevin and Uzunoğlu). Moreover, the committee was gathered that day to address an issue related to geographic fungibility: it was the first public hearing on Russian attempts to interfere in the 2016 American presidential race—which occurred, among other places, on Twitter.

In a way, democratic systems are based on fungibility. One person, one vote. Equality before the law. But land mass was not imagined to be commutable, and given the physical restrictions of communication, participation in the public sphere was largely assumed to be restricted by geography (Habermas). But online platforms offer a fundamentally different structure. Nancy Fraser observes that “public spheres today are not coextensive with political membership. Often the interlocutors are neither co-nationals nor fellow citizens” (16).
Netflix, YouTube, K-Pop, #BLM: the resources that people draw on to define their worlds come less from nation-specific media (Robertson 179). C-SPAN’s online feed—if one really wanted to—is as easy to click on in Seattle as in Stockholm. Indeed, research on Twitter finds geographically dispersed networks (Leetaru et al.). Many Twitter users tweet in multiple languages, with English being the lingua franca of Twitter (Mocanu et al.). This has helped make geographic location interchangeable, even undetectable without the use of advanced methods (Stock). Such conditions might set the stage for what sociologists have envisioned as cosmopolitan or global public spheres (Linklater; Szerszynski and Urry): that is, cross-border networks based more on shared interest than shared nationality (Sassen 277). Theorists observing the growth of online communities in the late 1990s and early 2000s proposed that such activity could lead to a shift in people’s perspectives on the world: namely, by closing the communicative distance with the Other, people would also close the moral distance. Delanty suggested that “discursive spaces of world openness” could counter nationalist tendencies and help mobilise cosmopolitan citizens against the negative effects of globalisation (44).

However, much of this discourse dates to the pre-social-media Internet. These platforms have proved to be more hierarchical, less interactive, and even less global than early theorists hoped (Burgess and Baym; Dahlgren, “Social Media”; Hindman). Although ordinary citizens certainly break through, entrenched power dynamics and algorithmic structures complicate the process, leading to what Bucher describes as a reverse Panopticon: “the possibility of constantly disappearing, of not being considered important enough” (1171). A 2021 report by the Pew Research Center found most Twitter users receive few if any likes and retweets of their content. In short, it may be that social media are less like Marshall McLuhan’s global village and more like a global version of Marc Augé’s “non-places”: an anonymous and disempowering whereabouts (77–78).

Cosmopolitanism itself is also plagued by problems of legitimacy (Calhoun). Fraser argues that global public opinion is meaningless without a constituent global government. “What could efficacy mean in this situation?” she asks (15). Moreover, universalist sentiment and the erasure of borders are not exactly the story of the last 15 years. Media scholar Terry Flew notes that, given Brexit and the rise of figures like Trump and Bolsonaro, projections of cosmopolitanism were seriously overestimated (19). Yet social media are undeniably political places. So how do we make sense of users’ engagement in the discourse that increasingly takes place here? It is this point I turn to next.

Citizenship in the Age of Social Media

In recent years, scholars have reconsidered how they understand the way people interact with politics, as access to political discourse has become a regular, even mundane part of our lives. Increasingly they are challenging old models of “informed citizens” and traditional forms of political participation. Neta Kligler-Vilenchik writes:

the oft-heard claims that citizenship is in decline, particularly for young people, are usually based on citizenship indicators derived from these legacy models—the informed/dutiful citizen. Yet scholars are increasingly positing … citizenship [is not] declining, but rather changing its form. (1891)
In other words, rather than wondering if tweeting is like a citizen speaking in the town square or merely scribbling in the margins of a newspaper, this line of thinking suggests tweeting is a new form of citizen participation entirely (Bucher; Lane et al.). Who speaks in the town square these days anyway? To be clear, “citizenship” here is not meant in the ballot-box and passport sense; this isn’t about changing legal definitions. Rather, the citizenship at issue refers to how people perceive and enact their public selves. In particular, new models of citizenship emphasise how people understand their relation to strangers through discursive means (Asen)—through talking, in other words, in its various forms (Dahlgren, “Talkative Public”). This may include anything from Facebook posts to online petitions (Vaughan et al.) to digital organising (Vromen), and even activities that can seem trivial, solitary, or apolitical by traditional measures, such as “liking” a post or retweeting a news story. Although some research finds users do see strategic value in such activities (Picone et al.), Lane et al. argue that small-scale acts are important on their own because they force us to self-reflect on our relationship to politics, under a model they call “expressive citizenship”. Kligler-Vilenchik argues that such approaches to citizenship reflect not only new technology but also a society in which public discourse is less formalised through official institutions (newspapers, city council meetings, clubs): “each individual is required to ‘invent themselves’, to shape and form who they are and what they believe in—including how to enact their citizenship”, she writes (1892).

However, missing from these new understandings of politics is a spatial dimension. How does the geographic reach of social media sites play into perceptions of citizenship in these spaces? This is important because, regardless of the state of cosmopolitan sentiment, political problems are global: climate change, pandemics, the regulation of tech companies, the next US president. Many of society’s biggest issues, as Beck notes, “do not respect nation-state or any other borders” (4). Yet it’s not clear whether users’ correlative ability to reach across borders is empowering, or overwhelming. Thus, inspired particularly by Delanty’s “micro” cosmopolitanism and Dahlgren’s conditions for the formation of citizenship (“Talkative Public”), I am guided by the following questions: how do people negotiate geographic fungibility online? And specifically, how do they understand their relationship to a global space and their ability to be heard in it?

Methodology

Christensen and Jansson have suggested that one of the underutilised ways to understand media cultures is to talk to users directly about the “mediatized everyday” (1474). To that end, I interviewed 26 Twitter users in Norway, Denmark, and Sweden. The Scandinavian region is a useful region of study because most people use the Web nearly every day and the populations have high English proficiency (Syvertsen et al.). Participants were found in large-scale data scrapes of Twitter, using linguistic and geographic markers in their profiles, a process similar to the mapping of the Australian Twittersphere (Bruns et al.). The interviewees were selected because of their mixed use of Scandinavian languages and English and their participation in international networks. Participants were contacted through direct messages on Twitter or via email.
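The article does not publish its scraping or selection pipeline, so the following Python sketch is only a hedged illustration of the kind of profile filtering described above; the input format, field names, and marker lists are assumptions.

# Hedged sketch of profile-based participant selection: a Scandinavian
# geographic marker in the profile plus mixed Scandinavian/English tweeting.
# Field names and marker sets are assumptions, not the study's actual code.

SCANDI_PLACES = {"norway", "oslo", "bergen", "sweden", "stockholm",
                 "gothenburg", "denmark", "copenhagen", "aarhus"}
SCANDI_LANGS = {"no", "nb", "nn", "sv", "da"}  # ISO 639-1 codes

def is_candidate(profile: dict) -> bool:
    """Flag a scraped profile as a potential interviewee."""
    location = profile.get("location", "").lower()
    langs = set(profile.get("tweet_langs", []))  # languages seen in the timeline
    has_geo_marker = any(place in location for place in SCANDI_PLACES)
    has_mixed_langs = bool(langs & SCANDI_LANGS) and "en" in langs
    return has_geo_marker and has_mixed_langs

profiles = [
    {"location": "Stockholm, Sweden", "tweet_langs": ["sv", "en"]},
    {"location": "Berlin", "tweet_langs": ["de", "en"]},
]
print([is_candidate(p) for p in profiles])  # -> [True, False]

A real pipeline would also need to handle free-text locations in several languages and filter out bots; the sketch only shows the shape of the two stated criteria, a Scandinavian geographic marker plus mixed Scandinavian-language and English use.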
In figure 2, the participants’ timeline data have been graphed into a network map according to whom users @mentioned and retweeted, with lines representing tweets and colours representing languages. The participants include activists, corporate consultants, government employees, students, journalists, politicians, a security guard, a doctor, a teacher, and unemployed people. They range in age from 24 to 60. Eight are women, reflecting the gender imbalance of Twitter. Six have an immigrant background. Eight are right-leaning politically. Participants also have wide variation in follower counts in order to capture a variety of experiences on the platform (min=281, max=136,000, median=3,600, standard deviation=33,708). All users had public profiles, but under Norwegian rules for research data, they will be identified here by an ID and their country, gender, and follower count (e.g., P01, Sweden, M, 23,000). Focussing on a single platform allowed the interviews to be more specific and made it easier to compare the participants’ responses, although other social media often came up in the course of the interviews. Twitter was selected because it is often used in a public manner and has become an important channel for political communication (Larsson and Moe). The interviews lasted around an hour each and were conducted on Zoom between May 2020 and March 2021.

Fig. 2: Network map of interview participants’ Twitter timelines.

Invisibility: The Abyss of the Global Village

Each participant was asked during the interview how they think about globality on Twitter. For many, it was part of the original reason for joining the platform. “Twitter had this reputation of being the hangout of a lot of the world’s intellectuals”, said P022 (Norway, M, 136,000). One Swedish woman described a kind of cosmopolitan curation process, where she would follow people on every continent, so that her feed would give her a sense of the world. “And yes, you can get that from international papers”, she told me, “but if I actually consumed as much as I do on Twitter in papers, I would be reading papers and articles all day” (P023, Sweden, F, 384). Yet while globality was part of the appeal, it was also an abstraction. “I mean, the Internet is global, so everything you do is going to end up somewhere else”, said one Swedish user (P013, M, 12,000). Users would echo the taglines that social media allow you to “interact with someone half a world away” (P05, Norway, M, 3,300) but were often hard-pressed to recall specific examples.

A strong theme of invisibility—or feeling lost in an abyss—ran throughout the interviews. For many users this manifested in a lack of any visible response to their tweets. Even when replying to another user, the participants didn’t expect much dialogic engagement (“No, no, that’s unrealistic”). For P04 (Norway, F, 2,000), tweeting back a heart emoji to someone with a large following was for her own benefit, much like the intrapersonal expressions described by Lane et al. that are not necessarily intended for other actors; P04 didn’t expect the original poster to even see her emoji. Interestingly, invisibility was more of a frustration among users with several thousand followers than among those with only a few hundred. Having more followers seemed only to make Twitter appear more fickle. “Sometimes you get a lot of attention and sometimes it’s completely disregarded”, said P05 (Norway, M, 3,300).
P024 (Sweden, M, 2,000) had essentially given up: “I think it’s fun that you found me [to interview]”, he said, “because I have this idea that almost no one sees my tweets anymore”. In a different way, P08 (Norway, F), who had a follower count of 121,000, also felt the abstraction of globality. “It’s almost like I’m just tweeting into a void or into space”, she said, “because it’s too many people to grasp or really understand that these are real people”. For P08, Twitter was almost an anonymous non-place because of its vastness, compared with Facebook and Instagram, where the known faces of her friends and family made for more finite and specific places—and thus made her more self-conscious about the visibility of her posts.

Efficacy: Fungibility as Empowerment

Despite the frequent feeling of global invisibility, almost all the users—even those with few followers—believed they had some sort of effect on global political discussions on Twitter. This was surprising, and seemingly contradictory to the first theme. This second theme of empowerment is characterised by feelings of efficacy or perceptions of impact. One of the most striking examples came from a Danish man with 345 followers. I wondered before the interview if he might have automated his account because he replied to Donald Trump so often (see fig. 3). The participant explained that, no, he was just trying to affect the statistics on Trump’s tweet, to get it ratioed. He explained:

it’s like when I’m voting, I’m not necessarily thinking [I’m personally] going to affect the situation, you know. … It’s the statistics that shows a position—that people don’t like it, and they’re speaking actively against it. (P06, Denmark, M, 345)

Other participants described their role similarly—not as making an impact directly, but as being “one ant in the anthill” or helping information spread “like rings in the water”. One woman in Sweden said of the US election:

I can’t go to the streets because I’m in Stockholm. So I take to their streets on Twitter. I’m kind of helping them—using the algorithms, with retweets, and re-enforcing some hashtags. (P018, Sweden, F, 7,400)

Note that the participants rationalise their Twitter activities through comparisons to classic forms of political participation—voting and protesting. Yet the acts of citizenship they describe are very much in line with new norms of citizenship (Vaughan et al.) and with what Picone et al. call “small acts of engagement”; they are just acts aimed at the American sphere instead of their national sphere. Participants with large followings understood their accounts had a kind of brand, such as commenting on Middle Eastern politics, mocking leftist politicians, or critiquing the media. But these users were also sceptical that they were having any direct impact. Rather, they too saw themselves as being “a tiny part of a combined effect from a lot of people” (P014, Norway, M, 39,000).

Fig. 3: Participant P06 replies to Trump.

Antagonism: Encounters with Non-Fungibility

The final theme reflects instances when geography became suddenly apparent—and was thrown back in the faces of the users. This was often in relation to the 2020 American election, which many of the participants were following closely. “I probably know more about US politics than Swedish”, said P023 (Sweden, F, 380). Particularly among left-wing users who listed a Scandinavian location in their profile, tweeting about the topic had occasionally led to encounters with Americans claiming foreign interference.
“I had some people telling me ‘You don’t have anything to do with our politics. You have no say in this’”, said P018 (Sweden, F, 7,400). In these instances, the participants likewise deployed geography strategically. Participants said they would claim legitimacy because the election would affect their country too. “I think it’s important for the rest of the world to give them [the US] that feedback. That ‘we’re depending on you’”, said P017 (Sweden, M, 280). As a result of these interactions, P06 started to pre-emptively identify himself as Danish in his tweets, which in a way sacrificed his own geographic fungibility, but also reinforced a wider sense of geographic fungibility on Twitter. In one of his replies to Donald Trump, Jr., he wrote, “Denmark here. The world is hoping for real leader!”

Conclusion: Fungible Citizenship

The view that digital media are global looms large in the academic and popular imagination. The aim of the analysis presented here is to help illuminate how these perceptions play into practices of citizenship in digital spaces. One of the contradictions inherent in this research is that geographic or linguistic information was necessary to find the users interviewed. It may be that users who are geographically anonymous—or who even lie about their location—would have a different relationship to online globality. With that said, several key themes emerged from the interviews: the abstraction and invisibility of digital spaces, the empowerment of geographic fungibility, and the occasional antagonistic deployment of non-fungibility by other users and the participants. Taken together, these themes point to geographic fungibility as a condition that can both stifle political expression and create new arenas for it. Even spontaneous and small acts that aren’t expected to ever reach an audience (Lane et al.) are nevertheless done with an awareness of social processes that extend beyond the national sphere. Moreover, algorithms and metrics, while being the source of invisibility (Bucher), were at times a means of empowerment for those at a physical distance.

In contrast to the cosmopolitan literature, it is not so much that users didn’t identify with their nation as their “community of membership” (Sassen); indeed, they saw it as giving them an important perspective. Rather, they considered politics in the EU, US, UK, Russia, and elsewhere to be part of their national arena. In this way, the findings support Delanty’s description of “changes within … national identities rather than in the emergence in new identities” (42). Yet the interviews do not point to “the desire to go beyond ethnocentricity and particularity” (42). Some of the most adamant and active global communicators were on the right and radical right. For them, opposition to immigration and the strengthening of national identity were major reasons to be on Twitter. Cross-border communication for them was not a form of resistance to nationalism but wholly compatible with it. Instead of the emergence of global or cosmopolitan citizenship, then, I propose that what has emerged is a form of fungible citizenship. This is perhaps a more ambivalent, and certainly a less idealistic, view of digital culture. It implies that users are not elevating their affinities or shedding their national ties; rather, the transnational effects of political decisions are viewed as legitimate grounds for political participation online.
This approach to global platforms builds on and nuances current discursive approaches to citizenship, which emphasise expression (Lane et al.) and contribution (Vaughan et al.) rather than formal participation within institutions. Perhaps the Scandinavian users cannot cast a vote in US elections, but they can still engage in the same forms of expression as any American with a Twitter account. That encounters with non-fungibility were so notable to the participants also points to the mundanity of globality on social media. Vaughan et al. write that “citizens are increasingly accustomed to participating in horizontal networks of relationships which facilitate more expressive, smaller forms of action” (17). The findings here suggest that they are also accustomed to participating in geographically agnostic networks, in which their expressions of citizenship are at once small, interchangeable, and potentially global.

References

Asen, Robert. “A Discourse Theory of Citizenship.” Quarterly Journal of Speech 90.2 (2004): 189–211.
Augé, Marc. Non-Places: Introduction to an Anthropology of Supermodernity. Trans. John Howe. London: Verso, 1995.
Beck, Ulrich. The Cosmopolitan Vision. Trans. Ciaran Cronin. Cambridge: Polity, 2006.
Bruns, Axel, et al. “The Australian Twittersphere in 2016: Mapping the Follower/Followee Network.” Social Media + Society 3.4 (2017): 1–15.
Bucher, Taina. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14.7 (2012): 1164–80.
Burgess, Jean, and Nancy Baym. Twitter: A Biography. New York: New York UP, 2020.
C-SPAN. Russian Election Interference, House Select Intelligence Committee. 24 Feb. 2017. Transcript. 21 Mar. 2017 <https://www.c-span.org/video/?425087-1/fbi-director-investigating-links-trump-campaign-russia>.
Calhoun, Craig. Nations Matter: Culture, History, and the Cosmopolitan Dream. New York: Routledge, 2007.
Christensen, Miyase, and André Jansson. “Complicit Surveillance, Interveillance, and the Question of Cosmopolitanism: Toward a Phenomenological Understanding of Mediatization.” New Media & Society 17.9 (2015): 1473–91.
Dahlgren, Peter. “In Search of the Talkative Public: Media, Deliberative Democracy and Civic Culture.” Javnost – The Public 9.3 (2002): 5–25.
———. “Social Media and Political Participation: Discourse and Deflection.” Critique, Social Media and the Information Society. Eds. Christian Fuchs and Marisol Sandoval. New York: Routledge, 2014. 191–202.
Delanty, Gerard. “The Cosmopolitan Imagination: Critical Cosmopolitanism and Social Theory.” British Journal of Sociology 57.1 (2006): 25–47.
Dyer, Hugh C. Coping and Conformity in World Politics. Routledge, 2009.
Flew, Terry. “Globalization, Neo-Globalization and Post-Globalization: The Challenge of Populism and the Return of the National.” Global Media and Communication 16.1 (2020): 19–39.
Fraser, Nancy. “Transnationalizing the Public Sphere: On the Legitimacy and Efficacy of Public Opinion in a Post-Westphalian World.” Theory, Culture & Society 24.4 (2007): 7–30.
Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Trans. Thomas Burger. Cambridge, Mass.: MIT P, 1991 [1962].
Kligler-Vilenchik, Neta. “Alternative Citizenship Models: Contextualizing New Media and the New ‘Good Citizen’.” New Media & Society 19.11 (2017): 1887–903.
Lane, Daniel S., Kevin Do, and Nancy Molina-Rogers. “What Is Political Expression on Social Media Anyway? A Systematic Review.” Journal of Information Technology & Politics (2021): 1–15.
Larsson, Anders Olof, and Hallvard Moe. “Twitter in Politics and Elections: Insights from Scandinavia.” Twitter and Society. Eds. Katrin Weller et al. New York: Peter Lang, 2014. 319–30.
Linklater, Andrew. “Cosmopolitan Citizenship.” Handbook of Citizenship Studies. Eds. Engin F. Isin and Bryan S. Turner. London: Sage, 2002. 317–32.
McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Ark, 1987 [1964].
Mocanu, Delia, et al. “The Twitter of Babel: Mapping World Languages through Microblogging Platforms.” PLOS ONE 8.4 (2013): e61981.
Picone, Ike, et al. “Small Acts of Engagement: Reconnecting Productive Audience Practices with Everyday Agency.” New Media & Society 21.9 (2019): 2010–28.
Robertson, Alexa. Mediated Cosmopolitanism: The World of Television News. Cambridge: Polity, 2010.
Sassen, Saskia. “Towards Post-National and Denationalized Citizenship.” Handbook of Citizenship Studies. Eds. Engin F. Isin and Bryan S. Turner. London: Sage, 2002. 277–91.
Sevin, Efe, and Sarphan Uzunoğlu. “Do Foreigners Count? Internationalization of Presidential Campaigns.” American Behavioral Scientist 61.3 (2017): 315–33.
Stock, Kristin. “Mining Location from Social Media: A Systematic Review.” Computers, Environment and Urban Systems 71 (2018): 209–40.
Syvertsen, Trine, et al. The Media Welfare State: Nordic Media in the Digital Era. New Media World. Ann Arbor: U of Michigan P, 2014.
Szerszynski, Bronislaw, and John Urry. “Cultures of Cosmopolitanism.” The Sociological Review 50.4 (2002): 461–81.
Vaughan, Michael, et al. “The Role of Novel Citizenship Norms in Signing and Sharing Online Petitions.” Political Studies (2022).
Vromen, Ariadne. Digital Citizenship and Political Engagement: The Challenge from Online Campaigning and Advocacy Organisations. London: Palgrave Macmillan, 2017.
APA, Harvard, Vancouver, ISO, and other styles