Journal articles on the topic 'Mistake of approximation'

To see the other types of publications on this topic, follow the link: Mistake of approximation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 40 journal articles for your research on the topic 'Mistake of approximation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Khvorostyanov, V. I., and J. A. Curry. "Comment on Kokkola et al. (2008) – Comparisons with analytical solutions from Khvorostyanov and Curry (2007) on the critical droplet radii and supersaturations of CCN with insoluble fractions." Atmospheric Chemistry and Physics Discussions 9, no. 2 (April 15, 2009): 9537–50. http://dx.doi.org/10.5194/acpd-9-9537-2009.

Full text
Abstract:
Abstract. Analytical solutions for the critical radii rcr and supersaturations scr of cloud condensation nuclei with insoluble fractions were derived by Khvorostyanov and Curry (2007, hereafter KC07). Similar solutions were found later by Kokkola et al. (2008, hereafter Kok08); however, Kok08 used the approximation of an ideal dilute solution, while KC07 used more accurate assumptions that account for the nonideality of solutions. Kok08 found a large discrepancy with KC07 in the critical supersaturations. Various possible reasons for this are analyzed. It is shown that the major discrepancy was caused by a simple mistake in Kok08 in the equation for the critical supersaturation: an erroneous "plus" sign between the Kelvin and Raoult terms instead of the correct "minus" sign. If this mistake is corrected, the equations from Kok08 mostly repeat the equations from KC07, except that Kok08 use the dilute solution approximation. With the mistake in Kok08 corrected, the differences in the critical radii and supersaturations do not exceed 16–18%, which characterizes the possible errors of the ideal dilute solution approximation. If the Kok08 scheme is corrected and applied to a nonideal solution, then the difference with KC07 does not exceed 0.4–1%.
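For orientation, the sign issue can be seen in the textbook Köhler form for an ideal dilute solution (a schematic expression, not the exact KC07 equations): the equilibrium supersaturation over a droplet of radius r is

\[ \ln S(r) = \frac{A}{r} - \frac{B}{r^{3}}, \qquad r_{\mathrm{cr}} = \sqrt{\frac{3B}{A}}, \qquad s_{\mathrm{cr}} = \sqrt{\frac{4A^{3}}{27B}}, \]

where A is the Kelvin (curvature) coefficient and B the Raoult (solute) coefficient. Writing a plus sign between the two terms, as in the mistake described above, removes the maximum of the curve that defines the critical radius and supersaturation.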
APA, Harvard, Vancouver, ISO, and other styles
2

Sandelin, B. "The Danger of Approximation: Wicksell's Mistake on the Average Period of Investment." History of Political Economy 22, no. 3 (September 1, 1990): 551–55. http://dx.doi.org/10.1215/00182702-22-3-551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Khvorostyanov, V. I., and J. A. Curry. "Comment on "Comparisons with analytical solutions from Khvorostyanov and Curry (2007) on the critical droplet radii and supersaturations of CCN with insoluble fractions" by Kokkola et al. (2008)." Atmospheric Chemistry and Physics 9, no. 16 (August 20, 2009): 6033–39. http://dx.doi.org/10.5194/acp-9-6033-2009.

Full text
Abstract:
Abstract. Analytical solutions for the critical radii and supersaturations of cloud condensation nuclei (CCN) with insoluble fractions were derived by Khvorostyanov and Curry (2007, hereafter KC07). These solutions generalize Köhler's solutions for an arbitrary soluble fraction of CCN and have two limiting cases: large soluble fraction (Köhler's original solution) and a new "low soluble fraction" limit. Similar solutions were found subsequently by Kokkola et al. (2008, hereafter Kok08); however, Kok08 used the approximation of an ideal and dilute solution, while KC07 used more accurate assumptions that account for the nonideality of solutions. Kok08 found a large discrepancy with KC07 in the critical supersaturations. It is shown that the major discrepancy with KC07 found in Kok08 was caused by a simple mistake in Kok08, where the comparison was made not with the general solution from KC07 but with Köhler's solution or with some unknown quantity, not even with the "low soluble fraction" limit. If the general solutions from the two works are compared, the equations from Kok08 mostly repeat the equations from KC07, except that Kok08 use the ideal dilute solution approximation. If the mistake in Kok08 is corrected, then the differences in the critical radii and supersaturations do not exceed 16–18%, which characterizes the errors of the ideal dilute solution approximation. If the Kok08 scheme is modified following KC07 to account for the non-ideality of solution, then the difference with KC07 does not exceed 0.4–1%.
APA, Harvard, Vancouver, ISO, and other styles
4

Stella, K., T. Vinith, K. Sriram, and P. Vignesh. "A Reliable Low Power Multiplier Using Fixed Width Scalable Approximation." Journal of Physics: Conference Series 2070, no. 1 (November 1, 2021): 012135. http://dx.doi.org/10.1088/1742-6596/2070/1/012135.

Full text
Abstract:
Abstract. Approximate computing is a paradigm shift in the design and operation of energy-efficient systems, built on the idea that we limit the efficiency of computer systems by demanding too much precision from them. Interestingly, a large number of application domains, such as DSP, statistics, and AI, tolerate imprecision. Approximate computing is suitable for efficient data processing and error-tolerant applications such as signal and image processing, computer vision, machine learning, data mining and so on. Approximate computing circuits are considered a promising solution for reducing the power consumption of embedded data processing. This paper proposes an FPGA implementation of an approximate multiplier based on fixed-width, partial-product truncation. The performance of the proposed multiplier is evaluated by comparing its power consumption, computational accuracy, and time delay with those of an approximate multiplier based on exact computation. The approximate design achieves an energy-efficient mode of operation with acceptable accuracy. Compared with conventional direct truncation, the proposed model significantly improves performance. Thus, this novel energy-efficient, rounding-based approximate multiplier design outperforms the competing model.
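As a purely illustrative sketch (not the authors' FPGA circuit, and with an arbitrary truncation rule chosen here), the idea of dropping low-weight partial products in a fixed-width multiplier can be written as:

```python
# Illustrative fixed-width truncation multiplication (hypothetical rule, not the
# paper's design): partial products contribute only through columns with weight
# >= n, trading a small accuracy loss for fewer additions.

def truncated_multiply(a: int, b: int, n: int = 8) -> int:
    """Approximate n-bit x n-bit product, discarding columns below bit n."""
    result = 0
    for i in range(n):                            # scan the bits of operand b
        if (b >> i) & 1:
            partial = a << i                      # shifted partial product
            result += partial & ~((1 << n) - 1)   # zero out columns below bit n
    return result

if __name__ == "__main__":
    a, b, n = 200, 180, 8
    exact = a * b
    approx = truncated_multiply(a, b, n)
    print(exact, approx, abs(exact - approx) / exact)  # relative error of the approximation
```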
APA, Harvard, Vancouver, ISO, and other styles
5

Tsung, Chen-Kun, Hann-Jang Ho, and Sing-Ling Lee. "A Game Theoretical Approach for Solving Winner Determination Problems." Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/845071.

Full text
Abstract:
Determining the winners in combinatorial auctions so as to maximize the auctioneer's revenue is an NP-complete problem. Computing an optimal solution requires huge computation time in some instances. In this paper, we apply three concepts from game theory to design an approximation algorithm: the stability of the Nash equilibrium, the self-learning of the evolutionary game, and the mistake-making of the trembling-hand assumption. According to our simulation results, the proposed algorithm produces near-optimal solutions in terms of the auctioneer's revenue. Moreover, reasonable computation time is another advantage of applying the proposed algorithm to real-world services.
APA, Harvard, Vancouver, ISO, and other styles
6

Bilbiie, Florin O. "Optimal Forward Guidance." American Economic Journal: Macroeconomics 11, no. 4 (October 1, 2019): 310–45. http://dx.doi.org/10.1257/mac.20170335.

Full text
Abstract:
Optimal forward guidance is the simple policy of keeping interest rates low for some optimally determined number of periods after the liquidity trap ends and moving to normal-times optimal policy thereafter. I solve for the optimal duration in closed form in a new Keynesian model and show that it is close to fully optimal Ramsey policy. The simple rule “announce a duration of half of the trap’s duration times the disruption” is a good approximation, including in a medium-scale dynamic stochastic general equilibrium (DSGE) model. By anchoring expectations of Delphic agents (who mistake commitment for bad news), the simple rule is also often welfare-preferable to Odyssean commitment. (JEL D84, E12, E43, E52, E56)
APA, Harvard, Vancouver, ISO, and other styles
7

GROUBA, V. D., A. V. ZORIN, and L. A. SEVASTIANOV. "THE SUPERPOSITION APPROXIMATION: A CRITICAL REVIEW." International Journal of Modern Physics B 18, no. 01 (January 10, 2004): 1–44. http://dx.doi.org/10.1142/s0217979204023465.

Full text
Abstract:
We have examined the superposition approximation introduced by J. G. Kirkwood and discuss the origin of the errors that arise when this approximation is used to calculate the structural and thermodynamic properties of systems.
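For context, the Kirkwood superposition approximation writes the triplet distribution function as a product of pair distribution functions,

\[ g^{(3)}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}) \;\approx\; g(r_{12})\,g(r_{13})\,g(r_{23}), \]

a factorization that is not exact in general; the errors reviewed in the article arise from treating it as if it were.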
APA, Harvard, Vancouver, ISO, and other styles
8

BARRIGA-CARRASCO, M. D., and A. Y. POTEKHIN. "Proton stopping in plasmas considering e−–e− collisions." Laser and Particle Beams 24, no. 4 (October 2006): 553–58. http://dx.doi.org/10.1017/s0263034606060733.

Full text
Abstract:
The purpose of the present paper is to describe the effects of electron-electron collisions on proton electronic stopping in plasmas of any degeneracy. The plasma targets are considered fully ionized, so electronic stopping is due only to the free electrons. The stopping due to free electrons is obtained from an exact quantum mechanical evaluation in the random phase approximation, which takes into account the degeneracy of the target plasma. The result is compared with common classical and degenerate approximations. Differences are around 30% in some cases, which can produce larger errors in further energy deposition and projectile range studies. We focus our analysis on weakly coupled plasmas, where electron-electron collisions have to be considered. Differences from the same results obtained without taking collisions into account are more than 50%.
APA, Harvard, Vancouver, ISO, and other styles
9

Platkiewicz, Jonathan, Eran Stark, and Asohan Amarasingham. "Spike-Centered Jitter Can Mistake Temporal Structure." Neural Computation 29, no. 3 (March 2017): 783–803. http://dx.doi.org/10.1162/neco_a_00927.

Full text
Abstract:
Jitter-type spike resampling methods are routinely applied in neurophysiology for detecting temporal structure in spike trains (point processes). Several variations have been proposed. The concern has been raised, based on numerical experiments involving Poisson spike processes, that such procedures can be conservative. We study the issue and find it can be resolved by reemphasizing the distinction between spike-centered (basic) jitter and interval jitter. Focusing on spiking processes with no temporal structure, interval jitter generates an exact hypothesis test, guaranteeing valid conclusions. In contrast, such a guarantee is not available for spike-centered jitter. We construct explicit examples in which spike-centered jitter hallucinates temporal structure, in the sense of exaggerated false-positive rates. Finally, we illustrate numerically that Poisson approximations to jitter computations, while computationally efficient, can also result in inaccurate hypothesis tests. We highlight the value of classical statistical frameworks for guiding the design and interpretation of spike resampling methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Golovaneva, Marina. "Dualism of Approximation Principle in Linguadidactics." Bulletin of Kemerovo State University. Series: Humanities and Social Sciences 2020, no. 4 (January 18, 2021): 287–96. http://dx.doi.org/10.21603/2542-1840-2020-4-4-287-296.

Full text
Abstract:
Dualism is a specific quality of the approximation principle. The present research featured the potential of the approximation principle for linguadidactics, namely to what degree it can be used to teach Russian as a foreign language and to perform correction work in class. The research objective was to assess the efficiency of this principle. The study was based on the method of observation. The article introduces analyses of scientific linguadidactic literature and some typical situations in the educational process. The author separates correction work from speech activity, i.e. talking, writing, and reading. The author believes that speech mistakes must be corrected immediately, involving the student in the correction process. Graphic facilities should be used to illustrate the norm. Therefore, in practical linguadidactics, the approximation principle should be applied minimally. Yet approximation is impossible to avoid in the abovementioned types of speech activity, which points to the dualism of this principle. Therefore, all errors must be corrected, using graphic means if possible, by both the teacher and the student. Students should be encouraged to participate in the correction process, while the teacher maintains supervisory control.
APA, Harvard, Vancouver, ISO, and other styles
11

CAPASSO, MARCO, LUCIA ALESSI, MATTEO BARIGOZZI, and GIORGIO FAGIOLO. "ON APPROXIMATING THE DISTRIBUTIONS OF GOODNESS-OF-FIT TEST STATISTICS BASED ON THE EMPIRICAL DISTRIBUTION FUNCTION: THE CASE OF UNKNOWN PARAMETERS." Advances in Complex Systems 12, no. 02 (April 2009): 157–67. http://dx.doi.org/10.1142/s0219525909002131.

Full text
Abstract:
This paper discusses some problems that may arise when approximating, via Monte-Carlo simulations, the distributions of goodness-of-fit test statistics based on the empirical distribution function. We argue that failing to re-estimate the unknown parameters on each simulated Monte-Carlo sample — and thus failing to employ this information in building the test statistic — may lead to wrong, overly conservative testing outcomes. Furthermore, we present some simple examples suggesting that the impact of this possible mistake may turn out to be dramatic and does not vanish as the sample size increases.
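A minimal sketch of the point being made, assuming a normal null hypothesis with unknown mean and variance and a Kolmogorov-Smirnov statistic (the paper's setting is more general):

```python
# Minimal sketch (assumed normal null, unknown mean/variance): the Monte-Carlo
# null distribution of the KS statistic is built by RE-estimating the parameters
# on every simulated sample; skipping that step gives an overly conservative test.
import numpy as np
from scipy import stats

def ks_stat_with_estimated_params(x):
    mu, sigma = x.mean(), x.std(ddof=1)          # parameters re-estimated from this sample
    return stats.kstest(x, "norm", args=(mu, sigma)).statistic

def mc_pvalue(data, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    d_obs = ks_stat_with_estimated_params(data)
    mu, sigma = data.mean(), data.std(ddof=1)
    d_sim = np.array([
        ks_stat_with_estimated_params(rng.normal(mu, sigma, size=len(data)))
        for _ in range(n_sim)
    ])
    return np.mean(d_sim >= d_obs)               # Monte-Carlo p-value

if __name__ == "__main__":
    sample = np.random.default_rng(1).normal(5.0, 2.0, size=100)
    print(mc_pvalue(sample))
```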
APA, Harvard, Vancouver, ISO, and other styles
12

DE LEO, STEFANO, GISELE DUCAT, and PIETRO ROTELLI. "REMARKS UPON THE MASS OSCILLATION FORMULAS." Modern Physics Letters A 15, no. 33 (October 30, 2000): 2057–68. http://dx.doi.org/10.1142/s0217732300002395.

Full text
Abstract:
The standard formula for mass oscillations is often based upon the approximation t ≈ L and the hypothesis that neutrinos have been produced with a definite momentum p or, alternatively, with a definite energy E. This represents an inconsistent scenario and gives an unjustified reduction by a factor of two in the mass oscillation formulas. Such an ambiguity has been a matter of speculation and mistakes in discussing flavor oscillations. We present a series of results and show that the problem of the factor of two in the oscillation length is not a consequence of Gedanken experiments, i.e. oscillations in time. The common velocity scenario yields the maximum simplicity.
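For reference, the standard two-flavour vacuum oscillation probability (natural units) is

\[ P_{\alpha\to\beta}(L) \;=\; \sin^{2}2\theta \,\sin^{2}\!\left(\frac{\Delta m^{2} L}{4E}\right), \]

and the factor-of-two ambiguity discussed above is between this phase, \(\Delta m^{2}L/4E\), and derivations that instead arrive at \(\Delta m^{2}L/2E\); the formula is quoted here as the textbook form, not as the paper's final conclusion.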
APA, Harvard, Vancouver, ISO, and other styles
13

Husham Ali, Basheer, Ahmed Adeeb Jalal, and Wasseem N. Ibrahem Al-Obaydy Al-Obaydy. "Data loss prevention (DLP) by using MRSH-v2 algorithm." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 4 (August 1, 2020): 3615. http://dx.doi.org/10.11591/ijece.v10i4.pp3615-3622.

Full text
Abstract:
Sensitive data may be stored in different forms. Not only legal owners but also malicious people are interested in obtaining sensitive data. Exposing valuable data to others leads to severe consequences. Customers, organizations, and/or companies lose their money and reputation due to data breaches. There are many reasons for data leakages. Internal threats such as human mistakes and external threats such as DDoS attacks are two main reasons for data loss. In general, data may be categorized into three kinds: data in use, data at rest, and data in motion. Data loss prevention (DLP) tools are good tools for identifying important data. DLP can analyze data content and send feedback to administrators to support decisions such as filtering, deleting, or encrypting. Data loss prevention (DLP) tools are not a final solution for data breaches, but they are considered good security tools for eliminating malicious activities and protecting sensitive information. There are many kinds of DLP techniques, and approximate matching is one of them. MRSH-v2 is one type of approximate matching. It is implemented and evaluated using the TS dataset and a confusion matrix. Finally, MRSH-v2 has high true-positive and sensitivity scores and a low false-negative score.
APA, Harvard, Vancouver, ISO, and other styles
14

Cezikturk, Ozlem. "Spreadsheets for Numerical Analysis: A conceptual tool." Academic Perspective Procedia 2, no. 1 (April 6, 2019): 57–65. http://dx.doi.org/10.33793/acperpro.02.01.12.

Full text
Abstract:
Using spreadsheets for mathematics education is not a new idea. However, analysing students' logical mistakes to record error types and study conceptual decision making would be a first. A class of students was given two homework assignments for grading. They were told to carry out the calculations in spreadsheets and to submit the homework as a spreadsheet file. In H1, the questions were on root finding and the solution of simultaneous equations. In H2, the questions were on line and curve approximations, interpolation, numerical integration and numerical differentiation. These files were analysed using qualitative content analysis. In this way, it is hypothesized that students would make similar errors with respect to error types and that their decision making would inform us about their conceptual learning.
APA, Harvard, Vancouver, ISO, and other styles
15

Prather, Michael J., and Christopher D. Holmes. "A perspective on time: loss frequencies, time scales and lifetimes." Environmental Chemistry 10, no. 2 (2013): 73. http://dx.doi.org/10.1071/en13017.

Full text
Abstract:
Environmental context The need to describe the Earth’s system or any of its components with a quantity that has units of time is ubiquitous. These quantities are used as metrics of the system to describe the response to a perturbation, the cumulative effect of an action or just the budget in terms of sources and sinks. Given a complex, non-linear system, there are many different ways to derive such quantities, and careful definitions are needed to avoid mistaken approximations while providing useful parameters describing the system. Abstract Diagnostic quantities involving time include loss frequency, decay times or time scales and lifetimes. For the Earth’s system or any of its components, all of these are calculated differently and have unique diagnostic properties. Local loss frequency is often assumed to be a simple, linear relationship between a species and its loss rate, but this fails in many important cases of atmospheric chemistry where reactions couple across species. Lifetimes, traditionally defined as total burden over loss rate, are mistaken for a time scale that describes the complete temporal behaviour of the system. Three examples here highlight: local loss frequencies with non-linear chemistry (tropospheric ozone); simple atmospheric chemistry with multiple reservoirs (methyl bromide) and fixed chemistry but evolving lifetimes (methyl chloroform). These are readily generalised to other biogeochemistry and Earth system models.
APA, Harvard, Vancouver, ISO, and other styles
16

Gopinathan, Sreenath M., Alessandra Bigongiari, and Maria Heckl. "Analytical approximations for heat release rate laws in the time- and frequency-domains." International Journal of Spray and Combustion Dynamics 12 (January 2020): 175682772093049. http://dx.doi.org/10.1177/1756827720930491.

Full text
Abstract:
This paper focusses on the relationship between the heat release rate and the acoustic field, which is a crucial element in modelling thermoacoustic instabilities. The aim of the paper is twofold. The first aim is to develop a transformation tool, which makes it easy to switch between the time-domain representation (typically a heat release law involving time-lags) and the frequency-domain representation (typically a flame transfer function) of this relationship. Both representations are characterised by the same set of parameters n1, n2, …, nk. Their number is quite small, and they have a clear physical meaning: they are time-lag dependent coupling coefficients. They are closely linked to the impulse response of the flame in the linear regime in that they are proportional to the discretised (with respect to time) impulse response. In the nonlinear regime, the parameters n1, n2, …, nk become amplitude-dependent. Their interpretation as time-lag dependent coupling coefficients prevails; however, the link with the impulse response is lost. Nonlinear flames are commonly described in the frequency-domain by an amplitude-dependent flame transfer function, the so-called flame describing function. The time-domain equivalent of the flame describing function is sometimes mistaken for a ‘nonlinear impulse response’, but this is not correct. The second aim of this paper is to highlight this misconception and to provide the correct interpretation of the time-domain equivalent of the flame describing function.
APA, Harvard, Vancouver, ISO, and other styles
17

Loudon, Rodney. "One-dimensional hydrogen atom." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 472, no. 2185 (January 2016): 20150534. http://dx.doi.org/10.1098/rspa.2015.0534.

Full text
Abstract:
The theory of the one-dimensional (1D) hydrogen atom was initiated by a 1952 paper but, after more than 60 years, it remains a topic of debate and controversy. The aim here is a critique of the current status of the theory and its relation to relevant experiments. A 1959 solution of the Schrödinger equation by the use of a cut-off at x = a to remove the singularity at the origin in the 1/| x | form of the potential is clarified and a mistaken approximation is identified. The singular atom is not found in the real world but the theory with cut-off has been applied successfully to a range of four practical three-dimensional systems confined towards one dimension, particularly their observed large increases in ground state binding energy. The true 1D atom is in principle restored when the short distance a tends to zero but it is sometimes claimed that the solutions obtained by the limiting procedure differ from those obtained by solution of the basic Schrödinger equation without any cut-off in the potential. The treatment of the singularity by a limiting procedure for applications to practical systems is endorsed.
APA, Harvard, Vancouver, ISO, and other styles
18

Movassagh, Kiarash, Arif Raihan, Balakumar Balasingam, and Krishna Pattipati. "A Critical Look at Coulomb Counting Approach for State of Charge Estimation in Batteries." Energies 14, no. 14 (July 6, 2021): 4074. http://dx.doi.org/10.3390/en14144074.

Full text
Abstract:
In this paper, we consider the problem of state-of-charge estimation for rechargeable batteries. Coulomb counting is a well-known method for estimating the state of charge, and it is regarded as accurate as long as the battery capacity and the beginning state of charge are known. The Coulomb counting approach, on the other hand, is prone to inaccuracies from a variety of sources, and the magnitude of these errors has not been explored in the literature. We formally construct and quantify the state-of-charge estimate error during Coulomb counting due to four types of error sources: (1) current measurement error; (2) current integration approximation error; (3) battery capacity uncertainty; and (4) timing oscillator error/drift. It is demonstrated that the state-of-charge error produced can be either time-cumulative or state-of-charge-proportional. Time-cumulative errors accumulate over time and have the potential to render the state-of-charge estimation utterly invalid in the long term. The proportional errors of the state of charge rise with the accumulated state of charge and reach their worst value within one charge/discharge cycle. The study presents methods for reducing time-cumulative and state-of-charge-proportional mistakes through simulation analysis.
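A minimal sketch with illustrative numbers only (not the paper's formal error model), showing how a current-sensor bias accumulates with time while a capacity error scales with the accumulated charge:

```python
# Minimal Coulomb-counting sketch (illustrative values only): SoC is the integral
# of current over capacity, so a current-measurement bias accumulates with time
# while a capacity error grows with the accumulated charge.
import numpy as np

def coulomb_count(current_a, dt_s, capacity_ah, soc0=1.0):
    """Discrete Coulomb counting: SoC[k] = SoC[0] + sum(i*dt) / (3600*Q)."""
    return soc0 + np.cumsum(current_a) * dt_s / (3600.0 * capacity_ah)

if __name__ == "__main__":
    dt = 1.0                                   # s
    t = np.arange(0.0, 3600.0, dt)
    true_current = -2.0 * np.ones_like(t)      # 2 A discharge
    true_q = 10.0                              # Ah

    soc_true = coulomb_count(true_current, dt, true_q)
    soc_biased = coulomb_count(true_current - 0.05, dt, true_q)   # 50 mA sensor bias (time-cumulative error)
    soc_wrong_q = coulomb_count(true_current, dt, 0.9 * true_q)   # 10% capacity error (SoC-proportional error)

    print(soc_true[-1], soc_biased[-1], soc_wrong_q[-1])
```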
APA, Harvard, Vancouver, ISO, and other styles
19

Slongo, Juliano Scholz, Jefferson Gund, Thiago Alberto Rigo Passarin, Daniel Rodrigues Pipa, Júlio Endress Ramos, Lucia Valeria Arruda, and Flávio Neves Junior. "Effects of Thermal Gradients in High-Temperature Ultrasonic Non-Destructive Tests." Sensors 22, no. 7 (April 6, 2022): 2799. http://dx.doi.org/10.3390/s22072799.

Full text
Abstract:
Ultrasonic inspection techniques and non-destructive tests are widely applied in evaluating products and equipment in the oil, petrochemical, steel, naval, and energy industries. These methods are well established and efficient for inspection procedures at room temperature. However, errors can be observed in the positioning and sizing of the flaws when such techniques are used during inspection procedures under high working temperatures. In such situations, the temperature gradients generate acoustic anisotropy and consequently distortion of the ultrasonic beams. Failure to consider such distortions in ultrasonic signals can result, in extreme situations, in mistaken decision making by inspectors and professionals responsible for guaranteeing product quality or the integrity of the evaluated equipment. In this scenario, this work presents a mathematical tool capable of mitigating positioning errors through the correction of focal laws. For the development of the tool, ray tracing concepts are used, as well as a model of heat propagation in solids and an experimentally defined linear approximation of dependence between sound speed and temperature. Using the focal law correction tool, the relative firing delays of the active elements are calculated considering the temperature gradients along the sonic path, and the results demonstrate a reduction of more than 68% in the error of flaw positioning.
APA, Harvard, Vancouver, ISO, and other styles
20

Swetz, Frank J. "The Volume of a Sphere: A Chinese Derivation." Mathematics Teacher 88, no. 2 (February 1995): 142–45. http://dx.doi.org/10.5951/mt.88.2.0142.

Full text
Abstract:
The corpus of mathematical knowledge possessed by the ancient Chinese can primarily be found in one book, Jiuzhang suanshu, or Nine Chapters on the Mathematical Art. This book was compiled about 200 B.C. as a compendium of the mathematics known and used in China. It was a bureaucratic handbook and contained all the mathematics believed necessary for a court official to run the empire; for example, the book possessed sections devoted to computing tax rates and salaries, finding the volumes of geometric solids needed in the construction of fortifications, and undertaking land surveys (Swetz 1972). In the presence of such an officially sanctioned handbook, mathematical scholarship was limited to copying this work and perhaps commenting on its text. Through this process, the contents of the Nine Chapters were both perpetuated and clarified. The crude mathematical approximations it contained were refined, and actual mistakes in its formulas and procedures were eventually corrected. One such error to undergo this evolution was the determination of the volume of a sphere with a known radius. Examining just how ancient Chinese mathematicians attempted to resolve this problem lends valuable insights into early mathematical thinking and problem solving.
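For comparison (the historical value below is a commonly cited reconstruction, stated here as an assumption rather than taken from this article), the exact volume and the rule implied by the early text are approximately

\[ V = \frac{4}{3}\pi r^{3} = \frac{\pi}{6}d^{3} \approx 0.524\,d^{3}, \qquad V_{\text{early rule}} \approx \frac{9}{16}d^{3} = 0.5625\,d^{3}, \]

an overestimate of roughly 7%.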
APA, Harvard, Vancouver, ISO, and other styles
21

Barbu, Adrian, and Hongyu Mou. "The Compact Support Neural Network." Sensors 21, no. 24 (December 20, 2021): 8494. http://dx.doi.org/10.3390/s21248494.

Full text
Abstract:
Neural networks are popular and useful in many fields, but they have the problem of giving high-confidence responses for examples that are far from the training data. This makes neural networks very confident in their predictions even while making gross mistakes, limiting their reliability in safety-critical applications such as autonomous driving, space exploration, etc. This paper introduces a novel neuron generalization that has the standard dot-product-based neuron and the radial basis function (RBF) neuron as the two extreme cases of a shape parameter. Using a rectified linear unit (ReLU) as the activation function results in a novel neuron that has compact support, which means its output is zero outside a bounded domain. To address the difficulties in training the proposed neural network, the paper introduces a novel training method that starts from a pretrained standard neural network and fine-tunes it while gradually increasing the shape parameter to the desired value. The theoretical findings of the paper are a bound on the gradient of the proposed neuron and a proof that a neural network with such neurons has the universal approximation property. This means that the network can approximate any continuous and integrable function with an arbitrary degree of accuracy. The experimental findings on standard benchmark datasets show that the proposed approach has smaller test errors than state-of-the-art competing methods and outperforms the competing methods in detecting out-of-distribution samples on two out of three datasets.
APA, Harvard, Vancouver, ISO, and other styles
22

Zagrevskiy, V., and O. Zagrevskiy. "ANALYTICAL MODEL FOR COMPENSATING MOTION ERROR IN THE ADAPTIVE MOTION CONTROL OF THE BIOMECHANICAL SYSTEM." Human Sport Medicine 19, no. 2 (July 13, 2019): 79–85. http://dx.doi.org/10.14529/hsm190210.

Full text
Abstract:
Aim. The article deals with developing software to simulate the motion of an object with given parameters of the initial and final phase states. Materials and methods. A motion error in a sports exercise is the result of kinematic deviation from the parameters of a given motion program. The mathematical apparatus of adaptive control allows motion errors between the programmed and the real trajectory to be neutralized. It is based on using information about the current parameters of the phase state of a moving object in the mathematical structure of the control function. The article proposes and experimentally verifies the hypothesis about the computer synthesis of motions in biomechanical systems based on the mathematical apparatus of adaptive control. In the computational experiments, the mathematical description of the object is based on a well-established law of open-time approximation (A. P. Batenko, 1977), which requires that both velocity and coordinates simultaneously take the given values. Motion time in this law is an uncontrolled parameter. The mathematical model of the moving object is built as a system of first-order differential equations. Results. A mathematical model describing the motion of a material point with given phase coordinates at the initial and final points in time is implemented in a computer program. The program is built with the integrated development environment Visual Studio Express 2013 and the Visual Basic 2010 language environment. Conclusion. The developed computer model of adaptive control achieves the aim of any motion, which implies transferring an object from a given initial state to the required final state.
APA, Harvard, Vancouver, ISO, and other styles
23

Asokan, R., and T. Vijayakumar. "Design of Extended Hamming Code Technique Encryption for Audio Signals by Double Code Error Prediction." September 2021 3, no. 3 (October 27, 2021): 179–92. http://dx.doi.org/10.36548/jitdw.2021.3.003.

Full text
Abstract:
Noise can scramble a message that is sent. This is true both for voicemails and for digital communications transmitted to and from computer systems. During transmission, mistakes tend to happen. Computer memory is the most common place where Hamming-code error correction is used. With extra parity/redundancy bits added to the Hamming code, single-bit errors may be detected and corrected. Short-distance data transmissions often make use of Hamming coding. The redundancy bits are interspersed and subsequently removed when the code is scaled to longer data lengths. The new Hamming-code approach may be quickly and easily adapted to any situation. As a result, it is ideal for sending large data bitstreams, since the ratio of overhead bits to data bits is much lower. This article investigates extended Hamming codes for product codes. The proposal particularly emphasises how well they perform at low error rates, which is critical for multimedia wireless applications. It provides a foundation and a comprehensive set of methods for quantitatively evaluating this performance without the need for time-consuming simulations. It provides fresh theoretical findings on the well-known approximation in which the bit error rate is roughly equal to the frame error rate times the ratio of the minimum distance to the codeword length. Moreover, the analytical method is applied to practical design considerations such as shortened and punctured codes, along with the calculation of payload and redundancy bits. Using the extended identity equation on the dual codes, decoding can be done in the first instance. A redundancy of 43.48% is achieved during the testing process, a substantial reduction obtained in this research work.
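In symbols, the approximation referred to above reads, for an (n, k) code with minimum distance d_min,

\[ \mathrm{BER} \;\approx\; \mathrm{FER}\cdot\frac{d_{\min}}{n}, \]

i.e. a decoded frame error typically corrupts on the order of d_min of the n code bits.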
APA, Harvard, Vancouver, ISO, and other styles
24

Lazariev, Viktor. "Features of the use of legal terminology in the countries of the European Union." ScienceRise: Juridical Science, no. 3(17) (September 30, 2021): 4–8. http://dx.doi.org/10.15587/2523-4153.2021.241513.

Full text
Abstract:
The article examines the peculiarities of the use of legal terminology in the countries of the European Union. It is emphasized that the reform of domestic legislation and its approximation to world standards requires thorough research of European legislation. In particular, the cornerstone of today's challenge is the proper use of, and common approaches to, legal terminology. That is why theoretical research on the peculiarities of the use of legal terminology in the legislation of the European Union is necessary in order to properly improve domestic legislation and avoid mistakes and misunderstandings in the future. Emphasis is placed on the fact that domestic and foreign researchers have not developed a unified approach to understanding the term "terminology". That is why this term is used in three different meanings. It is also noted that the category "term" is not new and has long been in the field of view of researchers, but its direct study has only recently begun. It is noted that legal terminology is considered technical, i.e. it is the most noticeable and striking linguistic feature of legal language. In this sense, legal terminology is used to denote concepts that belong to the legal field. It is emphasized that, in contrast to the systemic languages used in national texts, the European Union resorts to a form of "cultural communication" governed by multilingualism. Such communication is considered cultural because it is not rooted in any of the national cultures of the EU Member States. Legal acts adopted by the EU institutions, which are to be applied and enforced in all Member States, must avoid cultural specificities, and therefore the concepts or terminology specific to any one national legal system must be used with caution.
APA, Harvard, Vancouver, ISO, and other styles
25

Tsukanov, Ruslan, and Viktor Riabkov. "Transport category airplane flight range calculation accounting center-of-gravity position shift and engine throttling characteristics." Aerospace technic and technology, no. 5 (October 6, 2021): 4–14. http://dx.doi.org/10.32620/aktt.2021.5.01.

Full text
Abstract:
A problem facing world commercial aviation is the provision of flight range and an increase in the fuel efficiency of transport category airplanes through the use of fuel trim transfer, which allows the airplane's trim drag to be decreased in cruise flight. In existing mathematical models, the center-of-gravity position is usually assumed to be fixed, but as fuel is used, the center of gravity shifts within a definite range of positions. As long as fuel trim transfer was not used in airplanes, the center-of-gravity shift range was rather short, which allowed the fixed-position assumption to be used without any considerable error. When fuel trim transfer is used, center-of-gravity shifts can reach 15…20 % of the mean aerodynamic chord, which requires the actual center-of-gravity position to be considered in the flight range calculation. Earlier estimated calculations showed the need to further improve the mathematical model by accounting for real engine throttling characteristics. The goal of this publication is to develop a method of flight range calculation for transport category airplanes that takes into account the actual center-of-gravity position as fuel is used and the variation in engine specific fuel consumption according to the throttling characteristics. On the basis of real data from engine maintenance manuals, formulas are obtained for approximating the throttling characteristics of turbofan engines in the form of the dependence of the dimensionless specific fuel consumption (related to the specific fuel consumption at full thrust) on the engine throttling coefficient. A mathematical model (an algorithm and its program implementation in the C language in the Power Unit 11.7 R03 system) has been developed to calculate the airplane flight range accounting for the actual center-of-gravity shift with fuel usage and the variation in specific fuel consumption according to the engine throttling characteristics. By comparison with a known payload-range diagram, the adequacy of the developed mathematical model is shown. Recommendations for improving the mathematical model are also given.
APA, Harvard, Vancouver, ISO, and other styles
26

Simpkin, Adam J., Felix Simkovic, Jens M. H. Thomas, Martin Savko, Andrey Lebedev, Ville Uski, Charles Ballard, et al. "SIMBAD: a sequence-independent molecular-replacement pipeline." Acta Crystallographica Section D Structural Biology 74, no. 7 (June 8, 2018): 595–605. http://dx.doi.org/10.1107/s2059798318005752.

Full text
Abstract:
The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.
APA, Harvard, Vancouver, ISO, and other styles
27

Bammert, K., A. Hegazy, and H. Lange. "Determination of the Distribution of Incident Solar Radiation in Cavity Receivers With Approximately Real Parabolic Dish Collectors." Journal of Solar Energy Engineering 112, no. 4 (November 1, 1990): 237–43. http://dx.doi.org/10.1115/1.2929929.

Full text
Abstract:
The absorption of solar heat and the attendant thermal and mechanical loadings on the tubes of cavity receivers depend predominantly on the flux distribution of the incident solar radiation. For an axially symmetric cavity receiver with a parabolic dish collector, it is simple to determine the insolation pattern on the receiver internal surfaces if the system is ideal. In such a system the surface of the dish is perfectly parabolic (no contour flaws are present), and the sun's central ray impinges on the dish surface parallel to the focal axis (no sun-tracking flaws are present). These two conditions cannot be achieved in practice, and therefore the feasible parabolic dish system is referred to as a "real" system although, in actual fact, it is only an approximation to any actual system. The purpose of this paper is to devise calculation principles which permit analysis of a receiver designed for ideal conditions (Bammert and Seifert, 1983; Bammert and Hegazy, 1984; Johanning, 1987) to verify its structural adequacy under the nonideal conditions to be expected in reality. Of the many possible imperfections in real collectors, two were selected which increase the loadings sustained. The first case concerns flaws in the contour of the dish surface. These locally increase the radiation concentration on the receiver inside walls and tubing. In the second case, sun-tracking errors give rise to axially asymmetric radiation distributions. In both examples, greater than design basis loadings will occur in the receiver tubing. Both kinds of flaws considered in this paper are of a purely deterministic nature. Other flaws statistically distributed on the dish surface (Köhne and Kleih, 1987; Güven, Bannerot, and Mistree, 1983; O'Neill and Hudson, 1978; Ratzel et al., 1987) do not cause structural overloading but must be taken into account in thermal performance analysis. The paper presents a method of analyzing the flux distribution on the internal surfaces of a cavity receiver with an approximately real parabolic dish collector. After a discussion of the theoretical principles, the effects of collector contour and sun-tracking errors on the insolation pattern are described with reference to an example.
APA, Harvard, Vancouver, ISO, and other styles
28

Ratushniy, R. I., N. Goderdzi, M. Yu Goncharuk-Khomyn, S. B. Kostenko, I. V. Penzelyk, and A. S. Chobeі. "АNALYSIS OF CLINICAL AND EXPERIMENTAL FORECASTING OF INFLUENCE OF ERGONOMICS OF DENTISTS WORK ON THE RESULT OF ENDONTIC TREATMENT." Ukrainian Dental Almanac, no. 1 (March 28, 2022): 61–69. http://dx.doi.org/10.31718/2409-0255.1.2022.11.

Full text
Abstract:
Abstract. The ergonomic aspect of work is one of the keys to the daily practice of a dentist. In-depth study and development of ways to optimize the basic ergonomic principles of work, monitoring the dynamics of their implementation and, if necessary, correcting them is an important scientific and practical issue that can raise the level of dental care. The aim of the study was to assess, with software support, the relationship between the ergonomic components of dentists' work and the outcome of endodontic treatment and the risk of errors during root canal treatment. Materials and methods: the targeted research methods Rapid Upper Limb Assessment (for the upper extremities) and Rapid Entire Body Assessment (for the whole body of the dentist), the StatPlus Pro software, and X-ray examination. Simulation of changes in the position of individual components of the musculoskeletal system during root canal treatment, and the accompanying analysis of the resulting numerical parameters, were performed using the adapted software Tecnomatix Jack (Siemens). Results and discussion. Each stage of the dentist's work cycle during endodontic interventions was stratified into segmented scenarios, which were compared with video monitoring data to ensure a sufficient level of approximation; the deviations from the ergonomically reasoned position of the dentist's body were then identified, taking into account how often they recurred, and the results were interpreted in terms of the quantitative and qualitative characteristics of the observed deviations. Based on the systematization of the main deviations of individual elements of the musculoskeletal system, and of the dentist's body as a whole, from the ergonomically reasoned ranges, the most critical violations were first corrected in the digital environment and the organization of the work process was re-tested according to RULA and REBA. Systematizing the data of the regression analysis, a characteristic decrease in the frequency of recorded mistakes made during endodontic treatment of all groups of teeth can be noted at the achievement of the highest values of the RULA and REBA indicators. Based on the results of modeling and the theoretical justification of the necessary ergonomic changes, a set of individual recommendations was formulated for each dentist in the study sample; their implementation in the workflow helped increase the effectiveness of endodontic interventions and reduce the number of errors. Conclusions. Modeling the main patterns of changes in the working position of dentists during root canal treatment, and analyzing them within the digital environment, promotes the targeted identification of problematic elements of the workflow in terms of compliance with ergonomic criteria and the specifics of their changes, including the possible use of optical magnification equipment and work with rotary and manual endodontic instruments. The proposed approach to optimizing endodontic treatment in terms of compliance with the relevant ergonomic criteria is individual-specific, and the systematization of the general characteristics registered across the entire sample will help expand an integrated system for improving the quality and efficiency of dental care. Prospects for further research. Evaluation of the practical significance and actual feasibility of using a discrete-event modeling approach to triangulation relations to optimize the ergonomic components of the work process during endodontic treatment.
APA, Harvard, Vancouver, ISO, and other styles
29

"Correction of Non-Linearity of Load Cell using Adaptive Technique with Mathematical Approximation." International Journal of Recent Technology and Engineering 9, no. 1 (May 30, 2020): 1112–15. http://dx.doi.org/10.35940/ijrte.c5117.059120.

Full text
Abstract:
A load cell is used to evaluate the weight of unknown objects. It presents noise at the output due to different internal and external variables, so the output deviates from the required response. This project's primary goal is to use adaptive and approximation methods to rectify a load cell's output response. Approximation techniques are first used to generate the reference, or training, signal. To generate the training signal, least squares approximation (LSA) and particle swarm optimization (PSO) techniques are used and optimized toward the desired value. This training signal is later used as the reference signal in an adaptive scheme. Adaptive methods are used to correct the load cell's output response. In the adaptive filter, least mean squares (LMS) algorithms are used to remove the noise from the load cell output. The noise is primarily caused by creep and drift errors at the output. The adaptive filter uses the reference signal produced by the approximation methods to eliminate both creep and drift errors and to produce the required load-cell response.
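A minimal sketch with synthetic data and an illustrative step size (the paper's filter design and reference-generation stage are not reproduced here):

```python
# Minimal LMS sketch (synthetic drift and noise, illustrative step size): the
# adaptive filter weights are updated so that the noisy/drifting load-cell
# output tracks the reference (training) signal from the approximation stage.
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.001):
    """x: noisy load-cell signal, d: reference signal, returns filtered output."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]   # most recent samples first
        y[k] = w @ xk                # filter output
        e = d[k] - y[k]              # error w.r.t. the reference
        w += mu * e * xk             # LMS weight update
    return y

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1000)
    reference = 5.0 * np.ones_like(t)                                            # ideal (training) response
    noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)
    measured = reference + 0.3 * t + noise                                        # drift + noise
    corrected = lms_filter(measured, reference)
    print(np.mean(np.abs(reference[200:] - corrected[200:])))                     # residual error after adaptation
```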
APA, Harvard, Vancouver, ISO, and other styles
30

"HOW LONG DOES IT TAKE FOR YOUR MONEY OR DEBT TO DOUBLE? EXPLAINING THE RULE OF 72 TO STUDENTS." Journal of Management and Business Education 5, no. 1 (February 7, 2022): 38–47. http://dx.doi.org/10.35564/jmbe.2022.0003.

Full text
Abstract:
The ‘rule of 72’ provides a useful approximation of when an investment or debt will double. Students can apply it to get an estimate, avoiding mistakes later when using technology for a precise answer. On standardized tests, moreover, such devices may be disallowed. In job interviews, too, quickly approximating the doubling time demonstrates impressive problem-solving ability. Illustrations showing how to use the rule abound online and in traditional media, but that is not the same as explaining why it works or what its limitations are. Inquiring students want to know. This paper combines familiar territory in mathematics and application to provide a relatively simple mathematical explanation.
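A worked comparison (standard compound-interest arithmetic): at a growth rate of r percent per period,

\[ t_{\text{exact}} = \frac{\ln 2}{\ln(1 + r/100)}, \qquad t_{72} \approx \frac{72}{r}. \]

For r = 8, the rule gives 72/8 = 9 periods, while the exact value is ln 2 / ln 1.08 ≈ 9.01 periods; the approximation is least accurate for rates far from this range.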
APA, Harvard, Vancouver, ISO, and other styles
31

Mayhew, Kent W. "New Thermodynamics: Understanding Temperature’s Limitations." European Journal of Applied Physics 2, no. 2 (March 25, 2020). http://dx.doi.org/10.24018/ejphysics.2020.2.2.5.

Full text
Abstract:
We shall enhance our understanding of temperature, as introduced in a previous paper [1]. Temperature is traditionally treated as if it has a linear relation to a system's thermal energy throughout most temperature regimes. The limitations of this relation will be discussed, along with an improved understanding of why a system's measured temperature does represent a measurement of that system's thermal energy. It will further be discussed why statistical thermodynamics is mistaken in various of its assertions, ending with a discussion of why the Maxwell-Boltzmann speed distribution is, at best, only a rough to good approximation of what is witnessed in experimental gaseous systems.
APA, Harvard, Vancouver, ISO, and other styles
32

Ahmed, Md Dilsad, Walter King Yan Ho, Shaheen Begum, and Guillermo Felipe López Sánchez. "Perfectionism, Self-Esteem, and the Will to Win Among Adolescent Athletes: The Effects of the Level of Achievements and Gender." Frontiers in Psychology 12 (August 10, 2021). http://dx.doi.org/10.3389/fpsyg.2021.580446.

Full text
Abstract:
This study examined the relationships between perfectionism, self-esteem, and the will to win, and the effects of gender and the level of achievement on these variables. A total of 318 adolescents in the age group of 12–19 years (M = 16.10 ± 1.01) completed the self-esteem questionnaire, the will-to-win questionnaire, and the perfectionism inventory. Interstate-level (ISL) athletes obtained higher scores than interdistrict-level (IDL) athletes on the following variables: self-esteem, the will to win, and four of the eight dimensions of perfectionism (i.e., concern over mistakes, the need for approval, organization, and planfulness). Further, male athletes obtained higher self-esteem and perfectionism (i.e., the need for approval and rumination) scores than female athletes. Self-esteem, the will to win, and the dimensions of perfectionism were positively and significantly interrelated. However, one dimension, namely perceived parental pressure, was unrelated to any factor except striving for excellence. Further, the will to win, concern over mistakes, high standards for others, and planfulness were unrelated to striving for excellence. The results of the discriminant analysis revealed that there was no significant difference between ISL and IDL athletes (variance explained = 9.480%). Finally, path analysis showed that Model 3 (perfectionism → self-esteem → will to win) provided good model fit, with Bentler's comparative fit index (CFI) = 0.987, Tucker-Lewis index (TLI) = 0.876, normed fit index (NFI) = 0.973, and root mean square error of approximation (RMSEA) = 0.097.
APA, Harvard, Vancouver, ISO, and other styles
33

Trng, Vi Thi. "A Study on the UES of English Collocation in Writing by Students at Thai Nguyen University." International Journal of Social Science and Human Research 04, no. 05 (May 15, 2021). http://dx.doi.org/10.47191/ijsshr/v4-i5-19.

Full text
Abstract:
The current research investigates the use of collocations in students' academic writing to obtain information about the popular types of collocations they use, their collocational errors, and the sources of those errors. The study is designed as qualitative research employing document analysis as the data collection instrument. Fifty students constituted both the population and the sample. The results show that students tend to use Type 1 (verb-noun) and Type 2 (adjective-noun) collocations more than the other types. With regard to collocational errors, it is noted that Type 1 and Type 2 are also the types in which students make the most mistakes. Additionally, verbs and adjectives are the parts with which students mostly have problems. On examining the sources of errors, the researcher found five causes: approximation, ignorance of rule restrictions, negative transfer, the use of synonyms, and false concepts hypothesized. Among these error sources, negative transfer is the most important factor leading to students' collocational errors.
APA, Harvard, Vancouver, ISO, and other styles
34

Muralha, João, Luís Eça, and Christiaan M. Klaij. "Code Verification of a Pressure-Based Solver for Subsonic Compressible Flows." Journal of Verification, Validation and Uncertainty Quantification 5, no. 4 (October 29, 2020). http://dx.doi.org/10.1115/1.4048750.

Full text
Abstract:
Abstract Although most flows in maritime applications can be modeled as incompressible, for certain phenomena like sloshing, slamming, and cavitation, this approximation falls short. For these events, it is necessary to consider compressibility effects. This paper presents the first step toward a solver for multiphase compressible flows: a single-phase compressible flow solver for perfect gases. The main purpose of this work is code verification of the solver using the method of manufactured solutions. For the sake of completeness, the governing equations are described in detail including the changes to the SIMPLE algorithm used in the incompressible flow solver to ensure mass conservation and pressure–velocity–density coupling. A manufactured solution for laminar subsonic flow was therefore designed. With properly defined boundary conditions, the observed order of grid convergence matches the formal order, so it can be concluded that the flow solver is free of coding mistakes, to the extent tested by the method of manufactured solutions. The performance of the pressure-based SIMPLE solver is quantified by reporting iteration counts for all grids. Furthermore, the use of pressure–weighted interpolation (PWI), also known as Rhie–Chow interpolation, to avoid spurious pressure oscillations in incompressible flow, though not strictly necessary for compressible flow, does show some benefits in the low Mach number range.
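A minimal sketch of the grid-convergence check typically used with manufactured solutions (generic error values, not results from the authors' solver):

```python
# Minimal sketch of the observed-order-of-accuracy check used with the method of
# manufactured solutions: given discretization errors on grids refined by a
# factor r, p_observed = log(e_coarse/e_fine)/log(r) should approach the formal
# order of the scheme (e.g. 2 for a second-order discretization).
import math

def observed_order(e_coarse: float, e_fine: float, refinement_ratio: float = 2.0) -> float:
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

if __name__ == "__main__":
    # Hypothetical L2 errors against a manufactured solution on grids h, h/2, h/4.
    errors = [4.1e-3, 1.05e-3, 2.7e-4]
    for e1, e2 in zip(errors, errors[1:]):
        print(observed_order(e1, e2))   # ~1.96-1.97, consistent with second order
```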
APA, Harvard, Vancouver, ISO, and other styles
35

Watson, David S. "Conceptual challenges for interpretable machine learning." Synthese 200, no. 2 (March 1, 2022). http://dx.doi.org/10.1007/s11229-022-03485-5.

Full text
Abstract:
AbstractAs machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are largely overlooked by authors in this area. I argue that the vast majority of IML algorithms are plagued by (1) ambiguity with respect to their true target; (2) a disregard for error rates and severe testing; and (3) an emphasis on product over process. Each point is developed at length, drawing on relevant debates in epistemology and philosophy of science. Examples and counterexamples from IML are considered, demonstrating how failure to acknowledge these problems can result in counterintuitive and potentially misleading explanations. Without greater care for the conceptual foundations of IML, future work in this area is doomed to repeat the same mistakes.
APA, Harvard, Vancouver, ISO, and other styles
36

Sans, A. "Abductive reasoning on humans/AI interactions in medical contexts." European Journal of Public Health 30, Supplement_5 (September 1, 2020). http://dx.doi.org/10.1093/eurpub/ckaa165.646.

Full text
Abstract:
Abstract The aim of this talk is to explain alternatives for obtaining ethical reasoning in human/AI interactions in medical (especially public health) contexts. One of the ethical problems in AI is the alignment of machine automatisms with human values. This research is based on obtaining a system capable of inferring, from rational human activity within a certain behavior, how a human acts and how human beings learn and teach ethical values. One way is mimetic alignment: value-imitation processes based on the analysis of preferences through big data, linguistic expressions, etc. However, this approximation makes two mistakes. First, preferences are confused with values; second, the naturalistic fallacy is committed. From this point of view, the naturalistic fallacy occurs when research focuses on the meaning of alignment rather than the meaning of value, so the resulting answer is based on preference analysis and prescriptions are derived from descriptions. The chain of reasoning that leads to this fallacy begins with the confusion that values and preferences are equivalent. An alternative proposal is anchored-values alignment, which is based on anchoring the normative value processes of a machine whose behavior is to interact. Through abductive reasoning, this way of thinking tries to capture the idea that a value does not reside in any set of things; rather, it guides action. The relevance of abduction is its tentative value for projecting beyond purely descriptive, static reasoning, which is currently used in work on medical diagnosis precisely because of the characteristics that the clinical eye needs.
APA, Harvard, Vancouver, ISO, and other styles
37

Pudycheva, Halyna. "COMPARISON OF ECONOMETRIC METHODS FOR EFFICIENCY ASSESSMENT IN ENERGY SECTOR." Market Infrastructure, no. 60 (2021). http://dx.doi.org/10.32843/infrastruct60-41.

Full text
Abstract:
Evaluating the efficiency of enterprises in the energy sector is a rather difficult problem, since the production of useful energy services (electricity and heat) is often accompanied by the emission of harmful substances (carbon dioxide, sulfur dioxide, nitrogen oxides, etc.), which should be taken into account when assessing the efficiency of these enterprises. The purpose of this article is to determine the main features of the principal econometric methods used to assess the efficiency of enterprises in the energy sector, and to identify the advantages and disadvantages of these methods. Using general scientific research methods, namely analysis, synthesis, theoretical generalization, abstraction and analogy, the author characterizes the following parametric methods: the Stochastic Frontier Approach (SFA), the Distribution Free Approach (DFA) and the Thick Frontier Approach (TFA). In addition, the following nonparametric methods are considered: Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). All of these methods are based on measuring how closely an enterprise’s indicators approximate a potential or actual efficiency frontier, and the concept of “frontier efficiency” is characterized. The article sets out the advantages and disadvantages of these approaches with respect to the existence of errors, the number of input and output factors, the subjectivity of estimation, the accuracy of the results obtained, and so on. The author shows that the analyzed approaches can be applied to estimate the efficiency of enterprises in the energy sector, taking into account the multiple inputs and outputs of such enterprises, and emphasizes that further research will focus on determining that efficiency. The analysis can serve as a basis for managerial decision-making at both the micro level (the enterprise) and the macro level (regions and the state as a whole).
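For readers unfamiliar with the frontier methods being compared, the sketch below implements one of the simplest of them, input-oriented Free Disposal Hull (FDH) efficiency; the enterprise data are hypothetical and the code is only an illustrative reading of the method, not material from the article:

import numpy as np

def fdh_input_efficiency(X, Y):
    # X: (n_units, n_inputs), Y: (n_units, n_outputs).
    # Returns a radial input-efficiency score in (0, 1] for each unit.
    scores = []
    for o in range(len(X)):
        dominating = np.all(Y >= Y[o], axis=1)                # peers producing at least as much of every output
        contractions = np.max(X[dominating] / X[o], axis=1)   # input scaling needed to match each such peer
        scores.append(contractions.min())                     # best (smallest) contraction; 1.0 means on the frontier
    return np.array(scores)

# Hypothetical inputs (e.g. fuel, labour) and a single output (e.g. useful energy) for five enterprises.
X = np.array([[10.0, 5.0], [8.0, 6.0], [12.0, 4.0], [9.0, 9.0], [7.0, 5.0]])
Y = np.array([[100.0], [100.0], [90.0], [80.0], [95.0]])
print(fdh_input_efficiency(X, Y))  # the fourth enterprise is dominated and scores below 1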
APA, Harvard, Vancouver, ISO, and other styles
38

Gelashvili, Ia. "THE NEGATIVE IMPACT OF PANDEMICS ON THE TOURISM SECTOR AND SOLUTION WAYS." Globalization and Business, May 17, 2022, 138–42. http://dx.doi.org/10.35945/gb.2022.13.021.

Full text
Abstract:
The paper describes the situation in the tourism sector caused by the pandemic crisis. The root causes of the downturn in the tourism business are examined and analyzed, and specific measures and recommendations are given for overcoming the crisis and supporting positive trends in the sector. The tourist industry is a fast-growing sector in almost every country of the world. A safe environment, domestic stability and a proper level of economic development, among other factors, are preconditions for its development, while tourism itself supports the creation of new jobs, the growth of income, the diversification of the economy, the protection of the environment and the approximation of cultures. Tourism had been growing in Georgia, reflected in a significant increase in foreign visitors and, accordingly, in the sector’s share of state budget revenues. However, given the situation in the world caused by the global COVID-19 pandemic, the problems became noticeable in Georgia too. The share of the tourism industry in the country’s GDP was 11%, but because of the pandemic the number of visitors decreased dramatically, followed by the closure of the country and strict internal regulations that paralyzed the entire tourist infrastructure. The following figures illustrate the situation: in 2019 income from tourism was USD 3.5 billion, while 170 thousand people were employed in the sector; the pandemic crisis caused losses of GEL 30 million per month. Worldwide, the tourism market lost about 75.2 million jobs. Travel agencies, guest houses, catering services, freelance guides and others were most affected by the pandemic, and the 50% reduction in traveler flows left thousands of people without income. Moreover, the crisis in the tourism sector affected other fields of the economy, the most significant effect being the decrease in foreign currency income. Since 2021, positive trends in the revival of the tourism industry have been noticeable; for instance, according to the National Tourism Agency, the number of foreign visitors to Georgia increased by 140.4% in April 2021 and by 222.8% in May, reaching 172,333 people. Calculating the losses caused by the global pandemic and recovering from them will take Georgian business a long time. Currently, the most important thing is to avoid repeating past mistakes, such as introducing ungrounded, economically unjustified lockdowns. Pandemic management methods and approaches should be improved significantly or even changed entirely, and the country’s low vaccination rate has to be raised. Stability of the political situation is also very important, as political stability is a crucial factor for the development of tourism. It is important to introduce a state policy that supports tourism and creates a strong foundation for tour operators and the other agents of the tourist industry; efforts at fundraising and attracting highly qualified human resources have to be intensified, and control over the efficient and targeted use of investment resources has to become stricter, which will increase the quality of tourist services.
APA, Harvard, Vancouver, ISO, and other styles
39

Leaver, Tama. "The Social Media Contradiction: Data Mining and Digital Death." M/C Journal 16, no. 2 (March 8, 2013). http://dx.doi.org/10.5204/mcj.625.

Full text
Abstract:
Introduction Many social media tools and services are free to use. This fact often leads users to the mistaken presumption that the associated data generated whilst utilising these tools and services is without value. Users often focus on the social and presumed ephemeral nature of communication – imagining something that happens but then has no further record or value, akin to a telephone call – while corporations behind these tools tend to focus on the media side, the lasting value of these traces which can be combined, mined and analysed for new insight and revenue generation. This paper seeks to explore this social media contradiction in two ways. Firstly, a cursory examination of Google and Facebook will demonstrate how data mining and analysis are core practices for these corporate giants, central to their functioning, development and expansion. Yet the public rhetoric of these companies is not about the exchange of personal information for services, but rather the more utopian notions of organising the world’s information, or bringing everyone together through sharing. The second section of this paper examines some of the core ramifications of death in terms of social media, asking what happens when a user suddenly exists only as recorded media fragments, at least in digital terms. Death, at first glance, renders users (or post-users) without agency or, implicitly, value to companies which data-mine ongoing social practices. Yet the emergence of digital legacy management highlights the value of the data generated using social media, a value which persists even after death. The question of a digital estate thus illustrates the cumulative value of social media as media, even on an individual level. The ways Facebook and Google approach digital death are examined, demonstrating policies which enshrine the agency and rights of living users, but become far less coherent posthumously. Finally, along with digital legacy management, I will examine the potential for posthumous digital legacies which may, in some macabre ways, actually reanimate some aspects of a deceased user’s presence, such as the Lives On service which touts the slogan “when your heart stops beating, you'll keep tweeting”. Cumulatively, mapping digital legacy management by large online corporations, and the affordances of more focussed services dealing with digital death, illustrates the value of data generated by social media users, and the continued importance of the data even beyond the grave. Google While Google is universally synonymous with search, and is the world’s dominant search engine, it is less widely understood that one of the core elements keeping Google’s search results relevant is a complex operation mining user data. Different tools in Google’s array of services mine data in different ways (Zimmer, “Gaze”). Gmail, for example, uses algorithms to analyse an individual’s email in order to display the most relevant related advertising. This form of data mining is comparatively well known, with most Gmail users knowingly and willingly accepting more personalised advertising in order to use Google’s email service. However, the majority of people using Google’s search engine are unaware that search, too, is increasingly driven by the tracking, analysis and refining of results on the basis of user activity (Zimmer, “Externalities”). 
As Alexander Halavais (160–180) quite rightly argues, recent focus on the idea of social search – the deeper integration of social network information in gauging search results – is oxymoronic; all search, at least for Google, is driven by deep analysis of personal and aggregated social data. Indeed, the success of Google’s mining of user data has led to concerns that often invisible processes of customisation and personalisation will mean that the supposedly independent or objective algorithms producing Google’s search results will actually yield a different result for every person. As Siva Vaidhyanathan laments: “as users in a diverse array of countries train Google’s algorithms to respond to specialized queries with localised results, each place in the world will have a different list of what is important, true, or ‘relevant’ in response to any query” (138). Personalisation and customisation are not inherently problematic, and frequently do enhance the relevance of search results, but the main objection raised by critics is not Google’s data mining, but the lack of transparency in the way data are recorded, stored and utilised. Eli Pariser, for example, laments the development of a ubiquitous “filter bubble” wherein all search results are personalised and subjective but are hidden behind the rhetoric of computer-driven algorithmic objectivity (Pariser). While data mining informs and drives many of Google’s tools and services, the cumulative value of these captured fragments of information is best demonstrated by the new service Google Now. Google Now is a mobile app which delivers an ongoing stream of search results but without the need for user input. Google Now extrapolates the rhythms of a person’s life, their interests and their routines in order to algorithmically determine what information will be needed next, and automatically displays it on a user’s mobile device. Clearly Google Now is an extremely valuable and clever tool, and the more information a user shares, the better the ongoing customised results will be, demonstrating the direct exchange value of personal data: total personalisation requires total transparency. Each individual user will need to judge whether they wish to share with Google the considerable amount of personal information needed to make Google Now work. The pressing ethical question that remains is whether Google will ensure that users are sufficiently aware of the amount of data and personal privacy they are exchanging in order to utilise such a service. Facebook Facebook began as a closed network, open only to students at American universities, but has transformed over time to a much wider and more open network, with over a billion registered users. Facebook has continually reinvented their interface, protocols and design, often altering both privacy policies and users’ experience of privacy, and often meeting significant and vocal resistance in the process (boyd). The data mining performed by social networking service Facebook is also extensive, although primarily aimed at refining the way that targeted advertising appears on the platform. In 2007 Facebook partnered with various retail loyalty services and combined these records with Facebook’s user data. This information was used to power Facebook’s Beacon service, which added details of users’ retail history to their Facebook news feed (for example, “Tama just purchased a HTC One”). 
The impact of all of these seemingly unrelated purchases turning up in many people’s feeds suddenly revealed the complex surveillance, data mining and sharing of these data that was taking place (Doyle and Fraser). However, as Beacon was turned on, without consultation, for all Facebook users, there was a sizable backlash that meant that Facebook had to initially switch the service to opt-in, and then discontinue it altogether. While Beacon has been long since erased, it is notable that in early 2013 Facebook announced that they have strengthened partnerships with data mining and profiling companies, including Datalogix, Epsilon, Acxiom, and BlueKai, which harness customer information from a range of loyalty cards, to further refine the targeting ability offered to advertisers using Facebook (Hof). Facebook’s data mining, surveillance and integration across companies is thus still going on, but no longer directly visible to Facebook users, except in terms of the targeted advertisements which appear on the service. Facebook is also a platform, providing a scaffolding and gateway to many other tools and services. In order to use social games such as Zynga’s Farmville, Facebook users agree to allow Zynga to access their profile information, and use Facebook to authenticate their identity. Zynga has been unashamedly at the forefront of user analytics and data mining, attempting to algorithmically determine the best way to make virtual goods within their games attractive enough for users to pay for them with real money. Indeed, during a conference presentation, Zynga Vice President Ken Rudin stated outright that Zynga is “an analytics company masquerading as a games company” (Rudin). I would contend that this masquerade succeeds, as few Farmville players are likely to consider how their every choice and activity is being algorithmically scrutinised in order to determine what virtual goods they might actually buy. As an instance of what is widely being called ‘big data’, the data mining operations of Facebook, Zynga and similar services lead to a range of ethical questions (boyd and Crawford). While users may have ostensibly agreed to this data mining after clicking on Facebook’s Terms of Use agreement, the fact that almost no one reads these agreements when signing up for a service is the Internet’s worst kept secret. Similarly, the extension of these terms when Facebook operates as a platform for other applications is a far from transparent process. While examining the recording of user data leads to questions of privacy and surveillance, it is important to note that many users are often aware of the exchange to which they have agreed. Anders Albrechtslund deploys the term ‘social surveillance’ to usefully emphasise the knowing, playful and at times subversive approach some users take to the surveillance and data mining practices of online service providers. Similarly, E.J. Westlake notes that performances of self online are often not only knowing but deliberately false or misleading with the aim of exploiting the ways online activities are tracked. However, even users well aware of Facebook’s data mining on the site itself may be less informed about the social networking company’s mining of offsite activity. The introduction of ‘like’ buttons on many other Websites extends Facebook’s reach considerably.
The various social plugins and ‘like’ buttons expand both active recording of user activity (where the like button is actually clicked) and passive data mining (since a cookie is installed or updated regardless of whether a button is actually pressed) (Gerlitz and Helmond). Indeed, because cookies – tiny packets of data exchanged and updated invisibly in browsers – assign each user a unique identifier, Facebook can either combine these data with an existing user’s profile or create profiles about non-users. If that person even joins Facebook, their account is connected with the existing, data-mined record of their Web activities (Roosendaal). As with Google, the significant issue here is not users knowingly sharing their data with Facebook, but the often complete lack of transparency in terms of the ways Facebook extracts and mines user data, both on Facebook itself and increasingly across applications using Facebook as a platform and across the Web through social plugins. Google after Death While data mining is clearly a core element in the operation of Facebook and Google, the ability to scrutinise the activities of users depends on those users being active; when someone dies, the question of the value and ownership of their digital assets becomes complicated, as does the way companies manage posthumous user information. For Google, the Gmail account of a deceased person becomes inactive; the stored email still takes up space on Google’s servers, but with no one using the account, no advertising is displayed and thus Google can earn no revenue from the account. However, the process of accessing the Gmail account of a deceased relative is an incredibly laborious one. In order to even begin the process, Google asks that someone physically mails a series of documents including a photocopy of a government-issued ID, the death certificate of the deceased person, evidence of an email the requester received from the deceased, along with other personal information. After Google have received and verified this information, they state that they might proceed to a second stage where further documents are required. Moreover, if at any stage Google decide that they cannot proceed in releasing a deceased relative’s Gmail account, they will not reveal their rationale. As their support documentation states: “because of our concerns for user privacy, if we determine that we cannot provide the Gmail content, we will not be able to share further details about the account or discuss our decision” (Google, “Accessing”). Thus, Google appears to enshrine the rights and privacy of individual users, even posthumously; the ownership or transfer of individual digital assets after death is neither a given, nor enshrined in Google’s policies. Yet, ironically, the economic value of that email to Google is likely zero, but the value of the email history of a loved one or business partner may be of substantial financial and emotional value, probably more so than when that person was alive. For those left behind, the value of email accounts as media, as a lasting record of social communication, is heightened. The question of how Google manages posthumous user data has been further complicated by the company’s March 2012 rationalisation of over seventy separate privacy policies for various tools and services they operate under the umbrella of a single privacy policy accessed using a single unified Google account. 
While this move was ostensibly to make privacy more understandable and transparent at Google, it had other impacts. For example, one of the side effects of a singular privacy policy and single Google identity is that deleting one of a recently deceased person’s services may inadvertently delete them all. Given that Google’s services include Gmail, YouTube and Picasa, this means that deleting an email account inadvertently erases all of the Google-hosted videos and photographs that individual posted during their lifetime. As Google warns, for example: “if you delete the Google Account to which your YouTube account is linked, you will delete both the Google Account AND your YouTube account, including all videos and account data” (Google, “What Happens”). A relative having gained access to a deceased person’s Gmail might sensibly delete the email account once the desired information is exported. However, it seems less likely that this executor would realise that in doing so all of the private and public videos that person had posted on YouTube would also permanently disappear. While material possessions can be carefully dispersed to specific individuals following the instructions in someone’s will, such affordances are not yet available for Google users. While it is entirely understandable that the ramification of policy changes are aimed at living users, as more and more online users pass away, the question of their digital assets becomes increasingly important. Google, for example, might allow a deceased person’s executor to elect which of their Google services should be kept online (perhaps their YouTube videos), which traces can be exported (perhaps their email), and which services can be deleted. At present, the lack of fine-grained controls over a user’s digital estate at Google makes this almost impossible. While it violates Google’s policies to transfer ownership of an account to another person, if someone does leave their passwords behind, this provides their loved ones with the best options in managing their digital legacy with Google. When someone dies and their online legacy is a collection of media fragments, the value of those media is far more apparent to the loved ones left behind rather than the companies housing those media. Facebook Memorialisation In response to users complaining that Facebook was suggesting they reconnect with deceased friends who had left Facebook profiles behind, in 2009 the company instituted an official policy of turning the Facebook profiles of departed users into memorial pages (Kelly). Technically, loved ones can choose between memorialisation and erasing an account altogether, but memorialisation is the default. This entails setting the account so that no one can log into it, and that no new friends (connections) can be made. Existing friends can access the page in line with the user’s final privacy settings, meaning that most friends will be able to post on the memorialised profile to remember that person in various ways (Facebook). Memorialised profiles (now Timelines, after Facebook’s redesign) thus become potential mourning spaces for existing connections. Since memorialised pages cannot make new connections, public memorial pages are increasingly popular on Facebook, frequently set up after a high-profile death, often involving young people, accidents or murder. 
Recent studies suggest that both of these Facebook spaces are allowing new online forms of mourning to emerge (Marwick and Ellison; Carroll and Landry; Kern, Forman, and Gil-Egui), although public pages have the downside of potentially inappropriate commentary and outright trolling (Phillips). Given Facebook has over a billion registered users, estimates already suggest that the platform houses 30 million profiles of deceased people, and this number will, of course, continue to grow (Kaleem). For Facebook, while posthumous users do not generate data themselves, the fact that they were part of a network means that their connections may interact with a memorialised account, or memorial page, and this activity, like all Facebook activities, allows the platform to display advertising and further track user interactions. However, at present Facebook’s options – to memorialise or delete accounts of deceased people – are fairly blunt. Once Facebook is aware that a user has died, no one is allowed to edit that person’s Facebook account or Timeline, so Facebook literally offers an all (memorialisation) or nothing (deletion) option. Given that Facebook is essentially a platform for performing identities, it seems a little short-sighted that executors cannot clean up or otherwise edit the final, lasting profile of a deceased Facebook user. As social networking services and social media become more ingrained in contemporary mourning practices, it may be that Facebook will allow more fine-grained control, positioning a digital executor also as a posthumous curator, making the final decision about what does and does not get kept in the memorialisation process. Since Facebook is continually mining user activity, the popularity of mourning as an activity on Facebook will likely mean that more attention is paid to the question of digital legacies. While the user themselves can no longer be social, the social practices of mourning, and the recording of a user as a media entity highlights the fact that social media can be about interactions which in significant ways include deceased users. Digital Legacy Services While the largest online corporations have fairly blunt tools for addressing digital death, there are a number of new tools and niche services emerging in this area which are attempting to offer nuanced control over digital legacies. Legacy Locker, for example, offers to store the passwords to all of a user’s online services and accounts, from Facebook to Paypal, and to store important documents and other digital material. Users designate beneficiaries who will receive this information after the account holder passes away, and this is confirmed by preselected “verifiers” who can attest to the account holder’s death. Death Switch similarly provides the ability to store and send information to users after the account holder dies, but tests whether someone is alive by sending verification emails; fail to respond to several prompts and Death Switch will determine a user has died, or is incapacitated, and executes the user’s final instructions. Perpetu goes a step further and offers the same tools as Legacy Locker but also automates existing options from social media services, allowing users to specify, for example, that their Facebook, Twitter or Gmail data should be downloaded and this archive should be sent to a designated recipient when the Perpetu user dies. 
These tools attempt to provide a more complex array of choices in terms of managing a user’s digital legacy, providing similar choices to those currently available when addressing material possessions in a formal will. At a broader level, the growing demand for these services attests to the ongoing value of online accounts and social media traces after a user’s death. Bequeathing passwords may not strictly follow the Terms of Use of the online services in question, but it is extremely hard to track or intervene when a user has the legitimate password, even if used by someone else. More to the point, this finely-grained legacy management allows far more flexibility in the utilisation and curation of digital assets posthumously. In the process of signing up for one of these services, or digital legacy management more broadly, the ongoing value and longevity of social media traces becomes more obvious to both the user planning their estate and those who ultimately have to manage it. The Social Media Afterlife The value of social media beyond the grave is also evident in the range of services which allow users to communicate in some fashion after they have passed away. Dead Social, for example, allows users to schedule posthumous social media activity, including the posting of tweets, sending of email, Facebook messages, or the release of online photos and videos. The service relies on a trusted executor confirming someone’s death, and after that releases these final messages effectively from beyond the grave. If I Die is a similar service, which also has an integrated Facebook application which ensures a user’s final message is directly displayed on their Timeline. In a bizarre promotional campaign around a service called If I Die First, the company is promising that the first user of the service to pass away will have their posthumous message delivered to a huge online audience, via popular blogs and mainstream press coverage. While this is not likely to appeal to everyone, the notion of a popular posthumous performance of self further complicates that question of what social media can mean after death. Illustrating the value of social media legacies in a quite different but equally powerful way, the Lives On service purports to algorithmically learn how a person uses Twitter while they are live, and then continue to tweet in their name after death. Internet critic Evgeny Morozov argues that Lives On is part of a Silicon Valley ideology of ‘solutionism’ which casts every facet of society as a problem in need of a digital solution (Morozov). In this instance, Lives On provides some semblance of a solution to the problem of death. While far from defeating death, the very fact that it might be possible to produce any meaningful approximation of a living person’s social media after they die is powerful testimony to the value of data mining and the importance of recognising that value. While Lives On is an experimental service in its infancy, it is worth wondering what sort of posthumous approximation might be built using the robust data profiles held by Facebook or Google. If Google Now can extrapolate what a user wants to see without any additional input, how hard would it be to retool this service to post what a user would have wanted after their death? Could there, in effect, be a Google After(life)? 
Conclusion Users of social media services have differing levels of awareness regarding the exchange they are agreeing to when signing up for services provided by Google or Facebook, and often value the social affordances without necessarily considering the ongoing media they are creating. Online corporations, by contrast, recognise and harness the informatic traces users generate through complex data mining and analysis. However, the death of a social media user provides a moment of rupture which highlights the significant value of the media traces a user leaves behind. More to the point, the value of these media becomes most evident to those left behind precisely because that individual can no longer be social. While beginning to address the issue of posthumous user data, Google and Facebook both have very blunt tools; Google might offer executors access while Facebook provides the option of locking a deceased user’s account as a memorial or removing it altogether. Neither of these responses do justice to the value that these media traces hold for the living, but emerging digital legacy management tools are increasingly providing a richer set of options for digital executors. While the differences between material and digital assets provoke an array of legal, spiritual and moral issues, digital traces nevertheless clearly hold significant and demonstrable value. For social media users, the death of someone they know is often the moment where the media side of social media – their lasting, infinitely replicable nature – becomes more important, more visible, and casts the value of the social media accounts of the living in a new light. For the larger online corporations and service providers, the inevitable increase in deceased users will likely provoke more fine-grained controls and responses to the question of digital legacies and posthumous profiles. It is likely, too, that the increase in online social practices of mourning will open new spaces and arenas for those same corporate giants to analyse and data-mine. References Albrechtslund, Anders. “Online Social Networking as Participatory Surveillance.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/article/view/2142/1949›. boyd, danah. “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence 14.1 (2008): 13–20. ———, and Kate Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662–679. Carroll, Brian, and Katie Landry. “Logging On and Letting Out: Using Online Social Networks to Grieve and to Mourn.” Bulletin of Science, Technology & Society 30.5 (2010): 341–349. Doyle, Warwick, and Matthew Fraser. “Facebook, Surveillance and Power.” Facebook and Philosophy: What’s on Your Mind? Ed. D.E. Wittkower. Chicago, IL: Open Court, 2010. 215–230. Facebook. “Deactivating, Deleting & Memorializing Accounts.” Facebook Help Center. 2013. 7 Mar. 2013 ‹http://www.facebook.com/help/359046244166395/›. Gerlitz, Carolin, and Anne Helmond. “The Like Economy: Social Buttons and the Data-intensive Web.” New Media & Society (2013). Google. “Accessing a Deceased Person’s Mail.” 25 Jan. 2013. 21 Apr. 2013 ‹https://support.google.com/mail/answer/14300?hl=en›. ———. “What Happens to YouTube If I Delete My Google Account or Google+?” 8 Jan. 2013. 21 Apr. 2013 ‹http://support.google.com/youtube/bin/answer.py?hl=en&answer=69961&rd=1›. Halavais, Alexander. Search Engine Society. Polity, 2008. Hof, Robert. 
“Facebook Makes It Easier to Target Ads Based on Your Shopping History.” Forbes 27 Feb. 2013. 1 Mar. 2013 ‹http://www.forbes.com/sites/roberthof/2013/02/27/facebook-makes-it-easier-to-target-ads-based-on-your-shopping-history/›. Kaleem, Jaweed. “Death on Facebook Now Common as ‘Dead Profiles’ Create Vast Virtual Cemetery.” Huffington Post. 7 Dec. 2012. 7 Mar. 2013 ‹http://www.huffingtonpost.com/2012/12/07/death-facebook-dead-profiles_n_2245397.html›. Kelly, Max. “Memories of Friends Departed Endure on Facebook.” The Facebook Blog. 27 Oct. 2009. 7 Mar. 2013 ‹http://www.facebook.com/blog/blog.php?post=163091042130›. Kern, Rebecca, Abbe E. Forman, and Gisela Gil-Egui. “R.I.P.: Remain in Perpetuity. Facebook Memorial Pages.” Telematics and Informatics 30.1 (2012): 2–10. Marwick, Alice, and Nicole B. Ellison. “‘There Isn’t Wifi in Heaven!’ Negotiating Visibility on Facebook Memorial Pages.” Journal of Broadcasting & Electronic Media 56.3 (2012): 378–400. Morozov, Evgeny. “The Perils of Perfection.” The New York Times 2 Mar. 2013. 4 Mar. 2013 ‹http://www.nytimes.com/2013/03/03/opinion/sunday/the-perils-of-perfection.html?pagewanted=all&_r=0›. Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. London: Viking, 2011. Phillips, Whitney. “LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online.” First Monday 16.12 (2011). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/3168›. Roosendaal, Arnold. “We Are All Connected to Facebook … by Facebook!” European Data Protection: In Good Health? Ed. Serge Gutwirth et al. Dordrecht: Springer, 2012. 3–19. Rudin, Ken. “Actionable Analytics at Zynga: Leveraging Big Data to Make Online Games More Fun and Social.” San Diego, CA, 2010. Vaidhyanathan, Siva. The Googlization of Everything. 1st ed. Berkeley: University of California Press, 2011. Westlake, E.J. “Friend Me If You Facebook: Generation Y and Performative Surveillance.” TDR: The Drama Review 52.4 (2008): 21–40. Zimmer, Michael. “The Externalities of Search 2.0: The Emerging Privacy Threats When the Drive for the Perfect Search Engine Meets Web 2.0.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/2136/1944›. ———. “The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance.” Web Search. Eds. Amanda Spink & Michael Zimmer. Berlin: Springer, 2008. 77–99.
APA, Harvard, Vancouver, ISO, and other styles
40

Henderson, Neil James. "Online Persona as Hybrid-Object: Tracing the Problems and Possibilities of Persona in the Short Film Noah." M/C Journal 17, no. 3 (June 10, 2014). http://dx.doi.org/10.5204/mcj.819.

Full text
Abstract:
Introduction The short film Noah (2013) depicts the contemporary story of an adolescent relationship breakdown and its aftermath. The film tells the story by showing events entirely as they unfold on the computer screen of Noah, the film’s teenaged protagonist. All of the characters, including Noah, appear on film solely via technological mediation.Although it is a fictional representation, Noah has garnered a lot of acclaim within an online public for the authenticity and realism of its portrayal of computer-mediated life (Berkowitz; Hornyak; Knibbs; Warren). Judging by the tenor of a lot of this commentary, the film has keyed in to a larger cultural anxiety around issues of communication and relationships online. Many reviewers and interested commentators have expressed concern at how closely Noah’s distracted, frenetic and problematic multitasking resembles their own computer usage (Beggs; Berkowitz; Trumbore). They frequently express the belief that it was this kind of behaviour that led to the relationship breakdown depicted in the film, as Noah proves to be “a lot better at opening tabs than at honest communication” (Knibbs para. 2).I believe that the cultural resonance of the film stems from the way in which the film is an implicit attempt to assess the nature of contemporary online persona. By understanding online persona as a particular kind of “hybrid object” or “quasi-object”—a combination of both human and technological creation (Latour We Have)—the sense of the overall problems, as well as the potential, of online persona as it currently exists, is traceable through the interactions depicted within the film. By understanding social relationships as constituted through dynamic interaction (Schutz), I understand the drama of Noah to stem principally from a tension in the operation of online persona between a) the technological automation of presentation that forms a core part of the nature of contemporary online persona, and b) the need for interaction in effective relationship development. However, any attempt to blame this tension on an inherent tendency in technology is itself problematised by the film’s presentation of an alternative type of online persona, in a Chatroulette conversation depicted in the film’s second half.Persona and Performance, Mediation and DelegationMarshall (“Persona Studies” 163) describes persona as “a new social construction of identity and public display.” This new type of social construction has become increasingly common due to a combination of “changes in work, transformation of our forms of social connection and networking via new technologies, and consequent new affective clusters and micropublics” (Marshall “Persona Studies” 166). New forms of “presentational” media play a key role in the construction of persona by providing the resources through which identity is “performed, produced and exhibited by the individual or other collectives” (Marshall “Persona Studies” 160).In this formulation of persona, it is not clear how performance and presentation interlink with the related concepts of production and exhibition. Marshall’s concept of “intercommunication” suggests a classificatory scheme for these multiple registers of media and communication that are possible in the contemporary media environment. However, Marshall’s primary focus has so far been on the relationship between existing mediated communication forms, and their historical transformation (Marshall “Intercommunication”). 
Marshall has not as yet made clear the theoretical link between performance, presentation, production and exhibition. Actor-Network Theory (ANT) can provide this theoretical link, and a way of understanding persona as it operates in an online context: as online persona.In ANT, everything that exists is an object. Objects are performative actors—the associations between objects produce the identity of objects and the way they perform. The performative actions of objects, equally, produce the nature of the associations between them (Latour Reassembling). Neither objects nor associations have a prior existence outside of their relationship to each other (Law).For Latour, the semiotic distinction between “human” and “non-human” is itself an outcome of the performances of objects and their associations. There are also objects, which Latour calls “quasi-objects” or “hybrids,” that do not fit neatly on one side of the human/non-human divide or the other (Latour We Have). Online persona is an example of such a hybrid or quasi-object: it is a combination of both human creation and technological mediation.Two concepts formulated by Latour provide some qualitative detail about the nature of the operation of Actor-Networks. Firstly, Latour emphasises that actors are also “mediators.” This name emphasises that when an actor acts to create a connection between two or more other objects, it actively transforms the way that objects encounter the performance of other objects (Latour Reassembling). This notion of mediation resembles Hassan’s definition of “media” as an active agent of transferral (Hassan). But Latour emphasises that all objects, not just communication technologies, act as mediators. Secondly, Latour describes how an actor can take on the actions originally performed by another actor. He refers to this process as “delegation.” Delegation, especially delegation of human action to a technological delegate, can render action more efficient in two ways. It can reduce the effort needed for action, causing “the transformation of a major effort into a minor one.” It can also reduce the time needed to exert effort in performing an action: the effort need not be ongoing, but can be “concentrated at the time of installation” (Latour “Masses” 229-31).Online persona, in the terminology of ANT, is a constructed, performative presentation of identity. It is constituted through a combination of human action, ongoing mediation of present human action, and the automation, through technological delegation, of previous actions. The action of the film Noah is driven by the changes in expected and actual interaction that these various aspects of persona encourage.The Problems and Potential of Online PersonaBy relaying the action entirely via a computer screen, the film Noah is itself a testament to how encounters with others solely via technological mediation can be genuinely meaningful. Relaying the action in this way is in fact creatively productive, providing new ways of communicating details about characters and relationships through the layout of the screen. For instance, the film introduces the character of Amy, Noah’s girlfriend, and establishes her importance to Noah through her visual presence as part of a photo on his desktop background at the start of the film. 
The film later communicates the end of the relationship when the computer boots up again, but this time with Amy’s photo notably absent from the background.However, the film deviates from a “pure” representation of a computer screen in a number of ways. Most notably, the camera frame is not static, and moves around the screen in order to give the viewer the sense that the camera is simulating Noah’s eye focus. According to the directors, the camera needed to show viewers where the focus of the action was as the story progressed. Without this indication of where to focus, it was hard to keep viewers engaged and interested in the story (Paulas).Within the story of the film itself, the sense of drama surrounding Noah’s actions similarly stem from the exploration of the various aspects of what it is and is not possible to achieve in the performance of persona – both the positive and the negative consequences. At the start of the film, Noah engages in a Skype conversation with his girlfriend Amy. While Noah is indeed “approximating being present” (Berkowitz para. 3) for the initial part of this conversation, once Noah hears an implication that Amy may want to break up with him, the audience sees his eye movements darting between Amy’s visible face in Skype and Amy’s Facebook profile, and nowhere else.It would be a mistake to think that this double focus means Noah is not fully engaging with Amy. Rather, he is engaging with two dimensions of Amy’s available persona: her Facebook profile, and her Skype presence. Noah is fully focusing on Amy at this point of the film, but the unitary persona he experiences as “Amy” is constructed from multiple media channels—one dynamic and real-time, the other comparatively stable and static. Noah’s experience of Amy is multiplexed, a unitary experience constructed from multiple channels of communication. This may actually enhance Noah’s affective involvement with Amy.It is true that at the very start of the Skype call, Noah is focusing on several unrelated activities, not just on Amy. The available technological mediators enable this division of attention. But more than that, the available technological mediators also assume in their functioning that the user’s attention can be and should be divided. Thus some of the distractions Noah experiences at this time are of his own making (e.g. the simple game he plays in a browser window), while others are to some degree configured by the available opportunity to divide one’s attention, and the assumption of others that the user will do so. One of the distractions faced by Noah comes in the form of repeated requests from his friend “Kanye East” to play the game Call of Duty. How socially obligated is Noah to respond to these requests as promptly as possible, regardless of what other important things (that his friend doesn’t know about) he may be doing?Unfortunately, and for reasons which the audience never learns, the Skype call terminates abruptly before Noah can fully articulate his concerns to Amy. With a keen eye, the audience can see that the image of Amy froze not long after Noah started talking to her in earnest. She did indeed appear to be having problems with her Skype, as her later text message suggested. But there’s no indication why Amy decided, as described in the same text message, to postpone the conversation after the Skype call failed.This is a fairly obvious example of the relatively common situation in which one actor unexpectedly refuses to co-operate with the purposes of another (Callon). 
Noah’s uncertainty at how to address this non-cooperation leads to the penultimate act of the film when he logs in to Amy’s Facebook account. In order to fully consider the ethical issues involved, a performative understanding of the self and of relationships is insufficient. Phenomenological understandings of the self and social relationships are more suited to ethical considerations.Online Persona and Social RelationshipsIn the “phenomenological sociology” of Alfred Schutz, consciousness is inescapably temporal, constantly undergoing slight modification by the very process of progressing through time. The constitution of a social relationship, for Schutz, occurs when two (and only two) individuals share a community of space and time, simultaneously experiencing the same external phenomena. More importantly, it also requires that these two individuals have an ongoing, mutual and simultaneous awareness of each other’s progress and development through time. Finally, it requires that the individuals be mutually aware of the very fact that they are aware of each other in this ongoing, mutual and simultaneous way (Schutz).Schutz refers to this ideal-typical relationship state as the “We-relationship,” and the communal experience that constitutes it as “growing older together.” The ongoing awareness of constantly generated new information about the other is what constitutes a social relationship, according to Schutz. Accordingly, a lack of such information exchange will lead to a weaker social bond. In situations where direct interaction does not occur, Schutz claimed that individuals would construct their knowledge of the other through “typification”: pre-learned schemas of identity of greater or lesser generality, affixed to the other based on whatever limited information may be available.In the film, when Amy is no longer available via Skype, an aspect of her persona is still available for interrogation. After the failed Skype call, Noah repeatedly refreshes Amy’s Facebook profile, almost obsessively checking her relationship status to see if it has changed from reading “in a relationship.” In the process he discovers that, not long after their aborted Skype conversation, Amy has changed her profile picture—from one that had an image of the two of them together, to one that contains an image of Amy only. He also in the process discovers that someone he does not know named “Dylan Ramshaw” has commented on all of Amy’s current and previous profile pictures. Dylan’s Facebook profile proves resistant to interrogation—Noah’s repeated, frustrated attempts to click on Dylan’s profile picture to bring up more detail yields no results. In the absence of an aspect of persona that undergoes constant temporal change, any new information attained—a profile picture changed, a not-previously noticed regular commenter discovered—seems to gain heightened significance in defining not just the current relationship status with another, but the trajectory which that relationship is taking. The “typification” that Noah constructs of Amy is that of a guilty, cheating girlfriend.The penultimate act of the film occurs when Noah chooses to log in to Amy’s Facebook account using her password (which he knows), “just to check for sketchy shit,” or so he initially claims to Kanye East. His suspicions appear to be confirmed when he discovers that private exchanges between Amy and Dylan which indicate that they had been meeting together without Noah’s knowledge. 
The suggestion to covertly read Amy’s private Facebook messages comes originally from Kanye East, when he asks Noah “have you lurked [covertly read] her texts or anything?” Noah’s response strongly suggests the normative uncertainty that the teenaged protagonist feels at the idea; his initial response to Kanye East reads “is that the thing to do now?” The operation of Facebook in this instance has two, somewhat contradictory, delegated tasks: let others feel connected to Amy and what she’s doing, but also protect Amy’s privacy. The success of the second goal interferes with Noah’s desire to achieve the first. And so he violates her privacy.The times that Noah’s mouse hovers and circles around a button that would send a message from Amy’s account or update Amy’s Facebook profile are probably the film’s most cringe-inducing moments. Ultimately Noah decides to update Amy’s relationship status to single. The feedback he receives to Amy’s account immediately afterwards seems to confirm his suspicions that this was what was going to happen anyway: one friend of Amy’s says “finally” in a private message, and the suspicious “Dylan” offers up a shoulder to cry on. Apparently believing that this reflects the reality of their relationship, Noah leaves the status on Amy’s Facebook profile as “single.”The tragedy of the film is that Noah’s assumptions were quite incorrect. Rather than reflecting their updated relationship status, the change revealed to Amy that he had violated her privacy. Dylan’s supposedly over-familiar messages were perfectly acceptable on the basis that Dylan was not actually heterosexual (and therefore a threat to Noah’s role as boyfriend), but gay.The Role of Technology: “It’s Complicated”One way to interpret the film would be to blame Noah’s issues on technology per se. This is far too easy. Rather, the film suggests that Facebook was to some degree responsible for Noah’s relationship issues and the problematic way in which he tried to address them. In the second half of the film, Noah engages in a very different form of online interaction via the communication service known as Chatroulette. This interaction stands in sharp contrast to the interactions that occurred via Facebook.Chatroulette is a video service that pairs strangers around the globe for a chat session. In the film, Noah experiences a fairly meaningful moment on Chatroulette with an unnamed girl on the service, who dismisses Facebook as “weird and creepy”. The sheer normative power of Facebook comes across when Noah initially refuses to believe the unnamed Chatroulette girl when she says she does not have a Facebook profile. She suggests, somewhat ironically, that the only way to have a real, honest conversation with someone is “with a stranger, in the middle of the night”, as just occurred on Chatroulette.Besides the explicit comparison between Facebook and Chatroulette in the dialogue, this scene also provides an implicit comparison between online persona as it is found on Facebook and as it is found on Chatroulette. The style of interaction on each service is starkly different. On Facebook, users largely present themselves and perform to a “micro-public” of their “friends.” They largely engage in static self-presentations, often “interacting” only through interrogating the largely static self-presentations of others. On Chatroulette, users interact with strangers chosen randomly by an algorithm. 
Users predominantly engage in dialogue one-on-one, and interaction tends to be a mutual, dynamic affair, much like “real life” conversation.Yet while the “real-time” dialogue possible on Chatroulette may seem more conducive to facilitating Schutz’ idea of “growing older together,” the service also has its issues. The randomness of connection with others is problematic, as the film frankly acknowledges in the uncensored shots of frontal male nudity that Noah experiences in his search for a chat partner. Also, the problematic lack of a permanent means of staying in contact with each other is illustrated by a further tragic moment in the film when the session with the unnamed girl ends, with Noah having no means of ever being able to find her again.ConclusionIt is tempting to dismiss the problems that Noah encounters while interacting via mediated communication with the exhortation to “just go out and live [… ] life in the real world” (Trumbore para. 4), but this is also over-simplistic. Rather, what we can take away from the film is that there are trade-offs to be had in the technological mediation of self-presentation and communication. The questions that we need to address are: what prompts the choice of one form of technological mediation over another? And what are the consequences of this choice? Contemporary persona, as conceived by David Marshall, is motivated by the commodification of the self, and by increased importance of affect in relationships (Marshall “Persona Studies”). In the realm of Facebook, the commodification of the self has to some degree flattened the available interactivity of the online self, in favour of what the unnamed Chatroulette girl derogatorily refers to as “a popularity contest.”The short film Noah is to some degree a cultural critique of dominant trends in contemporary online persona, notably of the “commodification of the self” instantiated on Facebook. By conceiving of online persona in the terms of ANT outlined here, it becomes possible to envision alternatives to this dominant form of persona, including a concept of persona as commodification. Further, it is possible to do this in a way that avoids the trap of blaming technology for all problems, and that recognises both the advantages and disadvantages of different ways of constructing online persona. The analysis of Noah presented here can therefore provide a guide for more sophisticated and systematic examinations of the hybrid-object “online persona.”References Beggs, Scott. “Short Film: The Very Cool ‘Noah’ Plays Out Madly on a Teenager’s Computer Screen.” Film School Rejects 11 Sep. 2013. 3 Mar. 2014. Callon, M. “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay.” Power, Action and Belief: A New Sociology of Knowledge? Ed. John Law. London, UK: Routledge & Kegan Paul, 1986. 196–223. Berkowitz, Joe. “You Need to See This 17-Minute Film Set Entirely on a Teen’s Computer Screen.” Fast Company 10 Sep. 2013. 1 Mar. 2014. Hassan, Robert. Media, Politics and the Network Society. Maidenhead: Open University Press, 2004. Hornyak, Tim. “Short Film ‘Noah’ Will Make You Think Twice about Facebook—CNET.” CNET 19 Sep. 2013. 2 Mar. 2014. Knibbs, Kate. “‘Have You Lurked Her Texts?’: How the Directors of ‘Noah’ Captured the Pain of Facebook-Era Dating.” Digital Trends 14 Sep. 2013. 9 Feb. 2014. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press, 2005. Latour, Bruno. We Have Never Been Modern. 
Cambridge, Mass: Harvard University Press, 1993. Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” Shaping Technology/Building Society: Studies in Sociotechnical Change. Ed. Wiebe E. Bijker and John Law. Cambridge, MA: MIT Press, 1992. 225–58. Law, John. “After ANT: Complexity, Naming and Topology.” Actor-Network Theory and After. Ed. John Law and John Hassard. Oxford: Blackwell Publishers, 1999. 1–14. Marshall, P. David. “Persona Studies: Mapping the Proliferation of the Public Self.” Journalism 15.2 (2014): 153–170. Marshall, P. David. “The Intercommunication Challenge: Developing a New Lexicon of Concepts for a Transformed Era of Communication.” ICA 2011: Proceedings of the 61st Annual ICA Conference. Boston, MA: International Communication Association, 2011. 1–25. Paulas, Rick. “Step inside the Computer Screen of ‘Noah.’” VICE 18 Jan. 2014. 8 Feb. 2014. Schutz, Alfred. The Phenomenology of the Social World. Trans. George Walsh and Frederick Lehnert. London, UK: Heinemann, 1972. Trumbore, Dave. “Indie Spotlight: NOAH - A 17-Minute Short Film from Patrick Cederberg and Walter Woodman.” Collider 2013. 2 Apr. 2014. Warren, Christina. “The Short Film That Takes Place Entirely inside a Computer.” Mashable 13 Sep. 2013. 9 Feb. 2014. Woodman, Walter, and Patrick Cederberg. Noah. 2013.
APA, Harvard, Vancouver, ISO, and other styles