Journal articles on the topic 'Multiple Code Theory'

To see the other types of publications on this topic, follow the link: Multiple Code Theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Multiple Code Theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bucci, Wilma. "Symptoms and symbols: A multiple code theory of somatization." Psychoanalytic Inquiry 17, no. 2 (January 1997): 151–72. http://dx.doi.org/10.1080/07351699709534117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shokrollahi, A., B. Hassibi, B. M. Hochwald, and W. Sweldens. "Representation theory for high-rate multiple-antenna code design." IEEE Transactions on Information Theory 47, no. 6 (2001): 2335–67. http://dx.doi.org/10.1109/18.945251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bucci, Wilma. "Dissociation from the Perspective of Multiple Code Theory, Part I." Contemporary Psychoanalysis 43, no. 2 (April 2007): 165–84. http://dx.doi.org/10.1080/00107530.2007.10745903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bucci, Wilma. "Dissociation from the Perspective of Multiple Code Theory—Part II." Contemporary Psychoanalysis 43, no. 3 (July 2007): 305–26. http://dx.doi.org/10.1080/00107530.2007.10745912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bornstein, Robert F. "Review of Psychoanalysis and cognitive science: A multiple code theory." Psychoanalytic Psychology 15, no. 4 (1998): 569–75. http://dx.doi.org/10.1037/h0092769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Shangdi, and Dawei Zhao. "Construction of Multi-receiver Multi-fold Authentication Codes from Singular Symplectic Geometry over Finite Fields." Algebra Colloquium 20, no. 04 (October 7, 2013): 701–10. http://dx.doi.org/10.1142/s1005386713000679.

Full text
Abstract:
As an extension of the basic model of MRA-codes, multi-receiver multi-fold authentication codes can use a single key distribution phase for multiple message transmissions. In this paper, a multi-receiver multi-fold authentication code is constructed from singular symplectic geometry over finite fields, and its parameters and the probabilities of successful impersonation and substitution attacks by malicious groups of receivers are computed.
APA, Harvard, Vancouver, ISO, and other styles
7

Kesselring, Markus S., Fernando Pastawski, Jens Eisert, and Benjamin J. Brown. "The boundaries and twist defects of the color code and their applications to topological quantum computation." Quantum 2 (October 19, 2018): 101. http://dx.doi.org/10.22331/q-2018-10-19-101.

Full text
Abstract:
The color code is both an interesting example of an exactly solved topologically ordered phase of matter and also among the most promising candidate models to realize fault-tolerant quantum computation with minimal resource overhead. The contributions of this work are threefold. First of all, we build upon the abstract theory of boundaries and domain walls of topological phases of matter to comprehensively catalog the objects realizable in color codes. Together with our classification we also provide lattice representations of these objects which include three new types of boundaries as well as a generating set for all 72 color code twist defects. Our work thus provides an explicit toy model that will help to better understand the abstract theory of domain walls. Secondly, we discover a number of interesting new applications of the cataloged objects for quantum information protocols. These include improved methods for performing quantum computations by code deformation, a new four-qubit error-detecting code, as well as families of new quantum error-correcting codes we call stellated color codes, which encode logical qubits at the same distance as the next best color code, but using approximately half the number of physical qubits. To the best of our knowledge, our new topological codes have the highest encoding rate of local stabilizer codes with bounded-weight stabilizers in two dimensions. Finally, we show how the boundaries and twist defects of the color code are represented by multiple copies of other phases. Indeed, in addition to the well studied comparison between the color code and two copies of the surface code, we also compare the color code to two copies of the three-fermion model. In particular, we find that this analogy offers a very clear lens through which we can view the symmetries of the color code which gives rise to its multitude of domain walls.
APA, Harvard, Vancouver, ISO, and other styles
8

Yan, Xiren, and Qiyi Wang. "Coding of Shared Track Gray Encoder." Journal of Dynamic Systems, Measurement, and Control 122, no. 3 (January 4, 1999): 573–76. http://dx.doi.org/10.1115/1.1286628.

Full text
Abstract:
A conventional absolute encoder consists of multiple tracks and therefore has a large size. Based on the theory of shared control, a new kind of absolute encoder called the shared track Gray encoder is proposed. This encoder has only one track, and the code has the characteristics of Gray code. In this paper, first the working principle of the encoder is introduced and the existence condition of the codes for the shared track Gray encoder is derived. Then the search procedure for the codes, using self-developed assembler programs, is given. Finally, the pattern of the encoder is presented. Since the shared track Gray encoder has only one track and the code is progressive, it is much smaller in size and higher in accuracy than a conventional one.
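The abstract relies on the defining property of a Gray code: successive code words differ in exactly one bit, the property a progressive single-track code depends on. As background only (not the paper's shared-track search procedure), a minimal sketch of the standard binary-reflected Gray code and a check of that property:

```python
# Minimal sketch: binary-reflected Gray code (background only; not the paper's
# shared-track search). Successive code words, including the wrap-around pair,
# differ in exactly one bit.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

bits = 4
codes = [binary_to_gray(n) for n in range(2 ** bits)]

# Every adjacent pair (cyclically) differs in exactly one bit.
assert all(hamming_distance(codes[i], codes[(i + 1) % len(codes)]) == 1
           for i in range(len(codes)))
print([format(c, f"0{bits}b") for c in codes])
```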
APA, Harvard, Vancouver, ISO, and other styles
9

Tran, Vinh T., Nagaraj C. Shivaramaiah, Thuan D. Nguyen, Joon W. Cheong, Eamonn P. Glennon, and Andrew G. Dempster. "Generalised Theory on the Effects of Sampling Frequency on GNSS Code Tracking." Journal of Navigation 71, no. 2 (November 28, 2017): 257–80. http://dx.doi.org/10.1017/s0373463317000741.

Full text
Abstract:
Synchronisation of the received Pseudorandom (PRN) code and its locally generated replica is fundamental when estimating user position in Global Navigation Satellite System (GNSS) receivers. It has been observed through experiments that user position accuracy decreases if sampling frequency is an integer multiple of the nominal code rate. This paper provides an accuracy analysis based on the number of samples and the residual code phase of each code chip. The outcomes reveal that the distribution of residual code phases in the code phase range [0, 1/ns), where ns is the number of samples per code chip, is the root cause of accuracy degradation, rather than the ratio between sampling frequency and nominal code rate. Doppler frequencies, coherent integration periods, front-end filter bandwidths and received Carrier to Noise ratios (C/N0) also influence receiver accuracy. Also provided are a sampling frequency selection guideline and new proposed estimates of the correlation output and the Delay Locked Loop (DLL) tracking error, which can be applied to precisely model GNSS receiver baseband signal processing.
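The central quantity in this analysis is the residual code phase of each chip relative to the sampling grid. A hedged numerical sketch (illustrative parameter values, not the paper's experiments): when the sampling frequency is an exact integer multiple of the code rate, every chip sees the same residual, whereas an incommensurate sampling frequency spreads the residuals across [0, 1/ns).

```python
# Illustrative sketch of the residual code phase per chip (parameter values are
# hypothetical, not the paper's). Exact rational arithmetic avoids rounding noise.
import math
from fractions import Fraction

def residual_code_phases(fs_hz, fc_hz, n_chips=1023):
    ns = Fraction(fs_hz) / Fraction(fc_hz)        # samples per code chip
    residuals = set()
    for m in range(n_chips):
        k_first = math.ceil(m * ns)               # first sample index inside chip m
        residuals.add(Fraction(k_first) / ns - m) # residual phase, lies in [0, 1/ns)
    return residuals

fc = Fraction(1023000)                            # nominal GPS C/A code rate (chips/s)
for fs in (Fraction(4092000), Fraction(4125000)): # integer vs non-integer multiple of fc
    r = residual_code_phases(fs, fc)
    print(f"fs/fc = {float(fs / fc):.4f}: {len(r)} distinct residual phases")
```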
APA, Harvard, Vancouver, ISO, and other styles
10

EMMART, NIALL, and CHARLES WEEMS. "SEARCH-BASED AUTOMATIC CODE GENERATION FOR MULTIPRECISION MODULAR EXPONENTIATION ON MULTIPLE GENERATIONS OF GPU." Parallel Processing Letters 23, no. 04 (December 2013): 1340009. http://dx.doi.org/10.1142/s0129626413400094.

Full text
Abstract:
Multiprecision modular exponentiation has a variety of uses, including cryptography, prime testing and computational number theory. It is also a very costly operation to compute. GPU parallelism can be used to accelerate these computations, but to use the GPU efficiently, a problem must involve many simultaneous exponentiation operations. Handling a large number of TLS/SSL encrypted sessions in a data center is an important problem that fits this profile. We are developing a framework that enables generation of highly efficient implementations of exponentiation operations for different NVIDIA GPU architectures and problem instances. One of the challenges in generating such code is that NVIDIA's PTX is not a true assembly language, but is instead a virtual instruction set that is compiled and optimized in different ways for different generations of GPU hardware. Thus, the same PTX code runs with different levels of efficiency on different machines. And as the precision of the computations changes, each architecture has its own break-even points where a different algorithm or parallelization strategy must be employed. To make the code efficient for a given problem instance and architecture thus requires searching a multidimensional space of algorithms and configurations, by generating PTX code for each combination, executing it, validating the numerical result, and evaluating its performance. Our framework automates much of this process, and produces exponentiation code that is up to six times faster than the best known hand-coded implementations for the NVIDIA GTX 580. Our goal for the framework is to enable users to relatively quickly find the best configuration for each new GPU architecture. However, in migrating to the GTX 680, which has three times as many cores as the GTX 580, we found that the best performance our system could achieve was significantly less than for the GTX 580. The decrease was traced to a radical shift in the NVIDIA architecture that greatly reduces the storage resources for each core. Further analysis and feasibility simulations indicate that it should be possible, through changes in our code generators to adapt for different storage models, to take greater advantage of the parallelism on the GTX 680. That will add a new dimension to our search space, but will also give our framework greater flexibility for dealing with future architectures.
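The kernel being accelerated, multiprecision modular exponentiation, is easy to state even though the tuned GPU implementations are not. A plain CPU-side reference sketch (square-and-multiply); nothing here reflects the authors' PTX generators:

```python
# Reference square-and-multiply modular exponentiation. This is only the plain
# CPU formulation of the operation; the paper's contribution is generating
# tuned GPU/PTX implementations of it.
def mod_exp(base: int, exponent: int, modulus: int) -> int:
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                  # multiply in the current exponent bit
            result = (result * base) % modulus
        base = (base * base) % modulus    # square for the next bit
        exponent >>= 1
    return result

# Sanity check against Python's built-in three-argument pow on large operands.
import random
m = random.getrandbits(1024) | 1
b, e = random.getrandbits(1024), random.getrandbits(1024)
assert mod_exp(b, e, m) == pow(b, e, m)
```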
APA, Harvard, Vancouver, ISO, and other styles
11

KAMEYAMA, Michitaka, and Saneaki TAMAKI. "Code Assignment Algorithm for Highly Parallel Multiple-Valued k-Ary Operation Circuits Using Partition Theory." Interdisciplinary Information Sciences 3, no. 1 (1997): 13–24. http://dx.doi.org/10.4036/iis.1997.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Di Trani, Michela, Rachele Mariani, Alessia Renzi, Paul Samuel Greenman, and Luigi Solano. "Alexithymia according to Bucci's multiple code theory: A preliminary investigation with healthy and hypertensive individuals." Psychology and Psychotherapy: Theory, Research and Practice 91, no. 2 (October 3, 2017): 232–47. http://dx.doi.org/10.1111/papt.12158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Solano, Luigi. "Some thoughts between body and mind in the light of Wilma Bucci’s multiple code theory." International Journal of Psychoanalysis 91, no. 6 (December 2010): 1445–64. http://dx.doi.org/10.1111/j.1745-8315.2010.00359.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Shinohara, Katsuyuki, and Toshi Minami. "Encoding of line drawings with multiple hexagonal grid chain code." Systems and Computers in Japan 17, no. 12 (1986): 1–10. http://dx.doi.org/10.1002/scj.4690171201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fan, Jihao, Yonghui Li, Min-Hsiu Hsieh, and Hanwu Chen. "On quantum tensor product codes." Quantum Information and Computation 17, no. 13&14 (November 2017): 1105–22. http://dx.doi.org/10.26421/qic17.13-14-3.

Full text
Abstract:
We present a general framework for the construction of quantum tensor product codes (QTPC). In a classical tensor product code (TPC), its parity check matrix is constructed via the tensor product of parity check matrices of the two component codes. We show that by adding some constraints on the component codes, several classes of dual-containing TPCs can be obtained. By selecting different types of component codes, the proposed method enables the construction of a large family of QTPCs and they can provide a wide variety of quantum error control abilities. In particular, if one of the component codes is selected as a burst-error-correction code, then QTPCs have quantum multiple-burst-error-correction abilities, provided these bursts fall in distinct subblocks. Compared with concatenated quantum codes (CQC), the component code selections of QTPCs are much more flexible than those of CQCs since only one of the component codes of QTPCs needs to satisfy the dual-containing restriction. We show that it is possible to construct QTPCs with parameters better than other classes of quantum error-correction codes (QECC), e.g., CQCs and quantum BCH codes. Many QTPCs are obtained with parameters better than previously known quantum codes available in the literature. Several classes of QTPCs that can correct multiple quantum bursts of errors are constructed based on reversible cyclic codes and maximum-distance-separable (MDS) codes.
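The classical starting point stated in the abstract is that a tensor product code's parity check matrix is the Kronecker (tensor) product of the component parity check matrices. A small numeric sketch of that classical step only, with toy matrices; the quantum dual-containing constraints are not modeled here:

```python
# Classical tensor product code step only: H = H1 (x) H2, the Kronecker product
# of the component parity check matrices, as described in the abstract. The
# quantum (dual-containing / CSS) constraints are not modeled in this sketch.
import numpy as np

H1 = np.array([[1, 1, 0, 1],      # toy parity check matrix of component code 1
               [0, 1, 1, 1]])
H2 = np.array([[1, 0, 1]])        # toy parity check matrix of component code 2

H = np.kron(H1, H2) % 2           # parity check matrix of the tensor product code
print(H.shape)                    # (2*1, 4*3) = (2, 12)
print(H)
```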
APA, Harvard, Vancouver, ISO, and other styles
16

CHEN, S., W. T. TSAI, and X. P. CHEN. "SAMEA: OBJECT-ORIENTED SOFTWARE MAINTENANCE ENVIRONMENT FOR ASSEMBLY PROGRAMS." International Journal of Software Engineering and Knowledge Engineering 02, no. 02 (June 1992): 197–226. http://dx.doi.org/10.1142/s0218194092000105.

Full text
Abstract:
This paper presents SAMEA, a software maintenance environment for assembly programs. It provides object-oriented database support for displaying, understanding, modifying and configuring assembly programs for software maintenance. Understanding of assembly programs is based on the theory of explicit representation of various structural and functional elements of code and the multiple relationships among them. Modification of programs is based on an object-oriented incremental editor and a set of rules that check the correctness of instruction formats. The characteristics of SAMEA are: integration of multiple tools, on-line information, ease of adoption of new tools, and support of software maintenance activities such as program understanding, ripple effect analysis, and program redocumentation. The ripple effect of a contemplated change is the set of parts of the code that need to be reexamined for possible modification. Assembly code elements and the relations among them are represented as objects in SAMEA, which is built on top of the object-oriented database GemStone. SAMEA consists of 26K lines of C code and 4K lines of GemStone code. We have successfully populated 18K lines of BAL code in SAMEA.
APA, Harvard, Vancouver, ISO, and other styles
17

Ni'amah, Khoirun. "Broadband Channel Based on Polar Codes At 2.3 GHz Frequency for 5G Networks in Digitalization Era." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 6, no. 1 (July 23, 2022): 247–57. http://dx.doi.org/10.31289/jite.v6i1.7310.

Full text
Abstract:
This research uses a polar code-based broadband channel affected by human blockage at one of the 5G cellular network frequencies, 2.3 GHz, with a 99 MHz bandwidth, 128 Fast Fourier Transform (FFT) blocks, Cyclic Prefix Orthogonal Frequency Division Multiplexing (CP-OFDM), and Binary Phase Shift Keying (BPSK) modulation. The use of high frequencies makes the technology sensitive to the surrounding environment and to attenuation such as human blockage. Broadband channel modeling for the 5G network is presented as a representative Power Delay Profile (PDP) under the influence of human blockage, yielding 41 paths whose delays are multiples of 10 ns. This research also applies a scaling method to the representative PDP because the FFT produces 128 blocks; the scaling results in 9 paths whose delays are multiples of 50 ns. The results of this study approach an average Bit Error Rate (BER) of 10⁻⁴. BER performance without the polar code under human blockage requires a Signal-to-Noise Ratio (SNR) of 30 dB, the theoretical BER for BPSK modulation requires an SNR of 34.5 dB, and BER performance with the polar code requires an SNR of only 23 dB. These results indicate that using a polar code can reduce power usage by 7 dB compared with the system without a polar code.
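For reference, the theoretical BPSK BER quoted in such comparisons corresponds, over an AWGN channel, to the standard closed form Pb = 0.5 erfc(sqrt(Eb/N0)). The sketch below evaluates that textbook curve for a few values; it does not model the paper's multipath or human-blockage channel, nor polar coding.

```python
# Theoretical BPSK bit error rate over AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0)).
# This is only the textbook reference curve; the paper's channel includes
# multipath and human blockage, which this sketch does not model.
import math

def bpsk_ber_awgn(ebn0_db: float) -> float:
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

for ebn0_db in (0, 4, 8, 12):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER = {bpsk_ber_awgn(ebn0_db):.2e}")
```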
APA, Harvard, Vancouver, ISO, and other styles
18

Minami, Toshi, and Katsuyuki Shinohara. "Encoding of Line Drawings with a Multiple Grid Chain Code." IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8, no. 2 (March 1986): 269–76. http://dx.doi.org/10.1109/tpami.1986.4767780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Wen, Junfei Xu, and Qi Chen. "Knowledge Distillation-Based Multilingual Code Retrieval." Algorithms 15, no. 1 (January 17, 2022): 25. http://dx.doi.org/10.3390/a15010025.

Full text
Abstract:
Semantic code retrieval is the task of retrieving relevant code based on natural language queries. Although it is related to other information retrieval tasks, it needs to bridge the gap between the language used in code (which is usually syntax-specific and logic-specific) and natural language, which is better suited to describing ambiguous concepts and ideas. Existing approaches study code retrieval in natural language for a specific programming language; however, this is unwieldy and often requires a large corpus for each language when dealing with multilingual scenarios. Using knowledge distillation from six existing monolingual Teacher Models to train one Student Model, MPLCS (Multi-Programming Language Code Search), this paper proposes a method to support multi-programming language code search tasks. MPLCS can incorporate multiple languages into one model with low corpus requirements, study the commonality between different programming languages, and improve recall accuracy for programming languages with small datasets. For Ruby, as used in this paper, MPLCS improved the MRR score by 20 to 25%. In addition, MPLCS can compensate for the low recall accuracy of monolingual models when retrieving in other programming languages, and in some cases MPLCS's recall accuracy can even outperform that of monolingual models on their own languages.
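The training signal here is knowledge distillation from monolingual teachers into one student. As generic background (the standard temperature-softened softmax plus KL-divergence formulation, not MPLCS's actual multi-teacher objective), a minimal sketch:

```python
# Generic knowledge distillation loss sketch (softmax temperature + KL
# divergence), shown only as background; MPLCS's actual multi-teacher
# training objective is not reproduced here.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as is conventional
    return temperature ** 2 * np.sum(p_teacher * np.log(p_teacher / p_student))

teacher = np.array([2.0, 1.0, 0.1])
student = np.array([1.5, 1.2, 0.3])
print(distillation_loss(student, teacher))
```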
APA, Harvard, Vancouver, ISO, and other styles
20

Tseng, Shin-Pin, Eddy Wijanto, Po-Han Lai, and Hsu-Chih Cheng. "Bipolar Optical Code Division Multiple Access Techniques Using a Dual Electro-Optical Modulator Implemented in Free-Space Optics Communications." Sensors 20, no. 12 (June 24, 2020): 3583. http://dx.doi.org/10.3390/s20123583.

Full text
Abstract:
This study developed a bipolar optical code division multiple access (Bi-OCDMA) technique based on spectral amplitude coding for the formation and transmission of optical-polarized and coded signals over wireless optical channels. Compared with conventional Bi-OCDMA schemes, the proposed free-space optics communication system that uses a dual electro-optical modulator design improves the transmission rate. In theory, multiple access interference can be removed by using correlation subtraction schemes. The experiment results revealed that the proposed system can be employed to accurately extract codewords from an M-sequence and subsequently reconstruct the desired original data. Moreover, the proposed architecture can be implemented easily in simple and cost-effective designs and may be beneficial for broadening the use of Bi-OCDMA schemes in wireless optical communications.
APA, Harvard, Vancouver, ISO, and other styles
21

Sufi, Fahim. "Algorithms in Low-Code-No-Code for Research Applications: A Practical Review." Algorithms 16, no. 2 (February 13, 2023): 108. http://dx.doi.org/10.3390/a16020108.

Full text
Abstract:
Algorithms have evolved from machine code to low-code-no-code (LCNC) in the past 20 years. Observing the growth of LCNC-based algorithm development, the CEO of GitHub mentioned that the future of coding is no coding at all. This paper systematically reviewed several of the recent studies using mainstream LCNC platforms to understand the area of research, the LCNC platforms used within these studies, and the features of LCNC used for solving individual research questions. We identified 23 research works using LCNC platforms, such as SetXRM, the vf-OS platform, Aure-BPM, CRISP-DM, and Microsoft Power Platform (MPP). About 61% of these existing studies resorted to MPP as their primary choice. The critical research problems solved by these research works were within the area of global news analysis, social media analysis, landslides, tornadoes, COVID-19, digitization of process, manufacturing, logistics, and software/app development. The main reasons identified for solving research problems with LCNC algorithms were as follows: (1) obtaining research data from multiple sources in complete automation; (2) generating artificial intelligence-driven insights without having to manually code them. In the course of describing this review, this paper also demonstrates a practical approach to implement a cyber-attack monitoring algorithm with the most popular LCNC platform.
APA, Harvard, Vancouver, ISO, and other styles
22

Tan, Wen Jun, Wai Teng Tang, Rick Siow Mong Goh, Stephen John Turner, and Weng-Fai Wong. "A Code Generation Framework for Targeting Optimized Library Calls for Multiple Platforms." IEEE Transactions on Parallel and Distributed Systems 26, no. 7 (July 1, 2015): 1789–99. http://dx.doi.org/10.1109/tpds.2014.2329494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Galetti, Erica, David Halliday, and Andrew Curtis. "A simple and exact acoustic wavefield modeling code for data processing, imaging, and interferometry applications." GEOPHYSICS 78, no. 6 (November 1, 2013): F17—F27. http://dx.doi.org/10.1190/geo2012-0443.1.

Full text
Abstract:
Improvements in industrial seismic, seismological, acoustic, or interferometric theory and applications often result in quite subtle changes in sound quality, seismic images, or information which are nevertheless crucial for improved interpretation or experience. When evaluating new theories and algorithms using synthetic data, an important aspect of related research is therefore that numerical errors due to wavefield modeling are reduced to a minimum. We present a new MATLAB code based on the Foldy method that models theoretically exact, direct, and scattered parts of a wavefield. Its main advantage lies in the fact that while all multiple scattering interactions are taken into account, unlike finite-difference or finite-element methods, numerical dispersion errors are avoided. The method is therefore ideal for testing new theory in industrial seismics, seismology, acoustics, and in wavefield interferometry in particular because the latter is particularly sensitive to the dynamics of scattering interactions. We present the theory behind the Foldy acoustic modeling method and provide examples of its implementation. We also benchmark the code against a good finite-difference code. Because our Foldy code was written and optimized to test new theory in seismic interferometry, examples of its application to seismic interferometry are also presented, showing its validity and importance when exact modeling results are needed.
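As background to the Foldy approach described above, a heavily hedged sketch of the generic Foldy system for isotropic point scatterers in 2D: the scattering amplitudes solve a dense linear system built from the free-space Green's function, so all orders of multiple scattering are included without a spatial grid and hence without numerical dispersion. The parameters are made up, and this is not the authors' MATLAB code.

```python
# Hedged sketch of a generic 2D Foldy multiple-scattering solve (isotropic point
# scatterers, free-space Green's function G = (i/4) H0^(1)(k r)). Illustrative
# only; positions and scattering strengths are invented, and this is not the
# authors' MATLAB implementation.
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi                                                  # wavenumber
positions = np.array([[0.0, 0.0], [1.3, 0.4], [0.7, -1.1]])    # scatterer locations
sigma = np.full(len(positions), 0.5 + 0.0j)                    # scattering strengths

def greens(r):
    return 0.25j * hankel1(0, k * r)

def incident(x):                      # plane wave travelling in +x
    return np.exp(1j * k * x[..., 0])

n = len(positions)
G = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        if i != j:
            G[i, j] = greens(np.linalg.norm(positions[i] - positions[j]))

# Foldy system: A = sigma * (u_inc + G A)  ->  (I - diag(sigma) G) A = sigma * u_inc
u_inc = incident(positions)
A = np.linalg.solve(np.eye(n) - np.diag(sigma) @ G, sigma * u_inc)

# Total field at an observation point = incident + scattered contributions.
obs = np.array([3.0, 0.5])
u_total = incident(obs) + sum(A[j] * greens(np.linalg.norm(obs - positions[j]))
                              for j in range(n))
print(u_total)
```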
APA, Harvard, Vancouver, ISO, and other styles
24

Mariani, R., M. Di Trani, A. Negri, and R. Tambelli. "Linguistic analysis of autobiographical narratives in unipolar and bipolar mood disorders in light of multiple code theory." Journal of Affective Disorders 273 (August 2020): 24–31. http://dx.doi.org/10.1016/j.jad.2020.03.170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Marquardt, Kyle L., and Daniel Pemstein. "IRT Models for Expert-Coded Panel Data." Political Analysis 26, no. 4 (September 3, 2018): 431–56. http://dx.doi.org/10.1017/pan.2018.28.

Full text
Abstract:
Data sets quantifying phenomena of social-scientific interest often use multiple experts to code latent concepts. While it remains standard practice to report the average score across experts, experts likely vary in both their expertise and their interpretation of question scales. As a result, the mean may be an inaccurate statistic. Item-response theory (IRT) models provide an intuitive method for taking these forms of expert disagreement into account when aggregating ordinal ratings produced by experts, but they have rarely been applied to cross-national expert-coded panel data. We investigate the utility of IRT models for aggregating expert-coded data by comparing the performance of various IRT models to the standard practice of reporting average expert codes, using both data from the V-Dem data set and ecologically motivated simulated data. We find that IRT approaches outperform simple averages when experts vary in reliability and exhibit differential item functioning (DIF). IRT models are also generally robust even in the absence of simulated DIF or varying expert reliability. Our findings suggest that producers of cross-national data sets should adopt IRT techniques to aggregate expert-coded data measuring latent concepts.
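A toy illustration of the failure mode described here: if two experts rate the same latent values but apply different thresholds (differential item functioning), the plain average shifts with which expert happened to code each case, which is what an IRT aggregation model is meant to absorb. This is a made-up simulation, not V-Dem data or the authors' model.

```python
# Toy simulation of differential item functioning (DIF) among expert coders.
# Not V-Dem data and not the authors' IRT model; it only shows why a plain
# average across experts with different thresholds can mislead.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=1000)                  # true latent concept values

def ordinal_rating(x, thresholds):
    """Map latent values to ordinal categories 0..len(thresholds)."""
    return np.searchsorted(thresholds, x)

lenient_expert = ordinal_rating(latent, thresholds=[-1.0, 0.0, 1.0])
strict_expert = ordinal_rating(latent, thresholds=[-0.5, 0.5, 1.5])

# Same cases, same latent truth, yet the two experts' mean codes differ,
# so a raw average depends on which expert coded which case.
print("lenient expert mean:", lenient_expert.mean())
print("strict expert mean: ", strict_expert.mean())
```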
APA, Harvard, Vancouver, ISO, and other styles
26

Ilic, Radovan, Vesna Jokic-Spasic, Petar Belicev, and Milos Dragovic. "The Monte Carlo SRNA code as the engine in ISTAR proton dose planning software for the TESLA accelerator installation." Nuclear Technology and Radiation Protection 19, no. 2 (2004): 30–35. http://dx.doi.org/10.2298/ntrp0402030i.

Full text
Abstract:
This paper describes the application of the SRNA Monte Carlo package to proton transport simulations in complex geometries and different material compositions. The SRNA package was developed for 3D dose distribution calculations in proton therapy and dosimetry and is based on the theory of multiple scattering. Compound nucleus decay was simulated by our own model and the Russian MSDM model using ICRU 63 data. The package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry based on CT data and conversion of the Hounsfield data to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a Multi-Layer Faraday Cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and an intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, and the results of numerical experiments with proton beams to obtain 3D dose distributions in the eye and in breast tumors.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhou, Fangming, Lulu Zhao, Limin Li, Yifei Hu, Xinglong Jiang, Jinpei Yu, and Guang Liang. "GNSS Signal Acquisition Algorithm Based on Two-Stage Compression of Code-Frequency Domain." Applied Sciences 12, no. 12 (June 20, 2022): 6255. http://dx.doi.org/10.3390/app12126255.

Full text
Abstract:
The recently emerging compressed sensing (CS) theory makes GNSS signal processing at a sub-Nyquist rate possible if the signal has a sparse representation in a certain domain. Previously proposed code-domain compression acquisition algorithms have high computational complexity and low acquisition accuracy under high dynamic conditions. In this paper, a GNSS signal acquisition algorithm based on two-stage compression of the code-frequency domain is proposed. The algorithm maps the incoming signal of the same interval to multiple carrier frequency bins and overlaps the mapped signals that belong to the same code phase. Meanwhile, code-domain compression is applied to the preprocessed signal, replacing circular correlation with compressed reconstruction to obtain the Doppler frequency and code phase. Theoretical analyses and simulation results show that the proposed algorithm can improve the frequency search accuracy and reduce the computational complexity by about 50% under high dynamic conditions.
APA, Harvard, Vancouver, ISO, and other styles
28

Moses, D. M., and H. G. L. Prion. "A three-dimensional model for bolted connections in wood." Canadian Journal of Civil Engineering 30, no. 3 (June 1, 2003): 555–67. http://dx.doi.org/10.1139/l03-009.

Full text
Abstract:
Recent criticism of the bolted connection requirements in the Canadian wood design code CSA Standard O86 indicates that the code lacks consideration of the different modes of failure, particularly as they relate to multiple-bolt connections. A finite element model is proposed to predict load–displacement behaviour, stress distributions, ultimate strength, and mode of failure in single- and multiple-bolt connections. The three-dimensional (3-D) model uses anisotropic plasticity for the wood member and elastoplasticity for the bolt. The Weibull weakest link theory is used to predict failure at given levels of probability. Predictions for connection behaviour in Douglas-fir and laminated strand lumber (LSL) correspond to experimentally observed behaviour. The output from the 3-D model is used for a multiple-bolt connection spring model to illustrate many of the phenomena described in the literature. Key words: bolt, Douglas-fir, connection, model, plasticity, weakest link, wood.
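The probabilistic failure criterion used above is Weibull weakest link theory. A hedged sketch of the standard two-parameter form, P_f = 1 - exp(-(V/V0)(sigma/sigma0)^m), with illustrative parameters rather than the calibrated wood properties used in the paper:

```python
# Standard Weibull weakest-link failure probability (illustrative parameters,
# not the calibrated wood properties used in the paper). Larger stressed
# volumes fail at lower stress for a given probability.
import math

def weibull_failure_probability(stress, volume, sigma0, m, v0=1.0):
    """P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m)."""
    return 1.0 - math.exp(-(volume / v0) * (stress / sigma0) ** m)

sigma0, m = 50.0, 6.0          # hypothetical scale (MPa) and shape parameters
for volume in (1.0, 10.0):
    print(volume, [round(weibull_failure_probability(s, volume, sigma0, m), 3)
                   for s in (20.0, 35.0, 50.0)])
```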
APA, Harvard, Vancouver, ISO, and other styles
29

Cleveland, Marisa, and Simon Cleveland. "Exploring Public Relations in the Firm." International Journal of Smart Education and Urban Society 12, no. 4 (October 2021): 52–62. http://dx.doi.org/10.4018/ijseus.2021100105.

Full text
Abstract:
While postmodern theory has been explored within multiple domains, its application to the domain of public relations is somewhat scant. As a result, constructs, such as power, crisis communication, and code of ethics within the communication profession in the firm require further examination. This study investigates the current postmodernist perspective within organization theory and expands the body of knowledge on postmodernism and public relations through the application of seminal audit literature analysis.
APA, Harvard, Vancouver, ISO, and other styles
30

Matveev, Nikolay, and Andrey Turlikov. "Review of random multiple access methods for massive machine type communication." Information and Control Systems, no. 6 (January 16, 2020): 54–67. http://dx.doi.org/10.31799/1684-8853-2019-6-54-67.

Full text
Abstract:
Introduction: Intensive research is currently underway in the field of data transmission systems for the Internet of Things in relation to various scenarios of Massive Machine Type Communication. The presence of a large number of devices in such systems necessitates the use of methods of random multiple access to a common communication channel. It is proposed in some works to increase the channel utilization efficiency by the use of error correction coding methods for conflict resolution (Coded Random Access). The vast variety of options for using such communication systems has made it impossible to compare algorithms implementing this approach under the same conditions. This is a problem that restrains the development of both the theory and practice of using error correction code methods for conflict resolution. Purpose: Developing a unified approach to the description of random multiple access algorithms; performing, on the basis of this approach, a review and comparative analysis of algorithms in which error correction code methods are used for conflict resolution. Results: A model of a random multiple access system is formulated in the form of a set of assumptions that reflect both the features of various scenarios of Massive Machine Type Communication and the main features of random multiple access algorithms, including Coded Random Access approaches. The system models are classified by the following features: 1) a finite or infinite number of subscribers; 2) stable, unstable or metastable systems; 3) systems with retransmissions or without them; 4) systems with losses or without them. For a lossy system, the main characteristics are Throughput (the proportion of successfully delivered messages) and Packet Loss Rate (probability of a message loss). For a lossless system, the basic characteristics are the algorithm speed and the average delay. A systematic review and comparative analysis of Coded Random Access algorithms have been carried out. The result of the comparative analysis is presented in a visual tabular form. Practical relevance: The proposed model of a random multiple access system can be used as a methodological basis for research and development of random multiple access algorithms for both existing and new scenarios of Massive Machine Type Communication. The systematic results of the review allow us to identify promising areas of research in the field of data transmission systems for the Internet of Things.
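As a point of reference for the surveyed algorithms, the classical uncoded slotted ALOHA throughput S = G exp(-G) (successful transmissions per slot at offered load G) is the baseline that coded random access schemes are designed to beat. A small sketch of that baseline only; none of the reviewed coded schemes is implemented here.

```python
# Classical slotted ALOHA baseline: throughput S = G * exp(-G), peaking at
# 1/e for offered load G = 1. Shown only as the uncoded reference point that
# coded random access (e.g. SIC-based) schemes aim to improve upon.
import math

def slotted_aloha_throughput(offered_load_g: float) -> float:
    return offered_load_g * math.exp(-offered_load_g)

for g in (0.25, 0.5, 1.0, 2.0):
    print(f"G = {g:4.2f} -> S = {slotted_aloha_throughput(g):.3f}")
```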
APA, Harvard, Vancouver, ISO, and other styles
31

PLONSKI, ISIDOR S. "Multiple original spellings, living authors and the Code: The case of Malachius raniansis Mawlood, Hammamurad & Abdulla, 2017 (Coleoptera: Melyridae)." Zootaxa 4927, no. 2 (February 12, 2021): 297–300. http://dx.doi.org/10.11646/zootaxa.4927.2.11.

Full text
Abstract:
The present communication is primarily nomenclatural; classical taxonomy is touched on only in a side note on a diagnosis. It uses technical terminology coined by Alain Dubois, who is interested in the study of the concepts and theory of biological nomenclature (i.e. the “objective connection between the real world of populations of organisms and the world of language” (Dubois & Ohler 1997)), and who discusses the current ‘International Code for Zoological Nomenclature’ [hereafter just called ‘the Code’] in great detail. The terms are explained where necessary, but see also the glossaries in Dubois et al. (2019) and the works by A. Dubois cited below.
APA, Harvard, Vancouver, ISO, and other styles
32

Miguel de Almeida Areias, Pedro, Timon Rabczuk, and Joaquim Infante Barbosa. "The extended unsymmetric frontal solution for multiple-point constraints." Engineering Computations 31, no. 7 (September 30, 2014): 1582–607. http://dx.doi.org/10.1108/ec-10-2013-0263.

Full text
Abstract:
Purpose – The purpose of this paper is to discuss the linear solution of equality constrained problems by using the Frontal solution method without explicit assembling. Design/methodology/approach – Re-written frontal solution method with a priori pivot and front sequence. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage. Findings – When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications – Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications – A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown with large systems solved in core. Social implications – More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method. Originality/value – Use of topological ordering of constraints. A-priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the Frontal solver. Use of OpenMP in the main Frontal loop, now quantified. Availability of Software.
APA, Harvard, Vancouver, ISO, and other styles
33

Mahadzir, Nurul Husna, et al. "Sentiment Analysis of Code-Mixed Text: A Review." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 2469–78. http://dx.doi.org/10.17762/turcomat.v12i3.1239.

Full text
Abstract:
In recent times, sentiment analysis has become one of the most active and progressively popular research areas in information retrieval and text mining. To date, sentiment analysis has been applied in various domains such as product, movie, sport and political reviews. Most of the previous work in this field has focused on analyzing only a single language, especially English. However, with globalization and the increasing number of Internet users worldwide, it is common to see posts written in multiple languages. Moreover, in unstructured content like Twitter posts, people tend to mix languages in one sentence, which makes the sentiment analysis process even harder and more challenging. This paper reviews the state of the art of sentiment analysis for code-mixed text, including detailed discussions of each focus area, a qualitative comparison, and the limitations of current approaches. It also highlights challenges along this line of research and suggests several recommendations for future work.
APA, Harvard, Vancouver, ISO, and other styles
34

Katamba, David, Cedric Marvin Nkiko, and Consolate Ademson. "Managing stakeholders’ influence on embracing business code of conduct and ethics in a local pharmaceutical company." Review of International Business and Strategy 26, no. 2 (June 6, 2016): 261–90. http://dx.doi.org/10.1108/ribs-02-2014-0028.

Full text
Abstract:
Purpose This paper aims to avail a soft approach to embracing the process of creating a business code of conduct and ethics and make it work for a pharmaceutical company [player] which wants to remain relevant before stakeholders and society, amidst escalating inducements to go against the acceptable pharmaceutical behaviour. Design/methodology/approach Data collection was guided by qualitative methodologies. A four stepwise process was followed: data collection at the case company – Kampala Pharmaceutical Industries (KPI), Uganda; validation of data collected at KPI; data collection from external stakeholders of KPI; and re-validation of KPI data based on data collected from external stakeholders. In all this, combination of semi-structured and informal interviews with CEOs, senior staff managers, non-participant observation of ethical related activities plus organizing a stakeholder engagement workshop on business code of conduct and ethics was achieved. This workshop helped document what ought to be an ideal design process to secure stakeholder buy-in of the code of business ethics. A local pharmaceutical company in Uganda, KPI was used, which, for continuous five years since its adoption of the business code of conduct and ethics, registered commercial viability without any record of unethical practices. Triangulation was used to ensure credibility and validity of the results. For data analysis, a three-stepwise process was followed, which helped develop a framework within which the collected data revealed themes which were later analyzed. For generalization of the findings, the “adaptive theory approach” was used. Findings When poorly introduced in an organization, the business code of conduct and ethics can work against the company simply because it will be received with “intentional rebellion” from stakeholders, notably staff. However, when a soft stakeholder engagement and consultative approach is used and followed during the business code of ethics and conduct’s design process, multiple stakeholders feel proud and are much willing to live by the promise spelt out in it. Cited notable benefits of living by the code include reputational enhancement, strategic competitiveness and increased possibilities of wining cross-border cooperation among like-minded pharmaceutical players. In the efforts to reap from the code of ethics, communication was observed as an indispensable activity. Refresher trainings to remind the stakeholders about the promises in the code are also needed as time passes by, otherwise they forget. Needless to say, rewarding those who live an exemplary life in embracing and living by the code was cited as key in sustaining the ethical agenda. Lastly, managing multiple stakeholders influences is a curvilinear fashion and involves back and forth consultations. Practical implications The lessons learnt from KPI can be borrowed and used by both global pharmaceutical players and national/local players, especially those that face challenges living by the promise of their existing codes or those without business code of conduct and ethics. That is, both players can use the suggested process to help participants in their medicine supply chain to come up with working business codes of conduct, as well as guide the stakeholder consultative process which results in stakeholder buy-in. Originality/value For many years, issues surrounding bioethics have dominated priorities of World Health Organization (WHO), UNESCO and many international and national development allies. 
However, there is an escalating violation of medical codes of conduct and ethics. Hence, this publication is a step toward the implementation of the principles and objectives of the UNESCO Universal Declaration on Bioethics and Human Rights which is currently challenged with a difficult question posed by life sciences – How far can we go given the dented medical relationship between ethics, medical science and freedom?
APA, Harvard, Vancouver, ISO, and other styles
35

Bravyi, Sergey, Guillaume Duclos-Cianci, David Poulin, and Martin Suchara. "Subsystem surface codes with three-qubit check operators." Quantum Information and Computation 13, no. 11&12 (November 2013): 963–85. http://dx.doi.org/10.26421/qic13.11-12-4.

Full text
Abstract:
We propose a simplified version of Kitaev's surface code in which error correction requires only three-qubit parity measurements for the Pauli operators XXX and ZZZ. The new code belongs to the class of subsystem stabilizer codes. It inherits many favorable properties of the standard surface code, such as encoding of multiple logical qubits on a planar lattice with punctured holes, efficient decoding by either minimum-weight matching or renormalization group methods, and a high error threshold. The new subsystem surface code (SSC) gives rise to an exactly solvable Hamiltonian with 3-qubit interactions, a topologically ordered ground state, and a constant energy gap. We construct a local unitary transformation mapping the SSC Hamiltonian to that of the ordinary surface code, thus showing that the two Hamiltonians belong to the same topological class. We describe error correction protocols for the SSC and determine its error thresholds under several natural error models. In particular, we show that the SSC has an error threshold of approximately 0.6% for the standard circuit-based error model studied in the literature. We also consider a model in which three-qubit parity operators can be measured directly, and show that the SSC has an error threshold of approximately 0.97% in this setting.
APA, Harvard, Vancouver, ISO, and other styles
36

Monkova, Katarina, and Peter Monka. "Newly Developed Software Application for Multiple Access Process Planning." Advances in Mechanical Engineering 6 (January 1, 2014): 539071. http://dx.doi.org/10.1155/2014/539071.

Full text
Abstract:
The purchase of a complex system for computer aided process planning (CAPP) can be expensive for small and some medium-sized plants, sometimes an inaccessible investment with a long payback period. Given this fact and the authors' experience with Eastern European plants, they decided to design a new database application suitable for holding production, stock, and economic data as well as for processing and exploiting those data within the manufacturing process. The application can also be used to generate a process plan according to selected criteria and to create technological documentation and NC programs. It is based on the theory of a multivariant approach to computer aided plan generation. Its fundamental features, the internal mathematical structure and the new code system of processed objects, were prepared by the authors. Verification of the designed information system in real practice has shown that it enables about a 30% reduction in cost and production time and decreases input material assortment variability.
APA, Harvard, Vancouver, ISO, and other styles
37

Neal, Jennifer Watling, Kristen J. Mills, Kathryn McAlindon, Zachary P. Neal, and Jennifer A. Lawlor. "Multiple Audiences for Encouraging Research Use: Uncovering a Typology of Educators." Educational Administration Quarterly 55, no. 1 (June 26, 2018): 154–81. http://dx.doi.org/10.1177/0013161x18785867.

Full text
Abstract:
Purpose: We apply diffusion of innovations theory to examine two key research questions designed to inform efforts to improve the research–practice gap in education: (1) Are there distinct types of educators that differ in their prioritization of the compatibility, observability, complexity, relative advantage, and trialability of research? and (2) Are educators’ roles or context associated with their categorization in this typology? Research Method: Using semistructured interview data in two Michigan counties from intermediate school district staff (N = 24), district central office staff (N = 18), principals (N = 22), and school building staff (N = 23), we first used directed content analysis to code for mentions of compatibility, observability, complexity, relative advantage, and trialability. Next, using the coded data, we conducted a hierarchical agglomerative cluster analysis and follow-up cross-tabulations to assess whether cluster memberships were associated with educators’ roles or county context. Findings: Educators in our sample could be categorized in one of five clusters distinguished primarily by different patterns of prioritization of the compatibility, observability, and complexity of research. Membership in these clusters did not vary by role but did vary by county, suggesting the importance of context for educators’ perceptions of research. Implications for Research and Practice: These findings suggest that narrowing the research–practice gap in education will require attending to multiple audiences of educators with distinct priorities that guide their perceptions and use of educational research and evidence-based practices.
APA, Harvard, Vancouver, ISO, and other styles
38

Pleshivtseva, Yuliya, Edgar Rapoport, Bernard Nacke, Alexander Nikanorov, Paolo Di Barba, Michele Forzan, Sergio Lupi, and Elisabetta Sieni. "Design concepts of induction mass heating technology based on multiple-criteria optimization." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 36, no. 2 (March 6, 2017): 386–400. http://dx.doi.org/10.1108/compel-05-2016-0216.

Full text
Abstract:
Purpose The purpose of this paper is to describe main ideas and demonstrate results of the research activities carried out by the authors in the field of design concepts of induction mass heating technology based on multiple-criteria optimization. The main goal of the studies is the application of different optimization methods and numerical finite element method (FEM) codes for field analysis to solve the multi-objective optimization problem that is mathematically formulated in terms of the most important optimization criteria, for example, maximum temperature uniformity, maximum energy efficiency and minimum scale formation. Design/methodology/approach Standard genetic algorithm (GA), non-dominated sorting genetic algorithm (NSGA) and alternance method of parametric optimization based on the optimal control theory are applied as effective tools for the practice-oriented problems for multiple-criteria optimization of induction heaters’ design based on non-linear coupled electromagnetic and temperature field analysis. Different approaches are used for combining FEM codes for interconnected field analysis and optimization algorithms into the automated optimization procedure. Findings Optimization procedures are tested and investigated for two- and three-criteria optimization problems solution on the examples of induction heating of a graphite disk, induction heating of aluminum and steel billets prior to hot forming. Practical implications Solved problems are based on the design of practical industrial applications. The developed optimization procedures are planned to be applied to the wide range of real-life problems of the optimal design and control of different electromagnetic devices and systems. Originality/value The paper describes main ideas and results of the research activities carried out by the authors during past years in the field of multiple-criteria optimization of induction heaters’ design based on numerical coupled electromagnetic and temperature field analysis. Implementing the automated procedure that combines a numerical FEM code for coupled field analysis with an optimization algorithm and its subsequent application for designing induction heaters makes the proposed approach specific and original. The paper also demonstrates that different optimization strategies used (standard GA, NSGA-II and the alternance method of optimal control theory) are effective for real-life industrial applications for multiple-criteria optimization of induction heaters design.
APA, Harvard, Vancouver, ISO, and other styles
39

Qiu, Cheng, Zhen Xing Zheng, Wei Xia, and Zhao Yao Zhou. "Numerical Modeling and Algorithm for Analysis of Powder Rolling." Advanced Materials Research 211-212 (February 2011): 1182–88. http://dx.doi.org/10.4028/www.scientific.net/amr.211-212.1182.

Full text
Abstract:
There are multiple nonlinearities during the course of powder rolling. Considering material and geometrical nonlinearity during powder rolling, a constitutive model complying with an elliptical yield criterion is constructed for powder rolling based on the Updated Lagrange (U.L.) formulation, from which the basic theory of the numerical simulation is deduced. The numerical algorithm is discussed and implemented in user subroutines of Marc. With this code, numerical simulation of powder rolling is performed. It is shown that the simulation results are consistent with experiment and that the overall result is dependable. The constitutive model and code developed in this work are correct and show good convergence during calculation. Their robustness remains to be validated in simulations with complicated geometrical shapes and boundary conditions.
APA, Harvard, Vancouver, ISO, and other styles
40

Galle, Annika. "‘Comply or explain’ in Belgium, Germany, Italy, the Netherlands and the UK: Insufficient explanations and an empirical analysis." Corporate Ownership and Control 12, no. 1 (2014): 862–73. http://dx.doi.org/10.22495/cocv12i1c9p9.

Full text
Abstract:
This study analyses the level and quality of the application of the comply or explain principle for listed companies in Belgium, Germany, Italy, the Netherlands and the UK. Although the comply or explain principle has nowadays become a central element in the corporate governance of the EU, a common understanding of the scope and necessary conditions for it to work effectively has not yet been achieved. This study explains the comply or explain principle from the perspective of economic theory (legitimacy theory and the theory of market failure) and is the first study of the application of the principle in which consecutive years are analysed for multiple countries simultaneously with one research method. In previous research the quality of the explanations for the code provisions not complied with and the explanatory factors have often been overlooked, while these are the key elements of the current European debate. In this study 237 annual accounts for the years 2005-2007 are analysed for five countries. The results show that company size and the period of time the comply or explain principle has been applicable in a country predict the level and quality of compliance. Although the level of code compliance is high, the quality of the explanations for code provisions not complied with is insufficient. Further fine-tuning of the comply or explain principle is necessary to achieve the most effective application in order to make the principle work in practice as intended.
APA, Harvard, Vancouver, ISO, and other styles
41

Kvasnička, Vladimír, and Jiří Pospíchal. "Constructive enumeration of acyclic molecules." Collection of Czechoslovak Chemical Communications 56, no. 9 (1991): 1777–802. http://dx.doi.org/10.1135/cccc19911777.

Full text
Abstract:
A simple combinatorial theory of the constructive enumeration of rooted trees and trees is suggested. As a byproduct of this approach, very simple recursive formulae for numerical (i.e. nonconstructive) enumeration are obtained. The method may be readily generalized to (rooted) trees with edges evaluated by multiplicities and vertices evaluated by alphabetic (atomic) symbols. In the process of constructive enumeration, the (rooted) trees are represented by an unambiguous linear code composed of the valences of vertices, edge multiplicities, and the atomic symbols assigned to vertices. The elaborated theory may serve as a simple algorithmic background for computer programs for the constructive enumeration of acyclic molecular structures containing heteroatoms and multiple bonds.
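A classical recurrence of the kind this abstract alludes to is the standard count of unlabelled rooted trees (OEIS A000081). The sketch below implements that recurrence as background only, not the authors' constructive, code-generating algorithm:

```python
# Standard recurrence for counting unlabelled rooted trees on n vertices
# (OEIS A000081); background for the "numerical enumeration" mentioned in
# the abstract, not the authors' constructive algorithm.
def rooted_tree_counts(n_max):
    a = [0, 1]                                   # a[1] = 1 (single vertex)
    for n in range(1, n_max):
        total = 0
        for k in range(1, n + 1):
            s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
            total += s * a[n - k + 1]
        a.append(total // n)                     # a[n+1]
    return a

print(rooted_tree_counts(8)[1:])   # [1, 1, 2, 4, 9, 20, 48, 115]
```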
APA, Harvard, Vancouver, ISO, and other styles
42

Shin, Jae Kyun, and S. Krishnamurty. "On Identification and Canonical Numbering of Pin-Jointed Kinematic Chains." Journal of Mechanical Design 116, no. 1 (March 1, 1994): 182–88. http://dx.doi.org/10.1115/1.2919344.

Full text
Abstract:
This paper deals with the development of a standard code for the unique representation of pin-jointed kinematic chains based on graph theory. Salient features of this method include the development of an efficient and robust algorithm for the identification of isomorphism in kinematic chains; the formulation of a unified procedure for the analysis of symmetry in kinematic chains; and the utilization of symmetry in the coding process, resulting in a unique, well-arranged numbering of the links. The method is not restricted to simple-jointed kinematic chains; it can be applied to any kinematic chain that can be represented as a simple graph, including open-jointed and multiple-jointed chains. In addition, the method is decodable, as the original chain can be reconstructed unambiguously from the code values associated with it.
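To illustrate the bare idea of a unique (canonical) code for a chain's graph, a deliberately naive sketch: enumerate all vertex relabelings and keep the lexicographically smallest adjacency string, so two isomorphic graphs receive the same code. The paper's algorithm is far more efficient and exploits symmetry; this brute force is background only.

```python
# Brute-force canonical code for a small undirected graph: try every vertex
# relabeling and keep the lexicographically smallest adjacency string. This
# only illustrates the idea of a unique code; the paper's method avoids the
# factorial search by exploiting symmetry.
from itertools import permutations

def canonical_code(n_vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n_vertices)):
        bits = "".join(
            "1" if frozenset((perm[i], perm[j])) in edge_set else "0"
            for i in range(n_vertices) for j in range(i + 1, n_vertices))
        best = bits if best is None else min(best, bits)
    return best

# Two labelings of the same four-vertex cycle (e.g. a four-bar chain's graph)
# yield the same canonical code.
g1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
g2 = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(canonical_code(4, g1) == canonical_code(4, g2))   # True
```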
APA, Harvard, Vancouver, ISO, and other styles
43

Kokhanovsky, A. A., and T. Nauss. "Reflection and transmission of solar light by clouds: asymptotic theory." Atmospheric Chemistry and Physics Discussions 6, no. 4 (August 31, 2006): 8301–34. http://dx.doi.org/10.5194/acpd-6-8301-2006.

Full text
Abstract:
The authors introduce the radiative transfer model CLOUD for the reflection, transmission, and absorption characteristics of terrestrial clouds and discuss the accuracy of the approximations used within the model. A Fortran implementation of CLOUD is available for download. This model is fast, accurate, and capable of calculating multiple radiative transfer characteristics of cloudy media, including the spherical and plane albedo, reflection and transmission functions, and absorptance, as well as global and diffuse transmittance. The approximations are based on the asymptotic solutions of the radiative transfer equations. While the analytic part of the solutions is treated in the code in an approximate way, the corresponding reflection function (RF) of a semi-infinite water cloud R∞ is calculated using numerical solutions of the radiative transfer equations under the assumption of Deirmendjian's cloud C1 model. In the case of ice clouds, the fractal ice crystal model is used. The resulting values of R∞ with respect to the viewing geometry are stored in a look-up table (LUT).
APA, Harvard, Vancouver, ISO, and other styles
44

Mitchell, Richard C. "Grounded Theory and Autopoietic Social Systems: Are They Methodologically Compatible?" Qualitative Sociology Review 3, no. 2 (August 15, 2007): 105–18. http://dx.doi.org/10.18778/1733-8077.3.2.06.

Full text
Abstract:
The paper offers a secondary analysis from a grounded theory doctoral study that reconsiders its “grounded systemic design” (Mitchell, 2005, 2007). While theorists across multiple disciplines fiercely debate the ontological implications of Niklas Luhmann’s autopoietic systems theory (Deflem 1998; Graber and Teubner 1998; King and Thornhill 2003; Mingers 2002; Neves 2001; O’Byrne 2003; Verschraegen 2002, for example), few investigators have yet adopted his core constructs empirically (see Gregory, Gibson and Robinson 2005 for an exception). Glaser’s (1992, 2005) repeated concerns for grounded theorists to elucidate a “theoretical code” have provided an additional entry point into this project of integrating grounded theory with Luhmann’s abstract conceptual thinking about how global society operates. The author argues that this integration of methodology and systems thinking provides an evolution of grounded theory – rather than its ongoing “erosion” as Greckhamer and Koro-Ljungberg (2005) have feared – and a transportable set of methodological and analytical constructs is presented as a basis for further grounded study.
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Jing, Qiaoyun Liao, and Lipei Li. "Cognitive Feature Extraction of Puns Code-Switching Based on Neural Network Optimization Algorithm." International Transactions on Electrical Energy Systems 2022 (October 5, 2022): 1–11. http://dx.doi.org/10.1155/2022/6535308.

Full text
Abstract:
Code-switching is the alternating use of multiple languages or language varieties within the same conversation. Broadly speaking, code-switching also refers to adjusting one’s language style, appearance, behavior, and expression in order to improve the comfort of others in exchange for fair treatment, quality service, and employment opportunities. “Besieged City” is considered a masterpiece of 20th-century Chinese literature. In terms of the data, this work contains a total of 110 code switches; although this language phenomenon has been studied many times, none of the existing studies approach it from the perspective of register. However, research on such language switching based on register theory is of great significance. It is generally believed that human thinking takes three basic forms: abstract (logical) thinking, image (intuitive) thinking, and inspiration (awareness) thinking. Artificial neural networks are a means of simulating the second of these. Therefore, this paper proposes research on the cognitive feature extraction of pun code-switching based on a neural network optimization algorithm. It mainly introduces code-switching from a cognitive-linguistic standpoint, briefly analyzes code-switching and speech feature extraction, and uses a neural network optimization algorithm to analyze code-switching in depth. Finally, in the experimental part, the novel “Besieged City” is analyzed: 89 instances of code-switching in the text are examined, and the data are analyzed for three register variables. The experimental results show that, in the novel, there are two types of code-switching, prepared and improvised: 24 code switches are prepared, accounting for 26.9%, and 10 are improvised, accounting for 8.9%. As for the verbal code switches, both prepared and impromptu cases occur: 36 are improvised, accounting for 40.4%, and 19 are prepared, accounting for 21.3%. The analysis also confirms that the more formal the text, the less linguistic transformation it contains.
APA, Harvard, Vancouver, ISO, and other styles
46

Roelfsema, Pieter R., Andreas K. Engel, Peter König, and Wolf Singer. "The Role of Neuronal Synchronization in Response Selection: A Biologically Plausible Theory of Structured Representations in the Visual Cortex." Journal of Cognitive Neuroscience 8, no. 6 (November 1996): 603–25. http://dx.doi.org/10.1162/jocn.1996.8.6.603.

Full text
Abstract:
Recent experimental results in the visual cortex of cats and monkeys have suggested an important role for synchronization of neuronal activity on a millisecond time scale. Synchronization has been found to occur selectively between neuronal responses to related image components. This suggests that not only the firing rates of neurons but also the relative timing of their action potentials is used as a coding dimension. Thus, a powerful relational code would be available, in addition to the rate code, for the representation of perceptual objects. This could alleviate difficulties in the simultaneous representation of multiple objects. In this article we present a set of theoretical arguments and predictions concerning the mechanisms that could group neurons responding to related image components into coherently active aggregates. Synchrony is likely to be mediated by synchronizing connections; we introduce the concept of an interaction skeleton to refer to the subset of synchronizing connections that are rendered effective by a particular stimulus configuration. If the image is segmented into objects, these objects can typically be segmented further into their constituent parts. The synchronization behavior of neurons that represent the various image components may accurately reflect this hierarchical clustering. We propose that the range of synchronizing interactions is a dynamic parameter of the cortical network, so that the grain of the resultant grouping process may be adapted to the actual behavioral requirements. It can be argued that different aspects of purposeful behavior rely on separable processes by which sensory input is transformed into adjustments of motor activity. Indeed, neurophysiological evidence has suggested separate processing streams originating in the primary visual cortex for object identification and sensorimotor coordination. However, such a separation calls for a mechanism that avoids interference effects in the presence of multiple objects, or when multiple motor programs are simultaneously prepared. In this article we suggest that synchronization between responses of neurons in both the visual cortex and in areas that are involved in response selection and execution might allow for a selective routing of sensory information to the appropriate motor program.
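As a toy illustration of the idea that relative spike timing can serve as a coding dimension alongside firing rate, the sketch below (not from the article) generates three binary spike trains, two of which share a common drive, and uses zero-lag correlation to recover which pair belongs to the same putative assembly.

```python
# Toy illustration only: treating relative spike timing as a coding
# dimension by measuring zero-lag correlation between binary spike trains.
import numpy as np

rng = np.random.default_rng(0)
t = 1000                                   # time bins (e.g., 1 ms each)
common_drive = rng.random(t) < 0.05        # shared input inducing synchrony

# Neurons 1 and 2 respond to the same object -> share the common drive;
# neuron 3 responds to a different object -> fires independently.
n1 = common_drive | (rng.random(t) < 0.02)
n2 = common_drive | (rng.random(t) < 0.02)
n3 = rng.random(t) < 0.07

def zero_lag_corr(a, b):
    """Pearson correlation of two binary spike trains at zero lag."""
    return np.corrcoef(a.astype(float), b.astype(float))[0, 1]

print(zero_lag_corr(n1, n2))   # high: candidates for the same assembly
print(zero_lag_corr(n1, n3))   # near zero: different assemblies
```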
APA, Harvard, Vancouver, ISO, and other styles
47

Vu, Anh Ngoc, and Ngoc Son Pham. "Double multiple stream tube theory coupled with dynamic stall and wake correction for aerodynamic investigation of vertical axis wind turbine." Science and Technology Development Journal 23, no. 4 (December 6, 2020): 771–80. http://dx.doi.org/10.32508/stdj.v23i4.2396.

Full text
Abstract:
This study describes an effective analytic methodology for investigating the aerodynamic performance of an H-type vertical axis wind turbine (H-VAWT). An in-house code based on double multiple stream tube (DMST) theory coupled with dynamic stall and wake corrections is implemented to estimate the power coefficient. Design optimization of the airfoil shape is conducted to study the influences of dynamic stall and turbulent wakes. The airfoil shape is parameterized and investigated using the Class/Shape function transformation method. The airfoil study shows that the upper curve tends to be less convex than the lower curve in order to extract more energy from the wind upstream and generate less blade drag downstream. The optimization results show that the power coefficient increases by 6.5% with the new airfoil shape.
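To make the stream-tube idea concrete, the heavily simplified sketch below iterates the induction factor for a single upwind stream tube until the momentum-theory thrust 4a(1−a) balances a blade-element estimate of the streamwise force. The loading model (thin-airfoil lift, constant drag) and the force projection are placeholders, and the dynamic stall and wake corrections that distinguish the paper's in-house code are not included.

```python
# Heavily simplified single-stream-tube sketch in the spirit of DMST.
# Aerodynamic loading and force projection are placeholders; dynamic stall
# and wake corrections from the paper are NOT included.
import numpy as np

def upwind_induction(theta, tsr, sigma, relax=0.3, n_iter=500):
    """Iterate the axial induction factor a for the tube at azimuth theta.
    tsr: tip-speed ratio omega*R/V_inf, sigma: solidity N*c/R."""
    a = 0.0
    for _ in range(n_iter):
        v = 1.0 - a                               # local / freestream velocity
        w_t = tsr + v * np.cos(theta)             # tangential relative velocity
        w_n = v * np.sin(theta)                   # normal relative velocity
        w2 = w_t**2 + w_n**2                      # squared relative speed
        alpha = np.arctan2(w_n, w_t)              # local angle of attack
        cl, cd = 2*np.pi*np.sin(alpha), 0.01      # placeholder polars (no stall)
        cn = cl*np.cos(alpha) + cd*np.sin(alpha)  # normal force coefficient
        ct = cl*np.sin(alpha) - cd*np.cos(alpha)  # tangential force coefficient
        # Placeholder blade-element estimate of the tube's thrust coefficient.
        c_thrust = (sigma / (2*np.pi) * w2 * (cn*np.sin(theta) - ct*np.cos(theta))
                    / max(abs(np.sin(theta)), 1e-3))
        # Momentum theory: c_thrust = 4*a*(1-a)  ->  relaxed update of a.
        a_new = 0.5 * (1.0 - np.sqrt(max(1.0 - c_thrust, 0.0)))
        a = (1 - relax) * a + relax * a_new
    return a

print(upwind_induction(theta=np.radians(90), tsr=3.5, sigma=0.1))
```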
APA, Harvard, Vancouver, ISO, and other styles
48

Atamturktur, Sez, and Ismail Farajpour. "Resource allocation for code development in partitioned models." Engineering Computations 32, no. 7 (October 5, 2015): 1981–2004. http://dx.doi.org/10.1108/ec-05-2014-0111.

Full text
Abstract:
Purpose – Physical phenomena interact with each other in ways such that one cannot be analyzed without considering the others. To account for such interactions between multiple phenomena, partitioning has become a widely implemented computational approach. Partitioned analysis involves the exchange of inputs and outputs from constituent models (partitions) via iterative coupling operations, through which the individually developed constituent models are allowed to affect each other’s inputs and outputs. Partitioning, whether multi-scale or multi-physics in nature, is a powerful technique that can yield coupled models able to predict the behavior of a system more complex than the individual constituents themselves. The paper aims to discuss these issues. Design/methodology/approach – Although partitioned analysis has been a key mechanism for developing more realistic predictive models over the last decade, its iterative coupling operations may lead to the propagation and accumulation of uncertainties and errors that, if unaccounted for, can severely degrade the coupled model predictions. This problem can be alleviated by reducing uncertainties and errors in individual constituent models through further code development. However, finite resources may limit code development efforts to just a portion of the possible constituents, making it necessary to prioritize constituent model development for efficient use of resources. Thus, the authors propose an approach, along with its associated metric, to rank constituents by tracing uncertainties and errors in coupled model predictions back to uncertainties and errors in constituent model predictions. Findings – The proposed approach evaluates the deficiency (relative degree of imprecision and inaccuracy), importance (relative sensitivity), and cost of further code development for each constituent model, and combines these three factors in a quantitative prioritization metric. The benefits of the proposed metric are demonstrated on a structural portal frame using an optimization-based uncertainty inference and coupling approach. Originality/value – This study proposes an approach and its corresponding metric to prioritize the improvement of constituents by quantifying the uncertainties, bias contributions, sensitivities, and costs of the constituent models.
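The abstract does not spell out the authors' actual metric, but one plausible way to combine the three factors it names (deficiency, importance, cost) is sketched below; the constituent names and numbers are invented for illustration.

```python
# Hypothetical illustration (NOT the authors' formula): one way to combine
# deficiency, importance, and cost into a prioritization score for
# constituent code development.
constituents = {
    # name: (deficiency, importance/sensitivity, development cost) -- made-up
    "soil_model":       (0.8, 0.6, 2.0),
    "frame_model":      (0.4, 0.9, 1.0),
    "connection_model": (0.6, 0.3, 0.5),
}

def priority(deficiency, importance, cost):
    """Higher score = better candidate for further code development:
    imprecise/inaccurate, influential, and cheap to improve."""
    return deficiency * importance / cost

ranked = sorted(constituents.items(),
                key=lambda kv: priority(*kv[1]), reverse=True)
for name, (d, i, c) in ranked:
    print(f"{name}: score = {priority(d, i, c):.2f}")
```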
APA, Harvard, Vancouver, ISO, and other styles
49

Sanvito, Andrea, Vincenzo Dossena, and Giacomo Persico. "Formulation, Validation, and Application of a Novel 3D BEM Tool for Vertical Axis Wind Turbines of General Shape and Size." Applied Sciences 11, no. 13 (June 24, 2021): 5874. http://dx.doi.org/10.3390/app11135874.

Full text
Abstract:
Low-order models based on Blade Element Momentum (BEM) theory exhibit modeling issues in the performance prediction of Vertical Axis Wind Turbines (VAWTs) compared to Computational Fluid Dynamics, despite the widespread engineering use of such methods. The present study shows that the capability of BEM codes applied to VAWTs can be greatly improved by implementing a novel three-dimensional set of high-order corrections, and demonstrates this by comparing the BEM predictions against wind-tunnel experiments conducted on three small-scale VAWT models featuring different rotor designs (H-shaped and Troposkein), blade profiles (NACA0021 and DU-06-W200), and Reynolds numbers (from 0.8×10⁵ to 2.5×10⁵). Though based on the conventional Double Multiple Stream Tube (DMST) model, the in-house BEM code presented here incorporates several two-dimensional and three-dimensional corrections, including accurate extended polar data, flow curvature, dynamic stall, a spanwise-distributed formulation of the tip losses, a fully 3D approach to the modeling of rotors of general shape (such as, but not only, the Troposkein one), and the passive effects of supporting struts and pole. The detailed comparison with experimental data for the same models, tested in the large-scale wind tunnel of the Politecnico di Milano, suggests very good predictive capability of the code in terms of power exchange, torque coefficient, and loads, on both a time-mean and a time-resolved basis. The formulation of the code allows the usual spanwise non-uniformity of the incoming wind and the effects of skew to be included in a straightforward way, thus allowing the turbine operation to be predicted in a realistic open field in the presence of the environmental boundary layer. A systematic study of VAWT operation in multiple environments, such as coastal regions or off-shore sites, highlighting the sensitivity of VAWT performance to blade profile selection, rotor shape and size, wind shear, and rotor tilt, concludes the paper.
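As one example of the kind of correction listed above, the snippet below implements the classical Prandtl-type tip-loss factor often used in BEM codes; whether the paper's spanwise-distributed tip-loss formulation takes exactly this form is not stated in the abstract, so treat this as a generic illustration.

```python
# Classical Prandtl-type tip-loss factor, a generic illustration of one of
# the corrections a BEM/DMST code may apply near the blade ends.
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """Tip-loss factor F in [0, 1] at radius (or spanwise station) r.
    R: blade tip radius/half-span, B: number of blades, phi: inflow angle [rad]."""
    f = (B / 2.0) * (R - r) / (r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

# F drops towards 0 near the tip, de-loading the outermost blade sections.
for r in (0.5, 0.8, 0.95, 0.99):
    print(r, round(prandtl_tip_loss(r, R=1.0, B=3, phi=np.radians(10)), 3))
```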
APA, Harvard, Vancouver, ISO, and other styles
50

Anderson, Tyler Kimball, and Almeida Jacqueline Toribio. "Attitudes towards lexical borrowing and intra-sentential code-switching among Spanish-English bilinguals." Spanish in Context 4, no. 2 (December 6, 2007): 217–40. http://dx.doi.org/10.1075/sic.4.2.05and.

Full text
Abstract:
The present study seeks to evaluate bilinguals’ attitudes towards the contact forms that are manifested in the speech of Spanish-English bilinguals in the United States, and the factors that contribute to this linguistic assessment. Towards that end, bilinguals of diverse proficiencies are presented with five versions of the fairytale Little Red Riding Hood/La Caperucita Roja: a normative Spanish text, two Spanish texts that contrast in the type of English lexical insertions made, and two code-switched texts, differentiated by type of intra-sentential alternation represented. Multiple measures are used to evaluate participants’ attitudes, including scalar judgments on personality characteristics of the authors of the texts. Data from fifty-three participants unveil a continuum of preferences that largely confirms the hypotheses posited: Spanish-English bilinguals evaluate single-noun insertions more positively than code-switching and report more fine-grained distinctions — differentiating specific versus core noun insertions and felicitous versus infelicitous code-switching — as commensurate with social and linguistic factors, such as language heritage and linguistic competence.
APA, Harvard, Vancouver, ISO, and other styles