Academic literature on the topic 'Low-Order Code'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Low-Order Code.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Low-Order Code"

1

Barletta, Luca, and Arnaldo Spalvieri. "Design of Short, High-Rate DVB-S2-Like Semi-Regular LDPC Codes." Research Letters in Communications 2008 (2008): 1–4. http://dx.doi.org/10.1155/2008/324503.

Full text
Abstract:
This work focuses on high-rate, moderate-length low-density parity-check codes. High-rate codes make it possible to maintain good quality in the preliminary decisions used for carrier recovery, while a moderate code length keeps the latency low. The interleaver of the LDPC matrix that we consider is inspired by the DVB-S2 standard one. A novel approach for avoiding short cycles is analyzed, and a modified BP decoding algorithm is applied to deal with longer cycles. Simulations and results for the AWGN channel are presented, both for BPSK signalling and for coded modulation based on set partitioning.
APA, Harvard, Vancouver, ISO, and other styles
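The short-cycle problem mentioned in this abstract can be made concrete with a small sketch (ours, not the paper's construction): in a binary parity-check matrix H, a length-4 cycle in the Tanner graph exists exactly when two rows share ones in at least two columns.

```python
def has_length4_cycle(H):
    """Return True if the Tanner graph of binary matrix H contains a 4-cycle.

    A 4-cycle exists iff some pair of check rows shares ones in >= 2 columns.
    """
    m = len(H)
    supports = [set(j for j, v in enumerate(row) if v) for row in H]
    for a in range(m):
        for b in range(a + 1, m):
            if len(supports[a] & supports[b]) >= 2:
                return True
    return False

# Rows 0 and 1 both have ones in columns 0 and 1 -> a 4-cycle exists.
H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]

# No pair of rows shares more than one column -> no 4-cycle.
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]
```

Checks of this kind are a common first step when designing LDPC interleavers meant to avoid short cycles, since 4-cycles degrade belief-propagation decoding.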
2

Văduva, Bogdan, and Honoriu Vălean. "Designing a Low-Code CRUD framework." Carpathian Journal of Electronic and Computer Engineering 14, no. 1 (September 1, 2021): 11–19. http://dx.doi.org/10.2478/cjece-2021-0003.

Full text
Abstract:
Nowadays, programmers write source code for inserting, editing, and deleting records of a relational table. The majority of commercial relational databases include a specific management tool that offers such possibilities, and most database programmers take this ability for granted. When it comes to real-life applications, programmers use the Object-Oriented (OO) paradigm to build user-friendly windows/screens/forms for database operations. The current work shows a different approach using a Low-code CRUD (Create, Read, Update, Delete) framework. Views and guidelines on how to design a Low-code CRUD framework are detailed. The 'Low-code' label reflects the fact that the new framework will provide the ability to use less code to build fast and efficient complex applications. It will be up to the reader to envision a specific framework.
APA, Harvard, Vancouver, ISO, and other styles
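As a rough illustration of the kind of generic CRUD layer the abstract envisions (the class name and schema below are hypothetical, not from the paper), a single table wrapper can cover all four operations:

```python
import sqlite3

class CrudTable:
    """Generic create/read/update/delete wrapper around one relational table.

    Illustrative sketch only: table/column names are interpolated directly,
    so this is not hardened against untrusted input.
    """
    def __init__(self, conn, table, columns):
        self.conn, self.table = conn, table
        cols = ", ".join(f"{c} TEXT" for c in columns)
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER PRIMARY KEY, {cols})")

    def create(self, **values):
        keys = ", ".join(values)
        marks = ", ".join("?" for _ in values)
        cur = self.conn.execute(
            f"INSERT INTO {self.table} ({keys}) VALUES ({marks})",
            tuple(values.values()))
        return cur.lastrowid

    def read(self, row_id):
        cur = self.conn.execute(
            f"SELECT * FROM {self.table} WHERE id=?", (row_id,))
        return cur.fetchone()

    def update(self, row_id, **values):
        assign = ", ".join(f"{k}=?" for k in values)
        self.conn.execute(
            f"UPDATE {self.table} SET {assign} WHERE id=?",
            (*values.values(), row_id))

    def delete(self, row_id):
        self.conn.execute(f"DELETE FROM {self.table} WHERE id=?", (row_id,))

conn = sqlite3.connect(":memory:")
people = CrudTable(conn, "people", ["name", "city"])
pid = people.create(name="Ada", city="London")
```

The point of such a layer, in the paper's terms, is that application code declares a table once and gets all four operations without writing per-table SQL.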
3

Schmidt, K. U. "On cosets of the generalized first-order Reed-Muller code with low PMEPR." IEEE Transactions on Information Theory 52, no. 7 (July 2006): 3220–32. http://dx.doi.org/10.1109/tit.2006.876252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shridhar, Shreyas. "Analysis of Low Code-No Code Development Platforms in comparison with Traditional Development Methodologies." International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (December 31, 2021): 508–13. http://dx.doi.org/10.22214/ijraset.2021.39328.

Full text
Abstract:
This paper provides an overview of low-code/no-code development platforms in comparison with traditional development methodologies and examines the benefits and limitations of each. For several decades, businesses have had multiple options when they needed new information systems: they could develop a new system using in-house developers, or they could order a system from an external vendor. This offers a close fit to business requirements. Nowadays, however, there is a new alternative that is becoming increasingly prevalent. Low-code/no-code (LC/NC) applications can cater to business requirements efficiently, can be implemented almost instantly, and cost much less than systems developed in-house. Few, if any, programming skills are required. Keywords: Traditional development, No-code development, Low-code development, Low-code/No-code development, Software development life cycle (SDLC)
APA, Harvard, Vancouver, ISO, and other styles
5

Belhadj, Salima, Abdelmounaim Moulay Lakhdar, and Ridha Ilyas Bendjillali. "Performance comparison of channel coding schemes for 5G massive machine type communications." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 902. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp902-908.

Full text
Abstract:
Channel coding for the fifth-generation (5G) mobile communication is currently facing new challenges, as it needs to support diverse emerging applications and scenarios. Massive machine-type communication (mMTC) constitutes one of the main usage scenarios in 5G systems, promising to provide low-data-rate services to a large number of low-power, low-complexity devices. Research on efficient coding schemes for this use case is still ongoing, and no decision has been made yet. This paper therefore compares the performance of different coding schemes, namely tail-biting convolutional codes (TBCC), low-density parity-check (LDPC) codes, Turbo codes, and Polar codes, in order to select the appropriate channel coding technique for the 5G-mMTC scenario. The considered codes are evaluated in terms of bit error rate (BER) and block error rate (BLER) for short information block lengths (K ≤ 256). We further investigate their algorithmic complexity in terms of the number of basic operations. The simulation results indicate that the Polar code with a CRC-aided successive cancellation list decoder performs better than the other coding schemes for the 5G-mMTC scenario.
APA, Harvard, Vancouver, ISO, and other styles
6

Ganmor, E., R. Segev, and E. Schneidman. "Sparse low-order interaction network underlies a highly correlated and learnable neural population code." Proceedings of the National Academy of Sciences 108, no. 23 (May 20, 2011): 9679–84. http://dx.doi.org/10.1073/pnas.1019641108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ke, Jing, Xiaochun Lu, Xue Wang, Xiaofei Chen, and Sheng Tang. "Concatenated Coding for GNSS Signals in Urban Environments." Applied Sciences 10, no. 18 (September 14, 2020): 6397. http://dx.doi.org/10.3390/app10186397.

Full text
Abstract:
This work investigated concatenated coding schemes for Global Navigation Satellite System (GNSS) signals in order to increase their error correction capability in urban environments. In particular, a serial concatenated code that combines an outer Reed–Solomon (RS) code with an inner low-density parity-check (LDPC) code was designed, and the performance was investigated over the land mobile satellite (LMS) channel for characterizing multipath and shadow fading in urban environments. The performance of the proposed concatenated coding scheme was compared to that of a B-CNAV1 message, in which two interleaved 64-ary LDPC codes were employed. The simulation results demonstrate that the proposed concatenated code can obtain a similar error correction performance to the two interleaved 64-ary LDPC codes in both the additive white Gaussian noise (AWGN) and LMS channels at a lower complexity level.
APA, Harvard, Vancouver, ISO, and other styles
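The serial-concatenation structure described in this abstract can be sketched with deliberately toy component codes (a single XOR parity byte standing in for the outer Reed-Solomon code, and a 3x repetition code standing in for the inner LDPC code); the encode-inner/decode-inner/check-outer pipeline, not the codes themselves, is the point:

```python
def outer_encode(data):
    """Append one XOR parity byte (toy stand-in for the outer RS code)."""
    parity = 0
    for b in data:
        parity ^= b
    return data + [parity]

def outer_check(codeword):
    """True if the XOR parity is consistent."""
    parity = 0
    for b in codeword:
        parity ^= b
    return parity == 0

def inner_encode(codeword):
    """Repeat each byte three times (toy stand-in for the inner LDPC code)."""
    return [b for b in codeword for _ in range(3)]

def inner_decode(received):
    """Majority vote over each group of three repeats."""
    out = []
    for i in range(0, len(received), 3):
        trio = received[i:i + 3]
        out.append(max(set(trio), key=trio.count))
    return out

msg = [7, 42, 99]
tx = inner_encode(outer_encode(msg))   # outer then inner, as in serial concatenation
tx[4] ^= 0xFF                          # corrupt one repeat in transit
rx = inner_decode(tx)                  # inner decoder repairs it; outer check passes
```

In the paper's actual scheme the inner LDPC decoder handles channel noise and the outer RS code cleans up residual errors; the toy above only demonstrates the layering.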
8

Haah, Jeongwan, and Matthew B. Hastings. "Codes and Protocols for Distilling T, controlled-S, and Toffoli Gates." Quantum 2 (June 7, 2018): 71. http://dx.doi.org/10.22331/q-2018-06-07-71.

Full text
Abstract:
We present several different codes and protocols to distill T, controlled-S, and Toffoli (or CCZ) gates. One construction is based on codes that generalize the triorthogonal codes, allowing any of these gates to be induced at the logical level by transversal T. We present a randomized construction of generalized triorthogonal codes obtaining an asymptotic distillation efficiency γ→1. We also present a Reed-Muller-based construction of these codes which obtains a worse γ but performs well at small sizes. Additionally, we present protocols based on checking the stabilizers of CCZ magic states at the logical level by transversal gates applied to codes; these protocols generalize previously known protocols. Several examples, including a Reed-Muller code for T-to-Toffoli distillation, punctured Reed-Muller codes for T-gate distillation, and some of the check-based protocols, require a lower ratio of input gates to output gates than other known protocols at the given order of error correction for the given code size. In particular, we find a 512-T-gate to 10-Toffoli-gate code with distance 8, as well as triorthogonal codes with parameters [[887,137,5]], [[912,112,6]], [[937,87,7]] with very low prefactors in front of the leading-order error terms in those codes.
APA, Harvard, Vancouver, ISO, and other styles
9

Xue, Rui, Tong Wang, Yanbo Sun, and Huaiyu Tang. "Optimized Design for NB-LDPC-Coded High-Order CPM: Power and Iterative Efficiencies." Symmetry 12, no. 8 (August 13, 2020): 1353. http://dx.doi.org/10.3390/sym12081353.

Full text
Abstract:
In this paper, a non-binary low-density parity-check (NB-LDPC)-coded high-order continuous phase modulation (CPM) system is designed and optimized to improve power and iterative efficiencies. Firstly, the minimum squared normalized Euclidean distance and the 99% double-sided power bandwidth are introduced to design a competitive CPM, improving its power efficiency under a given code rate and spectral efficiency. Secondly, a three-step method based on extrinsic information transfer (EXIT) charts and entropy theory is used to design NB-LDPC codes, which reduces the convergence threshold by approximately 0.42 and 0.58 dB compared with the candidate schemes. Thirdly, an extrinsic information operation is proposed to address the positive-feedback issue in iterative detection and decoding, reducing the bit error rate (BER) by approximately 5×10−3. Finally, iteration optimization employing the EXIT chart and the mutual information between demodulation and decoding is performed to achieve a suitable tradeoff between communication reliability and iterative decoding delay. Simulation results show that the resulting scheme provides an approximately 3.95 dB coding gain compared to uncoded CPM and achieves approximately 0.5 and 0.7 dB advantages compared with the candidate schemes. The resulting NB-LDPC-coded high-order CPM for a given code rate and spectral efficiency converges earlier into the turbo cliff region than its competitors and significantly improves power and iterative efficiencies.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhao, Dan-feng, Hai Tian, and Rui Xue. "Adaptive Rate-Compatible Non-Binary LDPC Coding Scheme for the B5G Mobile System." Sensors 19, no. 5 (March 2, 2019): 1067. http://dx.doi.org/10.3390/s19051067.

Full text
Abstract:
This paper studies an adaptive coding scheme for enhanced transmission in B5G (beyond 5th generation) mobile systems. Unlike existing works, the authors develop a class of rate-compatible non-binary low-density parity-check (RC-NB-LDPC) codes that expresses the strong connection between algebra-based and graph-theoretic constructions. The constructed codes not only offer rate-compatible (RC) features but also possess a quasi-cyclic (QC) structure that facilitates encoder implementation. Further, to achieve a rate-adaptive allocation scheme, the authors propose using the K-means++ clustering algorithm to cluster different channel environments, considering the various factors that affect channel characteristics. Finally, to demonstrate the advantages of the adaptive coding scheme, the authors construct a coding scheme for image transmission. The numerical results demonstrate that the developed codes obtain better waterfall performance over a larger range of code rates, making them more suitable for data transmission, and that the adaptive coding transmission scheme obtains higher reconstructed image quality than a fixed-rate coding scheme. Moreover, when unequal error protection (UEP) is considered, the proposed scheme further improves the reconstructed image quality.
APA, Harvard, Vancouver, ISO, and other styles
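The K-means++-based rate allocation mentioned in this abstract can be sketched roughly as follows; the SNR values, cluster count, and rate table are illustrative assumptions, not the paper's:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny 1-D k-means with k-means++ seeding followed by Lloyd updates."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:                       # k-means++ seeding:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        total = sum(d2)                           # sample proportionally to
        r, acc = rng.random() * total, 0.0        # squared distance
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    for _ in range(iters):                        # Lloyd iterations
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

# Synthetic channel-quality observations (SNR in dB), three clear regimes.
snrs = [1.1, 0.9, 1.3, 5.2, 4.8, 5.0, 10.1, 9.8, 10.3]
centers = kmeans_1d(snrs, k=3)
# Map each channel regime to a code rate: worse channel -> lower rate.
rates = {c: r for c, r in zip(centers, [1/3, 1/2, 3/4])}
```

In the paper's scheme the clustering runs over richer channel features; here one scalar SNR per observation keeps the sketch readable.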

Dissertations / Theses on the topic "Low-Order Code"

1

Abd, El-Sallam Amar. "Low order channel estimation for CDMA systems." Thesis, Curtin University, 2005. http://hdl.handle.net/20.500.11937/2420.

Full text
Abstract:
New approaches and algorithms are developed for the identification and estimation of low-order models that represent multipath channel effects in Code Division Multiple Access (CDMA) communication systems. Based on these parsimonious channel models, low-complexity receivers such as RAKE receivers are considered to exploit these propagation effects and enhance system performance. We consider the scenario where the multipath is frequency-selective and slowly fading, and where the channel components, including delays and attenuation coefficients, are assumed to be constant over one or a few signalling intervals. We model the channel as a long FIR-like filter (or tapped-delay-line filter) with the number of taps related to the ratio between the channel delay spread and the chip duration. Due to the high data rate of new CDMA systems, the channel length in terms of the chip duration will be very large. With classical channel estimation techniques this results in poor estimates of many of the channel parameters, most of which are zero, reducing system performance. Unlike classical techniques, which directly estimate the channel response given the number of taps or an estimate of the channel length, the techniques proposed in this work first identify the significant multipath parameters using model selection techniques and then estimate the identified parameters. Statistical tests are proposed to determine whether or not each individual parameter is significant. A low-complexity RAKE receiver is then considered based on estimates of the identified parameters only. The level of significance with which we make this assertion is controlled using statistical tests such as multiple hypothesis tests.
Frequency- and time-domain approaches and model selection techniques are proposed to achieve these objectives. The frequency-domain approach for parsimonious channel estimation results in an efficient implementation of RAKE receivers in DS-CDMA systems. In this approach, we consider a training-based strategy and estimate the channel delays and attenuation using the averaged periodogram and modified time-delay estimation techniques. We then use model selection techniques such as the sphericity test and multiple hypothesis tests based on F-statistics to identify the model order and select the significant channel paths. Simulations show that, for a pre-defined level of significance, the proposed technique correctly identifies the significant channel parameters, and the parsimonious RAKE receiver shows improved statistical as well as computational performance over classical methods. The time-domain approach is based on the bootstrap, which is appropriate when the distribution of the test statistics required by the multiple hypothesis tests is unknown. In this approach we also use short training data and model the channel response as an FIR filter of unknown length. Model parameters are then estimated using low-complexity algorithms in the time domain. Based on these estimates, bootstrap-based multiple hypothesis tests are applied to identify the non-zero coefficients of the FIR filter. Simulation results demonstrate the power of this technique for RAKE receivers in unknown noise environments. Finally, we propose adaptive blind channel estimation algorithms for CDMA systems. Using only the spreading code of the user of interest and the received data sequence, four different adaptive blind estimation algorithms are proposed to estimate the impulse response of frequency-selective and frequency-non-selective fading channels. The idea is based on minimum-variance receiver techniques.
Tracking of a frequency-selective, time-varying fading channel is also considered. A blind hierarchical MDL model selection method is also proposed to select the non-zero parameters of the channel response. Simulation results show that the proposed algorithms perform better than previously proposed algorithms: they have lower complexity and a faster convergence rate. The proposed algorithms can also be applied to the design of adaptive blind channel estimation based RAKE receivers.
APA, Harvard, Vancouver, ISO, and other styles
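The thesis's core idea, keeping only the significant taps of a long, mostly-zero channel estimate, can be caricatured with a simple threshold rule; this stands in for the sphericity and multiple-hypothesis tests the thesis actually uses, and all numbers below are synthetic:

```python
import random

random.seed(1)

# A sparse 20-tap channel: only three paths are real, the rest are zero.
true_taps = [0.0] * 20
true_taps[2], true_taps[7], true_taps[15] = 1.0, 0.6, 0.3

# Noisy tap estimates (stand-in for a least-squares channel estimate).
sigma = 0.05
estimates = [h + random.gauss(0.0, sigma) for h in true_taps]

# Keep taps whose magnitude exceeds a 3-sigma threshold; declare the rest
# zero. A parsimonious RAKE receiver would then use only these fingers.
threshold = 3 * sigma
significant = [i for i, h in enumerate(estimates) if abs(h) > threshold]
```

The actual thesis controls the family-wise significance level of this selection step with formal multiple-hypothesis tests rather than a fixed magnitude cutoff.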
2

Abd, El-Sallam Amar. "Low order channel estimation for CDMA systems." Curtin University of Technology, Department of Electrical and Computer Engineering, 2005. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=16299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Saad, Ibrahim Mohamed Mohamed. "Extracting Parallelism from Legacy Sequential Code Using Transactional Memory." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71861.

Full text
Abstract:
Increasing the number of processors has become mainstream in modern chip design. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software that would require immense effort and cost to rewrite and re-engineer for parallelism. In the past decades, there has been growing interest in automatic parallelization, both to relieve programmers from the painful and error-prone manual parallelization process and to cope with the architectural trend toward multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; the use of optimistic execution versus conservative decisions; online, offline, or hybrid operation; and the level of source-code exposure. Transactional Memory (TM) has emerged as a powerful concurrency control abstraction: TM simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming a sequential application into a parallel one. The design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Correctness of the program is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses.
Our experiments show that HydraVM achieves speedups between 2× and 5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses. This scheme makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks, including the RSTM micro-benchmarks, STAMP, and PARSEC. Lerna showed strong results, with an average 2.7× (and up to 18×) speedup over the original (sequential) code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing this ordering imposes scalability constraints at large core counts. In this dissertation, we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can safely forward some changes. This approach eliminates some false conflicts and increases the concurrency level of the parallel application. The thesis proposes a set of commit-order algorithms that follow this approach. Interestingly, using the proposed commit-order algorithms, the peak gain over the sequential non-instrumented execution is 10× in the RSTM micro-benchmarks and 16.5× in STAMP. Another main contribution is enhancing the concurrency and performance of TM in general, and its usage for parallelization in particular, by extending the TM primitives.
The extended TM primitives extract embedded low-level application semantics without affecting the TM abstraction. Furthermore, since the proposed extensions capture common code patterns, they can be handled automatically during compilation; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications, including micro-benchmarks and STAMP. Our final contribution is supporting the commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented inside the hardware. Given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to a 20% speedup.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
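The ordered-commit idea discussed in this abstract can be illustrated with a toy example (ours, not Lerna or HydraVM code): speculative work proceeds in parallel across threads, but each "transaction" commits its result only when its turn in program order arrives.

```python
import threading

results, next_commit = [], 0
cond = threading.Condition()

def worker(order, value):
    """Do speculative work in parallel, then commit strictly in order."""
    global next_commit
    computed = value * value          # speculative work, runs concurrently
    with cond:
        while next_commit != order:   # wait until it is our turn to commit
            cond.wait()
        results.append(computed)      # commit in sequential program order
        next_commit += 1
        cond.notify_all()

threads = [threading.Thread(target=worker, args=(i, v))
           for i, v in enumerate([3, 1, 4, 1, 5])]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results is [9, 1, 16, 1, 25] regardless of thread scheduling.
```

The dissertation's contribution is precisely to relax this strict serial commit point (via cooperative change-forwarding) while preserving the same observable order; the sketch shows only the baseline being relaxed.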
4

Zhang, Yequn. "Advanced Coding Techniques For Fiber-Optic Communications And Quantum Key Distribution." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/555940.

Full text
Abstract:
Coding is an essential technology for efficient fiber-optic communications and secure quantum communications. In particular, low-density parity-check (LDPC) coding is favoured due to its strong error correction capability and high-throughput implementation feasibility. In fiber-optic communications, it has been realized that advanced high-order modulation formats and soft-decision forward error correction (FEC) such as LDPC codes are the key technologies for the next-generation high-speed optical communications. Therefore, energy-efficient LDPC coding in combination with advanced modulation formats is an important topic that needs to be studied for fiber-optic communications. In secure quantum communications, large-alphabet quantum key distribution (QKD) is becoming attractive recently due to its potential in improving the efficiency of key exchange. To recover the carried information bits, efficient information reconciliation is desirable, for which the use of LDPC coding is essential. In this dissertation, we first explore different efficient LDPC coding schemes for optical transmission of polarization-division multiplexed quadrature-amplitude modulation (QAM) signals. We show that high energy efficiency can be achieved without incurring extra overhead and complexity. We then study the transmission performance of LDPC-coded turbo equalization for QAM signals in a realistic fiber link as well as that of pragmatic turbo equalizers. Further, leveraging the polarization freedom of light, we expand the signal constellation into a four-dimensional (4D) space and evaluate the performance of LDPC-coded 4D signals in terms of transmission reach. Lastly, we study the security of a proposed weak-coherent-state large-alphabet QKD protocol and investigate the information reconciliation efficiency based on LDPC coding.
APA, Harvard, Vancouver, ISO, and other styles
5

Matteo, Poggiali. "Development of a new combustor liners thermal design procedure through low order codes and uncertainty quantification tools." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238634.

Full text
Abstract:
Turbomachinery plays an important role in propulsion and in the heavy-duty industry, and improving the efficiency and reliability of gas turbines continues to be an important driver in the development of modern engines and power generation. There are two straightforward and effective methods for improving the performance of a gas turbine engine: increasing the engine pressure ratio to raise the thermal efficiency, and increasing the outlet temperature of the combustor to raise the specific thrust. Gas turbine combustor working conditions are therefore moving towards higher temperature rise and higher heat capacity, and the design requirements for a combustor become stricter: a wider working range, a shorter length, and a smaller spread of outlet temperature. At the same time, the combustor is required to have a longer life and lower pollutant emissions. In this scenario, a fundamental role is played by the cooling system. The definition of the most appropriate cooling scheme is one of the most challenging tasks in combustor design, since it directly determines the component's life. During the design and subsequent optimization of a combustor cooling system, the designer must consider several uncertainties related to manufacturing, geometry, and operating conditions. These uncertainties can strongly affect system performance, so the design becomes a matter of optimizing the whole system. This requires an accurate assessment of trade-offs to meet all requirements. The design choices made in the first phases influence the subsequent development, and it is essential to have a tool that is as efficient and flexible as possible to rely on. During the initial stages, 1-D codes are still widely used in industrial practice, and a low-order approach is preferred over high-fidelity simulations.
These tools are important for designers because they provide a good understanding of the problem in relatively short times and at low overall cost. Although these analyses have a good predictive level, they are often used when the input quantities that characterize the problem are only roughly known. These gaps lead to the inclusion of uncertainties within the code, which propagate and eventually influence the solution. The common final objective is to optimize the various components to find a configuration in which the machine is insensitive to the uncertainties that may afflict it, thus arriving at a robust design. The aim of this thesis is the development of a numerical procedure for the preliminary thermal design of combustor liners (Therm-1D/Dakota). This procedure is based on the coupling of a one-dimensional tool (Therm-1D), developed by the DIEF of the University of Florence, and a software package for uncertainty quantification analyses (Dakota). This has allowed the development of an innovative, faster, and more reliable procedure for the preliminary design and optimization of combustor cooling systems that is able to estimate the uncertainties affecting the results of these numerical simulations. In this way, the output quantities are made as insensitive as possible to input uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
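The Monte Carlo flavour of the Therm-1D/Dakota coupling can be sketched with a toy one-dimensional wall-temperature model; the model, parameter names, and input distributions below are illustrative assumptions only, not the thesis's tools or data.

```python
import random
import statistics

def liner_wall_temp(t_gas, t_cool, h_gas, h_cool):
    """Steady 1-D energy balance: wall temperature between two convective
    boundaries, weighted by the heat transfer coefficients."""
    return (h_gas * t_gas + h_cool * t_cool) / (h_gas + h_cool)

random.seed(42)
samples = []
for _ in range(5000):
    # Sample uncertain inputs from assumed normal distributions.
    t_gas = random.gauss(1800.0, 50.0)    # hot-gas temperature [K]
    t_cool = random.gauss(700.0, 20.0)    # coolant temperature [K]
    h_gas = random.gauss(1200.0, 100.0)   # gas-side HTC [W/m^2 K]
    h_cool = random.gauss(2500.0, 150.0)  # coolant-side HTC [W/m^2 K]
    samples.append(liner_wall_temp(t_gas, t_cool, h_gas, h_cool))

# Output statistics: the quantities a UQ study would report on liner life.
mean_t = statistics.fmean(samples)
std_t = statistics.stdev(samples)
```

Tools like Dakota automate this propagation step (and more efficient sampling schemes) around a black-box solver such as Therm-1D; the loop above is the idea in its simplest form.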

Books on the topic "Low-Order Code"

1

King, Tim. Programming the M68000. 2nd ed. Wokingham: Benjamin/Cummings, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

King, Tim. Programming the M68000. 2nd ed. Menlo Park, Calif: Addison-Wesley, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mann, Peter. Vector Calculus. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822370.003.0034.

Full text
Abstract:
This chapter gives a non-technical overview of differential equations from across mathematical physics, with particular attention to those used in the book. It is a common trend in physics and nature, or perhaps just the way numbers and calculus come together, that most theories describe the evolution of things with a differential equation of low order. This chapter is useful for those with no prior knowledge of differential equations and explains the concepts required for a basic exposition of classical mechanics. It discusses separable differential equations, boundary conditions and initial value problems, as well as particular solutions, complete solutions, series solutions and general solutions. It also discusses the Cauchy–Lipschitz theorem, flow and the Fourier method, as well as first integrals, complete integrals and integral curves.
APA, Harvard, Vancouver, ISO, and other styles
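The separable equations and initial value problems the chapter surveys can be made concrete with one standard worked example (chosen here for illustration; it does not appear in the abstract):

```latex
% Separable, first-order initial value problem:  dy/dx = -k y,  y(0) = y_0
\frac{\mathrm{d}y}{\mathrm{d}x} = -k\,y
\quad\Longrightarrow\quad
\int \frac{\mathrm{d}y}{y} = -\int k\,\mathrm{d}x
\quad\Longrightarrow\quad
\ln|y| = -kx + C
\quad\Longrightarrow\quad
y(x) = y_0\,e^{-kx}
```

Separating the variables, integrating each side, and fixing the constant with the initial condition is exactly the routine the chapter describes for low-order equations.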
4

van der Wal, Jenneke. A Featural Typology of Bantu Agreement. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198844280.001.0001.

Full text
Abstract:
The Bantu languages are in some sense remarkably uniform (subject-verb-object (SVO) basic word order, noun classes, verbal morphology), but this extensive language family also shows a wealth of morphosyntactic variation. Two core areas in which such variation is attested are subject and object agreement. The book explores the variation in Bantu subject and object marking on the basis of data from 75 Bantu languages, discovering striking patterns (the Relation between Asymmetry and Non-Doubling Object Marking (RANDOM), and the Asymmetry Wants Single Object Marking (AWSOM) correlation), and providing a novel syntactic analysis. This analysis takes into account not just phi agreement, but also nominal licensing and information structure. A Person feature, associated with animacy, definiteness, or givenness, is shown to be responsible for differential object agreement, while at the same time accounting for doubling vs. non-doubling object marking—a hybrid solution to an age-old debate in Bantu comparative morphosyntax. It is furthermore proposed that low functional heads can Case-license flexibly downwards or upwards, depending on the relative topicality of the two arguments involved. This accounts for the properties of symmetric object marking in ditransitives (for Appl), and subject inversion constructions (for v). By keeping Agree constant and systematically determining which featural parameters are responsible for the attested variation, the proposed analysis argues for an emergentist view of features and parameters (following Biberauer 2018, 2019), and against both Strong Uniformity and Strong Modularity.
APA, Harvard, Vancouver, ISO, and other styles
5

Essential facts about Covid-19: the disease, the responses, and an uncertain future. For South African learners, teachers, and the general public. Academy of Science of South Africa (ASSAf), 2021. http://dx.doi.org/10.17159/assaf.2021/0072.

Full text
Abstract:
The first cases of a new coronavirus (SARS-CoV-2) were identified toward the end of 2019 in Wuhan, China. Over the following months, this virus spread to every part of the world. By now no country has been spared the devastation of lives lost to the disease (Covid-19) and the economic and social impacts of the responses adopted to mitigate the impact of the virus. Our lives in South Africa have been turned upside down as we try to make the best of this bad situation. The 2020 school year was disrupted by closure and then a phased reopening, as stipulated by the Department of Education. This booklet is a collective effort by academics who are Members of the Academy of Science of South Africa (ASSAf) and other invited scholars to help you appreciate some of the basic scientific facts that you need to know in order to understand the present crisis and the various options available to respond to it. We emphasise that the threat of infectious diseases is not an entirely new phenomenon that has sprung onto the stage out of nowhere. Infectious diseases and pandemics have been with us for centuries, in fact much longer. Scientists have warned us for years of the need to prepare for the next pandemic. Progress in medicine in the course of the 20th century has been formidable. Childhood mortality has greatly decreased almost everywhere in the world, thanks mainly, but not only, to the many vaccines that have been developed. Effective drugs now exist for many deadly diseases for which there were once no cures. For many of us, this progress has generated a false sense of security. It has caused us to believe that the likes of the 1918 ‘Spanish flu’ pandemic, which caused some 50 million deaths around the world within a span of a few months, could not be repeated in some form in today’s modern world. The Covid-19 pandemic reminds us that as new cures for old diseases are discovered, new diseases come along for which we are unprepared.
And every hundred or so years one of these diseases wreaks havoc on the world and interferes severely with our usual ways of going about our lives. Today’s world has become increasingly interconnected and interdependent, through trade, migrations, and rapid air travel. This globalisation makes it easier for epidemics to spread, somewhat offsetting the power of modern medicine. In this booklet we have endeavoured to provide an historical perspective, and to enrich your knowledge with some of the basics of medicine, viruses, and epidemiology. Beyond the immediate Covid-19 crisis, South Africa faces a number of other major health challenges: highly unequal access to quality healthcare, widespread tuberculosis, HIV infection causing AIDS, a high prevalence of mental illness, and a low life expectancy, compared to what is possible with today’s medicine. It is essential that you, as young people, also learn about the nature of these new challenges, so that you may contribute to finding future solutions.
APA, Harvard, Vancouver, ISO, and other styles
6

Shengelia, Revaz. Modern Economics. Universal, Georgia, 2021. http://dx.doi.org/10.36962/rsme012021.

Full text
Abstract:
Economy and mankind are inextricably interlinked. Just as the economy, or the production of material wealth, is unimaginable without man, so human existence and development are impossible without the wealth created in the economy. In short, human resources remain both the goal and the means of the economy. People noticed long ago that it was the economy that created livelihoods, and that delays in their production led to catastrophic events such as hunger, poverty, civil wars, social upheavals, revolutions, moral degeneration, and more. Therefore, a special interest in understanding the regulatory framework of the functioning of the economy has existed in all historical epochs [A. Sisvadze. Economic theory. Part One. 2006y. p. 22]. The system of economic disciplines studies the economy or the economic activities of a society. All of them are based on a science which is currently called economic theory in the post-socialist space (the science of economics, the principles of economics, or modern economics), and in most countries of the world - predominantly in the Greek-Latin manner - economics. The title of the present book is also Modern Economics. Economics (economic theory) is the science that studies the efficient use of limited resources to produce and distribute goods and services in order to satisfy as much as possible the unlimited needs and demands of society. More simply, economics is the science of choice and of how society manages its limited resources. Moreover, it should be emphasized that economics (economic theory) studies only the distribution, exchange and consumption of economic wealth (food, beverages, clothing, housing, machine tools, computers, services, etc.), the production of which is possible and limited. Wealth that exists in unlimited supply is excluded: no economic relations are formed in the production and distribution of solar energy, air, and the like.

This book is the second complete, updated edition, addressing the challenges of the modern global economy in the context of the corona crisis and taking into account some of the priority directions of the country's development. Its purpose is to help students and interested readers gain a thorough knowledge of economics and show them how this knowledge can be applied pragmatically (professionally) in professional activities or in everyday life. To achieve this goal, this textbook, which consists of two parts and tests, discusses in simple and clear language issues such as: the essence of economics as a science, the reasons for its origin, its purpose, tasks, usefulness and functions; basic principles, problems and peculiarities of economics in different economic systems; needs and demand, the essence of economic resources, their types and limitations; the interaction, mobility, interchangeability and efficient use of economic resources; the essence and types of wealth; the essence, types and models of the economic system; the interaction of households and firms in the markets for resources and products; the market mechanism and its elements - demand, supply and price; demand and supply elasticity; production costs and the ways to reduce them; forms of the market - perfect and imperfect competition markets and their peculiarities; markets for production factors and factor incomes; the essence of macroeconomics and the causes and importance of its origin; the essence and calculation of key macroeconomic indicators (gross national product, gross domestic product, net national product, national income, etc.); macroeconomic stability and instability, unemployment, inflation and anti-inflationary policies; state regulation of the economy and economic policy; monetary and fiscal policy; income and standard of living; economic growth; the corona pandemic as a defect and effect of globalization; national economic problems and new opportunities for development in the conditions of the corona crisis; the socio-economic problems of moral obsolescence in digital technologies; education and creativity as the main way to overcome the economic crisis caused by the coronavirus; positive and negative effects of tourism in Georgia; the formation of the middle class as a contributing factor to the development of tourism in Georgia; corporate culture in Georgian travel companies, etc.

The axiomatic truth is that the economy is a union of people in constant interaction. Given that the behavior of the economy reflects the behavior of the people who make it up, after clarifying the essence of the economy we move on to an analysis of the four principles of individual decision-making. Furthermore, the book describes how people make independent decisions. The key to individual decision-making is that people have to choose from alternative options; that the value of any action is measured by the value of what must be given up to get something; that rational people make decisions based on a comparison of marginal costs and marginal returns (benefits); and that people respond to incentives. Afterwards, the need for human interaction is analyzed and substantiated. If a person is isolated, he will have to take care of his own food, clothes, shoes, his own house and so on. In the case of such a closed economy and universalization of labor, firstly, his productivity will be low and, secondly, he will be able to consume only what he produces. It is clear that human productivity will be higher and more profitable as a result of labor specialization and the opportunity to trade with others. Indeed, trade allows each person to specialize, to engage in the activities that are most successful, be it agriculture, sewing or construction, and to buy more diverse goods and services from others at a relatively lower price.

The key to such human interactions is that trade is mutually beneficial; that markets are usually a good means of coordination between people; and that the government can improve the results of market functioning if the market reveals weakness or if the results of market functioning are not fair. Moreover, the book also shows how the economy works as a whole. In particular, it is argued that productivity is a key determinant of living standards, that an increase in the money supply is a major source of inflation, and that one of the main impediments to avoiding inflation is the existence of a short-term trade-off between inflation and unemployment: a decrease in inflation causes a temporary rise in unemployment, and vice versa. A creative understanding of all the above issues will, we think, help the reader to develop thinking appropriate to a market economy and rational economic, commercial, and financial behavior, to be more competitive in the domestic and international labor markets, and thus to ensure both their own prosperity and the functioning of the country's economy. How he or she copes with these tasks is up to the individual reader. At the same time, we will receive all smart and useful advice with a sense of gratitude and will take it into account in further work. We would also like to thank the editor and reviewers of the book. Finally, many things are changing, so it is very important to realize that the XXI century has come: 1. the century of the new economy; 2. the age of knowledge; 3. the age of information - and economic activities are changing in terms of innovation. 1. Why is the 21st century the century of the new economy? Because in this period economic resources, especially non-reproducible, non-renewable ones (oil, natural gas, coal, etc.), are becoming increasingly limited. According to the World Energy Council, there are currently 43 years of gas and oil reserves left in the world (see "New Commersant" 2007, #2, p. 16). Under such conditions, sustainable growth of real gross domestic product (GDP) and maximum satisfaction of unlimited needs should be achieved not through the use of more land, labor and capital (extensification), but through more efficient use of available resources (intensification), i.e. an innovative economy. And economics, as was said, is the science of finding ways to use limited resources more effectively. At the same time, with sustainable growth and development of the economy, present needs must be met in a way that does not deprive future generations of the opportunity to meet their own needs. 2. Why is the 21st century the age of knowledge? Because in a modern economy it is not land (natural resources), labor and capital that are crucial, but knowledge. Modern production, its factors and products are not labor- and capital-intensive but science-intensive, knowledge-intensive. A good example of this is a Japanese enterprise (firm) where the production process goes on but people are almost invisible, and the result of such production (the Japanese product) is a miniature or a sample of how to get the maximum result at the lowest cost. 3. Why is the 21st century the age of information? Because the efficient functioning of the modern economy and the effective organization of the material and personal factors of production largely depend on the right governance decisions, and the right governance decision requires prompt and accurate information. Gone are the days when the main means of transport was a sailing ship, the main form of data processing was pencil and paper, and the main means of transmitting information was sending letters through a postman on horseback. With modern transport infrastructure (highways, railways, ships, regular domestic and international flights, oil and gas pipelines, etc.), the movement of goods, services and labor resources has been significantly accelerated, while through modern means of communication (mobile phone, internet, etc.) information spreads rapidly around the globe, which seems to have "shrunk" the world and made it a single large country. The authors of the book: Ushangi Samadashvili, Doctor of Economic Sciences, Associate Professor of Ivane Javakhishvili Tbilisi State University - Introduction, Chapters 1, 2, 3, 4, 5, 6, 9, 10, 11, 12, 15, 16, 17.1, 18, Tests; Revaz Shengelia, Doctor of Economics, Professor of Georgian Technical University - Chapters 7, 8, 13, 14, 17.2, 17.4; Zhuzhuna Tsiklauri, Doctor of Economics, Professor of Georgian Technical University - Chapters 13.6, 13.7, 17.2, 17.3, 18. We also thank the editor and reviewers of the book.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Low-Order Code"

1

Chaudhary, Hafiz Ahmad Awais, Ivan Guevara, Jobish John, Amandeep Singh, Tiziana Margaria, and Dirk Pesch. "Low-Code Internet of Things Application Development for Edge Analytics." In Internet of Things. IoT through a Multi-disciplinary Perspective, 293–312. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18872-5_17.

Full text
Abstract:
Internet of Things (IoT) applications combined with edge analytics are increasingly developed and deployed across a wide range of industries by engineers who are non-expert software developers. In order to enable them to build such IoT applications, we apply low-code technologies in this case study based on Model Driven Development. We use two different frameworks: DIME for the application design and implementation of IoT and edge aspects as well as analytics in R, and Pyrus for data analytics in Python, demonstrating how such engineers can build innovative IoT applications without having the full coding expertise. With this approach, we develop an application that connects a range of heterogeneous technologies: sensors through the EdgeX middleware platform with data analytics and web based configuration applications. The connection to data analytics pipelines can provide various kinds of information to the application users. Our innovative development approach has the potential to simplify the development and deployment of such applications in industry.
APA, Harvard, Vancouver, ISO, and other styles
2

Guevara, Ivan, Hafiz Ahmad Awais Chaudhary, and Tiziana Margaria. "Model-Driven Edge Analytics: Practical Use Cases in Smart Manufacturing." In Lecture Notes in Computer Science, 406–21. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19762-8_29.

Full text
Abstract:
In the Internet of Things (IoT) era, devices and systems generate enormous amounts of real-time data and demand real-time analytics in an uninterrupted manner. The typical solution, a cloud-centred architecture providing an analytics service, cannot guarantee real-time responsiveness because of unpredictable workloads and network congestion. Recently, edge computing has been proposed as a solution to reduce latency in critical systems. For computation and analytics on the edge, the challenges include handling the heterogeneity of devices and data, and achieving processing on the edge in order to reduce the amount of data transmitted over the network. In this paper, we show how low-code, model-driven approaches benefit a Digital Platform for Edge analytics. The first solution uses EdgeX, an IIoT framework supporting heterogeneous architectures, with the eKuiper rule-based engine. The engine fully automatically schedules tasks that retrieve data from the Edge, at the infrastructure near where the data is generated, allowing us to create a continuous flow of information. The second solution uses FiWARE, an IIoT framework used in industry, employing IoT agents to build a pipeline for edge analytics. In our architecture, based on the DIME LC/NC Integrated Modelling Environment, both integrations, EdgeX/eKuiper and FiWARE, happen by adding an External Native DSL to this Digital Platform. The DSL comprises a family of reusable Service-Independent Building blocks (SIBs), which are the essential modelling entities and (service) execution capabilities in the architecture’s modelling layer. They provide users with capabilities to connect, control, and organise devices and components, and to develop custom workflows in a simple drag-and-drop manner.
APA, Harvard, Vancouver, ISO, and other styles
3

John, Jobish, Amrita Ghosal, Tiziana Margaria, and Dirk Pesch. "DSLs and Middleware Platforms in a Model-Driven Development Approach for Secure Predictive Maintenance Systems in Smart Factories." In Lecture Notes in Computer Science, 146–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89159-6_10.

Full text
Abstract:
In many industries, traditional automation systems (operational technology) such as PLCs are being replaced with modern, networked ICT-based systems as part of a drive towards the Industrial Internet of Things (IIoT). The intention behind this is to use more cost-effective, open platforms that also integrate better with an organisation’s information technology (IT) systems. In order to deal with heterogeneity in these systems, middleware platforms for Internet of Things (IoT) systems such as EdgeX Foundry, IoTivity, and FI-WARE are under development; they provide integration and try to overcome interoperability issues between devices of different standards. In this paper, we consider the EdgeX Foundry IIoT middleware platform as a transformation engine between field devices and enterprise applications. We also consider security as a critical element in this and discuss how to prevent or mitigate several security risks. Here we address secure data access control by introducing a declarative policy layer implementable using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Finally, we tackle the interoperability challenge at the application layer by connecting EdgeX with DIME, a model-driven/low-code application development platform that provides methods and techniques for systematic integration based on layered Domain-Specific Languages (DSLs). Here, EdgeX services are accessed through a Native DSL, and the application logic is designed in the DIME Language DSL, lifting middleware development/configuration to a DSL abstraction level. Through the use of DSLs, this approach covers the integration space domain by domain, technology by technology, and is thus highly generalizable and reusable. We validate our approach with an example IIoT use case in smart manufacturing.
APA, Harvard, Vancouver, ISO, and other styles
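The declarative policy layer mentioned in the abstract above can be pictured as a boolean policy tree evaluated over a user's attribute set. The sketch below shows only that evaluation logic, not the cryptography: in real CP-ABE the policy is enforced inside the ciphertext itself, and the attribute names here are invented for illustration:

```python
def satisfies(policy, attributes):
    """Evaluate a declarative access policy against a user's attribute set.

    A policy is either a required attribute (a string) or a node of the
    form ("AND" | "OR", [sub-policies])."""
    if isinstance(policy, str):          # leaf: a single required attribute
        return policy in attributes
    op, children = policy
    results = [satisfies(child, attributes) for child in children]
    return all(results) if op == "AND" else any(results)

# Hypothetical policy: maintenance staff at factory A, or an admin who
# also holds the maintenance role.
policy = ("AND", ["role:maintenance",
                  ("OR", ["site:factory-a", "role:admin"])])
```

A CP-ABE scheme would encrypt sensor data under such a tree so that only keys issued for a satisfying attribute set can decrypt; the tree evaluation above is the access-control semantics that encryption enforces.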
4

Meng, Xiangxiu, Xuejun Zhu, Yunpeng Ding, and Dengrong Qi. "Application of Image Recognition in Precise Inoculation Control System of Pleurotus Eryngii." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 988–1005. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_100.

Full text
Abstract:
The traditional inoculation technology for Pleurotus eryngii is manual inoculation, which has the disadvantages of low efficiency and a high failure rate. In order to solve this problem, an automatic control system for Pleurotus eryngii inoculation is proposed. In this paper, based on the system's requirements for high reliability, high efficiency, and flexible configuration, a PLC is used as the core component of the control system and controls the operation of the whole system. In order to improve the efficiency of the control system, the particle swarm optimization algorithm was used to optimize the interpolation time of the manipulator trajectory; simulation showed that the joint acceleration curve was smooth without mutation and that the running time was short. Because position deviations of the Pleurotus eryngii culture media to be inoculated inevitably occur as they are transferred on the conveyor belt, image recognition technology is used to locate them precisely. In order to improve the efficiency of image recognition, a genetic algorithm (GA) is used to improve Otsu's method for finding the target region of the culture media to be inoculated, and simulation results showed that the computational efficiency could be increased by 70%. In order to locate the center of the target region, the mean value method is used to find the centroid coordinates. Finally, simulation showed that the centroid coordinates could be accurately calculated for a basket of 12 Pleurotus eryngii culture media to be inoculated.
APA, Harvard, Vancouver, ISO, and other styles
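Two steps of the pipeline in the abstract above, Otsu thresholding to segment the target region and a mean-value centroid to locate it, can be sketched directly. The 5x5 toy image stands in for a real camera frame, and the GA acceleration described in the paper is omitted:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                      # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def centroid(image, threshold):
    """Mean-value method: average the (row, col) coordinates of foreground pixels."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > threshold]
    return (sum(r for r, _ in coords) / len(coords),
            sum(c for _, c in coords) / len(coords))

# Toy 5x5 "image": a bright 3x3 blob (200) centred at (2, 2) on a dark background (10).
image = [[200 if abs(r - 2) <= 1 and abs(c - 2) <= 1 else 10
          for c in range(5)] for r in range(5)]
t = otsu_threshold([v for row in image for v in row])
cy, cx = centroid(image, t)
```

On a bimodal histogram like this one, Otsu lands between the two modes and the centroid recovers the blob centre exactly, which is the property the inoculation system relies on to correct conveyor-belt position deviations.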
5

Li, Hui, and Xin Yang. "Key Technologies of Sovereignty Network." In Co-governed Sovereignty Network, 95–182. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2670-8_4.

Full text
Abstract:
In order to realize all functions of the sovereignty network while guaranteeing security at the same time, which key technologies should be used in the architecture? This chapter elaborates on the key technologies of the sovereignty network. Firstly, on the application layer, we propose a large-scale, multilaterally managed consortium blockchain technology named Proof of Vote to construct the multi-identifier system (MIS), allowing all nations in the world to jointly and equally co-manage the top-level identifiers, which assures intercommunication between different nations. Meanwhile, each nation retains autonomy over the low-level identifiers under its corresponding top-level identifiers, which reflects the independence of cyberspace sovereignty. We then propose efficient algorithms to support inter-translation and addressing across multiple identifier types at the scale of hundreds of billions of identifiers; together with the hyperbolic routing scheme, these form the core technologies of the network layer in the MIN architecture. Technologies for guaranteeing privacy, security, transmission control, etc., are also presented.
APA, Harvard, Vancouver, ISO, and other styles
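The abstract above names hyperbolic routing without detailing it. The standard idea behind such schemes is greedy forwarding over hyperbolic coordinates: each node hands the packet to the neighbor closest to the destination in hyperbolic distance. A minimal sketch, with an invented star topology and coordinates chosen only for illustration:

```python
import math

def hyp_dist(a, b):
    """Hyperbolic distance between points in polar coordinates (r, theta)."""
    (r1, t1), (r2, t2) = a, b
    # smallest angle between the two directions
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))

def greedy_route(graph, coords, src, dst):
    """At each hop, forward to the neighbor hyperbolically closest to dst;
    give up if no neighbor makes progress (greedy routing can fail)."""
    path, node = [src], src
    while node != dst:
        nxt = min(graph[node], key=lambda n: hyp_dist(coords[n], coords[dst]))
        if hyp_dist(coords[nxt], coords[dst]) >= hyp_dist(coords[node], coords[dst]):
            return None
        path.append(nxt)
        node = nxt
    return path

# Toy embedding: a well-connected core node near the disc centre (small r)
# and peripheral nodes at large r, as is typical of hyperbolic embeddings.
coords = {"hub": (0.1, 0.0), "a": (2.0, 0.5), "b": (2.0, 3.0), "c": (2.0, 5.0)}
graph = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
path = greedy_route(graph, coords, "a", "b")
```

The appeal for huge identifier spaces is that each node only needs its neighbors' coordinates, not global routing tables.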
6

Lutz, Alexander, and Axel Lachmeyer. "SciPPPer: Automatic Lock-Passage for Inland Vessels – Practical Results Focusing on Control Performance." In Lecture Notes in Civil Engineering, 959–68. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6138-0_85.

Full text
Abstract:
Navigating through locks is one of the most challenging tasks that skippers have to perform in inland navigation. Typical dimensions of a ship (width = 11.45 m) and a lock (width = 12 m) leave an error margin of less than 30 cm to the left and to the right of the ship when navigating within a lock chamber. Typical inland vessels on European waters have a length of 82 to 186 m. The wheelhouse on cargo vessels is located close to the stern of the vessel, which leads to low visibility of the bow in the lock chamber. In order to cope with this issue, a deck hand monitors the bow and announces distances to the skipper via radio. The quality of this information depends on the deck hand’s ability to judge distances correctly and is prone to error. This highly demanding maneuver needs to be performed up to 15 times per day, and each lock passage can take up to 30 minutes. The research project SciPPPer aims at automating this complex navigational task. The German acronym SciPPPer stands for Schleusenassistenzsystem basierend auf PPP und VDES für die Binnenschifffahrt – lock assistant system based on PPP and VDES for inland navigation. The idea is to fully automate the navigation into and out of a lock using high-precision GNSS (Global Navigation Satellite System) with PPP (precise point positioning) correction data transmitted from shore to ship via VDES (VHF Data Exchange System), an extension of AIS (Automatic Identification System). This absolute measurement data is complemented by relative measurements from LiDAR and automotive RADAR and fused with inertial measurement data delivered by a mechanical gyro system. Apart from the challenge of precisely measuring the position and orientation of the vessel within the lock chamber, the control task poses an interesting problem as well. This contribution introduces both the measuring and the control problem.
However, the focus lies on the results of the control performance that was achieved on a full-bridge simulator as well as during real-world trials. A full-bridge simulator was used in order to test the control strategy and its algorithms safely. A number of different actuator configurations were investigated. Typical inland cargo vessels use one or two propellers with Kort nozzle and a twin rudder behind each propeller and a 360° turnable bow thruster. Typical inland passenger vessels use several (2–4) 360° turnable rudder propellers as main propulsion as well as a 360° turnable bow thruster or a classical tunnel thruster which can only apply forces to starboard or portside. These typical configurations were examined by simulation. The real-world trials were performed on a passenger vessel with three rudder propellers as main propulsion as well as a classical tunnel bow thruster acting left and right.This contribution presents the results of the simulator study as well as the real-world trials in terms of control performance. It explains specific challenges due to the navigation within an extremely confined space. The contribution concludes with lessons learned as well as an outlook focusing on the potential of the introduction of such a system to the inland navigation market.
APA, Harvard, Vancouver, ISO, and other styles
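The control task described in the entry above, holding a vessel's lateral deviation inside a margin of roughly 0.3 m, can be caricatured with a toy double-integrator model under PD feedback. The dynamics and gains below are invented for illustration and ignore hydrodynamics, actuator limits, and the confined-water effects the project actually has to handle:

```python
def simulate_pd(y0, kp, kd, dt=0.1, steps=300):
    """Toy lateral dynamics y'' = u with PD feedback u = -kp*y - kd*y'.
    y is the lateral deviation from the lock's centre line [m]."""
    y, v = y0, 0.0
    trace = []
    for _ in range(steps):
        u = -kp * y - kd * v   # PD control force per unit mass
        v += u * dt            # semi-implicit Euler integration
        y += v * dt
        trace.append(y)
    return trace

# Start 0.25 m off-centre, i.e. already close to the ~0.3 m margin.
trace = simulate_pd(y0=0.25, kp=1.0, kd=2.0)
```

With kp = 1 and kd = 2 the closed loop is critically damped, so the deviation decays to the centre line without overshooting the chamber wall, which is the qualitative behavior a lock-passage controller must guarantee.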
7

Yahya, Abid, Farid Ghani, R. Badlishah Ahmad, Mostafijur Rahman, Aini Syuhada, Othman Sidek, and M. F. M. Salleh. "Development of an Efficient and Secure Mobile Communication System with New Future Directions." In Handbook of Research on Computational Science and Engineering, 219–38. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-116-0.ch010.

Full text
Abstract:
This chapter presents the performance of a new technique for constructing Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) encrypted codes based on a row division method. The new QC-LDPC encrypted codes are flexible in terms of large girth, multiple code rates, and large block lengths. In the proposed algorithm, the restructuring of the interconnections is achieved by splitting rows into subrows. This row division reduces the load on the processing node and ultimately reduces hardware complexity. In this method of encrypted code construction, rows are used to form a distance graph and are then transformed into a parity-check matrix in order to acquire the desired girth. In this work, matrices are divided into small sub-matrices, which results in improved decoding performance and reduces the waiting time of the messages to be updated. Matrix sub-division increases the number of sub-matrices to be managed and the memory requirement. Moreover, a prototype architecture of the LDPC codes has been implemented by writing VHDL (VHSIC Hardware Description Language) code targeted to a Xilinx Spartan-3E XC3S500E FPGA chip.
APA, Harvard, Vancouver, ISO, and other styles
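The building block behind QC-LDPC constructions like the one above is a parity-check matrix assembled from circulant permutation sub-matrices, with shift values chosen so that no 4-cycles (girth-4 loops) appear. A small sketch; the shift grid below is picked by hand for illustration and is not taken from the chapter:

```python
def circulant(size, shift):
    """size x size circulant permutation matrix: the identity shifted cyclically."""
    return [[1 if (c - r) % size == shift else 0 for c in range(size)]
            for r in range(size)]

def qc_ldpc(shifts, size):
    """Assemble a parity-check matrix H from a grid of circulant shift values."""
    rows = []
    for shift_row in shifts:
        blocks = [circulant(size, s) for s in shift_row]
        for r in range(size):
            rows.append([b[r][c] for b in blocks for c in range(size)])
    return rows

def has_4_cycle(h):
    """A 4-cycle exists iff two rows of H share ones in two or more columns."""
    for i in range(len(h)):
        for j in range(i + 1, len(h)):
            if sum(a & b for a, b in zip(h[i], h[j])) >= 2:
                return True
    return False

# 2x3 grid of 5x5 circulants; these shifts avoid girth-4 cycles.
H = qc_ldpc([[0, 1, 2], [0, 2, 4]], 5)
```

Checking row-pair overlaps like this is the brute-force version of the shift-difference condition that QC-LDPC designs use to guarantee girth greater than 4.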
8

Jin, Minhao, Libo Fan, Shaojie Luo, and Xiaotian Guo. "Application of SCMA Based on LoRaWAN Architecture." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220546.

Full text
Abstract:
Low-Power Wide-Area Network (LPWAN), a wireless connection technology for the Internet of Things, meets network providers' and operators' requirements for power consumption, network coverage, and IoT device support. As the main LPWAN technology, Long Range (LoRa) has entered the stage of large-scale deployment. However, under the LoRa MAC protocol, multiple end nodes may send data to the gateway at the same time, which causes serious collision problems. When a data collision occurs, the gateway no longer receives the colliding data, and the packets need to be retransmitted after a certain time. In order to solve this problem and improve the throughput of the whole system, this paper proposes to introduce the sparse code multiple access (SCMA) technology of 5G non-orthogonal multiple access (NOMA) into the Long Range Wide Area Network (LoRaWAN) architecture from the code-domain perspective. Theory and simulation results show that SCMA can effectively solve the data collision problem and improve system throughput.
APA, Harvard, Vancouver, ISO, and other styles
9

Hashemi, Manata. "Saving Face." In Coming of Age in Iran, 35–57. NYU Press, 2020. http://dx.doi.org/10.18574/nyu/9781479876334.003.0002.

Full text
Abstract:
This chapter examines the history and structure of face (aberu) and facework in Iran. Upon analyzing the interlinkages between saving face and cultural norms of modesty, the chapter argues that face-savers uphold a moral code comprising four rules—hard work, self-sufficiency, appearance, and purity—that mitigate threats to their face. These rules serve as moral evaluative distinctions by which both face-savers and members of their community judge others’ moral worth. This process leads to a system of micro-stratification within low-income communities, whereby those who have accumulated moral capital by mimicking middle-class values hold a higher status than those who have not. By exchanging their moral capital for social and economic benefits, face-savers come to gain social mobility within poverty. Agency thus lies in the process of ritual compliance to the social order. The chapter further discusses how face-savers’ aspirations for middle-class lifestyles not only reflect the embourgeoisement of Iranian society beyond the middle and upper classes, but also the state’s own developmental initiatives.
APA, Harvard, Vancouver, ISO, and other styles
10

Grünthal, Riho. "Veps." In The Oxford Guide to the Uralic Languages, 291–307. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198767664.003.0017.

Full text
Abstract:
Veps is geographically the easternmost of the Finnic minority languages in north-west Russia. The Veps speech community has rapidly decreased since the 1930s owing to assimilation, migration, low birth-rate, and political and economic changes. Veps phonology and morphology are clearly distinct from those of other Finnic languages, whereas there are only minor differences between the three major dialects, northern, central, and southern Veps. The morpheme order is word + number + case, whereas possessive suffixes are used very rarely. Verbs have two conjugations, the subjective and the reflexive conjugations. The tense system is shifting towards a two-tense pattern consisting of present and past, instead of four tenses. The basic word order SVO is flexible and may change in terms of topicality and discourse functions. Literary language and spoken varieties differ considerably from one another. The latter displays ample code-switching and Russian vocabulary, whereas literary Veps has been influenced by Finnish.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Low-Order Code"

1

Schmidt, Kai-Uwe. "On Cosets of the Generalized First-Order Reed–Muller Code with Low PMEPR." In 2006 IEEE International Symposium on Information Theory. IEEE, 2006. http://dx.doi.org/10.1109/isit.2006.261649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Manthey, René, Alexander Knospe, Carsten Lange, Christoph Schuster, and Antonio Hurtado. "Advanced Natural Circulation Reduced Order Model With Inclined Channel for Low Pressure Conditions." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81238.

Full text
Abstract:
Natural circulation with two-phase flow is a nonlinear dynamical system that can show very complex and strange behavior under specific conditions. Stability analysis requires a large computational effort and is cumbersome when the dynamical behavior is predicted by system codes alone. Therefore, model order reduction techniques are used to compensate for this disadvantage by coupling with a bifurcation code such as MatCont. A reduced order model is derived by employing the POD method to analyze the stability landscape of a low-pressure natural circulation system representing passive safety systems such as the containment cooling condenser. The required full order model (FOM) contains a classical natural circulation loop with a heated section and a riser. The two-phase region is modeled by a drift-flux mixture model. The reliability of the FOM is investigated by comparison with a reference model in the validated system code ATHLET.
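As a hedged illustration of the POD step (a generic sketch on synthetic data, not the ATHLET-coupled model), the reduced order model is obtained by taking the SVD of a snapshot matrix and projecting onto the leading modes:

```python
import numpy as np

# Generic POD sketch: keep the leading left singular vectors of a
# snapshot matrix as a low-dimensional basis for the reduced order model.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
# synthetic snapshots: each column mixes two spatial structures plus noise
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)], axis=1)
snapshots = modes @ rng.normal(size=(2, 50)) + 1e-6 * rng.normal(size=(100, 50))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                   # truncation order of the ROM
basis = U[:, :r]                        # dominant POD modes
reduced = basis.T @ snapshots           # r-dimensional representation
rel_error = (np.linalg.norm(snapshots - basis @ reduced)
             / np.linalg.norm(snapshots))
```

In a full workflow the governing equations would then be Galerkin-projected onto `basis`, yielding the small ODE system handed to the bifurcation code.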
APA, Harvard, Vancouver, ISO, and other styles
3

Manhartsgruber, Bernhard. "H2-Optimal Low Order Transmission Line Models." In ASME/BATH 2019 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/fpmc2019-1688.

Full text
Abstract:
Transmission line modeling has played a crucial role in understanding the dynamics of fluid power systems. A vast body of literature exists, from simple lumped-parameter approaches to fully coupled three-dimensional fluid-structure interaction models. When it comes to computationally efficient yet physically sound low-order models, needed for fast computations iteratively called by optimization codes or for model-based control design, there is still room for improvement. Modal approximations of the input-output behaviour of liquid transmission lines have been around for decades. The basic idea of tuning the parameters of a canonical linear time-invariant state-space model to fit the transfer functions of a transmission line model in the H2-optimal sense under passivity constraints was published by the author of the present paper in the past. However, the method was so far barely usable owing to numerical difficulties in the underlying optimization process. A new implementation of the method employing quadruple-precision floating-point numbers has recently been found to resolve the convergence problems and is reported in the present paper. The new version of the method is based on analytic computation of the cost and constraint functions, as well as their gradients, in the computer algebra package Maple and on automatic code generation for compilation in FORTRAN. Results are very promising because both the entire low-frequency behaviour and the first three eigenmodes of a transmission line model can be accurately covered by a model of order eight only.
APA, Harvard, Vancouver, ISO, and other styles
4

Poggiali, Matteo, Andrea Gamannossi, Leonardo Langone, and Alberto Amerini. "Civil aero-engine performance prediction using a low-order code and uncertainty quantification estimation." In SECOND INTERNATIONAL CONFERENCE ON MATERIAL SCIENCE, SMART STRUCTURES AND APPLICATIONS: ICMSS-2019. AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5138863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Xiaofeng, and Yi Shen. "A Medical Image Compression Scheme Based on Low Order Linear Predictor and Most-likely Magnitude Huffman Code." In 2006 International Conference on Mechatronics and Automation. IEEE, 2006. http://dx.doi.org/10.1109/icma.2006.257487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mahmoudi, Yasser, Andrea Giusti, Epaminondas Mastorakos, and Ann P. Dowling. "Low-Order Modelling of Combustion Noise in an Aero-Engine: The Effect of Entropy Dispersion." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63147.

Full text
Abstract:
The present work studies the effect of entropy dispersion on the level of combustion noise at the turbine outlet of the Rolls-Royce ANTLE aero-engine. A new model for the decay of entropy waves, based on modelling dispersion effects, is developed and utilised in a low-order network model of the combustor (the LOTAN code, which solves the unsteady Euler equations). The proposed model for the dispersion of entropy waves only requires the mean velocity field as an input, obtained by RANS computations of the demonstrator combustor. LOTAN is then coupled with a low-order model code (LINEARB), based on the semi-actuator disk model, that studies the propagation of combustion noise through turbine blades. Thus, by combining LOTAN and LINEARB we predict the combustion noise and its counterparts, direct and indirect noise, generated at the turbine exit. In comparison with experimental data it is found that, without the inclusion of entropy dispersion, the level of combustion noise at the turbine exit is over-predicted by almost two orders of magnitude. The introduction of entropy dispersion in LOTAN results in much better agreement with the experimental data, highlighting the importance of entropy wave dispersion for the prediction of combustion noise in real engines. In more detail, the agreement with the experiment at high and low frequencies was very good. At intermediate frequencies the experimental measurements are still over-predicted; however, the predicted noise is much smaller than in the case without entropy dispersion. This discrepancy is attributed to (i) the role of turbulent mixing in the overall decay of the entropy fluctuations inside the combustor, not considered in the model developed for the decay of entropy waves, and (ii) the absence of a proper model in LINEARB for the decay of entropy waves as they pass through the turbine blade rows. These are areas that still need further development to improve the prediction of low-order network codes.
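A minimal sketch of the dispersion mechanism (our simplified stand-in, not the LOTAN implementation): if entropy spots reach the combustor exit with a Gaussian spread of convection times, the wave amplitude at frequency f is attenuated by the Fourier transform of that spread, which damps high frequencies strongly while leaving low frequencies nearly intact, consistent with the frequency-dependent agreement reported above.

```python
import math

# Simplified stand-in for entropy dispersion: a Gaussian spread sigma_t of
# convection times attenuates an entropy wave of frequency f_hz by the
# Fourier transform of that Gaussian.
def entropy_attenuation(f_hz, sigma_t):
    return math.exp(-0.5 * (2.0 * math.pi * f_hz * sigma_t) ** 2)

# with a 5 ms spread, low frequencies pass while high ones are damped
low = entropy_attenuation(10.0, 5e-3)    # ~0.95
high = entropy_attenuation(500.0, 5e-3)  # effectively zero
```

The 5 ms spread is an assumed illustrative value; in the paper the spread follows from the RANS mean velocity field.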
APA, Harvard, Vancouver, ISO, and other styles
7

El-Badawy, El-Sayed A., El-Sayed A. Youssef, and Hossam M. H. Shalaby. "PPM performance using a concatenated code over an optical-rf relay link." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.thc9.

Full text
Abstract:
Use of an optical-rf relay deep-space communication link is being considered [1] that transmits optical MPPM data from a deep-space vehicle to a satellite and then retransmits the data via microwave to ground. High-order (255,127) Reed-Solomon (RS) codes [2] over the MPPM optical link have been considered for improved error correction. In this work we show that either timing errors or the optical power constraint may cause severe degradation in system performance when high-order RS codes are used. Using an (8,4) extended Bose-Chaudhuri-Hocquenghem (BCH) code as the inner code of a concatenated code whose outer code is a low-order (15,7) RS code, and then transmitting the data using optical BPPM, can eliminate these two limitations. Furthermore, we show that this concatenated code is superior to RS codes of different orders up to the (127,63) RS code and is an attractive competitor to the (255,127) RS code from the point of view of performance. It also requires a lower bandwidth expansion factor and simpler system implementation. The price paid is the need for more energy, which may be supplied by solar cells. It is also shown that using an interleaver after the inner encoder and then transmitting the data as M-ary PPM optical pulses with low values of M (M = 16) gives performance that competes with the (255,127) RS code and needs lower bandwidth and low energy values, but at the expense of complex system implementation.
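For orientation, the rate bookkeeping behind the concatenation can be sketched (our own arithmetic, not figures from the paper): the overall rate of a concatenated code is the product of the inner and outer code rates. Note that the bandwidth comparison in the abstract also depends on the PPM order, which this sketch ignores.

```python
from fractions import Fraction

# Rate bookkeeping for a concatenated code: the overall rate is the
# product of the inner and outer code rates (k/n each).
def code_rate(n, k):
    return Fraction(k, n)

inner = code_rate(8, 4)        # (8,4) extended BCH inner code -> rate 1/2
outer = code_rate(15, 7)       # (15,7) Reed-Solomon outer code -> rate 7/15
overall = inner * outer        # 7/30, before the PPM mapping
alone = code_rate(255, 127)    # (255,127) RS code used on its own
```

The exact rational arithmetic makes it easy to compare schemes without floating-point rounding.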
APA, Harvard, Vancouver, ISO, and other styles
8

Campa, Giovanni, Sergio Mario Camporeale, Anai¨s Guaus, Julien Favier, Matteo Bargiacchi, Alessandro Bottaro, Ezio Cosatto, and Giulio Mori. "A Quantitative Comparison Between a Low Order Model and a 3D FEM Code for the Study of Thermoacoustic Combustion Instabilities." In ASME 2011 Turbo Expo: Turbine Technical Conference and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/gt2011-45969.

Full text
Abstract:
The study of thermoacoustic combustion instabilities plays an important role in the safe operation of modern gas turbines equipped with lean premixed dry low emission combustion systems. Gas turbine manufacturers often adopt simulation tools based on low order models for predicting the phenomenon of humming. These simulation codes provide fast responses and good physical insight, but generally only one-dimensional or two-dimensional simplified schemes can be examined. The finite element method can overcome such limitations, because it allows three-dimensional geometries to be examined and the complex eigenfrequencies of the system to be found. Large Eddy Simulation (LES) techniques have been proposed to investigate the instability phenomenon, matching pressure fluctuations with turbulent combustion phenomena to study thermoacoustic combustion oscillations, even though they require large numerical resources. The finite element approach numerically solves the Helmholtz equation, converted into a complex eigenvalue problem in the frequency domain. The complex eigenvalues of the system allow us to identify the complex eigenfrequencies of the combustion system analyzed, so that we have a valid indication of the frequencies at which thermoacoustic instabilities are expected and of the growth rate of the pressure oscillations at the onset of instability. Through the collaboration among Ansaldo Energia, the University of Genoa, and the Polytechnic University of Bari, a quantitative comparison between a low order model, called LOMTI, and the three-dimensional finite element method has been carried out, in order to exploit the advantages of both methodologies.
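The eigenvalue formulation can be sketched in one dimension (an assumed stand-in for the 3D FEM code, with pressure-release boundary conditions): linear finite elements turn the Helmholtz equation into a generalized eigenvalue problem K p = λ M p, whose eigenvalues approximate the acoustic mode frequencies.

```python
import numpy as np

# 1-D stand-in for the 3-D FEM approach: linear elements on a duct of
# length L with p = 0 at both ends turn p'' + (omega/c)^2 p = 0 into
# K p = lam M p; the smallest eigenvalue approximates (pi/L)^2.
L, n = 1.0, 200
h = L / n
m = n - 1  # interior nodes only (pressure-release boundaries)
K = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h              # stiffness matrix
M = (np.diag(4.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) * h / 6.0        # consistent mass matrix
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
first_mode_error = abs(lam[0] - np.pi ** 2)
```

In the combustion-instability setting, the flame response makes the operator non-self-adjoint, so the eigenvalues become complex and their imaginary parts give the growth rates mentioned in the abstract.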
APA, Harvard, Vancouver, ISO, and other styles
9

Bolinches-Gisbert, Marc, David Cadrecha Robles, Roque Corral, and Fernando Gisbert. "Prediction of Reynolds Number Effects on Low-Pressure Turbines Using a High-Order ILES Method." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91346.

Full text
Abstract:
A comprehensive comparison between Implicit Large Eddy Simulations (ILES) and experimental results for a modern high-lift low-pressure turbine airfoil has been carried out for an array of Reynolds numbers (Re). Experimental data were obtained in a low-speed linear cascade at the Polytechnic University of Madrid using hot-wire anemometry and LDV. The numerical code is fourth-order accurate in both time and space. The spatial discretization of the compressible Navier-Stokes equations is based on a high-order Flux Reconstruction approach, while a fourth-order Runge-Kutta method is used to march the simulations in time. The losses, pressure coefficient distributions, and boundary layer and wake velocity profiles have been compared for an array of realistic Reynolds numbers. Moreover, boundary layer and wake velocity fluctuations are compared for the first time with experimental results. It is concluded that the accuracy of the numerical results is comparable to that of the experiments, especially for integral quantities such as the losses or the exit angle. Turbulent fluctuations in the suction-side boundary layer and the wakes are also well predicted. The elapsed time of the simulations is about 140 hours on 40 Graphics Processing Units. The numerical tool is integrated within an industrial design system and reuses pre- and post-processing tools previously developed for other kinds of applications. The trend of the losses with the Reynolds number has a sub-critical regime, where the losses scale with Re^-1, and a supercritical regime, where the losses scale with Re^-1/2. This trend can be seen both in the simulations and in the experiments.
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Yi, Kai Liu, and W. N. Dawes. "Fast High Order Large Eddy Simulations on Many Core Computing Systems for Turbomachinery." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-57468.

Full text
Abstract:
The overall aim of our research is to enable overnight high-fidelity LES for realistic industrial problems on affordable computing resources. We have adopted a "3E" approach: high spatial discretization Efficiency on general unstructured meshes, high-Efficiency accurate time integration, and high computing Efficiency on modern low-cost HPC hardware. Our approach is centered on high-order Flux Reconstruction with local time stepping, the STEFR algorithm [1]. In this paper, an offload-mode version of this code is described, targeted at a heterogeneous many-core computing system based on low-cost commodity hardware (Intel PHI cards). Three key techniques are introduced to achieve high FLOP rates and optimal usage of the non-equilibrium memory of both the CPU and the many-core coprocessor: three levels of parallelization, multi-level non-equilibrium mesh partitioning, and an asynchronous computing structure. A series of high-order LES runs for a high-lift low-pressure turbine blade and a transonic turbine blade, with different orders of accuracy, both fully wall-resolved and wall-modelled, were performed, analyzed, and presented. This work demonstrates that the high-order STEFR method has the potential to support overnight LES for realistic industrial problems on affordable computing resources.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Low-Order Code"

1

Bano, Masooda. The Missing Link: Low-Fee Private Tuition and Education Options for the Poor – The Demand-Side Dynamics in Pakistan. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/113.

Full text
Abstract:
Low-fee private schools are today recognised as important players in the education market in developing countries, as they are argued to provide at least marginally better education than is on offer in the state schools. Leading international development agencies have begun encouraging governments in developing countries to include them within the policy-planning process. Based on fieldwork in two urban neighbourhoods in Pakistan, this paper shows that low-income parents are keen to secure good-quality education for their children, but they have to choose not only between state schools and low-fee private schools but also from among an array of low-fee tuition providers in their immediate neighbourhood to ensure that the child can cope in class, complete daily homework assignments, and pass exams in order to transition to the next grade. The evidence presented in this paper suggests that whether their child is enrolled in a state school or in a low-fee private school, the parents’ dependence on low-fee tuition providers is absolute: without their services, the child will not progress through the primary grades. Yet the sector remains entirely under-researched. The paper argues for the need to map the scale of this sector, document the household spending on it, and bring it within policy debates, placing it alongside low-fee private schools and state schools in order to provide access to primary education to all and improve the quality of education. 
At the same time it complicates the existing debates on low-fee private schools, by showing that parents on very low incomes — in this case households where mothers are employed as domestic workers and fathers are in casual employment — find them inaccessible; it also shows that household spending on education needs to take into account not just the charges imposed by low-fee schools, but also the cost of securing religious education, which is equally valued by the parents and is not free, and also the cost of paying the low-fee tuition provider. When all these costs are taken into account, the concerns that low-fee private schools are not truly accessible to the poor gain further traction. The paper also shows that mothers end up bearing the primary burden, having to work to cover the costs of their children’s education, because the core income provided by the father can barely cover the household costs.
APA, Harvard, Vancouver, ISO, and other styles
2

Maksud, A. K. M., Khandaker Reaz Hossain, and Amit Arulanantham. Mapping of Slums and Identifying Children Engaged in Worst Forms of Child Labour Living in Slums and Working in Neighbourhood Areas. Institute of Development Studies, May 2022. http://dx.doi.org/10.19088/clarissa.2022.002.

Full text
Abstract:
Dhaka has a population of about 19 million and many think it is a city of fortune. People come from all over the country to settle in Dhaka and many low-cost settlements (known as slums) have emerged since the country became independent. Findings of national survey reports suggest there is a high concentration of child labour in the slums of Dhaka, linked with the global supply chain of products. In order to understand the drivers of child labour in the slum areas of Dhaka, a research team formed of the Grambangla Unnayan Committee (GUC) with ChildHope UK designed and conducted a mapping and listing exercise, in consultation with CLARISSA consortium colleagues. The overall objective of the mapping and listing process was to identify and map children engaged in WFCL living in eight slum areas in Dhaka.
APA, Harvard, Vancouver, ISO, and other styles
3

Bano, Masooda. Low-Fee Private-Tuition Providers in Developing Countries: An Under-Appreciated and Under- Studied Market—Supply-Side Dynamics in Pakistan. Research on Improving Systems of Education (RISE), August 2022. http://dx.doi.org/10.35489/bsg-rise-wp_2022/107.

Full text
Abstract:
Although low-income parents’ dependence on low-fee private schools has been actively documented in the past decade, existing research and policy discussions have failed to recognise their heavy reliance on low-fee tuition providers in order to ensure that their children complete the primary cycle. By mapping a vibrant supply of low-fee tuition providers in two neighbourhoods in the twin cities of Rawalpindi and Islamabad in Pakistan, this paper argues for understanding the supply-side dynamics of this segment of the education market with the aim of designing better-informed policies, making better use of public spending on supporting private-sector players to reach the poor. Contrary to what is assumed in studies of the private tuition market, the low-fee tuition providers offering services in the Pakistani urban neighbourhoods are not teachers in government schools trying to make extra money by offering afternoon tutorial to children from their schools. Working from their homes, the tutors featured in this paper are mostly women who often have no formal teacher training but are imaginative in their use of a diverse set of teaching techniques to ensure that children from low-income households who cannot get support for education at home cope with their daily homework assignments and pass the annual exams to transition to the next grade. These tutors were motivated to offer tuition by a combination of factors ranging from the need to earn a living, a desire to stay productively engaged, and for some a commitment to help poor children. Arguing that parents expect them to take full responsibility for their children’s educational attainment, these providers view the poor quality of education in schools, the weak maternal involvement in children’s education, and changing cultural norms, whereby children no longer respect authority, as being key to explaining the prevailing low educational levels. 
The paper presents evidence that the private tuition providers, who may be viewed as education entrepreneurs, have the potential to be used by the state and development agencies to provide better quality education to children from low-income families.
APA, Harvard, Vancouver, ISO, and other styles
4

Cooper, Christopher, Jacob McDonald, and Eric Starkey. Wadeable stream habitat monitoring at Congaree National Park: 2018 baseline report. National Park Service, June 2021. http://dx.doi.org/10.36967/nrr-2286621.

Full text
Abstract:
The Southeast Coast Network (SECN) Wadeable Stream Habitat Monitoring Protocol collects data to give park resource managers insight into the status of and trends in stream and near-channel habitat conditions (McDonald et al. 2018a). Wadeable stream monitoring is currently implemented at the five SECN inland parks with wadeable streams. These parks include Horseshoe Bend National Military Park (HOBE), Kennesaw Mountain National Battlefield Park (KEMO), Ocmulgee Mounds National Historical Park (OCMU), Chattahoochee River National Recreation Area (CHAT), and Congaree National Park (CONG). Streams at Congaree National Park chosen for monitoring were specifically targeted for management interest (e.g., upstream development and land use change, visitor use of streams as canoe trails, and potential social walking trail erosion) or to provide a context for similar-sized stream(s) within the park or network (McDonald and Starkey 2018a). The objectives of the SECN wadeable stream habitat monitoring protocol are to: Determine status of upstream watershed characteristics (basin morphology) and trends in land cover that may affect stream habitat, Determine the status of and trends in benthic and near-channel habitat in selected wadeable stream reaches (e.g., bed sediment, geomorphic channel units, and large woody debris), Determine the status of and trends in cross-sectional morphology, longitudinal gradient, and sinuosity of selected wadeable stream reaches. Between June 11 and 14, 2018, data were collected at Congaree National Park to characterize the in-stream and near-channel habitat within stream reaches on Cedar Creek (CONG001, CONG002, and CONG003) and McKenzie Creek (CONG004). 
These data, along with the analysis of remotely sensed geographic information system (GIS) data, are presented in this report to describe and compare the watershed-, reach-, and transect-scale characteristics of these four stream reaches to each other and to selected similar-sized stream reaches at Ocmulgee Mounds National Historical Park, Kennesaw Mountain National Battlefield Park, and Chattahoochee National Recreation Area. Surveyed stream reaches at Congaree NP were compared to those previously surveyed in other parks in order to provide regional context and aid in interpretation of results. Cedar Creek’s watershed (CONG001, CONG002, and CONG003) drains nearly 200 square kilometers (77.22 square miles [mi2]) of the Congaree River Valley Terrace complex and upper Coastal Plain to the north of the park (Shelley 2007a, 2007b). Cedar Creek’s watershed has low slope and is covered mainly by forests and grasslands. Cedar Creek is designated an “Outstanding Resource Water” by the state of South Carolina (S.C. Code Regs. 61–68 [2014] and S.C. Code Regs. 61–69 [2012]) from the boundary of the park downstream to Wise Lake. Cedar Creek ‘upstream’ (CONG001) is located just downstream (south) of the park’s Bannister Bridge canoe landing, which is located off Old Bluff Road and south of the confluence with Meyers Creek. Cedar Creek ‘middle’ and Cedar Creek ‘downstream’ (CONG002 and CONG003, respectively) are located downstream of Cedar Creek ‘upstream’ where Cedar Creek flows into the relatively flat backswamp of the Congaree River flood plain. Based on the geomorphic and land cover characteristics of the watershed, monitored reaches on Cedar Creek are likely to flood often and drain slowly. Flooding is more likely at Cedar Creek ‘middle’ and Cedar Creek ‘downstream’ than at Cedar Creek ‘upstream.’ This is due to the higher (relative to CONG001) connectivity between the channels of the lower reaches and their out-of-channel areas.
Based on bed sediment characteristics, the heterogeneity of geomorphic channel units (GCUs) within each reach, and the abundance of large woody debris (LWD), in-stream habitat within each of the surveyed reaches on Cedar Creek (CONG001–003) was classified as ‘fair to good.’ Although there is extensive evidence of animal activity...
APA, Harvard, Vancouver, ISO, and other styles
5

Elbaum, Michael, and Peter J. Christie. Type IV Secretion System of Agrobacterium tumefaciens: Components and Structures. United States Department of Agriculture, March 2013. http://dx.doi.org/10.32747/2013.7699848.bard.

Full text
Abstract:
Objectives: The overall goal of the project was to build an ultrastructural model of the Agrobacterium tumefaciens type IV secretion system (T4SS) based on electron microscopy, genetics, and immunolocalization of its components. There were four original aims: Aim 1: Define the contributions of contact-dependent and -independent plant signals to formation of novel morphological changes at the A. tumefaciens polar membrane. Aim 2: Genetic basis for morphological changes at the A. tumefaciens polar membrane. Aim 3: Immuno-localization of VirB proteins Aim 4: Structural definition of the substrate translocation route. There were no major revisions to the aims, and the work focused on the above questions. Background: Agrobacterium presents a unique example of inter-kingdom gene transfer. The process involves cell to cell transfer of both protein and DNA substrates via a contact-dependent mechanism akin to bacterial conjugation. Transfer is mediated by a T4SS. Intensive study of the Agrobacterium T4SS has made it an archetypal model for the genetics and biochemistry. The channel is assembled from eleven protein components encoded on the B operon in the virulence region of the tumor-inducing plasmid, plus an additional coupling protein, VirD4. During the course of our project two structural studies were published presenting X-ray crystallography and three-dimensional reconstruction from electron microscopy of a core complex of the channel assembled in vitro from homologous proteins of E. coli, representing VirB7, VirB9, and VirB10. Another study was published claiming that the secretion channels in Agrobacterium appear on helical arrays around the membrane perimeter and along the entire length of the bacterium. Helical arrangements in bacterial membranes have since fallen from favor however, and that finding was partially retracted in a second publication. 
Overall, the localization of the T4SS within the bacterial membranes remains enigmatic in the literature, and we believe that our results from this project make a significant advance. Summary of achievements: We found that polar inflations and other membrane disturbances relate to the activation conditions rather than to virulence protein expression. Activation requires low pH and nutrient-poor medium. These stress conditions are also reflected in DNA condensation to varying degrees. Nonetheless, they must be considered in modeling the T4SS as they represent the relevant conditions for its expression and activity. We identified the T4SS core component VirB7 at native expression levels using state-of-the-art super-resolution light microscopy. This marker of the secretion system was found almost exclusively at the cell poles, and typically one pole. Immuno-electron microscopy identified the protein at the inner membrane, rather than at bridges across the inner and outer membranes. This suggests a rare or transient assembly of the secretion-competent channel, or alternatively a two-step secretion involving an intermediate step in the periplasmic space. We followed the expression of the major secreted effector, VirE2. This is a single-stranded DNA binding protein that forms a capsid around the transferred oligonucleotide, adapting the bacterial conjugation to the eukaryotic host. We found that over-expressed VirE2 forms filamentous complexes in the bacterial cytoplasm that could be observed both by conventional fluorescence microscopy and by correlative electron cryo-tomography. Using a non-retentive mutant we observed secretion of VirE2 from bacterial poles. We labeled the secreted substrates in vivo in order to detect their secretion and appearance in the plant cells. However, the low transfer efficiency and significant background signal have so far hampered this approach.
APA, Harvard, Vancouver, ISO, and other styles
6

Gottlieb, Yuval, Bradley Mullens, and Richard Stouthamer. investigation of the role of bacterial symbionts in regulating the biology and vector competence of Culicoides vectors of animal viruses. United States Department of Agriculture, June 2015. http://dx.doi.org/10.32747/2015.7699865.bard.

Full text
Abstract:
Symbiotic bacteria have been shown to influence host reproduction and defense against biotic and abiotic stressors, and this relates to the possible development of a symbiont-based control strategy. This project was based on the hypothesis that symbionts have a significant impact on Culicoides fitness and vector competence for animal viruses. The original objectives in our proposal were: 1. Molecular identification and localization of the newly-discovered symbiotic bacteria within C. imicola and C. schultzei in Israel and C. sonorensis in California. 2. Determination of the prevalence of symbiotic bacteria within different vector Culicoides populations. 3. Documentation of specific symbiont effects on vector reproduction and defense: 3a) test for cytoplasmic incompatibility in Cardinium-infected species; 3b) experimentally evaluate the role of the symbiont on infection or parasitism by key Culicoides natural enemies (iridescent virus and mermithid nematode). 4. Testing the role(s) of the symbionts in possible protection against infection of vector Culicoides by BTV. Owing to preliminary findings and difficulties in carrying out experimental procedures used in other insect symbiosis systems, where insect host cultures are easily maintained, we modified the last two objectives as follows: Obj. 3, we tested how symbionts affected the general fitness of Israeli Culicoides species, and thoroughly described and evaluated the correlation between American Culicoides and their bacterial communities in the field. We also tried alternative methods to test symbiont-Culicoides interactions and launched studies to characterize low-temperature stress tolerances of the main US vector, which may be related to symbionts. Obj. 4, we tested the correlation between EHDV (instead of BTV) acquisition and Cardinium infection. Culicoides-borne arboviral diseases are emerging or re-emerging worldwide, causing direct and indirect economic losses as well as reduction in animal welfare.
One novel strategy to reduce insects’ vectorial capacity is to manipulate specific symbionts to affect vector fitness or the performance of the disease agent within. Little was known about the bacterial tenants occupying various Culicoides species, and thus this project was initiated with the above aims. During this project, we were able to describe the symbiont Cardinium and whole bacterial communities in Israeli and American Culicoides species, respectively. We showed that Cardinium infection prevalence is determined by land surface temperature, and this may be important to the larval stage. We also showed no patent significant effect of Cardinium on adult fitness parameters. We showed that the bacterial community in C. sonorensis varies significantly with the host’s developmental stage, but it varies little across multiple wastewater pond environments. This may indicate some specific biological interactions and allowed us to describe a “core microbiome” for C. sonorensis. The final set of analyses, which includes habitat samples, is currently underway, in order to separate the more intimately-associated bacteria from those inhabiting the gut contents or cuticle surface (which also could be important). We were also able to carefully study other biological aspects of Culicoides: we discriminated two species in the C. schultzei group in Israel, and investigated low-temperature tolerances of C. sonorensis that may be related to symbionts. Scientific implications include the establishment of bacterial identification and interactions in Culicoides (our work is cited in other bacteria-Culicoides studies), the development of molecular identification of the C. schultzei group, and the detailed description of the microbiome of the immature and matched adult stages of C. sonorensis.
Agricultural implications include understanding of intrinsic factors that govern Culicoides biology and population regulation, which may be relevant for vector control or reduction in pathogen transmission. Being able to precisely identify Culicoides species is central to understanding Culicoides-borne disease epidemiology.
APA, Harvard, Vancouver, ISO, and other styles
7

Monetary Policy Report - July 2021. Banco de la República, October 2021. http://dx.doi.org/10.32468/inf-pol-mont-eng.tr3-2021.

Full text
Abstract:
Macroeconomic summary The Colombian economy sustained numerous shocks in the second quarter, primarily related to costs and supply. The majority of these shocks were unanticipated or proved more persistent than expected, interrupting the recovery in economic activity observed at the beginning of the year and pushing overall inflation above the target. Core inflation (excluding food and regulated items) increased but remained low, in line with the technical staff’s expectations. A third wave of the pandemic, which became more severe and prolonged than the previous outbreak, began in early April. This had both a high cost in terms of human life and a negative impact on Colombia's economic recovery. Between May and mid-June roadblocks and other disruptions to public order had a significant negative effect on economic activity and inflation. The combination and magnitude of these two shocks likely led to a decline in gross domestic product (GDP) compared to the first quarter. Roadblocks also led to a significant increase in food prices. The accumulated effects of global disruptions to certain value chains and increased international freight transportation prices, which since the end of 2020 have restricted supply and increased costs, also affected Colombia’s economy. The factors described above, which primarily affected the consumer price index (CPI) for goods and foods, explain to a significant degree the technical staff’s forecast errors and the increase in overall inflation above the 3% target. By contrast, increases in core inflation and in prices for regulated items were in line with the technical staff’s expectations, and can be explained largely by the elimination of various price relief measures put in place last year. An increase in perceived sovereign risk and the upward pressures that this implies on international financing costs and the exchange rate were further considerations.
Despite significant negative shocks, economic growth in the first half of the year (9.1%) is now expected to be significantly higher than projected in the April report (7.1%), a sign of a more dynamic economy that could recover more quickly than previously forecast. Diverse economic activity figures have indicated higher-than-expected growth since the end of 2020. This suggests that the negative effects on output from recurring waves of COVID-19 have grown weaker and less long-lasting with subsequent outbreaks. Nevertheless, the third wave of the coronavirus, and to an even greater degree the previously mentioned roadblocks and disruptions to public order, likely led to a decline in GDP in the second quarter compared to the first. Despite this, data from the monthly economic tracking indicator (ISE) for April and May surpassed expectations, and new sector-level measures of economic activity suggest that the negative impact of the pandemic on output continues to moderate, amid reduced restrictions on mobility and improvements in the pace of vaccination programs. Freight transportation registers (June) and unregulated energy demand (July), among other indicators, suggest a significant recovery following the roadblocks in May. Given the above, annual GDP growth in the second quarter is expected to have been around 17.3% (previously 15.8%), explained in large part by a low basis of comparison. The technical staff revised its growth projection for 2021 upward from 6% to 7.5%. This forecast, which comes with an unusually high degree of uncertainty, assumes no additional disruptions to public order and that any new waves of COVID-19 will not have significant additional negative effects on economic activity. Recovery in international demand, price levels for some of Colombia’s export commodities, and remittances from workers abroad have all performed better than projected in the previous report.
This dynamic is expected to continue to drive recovery in the national income over the rest of the year. Continued ample international liquidity, an acceleration in vaccination programs, and low interest rates can also be expected to favor economic activity. Improved performance in the second quarter, which led to an upward growth revision for all components of spending, is expected to continue, with the economy returning to 2019 production levels at the end of 2021, earlier than estimated in the April report. This forecast continues to account for the short-term effects on aggregate demand of a tax reform package along the lines of what is currently being proposed by the national government. Given the above, the central forecast scenario in this report projects growth in 2021 of 7.5% and in 2022 of 3.1% (Graph 1.1). In this scenario, economic activity would nonetheless remain below potential. The noted improvement in these projections comes with a high degree of uncertainty. Annual inflation increased more than expected in June (3.63%) as a result of changes in food prices, while growth in core inflation (1.87%) was similar to projections.
APA, Harvard, Vancouver, ISO, and other styles
