A selection of scholarly literature on the topic "Maximum order complexity"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose the source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Maximum order complexity".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Maximum order complexity":

1

Işık, Leyla, and Arne Winterhof. "Maximum-Order Complexity and Correlation Measures." Cryptography 1, no. 1 (May 13, 2017): 7. http://dx.doi.org/10.3390/cryptography1010007.

2

Chen, Zhixiong, Ana I. Gómez, Domingo Gómez-Pérez, and Andrew Tirkel. "Correlation measure, linear complexity and maximum order complexity for families of binary sequences." Finite Fields and Their Applications 78 (February 2022): 101977. http://dx.doi.org/10.1016/j.ffa.2021.101977.

3

Sun, Zhimin, and Arne Winterhof. "On the Maximum Order Complexity of the Thue-Morse and Rudin-Shapiro Sequence." Uniform distribution theory 14, no. 2 (December 1, 2019): 33–42. http://dx.doi.org/10.2478/udt-2019-0012.

Abstract:
Expansion complexity and maximum order complexity are both finer measures of pseudorandomness than the linear complexity, which is the most prominent quality measure for cryptographic sequences. The expected value of the Nth maximum order complexity is of order of magnitude log N, whereas it is easy to find families of sequences with Nth expansion complexity exponential in log N. This might lead to the conjecture that the maximum order complexity is a finer measure than the expansion complexity. However, in this paper we provide two examples, the Thue-Morse sequence and the Rudin-Shapiro sequence, with very small expansion complexity but very large maximum order complexity. More precisely, we prove explicit formulas for their Nth maximum order complexity, which are both of the largest possible order of magnitude N. We present the result on the Rudin-Shapiro sequence in a more general form, as a formula for the maximum order complexity of certain pattern sequences.
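To make the measure concrete: the Nth maximum order complexity of a sequence is the smallest m such that every length-m window of the first N terms is followed by a uniquely determined symbol, equivalently the length of the shortest feedback shift register (with an arbitrary, possibly nonlinear, feedback function) generating those terms. A minimal brute-force sketch in Python, assuming this standard definition (the literature computes the quantity far more efficiently, e.g. via the DAWG; the function names here are illustrative):

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence: t_k = parity of the
    number of 1-bits in the binary expansion of k."""
    return [bin(k).count("1") % 2 for k in range(n)]

def max_order_complexity(seq):
    """Nth maximum order complexity of seq (N = len(seq)): the smallest m
    such that each length-m window of seq determines the next symbol
    uniquely -- equivalently, the length of the shortest feedback shift
    register with an arbitrary feedback function generating seq."""
    n = len(seq)
    for m in range(1, n):
        next_symbol = {}
        # setdefault stores the successor on first sight; a mismatch on a
        # repeated window means no order-m register can generate seq.
        if all(next_symbol.setdefault(tuple(seq[i:i + m]), seq[i + m]) == seq[i + m]
               for i in range(n - m)):
            return m
    return max(n - 1, 0)
```

On the first 8 terms of Thue-Morse this brute force returns 3; the explicit formulas proved in the paper describe how this value grows, of order of magnitude N.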
4

Sun, Zhimin, Xiangyong Zeng, and Da Lin. "On the Nth maximum order complexity and the expansion complexity of a Rudin-Shapiro-like sequence." Cryptography and Communications 12, no. 3 (September 13, 2019): 415–26. http://dx.doi.org/10.1007/s12095-019-00396-0.

5

Popoli, Pierre. "On the Maximum Order Complexity of Thue–Morse and Rudin–Shapiro Sequences along Polynomial Values." Uniform distribution theory 15, no. 2 (December 1, 2020): 9–22. http://dx.doi.org/10.2478/udt-2020-0008.

Abstract:
Both the Thue–Morse and Rudin–Shapiro sequences are unsuitable for cryptography since their expansion complexity is small and their correlation measure of order 2 is large. These facts imply that these sequences are highly predictable despite the fact that they have a large maximum order complexity. Sun and Winterhof (2019) showed that the Thue–Morse sequence along squares keeps a large maximum order complexity. Since, by Christol's theorem, the expansion complexity of this rarefied sequence is no longer bounded, this provides a potentially better candidate for cryptographic applications. Similar results are known for the Rudin–Shapiro sequence and more general pattern sequences. In this paper we generalize these results to any polynomial subsequence (instead of squares), and thereby answer an open problem of Sun and Winterhof. We conclude the paper with some open problems.
6

Channon, Alastair. "Maximum Individual Complexity is Indefinitely Scalable in Geb." Artificial Life 25, no. 2 (May 2019): 134–44. http://dx.doi.org/10.1162/artl_a_00285.

Abstract:
Geb was the first artificial life system to be classified as exhibiting open-ended evolutionary dynamics according to Bedau and Packard's evolutionary activity measures and is the only one to have been classified as such according to the enhanced version of that classification scheme. Its evolution is driven by biotic selection, that is (approximately), by natural selection rather than artificial selection. Whether or not Geb can generate an indefinite increase in maximum individual complexity is evaluated here by scaling two parameters: world length (which bounds population size) and the maximum number of neurons per individual. Maximum individual complexity is found to be asymptotically bounded when scaling either parameter alone. However, maximum individual complexity is found to be indefinitely scalable, to the extent evaluated so far (with run times in years and billions of reproductions per run), when scaling both world length and the maximum number of neurons per individual together. Further, maximum individual complexity is shown to scale logarithmically with (the lower of) maximum population size and maximum number of neurons per individual. This raises interesting questions and lines of thought about the feasibility of achieving complex results within open-ended evolutionary systems and how to improve on this order of complexity growth.
7

Cardone, Lorenzo, and Stefano Quer. "The Multi-Maximum and Quasi-Maximum Common Subgraph Problem." Computation 11, no. 4 (March 27, 2023): 69. http://dx.doi.org/10.3390/computation11040069.

Abstract:
The Maximum Common Subgraph problem has long been proven NP-hard. Nevertheless, it has countless practical applications, and researchers are still searching for exact solutions and scalable heuristic approaches. Driven by applications in molecular science and cyber-security, we concentrate on the Maximum Common Subgraph among an indefinite number of graphs. We first extend a state-of-the-art branch-and-bound procedure working on two graphs to N graphs. Then, given the high computational cost of this approach, we trade off complexity for accuracy, and we propose a set of heuristics to approximate the exact solution for N graphs. We analyze sequential, parallel multi-core, and parallel many-core (GPU-based) approaches, exploiting several leveraging techniques to decrease the contention among threads, improve the workload balance of the different tasks, reduce the computation time, and increase the final result size. We also present several sorting heuristics to order the vertices of the graphs and the graphs themselves. We compare our algorithms with a state-of-the-art method on publicly available benchmark sets. On graph pairs, we are able to speed up the exact computation by a 2× factor, pruning the search space by more than 60%. On sets of more than two graphs, all exact solutions are extremely time-consuming and hard to apply in many real cases. On the contrary, our heuristics are far less expensive (with a lower bound of 10× on the speed-up), have a far better asymptotic complexity (with speed-ups of up to several orders of magnitude in our experiments), and obtain excellent approximations of the maximal solution, with 98.5% of the nodes on average.
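For intuition only, the two-graph case can be written as exhaustive search over vertex subsets and injections, which is exactly the exponential cost that the branch-and-bound procedure and heuristics above are designed to avoid. A toy sketch, with an assumed adjacency-dict representation (not the authors' code):

```python
from itertools import combinations, permutations

def max_common_subgraph(g1, g2):
    """Maximum common *induced* subgraph of two graphs given as adjacency
    dicts {vertex: set_of_neighbours}: returns (size, vertex_mapping).
    Exhaustive search over subsets and injections -- exponential, usable
    only for tiny graphs, which is why branch-and-bound and heuristics
    matter in practice."""
    def edge(g, a, b):
        return b in g[a]
    # Try the largest subset size first, so the first match is maximal.
    for k in range(min(len(g1), len(g2)), 0, -1):
        for sub in combinations(sorted(g1), k):
            for img in permutations(sorted(g2), k):
                if all(edge(g1, sub[i], sub[j]) == edge(g2, img[i], img[j])
                       for i in range(k) for j in range(i + 1, k)):
                    return k, dict(zip(sub, img))
    return 0, {}
```

For a triangle and a 3-vertex path this returns size 2 (a single shared edge).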
8

Zhang, Xinhe, Wenbo Lv, and Haoran Tan. "Low-Complexity GSM Detection Based on Maximum Ratio Combining." Future Internet 14, no. 5 (May 23, 2022): 159. http://dx.doi.org/10.3390/fi14050159.

Abstract:
Generalized spatial modulation (GSM) technology is an extension of spatial modulation (SM) technology, and one of its main advantages is further improved spectral efficiency. However, using multiple active antennas for transmission also brings demodulation difficulties at the receiver. To reduce the high computational complexity of optimal maximum likelihood (ML) detection, two sub-optimal detection algorithms are proposed that reduce the number of transmit antenna combinations (TACs) examined at the receiver. One is a maximum ratio combining detection algorithm based on a repetitive sorting strategy, termed MRC-RS, which uses MRC with repetitive sorting to select the most likely TACs during detection. The other is a maximum ratio combining detection algorithm based on the iterative idea of orthogonal matching pursuit, termed MRC-MP, which reduces the number of TACs through a finite number of iterations to lower the computational complexity. For M-QAM constellations, a hard-limited maximum likelihood (HLML) detection algorithm is introduced to compute the modulation symbol; for M-PSK constellations, a low-complexity maximum likelihood (LCML) algorithm is used. The computational complexity of these two symbol-detection algorithms is independent of the modulation order. Simulation results show that for GSM systems with a large number of TACs, the two proposed algorithms not only achieve almost the same bit error rate (BER) performance as the ML algorithm, but also greatly reduce the computational complexity.
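The maximum ratio combining operation that both proposed detectors build on can be illustrated in isolation. A generic sketch of MRC for a single transmitted symbol observed on several receive branches (this is textbook MRC, not the paper's GSM detector):

```python
def mrc_combine(y, h):
    """Maximum ratio combining: given per-branch observations
    y[k] = h[k] * s + n[k], weight each branch by the conjugate of its
    channel gain and normalize by the total channel energy; this linear
    combiner maximizes the post-combining SNR."""
    num = sum(hk.conjugate() * yk for hk, yk in zip(h, y))
    den = sum(abs(hk) ** 2 for hk in h)
    return num / den
```

In the noiseless case `mrc_combine` recovers the transmitted symbol exactly; with noise it returns the SNR-optimal linear estimate.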
9

Ren, Dongping, Jianxi Guo, and Xiaoli Hao. "Bayesian network variable elimination method optimal elimination order construction." ITM Web of Conferences 45 (2022): 01012. http://dx.doi.org/10.1051/itmconf/20224501012.

Abstract:
Variable Elimination (VE) is the most basic of the many Bayesian network inference algorithms. The speed and complexity of inference depend mainly on the elimination order. Finding the optimal elimination order is a Nondeterministic Polynomial Hard (NP-Hard) problem, which in practice is usually solved by heuristic search. In order to speed up inference with variable elimination, the minimum, maximum potential, minimum missing edge, and minimum added complexity search methods are studied. The Asia network is taken as an example to analyze and calculate the complexity and elimination order of the above search methods. In MATLAB R2018a, the different search methods were constructed and used for inference separately. Finally, the performance of the four search methods was compared through inference-time analysis. The experimental results show that the minimum added complexity search method outperforms the other search methods, with the lowest average time of 0.012 s, and can speed up the inference process of a Bayesian network.
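To illustrate the flavor of such elimination-order heuristics, here is the classic greedy min-fill rule on an undirected moral graph; it is an assumed stand-in for the "minimum missing edge" style of search method described above, not the paper's implementation:

```python
def min_fill_order(graph):
    """Greedy elimination order on an undirected moral graph
    {node: set_of_neighbours}: at each step eliminate the node whose
    removal adds the fewest fill-in edges (edges connecting previously
    non-adjacent neighbours); ties are broken by node name."""
    g = {v: set(nb) for v, nb in graph.items()}

    def fill_cost(v):
        nbs = list(g[v])
        return sum(1 for i in range(len(nbs)) for j in range(i + 1, len(nbs))
                   if nbs[j] not in g[nbs[i]])

    order = []
    while g:
        v = min(sorted(g), key=fill_cost)
        order.append(v)
        for a in g[v]:            # connect all neighbours of v pairwise
            for b in g[v]:
                if a != b:
                    g[a].add(b)
        for a in g[v]:            # then remove v from the graph
            g[a].discard(v)
        del g[v]
    return order
```

On a chain a-b-c-d the rule eliminates the leaves first, since interior nodes would require a fill-in edge.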
10

He, Zai-Yin, Abderrahmane Abbes, Hadi Jahanshahi, Naif D. Alotaibi, and Ye Wang. "Fractional-Order Discrete-Time SIR Epidemic Model with Vaccination: Chaos and Complexity." Mathematics 10, no. 2 (January 6, 2022): 165. http://dx.doi.org/10.3390/math10020165.

Abstract:
This research presents a new fractional-order discrete-time susceptible-infected-recovered (SIR) epidemic model with vaccination. The dynamical behavior of the suggested model is examined analytically and numerically. Through phase attractors, bifurcation diagrams, the maximum Lyapunov exponent and the 0−1 test, it is verified that the newly introduced fractional discrete SIR epidemic model with vaccination has chaotic behavior for both commensurate and incommensurate fractional orders. The discrete fractional model gives more complex dynamics for incommensurate fractional orders compared to commensurate fractional orders. The reasonable range of commensurate fractional orders is between γ = 0.8712 and γ = 1, while the reasonable range of incommensurate fractional orders is between γ2 = 0.77 and γ2 = 1. Furthermore, a complexity analysis is performed using approximate entropy (ApEn) and C0 complexity to confirm the existence of chaos. Finally, simulations were carried out in MATLAB to verify the efficacy of the given findings.

Dissertations on the topic "Maximum order complexity":

1

Popoli, Pierre. "Suites automatiques et morphiques de grande complexité le long des sous-suites." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0195.

Abstract:
The topic of this thesis lies at the interface between mathematics and computer science. A pseudorandom sequence is a sequence generated by a deterministic algorithm that has properties similar to those of a random sequence. We are interested in various complexity measures for these pseudorandom sequences. Automatic and morphic sequences are not random or pseudorandom, but certain subsequences of these sequences, such as polynomial subsequences for instance, are more random than the original sequences. In the first part of the thesis, we establish a lower bound on the maximal order complexity of the Thue-Morse sequence and related sequences along polynomial subsequences. This answers a question of Sun and Winterhof (2019). We then study the problem in the Zeckendorf numeration system. Its sum-of-digits function is a morphic, non-automatic sequence. We establish a lower bound on the maximal order complexity of the Fibonacci-Thue-Morse sequence along unitary polynomials. We calculate the complexity with the help of the Directed Acyclic Word Graph (DAWG). In the second part, we are interested in the binary sum of digits of squares. We take up the work of Hare, Laishram and Stoll (2011), who studied the problem of determining the odd integers whose Hamming weight is the same as that of their square. We solve the problem in the majority of the remaining cases and introduce new tools that might be helpful for completely solving the problem. Our methods range from number theory and combinatorics on words to implementations in the area of computer science. In the third part of the thesis, we study the correlations of the Rudin-Shapiro sequence. The correlations of order 2 are well understood for this sequence: their behavior is rather random, whereas the original sequence is completely deterministic. Correlations of higher order of this sequence do not show this random behavior. Aloui, Mauduit and Mkaouar (2021) studied the correlations of the Thue-Morse sequence along prime numbers. We provide a result on the correlations of the Rudin-Shapiro sequence along prime numbers.
2

Zaylaa, Amira. "Analyse et extraction de paramètres de complexité de signaux biomédicaux." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR3315/document.

Abstract:
The analysis of biomedical time series derived from nonlinear dynamic systems is challenging due to the chaotic nature of these time series. Only a few classical parameters can be used by clinicians to assess the state of patients and fetuses. Though there exist valuable complexity invariants such as multi-fractal parameters, entropies and recurrence plots, they have proved unsatisfactory in certain cases. To overcome this limitation, we propose in this dissertation new entropy invariants, we contribute to multi-fractal analysis, and we develop signal-based (unbiased) recurrence plots based on the dynamic transitions of time series. Principally, we aim to improve the discrimination between healthy and distressed biomedical systems, particularly fetuses, by processing the time series with our techniques. These techniques were validated on the Lorenz system, logistic maps and fractional Brownian motions modeling chaotic and random time series, and were then applied to real fetal heart rate signals recorded in the third trimester of pregnancy. Statistical measures comprising relative error, standard deviation, sensitivity, specificity, precision and accuracy were employed to evaluate the detection performance. The best discrimination results were achieved by the high-order entropy invariants; multi-fractal analysis using a structure function enhanced the detection of fetal states, and the unbiased cross-determinism invariant improved the discrimination process. The significance of our techniques lies in their post-processing code, which could be built into cutting-edge portable machines offering advanced discrimination and detection of intrauterine growth restriction prior to fetal death. This work was devoted to fetal heart rates, but time series generated by other nonlinear dynamic systems should be considered in future work.
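One of the entropy invariants mentioned, approximate entropy, is straightforward to state. An unoptimized sketch, using the common convention of counting self-matches (the parameter defaults are illustrative, not the thesis's settings):

```python
import math

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): a regularity statistic comparing
    how often length-m patterns that match within tolerance r continue
    to match when extended to length m + 1. Low values indicate a
    regular, predictable series; higher values indicate irregularity."""
    def phi(k):
        n = len(x) - k + 1
        windows = [x[i:i + k] for i in range(n)]
        total = 0.0
        for w in windows:
            # Count windows matching w within tolerance r (self included).
            c = sum(1 for v in windows
                    if max(abs(a - b) for a, b in zip(w, v)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)
```

A strictly periodic series such as 0, 1, 0, 1, ... yields an ApEn near 0, while irregular series score higher.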
3

Ennaoui, Karima. "Computational aspects of infinite automata simulation and closure system related issues." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC031/document.

Abstract:
This thesis investigates computational and complexity issues in two parts. The first concerns web service composition: deciding whether the behaviour of a web service can be composed out of an existing repository of web services. This question has been reduced to simulating a finite automaton by the product closure of a set of automata. We study the complexity of this problem considering two parameters: the number of instances considered in the composition, and the presence of so-called hybrid states (states that are both intermediate and final). The second part concerns closure systems and two related issues. Maximal extension of a closure system: we give an incremental polynomial algorithm that computes a lattice's maximal extension when the input is a binary relation. Candidate key enumeration: we introduce the notion of key-ideal sets and prove that their enumeration is equivalent to the enumeration of candidate keys. We then give an efficient algorithm that generates all non-minimal key-ideal sets with polynomial delay and all minimal ones in incremental polynomial time.
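The closure-system notions in the second part can be made concrete: the closure of an attribute set under an implicational base, and the candidate keys as the minimal sets whose closure is the full attribute set. A small sketch under these standard definitions (illustrative names, not the thesis's key-ideal algorithm):

```python
def closure(attrs, implications):
    """Closure of attrs under implications, given as (A, B) pairs of sets:
    the smallest X containing attrs such that A <= X implies B <= X."""
    x = set(attrs)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a <= x and not b <= x:
                x |= b
                changed = True
    return x

def is_candidate_key(attrs, implications, universe):
    """A candidate key is a set whose closure is the whole attribute set
    and which is minimal with that property."""
    attrs, universe = set(attrs), set(universe)
    if closure(attrs, implications) != universe:
        return False
    return all(closure(attrs - {a}, implications) != universe for a in attrs)
```

With implications a → b and bc → d over attributes {a, b, c, d}, the set {a, c} is a candidate key.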

Book chapters on the topic "Maximum order complexity":

1

Yuan, Leqi, Kun Cheng, Haozhi Bian, Yaping Liao, and Chenxi Jiang. "Numerical Simulation of Flow Boiling Heat Transfer in Helical Tubes Under Marine Conditions." In Springer Proceedings in Physics, 1015–30. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_86.

Abstract:
Lead-based cooled reactors in most countries and some small reactors at sea use helical tube steam generators. Compared with U-tubes, the convective heat transfer coefficient in the spiral tube is higher and the structure more compact, and the secondary flow generated under the action of centrifugal force and gravity helps wet the inner wall of the tube. However, given the importance of the steam generator in the reactor and the complexity of flow and boiling in the helical tube, the aggregation behavior of bubbles, the distribution of the two-phase interface, and the secondary flow in the tube significantly affect the heat transfer characteristics, so studying the gas-liquid phase distribution, the changes in heat transfer coefficients, and the fluid flow characteristics in the tube is very important. In order to study the boiling heat transfer characteristics of helical once-through steam generators under static and marine conditions, and thus provide a safe and reliable energy supply for offshore facilities such as marine floating platforms, this study uses the STAR-CCM+ software, the VOF method, and the Rohsenow boiling model to examine the heat transfer capacity and flow characteristics of flow boiling in a helical tube under swaying and tilting conditions. The gas-liquid phase distribution characteristics, secondary flow variation characteristics, and convective heat transfer coefficient of the fluid under different swing functions and inclined positions are obtained by numerical calculation, and the way the physical parameters change over a cycle is identified. The results show that the secondary flow and heat transfer capacity in the tube change over the cycle, with the change most obvious at a tube length of 0.8 m. 5% of the normal condition; when the inclination angle is 45°, the maximum increase of the convective heat transfer coefficient is 16.8%, and the maximum decrease is 6.6%.
2

Xiao, Yancai, Kun Fu, Zhuang Li, Zhiping Zeng, Jian Bai, Zhibin Huang, Xudong Huang, and Yu Yuan. "Research on Construction Process of Steel Beam Incremental Launching Based on Finite Element Method." In Lecture Notes in Civil Engineering, 254–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1260-3_22.

Abstract:
In order to ensure the normal operation of the traffic under the bridge, reasonable calculation methods and construction techniques should be adopted for the construction of the newly added railway station. This paper establishes a structural finite element model to calculate and analyze the various construction stages of the steel beam incremental launching construction of the newly added Gaoping station on the Yichuang-Wanzhou Railway, and systematically studies the mechanical properties of the steel beam in the process. The results show that: (1) The deflection of each rod can meet the requirements of the railway bridge steel structure construction specification; however, when the length of the front cantilever of the steel beam reaches 11.4 m, the maximum deflection of the upper and lower chord bars is close to the limit. (2) The load-bearing capacity of each member of the steel beam meets the requirements, which indicates that the structural design of the steel beam and the incremental launching construction plan are reasonable. (3) In view of the complexity and uncertainty of the incremental launching construction process, real-time monitoring of the construction process is required, and the beam should be lowered in time when abnormal conditions occur, to ensure the safe operation of the existing line.
3

Eriksen, Thomas Hylland, and Martina Visentin. "Threats to Diversity in an Overheated World." In Acceleration and Cultural Change, 27–45. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-33099-5_3.

Abstract:
Most of Eriksen's research over the years has somehow or other dealt with the local implications of globalization. He has looked at ethnic dynamics, the challenges of forging national identities, creolization and cosmopolitanism, the legacies of plantation societies and, more recently, climate change in the era of 'accelerated acceleration'. Here we want to talk not just about cultural diversity and not just look at biological diversity, but both, because he believes that there are some important pattern resemblances between biological and cultural diversity. Many of the same forces militate against both and threaten to create a flattened world with less diversity, less difference. And, obviously, there is a concern for the future: we need an open-ended future with different options and maximum flexibility, rather than the current drift toward homogenization. We live in a time when important events are taking place, from climate change to environmental destruction, and we need to do something about that. In order to show options and possibilities for the future, we have to focus on diversity, because complex problems need diverse answers.

Martina: I would like to start with a passion of mine to get into one of your main research themes: diversity. I'm a Marvel fan, and what is emerging is a reduction of what Marvel has always been about: diversity in comics. There seems to be a standardization that reduces the specificity of each superhero, so that everyone is the same in a kind of indifference of difference. So in this hyper-diversity, I think there is also a reduction of diversity. Do you see something similar in your studies as well?

Thomas: It's a great example, and it could be useful to look briefly at the history of thought about diversity and the way in which it's suddenly come onto the agenda in a huge way. If you take a look at the number of journal articles about diversity and related concepts, the result is stunning.
Before 1990, the concept was not much used. In the last 30 years or so, it's positively exploded. You now find massive research on biodiversity, cultural diversity, agro-biodiversity, biocultural diversity, indigenous diversity and so on. You'll also notice that the growth curve has this 'overheating shape' indicating exponential growth in the use of the terms. And why is this? Well, I think this has something to do with what Hegel described when he said that 'the owl of Minerva flies at dusk,' which is to say that it is only when a phenomenon is being threatened or even gone that it catches widespread attention. Regarding diversity, we may be witnessing this mechanism. The extreme interest in diversity talk since around 1990 is largely a result of its loss, which became increasingly noticeable from the beginning of the overheating years in the early 1990s. So many things happened at the same time, more or less. I was just reminded yesterday of the fact that Nelson Mandela was released almost exactly a year after the fall of the Berlin Wall. There were many major events taking place, seemingly independently of each other, in different parts of the world. This has something to do with what you're talking about, because yes, I think you're right, there has been a reduction of many kinds of diversity.

So when we speak of superdiversity, which we do sometimes in migration studies (Vertovec, 2023), we're really mainly talking about people who are diverse in the same ways, or rather people who are diverse in compatible ways. They all fit into the template of modernity. So the big paradox of identity politics is that it expresses similarity more than difference. It's not really about cultural difference, because they rely on a shared language for talking about cultural difference. So in other words, in order to show how different you are from everybody else, you first have to become quite similar.
Otherwise, there is a real risk that we’d end up like Ludwig Wittgenstein’s lion. In Philosophical Investigations (Wittgenstein, 1983), he remarks that if a lion could talk, we wouldn’t understand what it was saying. Lévi-Strauss actually says something similar in Tristes Tropiques (Lévi-Strauss, 1976), where he describes meeting an Amazonian people, I think it was the Nambikwara, who were so close that he could touch them, and yet it is as though there were a glass wall between them. That’s real diversity. It’s different in a way that makes translation difficult. And it’s another world. It’s a different ontology.

These days, I’m reading a book by Leslie Bank and Nellie Sharpley about the Coronavirus pandemic in South Africa (Bank & Sharpley, 2022), and there are rural communities in the Eastern Cape which don’t trust biomedicine, so many refuse vaccinations. They resist it. They don’t trust it. Perhaps they trust traditional remedies slightly more. This was and is the situation with HIV-AIDS as well. This is a kind of diversity which is understandable and translatable, yet fundamental. You know, there are really different ways in which we see the Cosmos and the universe. So if you take the Marvel films, they’ve really sort of renovated and renewed the superhero phenomenon, which was almost dead when they began to revive it. As a kid around 1970, I was an avid reader of Superman and Batman. I also read a lot of Donald Duck and, incidentally, a passion for i paperi and the Donald/Paperino universe is one curious commonality between Italy and Norway. Anyway, with the superheroes, everybody was very white. They represented the white, conservative version of America. In the renewed Marvel universe, there are lots of literally very strong women, who are independent agents and not just pretty appendages to the men as they had often been in the past. You also had people with different cultural and racial identities.
The Black Panther of Wakanda and all the mythology which went with it are very popular in many African countries. It’s huge in Nigeria, for example, and seems to add to the existing diversity. But then again, as we were saying and as you observed, these characters are diverse in comparable ways within a uniform framework, a pretty rigid cultural grammar which presupposes individualism: there are no very deep cultural differences in the way they see the world. So that’s the new kind of diversity, which really consists more of talking about diversity than being diverse. I should add that the superdiversity perspective is very useful, and I have often drawn on it myself in research on cultural complexity. But it remains framed within the language of modernity.

Martina: What you just said makes me think of contradictory dimensions that are, however, held together by the same gaze. How is it that your approach helps hold together processes that nevertheless tell us the same thing about the concept of diversity?

Thomas: When we talk about diversity, it may be fruitful to look at it from a different angle. We could look at traditional knowledge and bodily skills among indigenous peoples, for example, and ideas about nature and the afterlife. Typically, some would immediately object that this is wrong and we are right and they should learn science and should go to school, period. But that’s not the point when we approach them as scholars, because then we try to understand their worlds from within, and we realize that these worlds are experienced and perceived in ways which are quite different from ours. One of the big debates in anthropology for a number of years now has concerned the relationship between culture and nature after Lévi-Strauss, the greatest anthropological theorist of the last century. His view was that all cultures have a clear distinction between culture and nature, which is allegedly a universal way of creating order.
This view has been challenged by people who have done serious ethnographic work on the issue, from my Oslo colleague Signe Howell’s work in Malaysia to studies in Melanesia, but perhaps mainly in the Amazon, where anthropologists argue that there are many ways of conceptualising the relationship between humans and everything else. Many of these world-views are quite ecological in character. They see us as participants in the same universe as other animals, plants and even rocks and rivers, and might point out that ‘the land does not belong to us – we belong to the land’. That makes for a very different relationship to nature than the predatory, exploitative form typical of capitalist modernity. In other words, in these cultural worlds, there is no clear boundary between us humans and non-humans. If you go in that direction, you will discover that in fact, cultural diversity is about much more than giving rights to minorities and celebrating National Day in different ethnic costumes, or even establishing religious tolerance. That way of talking about diversity is useful, but it should not distract attention from deeper and older forms of diversity.
4

Greenlaw, Raymond, H. James Hoover, and Walter L. Ruzzo. "Complexity." In Limits to Parallel Computation. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195085914.003.0007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The goal of this chapter is to provide the formal basis for many key concepts that are used throughout the book. These include the notions of problem, definitions of important complexity classes, reducibility, and completeness, among others. Thus far, we have used the term "problem" somewhat vaguely. In order to compare the difficulty of various problems we need to make this concept precise. Problems typically come in two flavors: search problems and decision problems. Consider the following search problem, to find the value of the maximum flow in a network.

Example 3.1.1 Maximum Flow Value (MaxFlow-V)
Given: A directed graph G = (V, E) with each edge e labeled by an integer capacity c(e) ≥ 0, and two distinguished vertices, s and t.
Problem: Compute the value of the maximum flow from source s to sink t in G.

The problem requires us to compute a number, the value of the maximum flow. Note, in this case we are actually computing a function. Now consider a variant of this problem.

Example 3.1.2 Maximum Flow Bit (MaxFlow-B)
Given: A directed graph G = (V, E) with each edge e labeled by an integer capacity c(e) ≥ 0, two distinguished vertices, s and t, and an integer i.
Problem: Is the ith bit of the value of the maximum flow from source s to sink t in G a 1?

This is a decision problem version of the flow problem. Rather than asking for the computation of some value, the problem is asking for a "yes" or "no" answer to a specific question. Yet the decision problem MaxFlow-B is equivalent to the search problem MaxFlow-V in the sense that if one can be solved efficiently in parallel, so can the other. Why is this? First consider how solving an instance of MaxFlow-B can be reduced to solving an instance of MaxFlow-V. Suppose that you are asked a question for MaxFlow-B, that is, "Is bit i of the maximum flow a 1?" It is easy to answer this question by solving MaxFlow-V and then looking at bit i of the flow.
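The reduction described at the end can be made concrete. The following is a minimal Python sketch, not from the book: a standard sequential Edmonds-Karp routine plays the role of a MaxFlow-V solver, and a MaxFlow-B query is answered by inspecting bit i of the returned value. The function names are illustrative.

```python
from collections import deque

def max_flow_value(n, edges, s, t):
    """MaxFlow-V: value of the maximum s-t flow (Edmonds-Karp).
    edges is a list of (u, v, capacity) with integer capacities >= 0."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path, then augment.
        bottleneck = float("inf")
        v = t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def maxflow_bit(n, edges, s, t, i):
    """MaxFlow-B reduced to MaxFlow-V: is bit i of the max-flow value 1?"""
    return bool((max_flow_value(n, edges, s, t) >> i) & 1)
```

The reverse direction, reducing MaxFlow-V to MaxFlow-B, queries one bit at a time and reassembles the value, which is the sense in which the chapter calls the two problems equivalent.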
5

Gracia Nirmala Rani D., J. Shanthi, and S. Rajaram. "Machine Learning Optimization Techniques for 3D IC Physical Design." In Handbook of Research on Emerging Trends and Applications of Machine Learning, 47–61. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9643-1.ch003.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Digital ICs have become increasingly important and popular because of parameters such as small feature size, high speed, low cost, low power consumption, and temperature. Various techniques and methodologies have been developed so far, using different optimization algorithms and data structures based on the dimensions of the IC, to improve these parameters. All these existing algorithms illustrate explicit advantages in optimizing the chip area, the maximum temperature of the chip, and wire length. Though there are some advantages in these traditional algorithms, there are a few demerits, such as execution time, integration, and computational complexity, due to the necessity of handling large amounts of data. Machine learning techniques produce vibrant results in such fields, where big data must be handled in order to optimize the scaling parameters of IC design. The objective of this chapter is to give an elaborate idea of applying machine learning techniques based on Bayes' theorem to create an automation tool for VLSI 3D IC design steps.
6

Elena Buruiana, Felicia, Lamiese Ismail, Federico Ferrari, and Hooman Soleymani Majd. "The Role of Ultra-Radical Surgery in the Management of Advanced Ovarian Cancer: State or Art." In Ovarian Cancer - Updates in Tumour Biology and Therapeutics [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.97638.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ovarian cancer, also known as the “silent killer”, has remained the most lethal gynaecological malignancy. The single independent risk factor linked with improved survival is maximum cytoreductive effort resulting in no macroscopic residual disease. This can be achieved through ultra-radical surgery, which demands tackling significant tumour burden in the pelvis and the lower and upper abdomen and usually involves bowel resection, liver mobilisation, ancillary cholecystectomy, extensive peritonectomy, diaphragmatic resection, splenectomy, and resection of enlarged pelvic, paraaortic, and rarely cardio-phrenic lymph nodes in order to achieve optimal debulking. The above can be achieved through a holistic approach to the patient’s care, meticulous patient selection, and full engagement of the family. The decision needs to be carefully balanced after obtaining informed consent, with an appreciation of the impact of such surgery on quality of life weighed against the survival benefit. This chapter will describe the complexity and surgical challenges in the management of advanced ovarian cancer.
7

Bishop, Christopher M., and Michael E. Tipping. "Latent Variable Models and Data Visualisation." In Statistics and Neural Networks, 147–64. Oxford University PressOxford, 2000. http://dx.doi.org/10.1093/oso/9780198524229.003.0006.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract Visualisation is a powerful and widely used technique for data analysis and data mining. For simple datasets a single projection of the data on to a two-dimensional plane, such as that provided by principal component analysis, may prove adequate. In the case of more complex datasets, however, it may be necessary to find multiple plots corresponding to different projection directions and/or different subsets of the data points in order to capture the full complexity of the data. Here we use latent variable models to construct a framework for data visualisation which allows simultaneous soft clustering and projection of the data in a probabilistic setting. We first show how standard principal component analysis can be formulated in terms of maximum likelihood under a latent variable model. Next we extend the formalism to include both mixtures and hierarchical mixtures of principal component models, and derive the corresponding visualisation algorithms. Finally, we illustrate the hierarchical approach to visualisation using datasets obtained from multiphase flows along oil pipelines, and from satellite image data.
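The maximum-likelihood formulation of PCA mentioned in the abstract has a closed-form solution (Tipping and Bishop's probabilistic PCA): the noise variance is the average of the discarded eigenvalues of the sample covariance, and the weight matrix is built from the leading eigenvectors. The sketch below is an illustrative implementation of those standard formulas, not the authors' code.

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood fit of probabilistic PCA.
    X: (N, d) data matrix; q: latent dimension. Returns (mean, W, sigma2),
    modelling x ~ N(mean, W W^T + sigma2 I)."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False, bias=True)      # ML sample covariance
    evals, evecs = np.linalg.eigh(S)                 # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # sort descending
    sigma2 = evals[q:].mean()                        # avg discarded eigenvalue
    # W = U_q (Lambda_q - sigma2 I)^(1/2), up to an arbitrary rotation
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return mu, W, sigma2
```

In the latent-variable form, projection and soft clustering then follow from posterior expectations, which is what the chapter exploits for visualisation.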
8

Epstein, Irving R., and John A. Pojman. "Complex Oscillations and Chaos." In An Introduction to Nonlinear Chemical Dynamics. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195096705.003.0014.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
After studying the first seven chapters of this book, the reader may have come to the conclusion that a chemical reaction that exhibits periodic oscillation with a single maximum and a single minimum must be at or near the apex of the pyramid of dynamical complexity. In the words of the song that is sung at the Jewish Passover celebration, the Seder, “Dayenu” (It would have been enough). But nature always has more to offer, and simple periodic oscillation is only the beginning of the story. In this chapter, we will investigate more complex modes of temporal oscillation, including both periodic behavior (in which each cycle can have several maxima and minima in the concentrations) and aperiodic behavior, or chaos (in which no set of concentrations is ever exactly repeated, but the system nonetheless behaves deterministically). Most people who study periodic behavior deal with linear oscillators and therefore tend to think of oscillations as sinusoidal. Chemical oscillators are, as we have seen, decidedly nonlinear, and their waveforms can depart quite drastically from being sinusoidal. Even after accepting that chemical oscillations can look as nonsinusoidal as the relaxation oscillations shown in Figure 4.4, our intuition may still resist the notion that a single period of oscillation might contain two, three, or perhaps twenty-three, maxima and minima. As an example, consider the behavior shown in Figure 8.1, where the potential of a bromide-selective electrode in the BZ reaction in a CSTR shows one large and two small extrema in each cycle of oscillation. The oscillations shown in Figure 8.1 are of the mixed-mode type, in which each period contains a mixture of large-amplitude and small-amplitude peaks. Mixed-mode oscillations are perhaps the most commonly occurring form of complex oscillations in chemical systems.
In order to develop some intuitive feel for how such behavior might arise, we employ a picture based on slow manifolds and utilized by a variety of authors (Boissonade, 1976; Rössler, 1976; Rinzel, 1987; Barkley, 1988) to analyze mixed-mode oscillations and other forms of complex dynamical behavior.
9

Gu, Pengfei, and Daao Yu. "OOD Problem Research in Biochemistry Based on Backdoor Adjustment." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia231392.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Due to its ability to deal well with graph-structured data (social networks, molecular structures), the graph neural network (GNN) has recently shown impressive capabilities in many fields (biology, chemistry), which has drawn the attention of a large number of researchers to the operating mechanism of GNN models. To apply graph neural networks in real environments, the problem of out-of-distribution generalization must be solved: the difference in data distribution between the training environment and the real environment is an urgent problem. In existing studies, from the point of view of data, the input graph is processed to find its core part and filter out the noise part; from the perspective of model methods, the parts of the model relevant to the task (graph classification) are extracted, so as to improve the accuracy and efficiency of the model. However, while these methods are valid from a theoretical point of view, they are too simple to actually cut off the effects of non-causal components. Moreover, current model-side methods cut the network directly, without considering the information transfer between model parameters. To solve these problems, the main contribution of this paper is a distance-based environment selection method, which enables backdoor adjustment to be implemented to the maximum extent and ensures the robustness of the model. At the same time, it proposes a way for the network to squeeze information during the clipping process, so that the effect of the model pruning algorithm can be optimized, reducing the complexity of the model and improving its generalization. The method presented in this paper has been validated on datasets in several biochemical fields and achieves the best performance.
10

Beris, Antony N., and Brian J. Edwards. "Symplectic Geometry in Optics." In Thermodynamics of Flowing Systems: with Internal Microstructure. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195076943.003.0006.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The scope of this book is to address the fundamental problem of modeling transport processes within complex systems, i.e., systems with internal microstructure. The classical engineering approach involves the modeling of the systems as structured continua and the subsequent use of the models in order to derive (if possible) analytical results, exact or approximate. The advent of powerful computers and the promise through parallel processing of even more substantial computational gains in the near future have introduced yet another paragon to the established engineering practice: that of the numerical simulation. Numerical simulation has emerged as a viable alternative to experiments (contrast Computational Fluid Dynamics (CFD) simulations versus wind tunnel experiments); however, the key limitation to a wider application of numerical simulations in engineering practice lies in the reliability of the models (as well as in their simplicity). CFD applications are successful since the Navier/Stokes equations which they employ are quite capable of describing accurately enough the hydrodynamics of air and water. However, as we move our emphasis to materials of such internal complexity as polymer melts, liquid crystals, suspensions, etc., the development of reliable continuum models becomes an increasingly arduous task. The main objective of this treatise is to investigate a more systematic approach through which continuum models may be developed and analyzed. The key issue that the modeler has to cope with is how to construct models which describe more of the underlying physics without, at the same time, becoming excessively complex so that they either require a prohibitively large, experimentally determined number of adjustable parameters (such as current phenomenological theories) or a prohibitively large computational time (such as required for a detailed “brute force” description of the molecular dynamics). 
It is the thesis of the present work that a lot of effort can be saved if the appropriate formulation is used in deriving model equations, a formulation which is capable of exploiting to a maximum degree the inherent symmetry and consistency of the collective phenomena exhibited by a large number of internal degrees of freedom.

Conference papers on the topic "Maximum order complexity":

1

Kodera, Yuta, Takuya Kusaka, Takeru Miyazaki, Yasuyuki Nogamit, Satoshi Uehara, and Robert H. Morelos-Zaragoza. "Evaluating the Maximum Order Complexity of a Uniformly Distributed Sequence Over Odd Characteristic." In 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). IEEE, 2018. http://dx.doi.org/10.1109/icce-china.2018.8448717.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Jiuyu, Yi Ma, and Rahim Tafazolli. "Achieving Maximum-likelihood Detection Performance with Square-order Complexity in Large Quasi-Symmetric MIMO Systems." In 2023 IEEE International Symposium on Information Theory (ISIT). IEEE, 2023. http://dx.doi.org/10.1109/isit54713.2023.10207002.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Banerjee, Jai Nath, Madhura Mundale, Adnan Sachche, and Christopher McComb. "Complexity Reduction in Mass Customization to Facilitate Better Decision Support." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97369.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mass customization provides a way to satisfy an increasing number of customers by tailoring product details to their preferences. Moreover, there is a considerable amount of pressure on industries that embrace mass customization in their product portfolios to consistently produce innovative and market-defining technology. To ease this pressure, this study offers organizations a perspective on what combination of attributes is most influential on choice when approaching decision support for the purchase of a Configure-to-Order product. Laptops are used here as a case study, but the approach is generalizable to any family of products. This approach first uses metrics gained from a modified conjoint analysis applied to data gathered from a student consumer base. Customer satisfaction is measured as a function of how many product offerings are provided, identifying and quantifying a trade-off between customizability and consumer satisfaction. This aims to cater to the requirements of the consumer base while preserving key business advantages, allowing for greater usability and maximum profit.
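Conjoint analysis of the kind mentioned here is commonly reduced to a linear part-worth model estimated by least squares. The snippet below is a hypothetical toy illustration: the attributes, levels, and ratings are invented and are not the study's data.

```python
import numpy as np

def encode(profile):
    """Dummy-code a laptop profile: intercept plus one indicator per
    non-baseline attribute level (attributes here are invented)."""
    return [1.0,
            1.0 if profile["ram"] == "16GB" else 0.0,
            1.0 if profile["storage"] == "512GB" else 0.0]

def part_worths(profiles, ratings):
    """Estimate part-worth utilities from rated profiles via OLS."""
    A = np.array([encode(p) for p in profiles])
    y = np.asarray(ratings, dtype=float)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return {"baseline": w[0], "ram=16GB": w[1], "storage=512GB": w[2]}
```

Summing a profile's part-worths predicts its utility, and comparing the ranges of the estimated utilities across attributes indicates which attribute combination most influences choice.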
4

Karjalainen, J. P., R. Karjalainen, and K. Huhtala. "An Extended Second Order Polynomial Model for Hydraulic Fluid Density." In ASME/BATH 2013 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/fpmc2013-4412.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, a second-order polynomial model for predicting the pressure-temperature behaviour of the density of any hydraulic fluid is presented. The model is an extension of the previously published model by the same authors for more moderate operating temperatures. Nevertheless, for a user the extension does not add any more complexity to the model. Even over a wider operating range, the density model can still always be parameterized without any unknown variables, once the standard fluid characteristics are available. It is shown that, compared to the measured values, the maximum modelling error is well within 1% over the studied pressure range of up to 1500 bar and the studied temperature ranges, overall covering +20 to +130°C, for all the studied fluids. This study includes 10 highly different hydraulic fluids used in various fluid power applications as power transmission fluids or fuel oils. The studied fluids have a density range from 827 to 997.2 kg/m3 and an ISO VG range from 2.6 to 1187. The studied base fluids also cover a wide range. Moreover, the studied fluids contain different additives or even no additives at all (crude oils). Neither the base fluid nor the additives were found to affect the achieved modelling accuracy.
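The abstract does not reproduce the model's exact parameterization, so as a hedged illustration the sketch below fits a generic full second-order polynomial surface rho(p, T) by least squares; the coefficient values used in any test are invented, not the paper's.

```python
import numpy as np

def fit_density_poly2(p, T, rho):
    """Least-squares fit of a full second-order polynomial surface
    rho(p, T) = a0 + a1*p + a2*T + a3*p^2 + a4*T^2 + a5*p*T
    to measured densities rho at pressures p and temperatures T."""
    A = np.column_stack([np.ones_like(p), p, T, p**2, T**2, p * T])
    coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return coef

def eval_density_poly2(coef, p, T):
    """Evaluate the fitted polynomial at pressure p (bar), temperature T (degC)."""
    a0, a1, a2, a3, a4, a5 = coef
    return a0 + a1*p + a2*T + a3*p**2 + a4*T**2 + a5*p*T
```

Once fitted to standard fluid characteristics, the six coefficients fully parameterize the density over the pressure-temperature envelope, which mirrors the paper's claim that no unknown variables remain.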
5

Antillon Moreira, Rodrigo, Ramanujan Jeughale, Toki Takahiro, Toma Motohiro, Kerron Andrews, Ryota Fujinaga, Salim Abdalla Al Ali, and Mohamed Abdulrahman Alzaabi. "Best Practice to Improve Slim Hole Maximum Reservoir Contact Well Drilling Performance." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/202189-ms.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reservoir sections in MRC (Maximum Reservoir Contact) & ERD (Extended Reach Drilling) wells are mainly designed as 8 ½" hole, because of drilling limitations with smaller hole sizes. However, slim hole sizes offer opportunities to revitalize existing wells using re-entry drilling techniques in association with MRC and ERD designs. This paper discusses the best practices to be implemented in order to mitigate risk, reduce complexity, and ensure improved drilling performance. Re-entry wells in the field carry a risk of well integrity issues such as corroded 9 5/8" casing. In order to mitigate this risk, the corroded 9 5/8" casing should be covered by 7" liner & tied back to surface before drilling the reservoir section. In this situation up to 18,000 ft of 4" DP is used in the wells to drill 6" hole and run 4 ½" lower completion. Offset well analysis, whipstock selection criteria, BHA design, drilling fluid selection, and drilling and tripping practices based on torque & drag and hydraulics calculations are most important to achieve the well objective. The slim-hole MRC well was completed without any issues and achieved good drilling performance. It was observed that the actual drilling parameters such as torque, drag, and stand pipe pressure were less than the simulated parameters. NAF was selected in the section to reduce the friction factor, while motorized RSS and a reamer stabilizer were used in the BHA to reduce torque and drag and ensure a smooth well profile. A back-reaming practice was implemented in the hole section to reduce dog leg severity, and the open hole was eventually displaced to viscosified brine to minimize the friction factor for running the 4 ½" lower completion. 8500 ft of 6" hole section was drilled and TD was reached at +/- 19,000 ft within 50 days, including recovering the existing completion, drilling the 8 ½" & 6" hole, and running the completion.
This paper aims to contribute to the oilfield industry by sharing the successfully implemented engineering design and operational execution methodology used to overcome the complexities present in re-entry MRC/ERD wells that must be drilled under slim-hole conditions at optimal cost, with time effectiveness and low risk.
6

Riggs, Marie K., Matt R. Bohm, and Philip J. Mountain. "Examining Relationships Between Device Complexity and Failure Modes of Minimally Invasive Surgical Staplers." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-66750.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Minimally invasive surgery (MIS) has become the standard approach for an increasing number and variety of procedures. Designing devices for such surgeries presents many challenges and must address efficiency, accuracy, and ease of use. The complexity of a device’s design likely influences its performance in real life situations. Therefore, identifying the complexity and potential for failures of a device is crucial in the early stage of design in order to ensure the effectiveness and safety of the final product. A complexity measure is explored utilizing design variables such as the maximum number of connections, number of total elements, and number of unique elements within a device. Reverse engineering of medical devices has been completed to begin understanding such complexity variables. The overall objective of this research is to determine the correlation between a medical device’s complexity measure and its failure modes. The nature and frequency of problems associated with various surgical medical devices must be characterized. This paper is an initial investigation and focuses on surgical stapling devices for MIS. The analysis pertains strictly to surgical staplers that simultaneously staple and transect tissue with a design that allows insertion through small incisions via a trocar, wound protector and retractor, or direct insertion. Adverse event reports involving minimally invasive surgical staplers have been retrieved from the U.S. Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database from January 2006 – January 2016 and examined to determine trends in the characterization of device problems and prevalence of such problems. A total of 13,312 reports are included in the analysis. 106 events resulted in death, 3234 resulted in injury, and 9972 involved a device malfunction. 
A yearly analysis has been conducted analyzing the trends in event type (death, injury, and malfunction) and device brands involved in the reports over the past decade. A sample of reports was taken in order to perform a detailed analysis of the event descriptions. The reports are categorized by phase and description of failure modes associated with surgical stapler use. The phases of use in which failures occur have been identified as packaging, reload, articulation, application, firing, cutting, removal, and staple line. FDA recall information associated with these devices was also investigated. An extensive study regarding adverse events reported to the FDA associated with surgical staplers has not been completed since 2004 to the authors’ knowledge, nor a study investigating this specific category of surgical stapling devices. These devices are constantly evolving with regard to their design features, and their application is expanding to more wide-ranging open and MIS procedures. Despite the prevalence of minimally invasive surgical stapler use, any incident of failure may put a patient’s health and safety at risk. Malformed staples as a result of the firing phase, removal issues, and leaking staple lines were the main contributors to surgical stapler failure in the adverse event reports analyzed. Bariatric and thoracic surgery accounted for the majority of procedure types identified within the reports. The range of procedures in the analysis verifies the expansion of surgical stapler use and application. Various failure modes can be attributed to user error; however, the FDA recall information associated with these devices indicates that device failure shares responsibility. The results of this work contribute to the awareness of both surgical stapling device designers and users, and the importance of such must be heavily emphasized in order to prevent future complications in the field.
7

Liu, Jianbin, André Sitte, and Jürgen Weber. "Adaptive Identification and Application of Flow Mapping for Electrohydraulic Valves." In SICFP’21 The 17:th Scandinavian International Conference on Fluid Power. Linköping University Electronic Press, 2021. http://dx.doi.org/10.3384/ecp182p173.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Good estimates of flow mapping for electrohydraulic valves are important in the automation of fluid power systems. The purpose of this paper is to propose adaptive identification methods based on a recursive least squares method (RLSM), a recursive maximum likelihood method (RMLM), and a radial basis function neural network (RBFNN) to estimate the uncertain parameters in flow mapping for electrohydraulic valves. In order to reduce the complexity and improve the identification performance, model structures derived from prior knowledge are introduced. The methods are applied to map the pressure-flow characteristic of an electrohydraulic valve. With the help of simulation results, the accuracy and efficiency of these algorithms are demonstrated. Some issues, such as the invertibility of the flow mapping, are discussed, and suggestions for applying these methods are made.
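Of the three estimators named, recursive least squares is the simplest to sketch. The class below is a generic textbook RLS update for any linear-in-parameters model y = theta^T phi (for a valve, phi might collect regressors such as u*sqrt(|dp|) derived from prior knowledge of the orifice equation); it is an illustration, not the paper's implementation.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS estimator for y = theta^T phi with a forgetting factor."""
    def __init__(self, n_params, forgetting=1.0, p0=1e6):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = np.eye(n_params) * p0       # covariance (large p0 = weak prior)
        self.lam = forgetting

    def update(self, phi, y):
        """Fold in one sample (regressor phi, measurement y)."""
        phi = np.asarray(phi, dtype=float)
        # Gain, innovation-driven parameter update, covariance downdate.
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta
```

Setting forgetting < 1 discounts old samples, which lets the mapping adapt online when valve characteristics drift.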
8

Desmulliez, M. P. Y., P. W. Foulk, and B. S. Wherrett. "Hybrid Technology for Optoelectronic Parallel Processing : Basic Considerations and Future Trends." In Optics in Computing. Washington, D.C.: Optica Publishing Group, 1997. http://dx.doi.org/10.1364/oc.1997.otue.7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The combined exploitation of free-space optical interconnections and very large scale integrated (VLSI) electronics has recently allowed the construction of demonstrator systems [1,2] which exhibit on/off chip communication rates at least one order of magnitude greater than that of electronic systems. A recent article shows that the maximum on/off chip communication rate of the CMOS-SEED bitonic sorter built by the Scottish Collaboration Initiative in Optoelectronic Sciences (SCIOS) is 5.2 × 10^11 pin-Hz [3]. This technology, called hybrid or smart-pixel-array technology, has provoked studies of the issues involved in determining the complexity of each processing element (the pixel) in order to optimize the overall system performance [3-5]. In order to contribute to the debate, the purpose of this article is fourfold: (i) to show that such performance relies on a small set of parameters which characterize the processing element in the optical and electronic domains as shown in table 1, (ii) to demonstrate that optimum performance lies in a narrow niche of the resulting parameter space, (iii) to indicate which technology has to be improved in order to harvest the full communication rate of such systems, and (iv) to outline an unexpected application for which the hybrid technology could contribute significantly.
9

Schimanowski, Alex, and Josef Schlattmann. "Estimation of Impact Energy for Seat Seals in Spring-Operated Pressure Relief Valves During the Reseating Process Under Compressible Fluid Service Conditions." In ASME 2019 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/pvp2019-93336.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Spring-operated pressure relief valves (SOPRVs) are used in many industrial fields for hazardous applications in order to protect people and the environment as well as to reduce the risk of containment loss. Since SOPRVs are essential elements of safety systems, they are subject to strict functional and reliability requirements, which demand not only a proper disk lift but also regulate the allowable valve tightness. However, despite the high practical importance of SOPRV tightness throughout the whole life cycle, only initial tightness is considered in the literature and regulations. Therefore, in the present contribution we attempt to quantify the impact energy of the valve disk during reseating, which is assumed to be the most relevant factor for repeated valve tightness. First, we present three methods of increasing complexity for estimating disk impact energy. Subsequently, we demonstrate the application of these methods on the example of a SOPRV of Type API 526 1E2 and discuss the findings. Finally, we consider a set of commonly used soft sealing materials and give implications for their usage within a range of set pressures and maximum disk lifts, based on the findings for the presented use case.
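The simplest of the "methods of increasing complexity" mentioned above can only be guessed at from the abstract; a crude upper bound would convert the spring work over the full lift entirely into kinetic energy at the seat. The function and all numbers below are hypothetical, not from the paper:

```python
def disk_impact_energy(lift, spring_rate, preload):
    """Crude upper-bound estimate of valve-disk impact energy (J) during
    reseating: the spring work over the full lift is assumed to be
    converted entirely into kinetic energy at the seat, neglecting gas
    forces and damping.  All parameters are generic assumptions.

    lift        : maximum disk lift in m
    spring_rate : spring stiffness in N/m
    preload     : spring preload force at the seated position in N
    """
    # Work done by the linear spring while the disk travels lift -> 0
    return preload * lift + 0.5 * spring_rate * lift**2

# Illustrative values: 10 mm lift, 20 kN/m spring, 500 N preload
E = disk_impact_energy(lift=0.01, spring_rate=2.0e4, preload=500.0)
```

Such a bound ignores the opposing gas force, so any of the paper's more refined methods should yield a lower value.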
10

Kainulainen, Martyna. "A simplified method for evaluation of robustness of bridges." In IABSE Workshop, Helsinki 2017: Ignorance, Uncertainty, and Human Errors in Structural Engineering. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2017. http://dx.doi.org/10.2749/helsinki.2017.126.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Many theoretical methods for assessing robustness introduced so far appear, due to their complexity, almost impossible to use in practical design. Therefore, the present study proposes a simplified method for evaluating the robustness of bridges. The proposed method utilises two strategies for providing robustness: increasing local resistance in order to prevent key elements from failing, and increasing damage tolerance by providing redundancy in order to compensate for the failure of an element. Accordingly, two separate approaches for evaluating bridge robustness are introduced. Both are based on a rating system in which each evaluated component is assigned a partial factor whose value can vary from 0 to 10. Next, in each approach, the points from all partial factors are summed. The robustness of the bridge can be evaluated by comparing the final value with the possible minimum and maximum values. The proposed evaluation method has been applied in a case study: a 118.8 m long, four-span prestressed concrete girder overpass. The method appears promising for estimating the robustness level of the considered bridges. Furthermore, it can assist in identifying the components of a bridge that contribute to its local resistance and redundancy level.
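The rating system described above (partial factors of 0-10 per component, summed and compared with the possible minimum and maximum) can be sketched as follows; the component ratings are hypothetical, and the actual partial-factor definitions are those of the paper:

```python
def robustness_score(partial_factors):
    """Sum component ratings (each in [0, 10]) and normalise against the
    possible minimum (0) and maximum (10 per component) totals, as in a
    simple rating system.  Returns (total, normalised ratio)."""
    if not all(0 <= f <= 10 for f in partial_factors):
        raise ValueError("each partial factor must lie in [0, 10]")
    total = sum(partial_factors)
    max_total = 10 * len(partial_factors)
    return total, total / max_total

# Hypothetical ratings for four bridge components
total, ratio = robustness_score([7, 4, 9, 6])
```

Because the score is a plain sum, a low rating on a single key element directly flags the component that limits the bridge's robustness.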
