Dissertations / Theses on the topic 'Constraint networks'


1

Beaumont, Matthew. "Handling Over-Constrained Temporal Constraint Networks." Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20041213.084512.

Abstract:
Temporal reasoning has been an active research area for over twenty years, with most work focussing on either enhancing the efficiency of current temporal reasoning algorithms or enriching the existing algebras. However, there has been little research into handling over-constrained temporal problems except to recognise that a problem is over-constrained and then to terminate. As many real-world temporal reasoning problems are inherently over-constrained, particularly in the scheduling domain, there is a significant need for approaches that can handle over-constrained situations. In this thesis, we propose two backtracking algorithms to gain partial solutions to over-constrained temporal problems. We also propose a new representation, the end-point ordering model, to allow the use of local search algorithms for temporal reasoning. Using this model we propose a constraint weighting local search algorithm as well as tabu and random-restart algorithms to gain partial solutions to over-constrained temporal problems. Specifically, the contributions of this thesis are: The introduction and empirical evaluation of two backtracking algorithms to solve over-constrained temporal problems. We provide two backtracking algorithms to close the gap in current temporal research to solve over-constrained problems; The representation of temporal constraint networks using the end-point ordering model. As current representation models are not suited for local search algorithms, we develop a new model such that local search can be applied efficiently to temporal reasoning; The development of a constraint weighting local search algorithm for under-constrained problems. As constraint weighting has proven to be efficient for solving many CSP problems, we implement a constraint weighting algorithm to solve under-constrained temporal problems; An empirical evaluation of constraint weighting local search against traditional backtracking algorithms. 
We compare the results of a constraint weighting algorithm with traditional backtracking approaches and find that in many cases constraint weighting has superior performance; The development of constraint weighting, tabu search and random-restart local search algorithms for over-constrained temporal problems. We extend our constraint weighting algorithm to solve over-constrained temporal problems as well as implement two other popular local search algorithms: tabu search and random-restart; An empirical evaluation of all three local search algorithms against the two backtracking algorithms. We compare the results of all three local search algorithms with our two backtracking algorithms for solving over-constrained temporal reasoning problems and find that local search proves to be considerably superior.
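The constraint weighting idea this abstract builds on can be stated generically: when hill-climbing stalls in a local minimum, every currently violated constraint has its weight increased, which reshapes the cost surface so the search can escape. The sketch below is a minimal, illustrative version of that general scheme on a toy CSP encoding; the function name, data layout, and parameters are mine, not the thesis's end-point ordering implementation.

```python
import random

def weighted_local_search(variables, domains, constraints, max_steps=5000, seed=0):
    """Constraint-weighting local search (generic sketch).

    `constraints` is a list of (scope, predicate) pairs; each constraint
    carries a weight that is bumped whenever the search stalls on it."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    weights = [1] * len(constraints)

    def violations(a):  # unweighted count, used to track the best solution
        return sum(1 for scope, ok in constraints
                   if not ok(*(a[x] for x in scope)))

    def weighted(a):    # weighted cost, used to guide the search
        return sum(w for w, (scope, ok) in zip(weights, constraints)
                   if not ok(*(a[x] for x in scope)))

    best, best_viols = dict(assign), violations(assign)
    for _ in range(max_steps):
        if best_viols == 0:
            break
        current = weighted(assign)
        improving = []
        for v in variables:
            for d in domains[v]:
                if d == assign[v]:
                    continue
                trial = dict(assign, **{v: d})
                c = weighted(trial)
                if c < current:
                    improving.append((c, v, d))
        if improving:
            _, v, d = min(improving)
            assign[v] = d
            if violations(assign) < best_viols:
                best, best_viols = dict(assign), violations(assign)
        else:
            # local minimum: bump the weight of every violated constraint
            for i, (scope, ok) in enumerate(constraints):
                if not ok(*(assign[x] for x in scope)):
                    weights[i] += 1
    return best, best_viols
```

On an over-constrained instance the loop simply runs out of steps and returns the assignment with the fewest violated constraints seen, i.e. a partial solution in the sense the abstract describes.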
2

Beaumont, Matthew. "Handling Over-Constrained Temporal Constraint Networks." Thesis, Griffith University, 2004. http://hdl.handle.net/10072/366603.

Abstract:
Temporal reasoning has been an active research area for over twenty years, with most work focussing on either enhancing the efficiency of current temporal reasoning algorithms or enriching the existing algebras. However, there has been little research into handling over-constrained temporal problems except to recognise that a problem is over-constrained and then to terminate. As many real-world temporal reasoning problems are inherently over-constrained, particularly in the scheduling domain, there is a significant need for approaches that can handle over-constrained situations. In this thesis, we propose two backtracking algorithms to gain partial solutions to over-constrained temporal problems. We also propose a new representation, the end-point ordering model, to allow the use of local search algorithms for temporal reasoning. Using this model we propose a constraint weighting local search algorithm as well as tabu and random-restart algorithms to gain partial solutions to over-constrained temporal problems. Specifically, the contributions of this thesis are: The introduction and empirical evaluation of two backtracking algorithms to solve over-constrained temporal problems. We provide two backtracking algorithms to close the gap in current temporal research to solve over-constrained problems; The representation of temporal constraint networks using the end-point ordering model. As current representation models are not suited for local search algorithms, we develop a new model such that local search can be applied efficiently to temporal reasoning; The development of a constraint weighting local search algorithm for under-constrained problems. As constraint weighting has proven to be efficient for solving many CSP problems, we implement a constraint weighting algorithm to solve under-constrained temporal problems; An empirical evaluation of constraint weighting local search against traditional backtracking algorithms. 
We compare the results of a constraint weighting algorithm with traditional backtracking approaches and find that in many cases constraint weighting has superior performance; The development of constraint weighting, tabu search and random-restart local search algorithms for over-constrained temporal problems. We extend our constraint weighting algorithm to solve over-constrained temporal problems as well as implement two other popular local search algorithms: tabu search and random-restart; An empirical evaluation of all three local search algorithms against the two backtracking algorithms. We compare the results of all three local search algorithms with our two backtracking algorithms for solving over-constrained temporal reasoning problems and find that local search proves to be considerably superior.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
3

Francisco, Rodriguez Maria Andreina. "Consistency of Constraint Networks Induced by Automaton-Based Constraint Specifications." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156441.

Abstract:
In this work we discuss the consistency of constraints for which the set of solutions can be recognised by a deterministic finite automaton. Such an automaton induces a decomposition of the constraint into a conjunction of constraints. Since the level of filtering for the conjunction of constraints is not known, at any point during search there might be only one possible solution, but, since not all impossible values may yet have been removed, we could be wasting time looking at impossible combinations of values. The most general result so far is that if the constraint hypergraph of such a decomposition is Berge-acyclic, then the decomposition provides hyper-arc consistency, which means that the decomposition achieves the best possible filtering. We focus our work on constraint networks that have alpha-acyclic, centred-cyclic or sliding-cyclic hypergraph representations. For each of these kinds of constraint networks we systematically show the necessary conditions to achieve hyper-arc consistency.
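The induced decomposition the abstract refers to introduces hidden state variables Q0..Qn alongside the original variables X0..Xn-1, linked by transition constraints Q(i+1) = delta(Q(i), X(i)); a tuple X satisfies the constraint iff the run ends in an accepting state. The brute-force enumeration below is only meant to make the decomposition concrete on tiny domains (all names are mine); a real propagator filters domains through the transition constraints rather than enumerating.

```python
from itertools import product

def automaton_solutions(delta, q0, accepting, domains):
    """Enumerate tuples accepted by a DFA-specified constraint.

    delta: partial transition map {(state, value): state}; a missing
    entry means the run is rejected (dead state)."""
    solutions = []
    for xs in product(*domains):
        q, alive = q0, True
        for x in xs:                      # Q(i+1) = delta(Q(i), X(i))
            if (q, x) not in delta:
                alive = False
                break
            q = delta[(q, x)]
        if alive and q in accepting:      # accepting run => solution
            solutions.append(xs)
    return solutions
```

For example, a two-state DFA forbidding two consecutive 1s specifies exactly the binary tuples with no "11" substring.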
4

Hassani, Bijarbooneh Farshid. "Constraint Programming for Wireless Sensor Networks." Doctoral thesis, Uppsala universitet, Avdelningen för datalogi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-241378.

Abstract:
In recent years, wireless sensor networks (WSNs) have grown rapidly and have had a substantial impact in many applications. A WSN is a network that consists of interconnected autonomous nodes that monitor physical and environmental conditions, such as temperature, humidity, pollution, etc. If required, nodes in a WSN can perform actions to affect the environment. WSNs present an interesting and challenging field of research due to the distributed nature of the network and the limited resources of the nodes. It is necessary for a node in a WSN to be small to enable easy deployment in an environment and consume as little energy as possible to prolong its battery lifetime. There are many challenges in WSNs, such as programming a large number of nodes, designing communication protocols, achieving energy efficiency, respecting limited bandwidth, and operating with limited memory. WSNs are further constrained due to the deployment of the nodes in indoor and outdoor environments and obstacles in the environment. In this dissertation, we study some of the fundamental optimisation problems related to the programming, coverage, mobility, data collection, and data loss of WSNs, modelled as standalone optimisation problems or as optimisation problems integrated with protocol design. Our proposed solution methods come from various fields of research including constraint programming, integer linear programming, heuristic-based algorithms, and data inference techniques.
ProFuN
5

Draghici, Sorin. "Using constraints to improve generalisation and training of feedforward neural networks : constraint based decomposition and complex backpropagation." Thesis, University of St Andrews, 1996. http://hdl.handle.net/10023/13467.

Abstract:
Neural networks can be analysed from two points of view: training and generalisation. The training is characterised by a trade-off between the 'goodness' of the training algorithm itself (speed, reliability, guaranteed convergence) and the 'goodness' of the architecture (the difficulty of the problems the network can potentially solve). Good training algorithms are available for simple architectures which cannot solve complicated problems. More complex architectures, which have been shown to be able to solve potentially any problem do not have in general simple and fast algorithms with guaranteed convergence and high reliability. A good training technique should be simple, fast and reliable, and yet also be applicable to produce a network able to solve complicated problems. The thesis presents Constraint Based Decomposition (CBD) as a technique which satisfies the above requirements well. CBD is shown to build a network able to solve complicated problems in a simple, fast and reliable manner. Furthermore, the user is given a better control over the generalisation properties of the trained network with respect to the control offered by other techniques. The generalisation issue is addressed, as well. An analysis of the meaning of the term "good generalisation" is presented and a framework for assessing generalisation is given: the generalisation can be assessed only with respect to a known or desired underlying function. The known properties of the underlying function can be embedded into the network thus ensuring a better generalisation for the given problem. This is the fundamental idea of the complex backpropagation network. This network can associate signals through associating some of their parameters using complex weights. It is shown that such a network can yield better generalisation results than a standard backpropagation network associating instantaneous values.
6

Craven, Daniel Shawn. "A formal analysis of the MLS LAN : TCB-to-TCBE, Session Status, & TCBE-to-Session Server Protocols /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sept%5FCraven.pdf.

7

Vaez, Mohammad-Mehdi. "Nonblocking Banyan-type optical switching networks under crosstalk constraint." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13401.

8

Chachra, Sumit, and Theodore Elhourani. "RESOURCE ALLOCATION IN SENSOR NETWORKS USING DISTRIBUTED CONSTRAINT OPTIMIZATION." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605299.

Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
Several algorithms have been proposed for solving constraint satisfaction and the more general constraint optimization problem in a distributed manner. In this paper we apply two such algorithms to the task of dynamic resource allocation in the sensor network domain using appropriate abstractions. The aim is to effectively track multiple targets by making the sensors coordinate with each other in a distributed manner, given a probabilistic representation of tasks (targets). We present simulation results and compare the performance of the DBA (Distributed Breakout Algorithm) and DSA (Distributed Stochastic Algorithm) under varying experimental settings.
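DSA, one of the two algorithms compared above, is simple to state: in each synchronous round every agent looks at its neighbours' last-published values and, if a better local value exists, switches to it only with probability p (the randomness breaks the oscillations that lock-step parallel moves would cause). The sketch below simulates the agents round-robin on a generic conflict-minimisation abstraction; a real sensor network would run each agent on its own node with message passing, and the paper's probabilistic target representation is not reproduced here.

```python
import random

def dsa(neighbors, domains, rounds=100, p=0.7, seed=1):
    """Sequentially simulated DSA on a constraint graph where a
    conflict is two adjacent agents holding the same value."""
    rng = random.Random(seed)
    value = {a: rng.choice(domains[a]) for a in neighbors}
    for _ in range(rounds):
        snapshot = dict(value)  # values every agent sees this round
        for a in neighbors:
            def conflicts(v):
                return sum(1 for n in neighbors[a] if snapshot[n] == v)
            best = min(domains[a], key=conflicts)
            # move only if it strictly helps, and then with probability p
            if conflicts(best) < conflicts(value[a]) and rng.random() < p:
                value[a] = best
    return value
```

Because moves are stochastic, DSA gives no convergence guarantee, but in practice a few dozen rounds usually drive the conflict count to zero on sparsely constrained graphs.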
9

Grigoleit, Mark Ted. "Optimisation of large scale network problems." Thesis, Curtin University, 2008. http://hdl.handle.net/20.500.11937/1405.

Abstract:
The Constrained Shortest Path Problem (CSPP) consists of finding the shortest path in a graph or network that satisfies one or more resource constraints. Without these constraints, the shortest path problem can be solved in polynomial time; with them, the CSPP is NP-hard and thus far no polynomial-time algorithms exist for solving it optimally. The problem arises in a number of practical situations. In the case of vehicle path planning, the vehicle may be an aircraft flying through a region with obstacles such as mountains or radar detectors, with an upper bound on the fuel consumption, the travel time or the risk of attack. The vehicle may be a submarine travelling through a region with sonar detectors, with a time or risk budget. These problems all involve a network which is a discrete model of the physical domain. Another example would be the routing of voice and data information in a communications network such as a mobile phone network, where the constraints may include maximum call delays or relay node capacities. This is a problem of current economic importance, and one for which time-sensitive solutions are not always available, especially if the networks are large. We consider the simplest form of the problem, large grid networks with a single side constraint, which have been studied in the literature. This thesis explores the application of Constraint Programming combined with Lagrange Relaxation to achieve optimal or near-optimal solutions of the CSPP. The following is a brief outline of the contribution of this thesis. Lagrange Relaxation may or may not achieve optimal or near-optimal results on its own. Often, large duality gaps are present. We make a simple modification to Dijkstra’s algorithm that does not involve any additional computational work in order to generate an estimate of path time at every node. We then use this information to constrain the network along a bisecting meridian.
The combination of Lagrange Relaxation (LR) and a heuristic for filtering along the meridian provide an aggressive method for finding near-optimal solutions in a short time. Two network problems are studied in this work. The first is a Submarine Transit Path problem in which the transit field contains four sonar detectors at known locations, each with the same detection profile. The side constraint is the total transit time, with the submarine capable of 2 speeds. For the single-speed case, the initial LR duality gap may be as high as 30%. The first hybrid method uses a single centre meridian to constrain the network based on the unused time resource, and is able to produce solutions that are generally within 1% of optimal and always below 3%. Using the computation time for the initial Lagrange Relaxation as a baseline, the average computation time for the first hybrid method is about 30% to 50% higher, and the worst case CPU times are 2 to 4 times higher. The second problem is a random valued network from the literature. Edge costs, times, and lengths are uniform, randomly generated integers in a given range. Since the values given in the literature problems do not yield problems with a high duality gap, the values are varied and from a population of approximately 100,000 problems only the worst 200 from each set are chosen for study. These problems have an initial LR duality gap as high as 40%. A second hybrid method is developed, using values for the unused time resource and the lower bound values computed by Dijkstra’s algorithm as part of the LR method. The computed values are then used to position multiple constraining meridians in order to allow LR to find better solutions.This second hybrid method is able to produce solutions that are generally within 0.1% of optimal, with computation times that are on average 2 times the initial Lagrange Relaxation time, and in the worst case only about 5 times higher. 
The best method for solving the Constrained Shortest Path Problem reported in the literature thus far is the LRE-A method of Carlyle et al. (2007), which uses Lagrange Relaxation for preprocessing followed by a bounded search using aggregate constraints. We replace Lagrange Relaxation with the second hybrid method and show that optimal solutions are produced for both network problems with computation times that are between one and two orders of magnitude faster than LRE-A. In addition, these hybrid methods combined with the bounded search are up to 2 orders of magnitude faster than the commercial CPlex package using a straightforward MILP formulation of the problem. Finally, the second hybrid method is used as a preprocessing step on both network problems, prior to running CPlex. This preprocessing reduces the network size sufficiently to allow CPlex to solve all cases to optimality up to 3 orders of magnitude faster than without this preprocessing, and up to an order of magnitude faster than using Lagrange Relaxation for preprocessing. Chapter 1 provides a review of the thesis and some terminology used. Chapter 2 reviews previous approaches to the CSPP, in particular the two current best methods. Chapter 3 applies Lagrange Relaxation to the Submarine Transit Path problem with 2 speeds, to provide a baseline for comparison. The problem is reduced to a single speed, which demonstrates the large duality gap problem possible with Lagrange Relaxation, and the first hybrid method is introduced. Chapter 4 examines a grid network problem using randomly generated edge costs and weights, and introduces the second hybrid method. Chapter 5 then applies the second hybrid method to both network problems as a preprocessing step, using both CPlex and a bounded search method from the literature to solve to optimality. The conclusion of this thesis and directions for future work are discussed in Chapter 6.
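The Lagrange Relaxation machinery underlying the abstract can be sketched on a toy instance: the time budget is dualised into the edge weights (cost + lambda * time), each relaxed problem is an ordinary shortest path solvable by Dijkstra, and the multiplier lambda is tuned by bisection. This is only the baseline LR step, not the thesis's hybrid methods (no meridian constraining, no bounded search), and all names here are illustrative.

```python
import heapq

def dijkstra_mod(adj, src, dst, lam):
    """Dijkstra under the Lagrangian edge weight cost + lam * time.
    adj maps node -> list of (neighbour, cost, time); dst must be
    reachable. Returns (path, total cost, total time)."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, cost, time in adj.get(u, []):
            nd = d + cost + lam * time
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = (u, cost, time)
                heapq.heappush(pq, (nd, v))
    path, c, t = [dst], 0.0, 0.0
    u = dst
    while u != src:
        u, ec, et = prev[u]
        c, t = c + ec, t + et
        path.append(u)
    return path[::-1], c, t

def lagrangian_cspp(adj, src, dst, budget, iters=60):
    """Bisect the multiplier; keep the cheapest time-feasible path seen.
    Returns (cost, path) or None if no feasible path was found."""
    lo, hi = 0.0, 1.0
    path, c, t = dijkstra_mod(adj, src, dst, hi)
    while t > budget and hi < 1e9:   # grow hi until feasible (or give up)
        hi *= 2.0
        path, c, t = dijkstra_mod(adj, src, dst, hi)
    best = (c, path) if t <= budget else None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        path, c, t = dijkstra_mod(adj, src, dst, lam)
        if t <= budget:              # feasible: try a smaller penalty
            hi = lam
            if best is None or c < best[0]:
                best = (c, path)
        else:                        # infeasible: penalise time more
            lo = lam
    return best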
10

Comin, Carlo. "Complexity in Infinite Games on Graphs and Temporal Constraint Networks." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/368151.

Abstract:
This dissertation deals with a number of algorithmic problems motivated by automated temporal planning and formal verification of reactive and finite state systems. Particularly, we shall focus on game theoretical methods in order to obtain improved complexity bounds and faster algorithms for the following models: Hyper Temporal Networks, Conditional Simple/Hyper Temporal Networks, Conditional Simple Temporal Networks with Instantaneous Reaction Time, Update Games, Explicit McNaughton-Muller Games, Mean Payoff Games.
11

Craven, Daniel Shawn. "A formal analysis of the MLS LAN: TCB-to-TCBE, Session Status, and TCBE-to-Session Server Protocols." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1448.

Abstract:
Approved for public release; distribution is unlimited.
This thesis presents a formal analysis process and the results of applying that process to the MLS LAN: TCB-to- TCBE, Session Status, and TCBE-to-Session Server Protocols. The formal analysis process consists of several distinct stages: the creation of a detailed informal protocol description, analyzing that description to reveal assumptions and areas of interest not directly addressed in the protocol description, the transformation of that description and the related assumptions into a formal Strand Space representation, analyzing that representation to reveal assumptions and areas of interest, and concluding with an application of John Millen's automated Constraint Checker analysis tool to the Strand Space representations under an extremely limited set of conditions to prove certain protocol secrecy properties.
12

Kabiri, Charles. "On the Performance of Underlay Cognitive Radio Networks with Interference Constraints and Relaying." Doctoral thesis, Blekinge Tekniska Högskola [bth.se], Faculty of Computing - Department of Communication Systems, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00609.

Abstract:
Efficiently allocating the scarce and expensive radio resources is a key challenge for advanced radio communication systems. To this end, cognitive radio (CR) has emerged as a promising solution which can offer considerable improvements in spectrum utilization. Furthermore, cooperative communication is a concept proposed to obtain spatial diversity gains through relays without requiring multiple antennas. To benefit from both CR and cooperative communications, a combination of CR networks (CRNs) with cooperative relaying referred to as cognitive cooperative relay networks (CCRNs) has recently been proposed. CCRNs can better utilize the radio spectrum by allowing the secondary users (SUs) to opportunistically access spectrum, share spectrum with primary users (PUs), and provide performance gains offered by cooperative relaying. In this thesis, a performance analysis of underlay CRNs and CCRNs in different fading channels is provided based on analytical expressions, numerical results, and simulations. To allocate power in the CCRNs, power allocation policies are proposed which consider the peak transmit power limit of the SUs and the outage probability constraint of the primary network. Thus, the impact of multiuser diversity, peak transmit power, fading parameters, and modulation schemes on the performance of the CRNs and CCRNs can be analyzed. The thesis is divided into an introduction and five research parts based on peer-reviewed conference papers and journal articles. The introduction provides fundamental background on spectrum sharing systems, fading channels, and performance metrics. In the first part, a basic underlay CRN is analyzed where the outage probability and the ergodic capacity of the network over general fading channels is derived. In the second part, the outage probability and the ergodic capacity of an underlay CRN are assessed capturing the effect of multiuser diversity on the network subject to Nakagami-m fading. 
Considering the presence of a PU transmitter (PU-Tx), a power allocation policy is derived and utilized for CRN performance analysis under Rayleigh fading. In the third part, the impact of multiple PU-Txs and multiple PU receivers (PU-Rxs) on the outage probability of an underlay CCRN is studied. The outage constraint at the PU-Rx and the peak transmit power constraint of the SUs are taken into account to derive the power allocation policies for the SUs. In the fourth part, analytical expressions for the outage probability and symbol error probability for CCRNs are derived where signal combining schemes at the SU receiver (SU-Rx) are compared. Finally, the fifth part applies a sleep/wake-up strategy and the min(N; T) policy to an underlay CRN. The SUs of the network operate as wireless sensor nodes under Nakagami-m fading. A power consumption function of the CRN is derived. Further, the impact of M/G/1 queue and fading channel parameters on the power consumption is assessed.
13

Indrakumar, Selvaratnam. "Towards a constraint-based multi-agent approach to complex applications." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/2095/.

14

Steck, Harald. "Constraint-based structural learning in Bayesian networks using finite data sets." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=962067202.

15

Ali, Murad. "Offline constraint-based routing in OSPF networks : a server based study." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1346.

Abstract:

Many network applications today demand QoS-guaranteed paths, which best-effort routing protocols such as OSPF cannot calculate: these protocols are topology driven, do not address many of the constraints posed by such applications, and only calculate shortest paths. In this thesis, offline constraint-based routing is studied for a single-area Open Shortest Path First (OSPFv2) network, and an offline server is proposed for QoS-guaranteed routing. The server builds a traffic engineering (TE) database and calculates QoS-guaranteed paths on behalf of all the routers in that area. The client routers only do best-effort routing for normal data flows with no requirement for QoS-guaranteed paths, and use the NETCONF protocol to download QoS routes from the offline server (OS). Besides calculating QoS paths, the offline server also reduces congestion and helps to utilise network resources, for example bandwidth, efficiently.
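The classic way an offline server computes a bandwidth-constrained path from a TE database is constrained shortest path first: prune every link that cannot carry the requested bandwidth, then run plain Dijkstra on what remains. The sketch below is a toy of that computation under assumed names and a toy topology, not the thesis's actual server design.

```python
import heapq

def cspf(adj, src, dst, min_bw):
    """Constrained shortest path first (sketch).
    adj maps router -> list of (neighbour, igp_cost, available_bw).
    Links with available_bw < min_bw are pruned before Dijkstra."""
    dist, prev, seen = {src: 0}, {}, set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        for v, cost, bw in adj.get(u, []):
            if bw < min_bw:          # constraint: skip infeasible links
                continue
            if d + cost < dist.get(v, float('inf')):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (d + cost, v))
    if dst not in dist:
        return None                  # no path satisfies the constraint
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

Because pruning happens per-request, the same TE database yields different paths for different bandwidth demands, which is how such a server can steer flows away from congested links.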

16

Dávid, László [Verfasser]. "Algorithms for the constraint-based analysis of metabolic networks / László Dávid." Berlin : Freie Universität Berlin, 2015. http://d-nb.info/1067442243/34.

17

Marashi, Sayed-Amir [Verfasser]. "Constraint-based analysis of substructures of metabolic networks / Sayed-Amir Marashi." Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1025356144/34.

18

Helmberg, Christoph, Sebastian Richter, and Dominic Schupke. "A Chance Constraint Model for Multi-Failure Resilience in Communication Networks." Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-175454.

Abstract:
For ensuring network survivability in case of single component failures, many routing protocols provide a primary and a backup routing path for each origin-destination pair. We address the problem of selecting these paths such that, in the event of multiple failures occurring with given probabilities, the total loss in routable demand due to both paths being intersected is small with high probability. We present a chance constraint model and solution approaches based on an explicit integer programming formulation, a robust formulation and a cutting plane approach that yield reasonably good solutions, assuming that the failures are caused by at most two elementary events, which may each affect several network components.
19

Wang, Yun. "Application-Specific Quality of Service Constraint Design in Wireless Sensor Networks." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1217598398.

20

Muhammad, Sanusi. "Scalable and network aware video coding for advanced communications over heterogeneous networks." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7469.

Abstract:
This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service. In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques. To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance for various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance using various types of video content and formats. Several algorithms are developed to provide a dynamic adaptation of coding tools and parameters to specific video content type, format and transmission bandwidth. Since in heterogeneous networks channel conditions, terminals, user capabilities and preferences, etc. change unpredictably, limiting the adaptability of any one technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator and the network-aware selection of the scalability techniques is based on real-time simulation results.
A technique with a minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of the techniques is not favoured due to deteriorating channel conditions reported, a reduced layered stream or base layer is used. If the network status does not allow the use of the base layer, then the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service. To further improve the flexibility and efficiency of the algorithm, a dynamic de-blocking filter and lambda value selection are analyzed and introduced in the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and extracting sub-streams when the network conditions do not allow for the transmission of the entire bit-stream.
21

Grigoleit, Mark Ted. "Optimisation of large scale network problems." Curtin University of Technology, Department of Mathematics and Statistics, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=115092.

Full text
Abstract:
The Constrained Shortest Path Problem (CSPP) consists of finding the shortest path in a graph or network that satisfies one or more resource constraints. Without these constraints, the shortest path problem can be solved in polynomial time; with them, the CSPP is NP-hard and thus far no polynomial-time algorithms exist for solving it optimally. The problem arises in a number of practical situations. In the case of vehicle path planning, the vehicle may be an aircraft flying through a region with obstacles such as mountains or radar detectors, with an upper bound on the fuel consumption, the travel time or the risk of attack. The vehicle may be a submarine travelling through a region with sonar detectors, with a time or risk budget. These problems all involve a network which is a discrete model of the physical domain. Another example would be the routing of voice and data information in a communications network such as a mobile phone network, where the constraints may include maximum call delays or relay node capacities. This is a problem of current economic importance, and one for which time-sensitive solutions are not always available, especially if the networks are large. We consider the simplest form of the problem, large grid networks with a single side constraint, which have been studied in the literature. This thesis explores the application of Constraint Programming combined with Lagrange Relaxation to achieve optimal or near-optimal solutions of the CSPP. The following is a brief outline of the contribution of this thesis. Lagrange Relaxation may or may not achieve optimal or near-optimal results on its own. Often, large duality gaps are present. We make a simple modification to Dijkstra’s algorithm that does not involve any additional computational work in order to generate an estimate of path time at every node.
We then use this information to constrain the network along a bisecting meridian. The combination of Lagrange Relaxation (LR) and a heuristic for filtering along the meridian provide an aggressive method for finding near-optimal solutions in a short time. Two network problems are studied in this work. The first is a Submarine Transit Path problem in which the transit field contains four sonar detectors at known locations, each with the same detection profile. The side constraint is the total transit time, with the submarine capable of 2 speeds. For the single-speed case, the initial LR duality gap may be as high as 30%. The first hybrid method uses a single centre meridian to constrain the network based on the unused time resource, and is able to produce solutions that are generally within 1% of optimal and always below 3%. Using the computation time for the initial Lagrange Relaxation as a baseline, the average computation time for the first hybrid method is about 30% to 50% higher, and the worst case CPU times are 2 to 4 times higher. The second problem is a random valued network from the literature. Edge costs, times, and lengths are uniform, randomly generated integers in a given range. Since the values given in the literature problems do not yield problems with a high duality gap, the values are varied and from a population of approximately 100,000 problems only the worst 200 from each set are chosen for study. These problems have an initial LR duality gap as high as 40%. A second hybrid method is developed, using values for the unused time resource and the lower bound values computed by Dijkstra’s algorithm as part of the LR method. The computed values are then used to position multiple constraining meridians in order to allow LR to find better solutions.
This second hybrid method is able to produce solutions that are generally within 0.1% of optimal, with computation times that are on average 2 times the initial Lagrange Relaxation time, and in the worst case only about 5 times higher. The best method for solving the Constrained Shortest Path Problem reported in the literature thus far is the LRE-A method of Carlyle et al. (2007), which uses Lagrange Relaxation for preprocessing followed by a bounded search using aggregate constraints. We replace Lagrange Relaxation with the second hybrid method and show that optimal solutions are produced for both network problems with computation times that are between one and two orders of magnitude faster than LRE-A. In addition, these hybrid methods combined with the bounded search are up to 2 orders of magnitude faster than the commercial CPlex package using a straightforward MILP formulation of the problem. Finally, the second hybrid method is used as a preprocessing step on both network problems, prior to running CPlex. This preprocessing reduces the network size sufficiently to allow CPlex to solve all cases to optimality up to 3 orders of magnitude faster than without this preprocessing, and up to an order of magnitude faster than using Lagrange Relaxation for preprocessing. Chapter 1 provides a review of the thesis and some terminology used. Chapter 2 reviews previous approaches to the CSPP, in particular the two current best methods. Chapter 3 applies Lagrange Relaxation to the Submarine Transit Path problem with 2 speeds, to provide a baseline for comparison. The problem is reduced to a single speed, which demonstrates the large duality gap problem possible with Lagrange Relaxation, and the first hybrid method is introduced.
Chapter 4 examines a grid network problem using randomly generated edge costs and weights, and introduces the second hybrid method. Chapter 5 then applies the second hybrid method to both network problems as a preprocessing step, using both CPlex and a bounded search method from the literature to solve to optimality. The conclusion of this thesis and directions for future work are discussed in Chapter 6.
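The Lagrangian core of the approach this abstract builds on can be sketched in a few lines: the single time constraint is priced into the edge costs by a multiplier, Dijkstra's algorithm solves each relaxed problem, and the multiplier is adjusted (here by simple bisection) against the time budget. This is a generic illustration of Lagrange Relaxation for one side constraint on a made-up graph, not the thesis's meridian-constrained hybrid method:

```python
import heapq

def relaxed_shortest_path(adj, src, dst, lam):
    """Dijkstra on Lagrangian edge weights cost + lam * time.
    adj[u] is a list of (v, cost, time) triples.
    Returns the (cost, time) of the relaxed-optimal path to dst."""
    dist = {src: 0.0}
    best = {src: (0.0, 0.0)}
    pq = [(0.0, src, 0.0, 0.0)]
    while pq:
        d, u, c, t = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        best[u] = (c, t)
        for v, ec, et in adj.get(u, []):
            nd = d + ec + lam * et
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v, c + ec, t + et))
    return best[dst]

def lagrangian_cspp(adj, src, dst, t_max, lam_hi=100.0, iters=60):
    """Bisection on the multiplier: raise lam when the relaxed path
    violates the time budget, lower it otherwise. Returns the best
    feasible (cost, time) found, optimal up to the duality gap."""
    lo, hi = 0.0, lam_hi
    best_feasible = None
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        c, t = relaxed_shortest_path(adj, src, dst, lam)
        if t <= t_max:
            if best_feasible is None or c < best_feasible[0]:
                best_feasible = (c, t)
            hi = lam   # feasible: try a smaller time penalty
        else:
            lo = lam   # infeasible: penalise time more heavily
    return best_feasible
```

On a toy network with a cheap-but-slow route and a dear-but-fast route, the bisection settles on the fast route exactly when the time budget rules out the slow one; the residual duality gap is what the thesis's meridian heuristics are designed to close.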
APA, Harvard, Vancouver, ISO, and other styles
22

Huang, Shao-shan Carol. "A constraint optimization framework for discovery of cellular signaling and regulatory networks." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65772.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Computational and Systems Biology Program, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Cellular signaling and regulatory networks underlie fundamental biological processes such as growth, differentiation, and response to the environment. Although there are now various high-throughput methods for studying these processes, knowledge of them remains fragmentary. Typically, the majority of hits identified by transcriptional, proteomic, and genetic assays lie outside of the expected pathways. In addition, not all components in the regulatory networks can be exposed in one experiment because of systematic biases in the assays. These unexpected and hidden components of the cellular response are often the most interesting, because they can provide new insights into biological processes and potentially reveal new therapeutic approaches. However, they are also the most difficult to interpret. We present a technique, based on the Steiner tree problem, that uses a probabilistic protein-protein interaction network and high-confidence measurement and prediction of protein-DNA interactions to determine how these hits are organized into functionally coherent pathways, revealing many components of the cellular response that are not readily apparent in the original data. We report the results of applying this method to (1) phosphoproteomic and transcriptional data from the pheromone response in yeast, and (2) phosphoproteomic, DNaseI hypersensitivity sequencing and mRNA profiling data from the U87MG glioblastoma cell lines over-expressing the variant III mutant of the epidermal growth factor receptor (EGFRvIII). In both cases the method identifies changes in diverse cellular processes that extend far beyond the expected pathways. Analysis of the EGFRvIII network connectivity properties and of the transcriptional regulators that link observed changes in protein phosphorylation and differential expression suggests a few intriguing hypotheses that may lead to improved therapeutic strategies for glioblastoma.
by Shao-shan Carol Huang.
Ph.D.
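The connection step at the heart of such Steiner-tree formulations can be illustrated with a classic greedy heuristic: grow a tree from one "hit" (terminal) and repeatedly attach the nearest unconnected hit along a shortest path, with edge weights standing in for interaction unreliability. This is a generic approximation sketch on a made-up graph, not the formulation actually used in the thesis:

```python
import heapq

def shortest_paths(adj, src):
    """Dijkstra from src; adj[u] is a list of (v, weight) pairs.
    Returns (dist, prev) for path reconstruction."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_approx(adj, terminals):
    """Greedy Steiner heuristic: repeatedly attach the cheapest-to-reach
    unconnected terminal; intermediate nodes become Steiner points."""
    tree, connected = set(), {terminals[0]}
    remaining = set(terminals[1:])
    while remaining:
        best = None
        for s in connected:
            dist, prev = shortest_paths(adj, s)
            for t in remaining:
                if t in dist and (best is None or dist[t] < best[0]):
                    best = (dist[t], t, prev)
        _, t, prev = best
        remaining.discard(t)
        while t in prev:           # walk the path back, adding its edges
            u = prev[t]
            tree.add((min(u, t), max(u, t)))
            connected.add(t)
            t = u
        connected.add(t)
    return tree
```

Nodes pulled into the tree that were not themselves hits play the role of the "hidden components" the abstract refers to.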
APA, Harvard, Vancouver, ISO, and other styles
23

Xiong, Lixiang. "Scheduling in packet switches with relaxed constraint." School of Electrical, Computer and Telecommunications Engineering - Faculty of Informatics, 2004. http://ro.uow.edu.au/theses/372.

Full text
Abstract:
In this thesis, I present a series of new scheduling algorithms for an ATM-like crossbar input-queued switching fabric of an IP router. These new scheduling algorithms are developed from three popular existing scheduling algorithms: Parallel Iterative Matching, Round Robin Matching, and Iterative Round Robin Matching with SLIP. The basic idea of our research is to divide all outputs of the IP router into a few groups. All outputs in the same output group are multiplexed into a high-speed link. Cells (traffic) can then be directed to a group of outputs instead of an individual output. The performance of the new scheduling algorithms is measured by simulation. The simulation results indicate that the new scheduling algorithms can achieve excellent throughput while consuming much less computing time than existing scheduling algorithms.
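A single grant-accept iteration of the round-robin family of schedulers this thesis builds on can be sketched as follows. This is a generic illustration with iSLIP-style pointer updates (pointers advance only past accepted grants), not the grouped-output variants developed in the thesis:

```python
def match_iteration(requests, n, grant_ptr, accept_ptr):
    """One grant-accept iteration for an n x n input-queued crossbar.
    requests[i] is the set of outputs input i has queued cells for;
    grant_ptr / accept_ptr are the per-output and per-input round-robin
    pointers, updated in place only when a grant is accepted."""
    # Grant phase: each output offers itself to the first requesting
    # input at or after its round-robin pointer.
    grants = {}                      # output -> granted input
    for o in range(n):
        for k in range(n):
            i = (grant_ptr[o] + k) % n
            if o in requests[i]:
                grants[o] = i
                break
    # Accept phase: each input accepts the first granting output at or
    # after its pointer; both pointers then advance past the match.
    offers = {}                      # input -> outputs granting it
    for o, i in grants.items():
        offers.setdefault(i, []).append(o)
    matches = []
    for i, outs in offers.items():
        for k in range(n):
            o = (accept_ptr[i] + k) % n
            if o in outs:
                matches.append((i, o))
                grant_ptr[o] = (i + 1) % n
                accept_ptr[i] = (o + 1) % n
                break
    return matches
```

Further iterations would re-run the same phases over the still-unmatched inputs and outputs; in the thesis's grouped-output idea, the grant phase would operate on multiplexed output groups rather than individual outputs.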
APA, Harvard, Vancouver, ISO, and other styles
24

Guziolowski, Carito. "Analysis of Large-Scale Biological Networks with Constraint-Based Approaches over Static Models." Phd thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00541903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Larhlimi, Abdelhalim [Verfasser]. "New concepts and tools in constraint-based analysis of metabolic networks / Abdelhalim Larhlimi." Berlin : Freie Universität Berlin, 2009. http://d-nb.info/1023579944/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kocagoncu, Ece. "Dynamic speech networks in the brain : dual contribution of incrementality and constraints in access to semantics." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/270309.

Full text
Abstract:
This thesis explores the spatiotemporal network dynamics underlying natural speech comprehension, as measured by electro-magnetoencephalography (E/MEG). I focus on the transient effects of incrementality and constraints in speech on access to lexical semantics. Through three E/MEG experiments I address two core issues in the systems neuroscience of language: 1) What are the network dynamics underpinning the cognitive computations that take place when we map sounds to rich semantic representations? 2) How do prior semantic and syntactic contextual constraints facilitate this mapping? Experiment 1 investigated the cognitive processes and relevant networks that come online prior to a word’s recognition point (e.g. “f” for butterfly) as we access meaning through speech in isolation. The results revealed that 300 ms before the word is recognised, the speech incrementally activated matching phonological and semantic representations, resulting in transient competition. This competition recruited LIFG and modality-specific regions (LSMG and LSTG for the phonological domain; LAG and MTG for the semantic domain). Immediately after the word’s recognition point, the semantic representation of the target concept was boosted and rapidly accessed, recruiting bilateral MTG and AG. Experiment 2 explored the cortical networks underpinning contextual semantic processing in speech. Participants listened to two-word spoken phrases in which the semantic constraint provided by the modifier was manipulated. To separate out cognitive networks that are modulated by semantic constraint from task-positive networks, I performed a temporal independent component analysis. Among the 14 networks extracted, only the activity of bilateral AG was modulated by semantic constraint, between -400 and -300 ms before the noun’s recognition point. Experiment 3 addressed the influence of sentential syntactic constraint on the anticipation and activation of upcoming syntactic frames in speech.
Participants listened to sentences with local syntactic ambiguities. The analysis of the connectivity dynamics in the left frontotemporal syntax network revealed that sentences containing the less anticipated syntactic structure showed increased early feedforward information flow at 0-100 ms, followed by increased recurrent connectivity between LIFG and LpMTG at 200-500 ms from verb onset. Altogether, the three experiments reveal novel insights into the transient cognitive networks recruited incrementally over time, both with and without context, as the speech unfolds, and into how the activation of these networks is modulated by contextual syntactic and semantic constraints. Further, I provide neural evidence that contextual constraints serve to facilitate speech comprehension, and show how the speech networks recover from failed anticipations.
APA, Harvard, Vancouver, ISO, and other styles
27

Fu, Ser-Geon. "Genetic and evolutionary protocols for solving distributed asymmetric constraint satisfaction problems." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Spring%20Dissertations/FU_SER-GEON_10.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, Wei. "Integer Programming-based Methods for Computing Minimum Reaction Modifications of Metabolic Networks for Constraint Satisfaction." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/8163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Servín, Meneses Christian. "An optimization approach for the cascade vulnerability problem." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Aravena, Andrés. "Probabilistic and constraint based modelling to determine regulation events from heterogeneous biological data." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00922346.

Full text
Abstract:
This thesis proposes a method for building realistic causal regulatory networks with a lower false-positive rate than traditional methods. The approach integrates heterogeneous information from two types of network predictions to determine a causal explanation of observed gene co-expression. This integration process is modelled as a combinatorial optimization problem of NP-hard complexity. We introduce a heuristic approach to determine an approximate solution within a practical execution time. Our evaluation shows that, for the model species E. coli, the regulatory network resulting from this method has higher precision than one built with traditional tools. The bacterium Acidithiobacillus ferrooxidans presents particular challenges for the experimental determination of its regulatory network. Using the tools we have developed, we propose a putative regulatory network and analyse the relevance of its central regulators; this constitutes the fourth contribution of this thesis. In the second part of the thesis, we explore how these regulatory relationships manifest themselves, developing a method to complete a signalling network related to Alzheimer's disease. Finally, we address the mathematical problem of microarray probe design. We conclude that, to fully predict hybridization dynamics, a modified energy function is needed for the secondary structures of surface-attached DNA molecules, and we propose a scheme for determining this function.
APA, Harvard, Vancouver, ISO, and other styles
32

Koker, Ezgi. "Chance Constrained Optimization Of Booster Disinfection In Water Distribution Networks." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613640/index.pdf.

Full text
Abstract:
The quality of municipal water is sustained by the addition of disinfectant, generally chlorine, to the water distribution network. Because of health concerns, the chlorine concentration in the network is kept between maximum and minimum limits. Carcinogenic disinfection by-products start to occur at high concentrations, so it is desirable to inject the minimum amount of chlorine without violating the lower limit. In addition to the health issues, a minimum injection amount is favourable with respect to cost. Hence, an optimization model is necessary that covers all of these considerations. However, there are uncertain factors, as chlorine is reactive and decays both over time and space. Thus, a probabilistic approach is necessary to obtain reliable and realistic results from the model. In this study, a linear programming model is developed for the chance-constrained optimization of the water distribution network. The objective is to obtain the minimum injection mass, subject to maintaining more uniformly distributed chlorine concentrations within the limits, while the randomness of the chlorine concentration is described by probability distributions. Network hydraulics and chlorine concentration computations are done by the network simulation software EPANET.
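The chance-constraint step described in this abstract has a standard deterministic equivalent when nodal chlorine concentration is treated as normally distributed: a probabilistic bound P(c >= c_min) >= alpha becomes a linear constraint on the mean concentration, which keeps the whole model a linear program. A minimal sketch (the numbers and the normality assumption are illustrative, not taken from the thesis):

```python
from statistics import NormalDist

def chance_lower_bound(c_min, sigma, alpha):
    """Deterministic equivalent of P(c >= c_min) >= alpha for a normal
    concentration with standard deviation sigma: the mean must satisfy
    mean(c) >= c_min + z_alpha * sigma."""
    return c_min + NormalDist().inv_cdf(alpha) * sigma

def chance_upper_bound(c_max, sigma, alpha):
    """Likewise, P(c <= c_max) >= alpha tightens the upper limit:
    mean(c) <= c_max - z_alpha * sigma."""
    return c_max - NormalDist().inv_cdf(alpha) * sigma
```

For example, with an illustrative lower limit of 0.2 mg/L, sigma = 0.05 and alpha = 0.95, the mean residual must be at least about 0.282 mg/L; the tightened interval then replaces the original [c_min, c_max] bounds in the LP's nodal concentration constraints.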
APA, Harvard, Vancouver, ISO, and other styles
33

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/7754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jenkins, Glenn Llewellyn. "Evolved neural network approximation of discontinuous vector fields in unit quaternion space (S³) for anatomical joint constraint." Thesis, University of South Wales, 2007. https://pure.southwales.ac.uk/en/studentthesis/evolved-neural-network-approximation-of-discontinuous-vector-fields-in-unit-quaternion-space-s3-for-anatomical-joint-constraint(f375e712-038c-4a78-862a-944c0e36e360).html.

Full text
Abstract:
The creation of anatomically correct three-dimensional joints for the simulation of humans is a complex process, a key difficulty being the correction of invalid joint configurations to the nearest valid alternative. Personalised models based on individual joint mobility are in demand in both animation and medicine [1]. Medical models need to be highly accurate, animated models less so; however, if either are to be used in a real-time environment they must have a low temporal cost (high performance). This work briefly explores Support Vector Machine neural networks as joint-configuration classifiers that group joint configurations into invalid and valid. A far more detailed investigation is carried out into the use of topologically evolved feed-forward neural networks for the generation of appropriately proportioned corrective components which, when applied to an invalid joint configuration, result in a valid configuration, and in the same configuration if the original configuration was valid. Discontinuous vector fields were used to represent constraints of varying size, dimensionality and complexity. This culminated in the creation of corrective quaternion constraints represented by discontinuous vector fields, learned by topologically evolved neural networks and trained via the resilient back-propagation algorithm. Quaternion constraints are difficult to implement, and although alternative methods exist [2-6] the method presented here is superior in many respects. This method of joint constraint forms the basis of the contribution to knowledge, along with the discovery of relationships between the continuity and distribution of samples in quaternion space and neural network performance. The results of the experiments for constraints on the rotation of a limb with regular boundaries show that 3.7 x lO'Vo of patterns resulted in errors greater than 2% of the maximum possible error, while for irregular boundaries 0.032% of patterns resulted in errors greater than 7.5%.
APA, Harvard, Vancouver, ISO, and other styles
35

Psujek, Sean Thomas. "The Role of Developmental Bias in a Simulated Evo-devo System." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1232655232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lukachan, George. "Scalable energy-efficient location aided routing (SELAR) protocol for wireless sensor networks." [Tampa, Fla] : University of South Florida, 2005. http://purl.fcla.edu/usf/dc/et/SFE0001366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Brettner, Leandra M., and Joanna Masel. "Protein stickiness, rather than number of functional protein-protein interactions, predicts expression noise and plasticity in yeast." BioMed Central, 2012. http://hdl.handle.net/10150/610103.

Full text
Abstract:
BACKGROUND: A hub protein is one that interacts with many functional partners. The annotation of hub proteins, or more generally the protein-protein interaction "degree" of each gene, requires quality genome-wide data. Data obtained using yeast two-hybrid methods contain many false positive interactions between proteins that rarely encounter each other in living cells, and such data have fallen out of favor. RESULTS: We find that protein "stickiness", measured as network degree in ostensibly low-quality yeast two-hybrid data, is a more predictive genomic metric than the number of functional protein-protein interactions, as assessed by supposedly higher-quality high-throughput affinity capture mass spectrometry data. In the yeast Saccharomyces cerevisiae, a protein's high stickiness, but not its high number of functional interactions, predicts low stochastic noise in gene expression, low plasticity of gene expression across different environments, and high probability of forming a homo-oligomer. Our results are robust to a multiple regression analysis correcting for other known predictors including protein abundance, presence of a TATA box and whether a gene is essential. Once the higher stickiness of homo-oligomers is controlled for, we find that homo-oligomers have noisier and more plastic gene expression than other proteins, consistent with a role for homo-oligomerization in mediating robustness. CONCLUSIONS: Our work validates use of the number of yeast two-hybrid interactions as a metric for protein stickiness. Sticky proteins exhibit low stochastic noise in gene expression, and low plasticity in expression across different environments.
APA, Harvard, Vancouver, ISO, and other styles
38

Choi, Gi Wan. "Low-complexity and power-efficient wireless cooperative relay networks with enhanced reliability." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47721.

Full text
Abstract:
In recent years, global mobile data traffic has been increasing exponentially as mobile devices pervade our daily lives. To cope with the ever growing demands for higher data rates and seamless connectivity, one solution is to drastically increase the number of macro base stations in the conventional cellular architecture. However, this results in high deployment costs. Deploying low-power nodes such as relays that do not require a wired backhaul connection within a macrocell is one of cost-effective ways to extend high data rate coverage range. Relays are typically deployed to increase signal strength in poor coverage areas or to eliminate dead spots. But more importantly, relays provide a natural diversity, called cooperative diversity. In addition to a direct signal from a base station, extra copies of the same signal are forwarded from relays. Utilizing this diversity at the destination can yield significant performance enhancements. Thus, cooperative relay strategies need to be considered to enable high data rate coverage in a cost-effective manner. In this dissertation, we consider a simple single-relay network and present low-complexity and power-efficient cooperative relay designs that can achieve low error rate. We first study decode-and-forward (DF) relay networks with a single antenna at each node, where the relay decodes the received signal and forwards the re-encoded information to the destination. In DF relay scheme, decoding at the relay is not perfect and the error-propagation phenomenon is a detrimental problem, preventing the destination from collecting the cooperative diversity. To enable cooperative diversity in DF relay networks, we adopt link-adaptive power-scaling relay strategies where the relay scales the transmission power of the re-encoded signal based on the reliability of the source-relay link. We generalize power-profile designs and analyze the diversity order enabled by the general power-profile designs. 
We provide necessary and sufficient conditions for the designs to enable full cooperative diversity at the destination. In the second part of this dissertation, we extend the power-scaling relay strategy to DF multi-input multi-output (MIMO) relay networks, where multiple antennas are adopted at each node, and show that full cooperative diversity can also be achieved here. To collect spatial diversity provided by multiple antennas without using maximum-likelihood equalizers (MLEs) or near-ML detectors which exhibit high complexity, channel-controlled automatic repeat request (CC-ARQ) scheme is developed for DF MIMO relay networks to enable spatial diversity with linear equalizers (LEs) maintaining low-complexity. We also show that joint cooperative and spatial diversity can be achieved at the destination when the power-scaling strategy and the CC-ARQ with LEs are combined. Finally, amplify-and-forward (AF) MIMO relay designs, where the relay simply amplifies the received signal and forwards it to the destination, are studied with consideration of peak-power constraints at the relay. One practical concern for AF relaying is that the output signal at the relay may suffer from large peak-to-average power ratio (PAR), which may cause nonlinear distortion and/or saturation in the transmitted signal due to the limited linear range of power amplifiers. Thus, we first investigate peak-power constrained power-scaling strategies and find a sufficient condition to enable joint cooperative and spatial diversity at the destination. Based on this study, we propose simple and practical AF MIMO relay designs with peak-power constraint at the relay. CC-ARQ is also applied to AF MIMO relay networks to reduce the decoding complexity.
APA, Harvard, Vancouver, ISO, and other styles
39

Lang, Philipp Sebastian [Verfasser], and Erwin [Akademischer Betreuer] Frey. "Reptation in entangled polymer networks : a simulational study of the significance of constraint release for reptation / Philipp Sebastian Lang. Betreuer: Erwin Frey." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1104698013/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Piesker, Björn. "Constraint-basierte Generierung realitätsnaher Eisenbahnnetze." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2007/1532/.

Full text
Abstract:
This work deals with the development of an application that generates infrastructure data for railway networks, with the focus on the generation of the topological information. The user first characterizes the desired railway network; the required properties serve as the constraints to be respected during synthesis. To satisfy these constraints, constraint programming is applied, a programming paradigm whose special character allows consistent solutions to be found efficiently, in particular through the reuse of so-called global constraints. For that reason, the role of constraint programming in modelling and implementing this application is discussed in more detail.
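The constraint-driven synthesis described here can be illustrated with a toy backtracking search: choose a subset of candidate track segments so that every station's degree stays within user-given bounds and the network is connected. A real solver such as Gecode propagates global constraints far more cleverly; this self-contained sketch (all names hypothetical) only mirrors the idea:

```python
from itertools import combinations

def generate_network(n_stations, deg_min, deg_max):
    """Backtracking over candidate edges: include or skip each station
    pair, pruning branches that would exceed deg_max, until every
    station's degree lies in [deg_min, deg_max] and the net is connected."""
    edges = list(combinations(range(n_stations), 2))
    deg = [0] * n_stations

    def connected(chosen):
        adj = {i: set() for i in range(n_stations)}
        for a, b in chosen:
            adj[a].add(b)
            adj[b].add(a)
        seen, stack = {0}, [0]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n_stations

    def search(idx, chosen):
        if all(deg_min <= d <= deg_max for d in deg) and connected(chosen):
            return chosen
        if idx == len(edges):
            return None
        a, b = edges[idx]
        if deg[a] < deg_max and deg[b] < deg_max:   # propagate deg_max
            deg[a] += 1; deg[b] += 1
            found = search(idx + 1, chosen + [(a, b)])
            if found is not None:
                return found
            deg[a] -= 1; deg[b] -= 1
        return search(idx + 1, chosen)              # skip this edge

    return search(0, [])
```

A global degree constraint in a real CSP solver would prune both directions (too many and too few edges) at once; this sketch only enforces the upper bound during search and checks the rest at the leaves.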
APA, Harvard, Vancouver, ISO, and other styles
41

Andrade, Tales Pinheiro de. "Interações gênicas usando redes booleanas limiarizadas modeladas como um problema de satisfação de restrições." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-05062012-150618/.

Full text
Abstract:
The chemical reactions that result in gene expression are complex and not yet fully understood. It is known that genes send, receive and process information, forming a complex communication network, but the architecture and dynamics of these networks are not fully known. Thus, one major problem is to determine how genes are related within the cell. This process of determining the relationships between genes is known as inference of gene networks. One way to represent the relationships between genes is to use mathematical and computational models of gene networks. In particular, one model of great interest is that of Boolean Networks (BNs), in which genes can assume two states, active or inactive, according to whether they are expressed or not. These states may vary over time, depending on how the genes are related. Our interest is in studying a particular case of this model, known as thresholded Boolean networks, in which only one class of Boolean functions is used to build the BNs. To infer thresholded Boolean networks, we use a two-step algorithm. First, we use the Constraint Satisfaction Problem (CSP) framework to infer sets of solutions consistent with a given time series for a set of genes. We then analyse the dynamic behaviour of the solutions found, filtering the sets of solutions of greatest interest for practical laboratory tests. Using the CSP framework, we built a solver, using the Gecode library, for the inference of consistent networks, taking as input a time series derived from microarray data. Then, by simulating the dynamics of a sample of the networks found in the previous step, we were able to determine some interesting constraints for filtering the set of networks. We applied our method to three data sets: two artificial, and, for validation, a time series from an artificial network known in the literature. With this we were able to infer sets of gene networks of possible interest for laboratory tests.
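The model class at issue can be made concrete in a few lines: in a thresholded Boolean network, each gene switches on exactly when the weighted sum of its regulators' states reaches its threshold, and a candidate network is consistent with a time series when it reproduces every observed transition. A minimal sketch with made-up weights (the thesis's solver searches over such candidates with Gecode rather than checking one by hand):

```python
def step(state, weights, thresholds):
    """One synchronous update of a thresholded Boolean network:
    gene i becomes active iff the weighted sum of its inputs
    meets its threshold."""
    n = len(state)
    return tuple(
        1 if sum(weights[i][j] * state[j] for j in range(n)) >= thresholds[i]
        else 0
        for i in range(n)
    )

def consistent(weights, thresholds, series):
    """CSP-style check: does this candidate network reproduce an
    observed time series of states?"""
    return all(step(series[t], weights, thresholds) == tuple(series[t + 1])
               for t in range(len(series) - 1))
```

The consistency predicate is exactly the constraint the CSP formulation imposes on every observed transition; the solver's job is to enumerate the weight and threshold assignments for which it holds.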
APA, Harvard, Vancouver, ISO, and other styles
42

Tunc, Ilker. "Epidemic models on adaptive networks with network structure constraints." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539623618.

Full text
Abstract:
Spread of infectious diseases progresses as a result of contacts between the individuals in a population. Therefore, it is crucial to gain insight into the pattern of connections to better understand and possibly control the spread of infectious diseases. Moreover, people may respond to an epidemic by changing their social behaviors to prevent infection. As a result, the structure of the network of social contacts evolves adaptively as a function of the disease status of the nodes. Recently, the dynamic relationships between different network topologies and adaptation mechanisms have attracted great attention in modeling epidemic spread. However, in most of these models, the original network structure is not preserved due to adaptation mechanisms involving random changes in the links. In this dissertation, we study more realistic models with network structure constraints that retain aspects of the original network structure. We study a susceptible-infected-susceptible (SIS) disease model on an adaptive network with two communities. Different levels of heterogeneity in terms of average connectivity and connection strength are considered. We study the effects of a disease avoidance adaptation mechanism based on the rewiring of susceptible-infected links through which the disease could spread. We choose the rewiring rules so that the network structure with two communities is preserved when rewiring occurs uniformly. The high-dimensional network system is approximated with a lower-dimensional mean field description based on a moment closure approximation. Good agreement between the solutions of the mean field equations and the results of the simulations is obtained at the steady state. In contrast to the non-adaptive case, similar infection levels in both communities are observed even when they are weakly coupled.
We show that the adaptation mechanism tends to bring both the infection level and the average degree of the communities closer to each other. In this rewiring mechanism, the local neighborhood of a node changes and is never restored to its previous state. However, in real life people tend to preserve their neighborhood of friends. We propose a more realistic adaptation mechanism, where susceptible nodes temporarily deactivate their links to infected neighbors and reactivate the links to those neighbors after they recover. Although the original network is static, the subnetwork of active links is evolving. We derive mean field equations that predict the behavior of the system at the steady state. Two different regimes are observed. In the slow network dynamics regime, the adaptation simply reduces the effective average degree of the network. However, in the fast network dynamics regime, the adaptation further suppresses the infection level by reducing the number of dangerous links. In addition, non-monotonic dependence of the active degree on the deactivation rate is observed. We extend the temporary deactivation adaptation mechanism to a scale-free network, where the degree distribution shows heavy tails. It is observed that the tail of the degree distribution of the active subnetwork has a different exponent than that of the original network. We present a heuristic explanation supporting that observation. We derive improved mean field equations based on a new moment closure approximation, obtained by considering the active degree distribution conditioned on the total degree. These improved mean field equations show better agreement with the simulation results than standard mean field analysis based on homogeneity assumptions.
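The temporary-deactivation mechanism described above can be sketched as a discrete-time simulation (a minimal illustration, not the dissertation's actual model; the rates and the toy ring network are assumptions):

```python
import random

def sis_with_deactivation(edges, infected, beta, gamma, p_off, steps, seed=1):
    """Discrete-time SIS on a static network in which a susceptible node
    deactivates a link to an infected neighbour with prob. p_off per step;
    deactivated links are restored once both endpoints are susceptible."""
    rng = random.Random(seed)
    state = {n: (1 if n in infected else 0) for e in edges for n in e}
    active, inactive = set(edges), set()
    for _ in range(steps):
        # infection attempts along currently active S-I links
        newly_infected = set()
        for u, v in active:
            for s, i in ((u, v), (v, u)):
                if state[s] == 0 and state[i] == 1 and rng.random() < beta:
                    newly_infected.add(s)
        # susceptible endpoints may deactivate remaining S-I links
        for e in list(active):
            u, v = e
            if state[u] != state[v] and rng.random() < p_off:
                active.remove(e)
                inactive.add(e)
        # recovery of infected nodes
        for n in state:
            if state[n] == 1 and rng.random() < gamma:
                state[n] = 0
        # restore links whose endpoints are both susceptible again
        for e in list(inactive):
            if state[e[0]] == 0 and state[e[1]] == 0:
                inactive.remove(e)
                active.add(e)
        for n in newly_infected:
            state[n] = 1
    return state, active, inactive

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
state, active, inactive = sis_with_deactivation(ring, {0}, 0.4, 0.3, 0.8, 50)
```

Note that, unlike rewiring, the original edge set is invariant here: every edge is at all times either active or temporarily deactivated, so the static network structure is preserved.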
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Ruogu. "Principles and Methods of Adaptive Network Algorithm Design under Various Quality-of-Service Requirements." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354642003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bit-Monnot, Arthur. "Temporal and Hierarchical Models for Planning and Acting in Robotics." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/17704/1/Arthur%20Bit-Monnot.pdf.

Full text
Abstract:
The field of AI planning has seen rapid progress over the last decade and planners are now able to find plans with hundreds of actions in a matter of seconds. Despite this important progress, robotic systems still tend to have a reactive architecture with very little deliberation on the course of the plan they might follow. In this thesis, we argue that a successful integration with a robotic system requires the planner to have capacities for both temporal and hierarchical reasoning. The former is indeed a universal resource central to many robot activities, while the latter is a critical component for the integration of reasoning capabilities at different abstraction levels, typically starting with a high-level view of an activity that is iteratively refined down to motion primitives. As a first step to carry out this vision, we present a model for temporal planning unifying the generative and hierarchical approaches. At the center of the model are temporal action templates, similar to those of PDDL, complemented with a specification of the initial state as well as the expected evolution of the environment over time. In addition, our model allows for the specification of hierarchical knowledge, possibly with partial coverage. Consequently, our model generalizes the existing generative and HTN approaches together with an explicit time representation. In the second chapter, we introduce a planning procedure suitable for our planning model. In order to support hierarchical features, we extend the existing Partial-Order Causal Link approach used in many constraint-based planners with the notions of task and decomposition. We implement it in FAPE (Flexible Acting and Planning Environment) together with automated problem analysis techniques used for search guidance. We show FAPE to have performance similar to state-of-the-art temporal planners when used in a generative setting.
The addition of hierarchical information leads to further performance gains and allows us to outperform traditional planners. In the third chapter, we study the usual methods used to reason on temporal uncertainty while planning. We relax the usual assumption of total observability and instead provide techniques to reason on the observations needed to keep a plan dispatchable. We show how such needed observations can be detected at planning time and incrementally dealt with by considering the appropriate sensing actions. In a final chapter, we discuss the place of the proposed planning system as a central component for the control of a robotic actor. We demonstrate how the explicit time representation facilitates plan monitoring and action dispatching when dealing with contingent events that require observation. We take advantage of the constraint-based and hierarchical representation to facilitate both plan-repair procedures as well as opportunistic plan refinement at acting time.
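Temporal planners of this kind typically maintain a Simple Temporal Network over the time points of the plan, and consistency checking reduces to negative-cycle detection in the distance graph. A minimal sketch of that check (not FAPE's actual code):

```python
def stn_consistent(n, constraints):
    """Check consistency of a Simple Temporal Network over n time points
    by detecting negative cycles in its distance graph (Floyd-Warshall).
    Each constraint (i, j, w) encodes t_j - t_i <= w."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)  # keep the tightest bound per edge
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # a negative diagonal entry means t_i - t_i < 0: a contradiction
    return all(d[i][i] >= 0 for i in range(n))
```

A lower bound t_1 - t_0 >= 3 is written as (1, 0, -3); combined with (0, 1, 10) the network is consistent, whereas demanding t_1 - t_0 >= 20 as well would make it inconsistent.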
APA, Harvard, Vancouver, ISO, and other styles
45

Comin, Carlo. "Complexité dans les Jeux Infinis sur les Graphes et les Réseaux de Contraintes Temporelles." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1061/document.

Full text
Abstract:
This dissertation deals with a number of algorithmic problems motivated by automated temporal planning and formal verification of reactive and finite-state systems. We focused on game-theoretical methods to obtain novel insights, improved complexity bounds, and faster algorithms for the following models: Hyper Temporal Networks, Conditional Simple/Hyper Temporal Networks, Update Games, Muller McNaughton Games, and Mean Payoff Games.
APA, Harvard, Vancouver, ISO, and other styles
46

Gong, Shujia. "Solutions to constrained path computation in multi-layer networks." Fairfax, VA : George Mason University, 2007. http://hdl.handle.net/1920/2932.

Full text
Abstract:
Thesis (Ph. D.)--George Mason University, 2007.
Title from PDF t.p. (viewed Jan. 18, 2008). Thesis director: Bijan Jabbari. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Vita: p. 113. Includes bibliographical references (p. 109-112). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
47

Trick, Michael Alan. "Networks with additional structured constraints." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/23371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nikbakht, Homa. "Networks with mixed-delay constraints." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT046.

Full text
Abstract:
Modern wireless communication networks have to accommodate different types of data traffic with different latency constraints. In particular, delay-sensitive video applications represent an increasing portion of data traffic. Modern networks also have to accommodate high total data rates, which they can accomplish for example with cooperating terminals or with helper relays such as drones. However, cooperation typically introduces additional communication delays, and is thus not applicable to delay-sensitive data traffic. This thesis focuses on interference networks with mixed-delay constraints and on system architectures where neighbouring transmitters and/or neighbouring receivers can cooperate. In such systems, delay-sensitive messages have to be encoded and decoded without further delay and thus cannot benefit from available cooperation links. We propose various coding schemes that can simultaneously accommodate the transmission of both delay-sensitive and delay-tolerant messages. For the proposed schemes we analyze the multiplexing gains (MG) they achieve over Wyner's soft hand-off network, Wyner's symmetric network, the hexagonal network and the sectorized hexagonal network. For Wyner's soft hand-off network and Wyner's symmetric network, we also provide tight information-theoretic converse results and thus establish the exact set of MG pairs that can simultaneously be achieved for delay-sensitive and delay-tolerant data. These results demonstrate that when both transmitters and receivers cooperate and the cooperation rates are sufficiently large, it is possible to achieve the largest MG for delay-sensitive messages without penalizing the maximum sum MG of both delay-sensitive and delay-tolerant messages. In contrast, under our proposed schemes, the sending of delay-sensitive data in hexagonal models decreases the maximum sum MG.
This penalty vanishes when we consider the sectorized hexagonal network, where each cell is divided into three non-interfering sectors by employing directional antennas at the base stations. We further propose similar coding schemes for scenarios with different types of random user activity. We specifically consider two setups. In the first setup, each active transmitter always has delay-tolerant data to send and delay-sensitive data arrival is random. In the second setup, both delay-tolerant and delay-sensitive data arrivals are random. The obtained MG regions show that in the first setup, increasing the delay-sensitive MG always decreases the sum MG. In contrast, in the second setup, for certain parameters, the highest sum MG is achieved at maximum delay-sensitive MG, and thus increasing the delay-sensitive MG provides a gain in sum MG. Additionally, we also study a cloud radio access network with mixed delay constraints, i.e., where each mobile user can simultaneously send a delay-sensitive and a delay-tolerant stream and only the delay-tolerant data is jointly decoded at the cloud unit. For this network, we derive inner and outer bounds on the capacity region under mixed delay constraints, and we exactly characterize the optimal MG region. At high signal-to-noise ratio (SNR), our results show that for moderate fronthaul capacities, the maximum MG for delay-sensitive messages remains unchanged over a large regime of small and moderate MGs of delay-sensitive messages. The sum MG is thus improved if some of the messages can directly be decoded at the base stations. At moderate SNR, the results show that when the data rate of delay-sensitive messages is small or moderate, the achievable sum rate is constant.
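For reference, the multiplexing gain (MG) of a rate R(P) at power P is the high-SNR prelog; the abstract does not restate it, so the following is the standard definition rather than the thesis's exact notation (some works normalize by log P instead of (1/2) log P, depending on whether signals are complex or real):

```latex
\mathsf{MG} \;=\; \lim_{P \to \infty} \frac{R(P)}{\tfrac{1}{2}\log_2 P}
```

An MG pair for the two traffic types is then achievable if there exist rate pairs for delay-sensitive and delay-tolerant messages attaining those prelogs simultaneously.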
APA, Harvard, Vancouver, ISO, and other styles
49

Terrasson, Guillaume. "Contribution à la conception d'émetteur-récepteur pour microcapteurs autonomes." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13682/document.

Full text
Abstract:
The study of wireless microsensor networks clearly highlights their main constraint: energy autonomy. Indeed, these autonomous, communicating microcomponents, also called network nodes, are scattered in locations that are sometimes barely or not at all accessible. The goal of our work is to propose a transceiver design method adapted to this type of network. Starting from a system-level model highlighting the predominant share of the RF module in the mean power consumption of a node, we developed three new design tools corresponding to different levels of modelling of the communication chain. Their joint use and the simulation results obtained allow us to relate the specifications and performance of an RF module to its power consumption. Combining these tools in an iterative design method allowed us to dimension a communication chain under a power consumption constraint. Finally, we designed, fabricated and tested an 868 MHz Low Noise Amplifier (LNA) which presents very interesting power consumption characteristics.
APA, Harvard, Vancouver, ISO, and other styles
50

Irava, Venkata Srinivas. "Low-cost delay-constrained multicast routing heuristics and their evaluation." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Summer2006/v%5Firava%5F072106.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
