Dissertations / Theses on the topic 'Parameters of network'

Consult the top 50 dissertations / theses for your research on the topic 'Parameters of network.'


1

Åkesson, Emma. "Information visualization of network parameters in private cellular network solutions." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280858.

Abstract:
In the upcoming years, industrial enterprises are expected to undergo a major transformation as the Internet of Things (IoT) reaches widespread adoption. A key enabler behind this transformation, known as Industry 4.0, is the 5th generation of cellular networks (5G). Through privately owned networks, enterprises will be able to use 5G technology to tailor the network to their needs in terms of security, reliability, and quality of service. Although much of the technology is currently in place, few efforts have been dedicated to helping enterprises understand and optimise the value that this new solution brings. One way of making 5G more accessible is through information visualization of its data. Dashboards are today the most widely adopted tool for processing data in organisations. This study aimed at examining the affordances and challenges of information visualization of 5G network data in such a tool, in order to make 5G more accessible. A large number of commercial network management dashboards were reviewed in relation to research on effective dashboard design, and a prototype was developed and evaluated with seven user experience experts. Results from the expert review suggest that information visualization clearly aided communication of the five visualized network parameters: throughput, latency, availability, coverage, and devices. However, to fully examine the usefulness of the tool, further research on its fit in an industry context is needed.
2

Biswas, Sanjeet Kumar. "Analysis and comparison of network performance with different network parameters." FIU Digital Commons, 1999. http://digitalcommons.fiu.edu/etd/1703.

Abstract:
The purpose of this study was to analyze network performance by observing the effect of varying network size and data link rate on one of the most commonly found network configurations. Computer networks have been growing explosively. Networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Communication takes place through networks that form the basis of transfer of information. The number and type of components may vary from network to network depending on several factors, such as requirements and the actual physical placement of the networks. There is no fixed network size: a network can be very small, consisting of five to six nodes, or very large, consisting of over two thousand nodes. The varying network sizes make it very important to study network performance so as to be able to predict the functioning and suitability of a network. The findings demonstrated that network performance parameters such as global delay, load, router processor utilization, and router processor delay are affected significantly by the increase in the size of the network, and that there exists a correlation between the various parameters and the size of the network. These variations depend not only on the magnitude of the change in the actual physical area of the network but also on the data link rate used to connect the various components of the network.
3

Ikiz, Suheyla. "Performance Parameters Of Wireless Virtual Private Network." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607094/index.pdf.

Abstract:
İkiz, Süheyla. M.Sc., Department of Information Systems. Supervisor: Assoc. Prof. Dr. Nazife Baykal; Co-Supervisor: Assist. Prof. Dr. Yusuf Murat Erten. January 2006, 78 pages.
As the use of PCs and handheld devices increases, wireless communication is also expected to grow. One of the major concerns in wireless communication is security. A Virtual Private Network (VPN), which ensures three main aspects of security (authentication, accountability, and encryption), is the most secure solution that can be used in wireless networks. Most VPNs are built on the IP Security protocol (IPSec) to support end-to-end secure data transmission. IPSec is a well-understood and widely used mechanism for wired network communication. Because wireless networks have limited bandwidth and wireless devices have limited power and less capable CPUs, the performance of these networks when VPNs are used is an important research area. We have investigated the use of VPNs in wireless LANs to provide end-to-end security. We selected IPSec as the VPN protocol and investigated the effects of using IPSec on the throughput, packet loss, and delay of wireless LANs. For this purpose, we set up a test bed and based our results on actual measurements obtained from experiments performed on it. The wireless LAN we used is an 802.11g network, and the results show that network performance is adversely affected when VPNs are used, but the degradation is not as bad as expected.
4

Ramaisa, Motlalepula. "Inferring congestion from delay and loss characteristics using parameters of the three-parameter Weibull distribution." Diss., Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-08282007-112036.

5

Gustavsson, Jonas. "Automated Performance Optimization of GSM/EDGE Network Parameters." Thesis, Linköping University, Linköping University, Communication Systems, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-52565.

Abstract:

The GSM network technology has been developed and improved over several years, which has led to increased complexity. This complexity results in more network parameters, which, together with different scenarios and situations, form a complex set of configurations. The definition of the network parameters is generally a manual process using static values during test execution. This practice can be costly, difficult and laborious, and as network complexity continues to increase, this problem will continue to grow. This thesis presents an implementation of an automated performance optimization algorithm that utilizes genetic algorithms to optimize the network parameters. The implementation has been used to show that the concept of automated optimization works, and most of the work has been carried out in order to use it in practice. The implementation has been applied to the Link Quality Control algorithm and the Improved ACK/NACK feature, which is a part of GSM EDGE Evolution.
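The genetic-algorithm loop this abstract describes can be illustrated with a minimal sketch. The real-valued parameter encoding, truncation selection, toy fitness function, and mutation settings below are illustrative assumptions, not the thesis's actual implementation (which tunes Link Quality Control parameters against a GSM/EDGE test environment):

```python
import random

def genetic_optimize(fitness, n_params, pop_size=20, generations=50, seed=0):
    """Minimal genetic algorithm: truncation selection, uniform crossover,
    single-gene Gaussian mutation. Returns the best parameter vector found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)          # two distinct elite parents
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n_params)]
            i = rng.randrange(n_params)          # mutate one gene, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in for a network-quality metric: peak at all parameters = 0.7.
best = genetic_optimize(lambda p: -sum((x - 0.7) ** 2 for x in p), n_params=3)
```

In practice the fitness function would be a network performance metric measured during test execution rather than this closed-form toy.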


6

Shaun, Ferdous Jahan. "Multi-Parameters Miniature Sensor for Water Network Management." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1138/document.

Abstract:
Water is a vital element for every living being on earth. Like many other dwindling natural resources, clean water faces strong pressure because of human activity and the rapid growth of the global population. The situation is so critical that clean water has been identified as one of the seventeen Sustainable Development Goals of the United Nations. Under these conditions, sustainable management of water resources is necessary. For this purpose, a smart solution for water network monitoring can be very helpful. However, commercially available solutions lack the compactness, self-powering capability, and cost competitiveness necessary to enable a large rollout over water networks. The present thesis takes place in the framework of a European research project, PROTEUS, which addresses these problems by designing and fabricating a multi-parameter sensor chip (MPSC) for water resources monitoring. The MPSC enables the measurement of 9 physical and chemical parameters, is reconfigurable, and is self-powered. The present thesis addresses more precisely the physical sensors: their design, optimization, and co-integration on the MPSC. The developed device exhibits state-of-the-art or better performance with regard to its redundancy, turn-down ratio, and power consumption. The manuscript is split into two main parts. Part I deals with non-thermal aspects of the MPSC, such as the pressure and conductivity sensors, as well as the fabrication process of the whole device (Chapters 1 and 2). The background of environmental monitoring is presented in Chapter 1 along with a state-of-the-art review. Chapter 2 describes the fabrication methods of the MPSC; preliminary characterization results of the non-thermal sensors are also reported in this chapter. Chapters 3 and 4, in Part II, deal with the thermal sensors (temperature and flow rate). Chapter 3 describes the many possible uses of electric resistances for sensing applications. Finally, in Chapter 4, we focus on flow-rate sensors before concluding and making a few suggestions for future work.
7

Tobolka, Lukáš. "Problematika návrhu síťové infrastruktury." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442356.

Abstract:
The aim of this diploma thesis is to examine the design of network infrastructure in practice. It covers the individual procedures, norms, and standards that must be followed during design, and includes a brief general overview of terminal elements, cable systems, and related matters. Methods for measuring optical lines and parameters for measuring metallic networks are described in general terms, and the possible complications that accompany implementation are briefly discussed. The network design methodology is also illustrated on the example of a specific building. Before the work is handed over, the entire infrastructure is analyzed and measured and the whole system is certified, with outputs in the form of measurement protocols.
8

Lux, Matthew William. "Estimation of gene network parameters from imaging cytometry data." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23082.

Abstract:
Synthetic biology endeavors to forward engineer genetic circuits with novel function. A major inspiration for the field has been the enormous success in the engineering of digital electronic circuits over the past half century. This dissertation approaches synthetic biology from the perspective of the engineering design cycle, a concept ubiquitous across many engineering disciplines. First, an analysis of the state of the engineering design cycle in synthetic biology is presented, pointing out the most limiting challenges currently facing the field. Second, a principle commonly used in electronics to weigh the tradeoffs between hardware and software implementations of a function, called co-design, is applied to synthetic biology. Designs to implement a specific logical function in three distinct domains are proposed and their pros and cons weighed. Third, automatic transitioning between an abstract design, its physical implementation, and accurate models of the corresponding system are critical for success in synthetic biology. We present a framework for accomplishing this task and demonstrate how it can be used to explore a design space. A major limitation of the aforementioned approach is that adequate parameter values for the performance of genetic components do not yet exist. Thus far, it has not been possible to uniquely attribute the function of a device to the function of the individual components in a way that enables accurate prediction of the function of new devices assembled from the same components. This lack presents a major challenge to rapid progression through the design cycle. We address this challenge by first collecting high time-resolution fluorescence trajectories of individual cells expressing a fluorescent protein, as well as snapshots of the number of corresponding mRNA molecules per cell. 
We then leverage the information embedded in the cell-cell variability of the population to extract parameter values for a stochastic model of gene expression more complex than typically used. Such analysis opens the door for models of genetic components that can more reliably predict the function of new combinations of these basic components.
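The kind of stochastic gene-expression model referred to above is commonly simulated with Gillespie's algorithm. A minimal sketch of the standard two-stage (mRNA and protein) model follows; the rate constants are illustrative placeholders, not parameter values extracted in the dissertation:

```python
import random

def gillespie_two_stage(k_m=2.0, g_m=1.0, k_p=5.0, g_p=0.1, t_end=50.0, seed=1):
    """Gillespie simulation of the standard two-stage gene expression model:
    transcription (k_m), mRNA decay (g_m * m), translation (k_p * m), and
    protein decay (g_p * p). Returns the final (mRNA, protein) counts."""
    rng = random.Random(seed)
    t, m, p = 0.0, 0, 0
    while t < t_end:
        rates = [k_m, g_m * m, k_p * m, g_p * p]
        total = sum(rates)                 # always > 0 since k_m > 0
        t += rng.expovariate(total)        # exponential waiting time to next event
        r = rng.random() * total           # pick a reaction proportionally to its rate
        if r < rates[0]:
            m += 1                         # transcription
        elif r < rates[0] + rates[1]:
            m -= 1                         # mRNA decay
        elif r < rates[0] + rates[1] + rates[2]:
            p += 1                         # translation
        else:
            p -= 1                         # protein decay
    return m, p
```

Repeated runs with different seeds produce the cell-to-cell variability that such parameter-inference methods exploit; with these rates the steady-state means are k_m/g_m = 2 mRNAs and k_m*k_p/(g_m*g_p) = 100 proteins.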
9

Karayaka, Hayrettin Bora. "Neural network modeling and estimation of synchronous machine parameters /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488195633519029.

10

McCloskey, Rosemary Martha. "Phylogenetic estimation of contact network parameters with approximate Bayesian computation." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58663.

Abstract:
Models of the spread of disease in a population often make the simplifying assumption that the population is homogeneously mixed, or is divided into homogeneously mixed compartments. However, human populations have complex structures formed by social contacts, which can have a significant influence on the rate and pattern of epidemic spread. Contact networks capture this structure by explicitly representing each contact that could possibly lead to a transmission. Contact network models parameterize the structure of these networks, but estimating their parameters from contact data requires extensive, often prohibitive, epidemiological investigation. We developed a method based on approximate Bayesian computation (ABC) for estimating structural parameters of the contact network underlying an observed viral phylogeny. The method combines adaptive sequential Monte Carlo for ABC, Gillespie simulation for propagating epidemics through networks, and a previously developed kernel-based tree similarity score. Our method offers the potential to quantitatively investigate contact network structure from phylogenies derived from viral sequence data, complementing traditional epidemiological methods. We applied our method to the Barabási-Albert network model. This model incorporates the preferential attachment mechanism observed in real world social and sexual networks, whereby individuals with more connections attract new contacts at an elevated rate (“the rich get richer”). Using simulated data, we found that the strength of preferential attachment and the number of infected nodes could often be accurately estimated. However, the mean degree of the network and the total number of nodes appeared to be weakly- or non-identifiable with ABC. Finally, the Barabási-Albert model was fit to eleven real world HIV datasets, and substantial heterogeneity in the parameter estimates was observed. 
Posterior means for the preferential attachment power were all sub-linear, consistent with literature results. We found that the strength of preferential attachment was higher in injection drug user populations, potentially indicating that high-degree “superspreader” nodes may play a role in epidemics among this risk group. Our results underscore the importance of considering contact structures when investigating viral outbreaks.
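Stripped of the adaptive sequential Monte Carlo and the kernel-based tree score, the core ABC idea can be sketched as plain rejection sampling. The toy model below (inferring a normal mean from its sample average, with a uniform prior) is an illustrative assumption, not the contact-network model of the thesis:

```python
import random, statistics

def abc_rejection(observed_summary, simulate, prior_sample, eps, n_samples=200, seed=2):
    """Plain rejection ABC: draw theta from the prior, simulate data, and keep
    theta whenever the simulated summary lands within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_summary) < eps:
            accepted.append(theta)
    return accepted                    # approximate posterior sample

# Toy model: the data summary is the mean of 30 Normal(theta, 1) draws.
def simulate(theta, rng):
    return statistics.fmean(rng.gauss(theta, 1) for _ in range(30))

post = abc_rejection(observed_summary=4.0,
                     simulate=simulate,
                     prior_sample=lambda rng: rng.uniform(0, 10),
                     eps=0.3)
```

The accepted values concentrate around the true parameter; the thesis replaces the simple distance here with a tree-kernel similarity between simulated and observed phylogenies.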
11

Muhammad, Sanusi. "Scalable and network aware video coding for advanced communications over heterogeneous networks." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7469.

Abstract:
This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service. In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources, ensuring that various levels of user-acceptable QoS are considered. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques. To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance for various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance using various types of video content and formats. Several algorithms are developed to dynamically adapt coding tools and parameters to the specific video content type, format, and transmission bandwidth. Because in heterogeneous networks the channel conditions, terminals, and users' capabilities and preferences change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. 
A technique with minimum delay, low bit rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured due to reported deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service. To further improve the flexibility and efficiency of the algorithm, a dynamic de-blocking filter and lambda value selection are analyzed and introduced in the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow the transmission of the entire bit-stream.
12

Štainer, Martin. "Síťový tester." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413073.

Abstract:
The objective of this thesis is the measurement of network parameters. A measurement methodology and a tester concept were designed, and based on the tester concept a plugin for Apache JMeter was implemented. Two experimental tests were run based on the designed methodology, with the objective of exploring the difference in performance between the QUIC and TCP protocols.
13

Chalasani, Roopa. "OPTIMIZATION OF NETWORK PARAMETERS AND SEMI-SUPERVISION IN GAUSSIAN ART ARCHITECTURES." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4129.

Abstract:
In this thesis we experiment extensively with two ART (adaptive resonance theory) architectures called Gaussian ARTMAP (GAM) and Distributed Gaussian ARTMAP (dGAM). Both of these classifiers have been used successfully in the past on a variety of applications. One of our contributions is an extensive experimental study of the GAM and dGAM network parameters, identifying ranges of these parameters for which the architectures attain good performance (good classification performance and small network size). Furthermore, we have implemented novel modifications of these architectures, called semi-supervised GAM and dGAM. Semi-supervision is a concept that has been used effectively with the FAM and EAM architectures, and in this thesis we answer the question of whether semi-supervision has the same beneficial effect on the GAM architectures. Finally, we compared the performance of GAM, dGAM, EAM, FAM and their semi-supervised versions on a number of simulated and real datasets. These experiments allowed us to draw conclusions regarding the comparative performance of these architectures.
14

Taylor, Jason Ashley. "Online in-situ estimation of network parameters under intermittent excitation conditions." Auburn, Ala., 2008. http://hdl.handle.net/10415/1558.

15

Dhond, Anjali 1977. "Application of neural network techniques for modeling of blast furnace parameters." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/17490.

Abstract:
This thesis discusses the prediction of various output variables in a blast furnace and compares the predictive ability of multi-layer perceptron (MLP) neural networks with other blast furnace prediction techniques. The output variables (hot metal temperature, silicon content, slag basicity, RDI, and +10) are all modeled using MLP networks. Different solutions are proposed for preprocessing the original data and finding the most relevant input variables. The NNRUN software is used to find the best MLP neural network. Finally, methods to control the output variables in the blast furnace are examined and a derivative-based sensitivity analysis is discussed.
16

Latzko, Vincent, Christian Vielhaus, and Frank H. P. Fitzek. "Usecase Driven Evolution of Network Coding Parameters Enabling Tactile Internet Applications." IEEE, 2020. https://tud.qucosa.de/id/qucosa%3A75006.

Abstract:
Present-day and future network protocols that include and implement Forward Error Correction are configurable by internal parameters, and typically require expert knowledge to set up. We introduce a framework to systematically, objectively and efficiently determine parameters for Random Linear Network Codes (RLNC). Our approach uses an unbiased, consistent simulator in an optimization loop and utilizes a customizable, powerful and extendable parametric loss function. This makes it possible to tailor existing protocols to various use cases, including ultra-reliable low-latency communication (URLLC) codes. Successful configurations exploring the search space are under evolutionary pressure and are written into a database for instant retrieval. We demonstrate three examples, Full Vector Coding, tail RLNC, and PACE, with a different focus for each.
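For readers unfamiliar with RLNC, the basic encode/decode cycle can be sketched over GF(2), the smallest field. The protocols the paper tunes (Full Vector Coding, tail RLNC, PACE) expose further parameters such as field size and generation length, which this sketch deliberately omits:

```python
import random

def rlnc_encode(packets, n_coded, seed=3):
    """Each coded packet is the XOR of a random subset of the source packets,
    shipped together with its GF(2) coefficient vector."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):                      # avoid the useless all-zero vector
            coeffs[rng.randrange(len(packets))] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n_source):
    """Gaussian elimination over GF(2). Returns the source packets, or None if
    the received coefficient vectors do not have full rank."""
    rows = [[list(c), p] for c, p in coded]
    reduced = []                                 # one pivot row per source index
    for col in range(n_source):
        pivot = next((r for r in rows if r[0][col]), None)
        if pivot is None:
            return None                          # rank deficient: need more packets
        rows.remove(pivot)
        for r in rows + reduced:                 # clear this column everywhere else
            if r[0][col]:
                r[0] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1] ^= pivot[1]
        reduced.append(pivot)
    return [row[1] for row in reduced]           # reduced[i] now holds packet i

source = [0xAA, 0x55, 0xF0, 0x0F]
received = rlnc_encode(source, n_coded=8)
decoded = rlnc_decode(received, len(source))
```

With eight coded packets of four sources, decoding succeeds whenever the random coefficient vectors reach full rank; otherwise the decoder signals that more packets are needed, which is exactly the redundancy/latency trade-off the tuned parameters control.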
17

Erenay, Bulent. "Concurrent Supply Chain Network & Manufacturing Systems Design Under Uncertain Parameters." Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1459206318.

18

Abdelzaher, Ahmed F. "Identifying Parameters for Robust Network Growth using Attachment Kernels: A case study on directed and undirected networks." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4481.

Abstract:
Network growing mechanisms are used to construct random networks that have structural behaviors similar to existing networks, such as genetic networks, in an effort to understand the evolution of complex topologies. Popular mechanisms, such as preferential attachment, are capable of preserving network features such as the degree distribution. However, little is known about such randomly grown structures regarding robustness to disturbances (e.g., edge deletions). Moreover, preferential attachment does not target optimizing the network's functionality, such as information flow. Here, we consider a network to be optimal if its natural functionality is relatively high and it possesses some degree of robustness to disturbances. Specifically, a robust network would continue to (1) transmit information, (2) preserve its connectivity and (3) preserve internal clusters after failures. In an effort to pinpoint features that could replace or complement the degree of a node as criteria for preferential attachment, we present a case study on both undirected and directed networks. For undirected networks, we present a case study on wireless sensor networks in which we outline a strategy using Support Vector Regression. For directed networks, we formulate an Integer Linear Program to determine exact optimal structures of transcriptional regulatory networks, from which we can identify variations in structural features after optimization.
19

Zhang, Xu. "Cell identity allocation and optimisation of handover parameters in self-organised LTE femtocell networks." Thesis, University of Bedfordshire, 2013. http://hdl.handle.net/10547/335874.

Abstract:
Femtocell is a small cellular base station used by operators to extend indoor service coverage and enhance overall network performance. In Long Term Evolution (LTE), a femtocell works under macrocell coverage and combines with the macrocell to constitute a two-tier network. Compared to the traditional single-tier network, the two-tier scenario creates many new challenges, which have led the 3rd Generation Partnership Project (3GPP) to implement an automation technology called Self-Organising Network (SON) in order to achieve lower cost and enhanced network performance. This thesis focuses on inbound and outbound handovers (handover between femtocell and macrocell); in detail, it provides suitable solutions for predicting the intensity of femtocell handover, for Physical Cell Identity (PCI) allocation, and for handover triggering parameter optimisation. Moreover, these solutions are implemented in the structure of SON. In order to efficiently manage radio resource allocation, this research investigates the conventional UE-based prediction model and proposes a cell-based prediction model to predict the intensity of a femtocell's handovers, which overcomes the drawbacks of the conventional models in the two-tier scenario. The predictor is then used in the proposed dynamic group PCI allocation approach in order to solve the problem of PCI allocation for femtocells. In addition, based on SON, this approach is implemented in the structure of a centralised Automated Configuration of Physical Cell Identity (ACPCI). It overcomes the drawbacks of the conventional method by reducing inbound handover failure of Cell Global Identity (CGI). This thesis also tackles optimisation of the handover triggering parameters to minimise handover failure. A dynamic hysteresis-adjusting approach for each User Equipment (UE) is proposed, using the received average Reference Signal-Signal to Interference plus Noise Ratio (RS-SINR) of the UE as a criterion. 
Furthermore, based on SON, this approach is implemented in the structure of hybrid Mobility Robustness Optimisation (MRO). It is able to offer a uniquely optimised hysteresis value to each individual UE in the network. In order to evaluate the performance of the proposed approach against existing methods, a System Level Simulation (SLS) tool provided by the Centre for Wireless Network Design (CWiND) research group is utilised, which models the structure of two-tier communication in LTE femtocell-based networks.
APA, Harvard, Vancouver, ISO, and other styles
20

Erdurmaz, Muammer Sercan. "Neural Network Prediction Of Tsunami Parameters In The Aegean And Marmara Seas." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605134/index.pdf.

Full text
Abstract:
Tsunamis are characterized as shallow water waves with long periods and wavelengths. They occur through a sudden displacement of a water volume, and earthquakes are one of the main causes of tsunami development. Historical data for an observation period of 3500 years, starting from 1500 B.C., indicate that approximately 100 tsunamis occurred in the seas neighboring Turkey. Historical earthquake and tsunami data were collected and used to develop two artificial neural network models to forecast tsunami characteristics for future occurrences and to estimate the tsunami return period. An Artificial Neural Network (ANN) is a system that simulates the learning and thinking behavior of the human brain by experiencing measured or observed data. A first set of artificial neural networks is used to estimate future earthquakes that may create a tsunami and their magnitudes. A second set is designed for the estimation of tsunami inundation in relation to the tsunami intensity, the earthquake depth, and the earthquake magnitude predicted by the first set of neural networks. In the case study, the Marmara and Aegean regions are taken into consideration for the estimation process. Return periods, including that of the last earthquake to occur in the Turkish seas, the İzmit (Kocaeli) Earthquake of 1999, were utilized together with the average earthquake depths calculated for the Marmara and Aegean regions to predict the earthquake magnitude that may create a tsunami in these regions for return periods of 1-100 years starting from 2004. The obtained earthquake magnitudes were used together with tsunami intensities and earthquake depths to forecast the tsunami wave height at the coast. It is concluded that the neural network predictions were a satisfactory first step towards using earthquake parameters such as depth and magnitude in calculations of the average tsunami height on the shore.
APA, Harvard, Vancouver, ISO, and other styles
21

Heyns, Michael John. "Ensemble estimation and analysis of network parameters: strengthening the GIC modelling chain." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25277.

Full text
Abstract:
Ensemble Estimation and Analysis of Network Parameters - Strengthening the GIC Modelling Chain - Abstract Large grounded conducting networks on Earth's surface have long been known to be affected by solar activity and geomagnetic storms. Geomagnetically induced currents (GICs) in these quasi-antennas are just one of the effects. In modern times, society has become more and more dependent on electrical power and, as a result, power networks. These power networks form extensive grounded conductors and are susceptible to GICs, even at mid-latitude regions. Given a large enough event now, such as the Carrington event of 1859, the direct and knock-on results can be devastating. Such an event is more than just a possibility; it is only a matter of time. With this in mind, the study and modelling of the effects of GICs have become essential to ensure the future security of society in general. GIC modelling assumes that the resultant GIC at a specific node in a power network is linearly related to the horizontal vector components of the geoelectric field, which is induced by a plane-wave geomagnetic field. The linear GIC and geoelectric field relation is linked by a pair of network parameters, a and b. These parameters are not easily measurable explicitly but may be estimated empirically. Furthermore, these parameters are traditionally seen to include only network information and to remain constant given a stable network. In this work, a new empirical approach to derive estimates for a and b is presented in which the linear relation is solved simultaneously for all possible pairs of time instances. Given a geomagnetic storm time series (length n) of simultaneous GIC and geoelectric field data, taking all possible time instance pairs yields approximately n²/2 estimates for a and b. The resulting ensembles of parameter estimates are analysed and found to be approximately Cauchy-distributed.
Each individual estimate resulting from a single pair of time instances is not the true state of the system, but a possible state. Taking the ensemble as a whole, though, gives the most probable parameter estimate, which in the case of a Cauchy distribution is the median. These ensemble parameter estimates are used in the engineering link of the modelling chain, but the ensembles themselves allow further analysis into the nature of GICs. An improvement is seen when comparing the performance of the ensemble estimates applied to an out-of-sample dataset during the Halloween Storm of 2003 with previous GIC modelling in the South African power network using the same dataset. Analysis of the ensembles has verified certain ground assumptions (specifically the plane-wave assumption and network directionality) made as a first-order approximation in GIC modelling and has also shown that errors from these assumptions are absorbed into the empirically derived network parameters. Using a range of estimates from the ensemble, a GIC prediction band is produced. This in itself corresponds to an error estimate in the prediction. For the first time, it has been explicitly shown that empirically derived network parameters are correlated with the magnitude of the produced GIC. This behaviour is then used to refine the parameter estimation further and to allow for real-time dynamic network parameter estimation that further improves modelling.
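The pairwise ensemble idea described in this abstract can be sketched in a few lines; the network parameters, field values, and noise level below are made up for illustration, and this is a reconstruction of the general technique rather than the thesis code:

```python
import random
import statistics

# Sketch of the ensemble estimation: GIC(t) = a*Ex(t) + b*Ey(t).
# Solving this 2x2 system for every pair of time instances yields
# ~n^2/2 estimates of (a, b); the ensemble is heavy-tailed
# (approximately Cauchy), so the median gives the most probable value.
random.seed(0)
a_true, b_true = -70.0, 45.0      # hypothetical network parameters
n = 60
ex = [random.gauss(0.0, 1.0) for _ in range(n)]   # synthetic geoelectric field
ey = [random.gauss(0.0, 1.0) for _ in range(n)]
gic = [a_true * x + b_true * y + random.gauss(0.0, 0.5)
       for x, y in zip(ex, ey)]                   # noisy "measured" GIC

a_est, b_est = [], []
for i in range(n):
    for j in range(i + 1, n):
        # Cramer's rule on the pair of time instances (i, j)
        det = ex[i] * ey[j] - ex[j] * ey[i]
        if abs(det) < 1e-6:       # skip near-singular pairs
            continue
        a_est.append((gic[i] * ey[j] - gic[j] * ey[i]) / det)
        b_est.append((ex[i] * gic[j] - ex[j] * gic[i]) / det)

a_med = statistics.median(a_est)  # most probable estimate of a
b_med = statistics.median(b_est)  # most probable estimate of b
```

On synthetic data like this the medians recover the assumed parameters closely, while the ensemble mean can be thrown far off by the heavy tails, which is why the median is the natural estimator for a Cauchy-like distribution.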
APA, Harvard, Vancouver, ISO, and other styles
22

Chaney, Antwan. "QUALITY OF SERVICE PARAMETERS WITHIN A MIXED NETWORK FOR THE INET ENVIRONMENT." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604405.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The focus of the integrated Network Enhanced Telemetry (iNET) project is to enhance current telemetry technology (IRIG 106) while maintaining its reliability. The Mixed Networking environment is composed of a wired network and a modified wireless network based on the 802.11 standard. Determining the viability of the networking scheme within the iNET project is critical. QoS features such as delay and jitter are measures of performance specified by user conditions. These QoS features are measured against current legacy links. This paper will compare the three QoS levels (best effort, assured, and premium services) that the network provides and investigate QoS performance of the Mixed Network in the iNET environment. This will provide a framework for assessing the strengths and weaknesses of the Mixed Network as well as scoping further research.
APA, Harvard, Vancouver, ISO, and other styles
23

Shahraeeni, Mohammad Sadegh. "Inversion of seismic attributes for petrophysical parameters and rock facies." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4754.

Full text
Abstract:
Prediction of rock and fluid properties such as porosity, clay content, and water saturation is essential for exploration and development of hydrocarbon reservoirs. Rock and fluid property maps obtained from such predictions can be used for optimal selection of well locations for reservoir development and production enhancement. Seismic data are usually the only source of information available throughout a field that can be used to predict the 3D distribution of properties with appropriate spatial resolution. The main challenge in inferring properties from seismic data is the ambiguous nature of geophysical information. Therefore, any estimate of rock and fluid property maps derived from seismic data must also represent its associated uncertainty. In this study we develop a computationally efficient mathematical technique based on neural networks to integrate measured data and a priori information in order to reduce the uncertainty in rock and fluid properties in a reservoir. The post-inversion (a posteriori) information about rock and fluid properties is represented by the joint probability density function (PDF) of porosity, clay content, and water saturation. In this technique the a posteriori PDF is modeled by a weighted sum of Gaussian PDFs. A so-called mixture density network (MDN) estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. We solve several inverse problems with the MDN, compare results with the Monte Carlo (MC) sampling solution, and show that the MDN inversion technique provides a good estimate of the MC sampling solution. However, the computational cost of training and using the neural network is much lower than that of the solution found by MC sampling (by more than a factor of 10⁴ in some cases). We also discuss the design, implementation, and training procedure of the MDN, and its limitations in estimating the solution of an inverse problem. In this thesis we focus on data from a deep offshore field in Africa.
Our goal is to apply the MDN inversion technique to obtain maps of petrophysical properties (i.e., porosity, clay content, water saturation) and petrophysical facies from 3D seismic data. Petrophysical facies (i.e., non-reservoir, oil- and brine-saturated reservoir facies) are defined probabilistically based on geological information and values of the petrophysical parameters. First, we investigate the relationship (i.e., the petrophysical forward function) between compressional- and shear-wave velocity and petrophysical parameters. The petrophysical forward function depends on different properties of rocks and varies from one rock type to another. Therefore, after acquisition of well logs or seismic data from a geological setting, the petrophysical forward function must be calibrated with data and observations. The uncertainty of the petrophysical forward function comes from uncertainty in measurements and uncertainty about the type of facies. We present a method to construct the petrophysical forward function with its associated uncertainty from both sources above. The results show that introducing uncertainty in facies improves the accuracy of the petrophysical forward function predictions. Then, we apply the MDN inversion method to solve four different petrophysical inverse problems. In particular, we invert P- and S-wave impedance logs for the joint PDF of porosity, clay content, and water saturation using a calibrated petrophysical forward function. Results show that the posterior PDF of the model parameters provides reasonable estimates of measured well logs. Errors in the posterior PDF are mainly due to errors in the petrophysical forward function. Finally, we apply the MDN inversion method to predict 3D petrophysical properties from attributes of seismic data.
In this application, the inversion objective is to estimate the joint PDF of porosity, clay content, and water saturation at each point in the reservoir from the compressional- and shear-wave impedance obtained from the inversion of AVO seismic data. Uncertainty in the a posteriori PDF of the model parameters is due to different sources such as variations in effective pressure, bulk modulus and density of hydrocarbon, uncertainty of the petrophysical forward function, and random noise in recorded data. Results show that the standard deviations of all model parameters are reduced after inversion, which shows that the inversion process provides information about all parameters. We also applied the results of the petrophysical inversion to estimate 3D probability maps of non-reservoir facies and brine- and oil-saturated reservoir facies. The accuracy of the predicted oil-saturated facies at the well location is good, but due to errors in the petrophysical inversion the predicted non-reservoir and brine-saturated facies are ambiguous. Although the accuracy of results may vary due to different sources of error in different applications, the fast, probabilistic method of solving non-linear inverse problems developed in this study can be applied to invert well logs and large seismic data sets for petrophysical parameters in different applications.
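The form of the a posteriori PDF described above, a weighted sum of Gaussians whose parameters an MDN outputs, can be illustrated in one dimension (a generic sketch; the thesis works with the joint PDF of porosity, clay content, and water saturation, and the numbers below are made up):

```python
import math

def mixture_pdf(x, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture: sum_k w_k * N(x; m_k, s_k^2).

    An MDN's output layer parameterises exactly these weights, means,
    and (co)variances as functions of the measured data.
    """
    total = 0.0
    for w, m, s in zip(weights, means, sigmas):
        total += w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return total

# Bimodal posterior: two equally weighted facies hypotheses (illustrative values).
density = mixture_pdf(0.2, [0.5, 0.5], [0.15, 0.30], [0.03, 0.05])
```

A bimodal mixture like this is exactly what a single Gaussian cannot express: the posterior keeps both facies hypotheses alive instead of averaging them into one implausible value.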
APA, Harvard, Vancouver, ISO, and other styles
24

Nowotny, Klaus, and Dieter Pennerstorfer. "Network migration: do neighbouring regions matter?" Routledge by Taylor & Francis Group, 2019. http://dx.doi.org/10.1080/00343404.2017.1380305.

Full text
Abstract:
This paper analyses the role of the spatial structure of migrant networks in the location decision of migrants to the European Union at the regional level. Using a random parameters logit specification, a significant positive effect of migrant networks in neighbouring regions on migrants' location decisions is found. Although this spatial spillover effect is smaller than the effect of networks in the host regions, omitting to control for this spatial dependence results in a 40% overestimation of the effect of regional migrant networks on the location decision of newly arriving migrants.
APA, Harvard, Vancouver, ISO, and other styles
25

Takamizawa, Koichiro. "Analysis of Highly Coupled Wideband Antenna Arrays Using Scattering Parameter Network Models." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/11099.

Full text
Abstract:
Wideband phased arrays require very tight element spacing to permit wide-angle scanning of the main beam over a wide bandwidth. The consequence of tight spacing is very high mutual coupling among the elements in the array. Previous efforts by the Virginia Tech Antenna Group have shown that the strong coupling can be utilized in arrays to obtain broadband frequency response while maintaining a small element spacing. However, mutual coupling between elements in a tightly coupled array can sometimes dramatically change the operating frequency, bandwidth, and radiation pattern from those of the single isolated element. Thus, some fundamental questions remain regarding the effective operation of highly coupled arrays for beam forming, beam scanning, and aperture reconfiguration. Existing antenna pattern analysis techniques, including the active element pattern method, are inadequate for application to highly coupled arrays. This dissertation focuses on the development of a new antenna array analysis technique. The presented method is based on scattering parameter network descriptions of the array elements, the associated feed network, and the active element patterns. The developed model is general: it can be applied to an array of any size and configuration. The model can be utilized to determine the directivity, gain, and realized gain of arrays as well as their radiation efficiency and impedance mismatch. Using the network model, the relationship between the radiation pattern characteristics and the input impedance characteristics of array antennas becomes clear. Three types of source impedance matching conditions for array antennas are investigated using the model. A numerically simulated strip dipole array is used to investigate the effects of various impedance matching methods on the radiation pattern and impedance bandwidth.
An application of the network analysis is presented in an experimental investigation of a 3×3 Foursquare array test bed to further verify the concepts.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
26

Bongioanni, Vincent Italo. "Enhancing Network-Level Pavement Macrotexture Assessment." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/89326.

Full text
Abstract:
Pavement macrotexture has been shown to influence a range of safety and comfort issues including wet weather friction, splash and spray, ambient and in-vehicle noise, tire wear, and rolling resistance. While devices and general guidance exist to measure macrotexture, the wide-scale collection and use of macrotexture data is neither mandated nor typically employed in the United States. This work seeks to improve upon the methods used to calibrate, collect, pre-process, and distill macrotexture data into useful information that can be utilized by pavement managers. This is accomplished by (1) developing a methodology to evaluate and compare candidate data collection devices; (2) developing plans and procedures to evaluate the accuracy of high-speed network data collection devices with reference surfaces and measurements; (3) developing a method to remove erroneous data from emerging 3-D macrotexture sensors; (4) developing a model to describe the change in macrotexture as a function of traffic; and (5) distilling the final collected pavement surface profiles into parameters for the prediction of the important pavement surface properties mentioned above. Various high-speed macrotexture measurement devices were shown to have good repeatability (between 0.06 and 0.09 mm MPD), and the interchangeability of single-spot laser devices was demonstrated via a limits-of-agreement analysis. The operational factors of speed and acceleration were shown to affect the resulting MPD of several devices, and guidelines are given for vehicle speed and sensor exposure settings. Devices with single-spot and line lasers were shown to reproduce reference waveforms on manufactured surfaces within predefined tolerances. A model was developed that predicts future macrotexture levels (as measured by RMS) for pavements prone to bleeding due to rich asphalt content.
Finally, several previously published macrotexture parameters along with a suite of novel parameters were evaluated for their effectiveness in the prediction of wet weather friction and certain types of road noise. Many of the parameters evaluated outperformed the current metrics of MPD and RMS.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
27

Asim, Muhammad Ahsan. "Network Testing in a Testbed Simulator using Combinatorial Structures." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6058.

Full text
Abstract:
This report covers one of the most demanding issues for network users, namely network testing. Network testing in this study concerns performance evaluation of networks by gradually increasing the traffic load to determine the queuing delay for different traffic types. Testing of such operations is becoming complex and necessary due to the use of real-time applications such as voice and video traffic, in parallel with the elastic data of ordinary applications over WAN links. Large volumes of elastic data occupy almost 80% of the resources and cause delay for time-sensitive traffic. Performance parameters such as service outage, delay, packet loss, and jitter are tested to assure the reliability of the Quality of Service (QoS) provided in the Service Level Agreements (SLAs). Normally these network services are tested after deployment of the physical networks, in which case customers often experience unavailability (outage) of network services due to increased levels of load and stress. From the user-centric point of view these outages are violations and must be avoided on the net-centric end. In order to meet these challenges, network SLAs are tested on simulators in a lab environment. This study provides a solution to this problem in the form of a testbed simulator named the Combinatorial TestBed Simulator (CTBS). A prototype of this simulator was developed for conducting the experiment. It provides a systematic approach, based on combinatorial structures, for finding traffic patterns that exceed the queuing delay limit committed to in SLAs. Combinatorics is a branch of mathematics that deals with discrete and normally finite elements. In the design of CTBS, the technique of combinatorics is used to generate a variety of test data that cannot be generated manually for testing the given network scenario. To validate the design of CTBS, results obtained from pilot runs are compared with results calculated using a timeline.
After validation of the CTBS design, the actual experiment is conducted to determine the set of traffic patterns that exceed the threshold value of queuing delay for Voice over Internet Protocol (VoIP) traffic.
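As a toy illustration of why queuing delay is the critical SLA parameter here (this is the textbook M/M/1 result, not the CTBS traffic model), mean delay grows without bound as the offered load approaches link capacity:

```python
def mm1_mean_delay(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of an M/M/1 queue.

    Illustrates the load/delay behaviour the testbed probes: as the
    arrival rate approaches the service rate, delay explodes.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: offered load >= capacity")
    return 1.0 / (service_rate - arrival_rate)

# Delay at 50% vs 95% load on a link serving 100 packets/s:
light = mm1_mean_delay(50.0, 100.0)   # 0.02 s
heavy = mm1_mean_delay(95.0, 100.0)   # 0.2 s
```

Doubling the load from 50% to 95% multiplies the mean delay tenfold in this model, which is why traffic patterns near the delay threshold are the ones worth searching for systematically.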
APA, Harvard, Vancouver, ISO, and other styles
28

Fey, Enikö. "The Effect of Combining Network and Server QoS Parameters on End-to-End Performance." Thesis, KTH, Teleinformatik, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93516.

Full text
Abstract:
Application hosting is becoming a popular business; Application Service Providers (ASPs) need, however, to keep up with the increasing pace of the market. This implies that they have to provide infrastructure to an increasing number of clients and at the same time give QoS guarantees to these clients. One solution for ASPs to both guarantee a certain service level (QoS) for their clients and keep expanding would be to have enough resources to provide more than the maximum aggregate need of their clients. This may turn out to be an expensive or even impossible solution. An alternative is to share infrastructure between clients, offer some means of resource reservation, and use charging to ensure that clients only reserve the resources they need. It is, however, not an easy problem to solve, particularly if the procedure of adding new clients is to be automated and the resources dynamically allocated. The ICorpMaker framework being developed at the IBM Zurich Research Laboratory offers a solution to the above-named problems. In the ICorpMaker framework, dynamic resource allocation is achieved by letting clients modify the amount of resources allocated to them in a simple manner, requesting more or fewer resources than their current allotment. The difficulty in achieving the end-to-end performance the client desires lies in the fact that it is not certain how modifying resource allocation at the network and server levels, respectively, will combine to affect the end-to-end performance experienced by the end users of the service. The aim of this thesis project was to study, through measurements, the correlation between different network and server QoS parameters and the resulting end-to-end performance. The results obtained from these measurements answer the question of how to change the network and server resource allocations when a client's application does not perform in a satisfactory way and hence the client requests more resources.
Certain optimizations for the resource (re)allocation were also suggested based on the results.
APA, Harvard, Vancouver, ISO, and other styles
29

Dong, Wei. "Identification of Electrical Parameters in A Power Network Using Genetic Algorithms and Transient Measurements." Thesis, University of Nottingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Zhi Hui. "Impact parameters in an entrepreneurial career determination model from cognition and social network perspectives." Thesis, Manchester Metropolitan University, 2014. http://e-space.mmu.ac.uk/326217/.

Full text
Abstract:
The background to this study relates to the policy of the Chinese government to encourage university graduates to become entrepreneurs as a means of addressing unemployment in China. Far fewer than the expected number of university graduates, however, engage in an entrepreneurial career, which leads one to question the effectiveness of this policy initiative. A number of critical issues arise. First, although the government’s policy incentives may have a positive impact on the take-up of entrepreneurial careers, there is insufficient research to justify the kind of support currently being implemented by the Chinese government. Second, while some studies have shown that one’s social network is a significant attribute in the decision to become an entrepreneur, the empirical evidence is mixed. Other studies have shown that entrepreneurial cognition is a crucial antecedent to whether people choose to become entrepreneurs. Few studies, however, have examined the relationship between one’s social network and cognition, and their relationship to how entrepreneurial intentions are formed. In addressing these gaps, the thesis investigates how the characteristics of an individual’s social network affect the formation of his or her entrepreneurial intentions, both directly and via entrepreneurial cognition. The findings of this study contribute to the literature in three respects. The first contribution comes from the argument that one’s social network properties and in particular, one’s entrepreneurial social network, directly influence cognition during the formation of entrepreneurial intentions. The second contribution reveals alternative measures and hence explanations of entrepreneurship in relation to one’s social capital, one’s social network characteristics and other factors. The third contribution rests on the analytical approach, which uses the techniques of structural equation modeling (SEM) to reveal the relationship between critical realism and conceptions.
As the analysis shows, SEM is an appropriate and effective approach to confirmatory analysis. Essentially, it enables the integration of knowledge and, by drawing together parameter variables and latent variables, offers concurrent insight into the focal problem of why some people choose to become entrepreneurs.
APA, Harvard, Vancouver, ISO, and other styles
31

Ramachandra, Pradeepa. "A Study on the Impact of Antenna Downtilt on theOutdoor Users in an Urban Environment." Thesis, Linköpings universitet, Kommunikationssystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-80414.

Full text
Abstract:
Inter-site interference distribution acts as a basic limitation on how much performance a network service provider can achieve in an urban network scenario. There are many different ways of controlling these interference levels. One such method is tuning the antenna downtilt depending on the network situation. Antenna downtilt can also be seen as a powerful tool for load balancing in the network. This thesis work involves a study of the impact of antenna downtilt in an urban environment with a non-uniform user distribution. A realistic dual-ray propagation model is used to model the path gain from the base station to a UE. This propagation model is used along with a directional antenna radiation pattern model to calculate the overall path gain from the base station to a UE. Under such modeling, the simulation results show that the antenna downtilt plays a crucial role in optimizing network performance. The results show that the optimal antenna downtilt angle is not very sensitive to the location of the hotspot in the network, and that the antenna downtilt sensitivity is very much dependent on the network scenario. The coupling between the antenna downtilt and the elevation half-power beamwidth is also evaluated.
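A common way to model the elevation dependence this abstract describes is a 3GPP-style parabolic vertical pattern; the thesis's exact antenna model is not given here, so the formula and the default half-power beamwidth and side-lobe values below are assumptions for illustration:

```python
def vertical_gain_db(theta_deg, tilt_deg, hpbw_deg=10.0, sla_db=20.0):
    """Vertical antenna gain relative to boresight, in dB.

    Attenuation grows quadratically with the angle off the electrical
    downtilt and is floored at the side-lobe attenuation (SLA) level,
    as in the 3GPP antenna pattern commonly used in system-level
    simulations.
    """
    return -min(12.0 * ((theta_deg - tilt_deg) / hpbw_deg) ** 2, sla_db)
```

With this pattern the downtilt and the elevation half-power beamwidth couple directly: a UE seen half a beamwidth off the tilt angle already loses 3 dB, which is exactly the kind of interaction the thesis evaluates.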
APA, Harvard, Vancouver, ISO, and other styles
32

Fic, Miloslav. "Adaptace parametrů ve fuzzy systémech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221163.

Full text
Abstract:
This Master’s thesis deals with the adaptation of fuzzy system parameters, with a main focus on artificial neural networks. Current knowledge of methods connecting fuzzy systems and artificial neural networks is discussed in the literature review part of this work, and prior student theses are reviewed as well. The chapter focused on applying the methods verifies the classification ability of the chosen fuzzy-neural network with a Kohonen learning algorithm. A model of a fuzzy system with parameter adaptation based on a fuzzy-neural network with a Kohonen learning algorithm is then presented.
APA, Harvard, Vancouver, ISO, and other styles
33

Spies, Lucas Daniel. "Machine-Learning based tool to predict Tire Noise using both Tire and Pavement Parameters." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91407.

Full text
Abstract:
Tire-Pavement Interaction Noise (TPIN) becomes the main noise source contributor for passenger vehicles traveling at speeds above 40 kph. Therefore, it represents one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive studies since the 1970s. Still, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved in this phenomenon and their highly complex nature. It is acknowledged that the main noise mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN represents the only vehicle noise source strongly affected by an external factor such as pavement roughness. For the last decade, new machine learning algorithms to model TPIN have been implemented. However, their development relies on experimental data and does not provide strong physical insight into the problem. This research studied the correct configuration of such tools. More specifically, Artificial Neural Network (ANN) configurations were studied. Their implementation was based on the problem requirements (acoustic sound pressure prediction). Moreover, a customized neuron configuration showed improvements in the ANN's TPIN prediction capabilities. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART Road. The experimental data were used to develop an approach to account for the pavement profile when predicting TPIN. Finally, the new ANN configuration, along with the approach to account for pavement roughness, was combined with previous work to obtain what is the first reasonably accurate and complete tool to predict tire noise. This tool uses as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed.
Tire noise narrowband spectra for a frequency range of 400-1600 Hz are obtained as a result.
Master of Science
Tire-Pavement Interaction Noise (TPIN) becomes the main noise source contributor for passenger vehicles traveling at speeds above 40 kph. Therefore, it represents one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive studies since the 1970s. Still, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved in this phenomenon and their highly complex nature. It is acknowledged that the main noise mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN represents the only vehicle noise source strongly affected by an external factor such as pavement roughness. For the last decade, machine learning algorithms, based on the structure of the human brain, have been implemented to model TPIN. However, their development relies on experimental data and does not provide strong physical insight into the problem. This research focused on the correct configuration of such machine learning algorithms applied to the very specific task of TPIN prediction. Moreover, a customized configuration showed improvements in the TPIN prediction capabilities of these algorithms. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART Road. The experimental data were used to develop an approach to account for the pavement roughness when predicting TPIN. Finally, the new machine learning algorithm configuration, along with the approach to account for pavement roughness, was combined with previous work to obtain what is the first reasonably accurate and complete computational tool to predict tire noise. This tool uses as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed.
APA, Harvard, Vancouver, ISO, and other styles
34

Hershberger, Kyle M. "In-situ S-Parameter Analysis and Applications." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51839.

Full text
Abstract:
This thesis will begin with an investigation of the limitations associated with the predominant two-port stability analysis techniques with respect to multi-stage RF amplifier design. The primary focus will be to investigate and develop network analysis techniques that allow internal ports to be created within an RF circuit. This technique will facilitate the application of existing stability analysis techniques in ways that are not commonly known. Examples of situations where traditional network and stability analysis is insufficient will be presented, and the application of the newly developed techniques will be examined.
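The predominant two-port stability analysis the abstract refers to is commonly based on the Rollett K factor computed from the four S-parameters. As a hedged illustration of that kind of check (a generic sketch with made-up values, not code or data from the thesis):

```python
import numpy as np

def rollett_k(s11, s12, s21, s22):
    """Rollett stability factor K and |Delta| for a two-port S-matrix.

    K > 1 together with |Delta| < 1 indicates unconditional stability."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    return k, abs(delta)

# A well-matched, low-feedback two-port: unconditionally stable.
k, mag_delta = rollett_k(0.1 + 0j, 0.05 + 0j, 2.0 + 0j, 0.1 + 0j)
print(k > 1 and mag_delta < 1)  # → True
```

A single K value can hide instability internal to a multi-stage amplifier, which is precisely the motivation for the in-situ internal-port analysis the thesis develops.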
APA, Harvard, Vancouver, ISO, and other styles
35

Nam, Kyungdoo T. "A Heuristic Procedure for Specifying Parameters in Neural Network Models for Shewhart X-bar Control Chart Applications." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278815/.

Full text
Abstract:
This study develops a heuristic procedure for specifying parameters for a neural network configuration (learning rate, momentum, and the number of neurons in a single hidden layer) in Shewhart X-bar control chart applications. Also, this study examines the replicability of the neural network solution when the neural network is retrained several times with different initial weights.
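The Shewhart X-bar chart underlying this application places control limits at the center line ± 3σ/√n. A brief sketch of that standard computation (subgroup means, sigma, and n below are illustrative, not data from the study):

```python
import statistics

def xbar_limits(subgroup_means, sigma, n):
    """Shewhart X-bar chart control limits: center line +/- 3*sigma/sqrt(n),
    where sigma is the process standard deviation and n the subgroup size."""
    center = statistics.fmean(subgroup_means)
    margin = 3 * sigma / n ** 0.5
    return center - margin, center, center + margin

lcl, cl, ucl = xbar_limits([10.1, 9.9, 10.0, 10.2, 9.8], sigma=0.5, n=4)
print(round(lcl, 2), round(cl, 2), round(ucl, 2))  # → 9.25 10.0 10.75
```

The neural network in the study learns to flag out-of-control patterns that such fixed limits encode, with the learning rate, momentum, and hidden-layer size set by the proposed heuristic.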
APA, Harvard, Vancouver, ISO, and other styles
36

Koksal, Murat Miran. "Positioning Based On Tracking Of Signal Parameters In A Single Base Station Wimax Network Using Fingerprinting." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612248/index.pdf.

Full text
Abstract:
IEEE 802.16 is a point to multipoint broadband wireless access standard, designed from ground up for fast and reliable mobile networking. Several location-related MAC layer fields specified in the standard indicate that WiMAX networks can be convenient backbones for future positioning systems. Information encapsulated in MAC headers is especially important for single base station positioning systems which require fewer network resources than multiple reference station location systems, but need more location-related input data. In this thesis, an algorithm for positioning mobile stations in a single base station network is presented to investigate location capability of WiMAX systems. The algorithm makes use of fingerprinting to create a training database and seeks to find locations of mobile stations by tracking them according to their signal parameters. Experimental results give an idea about how a single base station positioning system performs in the absence of sufficient location-related data, and suggest that better results can be obtained if MAC headers specified in IEEE 802.16 standard can be accessed.
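Fingerprinting, as used here, matches measured signal parameters against a training database of known positions. A minimal nearest-neighbor sketch of that idea, with hypothetical fingerprint values (not the thesis's data or algorithm details):

```python
import math

# Hypothetical fingerprint database: (x, y) position -> signal vector
# (e.g. RSSI in dBm and CINR in dB at that position).
fingerprints = {
    (0, 0): [-60.0, 22.0],
    (10, 0): [-70.0, 18.0],
    (0, 10): [-75.0, 15.0],
    (10, 10): [-82.0, 12.0],
}

def locate(sample, k=2):
    """Estimate position as the centroid of the k closest fingerprints."""
    ranked = sorted(fingerprints, key=lambda p: math.dist(fingerprints[p], sample))
    nearest = ranked[:k]
    return (sum(p[0] for p in nearest) / k, sum(p[1] for p in nearest) / k)

print(locate([-61.0, 21.5], k=1))  # → (0.0, 0.0)
```

With a single base station, the signal vector can be extended with the MAC-layer fields the abstract mentions to compensate for the missing geometric diversity of multi-station systems.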
APA, Harvard, Vancouver, ISO, and other styles
37

Dhanasekaran, Arockia R. "A Dynamic State Metabolic Journey: From Mass Spectrometry to Network Analysis via Estimation of Kinetic Parameters." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1097.

Full text
Abstract:
In the post-genomic era, there is a dire need for tools to perform metabolic analyses that include the structural, functional, and regulatory analysis of metabolic networks. This need arose because of the lag between the two phases of metabolic engineering, namely, synthesis and analysis. Molecular biological tools for synthesis like recombinant DNA technology and genetic engineering have advanced a lot farther than tools for systemic analysis. Consequently, bioinformatics is poised to play an important role in bridging the gap between the two phases of metabolic engineering, thereby accelerating the improvement of organisms by using predictive simulations that can be done in minutes rather than mutant constructions that require weeks to months. In addition, metabolism occurs at a rapid speed compared to other cellular activities and has two states, dynamic state and steady state. Dynamic state analysis sheds more light on the mechanisms and regulation of metabolism than its steady state counterpart. Currently, several in silico tools exist for steady-state analysis of metabolism, but tools for dynamic analysis are lacking. This research focused on simulating the dynamic state of metabolism for predictive analysis of the metabolic changes in an organism during metabolic engineering. The goals of this research were accomplished by developing two software tools. Metabolome Searcher, a web-based high throughput tool, facilitates putative compound identification and metabolic pathway mapping of mass spectrometry data by applying genome-restriction. The second tool, DynaFlux, uses these compound identifications along with time course data obtained from a mass spectrometer in conjunction with the pathways of interest to simulate and estimate dynamic-state metabolic flux, as well as to analyze the network properties. 
The features available in DynaFlux are: 1) derivation of the metabolic reconstructions from Pathway Tools software for the simulation; 2) automated building of the mathematical model of the metabolic network; 3) estimation of the kinetic parameters, KR, v, Vmaxf, Vmaxr, and Kdy, using hybrid-mutation random-restart hill climbing search; 4) perturbation studies of enzyme activities; 5) enumeration of feasible routes between two metabolites; 6) determination of the minimal enzyme set and dispensable enzyme set; 7) imputation of missing metabolite data; and 8) visualization of the network.
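Feature 3 above estimates kinetic parameters with a hybrid-mutation random-restart hill climbing search. A generic sketch of random-restart hill climbing with Gaussian mutation (the objective, range, and settings are hypothetical, standing in for a model-vs-data error, not DynaFlux code):

```python
import random

def hill_climb(objective, lo, hi, restarts=5, steps=200, step_size=0.1, seed=1):
    """Minimize objective(x) over [lo, hi] by random-restart hill climbing
    with Gaussian mutation of the current candidate."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(restarts):
        x = rng.uniform(lo, hi)           # fresh random starting point
        f = objective(x)
        for _ in range(steps):
            cand = min(hi, max(lo, x + rng.gauss(0, step_size)))
            fc = objective(cand)
            if fc < f:                    # accept only improvements
                x, f = cand, fc
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy objective with its minimum at x = 2.
x, f = hill_climb(lambda v: (v - 2.0) ** 2, 0.0, 5.0)
print(abs(x - 2.0) < 0.2)  # → True
```

Random restarts reduce the risk of the climber stalling in a local minimum, which matters when fitting several coupled kinetic parameters at once.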
APA, Harvard, Vancouver, ISO, and other styles
38

Mohammed, Peshawa. "Deformation monitoring using GNSS:A study on a local network with preset displacements." Thesis, Högskolan i Gävle, Samhällsbyggnad, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-30153.

Full text
Abstract:
In the past two decades, the number of observations and the accuracy of satellite-based geodetic measurements like Global Navigation Satellite Systems (GNSS) have greatly increased, providing measured values of displacements and velocities of permanent geodetic stations. Establishing geodetic control networks and collecting geodetic observations in different epochs is a commonly used method for detecting displacements and, consequently, for disaster management. Selecting proper processing parameters for different types of monitoring networks is a critical factor in deformation monitoring analysis using GNSS, which is the main aim of this research. In this study, a simulation study and a controlled survey were performed using simultaneous GNSS measurements of 5 geodetic pillars established by Lantmäteriet at Gävle airport. Sensitivity analyses were performed on different types of monitoring networks using different sets of processing parameters. These scenarios consider different sets of parameters, different types of monitoring networks, and varying numbers of monitoring stations to evaluate the detectable displacements and compare them with the known millimeter (simulated) displacements. The results showed that the selection of processing parameters depends on the type and size of the monitoring network and the location of the monitoring stations. Analyses also show that online processing services can provide mm-cm level accuracy for displacement detection if sufficient observation time is available. Finally, checks were performed on two of the sample scenarios to find the minimum observation time required for recovering the simulated (preset) displacements most accurately.
APA, Harvard, Vancouver, ISO, and other styles
39

Yunda, Lozano Daniel. "Improving vertical handover performance for RTP streams containing voice : Using network parameters to predict future network conditions in order to make a vertical handover decision." Thesis, KTH, Kommunikationssystem, CoS, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-92019.

Full text
Abstract:
Wireless local area networks WLAN and Voice over IP technologies enable local low cost wireless telephony, while cellular networks offer wide-area coverage. The use of dual mode WLAN-cellular terminals should allow cost savings by automatically switching from GSM to WLAN networks whenever it is feasible. However, in order to allow user mobility during a call, a handover procedure for transferring a call between the WLAN interface and the cellular network should be defined. The decision algorithm that triggers such a handover is critical to maintain voice quality and uninterrupted communication. Information or measurements collected from the network may be used to anticipate when the connection will degrade to such a point that a handover is desirable in order to allow a sufficient time span for the handover’s successful execution. It is the delay in detecting when to make a handover and the time to execute it that motivates the need for a prediction. The goal of this thesis is therefore to present a method to predict when a handover should be made based upon network conditions. We selected a number of WLAN and VoIP software tools and adapted them to perform the measurements. These tools allowed us to measure parameters of the WLAN’s physical and link layers. Packet losses and jitter measurements were used as well. We have assumed that there is ubiquitous cellular coverage so that we only have to be concerned with upward handovers (i.e., from the WLAN to the cellular network and not the reverse). Finally we have designed and evaluated a mechanism that triggers the handover based on these measurements.
WLAN (wireless local area networks) and IP telephony together make inexpensive wireless telephony possible, while cellular networks offer wide coverage. Using dual-mode WLAN-cellular hardware terminals would reduce costs by automatically switching from GSM to WLAN whenever possible. However, to move an ongoing call between a WLAN and a cellular interface, a handover mechanism must be defined. The decision algorithm that triggers such a handover is of great importance for maintaining voice quality and uninterrupted communication. To allow a sufficient time span for executing the handover, information taken from the network can be used to predict when communication will degrade to the point that a handover is desirable. The delay in detecting when a handover should occur, and the time to execute it, motivate the need for prediction. This thesis introduces a method that predicts when a handover should start based on network conditions. We selected some WLAN and VoIP programs and adapted them to carry out the measurements. The programs allowed us to measure WLAN parameters at the physical and data-link layers. Packet loss and jitter measurements were used as well. We assumed that GSM service was available everywhere, so we only needed to handle upward handovers (i.e., from WLAN to the cellular network and not the reverse). We developed and tested a mechanism that starts the handover based on the network measurements.
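The core idea above, predicting from recent measurements when the link will degrade enough to warrant an upward handover, can be sketched as a simple linear-trend extrapolation. The threshold, horizon, and sample values are illustrative assumptions, not the thesis's actual decision rule:

```python
def predict_handover(rssi_history, threshold=-80.0, horizon=3):
    """Trigger an upward (WLAN -> cellular) handover if a linear fit over
    recent RSSI samples predicts the signal will drop below `threshold`
    within `horizon` future samples."""
    n = len(rssi_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(rssi_history) / n
    # Least-squares slope of RSSI vs. sample index.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rssi_history)) \
            / sum((x - mean_x) ** 2 for x in xs)
    # Extrapolate `horizon` samples past the last observation.
    predicted = mean_y + slope * (n - 1 + horizon - mean_x)
    return predicted < threshold

# Steadily degrading link: the trend crosses -80 dBm within 3 samples.
print(predict_handover([-70.0, -73.0, -76.0, -79.0]))  # → True
```

In practice the same trigger could combine signal strength with the packet loss and jitter measurements the thesis collects, so the handover begins while voice quality is still acceptable.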
The advisor is the same Ian Marsh who authored the dissertation http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10572
APA, Harvard, Vancouver, ISO, and other styles
40

Ho, Siu-kui, and 何兆鉅. "Sensitivity of parameters in transportation modelling on the implication of network requirement: a casestudy of Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B31975070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Pota, Zainab Abbas. "Simulation Analysis of Quality of Service Parameters for On-board Switching on ATM Network for Multimedia Applications." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1289802401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Franc, Jan. "Metody měření přenosových rychlostí v datových sítích." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218326.

Full text
Abstract:
The target of this master's thesis was the known methods for testing the quality of transfer parameters of data networks. I studied the RFC 2544 document to analyze these tests. Based on that information, and by studying other existing web-based applications, I designed a concept for my own application that allows measuring basic quality parameters of transfers made through the Internet (the parameters: downstream, upstream, latency, their variance, and a traced route). My application doesn't require any modifications on the user's system. It's built on the server-side programming language PHP and uses the relational database engine MySQL to store measurement and user data. On the client side, it's assisted by the JavaScript scripting language. Both registered users and visitors are allowed to perform the listed measurements. Registered users are able to browse the history of their own benchmark results and also to send messages to others. There is an administrative account to oversee the operations. Another part of my thesis work is an application for Windows that performs the same measurements but does not use JavaScript.
APA, Harvard, Vancouver, ISO, and other styles
43

Sukup, Luboš. "Metody měření výkonnostních a kvalitativních parametrů datových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-264927.

Full text
Abstract:
This master's thesis addresses the measurement of quality and performance parameters in data networks. It describes the main technologies in terms of how they affect quality and performance parameters, and the effect of these parameters on voice, video, and data services. Next, some methods for measuring parameters of a data network are listed. In the practical part, one method of measuring network parameters is selected, and the properties of this method are demonstrated through illustrative examples.
APA, Harvard, Vancouver, ISO, and other styles
44

Kou, Zhiqing. "Use of artificial neural network for predicting stage-discharge relationship and water quality parameters for selected Hawaii streams." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7002.

Full text
Abstract:
The goal of the study is to examine the efficacy of the artificial neural network (ANN) to develop stage-discharge relationships and to simulate water quality parameters. With data from the United States Geological Survey (USGS) for two Hawaii stream gaging stations, the Manoa Stream at Kanewai Field and the Waiakeakua Stream at Honolulu, ANN will be used to predict discharge for both stations, but only the Manoa Stream at Kanewai Field will be analyzed for water quality parameters such as dissolved oxygen, dissolved organic carbon, solids residue, and suspended sediment. For both stations, the performance of ANN is superior to the rating curves currently used by USGS. A network with one or two hidden layers does not make a significant difference for modeling those two rating curves, but it was found that the selection of the test data set is very important. For simulating water quality parameters, the network fails to learn the relation between input and target due to insufficient input parameters and the short length of record. For the Waiakeakua Stream at Honolulu station, the most important input parameters are the hydraulic radius and conveyance of the cross section where the discharge was measured. But for the Manoa Stream at Kanewai Field, it is one of the antecedent gage heights (H(t-2)) that contributes significantly to the network performance.
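The USGS rating curves that the ANN is compared against are conventionally fitted as a power law Q = a(H − h0)^b via linear regression in log space. A brief sketch of that baseline, using synthetic data and an assumed zero-flow gage height h0 (not the study's measurements):

```python
import math

# Synthetic stage-discharge pairs generated from Q = 5*(H - 0.2)^1.5
# (coefficients and the zero-flow height h0 are assumed for illustration).
h0 = 0.2
data = [(h, 5.0 * (h - h0) ** 1.5) for h in (0.5, 0.8, 1.1, 1.4)]

# Least-squares fit in log space: log Q = log a + b * log(H - h0).
xs = [math.log(h - h0) for h, _ in data]
ys = [math.log(q) for _, q in data]
n = len(data)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - b * mean_x)

def rating_curve(h):
    """Discharge predicted by the fitted rating curve."""
    return a * (h - h0) ** b

print(round(b, 3), round(a, 3))  # → 1.5 5.0
```

An ANN can outperform this baseline precisely where the stage-discharge relation departs from a single power law, at the cost of losing the closed-form expression.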
xiii, 216 leaves
APA, Harvard, Vancouver, ISO, and other styles
45

Ho, Siu-kui. "Sensitivity of parameters in transportation modelling on the implication of network requirement : a case study of Hong Kong /." [Hong Kong] : University of Hong Kong, 1986. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1233361X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Chun-Ling, and 林君玲. "Neural Network Parameters Optimization Using Swarm Intelligence." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/69922956622784763089.

Full text
Abstract:
Master's
National Dong Hwa University
Department of Electrical Engineering
94
Artificial neural networks are also known as parallel distributed processors, adaptive systems, self-organizing systems, neurocomputers, connectionist systems, etc. The model was developed with reference to human mind and brain activity. A neural network consists of many simple processing elements with connections. Neural networks are a new type of data processing and computing methodology inspired by biological models. Many researchers expect that neural networks can attain their own intelligence and learning ability, as the human brain does. Once such a neural network can be developed, most complicated problems and highly hazardous tasks could be assigned to these intelligent mechanisms without manual operation. During neural network training, many parameters must be set, such as the learning rate and the number of hidden nodes. These parameters not only directly influence the efficiency of the network but also demand much computation to find an optimal combination. In this thesis, detailed comparisons of meta-heuristic algorithms were made in order to choose a better algorithm for solving parameter optimization problems for neural networks. Based on their specifications, particle swarm optimization (PSO) was chosen as more suitable than other meta-heuristic algorithms. The PSO algorithm's behavior exhibits social restraint and self-cognition similar to biological colonies. The advantages of PSO are that it is simple in concept, easy to implement, computationally efficient, and only a few parameters need adjustment. In this thesis, PSO was applied to a feed-forward neural network (FFNN) to decide a suitable learning rate for the BSS problem, and to a radial basis function neural network (RBFNN) to decide a suitable number of hidden nodes. The experimental results show that, compared with other related methods, the proposed algorithm has higher robustness and efficiency for parameter adjustment of neural networks.
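As a hedged illustration of the optimizer the thesis adopts, here is a minimal one-dimensional particle swarm optimization over a parameter range such as a learning rate. The inertia and acceleration weights are common textbook defaults, and the toy error surface stands in for a network's validation error; none of it is the thesis's actual setup:

```python
import random

def pso(objective, lo, hi, particles=10, iters=50, seed=0):
    """Minimal 1-D particle swarm optimization. Inertia (w) and the
    cognitive/social weights (c1, c2) are common textbook defaults."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [rng.uniform(lo, hi) for _ in range(particles)]
    vs = [0.0] * particles
    pbest = xs[:]                               # personal best positions
    pbest_f = [objective(x) for x in xs]
    gi = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi], pbest_f[gi]     # global best so far
    for _ in range(iters):
        for i in range(particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
            if f < gbest_f:
                gbest, gbest_f = xs[i], f
    return gbest

# Toy validation-error surface with its minimum at a learning rate of 0.1.
best = pso(lambda r: (r - 0.1) ** 2, 0.001, 1.0)
print(abs(best - 0.1) < 0.01)  # → True
```

In the thesis's setting, `objective` would be the (much more expensive) training-and-validation run of the FFNN or RBFNN for a candidate learning rate or hidden-node count.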
APA, Harvard, Vancouver, ISO, and other styles
47

Lu, Ping-Chih, and 盧炳志. "The study of neural network approach on geotechnical parameters analysis." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/54683347387314190982.

Full text
Abstract:
Doctorate
National Cheng Kung University
Department of Civil Engineering
87
ABSTRACT Geotechnical engineering is predominantly concerned with the prediction and/or modeling of the engineering behavior of soil or rock masses in their response to applied load when they are used as a construction material or a support for engineered structures such as embankments, dams, and pavements. Progress in testing techniques and improvements in methods of analysis have tremendously enhanced the analytical ability of geotechnical engineers. However, due to the large volumes involved, a geotechnical engineer is usually restricted to investigating only a very small proportion of the soil or rock contained within a site of interest when trying to predict its engineering performance. Hence, a geotechnical engineer usually needs to predict the site characteristics based on limited information. In a conventional approach, the task of characterizing the site based on a limited number of test results consists of two main steps: (1) generalization of test results and (2) estimation of design soil parameters based on the generalized test results. The latter task involves a mapping from test results such as SPT blow counts (N values) to design parameters such as elastic modulus and drained friction angle. This mapping is usually carried out using some “empirical correlation,” a relation usually established through statistical regression analyses. Through these two steps, working profiles of design soil parameters may be established. Depending on the size of the project, knowledge of local geology, and experience, the geotechnical engineer may incorporate different degrees of conservatism in these two steps. Least-squares regression is a powerful technique that the geotechnical engineer has relied upon for establishing empirical equations for many decades. One problem with the regression method is that it requires a model (i.e., the form of the regression equation) to begin with. 
Since prior knowledge of the model is required, it is often difficult to choose the “right” model for conducting the regression analysis if the input/output relationship is highly non-linear and complex. No systematic guidance is available to search for the right model. It would be of interest to the geotechnical engineer to develop new methods that are more accurate than the existing ones, in light of the availability of more data and recent advances in data analysis techniques. In recent years, Artificial Neural Networks (ANNs) have been successfully applied in almost all branches of science and engineering. An ANN is a computational method inspired by studies of the nervous system in biological organisms. One important characteristic of ANNs is their ability to learn and generalize from examples and to produce meaningful solutions to problems even when the input data are incomplete. In mathematical terms, an ANN model estimates functions from examples, as do conventional statistical approaches. The difference is that no prior knowledge or physical model is required with the ANN approach. An ANN model is simpler in concept and might provide insight into some geotechnical engineering applications, albeit from a different angle, to complement existing analysis methods. The purpose of this study is to explore the potential of artificial neural network (ANN) approaches in the analysis of geotechnical engineering. Part 1 of this study provides a brief introduction to back-propagation (BP) networks and then shows how BP networks are used to predict soil properties based on testing results. If properly trained, BP networks can yield a good approximation of the soil properties of known sample data and provide an adequate generalization of the same soil properties for other samples. Several important issues in the training of ANNs are addressed in this study. 
These include: (1) scaling input/output vectors, (2) the number of neurons in the hidden layer, (3) the transfer function, (4) the error goal, (5) initial weights, (6) training algorithms, and (7) improving generalization. In Part 2 of this study, two new approaches are developed to handle some uncertainties in ANNs and improve the reliability of ANN models by incorporating fuzzy set theory. One proposed ANN approach involves a module for pre-processing input soil parameters and a module for post-processing network output. The pre-processing module screens the input data through a group of pre-defined fuzzy sets, and the post-processing module, on the other hand, defuzzifies the output from the network into a non-fuzzy value. In another ANN approach, fuzzy sets are used to represent the parameters of the neural network, such as weights and biases. A two-stage training method is presented for establishing the fuzzy parameters. One important characteristic of ANN methods is that an ANN approach does not require a physical or theoretical model to begin with. From an engineering viewpoint, however, the physical meaning of an analysis method is usually required. In Part 3 of this study, an application-oriented network concept is presented. In this approach, a category layer is introduced into the conventional network to represent the local experience and engineering judgement of engineers. In this study, the neural network methods are shown to perform well in all application examples. The artificial neural network method is an ongoing research tool in the area of geotechnical engineering; applications of the proposed models need to be studied with more data sets to validate the procedures and algorithms. Such studies could advance empirical understanding of the discipline.
APA, Harvard, Vancouver, ISO, and other styles
48

Higuerey, Evelitsa E. "Neural network modeling of process parameters for electrical discharge machining /." Diss., 1998. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:9831802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Peng, Li-Hsien, and 彭立賢. "Application of the Neural Network to CNC Controller Parameters Optimization." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5311062%22.&searchmode=basic.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Mechanical Engineering
107
CNC machine tools play an important role in the machinery industry. When a CNC machine tool is used for machining, there are three processing indexes: speed, accuracy, and surface quality. Due to Industry 4.0, products are gradually trending toward low volumes and high product variety. After a machine tool is shipped from the factory, it is configured with a set of standard controller parameters, but this set of parameters cannot meet all kinds of processing requirements. Therefore, it is extremely important to adjust the controller parameters for different workpiece quality requirements. This study conducts a time-consuming full factorial experiment in controller software and uses this data to train a pre-trained model. Through the concept of transfer learning, the pre-trained model's parameters are transferred to a model trained on a Taguchi orthogonal array experiment conducted on the machine tool. Finally, a genetic algorithm is used to find the best combination of parameters for different processing requirements. The position-loop feedback signal from the machine is used to verify the optimization. The optimized result of the genetic algorithm is compared with the original parameters and with the best parameters obtained from the Taguchi experiment. With the best parameters obtained from the Taguchi experiment, the speed index of the speed-priority criterion is improved by 95%, the corner accuracy index of the precision-priority criterion by 63.81%, and the corner surface quality index of the surface-quality-priority criterion by 87.02%. With the best parameters obtained through genetic algorithm optimization, the speed index of the speed-priority criterion is improved by 95%, the corner accuracy index of the precision-priority criterion by 55.18%, and the corner surface quality index of the surface-quality-priority criterion by 83.93%. 
Another part of this research is to develop a rapid measurement system for machine tools. This measurement system can dynamically measure the tool center point of the machine tool. Repeated experiments were carried out for different processing paths and feed rates. The experimental results show that the accuracy of the measurement system is about 0.04 mm. The three processing indexes optimized by the intelligent method are verified by this measurement system; measuring the path of the tool center point is closer to the actual machining condition. A highly accurate measurement system combined with intelligent parameter optimization will greatly increase the efficiency of machine tuning.
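The genetic algorithm stage described above can be sketched generically with tournament selection, uniform crossover, and Gaussian mutation. The objective, parameter bounds, and rates below are illustrative assumptions standing in for the controller-parameter search, not the thesis's actual settings:

```python
import random

def genetic_search(fitness, bounds, pop_size=20, gens=60, mut=0.2, seed=3):
    """Minimal real-coded genetic algorithm: tournament selection, uniform
    crossover, Gaussian mutation, and two-member elitism. Minimizes fitness."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                                   # elitism: keep two best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            for j, (lo, hi) in enumerate(bounds):       # Gaussian mutation
                if rng.random() < mut:
                    child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy two-parameter tuning problem with its optimum at (1.0, 2.0).
best = genetic_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2,
                      [(0.0, 5.0), (0.0, 5.0)])
print([round(v, 1) for v in best])
```

In the thesis's workflow, `fitness` would be the trained surrogate model's prediction of the chosen processing index (speed, accuracy, or surface quality) for a candidate set of controller parameters.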
APA, Harvard, Vancouver, ISO, and other styles
50

Su, Jau-Bo, and 蘇照博. "The Optimum System Parameters in a Cellular Green Communication Network." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50411067135796166591.

Full text
Abstract:
Master's
Da-Yeh University
Department of Electrical Engineering
103
In this article, the outage of a cellular wireless green communication network is evaluated under the premise of no change in circuit configuration, asking how efficient use of energy can be achieved. Generally, energy-efficient systems focus on reducing the transmitting power of cellular base stations and cell phones. In the working state of a base station, more than 50% of the energy is consumed by circuit processing, air conditioning, and other factors; energy consumption during off-peak periods is therefore much the same as at peak. The method described here reallocates users to different base stations. During off-peak periods, some base stations carry no users, so these base stations can be switched off to achieve energy conservation. Of course, the green communication scheme should not damage the users' QoS; a target download speed is set so that the system can ensure quality of service (QoS) for all users. Finally, the results show that overall system performance can be tuned through the parameters studied in this research, including the outage threshold, power, and communication distance. The aim of energy saving in a cellular green communication network can thereby be reached.
APA, Harvard, Vancouver, ISO, and other styles