Dissertations / Theses on the topic 'Multiple technologies'

To see the other types of publications on this topic, follow the link: Multiple technologies.


Consult the top 50 dissertations / theses for your research on the topic 'Multiple technologies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Charitou, Stefania. "The film multiple : technologies, sites, practices." Thesis, Goldsmiths College (University of London), 2013. http://research.gold.ac.uk/9469/.

Full text
Abstract:
This thesis examines the shifting conditions of the material and technological properties of the object of film, and subsequently of the idea of cinema, in the light of the transition from analogue to digital technologies. I suggest that this technological transition has ontological dimensions, which I examine by looking at spaces and places that encompass this transition, materially and conceptually. The study argues that the nature of film and cinema is multiple, in continuous states of ‘becoming’. The anchor of this study is post-Actor-Network theorist Annemarie Mol’s philosophical argument that objects ‘come into being’ according to the practices and sites in which they are placed. The thesis examines situated practice-based interactions between the two technologies, which shape relationships of power, replacement, exchange and collaboration. I explore those issues at four specific institutional sites: the gallery (focusing on three moving image exhibitions in London), the British Film Institute’s Archive in Berkhamsted, LUX (the UK agency for the distribution and collection of artists’ moving image), and the movie theatre’s projection room, which increasingly screens only digital films. This survey examines the situated practices of presentation, exhibition, print checking, archiving, restoration and theatrical projection of film. The aim of this study is to present a multiple object and a multiple idea that are defined not merely by technology, but rather by sites and, in particular, their operational practices, objectives and organization. By evaluating how analogue and digital technologies interplay in these sites, the study aims to highlight a multiplicity of film and cinema that unfolds and shifts in both space and time. The investigated practices evolve in electronic spaces, physical sites and particular locations. Furthermore, they expose different temporalities for analogue and digital film and indicate that cinema’s virtual nature transcends time. The situated practices unite and divide the analogue and digital technologies, exposing a manipulative relationship between tangible space, formed by the mechanical apparatus of projection, and network space, marked by the digital’s temporal and spatial ubiquity. At the same time, cinema is actualized in measurable time, while its phenomenon is formulated in the continuous movement and differentiation of duration.
APA, Harvard, Vancouver, ISO, and other styles
2

Andersson, Emelie. "Multiple Platform First : Design Guidelines for Multiple Platform Games." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160991.

Full text
Abstract:
This thesis investigates the two gaming platforms, PC and console, and how the interfaces of games on these platforms could be designed more efficiently, making it easier to release games on multiple platforms; in other words, what a Multiple Platform First method could look like. Little previous work exists on this problem, so this thesis gathers information from other industries as well as research on user interfaces in games in general. By looking at games running on both platforms, different best practices and common solutions were identified. A study was conducted testing different in-game components on users. The components were selected to test whether users would accept non-traditional components, since users notice when they are playing on an interface not intended for their platform. This complicates the study, because the "best" solution might not work if users do not accept it for the intended platform. Concepts were designed to combine the testing of solutions with the users' opinions of those solutions. The chosen concepts were researched both in the literature and by examining existing implementations in games. To enable user testing, the solutions were iterated from low-fidelity paper prototypes to high-fidelity prototypes playable in Unity. The prototypes were tested on users, with data gathered through think-aloud comments and questionnaire answers. This study presents a first draft of how a multiple-platform approach can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
3

Koopmans, Reinout Michael. "Asymmetric industry structures : multiple technologies, firm dynamics and profitability." Thesis, London School of Economics and Political Science (University of London), 1996. http://etheses.lse.ac.uk/1428/.

Full text
Abstract:
The origins of asymmetric firm sizes are analyzed in the first part of this thesis, modelling technology choice in a one-shot quantity game with homogeneous goods. For certain sizes of the market, more than one technology is chosen in equilibrium. Generally, the larger the market, the higher the fixed cost of the technology that is chosen in equilibrium. The trade-off between market size and concentration is non-monotonic, even if for any size of the market only the most fragmented market structure is considered. In the second part, consequences of asymmetric firm sizes are investigated. In Chapter 3, firm dynamics in the chemical sector are examined, distinguishing between the dynamics of scale and scope. The production capacity of firms in homogeneous bulk chemical markets converges in size on a market-by-market basis, resulting in a fragmented industry structure at a disaggregated level. However, the number of products chemical corporations produce within a category of (synthetic organic) chemicals diverges, leading to a more concentrated industry structure at higher levels of aggregation. These counteracting forces can potentially explain the persistence of concentration that has been observed in fast-growing chemical markets. In Chapter 4, it is shown that if the observed asymmetry between firms is consistent with a (subgame-perfect) equilibrium of some single- or multi-stage game, bounds exist that restrict the degree of asymmetry between the firms' profitability. Their shape is determined by industry factors. In particular, a higher sensitivity of a firm's profitability to its competitor's action rotates the bounds on the profitability-size trade-off anti-clockwise. This is tested for homogeneous-goods industries using a panel from the FTC Line of Business Data. Allowing for firm-specific fixed effects, some strong empirical support is found.
APA, Harvard, Vancouver, ISO, and other styles
4

Oliveira, Rúben Pedrosa. "Sensor networks with multiple technologies: short and long range." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22735.

Full text
Abstract:
Master's degree in Electronics and Telecommunications Engineering
Low-Power Wide Area Networks (LPWANs) are one set of technologies that are growing in the field of the Internet of Things (IoT). Due to their long-range capabilities and low energy consumption, LPWANs are the ideal technologies for sending small amounts of data occasionally. With their unique characteristics, LPWANs can be used in many applications and in different environments such as urban, rural and even indoor. The work developed in this dissertation presents a study of the LPWAN LoRa technology, testing and evaluating its range, signal quality properties and its performance in delivering data. For this, three distinct scenarios are proposed and tested. The inclusion of LoRa in a multi-technology data gathering platform is the key objective of this dissertation. To this end, the following are proposed: (1) an organization based on clusters of sensor nodes; (2) a Media Access Control (MAC) protocol to provide efficient communications over the LoRa technology; and finally, (3) a Connection Manager that is capable of managing the different technologies available in the sensor nodes and that is able to adapt its actions according to the acquired data type. The performed tests aim to determine which types of parameters can influence the performance of the overall proposed solution, as well as the advantages of a multi-technology approach in a data gathering platform.
APA, Harvard, Vancouver, ISO, and other styles
5

DeRoest, Gary Eugene. "How People With Multiple Sclerosis Experience Web-Based Instructional Technologies." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7375.

Full text
Abstract:
People with the autoimmune disease multiple sclerosis (MS) have few options for educational activities. Although web-based instruction may be a viable option, little is known about how people with MS perceive this form of learning. The purpose of this qualitative study was to understand the experiences of people with MS regarding web-based instruction. Three learning structures--differentiated instruction, collaborative learning, and assistive technology--provided the conceptual framework for this research. Nine volunteers from the Pacific Northwest area of the United States who have MS were individually interviewed for this basic qualitative study. Transcripts were analyzed using open, axial, and selective coding. The results indicated that all participants found personal and professional benefits in their experience with web-based instruction and used course management systems to communicate successfully with instructors or peers. Participants also noted that these management systems did not directly aggravate their MS symptoms. Findings from the study may be useful to individuals with MS in effectively managing their educational choices. The results could also be used by learning institutions to improve access to education and allow individuals with MS to participate more fully in training opportunities.
APA, Harvard, Vancouver, ISO, and other styles
6

Jamalipour, Abbas, Tadahiro Wada, and Takaya Yamazato. "A Tutorial on Multiple Access Technologies for Beyond 3G Mobile Networks." IEEE, 2005. http://hdl.handle.net/2237/6614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hilgers, Brandon. "SRAM Compiler For Automated Memory Layout Supporting Multiple Transistor Process Technologies." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1423.

Full text
Abstract:
This research details the design of an SRAM compiler for quickly creating SRAM blocks for Cal Poly integrated circuit (IC) designs. The compiler generates memory for two process technologies (IBM 180nm cmrf7sf and ON Semiconductor 600nm SCMOS) and requires a minimum number of specifications from the user for ease of use, while still offering the option to customize the generated SRAM cell for speed or area. By automatically creating SRAM arrays, the compiler saves the user from having to lay out and test memory by hand and allows for quick updates and changes to a design. Memory compilers with various features already exist, but they have several disadvantages. Most memory compilers are expensive, usually generate memory for only one process technology, and do not allow for user-defined custom SRAM cell optimizations. Because this design is free, it is available to students and institutions that would not be able to afford an industry-made compiler. A compiler that offers multiple process technologies allows more freedom to design in other processes if needed or desired. An attempt was made to make this design modular for different process technologies so new processes could be added with ease; however, different process technologies have different DRC rules, making that option very difficult to attain. A customizable SRAM cell based on transistor sizing ratios allows for designs optimized for speed, area, or power, and for academic research. Even for an experienced designer, the layout of a single SRAM cell (1 bit) can take an hour. This command-line-based tool can draw a 1Kb SRAM block in seconds and a 1Mb SRAM block in about 15 minutes. In addition, this compiler adds a manually laid out precharge circuit to each of the SRAM columns for an enhanced read operation, ensuring the bit lines have valid logic output values. Finally, an analysis of SRAM cell stability is done to create a robust cell as the default design for the compiler. The default cell design is verified for stability during read and write operations, and has an area of 14.067 µm² for the cmrf7sf process and 246.42 µm² for the SCMOS process. All factors considered, this SRAM compiler design overcomes several of the drawbacks of other existing memory compilers.
APA, Harvard, Vancouver, ISO, and other styles
8

Nilsen, Samuel, and Eric Nyberg. "The adoption of Industry 4.0 technologies in manufacturing : a multiple case study." Thesis, KTH, Industriell Management, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190630.

Full text
Abstract:
Innovations such as combustion engines, electricity and assembly lines have all played a significant role in manufacturing, where the past three industrial revolutions have changed the way manufacturing is performed. Technical progress within the manufacturing industry continues at a high rate, and today's progress can be seen as part of the fourth industrial revolution, exemplified by "Industrie 4.0", the German government's vision of future manufacturing. Previous studies have investigated the benefits, progress and relevance of Industry 4.0 technologies, but little emphasis has been put on differences in their implementation and relevance across and within industries. This thesis therefore investigates the adoption of Industry 4.0 technologies among and within selected industries and what types of patterns exist among them. Using a qualitative multiple case study consisting of firms from the aerospace, heavy equipment, automation, electronics and motor vehicle industries, we gain insight into how leading firms are implementing the technologies. In order to identify the factors determining how Industry 4.0 technologies are implemented and what common themes can be found, we introduce the concept of production logic, which is built upon the connection between the competitive priorities of quality, flexibility, delivery time, cost efficiency and ergonomics. This thesis makes two contributions. In the first, we categorize technologies within Industry 4.0 into two bundles: the Human-Machine Interface (HMI) bundle and the connectivity bundle. The HMI bundle includes devices for assisting operators in manufacturing activities, such as touchscreens, augmented reality and collaborative robots. The connectivity bundle includes systems for connecting devices and collecting and analyzing data from the digitalized factory. The results of this master's thesis indicate that, depending on a firm's or industry's production logic, the adoption of elements from the technology bundles differs. Firms where flexibility is dominant tend to implement elements from the HMI bundle to a larger degree. At the other end, firms with few product variations, where quality and efficiency dominate the production logic, tend to implement elements from the connectivity bundle in order to tightly monitor and improve quality in their assembly. Regardless of production logic, firms are implementing elements from both bundles, but with different compositions and applications. The second contribution is to the literature on technological transitions. Here, we study the rise and development of the HMI bundle in the light of Geels' (2002) Multi-Level Perspective (MLP). It can be concluded that increased pressure at the landscape level, in the form of changes in the consumer market and in attitudes within the labor force, has created a gradual spread of the HMI bundle within industries. The bundles have also been studied through Rogers' (1995) five attributes of innovation, where the lack of testability and observability prevents increased application of M2M interfaces. Concerning big data and analytics, high complexity prevents the technology from being applied further. As the HMI bundle involves a number of technologies with large differences in properties, it is hard to draw any conclusion using the attributes of innovation about what limits their application.
APA, Harvard, Vancouver, ISO, and other styles
9

Sheikh, Nasir Jamil. "Assessment of Solar Photovoltaic Technologies Using Multiple Perspectives and Hierarchical Decision Modeling." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/978.

Full text
Abstract:
The objective of this research is to build a decision model for a comprehensive assessment of solar photovoltaic technologies using multiple perspectives. These perspectives are social, technological, economic, environmental, and political (STEEP), with each perspective consisting of multiple criteria. Hierarchical decision modeling and expert judgment quantification are used to provide the relative ranking of the perspectives and criteria. Such modeling is effective in addressing technology evaluations with competing and contrasting perspectives and criteria, where both quantitative and qualitative measurements are represented. The model is then operationalized by constructing desirability functions for each criterion. The combined results provide an overall numerical score for each technology under consideration, as well as criteria desirability gaps. This model is useful for assessing photovoltaic technologies from varying worldviews, such as the electric utility worldview, the photovoltaic manufacturer's worldview, or the national policy worldview. It can also provide guidance to decision makers and practitioners on areas of improvement for a selected technology. The research uses the electric utility worldview as a case study.
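For readers who want to picture how such a model turns expert weights and measured criteria into a single score, the following is a minimal sketch of a hierarchical, desirability-based aggregation. The STEEP weights, criteria names and anchor values are illustrative assumptions, not numbers from the dissertation.

```python
# Minimal sketch of a STEEP-style hierarchical score (all numbers are illustrative assumptions).
# Each perspective has a weight and a set of weighted criteria; each measured criterion value is
# mapped to [0, 1] by a linear desirability function between a "worst" and a "best" anchor.

perspectives = {
    "economic":      (0.30, {"lcoe_usd_per_kwh": 0.6, "capex_usd_per_w": 0.4}),
    "technological": (0.25, {"module_efficiency": 0.7, "degradation_pct_per_year": 0.3}),
    "environmental": (0.20, {"co2_g_per_kwh": 1.0}),
    "social":        (0.15, {"job_creation_index": 1.0}),
    "political":     (0.10, {"policy_support_index": 1.0}),
}

# (measured value, worst anchor, best anchor) for one hypothetical PV technology.
measurements = {
    "lcoe_usd_per_kwh":         (0.08, 0.30, 0.03),
    "capex_usd_per_w":          (1.0, 3.0, 0.5),
    "module_efficiency":        (0.21, 0.10, 0.30),
    "degradation_pct_per_year": (0.5, 1.5, 0.2),
    "co2_g_per_kwh":            (40.0, 200.0, 10.0),
    "job_creation_index":       (0.6, 0.0, 1.0),
    "policy_support_index":     (0.7, 0.0, 1.0),
}

def desirability(value, worst, best):
    """Linear desirability on [0, 1]; works whether 'best' is above or below 'worst'."""
    d = (value - worst) / (best - worst)
    return min(max(d, 0.0), 1.0)

score = sum(
    p_weight * sum(c_weight * desirability(*measurements[crit]) for crit, c_weight in criteria.items())
    for p_weight, criteria in perspectives.values()
)
print(f"overall technology score: {score:.3f}")  # 0 = fully undesirable, 1 = ideal on every criterion
```

A per-criterion desirability gap can then be read as the shortfall of each desirability value from 1, although the dissertation's exact definitions may differ.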
APA, Harvard, Vancouver, ISO, and other styles
10

Giménez Colás, Sonia. "Ultra Dense Networks Deployment for beyond 2020 Technologies." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86204.

Full text
Abstract:
A new communication paradigm is foreseen for the beyond-2020 society, due to the emergence of new broadband services and the Internet of Things era. The set of requirements imposed by these new applications is large and diverse, aiming to provide ubiquitous broadband connectivity. The research community has been working in the last decade towards the definition of the 5G mobile wireless networks that will provide the proper mechanisms to reach these challenging requirements. In this framework, three key research directions have been identified for the improvement of capacity in 5G: increasing spectral efficiency by means of, for example, massive MIMO technology; using larger amounts of spectrum by exploiting the millimeter wave band; and densifying the network by deploying more base stations per unit area. This dissertation addresses densification as the main enabler for the broadband and massive connectivity required in future 5G networks. To this aim, this Thesis focuses on the study of Ultra Dense Networks (UDNs). In particular, a set of technology enablers that can lead UDNs to achieve their maximum efficiency and performance is investigated, namely the use of higher frequency bands for the benefit of larger bandwidths, the use of massive MIMO with distributed antenna systems, and the use of distributed radio resource management techniques for inter-cell interference coordination. Firstly, this Thesis analyzes whether there exists a fundamental performance limit related to densification in cellular networks. To this end, UDN performance is evaluated by means of an analytical model consisting of a 1-dimensional network deployment with equally spaced BSs. The inter-BS distance is decreased until reaching the limit of densification when this distance approaches 0. The achievable rates in networks with different inter-BS distances are analyzed for several levels of transmission power availability and for various types of cooperation among cells. Moreover, UDN performance is studied in conjunction with the use of a massive number of antennas and larger amounts of spectrum. In particular, the performance of hybrid beamforming and precoding MIMO schemes is assessed in both indoor and outdoor scenarios with multiple cells and users, working in the mmW frequency band. On the one hand, beamforming schemes using the fully connected hybrid architecture are analyzed in BSs with a limited number of RF chains, identifying the strengths and weaknesses of these schemes in a dense-urban scenario. On the other hand, the performance of different indoor deployment strategies using hybrid precoding (HP) in the mmW band is evaluated, focusing on the use of distributed antenna systems (DAS). More specifically, a distributed hybrid precoding (DHP) scheme suitable for DAS is proposed, and its performance is compared with that of HP in other indoor deployment strategies. Lastly, the impact of practical limitations and hardware impairments on the use of hybrid architectures is also investigated. Finally, the investigation of UDNs is completed with the study of their main limitation, which is the increasing inter-cell interference in the network. In order to tackle this problem, an eICIC scheduling algorithm based on resource partitioning techniques is proposed. Its performance is evaluated and compared to other scheduling algorithms under several degrees of network densification. After the completion of this study, the potential of UDNs to reach the capacity requirements of 5G networks is confirmed. Nevertheless, without the use of larger portions of spectrum, proper interference management and the use of a massive number of antennas, densification could turn into a serious problem for mobile operators. Performance evaluation results show large system capacity gains with the use of massive MIMO techniques in UDNs, and even greater gains when the antennas are distributed. Furthermore, the application of ICIC techniques reveals that, besides the increase in system capacity, it brings significant energy savings to UDNs.
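As a toy illustration of the kind of 1-dimensional densification analysis the abstract describes, the sketch below computes per-user SINR and Shannon rate for a line of equally spaced base stations as the inter-BS distance shrinks. The path-loss exponent, powers and user geometry are arbitrary assumptions, and the model captures none of the power availability, cooperation, beamforming or scheduling aspects actually studied in the thesis.

```python
import numpy as np

# Toy 1-D deployment: base stations on a line with spacing d; the user sits a quarter
# of the way between its serving BS and the next one. All parameters are assumptions.
alpha = 3.8           # path-loss exponent (assumed)
p_tx = 1.0            # per-BS transmit power, kept fixed while the network densifies
noise = 1e-9          # receiver noise power (assumed)
n_interf = 50         # interfering BSs counted on each side of the user

for d in [500.0, 200.0, 50.0, 10.0, 2.0]:        # inter-BS distance in metres
    r_serv = d / 4.0
    signal = p_tx * r_serv ** (-alpha)
    # Interferers sit at multiples of d from the serving BS, on both sides of the user.
    interference = sum(
        p_tx * (k * d - r_serv) ** (-alpha) + p_tx * (k * d + r_serv) ** (-alpha)
        for k in range(1, n_interf + 1)
    )
    sinr = signal / (noise + interference)
    rate = np.log2(1.0 + sinr)                   # Shannon spectral efficiency, bit/s/Hz per user
    print(f"d = {d:6.1f} m  SINR = {10 * np.log10(sinr):5.1f} dB  rate = {rate:5.2f} b/s/Hz")
```

With per-BS power held fixed, the interference scales in the same way as the signal as d shrinks, so the per-user SINR and rate flatten out; the thesis examines this densification limit under far more general power and cooperation assumptions.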
Giménez Colás, S. (2017). Ultra Dense Networks Deployment for beyond 2020 Technologies [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86204
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Zhenhe. "Improved railway vehicle inspection and monitoring through the integration of multiple monitoring technologies." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7677/.

Full text
Abstract:
The effectiveness and efficiency of railway vehicle condition monitoring are increasingly critical to railway operations, as they directly affect safety, reliability, maintenance efficiency, and overall system performance. Although there is a vast number of railway vehicle condition monitoring technologies, wayside systems are becoming increasingly popular because of the reduced cost of a single monitoring point and because they do not interfere with the existing railway line. Acoustic sensing and visual imaging are two wayside monitoring technologies that can be applied to monitor the condition of vehicle components such as roller bearings, gearboxes, couplers, and pantographs. The central hypothesis of this thesis is that it is possible to integrate acoustic sensing and visual imaging technologies to enhance the condition monitoring of railway vehicles. This thesis therefore presents improvements in railway vehicle condition monitoring through the integration of acoustic sensing and visual imaging technologies.
APA, Harvard, Vancouver, ISO, and other styles
12

Demmel, Johann George. "A multiple objective decision model for the evaluation of advanced manufacturing system technologies." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185552.

Full text
Abstract:
Ordinary financial measures oversimplify the evaluation of Advanced Manufacturing System Technologies (AMST). A multiple objective decision model is developed which avoids the shortcomings of the traditional evaluation methods. The model comprises three objectives: Pecuniary, Strategic, and Tactical. The Pecuniary objective is based upon traditional Discounted Cash Flow techniques, with the results normalized to a [-1, +1] (worst-best) scale. The Strategic and Tactical objectives are based upon the concept of qualitative flows, and a qualitative discounting method is employed to discount the qualitative costs/benefits to a present value. The three objectives are traded off using the Composite Programming technique, resulting in a rank ordering of the alternatives under consideration. The three objectives of the model are broken down into attributes which define each objective, and these attributes are "mapped" into the organization of a manufacturing environment. It is shown that the model covers the entire manufacturing organization in accounting for the costs/benefits of the proposed AMST alternatives. The influence of the three objectives on the final score is analyzed using a mixture experiment, which provides insight into the effect of varying the importance of each objective on the final rankings. This gives the analyst a method to determine which attributes and/or objectives are critical for the AMST alternative being investigated. Because the evaluation of AMST hinges on future events that are not known with certainty, the procedure is extended to include a measure of risk. Cash flows, qualitative flows, interest rates, and project lengths are provided by the decision maker as pessimistic, most likely, and optimistic estimates. An analysis is provided where the inputs are assumed to be independent. The model is then further enhanced to allow time dependence between cash flows and qualitative flows of a single attribute. Results are provided as a mean and variance of the evaluation score, objective scores, and indices, along with frequency distributions. A case study analysis shows the application of the techniques developed in this work.
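To make the trade-off step concrete, here is a small sketch of a composite-programming style aggregation of the three normalized objective scores; the weights, balancing exponent and alternative scores are invented for illustration and are not taken from the dissertation.

```python
# Sketch of a composite-programming style trade-off among the three objectives.
# Scores are assumed to be pre-normalized to [-1, +1] (worst..best); weights, the
# balancing exponent p and the alternatives below are illustrative placeholders.

def composite_distance(scores, weights, p=2.0, ideal=1.0, worst=-1.0):
    """Weighted Lp distance from the ideal point; a smaller distance means a better alternative."""
    total = sum(
        (weights[obj] * (ideal - scores[obj]) / (ideal - worst)) ** p
        for obj in scores
    )
    return total ** (1.0 / p)

weights = {"pecuniary": 0.5, "strategic": 0.3, "tactical": 0.2}   # assumed importance weights
alternatives = {                                                   # hypothetical AMST alternatives
    "flexible manufacturing cell": {"pecuniary": 0.2, "strategic": 0.7, "tactical": 0.5},
    "robotic welding line":        {"pecuniary": 0.6, "strategic": 0.1, "tactical": 0.3},
    "status quo":                  {"pecuniary": 0.0, "strategic": -0.4, "tactical": -0.2},
}

ranking = sorted(alternatives, key=lambda a: composite_distance(alternatives[a], weights))
for name in ranking:
    print(f"{name:30s} distance to ideal = {composite_distance(alternatives[name], weights):.3f}")
```

Alternatives are then ranked by increasing distance from the ideal point, which mirrors the rank-ordering step described in the abstract.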
APA, Harvard, Vancouver, ISO, and other styles
13

Hewzulla, Dilshat. "Deriving mathematical significance in palaeontological data from large-scale database technologies." Thesis, University of East London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Henderson, Mark J. "Movements, Growth, and Mortality of Chesapeake Bay Summer Flounder Based on Multiple Tagging Technologies." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539616692.

Full text
Abstract:
The research projects presented in this dissertation used multiple tagging technologies to examine the movements, growth, and mortality rates of summer flounder tagged and released in Chesapeake Bay. In the first two chapters, I used acoustic, archival, and conventional tags to examine the behavior of summer flounder on different spatial scales. Investigating the movement behavior of individuals on different scales is an important step towards understanding how large-scale distributions of a population are established. Based on the observed behaviors of summer flounder, I hypothesize that the movements of these fish are primarily related to foraging behavior while they are resident in Chesapeake Bay. In the third chapter, I use growth models to investigate hypotheses regarding recreational angler noncompliance with minimum size regulations in Virginia. Angler noncompliance with management regulations can severely degrade the ability of fishery managers to prevent overexploitation of fish populations. Using a growth model fit to recreational angler mark-recapture data, I demonstrate that recreational anglers in Virginia responded to changes in summer flounder management regulations, but considerable levels of noncompliance were detected in years when management agencies drastically increased the minimum size regulations. In the final chapter, I attempt to estimate natural and fishing mortality rates of summer flounder using conventional mark-recapture data collected by an angler tagging program. These mortality rates were estimated using a Barker model, which is a generalization of the Cormack-Jolly-Seber tagging model. Results from this study indicated that sublegal summer flounder experience different emigration or mortality processes than do larger fish. Furthermore, handling and tagging mortality rates of summer flounder were much larger than the recreational discard mortality rate currently used in the stock assessment, implying that the recreational discard mortality rate should be reexamined. The research presented in this dissertation provides information that could be used by management agencies to further understand the behavior of summer flounder, and how to most effectively manage this population.
APA, Harvard, Vancouver, ISO, and other styles
15

Holt, Ryan Samuel. "Three enabling technologies for vision-based, forest-fire perimeter surveillance using multiple unmanned aerial systems." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1894.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Holt, Ryan S. "Three Enabling Technologies for Vision-Based, Forest-Fire Perimeter Surveillance Using Multiple Unmanned Aerial Systems." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/931.

Full text
Abstract:
The ability to gather and process information regarding the condition of forest fires is essential to cost-effective, safe, and efficient fire fighting. Advances in sensory and autopilot technology have made miniature unmanned aerial systems (UASs) an important tool in the acquisition of information. This thesis addresses some of the challenges faced when employing UASs for forest-fire perimeter surveillance; namely, perimeter tracking, cooperative perimeter surveillance, and path planning. Solutions to the first two issues are presented and a method for understanding path planning within the context of a forest-fire environment is demonstrated. Both simulation and hardware results are provided for each solution.
APA, Harvard, Vancouver, ISO, and other styles
17

Hedlund, Nicklas. "TicTacTraining : Coordination of multiple clients in a web based exergame." Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74936.

Full text
Abstract:
The traditional way to coordinate multiple clients in a multiplayer game across multiple platforms is to create an implementation on a per-platform basis, often resulting in four different implementations, one for each major platform: iOS, Android, Windows, and Linux-based operating systems. This report examines the possibility of replacing multiple so-called "native apps" with a single web-based implementation, granting users access on all devices that support modern browsers, and discusses which tools were used in the development of the application and why.
APA, Harvard, Vancouver, ISO, and other styles
18

Garlapati, Shravan Kumar Reddy. "Enabling Communication and Networking Technologies for Smart Grid." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/56629.

Full text
Abstract:
Transforming the aging electric grid into a smart grid is an active area of research in industry and government. One of the main objectives of the smart grid is to improve the efficiency of power generation, transmission and distribution, and also to improve the stability and reliability of the grid. In order to achieve this, the various processes involved in power generation, transmission, and distribution should be armed with advanced sensor technologies, computing, communication and networking capabilities to an unprecedented level. These high-speed data transfer and computational abilities help power system engineers obtain wide area measurements, achieve better control of power system operations and improve the reliability of power supply and the efficiency of different power grid operations. In the process of making the grid smarter, problems existing in traditional grid applications can be identified and solutions developed to fix them. In this dissertation, two problems that help power system engineers meet the above-mentioned smart grid objectives are researched. One problem is related to the distribution-side smart grid and the other is part of the transmission-side smart grid. Advanced Metering Infrastructure (AMI) is one of the important distribution-side smart grid applications. AMI is a technology in which smart meters are installed at customer sites, giving utilities the ability to monitor and collect information related to the amount of electricity, water, and gas consumed by the user. Many recent research studies have suggested the use of 3G cellular CDMA2000 for the AMI network, as it provides an advanced and cost-effective solution for smart grid communications. Taking into account both technical and non-technical factors such as extended lifetime, security, availability and control of the solution, Alliander, an electric utility in the Netherlands, deployed a private 3G CDMA2000 network for smart metering. Although 3G CDMA2000 satisfies the requirements of smart grid applications, an analysis of the use of the current state-of-the-art 3G CDMA2000 for smart grid applications indicates that its usage results in a high percentage of control overhead, high latency and high power consumption for data transfer. As part of this dissertation, we propose FLEX-MAC, a new Medium Access Control (MAC) protocol that reduces the latency and overhead in smart meter data collection when compared to the 3G CDMA2000 MAC. As mentioned above, the second problem studied in this dissertation is related to the transmission-side grid. Power grid transmission and sub-transmission lines are generally protected by distance relays. After a thorough analysis of U.S. historical blackouts, the North American Electric Reliability Council (NERC) concluded that hidden-failure-induced tripping of distance relays is responsible for 70% of U.S. blackouts. As part of this dissertation, an agent-based distance relaying protection scheme is proposed to improve the robustness of distance relays to hidden failures and thus reduce the probability of blackouts. This dissertation has two major contributions. First, a hierarchically distributed, non-intrusive Agent Aided Distance Relaying Protection Scheme (AADRPS) is proposed to improve the robustness of distance relays to hidden failures.
The problem of adapting the proposed AADRPS to a larger power system network consisting of thousands of buses is modeled as an integer linear programming multiple facility location optimization problem. A distance relaying protection scheme is a real-time system with stringent timing requirements. Therefore, in order to verify whether the proposed AADRPS meets the timing requirements and also to check for deadlocks, verification models based on the UPPAAL real-time model checker are provided in this dissertation. So, the entire framework consisting of the AADRPS that aids in increasing the robustness of distance relays and reducing the possibility of blackouts, the multiple facility location optimization models and the UPPAAL real-time model checker verification models forms one of the major contributions of this dissertation. The second contribution is related to the MAC layer of AMI networks. In this dissertation, FLEX-MAC, a novel and flexible MAC protocol, is proposed to reduce the overhead and latency in smart meter data collection. The novelty of FLEX-MAC lies in its ability to change its mode of operation based on the type of data being collected in a smart meter network. FLEX-MAC employs Frame and Channel Reserved (FCR) MAC or Frame Reserved and Random Channel (FRRC) MAC for scheduled data collection. Power outage data in an AMI network is considered random data. In a densely populated area, during an outage, a large number of smart meters attempt to report the outage, which significantly increases the Random Access CHannel (RACH) load. In order to reduce the RACH traffic during an outage, this dissertation proposes a Time Hierarchical Scheme (THS). Also, in order to minimize the total time to collect the power outage data, a Backward Recursive Dynamic Programming (BRDP) approach is proposed to adapt the transmission rate of smart meters reporting an outage. Both the Optimal Transmission Rate Adaptation and the Time Hierarchical Scheme form the basis of the OTRA-THS MAC, which is employed by FLEX-MAC for random data collection. Additionally, in this work, Markov chain models are presented for evaluating the performance of the FCR and FRRC MACs in terms of average throughput and delay. Also, another Markov model is presented to find the mean time to absorption, or mean time to collect power outage data, of the OTRA-THS MAC during an outage.
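As a rough illustration of the facility-location formulation mentioned above, the sketch below states a generic p-median-style integer linear program in PuLP for placing a limited number of agents among candidate sites; the buses, sites, costs and agent budget are invented placeholders, not the dissertation's actual model.

```python
# Generic multiple facility location ILP (p-median style) in PuLP: open at most
# `max_agents` agent sites and assign every bus to exactly one opened site.
# Buses, candidate sites, costs and the agent budget are invented placeholders.
import random
import pulp

random.seed(1)
buses = [f"bus{i}" for i in range(20)]
sites = [f"site{j}" for j in range(6)]
cost = {(b, s): random.uniform(1, 100) for b in buses for s in sites}  # e.g. communication cost
max_agents = 3

prob = pulp.LpProblem("agent_placement", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign", (buses, sites), cat="Binary")

# Objective: minimize the total bus-to-agent assignment cost.
prob += pulp.lpSum(cost[b, s] * assign[b][s] for b in buses for s in sites)

for b in buses:
    prob += pulp.lpSum(assign[b][s] for s in sites) == 1      # each bus served by exactly one agent
    for s in sites:
        prob += assign[b][s] <= open_site[s]                   # only assign to an opened site
prob += pulp.lpSum(open_site[s] for s in sites) <= max_agents  # agent budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened sites:", [s for s in sites if open_site[s].value() == 1])
```

Solving the model returns which candidate sites should host agents and how the buses are assigned to them.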
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Qutaishat, Fadi Taher. "An investigation of web-based personalisation technologies for information provision focussing on the multiple sclerosis community." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/36095.

Full text
Abstract:
The exponential growth of online information has made the process of locating appropriate information complex. This complexity increases when individuals are characterised by changeable needs, preferences, goals or knowledge, because this requires the system to personalise or adapt content, for example, in accordance with these needs. This research developed a prototype system for personalising information and investigated the appropriateness of using personalisation techniques. It focused on people with MS (Multiple Sclerosis), who have changeable needs. During the investigation, a prototype of a personalised system was developed to provide personalised content, links and content presentation (i.e. layout). A number of personalisation approaches, techniques and models used in the domain of adaptive hypermedia were selected for the development of the prototype system. Furthermore, XML, XSL and the Apache Cocoon framework were used as the underlying technologies for this development.
APA, Harvard, Vancouver, ISO, and other styles
20

Bieringer, Alexandra, and Linda Müller. "Integration of Internet of Things technologies in warehouses : A multiple case study on how the Internet of Things technologies can efficiently be used in the warehousing processes." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Centre of Logistics and Supply Chain Management (CeLS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-40087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Anscombe, C. J. "Multiple displacement amplification and whole genome sequencing for the diagnosis of infectious diseases." Thesis, Queen Mary, University of London, 2016. http://qmro.qmul.ac.uk/xmlui/handle/123456789/18409.

Full text
Abstract:
Next-generation sequencing technologies are revolutionising our ability to characterise and investigate infectious diseases. Utilising the power of high-throughput sequencing, this study reports the development of a sensitive, non-PCR-based, unbiased amplification method, which allows the rapid and accurate sequencing of multiple microbial pathogens directly from clinical samples. The method employs Φ29 DNA polymerase, a highly efficient enzyme able to produce strand displacement during the polymerisation process with high fidelity. Problems with DNA secondary structure were overcome and the method optimised to produce sufficient DNA to sequence from a single bacterial cell in two hours. Evidence was also found that the enzyme requires at least six bases of single-stranded DNA to initiate replication, and is not capable of amplification from nicks. Φ29 multiple displacement amplification was shown to be suitable for a range of GC contents and bacterial cell wall types as well as for viral pathogens. The method was shown to be able to provide relative quantification of mixed cells, and a method for quantification of viruses using a known standard was developed. To complement the novel molecular biology workflow, a data analysis pipeline was developed to allow pathogen identification and characterisation without prior knowledge of the input. The use of de novo assemblies for annotation was shown to be equivalent to the use of polished reference genomes. Single-cell Φ29 MDA samples had better assembly and annotation than non-amplification controls, a novel finding which, when combined with the very long DNA fragments produced, has interesting implications for a variety of analytical procedures. A sampling process was developed to allow isolation and amplification of pathogens directly from clinical samples, with good concordance shown between this method and traditional testing. The process was tested on a variety of modelled and real clinical samples, showing good application to sterile-site infections, particularly bacteraemia models. Within these samples multiple bacterial, viral and parasitic pathogens were identified, showing good application across multiple infection types. Emerging pathogens were identified, including Onchocerca volvulus within a CSF sample and Sneathia sanguinegens within an STI sample. Use of Φ29 MDA allows rapid and accurate amplification of whole pathogen genomes. When this is coupled with the sample processing developed here, it is possible to detect the presence of pathogens in sterile sites with a sensitivity of a single genome copy.
APA, Harvard, Vancouver, ISO, and other styles
22

Ahn, Byungmun. "General Satisfaction of Students in 100% Online Courses in the Department of Learning Technologies at the University of North Texas." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc115042/.

Full text
Abstract:
The purpose of this study was to examine whether there are significant relationships between the general satisfaction of students and learner-content interaction, learner-instructor interaction, learner-learner interaction, and learner-technology interaction in 100% online courses. There were 310 responses from students. This study did not use data from duplicate students and instructors; Excel was used to find such duplicates, and 128 responses were deleted on that basis. After examination of box plots, an additional four cases were removed because they were outliers on seven or more variables. Nineteen responses were deleted because they did not answer all questions of interest, resulting in a total sample of 159 students. Multiple regression analysis was used to examine the relationship between the four independent variables and the dependent variable. In addition to tests for statistical significance, practical significance was evaluated with the multiple R², which reports the common variance between the independent variables and the dependent variable. The two variables of learner-content and learner-instructor interaction play a significant role in predicting online satisfaction. The learner-technology variable predicts online satisfaction only minimally, but it is an important construct that must be considered when offering online courses. Results of this study help in establishing a valid and reliable survey instrument and in developing a best online learning environment, as well as providing recommendations for institutions offering online learning or considering the development of online courses.
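For readers unfamiliar with the analysis described, a minimal sketch of regressing general satisfaction on the four interaction variables could look like the following; the file name and column names are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical survey data: one row per student, columns holding Likert-scale means.
df = pd.read_csv("survey_responses.csv")                 # placeholder file name
predictors = ["learner_content", "learner_instructor",
              "learner_learner", "learner_technology"]   # assumed column names
X = np.column_stack([np.ones(len(df)), df[predictors].to_numpy()])
y = df["general_satisfaction"].to_numpy()

# Ordinary least squares fit of the four-predictor model.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Multiple R^2: the proportion of variance in satisfaction shared with the predictors.
y_hat = X @ beta
r_squared = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(dict(zip(["intercept"] + predictors, np.round(beta, 3))))
print("multiple R^2:", round(r_squared, 3))
```

In practice a statistics package such as statsmodels would also report the per-coefficient significance tests mentioned in the abstract.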
APA, Harvard, Vancouver, ISO, and other styles
23

Uzuegbunam, Nkiruka M. A. "Self-Image Multimedia Technologies for Feedforward Observational Learning." UKnowledge, 2018. https://uknowledge.uky.edu/ece_etds/124.

Full text
Abstract:
This dissertation investigates the development and use of self-images in augmented reality systems for learning and learning-based activities. This work focuses on self-modeling, a particular form of learning, actively employed in various settings for therapy or teaching. In particular, this work aims to develop novel multimedia systems to support the display and rendering of augmented self-images. It aims to use interactivity (via games) as a means of obtaining imagery for use in creating augmented self-images. Two multimedia systems are developed, discussed and analyzed. The proposed systems are validated in terms of their technical innovation and their clinical efficacy in delivering behavioral interventions for young children on the autism spectrum.
APA, Harvard, Vancouver, ISO, and other styles
24

Adolfsson, Elin, Julia Edström, and Wilma Övringe. "Three clicks away : A multiple case study of how technologies change the customer journey in the retail furniture sector." Thesis, Linnéuniversitetet, Institutionen för marknadsföring (MF), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105486.

Full text
Abstract:
The digital world is constantly growing, which contributes to increasing expectations and demands on the technical side. Digitization is powerful, but it also brings a number of challenges, such as the coordination of physical and digital stores. During the corona pandemic, the digital transformation accelerated as companies adapted to customers. The purpose of this research is therefore to understand how Swedish retailers in the furniture sector can manage technological touchpoints in customer journeys. A further purpose is to investigate how retailers can use technological touchpoints to change the quality of, and customer engagement in, the customer journey. To gather the empirical findings, six semi-structured interviews were conducted through a multiple case study. The theory from previous research and the empirical results were then discussed in the analysis. The thesis concludes that the management of digital touchpoints differs depending on whether a company relies on physical stores or e-commerce. This thesis identified that retailers in the furniture sector manage technological touchpoints to create a seamless customer experience, increase customer satisfaction, engage customers more easily and integrate with customers in several ways. Digital touchpoints have made it easier for companies to measure results and improve the customer journey. One finding was that customer engagement has increased in relation to increased digitalization, but also that the studied companies believe digitization is vital for increasing customer engagement. The technology creates opportunities to develop the quality of the customer journey. Further, by being able to measure the customer journey, the companies gain insight into what should be continuously improved.
APA, Harvard, Vancouver, ISO, and other styles
25

Garro, Crevillén Eduardo. "Advanced Layered Division Multiplexing Technologies for Next-Gen Broadcast." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/105559.

Full text
Abstract:
Since the beginning of the 21st century, terrestrial broadcasting systems have been blamed for an inefficient use of the allocated spectrum. To increase the spectral efficiency, digital television Standards Developing Organizations set out to develop the technical evolution of the first-generation DTT systems. Among others, a primary goal of next-generation DTT systems (DVB-T2 and ATSC 3.0) is to simultaneously provide TV services to mobile and fixed devices. The major drawback of this simultaneous delivery is the different requirements of each reception condition. To address these constraints, different multiplexing techniques have been considered. While DVB-T2 fulfilled the simultaneous delivery of the two services by TDM, ATSC 3.0 adopted the LDM technology. LDM can outperform TDM and FDM by taking advantage of the UEP ratio, as both services, namely layers, utilize all the frequency and time resources with different power levels. At the receiver side, two implementations are distinguished, according to the intended layer. Mobile receivers are only intended to obtain the upper layer, known as the Core Layer (CL). In order not to increase their complexity compared to single-layer receivers, the lower layer, known as the Enhanced Layer (EL), is treated as additional noise in the CL decoding. Fixed receivers increase their complexity, as they must perform a SIC process on the CL to obtain the EL. To limit the additional complexity of fixed receivers, the LDM layers in ATSC 3.0 are configured with different error correction capabilities, but share the rest of the physical layer parameters, including the TIL, the PP, the FFT size, and the GI. This dissertation investigates advanced technologies to optimize the LDM performance. A demapping optimization for the two LDM layers is first proposed. The proposed algorithm achieves a capacity increase by taking the underlying layer shape into account in the demapping process. Nevertheless, the number of Euclidean distances to be computed can increase significantly, leading not only to more complex fixed receivers, but also to more complex mobile receivers. Next, the most suitable ATSC 3.0 pilot configuration for LDM is determined. Considering that the two layers share the same PP, a trade-off between pilot density (CL) and data overhead (EL) arises. From the performance results, the use of a not very dense PP is recommended, as such patterns have already been designed to cope with long echoes and high speeds. The optimum pilot amplitude depends on the channel estimator at the receivers (e.g. the minimum amplitude is recommended for a Wiener implementation, while the maximum is recommended for an FFT implementation). The potential combination of LDM with three advanced technologies adopted in ATSC 3.0 is also investigated: MultiRF technologies, distributed MISO schemes, and co-located MIMO schemes. The potential use cases, the transmitter and receiver implementations, and the performance gains of the joint configurations are studied for the two LDM layers. The additional constraints of combining LDM with these advanced technologies are considered admissible, as the greatest demands (e.g. a second receiving chain) are already contemplated in ATSC 3.0. Significant gains are found for the mobile layer under pedestrian reception conditions thanks to the frequency diversity provided by MultiRF technologies. The conjunction of LDM with distributed MISO schemes provides significant performance gains in SFNs for the fixed layer with the Alamouti scheme.
Last, considering the complexity of the mobile receivers and the CL performance, the recommended joint configuration is MISO in the CL and MIMO in the EL.
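To make the layered-division idea concrete, the following sketch superimposes a Core Layer and an Enhanced Layer at a fixed injection level and mimics the two receiver behaviours the abstract describes: the mobile receiver treats the EL as noise, while the fixed receiver cancels the decoded CL before demapping the EL (SIC). The QPSK constellation, AWGN channel, noise level and -4 dB injection level are assumptions made purely for illustration, not parameters from the thesis.

```python
# A minimal sketch (not from the thesis) of two-layer LDM: CL and EL share all
# time/frequency resources, with the EL injected at a lower power level.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
injection_level_db = -4.0                      # EL power relative to CL (assumed)
g = 10 ** (injection_level_db / 20)            # EL amplitude scaling

qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
cl_bits = rng.integers(0, 4, n)
el_bits = rng.integers(0, 4, n)
cl, el = qpsk[cl_bits], qpsk[el_bits]

tx = (cl + g * el) / np.sqrt(1 + g**2)         # power-normalised superposition
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * 0.05
rx = tx + noise                                # AWGN channel (assumption)

# Mobile receiver: demap the CL only, treating the EL as extra noise.
cl_hat = np.argmin(np.abs(rx[:, None] * np.sqrt(1 + g**2) - qpsk[None, :]), axis=1)

# Fixed receiver: cancel the CL estimate, then demap the EL (SIC).
residual = rx * np.sqrt(1 + g**2) - qpsk[cl_hat]
el_hat = np.argmin(np.abs(residual[:, None] / g - qpsk[None, :]), axis=1)

print("CL symbol error rate:", np.mean(cl_hat != cl_bits))
print("EL symbol error rate:", np.mean(el_hat != el_bits))
```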
Garro Crevillén, E. (2018). Advanced Layered Division Multiplexing Technologies for Next-Gen Broadcast [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/105559
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Wenhao. "Experimental evaluation of indoor air cleaning technologies and modeling of UV-PCO (photocatalytic oxidation) air cleaners under multiple VOCs conditions." Related electronic resource:, 2007. http://proquest.umi.com/pqdweb?did=1342744161&sid=3&Fmt=2&clientId=3739&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Gonzalez, Sergio E. (Sergio Ezequiel). "On creating cleantech confluences : best practices and partnerships to mobilize multiple sources of private capital into early-stage clean technologies." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104811.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis. Vita.
Includes bibliographical references (pages 81-84).
During the 2015 Paris Climate Change Conference, world climate scientists and policymakers agreed that global temperatures must not exceed a two-degree-Celsius increase above pre-industrial levels within the next 30 years. It is estimated that this will require investments of $40 trillion, or $1.3 trillion per year, in new and mature clean technologies. Currently, only about $0.3 trillion of investment goes to clean technology a year, and the majority of that funding goes to mature, proven technologies. There is an investment gap in clean technologies, and the gap is especially pronounced for new and unproven technologies that are necessary to bring down the costs of the entire system and produce quicker breakthroughs in CO₂ mitigation. The gap is partly due to the large losses sustained by venture capitalists, one of the greatest sources of early-stage capital, who invested heavily in clean technology companies in the years leading up to the 2008 recession. After the market crashed, federal and state governments ended up being among the few remaining supporters of these technology companies because of their public benefits. However, in order to stay below 2 degrees Celsius of warming, venture capitalists and other private venture investors must be engaged to invest in the clean technology sector again. Public sector funds are not sufficient. In a sector that has produced few winners while receiving substantial government support, the challenge could not be greater. To address this challenge, we ask three questions of three key actors: How can entrepreneurs attract private investment and scale up past the Valley of Death? How can venture capitalists build the ability and confidence to invest in the cleantech sector again? How can policymakers address the failure modes that may still exist if investors and entrepreneurs follow best practices? To explore this issue, we conducted interviews, reviewed the literature, compiled data from online sources, and gathered information from conferences and workshops. Our findings reveal a "Cleantech Confluence", or a preliminary set of best practices and partnerships. When simultaneously implemented, the Confluence can mobilize multiple sources of private capital into early-stage clean technologies.
by Sergio E. Gonzalez.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
28

Duque, Hughes Adriana. "Knowing in practice in distributed working : a comparative case-study of single-function, multiple-client teams collaborating through information technologies." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lingga, Marwan Mossa. "Developing a Hierarchical Decision Model to Evaluate Nuclear Power Plant Alternative Siting Technologies." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2943.

Full text
Abstract:
A strong trend of returning to nuclear power is evident in different places in the world: forty-five countries are planning to add nuclear power to their grids and more than 66 nuclear power plants are under construction. Nuclear power plants that generate electricity and steam need to improve safety to become more acceptable to governments and the public. One novel, practical solution to increase nuclear power plants' safety factor is to build them away from urban areas, for example offshore or underground. To date, Land-Based siting is the dominant option for all commercially operational nuclear power plants. However, the literature reveals several options for building nuclear power plants in safer sitings than Land-Based ones. There are several alternatives, each with advantages and disadvantages, and it is difficult to distinguish among them and choose the best one for a specific project. In this research, we recall the old idea of using offshore and underground sitings for new nuclear power plants and propose a tool to help in choosing the best siting technology. The research involved the development of a decision model for evaluating several potential nuclear power plant siting technologies, both those that are currently available and future ones. The decision model was developed based on the Hierarchical Decision Modeling (HDM) methodology. The model considers five major dimensions, social, technical, economic, environmental, and political (STEEP), and their related criteria and sub-criteria. The model was designed and developed by the author, and its elements were validated and evaluated by a large number of experts in the field of nuclear energy. The decision model was applied to evaluating five potential siting technologies and ranked the Natural Island option as the best in comparison to Land-Based, Floating Plant, Artificial Island, and Semi-Embedded plants.
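As a rough illustration of the hierarchical weighting-and-scoring logic that an HDM-style model relies on, the sketch below derives weights for the five STEEP dimensions from a single pairwise-comparison matrix and computes a weighted overall score per siting alternative. All comparison values and alternative scores are invented; the thesis uses expert judgements and a fuller hierarchy of criteria and sub-criteria.

```python
# Hypothetical sketch of hierarchical weighted scoring over STEEP dimensions.
# All numbers below are made up for illustration; they are not the thesis data.
import numpy as np

dimensions = ["Social", "Technical", "Economic", "Environmental", "Political"]

# Pairwise comparison matrix A[i, j]: relative importance of dimension i vs j.
A = np.array([
    [1.0, 2.0, 1.0, 0.5, 2.0],
    [0.5, 1.0, 0.5, 0.5, 1.0],
    [1.0, 2.0, 1.0, 1.0, 2.0],
    [2.0, 2.0, 1.0, 1.0, 2.0],
    [0.5, 1.0, 0.5, 0.5, 1.0],
])
weights = np.prod(A, axis=1) ** (1 / len(A))   # row geometric means
weights /= weights.sum()                        # normalise to sum to 1

# Dimension scores (0-1) for each siting alternative, again purely illustrative.
alternatives = {
    "Land-Based":     [0.5, 0.9, 0.9, 0.4, 0.6],
    "Natural Island": [0.8, 0.7, 0.6, 0.8, 0.8],
    "Floating Plant": [0.6, 0.5, 0.5, 0.7, 0.5],
}
for name, scores in alternatives.items():
    print(f"{name:15s} overall value = {np.dot(weights, scores):.3f}")
```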
APA, Harvard, Vancouver, ISO, and other styles
30

Ahn, Yong Han. "The Development of Models to Identify Relationships Between First Costs of Green Building Strategies and Technologies and Life Cycle Costs for Public Green Facilities." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/26252.

Full text
Abstract:
Public buildings and other public facilities are essential for the functioning and quality of life in modern societies, but they also frequently have a significant negative impact on the natural environment. Public agencies, with their large portfolios of facilities, have faced considerable challenges in recent years in minimizing their negative environmental impacts and energy consumption and coping with shortages of financial capital to invest in new facilities and operate and maintain existing ones, while still meeting their mission goals. These range from the need to provide a quality workplace for their staff to providing a public service and long term benefits to the public. The concept of green building has emerged as a set of objectives and practices designed to reduce negative environment impacts and other challenges while enhancing the functionality of built facilities. However, the prevailing belief related to implementing green building is that incorporating Green Building Strategies and Technologies (GBSTs) increases the initial cost of constructing a facility while potentially reducing its life cycle costs. Thus, this research deals with optimizing the design of individual facilities to balance the initial cost investment for GBSTs versus their potential Life Cycle Cost (LCC) savings without the need to conduct detailed life cycle cost analysis during the early capital planning and budget phases in public sector projects. The purpose of this study is to develop an approach for modeling the general relationship between investments in initial costs versus savings in LCCs involved in implementing green building strategies in public capital projects. To address the research question, this study developed multiple regression models to identify the relationships between GBSTs and their initial cost premiums, operating costs, and LCCs. The multiple regression models include dummy variables because this is a convenient way of applying a single regression equation to represent several nominal variables, which here consist of initial, operating, maintenance, and repair and replacement costs, and ordinal variables, which in this case are the GBST alternatives considered. These new regression models can be used to identify the relationship between GBST alternatives, initial cost premiums, annual operating costs and LCC in the earliest stage of a project, when public agencies are at the capital planning and budgeting stages of facility development, without necessarily needing to know the precise details of design and implementation for a particular building. In addition, this study also proposes and tests a method to generate all the necessary cost data based on building performance models and industry accepted standard cost data. This statistical approach can easily be extended to accommodate additional GBSTs that were not included in this study to identify the relationship between their initial cost premium and their potential LCC saving at the earliest stage of facility development. In addition, this approach will be a useful tool for other institutional facility owners who manage large facility portfolios with significant annual facility investments and over time should help them minimize the environmental impacts caused by their facilities.
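The dummy-variable idea mentioned in the abstract can be illustrated with a small sketch: nominal predictors (cost category, GBST alternative) are expanded into 0-1 columns so that a single regression equation covers all categories at once. The records, labels and coefficients below are invented for illustration and do not reproduce the models or data of the dissertation.

```python
# Hypothetical sketch of dummy-variable coding for a GBST cost model.
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "cost":          [120.0, 95.0, 80.0, 60.0, 150.0, 110.0, 90.0, 72.0],
    "cost_category": ["initial", "operating", "maintenance", "repair"] * 2,
    "gbst_option":   ["code_minimum"] * 4 + ["high_performance"] * 4,
})

# Expand the nominal/ordinal predictors into 0-1 dummy columns so that one
# regression equation represents every category.
X = pd.get_dummies(records[["cost_category", "gbst_option"]], drop_first=True)
X.insert(0, "intercept", 1.0)

# Ordinary least squares fit of the cost against the dummy columns.
beta, *_ = np.linalg.lstsq(X.to_numpy(dtype=float),
                           records["cost"].to_numpy(), rcond=None)
for name, b in zip(X.columns, beta):
    print(f"{name:35s} {b:8.2f}")
```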
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
31

Chhel, Fabien-Sothéa. "Problème de caractérisation multiple : Application à la détection de souches bactériennes phytopathogènes." Thesis, Angers, 2014. http://www.theses.fr/2014ANGE0018/document.

Full text
Abstract:
Computer science plays an increasingly important role in the analysis and understanding of biological data. In this thesis, we present the multiple characterization problem, which addresses the detection of plant-pathogenic bacterial strains as a combinatorial optimization problem. Indeed, some bacteria are responsible for diseases on a wide range of crops and decrease the market value of seeds. Diagnostic tests can be designed, but they require the identification of discriminant observable characters. We have developed several exact methods (branch and bound, linear programming) and approximate methods (metaheuristics) to minimize the number of characters to deal with. Specifically, considering a subset of individuals representing different groups, the aim for each group is to find a characterization (as a propositional formula) that distinguishes the individuals of that group from those of the other groups. We use the term "multiple characterization" to highlight the mutual aspect of these discriminations, in the sense of "one group versus the other groups", for each group. Our work has been validated experimentally on data sets, and an in-depth study of the problem's complexity has been conducted.
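A toy sketch of the underlying combinatorial idea is given below: given boolean character profiles, select as few characters as needed to separate every member of a target group from every individual outside it. This greedy set-cover heuristic is only a stand-in for intuition; the thesis develops exact branch-and-bound and linear-programming methods as well as metaheuristics, and the strains and characters shown here are invented.

```python
# Greedily pick characters until every (target strain, other strain) pair is
# separated by at least one chosen character. Purely illustrative data.
from itertools import product

characters = ["c1", "c2", "c3", "c4"]
profiles = {                       # strain -> observed character values
    "target_A1": {"c1": 1, "c2": 0, "c3": 1, "c4": 0},
    "target_A2": {"c1": 1, "c2": 1, "c3": 1, "c4": 0},
    "other_B1":  {"c1": 0, "c2": 0, "c3": 1, "c4": 0},
    "other_B2":  {"c1": 1, "c2": 1, "c3": 0, "c4": 1},
}
group = {"target_A1", "target_A2"}
others = set(profiles) - group

pairs_to_split = set(product(group, others))
chosen = []
while pairs_to_split:
    # pick the character that separates the largest number of unresolved pairs
    best = max(
        characters,
        key=lambda c: sum(profiles[g][c] != profiles[o][c] for g, o in pairs_to_split),
    )
    chosen.append(best)
    pairs_to_split = {
        (g, o) for g, o in pairs_to_split if profiles[g][best] == profiles[o][best]
    }

print("characters kept for discriminating the group:", chosen)
```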
APA, Harvard, Vancouver, ISO, and other styles
32

Intrapairot, Arunee. "A study on the adoption and diffusion of information and communication technologies in the banking industry in Thailand using multiple-criteria decision making and system dynamics approaches." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/494.

Full text
Abstract:
The main objective of this study is to develop requisite models for information and communication technology (ICT) adoption and diffusion in the banking industry in Thailand. The research, combining the two study areas of multiple criteria decision-making (MCDM) and system dynamics (SD), is conducted using two research methodologies: system development and a case study of the Siam Commercial Bank PCL in Thailand. The study shows how to combine the two decision-making tools of MCDM and system dynamics effectively. The requisite group models of ICT adoption and diffusion provide ways to select the most preferred technology and to allow forward planning so that the adopted technology can be diffused more effectively. With an embedded decision support tool, decision-makers are able to apply the models with their available information, intuition, knowledge, and experience to improve their decision-making and enhance their learning. Initially, the research revealed that the Siam Commercial Bank currently employs various types of information and communication technologies (ICT) to facilitate work processes, fulfil customers' requirements, and retain its competitive advantage. However, the bank still confronts problems relating to technology adoption and diffusion. A requisite group model of ICT adoption was developed using MCDM as a decision-making tool. The model illustrated how to select the technological alternative that best fulfilled the mission of the bank. Results from the MCDM analysis revealed that the preferred technology was Extranet banking, followed by a data warehouse. The requisite group model of ICT diffusion was further developed using the system dynamics approach in order to enhance understanding of the system behaviour of the selected technology and then provide ways to diffuse it more effectively. The model analyses were divided into three sub-models: information and communication technologies (ICT), a data warehouse, and Extranet banking. The generic model of ICT can be applied to any particular technology. Results revealed that the pattern of technology diffusion follows the S curve and that the dominant variables that may impact technology diffusion are training, a backlog of problems, and market potential. Furthermore, economic returns are obtained only after spending substantially on technological investment; thus, it is necessary to balance technological investment against economic returns. The model of diffusion of a data warehouse was developed highlighting the need for both quality and quantity of knowledge workers; therefore, training support is an important factor in diffusing this technology. On the other hand, the model of diffusion of Extranet banking revealed that the success of this technology comes from its acceptance by customers; thus, perceived relative advantages, positive features of the technology, and promotional advertising should be taken into consideration. The S-curve pattern of technology diffusion is also confirmed by the two technologies. The policy for technology adoption involves the selection of the technology which best fits the identified criteria. The policy analyses of the three technologies confirm that the core policies that increase technology diffusion and economic gains are increasing the positive features of the technology, decreasing perceived complexity, increasing perceived relative advantages, and increasing co-operation between IT people and users.
If the technology is to support work performance in an organisation, training support is the dominant policy, whereas if the technology facilitates customers directly, a marketing strategy such as promotional advertising is vital. The study implied that the banking industry in Thailand is able to use ICT as levers for competitive advantage. However, technological investment in each bank differs depending on size, objectives and readiness in terms of capital and human resources. All the findings have implications for the bank and could be applied to other banks and to general policy makers in various business enterprises.
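The S-shaped diffusion pattern mentioned in the abstract can be illustrated with a stripped-down stock-and-flow simulation in the system dynamics spirit, shown below. The single adopters stock, the Bass-style adoption flow and all parameter values are illustrative assumptions and do not come from the bank models developed in the thesis.

```python
# Hypothetical sketch of an S-curve technology diffusion run: one "adopters"
# stock fed by an adoption flow driven by word of mouth and remaining market
# potential. Parameter values are invented for illustration only.
market_potential = 10_000        # potential users of the technology
p, q = 0.01, 0.4                 # external influence / word-of-mouth rates
dt, horizon = 0.25, 20.0         # time step and simulated years

adopters, t = 0.0, 0.0
trajectory = []
while t <= horizon:
    trajectory.append((t, adopters))
    adoption_flow = (p + q * adopters / market_potential) * (market_potential - adopters)
    adopters += adoption_flow * dt   # integrate the stock (Euler step)
    t += dt

for t, a in trajectory[::8]:         # print every 2 simulated years
    print(f"year {t:4.1f}: adopters = {a:7.0f}")
```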
APA, Harvard, Vancouver, ISO, and other styles
33

Trauntschnig, Joakim, and David Oliver Hetz. "Family Businesses Long-term Orientation – the Effect on their Digital Transformation : A multiple-case study within traditional industries." Thesis, Jönköping University, Internationella Handelshögskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-48850.

Full text
Abstract:
Digital technologies are disrupting firms of all sizes in all industries. This prompts firms to formulate responses to the ongoing changes and challenges that digitalization spawns. Family businesses in traditional industries especially, with their unique structural and behavioral characteristics and economic importance, must reinvent themselves and transform their business digitally to ensure longevity. Despite the recent increase in academic contributions on the digital transformation phenomenon, attention to specific organizational forms, in particular family businesses, is still scarce. A better understanding of how these organizations undergo a digital transformation is vital for family business adaptation and future survival. This study therefore emphasizes the specific attribute of long-term orientation. Hence, the purpose of this study was to investigate how family businesses' long-term orientation impacts their digital transformation process. With the intention of contributing to academia through theory building, we chose a qualitative, exploratory research design using a multiple case study of six selected family businesses. Fourteen semi-structured interviews with seven family managers and seven non-family managers were conducted to collect data. The results show that family businesses prepare a basis for their digital transformation through the influence of their long-term orientation. Building on our findings, we developed a model of a digital foundation for family businesses' endeavors in their digital transformation process, intended to manage the three phases of digital transformation. This digital foundation, as a capability affected by a family business's long-term orientation, impacts the digital transformation process in distinct ways.
APA, Harvard, Vancouver, ISO, and other styles
34

Intrapairot, Arunee. "A study on the adoption and diffusion of information and communication technologies in the banking industry in Thailand using multiple-criteria decision making and system dynamics approaches." Curtin University of Technology, Graduate School of Business, 2000. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10367.

Full text
Abstract:
The main objective of this study is to develop requisite models for information and communication technology (ICT) adoption and diffusion in the banking industry in Thailand. The research, combining the two study areas of multiple criteria decision-making (MCDM) and system dynamics (SD), is conducted using two research methodologies: system development and a case study of the Siam Commercial Bank PCL in Thailand. The study shows how to combine the two decision-making tools of MCDM and system dynamics effectively. The requisite group models of ICT adoption and diffusion provide ways to select the most preferred technology and to allow forward planning so that the adopted technology can be diffused more effectively. With an embedded decision support tool, decision-makers are able to apply the models with their available information, intuition, knowledge, and experience to improve their decision-making and enhance their learning. Initially, the research revealed that the Siam Commercial Bank currently employs various types of information and communication technologies (ICT) to facilitate work processes, fulfil customers' requirements, and retain its competitive advantage. However, the bank still confronts problems relating to technology adoption and diffusion. A requisite group model of ICT adoption was developed using MCDM as a decision-making tool. The model illustrated how to select the technological alternative that best fulfilled the mission of the bank. Results from the MCDM analysis revealed that the preferred technology was Extranet banking, followed by a data warehouse. The requisite group model of ICT diffusion was further developed using the system dynamics approach in order to enhance understanding of the system behaviour of the selected technology and then provide ways to diffuse it more effectively. The model analyses were divided into three sub-models: information and communication technologies (ICT), a data warehouse, and Extranet banking. The generic model of ICT can be applied to any particular technology. Results revealed that the pattern of technology diffusion follows the S curve and that the dominant variables that may impact technology diffusion are training, a backlog of problems, and market potential. Furthermore, economic returns are obtained only after spending substantially on technological investment; thus, it is necessary to balance technological investment against economic returns. The model of diffusion of a data warehouse was developed highlighting the need for both quality and quantity of knowledge workers; therefore, training support is an important factor in diffusing this technology. On the other hand, the model of diffusion of Extranet banking revealed that the success of this technology comes from its acceptance by customers; thus, perceived relative advantages, positive features of the technology, and promotional advertising should be taken into consideration. The S-curve pattern of technology diffusion is also confirmed by the two technologies. The policy for technology adoption involves the selection of the technology which best fits the identified criteria. The policy analyses of the three technologies confirm that the core policies that increase technology diffusion and economic gains are increasing the positive features of the technology, decreasing perceived complexity, increasing perceived relative advantages, and increasing co-operation between IT people and users. If the technology is to support work performance in an organisation, training support is the dominant policy, whereas if the technology facilitates customers directly, a marketing strategy such as promotional advertising is vital. The study implied that the banking industry in Thailand is able to use ICT as levers for competitive advantage. However, technological investment in each bank differs depending on size, objectives and readiness in terms of capital and human resources. All the findings have implications for the bank and could be applied to other banks and to general policy makers in various business enterprises.
APA, Harvard, Vancouver, ISO, and other styles
35

Turk, Mustafa. "Substituting Natural Gas with Solar Energy in Industrial Heating Applications : A Multiple Case Study within Italy and Spain." Thesis, Uppsala universitet, Industriell teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447350.

Full text
Abstract:
With the increasing awareness of global warming and the need to limit greenhouse gas emissions, several sectors are witnessing comprehensive transformations towards sustainable generation and consumption. The European Union can be considered the home of most of these transformations, given the union's efforts to enable decarbonization through regulatory frameworks and initiatives. However, one overlooked source of carbon emissions is the industrial heating sector, which is heavily dependent on fossil fuels. Emerging technologies such as solar thermal could provide a solution for limiting the greenhouse gases emitted by this sector. This study examines the factors influencing the diffusion of solar thermal technology and its potential for substituting natural gas in the industrial heating sector. Specifically, the study examines the thermal energy supply side as a potential facilitator for the diffusion of solar thermal technology. Certain elements from Everett Rogers' (1995) work on the diffusion of innovations are applied to solar thermal technology, along with the concept of lead users by von Hippel (1986). The study follows a qualitative approach in collecting and analyzing data through interviews and document analysis. Experts from the energy sector were interviewed, and public documents of two major utility companies were examined. The findings suggest that the utility companies examined, despite their evident decarbonization efforts, do not represent a suitable vehicle for the diffusion of solar thermal technology. Instead, a business model based on energy efficiency could be the possible breakthrough for this technology. Finally, the study concludes with suggestions for possible actions to expedite the diffusion of solar thermal technology in the industrial sector.
APA, Harvard, Vancouver, ISO, and other styles
36

Semensato, Bárbara Ilze. "Les capacités dynamiques pour l'innovation et les modèles d'internationalisation des entreprises basées sur les nouvelles technologies : une étude de cas multiple avec les PME Brésiliennes." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAG009.

Full text
Abstract:
The globalization of markets and growing international competitiveness over the last two decades have enabled the entry of competing firms into the market, among which are small firms. Notably recognized for their social and economic importance, small enterprises in the industry, trade and services sectors are, in numerical terms, the vast majority of businesses in Brazil. Given the importance of this object of research, the general objective of this research is to explore the relationship between innovation orientation and the internationalization patterns of small and medium-sized enterprises (SMEs). To achieve this general objective, three specific objectives are drawn: the study of the internationalization process and patterns of small and medium-sized technology-based firms, and the study of the dynamic capabilities for innovation inherent to the distinct internationalization processes and patterns of the SMEs. The dynamic capabilities for innovation drive the development of technological innovation, namely innovation in products, processes and services, while also fostering the development of non-technological innovation, in other words marketing and organizational innovation. In addition, the dynamic capabilities have a positive impact on the competitiveness of small businesses in domestic and international markets. The theoretical basis of this research lies in the Internationalization Theories, from the Behavioral School and the Economic School of business internationalization, and the Innovation Theories, referring to the Dynamic Capabilities for Innovation. In order to better understand the object of research, each topic includes a section concerning SMEs. The sectoral diversity of the participating firms contributed to the breadth of results on the dynamic capabilities for innovation of Brazilian SMEs, as well as to the identification of their internationalization patterns. Based on a qualitative study, the analysis shows that Brazilian SMEs seek to differentiate themselves through innovation in the international markets in which they operate. The internationalization patterns of Brazilian SMEs differ in some criteria from those shown in the literature. The analysis of dynamic capabilities for innovation shows that small Brazilian companies have high potential for developing innovation, even in the presence of external barriers. Concerning internationalization, the SMEs of the study display specific international patterns, therefore requiring criteria and approaches that differ from the literature. As academic contributions, the research presents the analysis of dynamic capabilities for innovation related to the internationalization patterns of Brazilian SMEs, presenting variables that emerge from the research themes. Finally, as managerial contributions, the analysis of the cases makes it possible to verify how firms seek to position themselves competitively in international markets.
APA, Harvard, Vancouver, ISO, and other styles
37

Hathaway, Drew Aaron. "The use of immersive technologies to improve consumer testing: the impact of multiple immersion levels on data quality and panelist engagement for the evaluation of cookies under a preparation-based scenario." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1448994162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Sandberg, Pontus. "A work process supporting the implementation of smart factory technologies developed in smart factory compliant laboratory environment." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44257.

Full text
Abstract:
The industry is facing major challenges today: tougher global competition, customers who require individualized products, and shorter product lifecycles. The predicted fourth industrial revolution is seen as a way to deal with these challenges. Industry 4.0 includes strategies linked to several technologies that will meet the new needs. The smart factory is a central concept in Industry 4.0 and involves connected technologies of various kinds, such as digital manufacturing technology, network communication technology, computer technology, automation technology and several other areas. In this work, these were defined as smart factory technologies. Implementing such technologies will result in improved flexibility, resource productivity and efficiency, quality, etc. However, implementing smart factory technologies poses major challenges for companies. Laboratory environments can be utilized to address these challenges. This results in a new problem: how to transfer a smart factory technology developed in a laboratory environment to a full-scale production system. In the literature study, no structured approach was identified to handle this challenge. Therefore, the purpose of this work was to create a work process that supports the technology transfer from a smart factory compliant laboratory environment to a full-scale production system. To fulfil the purpose, the following research questions were answered: RQ1: What are the differences in the operating environment between the laboratory and the full-scale production system? RQ2: How is a smart factory technology determined ready to be implemented into a full-scale production system? RQ3: What critical factors should a work process for the implementation of smart factory technologies include? The research questions were answered by conducting a multiple-case study in collaboration with Scania CV AB. During the case studies, interviews, observations and other relevant types of data collection were conducted. The results were as follows: RQ1: How difficult it is to transfer a technology from a laboratory environment to a full-scale production system depends on how large the differences between them are. The general difference is that laboratory environments are used to experiment and develop technologies, whereas a full-scale production system is used to produce products. Some want the laboratory environment to be an exact copy of a full-scale production system, but this is not appropriate, because it means losing the freedom to experiment and it would be much more expensive. RQ2: Determining whether a smart factory technology is ready consists of two parts: laboratory activities and pilot testing. A structured assessment method has been developed. The laboratory activities reduce the risks and contribute to raising the maturity of the technology. In pilot testing, it is important not to interfere with the stability of the full-scale production system; this is the reason for doing pilot testing in a delimited area first and checking that the technology works as desired. RQ3: The critical factors identified were: competence and knowledge, technology contributing to improvements, considering risks of implementation, cost versus potential improvement, clear goals and a clear reason for implementation, and communication.
APA, Harvard, Vancouver, ISO, and other styles
39

Castro, Juscileide Braga de. "Construção do conceito de covariação por estudantes do ensino fundamental em ambientes de múltiplas representações com suporte das tecnologias digitais." Universidade Federal do Ceará, 2016. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16487.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This research aimed to analyze the contributions of a methodology developed with the support of digital technologies to the development of the concept of covariation present in multiplicative structures. To this end, the situations present in the multiplicative conceptual field were analyzed, verifying whether or not covariation occurs. The development of the activities was grounded in studies on the contributions of multiple representations to learning and on the humans-with-media approach. Intervention research was adopted as the methodology. The investigation was carried out in a full-time municipal school located in the city of Fortaleza, Ceará, with students from one 6th-grade class of elementary school. The class was divided into a Control Group (CG), with 15 students, and an Experimental Group (EG), with 12 students. The investigation was divided into three stages: pre-test, intervention and post-test. All students in both groups took the pre-test and the post-test, administered individually and without the use of a computer, in order to diagnose their understanding of situations of simple proportion, multiple proportion and double proportion, of the interpretation and construction of line graphs, and of patterns in tables. The intervention took place only with the EG, during Mathematics classes. This stage lasted 3 months, with 18 sessions. The activities developed for these sessions used digital technologies such as the GeoGebra software, the digital resource Equilibrando proporções, the online application Cacoo, WhatsApp and a blog. The CG kept its Mathematics classes and elective subjects at the same times as the EG. The data were analyzed in order to understand the students' performance before and after the activities, the theorems-in-action mobilized during the intervention and their evolution, and the contributions of the technologies used to the understanding of the concept of covariation. The students who underwent the intervention showed statistically superior performance compared with the CG students, demonstrating the effectiveness of the methodology. A modification of schemes through more elaborate strategies was also observed, even for situations that were already familiar to the EG students. The digital technologies used contributed to the understanding of invariance and covariation by relating multiple representations dynamically and by enabling the production of knowledge and the meaning-making of social and mathematical contexts.
APA, Harvard, Vancouver, ISO, and other styles
40

Angelis, Aris Nikolaos. "Multiple criteria decision analysis for assessing the value of new medical technologies : researching, developing and applying a new value framework for the purpose of health technology assessment." Thesis, London School of Economics and Political Science (University of London), 2017. http://etheses.lse.ac.uk/3742/.

Full text
Abstract:
Introduction: Current evaluation approaches for new medical technologies are problematic for a plethora of reasons relating to measuring their expected costs and consequences, but also due to hurdles in turning assessed information into coverage decisions. Most adopted methodologies focus on a limited number of value dimensions, despite the fact that the value of new medicines is multi-dimensional in nature. Explicit elicitation of social value tradeoffs is not possible and decision-makers may adopt intuitive or heuristic modes for simplification purposes, based on ad hoc procedures that might lead to arbitrary decisions. Objectives: The objective of the present thesis is to develop and empirically test a methodological framework that can be used to assess the overall value of new medical technologies by explicitly capturing multiple aspects of value while allowing for their tradeoffs, through the incorporation of decision-makers' preferences in a structured and transparent way. The research hypothesis is that Multiple Criteria Decision Analysis (MCDA) can provide a methodological option for the evaluation of new medicines in the context of Health Technology Assessment (HTA), to support decision-making and contribute to more efficient resource allocation. Methods and Empirical Evidence: The first paper proposes a conceptual methodological process based on multi-attribute value theory (MAVT) methods, comprising five distinct phases, outlining the stages involved in each phase and discussing their relevance in the HTA context. The second paper conducts a systematic literature review and expert consultation in order to investigate the practices, processes and policies of value assessment for new medicines across eight European countries and identifies the evaluation criteria employed and how these inform coverage recommendations as part of HTA. The third paper develops a MAVT value framework for HTA, incorporating a generic value tree for new medicines composed of different levels of criteria that fall under five value domains (i.e. therapeutic, safety, burden of disease, innovation and socio-economic), together with a selection of scoring, weighting and aggregating techniques. In the fourth and fifth papers, the value framework is tested empirically by conducting two real-world case studies: in the first, the value tree is adapted for the evaluation of second-line biological treatments for metastatic colorectal cancer (mCRC) patients having received prior oxaliplatin-based chemotherapy; in the second, the value tree is conditioned for the evaluation of third-line treatments for metastatic castrate-resistant prostate cancer (mCRPC) patients having received prior docetaxel chemotherapy. Both case studies were informed by decision conferences with relevant expert panels. In the mCRC decision conference, multiple stakeholders participated, reflecting the composition of the English National Institute for Health and Care Excellence (NICE) technology appraisal committees, whereas in the mCRPC decision conference a group of evaluators from the Swedish Dental and Pharmaceutical Benefits Agency (TLV) participated, thereby adopting the TLV decision-making perspective. Policy Implications: The value scores produced from the MCDA process reflect a more comprehensive benefit metric that embeds the preferences of stakeholders and decision-makers across a number of explicit evaluation criteria.
The incorporation of alternative treatments’ purchasing costs can then be used to derive incremental cost value ratios based on which the treatments can be ranked on ‘value-for-money’ grounds, reflecting their incremental cost relative to incremental value. Conclusion: The MCDA value framework developed can aid HTA decision-makers by allowing them to explicitly consider multiple criteria and their relative importance, enabling them to understand and incorporate their own preferences and value trade-offs in a constructed and transparent way. It can be turned into a flexible decision tool for resource allocation purposes in the coverage of new medicines by payers but could also be adapted for other decision-making contexts along their development, regulation and use.
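For intuition about the aggregation step described above, the sketch below applies a weighted additive (MAVT-style) aggregation of partial value scores and derives an incremental cost-value ratio for a hypothetical new medicine against a comparator. The criteria weights, scores and costs are invented and are not results from the thesis case studies.

```python
# Hypothetical sketch of additive MAVT aggregation plus an incremental
# cost-value ratio; all numbers are invented for illustration.
criteria_weights = {            # normalised weights over the five value domains
    "therapeutic": 0.40, "safety": 0.25, "burden_of_disease": 0.15,
    "innovation": 0.10, "socio_economic": 0.10,
}
treatments = {                  # 0-100 partial value scores per criterion
    "comparator":   {"therapeutic": 40, "safety": 60, "burden_of_disease": 50,
                     "innovation": 20, "socio_economic": 50},
    "new_medicine": {"therapeutic": 70, "safety": 55, "burden_of_disease": 50,
                     "innovation": 60, "socio_economic": 55},
}
annual_cost = {"comparator": 12_000, "new_medicine": 30_000}  # per patient (assumed)

def overall_value(scores):
    """Weighted additive aggregation of the partial value scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

v_old = overall_value(treatments["comparator"])
v_new = overall_value(treatments["new_medicine"])
icvr = (annual_cost["new_medicine"] - annual_cost["comparator"]) / (v_new - v_old)

print(f"overall value: comparator={v_old:.1f}, new medicine={v_new:.1f}")
print(f"incremental cost-value ratio = {icvr:,.0f} per value point")
```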
APA, Harvard, Vancouver, ISO, and other styles
41

Thimon, Bozec Sophie. "La fabrique d'une compétence stratégique, proposition d'un modèle : une application aux impacts des usages des technologies de l'information en PME." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0097/document.

Full text
Abstract:
Our research object is twofold: understand a strategic organizational competences bulding using IT, on the one hand, and go beyond the existing analysis of impacts of the IT models, integrating a multidimensional approach, on the other hand. We privileged an empirical approach beginning with a pilot case study that allowed us to define a theoretical framework of an abductive way, by a constant back and forth between the field and the theoretical existing constructs. The conceptual framework is a synthesis of the various RBV strands integrating the concept of organizational learning. The results of a multicase study underline the diffused and partially intentional character of a process, requiring interaction of additional resources highly dependent from the past, and leading to improve efficiency of the routines and dynamic adaptability. We propose a design of the process and discuss significant concepts for the study of this phenomenon
APA, Harvard, Vancouver, ISO, and other styles
42

Anantharramu, Gurruraj, and Pascal Kaiser. "Understanding the design and delivery of customer experience from multiple perspectives : A case study within luxury travel industry." Thesis, Uppsala universitet, Industriell teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413315.

Full text
Abstract:
Since the introduction of the experience economy, staging experiences and providing an optimal customer experience have become the new battlefield in marketing. The modern customer has multiple devices, channels and touchpoints through which to interact with an organization, and with rapidly changing digital technology, where product and service information is available online 24/7, customers are in charge of their own experiences. This multitude of options poses great challenges for organizations in understanding customer needs, expectations and behavior, and in predicting and managing customer experience. Despite numerous studies and streams of literature, authors, scholars and practitioners have developed fragmented frameworks and models that only partially address the multidimensional construct of customer experience. Matters become more complicated when these fragmented constructs are used by the luxury travel industry to design, develop and manage customer experience. Therefore, in order to address this broad concept and provide organizations with a holistic framework for delivering customer experience, we conducted a qualitative multi-case study that included 14 semi-structured interviews with various actors within the supply chain of the luxury travel industry. Using thematic analysis, the rich empirical data from the interviews were analyzed and organized into sub-themes and themes. With these themes as the foundation, we propose an integrated conceptual model that captures how a firm integrates customer and co-creation perspectives to provide customer experience. The model consists of five building blocks: Organizational Factors; Design, Delivery and Management of Customer Experience; Co-Creation; Customer Experience Insights/Metrics; and Moderating Factors, which together influence customer experience. Using this conceptual model, we analyze how different actors within the supply chain provide customer experience. We also develop a customer journey map (from a customer perspective) consisting of customer needs, channels and touchpoints, in order to identify the critical touchpoints that act as the primary contributors to customer experience. Finally, we highlight the driving factors and barriers to providing customer experience within the luxury travel industry.
APA, Harvard, Vancouver, ISO, and other styles
43

Gulbinas, Andrius. "Visuomeninių pastatų renovacijos daugiakriterinė internetinė sprendimų paramos sistema." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20060220_092634-39045.

Full text
Abstract:
The aim of the dissertation is to improve the efficiency of the refurbishment process of public buildings by using multiple criteria analysis methods, a model developed for the integrated analysis of the negotiation process for building refurbishment, and the Multiple Criteria Decision Support System for the Refurbishment of Public Buildings developed on the basis of these models.
APA, Harvard, Vancouver, ISO, and other styles
44

Åhman, Siri. "Wait, I'm him now? : Identification and choice in games with more than one protagonist." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15418.

Full text
Abstract:
This study examines correlations between player choice and identification in a multiple-protagonist video game, seeking to determine whether a player’s identification with one or more player characters affects the way they make choices while playing. It discusses various definitions and types of identification as well as ways to create a successful narrative with multiple protagonists. The artefact created for the study is a text-based game with a branching narrative in which the player is required to make choices for three different characters; the study uses a qualitative research method based on interviews with a small group of participants. The results show that players seek to identify with the player character even when there is more than one, and often use this as a basis for the choices they make, either by imagining themselves in the situation of the main character or by imagining that they are the main character. Players do not usually base the choices they make as one character on their identification, or lack thereof, with another, and regardless of how they made their choices, most players made more or less the same ones. However, the study did show that a lack of identification made making choices for that character more difficult, which lessened players’ enjoyment of that storyline.

The artefact used in this study was developed in collaboration with Amanda Thim.
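Purely as an illustration of what a branching, multi-protagonist text game can look like structurally, here is a minimal Python sketch; the node names, characters and text are invented and do not come from the study's artefact:

# Hypothetical sketch of a branching, multi-protagonist narrative structure.
story = {
    "ana_intro": {"character": "Ana", "text": "Ana hears a knock at the door.",
                  "choices": {"open the door": "ana_open", "ignore it": "ben_intro"}},
    "ana_open":  {"character": "Ana", "text": "A stranger asks for shelter.",
                  "choices": {"let them in": "ben_intro", "refuse": "ben_intro"}},
    "ben_intro": {"character": "Ben", "text": "Ben finds Ana's note the next morning.",
                  "choices": {}},  # empty choices = ending in this toy example
}

def play(node_id: str = "ana_intro") -> None:
    """Walk the branching story, switching perspective node by node."""
    while True:
        node = story[node_id]
        print(f"[{node['character']}] {node['text']}")
        if not node["choices"]:
            print("The End.")
            return
        options = list(node["choices"])
        for i, option in enumerate(options, start=1):
            print(f"  {i}. {option}")
        picked = options[int(input("Choose: ")) - 1]
        node_id = node["choices"][picked]

# play()  # uncomment for an interactive run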

APA, Harvard, Vancouver, ISO, and other styles
45

Engblom, Alexander, and Emma Lundquist. "Sharing Sales and Service Networks with Competitors : A Multiple-case Study in the Heavy Truck Industry." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236768.

Full text
Abstract:
In the heavy truck industry, sales and service networks play an important role. Investments in network infrastructure are costly, and to be profitable, a critical service volume must be reached. This is especially challenging in new markets, where service demand is low and uncertain. A possible solution is to share sales and service networks with competitors. Simultaneous cooperation and competition, co-opetition, is a complex and contradictory phenomenon that has been researched previously, but with a focus on cooperation in input activities such as R&D. This thesis investigates co-opetition in output activities, in sales and services, by analysing how heavy truck companies could form competitor partnerships in sales and service networks. This is done through a literature review and a multiple case study. From the analysis, a framework for assessing and designing potential competitor partnerships is presented. The framework consists of seven factors that are significant for competitor partnership success in sales and service networks: cultural fit, non-competing products, compatible goals, excess network capacity, dedicated salespersons, high commitment, and patient implementation. The thesis contributes to research on co-opetition in output activities and to the discussion on the meaning of competition, success and partnerships.
APA, Harvard, Vancouver, ISO, and other styles
46

Martin, Luc. "Méthodes de corrections avancées des effets de proximité en lithographie électronique à écriture directe : Application aux technologies sub-32nm." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0003.

Full text
Abstract:
In electron beam lithography, a new proximity effect correction strategy has been devised to address advanced technology nodes and push resolution capabilities beyond the limits of the standard dose modulation technique. In this work, the proximity effects inherent to e-beam lithography were studied on the latest-generation e-beam tools available at LETI, and the limits of standard dose modulation correction were evaluated. In parallel, a more fundamental, simulation-based approach was used to better understand the impact of each step of the lithographic process on the patterned features. A new advanced correction strategy, called multiple pass exposure, was then developed. It involves specific sub-resolution features known as eRIF (electron Resolution Improvement Features), whose exposure, combined with that of the nominal features, provides finer control of the dose distribution injected into the resist. In this work, the position, dose and dimensions of the eRIF were studied in depth; optimization algorithms and clean-room experiments were used to tune these parameters and to demonstrate the gains brought by the eRIF. Compared with standard dose modulation, significant improvements were demonstrated on real integrated circuits: thanks to multiple pass exposure, the ultimate resolution of the e-beam lithography tools was extended by two technology nodes for the most critical levels of a circuit. The design rules established for the eRIF were finally implemented in correction models via INSCALE, the data preparation software from ASELTA NANOGRAPHICS, to enable automated correction of full layouts.
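As a rough illustration of the proximity-effect problem that dose modulation aims to compensate, here is a toy one-dimensional Python sketch with invented parameters; it is not the thesis's simulation code or correction algorithm, merely the textbook double-Gaussian picture of forward and backscattering with a naive iterative dose correction:

import numpy as np

# Toy 1-D model: deposited energy = exposed pattern convolved with a
# double-Gaussian point spread function (forward range alpha, backscatter
# range beta, backscatter ratio eta). All parameters are invented.
alpha, beta, eta = 0.03, 3.0, 0.6          # microns, dimensionless
x = np.linspace(-10, 10, 2001)             # 10 nm grid
dx = x[1] - x[0]

psf = (np.exp(-x**2 / alpha**2) / (np.pi * alpha**2)
       + eta * np.exp(-x**2 / beta**2) / (np.pi * beta**2))
psf /= psf.sum() * dx                      # normalise to unit integral

pattern = ((np.abs(x) < 0.1) | (np.abs(x - 0.5) < 0.1)).astype(float)  # two lines
dose = pattern.copy()

# Naive iterative dose modulation: rescale the dose so that the deposited
# energy inside the pattern approaches a constant target of 1.
for _ in range(20):
    deposited = np.convolve(dose, psf, mode="same") * dx
    inside = pattern > 0
    dose[inside] *= 1.0 / np.clip(deposited[inside], 1e-6, None)

print("energy spread inside features:", deposited[inside].max() - deposited[inside].min())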
APA, Harvard, Vancouver, ISO, and other styles
47

Saidi, Taofik. "Architectures matérielles pour la technologie W-CDMA étendue aux systèmes multi-antennes." Doctoral thesis, Université Laval, 2009. http://hdl.handle.net/20.500.11794/20828.

Full text
Abstract:
Over the last ten years, the advent of multi-antenna (MIMO) techniques for wireless communications, mobile or fixed, has revolutionized the possibilities offered to numerous application domains of telecommunications. Placing several antennas at each end of the link considerably increases the capacity of wireless systems. However, the digital algorithms required to realize these systems are far more complex and constitute a challenge for the definition of high-performance hardware architectures. The objective of the present work lies precisely in the optimal definition of architectural solutions, in a CDMA context, to address this problem. The first aspect of this work is an in-depth study of space-time algorithms and of design methods aimed at an efficient hardware implementation. Numerous detection schemes are proposed in the literature and are applicable according to three criteria: quality of service, binary throughput and algorithmic complexity. The latter constitutes a strong constraint for a low-cost implementation of mobile terminals integrating these applications. It is therefore necessary to have powerful tools to simulate, evaluate and refine (rapid prototyping) these new systems, likely candidates for fourth-generation telecommunications. The second aspect concerns the realization of a multi-antenna transceiver without channel coding, integrating code division multiple access technology in the case of a wideband channel. A single-antenna WCDMA system, generalizable to any number of antennas, was integrated and simulated on the Lyrtech rapid prototyping platform. The developed architecture integrates the main baseband processing modules, namely Nyquist filtering, multipath detection and the detection stage itself. The MIMO-WCDMA prototype developed is characterized by its flexibility with respect to the number of input channels, the input sample format, the characteristics of the wireless channel and the targeted technology (ASIC, FPGA). The third aspect is more prospective, detailing new mechanisms to reduce the hardware cost of multi-antenna systems. The principle of adaptive fixed-point allocation is presented, with the aim of adapting the data encoding to the characteristics of the wireless channel and consequently minimizing the circuit complexity. In addition, the concept of adaptive architectures is proposed in order to minimize the energy consumed within an embedded system according to the application context.
APA, Harvard, Vancouver, ISO, and other styles
48

Rodrigues, Nomélia Maria Carreiro Sousa. "A Importância das Tecnologias da Informação e Comunicação (TIC) no Processo Educativo dos Alunos com Multideficiência: perceção dos professores." Master's thesis, [s.n.], 2015. http://hdl.handle.net/10284/4995.

Full text
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Education Sciences: Special Education, with specialisation in the Cognitive and Motor Domain.
In all areas of modern society, in Education in general and in Special Education in particular, information technologies are applied to improve quality of life and, in this context, to improve the teaching and learning processes of children with multiple disabilities, given that the characteristics of these students pose major challenges to schools and to the professionals who work with them. This study focuses on the perception of Special Education teachers in the Região Autónoma dos Açores (RAA) regarding the adoption of Information and Communication Technologies (ICT) in pedagogical practice with students with multiple disabilities. It also seeks to understand the attitudes of Special Education teachers towards ICT and their views on its adoption in the educational process of these students. A descriptive study of a quantitative nature was chosen, and data were gathered through a questionnaire survey to which 97 teachers responded. The study concludes that teachers' age has no influence on the use of ICT; teachers with less professional experience show a greater inclination towards knowledge and use of ICT; teachers with a master's or doctoral degree use more applications for playful exploration with students with multiple disabilities than specialized teachers; the greater the use of ICT and of continuing training in the area, the greater the perceived benefits for children with multiple disabilities and the higher the frequency of use; the teachers in the sample also report considerable training needs regarding the use of ICT with students with multiple disabilities.
APA, Harvard, Vancouver, ISO, and other styles
49

Saidi, Taofik. "Architectures matérielles pour la technologie W-CDMA étendue aux systèmes multi-antennes." Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/tsaidi.pdf.

Full text
Abstract:
Over the last ten years, multi-antenna (MIMO) systems for mobile and fixed wireless communications have revolutionized the possibilities offered to numerous telecommunication application domains. Using multiple antennas at both ends of the link considerably increases the capacity of wireless systems. However, the digital algorithms needed to realize these systems are complex and constitute a challenge for the definition of powerful hardware architectures. The goal of the present work centres specifically on the optimal definition of architectural solutions to address this problem in a CDMA context. The first aspect of this work is an in-depth study of space-time algorithms and design methods with regard to an efficient hardware implementation. Numerous detection schemes are proposed in the literature and are applicable according to three criteria: quality of service, binary throughput and algorithmic complexity. The latter constitutes a strong limitation for a low-cost implementation of mobile terminals including these applications. It is therefore necessary to use powerful tools to simulate, evaluate and rapidly prototype these new systems, which are likely candidates for fourth-generation telecommunication systems. The second aspect is the realization of an uncoded multi-antenna transceiver, integrating CDMA in the case of a wideband channel. A single-antenna WCDMA system, generalizable to any antenna array, was integrated and simulated on the Lyrtech rapid prototyping platform. The developed architecture integrates the main baseband processing modules, such as Nyquist filtering, multipath detection and detection itself. The MIMO-WCDMA prototype is characterized by its flexibility with regard to the number of inputs, the sample format, the characteristics of the wireless channel and the targeted technology (ASIC, FPGA). The third aspect is more prospective, since it introduces new methods to reduce the hardware cost of multi-antenna systems. The principle of dynamic allocation of the fixed-point format is presented, with the goal of adapting the data encoding to the wireless channel's characteristics and consequently minimizing the circuit's complexity. In addition, the concept of adaptive architectures is proposed to reduce power consumption in an embedded system according to the application context.
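For readers unfamiliar with the baseband operations mentioned, the following schematic Python sketch illustrates spreading and despreading in a single-antenna CDMA link with made-up parameters; pulse shaping, multipath search and the MIMO dimension are omitted, and it is in no way the thesis's prototype, which targets hardware (ASIC/FPGA) implementation:

import numpy as np

# Schematic CDMA chain: spread BPSK symbols with a +/-1 code, pass them
# through an AWGN channel, then despread by correlation. All parameters
# (spreading factor, code, noise level) are invented for illustration.
rng = np.random.default_rng(0)
sf = 8                                            # spreading factor
code = np.array([1, -1, 1, 1, -1, 1, -1, -1])     # +/-1 spreading code
bits = rng.integers(0, 2, 16) * 2 - 1             # BPSK symbols

chips = np.repeat(bits, sf) * np.tile(code, bits.size)   # spreading
rx = chips + 0.5 * rng.standard_normal(chips.size)       # noisy channel

# Despreading: correlate each symbol period with the code and decide.
rx_symbols = rx.reshape(-1, sf) @ code / sf
detected = np.sign(rx_symbols)
print("bit errors:", int(np.sum(detected != bits)))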
APA, Harvard, Vancouver, ISO, and other styles
50

Saidi, Taofik, Olivier Sentieys, and Sébastien Roy. "Architectures matérielles pour la technologie W-CDMA étendue aux systèmes multi-antennes." Rennes : [s.n.], 2008. ftp://ftp.irisa.fr/techreports/theses/2008/tsaidi.pdf.

Full text
Abstract:
Doctoral thesis: Signal processing and telecommunications: Rennes 1: 2008. Doctoral thesis: Signal processing and telecommunications: Université de Laval (Québec): 2008.
Title taken from the title page of the electronic document. Bibliography pp. 177-182.
APA, Harvard, Vancouver, ISO, and other styles
