
Doctoral dissertations on the topic "LOGO (Computer system)"



Browse the 43 best doctoral dissertations on the topic "LOGO (Computer system)".



Browse doctoral dissertations from many different fields and build an appropriate bibliography.

1

Finnighan, Grant Adam. "Computer image based scaling of logs". Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26698.

Abstract:
Individual log scaling for the forest industry is a time-consuming operation. Presented here are the design and prototype test results of an automated technique that improves on the current speed of this operation while still achieving the required accuracy. It is based on a television camera and graphics monitor that let the operator spot logs in images, which an attached processor can then scale automatically; the system must first be calibrated, however. In addition to the time savings, accuracy is maintained, if not improved, and the operation may now be performed from a sheltered location.
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate
2

Qiu, Tongqing. "Understanding a large-scale IPTV network via system logs". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41228.

Abstract:
Recently, there has been a global trend in the telecommunications industry toward the rapid deployment of IPTV (Internet Protocol Television) infrastructure and services. While the industry rushes into the IPTV era, a comprehensive understanding of the status and dynamics of IPTV networks lags behind. Filling this gap requires in-depth analysis of large amounts of measurement data across the IPTV network. One type of data of particular interest is the device or system log, which has not been systematically studied before. In this dissertation, we explore the possibility of utilizing system logs to serve a wide range of IPTV network management purposes, including health monitoring, troubleshooting and performance evaluation. In particular, we develop a tool to convert raw router syslogs into meaningful network events. In addition, by analyzing set-top box (STB) logs, we propose a series of models to capture channel popularity and dynamics, as well as users' activity on the IPTV network.
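The pipeline from raw router syslogs to network events can be pictured with a minimal sketch; the line format, field names and five-minute window below are illustrative assumptions, not the dissertation's actual tool:

```python
import re
from datetime import datetime, timedelta
from collections import defaultdict

# Illustrative syslog line format (an assumption for the sketch):
# "Jan 12 03:04:05 router7 LINK-3-UPDOWN: Interface ge-0/0/1 changed state to down"
LINE_RE = re.compile(r"^(\w{3} +\d+ [\d:]+) (\S+) ([\w-]+): (.*)$")

def parse(line, year=2011):
    m = LINE_RE.match(line)
    if not m:
        return None
    ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S")
    return {"time": ts, "device": m.group(2), "tag": m.group(3), "msg": m.group(4)}

def group_into_events(records, window=timedelta(minutes=5)):
    """Fold repeated (device, tag) messages within a time window into one event."""
    events = defaultdict(list)
    for r in sorted(filter(None, records), key=lambda r: r["time"]):
        bucket = events[(r["device"], r["tag"])]
        if bucket and r["time"] - bucket[-1][-1]["time"] <= window:
            bucket[-1].append(r)    # same ongoing event
        else:
            bucket.append([r])      # a new network event
    return events
```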
3

Katebi, Ataur Rahim. "Supporting snapshots in a log-based file system". [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0008900.

4

Magnusson, Jesper. "Monitoring malicious PowerShell usage through log analysis". Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75152.

5

Zhu, Lilin. "Logserver monitor for managing log messages of applications". CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2054.

Abstract:
This project is a graphical user interface for managing log information. Logging is an important component of the software development cycle, as well as of performance diagnostics and monitoring of the software after deployment. The LogServer Monitor provides a graphical user interface for the display and management of logged information from a distributed environment.
6

Michel, Hannes. "Visualizing audit log events at the Swedish Police Authority to facilitate its use in the judicial system". Thesis, Luleå tekniska universitet, Digitala tjänster och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75244.

Abstract:
Within the Swedish Police Authority, physical users' actions within all systems that manage sensitive information are registered and sent to an audit log. The audit log contains log entries that consist of information regarding the events caused by the performing user. This means that the audit log continuously manages massive amounts of data which are collected, processed and stored. For the police authority, the audit log may be useful for proving a digital trail of something that has occurred. An audit log is based upon the data collected by a security log. Security logs can collect data from most of the available systems and applications. This gives the organization the ability to implement network surveillance over its digital assets, where logs are collected in real time, which enables the possibility to detect any intrusion on the network. Furthermore, additional assets from which log events are generated are security software, firewalls, operating systems, workstations, networking equipment, and applications. The actors in a court of law usually do not possess the technical knowledge required to interpret log events, since these can contain variable names, unparsed data or undefined values. This emphasizes the need for a user-friendly presentation of the audit log events that facilitates their use. Researching a way of taking the current data format and displaying it in a more presentable manner would be beneficial as academic research by producing a generalizable model. In addition, it would prove useful for the internal investigations of the police authority, since it was shaped by their needs.
7

Hejderup, Jacob. "Multipla loggar för ökad programförståelse : Hur multipla loggar kan bidra till programutveckling och programförståelse". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-39390.

Abstract:
To develop or maintain a piece of code requires a certain level of comprehension of the software itself. To achieve this, developers use a set of different tools. This study focuses on two types of debug tools based on dynamic analysis: single traces and multiple traces. The purpose of the study is to examine how multiple traces can contribute to improved program comprehension during software development. The study was carried out through experiments and interviews. The experiment consisted of 10 typical comprehension tasks in a development context. Eclipse and Trace Compass were used to display the logs: Eclipse is a development environment that shows the source code, and Trace Compass is a tool for inspecting traces. After the experiment, an interview was carried out with each participant. The results indicate that multiple traces could have an advantage over a single trace when the task is to understand the interactions between two or more components in a software system. One limitation is that the study had too few participants to support a more general conclusion.
8

Barrett, Scott M. "A Computer Simulation Model for Predicting the Impacts of Log Truck Turn-Time on Timber Harvesting System Productivity". Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/31170.

Abstract:
A computer simulation model was developed to represent a logging contractor's harvesting and trucking system of wood delivery from the contractor's in-woods landing to the receiving mill. The Log Trucking System Simulation model (LTSS) focuses on the impacts to logging contractors as changes in truck turn times cause an imbalance between harvesting and trucking systems. The model was designed to serve as a practical tool that can illustrate the magnitude of cost and productivity changes as the delivery capacity of the contractor's trucking system changes. The model was used to perform incremental analyses using an example contractor's costs and production rates to illustrate the nature of impacts associated with changes in the contractor's trucking system. These analyses indicated that the primary impact of increased turn times occurs when increased delivery time decreases the number of loads per day the contractor's trucking system can deliver. When increased delivery times cause the trucking system to limit harvesting production, total costs per delivered ton increase. In cases where trucking significantly limits system production, total costs per delivered ton would decrease if additional trucks were added. The model allows the user to simulate a harvest with up to eight products trucked to different receiving mills. The LTSS model can be utilized without extensive data input requirements and serves as a user-friendly tool for predicting cost and productivity changes in a logging contractor's harvesting and trucking system based on changes in truck delivery times.
Master of Science
9

Goel, Prateek. "Integrated system for subsurface exploration data collection and borehole log generation". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/20967.

10

Ma, Hongyan. "User-system coordination in unified probabilistic retrieval: exploiting search logs to construct common ground". Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581426061&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

11

Knutsson, Karl. "Security Without Cost : A Cryptographic Log-structured File System". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4715.

Abstract:
Historically, cryptographic file systems have been several times slower than non-cryptographic file systems. This paper describes the design and implementation of a fast Cryptographic Log-structured File System on OpenBSD. We experimentally demonstrate that our prototype file system performs close to the Fast File System (FFS) and the Log-structured File System (LFS). To increase performance, our file system performs most encryption and decryption work during disk read and write operations. This is possible thanks to the SEAL encryption algorithm, a software-optimized stream cipher that allows the encryption work to be performed before the actual data is available. We believe that our cryptographic file system design is ideal for optimal read and write performance on locally stored confidential data.
This thesis describes the development of a cryptographic log-structured file system, and we show through experiments that its performance is comparable to local file systems.
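The performance argument rests on a property of stream ciphers: the keystream can be generated before the plaintext exists, so the write path only pays for an XOR. A minimal sketch of that idea follows; SEAL itself is not in the Python standard library, so a SHA-256 counter-mode keystream stands in for it:

```python
import hashlib

def keystream(key: bytes, block_id: int, length: int) -> bytes:
    """Counter-mode keystream from SHA-256: a stand-in for SEAL.
    It can be generated ahead of time, independently of the data."""
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + block_id.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Precompute the keystream while the write is still being assembled...
ks = keystream(b"file-key", block_id=42, length=4096)
# ...so that encrypting at disk-write time is just an XOR:
ciphertext = xor(b"log-structured segment payload".ljust(4096, b"\0"), ks)
```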
12

Radley, Johannes Jurgens. "Pseudo-random access compressed archive for security log data". Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1020019.

Abstract:
We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties incurred by this storage system, including a decreased compression ratio as well as increased compression and decompression times.
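The core idea, an entry identifier that points into a block-compressed store so a single entry can be read back without decompressing the whole archive, can be sketched as follows (a toy illustration, not the dissertation's implementation):

```python
import zlib

class BlockArchive:
    """Compress log events in fixed-size blocks and map entry id -> block,
    so one entry is readable without decompressing the whole archive."""
    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.blocks, self.pending = [], []

    def append(self, event: str) -> int:
        entry_id = len(self.blocks) * self.block_size + len(self.pending)
        self.pending.append(event)
        if len(self.pending) == self.block_size:
            self.blocks.append(zlib.compress("\n".join(self.pending).encode()))
            self.pending = []
        return entry_id  # a pointer usable by external indexing methods

    def get(self, entry_id: int) -> str:
        block, offset = divmod(entry_id, self.block_size)
        if block == len(self.blocks):          # entry not yet compressed
            return self.pending[offset]
        return zlib.decompress(self.blocks[block]).decode().split("\n")[offset]
```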
13

Rizothanasis, Georgios. "Identifying User Actions from Network Traffic". Thesis, Linköpings universitet, Databas och informationsteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119675.

Abstract:
Identification of a user's actions while browsing the Internet is mostly achieved by instrumenting the user's browser or by obtaining server logs. In both cases this requires installation of software on multiple clients and/or servers in order to obtain sufficient data. By using network traffic, however, access to user-generated traffic from multiple clients to multiple servers is possible. In this project a proxy server is used for recording network traffic and a user-action identification algorithm is proposed. The proposed algorithm includes various policies for analyzing network traffic in order to identify user actions. The project also presents an evaluation framework for the proposed policies, which reveals the tradeoffs between them. Proxy servers are widely deployed by numerous organizations and often used for web mining, so user-action recognition can become a new tool for web traffic evaluation.
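One plausible policy from the family the project evaluates is grouping requests from the same client by idle gaps; the sketch below illustrates that idea, with the field names and threshold as assumptions:

```python
from datetime import timedelta

def identify_actions(requests, idle_gap=timedelta(seconds=2)):
    """Group proxy-log requests into user actions: requests from the same
    client separated by less than an idle gap belong to one action.
    `requests` are dicts with 'client', 'time', 'url' (illustrative fields)."""
    actions = {}
    for r in sorted(requests, key=lambda r: (r["client"], r["time"])):
        groups = actions.setdefault(r["client"], [])
        if groups and r["time"] - groups[-1][-1]["time"] < idle_gap:
            groups[-1].append(r)   # same burst: embedded resource fetches
        else:
            groups.append([r])     # a new click, i.e. a new user action
    return actions
```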
14

Wang, Xiaohan. "Designing and Evaluating a Visualization System for Log Data". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279586.

Abstract:
In the engineering field, log data analysis is conducted by most companies, as it has become a significant step for discovering problems and obtaining insights into a system. Visualization, which brings better comprehension of data, can be used as an effective and intuitive method for data analysis. This study applies a participatory design approach to develop a visualization system for log data, employing design activities including interviews, prototyping, usability testing and questionnaires, along with a comparative study on the impact of narrative visualization techniques and storytelling on usability and user engagement with exploratory visualizations. The findings showed that using storytelling and narrative visualization techniques seems to increase user engagement while it does not seem to increase usability. Definitive conclusions could not be drawn due to the low demographic diversity of participants; however, the results can serve as an initial insight to trigger further research on the impact of storytelling and narrative visualization techniques on user experience. Future research is encouraged to recruit a larger and more diverse set of participants, pre-process the log data, and conduct a comparative study on selecting the best visualization for log data.
15

Strömgren, Calle, and Marcus Storm. "System Monitor : Ett felsökningssystem för Paperline". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-32315.

Abstract:
When an error occurs in an IT system that is vital to the production process of a major industry, the consequences can be great. Quickly identifying and correcting errors is important, as a stopped system can lead to a break in production, which is costly for the industry. Our task in this thesis has been to develop a system for ÅF that facilitates the debugging of the Paperline system. The system's target audience is ÅF's on-call personnel, who provide support for Paperline 24 hours a day when something goes wrong. The system consists of a Windows service, a database and a web application, and is developed mainly with C#.NET, MVC 5, Google Charts, Javascript, HTML, CSS and Entity Framework. The result of the thesis is a deployed system that facilitates debugging by retrieving, interpreting and presenting the log messages that Paperline produces. The system is used by the on-call group at ÅF to easily perform troubleshooting work in Paperline.
16

Kristiansson, Herrera Lucas. "A structured approach to selecting the most suitable log management system for an organization". Thesis, Uppsala universitet, Datalogi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424413.

Abstract:
With the advent of digitalization, a typical organization today contains an ecosystem of servers, databases, and other components. These systems can produce large volumes of log data on a daily basis. By using a log management system (LMS) for collecting, structuring and analyzing these log events, an organization can benefit in its services. The primary intent of this thesis is to construct a decision model that will aid organizations in finding the LMS that best fits their needs. To construct such a model, a number of log management products, both proprietary and open source, are investigated. Furthermore, good practices for handling log data are surveyed across various papers and books on the subject. The result is a decision model that can be used by an organization when preparing, implementing, maintaining and choosing an LMS. The decision model attempts to quantify various properties such as product features, but the LMSs it suggests should mostly be seen as a decision basis. To make the decision model more comprehensive and usable, more products should be included in the model, and other factors that could play a part in finding a suitable LMS should be investigated.
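The quantification step of such a decision model can be illustrated with a simple weighted scoring matrix; the products, criteria and weights below are invented placeholders, not the thesis data:

```python
# A minimal weighted-scoring sketch of the decision model's quantification step.
weights = {"search": 0.3, "alerting": 0.2, "cost": 0.3, "open_source": 0.2}
scores = {                      # each feature scored 0..5 per candidate LMS
    "LMS-A": {"search": 5, "alerting": 4, "cost": 2, "open_source": 0},
    "LMS-B": {"search": 4, "alerting": 3, "cost": 4, "open_source": 5},
}
ranked = sorted(scores, key=lambda p: -sum(weights[c] * scores[p][c] for c in weights))
print(ranked)   # an ordering to use as a decision basis, not a final verdict
```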
17

Gillström, Niklas. "Log-selection strategies in a real-time system". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-25844.

Abstract:
This thesis presents and evaluates how to select the data to be logged in an embedded real-time system so as to give confidence that an accurate identification of the fault(s) that caused any runtime errors is possible. Several log-selection strategies were evaluated by injecting random faults into a simulated real-time system. An instrument was created to perform accurate detection and identification of these faults by evaluating log data, and its output was compared to ground truth to determine its accuracy. Three strategies for selecting the log entries to keep in limited permanent memory were created and evaluated using log data from the simulated real-time system. One of the log-selection strategies performed much better than the other two: it minimized processing time and stored the maximum amount of useful log data in the available storage space.
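As an illustration of the problem, a priority-weighted eviction scheme is one plausible selection strategy; the abstract does not specify the three strategies that were compared, so the sketch below is an assumption:

```python
import heapq

class SelectiveLog:
    """Keep the most useful entries in a fixed-size permanent store by
    evicting the least important entry first (a sketch of one plausible
    log-selection strategy, not the thesis's implementations)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap = []     # (priority, seq, entry); smallest evicted first
        self.seq = 0       # tie-breaker that favors newer entries

    def log(self, priority: int, entry: str):
        item = (priority, self.seq, entry)
        self.seq += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        elif item > self.heap[0]:
            heapq.heapreplace(self.heap, item)  # drop least important entry
```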
18

Ling, Zhang. "Regression Test Selection in Multi-Tasking Real-Time Systems based on Run-Time Logs". Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-6690.

Abstract:
Regression testing plays an important role during the software development life-cycle, especially during maintenance: it provides confidence that the modified parts of the software behave as intended and that the unchanged parts are unaffected by the modification. Regression test selection is used to select test cases from the test suites that were used to test the previous version of the software. In this thesis, we extend the traditional definition of a test case with a log file containing information on which events occurred when the test case was last executed. Based on the contents of this log file, we propose a method of regression test selection for multi-tasking real-time systems that can determine which parts of the software have not been affected by the modification. The test cases designed for the unchanged parts therefore do not need to be re-run.
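The selection idea, re-running only the test cases whose logged events intersect the modified parts of the system, can be sketched in a few lines (task names are hypothetical):

```python
def select_tests(test_logs: dict, changed: set) -> list:
    """test_logs maps a test case to the set of tasks/events recorded in
    its log file when it last executed; re-run only tests whose recorded
    events touch a modified part of the system."""
    return [test for test, events in test_logs.items() if events & changed]

# Example: only t1 touched a changed task, so only t1 is re-run.
logs = {"t1": {"taskA", "taskB"}, "t2": {"taskC"}}
print(select_tests(logs, changed={"taskA"}))   # ['t1']
```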

19

Incebacak, Davut. "Design And Implementation Of A Secure And Searchable Audit Logging System". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608431/index.pdf.

Abstract:
Logs are append-only, time-stamped records that represent events in computers or network devices. Today, in many real-world networking applications, logging is a central service; however, it is a big challenge to satisfy the conflicting requirements when the security of log records is of concern. On one hand, being kept mostly on untrusted hosts, the logs should be preserved against unauthorized modifications and privacy breaches. On the other, serving as the primary evidence of digital crimes, logs are often needed by investigators for analysis. In this thesis, motivated by these requirements, we define a model which integrates forward integrity techniques with search capabilities over encrypted logs. We also implement this model with advanced cryptographic primitives such as Identity Based Encryption. Our model, on one side, provides secure delegation of search capabilities to authorized users while protecting information privacy; on the other, these search capabilities set the boundaries of a user's search operation, so that a user cannot access logs unrelated to his case. We also propose an improvement to Schneier and Kelsey's forward integrity mechanism.
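The Schneier-Kelsey style forward integrity that the thesis builds on can be sketched with a hash-chained key that is evolved and discarded after each entry; this is a generic illustration of the mechanism, not the thesis's protocol:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """Forward-secure key update: once the old key is overwritten, past MACs
    cannot be forged even if the current key leaks."""
    return hashlib.sha256(b"evolve" + key).digest()

def append_entry(key: bytes, log: list, message: bytes) -> bytes:
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    log.append((message, tag))
    return evolve(key)   # caller must discard the old key immediately

log, key = [], hashlib.sha256(b"initial secret").digest()
key = append_entry(key, log, b"user alice logged in")
key = append_entry(key, log, b"file /etc/passwd read")
```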
20

Hu, Zhen Hua Sampson. "Antennas with frequency domain control for future communication systems". Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3332/.

Abstract:
This dissertation describes research into “Antennas with Frequency Domain Control for Future Communication Systems” and several novel antennas are shown, each of which addresses a specific issue for current and future communication systems, in terms of wideband coverage, channel capacity, antenna isolation and band-rejection. These antenna designs may be candidates for implementation in future multiband radios, and software defined radio (SDR) and cognitive radio (CR) systems, which are two new concepts in wireless communications in the foreseeable future, although it is evident that there are as yet no clear specifications for those future systems. A novel two-port reconfigurable antenna which can operate within a narrowband or wideband mode is presented. Three different structures of wideband reconfigurable balanced antennas, with a wide tuning range, have been proposed. When the balanced antenna is combined with the two-port chassis antenna, it becomes a reconfigurable MIMO antenna for small terminals and at least 15 dB of isolation is achieved. Several designs of conical monopole antennas, incorporating different types of slots to achieve good band-rejection behaviour, have been introduced: the 2 C-shaped slots, 4 C-shaped slots, 4 U-shaped slots, 4 tilted-U-shaped slots and 4 U-C-shaped slots. The study of wideband antennas with notched-band behaviour using a simple equivalent circuit model has been proposed. It has been noted that increasing the number of resonators and the coupling factor will increase the band-rejection; however, it will also widen the bandwidth of the frequency-notched band. A novel pyramidal monopole antenna, with four loop-shaped slots, offering a wide tunable band-notch, is also presented.
21

Uppströmer, Viktor, and Henning Råberg. "Detecting Lateral Movement in Microsoft Active Directory Log Files : A supervised machine learning approach". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18337.

Abstract:
Cyber attacks pose a serious threat to companies and organisations worldwide. With the cost of a data breach reaching $3.86 million on average, the demand is high for a rapid solution to detect cyber attacks as early as possible. Advanced persistent threats (APT) are sophisticated cyber attacks with a long persistence inside the network. During an APT, the attacker will spread its foothold over the network. This stage, which is one of the most critical steps in an APT, is called lateral movement. The purpose of the thesis is to investigate lateral movement detection with a machine learning approach. Five machine learning algorithms are compared using repeated cross-validation followed by statistical testing to determine the best-performing algorithm and feature importance. Features used for learning the classifiers are extracted from Active Directory log entries that relate to each other through a shared workstation, IP address, or account name. These features form the basis of a semi-synthetic dataset, which constitutes a multiclass classification problem. The experiment concludes that all five algorithms perform with an accuracy of 0.998. RF displays the highest f1-score (0.88) and recall (0.858), SVM performs best on precision (0.972), and DT has the lowest computational cost (1237 ms). Based on these results, the thesis concludes that the algorithms RF, SVM, and DT perform best in different scenarios. For instance, SVM should be used if a low number of false positives is favoured. If a generally balanced performance across multiple metrics is preferred, RF will perform best. The results also show that a significant number of the examined features can be disregarded in future experiments, as they do not impact the performance of either classifier.
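The evaluation protocol described, repeated cross-validation over several classifiers, maps onto scikit-learn roughly as in the sketch below; the synthetic data stands in for the non-public Active Directory dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder multiclass data; real features would come from correlated
# Active Directory log entries (workstation, IP, account name).
X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           random_state=0)

models = {"RF": RandomForestClassifier(), "SVM": SVC(),
          "DT": DecisionTreeClassifier()}
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1_macro")
    print(name, scores.mean())   # inputs for a paired statistical test
```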
22

Ekman, Niklas. "Handling Big Data using a Distributed Search Engine : Preparing Log Data for On-Demand Analysis". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222373.

Abstract:
Big data are datasets that are very large and computationally complex. With an increasing volume of data, even a trivial processing task can become challenging. Companies collect data at a fast rate, but knowing what to do with the data can be hard. A search engine is a system that indexes data, making it efficiently queryable by users. When a bug occurs in a computer system, log data is consulted in order to understand why, but processing big log data can take a long time. The purpose of this thesis is to investigate, compare and implement a distributed search engine that can prepare log data for analysis, making it easier for a developer to investigate bugs. There are three popular search engines: Apache Lucene, Elasticsearch and Apache Solr. Elasticsearch and Apache Solr are built as distributed systems, making them capable of handling big data. Requirements were established through interviews. Big log data totalling 40 GB was provided to be indexed in the selected search engine. The log data was generated in a proprietary binary format and had to be decoded first. The distributed search engines were evaluated based on: distributed architecture, text analysis, indexing and querying. Elasticsearch was selected for implementation. A cluster was set up on Amazon Web Services and tests were executed in order to determine how different configurations performed. Indexing software was written to transfer data to the cluster. Results were verified through a case study with participants from the stakeholder.
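Indexing decoded log events into Elasticsearch, as the implementation does, looks roughly like the sketch below using the official Python client; the endpoint, index name and event fields are placeholders, and the `query` keyword assumes the 8.x client:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")   # placeholder endpoint

def decoded_events():
    """Stand-in for the thesis's decoder of the proprietary binary format."""
    yield {"timestamp": "2017-06-01T12:00:00", "level": "ERROR",
           "message": "disk full"}

# Bulk-index the decoded events, then query them efficiently.
helpers.bulk(es, ({"_index": "logs", "_source": e} for e in decoded_events()))
hits = es.search(index="logs", query={"match": {"level": "ERROR"}})
```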
23

Olars, Sebastian. "Analysis of Diameter Log Files with Elastic Stack". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-80770.

Abstract:
There is a growing need for more efficient tools and services for log analysis, a need that comes from the ever-growing use of digital services and applications, each one generating thousands of lines of log event messages for the sake of auditing and troubleshooting. This thesis was initiated on behalf of one of the departments of the IT consulting company TietoEvry in Karlstad. The purpose of the thesis project was to investigate whether the log analysis service Elastic Stack would be a suitable solution for TietoEvry's need for a more efficient method of log event analysis. As part of this investigation, a small-scale deployment of Elastic Stack was created and used as a proof of concept. The investigation showed that Elastic Stack would be a suitable tool for the monitoring and analysis needs of TietoEvry. The final deployment was, however, not able to fulfill all of the requirements initially set out by TietoEvry, mainly due to a lack of time rather than limitations of Elastic Stack.
24

von, Hacht Johan. "Anomaly Detection for Root Cause Analysis in System Logs using Long Short-Term Memory". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301656.

Abstract:
Many software systems are under test to ensure that they function as expected. Sometimes a test can fail, and in that case it is essential to understand the cause of the failure. However, as systems grow larger and become more complex, this task can become non-trivial and take much time. Therefore, automating the process of root cause analysis, even partially, can save time for the developers involved. This thesis investigates the use of a Long Short-Term Memory (LSTM) anomaly detector on system logs for root cause analysis. The implementation is evaluated in a quantitative and a qualitative experiment. The quantitative experiment evaluates the performance of the anomaly detector in terms of precision, recall, and F1 measure; anomaly injection is used to measure these metrics, since there are no labels in the data, and the LSTM is compared with a baseline model. The qualitative experiment evaluates how effective the anomaly detector could be for root cause analysis of the test failures, through interviews with an expert in the software system that produced the log data used in the thesis. The results show that the LSTM anomaly detector achieved a higher F1 measure than the proposed baseline implementation, thanks to its ability to detect unusual events and events happening out of order. The qualitative results indicate that the anomaly detector could be used for root cause analysis: in many of the evaluated test failures, the expert being interviewed could deduce the cause of the failure. Even when the detector did not find the exact issue, a particular part of the software might be highlighted as producing many anomalous log messages, and with this information the expert could contact the people responsible for that part of the application for help. In conclusion, the anomaly detector automatically collects the information the expert needs to perform root cause analysis and could thus save the expert time. With further improvements, it could also be possible for non-experts to utilise the anomaly detector, reducing the need for an expert.
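A DeepLog-style sketch of such a detector, predicting the next log-template id from a window of previous ids and flagging events outside the top predictions, is shown below; the architecture details are assumptions, since the abstract does not specify them, and the data is synthetic:

```python
import numpy as np
from tensorflow import keras

# Synthetic sequences of log-template ids; in practice these would come
# from parsing the system logs into templates (an assumed preprocessing step).
vocab, window = 50, 10
seqs = np.random.randint(0, vocab, size=(1000, window + 1))
X, y = seqs[:, :-1], seqs[:, -1]

model = keras.Sequential([
    keras.layers.Embedding(vocab, 16),
    keras.layers.LSTM(64),
    keras.layers.Dense(vocab, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=3, verbose=0)

# Flag an event as anomalous if it is not among the top-5 predicted next ids.
probs = model.predict(X[:1], verbose=0)[0]
anomalous = y[0] not in np.argsort(probs)[-5:]
```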
25

Jonas, Susanne. "Automatic Status Logger For a Gas Turbine". Thesis, Linköping University, Department of Science and Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11020.

Abstract:

Siemens Industrial Turbo Machinery AB manufactures and commissions, among other things, gas turbines, steam turbines, compressors and turn-key power plants, and carries out service of components for heat and power production. Siemens also performs research and development, marketing, sales and installation of turbines and complete power plants, as well as service and refurbishment.

Our thesis project is to develop an automatic status logger to be used as a tool for checking the status of a gas turbine before and after technical service. Operational disturbances are registered in a structured way in order to make it possible to follow up the reliability of the application.

An automatic log function has been developed that is activated at start, stop and shutdown of the turbine system. Log files are created automatically and are named with the event type, date and time. The files contain data such as timestamps, names, measured values and units of the signals to be analyzed by the support engineers, who can evaluate the cause of a problem using the log files.

26

Monteiro, Steena D. S. "A Novel Authentication And Validation Mechanism For Analyzing Syslogs Forensically". DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/198.

Abstract:
This research proposes a novel technique for authenticating and validating syslogs for forensic analysis. This technique uses a modification of the Needham Schroeder protocol, which uses nonces (numbers used only once) and public keys. Syslogs, which were developed from an event-logging perspective and not from an evidence-sustaining one, are system treasure maps that chart out and pinpoint attacks and attack attempts. Over the past few years, research on securing syslogs has yielded enhanced syslog protocols that focus on tamper prevention and detection. However, many of these protocols, though efficient from a security perspective, are inadequate when forensics comes into play. From a legal perspective, any kind of evidence found at a crime scene needs to be validated. In addition, any digital forensic evidence when presented in court needs to be admissible, authentic, believable, and reliable. Currently, a patchy log on the server side and client side cannot be considered as formal authentication of a wrongdoer. This work presents a method that ties together, authenticates, and validates all the entities involved in the crime scene--the user using the application, the system that is being used, and the application being used on the system by the user. This means that instead of merely transmitting the header and the message, which is the standard syslog protocol format, the syslog entry along with the user fingerprint, application fingerprint, and system fingerprint are transmitted to the logging server. The assignment of digital fingerprints and the addition of a challenge response mechanism to the underlying syslogging mechanism aim to validate generated syslogs forensically.
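The idea of tying user, application and system to each record can be sketched by shipping their fingerprints and a fresh nonce alongside the message; the exact fields of the thesis's protocol differ, so the ones below are illustrative:

```python
import hashlib
import json
import os
import time

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def forensic_entry(message: str, user_key: bytes,
                   app_binary: bytes, system_id: bytes) -> str:
    """Augmented syslog entry: alongside the timestamp and message, ship
    fingerprints of the user, the application and the system plus a nonce,
    so all three entities of the 'crime scene' are tied to the record."""
    return json.dumps({
        "timestamp": time.time(),
        "message": message,
        "nonce": os.urandom(16).hex(),      # a number used only once
        "user_fp": fingerprint(user_key),
        "app_fp": fingerprint(app_binary),
        "system_fp": fingerprint(system_id),
    })
```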
27

Gorski, Hans-Joachim. "Zum Einsatz des Computers als Werkzeug beim interaktiven Programmieren im Mathematikunterricht der Hauptschule : ein Vorschlag zum Lernbereich "Prozent-, Zins- und Zinseszinsrechnung" unter besonderer Berücksichtigung des LOGO-Systems /". Bad Salzdetfurth : Franzbecker, 1991. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=002649032&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

28

Flodin, Anton. "Leerec : A scalable product recommendation engine suitable for transaction data". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33941.

Abstract:
We are currently living in the Internet of Things (IoT) era, in which devices are connected to the Internet and communicate with each other. Each year the number of devices increases rapidly, which results in rapid growth of the data that is generated. This large amount of data is sometimes called Big Data and is generated from different sources, such as logs of user behavior. These log files can be collected and analyzed in different ways, for example to create product recommendations. Product recommendations have been around since the late 90s, when the amount of data collected was not at the level it is today. The aim of this thesis has been to investigate methods to process data and create product recommendations, and to see how well they are adapted for Big Data. This was accomplished through three theory studies: how to process user events, how to make the product recommendation algorithm called collaborative filtering scalable, and how to convert implicit feedback to explicit feedback (ratings). The result was a recommendation engine with Apache Spark as the data processing system, which had three functions: reading multiple log files and concatenating log files for each month, parsing the log files of user events to create explicit ratings from the transactions, and creating four types of recommendations. The NoSQL database MongoDB was chosen to store the different types of product recommendations. To be able to get the recommendations from the recommendation engine and the database, a REST API was implemented which can be used by any third party. What can be concluded from the results of this thesis work is that the implemented system is partially scalable: Apache Spark was scalable for concatenating files, parsing and creating ratings, and creating the recommendations using the ALS method, whereas MongoDB was shown not to be scalable when managing more than 100 concurrent requests. Future work involves making the recommendation engine distributed in a multi-node cluster to utilize the parallelization of Apache Spark, and considering other NoSQL databases that might be more scalable than MongoDB.
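The Spark side of such an engine, training ALS on ratings derived from transaction logs, looks roughly like this sketch; the toy ratings frame stands in for the parsed log data:

```python
from pyspark.ml.recommendation import ALS
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("leerec-sketch").getOrCreate()

# Ratings would come from parsing transaction logs and converting implicit
# feedback to explicit scores; this toy frame stands in for that step.
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0)],
    ["user", "item", "rating"])

als = ALS(userCol="user", itemCol="item", ratingCol="rating", rank=8)
model = als.fit(ratings)
recs = model.recommendForAllUsers(3)   # could then be written to MongoDB
```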
29

Munter, Johan. "Number Recognition of Real-world Images in the Forest Industry : a study of segmentation and recognition of numbers on images of logs with color-stamped numbers". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39365.

Abstract:
Analytics such as machine learning are of great interest in many types of industries. Optical character recognition is essentially a solved problem, whereas number recognition on real-world images is a more challenging obstacle. The purpose of this study was to implement a system that can detect and read numbers in a given dataset originating from the forest industry: images of color-stamped logs. The study evaluated the accuracy of segmentation and number recognition on images of color-stamped logs when using a model pre-trained on the Street View House Numbers dataset. The general approach to preprocessing was based on car number-plate segmentation, because of the similar problem of identifying an object and then locating individual digits. Color segmentation was the biggest asset of the preprocessing, because of the distinct red color of the digits compared to the rest of the image. The accuracy of number recognition was significantly lower when using the pre-trained model on color-stamped logs (26%) than on Street View House Numbers (95%), but could still reach over 80% per-digit accuracy for some image classes when excluding segmentation accuracy. The highest segmentation accuracy among classes was 93% and the lowest was 32%. From the results it was concluded that unclear digits in the images lowered the number recognition accuracy the most. There is much to consider for future work, but the most obvious and impactful change would be to train a more accurate model based on the dataset of color-stamped logs.
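The color-segmentation step can be sketched with OpenCV: red wraps around hue 0 in HSV, so two ranges are combined; the thresholds and file path are assumptions to be tuned for the actual stamp color:

```python
import cv2
import numpy as np

img = cv2.imread("log_face.jpg")                 # placeholder image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red occupies both ends of the hue axis in HSV, so combine two masks.
lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
mask = cv2.bitwise_or(lower, upper)

# Connected components give candidate digit regions for the recognizer.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
digits = [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 50]
```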
30

Pham, Khoi Minh. "NEURAL NETWORK ON VIRTUALIZATION SYSTEM, AS A WAY TO MANAGE FAILURE EVENTS OCCURRENCE ON CLOUD COMPUTING". CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/670.

Abstract:
Cloud computing is one important direction of current advanced technology trends and dominates the industry in many aspects. Cloud computing has become an intense battlefield for many big technology companies; whoever wins this war has a very high potential to rule the next generation of technologies. From a technical point of view, cloud computing is classified into three categories, each providing different crucial services to users: Infrastructure (Hardware) as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). The standard measurements of cloud computing reliability are based on two approaches: Service Level Agreements (SLAs) and Quality of Service (QoS). This thesis focuses on IaaS cloud systems' error event logs as an aspect of QoS in IaaS cloud reliability. To give a clearer view: IaaS is a derivation of the traditional virtualization system, where multiple virtual machines (VMs) with different operating system (OS) platforms run on one physical machine (PM) that has enough computational power. The PM plays the role of the host machine in cloud computing, and the VMs play the role of the guest machines. Due to the lack of full access to a complete real cloud system, this thesis investigates the technical reliability level of an IaaS cloud through a simulated virtualization system. By collecting and analyzing the event logs generated by the virtualization system, we can get a general overview of the system's technical reliability level based on the number of error events occurring in the system. These events are then used in a neural network time-series model to detect the pattern of system failure events, as well as to predict the next error event that will occur in the virtualization system.
31

Isaksson, David, and Jonas Fredriksson. "Diametermätning av timmer med stereovision". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-41270.

Abstract:
Purpose – The purpose of this study is to develop a method for measuring the diameter of piled logs on a truck in a picture that has a skewed perspective and where the end surfaces are at different depths in relation to each other. The intent of this method is to further streamline log measurement in the logging industry. Method – This study was conducted in collaboration with Cind AB, and the work was split into two phases, with Design Science Research as the research method. In Phase 1, images with log end surfaces were rectified manually, and in Phase 2 a point cloud was used to estimate the rectification plane. This was done with a stereo camera rig at scale 1:25 on a total of 139 logs. All logs were digitally measured in the rectified images and manually measured with a digital caliper. A confidence interval for the difference was calculated to assess the measurement accuracy. Findings – The confidence interval from Phase 1 indicates that the developed method has potential when the rectification plane is placed correctly, which Phase 2 shows is a difficult and complex task. Conclusions – The developed method did not reach the desired measurement accuracy of a 5% margin of error, which means that the goal of the study was not achieved. It would, however, be possible to measure the end surfaces of logs with high precision if the point cloud is of sufficiently high quality. Limitations – The software that utilizes point cloud information to rectify the images is a modified version of Cind's proprietary product. The dataset used in this study was collected solely through Cind's test rig.
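The Phase 2 step of estimating a rectification plane from a point cloud can be illustrated with a least-squares plane fit; the thesis's actual estimator is not described in the abstract, so this is only a sketch:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane z = ax + by + c through point-cloud samples of
    the log ends; the fitted plane can then serve as a rectification plane."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs   # (a, b, c): the plane normal is (a, b, -1) up to scale

cloud = np.random.rand(200, 3)      # placeholder stereo point cloud
a, b, c = fit_plane(cloud)
```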
32

Marzo i Grimalt, Núria. "Natural Language Processing Model for Log Analysis to Retrieve Solutions For Troubleshooting Processes". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300042.

Abstract:
In the telecommunications industry, one of the most time-consuming tasks is troubleshooting and the resolution of Trouble Report (TR) tickets. This task involves the understanding of textual data which can be challenging due to its domain- and company-specific features. The text contains many abbreviations, typos, tables as well as numerical information. This work tries to solve the issue of retrieving solutions for new troubleshooting reports in an automated way by using a Natural Language Processing (NLP) model, in particular Bidirectional Encoder Representations from Transformers (BERT)- based approaches. It proposes a text ranking model that, given a description of a fault, can rank the best possible solutions to that problem using answers from past TRs. The model tackles the trade-off between accuracy and latency by implementing a multi-stage BERT-based architecture with an initial retrieval stage and a re-ranker stage. Having a model that achieves a desired accuracy under a latency constraint allows it to be suited for industry applications. The experiments to evaluate the latency and the accuracy of the model have been performed on Ericsson’s troubleshooting dataset. The evaluation of the proposed model suggest that it is able to retrieve and re-rank solution for TRs with a significant improvement compared to a non-BERT model.
33

Bjurenfalk, Jonatan, and August Johnson. "Automated error matching system using machine learning and data clustering : Evaluating unsupervised learning methods for categorizing error types, capturing bugs, and detecting outliers". Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177280.

Abstract:
For large and complex software systems, manually inspecting the error logs produced by their test suites is a time-consuming process. Whether for identifying abnormal faults or finding bugs, it is a process that limits development progress and requires experience. An automated solution for such processes could lead to efficient fault identification and bug reporting, while also enabling developers to spend more time on improving system functionality. Three unsupervised clustering algorithms are evaluated for the task: HDBSCAN, DBSCAN, and X-Means. In addition, HDBSCAN, DBSCAN, and an LSTM-based autoencoder are evaluated for outlier detection. The dataset consists of error logs produced by a robotic test system. These logs are cleaned and pre-processed using stopword removal, stemming, term frequency-inverse document frequency (tf-idf), and singular value decomposition (SVD). Two domain experts were tasked with evaluating the results produced by clustering and outlier detection. Results indicate that X-Means outperforms the other clustering algorithms when tasked with automatically categorizing error types and capturing bugs. Furthermore, none of the outlier detection methods yielded sufficient results. However, it was found that X-Means clusters containing a single data point accurately represented the outliers occurring in the error log dataset. Conclusively, the domain experts deemed X-Means to be a helpful tool for categorizing error types, capturing bugs, and detecting outliers.
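The pipeline the abstract outlines can be sketched in a few lines of Python with scikit-learn. DBSCAN stands in for X-Means here, since scikit-learn ships no X-Means implementation, and stemming is omitted for brevity; the log messages and parameter values are invented, and the final check mirrors the study's observation that single-member clusters make a reasonable outlier proxy.

    from collections import Counter
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import DBSCAN

    error_logs = [
        "gripper timeout while homing axis 2",
        "gripper timeout while homing axis 3",
        "TCP connection refused by vision server",
        "TCP connection reset by vision server",
        "unexpected NaN in force sensor reading",  # likely outlier
    ]

    # Stopword removal + tf-idf, then SVD for dimensionality reduction.
    X = TfidfVectorizer(stop_words="english").fit_transform(error_logs)
    X = TruncatedSVD(n_components=3).fit_transform(X)

    labels = DBSCAN(eps=0.6, min_samples=2).fit_predict(X)

    # DBSCAN flags noise as -1; size-one clusters are treated as outliers too.
    sizes = Counter(labels)
    outliers = [log for log, lab in zip(error_logs, labels)
                if lab == -1 or sizes[lab] == 1]
    print(outliers)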
34

Santos, Kássio Cabral Pereira dos. "Utilização de ontologias de referências como abordagem para interoperabilidade entre sistemas de informação utilizados ao longo do ciclo de vida de produtos". Universidade Tecnológica Federal do Paraná, 2011. http://repositorio.utfpr.edu.br/jspui/handle/1/351.

Abstract:
Fundação Araucária
To guarantee survival in an increasingly competitive and globalized market, companies need to differentiate themselves not only through the launch of new products, but also through the effective use of information to better serve the needs and expectations of their customers. However, much of this information is found in an unstructured form, leading to errors and project inconsistencies. One of the most promising and widely researched solutions to guarantee the accurate exchange and sharing of information is the use of an ontology, a structural framework that aids the semantic interoperability of different information systems. The objective of this research is to propose a semantic approach to software interoperability, based on reference ontologies, that can be used throughout the product lifecycle. The approach presents a conceptual model in which a reference ontology is created from the identification of interoperability demands. The created ontology provides sufficient elements for semantic mapping to be performed by consulting its classes, properties, and axioms. A case study from the recycled paper industry is used to highlight real-world scenarios that exemplify how the conceptual model could be applied. The study identifies two information systems currently used throughout the product lifecycle that rely on heterogeneous data structures with no direct correspondence between terms. Using constructive methods and established tools, a reference ontology was created for these systems. Three interoperability scenarios were analyzed, and semantic interoperability solutions were proposed for each using the suggested model.
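A toy Python sketch of the reference-ontology idea using rdflib: two systems name the same concept differently, and both terms are mapped onto a single reference class that can then be consulted during translation. All class names and namespaces here are invented for the example, not taken from the thesis.

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF

    REF = Namespace("http://example.org/reference#")
    ERP = Namespace("http://example.org/erp#")
    MES = Namespace("http://example.org/mes#")

    g = Graph()
    g.add((REF.PaperRoll, RDF.type, OWL.Class))

    # Heterogeneous terms from two information systems map to one reference class.
    g.add((ERP.Bobina, OWL.equivalentClass, REF.PaperRoll))
    g.add((MES.RollUnit, OWL.equivalentClass, REF.PaperRoll))

    # The mapping is consulted when translating data between the systems.
    for s, _, o in g.triples((None, OWL.equivalentClass, REF.PaperRoll)):
        print(f"{s} maps to reference concept {o}")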
35

Carr, Benjamin Alan. "Information, knowledge and learning : is the Web effective as a medium for Mathematics teaching?" Diss., Pretoria : [s.n.], 2002. http://uptd.up.ac.za/thesis/available/etd-04082003-113155.

36

Chaouachi, Amor. "Modelisation d'images coherentes, en objets, dans une base de connaissances : application a l'enseignement assiste par ordinateur". Toulouse 3, 1987. http://www.theses.fr/1987TOU30114.

Abstract:
Two objectives were pursued: (1) ease of description, consisting of a methodological approach to modelling the context of the simulation dialogue in an object-oriented language; and (2) the "pictorial" expression of stable local states during an exercise-monitoring session, using a powerful graphics tool. This tool makes it possible to represent frames of object structures and chains of deductions, in addition to standard GKS-type graphics functionality.
37

張晃瑜. "Log-file analysis with forensics in computer system". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/c6xhb9.

38

Skilbeck, Chloe Alison. "Misfeasor analysis from Patient Browser System Logs". Thesis, 2003. https://eprints.utas.edu.au/21593/1/whole_SkilbeckChloeAlison2003_thesis.pdf.

39

Jiang, Weihang. "Understanding storage system problems and diagnosing them through log analysis /". 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3362928.

Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3601. Adviser: Yuanyuan Zhou. Includes bibliographical references (leaves 91-98). Available on microfilm from ProQuest Information and Learning.
40

Abreu, Joaquim Tomás Almada. "Development of a centralized log management system". Master's thesis, 2020. http://hdl.handle.net/10400.13/2989.

Abstract:
Logs are a crucial piece of any system and give a helpful insight into what it is doing, as well as what happened in case of failure. Every process running on a system generates logs in some format. Generally, these logs are written to local storage resources. As systems evolved, the number of logs to analyze increased, and, as a consequence of this progress, there was a need for a standardized log format, minimizing dependencies and making the analysis process easier. ams is a company that develops and creates sensor solutions. With twenty-two design centers and three manufacturing locations, the company serves over eight thousand clients worldwide. One design center is located in Funchal, which includes a team of application engineers who design and develop software applications for clients inside the company. The application engineers' development process comprises several applications and programs, each with its own logging system. Log entries generated by different applications are kept in separate storage systems. If a developer or administrator wants to troubleshoot an issue that involves several applications, they have to go to different database systems or locations to collect the logs and correlate them across several requests. This is a tiresome process, and if the environment is auto-scaled, then troubleshooting an issue becomes inconceivable. This project aimed to solve these problems by creating a Centralized Log Management System capable of handling logs from a variety of sources, as well as providing services that help developers and administrators better understand the different affected environments. The deployed solution was developed using a set of open-source technologies, such as the Elastic Stack (Elasticsearch, Logstash and Kibana), Node.js, GraphQL and Cassandra. The present document describes the process and the decisions taken to achieve this solution.
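As a condensed sketch of the ingestion side of such a system, the Python fragment below normalizes a log record into one JSON shape and indexes it into a central Elasticsearch cluster. The host, index name, and field layout are assumptions for the example, not details from the thesis.

    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local cluster

    def ship_log(app: str, level: str, message: str) -> None:
        """Normalize a log record and index it in the central store."""
        record = {
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "application": app,
            "level": level,
            "message": message,
        }
        es.index(index="central-logs", document=record)

    ship_log("billing-service", "ERROR", "payment gateway timeout after 30s")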
41

Lewis, April Ann. "Enterprise Users and Web Search Behavior". 2010. http://trace.tennessee.edu/utk_gradthes/643.

Abstract:
This thesis describes an analysis of user web query behavior associated with Oak Ridge National Laboratory's (ORNL) Enterprise Search System (hereafter, the ORNL Intranet). The ORNL Intranet provides users a means to search all kinds of data stores for relevant business and research information using a single query. The Global Intranet Trends for 2010 Report suggests the biggest current obstacle for corporate intranets is "findability and siloed content". Intranets differ from the Internet in the way they create, control, and share content, which can make it difficult and sometimes impossible for users to find information. Stenmark (2006) first noted that studies of corporate internal search behavior are lacking and appealed for more published research on the subject. This study employs established web query transaction log analysis (TLA) to examine how corporate intranet users at ORNL search for information. The focus of the study is to better understand general search behaviors and to identify unique trends associated with query composition and vocabulary. The results are compared to published intranet studies. A literature review suggests only a handful of intranet-based web search studies exist, each focusing largely on a single aspect of intranet search. This implies that the ORNL study is the first to comprehensively analyze a corporate intranet user web query corpus and provide the results to the public. This study analyzes 65,000 user queries submitted to the ORNL Intranet from September 17, 2007 through December 31, 2007. A granular relational data model, first introduced by Wang, Berry, and Yang (2003) for web query analysis, was adopted and modified for data mining and analysis of the ORNL query corpus. The ORNL query corpus is characterized using Zipf distributions, descriptive word statistics, and mutual information. User search vocabulary is analyzed using frequency distributions and probability statistics. The results showed that ORNL users searched for unique types of information, were uncertain of how to best formulate queries, and did not use search interface tools to narrow the search scope. Special domain language comprised 38% of the queries. The average number of results returned per query was too high, and 16.34% of queries returned no hits.
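The rank-frequency characterization mentioned above is straightforward to reproduce; the Python sketch below counts query terms and prints log-log coordinates, which under Zipf's law fall near a straight line. The sample queries are invented, since the ORNL corpus is not public.

    import math
    from collections import Counter

    queries = [
        "travel form", "timesheet", "travel form", "badge office",
        "timesheet", "travel form", "cafeteria menu", "timesheet",
    ]
    terms = Counter(t for q in queries for t in q.split())

    # Under Zipf's law, frequency is roughly proportional to 1/rank.
    for rank, (term, freq) in enumerate(terms.most_common(), start=1):
        print(f"{rank:>2} {term:<10} {freq:>3} "
              f"log-log point: ({math.log(rank):.2f}, {math.log(freq):.2f})")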
42

Vimalachandran, Pasupathy. "Privacy and Security of Storing Patients’ Data in the Cloud". Thesis, 2019. https://vuir.vu.edu.au/40598/.

Abstract:
A better health care service must ensure patients receive the right care, in the right place, at the right time. In enabling better health care, the impact of technology is immense, and technological breakthroughs are revolutionising the way health care is delivered. To deliver better health care, sharing health information among the health care providers involved with the care is critical. An Electronic Health Record (EHR) platform is used to share health information among those providers faster, as a result of technological advances including the Internet and the Cloud. However, integrating such technologies into the provision of health care raises major concerns over the privacy and security of health-sensitive information. These concerns include a wide range of ethical and legal issues that need to be considered and addressed when implementing EHR systems, and in a shared environment such as an EHR they become more significant. In this thesis, the author explores the situations in which these concerns arise in a health care environment. The thesis also covers different attacks that have targeted health care information in the past, with potential solutions identified for each attack. From these findings, the proposed system is designed and developed to provide considerable security assurance for a health care organisation using EHR systems. Furthermore, the My Health Record (MyHR) system has been introduced in Australia to allow an individual's doctors and other health care providers to access the individual's health information; privacy and security in using MyHR is a major challenge that impacts its usage. Taking all these concerns into account, the author also analyses major existing access control methods, various threats to data privacy and security in EHR use, and the importance of data integrity when using MyHR or any other EHR system. To preserve data privacy and security and prevent unauthorised access to the system, the author proposes a three-tier security model: the first tier covers an access control mechanism, the second tier includes an Intermediate State of Databases (ISD), and the third tier involves cryptography (data encryption and decryption). Collectively, these three tiers cover different forms of attack from different sources, including unauthorised access from inside a health care organisation. A specific technique is used in every tier: an Improved Access Control Mechanism (IACM) known as log-in pair in tier one, a pseudonymisation technique in tier two, and a newly developed encryption and decryption algorithm in tier three. In addition, the design, development, and implementation of the proposed model are described to enable and evaluate the operational protocol.
Problem 1: Non-clinical staff, including reception and admin staff, access sensitive clinical information (insiders). Solution 1: An improved access control mechanism named log-in pair is introduced and employed in tier one.
Problem 2: Researchers and research institutes access health data sets for research activities (outsiders). Solution 2: The pseudonymisation technique in tier two provides the required data in de-identified form, with relationships preserved, rather than the sensitive data itself.
Problem 3: The massive amount of sensitive health data stored with the EHR system in the Cloud becomes more vulnerable to data attacks. Solution 3: A new encryption and decryption algorithm is developed and used in tier three to provide high security while storing the data in the Cloud.
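As a hedged illustration of the tier-two idea only, the Python sketch below applies generic keyed (HMAC-based) pseudonymisation: identifiers are de-identified deterministically, so records released for research keep their linkage without exposing patient IDs. This is a textbook approach, not the thesis's actual algorithm, and the key and identifiers are invented.

    import hmac
    import hashlib

    SECRET_KEY = b"kept-in-a-separate-key-store"  # never stored with the data

    def pseudonymize(patient_id: str) -> str:
        """Deterministically replace an identifier with an unlinkable token."""
        digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

    # The same patient always maps to the same pseudonym, preserving relationships.
    for pid in ["P-1001", "P-1002", "P-1001"]:
        print(pid, "->", pseudonymize(pid))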
43

Mader, Felix. "Räumliche, GIS-gestützte Analyse von Linientransektstichproben". Doctoral thesis, 2007. http://hdl.handle.net/11858/00-1735-0000-0006-B626-D.
