Academic literature on the topic 'Corpora Processing Software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Corpora Processing Software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Corpora Processing Software"

1

Soloviev, F. N. "Embedding Additional Natural Language Processing Tools into the TXM Platform." Vestnik NSU. Series: Information Technologies 18, no. 1 (2020): 74–82. http://dx.doi.org/10.25205/1818-7900-2020-18-1-74-82.

Abstract:
In our work we present a description of the integration of natural language processing tools (pseudostem extraction, noun phrase extraction, verb government analysis) to extend the analytic facilities of the TXM corpus analysis platform. The tools introduced in the paper are combined into a single software package that provides the TXM platform with an effective specialized corpus preparation tool for further analysis.
2

Dutsova, Ralitsa. "Web-based software system for processing bilingual digital resources." Cognitive Studies | Études cognitives, no. 14 (September 4, 2014): 33–43. http://dx.doi.org/10.11649/cs.2014.004.

Abstract:
The article describes a software management system developed at the Institute of Mathematics and Informatics, BAS, for the creation, storage, and processing of digital language resources in Bulgarian. Independent components of the system are intended for the creation and management of bilingual dictionaries, for information retrieval and data mining from a bilingual dictionary, and for the presentation of aligned corpora. A module that connects these components is also being developed. The system, implemented as a web application, contains tools for compilation, editing, and search within all components.
3

Ali, Mohammed Abdulmalik. "Artificial intelligence and natural language processing: the Arabic corpora in online translation software." International Journal of Advanced and Applied Sciences 3, no. 9 (September 2016): 59–66. http://dx.doi.org/10.21833/ijaas.2016.09.010.

4

Renouf, Antoinette. "The Establishment and Use of Text Corpora at Birmingham University." HERMES - Journal of Language and Communication in Business 4, no. 7 (July 28, 2015): 71. http://dx.doi.org/10.7146/hjlcb.v4i7.21475.

Abstract:
The School of English at Birmingham University has over the last ten years increasingly integrated the study and use of corpora into its research and teaching activities. Cobuild Ltd and the English for Overseas Students Unit are particularly active, as is the Research and Development Unit for English Language Studies. Members of the Research Unit have created the purpose-built corpora that make up the Birmingham Collection of English Text. The Research Unit is using these to support its linguistic research projects and the development of new types of text-processing software, as well as for specialised teaching purposes.
5

Zhao, Shiqi, Haifeng Wang, Ting Liu, and Sheng Li. "Extracting paraphrase patterns from bilingual parallel corpora." Natural Language Engineering 15, no. 4 (September 16, 2009): 503–26. http://dx.doi.org/10.1017/s1351324909990155.

Abstract:
Paraphrase patterns are semantically equivalent patterns, which are useful in both paraphrase recognition and generation. This paper presents a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the paraphrase patterns in English are extracted using the patterns in another language as pivots. We make use of log-linear models for computing the paraphrase likelihood between pattern pairs and exploit feature functions based on maximum likelihood estimation (MLE), lexical weighting (LW), and monolingual word alignment (MWA). Using the presented method, we extract more than 1 million pairs of paraphrase patterns from about 2 million pairs of bilingual parallel sentences. The precision of the extracted paraphrase patterns is above 78%. Experimental results show that the presented method significantly outperforms a well-known method called discovery of inference rules from text (DIRT). Additionally, the log-linear model with the proposed feature functions is effective. The extracted paraphrase patterns are fully analyzed. In particular, we found that the extracted paraphrase patterns can be classified into five types, which are useful in multiple natural language processing (NLP) applications.
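The log-linear scoring the abstract describes can be sketched in a few lines. The feature values and weights below are hypothetical, purely to illustrate how feature functions such as MLE, LW, and MWA might be combined; they are not the paper's trained parameters:

```python
import math

def log_linear_score(features, weights):
    """Combine feature-function values h_i with weights lambda_i
    into an unnormalized log-linear score."""
    return math.exp(sum(weights[name] * value for name, value in features.items()))

# Hypothetical log-feature values for one candidate pattern pair
features = {"mle": -1.2, "lexical_weighting": -0.8, "word_alignment": -0.5}
weights = {"mle": 1.0, "lexical_weighting": 0.7, "word_alignment": 0.5}

score = log_linear_score(features, weights)  # higher means more likely a paraphrase
```

In practice the weights would be tuned on held-out data and the scores normalized over all candidate pairs; this sketch only shows the combination step.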
6

Mihalcea, Rada, and Dan I. Moldovan. "AutoASC — A System for Automatic Acquisition of Sense Tagged Corpora." International Journal of Pattern Recognition and Artificial Intelligence 14, no. 01 (February 2000): 3–17. http://dx.doi.org/10.1142/s0218001400000039.

Abstract:
Many natural language processing tasks, such as word sense disambiguation, knowledge acquisition, and information retrieval, use semantically tagged corpora. Until recently, these corpus-based systems relied on text manually annotated with semantic tags, but the massive human intervention in this process has become a serious impediment to building robust systems. In this paper, we present AutoASC, a system which automatically acquires sense-tagged corpora. It is based on (1) the information provided in WordNet, particularly the word definitions found within the glosses, and (2) the information gathered from the Internet using existing search engines. The system was tested on a set of 46 concepts, for which 2,071 example sentences were acquired; for these, a precision of 87% was observed.
7

Chersoni, E., E. Santus, L. Pannitto, A. Lenci, P. Blache, and C. R. Huang. "A structured distributional model of sentence meaning and processing." Natural Language Engineering 25, no. 4 (July 2019): 483–502. http://dx.doi.org/10.1017/s1351324919000214.

Abstract:
Most compositional distributional semantic models represent sentence meaning with a single vector. In this paper, we propose a structured distributional model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations. The semantic representation of a sentence is a formal structure derived from discourse representation theory and containing distributional vectors. This structure is dynamically and incrementally built by integrating knowledge about events and their typical participants, as they are activated by lexical items. Event knowledge is modelled as a graph extracted from parsed corpora and encoding roles and relationships between participants that are represented as distributional vectors. SDM is grounded on extensive psycholinguistic research showing that generalized knowledge about events stored in semantic memory plays a key role in sentence comprehension. We evaluate SDM on two recently introduced compositionality data sets, and our results show that combining a simple compositional model with event knowledge consistently improves performance, even with different types of word embeddings.
8

Periñan-Pascual, Carlos. "DEXTER: A workbench for automatic term extraction with specialized corpora." Natural Language Engineering 24, no. 2 (October 5, 2017): 163–98. http://dx.doi.org/10.1017/s1351324917000365.

Abstract:
Automatic term extraction has become a priority area of research within corpus processing. Despite the extensive literature in this field, there are still some outstanding issues that should be dealt with during the construction of term extractors, particularly those oriented to support research in terminology and terminography. In this regard, this article describes the design and development of DEXTER, an online workbench for the extraction of simple and complex terms from domain-specific corpora in English, French, Italian and Spanish. In this framework, three issues contribute to placing the most important terms in the foreground. First, unlike the elaborate morphosyntactic patterns proposed by most previous research, shallow lexical filters have been constructed to discard term candidates. Second, a large number of common stopwords are automatically detected by means of a method that relies on the IATE database together with the frequency distribution of the domain-specific corpus and a general corpus. Third, the term-ranking metric, which is grounded on the notions of salience, relevance and cohesion, is guided by the IATE database to display an adequate distribution of terms.
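The idea of contrasting a domain-specific corpus with a general corpus can be sketched as a simple relative-frequency ratio. The metric, the toy token lists, and the add-one smoothing below are simplifying assumptions for illustration, not DEXTER's actual salience formula:

```python
from collections import Counter

def term_salience(domain_tokens, general_tokens):
    """Score each term by its relative frequency in a domain corpus
    versus a general corpus; add-one smoothing avoids division by zero
    for terms absent from the general corpus."""
    d, g = Counter(domain_tokens), Counter(general_tokens)
    nd, ng = len(domain_tokens), len(general_tokens)
    return {t: (d[t] / nd) / ((g[t] + 1) / ng) for t in d}

# Toy corpora: domain-heavy terms should outscore common words
domain = ["pivot", "alignment", "pivot", "corpus"]
general = ["the", "corpus", "the"]
scores = term_salience(domain, general)  # "pivot" outscores "corpus"
```

Terms frequent in the domain corpus but rare in the general corpus rise to the top, which is the intuition behind frequency-contrast stopword detection.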
9

Xue, Naiwen, Fei Xia, Fu-Dong Chiou, and Marta Palmer. "The Penn Chinese TreeBank: Phrase structure annotation of a large corpus." Natural Language Engineering 11, no. 2 (May 19, 2005): 207–38. http://dx.doi.org/10.1017/s135132490400364x.

Abstract:
With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality.
10

Altheneyan, Alaa, and Mohamed El Bachir Menai. "Evaluation of State-of-the-Art Paraphrase Identification and Its Application to Automatic Plagiarism Detection." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 04 (August 22, 2019): 2053004. http://dx.doi.org/10.1142/s0218001420530043.

Abstract:
Paraphrase identification is a natural language processing (NLP) problem that involves the determination of whether two text segments have the same meaning. Various NLP applications rely on a solution to this problem, including automatic plagiarism detection, text summarization, machine translation (MT), and question answering. The methods for identifying paraphrases found in the literature fall into two main classes: similarity-based methods and classification methods. This paper presents a critical study and an evaluation of existing methods for paraphrase identification and its application to automatic plagiarism detection. It presents the classes of paraphrase phenomena, the main methods, and the sets of features used by each particular method. All the methods and features used are discussed and enumerated in a table for easy comparison. Their performances on benchmark corpora are also discussed and compared via tables. Automatic plagiarism detection is presented as an application of paraphrase identification. The performances on benchmark corpora of existing plagiarism detection systems able to detect paraphrases are compared and discussed. The main outcome of this study is the identification of word overlap, structural representations, and MT measures as feature subsets that lead to the best performance results for support vector machines in both paraphrase identification and plagiarism detection on corpora. The performance results achieved by deep learning techniques highlight that these techniques are the most promising research direction in this field.
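The word-overlap features this survey identifies as a strong feature subset are often as simple as set overlap between the two segments. A minimal sketch follows; the use of Jaccard similarity over lowercase whitespace tokens is an illustrative assumption, not the specific feature definition of any surveyed system:

```python
def word_overlap(s1, s2):
    """Jaccard overlap of lowercase token sets, a common
    paraphrase-identification feature."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

# 2 shared tokens out of 4 distinct tokens
similarity = word_overlap("the cat sat", "the cat slept")
```

In the classification methods the survey describes, such a score would be one feature among several (structural representations, MT measures) fed to a classifier such as an SVM.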

Dissertations / Theses on the topic "Corpora Processing Software"

1

Grepl, Filip. "Aplikace pro řízení paralelního zpracování dat [Application for Managing Parallel Data Processing]." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445490.

Abstract:
This work deals with the design and implementation of a system for the parallel execution of tasks in the Knowledge Technology Research Group. The goal is to create a web application that allows users to control the processing of these tasks and to monitor their runs, including the use of system resources. The work first analyzes the current method of parallel data processing and the shortcomings of that solution. It then describes existing tools, including the problems that their test deployment revealed. Based on this knowledge, the requirements for a new application are defined and the design of the entire system is created. Selected parts of the implementation are then described, together with the testing of the whole system and a comparison of its efficiency with the original system.

Books on the topic "Corpora Processing Software"

1

Corporate accounting systems: A software engineering approach. Wokingham, England: Addison-Wesley, 1988.

2

Kenneth, Smith. Corporate accounting systems: A software engineering approach. Wokingham: Addison-Wesley, 1988.

3

Breslin, Jud. Selecting and installing software packages: New methodology for corporate implementation. New York: Quorum Books, 1986.

4

Sorensen, Dean. Business performance intelligence software: A market evaluation. [Morristown, NJ]: Financial Executives Research Foundation, 2003.

5

Bouthillier, France. Assessing competitive intelligence software: A guide to evaluating CI technology. Medford, NJ: Information Today, 2003.

6

Stevens, Dawn M., ed. Standards for online communication: Publishing information for the internet/World Wide Web, help systems/corporate intranets. New York: John Wiley, 1997.

7

Hackos, JoAnn T. Standards for online communication: Publishing information for the Internet/World Wide Web/help Systems/corporate intranets. New York: Wiley, 1997.

8

Dillon, Patrick M. Multimedia in the corporate world: Business uses and technology challenges. [Atlanta, Ga.]: Chantico Pub. Co., 1994.

9

IT security governance guidebook with security program metrics on CD-ROM. Boca Raton, FL: Auerbach Publications/Taylor & Francis, 2007.

10

Monsen, Laura. Migrating to Office 95 & Office 97: A corporate user's quick reference. Indianapolis, IN: Que, 1998.


Book chapters on the topic "Corpora Processing Software"

1

Abdoullaev, Azamat. "How to Represent the World." In Reality, Universal Ontology and Knowledge Systems, 214–57. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-966-3.ch010.

Abstract:
Since human knowledge about the world is commonly given in natural language expressions, and since universal ontology is a general science of the world, the examination of its impact on natural language science and technology is among the central topics of many academic workshops and conferences. Ontologists, knowledge engineers, lexicographers, lexical semanticists, and computer scientists are attempting to integrate top-level entity classes with language knowledge presented in extensive corpora and electronic lexical resources. This quest is motivated largely by the high application potential of reality-driven models of language for knowledge communication and management, information retrieval and extraction, and information exchange in software and dialogue systems, all with an ultimate view to transforming the World Wide Web into a machine-readable global language resource of world knowledge, the Onto-Semantic Web. One practical application of an integrative ontological framework is to discover the underlying mechanisms by which cognitive agents, human and artificial, represent and process language content and meaning; specifically, to provide the formalized algorithms or rules whereby machines could derive or attach significance (or signification) to coded signals, both natural signs obtained by sensors and linguistic symbols.
2

Panagis, Yannis. "Automated Text Analysis." In Research Methods in the Social Sciences: An A-Z of key concepts, 14–16. Oxford University Press, 2021. http://dx.doi.org/10.1093/hepl/9780198850298.003.0002.

Abstract:
This chapter examines automated text analysis (ATA), which describes the different methodologies that can be applied in order to perform text analysis with the use of computer software. ATA is a computer-assisted method for analysing text, whenever the analysis would be prohibitively labour-intensive due to the volume of texts to be analysed. ATA methods have become more popular due to current interest in big data, taking into account the volume of textual content that is made easily accessible by the digitization of human activity. Key to ATA is the notion of corpus, which is a collection of texts. A necessary step before starting any analysis is to collect together the necessary documents and construct the corpora that will be used. Which texts need to be included in this step is dictated by the research question. After text collection, some processing steps need to be taken before the analysis starts, for example tokenization and part-of-speech tagging. Tokenization is the process of splitting a text into its constituent words, also called tokens, whereas part-of-speech tagging assigns each word a label that indicates the respective part-of-speech.
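The two pre-processing steps named above can be sketched with the standard library alone. The regex and the tiny tag lexicon below are illustrative stand-ins for a real tokenizer and a trained tagger, not the chapter's recommended tooling:

```python
import re

def tokenize(text):
    # Split into word tokens and standalone punctuation marks
    return re.findall(r"\w+|[^\w\s]", text)

# Toy lexicon standing in for a trained part-of-speech tagger;
# unknown words fall back to the placeholder tag "X"
TOY_LEXICON = {"the": "DET", "corpus": "NOUN", "grows": "VERB"}

def pos_tag(tokens):
    return [(t, TOY_LEXICON.get(t.lower(), "X")) for t in tokens]

tagged = pos_tag(tokenize("The corpus grows."))
```

Real ATA pipelines would replace both stand-ins with statistical models, but the interface is the same: raw text in, a list of (token, tag) pairs out.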
3

Morra, Emanuele, Roberto Revetria, Danilo Pecorino, Matteo Giudici, and Gabriele Galli. "An Innovative AI-Based System for Corruption Risks Assessment Among Corporate Managers to Support Open Source Analysis." In Knowledge Innovation Through Intelligent Software Methodologies, Tools and Techniques. IOS Press, 2020. http://dx.doi.org/10.3233/faia200564.

Abstract:
The paper focuses on the creation of an innovative Natural Language Processing system for the search and consequent analysis of available information, aimed at reconstructing the corporate chain and monitoring the sensitive risk of corruption for people in command positions. Today, the greatest opportunity for finding information is represented by the Internet and other open sources, where content related to corporate managers is continuously posted and updated. Given the vastness of the information dimension, it is remarkably advantageous to have an intelligent analysis system capable of independently finding, analyzing, and synthesizing information related to a set of target subjects. The aim of this document is to describe a forecasting model based on Machine Learning and Artificial Intelligence techniques capable of understanding whether a news item related to an individual (sought during a due diligence process) contains information about crime, investigation, conviction, fraud, corruption, or sanction relating to the subject sought. Methods based on Artificial Neural Networks and Support Vector Machines, compared with one another, are introduced and applied for this purpose. In particular, the results showed that the architecture based on an SVM with a TF-IDF matrix and text pre-processing outperforms the others discussed in this paper, demonstrating high accuracy and precision in predicting new data.
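A TF-IDF matrix such as the one the abstract credits can be computed from scratch. This minimal sketch (raw term counts, unsmoothed IDF, invented toy documents) shows the weighting idea rather than the authors' exact pipeline:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: docs containing each term
    out = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency scaled by inverse document frequency
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

# Invented toy documents; terms appearing in many documents get lower weight
docs = [["fraud", "charge"], ["quarterly", "report"], ["fraud", "probe"]]
weights = tf_idf(docs)
```

The resulting per-document weight dictionaries are what an SVM would consume as feature vectors in a setup like the one described above.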
4

Abdul-Mehdi, Ziyad Tariq, Ali Bin Mamat, Hamidah Ibrahim, and Mustafa M. Dirs. "Transaction Management in Mobile Databases." In Database Technologies, 1257–66. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-058-5.ch072.

Abstract:
Recent advances in wireless communications and computer technology have provided users the opportunity to access information and services regardless of their physical location or movement behavior. In the context of database applications, these mobile users should have the ability to both query and update public, private, and corporate databases. The main goal of mobile software research is to provide as much functionality of network computing as possible within the limits of the mobile computer’s capabilities. Consequently, transaction processing and efficient update techniques for mobile and disconnected operations have been very popular. In this article, we present the main architecture of mobile transactions and the characteristics with a database perspective. Some of the extensive transaction models and transaction processing for mobile computing are discussed with their underlying assumptions. A brief comparison of the models is also included.
5

Abdul-Mehdi, Z., A. Mamat, H. Ibrahim, and M. Dirs. "Transaction Management in Mobile Databases." In Encyclopedia of Mobile Computing and Commerce, 947–53. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-002-8.ch158.

Abstract:
Recent advances in wireless communications and computer technology have provided users the opportunity to access information and services regardless of their physical location or movement behavior. In the context of database applications, these mobile users should have the ability to both query and update public, private, and corporate databases. The main goal of mobile software research is to provide as much functionality of network computing as possible within the limits of the mobile computer’s capabilities. Consequently, transaction processing and efficient update techniques for mobile and disconnected operations have been very popular. In this article, we present the main architecture of mobile transactions and the characteristics with a database perspective. Some of the extensive transaction models and transaction processing for mobile computing are discussed with their underlying assumptions. A brief comparison of the models is also included.
6

Keim, Tobias, and Tim Weitzel. "An Adoption and Diffusion Perspective on HRIS Usage." In Encyclopedia of Human Resources Information Systems, 18–23. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-59904-883-3.ch004.

Abstract:
Information technology in the past decade has drastically changed the human resources function. Having initially provided support mainly for administrative activities such as payroll and attendance management, information technology today enhances many of the recruitment function’s subprocesses, such as long- and short-term candidate attraction; the generation, pre-screening, and processing of applications; and the contracting and onboarding of new hires. Online job advertisements on corporate Web sites and Internet job boards, online CV databases, different forms of electronic applications, applicant management systems, corporate skill databases, and IS-supported workflows for the contracting phase are only a few examples of the various ways in which information systems today support recruitment processes. However, little attention has so far been paid to the question of how these different forms of software support are effectively adopted by employers and how the interplay between HR departments, specialized departments, shared service centers, and external service providers can be understood on an organizational level. Our research questions within this article are therefore: how do human resources information systems (HRIS) diffuse along the recruitment process, and how can we explain the phenomena observed empirically? In order to answer these questions, we outline the literature on IS adoption and diffusion. Then, building on our own longitudinal qualitative and quantitative research, we present a theoretically and empirically grounded framework of HRIS adoption and diffusion for the recruitment function.
7

Al-Hamdani, Wasim A. "E-Mail, Web Service and Cryptography." In Applied Cryptography for Cyber Security and Defense, 52–78. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61520-783-1.ch003.

Abstract:
Cryptography is the study and practice of protecting information and has been used since ancient times in many different shapes and forms to protect messages from being intercepted. However, since 1976, when data encryption was selected as an official Federal Information Processing Standard (FIPS) for the United States, cryptography has gained wide attention and a great amount of application and use. Furthermore, cryptography started to become part of protected public communication when e-mail came into common public use. There are many electronic services. Some are based on web interaction, and others are used as independent servers, called e-mail hosting services: Internet hosting services that run e-mail servers. Encrypting e-mail messages as they traverse the Internet is not the only reason to understand or use various cryptographic methods. Every time one checks one's e-mail, the password is sent over the wire. Many Internet service providers and corporate environments use no encryption on their mail servers, and the passwords used to check mail are submitted to the network in clear text (with no encryption). When a password is put onto a wire in clear text, it can easily be intercepted. Encrypting e-mail will keep all but the most dedicated hackers from intercepting and reading private communications. Using a personal e-mail certificate, one can digitally sign an e-mail so that recipients can verify that it is really from the sender, as well as encrypt messages so that only the intended recipients can view them. A web service is defined as "a software system designed to support interoperable machine-to-machine interaction over a network," and e-mail as "communicating electronically on the computer." This chapter focuses on three topics: e-mail structure and organization; web service types and their organization; and the cryptographic algorithms integrated into e-mail and web services to provide a high level of security. The main aim is to build a general foundation, through definitions, history, symmetric and asymmetric cryptographic algorithms, hash algorithms, digital signatures, Suite B, and general principles, for introducing the use of cryptography in e-mail and web services.

Conference papers on the topic "Corpora Processing Software"

1

Chadha, Bipin, R. E. Fulton, and J. C. Calhoun. "Case Study Approach for Information-Integration of Material Handling." In ASME 1991 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/edm1991-0178.

Abstract:
Information-Integration is vital for keeping manufacturing operations competitive. A case study approach has been adopted to better understand the role of information in integrated manufacturing. Information is now considered a corporate asset; its creation, processing, movement, and security are therefore as important as the products and services of an enterprise. The case studies have helped in identifying the issues involved in developing an information system and supporting software framework for a manufacturing enterprise. They have also helped in refining an integration model and in identifying the characteristics desirable in modeling methodologies and tools. This paper describes a case study dealing with the integrated manufacture of optical fiber products. A phased development and implementation approach was adopted, in which a small, manageable slice of the system is considered for the case study, followed by functional modeling (IDEF0) and data flow modeling (Data Flow Diagrams). This identifies the pieces of information of interest. The information relationships are modeled using Extended Entity Relationship (EER) diagrams, which are then mapped onto a relational model. The relational tables thus obtained were implemented on a commercial Database Management System. The functional constraints and application interfaces were then built using SQL and commercial application interface tools. The sections in the paper describe the functional models, data flow diagrams, EER diagrams, relational database design, and user/application interfaces developed for the system. Implementation experiences and observations are discussed, followed by plans for the next phase of the system.
2

Semke, William H., Richard R. Schultz, David Dvorak, Samuel Trandem, Brian Berseth, and Matthew Lendway. "Utilizing UAV Payload Design by Undergraduate Researchers for Educational and Research Development." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43620.

Abstract:
An undergraduate team consisting of mechanical and electrical engineering students at the University of North Dakota developed an electro-optical and un-cooled thermal infrared digital imaging remote sensing payload for an Unmanned Aerial Vehicle (UAV). The first iteration of the payload design began in the fall of 2005, and the inaugural flight tests took place at Camp Ripley, Minnesota, a National Guard facility, in the fall of 2006 with a corporate partner. The second iteration design, with increased performance in object tracking and data processing, is expected to fly in the summer of 2007. Payload development for integration into a UAV is a process that is not currently well defined by industrial practices or regulated by government. These processes are a significant part of the research being conducted in order to define "best practices." The emerging field of UAVs generates tremendous interest and serves to attract quality students into the research. As with many emerging technologies, there are many exciting new developments; however, the fundamentals taught in core courses are still critical to the process and serve as the basis of the system. In this manner, the program stimulates innovative design while maintaining a solid connection to undergraduate courses and illustrates the importance of advanced courses. The payload development was guided by off-the-shelf components and software, using a systems engineering methodology throughout the project. Many of the design and payload flight constraints were based on external factors, such as difficulties with access to airspace, weather-related delays, and ITAR restrictions on hardware. Overall, the research project continues to be a tremendous experiential learning activity for mechanical and electrical engineering students, as well as for the faculty members. The process has been extremely successful in enhancing students' expertise in systems engineering and design and in developing the UAV payload design knowledge base and necessary infrastructure at the university.