Academic literature on the topic '280300 Computer Software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '280300 Computer Software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "280300 Computer Software"

1. Du, Caifan, Johanna Cohoon, Patrice Lopez, and James Howison. "Understanding progress in software citation: a study of software citation in the CORD-19 corpus." PeerJ Computer Science 8 (July 25, 2022): e1022. http://dx.doi.org/10.7717/peerj-cs.1022.

Abstract:
In this paper, we investigate progress toward improved software citation by examining current software citation practices. We first introduce our machine learning based data pipeline that extracts software mentions from the CORD-19 corpus, a regularly updated collection of more than 280,000 scholarly articles on COVID-19 and related historical coronaviruses. We then closely examine a stratified sample of extracted software mentions from recent CORD-19 publications to understand the status of software citation. We also searched online for the mentioned software projects and their citation requests. We evaluate both practices of referencing software in publications and making software citable in comparison with earlier findings and recent advocacy recommendations. We found increased mentions of software versions, increased open source practices, and improved software accessibility. Yet, we also found a continuation of high numbers of informal mentions that did not sufficiently credit software authors. Existing software citation requests were diverse but did not match with software citation advocacy recommendations nor were they frequently followed by researchers authoring papers. Finally, we discuss implications for software citation advocacy and standard making efforts seeking to improve the situation. Our results show the diversity of software citation practices and how they differ from advocacy recommendations, provide a baseline for assessing the progress of software citation implementation, and enrich the understanding of existing challenges.
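The paper's extraction pipeline is machine-learning based and is not reproduced in the abstract. Purely as a hedged illustration of the underlying task (finding software mentions and version strings in sentences), a naive dictionary-plus-regex extractor might look like the sketch below; the software names and patterns are invented for the example and are not the paper's method.

```python
import re

# Hypothetical, simplified stand-in for an ML-based mention extractor:
# scan a sentence for known software names and an optional version string.
KNOWN_SOFTWARE = {"SPSS", "BLAST", "Prism", "ImageJ"}
VERSION = re.compile(r"\b(?:v(?:ersion)?\s*)?(\d+(?:\.\d+)+)\b", re.IGNORECASE)

def extract_mentions(sentence: str):
    """Return (software, version-or-None) pairs found in a sentence."""
    mentions = []
    for tok in re.findall(r"[A-Za-z][\w+.-]*", sentence):
        if tok in KNOWN_SOFTWARE:
            m = VERSION.search(sentence)
            mentions.append((tok, m.group(1) if m else None))
    return mentions

print(extract_mentions("Analyses were done in SPSS version 26.0."))
# → [('SPSS', '26.0')]
```

A real pipeline, as the abstract notes, learns mentions from labeled text rather than relying on a fixed dictionary, which is what lets it catch informal mentions that a list like this would miss.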
2. Ifkirne, Mohammed, Houssam El Bouhi, Siham Acharki, Quoc Bao Pham, Abdelouahed Farah, and Nguyen Thi Thuy Linh. "Multi-Criteria GIS-Based Analysis for Mapping Suitable Sites for Onshore Wind Farms in Southeast France." Land 11, no. 10 (October 19, 2022): 1839. http://dx.doi.org/10.3390/land11101839.

Abstract:
Wind energy is critical to replacing traditional energy sources in France and throughout the world. Wind energy generation in France is quite unevenly spread across the country. Despite its considerable wind potential, the research region is among the least productive. The region is a very complicated location where socio-environmental, technological, and topographical restrictions intersect, which is why energy production planning studies in this area have been delayed. In this research, the methodology used for identifying appropriate sites for future wind farms in this region combines GIS with MCDA approaches such as AHP. Six determining factors are selected: the average wind speed, which has a weight of 38%; the protected areas, which have a relative weight of 26%; the distance to electrical substations and road networks, both of which have relative weights of 13%; and finally, the slope and elevation, which have weights of 5% and 3%, respectively. Only one alternative was investigated (suitable or unsuitable). The spatial database was generated using ArcGIS and QGIS software; the AHP was computed using Excel; and several treatments, such as raster data categorization and weighted overlay, were automated using the Python programming language. The regions identified for wind turbine installation are defined by a total of 962,612 pixels, which cover a total of 651 km² and represent around 6.98% of the research area. The theoretical wind potential calculation results suggest that for at least one site with an area bigger than 400 ha, the energy output ranges between 182.60 and 280.20 MW. The planned sites appear to be suitable; each site can support an average installed capacity of 45 MW. This energy benefit will fulfill the region's population's transportation, heating, and electrical demands.
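The abstract reports the AHP weights but not the overlay code. As a minimal sketch of the weighted-overlay step using the stated weights (the toy factor arrays below are invented stand-ins for the classified rasters the study produced in ArcGIS/QGIS; the threshold is likewise illustrative):

```python
import numpy as np

# AHP weights reported in the abstract (sum to ~0.98 due to rounding).
weights = {
    "wind_speed": 0.38, "protected_areas": 0.26,
    "substation_dist": 0.13, "road_dist": 0.13,
    "slope": 0.05, "elevation": 0.03,
}

# Toy 2x2 suitability scores in [0, 1] per factor (stand-ins for rasters).
rng = np.random.default_rng(0)
factors = {name: rng.random((2, 2)) for name in weights}

# Weighted overlay: per-pixel weighted sum of factor scores, then a
# binary suitable/unsuitable split as in the study's single alternative.
score = sum(w * factors[name] for name, w in weights.items())
suitable = score > 0.5 * sum(weights.values())
print(suitable)
```

In the actual workflow each factor raster would first be classified onto a common suitability scale before the weighted sum, which is what the abstract's "raster data categorization" step refers to.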
3. Bucher, Taina. "About a Bot: Hoax, Fake, Performance Art." M/C Journal 17, no. 3 (June 7, 2014). http://dx.doi.org/10.5204/mcj.814.

Abstract:
Introduction

Automated or semi-automated software agents, better known as bots, have become an integral part of social media platforms. Reportedly, bots now generate twenty-four per cent of all posts on Twitter (Orlean “Man”), yet we know very little about who these bots are, what they do, or how to attend to these bots. This article examines one particularly prominent exemplar: @Horse_ebooks, a much beloved Twitter bot that turned out not to be a “proper” bot after all. By examining how people responded to the revelations that the @Horse_ebooks account was in fact a human and not an automated software program, the intention here is not only to nuance some of the more common discourses around Twitter bots as spam, but more directly and significantly, to use the concept of persona as a useful analytical framework for understanding the relationships people forge with bots. Twitter bots tend to be portrayed as annoying parasites that generate “fake traffic” and “steal identities” (Hill; Love; Perlroth; Slotkin). According to such news media presentations, bots are part of an “ethically-questionable industry,” where they operate to provide a (false) impression of popularity (Hill). In a similar vein, much of the existing academic research on bots, especially from a computer science standpoint, tends to focus on the destructive nature of bots in an attempt to design better spam detection systems (Laboreiro et al.; Weiss and Tscheligi; Zangerle and Specht). While some notable exceptions exist (Gehl; Hwang et al.; Mowbray), there is still an obvious lack of research on Twitter bots within Media Studies. By examining a case of “bot fakeness”—albeit in a somewhat different manner—this article contributes an understanding of Twitter bots as medium-specific personas. The case of @Horse_ebooks shows how people treat a bot as having a distinct personality.
More importantly, this case study shows how the relationship people forge with an alleged bot differs from how they would relate to a human. To understand the ambiguity of the concept of persona as it applies to bots, this article relies on para-social interaction theory as developed by Horton and Wohl. In their seminal article first published in 1956, Horton and Wohl understood para-social interaction as a “simulacrum of conversational give and take” that takes place particularly between mass media users and performers (215). The relationship was termed para-social because, despite the nonreciprocal exposure situation, the viewer would feel as if the relationship was real and intimate. Like theater, an illusory relationship would be created between what they called the persona—an “indigenous figure” presented and created by the mass media—and the viewer (Horton and Wohl 216). Like the “new types of performers” created by the mass media—“the quizmasters, announcers or ‘interviewers’”—bots too seem to represent a “special category of ‘personalities’ whose existence is a function of the media themselves” (Horton and Wohl 216). In what follows, I revisit the concept of para-social interaction using the case of @Horse_ebooks, to show that there is potential to expand an understanding of persona to include non-human actors as well.

Everything Happens So Much: The Case of @Horse_ebooks

The case of the now debunked Twitter account @Horse_ebooks is interesting for a number of reasons, not least because it highlights the presence of what we might call botness, the belief that bots possess distinct personalities or personas that are specific to algorithms. In the context of Twitter, bots are pieces of software or scripts that are designed to automatically or semi-automatically publish tweets or make and accept friend requests (Mowbray).
Typically, bots are programmed and designed to be as humanlike as possible, a goal that has been pursued ever since Alan Turing proposed what has now become known as the Turing test (Gehl; Weiss and Tscheligi). The Turing test provides the classic challenge for artificial intelligence, namely, whether a machine can impersonate a human so convincingly that it becomes indistinguishable from an actual human. This challenge is particularly pertinent to spambots as they need to dodge the radar of increasingly complex spam filters and detection algorithms. To avoid detection, bots masquerade as “real” accounts, trying to seem as human as possible (Orlean “Man”). Not all bots, however, pretend to be humans. Bots are created for all kinds of purposes. As Mowbray points out, “many bots are designed to be informative or otherwise useful” (184). For example, bots are designed to tweet news headlines, stock market quotes, traffic information, weather forecasts, or even the hourly bell chimes from Big Ben. Others are made for more artistic purposes or simply for fun by hackers and other Internet pundits. These bots tell jokes, automatically respond to certain keywords typed by other users, or write poems (e.g., @pentametron, @ProfJocular). Amidst the growing bot population on Twitter, @Horse_ebooks is perhaps one of the best known and most prominent. The account was originally created by Russian web developer Alexey Kouznetsov and launched on 5 August 2010. In the beginning, @Horse_ebooks periodically tweeted links to an online store selling e-books, some of which were themed around horses. What most people did not know, until it was revealed to the public on 24 September 2013 (Orlean “Horse”), was that the @Horse_ebooks account had been taken over by artist and Buzzfeed employee Jacob Bakkila in September 2011. Only a year after its inception, @Horse_ebooks went from being a bot to being a human impersonating a bot impersonating a human.
After making a deal with Kouznetsov, Bakkila disabled the spambot and started generating tweets on behalf of @Horse_ebooks, using found material and text strings from various obscure Internet sites. The first tweet in Bakkila’s disguise was published on 14 September 2011, saying: “You will undoubtedly look on this moment with shock and”. For the next two years, streams of similar, “strangely poetic” (Chen) tweets were published, slowly giving rise to a devoted and growing fan base. Over the years, @Horse_ebooks became somewhat of a cultural phenomenon—an Internet celebrity of sorts. By 2012, @Horse_ebooks had risen to Internet fame, becoming one of the most mentioned “spambots” in news reports and blogs (Chen).

Responses to the @Horse_ebooks “Revelation”

On 24 September 2013, journalist Susan Orlean published a piece in The New Yorker revealing that @Horse_ebooks was in fact “human after all” (Orlean “@Horse_ebooks”). The revelation rapidly spurred a plethora of different reactions from its followers and fans, ranging from indifference to admiration and disappointment. Some of the sadness and disappointment felt can be seen clearly in the many media reports, blog posts and tweets that emerged after the New Yorker story was published. Meyer of The Atlantic expressed his disbelief as follows: @Horse_ebooks, reporters told us, was powered by an algorithm. [...] We loved the horse because it was the network talking to itself about us, while trying to speak to us. Our inventions, speaking—somehow sublimely—of ourselves. Our joy was even a little voyeuristic. An algorithm does not need an audience. To me, though, that disappointment is only a mark of the horse’s success. We loved @Horse_ebooks because it was seerlike, childlike. But no: There were people behind it all along. We thought we were obliging a program, a thing which needs no obliging, whereas in fact we were falling for a plan. (Original italics) People felt betrayed, indeed fooled by @Horse_ebooks.
As Watson sees it, “The internet got up in arms about the revelation, mostly because it disrupted our desire to believe that there was beauty in algorithms and randomness.” Several prominent Internet pundits, developers and otherwise computationally skilled people quickly shared their disappointment and even anger on Twitter. As Jacob Harris, a self-proclaimed @Horse_ebooks fan and news hacker at the New York Times, expressed it: Harris’ comparisons to the winning chess-playing computer Deep Blue speak to the kind of disappointment felt. They speak to the deep fascination that people feel towards the mysteries of the machine. They speak to the fundamental belief in the potentials of machine intelligence and to the kind of techno-optimism felt amongst many hackers and “webbies.” As technologist and academic Dan Sinker said, “If I can’t rely on a Twitter bot to actually be a bot, what can I rely on?” (Sinker “If”). Perhaps most poignantly, Sinker noted his obvious disbelief in a blog post tellingly titled “Eulogy for a horse”: It’s been said that, given enough time, a million monkeys at typewriters would eventually, randomly, type the works of Shakespeare. It’s just a way of saying that mathematically, given infinite possibilities, eventually everything will happen. But I’ve always wanted it literally to be true. I’ve wanted those little monkeys to produce something beautiful, something meaningful, and yet something wholly unexpected. @Horse_ebooks was my monkey Shakespeare. I think it was a lot of people’s… [I]t really feels hard, like a punch through everything I thought I knew. (Sinker “Eulogy”) It is one thing to be fooled by a human and quite another to be fooled by a “Buzzfeed employee.” More than anything perhaps, the question of authenticity and trustworthiness seems to be at stake. In this sense, “It wasn’t the identities of the feed’s writers that shocked everyone (though one of the two writers works for BuzzFeed, which really pissed people off).
Rather, it was the fact that they were human in the first place” (Farago). As Sinker put it at the end of the “Eulogy”: I want to believe this wasn’t just yet another internet buzz-marketing prank. I want to believe that @Horse was as beautiful and wonderful today as it was yesterday. I want to believe that beauty can be assembled from the randomness of life all around us. I want to believe that a million monkeys can make something amazing. God. I really, really do want to believe. But I don’t think I do. And that feels even worse.

Bots as Personae: Revisiting Horton and Wohl’s Concept of Para-Social Relations

How then are we to understand and interpret @Horse_ebooks and peoples’ responses to the revelations? Existing research on human-robot relations suggests that machines are routinely treated as having personalities (Turkle “Life”). There is even evidence to suggest that people often imagine relationships with (sufficiently responsive) robots as being better than relationships with humans. As Turkle explains, this is because relationships with machines, unlike humans, do not demand any ethical commitments (Turkle “Alone”). In other words, bots are oftentimes read and perceived as personas, with which people forge affective relationships. The term “persona” can be understood as a performance of personhood. In a Goffmanian sense, this performance describes how human beings enact roles and present themselves in public (Goffman). As Moore puts it, “the persona is a projection, a puppet show, usually constructed by an author and enlivened by the performance, interpretation, or adaptation”. From Marcel Mauss’ classic analysis of gifts as objects thoroughly personified (Scott), through to the study of drag queens (Strübel-Scheiner), the concept of persona signifies a masquerade, a performance.
As a useful concept to theorise the performance and doing of personhood, persona has been used to study everything from celebrity culture (Marshall), fiction, and social networking sites (Zhao et al.). The concept also figures prominently in Human Computer Interaction and Usability Studies, where the creation of personas constitutes an important design methodology (Dong et al.). Furthermore, as Marshall points out, persona figures prominently in Jungian psychoanalysis where it exemplifies the idea of “what a man should appear to be” (166). While all of these perspectives allow for interesting analysis of personas, here I want to draw on an understanding of persona as a medium-specific condition. Specifically, I want to revisit Horton and Wohl’s classic text about para-social interaction. Despite the fact that it was written almost 60 years ago and in the context of the then emerging mass media – radio, television and movies – their observations are still relevant and useful to theorise the kinds of relations people forge with bots today. According to Horton and Wohl, the “persona offers, above all, a continuing relationship. His appearance is a regular and dependable event, to be counted on, planned for, and integrated into the routines of daily life” (216). The para-social relations between audiences and TV personas are developed over time and become more meaningful to the audience as they acquire a history. Not only are devoted TV audiences characterized by a strong belief in the character of the persona, they are also expected to “assume a sense of personal obligation to the performer” (Horton and Wohl 220). As Horton and Wohl note, the “fan” “comes to believe that he ‘knows’ the persona more intimately and profoundly than others do; that he ‘understands’ his character and appreciates his values and motives” (216). In a similar vein, fans of @Horse_ebooks expressed their emotional attachments in blog posts and tweets.
For Sinker, @Horse_ebooks seemed to represent the kind of dependable and regular event that Horton and Wohl described: “Even today, I love @Horse_ebooks. A lot. Every day it was a gift. There were some days—thankfully not all that many—where it was the only thing I looked forward to. I know that that was true for others as well” (Sinker “Eulogy”). Judging from retrospective Twitter searches for @Horse_ebooks, the bot meant something, if not much, to other people as well. Still, almost a year after the revelations, people regularly tweet that they miss @Horse_ebooks. For example, Harris tweets messages saying things like: “I’m still bitter about @Horse_ebooks” (12 November 2013) or “Many of us are still angry and hurt about @Horse_ebooks” (27 December 2013). Twitter user @third_dystopia says he feels something is missing from his life, realizing “horse eBooks hasn’t tweeted since September.” Another of the many messages posted in retrospect similarly says: “I want @Horse_ebooks back. Ever since he went silent, Twitter hasn’t been the same for me” (Lockwood). Indeed, Marshall suggests that affect is at “the heart of a wider persona culture” (162). In a Deleuzian understanding of the term, affect refers to the “capacity to affect and be affected” (Stewart 2). Borrowing from Marshall, what the @Horse_ebooks case shows is “that there are connections in our culture that are not necessarily coordinated with purposive and rational alignments. They are organised around clusters of sentiment that help situate people on various spectra of activity and engagement” (162). The concept of persona helps to understand how the performance of @Horse_ebooks depends on the audience to “contribute to the illusion by believing in it” (Horton and Wohl 220). “@Horse_ebooks was my monkey”, as Sinker says, suggests a fundamental loss. In this case the para-social relation could no longer be sustained, as the illusion of being engaged in a relation with a machine was taken away.
The concept of para-social relations helps not only to illuminate the similarities between how people reacted to @Horse_ebooks and the way in which Horton and Wohl described peoples’ reactions to TV personas. It also allows us to see some crucial differences between the ways in which people relate to bots compared to how they relate to a human. For example, rather than an expression of grief at the loss of a social relationship, it could be argued that the responses triggered by the @Horse_ebooks revelations expressed a more general loss of belief in the promises of artificial intelligence. To a certain extent, the appeal of @Horse_ebooks was precisely the fact that it was widely believed not to be a person. Whereas TV personas demand an ethical and social commitment on the part of the audience to keep the masquerade of the performer alive, a bot “needs no obliging” (Meyer). Unlike TV personas that depend on an illusory sense of intimacy, bots do “not need an audience” (Meyer). Whether or not people treat bots in the same way as they treat TV personas, Horton and Wohl’s concept of para-social relations ultimately points towards an understanding of the bot persona as “a function of the media themselves” (Horton and Wohl 216). If quizmasters were seen as the “typical and indigenous figures” of mass media in 1956 (Horton and Wohl 216), the bot, I would suggest, constitutes such an “indigenous figure” today. The bot, if not exactly a “new type of performer” (Horton and Wohl 216), is certainly a pervasive “performer”—indeed a persona—on Twitter today. While @Horse_ebooks was somewhat paradoxically revealed as a “performance art” piece (Orlean “Man”), the concept of persona allows us to see the “real” performance of @Horse_ebooks as constituted in the doing of botness. As the responses to @Horse_ebooks show, the concept of persona is not merely tied to beliefs about “what man should appear to be” (Jung 158), but also to ideas about what a bot should appear to be.
Moreover, what the curious case of @Horse_ebooks shows is how bots are not necessarily interpreted and judged by the standards of the original Turing test, that is, how humanlike they are, but according to how well they perform as bots. Indeed, we might ultimately understand the present case as a successful reverse Turing test, highlighting how humans can impersonate a bot so convincingly that it becomes indistinguishable from an actual bot.

References

Chen, Adrian. “How I Found the Human Being Behind @Horse_ebooks, The Internet's Favorite Spambot.” Gawker 23 Feb. 2012. 20 Apr. 2014 ‹http://gawker.com/5887697/how-i-found-the-human-being-behind-horseebooks-the-internets-favorite-spambot›.
Dong, Jianming, Kuldeep Kelkar, and Kelly Braun. “Getting the Most Out of Personas for Product Usability Enhancements.” Usability and Internationalization. HCI and Culture. Lecture Notes in Computer Science 4559 (2007): 291-96.
Farago, Jason. “Give Me a Break. @Horse_ebooks Isn’t Art.” New Republic 24 Sep. 2013. 2 Apr. 2014 ‹http://www.newrepublic.com/article/114843/horse-ebooks-twitter-hoax-isnt-art›.
Gehl, Robert. Reverse Engineering Social Media: Software, Culture, and Political Economy in New Media Capitalism. Temple University Press, 2014.
Goffman, Erving. The Presentation of Self in Everyday Life. New York: Anchor Books, 1959.
Harris, Jacob (harrisj). “For a programmer like me who loves whimsical code, it’s a bit like being into computer chess and finding out Deep Blue has a guy inside.” 24 Sep. 2013, 5:03. Tweet.
Harris, Jacob (harrisj). “I’m still bitter about @Horse_ebooks.” 12 Nov. 2013, 00:15. Tweet.
Harris, Jacob (harrisj). “Many of us are still angry and hurt about @horse_ebooks.” 27 Dec. 2013, 6:24. Tweet.
Hill, Kashmir. “The Invasion of the Twitter Bots.” Forbes 9 Aug. 2012. 13 Mar. 2014 ‹http://www.forbes.com/sites/kashmirhill/2012/08/09/the-invasion-of-the-twitter-bots›.
Horton, Donald, and Richard Wohl. “Mass Communication and Para-Social Interaction: Observations on Intimacy at a Distance.” Psychiatry 19 (1956): 215-29.
Isaacson, Andy. “Are You Following a Bot? How to Manipulate Social Movements by Hacking Twitter.” The Atlantic 2 Apr. 2011. 13 Mar. 2014 ‹http://www.theatlantic.com/magazine/archive/2011/05/are-you-following-a-bot/308448/›.
Jung, Carl. Two Essays on Analytical Psychology. 2nd ed. London: Routledge, 1992.
Laboreiro, Gustavo, Luís Sarmento, and Eugénio Oliveira. “Identifying Automatic Posting Systems in Microblogs.” Progress in Artificial Intelligence. Ed. Luis Antunes and H. Sofia Pinto. Berlin: Springer Verlag, 2011.
Lee, Kyumin, B. David Eoff, and James Caverlee. “Seven Months with the Devils: A Long-Term Study of Content Polluters on Twitter.” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 2011.
Lockwood, Alex (heislockwood). “I want @Horse_ebooks back. Ever since he went silent, Twitter hasn’t been the same for me.” 7 Jan. 2014, 15:49. Tweet.
Love, Dylan. “More than One Third of Web Traffic Is Fake.” Slate 24 Mar. 2014. 20 Apr. 2014 ‹http://www.slate.com/blogs/business_insider/2014/03/24/fake_online_traffic_36_percent_of_all_web_traffic_is_fraudulent.html›.
Marshall, P. David. “Persona Studies: Mapping the Proliferation of the Public Self.” Journalism 15.2 (2014): 153-70.
Meyer, Robinson. “@Horse_ebooks Is the Most Successful Piece of Cyber Fiction, Ever.” The Atlantic 24 Sep. 2013. 2 Apr. 2014 ‹http://www.theatlantic.com/technology/archive/2013/10/an-amazing-new-twitter-account-that-sort-of-mimics-your-tweets/280400›.
Moore, Chris. “Personae or Personas: The Social in Social Media.” Persona Studies 13 Oct. 2011. 20 Apr. 2014 ‹http://www.personastudies.com/2011/10/personae-or-personas-social-in-social.html›.
Mowbray, Miranda. “Automated Twitter Accounts.” Twitter and Society. Eds. Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt and Cornelius Puschmann. New York: Peter Lang, 2014. 183-94.
Orlean, Susan. “Man and Machine: Playing Games on the Internet.” The New Yorker 10 Feb. 2014. 13 Mar. 2014 ‹http://www.newyorker.com/reporting/2014/02/10/140210fa_fact_orlean›.
Orlean, Susan. “@Horse_ebooks Is Human after All.” The New Yorker 24 Sep. 2013. 15 Feb. 2013 ‹http://www.newyorker.com/online/blogs/elements/2013/09/horse-ebooks-and-pronunciation-book-revealed.html›.
Pearce, Ian, Max Nanis, and Tim Hwang. “PacSocial: Field Test Report.” 15 Nov. 2011. 2 Apr. 2014 ‹http://pacsocial.com/files/pacsocial_field_test_report_2011-11-15.pdf›.
Perlroth, Nicole. “Fake Twitter Followers Become Multimillion-Dollar Business.” The New York Times 5 Apr. 2013. 13 Mar. 2014 ‹http://bits.blogs.nytimes.com/2013/04/05/fake-twitter-followers-becomes-multimillion-dollar-business/?_php=true&_type=blogs&_php=true&_type=blogs&_r=1›.
Scott, Linda. “The Troupe: Celebrities as Dramatis Personae in Advertisements.” NA: Advances in Consumer Research. Vol. 18. Eds. Rebecca H. Holman and Michael R. Solomon. Provo, UT: Association for Consumer Research, 1991. 355-63.
Sinker, Dan. “Eulogy for a Horse.” dansinker.com 24 Sep. 2013. 22 Apr. 2014 ‹http://web.archive.org/web/20140213003406/http://dansinker.com/post/62183207705/eulogy-for-a-horse›.
Sinker, Dan (dansinker). “If I can’t rely on a Twitter bot to actually be a bot. What can I rely on?” 24 Sep. 2013, 4:36. Tweet.
Slotkin, Jason. “Twitter ‘Bots’ Steal Tweeters’ Identities.” Marketplace 27 May 2013. 20 Apr. 2014 ‹http://www.marketplace.org/topics/tech/twitter-bots-steal-tweeters-identities›.
Stetten, Melissa (MelissaStetten). “Finding out @Horse_ebooks is a Buzzfeed employee’s ‘performance art’ is like Banksy revealing that he’s Jared Leto.” 25 Sep. 2013, 4:39. Tweet.
Stewart, Kathleen. Ordinary Affects. Durham: Duke University Press, 2007.
Strübel-Scheiner, Jessica. “Gender Performativity and Self-Perception: Drag as Masquerade.” International Journal of Humanities and Social Science 1.13 (2011): 12-19.
Tea Cake (third_dystopia). “I felt like something was missing from my life, and then I realized horse eBooks hasn't tweeted since September.” 9 Jan. 2014, 18:40. Tweet.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.
Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Touchstone, 1995.
Watson, Sara. “Else 9:30: The ‘Monkeys with Typewriter’ Algorithm.” John Battelle’s Searchblog 30 Sep. 2013. 23 Mar. 2014 ‹http://battellemedia.com/archives/2013/09/else-9-30-believing-in-monkeys-with-typewriters-algorithms.php›.
Weiss, Astrid, and Manfred Tscheligi. “Rethinking the Human–Agent Relationship: Which Social Cues Do Interactive Agents Really Need to Have?” Believable Bots: Can Computers Play Like People? Ed. Philip Hingston. Berlin: Springer Verlag, 2012. 1-28.
Zangerle, Eva, and Günther Specht. “‘Sorry, I Was Hacked’: A Classification of Compromised Twitter Accounts.” Proceedings of the ACM Symposium on Applied Computing, Gyeongju, Republic of Korea, 2014.
Zhao, Shanyang, Sherri Grasmuck, and Jason Martin. “Identity Construction on Facebook: Digital Empowerment in Anchored Relationships.” Computers in Human Behavior 24.5 (2008): 1816-36.

Dissertations / Theses on the topic "280300 Computer Software"

1. Jiang, Feng. "Capturing event metadata in the sky : a Java-based application for receiving astronomical internet feeds : a thesis presented in partial fulfilment of the requirements for the degree of Master of Computer Science in Computer Science at Massey University, Auckland, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/897.

Abstract:
When an astronomical observer discovers a transient event in the sky, how can the information be immediately shared and delivered to others? Not long ago, people shared information about what they discovered in the sky by books, telegraphs, and telephones. The new way of transferring event data is via the Internet. Information about astronomical events can be packaged and put online as an Internet feed. To receive these packaged data, Internet feed listener software is required on a terminal computer. In other applications, the listener would connect to an intelligent robotic telescope network and automatically drive a telescope to capture instant astrophysical phenomena. However, because the technologies for transferring astronomical event data are at an early stage, the only resource available is the Perl-based Internet feed listener developed by the eSTAR team. In this research, a Java-based Internet feed listener was developed. The application supports more features than the Perl-based application. By drawing on the benefits of Java, the application is able to receive, parse and manage the Internet feed data efficiently through a friendly user interface. Keywords: Java, socket programming, VOEvent, real-time astronomy
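The thesis itself is written in Java and its code is not reproduced here. As a loose, hedged sketch of the core idea (a listener that receives an XML event packet over a socket and extracts metadata), the following Python fragment may help; the host, port, packet framing, and sample event are all invented for illustration, and real VOEvent brokers use a more involved framed TCP protocol.

```python
import socket
import xml.etree.ElementTree as ET

def parse_event(xml_bytes: bytes) -> dict:
    """Pull a couple of identifying attributes out of a VOEvent-style packet."""
    root = ET.fromstring(xml_bytes)
    return {"ivorn": root.get("ivorn"), "role": root.get("role")}

def listen_once(host: str, port: int) -> dict:
    """Connect to a (hypothetical) broker and read one event packet.

    Naive framing: assumes the whole packet arrives in a single recv().
    """
    with socket.create_connection((host, port)) as sock:
        data = sock.recv(65536)
    return parse_event(data)

# Illustrative stand-alone use with a made-up event packet:
sample = (b'<voe:VOEvent xmlns:voe="http://www.ivoa.net/xml/VOEvent/v2.0" '
          b'ivorn="ivo://example/GRB#1" role="observation"/>')
print(parse_event(sample))
# → {'ivorn': 'ivo://example/GRB#1', 'role': 'observation'}
```

A robotic-telescope listener of the kind the abstract describes would sit in a loop around `listen_once`, dispatching each parsed event to a scheduler that decides whether to slew a telescope.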
2. Thompson, Errol Lindsay. "How do they understand? Practitioner perceptions of an object-oriented program : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Education (Computer Science) at Massey University, Palmerston North, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/854.

Abstract:
In the computer science community, there is considerable debate about the appropriate sequence for introducing object-oriented concepts to novice programmers. Research into novice programming has struggled to identify the critical aspects that would provide a consistently successful approach to teaching introductory object-oriented programming. Starting from the premise that the conceptions of a task determine the type of output from the task, assisting novice programmers to become aware of what the required output should be may lay a foundation for improving learning. This study adopted a phenomenographic approach. Thirty-one practitioners were interviewed about the ways in which they experience object-oriented programming, and categories of description and critical aspects were identified. These critical aspects were then used to examine the spaces of learning provided in twenty introductory textbooks. The study uncovered critical aspects that related to the way practitioners expressed their understanding of an object-oriented program and the influences on their approach to designing programs. The study of the textbooks revealed large variability in their coverage of these critical aspects.
APA, Harvard, Vancouver, ISO, and other styles
3

Rountree, Richard John. "Novel technologies for the manipulation of meshes on the CPU and GPU : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Science in Computer Science at Massey University, Palmerston North, New Zealand." Massey University, 2007. http://hdl.handle.net/10179/700.

Full text
Abstract:
This thesis relates to research and development in the field of 3D mesh data for computer graphics. A review of existing storage and manipulation techniques for mesh data is given, followed by a framework for mesh editing. The proposed framework combines complex mesh editing techniques, automatic level of detail generation and mesh compression for storage. These methods work coherently due to the underlying data structure. The problem of storing and manipulating data for 3D models is a highly researched field. Models are usually represented by sparse mesh data which consists of vertex position information, the connectivity information to generate faces from those vertices, surface normal data and texture coordinate information. This sparse data is sent to the graphics hardware for rendering but must be manipulated on the CPU. The proposed framework is based upon geometry images and is designed to store and manipulate the mesh data entirely on the graphics hardware. By utilizing the highly parallel nature of current graphics hardware and new hardware features, new levels of interactivity with large meshes can be gained. Automatic level of detail rendering can be used to allow models upwards of 2 million polygons to be manipulated in real time while viewing a lower level of detail. Through the use of pixel shaders the high detail is preserved in the surface normals while geometric detail is reduced. A compression scheme is then introduced which utilizes the regular structure of the geometry image to compress the floating point data. A number of existing compression schemes are compared as well as custom bit packing. This is a TIF-funded project which is partnered with Unlimited Realities, a Palmerston North software development company. The project was to design a system to create, manipulate and store 3D meshes in a compressed and easy-to-manipulate manner.
The goal is to create the underlying technologies to allow for a 3D modelling system to become integrated into the Umajin engine, not to create a user interface/stand alone modelling program. The Umajin engine is a 3D engine created by Unlimited Realities which has a strong focus on multimedia. More information on the Umajin engine can be found at www.umajin.com. In this project we propose a method which gives the user the ability to model with the high level of detail found in packages aimed at creating offline renders but create models which are designed for real time rendering.
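The compression step can be illustrated with the simplest form of the idea, fixed-point quantization of floating-point coordinates; the 16-bit width and the code below are illustrative assumptions, not the thesis's actual bit-packing scheme:

```python
# Sketch of fixed-point quantization, the basic idea behind bit-packing
# floating-point geometry-image data. The 16-bit width is an assumption
# for illustration; the thesis compares several schemes.
def quantize(values, bits=16):
    """Map floats into integer codes within the data's own range."""
    lo, hi = min(values), max(values)
    scale = (2 ** bits - 1) / (hi - lo) if hi > lo else 0.0
    packed = [round((v - lo) * scale) for v in values]
    return packed, lo, hi

def dequantize(packed, lo, hi, bits=16):
    """Recover approximate floats from the integer codes."""
    scale = (hi - lo) / (2 ** bits - 1)
    return [lo + p * scale for p in packed]

coords = [0.0, 0.125, 0.5, 0.999, 1.0]
packed, lo, hi = quantize(coords)
restored = dequantize(packed, lo, hi)
max_err = max(abs(a - b) for a, b in zip(coords, restored))
print(max_err)  # error is bounded by (hi - lo) / (2**16 - 1)
```

The regular grid structure of a geometry image makes codes like these easy to pack contiguously, which is what makes such schemes attractive for GPU-resident meshes.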
APA, Harvard, Vancouver, ISO, and other styles
4

Johnston, Christopher Troy. "VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1219.

Full text
Abstract:
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed where a selected software algorithm is matched to a hardware architecture tailor-made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is to use a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified. Therefore, the resultant designs are heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining can be well represented by data flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view represents lower-level details by representing each block by a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks. This extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided in design (especially the type interface, pipeline and control notations).
The user evaluations provided several suggestions for the improvement of the language, and in particular the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
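The latency arithmetic that such a pipeline notation visualises can be sketched generically (an illustrative model, not VERTIPH's actual semantics):

```python
# Generic latency arithmetic for a streamed image-processing pipeline:
# a pipeline of stages must be "primed" before its first valid output
# appears, then produces one result per cycle until the stream ends.
# Illustrative model only, not VERTIPH's actual notation or semantics.
def pipeline_timing(stage_latencies, n_pixels):
    priming = sum(stage_latencies)   # cycles before the first valid output
    total = priming + (n_pixels - 1)  # one new result per cycle thereafter
    return priming, total

# Hypothetical three-stage pipeline over a VGA-sized image stream.
priming, total = pipeline_timing([1, 3, 2], n_pixels=640 * 480)
print(priming, total)
```

Making these priming and flushing intervals visible is exactly the kind of bookkeeping the computational view's pipeline notation is meant to simplify for the developer.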
APA, Harvard, Vancouver, ISO, and other styles
5

Mohanarajah, Selvarajah. "Designing CBL systems for complex domains using problem transformation and fuzzy logic : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand." Massey University, 2007. http://hdl.handle.net/10179/743.

Full text
Abstract:
Some disciplines are inherently complex and challenging to learn. This research attempts to design an instructional strategy for CBL systems to simplify learning certain complex domains. Firstly, problem transformation, a constructionist instructional technique, is used to promote active learning by encouraging students to construct more complex artefacts based on less complex ones. Scaffolding is used at the initial learning stages to alleviate the difficulty associated with complex transformation processes. The proposed instructional strategy brings various techniques together to enhance the learning experience. A functional prototype is implemented with Object-Z as the exemplar subject. Both objective and subjective evaluations using the prototype indicate that the proposed CBL system has a statistically significant impact on learning a complex domain. CBL systems include Learner models to provide adaptable support tailored to individual learners. Bayesian theory is used in general to manage uncertainty in Learner models. In this research, a fuzzy-logic-based, locally intelligent Learner model is utilized. The fuzzy model is simple to design and implement, and easy to understand and explain, as well as efficient. Bayesian theory is used to complement the fuzzy model. Evaluation shows that the accuracy of the proposed Learner model is statistically significant. Further, opening the Learner model reduces uncertainty, and the fuzzy rules are simple and resemble human reasoning processes. Therefore, it is argued that opening a fuzzy Learner model is both easy and effective. Scaffolding requires formative assessments. In this research, a confidence-based multiple-test marking scheme is proposed, as traditional schemes are not suitable for measuring partial knowledge. Subjective evaluation confirms that the proposed scheme is effective.
Finally, a step-by-step methodology for transforming simple UML class diagrams into Object-Z schemas is designed in order to implement problem transformation. This methodology could be extended to implement a semi-automated translation system from UML to object models.
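The kind of fuzzy reasoning such a Learner model uses can be sketched with a triangular membership function and one readable rule; the breakpoints and rule below are illustrative assumptions, not the thesis's actual rule base:

```python
# Sketch of a fuzzy Learner-model fuzzification step: a triangular
# membership function and overlapping knowledge categories. The
# breakpoints and the example rule are illustrative assumptions only.
def triangular(x, a, b, c):
    """Degree of membership of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def knowledge_level(score):
    """Fuzzify a test score (0..100) into overlapping categories."""
    return {
        "novice":       triangular(score, -1, 0, 50),
        "intermediate": triangular(score, 25, 50, 75),
        "expert":       triangular(score, 50, 100, 101),
    }

levels = knowledge_level(60)
# A rule such as "IF knowledge is intermediate THEN reduce scaffolding"
# reads almost like natural language, which is why an open fuzzy model
# is argued to be easy for learners to inspect and understand.
print(max(levels, key=levels.get))
```

Because each category's degree is a plain number between 0 and 1, opening the model to the learner amounts to showing these degrees and the rules that fire on them.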
APA, Harvard, Vancouver, ISO, and other styles
6

Chetsumon, Sireerat. "Attitudes of extension agents towards expert systems as decision support tools in Thailand." Lincoln University, 2005. http://hdl.handle.net/10182/1371.

Full text
Abstract:
It has been suggested 'expert systems' might have a significant role in the future through enabling many more people to access human experts. It is, therefore, important to understand how potential users interact with these computer systems. This study investigates the effect of extension agents' attitudes towards the features and use of an example expert system for rice disease diagnosis and management (POSOP). It also considers the effect of extension agents' personality traits and intelligence on their attitudes towards its use, and the agents' perception of control over using it. Answers to these questions lead to developing better systems and to increasing their adoption. Using structural equation modelling, two models, the extension agents' perceived usefulness of POSOP and their attitude towards the use of POSOP, were developed (Models ATU and ATP). Two of POSOP's features (its value as a decision support tool, and its user interface), two personality traits (Openness (O) and Extraversion (E)), and the agents' intelligence, proved to be significant, and were evaluated. The agents' attitude towards POSOP's value had a substantial impact on their perceived usefulness and their attitude towards using it, and thus their intention to use POSOP. Their attitude towards POSOP's user interface also had an impact on their attitude towards its perceived usefulness, but had no impact on their attitude towards using it. However, the user interface did contribute to its value. In Model ATU, neither Openness (O) nor Extraversion (E) had an impact on the agents' perceived usefulness, indicating POSOP was considered useful regardless of the agents' personality background. However, Extraversion (E) had a negative impact on their intention to use POSOP in Model ATP, indicating that 'introverted' agents had a clear intention to use POSOP relative to the 'extroverted' agents.
Extension agents' intelligence, in terms of their GPA, had an impact on neither their attitude nor their subjective norm (expectation of 'others' beliefs) towards the use of POSOP. It also had no association with any of the variables in either model. Both models explain and predict that it is likely that the agents will use POSOP. However, the availability of computers, particularly their capacity, is likely to impede its use. Although the agents believed using POSOP would not be difficult, they still believed training would be beneficial. To be a useful decision support tool, the expert system's value and user interface as well as its usefulness and ease of use, are all crucially important to the preliminary acceptance of a system. Most importantly, the users' problems and needs should be assessed and taken into account as a first priority in developing an expert system. Furthermore, the users should be involved in the system development. The results emphasise that the use of an expert system is not only determined by the system's value and its user interface, but also the agents' perceived usefulness, and their attitude towards using it. In addition, the agents' perception of control over using it is also a significant factor. The results suggested improvements to the system's value and its user interface would increase its potential use, and also that providing suitable computers, coupled with training, would encourage its use.
APA, Harvard, Vancouver, ISO, and other styles
7

Blakey, Jeremy Peter. "Database training for novice end users : a design research approach : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Albany, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/880.

Full text
Abstract:
Of all of the desktop software available, that for the implementation of a database is some of the most complex. With the increasing number of computer users having access to this sophisticated software, but with no obvious way to learn the rudiments of data modelling for the implementation of a database, there is a need for a simple, convenient method to improve their understanding. The research described in this thesis represents the first steps in the development of a tool to accomplish this improvement. In a preliminary study using empirical research a conceptual model was used to improve novice end users’ understanding of the relational concepts of data organisation and the use of a database software package. The results showed that no conclusions could be drawn about either the artefact used or the method of evaluation. Following the lead of researchers in the fields of both education and information systems, a design research process was developed, consisting of the construction and evaluation of a training artefact. A combination of design research and a design experiment was used in the main study described in this thesis. New to research in information systems, design research is a methodology or set of analytical techniques and perspectives, and this was used to develop a process (development of an artefact) and a product (the artefact itself). The artefact, once developed, needed to be evaluated for its effectiveness, and this was done using a design experiment. The experiment involved exposing the artefact to a small group of end users in a realistic setting and defining a process for the evaluation of the artefact. The artefact was the tool that would facilitate the improvement of the understanding of data modelling, the vital precursor to the development of a database. The research was conducted among a group of novice end users who were exposed to the artefact, facilitated by an independent person. 
In order to assess whether there was any improvement in the novices’ understanding of relational data modelling and database concepts, they then completed a post-test. Results confirmed that the artefact, trialled through one iteration, was successful in improving the understanding of these novice end users in the area of data modelling. The combination of design research and design experiment as described above gave rise to a new methodology, called experimental design research at this early juncture. The successful outcome of this research will lead to further iterations of the design research methodology, leading in turn to the further development of the artefact which will be both useful and accessible to novice users of personal computers and database software. This research has made the following original contributions. Firstly, the use of the design research methodology for the development of the artefact, which proved successful in improving novice users’ understanding of relational data structures. Secondly, the novel use of a design experiment in an information systems project, which was used to evaluate the success of the artefact. And finally, the combination of the developed artefact followed by its successful evaluation using a design experiment resulted in the hybrid experimental design research methodology. The success of the implementation of the experimental design research methodology in this information systems project shows much promise for its successful application to similar projects.
APA, Harvard, Vancouver, ISO, and other styles
8

Rutherford, Paul. "Usability of navigation tools in software for browsing genetic sequences." Diss., Lincoln University, 2008. http://hdl.handle.net/10182/948.

Full text
Abstract:
Software to display and analyse DNA sequences is a crucial tool for bioinformatics research. The data of a DNA sequence has a relatively simple format but the length and sheer volume of data can create difficulties in navigation while maintaining overall context. This is one reason that current bioinformatics applications can be difficult to use. This research examines techniques for navigating through large single DNA sequences and their annotations. Navigation in DNA sequences is considered here in terms of the navigational activities: exploration, wayfinding and identifying objects. A process incorporating user-centred design was used to create prototypes involving panning and zooming of DNA sequences. This approach included a questionnaire to define the target users and their goals, an examination of existing bioinformatics applications to identify navigation designs, a heuristic evaluation of those designs, and a usability study of prototypes. Three designs for panning and five designs for zooming were selected for development. During usability testing, users were asked to perform common navigational activities using each of the designs. The "Connected View" design was found to be the most usable for panning, while the "Zoom Slider" design was the best for zooming and the most useful zooming tool for tasks involving browsing. For some tasks the ability to zoom was unnecessary. The research provides important insights into the expectations that researchers have of bioinformatics applications and suitable methods for designing for that audience. The outcomes of this type of research can be used to help improve bioinformatics applications so that they will be truly usable by researchers.
APA, Harvard, Vancouver, ISO, and other styles
9

Dermoudy, J. "Effective Run-Time Management of Parallelism in a Functional Programming Context." 2002. http://eprints.utas.edu.au/67.

Full text
Abstract:
This thesis considers how to speed up the execution of functional programs using parallel execution, load distribution, and speculative evaluation. This is an important challenge given the increasing complexity of software systems, the decreasing cost of individual processors, and the appropriateness of the functional paradigm for parallelisation. Processor speeds are continuing to climb — but the magnitudes of increase are overridden by both the increasing complexity of software and the escalating expectation of users. Future gains in speed are likely to occur through the combination of today’s conventional uni-processors to form loosely-coupled multicomputers. Parallel program execution can theoretically provide linear speed-ups, but for this theoretical benefit to be realised two main hurdles must be overcome. The first of these is the identification and extraction of parallelism within the program to be executed. The second hurdle is the runtime management and scheduling of the parallel components to achieve the speed-up without slowing the execution of the program. Clearly a lot of work can be done by the programmer to ‘parallelise’ the algorithm. There is often, however, much parallelism available without significant effort on the part of the programmer. Functional programming languages and compilers have received much attention in the last decade for the contributions possible in parallel executions. Since the semantics of languages from the functional programming paradigm manifest the Church-Rosser property (that the order of evaluation of sub-expressions does not affect the result), sub-expressions may be executed in parallel. The absence of side-effects and the lack of state facilitate the availability of expressions suitable for concurrent evaluation. Unfortunately, such expressions may involve varying amounts of computation or require high amounts of data — both of which complicate the management of parallel execution. 
If the future of computation is through the formation of multicomputers, we are faced with the high probability that the number of available processing units will quickly outweigh the known parallelism of an algorithm at any given moment during execution. Intuitively this spare processing power should be utilised if possible. The premise of speculative evaluation is that it employs otherwise idle tasks on work that may prove beneficial. The more program components available for execution the greater the opportunity for speculation and potentially the quicker the program's result may be obtained. The second impediment for the parallel execution of programs is the scheduling of program components for evaluation. Multicomputer execution of a program involves the allocation of program components among the available tasks to maximise throughput. We present a decentralised, speculation-cognate, load distribution algorithm that allocates and manages the distribution of program components among the tasks with the co-aim of minimising the impact on tasks executing program components known to be required. In this dissertation we present our implementation of minimal-impact speculative evaluation in the context of the functional programming language Haskell augmented with a number of primitives for the indication of useful parallelism. We expound four (two quantitative and two qualitative) novel schemes for expressing the initial speculative contribution of program components and provide a translation mechanism to illustrate the equivalence of the four. The implementation is based on the Glasgow Haskell Compiler (GHC) version 0.29, the de facto standard for parallel functional programming research, and strives to minimise the runtime overhead of managing speculative evaluation. We have augmented the Graph reduction for a Unified Machine model (GUM) runtime system with our load distribution algorithm and speculative evaluation sub-system.
Both are motivated by the need to facilitate speculative evaluation without adversely impacting on program components directly influencing the program’s result. Experiments have been undertaken using common benchmark programs. These programs have been executed under sequential, conservative parallel, and speculative parallel evaluation to study the overheads of the runtime system and to show the benefits of speculation. The results of the experiments conducted using an emulated multicomputer add evidence of the usefulness of speculative evaluation in general and effective speculative evaluation in particular.
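The core idea of speculative evaluation can be sketched outside Haskell as well; the Python below is an illustrative analogue only, since the thesis implements speculation inside the GHC/GUM runtime with far lower overhead:

```python
# Generic sketch of speculative evaluation: start evaluating an
# expression that *may* be needed; if the program takes the branch that
# needs it, the work is already under way, otherwise the speculative
# task is cancelled. Illustrative analogue only; the thesis does this
# inside the GHC/GUM runtime system, not with OS-level thread pools.
from concurrent.futures import ThreadPoolExecutor

def expensive_branch():
    return sum(i * i for i in range(10_000))

def program(take_branch, pool):
    speculative = pool.submit(expensive_branch)  # begin before we know
    if take_branch:
        return speculative.result()   # speculation paid off
    speculative.cancel()              # discard work if not yet started
    return 0

with ThreadPoolExecutor(max_workers=2) as pool:
    print(program(True, pool), program(False, pool))
```

The scheduling problem the thesis addresses is visible even here: speculative work must never delay tasks whose results are known to be required, which is what its load distribution algorithm manages across a multicomputer.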
APA, Harvard, Vancouver, ISO, and other styles
10

Goldsmith, B. "A Peer to Peer Supply Chain Network." 2004. http://eprints.utas.edu.au/148.

Full text
Abstract:
Many papers have speculated on the possibility of applying peer-to-peer networking concepts to networks that exist in the physical world such as financial markets, business or personal communication and ad hoc networking. One such application that has been discussed in the literature is the application of peer-to-peer to corporate supply chains, to provide a flexible communication medium that may overcome some classical problems in supply chain management. This thesis presents the design, development and evaluation of a system which implements a peer-to-peer supply chain system. A general, flexible peer-to-peer network was developed, which serves as a foundation on which to build peer-to-peer data-swapping applications. It provides simple network management, searching and data swapping services, which form the basis of many peer-to-peer systems. Using the developed framework, a supply-chain-focussed application was built to test the feasibility of applying peer-to-peer networking to supply chain management. Results and discussion of a scenario analysis, which yielded positive outcomes, are presented. Several future directions for research in this area are also discussed.
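The flood-style search such a framework provides can be sketched in a few lines; the Peer class below is an illustrative assumption, not the thesis's actual design:

```python
# Minimal sketch of peer-to-peer flood search: each peer knows only its
# neighbours and forwards a query until a hop limit (TTL) is reached.
# Illustrative only; the thesis's framework layers network management
# and data-swapping services on top of searching like this.
class Peer:
    def __init__(self, name, inventory):
        self.name = name
        self.inventory = inventory    # items this supplier holds
        self.neighbours = []

    def search(self, item, ttl=3, seen=None):
        """Flood the query through the network; return peers holding item."""
        seen = set() if seen is None else seen
        if self.name in seen or ttl < 0:
            return []                 # already visited, or hop limit hit
        seen.add(self.name)
        hits = [self.name] if item in self.inventory else []
        for n in self.neighbours:
            hits += n.search(item, ttl - 1, seen)
        return hits

# A tiny three-supplier chain: A -> B -> C.
a, b, c = Peer("A", {"bolts"}), Peer("B", set()), Peer("C", {"bolts"})
a.neighbours, b.neighbours = [b], [c]
print(a.search("bolts"))
```

In a supply chain setting, a query like this lets a buyer locate stock across partners without any central directory, which is the flexibility the thesis set out to evaluate.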
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "280300 Computer Software"

1

Samolej, Sławomir, and Tomasz Szmuc. "HTCPNs–Based Modelling and Evaluation of Dynamic Computer Cluster Reconfiguration." In Advances in Software Engineering Techniques, 97–108. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28038-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cunningham, Sean, Jemil Gambo, Aidan Lawless, Declan Moore, Murat Yilmaz, Paul M. Clarke, and Rory V. O’Connor. "Software Testing: A Changing Career." In Communications in Computer and Information Science, 731–42. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Keskin Kaynak, İlgi, Evren Çilden, and Selin Aydin. "Software Quality Improvement Practices in Continuous Integration." In Communications in Computer and Information Science, 507–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Murphy, Alex, Ben Kelly, Kai Bergmann, Kyrylo Khaletskyy, Rory V. O’Connor, and Paul M. Clarke. "Examining Unequal Gender Distribution in Software Engineering." In Communications in Computer and Information Science, 659–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Meade, Edward, Emma O’Keeffe, Niall Lyons, Dean Lynch, Murat Yilmaz, Ulas Gulec, Rory V. O’Connor, and Paul M. Clarke. "The Changing Role of the Software Engineer." In Communications in Computer and Information Science, 682–94. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nalepa, Gislaine, Rafaela Mantovani Fontana, Sheila Reinehr, and Andreia Malucelli. "Using Agile Approaches to Drive Software Process Improvement Initiatives." In Communications in Computer and Information Science, 495–506. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Corazza, Anna, Sergio Di Martino, Valerio Maggio, and Giuseppe Scanniello. "Combining Machine Learning and Information Retrieval Techniques for Software Clustering." In Communications in Computer and Information Science, 42–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28033-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gasca-Hurtado, Gloria Piedad, María Clara Gómez-Alvarez, Mirna Muñoz, and Adriana Peña. "A Gamified Proposal for Software Risk Analysis in Agile Methodologies." In Communications in Computer and Information Science, 272–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yilmaz, Murat, Serdar Tasel, Eray Tuzun, Ulas Gulec, Rory V. O’Connor, and Paul M. Clarke. "Applying Blockchain to Improve the Integrity of the Software Development Process." In Communications in Computer and Information Science, 260–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Walter, Bartosz, Marcin Wolski, Žarko Stanisavljević, and Andrijana Todosijević. "Designing a Maturity Model for a Distributed Software Organization. An Experience Report." In Communications in Computer and Information Science, 123–35. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "280300 Computer Software"

1

Etcheverry, John, Mark Patterson, and Diana Grauer. "Virtual Design of an Industrial, Large-Bore, Spark-Ignited, Natural Gas, Internal Combustion Engine for Reduction of Regulated Pollutant Emissions." In ASME 2013 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icef2013-19138.

Full text
Abstract:
The natural gas industry has long depended on large bore, two-stroke cycle, spark-ignited, gas-powered, reciprocating engines to move gas from the well to the pipeline and downstream. As regulations governing the pollutant emissions from these engines are tightened, the industry is turning to the engine OEMs for a solution. The challenge of further reducing engine emissions is not a new task to the industry. However, as the requirements placed on the engines are further restricted, the technology required to achieve these goals becomes more advanced, along with the required tools and technology to create it. New predictive tools have been created and have become more powerful and capable as computer software and hardware become more advanced, enabling engineers to create more complex designs and to do so quickly and at lower cost, all of which may not have been possible previously. This paper investigates methods used in designing the Ajax 2800 series, which is a large bore, two-stroke cycle, gas-powered, reciprocating engine, and the improvements in emissions that resulted from the application of these methods. Solutions to overcoming the challenges encountered during the process will also be presented.
APA, Harvard, Vancouver, ISO, and other styles