Academic literature on the topic 'Zipper artifact'


Journal articles on the topic "Zipper artifact"

1

Park, Daejun, and Jechang Jeong. "Quadratic Taylor Approximation Demosaicking System Using Post-Processing for Zipper Artifact Removal." IEEE Access 6 (2018): 58244–53. http://dx.doi.org/10.1109/access.2018.2874010.

2

Heiland, Sabine. "From A as in Aliasing to Z as in Zipper: Artifacts in MRI." Clinical Neuroradiology 18, no. 1 (March 2008): 25–36. http://dx.doi.org/10.1007/s00062-008-8003-y.

3

Geng, Hui, Siu-Ki Yu, Wai-Wang Lam, Wing-Kei Rebecca Wong, Yick-Wing Ho, and Sau-Fan Liu. "The Dosimetric Effect of Zipper Artifacts on TomoTherapy Adaptive Dose Calculation—A Phantom Study." Medical Dosimetry 36, no. 3 (September 2011): 306–12. http://dx.doi.org/10.1016/j.meddos.2010.06.002.

4

Kwan, Chiman, Bryan Chou, and James Bell III. "Comparison of Deep Learning and Conventional Demosaicing Algorithms for Mastcam Images." Electronics 8, no. 3 (March 11, 2019): 308. http://dx.doi.org/10.3390/electronics8030308.

Abstract:
Bayer pattern filters are used in many commercial digital cameras. In the National Aeronautics and Space Administration's (NASA) mast camera (Mastcam) imaging system, onboard the Mars Science Laboratory (MSL) rover Curiosity, a Bayer pattern filter captures the RGB (red, green, and blue) color of scenes on Mars. The Mastcam has two cameras, left and right; the right camera has three times the resolution of the left. It is well known that demosaicing introduces color and zipper artifacts. Here, we present a comparative study of demosaicing results using conventional and deep learning algorithms. Sixteen left and fifteen right Mastcam images were used in our experiments. Because ground truth images are unavailable for Mastcam data from Mars, we compared the algorithms using a blind image quality assessment model. We observed that no single algorithm works best for all images: a deep learning-based algorithm performed best on the right Mastcam images, while a conventional algorithm achieved the best results on the left Mastcam images. A subjective evaluation of five demosaiced Mastcam images was also used to compare the algorithms.
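The zipper artifact this abstract mentions arises because naive demosaicing interpolates each color channel independently across sharp edges, so the reconstructed channels disagree near the edge. A minimal, hypothetical NumPy sketch (illustrative only, not from the paper; one Bayer row reduced to 1-D) shows the effect:

```python
import numpy as np

# Hypothetical 1-D illustration: a sharp gray edge sampled through one
# RG row of a Bayer mosaic, then reconstructed by naive bilinear
# interpolation per channel. Near the edge, R and G disagree; in 2-D the
# Bayer row phase flips every row, so the disagreement alternates from
# row to row along an edge, producing the on/off "zipper" pattern.

def bayer_sample_row(gray_row):
    """Keep even pixels as R samples and odd pixels as G samples (NaN elsewhere)."""
    idx = np.arange(gray_row.size)
    r = np.where(idx % 2 == 0, gray_row, np.nan)
    g = np.where(idx % 2 == 1, gray_row, np.nan)
    return r, g

def bilinear_fill(channel):
    """Fill each missing (NaN) sample with the mean of its two horizontal neighbours."""
    out = channel.copy()
    for i in range(1, channel.size - 1):
        if np.isnan(out[i]):
            out[i] = 0.5 * (channel[i - 1] + channel[i + 1])
    return out

edge = np.array([0., 0., 0., 0., 1., 1., 1., 1.])  # sharp intensity edge (gray scene)
r, g = bayer_sample_row(edge)
r_hat, g_hat = bilinear_fill(r), bilinear_fill(g)

# For a gray scene, R and G should match after reconstruction; near the
# edge they do not, which shows up as colored fringing / zippering.
diff = np.abs(r_hat - g_hat)[1:-1]
print(diff)
```

The nonzero entries of `diff` sit exactly at the edge, where each channel's interpolation lags or leads by one pixel depending on its sampling phase.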
5

Chen, Jierun, Song Wen, and S. H. Gary Chan. "Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1018–26. http://dx.doi.org/10.1609/aaai.v35i2.16186.

Abstract:
Image demosaicking and denoising are two key fundamental steps in digital camera pipelines, aiming to reconstruct clean color images from noisy luminance readings. In this paper, we propose and study Wild-JDD, a novel learning framework for joint demosaicking and denoising in the wild. In contrast to previous works, which generally assume the ground truth of training data is a perfect reflection of reality, we consider here the more common imperfect case of ground truth uncertainty in the wild. We first illustrate its manifestation as various kinds of artifacts, including the zipper effect, color moiré, and residual noise. We then formulate a two-stage data degradation process to capture such ground truth uncertainty, in which a conjugate prior distribution is imposed upon a base distribution. After that, we derive an evidence lower bound (ELBO) loss to train a neural network that approximates the parameters of the conjugate prior distribution conditioned on the degraded input. Finally, to further enhance performance on out-of-distribution input, we design a simple but effective fine-tuning strategy that takes the input as a weakly informative prior. By taking ground truth uncertainty into account, Wild-JDD enjoys good interpretability during optimization. Extensive experiments validate that it outperforms state-of-the-art schemes on joint demosaicking and denoising tasks on both synthetic and realistic raw datasets.
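For context on the loss this abstract names (the generic variational identity, not the paper's specific derivation): an evidence lower bound arises from decomposing the log-evidence of an observation $x$ under an approximate posterior $q(z)$,

```latex
\log p(x)
  = \mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right]
    - \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)
    + \mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right)
  \;\geq\;
  \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right]
    - \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)}_{\text{ELBO}},
```

since the last KL term is nonnegative. Wild-JDD's specific ELBO, which conditions the conjugate prior's parameters on the degraded input, is derived in the paper itself.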
6

Fauziah, Nurul, and Imron Wakhid Harits. "Historical and Social Perspectives of Blyton's The Famous Five." Parafrase: Jurnal Kajian Kebahasaan & Kesastraan 22, no. 1 (May 31, 2022): 18–34. http://dx.doi.org/10.30996/parafrase.v22i1.6603.

Abstract:
This article examines the historical and social perspectives, as expressed in children's literature, of three novels in Blyton's The Famous Five series. The objective of the study is to explain the historical context and social perspectives of the three novels. The study used a qualitative method to collect, select, and analyze the data, which consist of three Famous Five novels by Enid Blyton; document analysis served as the data-collection process, with notes and online news as instruments. The theoretical framework is Jack Zipes's historical and social approach to children's literature. The study found several historical aspects and social elements in the novels: (1) twelve historical aspects dealing with place, event, and artifact, and (2) four social elements dealing with economic condition, gender equality, tradition, and culture. These historical aspects and social elements are depicted in the form of conversation and paragraphs. Based on these results, the writer concludes that Blyton's background influenced the historical aspects and social elements depicted in the three novels.
7

"101 MRI Brain Solutions, 1st Edition: A Book Review." Open Journal of Radiology and Medical Imaging, September 22, 2021, 49–50. http://dx.doi.org/10.36811/ojrmi.2021.110016.

Abstract:
This book is written by a group of neuroradiologists from an Indian university and published by Jaypee Brothers Medical Publishers (P) Ltd (see Figure 1). The first chapter discusses the physical principles of MRI, and the second chapter illustrates brain anatomy. Chapters 3 through 11 discuss different malformations and disorders that affect the brain, and chapters 12 and 13 are a glossary and a list of acronyms, respectively. The only colored pictures are in the plate; the rest of the book's illustrations have no color. Each chapter is written by one of the book's 14 neuroradiologist authors (3 editors and 11 contributors). The method used in chapters 3 through 11 is the presentation of a case as an example of a brain malformation or disorder on an MRI scan. The book uses Radioland's style in presenting the cases: each case has a case presentation and history, MRI scan pictures, MRI findings, comments and explanations, opinions, and a clinical discussion. As a "101" book containing only 101 cases, it is aimed at beginners and does not cover all MRI neuroradiology topics. The book has too many English language mistakes and shows a significant lack of proofreading, and some of the MRI images have poor image quality, although the book is printed on high-quality paper. Some chapters include CT scans to show the difference between MRI and CT in demonstrating certain abnormalities. The medical illustration is not of high quality; the artery of Percheron, for example, is illustrated poorly, and nobody should expect professional, Netter-style illustrations in this book. The book rarely uses arrows to indicate the location of the pathology, and it does not mention some of the famous neuroradiology signs.
In the subdural hematoma case, the authors present an acute-on-chronic hematoma; since the book is for beginners, it would be better to show an acute hematoma in one case, a chronic one in a second, and an acute-on-chronic one in a third. Some anatomical illustrations sit outside the chapter that discusses brain anatomy, such as the illustration of the cerebral venous system on page 137 (figure 28). On pages 139 to 144, the book presents three anatomical variations in the brain's vasculature (carotid-cavernous fistula, persistent occipital sinus, and fetal posterior cerebral artery), which would be better placed in a separate chapter on anatomical variations. Sometimes the book mentions a famous neuroradiology sign without providing any MRI image as a demonstration. It also describes findings that are invisible on the accompanying MRI images, writing "not seen in these images"; if a finding is not demonstrated for the readers, the authors should not list it among the findings but should instead include it in the comments and explanation section of the case. The book mentions too many things (radiological signs, findings, associated conditions, etc.) without showing figures for them; in the astrocytoma case, for example, it mentions astrocytoma's types without differentiating them on figures. In addition, the cases are not presented in an organized fashion: the book discusses a tumor affecting one region, then a tumor in a different region, then jumps back to the first region for another tumor.
The book should be organized so that, for example, the tumors of the cerebellopontine angle are discussed together, helping the reader collect and compare all the information about the tumors affecting that region before moving on to another. Some images are of low quality with too many artifacts, such as the diffusion image in the hypoglycemia case. The artifacts chapter discusses only three common types of artifacts: the Gibbs phenomenon, the zipper artifact, and the susceptibility artifact. The chapter could be improved by organizing the artifacts into patient-related, signal-processing-related, and machinery (hardware)-related artifacts. The artifacts chapter is placed in between clinical cases; it should be moved to the beginning or the end of the book. The last chapter collects different cases under "miscellaneous."
8

Al-Rawi, Ahmed, Carmen Celestini, Nicole Stewart, and Nathan Worku. "How Google Autocomplete Algorithms about Conspiracy Theorists Mislead the Public." M/C Journal 25, no. 1 (March 21, 2022). http://dx.doi.org/10.5204/mcj.2852.

Abstract:
Introduction: Google Autocomplete Algorithms

Despite recent attention to the impact of social media platforms on political discourse and public opinion, most people locate their news on search engines (Robertson et al.). When a user conducts a search, millions of outputs, in the form of videos, images, articles, and Websites, are sorted to present the most relevant search predictions. Google, the most dominant search engine in the world, expanded its search index in 2009 to include the autocomplete function, which provides suggestions for query inputs (Dörr and Stephan). Google's autocomplete function also allows users to "search smarter" by reducing typing time by 25 percent (Baker and Potts 189). Google's complex algorithm is influenced by factors like search history, location, and keyword searches (Karapapa and Borghi), and there are policies to ensure the autocomplete function does not contain harmful content. In 2017, Google implemented a feedback tool to allow human evaluators to assess the quality of search results; however, the algorithm still provides misleading results that frame far-right actors as neutral. In this article, we use reverse engineering to understand the nature of these algorithms in relation to the descriptive outcome, illustrating how autocomplete subtitles label conspiracists in three countries. According to Google, these "subtitles are generated automatically"; it further states that the "systems might determine that someone could be called an actor, director, or writer. Only one of these can appear as the subtitle" and that Google "cannot accept or create custom subtitles" (Google). We focused our attention on well-known conspiracy theorists because of their influence and audience outreach. In this article we argue that these subtitles are problematic because they can mislead the public and amplify extremist views. Google's autocomplete feature is misleading because it does not highlight what is publicly known about these actors.
The labels are neutral or positive but never negative, reflecting primary jobs and/or the actors' preferred descriptions. This is harmful to the public because Google's search rankings can influence a user's knowledge and information preferences through the search engine manipulation effect (Epstein and Robertson). Users' preferences and understanding of information can be manipulated based upon their trust in Google search results, allowing these labels to be widely accepted instead of providing a full picture of the harm these actors' ideologies and beliefs cause.

Algorithms That Mainstream Conspiracies

Search engines establish order and visibility to Web pages that operationalise and stabilise meaning to particular queries (Gillespie). Google's subtitles and black box operate as a complex algorithm for its search index and offer a mediated visibility to aspects of social and political life (Gillespie). Algorithms are designed to perform computational tasks through an operational sequence that computer systems must follow (Broussard), but they are also "invisible infrastructures" that Internet users consciously or unconsciously follow (Gran et al. 1779). The way algorithms rank, classify, sort, predict, and process data is political because it presents the world through a predetermined lens (Bucher 3) decided by proprietary knowledge – a "secret sauce" (O'Neil 29) – that is not disclosed to the general public (Christin). Technology titans, like Google, Facebook, and Amazon (Webb), rigorously protect and defend intellectual property for these algorithms, which are worth billions of dollars (O'Neil). As a result, algorithms are commonly defined as opaque, secret "black boxes" that conceal the decisions that are already made "behind corporate walls and layers of code" (Pasquale 899).
The opacity of algorithms is related to layers of intentional secrecy, technical illiteracy, the size of algorithmic systems, and the ability of machine learning algorithms to evolve and become unintelligible to humans, even to those trained in programming languages (Christin 898-899). The opaque nature of algorithms alongside the perceived neutrality of algorithmic systems is problematic. Search engines are increasingly normalised and this leads to a socialisation where suppositions are made that “these artifacts are credible and provide accurate information that is fundamentally depoliticized and neutral” (Noble 25). Google’s autocomplete and PageRank algorithms exist outside of the veil of neutrality. In 2015, Google’s photos app, which uses machine learning techniques to help users collect, search, and categorise images, labelled two black people as ‘gorillas’ (O’Neil). Safiya Noble illustrates how media and technology are rooted in systems of white supremacy, and how these long-standing social biases surface in algorithms, illustrating how racial and gendered inequities embed into algorithmic systems. Google actively fixes algorithmic biases with band-aid-like solutions, which means the errors remain inevitable constituents within the algorithms. Rising levels of automation correspond to a rising level of errors, which can lead to confusion and misdirection of the algorithms that people use to manage their lives (O’Neil). As a result, software, code, machine learning algorithms, and facial/voice recognition technologies are scrutinised for producing and reproducing prejudices (Gray) and promoting conspiracies – often described as algorithmic bias (Bucher). 
Algorithmic bias occurs because algorithms are trained by historical data already embedded with social biases (O’Neil), and if that is not problematic enough, algorithms like Google’s search engine also learn and replicate the behaviours of Internet users (Benjamin 93), including conspiracy theorists and their followers. Technological errors, algorithmic bias, and increasing automation are further complicated by the fact that Google’s Internet service uses “2 billion lines of code” – a magnitude that is difficult to keep track of, including for “the programmers who designed the algorithm” (Christin 899). Understanding this level of code is not critical to understanding algorithmic logics, but we must be aware of the inscriptions such algorithms afford (Krasmann). As algorithms become more ubiquitous it is urgent to “demand that systems that hold algorithms accountable become ubiquitous as well” (O’Neil 231). This is particularly important because algorithms play a critical role in “providing the conditions for participation in public life”; however, the majority of the public has a modest to nonexistent awareness of algorithms (Gran et al. 1791). Given the heavy reliance of Internet users on Google’s search engine, it is necessary for research to provide a glimpse into the black boxes that people use to extract information especially when it comes to searching for information about conspiracy theorists. Our study fills a major gap in research as it examines a sub-category of Google’s autocomplete algorithm that has not been empirically explored before. Unlike the standard autocomplete feature that is primarily programmed according to popular searches, we examine the subtitle feature that operates as a fixed label for popular conspiracists within Google’s algorithm. Our initial foray into our research revealed that this is not only an issue with conspiracists, but also occurs with terrorists, extremists, and mass murderers. 
Method

Using a reverse engineering approach (Bucher) from September to October 2021, we explored how Google's autocomplete feature assigns subtitles to widely known conspiracists. The conspiracists were not geographically limited, and we searched for those who reside in the United States, Canada, the United Kingdom, and various countries in Europe. Reverse engineering stems from Ashby's canonical text on cybernetics, in which he argues that black boxes are not a problem; the challenge is related to the way one can discern their contents. As Google's algorithms are not disclosed to the general public (Christin), we use this method as an extraction tool to understand how these algorithms apply subtitles (Eilam). To systematically document the search results, we took screenshots for every conspiracist we searched in an attempt to archive the Google autocomplete algorithm. By relying on previous literature, reports, and the figures' public statements, we identified and searched Google for 37 Western-based and influential conspiracy theorists. We initially experimented with other problematic figures, including terrorists, extremists, and mass murderers, to see whether Google applied a subtitle or not. Additionally, we examined whether subtitles were positive, neutral, or negative, and compared this valence to personality descriptions for each figure. Using the standard procedures of content analysis (Krippendorff), we focus on the manifest or explicit meaning of text to inform subtitle valence in terms of positive, negative, or neutral connotations. These manifest features refer to the "elements that are physically present and countable" (Gray and Densten 420), or what are known as the dictionary definitions of items. Using a manual query, we searched Google for subtitles ascribed to conspiracy theorists, and found the results were consistent across different countries. Searches were conducted on Firefox and Chrome and tested on an Android phone.
Regardless of language input or the country location established by a Virtual Private Network (VPN), the search terms remained stable, regardless of who conducted the search. The conspiracy theorists in our dataset cover a wide range of conspiracies, including historical figures like Nesta Webster and John Robison, who were foundational in Illuminati lore, as well as contemporary conspiracists such as Marjorie Taylor Greene and Alex Jones. Each individual's name was searched on Google with a VPN set to three countries.

Results and Discussion

This study examines Google's autocomplete feature associated with subtitles of conspiratorial actors. We first tested Google's subtitling system with known terrorists, convicted mass shooters, and controversial cult leaders like David Koresh. Garry et al. (154) argue that "while conspiracy theories may not have mass radicalising effects, they are extremely effective at leading to increased polarization within societies". We believe that the impact of neutral subtitling of conspiracists reflects the integral role conspiracies play in contemporary politics and right-wing extremism. The sample includes contemporary and historical conspiracists to establish consistency in labelling. For historical figures, the labels are less consequential and simply reflect the reality that Google's subtitles are primarily neutral. Of the 37 conspiracy theorists we searched (see Table 1 in the Appendix), seven (18.9%) do not have an associated subtitle, and the other 30 (81%) have distinctive subtitles, but none of them reflects the public knowledge of the individuals' harmful role in disseminating conspiracy theories. In the list, 16 (43.2%) are noted for their contribution to the arts, 4 are labelled as activists, 7 are associated with their professional affiliation or original jobs, 2 are tied to the journalism industry, one is linked to his sports career, another is labelled a researcher, and 7 have no subtitle.
The problem here is that when white nationalists or conspiracy theorists are not acknowledged as such in their subtitles, search engine users may encounter content that sways their understanding of society, politics, and culture. For example, a conspiracist like Alex Jones is labelled an "American Radio Host" (see Figure 1), despite losing two defamation lawsuits for declaring that the shooting at Sandy Hook Elementary School in Newtown, Connecticut, was a 'false flag' event. Jones's actions on his InfoWars media platforms led to parents of shooting victims being stalked and threatened. Another conspiracy theorist, Gavin McInnes, the creator of the far-right, neo-fascist Proud Boys organisation, a known terrorist entity in Canada and hate group in the United States, is listed simply as a "Canadian writer" (see Figure 1).

Fig. 1: Screenshots of Google's subtitles for Alex Jones and Gavin McInnes.

Although subtitles under an individual's name are not audio, video, or image content, the algorithms that create these subtitles are an invisible infrastructure that could cause harm through their uninterrogated status and pervasive presence. This could then be a potential conduit to media which could cause harm and develop distrust in electoral and civic processes, or in institutions altogether. Examples from our list include Brittany Pettibone, whose subtitle states that she is an "American writer" despite being one of the main propagators of the Pizzagate conspiracy, which led to Edgar Maddison Welch (whose subtitle is "Screenwriter") travelling from North Carolina to Washington D.C. to violently threaten and confront those who worked at Comet Ping Pong Pizzeria. The same misleading label can be found via searching for James O'Keefe of Project Veritas, who is positively labelled as "American activist". Veritas is known for releasing audio and video recordings that contain false information designed to discredit academic, political, and service organisations.
In one instance, a 2020 video released by O'Keefe accused Democrat Ilhan Omar's campaign of illegally collecting ballots. The same misleading labelling applies to Mike Lindell, whose Google subtitle is "CEO of My Pillow", as well as Sidney Powell, who is listed as an "American lawyer"; both are propagators of conspiracy theories relating to the 2020 presidential election. The subtitles attributed to conspiracists on Google do not acknowledge the widescale public awareness of the negative role these individuals play in spreading conspiracy theories or causing harm to others. Some of the selected conspiracists are well-known white nationalists, including Stefan Molyneux, who has been banned from social media platforms like Twitter, Twitch, Facebook, and YouTube for the promotion of scientific racism and eugenics; however, he is neutrally listed on Google as a "Canadian podcaster". In addition, Laura Loomer, who describes herself as a "proud Islamophobe", is listed by Google as an "Author". These subtitles can pose a threat by normalising individuals who spread conspiracy theories, sow dissension and distrust in institutions, and cause harm to minority groups and vulnerable individuals. Once one clicks on the selected person, the results, although influenced by the algorithm, do not provide information that aligns with the associated subtitle. The search results are skewed toward the actual conspiratorial nature of the individuals and associated news articles. In essence, the subtitles do not reflect the subsequent search results, and provide a counter-labelling to the reality of the resulting information provided to the user. Another significant example is Jerad Miller, who is listed as "American performer", despite the fact that he is the Las Vegas shooter who posted anti-government and white nationalist 3 Percenters memes on his social media (Sun Staff), even though the majority of search results connect him to the mass shooting he orchestrated in 2014.
The subtitle "performer" is certainly not the common characteristic that should be associated with Jerad Miller. Table 1 in the Appendix shows that individuals who are not within the contemporary milieu of conspiracists, but have had a significant impact, such as Nesta Webster, Robert Welch Junior, and John Robison, were listed by their original profession or sometimes without a subtitle. David Icke, infamous for his lizard people conspiracies, has a subtitle reflecting his past football career. In no case was Google's subtitle consistent with the actor's conspiratorial behaviour. Indeed, the neutral subtitles applied to conspiracists in our research may reflect some aspect of the individuals' previous careers but are not an accurate reflection of the individuals' publicly known role in propagating hate, which we argue is misleading to the public. For example, David Icke may be a former footballer, but the 4.7 million search results predominantly focus on his conspiracies, his public fora, and his status of being deplatformed by mainstream social media sites. The subtitles are not only neutral, they are also not based on the actual search results, and so are misleading in what the searcher will discover; most importantly, they do not provide a warning about the misinformation contained in the autocomplete subtitle. To conclude, algorithms automate the search engines that people use in the functions of everyday life, but they are also entangled in technological errors and algorithmic bias, and they have the capacity to mislead the public. Through a process of reverse engineering (Ashby; Bucher), we searched 37 conspiracy theorists to decode the Google autocomplete algorithms. We identified how the subtitles attributed to conspiracy theorists are neutral or positive, but never negative, which does not accurately reflect the widely known public conspiratorial discourse these individuals propagate on the Web.
This is problematic because the algorithms that determine these subtitles are invisible infrastructures acting to misinform the public and to mainstream conspiracies within larger social, cultural, and political structures. This study highlights the urgent need for Google to review the subtitles attributed to conspiracy theorists, terrorists, and mass murderers, to better inform the public about the negative nature of these actors, rather than always labelling them in neutral or positive ways.

Funding Acknowledgement

This project has been made possible in part by the Canadian Department of Heritage – the Digital Citizen Contribution program – under grant no. R529384. The title of the project is "Understanding hate groups' narratives and conspiracy theories in traditional and alternative social media".

References

Ashby, W. Ross. An Introduction to Cybernetics. Chapman & Hall, 1961.
Baker, Paul, and Amanda Potts. "'Why Do White People Have Thin Lips?' Google and the Perpetuation of Stereotypes via Auto-Complete Search Forms." Critical Discourse Studies 10.2 (2013): 187-204.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT P, 2018.
Bucher, Taina. If... Then: Algorithmic Power and Politics. OUP, 2018.
Christin, Angèle. "The Ethnographer and the Algorithm: Beyond the Black Box." Theory and Society 49.5 (2020): 897-918.
D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT P, 2020.
Dörr, Dieter, and Juliane Stephan. "The Google Autocomplete Function and the German General Right of Personality." Perspectives on Privacy. De Gruyter, 2014. 80-95.
Eilam, Eldad. Reversing: Secrets of Reverse Engineering. John Wiley & Sons, 2011.
Epstein, Robert, and Ronald E. Robertson. "The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections." Proceedings of the National Academy of Sciences 112.33 (2015): E4512-E4521.
Garry, Amanda, et al. "QAnon Conspiracy Theory: Examining its Evolution and Mechanisms of Radicalization." Journal for Deradicalization 26 (2021): 152-216.
Gillespie, Tarleton. "Algorithmically Recognizable: Santorum's Google Problem, and Google's Santorum Problem." Information, Communication & Society 20.1 (2017): 63-80.
Google. "Update your Google knowledge panel." 2022. 3 Jan. 2022 <https://support.google.com/knowledgepanel/answer/7534842?hl=en#zippy=%2Csubtitle>.
Gran, Anne-Britt, Peter Booth, and Taina Bucher. "To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?" Information, Communication & Society 24.12 (2021): 1779-1796.
Gray, Judy H., and Iain L. Densten. "Integrating Quantitative and Qualitative Analysis Using Latent and Manifest Variables." Quality and Quantity 32.4 (1998): 419-431.
Gray, Kishonna L. Intersectional Tech: Black Users in Digital Gaming. LSU P, 2020.
Karapapa, Stavroula, and Maurizio Borghi. "Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm." International Journal of Law and Information Technology 23.3 (2015): 261-289.
Krasmann, Susanne. "The Logic of the Surface: On the Epistemology of Algorithms in Times of Big Data." Information, Communication & Society 23.14 (2020): 2096-2109.
Krippendorff, Klaus. Content Analysis: An Introduction to Its Methodology. Sage, 2004.
Noble, Safiya Umoja. Algorithms of Oppression. New York UP, 2018.
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.
Pasquale, Frank. The Black Box Society. Harvard UP, 2015.
Robertson, Ronald E., David Lazer, and Christo Wilson. "Auditing the Personalization and Composition of Politically-Related Search Engine Results Pages." Proceedings of the 2018 World Wide Web Conference. 2018.
Staff, Sun. "A Look inside the Lives of Shooters Jerad Miller, Amanda Miller." Las Vegas Sun 9 June 2014. <https://lasvegassun.com/news/2014/jun/09/look/>.
Webb, Amy. The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Hachette UK, 2019.

Appendix

Table 1: The subtitles of conspiracy theorists on Google autocomplete (columns: Conspiracy Theorist | Google Autocomplete Subtitle | Character Description)

Alex Jones | American radio host | InfoWars founder, American far-right radio show host and conspiracy theorist. The SPLC describes Alex Jones as "the most prolific conspiracy theorist in contemporary America."
Barry Zwicker | Canadian journalist | Filmmaker who made a documentary that claimed fear was used to control the public after 9/11.
Bart Sibrel | American producer | Writer, producer, and director of work falsely claiming the Apollo moon landings between 1969 and 1972 were staged by NASA.
Ben Garrison | American cartoonist | Alt-right and QAnon political cartoonist.
Brittany Pettibone | American writer | Far-right political vlogger on YouTube and propagator of #pizzagate.
Cathy O'Brien | American author | Claims she was a victim of a government mind control project called Project Monarch.
Dan Bongino | American radio host | Stakeholder in Parler, radio host, ex-spy, conspiracist (Spygate, MAGA election fraud, etc.).
David Icke | Former footballer | Reptilian humanoid conspiracist.
David Wynn Miller | (No subtitle) | Conspiracist, far-right tax protester, and founder of the Sovereign Citizens Movement.
Jack Posobiec | American activist | Alt-right, alt-lite political activist, conspiracy theorist, and Internet troll. Editor of Human Events Daily.
James O'Keefe | American activist | Founder of Project Veritas, a far-right company that propagates disinformation and conspiracy theories.
John Robison | | Foundational Illuminati conspiracist.
Kevin Annett | Canadian writer | Former minister and writer, who wrote a book exposing the atrocities to Indigenous Communities, and now is a conspiracist and vlogger.
Laura Loomer | Author | Far-right, anti-Muslim conspiracy theorist and Internet personality. Republican nominee in Florida's 21st congressional district in 2020.
Marjorie Taylor Greene | United States Representative | Conspiracist, QAnon adherent, and U.S. representative for Georgia's 14th congressional district.
Mark Dice | American YouTuber | Right-wing conservative pundit and conspiracy theorist.
Mark Taylor | (No subtitle) | QAnon minister and self-proclaimed prophet of Donald Trump, the 45th U.S. President.
Michael Chossudovsky | Canadian economist | Professor emeritus at the University of Ottawa, founder of the Centre for Research on Globalization, and conspiracist.
Michael Cremo (Drutakarmā dāsa) | American researcher | Self-described Vedic creationist whose book, Forbidden Archeology, argues humans have lived on earth for millions of years.
Mike Lindell | CEO of My Pillow | Business owner and conspiracist.
Neil Patel | English entrepreneur | Founded The Daily Caller with Tucker Carlson.
Nesta Helen Webster | English author | Foundational Illuminati conspiracist.
Naomi Wolf | American author | Feminist turned conspiracist (ISIS, COVID-19, etc.).
Owen Benjamin | American comedian | Former actor/comedian turned conspiracist (Beartopia), banned from mainstream social media for using hate speech.
Pamela Geller | American activist | Conspiracist, anti-Islam blogger and host.
Paul Joseph Watson | British YouTuber | InfoWars co-host and host of the YouTube show PrisonPlanetLive.
QAnon Shaman (Jake Angeli) | American activist | Conspiracy theorist who participated in the 2021 attack on Capitol Hill.
Richard B. Spencer | (No subtitle) | American neo-Nazi, antisemitic conspiracy theorist, and white supremacist.
Rick Wiles | (No subtitle) | Minister; founded the conspiracy site TruNews.
Robert W. Welch Jr. | American businessman | Founded the John Birch Society.
Ronald Watkins (No subtitle) Founder of 8kun. Serge Monast Journalist Creator of Project Blue Beam conspiracy. Sidney Powell (No subtitle) One of former President Trump’s Lawyers, and renowned conspiracist regarding the 2020 Presidential election. Stanton T. Friedman Nuclear physicist Original civilian researcher of the 1947 Roswell UFO incident. Stefan Molyneux Canadian podcaster Irish-born, Canadian far-right white nationalist, podcaster, blogger, and banned YouTuber, who promotes conspiracy theories, scientific racism, eugenics, and racist views Tim LaHaye American author Founded the Council for National Policy, leader in the Moral Majority movement, and co-author of the Left Behind book series. Viva Frei (No subtitle) YouTuber/ Canadian Influencer, on the Far-Right and Covid conspiracy proponent. William Guy Carr Canadian author Illuminati/III World War Conspiracist Google searches conducted as of 9 October 2021.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Zipper artifact"

1

MARINI, FABRIZIO. "Content based no-reference image quality metrics." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/29794.

Full text
Abstract:
Images are playing a more and more important role in sharing, expressing, mining, and exchanging information in our daily lives. Now we can all easily capture and share images anywhere and anytime. Since digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission, and reproduction, it becomes necessary to assess image quality. In this thesis, starting from an organized overview of available image quality assessment methods, some original contributions in the framework of no-reference image quality metrics are described.
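To make the idea of a no-reference metric concrete, here is a minimal illustrative sketch (not a method from this thesis): the variance of a discrete Laplacian response is a classic reference-free sharpness cue, because blurring suppresses the high spatial frequencies that the Laplacian picks up.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of the 4-neighbour Laplacian response: higher means sharper.
    A simple no-reference cue -- no pristine original is needed."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

# Sanity check: a sharp random texture scores higher than its blurred copy.
rng = np.random.default_rng(42)
sharp = rng.random((64, 64))
# crude 3x3 box blur built from shifted copies (wrap-around borders)
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

Real no-reference metrics combine many such cues (blockiness, noise, naturalness statistics), but the single-cue version shows the shape of the problem: score quality from the distorted image alone.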
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Zipper artifact"

1

Lee, Christine U., and James F. Glockner. "Case 17.30." In Mayo Clinic Body MRI Case Review, edited by Christine U. Lee and James F. Glockner, 846. Oxford University Press, 2014. http://dx.doi.org/10.1093/med/9780199915705.003.0448.

Full text
Abstract:
19-year-old woman with inflammatory bowel disease and suspected perianal fistula. Axial postgadolinium 2D SPGR image (Figure 17.30.1) demonstrates a prominent artifact extending across the right side of the pelvis along the phase-encoding direction. Zipper artifact. Zipper artifacts most often occur as a result of a frequency leak—that is, extraneous RF radiation detected by the receiver coil. Notice that the line of artifact is off-center, indicating that the fundamental frequency of the leak is different from the Larmor frequency of the MRI system, which is what you would expect from an extraneous source....
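The mechanism this abstract describes can be sketched numerically (an illustrative simulation, not from the cited chapter): a narrowband RF leak at a fixed offset from the scanner's center frequency contaminates every readout, with a phase that is random from one phase-encode line to the next because the leak is not synchronized with the pulse sequence. Reconstruction then collapses the leak into a line at one frequency-encode position, smeared along the entire phase-encode direction.

```python
import numpy as np

N = 128                       # matrix size (phase-encode rows x frequency-encode columns)
rng = np.random.default_rng(0)

# Empty k-space (a blank image) so only the leak is visible.
kspace = np.zeros((N, N), dtype=complex)

# Spurious RF at a fixed frequency offset (in pixel units) from center;
# its phase is incoherent between excitations.
leak_offset = 20
t = np.arange(N)                                    # readout sample index
carrier = np.exp(2j * np.pi * leak_offset * t / N)  # leak vs. readout time
phases = rng.uniform(0, 2 * np.pi, size=(N, 1))     # random phase per phase-encode line
kspace += np.exp(1j * phases) * carrier

image = np.abs(np.fft.ifft2(kspace))

# The energy lands in a single column whose index is set by the leak's
# frequency offset, spread over all rows by the random phases: a "zipper"
# line running along the phase-encode direction, off-center because the
# leak frequency differs from the center (Larmor) frequency.
col = int(np.argmax((image ** 2).sum(axis=0)))
print(col, (N - leak_offset) % N)  # artifact column is set by the offset
```

Under numpy's ifft sign convention the line appears at column `(N - leak_offset) % N`; the point of the sketch is only that the column position tracks the leak's frequency offset while the random inter-excitation phase spreads it across every row.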
APA, Harvard, Vancouver, ISO, and other styles
