A Cognitive Semiotic Analysis of Language Use

Introduction

This dissertation examines the ways in which algorithmically automated linguistic constructions on Twitter confound the signifying order and meaning of political discourse, and how such arrangements influence the way humans use language to frame political issues during data-driven political campaigns. As society assimilates technology (e.g., computers, tablets, and mobile devices) into the banality of everyday routines, automated virtual environments such as Twitter will continue to mobilize new habits for communication, as well as fluctuating conventions for language use online. Such vacillating conditions are made possible by digital platforms, which circulate content in a non-linear, a-temporal fashion, involving an assemblage of human and non-human agency (Latour, 2007; Deleuze and Guattari, 1983; Lazzarato, 2014). This digital climate has transformed political discourse into a collaborative endeavour between human minds and digital environments (Caliskan et al., 2017; Sparrow et al., 2011; Crockett, 2017). Consequently, erratic fluctuations in electoral communication practices are likely to persist, particularly given the injection of complex automated content generated by social Twitter-bots (also known as chatbots), whose machine learning algorithms are designed to confuse and manipulate public opinion during political events (McCurrie and Falzon, 2017).

While early designs of Twitter-bots were motivated by the ambition to automate the posting of content online, contemporary Twitter-bots can execute a sweeping arsenal of complex interactions. Animated by deep (machine) learning algorithms and informed by artificial neural networks, Twitter-bots are scripted to interact with users in a variety of ways: simulating conversation, commenting on posts, and responding to queries (Hwang et al., 2012). Earlier models of AI relied on rules of logic and statistical techniques to calculate measures of probability, and were designed according to neural networking models that attempted to simulate the way neurons in the human brain work to generate new knowledge. However, this approach fell out of favour in 1969, when Marvin Minsky and Seymour Papert presented a detailed account of the limitations of the neural network model given the technology available at the time. For this reason, the neural network approach was abandoned in favour of expert systems throughout the 1980s and early 1990s, an approach that necessitated the manual programming of specific knowledge and sets of rules into the software. Expert systems involved the integration of networks, the creation of databases, and the development of information retrieval processes that could only perform narrow, specialized tasks (Hosanagar 2019: 90).
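To make the contrast concrete, the following minimal Python sketch illustrates the expert-systems style described above: every reply rule must be written by hand, and the bot can only respond to inputs its rules anticipate. The keywords and canned replies are invented for illustration and are not drawn from any of the systems cited here.

```python
# Minimal illustration of the hand-coded "expert system" style: each rule is
# programmed manually, so the bot can only handle cases its rules anticipate.
# Keywords and replies are invented for illustration.
RULES = {
    "polling station": "Polling stations are open from 7am to 8pm.",
    "turnout": "Turnout figures are published hourly by the electoral commission.",
}

def rule_based_reply(tweet_text: str) -> str:
    """Return a canned reply for the first matching keyword, if any."""
    text = tweet_text.lower()
    for keyword, canned_reply in RULES.items():
        if keyword in text:
            return canned_reply
    return "Sorry, I can only answer questions about polling stations and turnout."

print(rule_based_reply("Does anyone know when the polling station closes?"))
```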

During the Big Data revolution of the 2000s, the Internet began to supply the large datasets needed to increase the acuity of machine learning, and the development of progressively more efficient computer chips supplied the processing speed needed to computationally manage these large datasets. Machine learning is the computational method developed to work with large datasets, and its capacity extends well beyond the limitations of the expert systems approach (Hosanagar 2019). Fueled by deep artificial neural networks that adapt to new situations (Smith 1998), machine learning has improved significantly in efficacy over the last few decades. The artificial neural networks of machine learning algorithms “train” on large datasets to detect a multitude of patterns and generate a virtual “memory” of these patterns within several “hidden” layers of nodes (imitating biological neurons) nested between a visible input layer and an output layer (see Figure 1). Nodes belonging to the input layer are static and passive, and do not modify data, whereas the hidden and output layers are active and modify data according to detected patterns. The input layer may contain the output data of another algorithm (Smith 1998). With machine learning algorithms, programmers are not required to program rules that specify which patterns to detect, since the algorithm generates its own rules for operating with data as it develops a “virtual memory.” The output layer produces predictions about the patterns detected within the hidden layers (Hosanagar 2019: 93). The value of deep learning algorithms is not simply that they can accommodate large amounts of data, but that they become increasingly adept at detecting subtle patterns as the volume of data within the dataset increases (Hosanagar 2019: 95).

Figure 1: Schematic of an artificial neural network with input, hidden, and output layers (source: https://www.dspguide.com/ch26/2.htm)
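For readers unfamiliar with this architecture, the following self-contained NumPy sketch shows the layered structure just described: a passive input layer, a hidden layer of nodes whose weights adapt during training, and an output layer that produces predictions. The toy XOR dataset, layer sizes, and learning rate are illustrative choices, not details taken from the sources cited in this section.

```python
# A minimal feedforward network: input layer -> hidden layer -> output layer.
# The network "trains" its own internal rules from the data rather than being
# given hand-coded rules by a programmer.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs and target outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights connecting input -> hidden and hidden -> output layers.
W1 = rng.normal(scale=0.5, size=(2, 8))   # 2 input nodes, 8 hidden nodes
W2 = rng.normal(scale=0.5, size=(8, 1))   # 8 hidden nodes, 1 output node

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the input layer passes data through unchanged; the hidden
    # and output layers transform it.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights to reduce prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(output, 2))   # predictions approach [0, 1, 1, 0] as training proceeds
```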

Evidence that machine learning social bots have infiltrated social media platforms like Twitter is mounting (Varol et al., 2017a; Howard et al., 2017; Ferrara et al., 2016a; Aiello et al., 2012). Some Twitter-bots are scripted to expand their influence by soliciting additional followers, connecting with influential Twitter users, and generating activity within trending Twitter discourse. Others are animated by natural language processing algorithms, producing newsworthy content using common keywords and supplementary information from other online sources (Ferrara et al., 2016). While outsourcing supplementary material, and with the support of artificial neural networks, AI encodes virtual representations of the world into a mediated “meaning space,” automating the generation of content using natural language algorithms (Mugan 2018). Artificial neural networks give Twitter-bots the capacity to interact directly with human users by encoding the representations embedded in the linguistic constructions of human Twitter users within a virtual meaning space, and then mapping points within that meaning space onto a viable linguistic response (Mugan 2018: https://blog.usejournal.com/generating-natural-language-text-with-neural-networks-e983bb48caad).

Figure 2: Schematic of neural text generation (source: https://blog.usejournal.com/generating-natural-language-text-with-neural-networks-e983bb48caad)
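A toy sketch of the “meaning space” mechanism described above follows: incoming tweets are encoded as vectors, and the bot returns the candidate reply that sits closest to the tweet in that space. Real Twitter-bots rely on learned neural embeddings and generative models; the bag-of-words encoder and the candidate replies below are illustrative stand-ins rather than details from the sources cited here.

```python
# Toy illustration of mapping points in a "meaning space" onto a reply: tweets
# are encoded as bag-of-words vectors, and the closest candidate reply (by
# cosine similarity) is returned. Encoder and replies are invented stand-ins.
import re
import numpy as np

CANDIDATE_REPLIES = [
    "The polls show a very tight race in the swing states.",
    "Turnout among young voters is breaking records this year.",
    "The debate focused almost entirely on the economy.",
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

VOCAB = sorted({w for reply in CANDIDATE_REPLIES for w in tokenize(reply)})

def encode(text: str) -> np.ndarray:
    """Map text to a point in a simple bag-of-words 'meaning space'."""
    words = tokenize(text)
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def reply_to(tweet: str) -> str:
    """Return the candidate reply whose encoding lies closest to the tweet's."""
    query = encode(tweet)
    return max(CANDIDATE_REPLIES, key=lambda r: cosine(query, encode(r)))

print(reply_to("Anyone else watch the debate? It was all about the economy."))
# -> "The debate focused almost entirely on the economy."
```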

Within the ecology of linguistic Twitter data, political bots inundate newsworthy discourse during political crises, elections, and inter-party conflicts (Woolley 2016). Tweets that gain popularity as newsworthy content tend to contain linguistic data representing unanticipated, controversial issues with potentially negative consequences for the public at large (Ruhrmann and Göbbel, 2007; Rudat et al., 2014; Rudat and Buder, 2015). Such tweets are also the ones most likely to be retweeted (Kwak et al., 2010; Zarrella, 2009; Nagarajan et al., 2010; Bruns et al., 2012; Naveed et al., 2011), retweeting being a robust index of the initial tweet’s informational value, amplified by the use of hashtags (Enge, 2014; Suh et al., 2010; Naveed et al., 2011; Stieglitz and Dang-Xuan, 2013). A study by information scientists Bessi and Ferrara finds that the network embeddedness of bots can negatively impact democratic political discourse (Bessi and Ferrara, 2016). While these findings usefully characterize existing trends in the flow of Twitter data within the platform’s discourse networks, there is a paucity of research into how the circulation of bot-produced Twitter content impacts human cognition. Given that Twitter is replete with human- and bot-generated linguistic constructions expressing political opinions, sentiments, and positions, it is an ideal microblogging platform for cognitive semiotic and cognitive linguistic research into the influence of Twitter-bots.

Branigan et al. (2003) demonstrate that when interlocutors participate in discourse through the mediation of computer interfaces, they are likely to duplicate the syntactic form of preceding linguistic constructions within a thread of discourse, regardless of whether those constructions were generated by another human or pre-scripted by a computer. It is hypothesized that the human mind’s proclivity for mimesis renders it susceptible to collaborative cognition with digital media, and prone to interactive alignment with bot-generated content that simulates human-generated political speech acts within online environments.
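One way such structural duplication could be operationalized is sketched below: consecutive tweets are reduced to their part-of-speech sequences and compared. This assumes NLTK with its tokenizer and POS-tagger resources installed; the example tweets are invented, and this is one possible measure of surface-syntactic overlap rather than the procedure used by Branigan et al.

```python
# Quantifying syntactic alignment between consecutive tweets by comparing
# their part-of-speech tag sequences. Requires NLTK with the 'punkt' tokenizer
# and the perceptron POS-tagger data downloaded; example tweets are invented.
from difflib import SequenceMatcher
import nltk

def pos_sequence(text: str) -> list[str]:
    """Reduce a tweet to its part-of-speech tag sequence (its surface syntax)."""
    return [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]

def syntactic_overlap(prior_tweet: str, reply_tweet: str) -> float:
    """Similarity (0-1) between the POS-tag sequences of two tweets."""
    return SequenceMatcher(None, pos_sequence(prior_tweet),
                           pos_sequence(reply_tweet)).ratio()

prior = "The senator handed the reporter a statement."
reply = "The candidate showed the audience a chart."
# Same double-object construction, different words: overlap should be high.
print(syntactic_overlap(prior, reply))
```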

How effective are Twitter-bots at influencing human thinking at the linguistic level?

How do algorithmic models exploit linguistic constructions online to compose novel and “meaningful” micro-texts (Veale 2016: 1)?

Due to the highly linguistic tenor of Twitter, language use should be an integral component of cognitive semiotic empirical research within digitally mediated environments. Social media environments have become significant sites of conflict where competition for communication space is enacted among various actors, with considerable impact (Goolsby 2013). Indeed, it has been argued that the circulation of capital is now dependent upon the flow of language through the proprietary platforms of the Internet.
