This project is dedicated to my paternal grandfather, Thomas O’Connell, and my maternal
grandmother, Beatriz Powers. Both of you are no longer here, but you live forever in my ambition
to change the world with knowledge. Through you and your children, my loving parents, I have
learned the impact of investigating that which you do not know, and the strength of advocating
for what is right and good. I hope to continue your legacies with grace and love.
HYBRID SYSTEMS AND HYBRID GENRES:
EXPLORING U.S. POLITICAL PODCAST
FRAMING TACTICS AND EFFECTS
BY
Emily O’Connell
ABSTRACT
Over the past five years, the medium of podcasting has continued to gain traction and
increase in popularity both in the US and around the world. Millions of Americans regularly
listen to podcasts and interact with numerous genres and content styles in the process. One particularly powerful and popular genre within this world is political news podcasts. Political
news podcasts range in their ideological leaning and discussed topics, but they often blend
informative news with some kind of softer stylistic or tonal elements, qualifying many of them as
political infotainment that is similar to legacy media forms like conservative talk radio and late-night talk shows. Scholars have long debated the merits and detriments of such softer news
stylings, and in political spaces especially, this emerging hybridity of genre has potentially
persuasive and polarizing consequences. To better understand those dynamics, this dissertation
draws on theories of the hybrid media system, framing, persuasion and more. Ultimately, this
research is an attempt to better identify which tactics of political news framing and agenda
building in the podcast sphere are emerging as commonly held practice, and what some of their
effects may be on political attitude and political knowledge for listeners. Research was
conducted in two phases. The first involved a qualitative content analysis of rhetorical tactics
used in popular US political podcasts. The second phase consisted of an online survey
experiment (N=1,541) where two of the most popular frame groups were chosen for further
study: humor and opinion. Findings from this research indicate that infotainment frames can
have significant effects on both political attitude and political knowledge, and that, when
compared to hard news, these infotainment frames are more persuasive and slightly more
informative. Further discussion and suggestions for future studies are also included.
ACKNOWLEDGMENTS
The completion of this dissertation would not have been possible without the help of so
many people, the most important being my family and friends. To Kara, Jack, Mom, Dad, Lizzie
and Liv, you truly went above and beyond and I appreciate you endlessly. For three long years,
you supported me and listened to me as I worked my way through this process. You cheered me
on at every high and kept me strong when I felt low. I have not always been the most positive or
fully functioning Emily in this time, but you didn’t care, and you didn’t complain. Your love and
patience have not gone unnoticed. I know that you were truly with me in this, and I thank you
for that. This work could only happen because you were with me all the way.
I would also like to thank my committee, who have pushed me towards valuable learning
and bettered understanding. You have nudged me towards new knowledge and enabled me to
grow as a scholar and I am very grateful. To my advisor, Ericka Menchen-Trevino, you have
helped me at all moments, and you have gone above and beyond your role as teacher and mentor.
I hope that in my future I can pay your kindness forward. You exemplify what it means to be an
educator and a researcher of integrity and purpose, and I feel lucky to have had you as a chair for
my committee. I’ll never be able to thank you enough, but I appreciate all you have done for me.
Finally, a special thank you to the other students in my cohort: we have all been on a
journey together, and it was not always easy, but knowing I was in this with you all made a
world of difference. I sincerely thank you all for being exactly who you are. Thank you for
growing with me and helping me whenever you could, for venting with me but also looking on
the bright side, for letting me feed you roughly 1,000 baked goods made with sugar, butter, flour,
and stress, and for showing friendship at all bends in the road. I am glad to know you all and to
have walked this road with you.
TABLE OF CONTENTS
ABSTRACT
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF ILLUSTRATIONS
LIST OF ABBREVIATIONS
CHAPTER 1 INTRODUCTION
    A Hybrid Medium: Podcasts as Product, Culture, and Platform
    A Hybrid Genre: Infotainment and the Crisis of Objectivity and Seriousness
    Statement of the Problem and Purpose of the Study
    Hybrid Methods: Content Analysis and Experimentation
    Significance of the Study
    Limitations of the Study
    Organization of the Study
CHAPTER 2 HYBRID MEDIUMS, SYSTEMS, AND GENRES: DEFINING POLITICAL PODCASTS IN AN INCREASINGLY DIGITAL NEWS LANDSCAPE
    Radio, Podcasts, and the Legacy of Political Audio
    The Hybrid Media System and New Media Effects
        Social Media: The Megaphone Manager and Master of Media
        Effects of the Hybrid System: Reinforcement, Polarization, and Increased Partisanship
    A Hybrid Profession: Journalism Practice in a Changing World
        The Emerging Digital Press
        Objectivity Norms
        Seriousness Norms
    Framing the Narrative: Political Framing Tactics in the Digital Age
    The Impact of Soft News and Infotainment Frames
        Humor Framing
        Opinion Framing
    A Partisan Population: Attitude Strength as a Measure of Polarization
    Frames of Understanding: Political Knowledge and Recall
    Summary and Research Questions
CHAPTER 3 METHODS
    Phase One: Content Analysis
        Podcast Selection
        Qualitative Content Analysis
        Developing the Coding Scheme
    Phase Two: Online Survey Experiment
        Creating Podcast Stories
        Preregistration
        Sampling and Blinding Procedures
        Survey Design
        Variables
        Measured Variables
    Summary
CHAPTER 4 CONTENT ANALYSIS FINDINGS
    Descriptive Analysis
        Global News Podcast
        The Ben Shapiro Show
        Political Gabfest
        Pod Save America
    Thematic Analysis
        Role of the Media
        Trump: Eccentric or Adversarial
        The State of American Government and Political Life
    Phase One Summary
CHAPTER 5 SURVEY-EXPERIMENT FINDINGS
    Sample Description
    Political Attitude
        Humor Framing
        Opinion Framing
        Political Attitude Findings Summary
    Political Knowledge
        Humor Framing
        Opinion Framing
    Phase Two Summary
CHAPTER 6 DISCUSSION AND CONCLUSIONS
    Discussion
        Implications for Scholars and Research
    Limitations
    Future Studies and Practical Prescriptions
        Recommendations for Journalists
        Notes for Infotainment Creators
        Notes for Political Practitioners
        Prescriptions for Citizens
APPENDIX A COMPREHENSIVE CODEBOOK
APPENDIX B EXPERIMENT DESIGN AND FORMAT
APPENDIX C PODCAST CLIP SCRIPTS
APPENDIX D TOP LINE DATA
APPENDIX E SURVEY EXPERIMENT DATA: ANOVA TABLES AND OUTPUTS
REFERENCES
LIST OF TABLES
Table 1. Podcast Audience Reach and Host Social Media Presence
Table 2. Pilot Study Identified Codes and Themes
Table 3. Hard News Framing Tactics and Story Styles
Table 4. Topic Codes
Table 5. Tactic Codebook
Table 6. Topic Code Frequencies Across Podcasts
Table 7. Code Category Instances and Percentages Across Podcasts
Table 8. Code Frequency for the Global News Podcast
Table 9. Code Frequency for Infotainment Podcasts
Table 10. Top Words Across Podcasts
Table 11. Media Coverage Criticism & Endorsement
Table 12. Republican and Democrat Assessments
Table 13. Support Means in Pre-test and Post-test
Table 14. Support Means in Pre-test and Post-test
Table 15. Support Means in Pre-test and Post-test
Table 16. ANOVA Results for Political Attitude (Ideological Interaction)
Table 17. Political Knowledge Scores (Right Answers for 3 Story-Based Questions)
Table 18. Political Knowledge ANOVA (with Ideological Interactions)
Table 19. Political Knowledge Scores (Right Answers for 3 Story-Based Questions)
Table 20. Political Knowledge Scores (Right Answers for 3 Story-Based Questions)
Table 21. Political Knowledge Percentages (President, Senate, Spending Questions)
Table 22. Descriptive ANOVA Results for Attitude Strength and Knowledge Scores
Table 23. Political Party Support Scores Ranked from 1-10 (10 Being Most Support)
Table 24. Podcast Listening Habits
Table 25. Repeated Measures ANOVA (Ideological Interaction Political Attitude)
Table 26. Multivariate Tests for Political Attitude
Table 27. Multiple Comparisons of Treatment Groups for Political Attitude
Table 28. Multiple Comparisons of Condition Groups for Political Attitude
Table 29. Political Knowledge Percentages (3 Story-Based Questions)
Table 30. Political Knowledge Scores (Right Answers for 3 Story-Based Questions)
Table 31. Multiple Comparisons of Treatment Groups for Political Knowledge Scores
Table 32. 2x2 Analysis Political Attitude with General Knowledge Covariate
LIST OF ILLUSTRATIONS
Figure 1. Global News Podcast Tree Graph
Figure 2. Ben Shapiro Show Tree Graph
Figure 3. Political Gabfest Tree Graph
Figure 4. Pod Save America Tree Map
Figure 5. News Frequency Graph
Figure 6. Education Level Graph
LIST OF ABBREVIATIONS
CTR Conservative Talk Radio
BSS The Ben Shapiro Show
PGF Slate’s Political Gabfest
PSA Pod Save America
BBC The BBC’s Global News Podcast
USMPA Universal School Meals Program Act
CHAPTER 1
INTRODUCTION
In the fall of 2015, as the presidential election in the United States continued to unfold,
the media was dominated by coverage of ‘unprecedented’ political tactics and an almost circus-like display of politicking (Patterson, 2016). The rhetoric and tone, the breakaway from norms,
and the shift in public favor and perceived power dominated headlines while overwhelming
Americans (Wells et al., 2016). There was so much content coming at a constant pace, a never-
ending loop of coverage, an excess of breaking news and punditry (Bode, 2020). As such,
Americans grappled with whether to tune in and stay informed or to ignore the increasingly hectic spectacle that was the public sphere.
This uncertain time was when I personally began listening to political podcasts, a
segment of a larger culture that was still coming into its own. After seeing a number of different
creators circulating on Twitter and Facebook, I settled on a show and began streaming. From the
start, this chosen podcast felt like a helpful and enjoyable way to engage with political news and
to make sense of what was happening in such a charged election. The audio foundations of the
medium allowed for thorough listening and multi-tasking. I could drive somewhere, clean
something, or work on mindless projects and also listen to these conversational debates. In that
time, I felt I was learning things, that I was staying apprised of the issues of the day, and that I
was being entertained by hosts who really knew the topics, but also knew when to blend more
entertaining elements from the world outside of the political sphere. I did not have to be a part of
the larger political media loop all the time. I could engage just enough while staying above the
fray. All the while, I took comfort in listening to media that reflected my preexisting worldview.
It felt like I was part of a community of like-minded thinkers, confirming beliefs I held dear each
and every week, and I loved the fresh and exciting experience that managed to pull from audio
cultures established long ago.
Over the next five years, political podcasting only grew more popular, and many series
that launched in and around this time grew substantially. Powerhouses on the right (The Ben
Shapiro Show) and the left (Pod Save America) emerged as figureheads of the genre, drawing in
millions of listeners each episode and inspiring turnout in online and offline spaces (Zengerler,
November 2017). At the same time, a culture began forming in podcasting spheres where people
could engage with issues of political life in ways they found new and different and not so
overwhelming (Williams, 2020; Evans, 2020). These podcasts sparked extended discussion
amongst listeners and critics online, and the creators hosted live shows and events, growing their
fanbases. They advocated for different forms of civic engagement like calling senators or
attending marches and protests. They began to blend with older, and some would say more credible, sources of information, appearing on televised talk shows and growing their audiences by
advertising through other broadcast means. Some hosts landed radio contracts, others scored
television deals, but at the heart of their brands and their mission were the podcasts, the specially
crafted content that started it all.
Podcasts beyond the political genre also saw similar gains and evolution, with popular
shows across the podcast marketplace inspiring even more fans and bridging to older mediums in
unexpected ways (Berry, 2015; Markman, 2015; McHugh, 2016). However, while podcasts on
the whole have demonstrated their increased popularity and cultural pull, this research
acknowledges the still fluid definition of what podcasting is and what this medium offers
listeners and the larger society. Furthermore, this dissertation hinges on the belief that the
meaning making that takes place in podcasting cultures has an added layer of impact when
podcasts deal with politics and the democratic process. This layered distinction will be discussed
further throughout the study, but it rests in part on the fact that political podcasts appear to be
mostly mechanisms of political messaging. Persuasion is something built into the fabric of many
podcasts in this genre, and many of the surveyed shows this dissertation encountered both
critically and peripherally have chosen to align with a particular political party, or, at the very
least, a specific ideology. They express to their audiences that this side deserves more support
and acceptance than other sides do, aiming to convince listeners that their opinions or world view
are correct.
This being the case, a major driver of this research is the desire to chronicle how the
arguments and stories within these political podcasts are being framed and crafted: Are
storytelling tactics similar, despite the party affiliation or political beliefs of the hosts? Are
tactics radically different from other infotainment sources on television or on radio? Are they
representative of journalistic norms of objectivity and seriousness, or completely defined by
softer news formats? But more than a simple description of what is happening, this research also
aims to know if the storytelling tactics used by these podcasts may produce any tangible effects
on their audiences. Again more questions emerge: Are political podcasts making a tangible
difference in their audience’s political life? Is there something distinct to podcasts that has the
power to change minds or strengthen beliefs? Is there something about political infotainment
tactics in podcasts that could produce a marked difference compared to hard news shows? In
turn, by looking at this emerging style of media and assessing both its composition and its
potential effects on audiences, this work ultimately seeks to expand academic understanding of
podcasts as a medium, develop understanding of political infotainment as a genre of podcasting,
and provide insights into how infotainment framing tactics shape audience opinions and
knowledge bases.
A Hybrid Medium: Podcasts as Product, Culture, and Platform
Engaging with the topics stated above requires concrete definitions on a number of fronts. The most notable concerns the overarching concept of podcasting. Since their launch in the early 2000s, podcasts have structurally been understood as a digital means of sharing MP3
audio files (Potter, 2006). Podcasts are audio created for internet circulation and distributed
either through a subscription mechanism, downloading, or embedded opt-in listening (Llinares,
Fox, and Berry, 2018). For many recurring listeners of podcasts, shows and podcasts they enjoy
are selected through a platform of their choosing (like iTunes, Spotify, or Stitcher) and
subscribed to so that each episode is automatically downloaded to their accounts when posted.
This enables them to opt-in to their listening on their own time and in their own way. Podcasts
can also be downloaded or streamed from these spaces without subscribing to a whole show with
recurring installments if audiences only want to listen to a selected episode. Lastly, some
audience members encounter podcasts because a podcast episode or selected clips have been
embedded in some other format. Examples include the inclusion of podcast clips in digital print media articles, or promotional snippets cut into smaller audio or video files that then circulate on platforms like Twitter and Facebook.
Recent works like compilations from Llinares, Fox, and Berry (2018) expand this simple
technological definition of podcasts, articulating how podcasting is more than just a distribution
of audio files, but rather a coalescing of platform, practice, and culture. For podcasting, the
technology behind the process is only part of the story, and the often-cited connection between
podcasts and radio detracts from the newness of the medium. While connections with radio are
clear, given the aural basis of both radio and podcasting, the basis of podcasting comes “from a
desire to circumvent the mediated practices of the radio station and to deliver independent
content directly to listeners… [however] rather than seeking to remediate, refashion or replace
radio, podcasting pays homage to it” (Llinares, Fox and Berry, 2018, p. 5). Two crucial building
blocks of understanding stem from this simple analysis. The first is that podcasting possesses
different technological affordances from older mediums like radio, because it prioritizes
independent, asynchronistic content delivery and facilitates a more immediate creator-listener
connection. The second is that podcasting is a kind of piecemeal medium of the moment,
bridging together parts of radio, television, and aural storytelling traditions, while moving
forward in a digital world that has different practices, expectations, and contexts. These two
elements, as well as discussion about the unique cultures and practices surrounding podcasts,
will be discussed at length in chapter 2.
A Hybrid Genre: Infotainment and the Crisis of Objectivity and Seriousness
While a blending of norms, cultures, and practices is a defining part of this study’s
connection to podcasts, the category of podcasts chosen for examination also represents a hybrid
mix of storytelling styles. Political infotainment, the genre of podcasts studied at length in this
report, blends the facts and figures of hard-hitting news work with the opinions and entertaining
tangents of podcasters creating this content. Throughout the study and practice of journalism in
America, there has been a continued and unresolved debate about what qualifies as news and
what makes news worthy of being covered and conveyed to the larger public. Political
infotainment podcasts are not the first example of a softening news style or informal tone and
tactics. Previous iterations have long been popular with the American public both on
television (late night and daytime talk shows), and on radio (political talk radio). But the genre of
infotainment sharply contrasts with historical norms of hard-hitting journalism in numerous
ways. Two of the most obvious are the lack of objectivity and perceived seriousness found in
political infotainment sources.
For many years, news reporters have seen themselves as documenters of serious news
stories that are usually told with limited bias (Schudson, 1999). In this normative understanding
of journalism, the relaying of stories and facts is meant to be done with as little influence from
source to reader as possible. Hard-hitting journalists report what has happened, and in turn
audiences are informed but largely left to make their own assessments of a given story or piece.
However, the selection of what counts as hard-hitting, and thus good quality news is also based
on normative ideals of what is important (North, 2016). Over the years, news coverage has
largely favored those who are already empowered in society, resulting in a mostly male-driven
dialogue and one that has continuously prioritized white, well-to-do, institutionally powerful
voices (Allan, 2010). In this world of news, favored story-topics that are considered serious and
of the utmost importance (including but not limited to stories about politics, business, and
foreign affairs), have long been referred to as ‘hard news’ stories (Tuchman, 1972).
Defining news as ‘hard’ also implies that news can be softened in some ways, and for
years scholars have debated this question of hardness and softness. Originally, hard news was
understood to be the news that ‘mattered’ and that had more integrity and importance, whereas
soft news was a label for stories that were ‘othered’ in some way (Tuchman, 1972). These softer
stories were considered to be surface level and less serious, and were often seen as more
feminine, more fixated on race or ethnicity, or covering subjects that did not directly apply to the
lives of middle and upper-class white men (Allan, 2010). Over time, however, the concept of soft
news versus hard news has begun to shift. Though many gendered and racial tensions still exist
in the dynamic between hard and soft news, the terms have come to more readily reflect the
storytelling techniques being used in a given news story as opposed to just the topics being
covered (Otto, Glogger and Boukes, 2017). The terms also refer to the shelf-life of said news. Breaking
news that is current and of the moment is considered by many to be harder hitting, while
evergreen news stories, which can be told at any time, are said to have a degree of built-in softness (Boczkowski, 2009).
In today’s modern understanding, soft news can be broadly defined as news that
compromises on the extensive and rigorous standards of journalism practice and production
(Baum, 2002; Boczkowski, 2009). Instead of adhering to mid-20th-century normative standards
for fact-driven reporting, like those of objectivity and seriousness, soft news favors rhetorical
tactics and storytelling mechanisms that can increase audience attention and interest at the
expense of strict unbiased relaying of information (Otto, Glogger and Boukes, 2017). Soft news
has become so commonplace in the news marketplace that many people are using softer
news content as their primary, and sometimes only, source of news. These audiences are often
drawn in by the more casual style and tone, or more overt allegiance to one way of seeing the
world over another. Thus, the continued prevalence of soft news in today’s media ecosystem can
in many ways be attributed to market pressures that push media outlets to bring in the largest audiences possible and increase advertising revenue (Nerone, 1995; Patterson, 2000). Given
this drive to increase audiences, media outlets have come up with numerous iterations of soft
news that vary from each other in a multitude of ways. One of these iterations is political
infotainment.
Despite academic study of the term infotainment emerging as early as the 1980s, scholars
have continued to debate the precise definition of the genre for decades (Melinescu, 2015). Many
have offered foundational elements for infotainment, including how it blends entertainment and
information, and how it elevates informality of tone and style, making it more conversational and
accessible to the public (Thussu, 2007). Scholars have also linked infotainment to politics, noting
that a large chunk of infotainment news interacts with politics in some way, and that it often is
overtly political (Brants and Neijen, 1995; Stockwell, 2004; Moy, Xenos and Hess, 2005). Yet in
terms of a concrete concept, infotainment has remained a broad term without significant
boundaries for some time. Only recently have scholars like Otto, Glogger and Boukes (2017)
identified more tangible characterizations of the term, establishing infotainment above all as a
genre of content that is defined by its dueling natures of informing and entertaining (p. 145). To
qualify as infotainment, a program or content piece must prioritize these two elements equally,
providing some kind of facts and information while also sparking enjoyment or gratification
through entertainment means.
As mentioned before, past study of infotainment has largely found its footing in
television media, and previous examination of the genre has mostly looked at late-night talk
shows (Baum, 2002; Moy, Xenos, and Hess, 2005; Sharma, 2008; Taniguchi, 2011). In radio,
extensive studies on ‘softer-talk’ or ‘chat’ radio have established a comparable political
infotainment source in audio mediums (Barker and Knight, 2000; Hall and Cappella, 2002;
Keith, 2008; Ames, 2016). Some scholars, like Mort (2012), have actually drawn the direct
connection between the enduringly popular genre of Conservative Talk Radio (CTR) and
infotainment: “While the content of CTR programs is essentially news and current political
events, such programs do not belong to the category of hard news formats…Because it blends
elements of hard news with humor, derision and sarcasm, it belongs to the ‘infotainment’ genre
but [specifically] to the category of infotainment programs in which the political is ‘primary’ and
‘explicit,’” (p. 103). But while research has done well to document and study infotainment in
television and in radio, there is still much more to learn about this genre’s new iterations that
have emerged in the digital age, like the political infotainment podcasts at the heart of this study.
Statement of the Problem and Purpose of the Study
Despite a growing literature on podcasts, and a robust literature on soft news and
infotainment, academic study in both of these fields still needs to expand. Over the past few
decades, audience interactions with news have continued to change and shift, with more
Americans moving to online information sources and so-called less credible outlets (Ingraham,
September 2018). Late night talk shows, which use humor and opinion to recap news stories,
have remained popular in the US, and since the 2016 presidential election, they have increased
their coverage of politics substantially (Stephens, 2018). Meanwhile, political commentary from
these already existing shows and other new infotainment sources continue to circulate on
platforms like YouTube, Facebook, and Twitter (Bowyer, Kahne, and Middaugh, 2017; Davis,
Love and Killen, 2018). At the same time, new voices in infotainment are coming online each
day, and they are using the internet as a means of distribution and also a space with new
communicative technology to reach their audiences.
Podcasting is one of these new mediums that attracts large audiences, and as of 2020,
experts estimate that nearly 100 million Americans (37% of the population) listen to podcasts on
a monthly basis (Winn, April 2020). Further, a large segment of podcasting is actually dedicated
to politics and news in some way. In 2019 alone, 10 of the top 30 most listened to podcasts on
iTunes were news podcasts that regularly deal with politics, and 5 of those 10 could be defined
as infotainment, or, at the very least, partisan news sources (Mathson, May 2019). So far there
has not been any research conducted on these kinds of shows in the podcasting world. It is thus
important, not only to the academy, but also to political practitioners, journalists, and members
of the public, that society better understand this content style, its reflection of podcasting culture
and practice, and its effects, especially in the current political news sphere.
In doing this, any further research about political infotainment podcasts needs not only to
build on the models of the past, but also to incorporate relevant models of media consumption
today. Prior studies regarding political infotainment have been useful and informative, but they
have not often considered the current dynamics of a high-choice media environment and how
this environment influences both the production and the effects of political infotainment (Prior,
2005). Currently, audiences have a multitude of outlets and content creators to choose from in
their day-to-day lives, and increasingly social scientists are finding that people are prone to
engaging with content that mirrors sources they have already interacted with and enjoyed (Slater,
2007). This pattern of consumption, and this reinforcement of content styles, tone, and opinions
can then lead to increased polarization (Iyengar and Westwood, 2015; Bail et al, 2018). The
result is audiences that are farther apart not just in taste and viewpoint, but even in their basic
understanding of what is fact and what is fiction (Prior, 2005).
Theories of the hybrid media system, especially, help to inform this larger needed basis
of understanding, and give context to the increased polarization happening in the political realm
and to the changing ways that audiences consume content and news (Chadwick, 2013;
Chadwick, 2017). The hybrid media system paradigm notes the tangible effects of the internet
age on media creation and circulation, highlighting the blending of new and old media and how
hybridity is changing certain elements of public news consumption. The theory, and applied
studies drawing on this paradigm, also track the increasing evidence that people are being pulled
towards more polarized political beliefs and ideologies in America and around the world
(Chadwick, 2013; Chadwick, Vaccari and O'Loughlin, 2018). In infotainment studies, the
insights of the hybrid media system are incredibly useful, and there is a serious need for
explanation and causal analysis of how certain news types and content styles, like infotainment,
are working in the larger political news arena and affecting the public. This project will take a
deeper look at a small section of the new online news apparatus, political infotainment podcasts,
but ultimately argues that serious consideration must be made about the state of news in America
and what can and should be done to improve it.
Furthermore, much of the data compiled about infotainment effects across all mediums to
date has been established through surveys and panel studies. While these methods have provided
valuable insights into people’s existing political attitudes and news consumption habits, they
have been less helpful in assessing the immediate effects of infotainment tactics. Indeed, only
through an experimental design can researchers better control variables and make claims about
causation, establishing what, if any, effects there are when content creators use these rhetorical
infotainment tactics to present political stories to their audiences. Such experimentation has been
somewhat limited in the study of soft and partisan news (Merle and Craig, 2012; Wojcieszak et
al, 2016; Pearson and Knobloch-Westerwick, 2018), and even scarcer when it comes specifically
to infotainment. As such, there is tremendous potential insight to be gained in identifying and
testing infotainment tactic effects on members of the American public.
In order to glean useful findings and draw some preliminary conclusions, this project has
been guided by three overarching queries: (1) What narrative tactics are being used within
popular political infotainment podcasts? (2) How successful are these tactics in strengthening
political attitudes as compared to ‘hard news’? and (3) How successful are these tactics in
increasing political knowledge as compared to ‘hard news’?
Hybrid Methods: Content Analysis and Experimentation
To better ascertain answers to the proposed research questions, this project has
undertaken a mixed methods approach. The study begins with a qualitative content analysis
examining three popular political infotainment podcasts. The overarching intention was to
identify three political infotainment podcasts in the US that showed the range of this genre in US
markets. The main variable considered in this selection was the political and partisan leaning of
the podcast. As such, one podcast was selected to represent a conservative leaning (The Ben
Shapiro Show), one was selected to represent a liberal leaning (Pod Save America), and a third
was selected to show a bit more independence and less overt partisan support (Slate’s Political
Gabfest). A fourth, hard news podcast (the BBC's Global News Podcast) was also selected to
provide a comparison between a hard news offering and the tactics used in political infotainment
podcasts. Overall popularity and audience reach on iTunes and Spotify were also used as
defining factors in this selection process, but these metrics were supplemented by preliminary analysis of the
podcasts themselves, and of reviews left by listeners about the style, tone, and political leaning of
each podcast. This selection process is discussed at length in chapter 3.
From these four selected podcast series, ten episodes were randomly chosen from each
show, using random week sampling. The resulting forty episodes spanned from October 2018 to
September 2019, and were fully transcribed for coding purposes. Preexisting literature about
narrative tactics used in hard news and in softer news from television and radio was consulted,
and numerous tactics outlined in this study’s codebook were pulled from those previous research
endeavors. However, this analysis yielded numerous new tactics which will be fully identified
and described in chapter 3 and chapter 4. Because the content analysis was just one, more
descriptive, component of the overall study, ten episodes per podcast (40 episodes in
total) proved to be an appropriate snapshot into overarching narrative strategies and
practices. This is especially true when considering the fact that over the course of 40 episodes
there were more than 220 news stories analyzed. Based on other research conducted in television
studies, using news segments and whole television episodes as the transcribed element of
analysis, this number of episodes and stories is significant enough to give a sweeping look at
narrative choices and framing tactics when episodes are selected randomly (Fields, 1988; ‘Power
of Pop’, Neuendorf, 2016).
With the benefit of a completed content analysis, the second phase of this project’s
method – an online survey experiment – was constructed, pre-tested, and eventually launched to
assess causal effects of popular political podcast framing techniques. As seen in the guiding
questions above, the ultimate goal of the experiment was to test how political infotainment
tactics affect political knowledge, and how they potentially strengthen political attitudes. To test
this, two of the most popular infotainment tactics identified in phase one, humor and opinion,
were studied, analyzed, and recreated in different audio clips mirroring short podcast stories.
These podcast clips also recreated political bias, with each experimental treatment having both a
liberal version and a conservative version. A fifth audio clip to represent hard news that had no
overt political leaning was also created, and the experiment included a control condition in
which randomly assigned participants did not listen to any audio. Ultimately, the
experiment used a pro-attitudinal design, testing for preexisting ideological leaning, and sorting
respondents into a liberal or conservative pool based on that response. Findings and the meaning
of those findings are discussed in chapters 4, 5, and 6.
Methodologically speaking, this research represents a new approach to the emerging
academic study of political podcasts. Though sporadic work has been conducted on political
podcasts (Park, 2017), this material is still limited despite the growing popularity of
this medium. The past few years have brought forth studies focused on the culture surrounding
popular podcasts (Berry, 2015), as well as case studies about particular shows and their impacts
in areas like education and journalism (Funk, 2017). There have also been more recent
preliminary academic discussions in journals about the potential effects of political podcasts and
opportunities for academics in the political podcasting space (Williams, 2020; Evans, 2020;
Barker, Chod and Muck, 2020). In many of these studies there has been a clear focus on
audience reactions to the material directly, with methodologies spanning from surveys to
interviews to analyses of fan pages and interactions (McClung and Johnson, 2010). None of these
researchers have tried to identify the narrative tactics that are most popular in political podcasts,
or recreated those tactics and tested them empirically. Thus, while this mixed methods approach
is new within podcast scholarship, it is a logical next step. Together, these two methods provide
exploratory findings that describe the style and makeup of political podcasts, while also
operationalizing that description into a more concrete, effects-based study, something that is sorely
lacking in podcast studies and studies of online infotainment.
Significance of the Study
In undertaking this research, the intent of this dissertation is to better understand the style
and effects of political infotainment podcasts. The popularity and acclaim of these chosen
content sources is something that in itself merits analysis. However, the overarching significance
of this study will not be so focused on a few successful shows. Instead, this research is an
attempt to better identify which tactics of political news framing and agenda building in the
podcast sphere are emerging as commonly held practice, and what tactics are emphasized by
podcasting as a medium versus inherited from mediums like television and radio. From there, an
assessment can be made on effects these tactics may have on political attitude and political
knowledge among audiences.
These two variables have been selected with care and with purpose. Political attitude, and
the potential to strengthen attitudes, has been chosen in an attempt to better understand the
process of polarization and persuasion possible in these podcasts. Much has been said about the
continued division among the American electorate. While some studies have shown that beliefs
can become more polarized from reinforcement spirals (Slater, 2007), others show exposure to
overtly oppositional content can prove just as divisive (Bail et al, 2018). Some scholars grapple
with the existence of echo chambers and filter bubbles (Flaxman, Goel, and Rao, 2016;
Borgesius et al, 2016; Dvir-Gvirsman, Tsfati and Menchen-Trevino, 2016), or algorithmic
interference and the exacerbating effects of social media (Fuchs, 2014; Holton et al, 2015;
Chadwick, Vaccari and O'Loughlin, 2018), but less work has been done about what specific
narrative tactics enable attitude strengthening. This dissertation looks at the granular level, at the
language and persuasive tactics that political podcasts use to build agendas and relate to their
audience, and then operationalizes those tactics in an experimental design, providing the ability
to make bolder claims about effects and attitude strengthening.
Political knowledge, meanwhile, was chosen as a way to engage with lingering worries
about how informed the American public is. Journalism and political communication scholars
have long debated the decreasing standards for political knowledge in the US and changing
models of information gathering (Schütz, 1946; Brown, 1997; Schudson, 1999). Ideals of how
citizens are informed are shifting, and more realistic scholarship points to the public being
monitorial in their news consumption, paying attention in bursts and spurts, and largely
interacting with news that affects them personally (Boczkowski, Mitchelstein and Walter, 2012;
Graves, 2017; Ytre-Arne and Moe, 2018). However, despite these changing habits, citizens need a
solid information base to make truly informed decisions, and for democracy to stand in
its best form, greater agreement on facts and the value of truth is needed overall (McCoy, Rahman,
and Somer, 2018). While this project makes no attempt to claim that political infotainment is
grounded purely in truth, it does aim to assess whether facts can be learned from this genre, and
whether or not facts are more readily recalled from opinion and humor narratives, or from
straight hard news. As such, a long-discussed debate about the power and the pitfalls of
infotainment will be continued in this study, and the findings of this project, which show
significant differences in political recall between hard and soft news framing tactics, further
establish the impact infotainment has.
Furthermore, experimentation in this space is a much-needed expansion of the existing
literature. The methodological affordances of experimentation are vast, and ultimately all data
gathered can be analyzed across demographic groups and different segments of the population.
Though the sample size of this project is relatively small (N=1,541), randomization and rigorous
significance testing do allow for some conclusions to be drawn. Findings indicate that there is a
significant difference between infotainment and hard news framing tactics used on podcasts in
strengthening political attitudes. The impact of those findings, as well as discussion on which
framing type is superior in strengthening attitudes, is covered in chapter 5. Testing on political
knowledge was less clear-cut and generalizable; however, statistically significant differences in
recall did occur on some level between infotainment groups and the hard news treatment, all of
which will be discussed in chapter 5. The impact of these findings will be fully examined in
chapters 5 and 6, and implications for future study and next steps will also be provided.
Limitations of the Study
While the limitations of this work will be further acknowledged in chapter 6, some
statement should be made to inform the reader of the constraints surrounding the production of
this dissertation. The most glaring of these limitations stems from a lack of funding, which
resulted in completion of only one experimental test. The podcasts in this experiment discussed
The Universal School Meals Program Act, a bill introduced in the fall of 2019 that proposes
instituting a free school meals program, covering breakfast, lunch, and dinner, for all US
school children. The justification for choosing this topic is discussed at length in chapters 3 and
5, but on the whole, this represents a low-salience issue that does not inspire strong pre-existing
opinion across political parties. Increased budgetary capability would have allowed more
experiments to be run with different topics, and, if findings were consistent across three or more
experiments, a firmer claim towards effect might have been made. However, for the purposes of
this dissertation, which represents a beginning of inquiry and a launching pad for future research,
one experiment run with rigor and integrity is still a substantial academic undertaking.
Similarly, it would be interesting to test the effects of infotainment over more prolonged
periods of time. Seeing whether the strength of attitudes on a given topic stays as strong, or
whether the learned facts are internalized long term, would embolden findings on the impact of
political infotainment. However, restrictions on budget and on time limited the ability of this
project to test for attitude and knowledge longevity. Nevertheless, the immediate findings of this
project are still significant and will all be discussed more thoroughly in the coming chapters.
Organization of the Study
Moving on from this introductory chapter, five more sections of this dissertation will
establish previous research, the methods of this current undertaking, the findings those methods
yielded, and the implications of those results for this project and beyond. Chapter 2 begins this
work by investigating the theories and scholarship that are most relevant to the current research
undertaking. Prior studies on podcasts and radio, the hybrid media system, journalistic norms,
and framing are all included in this chapter. There is also substantial discussion about the
independent and dependent variables in this research (the two selected infotainment framing
tactics, political attitude, political knowledge and so on). Chapter 3 then outlines the details of
this study’s methods, which consist of a qualitative content analysis and an online survey
experiment. Chapters 4 and 5 outline the findings and results of these two methods, with chapter
4 representing the content analysis and chapter 5 encompassing the experiment. Chapter 6
provides continuing discussion, consideration for scholars and practitioners, and suggestions for
future research.
CHAPTER 2
HYBRID MEDIUMS, SYSTEMS, AND GENRES: DEFINING POLITICAL PODCASTS
IN AN INCREASINGLY DIGITAL NEWS LANDSCAPE
In order to fully engage with the concepts put forth in this study, a wide range of
communication theory and debates must be analyzed and tied together. Beginning with an
introduction to podcast studies and the history of radio, this chapter will illustrate the elements
and affordances of audio that have been established in the study of mass media. That said,
podcasts differ from previous audio formats because they combine old media logics with new
and emerging ones while also firmly residing within the hybrid media system (Chadwick, 2013).
such, discussion of this system and of the preliminary effects that have been identified as
prominent in online spaces must also be considered. From there, extended literature on political
infotainment will be discussed, including a thorough analysis of the journalistic theories and
norms driving news-reporting and how infotainment bucks against those norms. This project also
requires consultation with research on framing and framing effects in both news and
entertainment contexts. Finally, sections will be included discussing political attitude and
political knowledge, the two selected dependent variables analyzed in this dissertation’s
experimental phase. All of these subject areas are widely ranging and not explicitly tied together
in past research, but given the hybrid nature of this project, they must all be examined and
considered.
Radio, Podcasts, and the Legacy of Political Audio
Any examination of podcasts as a medium rightfully must start with an inspection of
radio and the legacy of audio broadcasting on the whole. For nearly a hundred years,
communication scholars and other social scientists have studied broadcast technology, beginning
with radio. In the first half of the 20th century, when radio began to take hold of America and
other countries, the medium was incredibly transformative (Cantril and Allport, 1935;
Lazarsfeld and Kendall, 1948). Radio represented so many things to the American people: a
sense of community, a source of entertainment, and a means of quickly hearing the most
breaking news updates.
In the years since its creation and mainstreaming, researchers have tested for radio effects
and the impact that media can have on its audience (Cantril and Allport, 1935). Social scientists
have established what kind of social and cultural gratifications radio can provide (Armstrong and
Rubin, 1989), and have sought to understand some of the more radical effects that radio can have
(Kellow and Steeves, 1998). From spreading propaganda during WWII (Horten, 2003), and
facilitating genocide in the 1990s (Kellow and Steeves, 1998), to becoming a crucial fixture of
American life for decades, radio has proven throughout its history to have an immense power.
That power seems less defined in an era also balancing television, print, and now new digital
media, but in today’s world, radio continues to have a strong hold on audiences, reaching
roughly 93% of Americans, outpacing both television (88%) and digital media use (83%) ('State
of the Media,' Nielsen, April 2018; 'Audio and Podcasting Fact Sheet,' Pew Research, 2017).
Consistently, radio draws American audiences for a variety of reasons, including musical
listening, sports analysis, and broadcast news. One segment of radio culture and heritage that is
particularly informative for the current research is that of political talk radio.
To date, political talk radio is a content type that has received a significant amount of
attention and analysis from communication and social science scholars (Barker and Knight,
2000; Hall and Cappella, 2002; Mort, 2012). In this area of research, a number of guiding
questions have emerged, including what the effects on political attitudes are for talk radio
listeners (Barker and Knight, 2000), what kind of impact talk radio can have on election
outcomes (Hall and Cappella, 2002), and what kind of rhetorical tactics political talk radio uses
to persuade listeners (Mort, 2012). All of these questions come from a center of reasoning that
also informs this dissertation, namely interest in how particularly styled audio content can
influence political thinking amongst Americans, and how political outcomes may be swayed
based on the popularity and the salience of certain messages.
To this point, the study of political radio has established that one mainstream party in
America favors radio as a medium over the other. Conservatives for decades have had a more
established hold on political talk radio, and currently, of the top 10 political talk shows on the
air, 8 of the hosts are conservative while only 2 are liberal or moderate (Lipsky, June 2019). This
has resulted in a bevy of literature on conservative talk radio (CTR), and significantly less
literature on liberal leaning talk radio. However, these CTR studies have been incredibly
revealing and have unveiled dynamics wherein radio hosts and show creators utilize
baseline facts and comment on current events, but ultimately douse the truth in spin or punditry
that then serves a different function for listeners (Holbert, 2005; Jamieson and Cappella, 2008).
This style of opinion news is directly related to infotainment, but has often been analyzed as its
own genre of soft news content (Otto, Glogger, and Boukes, 2017).
CTR also represents the most comparable form of opinion style news that this
dissertation can turn to beyond the studied podcasts. Rhetorically speaking, CTR strategically
aims to persuade listeners towards conservative ideas, or at the very least, the ideas of the radio
show’s host. CTR also uses a style that is conversational by design. While many shows consist of
one host discussing issues and quandaries of the day largely on their own, the language used in
these shows is meant to be more accessible to the everyday listener (Mort, 2012). The rules of
decorum, language, and emotiveness are subdued compared to other forms of news, and there
is more freedom to be dramatic or performative. These shows value that informality and relative
lack of restraint, and that casual talk-radio ambiance has been internalized by many political
infotainment podcasts, including all of those studied in this project.
Conservative talk radio also provides insight into some touchstones of political
podcasting because the favored tactic to engage listeners is opinion and punditry. Humor is also
commonly used, but opinion is the defining element of CTR and other partisan news forms.
Importantly, past studies show that audiences of content created in this style are still informed to
some degree, but they are also gratified and entertained in other ways because the content they
are listening to mirrors their own ideology and belief system and creates a sense of agreement and
belonging (Mort, 2012; Boukes, Boomgaarden, Moorman, and De Vreese, 2014). This use of
media to reinforce opinions has been increasingly on the rise (Slater, 2007), and in online spaces
this dynamic is even more present. As such, radio creators and on-air outlets have had to adapt to
compete, even for the attention of longtime listeners.
Signs of this attempt at competition from radio producers can be seen in a number of
ways. One clear example is how many traditional radio shows are expanding to offer online
podcast content that supplements their on-air broadcasts (Rosenblatt, March 2020). These
podcast offerings have been viewed as an extension of radio and a means to engage with
audiences who they may otherwise not encounter. These may be members of the public who are
more tuned in online than with older broadcast forms, or people who favor the ability to
download and listen to this content on their own time. Because podcasting removes the
restrictions associated with live listening of radio, and allows listeners to engage with content in
their own time and in their own chosen setting, some scholars believe podcasts are breathing new
life into radio even while podcasting remains a medium that is separate and distinct from
radio itself (Llinares, Fox & Berry, 2018). At multiple points, older broadcasters have also
sought to onboard successful podcasts into their medium as opposed to competing with them.
Numerous podcast series have been featured for limited-edition radio runs, and a number of
podcast hosts have been onboarded to work on radio regularly (Sawyer, May 2020). Some shows
have also moved to television adaptations, creating a three-medium cross of content
(Mitrokostas, October 2019). But despite the growing acceptance of podcasts, there has not
always been such a positive, or even acknowledged, response to the medium.
Dating back to 2005, when Apple launched podcasts as a new form of digital content,
there was some initial public interest, but the phenomenon largely proved an under-the-radar
technological advancement (Potter, 2006). The audience for podcasts stayed comparatively small
for some time and, as a result, the medium was long considered too niche for academic study
(Llinares, Fox and Berry, 2018). However, over the past few years there has been a renaissance
for podcasts, with more creators providing programs that pull in substantial audiences worthy of
academic investigation (Berry, 2015; Markman, 2015; Wrather, 2016). Recently released
consumer data asserts that 51% of Americans have listened to a podcast, and in 2018 alone, an
estimated 90 million Americans listened to podcasts on a weekly basis (Podcast Insights, 2019;
Peiser, March 2019). Of these 90 million Americans, a substantial segment of listeners tune into
podcasts for news and information gathering (Newman and Gallo, 2019). But while popularity
for the medium is on the rise, research that examines podcasts has been varied and not fully
comprehensive.
Some preliminary studies about podcasts have included investigation into storytelling and
entertainment framing in podcasts, podcast use by media outlets and journalists, and
‘approachable news’ concepts seen in this new medium (McHugh, 2016; Funk, 2017; Park,
2017). To this point, research indicates that audience uses of podcasts span the entire spectrum
between purely entertainment shows and hard news offerings, but that certain ideals and
concepts define the bulk of podcasting endeavors. These include, but are not limited to, intimacy,
authenticity, independence, and informality (Berry, 2016). These ideals, baked into much of the
podcasting happening today, have in turn influenced audiences to value these facets
and to expect these touchstones in their podcast programming in almost all genres (Berry, 2016).
This is especially important to the current study because many podcast listeners are, at least to
some degree, interacting with politics and civic concerns in their chosen podcast diets (McClung
and Johnson, 2010; MacDougall, 2011; Euritt, 2019). For hard news podcasts, these touchstones
may manifest differently or may not be able to manifest at all. Informality, for example, is not a
facet of most hard news reporting, but it is a defining characteristic of political infotainment
podcasts. Indeed, all of these listed foundational elements are particularly emulated through
political infotainment podcasts, though more evidence of that will be demonstrated in chapter 4.
Further, Chadha, Avila, and Gil de Zúñiga (2012) found that political podcast use actually
leads to increased political participation for most audience members. This may be because the
perceived intimacy of the medium helps listeners to feel more invested in the storytelling
or discussion taking place in podcast spaces, or because the perceived authenticity of podcast
hosts prompts increased trust that may translate to increased listener participation. Yet tracking
podcasts and the communities that spring around them has proved difficult for scholars to this
point (Wrather, 2019). Recently, however, attempts at creating a uses and gratifications model
for podcast listeners have begun gaining steam, as have more in-depth looks at the
communities that surround certain popular podcasts (Perks, Turner, and Tollison, 2019).
In South Korea, similar podcast types to the ones studied in this dissertation have been
analyzed as products of intimate and authentic public service (Park, 2017). These podcasts,
which favor more entertainment-based storytelling tactics, and which are heralded as
approachable kinds of ‘citizen journalism,’ were studied for their discussions and their impact.
The findings of this work showed that this kind of informal, citizen journalism can have real
impact on audiences, including potential persuasive effects and an increased desire among
listeners to engage with the democratic process. However, the methodology for that research
consisted of interviews and surveys, and while it did provide some interesting insight into how
more informal political podcasts can be received and appreciated by audiences, it was still only a
first step (Park, 2017). In fact, in the future research section, Park specifically
cited the potential for delving deeper into podcast effects and narrative styles, something that
could not be done through interviews or surveys.
Comparatively, in Singapore, where podcasts gained popularity faster than in many other
countries and have remained a fixture of digital media, some preliminary experiments have been
conducted testing for the medium effects of podcasts (Skoric, Sim, Han and Fang, 2009). The
findings of these experiments indicated that podcasting as a medium alone did not persuade
audiences more than text-based political content; however, the researchers discussed how
different topics or communicative strategies enacted over podcasts could prove more successful
(Skoric et al., 2009). The agenda building taking place in these podcasts was a cause of intrigue,
and the potential power of these content forms was cause for academic curiosity. Importantly,
political podcasting has been banned by the government in Singapore during multiple election
cycles for its perceived ability to influence voters more than traditional modes of political
discourse. In 2006, political podcasts became a highly monitored type of media, and even when
political podcasting has been permitted, podcasters discussing politics in the country have had to
register with the government. Failing to do so, or covering topics that the government deems
'fake news,' can result in content deletion and other penalties (Associated Press, May
2019). While this is, of course, a more extreme reaction to political podcasts, cracking down in
such a way indicates that politicians and government officials in Singapore feel that the medium
is powerful and capable of some kind of serious effect.
One final branch of podcast literature, which has proven to be fruitful and filled with
more experimental methodologies, exists in education studies. Because podcasts are being used
at a higher rate in schools, and are being integrated into classroom teaching, the need to
understand impacts of podcasts on students has prompted steady analysis (Saeed and Yang,
2008; Kennedy, Hart and Kellems, 2011). That said, those experimental designs remain
only tangentially important to the current research endeavor. What they do demonstrate is
that experimental research strengthens the solidity of podcast findings and allows causal
elements to be identified. Also, the tendency of American schools to introduce their pupils to
podcasts may indicate an increasing acceptance of the medium on the whole. By making
podcasts a more normalized source of news, information, and entertainment, schools have in
many ways promoted this medium, building habits among younger people that may translate
beyond merely educational spaces.
The Hybrid Media System and New Media Effects
As stated previously, the media world today is increasingly defined by its high-choice
and preference-based dynamics (Prior, 2005; Djerf-Pierre and Shehata, 2017; Hameleers and van
der Meer, 2020). Because there is more content than ever before, new mediums coming online
every day, and so many more outlets from which audiences can pick and choose, citizens in any
digitized democracy interact with media in a multitude of forms. One particular theory that
describes how this new online media landscape operates is the theory of hybrid media systems.
Put forth by Chadwick in 2013, the hybrid media system paradigm claims that the media
landscape has been fundamentally changed by the addition of the internet to people's
daily lives. This change has far-reaching consequences, and has been transformative at every
level of content creation, distribution, and internalization. In the hybrid media system, the
internet has not only created new types of media and distribution, but has also adopted styles and
content forms from older mediums like television and print, while continuing to work in
conjunction with those mediums (Chadwick, 2017, p. 3). Podcasts are an excellent example of
this, as they pull from the aural cultures of radio, and often craft their episodes and series in ways
that call back to radio and television styles. By using familiar production and narrative tactics,
podcasts as a medium can seem familiar and accessible enough to new audiences, while also
standing alone as a different kind of media. Content wise, podcasts often tout elements like
intimacy, authenticity, and informality more so than broadcast infotainment products, but there
are also differences from a purely technological view. Podcasts offer new means of listening and
engagement, blending online logics of streaming and downloading that are asynchronous
(Burroughs, 2019). They have also been better built into devices like mobile phones, making
them more easily accessible to some members of society than radio is (Schroeder, 2018). But
while podcasts are in some ways novel additions to the media ecosystem, they coexist with more
traditional broadcasting mechanisms and have shown the hybridity of media in the digital age.
Despite initial fears that the internet would eradicate older mediums and their reach,
digital media has not fully displaced television, print, or radio when it comes to news or
entertainment. Instead, there has been “a growing systematic interdependence” between older
mediums and newer ones (Chadwick, 2017, p. 254). Business models and production of media
have shifted and changed to accommodate the digital age, but a cohesive media landscape is
emerging that blends old and new mediums together (Nelson, 2020). None of these mediums
exist within a vacuum. They are competing for the same advertising revenue, trying to corner
similar marketplaces, and seeking to engage with the same viewers. As such, the hybrid media
system is defined by its interdependence, and understanding that interdependence is critical for
scholars and practitioners. Studies that fail to look at the whole system when considering their
specific niche market risk missing the forest for the trees: all media is interconnected, and news
media is no exception to that rule; if anything, it exemplifies it.
Formed from continued study of political media in the UK and the US, Chadwick’s
theory goes beyond the general workings of the media system and looks particularly at the
political impact that the hybrid media system has. He investigates how “forces of flow” and
shifting levels of power have impacted not only the media ecosystem, but also the political and
democratic processes. This makes the theory especially well suited to the current research, which is
situated at the intersection of digital media, journalism, and politics. Ultimately, Chadwick puts forth
the idea that media is defined by hybridity of source, genre, and meaning (2017, p. 184). The
vast majority of people look to different sources for information about the world and politics, and
they gather news and facts from a multitude of outlets. There are also few members of the public
who go through life interacting purely with news and information they encounter either through
broadcast or online, and if they do, the crossover of content from online domains to older ones,
and vice versa, continues to be substantial (Chadwick, 2017, p. 4). Indeed, according to
Chadwick, “the hybrid media system is built upon interactions among older and newer media
logics – where logics are defined as technologies, genres, norms, behaviors, and organizational
forms,” (p. 4). Operating within these logics are a host of actors, many of whom have some kind
of political motive, and significant players in this system have long been identified in political
communication scholarship. These include politicians, members of the news media, lobbying
groups, activist organizations, the public, and more.
These political actors have always, to some extent, helped to steer information flows in
our news systems in ways that suit them while modifying the agency of others.
Throughout history there has been an ongoing negotiation of power between government, the
media, activist groups, the public, and other influential parties, but the hybrid media system has
changed every group's ability to steer conversation, and has enabled a significant change in the
flow of information (Chadwick, 2017, p. 75). This age of hybridity has opened the door to more
voices circulating about any given topic, and has made it so that actors do not have to be
professionals but can be amateurs as well. This accessibility and independence is also
highlighted in podcast studies, where the distinction between amateur and professional podcasting
is less pronounced and does not resolutely determine success or failure
(Llinares, Fox, and Berry, 2018).
These affordances in the hybrid system also speak to some of the early ideals of what the
internet could be, namely a place where publics are better represented and where collective
digital actions could be more democratic (Benkler, 2006; Castells, 2008). To some extent that
democratizing shift has been made. Independent individuals and new start-up media outlets have
been given the space to be a part of larger social dialogues, and have proven to be popular. These
include the podcasts at the focus of this study’s content analysis and their hosts. Many of these
figures have used their digitally acquired popularity to engage in larger discussions with the
public and their peers, and those discussions have been further amplified with the assistance of
social media. This amplification makes sense, since in the hybrid media system, social media
platforms are supposed to be the amplifiers of the internet, which over the last decade have
thoroughly “reshaped the mediation of politics” (Chadwick, 2017, p. 241). But for all of their
ability to raise voices and circulate unique takes and new perspectives, social media platforms are
not purely benevolent purveyors of a digital public sphere. For all of the potential aid and
assistance social media can theoretically bring, numerous negative elements come attached to
these platforms, identified not only by Chadwick but by numerous other
scholars as well.
Social Media: The Megaphone Manager and Master of Media
In his second edition of The Hybrid Media System: Politics and Power, Chadwick
expands his original theory and uses the election of Donald Trump in 2016 as a case study to
illustrate both good and bad parts of the hybrid media system. With every passing year, the
capability of the digital sphere grows and expands, and this growth impacts information flow and
the larger media system on the whole. To demonstrate this, Chadwick points to the unexpected
success of Donald Trump's 2016 presidential campaign strategy. He credits Trump and the
Trump campaign with navigating a new, digitally driven way of gaining attention that
undermined the long-established hold television media has had on the political arena while also
exploiting all of the positives of television coverage (Chadwick, 2017, p. 262). Chadwick claims
that through social media use, Trump was able to supplement his campaign in ways that created
hype outside of older broadcast means. That hype on social media was largely paid for with
strategic ad buying happening on platforms like Facebook and Twitter. However, the rampant
success of these paid digital strategies ultimately resulted in an incredible amount of earned
media coverage from television, radio, and print outlets. This was possible because over the past
ten years, social media platforms have embedded themselves not just in the everyday lives of
hundreds of millions of Americans (and billions around the globe), but also as sources of
engagement metrics now used by the mainstream media across all mediums.
In order to assess engagement and the interests of the larger electorate, journalists and
news media outlets are increasingly looking to social media metrics to measure how important
and interesting issues are to the public (Chadwick, 2017, p. 256). Theoretically, such a transition
would be beneficial, especially if the idealized hype of social media platforms as new
spaces for public discourse and debate is accepted (Shirky, 2011). But while social media
technically has had the ability to provide a kind of digital public sphere, that possibility has never
been truly actualized (Fuchs, 2014). In reality, social media has become increasingly shaped by
market interests, and more and more trending topics and viral moments have been tied to
promotional or corporate initiatives (Smith, Blozovich and Smith, 2015).
These dynamics on large social media platforms have dulled the public sphere potential
of social media, and while millions of Americans still use social media to discuss their interests
and communicate about larger topics, the very fabric of the social media metrics used to measure
public interest is questionable. This is especially obvious when considering how platform algorithms
have prioritized the value of sharing stories and promoting paid content over the truth or validity
of the content that is shared (Chadwick, 2017, p. 261; Klonick, 2018). Ultimately, shareability is
more important to social media platforms than authentic public deliberation and interaction,
because more shares mean greater audience reach, which means more potential advertising
revenue, the main income of most of these social media platforms (Fuchs, 2014). Still,
legacy news outlets and outlets native to the digital space continue to use social media indexes as
an indicator of public attention and interest, and have shaped their coverage and stories
accordingly (Jungherr, 2014; Chadwick, 2017). This has then paved the way for numerous
detrimental effects in online news spheres, not the least of which are the continued amplification
of political polarization, the sharing of fake news, and the heightened occurrence of negative
reinforcement spirals (Chadwick, Vaccari, and O'Loughlin, 2018; Hutchens, Hmielowski and
Beam, 2018).
Effects of the Hybrid System: Reinforcement, Polarization, and Increased Partisanship
All of the effects mentioned above coexist with the notion of a hybrid media system, and
indeed many of them share a common understanding of the internet and the current media
landscape. For example, the basic hypothesis for scholars who acknowledge the existence of
reinforcement spirals and content polarization is that people now have a huge volume of choice
online and in traditional media. That choice, along with more targeted production and
marketing of overtly political news, has in turn allowed people to increasingly consume news
stories according to their own personal taste (Slater, 2007; Feldman, 2011; Pearson and
Knobloch-Westerwick, 2018). Empowered with the ability to tune in only to like-minded news,
people are continuing to consume media that reiterates frames they favor. They may gravitate to
media for its tone, lack of or adherence to formality, general opinions, political attitude and more
(Hmielowski, Beam, and Hutchens, 2017). This then narrows people's exposure to conflicting
information, which can have a significant impact on how informed citizens truly are.
When a desire to solidify one’s beliefs with like-minded content tops actual consumption
of facts, this endangers not only normative news-reporting practice, which values objectivity, but
also the health of the democratic process, which necessitates that people have a fuller scope of
the issues and considerations of the country. The past few decades have provided evidence that
some people are more passive in their information gathering, rather than actively informed (Gil
de Zúñiga, Weeks, and Ardèvol-Abreu, 2017). These members of the public are more prone to
monitoring news coverage rather than actively seeking it out (Schudson, 2007). This being the
case, it is all the more important that content be honest and fact-based.
In the current hybrid media system, however, there is no infrastructure to ensure that
standard. Social media logics do not necessitate the presence or promotion of fact, as these
platforms are built around programmability, popularity, connectivity, and datafication (van Dijck
and Poell, 2013). On the whole, social media is defined by the agency it gives its users to choose
their own programming, even as social media platforms use technological programming to shape
public discourse and interaction. Popularity then becomes a huge boon for content circulating in
the social media sphere, and that popularity can be organic (circulated by users) or artificial
(promoted by the platforms and other powerful parties). The increased connectivity of social
media also promotes things like sharing and peer endorsement, and ultimately, this sharing and
engagement is datafied, allowing social media platforms, content creators, and parties who buy
curated online data to target and track user wants and habits through quantifiable means (van Dijck
and Poell, 2013).
Podcast creators encounter the logics of social media all of the time, even as they pull
certain ideas of programming and scheduling from mass media logics (Kaplan, 2013; van Dijck
and Poell, 2013; Sienkiewicz and Jaramillo, 2019). While the episodic formula of podcasts often
pulls from cultures of radio or television (McHugh, 2015), the agency and premeditated choice
on the part of users, which is a built-in part of podcasts, stems from online spaces. Because
podcasts require downloads, or at the very least, active subscription and opt-in on streaming
platforms, they do not have the same accessibility and ‘chance’ listening that mediums like radio
have (Sharon and John, 2019). Podcasts, unlike most radio, are also designed to be personal and
to gratify more directed opinions, beliefs, and world views (Sharon and John, 2019). This
personalization of programming is then directly tied into the marketing of podcasts, which must
rely on datafication (learning what their own and other listeners are responding to) to
build and maintain any measure of popularity. Oftentimes this tailoring of content and building
of identity, tone, and style is done in consultation with popular radio and television
programming, but in the end, podcasts represent their own emerging genre and medium, guided
by a mix of informality and personalization that is new and largely untested (Llinares, Fox and
Berry, 2018). That personalization especially prompts queries of potential polarization and raises
questions about whether podcasts, and other digital mediums, are further exacerbating an already
precarious and polarizing point in history.
Within the study of polarization, there is ongoing debate over phenomena like filter bubbles
and echo chambers. Some academics claim that social media platforms automatically filter users
towards certain content thanks to their algorithms, which are designed to lead users towards
consuming more and more content (Pariser, 2011; Flaxman, Goel, and Rao, 2016). Many have
disputed the existence or the strength of filter bubbles as initially described (Borgesius et al.,
2016), but questions of social media agency versus the power of individual users still remain.
What cannot be denied is that to survive in the increasingly competitive social media
world, more and more news outlets have begun shifting to more opinion-based or partisan
news styles (Feldman, 2011; Scheufele and Nisbet, 2012). An increase in ideological leaning is
by definition a breaking of objectivity norms, and its potential impact on American political
knowledge and participation is profound. But this choice to heighten partisanship has been
proactively made by some media organizations to try to compete, not just with other content, but
with rapidly evolving models of use emerging among users of new and social media.
One of these frightening models, at least from the point of view of creators, is the rise of
secondary screen use. Secondary screen usage involves audience members splitting their
attention between two or more screens, like watching television or listening to radio while also
using their smartphone. Studies have shown that this habit is on the rise, and that it diminishes
attention spans and information recall (Shin et al., 2016). The means to combat this are limited.
Media outlets and content creators cannot deny their audiences secondary screens or dictate
exactly how an individual will interpret and respond to their content. As such, they have one of
three options: they can accept a potentially dwindling audience, they can accept an audience that
is less engaged with the material, or they can try to bolster the intrigue of their stories. The latter
has been the choice of many outlets, and the means of doing so has been the adoption of
more sensational narrative tactics (Wahl-Jorgensen et al., 2016).
Lastly, social media and other online content have had tremendous psychological
effects on users, prompting significant analysis over the past few years. In
psychology journals alone, hundreds of studies have been published over the last decade that
cover a whole host of issues from social media’s detrimental effects on self-esteem and its
propensity to promote unhealthy peer comparisons, to the increasing level of identity building
that is happening in online spaces (Zyoud, Sweileh, Awang and Al-Jabi, 2018). Not all of these
studies are directly related to the work at hand, but findings regarding identity-building, feelings
of worth, and perception of larger social norms do deserve some mention here (Sherman et al.,
2016; Ngai, Tao, and Moon, 2015; Tiggeman, Hayden, Brown, and Veldhuis, 2018). Early data
suggests social media has tremendous power over people’s opinions of themselves, their peers,
and the world around them, and those forces no doubt affect how
audiences engage with news and political content (Yang, 2016).
For journalists and journalism scholars, these dynamics are not entirely new. News has
never existed in a vacuum. It has always met people where they live, speaking to them through
the filter of their own world understanding (Iyengar, 1987). The same has been true for
political practitioners and campaigns, who have to tailor their messages and persuasive practices
to try to engage with citizens in the contexts where they reside. However, social media's
dominance, and the distinct dynamics of the hybrid media system, necessitate new models of
understanding and have required new approaches and techniques from journalists and political
operatives alike.
A Hybrid Profession: Journalism Practice in a Changing World
Throughout the history of American journalistic practice and study, there have been a
number of key touchstones that are synonymous with ‘good’ fact-based reporting. These ideals,
largely adapted from a libertarian theory of the press, were cherished and exercised before
America gained independence, and were then enshrined in the Constitution as foundational
elements of American democracy (Siebert, Peterson and Schramm, 1956). Subsequently, the First
Amendment and its legal affordances to free speech and a free press have not only shaped
America's media system, but the entire country-wide conception of what fact-based reporting
should achieve and what roles journalists should play in society (Blasi, 1977).
Over time, the idea that news media has a social responsibility to inform the public
became a standard norm in journalistic cultures and understanding (Siebert, Peterson and
Schramm, 1956). In order to fulfill that responsibility, news reporters needed to embody and
emulate a variety of traits including an ability to remain objective, convey stories with a
seriousness of tone, and prioritize public knowledge over private interests. Some exceptions to
these objectivity and seriousness norms exist within segments of legacy journalism, like editorial
content and opinion pieces, which ultimately make arguments and choose sides on given topics.
However, on the whole, fact-based journalistic reporting has tried to insulate itself from even the
appearance of bias. Even within media organizations that have both reporting and editorial
functions, there is usually a well-defined wall between unbiased reporting work and editorial
content (Kahn and Kenney, 2002).
Unfortunately, the norms guiding fact-based and unbiased reporting can be difficult to
adhere to, and are especially tricky in a market-based system where audience preferences are not
always based on which outlet provides the most substantive facts (Nerone, 1995). There have
been problems both historically and in the current news climate with actually creating news that
is unbiased and objective, and so a social responsibility model of the press has never been fully
actualized (Merrill, 1974; Lambeth, 1995; Nerone, 1995; Christians et al., 2009). This is in no
small part because of the oversimplification the model presents, and because its claims as a
theory are shaky given its broad and largely unattainable dimensions (Nerone, 1995; Christians et
al., 2009). New dynamics in American news consumption and production have further strained
these idealistic models. Prior to the popularity of the internet, the theoretical constructs of
American media were already riddled with inconsistencies and disagreement, but now that has
been compounded by substantial changes in the new media ecosystem like the fragmentation of
the larger news public, and the tendency towards news personalization that continues to increase
(Deuze, 2004; Deuze and Witschge, 2018). Meanwhile, the expansion of digital media, the
widespread adoption of the internet as a source of news, and the growing popularity of different
social media platforms, which ultimately govern platform content and public access, have all
begun to permanently transform the American hard news landscape and its perceived credibility.
The Emerging Digital Press
The effects of this digital media push can be seen clearly in the facts and figures
surrounding American news consumption. In 2019, 34% of Americans said that they
preferred getting their news online, up from 28% in 2016 (Geiger, September 2019). This
tendency towards reading, watching, and listening to news online is an especially popular trend
among younger generational cohorts, with 18- to 29-year-olds getting more than 63% of
their news from either online websites or social media as of 2018 (Shearer, December 2018).
These statistics, and the countless others compiled in recent literature and public polling, indicate
a prevalent shift in news consumption, and show that online sources are becoming permanent
fixtures of information for millions of Americans and many more people around the globe
(Cammaerts and Couldry, 2016; Humprecht and Esser, 2018).
For a number of years, scholars and practitioners had hoped that this transition to the
internet would provide an increased diversity of voices and viewpoints, diluting some of the
power of the conglomerates that have largely shaped news agendas for decades. Studies of
earlier internet endeavors, including blogs and other forms of citizen journalism, expressed
optimism about improving democracy through online means (Benkler, 2006). Their hope was
that the internet would be a networked public sphere, operating in a way that provided more
equality of access and afforded more power to participants. Over time, however, the hope for that
democratized internet sphere have dwindled. Recent market research indicates that the most
visited sites for news and online information are still largely owned and operated by preexisting
powerhouse outlets (Humprecht and Esser, 2018). As of 2019, 80 percent of the top twenty news
outlets online were owned and operated by the 100 largest media companies, most of which have
increased their presence in other mediums as independent ownership has diminished
(Independent Lens, 2019).
In the meantime, other top news sources online have emerged and begun to dominate
amongst competitors. Many of these outlets have made up for a lack of media-wide control with
large influxes of cash made from sizable online advertising and effective efforts to have their
news stories go ‘viral’ (Al-Rawi, 2019). Companies like Buzzfeed and Vice are good examples
of this new dominant online media, having gained popularity through early adoption of different
formats for web-based communication (Stringer, 2020). These kinds of news agencies have
largely tried to legitimize themselves through adherence to journalistic norms and best
practices (Tandoc, 2018), but at the same time, their news is produced within a larger corporate
ecosystem whose mission values things like transparency, perceived authenticity, and a level of
informality or immersion that does not detract from an image of credibility while also remaining
appealing to the average online news consumer (Stringer, 2018). These touchstones are what
helped to build these companies up, setting them apart in a digital world that values content that
seems genuine or approachable. Thus, there is a need to blend old news norms with new world
expectations. The resulting news style at times forsakes seriousness of tone or objectivity
standards for the sake of seeming conversational and approachable to audiences. This kind of
compromise ranges in severity on a case-by-case basis, but the more informal standard for tone
and bias is becoming a hallmark of online news content (King, 2015).
In the face of this success from originally digital contenders in garnering audiences,
especially on social media, legacy outlets have had to adapt their styles and business models to
some degree as well. This involves not just adapting their storytelling styles and techniques,
which has happened at some outlets, but also utilizing the same social media and digital tools to
circulate their stories (Giomekalis and Veglis, 2015). These tools, like search optimization and
others, can in themselves affect the impact of a story on audiences. A particularly clear example
of this comes in news story headlines, which are now created for both offline and online readers.
Originally, the goal of headlines was to tell readers what a story was about, but increasingly
headlines are being written to have a better chance of being found and promoted in online spaces.
This change can inherently frame a story in one direction or another, and has resulted in a
shifting series of goals and intentions for journalists when crafting story titles. These goals often
prioritize higher audience reach over a straight relay of the facts covered in the story (Hagar
and Diakopoulos, 2019).
Similarly, on sites like Twitter where there may be character limits or certain language
styles and structure that get the most attention on the platform, journalists and media outlets are
having to adjust their style and tone to improve their circulation (King, 2015). But while the
above phenomena pose some risk to the integrity of news content on social media,
there is seemingly no viable path forward for media outlets that does not
involve some social media presence and activity. In 2019, 72% of Americans used some type of
social media, and more and more people are turning to these platforms as gatekeepers of news
and information (Social Media Fact Sheet, Pew Research). As such, news outlets seeking to have
any influence and reach must continue to play in this digital sphere, and as they do that, some of
their journalistic standards will continue to be tested (Lewis and Molyneux, 2018). Two of these
norms that directly relate to infotainment as a content style are objectivity and seriousness.
Objectivity Norms
Scattered throughout this dissertation so far have been mentions of particular ideals in
fact-driven news reporting that have become central components of the practice. Perhaps none
are so valued and discussed - and certainly none are so central to the current research - as
objectivity. In US media particularly, the concept of objectivity has been a dominant pillar of
hard-hitting journalism since the 1920s and 1930s (Schudson, 1978). At its most basic level,
objectivity is the separation of truth and facts from bias and personal values or beliefs. To be
‘objective’, a news story must not use elements like loaded language or leading phrasing that can
have an influence on audience beliefs. The goal, instead, is to meticulously find facts and deliver
them to audiences. In journalistic reporting, there is an inclusion of situational context, but
historically that context only extends so far, as objective reporters are more chroniclers of a
moment rather than analysts of what the moment means or symbolizes (Schudson, 2001). At
different times this general definition has been examined, critiqued, and defined. Over the past
century, understandings of objectivity have ebbed and flowed, and the country has moved
through phases where a lack of bias in media is more or less appreciated (Martine and De Maeyer,
2019). However, the general premise of objectivity in fact-driven reporting requires that
journalists separate their personal feelings and inclinations on an issue from their relaying of
stories to their audiences, and that is a norm still widely held throughout the profession.
News, in this normative imagining, is something that should not tell audiences what to
believe about a given phenomenon or situation, but rather should give them the information about
the topic to draw their own conclusions. This kind of hard-hitting reporting is different from
editorial or opinion-based offerings, which do not hold the same ideals of objectivity. Still,
despite the shared standard of hard-hitting reporting that persists in both the professional and
conceptual understanding of journalism, academics have outlined numerous limitations of the
objectivity norm. Recent research has synthesized these critiques into three main spheres of
criticism: (1) total objectivity cannot possibly be attained because people themselves cannot
remove themselves from their personal beliefs and subjectivity, (2) objectivity is only a tool of
journalists that is used ‘strategically or instrumentally,’ and (3) objectivity ‘can only function as
a guiding principle rather than a working rule’ (Martine and De Maeyer, 2019, p. 3).
While all of these critiques are valid and bear substantial support in journalism studies
literature, the first point, about people not being able to separate themselves from their belief
schema, is particularly glaring. If this is the case, and social science inquiry has increasingly
established that it is, this means that actual understanding of what the facts are and what facts are
important enough to share with the public cannot be standardized (Schudson, 2001; Maras,
2013). Instead, people’s personal experiences and already existing biases and preferences dictate
their understanding of truth, their value of certain facts, and the perceived importance of stories
in general.
While studies of objectivity norms center on the inability of journalists and the media to
be fully objective, this inability to separate oneself from one’s personal beliefs extends to
all individuals. In response, more and more content creators in news and
beyond are tailoring content to appeal to given audiences. This tailoring can compromise on
objectivity, undermining practices of limiting bias and presenting all sides, in favor of
presenting content that makes an argument or leads audiences one way over another, which may garner more
interest. In the current news ecosystem, partisan news has claimed an increasing foothold, and a
type of new partisan infotainment is at the very center of this dissertation inquiry. This political
infotainment also undermines more than just objectivity norms in its information relaying, but on
style and tone, which is often referred to as story seriousness.
Seriousness Norms
While objectivity has been a much more central consideration for scholars when
examining news reporting norms, concerns about topic and style seriousness in reporting have also
been a major consideration. For most scholarship, objectivity is an overarching umbrella concept
that by definition includes some rules on what the tone of news content should be and what
stories count as ‘newsworthy.’ Interspersed in his analysis of objectivity norms, Schudson (2011)
discusses the expected ‘coolness’ of a reporter’s tone, and how it should limit emotion and any
narrative devices that may formulate a story in a non-fact-based way (p. 150). Similarly, other
scholars have discussed how limiting bias and following facts leads to a more formal language
style that has become indicative of hard news for many years (Plasser, 2005).
Beyond general tone, seriousness also relates to the topics and news stories that gain
attention and receive prominent placement and promotion. Historically, journalists have felt that
entertaining the public is something that should not be a priority, as that can be seen as taking
“the easy way out when confronted with complex issues” (Plasser, 2005, p. 53). In this
equation, it is informing people that matters most on the part of a journalist or reporter, but that
standard is by no means universal. Recent studies on television news have tracked a continuing
movement towards informality and lighter news topics, especially in local markets (Henderson,
2019). In these spaces, the drive to entertain and keep people invested in the news program so
they will not change the station is becoming equally important to informing the public, and the
result has prompted concern about market-driven news production amongst academics
(McChesney, 2013; Henderson, 2019).
Some media outlets have seen this decline in news objectivity and seriousness and have
sought to combat it not by overtly softening their news offerings, but by expanding their
offerings throughout a number of new avenues. An early example of this in the internet age came
from the emergence of news media blogs. Originally blogs were seen as a media type that was
outside the norm, but as blogs gained popularity, big media players like the BBC, The New York
Times and others took note and created blog offerings that extended their media brand (Hermida,
2009). Some of these outlets attempted to use blogs as another means of distributing their normal
hard-hitting content, but expectations of the medium made many of those attempts unsuccessful
in reaching audiences or gaining any real attention. Instead, journalists largely pulled from other
tactics that have long been featured in hard news while never taking center stage. This includes
things like human interest stories and long-form interviews with guests of public interest
(Hermida, 2009). Thus, while journalists could still claim much of their work in the blogosphere
was hard-hitting, the tactics being used were shifting and the newfound emphasis on parts of
stories that had previously been less used marked some interesting internet-based shifts in
seriousness norms.
Over the past fifteen years, many more mediums have emerged that offer similar
opportunities, and news outlets have navigated these changes and decided where their
preexisting styles can thrive, and where some kind of change or alteration is needed to gain more
public following. One such space has been in podcasts, and in the past five years, nearly every
large news outlet in America has created some kind of podcast presence that either promotes
their stories from another medium, or supplements their offerings with special features, issue
spotlight shows, and so on. Sources like NPR and The New York Times have created incredibly
popular podcast news programming that consistently chart in the top ten most popular podcasts
in the country, some of which have been hard-hitting, but many of which are more informal or
conversational (Euritt, 2019).
Yet while these shows from legacy outlets are doing well, they are competing with other
soft news offerings from non-traditional outlets that similarly lean on informal tactics to discuss
world events, politics, and more. These soft news and infotainment podcasts do not usually hold
the same rigorous journalistic norms at the core of their philosophy or process, and as such, some
of the academic and professional fears about effects of lessening seriousness and objectivity in
news spaces hold extra weight. Indeed, many questions remain about how these softer news
sources frame their coverage, and what, if any, effects those frames may have, given that they are
partisan by design. These questions are the bedrock of this dissertation, and are incredibly
relevant for investigation about the selected political infotainment podcasts.
Framing the Narrative: Political Framing Tactics in the Digital Age
While original definitions of soft news were fixated on the topics covered in journalism
stories (Tuchman, 1972), current understandings of softer news and infotainment not only deal
with the issues being covered, but also with the frames through which stories are told. Because of
this, framing theory plays an integral role for any study looking to understand the softening of
news, compromising of objectivity and seriousness norms, and the effects of these changes.
Built upon the foundations of work from many social scientists and solidified into one
theory by Entman (1993) and others, framing theory has to do with the presentation of
information, ideas, and content to audiences and how certain styles and story choices affect
audiences in different ways. In news settings, frames are used by journalists and function as an
interpretive mechanism to highlight the salience of certain issues and to compare the given story
elements with other pre-existing knowledge bases held by audiences of that news content (de
Vreese, 2005; Powell et al., 2019). Framing theory diverges from other similar theories of media
effects like agenda setting because it makes bolder claims, including that the particular frames
used in telling a story can not only influence what media consumers think about, but ultimately
what they actually think (Blumler, 2015). Debate about whether this is the case, or even whether
framing theory should be conceived as its own separate media effects paradigm remains
persistent in some segments of social science literature (McCombs, Shaw and Weaver, 2014),
but for the purposes of this study, there is a general understanding that framing theory is a
critically important and independent theoretical framework, which has needed some adjustment
historically, but still fits well into discussion of news creation and effects.
Early studies of framing primarily originated in a political news environment (Tuchman,
1978), but over time, analysis of framing has moved outside of the political, addressing many
more issues and areas. Indeed, framing on the whole is adaptable to a wide variety of settings,
because framing focuses on how content and stories are created and then situated within a larger
social understanding for audiences (Scheufele and Tewksbury, 2006). For many practitioners,
especially those in politics, who are seeking to persuade members of the public towards their side
(whether it be voting for a particular candidate, supporting a given issue, and so on), framing
tactics are necessary tools through which to try and sway an audience. When making content,
there is a certain level of strategy, agenda building, and any range of motivations behind the
creation of that message, but ultimately the types of frames chosen and the tactics that are used
play a major role in the framing effects imparted on individual audience members. Even without
an overtly stated strategy, all stories inherently have frames, and all stories connect with their
audience in psychological, cultural, and social ways. This is because frames call upon an
individual’s personal and specific conceptualization of a given issue, which has been internalized
through a schema that is individual and unique (Chong and Druckman, 2007; Cacciatore,
Scheufele, and Iyengar, 2016). Given that this is the case, successful frames need to have a level
of accessibility and applicability. They need to speak to the population internalizing them, and
without accessible and applicable frames, a message’s strength is seriously diminished, and its
framing potential is inherently undermined (Scheufele, 1999).
Importantly, this project makes a distinction between the broader term of ‘frames’ and
the more intentional and specific term of ‘framing tactics.’ Frames, as discussed above, are
constructs that are used in storytelling either intentionally or unintentionally; framing
tactics are intentional by nature. In the political infotainment that represents the heart of this
study, that intentionality is a defining feature of the content style on the whole. The podcasts
analyzed in phase one and recreated in phase two reflect this, and involve more overt strategy in
their framing, thus meriting the distinction of being labeled framing tactics. This is not to say that
political infotainers are the only people crafting stories with intentional framing tactics. Indeed,
studies have long shown that this intentionality is a facet of news creation on the whole.
In journalism contexts specifically, frame building (the macro element of creating
messages with purposeful frames) comes at the hands of journalists, their editors, and other
decision makers who translate news stories for the public (Scheufele and Tewksbury, 2006).
Ultimately, journalists are storytellers, and their storytelling method in hard-hitting reporting is
typically guided by a particular narrative style that comes with its own familiar frames
(Knobloch et al., 2004; Scheufele and Tewksbury, 2006). In an idealized version of hard-hitting
reporting, the framing tactics provided by journalists to the public should follow certain
protocols and at their core should be fair, impartial, and as neutral as possible (Gitlin, 1980;
Deuze, 2004). The normative standards of limiting bias, remaining objective, and prioritizing fact
relevance have been baked into the tactics used by journalists for years
(Knobloch et al., 2004). However, because of a variety of pressures and influences, journalistic
coverage today exists on a continuum when it comes to these values. This continuum presents a
number of tensions between intervention and neutrality, power and distance, market versus
public good pressures, and more, and these conflicts in turn influence and subsequently indicate
the kinds of framing and tactics that are being used (Hanitzsch, 2007).
Despite the increasing shift to internet mediums, the frame creation and agenda building
process in a political media setting still consists of the same stakeholders: politicians, the media,
opinion makers, and the public (Chong and Druckman, 2007). However, though all of these
parties still have agency and control when it comes to political news, the wants of American
audiences and the framing trends that have emerged as a result are shifting in an online era.
Today more Americans claim to want things like perceived ‘authenticity’ over strict adherence to
fact, a change from the norm that was prevalent even as recently as the late 1990s (Norris, 1999).
Whether this want is the result of changes in the cultural and social climate, the work of
reinforcement spirals brought about by changing media, or perhaps a blend of both, these new
audience wants when it comes to political content have influenced collective points of reference
(Slater, 2007). Since these points of reference are the foundationally critical parts of
internalization and interaction with frames, they pose a significant element to grapple with in
academic research (Cacciatore, Scheufele, and Iyengar, 2016).
The recent change in news dynamics and media circulation, creation, and effect has in
turn required a new lens of critical understanding and analysis. One promising way to meet this
analytical need is through understanding the current media system as being guided by preference.
More specifically, the media ecosystem today is defined by the agency that most media
consumers have to pick and choose what media they interact with based on their personal
preferences and wants. Cacciatore, Scheufele, and Iyengar (2016) confront this new ‘preference-based
model’ and online media affordances by introducing the idea of a fifth age of media effects
research, identifying three main elements of the internet age that have enabled this shift: (1) the
movement by news outlets to more narrowed story coverage that is aimed towards ideologically
fractured publics, (2) a tendency by individuals to select information based on their prior beliefs,
while also being affected by social network processes (echo chambers, filter bubbles), further
narrowing their information diets, and (3) new technologies and algorithms online, such as
search engines and personalized news lists, that structurally narrow news consumption of
individuals (p. 19). In essence, there is a converging problem brought about both by the interests
of social media platforms and by news organizations. Both entities are guided by advertising as
their main source of revenue, and in order to best market on the individual level, social media
platforms have striven to perfect algorithms that keep users engaged and consuming content
longer, mostly by pushing individuals towards similar content to what they have liked and
interacted with in the past (Van Dijck and Poell, 2013). In order to be part of that chosen content,
news media outlets have had to rethink their news product, and that has resulted in an increase in
both softer news frames and more polarized news coverage.
The Impact of Soft News and Infotainment Frames
Previous academic research about soft news and polarization has shown mixed results
both in terms of how content is created, and how it is received by audiences. Some scholars
believe that the systematic trend towards a softening of news is dangerous and that it fails to
truly inform the public, while also creating more feelings of alienation and jadedness (Brants and
Neijens, 1998; Currah, 2009). The pressure on news outlets to perform in an online economy
has prompted a content shift that is influenced by “the clickstream of web consumption,” and
that influence from metrics online has been transformative (Currah, 2009, p. 6). Increasingly,
critics of soft news are worried that the internet news scape has produced a playing field where
the public (guided by priming and exposure effects happening on social media platforms
especially) is opting out of the forms of hard-hitting news that are necessary staples of
democracy (Currah, 2009). Some substantiating examples of this come from numerous studies
conducted about young people’s news consumption habits (Loader, Vromen, and Xenos, 2016;
Antunovic, Parsons and Cooke, 2018).
In these studies, and others like them, the focus was specifically on people young
enough to have always used online media as a source of news, and findings indicate that young
people have different habits of news consumption and different values when it comes to
identifying news worth and merit. The perceived importance of long-established news outlets is
waning amongst younger people, and this may be due in part to a decreasing inclination among
the public at large to invest substantial time in reading, watching, or listening to news
(Antunovic, Parsons, and Cooke, 2018). This lack of investment further amplifies the
fragmentation of media, and in order to break through the disjointed attention span and second
screen use of young people today (Shin, An, and Kim, 2016), news organizations have had to
figure out a way to pitch their stories with a different cultural understanding of story worth. Facts
alone do not seem to be enough to entice readers, especially in younger generational cohorts
(Edgerly et al., 2017). Indeed, what often matters more is social endorsements of stories,
expressed through the ‘virality’ and popularity of a topic on social media (Sherman, Payton,
Hernandez, and Greenfield, 2016; Harrigan et al., 2012).
Podcasting as a medium can offer some immediate gratifications for younger cohorts, if
these proposed media habits and preferences hold true and are actually the express wants of
younger audiences. Podcasts lend themselves well to secondary screen consumption, given that
they are audio-based and do not require a visual component for listeners to engage with. They
also are easily circulated through social media, and popular shows and episodes that receive
more likes, shares, and other forms of attention are thus endorsed for audiences by their peers or
by opinion makers in their selected online circles. The linguistic style of podcasts is also
commonly crafted to feel authentic to listeners, leaning on a more informal tone that is
conversational in nature (Llinares, Fox and Berry, 2018). Infotainment podcasts especially, like
the ones at the focal point of this study, utilize the medium’s predisposition to informality,
intimacy, and authenticity, and blend news and entertainment in a distinctive way that prioritizes
these podcasting touchstones.
All of these elements tied together make infotainment podcasts a natural response to
changing audience habits in the media environment, so it comes as little surprise that
demographic analysis of podcast listeners in the US shows younger people (age 18-34) dominate
the podcast market with a high share of listening (39%) (Edison Research, April 2019).
Interestingly, the ‘news and information’ genre of podcasting is the second most popular genre
among all age demos, with 38% of podcast listeners in the US tuning into some kind of news
podcast regularly (Edison Research, April 2019). Yet while this increasing exposure to news
sources in the podcasting world is arguably a good thing, there has been little differentiation in
the charting metrics between hard news and soft news on platforms that distribute podcasts like
Apple, Spotify and Pandora. In these spaces, the definition of what is news is being left at times
to content creators, though ultimately that call falls to the platforms themselves. As such, many
of the most popular news podcasts, especially in the political news category, are actually more
infotainment than they are hard news offerings, though that distinction is never articulated on
these podcast distribution platforms.
However, despite the staunch opposition from some academics about infotainment, there
are scholars who believe positive gains come from softer news formats. These scholars concede
that there are problems with the general public’s current news consumption habits, but argue that
some of these issues, like inattentiveness, jaded feelings, and political cynicism, can actually be
remedied through the right kinds of soft news (Baym, 2014; Boukes and Boomgaarden, 2015).
There is also an argument to be made that by changing news storytelling practices, media
outlets can better impart messages that might otherwise be too complicated for audiences.
This current research aims to engage with this discussion, and the first means of doing that
involves a primary research question about the framing tactics being used in political
infotainment podcasts. In this study, it is key to first identify what kind of framing tactics are
being used and what tactics are most popular in this genre, before testing for the effects of those
tactics. As such, RQ1 and its sub-questions lead this investigative effort:
RQ1: What framing tactics are being used within political infotainment podcasts?
RQ1a: Do different tactics occur based on political leaning of the podcasts?
RQ1b: Do tactics differ between infotainment podcasts and hard news podcasts?
While more elaborate discussion on method and findings will be included in chapters 3, 4,
and 6, the analysis surrounding this question revealed numerous framing tactics being used in
political infotainment that ranged from hard news to more entertainment based. Ultimately, two
of these frame categories were selected for experimental study: humor and opinion.1 Both of
these framing mechanisms have yielded a substantial amount of scholarship, which is
particularly useful to the current research.
1 Typically one would not discuss any findings in the literature review of a dissertation, as this is a space to converse with past research, not demonstrate results. However, it is important to include literature about the two studied frames (humor and opinion) to provide readers with a fully fleshed out literature context. Past studies on humor and opinion effects are particularly insightful, and in the end, disclosing the chosen tactics and informing readers of their already existing legacies outweighs the somewhat disjointed flow that arises from such an admission.
Humor Framing
Studies have consistently shown that people are capable of learning from humor and that
audiences can hear something comical and come away with more concrete information about a
person or topic, especially if they are open to the speaker and to the topic at hand (Gruner, 1967;
Kowalewski, 2012). But in order to fully appreciate the possible persuasive powers of humor
frames, one must first look at the study of humor in communication research and the theories of
how humor works and what makes something humorous.
Drawing on decades of literature having to do with humor and the theories surrounding
its creation, component makeup, and intentions, Meyer (2000) provides a foundational
landscape for the rhetorical effects of humor to date. This research establishes three existing
concepts for humor and why audiences find something to be humorous: relief theory,
incongruity theory, and superiority theory. Relief theory argues that humor is a momentary
release of stress, and that an audience finds something funny because it lowers the anxiety in a
given moment or situation. Incongruity theory, meanwhile, claims that humor can be found when
something does not fit with an audience’s current context or larger experience set. When things
are unexpected, potentially inappropriate, and incongruent, that can prompt the perception of
humor. Finally, superiority theory argues that humor is a means of differentiating a speaker from
others, or certain things from other things. Through establishing goodness and badness, as well
as superiority and inferiority, humorous moments can be created, and an audience can appreciate
a given story or joke.
In his work, Meyer articulates these theories of what comprises humor as existing on a
continuum and makes mention of four critical effects that humor can have on a population. These
are identification, clarification, enforcement, and differentiation. These effects are not mutually
exclusive and can manifest in different ways. Identification, for one, often works to, “strengthen
the commonality and shared meaning perceived between communicators,” and when an audience
who is sympathetic to the humor experiences it, the speaker can establish a significant level of
credibility with their audience (Meyer, 2000, p. 317; Chang and Gruner, 1981). Clarification, in
comparison, has to do with knowledge and familiarity, and encompasses the capability of humor
to explain a phenomenon, event, or thought to an audience. Lack of familiarity with an issue or
disagreement with a given humorous perspective, meanwhile, can prompt one of two things in
audiences: enforcement of a social schema or norm, or differentiation. Enforcement describes the
phenomenon wherein audiences hear something humorous and have their own ideas and
knowledge reinforced (Graham et al., 1992). Reinforcement implies a directional attitude shift
where people’s attitudes become stronger, either towards the speaker or away from them.
However, if that shift is away from the speaker, or if audiences agree with the speaker but realize
others may not, this can then prompt differentiation (Goldstein, 1976). As such, humor has the
power to divide people, even as it has the power to bring people together (Meyer, 2000).
For the purposes of this study, the most intriguing effects that humor can have are
clarification and enforcement. Both of these effects directly correlate with the chosen dependent
variables in this study: political knowledge and political attitude strengthening. Previous
research about humor-infused news programming has shown that audiences will learn from the
use of humor, especially if they are part of an inattentive public (Baum, 2003). Infotainment
shows, namely late night talk shows, have provided the most prominent vein of humor news
studies, and have demonstrated that people have the ability to learn from infotainment and that
they may strengthen beliefs about an already existing attitude when they are exposed to
humorous frames (Moy, Xenos and Hess, 2005; Kowalewski, 2012; Baym, 2014; Boukes, 2018).
However, while studies exist about humor and its effects, few have used experimental design as
a method, and even fewer have grappled with humor in directional ways.
Meyer’s identified superiority theory of humor in particular, where humor is used to
create multiple teams, with good actors and bad actors, is critically important to understanding
how humor frames work in infotainment spaces. Especially in political infotainment, where there
is often more partisan framing, superiority rhetoric empowers one side over another and relays to
audiences that there is a clearly more favorable group or view in a given context. In superiority
framing there are winners and losers, right answers and wrong opinions, and those frames are
aimed at audiences who then relate to them based on their individual lived experience. Not
everyone regards the winners and losers in situations the same way, because not everyone
supports the same issues or political figures or parties, and so this kind of humor framing can
result in either enforcement of the belief, or differentiation from the speaker.
While some studies in other segments of communication research have looked at these
rhetorical functions of humor before (Ramsey, Knight, Knight and Meyer, 2009), most studies
on humor in news and infotainment have not considered these theories of humor. This has left
the literature with a concept of how humor affects people that is not fully realistic. Research in
this vein often concerns itself with people who have little knowledge of a given issue and who
may be largely inattentive to the political process on the whole. Studies are then designed around
assumptions where humor can sway undecided or uninvested people in either direction of
a given argument. But given what social scientists have come to know about the American
electorate, namely that it has become increasingly polarized over the past few decades, building
effects studies without taking into account people’s already existing belief schemas is faulty.
Very few Americans come to any kind of content without some ingrained opinions
or beliefs about politics, and many Americans regularly tune into content that will reinforce their
existing opinions. As such, researchers need to begin to understand humor in a more directional
way, since people who differentiate from a humor source will likely tune out the content or flat
out reject it, as opposed to being swayed by it in any meaningful fashion.
Opinion Framing
The second softer news frame considered in this dissertation work, which was found to
be the most popular framing tactic in the study, is opinion. Opinion framing is something that has
always, to some extent, existed in news media. News outlets have long included opinion in their
coverage of events, either through editorial opinion pieces, or through the inclusion of punditry
in their actual stories. Scholars, as a result, have continued to study opinion news, and, as with
other forms of soft news, have had mixed reactions as to its usefulness or detriment in society.
Some researchers argue that opinion news can promote more in-depth discussions throughout the
American public that go deeper into issues than hard news that abides by objectivity norms
allows for (Jamieson, Hardy and Romer, 2007). Others have demonstrated that opinion news
alone does not necessarily sway people’s opinions more than hard news (Feldman, 2011).
More recently, however, research has demonstrated that people’s continued exposure to
opinion frames can cause attitude strengthening and more substantial effects on perception and
belief (Lecheler, Keer, Schuck, and Hänggli, 2015). This area of research holds particularly rich
findings for the effects of opinion frames, especially given the way that the high choice media
environment of today continues to allow people to personalize their news content based on
partisanship and preexisting bias. With a system that is capitalizing on people’s increased content
intake, and with social media trying to inspire even more consumption by promoting like-minded
outlets and stories, Americans who have specific opinions or who favor certain frames are more
likely to be repeatedly exposed to those frames. As such, these opinions and beliefs have the
chance to be internalized and more concretely accepted, further exacerbating people’s continued
polarization and arguably making opinion frames more powerful in the process.
Sometimes research about opinion frames is not overtly conceptualized as opinion
framing, but rather is viewed in terms of tone. In these studies, the frames that are studied are
considered to be positive or negative towards a given issue, but ultimately that positive or
negative assessment is by definition an opinion (Jacobs and van der Linden, 2018). This is not
unlike studies of humor which conceive of humor as a manifestation of stated superiority in
some cases. News that provides an opinionated leaning asserts superiority, however overtly or
subtly, about an issue being discussed. As such, it stands to reason that opinion frames and
humor frames may act in similar ways, making them frames that can be studied in the same
analysis credibly. Importantly, studies of this kind, which ascribe good and bad values, also
provide corroborating examples of how directionally framing an argument or issue around a
subjective opinion (usually having to do with goodness or badness) can affect audiences. What
this research has largely demonstrated is that audiences are swayed by these kinds of frames, and
that they are especially moved if those frames reaffirm their preexisting opinions. These findings
lay down the foundation of the current research, which partly seeks to understand the potential
for attitude strengthening that is prompted by political infotainment podcasts.
A Partisan Population: Attitude Strength as a Measure of Polarization
For nearly as long as researchers have studied public opinion, scholars have grappled
with the concept of ‘political attitude’ and the ability of politicians, the media, and other
influential actors to change people’s minds. Theorists and practitioners alike have sought to
understand how people form their political attitudes, and how prolonged debate can affect the
attitudes of audiences. Political advocates and campaigners especially have studied the electorate
and worked continuously to tailor their communicative practice to best sway voters (Karpf,
2016). As such, a significant amount of research regarding political attitude stems from political
communication and political science circles. However, this vein of academic study is also built
on psychology, sociology, and older models of rhetoric and persuasion.
Foundationally, studies of political attitude hinge on the concept of persuasion, because
persuasion is often the best mechanism for changing attitudes. Central to the goal of persuading individuals is the hope of endearing an audience to a given attitude or side (Hovland, Janis and Kelley, 1953). However, some kinds of persuasion are notably easier to attempt than others.
Rhetoricians have continuously found that the easiest means of persuasion involves taking
people with a subtle attitude and amplifying that attitude into something stronger (Rhodes, Toole, and Arpon, 2016). These studies also demonstrate that weak attitudes lack the power to drive
an individual’s actions in their day to day life and do not carry much meaning to the holder, but
when attitudes reach a certain degree of importance in a person’s estimation, they become
critical touchstones of that person’s understanding and world view (Petty & Krosnick, 1995).
Petty and Krosnick (1995) further articulate the importance of strong attitudes versus weak
attitudes with their four benchmarks of strong attitudes: they are persistent over time, they are
resistant to change, they affect the information processing of the person who holds them, and
they can have a strong impact on behavior.
With those four elements in mind, it is easier to comprehend why political campaigns and
other persuaders work so hard to cultivate attitudes from weak or non-existent bases and then
repeatedly strengthen them (Mutz, Sniderman and Brody, 1996). Continued reinforcement and
successful, long-term persuasive strategies are an investment in the attitude itself, and in politics,
where single-issue voting is a heuristic used by many, creating and maintaining strong attitudes
on given topics is incredibly important for campaigns (Perloff, 2013; Brader, 2005). Once
someone has a strong attitude, it is difficult for subsequent persuaders to change their mind and
have them support an opposing attitude (Krosnick et al., 1993). Furthermore, individuals with
strong attitudes are more easily activated towards some kind of action or primed for the
acceptance of other similar attitudes (Rhodes, Toole, and Arpon, 2016). Studies have shown that
in order to potentially have real persuasive impact, persuaders should target people with either no
attitude or a weak attitude on that topic (Perloff, 2013).
Political persuasion that attempts to change attitudes comes in a range of forms. For
campaigns, candidates, and political advocates, there is still substantial investment made in one-
on-one level communication because research indicates that one of the best ways to influence
attitudes is through unmediated, interpersonal communication (Thorson, 2014). Generally,
most Americans appreciate more informal, conversational political dialogues, and that preference
translates into mediated political communications as well (Barton, Castillo and Petrie, 2014).
Easy, straightforward arguments in political advertisements and news coverage are valued more
and are received better than complex discussions, especially among people who do not pay much
attention to politics regularly (Cobb and Kuklinski, 1997; Kazee, 1981). To these inattentive
publics, softer news and infotainment programming has often been a bridge not just for learning
about politics, but for campaigns to engage with potential voters (Moy, Xenos and Hess, 2005;
Serazio, 2018). They represent a ripe opportunity for campaigns and advocates to bring their
message to more people who may otherwise be disengaged and try and change their political
attitudes, and to strengthen the feelings of those who tune in who are engaged.
Yet while many Americans may still qualify as disengaged or inattentive, fewer and fewer people have remained neutral on all political topics or unattached in some ideological way. Thus, even people who have never interacted with a given topic or issue come to it with pre-assigned baggage, including party affiliation and their sense of
social and cultural identity (Mason, 2015). Further, in this era of polarization there has been a
shift from ingroup favoritism to outgroup hostility (Iyengar and Krupenkin, 2018). What this
means is that people are not just persuaded by the merits of an argument and how that argument
lines up with their preexisting beliefs, but also whether or not that attitude is endorsed by the out-
party, or group they do not identify with. For the purposes of this study, it was of the utmost
importance that an issue with little preexisting media attention was selected, allowing people to
start from a weaker base of support or opposition. Also, for the sake of more closely mirroring
the polarized reality of the media ecosystem, conservatives and liberals were divided up and
exposed to pro-attitudinal content, giving this research the ability to truly study attitude
strengthening in a focused, directional way.
This choice in experimental design directly reflects the aims and objectives of RQ2 and
RQ3, as well as their sub-questions seen below:
RQ2: How successful is humor in strengthening political attitudes compared to hard news?
RQ2a: Is humor more successful in strengthening political attitudes with liberals or
conservatives?
RQ3: How successful is opinion in strengthening political attitudes compared to hard news?
RQ3a: Is opinion more successful in strengthening political attitudes with liberals or
conservatives?
Frames of Understanding: Political Knowledge and Recall
The second dependent variable considered in this study is political knowledge. Political
knowledge is often conceptualized as a single, but influential, component of the study of
political attitudes. At its most basic level, political knowledge is a collection of political facts, or,
“the various bits of information about politics that citizens hold” (Delli Carpini and
1993, p. 1179). These facts can range in their intricacy and detail, and can be concretely
internalized (remaining with the person long after it is learned), or can be more temporary, and
eventually forgotten. Academic studies of political knowledge are also more recent than studies of political attitudes on the whole, but they draw from a long tradition of polling in the United
States that often tries to assess how much political knowledge Americans have on a given issue
(Delli Carpini and Keeter, 1993).
When political knowledge is taken from a concept to a concrete measure, the variable is
often divided into two different segments of facts: static and surveillance (Barabas, Jerit, Pollock,
and Rainey, 2014). Static facts are by definition more foundational. This kind of political knowledge often concerns the basic processes and functions of politics and government, and it is internalized over time by citizens and remembered for longer periods.
These are pieces of information that researchers and pollsters have assessed as being basic
enough that any citizen in the US who has gone through a grade school education should know
them (Delli Carpini and Keeter, 1993). Surveillance facts, meanwhile, speak to a more situational political knowledge. These are facts that are provided to audiences, often through
through the mass media, and do not have direct bearing on static understandings of the
government and politics (Barabas, Jerit, Pollock, and Rainey, 2014).
Usually, analysis of situational political knowledge is referred to as recall research, and
though there is a differentiation between recall and political knowledge, recall falls under the
realm of political knowledge. In public polling, situational political knowledge is often used to
track how much certain stories or events permeate into public awareness. Scholars of journalism
and political communication have also adapted this measurement of situational political
knowledge (often referred to as recall or reception) to gain insight into what messages audiences
remember and how best to inform audiences in a changing media landscape (Booth, 1970; Price
and Zaller, 1993; Bode, 2016). Studies of this kind acknowledge that recall has less to do with
long-term knowledge, but these short-term assessments are still key to understanding learning
and effects that happen in mass media contexts. Furthermore, measuring the level of detail that
participants in a study can recall can establish how much learning has taken place, even if the
learning is more situational in nature (Bode, 2016).
Previous research about mass media and political knowledge has produced mixed results.
Some studies indicate that citizens who regularly engage with hard news have higher bases of
political knowledge. These works also show that this higher base of political knowledge can lead
people to be more politically active than their less engaged peers (Prior, 2005; Lee and Wei,
2008). However, scholars who have studied soft news have established that more entertaining
content can actually inform people to a higher degree, especially those individuals who are not so
engaged with the news on a regular basis, resulting in higher recall rates of political information
(Merle and Craig, 2012; Xu, 2014). The explanations for these discrepancies vary, with some studies claiming that a more relaxed tone and entertaining tactics make content more accessible (Boukes et al., 2015), while others see a more direct connection between a person’s enjoyment of
content and their ability to remember and learn from it (Xu, 2014). Nevertheless, for as many
scholars who find that softer news has potential benefits, there are just as many, especially in
studies of political knowledge, who urge caution about making claims one way or another (Bode,
2016). On the whole, more research is needed, especially when assessing news media that is
spread or shared through new and social media.
Despite the continued lack of definitive answers in this area of study, this work is
increasingly important given the current movement from more objective media to more
subjective norms, and from serious tones to more informal and conversational ones. To date,
studies of political knowledge as it relates to hard and soft news frames have remained elusive in podcast research. Indeed, the closest segment of study has been based in advertising and market research, which has focused on brand recall based on podcast ads and sponsorships
(Fischer, 2019). As such, the contribution of this current study will be more than just another
entry into the conversation about whether hard news or soft news is best for increasing political
knowledge. Instead, there will also be new evidence involving a yet-to-be-studied medium that is largely finding its footing in a more subjective and entertaining political sphere.
In the current research, political knowledge was measured on both levels. The
foundational level was measured in the post-test as a means of ensuring that there were no major
discrepancies between groups in the experiment. Large discrepancies may have indicated other
reasons for any changes in situational political knowledge and political attitude strengthening,
but the study uncovered no differences that would cause reason for concern. Situational political
knowledge, however, was the main variable of interest in this study, and the subject of RQ4,
RQ5, and their sub-queries:
RQ4: How successful is humor in increasing political knowledge as compared to ‘hard news’?
RQ4a: Is humor more successful in increasing political knowledge with liberals or
conservatives?
RQ5: How successful is opinion in increasing political knowledge as compared to hard news
framing?
RQ5a: Is opinion more successful in increasing political knowledge with liberals or
conservatives?
Summary and Research Questions
Guided by these stated areas of research and inquiry, this project aims to expand the conversation about what counts as news, what political news and infotainment can look like in
the digital media ecosystem, and how the current age of media hybridity defines many of the
effects and measures of impact seen today. Examining political infotainment podcasts will bridge
the gap of these many concepts, and all are needed in order to critically engage with this style of
content and the effects it can have. Moving forward, these identified theories and academic
discussions will be directly reflected both in the methodology of this study and in the analysis of
its findings, elevating this research from something that is merely exploratory into something
narrowed in and focused on tangible effects. The following research questions were created as a
means to those ends:
RQ1: What framing tactics are being used within political infotainment podcasts?
RQ1a: Do different tactics occur based on political leaning of the podcasts?
RQ1b: Do tactics differ between infotainment podcasts and hard news podcasts?
RQ2: How successful is humor in strengthening political attitudes compared to ‘hard news’?
RQ2a: Is humor more successful in strengthening political attitudes with liberals or
conservatives?
RQ3: How successful is opinion in strengthening political attitudes compared to ‘hard news’?
RQ3a: Is opinion more successful in strengthening political attitudes with liberals or
conservatives?
RQ4: How successful is humor in increasing political knowledge as compared to ‘hard news’?
RQ4a: Is humor more successful in increasing political knowledge with liberals or
conservatives?
RQ5: How successful is opinion in increasing political knowledge as compared to hard news
framing?
RQ5a: Is opinion more successful in increasing political knowledge with liberals or
conservatives?
CHAPTER 3
METHODS
In the face of a substantial shortage of literature on political podcasts and the role of podcasts in news spaces, this project has undertaken a multi-step process to begin much-needed
analysis on this type of media. First, the storytelling strategies of these podcasts were identified,
then the effects of these strategies were examined. The storytelling strategies were mapped out
via qualitative content analysis, and the effects were studied using an online survey experiment.
Phase One: Content Analysis
To begin answering the research questions posed in this work, phase one consists of a
qualitative content analysis that examines four selected podcasts at both a micro and a macro
level. Three of these podcasts are US examples of political infotainment, and one podcast is a
hard news show from the United Kingdom. The selection of these podcasts is detailed below.
Further, the micro-level analysis conducted in this phase, which was analyzed at the sentence
level, developed a coding schematic that represents individual framing tactics used in these
podcasts. That analysis will be referred to in chapter 4 as the ‘Descriptive Analysis’ stage. The
macro-level analysis conducted, in comparison, makes more general claims based on thematic
patterns in these podcasts, and those findings and procedures are referred to in chapter 4 as
‘Thematic Analysis.’
Podcast Selection
In order to conduct a wide-ranging content analysis that could yield robust findings
about the tactics used in political news podcasts, and the themes of these political conversations,
an exploratory assessment of the political podcast field in the US was undertaken. This
assessment preceded the formal content analysis, and began with a basic investigation of roughly how many political podcasts exist, and how many of the top podcasts on platforms like iTunes, Spotify, and others were political in nature. While this assessment followed no formal methodological protocol, this first stage of research provided insights into the growing popularity
of political news podcasts. Thousands of listeners were commenting, reviewing, and interacting
with hundreds of political podcasts on these platforms and on social media. Further inquiry also
yielded different listener tracker data, which showed not only the wide-reaching impact of
podcasts on the whole, but how political news podcasts are a staple of popular podcasting
(Nielsen Podcast Insights, 2019). With this study rationale cemented, streaming charts
like iTunes, Spotify, and Pandora were used to find what political podcasts had the most
listeners, while targeted searching was used to assess where popular podcasts fell on an
ideological spectrum (ranging from conservative to liberal).
Using a diverse case selection method, this project sought three political podcasts that ranged in their political leanings while still being considered infotainment (Seawright and
Gerring, 2008). The top two performing political podcasts in the country at the time of selection,
The Ben Shapiro Show (conservative) and Pod Save America (liberal), were chosen for further
study. With comparable audience reach (each accruing more than 15 million downloads a
month), but nearly opposite (while still mainstream) political views, The Ben Shapiro Show and
Pod Save America both qualify as infotainment. They are also both part of smaller, independent
media companies (The Daily Wire and Crooked Media) and utilize nearly identical revenue
processes, subsidizing their content and creating profit primarily from their use of embedded
advertising. No hosts on these shows make any claims about being journalists or doing their own
investigative reporting. However, they market themselves as relaying news of the day, padding
cited news coverage from other sources with entertainment tactics like punditry and humor.
Thus, while the podcasts themselves disagree on nearly every political topic discussed, their
format and content style made them easily comparable artifacts.
Selecting a third podcast that used an infotainment style while remaining more politically independent proved difficult. After wading through dozens of options online, and deciding that
the diverse cases approach would weight a difference in style, tone, and ideology over a
comparable audience reach, Slate’s Political Gabfest was selected. While Political Gabfest does
not draw millions of unique listeners each month, it compares to the Ben Shapiro Show and Pod
Save America on three critical levels: episode formatting, story structure, and narrative style (See
Table 1). Political Gabfest may be anchored by three professional journalists, but the informality
of their tone and the subjective opinions that are constantly shared make it infotainment by
design. The podcast does not do any of its own reporting, and relies almost exclusively on the
work of other journalists to shape informal conversations, making it a soft news product. What
Political Gabfest does offer, however, is a difference in political ideology. Instead of openly
supporting one party over another (the three hosts have a range of political views and beliefs),
Political Gabfest looks at politics through the lens of journalists. If there is a side that Political
Gabfest is on, it is the side of the press, an entity separate from politics, while also being deeply
entrenched in the process. That perspective is not unique to this one show. In fact, there are many
other popular political podcasts run by journalists, but Political Gabfest offered the most
similarity to the other two selected podcasts when considering story structure, length, and style.
Finally, the fourth and final podcast studied in this dissertation is the Global News
Podcast, a podcast put out by the BBC in the UK each day. The Global News Podcast presents
the top stories around the globe, not just in the US, and is the most listened to news podcast in
the world (28 million unique listeners per month). This podcast was chosen as a means of
comparison, as it falls resolutely into a ‘hard news’ genre, adhering to norms of objectivity and
seriousness. The Global News Podcast also lacks much of the perceived bias that many media
outlets regularly grapple with. Finding hard news podcasts that would not raise concerns about bias within media was difficult. Because this research considers political bias a significant potential influence, choosing a source outside of domestic debates over which news outlets are and are not biased seemed the most credible way forward. Also, while it might have been possible to draw
entirely on older coding schematics for hard news sources (Valkenburg, Semetko, de Vreese,
1999; Carpenter, 2007), this project includes the analysis of a hard news podcast to be certain
that there are no new or different tactics that are unique to podcasts or to audio-based news
media (which has been less studied of late).
Table 1. Podcast Audience Reach and Host Social Media Presence

Podcast               Audience       Rank as of Spring 2020 (U.S.)   Twitter Followers
Pod Save America      15 mil/month   18 on iTunes                    3.65 mil (combined)
Political Gabfest     75,000/week    100 on iTunes                   2.07 mil (combined)
Ben Shapiro Show      15 mil/month   8 on iTunes                     2.8 mil
Global News Podcast   28 mil/month   110 on iTunes                   28 mil (BBC)
From the four podcasts, ten random episodes of each were selected using the random week sampling method (Riffe, Aust and Lacy, 1993). For the political infotainment podcasts, the ten weeks selected were the same, so that there might be overlap in story coverage across the chosen shows. Such overlap allowed for better understanding of how each
side of the political aisle framed topics and contextualized events.2 These episodes were then
transcribed in full, allowing for analysis that was both audio and text-based in nature.
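As a purely illustrative aside, the random week sampling procedure described above can be sketched in a few lines of Python. The date range, seed, and function name here are hypothetical placeholders for demonstration, not the dates or tooling actually used in the study.

```python
import random
from datetime import date, timedelta

def sample_random_weeks(start, end, n_weeks, seed=None):
    """Pick n_weeks distinct Monday-anchored weeks between start and end."""
    rng = random.Random(seed)
    # Anchor to the first Monday on or after the start date.
    first_monday = start + timedelta(days=(7 - start.weekday()) % 7)
    total_weeks = ((end - first_monday).days // 7) + 1
    chosen = rng.sample(range(total_weeks), n_weeks)
    return sorted(first_monday + timedelta(weeks=i) for i in chosen)

# Hypothetical example: ten distinct weeks drawn from calendar year 2019,
# shared across the three infotainment shows so coverage can overlap.
weeks = sample_random_weeks(date(2019, 1, 1), date(2019, 12, 31), 10, seed=42)
```

Fixing the seed makes the draw reproducible, and reusing the same list of week-start dates for each show is what allows the overlap in story coverage described above.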
Qualitative Content Analysis
Qualitative content analysis is a method that provides researchers with the opportunity to
explore and to describe while leaving room for some creativity. Through allowing researchers to
create new codes and establish new schemas of understanding, qualitative content analysis
provides the freedom to fully define a linguistic landscape (Mayring, 2004). That opportunity is
much needed in this current research because existing coding schematics from past studies are
not fully comprehensive when it comes to political infotainment tactics. Research on softer news styles, which are similar to but still distinct from these podcasts, has yielded two main thematic areas where codes might fall: soft news tactics and hard news tactics (Reinemann et al., 2012). In
this study, those thematic areas have been varied somewhat by labeling most hard news tactics as
‘journalism tactics’, and most softer tactics as ‘entertainment tactics.’ Furthermore, for this
project, the breakdown of softness and hardness are not enough to make any substantial
categorizing model, and a more nuanced approach is needed.
The established codebook in this research needs to account for the journalistic norms of
objectivity and seriousness that are upheld in the hard news podcast and altered in the
infotainment podcasts. A breakdown of codes as hard or soft largely captures the range of
seriousness norms that can exist in these genres of podcasts, but it fails to fully account for
objectivity. Existing research on talk radio and opinion news helped in expanding this process to
2 The studied episodes of the Global News Podcast were not sampled from the same random 10 weeks. This is because their parent company, the British Broadcasting Corporation, only keeps podcast episodes live and downloadable for 30 days. The company does not archive episodes, and did not possess audio or transcripts of the chosen weeks. As such, ten separate weeks were chosen for study in the project, but given the Global News Podcast’s focus on international stories, the difference in coverage was not an obvious impairment.
make room for this need. These studies, by definition, are focused on a compromising of objectivity, and they have defined ‘punditry and opinion’ as a theme in itself, composed of multiple kinds of codes that fall under this opinion umbrella (Sobieraj and Berry, 2011).
Bridging these coding schemas together, while also adding newer elements requires a degree of
flexibility, and flexibility is the main strength that qualitative content analysis has to offer.
At its core, qualitative content analysis investigates language choices and tactics,
focusing on both content and the context surrounding that content (Hsieh and Shannon, 2005).
Analysis goes beyond what is said or written or viewed, and looks to other factors that may
influence meaning making. From this process, overarching themes can be identified and explored
at the same time that quantifiable data is collected. Codes can be tracked, counted, and assessed while more holistic deductions are made. As a result, many studies using qualitative content analysis are interpretive and naturalistic, and can be seen as more subjective (Hsieh and Shannon, 2005). Given this tendency, researchers using this method need to find
ways to bolster their research framework. This can be done by conducting qualitative content
analysis in ways that prioritize integrity of findings over subjective opinion (Mayring, 2004), and
by supplementing qualitative content analysis with other, more empirical, methods as well
(Onwuegbuzie, Slate, Leech, and Collins, 2007). It is for this reason, as well as an interest in
RQ2, RQ3, RQ4 and RQ 5, which cannot be answered through content analysis alone, that this
project also includes an experimental component.
Developing the Coding Scheme
In preparation for this dissertation, and for this specific phase of content analysis, a pilot
study that established some preliminary coding guides for this research was conducted in 2017. It
compared one of the chosen podcasts, Pod Save America, with another podcast from the digital
media company ‘Crooked Media,’ Pod Save the People. The pilot examined two episodes in
total, and compiled a list of preliminary tactics from a total of two hours of audio programming.
While many insights were gained from this pilot examination, one serious drawback was the
limited number of episodes. With an analysis of only two shows, the ability to assess frequency
and any kind of storytelling patterns was greatly diminished. Despite this, the pilot did yield
helpful insights about how best to organize the emerging themes and tactics present in these podcasts. This descriptive coding, done in the style of Saldaña (2009, p. 70), also helped to establish basic vocabulary building blocks that have been used and expanded in the current study.
The resulting codebook established during the descriptive cycle of the pilot process
revealed seventeen unique tactics, a list of which can be found in Table 2. From these tactics,
four different focused thematic groups emerged: journalistic elements, entertaining tactics,
partisan positioning, and sponsorship. Of the four themes present, journalistic elements were the
most popular, taking up the most substantial amount of time across the studied episodes, but
entertainment tactics were not far behind. Through tactics like purposeful jokes, humorous
expletive use, and pop culture references, the overall messaging and the style and tone of
information being provided to the audience was less serious than hard news. The pilot concluded
that in the case of Pod Save America, this is a purposeful shift in style predicated on the belief by
the podcast creators that messages resonate better with audiences if they feel more
conversational and authentic rather than rigid and premeditated (Zengerler, November 2017).
The high rate of these entertainment and partisan tactics also substantiates the categorization of
Pod Save America as a piece of infotainment and sets a categorizing precedent for the other
two soft news podcasts that were selected for the larger study.
Table 2. Pilot Study Identified Codes and Themes
Focused Themes and Descriptive Tactics (Codes)

Journalistic Elements:
Directed Question (DQ), Expert Sources (ES), Informative News (IN),
Pundit-like Interpretation (PLI), Personal Testimonial (PT), Journalist Shout Out (JSO)

Entertaining Tactics:
Expletive Use (EU), Purposeful Joke (PJ), Pop Culture Reference (PCR), Audience Action (AA)

Partisan Positioning:
Oppositional Source (OS), Anti-Right Wing, overt (ARW)

Sponsorship:
Topic Tie-In, in ad (TTI), Sponsored Content (SC), Crooked Media Promo (CMP),
Non-ad Sponsor Shout Out (SSO)
Yet while the pilot study served its intended functions and offered useful insight,
more work was needed to merge the new coding models with others drawn from existing
literature on news as a whole. Many content analyses over the years have been conducted with
the intention of identifying framing tactics and topic codes in news spaces (Valkenburg, Semetko
and de Vreese, 1999; de Vreese, 2005; Carpenter, 2007). However, there is no standardized
codebook that spans journalism scholarship. As such, detailed reading of the literature was required to
identify common threads and group together consistently emerging themes. Table 3 begins the
work of outlining the most prominent themes uncovered during the literature assessment,
highlighting general frames that have become standard pillars in study of news framing.
However, more thorough examination of the literature also revealed that many past studies have
assessed framing tactics at the full-story level. This unit of analysis has proved useful, but also
produces more general findings (Saldaña, 2009). Story-level analysis, by definition, precludes
researchers from assessing frames at shorter intervals, and, in the interest of identifying
the multitude of smaller frames that might ultimately sway a story into a given category, the
current research has used sentences as the unit of analysis instead of stories.
Table 3. Hard News Framing Tactics and Story Styles (Valkenburg, Semetko, de Vreese,
1999; Carpenter, 2007)
General Frames (Tactics) and Definitions

Conflict/Action: Emphasis on conflict amongst groups, organizations and individuals
Human Interest: Emphasis on human individuals in the event covered
Responsibility: Emphasis on party/person responsibility
Economic Consequences: Emphasis on the economic impact of a given event or story
Media Self-referential: Emphasis on news media
Diagnostic: Emphasis on what caused/started the perceived problem
Editorial: Opinion-based storytelling disclosed as being more biased
Expert Sourcing: Purposefully cites 'experts' as bearers of fact
To ensure that the resulting codebook would be viable and valid, a pre-testing phase was
undertaken that included an intercoder reliability check. That check resulted in a 94% level of
agreement and a Cohen's kappa of .70, a level substantiated as acceptable in political
communication research by scholars like Lombard, Snyder-Duch and Bracken (2002) and
Mayring (2004). Upon further investigation, the initial kappa score appeared to be depressed by
confusion between coders about what needed to be coded within a story and
what did not.3 Careful comparison and subsequent resolution of those discrepancies
brought the overall kappa score to a level of .75.
3 For example, each podcast had segments at the beginning and end of the show that gave production credits and other housekeeping items to listeners, but since these blurbs (and some other commentary) fell outside of an established and identified news story, they were not coded. Similarly, ad blocks and sponsored segment stories were not coded, and there was some confusion over whether those ads should be ignored altogether within the NVivo apparatus, or whether they should be coded as 'other/non-coded.'
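As an illustration of the reliability statistic reported above, Cohen's kappa can be computed from two coders' category labels on the same units. The function below is a standard stdlib-only implementation; the toy sentence-level codes are hypothetical, not the dissertation's actual coding data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category labels on the same units."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of units both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentence-level codes assigned by two coders:
a = ["PJS", "FF", "FF", "CTA", "FF", "PJS", "FF", "CTA"]
b = ["PJS", "FF", "PCR", "CTA", "FF", "PJS", "FF", "FF"]
print(round(cohens_kappa(a, b), 3))  # -> 0.619
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why a 94% agreement rate can coexist with a kappa near .70 when a few categories dominate the coding.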
In the interest of analyzing these podcasts at both the descriptive sentence level and at the
larger thematic story level, two distinct codebooks were needed, and two separate coding
regimens were applied. The first is more general, consisting of story topics and themes, as well
as coding for the title of the podcast, the length of the selected story, and how many people were
speaking in the clip. Table 4 lists the twelve topics that were established during coding. For a
more thorough description and differentiation of these codes, as well as the sentence-level codes,
please see Appendix A (p. 172). Coding at this level allowed for some baseline conclusions to be
made about what stories each political podcast was covering, how much time they allotted for
different events and types of stories, and what kind of stories they favored or ignored. Findings
regarding those elements will be discussed in chapter 4.
Table 4. Topic Codes
Topics (Codes)

Domestic Story (DS)
International Story (IS)
US Elections and Political Campaigns (USEP)
US Policy (USP)
Policy of a Foreign Country (PFC)
Tech and Business (TB)
Science and Medicine (SM)
Other Media Coverage (OMC)
Pop Culture, Arts and Sports (PCAS)
Interview with Guest (IG)
Podcast Game or Special Segment (PG)
Trump Story (TS)
Meanwhile, the first few phases of coding and categorization undertaken in this work
prompted a substantial shift in coding schema. By blending findings from the pilot with
other work on opinion and humor framing, a new codebook was conceived that established
22 unique tactics used at the sentence level of these podcasts (see Table 5). Five
overarching groups of tactics emerged, three of which (journalism tactics, entertainment
tactics, and advertising tactics) had been assessed at some point in the prior pilot study and
literature review. The most developed of these groups to date was the hard news section, and most of
the tactics identified in that group, including expert and community sources, personal
testimonial, directed questions, and direct quotations, have previously been chronicled and
catalogued in academic study (Valkenburg, Semetko, de Vreese, 1999; Carpenter, 2007).
The entertainment tactics and the opinion/punditry tactics were somewhat developed in
academic literature, but had not been included in larger news studies for the most part.
Entertainment elements like purposeful jokes, sarcasm, pop culture references and more have
been catalogued in humor studies (Cantor, 1976), while opinion tactics like roasting, media
coverage criticism and different partisan codes have been identified in studies of conservative
talk radio (Keith, 2008). Although previous studies have acknowledged advertising and
sponsorship, few have coded for the integrated level of self-promotion and sponsored
content that was very prevalent in the chosen political podcasts outside of explicit ad segments.
As such, all of these themes required expansion for this research, with each layer being added to
on an individual framing tactic level. Furthermore, another separate group, titled ‘Mobilization’
was also included to address consistent calls to action that were present in more than one podcast
at numerous times.
Table 5. Tactic Codebook
Focused Themes and Framing Tactics (Codes)

Hard News:
Directed Question (DQE), Cited Quote (CQ), Expert Sources (ES), Community Sources (CS),
Facts and Figures (FF), Personal Testimonial (PT), Media Citation (MC), Audio Clip (AC)

Entertainment:
Purposeful Jokes and Sarcasm (PJS), Pop Culture Reference (PCR), Meme Reference (MR),
Roasting (R), Celebrity Feature (CF)

Mobilizing:
Call to Action (CTA)

Punditry and Opinion:
Democrat Assessment, Positive (DAP), Democrat Assessment, Negative (DAN),
Republican Assessment, Positive (RAP), Republican Assessment, Negative (RAN),
Media Coverage Criticism (MCC), Media Coverage Endorsement (MCE)

Advertising:
Self-Promotion (SP), Non-ad Sponsor Shout Out (SSO)
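For readers who wish to work with the codebook computationally, Table 5 can be represented as a simple theme-to-codes mapping. The dictionary below mirrors the table's contents exactly, while the lookup helper is an illustrative convenience and not part of the dissertation's NVivo workflow:

```python
# Table 5 as a theme -> {code: tactic} mapping (contents taken from the table).
TACTIC_CODEBOOK = {
    "Hard News": {"DQE": "Directed Question", "CQ": "Cited Quote",
                  "ES": "Expert Sources", "CS": "Community Sources",
                  "FF": "Facts and Figures", "PT": "Personal Testimonial",
                  "MC": "Media Citation", "AC": "Audio Clip"},
    "Entertainment": {"PJS": "Purposeful Jokes and Sarcasm",
                      "PCR": "Pop Culture Reference", "MR": "Meme Reference",
                      "R": "Roasting", "CF": "Celebrity Feature"},
    "Mobilizing": {"CTA": "Call to Action"},
    "Punditry and Opinion": {"DAP": "Democrat Assessment (Positive)",
                             "DAN": "Democrat Assessment (Negative)",
                             "RAP": "Republican Assessment (Positive)",
                             "RAN": "Republican Assessment (Negative)",
                             "MCC": "Media Coverage Criticism",
                             "MCE": "Media Coverage Endorsement"},
    "Advertising": {"SP": "Self-Promotion", "SSO": "Non-ad Sponsor Shout Out"},
}

def theme_of(code):
    """Look up which focused theme a sentence-level code belongs to."""
    for theme, codes in TACTIC_CODEBOOK.items():
        if code in codes:
            return theme
    raise KeyError(code)

# 22 tactics across five themes, as the chapter reports.
assert sum(len(c) for c in TACTIC_CODEBOOK.values()) == 22
print(theme_of("PJS"))  # -> Entertainment
```

A structure like this makes it straightforward to tally tactic frequencies by theme once sentence-level codes have been assigned.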
With the two necessary codebooks established, completing a content analysis at both the
sentence level and story level was possible, and findings (though further discussed and
elaborated in chapter 4) had a direct impact on the second phase of research: an online survey
experiment. In the end, the two most frequently used soft news framing tactic types, humor and
opinion, were selected for testing. The resulting survey experiment was designed to incorporate
podcast story clips that emulated the frequently seen framing tactics in both of those categories
and also in a hard news comparison group.
Phase Two: Online Survey Experiment
With the benefit of a thorough content analysis, and the insights provided from said
assessment, the second phase of research, an online survey experiment, could begin. This
involved a multi-step process where a survey experiment was created, audio materials emulating
podcast stories were produced and piloted, the experimental procedure was pre-registered, and
the project was eventually launched on Mechanical Turk.
Creating Podcast Stories
To begin the launch of an informed and credible online survey experiment, two
necessary steps needed to be accomplished. The first was creating the scripted podcast
clips that would be featured in the experiment. This was an undertaking of some concern, given
that most experiments test with already-crafted news material. Most preexisting
experimentation that created content used print news or written narrative as its medium of study
(Green and Brock, 2000). Some studies also created or edited their own materials for television
and video-based stories (Cappella and Jamieson, 1996), and some particularly ambitious
researchers from the past prepared television clips, radio segments, and written news content
(Faccoro and DeFleur, 1993). There are also growing trends towards recreating social media
news feeds and other online news content/curating mechanisms (Bode, 2016), but no search of
past literature produced any kind of podcast experimentation with original content. Certainly, no
work had been done with political podcasts specifically. As such, this was a new and somewhat
uncharted terrain in experimentation, requiring careful consideration of any and all confounding
effects and control for differences between groups.
Despite the infotainment framing tactics being used, the podcast clips created for this
study still needed to control for language that might be too politically charged to be fruitful.
They also needed to walk the line between seeming organic and believable while also staying
even across ideological groups. Each and every tactic used, therefore, needed to be purposeful
and strategic, and managing an even distribution was difficult. Rigorous drafting was required to
ensure that the chosen variables of humor and opinion were actually being created and imparted
to listeners. Content needed to truly be humorous and to seem authentic to the given ideological
leaning. To ensure that this was the case, a number of outside script readers were consulted, who
ranged in their own personal political beliefs and whose feedback was assessed and internalized
in subsequent drafts. One limitation of this process is that no official pre-testing was conducted
with these materials. Funding restrictions limited this possibility, but, as clarified above, every
feasible measure was taken to create quality audio.
At the same time, the entertainment or opinion variables in these created clips could not
outweigh the informative need of all of these podcasts. Because political knowledge was a major
variable for testing, there needed to be a shared level of baseline information across all audio
treatments, otherwise results may have been skewed in those measures. As such, a strict guide
was needed across the ideological offerings that made sure to emphasize core facts to the same
degree. Creating the hard news audio, in comparison to the infotainment clips, proved to be
substantially easier, as the Global News Podcast frequently featured short stories that consisted
of a host monologue which imparted straight facts and news. One actor, who studied the
seriousness of tone and style in the Global News Podcast, recorded that audio, sticking to the
detailed script the drafting process yielded. The infotainment podcasts, meanwhile, used two
hosts, one male and one female, to create a dialogue, and a conversational tone in the clips. All
three of these voice actors have had previous experience in podcasting and radio, and their
choices of tone, style, and inflection were all purposefully made to enhance the quality of these
audio clips.
The materials created for this project eventually consisted of roughly two-minute-long
clips about a recently proposed bill, The Universal School Meals Program Act. The central
premise of the bill is to increase funding to public schools to provide three free meals per day for
all students in grades K-12, and to ensure that those provided meals reflect a certain level of
nutrition (S.2609 - Universal School Meals Program Act of 2019). It was proposed in October of
2019 by Democratic leaders in Congress, but has not received substantial media coverage since its
launch, and had received virtually no coverage prior to the launch of this experiment. The bill is
currently held up in the Senate, with no vote planned to pass it, but the reason for choosing this
legislation as the story to manipulate for testing was twofold.
First, this bill can be painted in partisan ways. Because it involves spending tax dollars
and government intervention, there are somewhat clear lines of where different politicized
parties may fall on the issue, and there is some basis of pre-existing attitude for participants even
if they had never previously heard of the USMPA. Creating a conservative ideological lean
(which would be against this bill) and a liberal lean (which would support it) as well as a non-
partisan hard news version was easily done. Second, this bill, while sponsored by identifiable
politicians like Senator Bernie Sanders and Rep. Ilhan Omar, had not penetrated into top tier
media coverage at the time of the experiment. As such, existing public opinion on this bill and on
this kind of policy was negligible, allowing for a larger potential change in attitude for
experiment participants, and for testing of opinion formation as well as strengthening. It was
important to choose a low-salience issue for a study of this kind, because choosing any sort of
highly contested and overtly partisan policy could backfire (Prior, 2007; Nyhan and Reifler,
2010; Wood and Porter, 2016).
The chosen topic also needed to be something that could be framed with humor or with
opinion in a successful way. Any topic considered too heavy or controversial could make that
framing difficult. In the humor group, tactics like purposeful jokes, sarcasm, and pop culture
references were used numerous times to create an overall humor slant for the story, something
that might not be possible with a darker or heavier subject. The opinion group, by contrast,
included assessments of each political party and the policy being discussed in the story as well as
general opinion, hyperbole, and criticism or endorsement of surrounding media discourse. To
preserve the integrity of these treatments, all humor and opinion audio materials were crafted to
evenly blend factual information with their entertainment elements, and the same broad facts
about the chosen story were shared across groups and across ideological leaning. These shared
facts, as well as the carefully constructed framing tactics of each treatment and condition can be
found in the written scripts used by the experimental media producers in Appendix C (p. 188).
Preregistration
With this step completed, the project was then preregistered with OSF. In order to
preregister, a solidified survey plan was needed, as well as a full articulation of variables of
study, sampling procedure, blinding process and more. These definitions could not be just
surface level – any and all concepts needed to be fleshed out to their fullest extent prior to
preregistration and prior to launch of the study. Preregistration for this phase of the dissertation
included the four research questions concerning political attitude and political knowledge (RQ2-
RQ5). However, the preregistration did not include the sub-questions outlined in this
dissertation, as these were more exploratory inquiries drawn up from preliminary findings. Along
with the research questions, the preregistration also included all audio scripts, and the same pre-
test and post-test questions used in this survey experiment.
The benefits of preregistering in this way are significant, as preregistration assists in
keeping track of predictions and postdictions (Nosek, Ebersole, DeHaven, and Mellor, 2018).
in turn reduces the effect of hindsight bias, something that can undercut the validity of results for
different kinds of experimentation. Furthermore, preregistering the study also helped to install a
pre-planned analytic framework and to solidify what statistical processes would be used to
analyze results. The decision to use ANOVAs was made and defended in the preregistration
process, and the different variables that would be compared or retested and cross examined were
outlined. All of the variables discussed at length later in this chapter stem directly from the
preregistration. One difference, however, was that the preregistration offered more kinds of
analysis than were ultimately undertaken. This stemmed from a still-evolving understanding of
how in-depth a preregistration needed to be and how many variables would need to be analyzed
in this dissertation report. In the end, only the analyses that directly had to do with political
attitude strengthening and political knowledge were calculated. The results of these analyses can
be seen in chapter 5.
In terms of design for this experiment, the preregistration anticipated that this
investigation would consist of a straightforward survey with identical pretests and posttests that
all participants were exposed to (see survey design, Appendix B, p. 183). In between the pre-test
and post-test were four possible treatment types: opinion, humor, hard news, and
a control group that received no treatment. These audio treatments were designed to emulate a
short podcast story clip and followed established storytelling styles used by all of the studied
podcasts, as discussed above. Further, the opinion and humor treatments were broken down into
two distinct groups, one involving a podcast story using liberal framing and one using
conservative framing. This resulted in five possible audio conditions for subject exposure, and
one segment of the studied participants who received no audio treatment.
When designing this experiment the intention was always, first and foremost, to expand
preexisting studies of infotainment and soft news, and to analyze infotainment effects in a way
that best mirrors the actual media environment today. Because podcasts are a digital medium and
an artifact of the internet age, testing their effects without consideration for how people use the
internet and consume content in digital spaces would be a mistake. As such, the design of this
experiment was constructed in a way that takes today’s high choice media environment into
account (Prior, 2005). Because a large segment of the news consuming population in America is
now interested in interacting with materials that reaffirm their pre-existing beliefs, cross-view
news consumption has been decreasing (Van Aelst et al., 2017). Americans, and digital
consumers around the world, are increasingly choosing content or being led to content that
reaffirms their personal value system and political alignment. Indeed, that tendency is seen not
just in the media at large but also in the audience demographics of these chosen podcasts. Both
Pod Save America and The Ben Shapiro Show boast that they reach individuals across the aisle
from their given political leaning. However, the majority of their listeners are people who
already subscribe to the given ideology.
This research, and its accompanying preregistration, attempted to honor that actual
process of content selection and engagement amongst podcast listeners by conducting analysis
with a purely pro-attitudinal approach. This kind of research design is not one often used in
experimental study on infotainment and soft news, but recent experiments conducted by Panek
(2016) and others have demonstrated that individuals in a high-choice news media environment
are more likely to choose softer, more ideologically biased news over hard news. The intention
of this current experimental undertaking is thus to recreate a more realistic media dynamic and to
determine effects of these softer news framing tactics in an environment that better mirrors
people’s natural media selection patterns. A pro-attitudinal design can likewise provide
interesting insights when considering the two defining dependent variables of this study: political
attitude strength, and political knowledge.
Importantly, this research design was fashioned to reflect the core research questions in
this study. Its single-factor design responded to the needs of these basic questions. However, the
belated addition of sub-questions about ideological interactions would have merited a different
experimental design entirely had they been part of the original plan. To reconcile this
inconsistency, a 2x2 ANOVA was conducted comparing infotainment framing and ideology,
corroborating findings of the original design. Results are listed in Appendix E (p. 203).
Sampling and Blinding Procedures
In order to draw conclusions from this experiment, a properly assessed sample size was
needed. Using a power calculation that relied on a power level of at least 0.95, the
participant quota needed for each treatment group of this experiment was determined to be 385.
Given the four conditions in this experiment, the needed total sample size was 1,540.
Again, this calculation was made prior to the inclusion of questions on the interaction of
ideology; in retrospect, more participants would have been needed to make strong claims
along those lines. On the whole, respondents needed to be over 18 and had to be American citizens. One question
provided in the pre-test assessed political leaning (gauged through the respondent’s opinion of
government spending), but the randomizer built into Qualtrics, where the survey
was created, placed participants in one of the four conditions. Participants were also
not made aware of what treatment they were sorted into or what manipulations were being used.
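The chapter does not spell out the exact formula behind the 385-per-group figure, but one common route to that number is the standard sample-size formula for estimating a proportion at 95% confidence with a ±5% margin of error. The sketch below is an illustrative reconstruction under those assumptions, not the dissertation's documented procedure:

```python
import math

def sample_size_for_proportion(z=1.96, p=0.5, margin=0.05):
    """n = z^2 * p(1-p) / e^2, rounded up; p=0.5 is the most conservative case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

per_group = sample_size_for_proportion()   # 385
total = per_group * 4                      # four condition groups
print(per_group, total)                    # -> 385 1540
```

Tightening the margin of error (for instance, to ±3%) roughly triples the required n, which illustrates why detecting smaller interaction effects, such as those involving ideology, would have demanded a substantially larger sample.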
Survey Design
The survey experiment included roughly 3 minutes of survey questions and 2 minutes of
audio for those sorted into conditions 1, 2 and 3. The study took 5-6 minutes to complete.
In the pre-test phase, subjects began by reading IRB consent and disclosure forms and
then consented to take part in the experiment. This disclosure section made clear the need for
subjects to take the survey on a device with audio playback capability. Directly following the
consent section, participants were given an audio test to complete: the test played a spoken
four-digit number, and only with entry of that number (serving as an audio check), and with
confirmation that they were a US citizen via ISP checks, was a participant able to move forward.
The final pre-test questions asked participants about their views on government spending.
The aim of this question was to calculate people’s political leaning and already established
attitude on government investment in public programs. This question was pulled directly from
the American National Election Studies time study. Because this experiment involved a news
story about a bill spending public money to fund school meals, it was important to measure
where people stand on government spending prior to exposure to this treatment. That measured
attitude was then used to sort people into a conservative leaning or liberal leaning pool, though
their subsequent assignment to one of the four treatments remained random. Some respondents
initially chose a neutral ranking for their support. Those respondents were funneled to a second
question where they had to choose a side. Finally, all participants were asked if they support the
Universal School Meals Program Act (without receiving any context or information about the
bill). This allowed a measure of attitude on the bill prior to any experimental exposure.
After being sorted based on their measured political leanings, participants were randomly
assigned to one of four condition groups (humor, opinion, hard news, and no audio treatment).
Subjects in the first three condition groups were exposed to a two-minute audio clip about The
Universal School Meals Program Act. Participants in the humor and opinion conditions were
sorted into an ideological leaning based on their response to the government spending question. All hard news
participants, meanwhile, listened to the same audio. The control group went directly from pre-
test to post-test.
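The assignment flow just described (attitude-based leaning, then random condition, with one shared hard news clip and a no-audio control) can be sketched as follows. The function, cutoff, and condition names are illustrative, not taken from the Qualtrics survey itself:

```python
import random

CONDITIONS = ["humor", "opinion", "hard_news", "control"]

def assign(spending_support):
    """spending_support: 1-7 scale; >4 treated here as liberal-leaning (illustrative cutoff).

    Neutral respondents were forced to pick a side via a follow-up
    question before this point, so every input maps to one leaning.
    """
    leaning = "liberal" if spending_support > 4 else "conservative"
    condition = random.choice(CONDITIONS)  # random assignment, independent of leaning
    if condition in ("humor", "opinion"):
        clip = f"{condition}_{leaning}"    # pro-attitudinal clip
    elif condition == "hard_news":
        clip = "hard_news"                 # one shared clip, no partisan versions
    else:
        clip = None                        # control: straight to post-test
    return leaning, condition, clip

print(assign(6))
```

Note how the leaning only affects which version of the humor or opinion clip is served; it never affects the probability of landing in a given condition, preserving random assignment.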
Directly following treatment, study subjects answered two questions on a seven-point scale:
Was the story clear? And, do you think the podcast contained false or misleading information?
Following this, three knowledge questions that directly relate to the story were asked. The
intention in asking these questions was twofold: first, to see if participants were actively listening to
the story, and second, to see if they learned anything from the story that members in the control
group did not. They were then asked questions about the importance of this school meals issue to
the country, and whether they support the Universal School Meals Program Act having heard
this podcast story clip. This support question exactly mirrored the question in the pre-test,
providing two measures that could be compared using a variety of means testing.
After giving their assessment on the story and the proposed policy, participants were
asked about their experiences with podcasts and political podcasts, how often they consume
news media, and what American political parties they are likely to vote for. Through asking
questions about their media consumption habits, how often they encounter podcasts, and what
political parties they may be sympathetic towards, the hope was to have a more fleshed out
picture of pre-existing attitudes. At this time, other important demographic data like gender, race,
age, and so on was collected, using phrasing and flow mirroring Pew Research Center surveys,
and a final comment section was available at the close of the survey.
Variables
Manipulated Variables
Opinion Frames. In the first condition, participants were exposed to a two-minute
podcast story that used opinion framing tactics in its storytelling. Participants in this group
received a baseline of facts that was shared across all three audio treatment groups. Those
facts were supplemented with opinion and punditry. Because opinion and punditry can be found
in hard news as well, this treatment needed to have an overt level of opinion that
outpaced the rate of information sharing. As such, the treatment represents a more
exaggerated and opinionated tone than is found in hard news storytelling. These opinion and
punditry tactics were recreated in a liberal condition and a conservative one, and while the
opinions were opposite of each other, the relative amount of time dedicated to opinions, and
much of the flow and tone were as similar as possible between conditions.
Humor Frames. In the second condition, participants were exposed to a two-minute
podcast story that used humor throughout the storytelling. They received a baseline of facts that
was shared between all three of the audio-receiving groups, but those facts were cushioned with
jokes, satire, and sarcasm. The humor framing tactics were recreated in a liberal condition and a
conservative one, and while the jokes were different across the ideological groups, the relative
level of humor was the same.
Hard News Frames. In the third treatment group, participants were exposed to a 90-second
podcast story based on a typical hard news style. The baseline facts shared in all of the
audio treatments were available in this podcast clip as well, but those facts were elaborated on
and more detail was provided about the bill in question and the process of getting the bill through
Congress and on the Senate floor. The goal in this treatment was to limit bias in storytelling,
and as such both liberal and conservative stances on the issue were mentioned and presented, but
the podcast clip itself was meant to inform and not to persuade. It was as objective as possible
and used a serious tone, reflecting two core norms governing hard news production. Further,
there was only one hard news audio clip, because there should be no reaction based on partisan
cues in a hard news treatment.
Measured Variables
Political attitude strength. This was a central variable to the entire study, as RQ2 and
RQ3 directly focus on political attitude and require repeated measures ANOVA to establish
meaningful attitude strengthening. This variable was assessed by comparing differences between
those participants in the control group and those in the other three conditions. Attitude
strengthening was ultimately calculated using two identical questions regarding participant
support of the Universal School Meals Program Act. The first question was posed in the pre-test,
prior to treatment exposure, and the second question was asked in the post-test, after exposure.
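As a sketch of how attitude strengthening could be tested from the paired pre/post support measures, the stdlib-only code below computes per-participant change scores and a one-way ANOVA F statistic across conditions. The scores and group sizes are invented for illustration and do not reflect the study's data:

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic over lists of per-participant change scores."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # Between-group variability: how far condition means sit from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group variability: spread of participants around their own condition mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-minus-pre support changes (7-point scale) by condition:
humor   = [1, 2, 1, 0, 2]
opinion = [2, 2, 1, 2, 3]
hard    = [0, 1, 0, 1, 0]
control = [0, 0, 1, 0, 0]
print(round(one_way_f([humor, opinion, hard, control]), 2))  # -> 7.96
```

A large F relative to its critical value would indicate that mean attitude change differs across at least one pair of conditions, which is the pattern a strengthening effect would produce.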
Political Knowledge/Recall. This was also a central variable to the entire study,
reflecting the variable of inquiry for RQ4 and RQ5. Three situational knowledge questions were
posed to participants about facts presented in all of the audio treatments. By
comparing listeners across and within treatments, these questions provide insight into how much
knowledge was gained through listening to these three styles of political news storytelling and if
there were any differences between and within groups.
Pre-existing political knowledge. This is a secondary variable measured as a check for
the main political knowledge/recall variable. These questions represented static, or foundational
political knowledge. Measurement for this variable involved asking three general political
knowledge questions to all participants in the study. A comparison could then be made between
participants in every treatment, as well as participants of each condition (making sure that
liberals and conservatives had a similar level of static political knowledge to begin with).
Pre-existing attitude on government spending. In order to accurately sort participants
into a pro-attitudinal condition, a pre-test question about government spending modeled after
similar questions posed by ANES was used. This variable was not directly asked about in the
research questions, but was essential to the experimental design of this study. Because the
treatment involved conversation on government spending, it was necessary to have a
measurement of people’s pre-existing attitudes and to make the liberal or conservative group
assignment based on that attitude.
Pre-existing political preference. This variable was measured with a question about
what political parties participants are likely to vote for. On a scale from one to ten, assessments
were made by each participant about Democrats, Republicans, Libertarians, and the Green Party.
Gathering this data helped to solidify whether people’s attitudes actually reflected their identified
party affiliation, and to identify whether there was an even distribution of people who
identified as Democrat or Republican across all four treatment groups.
Political news use and exposure. This variable was a secondary variable assessed in the
post-test through a question asking how frequently respondents interact with political news. It
was used to run analyses of whether news use affects the two main variables, political attitude
and political knowledge.
Podcast use and exposure. This secondary variable was assessed in the post-test through
questions asking how frequently respondents listen to podcasts and to political podcasts
specifically. This data showed how much prior exposure participants had to this medium, and
how many of them may have come into contact with political podcasts in the past.
Summary
Using a two-pronged approach, this dissertation is positioned to answer five distinct research questions about the make-up and rhetorical styles of political podcasts, and the
effects of the narrative tactics that they use. Beginning with a qualitative content analysis, this
project conducts both a macro- and micro-level investigation of political podcasts. By assessing themes and individual codes, this phase of research addresses a directed but still broad question about which narrative tactics are used in political podcasting and how those tactics differ between liberal and conservative outlets. This phase of research also allowed for
comparison between political infotainment podcasts and hard news podcasts, resulting in a more
comprehensive codebook and understanding of political discourse in these podcasts. The
findings of this phase were then used to recreate podcast clips in these styles, which allowed for
empirical testing. Using an online survey experiment, this research could look at the effects that
narrative tactics in political podcasts and hard news podcasts have on their audiences, namely in
their political knowledge and their political attitude. This then provides insights into the power
and potential of podcasts in the political news sphere and in news spaces on the whole.
One thing that differentiates the current study from most of the noted past studies
identified in this chapter and in chapters 1 and 2, is the fact that it uses a mixed method approach
and marries the flexibility of qualitative content analysis with the more empirical investigation of
an online survey-experiment. With mixed methodological approaches, there are both potential
pitfalls and positive attributes to consider. On the one hand, mixed methods approaches are
needed because they provide flexible data and analytic structure while also providing more
credibility and advancing a common language (Onwuegbuzie, Slate, Leech, and Collins, 2007).
Mixed methods approaches that specifically combine qualitative work with quantitative can marry together the most promising elements of each methodological design, and when a research design clearly articulates a need for both kinds of methods being linked together, there is a valuable 'pragmatism as the philosophical underpinning for the research' (Denscombe, 2008). However, inconsistencies and variations in mixed methods work can occur, and they often stem from two places: a lack of interaction with the literature (and the community that has produced similar methodological studies), and a lack of an explicit conceptual rationale in the study. There needs to be a proven need to mix methodologies based on the questions and research interest, and in the case of this study, that need clearly exists.
Also, this experiment differs from some past research because it was conducted using opt-in participants on Mechanical Turk rather than recruiting undergraduate students or other members of the public. Other, more costly experimental options can better provide randomization and might not require an opt-in model; however, recent literature indicates that findings from recruitment on M-Turk are comparable to those of more representative entities like TESS (Mullinix, Leeper, Druckman, and Freese, 2015). Running experiments and survey experiments on Mechanical Turk is also significantly less expensive, which is a benefit to the current endeavor given its limited funding. Nevertheless, through rigorous protocol and adherence to standardized procedure, this research design has the integrity needed to offer key insights into political podcasts and their effects. Those findings,
which are varied and substantial, will be fully discussed and dissected in chapters 4 and 5.
CHAPTER 4
CONTENT ANALYSIS FINDINGS
Given the gaps in previous literature on podcasts and new digital styles of political news
media, this dissertation has undertaken a multi-step approach in an attempt to learn as much as
possible. The intention of the selected mixed-method design is straightforward: to not only
describe the phenomena of popular political podcasts in American life, but to also assess what
some of their intended and unintended effects can be on audiences. Chapter 3, and the cited pre-registration, articulated the plan of action for this research, and those plans were created, finessed, and followed at every stage. The results and findings gathered through that
process will be explored in this chapter and in chapter 5, and will ultimately provide answers to
the five stated research questions. However, this chapter focuses specifically on the findings of
the qualitative content analysis, which answer RQ1 and its sub-questions:
RQ1: What framing tactics are being used within popular political infotainment podcasts?
RQ1a: Do different tactics occur based on political leaning of the infotainment podcasts?
RQ1b: Do tactics differ between infotainment podcasts and hard news podcasts?
As established in chapter 3, the first phase of research involved an exploratory look into
the political podcast marketplace in America. Dozens of shows were identified and considered
for study, and it was clear from an initial investigation that political podcasts varied in ideology
and viewpoints, but were not radically different from a style perspective. Ultimately three
podcasts were selected for further analysis. Those three consisted of the most popular
conservative political podcast in the US (The Ben Shapiro Show), the most popular liberal
podcast (Pod Save America), and a popular show run by working journalists that has no overtly
stated allegiance to one ideology or other (Slate’s Political Gabfest).
The first level of content analysis for these selected podcasts involved a basic assessment
of how the shows were pitched and branded. For the infotainment podcasts, a basic mission
statement is the first glimpse for potential listeners and avid fans alike, and these statements paint
a picture of shared intention despite radically different enactment. Political Gabfest, for example,
bears the slogan 'Where sharp political analysis meets informal and irreverent discussion' (Slate, 'Political Gabfest'). Pod Save America, in turn, heralds itself as 'A no-bullshit conversation about politics,' and The Ben Shapiro Show boasts that it is 'The fastest growing, hardest hitting, most insightful, and savagely irreverent conservative podcast on the web' (Crooked Media, 'Pod Save America'; The Daily Wire, 'Shows'). Each of these summarizing
statements packs a bold agenda and identity, condensing into one quick statement the larger descriptions each podcast provides when users download its content. Yet while the content and arguments within each podcast can be radically different, these descriptions speak to a clear overlap that exists in most podcasts regardless of genre.
Intentional informality and witty repartee are major selling points for these podcasts, and
represent a commonality between all three studied infotainment shows that harks back to basic
definitions of podcasting content (Berry, 2016; Llinares, Fox, and Berry, 2018). Each of these
programs seeks to cut through fatiguing and stuffy media conversation and provide some kind of
analysis that is unique to their brand while also being accessible. They are not limited by
formalized restrictions or niceties. There is also an explicit claim to credibility and authenticity from all three shows, as well as a shared push to find what is 'real,' to have conversations that make meaning, and to leave a discussion with definitive answers and a more solidified worldview for their audiences. The combination of these factors makes all of these shows uniquely the
products of podcasting culture and reflective of the medium even though infotainment practice
may pull some core components from broadcast mediums. Importantly, these unique podcast qualities were not as apparent in the Global News Podcast, which paints itself simply as 'The day's top stories from BBC News' (BBC, 'Global News Podcast'). Already, just from basic
descriptions, the difference in priority and intention between infotainment podcasts and hard
news ones becomes clear.
With the four podcasts selected, and through the use of the 'random week' sampling method, transcription was completed over the course of several months. Preliminary data from this transcription showed that the hard news podcast averaged nine stories per episode (with stories averaging three minutes in length), while the political podcasts averaged five stories per episode with an average story length of sixteen minutes. The topics of these stories ranged widely, and two identified story types, podcast 'housekeeping' and advertisements, were not considered for the larger analysis. With those exclusions applied, the forty podcast episodes produced 215 unique stories for extended study.
The content of these stories was varied across shows, but twelve overarching story topics
were categorized and applied (see Table 6). These topics were mostly pulled from previous
research about story topics in news, and those standard topic types included U.S. Policy, Policy of Foreign Governments, Elections and Campaigns, Domestic Stories, International Stories, Interviews with Guests, Pop Culture, Science and Medicine, and Technology and Business.
Business. Three additional topics were added for the purposes of this research given their
significant frequency. These include stories about ‘Other Media’ Coverage, Podcast Games or
Special Segments, and Stories About President Donald Trump. The Trump category was not initially considered in the first phases of codebook creation, with the assumption that any and all stories about the President could be placed in other categories. However, twenty of the identified
stories did not organically fit in one of the other topics, and among the infotainment podcasts,
some stories that began as coverage on a given area quickly devolved into conversation focused
on Trump beyond those constraints. As such, this category was added, noting that such a
category may not always be applicable in the future for a presidential figure.
Table 6. Topic Code Frequencies Across Podcasts

Topic                                  Total   BSS   PSA   PGF   GNP
Domestic Story                            16     5     4     2     5
International Story                       46     2     1     3    40
US Elections and Political Campaigns      40    14    17     8     1
US Policy                                 11     4     3     3     1
Policy of a Foreign Country               10     0     0     0    10
Tech and Business                         13     2     0     2     9
Science and Medicine                      11     0     0     1    10
Other Media Coverage                       9     7     1     1     0
Pop Culture, Arts, and Sports              4     0     0     0     4
Interview with Guest                      10     0     9     1     0
Podcast Game or Special Segment           25    11     4    10     0
Trump Coverage                            20     3     8     8     1
Total Stories                            215    48    47    39    81
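As a sanity check, the counts in Table 6 can be tallied in a few lines of Python. The numbers below are transcribed directly from the table, and both the per-topic row sums and the per-podcast column sums reproduce the reported totals.

```python
# Topic-code counts transcribed from Table 6, ordered BSS, PSA, PGF, GNP.
topic_counts = {
    "Domestic Story":                   [5, 4, 2, 5],
    "International Story":              [2, 1, 3, 40],
    "US Elections and Campaigns":       [14, 17, 8, 1],
    "US Policy":                        [4, 3, 3, 1],
    "Policy of a Foreign Country":      [0, 0, 0, 10],
    "Tech and Business":                [2, 0, 2, 9],
    "Science and Medicine":             [0, 0, 1, 10],
    "Other Media Coverage":             [7, 1, 1, 0],
    "Pop Culture, Arts, and Sports":    [0, 0, 0, 4],
    "Interview with Guest":             [0, 9, 1, 0],
    "Podcast Game or Special Segment":  [11, 4, 10, 0],
    "Trump Coverage":                   [3, 8, 8, 1],
}

# Row totals per topic, and column totals per podcast.
row_totals = {topic: sum(counts) for topic, counts in topic_counts.items()}
column_totals = [sum(col) for col in zip(*topic_counts.values())]

assert sum(row_totals.values()) == 215      # total stories after exclusions
assert column_totals == [48, 47, 39, 81]    # BSS, PSA, PGF, GNP
```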
Given the difference in scope of these podcasts, some story topics appeared more
frequently in certain shows than others. For example, every instance of a story on the policy of a foreign government happened on the Global News Podcast, and roughly 87% of international stories (40 of 46) also occurred there. Comparatively, some story types only existed in the political
podcasts. Game segments were exclusively seen in the infotainment group, but were a recurring
focal point of most of their episodes. Full-feature interviews were also a staple of the softer-news
programs, though interviews as snippets of story content were incredibly common at the Global
News Podcast. The Other Media Coverage code was likewise used only on the political podcasts, likely because none of those shows conduct their own on-the-ground investigative reporting. Meanwhile, coverage of US elections and the Trump Administration was the focal point of every political podcast, while the Global News Podcast covered these issues only fleetingly by comparison. Much of this can be explained by the difference in agenda. While the Global News Podcast intentionally focuses on the world at large, the three selected political podcasts were America-centric. This makes for a less substantial comparison of elements like topic codes, but it still illustrates basic framing building blocks that have been established in formalized media outlets, which softer news content then adapts to build its own credibility and guide a familiar flow for its audiences.
With those basic features established, each podcast staked out its own individual style. The Ben Shapiro Show (BSS), which posts a new episode every day, consists entirely of Ben Shapiro as the host conducting all programming and conveying all insights and opinions as one voice. He supplements with audio clips and quotes pulled directly from outside sources, but he has no cohost and only rarely interviews guests (none of the randomly selected episodes contained any interviews by Shapiro). His show is in line with the traditional format of CTR (Conservative Talk Radio), and his formatting works so well for radio that his podcast episodes are also broadcast live each day with national radio distribution. Political Gabfest, in comparison, uses a three-host model, and in only one of the selected episodes did a guest host cover for a missing member of the team. Pod Save America, by contrast, follows a two-episode-per-week model; one episode typically features three hosts while the other usually features two. They also produce live episodes where all of their hosts and special guest anchors join, bringing them to as many as five hosts for an episode.
With all of these initial findings in hand, two veins of analysis emerge for consideration. The first is a general analysis, which provides basic descriptive statistics about the codes found in these podcasts and their frequency of use. These are discussed first below. However, a second kind of analysis, thematic in nature, was also undertaken. This thematic investigation was designed to identify what common threads all of the political infotainment podcasts share and where their coverage overlaps. These themes were important not just for contextual purposes, but also for helping establish what buzzwords and language each camp used in its discussions. Certain tactics became readily apparent throughout these emerging themes, and those tactics will be more directly scrutinized and categorized to answer RQ1.
Descriptive Analysis
As stated previously, this descriptive portion of the content analysis was conducted at the sentence level. The resulting data shows how prevalent each framing tactic was in two ways. The first is the 'instance rate' of codes, which shows how many times a code was used in a given episode or in a podcast overall. The second is a percentage score representing how frequent that kind of code was in the studied podcasts. A full accounting of the codebook and its associated rules and definitions may be found in Appendix A (p. 172). Even with these rules, however, constant evaluation was needed about the merits of coding the podcast language as one tactic or another.
The most obvious example for this came in distinguishing the code ‘facts and figures’
from ‘punditry and opinion’ or one of the many more nuanced opinion codes this study
identified. For the political podcasts especially (BSS, PGF, and PSA) there was a substantial
amount of facts provided to listeners. Some of these facts were commonly known, while others
were substantiated with media citations. However, certain framing choices and linguistic tells
prevented a large share of the 'facts' that were given from being coded as such. Most often this happened because, in the span of one sentence, a host would give not only the facts of a situation, but also their opinion of it. This was obvious when certain phrases like 'I think' or 'In my opinion' appeared in the given text, which happened frequently. A shift in codes was also needed when the language surrounding the given facts was exceptionally charged. Assessments of whether things were good or bad, or more or less important, ultimately undermined a statement's claim to being 'facts or figures' and made it into something softer. The result, as can be seen in Table 7, was a large disparity in framing tactic use between the political podcasts and the chosen hard news podcast. However, a more thorough examination of code types and coverage must also be undertaken at the individual podcast level.
Table 7. Code Category Instances and Percentages Across Podcasts

Podcast               Journalism     Entertainment   Opinion        Advertising
Global News Podcast   649 (84.9%)    3 (0.4%)        110 (14.4%)    2 (0.3%)
Ben Shapiro Show      544 (38.3%)    119 (8.4%)      747 (52.6%)    9 (0.6%)
Political Gabfest     475 (36.5%)    101 (7.8%)      705 (54.1%)    21 (1.6%)
Pod Save America      543 (37.6%)    266 (18.4%)     624 (43.2%)    11 (0.7%)
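The percentage scores in Table 7 are simply each category's instance count divided by the show's total coded instances. A minimal sketch, using counts transcribed from the table:

```python
# Instance counts per code category, transcribed from Table 7
# (ordered Journalism, Entertainment, Opinion, Advertising).
category_counts = {
    "Global News Podcast": [649, 3, 110, 2],
    "Ben Shapiro Show":    [544, 119, 747, 9],
    "Political Gabfest":   [475, 101, 705, 21],
    "Pod Save America":    [543, 266, 624, 11],
}

def percentages(counts):
    """Each category's share of the show's total coded instances."""
    total = sum(counts)
    return [round(100 * c / total, 1) for c in counts]

shares = {show: percentages(counts) for show, counts in category_counts.items()}

assert shares["Global News Podcast"][0] == 84.9   # journalism share
assert shares["Political Gabfest"][2] == 54.1     # opinion share
```

Recomputing in this way also shows why the smallest cells (such as advertising on the Global News Podcast) were reported as near zero: they round to well under one percent of each show's total codes.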
Global News Podcast
One of the hopes in choosing a hard-hitting news podcast to analyze along with the
political infotainment podcasts was to demonstrate that hard news still uses softer tactics to some
extent. Punditry and opinion specifically have long been a part of the journalistic process, as was
discussed at length in chapter 2, but the stark percentage difference between the Global News
Podcast and the other podcasts studied illustrates the very different nature of these ‘news’
programs. Table 7 provides a sweeping glance at this, but a more in-depth analysis of the coding
shows stark differences between the Global News Podcast and the others in the study.
The most notable difference is that all of the opinion codes for the Global News Podcast (110 total) fell under 'Punditry and Opinion,' a general code applied where opinions are given or where some sort of hypothesizing about the future takes place. While all of the other podcasts shared a
frequent use of this code, the American podcasts had other commonly used codes under the
umbrella of opinion that gave assessments on the media and political parties. No such positive or
negative assessments were made in any of the studied Global News Podcast episodes, even in coverage of politics and policy in the UK, where the podcast's parent organization (the BBC) is located. Nine of the ten studied episodes of the Global News Podcast contained some kind of coverage of UK politics, but while there was general opinion and hypothesizing about electoral outcomes and public perception of political actions, the podcast refrained from boldly claiming that one side of the political aisle was right over the other.
Table 8. Code Frequency for the Global News Podcast

Code Type                  Instances   Percentage
Journalism
  Facts and Figures          291        38%
  Expert Source              112        14.7%
  Directed Questions          98        12.8%
  Community Source            86        11.2%
  Audio Clip                  43        5.6%
  Personal Testimonial         9        1.1%
Punditry and Opinion
  Punditry and Opinion       110        14.4%
Entertainment
  Purposeful joke              2        0.3%
  Pop Culture Reference        1        0.1%
For the Global News Podcast, the only other section of tactics with substantial coding was Journalistic Elements, or the Hard News section (see Table 8). This was by far the largest segment of Global News Podcast codes (accounting for nearly 85% of all codes), and was broken down into five main areas, highlighted in Figure 1. While the tree graph created by NVIVO is condensed, the most common codes were Facts and Figures (291), Expert Sources (112), Directed Questions (98), Community Sources (86), and Audio Clips (43). All of these codes were found in the political podcasts as well, but this breakdown illustrates the general flow and style of each Global News Podcast story. In these stories, an establishing level of facts is laid out, followed either by an audio clip giving more context to the story, or a directed question aimed at a podcast correspondent, expert source, or member of the community giving their own testimonial. Because of this setup, the Global News Podcast had a higher rate of journalistic elements on the whole, and also of specific codes like Community Sources and Expert Sources.
Figure 1. Global News Podcast Tree Graph
The Ben Shapiro Show
Compared to the Global News Podcast, all of the political podcasts have very different
ratios of hard news tactics to infotainment tactics (see Tables 7, 8, and 9). For BSS in particular,
roughly 38% of codes fell into the journalism grouping. This number was almost identical for all of the political podcasts. BSS and PGF also shared nearly identical rates of opinion tactics (roughly 53%) and entertainment tactics (8%). The opinion category led significantly on BSS, and the most prominent code was Punditry and Opinion (396 instances), followed by high rates of party and media assessment codes. These particular coding statistics and their meaning and influence are discussed more extensively in the thematic analysis of these podcasts below. As for entertainment, BSS and PGF contained very similar rates of use, and BSS specifically had 94 coded purposeful jokes and 23 pop culture references across its 10 studied episodes.
Table 9. Code Frequency for the Ben Shapiro Show, Pod Save America, and Political Gabfest

Code Type                       BSS   PSA   PGF
Journalism
  Facts and Figures             136   111   150
  Media Citation                 52    44    35
  Cited Quote                   160    31    19
  Expert Source                   6    86    26
  Directed Questions             58   205   208
  Community Source                1     1     0
  Audio Clip                    101    22     0
  Personal Testimonial           30    43    37
Punditry and Opinion
  Punditry and Opinion          396   510   607
  Dem Assessment (-)            155     4    16
  Dem Assessment (+)              1    23    16
  Rep Assessment (-)             27    65    49
  Rep Assessment (+)             28     1     3
  Media Coverage Criticism      122    21     2
  Media Coverage Endorsement     10     8    24
Entertainment
  Purposeful joke                94   252    89
  Pop Culture Reference          23    14    12
  Meme Reference                  2     0     0
Mobilization
  Call to Action                  0    11     9
Much of the difference between BSS and the other political podcasts, meanwhile, can be seen in the breakdown of journalistic elements. Though BSS had an equivalent level of journalistic codes, their composition varied substantially from PGF and PSA. The most popular journalistic code for BSS was Cited Quote, where direct quotes from a news story, person of interest, or other communication avenue were read word for word. BSS had 160 cited quotes, outpacing its Facts and Figures instance rate of 136. BSS was also the only political podcast that featured audio clips at a high rate. This is significant because audio clips are a tactic used frequently by the Global News Podcast and other hard news podcasts, but they were never used on PGF and were only used in game segments on PSA. BSS rounded out its most prominent journalistic codes with 52 media citations and 30 instances of personal testimonial provided by Shapiro himself, which were used to establish a sense of credibility and genuine knowledge on a given topic. Personal testimonial was a code usually reserved for interviewed subjects in the Global News Podcast, but it made an appearance on all of the political podcasts, where hosts regularly lean on their past experience and qualifications to add to a given story or topic.
Figure 2. Ben Shapiro Show Tree Graph
Political Gabfest
When comparing the general composition of overarching coding categories, Political
Gabfest most closely resembles the Ben Shapiro Show. PGF’s category breakdown was roughly
54% opinion elements, 37% journalism elements, and 8% entertainment elements (Table 7).
Unlike BSS, however, PGF was far more predisposed to general punditry and opinion (607 instances) than to assessments of the media or political parties. As with BSS, the reasons behind these framing choices are discussed in the thematic analysis, but the tendency towards general opinion and punditry without favoring one group over another is similar to the use of opinion codes in the Global News Podcast. This may be a result of the PGF hosts being members of the media, all of whom write hard-hitting journalistic pieces in their day jobs, but there was enough assessment of the political parties to make PGF comparable to other political podcasts like PSA. In terms of entertainment, PGF was also very similar to BSS, with 89 purposeful jokes
and 12 pop culture references. Finally, PGF had a diverse pool of journalistic elements including
208 Directed Questions, 150 facts and figures codes, 37 personal testimonial instances, 35 media
citations, 26 expert sources and 19 cited quotes (Table 9).
Figure 3. Political Gabfest Tree Graph
Pod Save America
More so than any other podcast, Pod Save America attempts to blend entertainment, news, and opinion evenly. The coding totals represented in Table 7 indicate that 38% of PSA's codes were journalistic elements (even with PGF and BSS), but that only 43% of its codes were opinion based. This was attributable to a more active push for entertainment tactics on PSA, where the rate of entertainment codes was more than double that of PGF or BSS (18% across 10 episodes). This substantial increase in entertainment codes is entirely attributable to a higher rate of purposeful jokes (252 instances), which outpaces the other two political podcasts (see Table 9). PSA also used pop culture references, but at a level equivalent to PGF and BSS (14 instances).
There are a number of ways to interpret this difference in opinion versus entertainment framing tactics. Importantly, many of the jokes featured on PSA are told at the expense of a particular politician, policy idea, or government group. As such, they can be perceived as a kind of extension of the hosts' opinions. Still, the difference in framing mechanisms between humor and opinion is substantial. Previous studies on soft news and infotainment have shown that humor frames can affect levels of political cynicism, political knowledge, and information recall (Baum, 2003; Boukes and Boomgaarden, 2015; Feldman, 2011; Leckler et al., 2015). For the purposes of this study, it is evident that the largest framing device used among popular political podcasts in America is opinion, and as such it is an obvious candidate for continued study in the second phase of this dissertation's research. However, humor was also present in all three political podcasts, and was a prominent structural element for PSA specifically, giving reason to measure the impacts of this framing type on audiences as well.
Despite the substantial difference in percentages of opinion and entertainment elements,
PSA still adhered to familiar ratios of code usage. Like PGF, Pod Save America’s opinion codes
were dominated by the more general Punditry and Opinion code. PSA had some assessment codes as well, but none matched the dominance of assessment codes on BSS. Similarly, PSA used many Directed Questions (205) and had a number of Facts and Figures codes (111) comparable to PGF. Its journalistic elements, however, were also heavily weighted toward expert sourcing (86 instances), which often stemmed from interview segments with expert guests. This interview format was not shared to the same extent by PGF or BSS, but it is an oft-used tactic in harder news shows like the Global News Podcast. PSA's other frequently used journalism codes, which were also present in the other political podcasts, included media citations (44 instances), cited quotes (31 instances), personal testimony (43 instances), and audio clips (22).
Figure 4. Pod Save America Tree Map
With the comprehensive list of all codes completed and compiled, the second phase of analysis for these podcasts, one thematic in nature, could take place. This level of consideration was critical not just for answering RQ1, but also for creating the treatments eventually used as the focal point of the survey experiment. Without this thematic assessment, the study ran the risk of not properly replicating the actual styles, tones, and general feel of political podcasts in the experimental phase; with it, a more comprehensive context and shared language across political podcasts was identified.
Thematic Analysis
Within the first level of coding, familiarity with certain linguistic techniques and framing
tactics was established. Through the act of transcribing and listening to these podcasts numerous
times, a shared language linking all three of the American political podcasts emerged with
central actors and elements that each show discussed and analyzed. General feelings about these
shared themes were chronicled in memos through transcribing and coding, but only when all
coding was done did NVIVO bear out many of those hunches and presumptions. A simple word frequency count conducted on an individual level for each show surfaced numerous buzzwords like 'Trump,' 'media,' 'president,' 'Republicans,' 'Democrats,' 'campaigns,' and 'elections.'
Table 10 breaks down the most commonly found words and highlights overlap between all four
podcasts. Among the American podcasts, the commanding actors dominating conversation were
President Donald Trump and the Democrats, but the use of words like media and government
represented more generalized actors as well. Throughout each political podcast there was
continued discussion about the role of the president, political parties, government as a whole, and
the media. Many of the subsequently defined codes were direct responses to discussion on these
actors and analysis of their faults and merits.
Table 10. Top Words Across Podcasts

Podcast   Top Words
BSS       Trump (593), President (429), Anti (267), Media (228), Democrats (186), The left (183)
PGF       President (258), Trump (185), Story (160), Democrats (136), Government (117), Mueller (106)
PSA       Trump (555), Democrats (227), President (169), Donald (150), Campaign (134), Election (133)
GNP       Story (203), Government (85), World (80), Correspondent (73), President (62)
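The NVIVO word frequency queries behind Table 10 amount to a simple token tally. A minimal sketch of the same idea is below; the one-sentence transcript is invented purely for illustration, since the real counts came from full episode transcriptions.

```python
from collections import Counter
import re

# A toy stand-in for an episode transcript; the counts in Table 10
# were produced by NVIVO frequency queries over full transcriptions.
transcript = (
    "Trump said the media misquoted the president again, "
    "and Democrats called on the president to respond."
)

# Tokenize, lowercase, and drop a small (illustrative) stopword set
# so that substantive buzzwords rise to the top.
stopwords = {"the", "and", "on", "to", "said", "again"}
tokens = [t for t in re.findall(r"[a-z']+", transcript.lower())
          if t not in stopwords]

top_words = Counter(tokens).most_common(2)
# → [('president', 2), ('trump', 1)]
```

A real replication would also need to handle stemming and multi-word phrases (e.g., 'the left'), which dedicated tools like NVIVO manage internally.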
Role of the Media
One of the most frequently discussed actors across the board was the media. Discussion
about the news, about reporters, and about the intentions or potential biases of the media at large
were constant throughout each political podcast. But the framing of the media and the values and
negative traits attributed to the media varied widely. This difference in portrayal was somewhat political, with the conservative program bemoaning the media at a significantly higher rate than PSA or PGF (roughly 6:1 and 64:1, respectively), but there was shared disillusionment with the American press across ideological bounds. Hosts from the right and from the left of the political spectrum offered compliments and criticisms of the media at large and of particular stories and moments of coverage. Ultimately, faith in the media and high assessment of journalism on the whole seemed to track not with politics but with occupation. Hosts who were members of the 'mainstream media' were far more complimentary of the institution, as were interviewed guests on other podcasts. Their connection to the profession of journalism was in itself an overt endorsement of the press and the larger American media system, and the presence or absence of that connection played out in profound ways.
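The roughly 6:1 and 64:1 ratios cited above follow directly from the media-criticism instance counts reported in Table 11; a quick check:

```python
# Media Coverage Criticism instance counts transcribed from Table 11.
criticism = {"BSS": 128, "PSA": 21, "PGF": 2}

bss_vs_psa = criticism["BSS"] / criticism["PSA"]   # ~6.1, i.e. roughly 6:1
bss_vs_pgf = criticism["BSS"] / criticism["PGF"]   # 64.0, i.e. 64:1

assert round(bss_vs_psa) == 6
assert bss_vs_pgf == 64.0
```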
Table 11. Media Coverage Criticism and Endorsement (Total Instances and Coverage Range per Episode)

Podcast             Media Coverage Criticism   Media Coverage Endorsement
Ben Shapiro Show    128 (0.29%-12.6%)          10 (0.04%-1.1%)
Political Gabfest   2 (0.48%)                  24 (0.3%-4.6%)
Pod Save America    21 (0.64%-1.68%)           8 (0.11%-1.06%)
Of the studied hosts and podcasts, none were more overtly hostile towards the media than
Ben Shapiro. Though Shapiro cited the media at a similar frequency to PSA and PGF (52
individual citations, or roughly 5 per episode), and not all citations were negative, his show
featured a much higher rate of criticism and far more extended arguments, aimed not necessarily
at the story a cited news piece covered, but at the faultiness of the coverage itself (see Table 11).
Shapiro framed the media in a negative light continuously, claiming that their coverage was
riddled with lies, inaccuracies and bias. Shapiro gave somewhat credible reasoning for part of
this hostility – the continued coverage he himself received from other news outlets. During the
ten randomly selected and studied episodes of BSS, three separate news stories about him
or using his image were published in the New York Times, The Washington Post and other
outlets. One of these instances created so much frustration for Shapiro that he titled his episode
for that day (January 21, 2019) ‘Liars, Damn Liars, and the Media.’
This entire episode was dedicated to highly-trafficked media stories that Shapiro saw as
false, misleading, and biased, but one particular story involved him directly and statements he
had made the week before. In response Shapiro spoke for more than 15 minutes breaking down
the arguments and perceived malpractice of the media against him. His general feelings on the
matter were eventually summed up in one cohesive argument:
“There are media outlets that took that 21 second clip I showed at the beginning, I played
at the beginning a few minutes ago, they took the 21 second clip and posted that, and
when I contacted these media outlets and said guys, you might want to actually put up
like, you know, the full three minute clip so people know what I'm talking about, the
media outlets said oh you know what you're right. Maybe we'll put up the three-minute
clip, but we’re not going to change the article or change the transcript. That's right
because you got clicks off it because you’re liars. Because if you lie, and if you lie
consistently enough, then people start to believe the lie,” (‘Liars, Damn Liars, and the
Media’ 1.20.2019).
While the personal connection in this commentary is obvious, the sentiment that the media are
liars who seek only to profit off of their audiences, whatever the truth, remained a constant
touchstone for Shapiro. Despite some ambivalent or even complimentary moments
towards a given journalist or story, Shapiro regularly circled back to the media being ‘vile’ or
‘disgusting,’ (‘Trump Unleashed’ 5.7.2019). Often Shapiro cited bias amongst mainstream
outlets that helped Democrats and injured Republicans, like when he claimed the Washington
Post was attempting to ‘define Title IX’ and what it is to be transgender. Their coverage was
continuously called ‘dishonest,’ ‘self-defeating’ and tied into the agenda of ‘the left,’ (‘The
Republicans’ Best Friend,’ October 21, 2018; ‘Real Bitter Clingers,’ October 14, 2018; ‘It’s Not
Easy Being Green’ February 10, 2019).
Beyond the explicitly stated reasons for media criticism, the Ben Shapiro Show also
exists in a context of conservative media. Previous study of this political content style has
demonstrated a continued rejection of the media and a distrust of journalism from mainstream
outlets (Jamieson and Cappella, 2008; Rauch, 2019). Much of the storytelling in conservative
media hinges on discrediting other media as being fake or at the very least biased (Rauch, 2019).
This dynamic has thus become a central, recognizable, identity builder for many people who
make up the audiences of these outlets. As such, speaking in this common language with similar
casting of good actors and bad actors is a needed storytelling strategy and framing tactic for BSS
to fit into its ideological ecosystem. The ratio of coded instances for Media Coverage Criticism
(128) to Media Coverage Endorsements (10) on the show begins to illustrate how central this
casting of the media as a bad actor is for BSS (see Table 11). The rates of those codes are also
revealing, and in the episode ‘Liars, Damn Liars, and the Media,’ roughly 12.8% of all
coded content in the episode was critical assessment of the media.
Though it manifests in a strikingly different way, PSA similarly demonstrates use of a
common language throughout their episode dialogue. However, their touchstones are progressive
as opposed to conservative, and their conception of media is more split between bad and good
(with 21 media criticism instances and 8 explicit media endorsements). On the whole, the media
for PSA is seen as a necessity and an ultimately positive actor, even if it is often critiqued. In the
PSA imagination, the press is an essential element for the public good that provides the public
with critical information, which is especially needed ‘in the Time of Trump,’ (‘The Election is
Nigh!’). Over the course of ten episodes, 44 individual news stories were cited by PSA and a
majority of their conversation topics circled around the investigative reports of trusted
journalists. Not all members of the press are created equal in the PSA schematic though, and
members of the media who stray into punditry and editorializing are frequently dismissed,
accounting for the rates of coverage seen in Table 11.
Similarly, PSA hosts regularly criticize the media for not being hard enough on Trump,
arguing that the forty-fifth president of the United States is a break from the norm and one the
press has often covered with lackluster results: one host on the episode ‘Pollercoaster’ argued,
“we’ve just decided that the as a political media culture… we have no expectations for him.”
This arc of the president behaving badly and the media failing to convey that to the public is a
common theme throughout PSA, but despite that, there is still an established level of respect for
the press, and in just the ten selected episodes there were two instances where a journalist ‘friend
of the Pod’ was called upon to guest host. The rate of negative assessment for the press also
never exceeded 1.68% of any given PSA episode, indicating that even when negative
assessment was given, discussion on that topic and any blame for the press was not expanded
upon as it was on BSS.
Friendliness with the press, meanwhile, is a built-in feature of PGF, which is conceived
entirely on the premise that three journalists will discuss their opinions and personal expertise on
the week’s current events every episode. Over ten shows, the podcast cited 37 distinct media
stories, with at least one (but typically more) media citation driving each conversation. In all of
those citations, only one criticism was made, marking a stark difference from both PSA and BSS.
Explicit endorsements of a piece, meanwhile, totaled 24, with numerous episodes clocking in
with as much as 4% of their content being explicit praise for the press or a specific article or
feature.
Unlike the other podcasts, PGF also boasted about personal connections to the authors
and investigators in the discussed news pieces. At least seven times, a personal connection
was disclosed between the host and the journalist in question, and nearly every
episode included some element of shared knowledge of a given reporter, outlet, or journalistic
beat. For PGF, compared to all of the other political podcasts, the casting of the media as not
only a public good but as a thriving and vital element of political life was more apparent. There
were no discussions of potential bias, the problematic tendencies of punditry, or potential
ownership and power issues baked into the business models of some larger news organizations.
For PGF, the media is seen as arguably the strongest proponent of the people, and the one real
truth teller in an America governed by shaken norms and feuding political factions (‘All About
the Benjamins’; ‘The Too Much News Edition’).
Trump: Eccentric or Adversarial
Of all perceived actors and important individuals deserving of coverage, no person
spawned more debate than Donald Trump. The forty-fifth president of the United States was a
top word in all three political podcasts surveyed. For PSA and BSS, Trump was the most spoken
word (coming in at 555 and 593 mentions respectively), while PGF had Trump as the second
most frequently used word (185 mentions). This metric alone is somewhat limited, given that each
podcast had a different overall word count, so strict numerical tracking of word frequencies is
not fully illuminating. But the numerous stories that revolved entirely around Trump, and the fact
that his name was a top word for all podcasts, begin to illustrate how sustained the discussion
surrounding Trump was. There were also numerous other instances where he was referred to as
‘The President’ or ‘45’ without any direct use of his name. Trump’s presence in these political
podcasts was dominating: of the 154 individual story segments identified across the three
political podcasts, Trump was mentioned in some capacity in nearly all of them. The
coverage the president received during these mentions varied widely, and positive review of
the president tracked political party and support for conservative policy. However,
two general framings of the president were evident after extended study. The first is that the
President is just eccentric and outside of the political norm, and the second is that Trump is a
danger to the public, to democracy and to the US on the whole. All discussion of the President
fell within these two poles and the resulting assessment ebbed and flowed along these lines.
In terms of seeing Trump as an adversary or the enemy of US policy, the public, and so
on, no podcast went further than PSA. In the PSA understanding of the current world order,
Donald Trump is the single gravest threat to America and to the fabric of American life (‘The
Election is Nigh!’). For PSA, Trump is the worst actor possible: malicious, stupid, and inept.
He is a criminal and is surrounded by other ‘idiot criminals’ (‘Hot Tub Crime Machine’). He
represents a menace to democracy, human decency, and the American way of life through his
racism and xenophobia (‘Ride or Die With Dictators’). He’s not as rich as he claims but instead is
a ‘grifter’ and a con man (‘Hot Tub Crime Machine’). He is dishonest, chaotic, purposefully
misleading, and ultimately guided by his own personal interest above the interests of the country,
(‘The Pundit Gap’).
But while all of these overt criticisms arise organically in conversation about PSA’s
given topics, the podcast’s use of purposeful jokes synthesizes these bleak assessments with
even more punch. One such example came as co-host Jon Lovett attempted to summarize why
they were working so hard to get Democrats elected in the 2018 election: “We have spent two
years trying to convince people to go to the Olive Garden instead of a restaurant where they
poison you, don't pay their staff, and the owner is a racist. And you know, the Olive Garden isn’t
perfect, alright. You wish it was more consistent, but you know that when you're there, you’re
family,” (‘The Election is Nigh!’). Jokes like this are constant on PSA, with a total joke count of
252 across the ten studied episodes, and Donald Trump and his administration are the leading
target of this comedic censure. While the use of humor framing will be discussed in more detail
below, it is critical to note that this use of jokes, satire, and humor as a persuasive weapon
against Trump is not singular to PSA. Indeed, BSS used similar styles of censure against
Shapiro’s perceived greatest foes: the media and the Democrats.
When speaking about Trump, in comparison, Ben Shapiro is a bit more varied. He
applauds and praises the president, especially when it comes to matters of policy and
conservative values:
“He has a really good record. The fact is that the economy has been extraordinarily solid
under President Trump. It has continued to grow at a very very solid rate. He brought up
hiring, the unemployment rate is at a record low. Not only that but he has helped to
rebuild the military. He has helped rebuild the federal judiciary. He helped deregulate.
There are a lot of great accomplishments that President Trump can point to,” (‘Trump
Unleashed’ 5.27).
Shapiro also regularly heralds Trump’s hostile posturing towards the media and towards
Democrats as good and as persuasive to the American public (1.20, 10.14, ‘The
Incitement Lie’).
Even when he is criticizing the president, Shapiro often frames the missteps and
fumbles of Trump and his administration as bad, but not as bad as what the Democrats or the
media elites would do or say. An example of this is his
continued discussion on anti-Semitism, when Shapiro acknowledges that the president has not
always had a good track record on anti-Semitism (the most noted example stemming from
remarks after the white supremacy rally in Charlottesville). Despite these ‘bad moves’ on the
part of the president, Shapiro claims that Trump has better pro-Israel policy plans and that he has
since moved to be rhetorically harsher on anti-Semitism than Democrats (Ben
Shapiro 11.4). In essence, Trump may not be perfect, but he is better than the alternatives. When
criticizing the president, this is far and away the most common strategy utilized on BSS, but a
few straight criticisms of the president without comparison or qualification are also provided.
The most notable of these in the selected ten episodes came when President Trump praised North
Korean dictator Kim Jong Un in a ploy to criticize his political opponent, former Vice President
Joe Biden:
“Okay, [sigh] Mr. president, if you wish to project stability, if you don’t wish to project
defensiveness, what you don’t really want to do is something as immoral as citing the
world’s worst human being to say that your political opponent is a stupid human. I mean,
that is bad stuff,” (‘Trump Unleashed’).
Despite the problem points, however, in the conservative arena, Trump is overarchingly seen as a
good actor and a means to a conservative end as well as a buck against the more ‘mainstream’
system.
Both the Ben Shapiro Show and Pod Save America represent, at least within this
conducted study, the two poles of ideology and Trump discussion. Unsurprisingly, Political
Gabfest falls between these two extremes. There is still a large amount of criticism lobbed at the
president from this show, which has referred to him as ‘dangerous’ and ‘self-motivated,’ (‘The
Great Blotch Edition’). As with Pod Save America, PGF also asserts many of its worst criticisms
through jokes and jest, as they did when discussing potential new hires for the position of Chief
of Staff. One host claimed, “Newt Gingrich, who I think is actually probably a totally credible
candidate at this point, is the most egomaniacal, chaotic person on the planet, except, possibly,
Donald Trump,” and though it was said in a joking manner, those feelings are reiterated
repeatedly throughout the podcast (‘Tinkle Contest With a Skunk’). However, PGF does attempt to
appear less biased in its assessment than PSA or BSS. Hosts frequently say things like ‘to be
fair,’ before offering potential commentary from the opposite ideological lens, and when harsh
criticisms are made, there are usually explicit references to the fact that this is the opinion of the
given host. Thus, while the hosts of PGF do have strong words of censure for Trump, they still
use some linguistically normalizing forces that are reminiscent of their journalistic background.
Ultimately, they applaud investigative reporting digging into Trump, but they waver far more
between Trump being a true villain and Trump just being a president they do not agree with or
do not think is up to the job (‘The Now We Have Pipe Bombs Edition’).
The State of American Government and Political Life
As political podcasts, all three of the chosen series regularly engage with the United
States’ political past, present, and future. In these conversations, each podcast establishes its own
understanding of the state of American life and the expectations we should all have of our
government and our political institutions. Within these discussions, two major actors emerge,
the Republicans and the Democrats, and each podcast was analyzed for its positive and negative
assessment of these groups. Table 12 shows data collected from four
separate codes on these parties.
Table 12. Republican and Democrat Assessments (Instances and Coverage Range Per Episode)

Podcast            Republican        Republican        Democrat          Democrat
                   Assessment (+)    Assessment (-)    Assessment (+)    Assessment (-)
Ben Shapiro Show   27                28                3                 155
                   0.15%-1.59%       0.24%-4.38%       0.91%             2.6%-11.5%
Political Gabfest  3                 49                15                16
                   0.31%-1.38%       0.35%-2.66%       0.24%-1.58%       0.04%-3.6%
Pod Save America   1                 65                15                4
                   0.12%             1.24%-5.31%       0.12%-2.4%        0.23%-0.74%
A more normative view of that process is most evident in Political Gabfest, where hosts
focus on the longevity of the democratic republic, the intention of the founders, and the built-in
safeguards within the system that preclude any seismic shifts in American political life (‘The
Little Ditty About Mitch and Elaine Edition’; ‘The A Bit Snitty Edition’; ‘The Way Too Much
News Edition’). As with all of the political podcasts, there is a substantial amount of political
opinion and assessment aimed at both parties, but more than any other show, PGF redirects
current events back to history and ideals of the democratic process. When faced with a situation
some hosts found ‘swampy,’ one host, David Plotz, chose to look past party and argue on
process: responding to criticisms of trading favors politically and securing easier funding
for one’s state through personal connections, he claimed, “It’s good for a political
system to indicate that if you work hard in politics and build up alliances and gain power, it will
serve your constituents, and that teaches your constitutions that politics in itself is an effective
and useful action,” (‘The Little Ditty About Mitch and Elaine Edition’). This kind of analysis is
something not usually seen on either PSA or BSS, where ultimately the mentality is that there is
a good team and a bad team, and endorsement or criticism fall directly along team lines.
Nevertheless, PGF is not wholly unbiased, and though their hosts are a bit more
ideologically split, there are plenty of assessments of goodness and badness aimed at Democrats
and Republicans. For Democrats, assessment in positive or negative directions was essentially
split (15 instances of positive assessment with a coverage range from 0.24% to 1.58%, and 16
instances of negative assessment with a range from 0.04% to 3.6%). This almost even split
between support for the Democrats and criticism of their process or practice upholds much of the
mission of PGF, which is to approach the issues of the day from a range of viewpoints and to
grapple with them in the spirit of free and fair argument. The evenness
made sense in the scope of a show run by journalists, who grapple with the industry norm of
objectivity and presenting both sides. However, that evenness did not carry over for discussion of
Republicans in the US government. Comparatively, PGF gave Republicans a positive assessment
only 3 times over the course of 10 episodes (with coverage ranging from 0.31% to 1.38%), but
was critical of them in 49 separate instances (with coverage ranging from 0.35% to 2.66%).
This disparity solidified that between the two parties, PGF was more critical of the Republicans
and far less likely to defend the GOP over the Democrats.
Interestingly though, for PGF, many criticisms needed to be coded not just as a Negative
Democrat Assessment or a Negative Republican Assessment, but as both. In multiple situations,
the hosts were frustrated with both parties, though they were perhaps most annoyed with one
over the other. One of the most animated of these moments came when Democrats and
Republicans signaled that they may want to work together on a national infrastructure bill, but it
ultimately came to nothing. In the face of that gridlock, one host claimed,
“Well, you know, fuck them. I mean it matters more that the country gets the stuff built
and … if you decide to forgo the opportunity because I’m so nihilistic about politics,
because all I want is the win in 2020, I don’t want to hear that. It’s just not – that’s an
incredibly cynical thing to do and I understand that we’re in a terrible political situation,
but part of the terrible political situation is that both parties, but particularly the
Republican party, acts purely… to win the next political fight and not to actually
accomplish the things that they say they want to accomplish.” (The A Bit Snitty Edition).
On the whole, PGF seemed to sympathize more with the Democratic Party and to side with them
on more political issues than with the Republicans. However, despite the disparity in positive and
negative assessment, the cohesive band that brought each episode of PGF together was a
prioritizing of country and democracy over the politics and partisanship of Washington. Their
overarching message to listeners, as a result, was that they should want and advocate for more
representative and better-functioning government. They were much less guided by the framing of
politics as team sport, and instead stayed faithful to politics as a means to governance and a
common public interest.
In comparison, BSS is a show that decidedly hinges on an us-versus-them narrative.
This is something that has been incredibly common in conservative talk radio for years, but the
tactic BSS used specifically was not to elevate Republicans as an infallible party or a party being
run with an even, steady hand. For BSS, assessment on Republicans was split with 27 instances
of positive assessment (with a coverage rate of 0.15% -1.59%) and 28 instances of negative
assessment (coverage 0.24%-4.38%). On the whole, there was slightly more overt criticism of
Republicans than praise for them, and the main subject of those negative assessments was
President Trump. As discussed previously, the Ben Shapiro Show makes a point to call out what
they see as bad policy or behavior from the President, but they nearly always pivot from the
criticism to one central point: no matter what Trump and the Republicans may do, the Democrats
have been, and will always be worse. This central premise is discussed at length in all BSS
episodes, and across the studied sample there were only three instances where Shapiro had anything
positive to say about Democrats or ‘the left.’ Usually, the theme of a given event or story was, at
least to some extent, something summed up by Shapiro in October of 2018: “Democrats suck at
everything,” (‘The Republicans’ Best Friend’).
This unimpressed sentiment persisted and even escalated over the course of the studied ten
episodes, due in large part to a perceived gap, for Shapiro, between the moral standing of
Democratic and Republican policy stances. For Shapiro, a proud pro-life advocate, the moral
failings of a pro-choice party were matched only by his belief that ‘the left’ is actively and
undeniably anti-Semitic. This meant that the 155 separate instances of negative Democrat
Assessments (with coverage ranging from 2.6% of a given episode to 11.5% of an episode)
ran the gamut from mild criticisms of a policy rollout to full-on diatribes about the integrity
and intention of the Democratic Party:
“After the Democrats have spent years demonizing Republicans, after they have said that
Donald Trump is not human…after the Democrats demonized Brett Kavanaugh, calling
him a gang rapist, after they suggested that the Covington boy, the Covington Catholic
High School boys were a bunch of racist bigots, after all of that, now the Democrats have
suggested that if you critique Ilhan Omar for being soft on 9/11 and being soft on
terrorism more broadly, you are inciting violence against her. This is sheer crap and it is
not only that, it is badly motivated crap,” (The Incitement Lie).
Beyond moral considerations, Shapiro also continuously reminds listeners that Democrats, in this
worldview, are flawed in their understanding of democracy and the American ideal. One
regularly occurring segment on BSS included reading from the Federalist Papers, and then
subsequent analysis from Shapiro about what the framers of the constitution envisioned for
American society and government. In these segments, and throughout each episode, there is
always criticism of the Democratic party, and an exaltation of conservative ideology. As such, it
is clear to listeners of the podcast that Shapiro’s ultimate vision for America is that it be a
country governed by concepts like small government, power of the states, and individual
freedoms.
Pod Save America, meanwhile, represents a kind of ideological inverse of BSS. Where
Shapiro heralds a small federal government and the removal of restrictions and
regulations, the hosts of PSA are fundamentally united around more liberal Democratic ideals. This
resulted in disparity not only in the Republican Assessments, but in Democratic ones as well.
Over the course of the ten episodes studied, there were 15 instances of Positive Democratic
Assessments (with a coverage range from 0.12% to 2.4%) and only 4 instances of Negative
Democratic Assessment (ranging in coverage from 0.23% to 0.74%). Comparatively, there were 65
coded instances of a Negative Republican Assessment (ranging from 1.24% coverage to 5.31%
coverage), and only 1 instance of a Positive Republican Assessment. As with BSS, there was a
clear allegiance with one party over another, and PSA undoubtedly backs Democrats. Their claim
to that backing is similarly based in moral choice, as co-host Dan Pfeiffer once
articulated:
“What I am struck by is the cruelty and the hypocrisy of the Republican Party. This is not
just Trump, this is not some MAGA hat-wearing Alex Jones follower that we got
burrowed into some department. This is what the Republican party was going to do no
matter who won, and the cruelty of it is just is to try to deny people the opportunity to
live their lives the way they see fit. And the hypocrisy is the Republicans run around
claiming to be the party of freedom. Hashtag freedom! They love fucking bald eagles,
their Twitter avatar, and what - but they don't mean freedom for everyone. They mean
freedom for a select group of people,” (‘Pollercoaster’).
This claim, that the Democratic party is one governed by all people, and that it is a party that is
representative of the interests of every American, was consistently made throughout all episodes
of PSA. However, a simple rendering of the assessment codes for Democrats and Republicans does
not give a full picture, for PSA or any of the other podcasts. Instead, because discussion of how
things should be often involved abstraction and a level of hypothesizing, much analysis about the
state of the country and where it should be headed was coded as punditry.
This was the case for all three of the podcasts, but further analysis of that punditry
revealed that for PSA, as for BSS, the vision of what America should be was based on the
party of choice and that party’s surrounding ideology. To PSA, more than any other political
podcast studied, the current administration and status quo was a danger to American ideals and
needed to be removed, so that others might gain power and ‘restore’ a better sense of the ‘real
American identity,’ (‘Ride or Die With Dictators’). It is also important to note again that PSA
had a much higher percentage of humor codes than the other podcasts. Much of the ridicule
and negative commentary aimed at Republicans in episodes of PSA came in the form of jokes
and sarcasm. This was standard for BSS as well (though aimed at Democrats), and was
somewhat common in episodes of PGF.
Phase One Summary
While the analysis completed in phase one resulted in a tremendous amount of insight,
the most important findings involve the framing tactics used in all of these podcasts. RQ1
specifically asks what framing tactics are deployed within popular political infotainment
podcasts, and the above sections extensively highlight the numerous answers to that question.
Categorically, popular political infotainment podcasts are split between opinion, hard
news/journalism, and humor framing tactics. Within those categories a range of tactics exist, but
the most popular were punditry and opinion (1632), facts and figures (694), directed questions
(572) and purposeful jokes (438). The analysis did not indicate that political leaning affected the
framing tactics used, as BSS and PGF had almost identical rates of opinion, journalism, and
entertainment elements. However, a thematic assessment did reveal that podcasts defined by
their political party or ideology do have a shared language, but that they cast good and bad actors
in ways that differ from each other based on their political ideology. BSS most notably cast the
media and Democrats or ‘the left’ as the ultimate bad faith actors in society, to a degree that
outpaced PSA and PGF’s discontent with Republicans. This tactic was noted for potential
recreation in phase two, but on the whole, the flow, tone, and persuasive strategies seen in all
three political podcasts resembled each other.
RQ1 also asks about the difference in tactics between the hard news podcast (Global
News Podcast) and the three political infotainment podcasts. A basic comparison of framing
rates showed stark differences between the hard news and soft news contenders. The Global
News Podcast consisted of 85% journalism codes and only 15% opinion codes. Meanwhile the
political podcasts all clocked in between 37 and 38% journalism codes, with ranging opinion and
entertainment elements (Figure 7). Despite these differences, however, the analysis also
discovered that the political podcasts largely adopted both story formats and journalism
tactics from the tradition of hard news. In podcasts with more than one host, directed questions
were a defining element, and facts and figures were substantiated with cited quotes, expert
sources, and personal testimonial as well. Some podcasts, like PSA, also regularly conducted
interview segments, which were often seen on the Global News Podcast. However, because the
Global News Podcast was affiliated with the BBC, it did not need to cite other media outlets.
Instead it cited its own reporting, or at the very least reporting that could be substantiated by their
outlet. The political podcasts, comparatively, did no original investigation and relied on other
media sources from which to cull facts and figures.
Phase one of the research also provided more than just an answer to RQ1. This investigation
allowed for the context building that was necessary for phase two’s experimental design. Listening
to all of these podcasts provided quantitative proof of how heavily these podcasts are
framed while also supplying an access point into the tone, style, and culture surrounding each of
them. This was needed to successfully craft comparable audio clips to be tested in an
experimental design. Only through familiarity with language choices, rhetorical strategies, and
the framing tactics could the audio clips accurately emulate the podcasts being studied.
Further, this thorough content analysis gave great insight into how much softness these
podcasts contain, and what exactly their relationship is with facts and truth. The end result was a
clear illustration that these political podcasts are defined by the dual nature of infotainment. BSS,
PGF, and PSA were all driven to not only inform but also entertain, through use of opinion,
humor, or human-interest framing. The informing part of their work was bolstered by a
continuous and steady citing of other news stories, but supplementing that news with opinion and
humor makes the final product a piece of soft journalism. These podcasts were unique, however,
even when compared to past forms of infotainment because of their extra emphasis on intimacy
between hosts and listeners, informality of tone and style, ‘authenticity’ of coverage and
conversation, and independence from other bigger media outlets. Though many of the granular
framing tactics of infotainment studied were reminiscent of older infotainment iterations, this
emphasis, which was baked into both the philosophy of the podcasts and the resulting episodes,
represented its own kind of framing. What this means for the core questions posed in this study's
introduction is that infotainment on podcasts is not radically different in terms of form or
narrative tactics from iterations on broadcast mediums, but there is enough difference in the
culture and philosophy of the medium itself to differentiate the two.
CHAPTER 5
SURVEY-EXPERIMENT FINDINGS
While phase one of this research examined the narrative composition of political podcasts
in the US, and the framing tactics that they use to connect with audiences, phase two was
designed to assess the effects of these framing tactics on potential listener knowledge and
opinions. RQ2, 3, 4, and 5 all directly ask about the effects of softer news tactics and the way
they compare to hard news tactics:
RQ2: How successful is humor framing in strengthening political attitudes as compared to ‘hard
news’ framing?
RQ2a: Is humor framing more successful in strengthening political attitudes with liberals
or conservatives?
RQ3: How successful is opinion framing in strengthening political attitudes as compared to
‘hard news’ framing?
RQ3a: Is opinion framing more successful in strengthening political attitudes with
liberals or conservatives?
RQ4: How successful is humor framing in increasing political knowledge as compared to ‘hard
news’ framing?
RQ4a: Is humor framing more successful in increasing political knowledge with liberals
or conservatives?
RQ5: How successful is opinion framing in increasing political knowledge as compared to hard
news framing?
RQ5a: Is opinion framing more successful in increasing political knowledge with liberals
or conservatives?
To assess these effects, an experiment was used. The two observed dependent variables were
political attitude and political knowledge.
Sample Description
Prior to compiling the answers to RQ2-RQ5 (all of which directly relate to the two
selected dependent variables in this study), a general breakdown of the participant pool is
needed. Throughout the project, all recruits were found and compensated through Mechanical
Turk and were screened for U.S. citizenship. More than 2,000 people participated in the survey
at least in part, but of the 1,541 who completed the survey in full, 54% were men (N=826) and
46% were women (N=705). Meanwhile 80% of participants were white, 9% were African
American, 5% were Asian, 4% were Hispanic and the rest of the identified groups made up 1%
or less of the participant pool. Education levels of the participants ranged from those who had
not finished college, to those who possessed Doctoral or professional degrees.
Initially, recruitment from Mechanical Turk pulled from the available population on the
platform, regardless of political party affiliation or ideological background. This resulted in far
fewer conservative participants than liberal ones. Because the design of this experiment was
interested in the strengthening of attitudes, a critical component to the humor treatment and the
opinion treatments was assigning participants to a condition that best reflected their pre-existing
political leaning. Without selecting for political party affiliation, the participant pool yielded
about 4 liberal participants for every 1 conservative participant. This uneven distribution lowered
the statistical power of the experiment, so after recruiting roughly one third of the participants,
an extra modification within Mechanical Turk was used to specifically screen for U.S.
conservatives. This modification yielded 3 conservatives to every 1 liberal, and in the end 43%
of participants were conservative and 57% were liberal (based on spending attitude).
This sampling modification was not pre-registered, but was needed given the preliminary
results which showed stark differences between conservatives and liberals. The change in
sampling also directly relates to the creation of the study’s four sub-questions. Originally the
differences between liberals and conservatives did not appear to be defining factors in this study.
The framing tactics used were the focus of the experimental design. However, data gathered in
the first wave of study indicated that political ideology was a critical part of the effects process.
As discussed in chapter 3, grouping of these two ideological groups was done using a
pre-test question about government spending. Sorting based on these kinds of questions is not
irregular, and is often used in polling and in academic study alike. Given the nature of the podcast
story being studied, government spending preferences were the most important indicators that
could properly sort people based on their pre-existing opinions. This research specifically opted
to not assign participants to conditions based on self-identification of party because while people
may identify overtly with one party or another, their ideology can vary and their opinions on
government spending may not always align with the party they identify with. This was clear
from the collected data: 44% of participants (683) identified support for the Democratic party,
but 47 of those self-identified Democrats were sorted into the conservative condition based on
their government spending answers. For Republicans, that difference was even larger. While 51%
of total participants (796) claimed they would support Republicans, 35% of those Republican
supporters (281) were sorted into the liberal condition based on their views on government
spending. Understanding this distribution of participants offers helpful insights in
terms of group polarization and the results discussed later in this study. Most importantly, this
measurement showed that despite having certain attitudes about government spending, a large
number of 'liberals' in this study were actually supporters of the Republican party. Among the
sampled liberals, there were equally staunch supporters of the Democratic party, but on the
whole, liberal support rates did not indicate the same level of party loyalty.
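As a schematic illustration, the sorting rule described above can be expressed in a few lines of Python. This is a hypothetical sketch, not the instrument actually used in the study: the 1-5 scale, the cutoff values, and the handling of neutral answers are all assumptions made for the example.

```python
# Hypothetical sketch of condition assignment by a government-spending
# pre-test item rather than by self-identified party. Assumed scale:
# 1-2 = favors more spending ("liberal" condition), 4-5 = favors less
# ("conservative" condition), 3 = neutral midpoint.

def assign_condition(spending_score: int) -> str:
    """Map a 1-5 government-spending answer to an experimental grouping."""
    if spending_score <= 2:
        return "liberal"
    if spending_score >= 4:
        return "conservative"
    return "neutral"

participants = [
    {"party": "Democrat", "spending": 4},    # sorted conservative despite party
    {"party": "Republican", "spending": 2},  # sorted liberal despite party
    {"party": "Republican", "spending": 5},
]

for p in participants:
    p["condition"] = assign_condition(p["spending"])

print([p["condition"] for p in participants])
# -> ['conservative', 'liberal', 'conservative']
```

The point of sorting on the attitude item rather than party label is visible in the toy data: both mismatched participants land in the condition that matches their spending view, not their stated party.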
This data about party approval and likeliness of voting for Republicans, Democrats, the
Green Party, and the Libertarian Party was collected in the post-test phase and is represented in
Table 23 in Appendix D on page 195. Table 23 shows a breakdown of favorability scores across
the experiment's four conditions and six treatments. In this table, and in the subsequent analysis,
treatment one represents the conservative opinion group, treatment two is liberal opinion,
treatment three is conservative humor, treatment four is liberal humor, treatment five is hard
news, and treatment six is the control group that received no audio clip. Meanwhile, condition
one is the whole opinion framing group, condition two is the humor framing group, condition
three is hard news, and condition four is the control group. Overall, there was an even
distribution of political ideology and party support across all conditions. Each condition
contained the same rate of preexisting political ideologies as the others. For the purposes of this
research, however, all questions about party support were asked after the treatment and the post-
treatment tests. Only when the support questions and the political knowledge questions were
completed did participants engage with their party affiliation overtly, and this was done in the
hopes of mitigating any potential cueing effects.
Along with general demographic data and party support numbers, this survey also
calculated participant experience with podcasts and their frequency of use. Table 24 in Appendix
D (pg. 195) shows the percentage breakdowns of the two questions directly relating to podcasts.
The results indicate that Turkers use podcasts at a substantial rate, outpacing national figures.
This is not surprising, given that previous research has shown Turkers to be more technologically
savvy than larger public samples (Hitlin, 2016). Indeed, more than half of the participants
(N=812) reported listening to a podcast within the last week, and only 5% (N=74) had never
listened to a podcast. Interestingly, the rate of people who had listened to a political podcast
before was also high, with two thirds of participants reporting that they had listened to a political
podcast before (N=1,018). 358 participants (23%) claimed that they also listen to political
podcasts regularly, establishing an existing level of familiarity and popularity of this podcast
genre among the participants in this sample.
Finally, all participants in the study were asked how closely they follow the news on
television, in print, on the radio, on their phones, and online. This was asked to get a better
understanding of how tuned in participants are to national events, matters of policy, and the news
being reported right now. Chapter 2 discussed at length the different models of citizenship in
America and how those models manifest and involve our media and wider press system. This
research ultimately believes that citizens are monitorial, and that they view and experience news
to differing degrees. There is no means of watching and consuming enough news to reach the
long theorized ideal of an ‘informed citizen,’ but all citizens engage with news and the media to
some extent. Those differing extents between individuals were borne out in the data here, where 31%
(N= 480) claimed to watch the news ‘very closely,’ 42% (N=645) claimed to watch moderately,
21% (N=325) said somewhat, 6% (N=87) said not very closely and only 1% (N=9) said they
don’t encounter the news at all (Figure 5, p. 196).
These results are helpful to the current study, because people who have experience
watching and consuming news are, to some extent, already exposed to the hard news and
infotainment framing tactics being studied in this research. However, extensive news
consumption could have threatened the integrity of the findings if the topic discussed in all three
audio treatments was widely known or often discussed. This potential danger, however, was
mitigated by picking a relatively obscure policy, the Universal School Meals Program Act, to
measure attitudes and knowledge about. Indeed, a question in the post-test asked participants if
they had any preexisting knowledge of the Universal School Meals Program Act prior to taking
the survey, and only 27% responded that they had (N=450). The likelihood that a substantial
number of those respondents were actually fully knowledgeable about the act is slim, especially
considering the limited media coverage surrounding the bill which has yet to pass. Also, based
on a preliminary look at the comments section provided at the end of the survey, numerous
respondents clarified that they may have heard about the idea of giving free lunches, and not
about the specific Act in question.
Political Attitude
One of the two selected dependent variables analyzed in this research was political
attitude. The goal of this dissertation was to establish how infotainment framing tactics compare
to hard news tactics, specifically when it comes to strengthening a political attitude. The design
of this experiment and the breakdown of the infotainment treatments into different directional
conditions allowed for a multi-leveled analysis of infotainment effects both based on ideological
preference and more generally. Answers for RQ2, RQ3, and their sub-questions draw on this
analysis, and hinge on repeated measures tests of support. These measurements of support were
taken in the pre-test and the post-test directly prior to and after exposure to the audio treatments.
In the scale used, 1 represents strong support for the Universal School Meals Program Act while
5 represents strong opposition to the act. Table 13 shows the discernible shifts in opinion
between the pre-test and the post-test. It depicts the mean changes for the four main conditions.
Table 22 in this chapter's summary provides the mean changes for all groups.
Table 13. Support Means in Pre-test and Post-test
N Support Pre-test Support Post-test Total Change
Condition 1 – Opinion 386 2.22 2.55 0.33*
Condition 2 – Humor 416 2.16 2.46 0.30*
Condition 3 – Hard News 368 2.12 2.14 0.02
Condition 4 – No Audio 372 2.05 2.05 0.00
Humor Framing
RQ2: How successful is humor framing in strengthening political attitudes as compared to ‘hard
news’ framing?
Beginning with RQ2, this study found that humor framing was a successful mechanism
for changing political attitudes. Using a repeated measures ANOVA, mean scores of support for
the Universal School Meals Program Act were identified prior to treatment exposure, and again
after treatment exposure. The humor group at large changed their original level of support (2.16)
by 0.30 points (ending with a support mean of 2.46). On a five-point scale, this movement was
substantial and represents a solid change in opinion among respondents. Through ANOVA
calculations, the mean changes were analyzed and found to be statistically significant (p value of
<0.005). In that analysis, there was a Wilks’ Lambda value of .904, a Pillai’s Trace of .096, and a
Hotelling’s Trace of .106. In comparison, the hard news treatment had virtually no change from
their first support score (2.12) and their second support score post-treatment (2.14). Their total
movement was marginal, 0.02 points, and further analysis showed no significant statistical
change in political attitudes as a result of the hard news framing.
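With only two time points, a repeated measures comparison like the one above reduces to a paired comparison of each participant's pre-test and post-test scores (in the two-measurement case the repeated measures F equals the square of the paired t). The following sketch uses entirely synthetic data, none of which comes from the study, to show the mechanics of computing a mean change and a paired t statistic by hand.

```python
import math
import random

# Synthetic illustration only: generate pre-test support scores on the
# study's 1-5 scale (1 = strong support) and post-test scores with a
# modest upward shift, then compute the mean change and the paired t
# statistic for the pre/post difference.

random.seed(1)
pre = [random.choice([1, 2, 2, 3]) for _ in range(100)]
post = [min(5, s + random.choice([0, 0, 1])) for s in pre]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_change = sum(diffs) / n                                  # analogous to "Total Change"
var = sum((d - mean_change) ** 2 for d in diffs) / (n - 1)    # sample variance of differences
t_stat = mean_change / math.sqrt(var / n)                     # paired t statistic

print(round(mean_change, 2), round(t_stat, 2))
```

In practice the study's multivariate statistics (Wilks' Lambda, Pillai's Trace, Hotelling's Trace) would come from statistical software; the sketch is only meant to make the underlying pre/post logic concrete.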
What this finding means for RQ2 is that humor framing is more successful in changing
political attitudes than hard news is. Both the humor condition audio and the hard news audio
covered the same general facts, and provided the same baseline information about the proposed
school meals bill. However, the difference between audio clips had to do with tone and style.
The humor clip was crafted to include a more even mix between information and entertaining
elements like jokes and pop culture references. The hard news clip, meanwhile, did not use these
more informal tactics, and stuck to unbiased fact relaying. These differing tactics, in turn,
prompted different reactions from participants. The level of demonstrated movement between the
pre-test measure and the post-test measure of support between the humor treatment group and the
hard news group illustrate this clearly, and the significance scores calculated for the humor
treatment rejected the null hypothesis that no political strengthening would take place. The same
significance could not be established for hard news tactics. This bolsters the emerging claim that
humor tactics in political podcasts are better able to strengthen a person’s political attitude in the
direction of their pre-existing beliefs than hard news.
Understanding that humor was an effective tactic for strengthening attitudes, it was
important to assess how that change manifested across ideological leanings. To respond to that
query, RQ2 had a sub-question, not preregistered, that assessed the
differences in political attitude strengthening amongst conservatives and liberals.
RQ2a: Is humor framing more successful in strengthening political attitudes with liberals or conservatives?
Table 14. Support Means in Pre-test and Post-test
(N) Support Pre-test Support Post-test Total Change
Condition 2 – Humor 416 2.16 2.46 0.30**
Conservative Humor 180 2.78 3.62 0.84**
Liberal Humor 236 1.68 1.61 0.07
Table 14 provides the breakdown of mean changes among the two treatments that made
up the humor condition. What they established was a much higher rate of opinion strengthening
in the conservative humor group. Conservatives exposed to a humor-infused podcast moved
almost a full point in opposition to the bill, with a change of 0.84 points. The liberal group,
meanwhile, moved far less, only changing their opinion by 0.07 points. This difference
is so extreme that when tested individually, only the conservative group had any
statistically significant strengthening of opinion. The conservative group moved nearly an entire
point in their assessment of support (on a five-point scale), and there was a collective group
movement towards more pronounced opposition of the Universal School Meals Program Act.
This mean change is demonstrably high, and much more pronounced than the movement
displayed by conservative participants in the hard news group (seen in Table 16). These results
thus show that the use of humor for conservatives is a successful tactic. If the intention of a piece
of content is to strengthen an individual’s political attitude and further persuade them towards a
conservative leaning, humor is a viable rhetorical strategy for conservative podcasters and audio
creators. Meanwhile, liberals responded in a less meaningful way. While their mean change was
present and did move in the anticipated direction, significance testing showed that there was not
enough movement in political attitude to be deemed as more than the result of chance. An
interaction test comparing support, ideology, and treatment groups also showed that humor
actually had slightly less effect for liberals in terms of solidifying their opinions than the hard
news condition (Table 16, Table 22).
Definitive reasons for this difference among conservatives and liberals cannot be
identified at this time, as more study with testing and retesting would be required. However, it is
important to note that studies of humor have long grappled with the creation of materials that are
actually funny. Humor experiments necessitate that the treatments involved are demonstrably
humorous and perceived as such by participants, and humor can be a deeply contextual element.
Chapter 3 detailed the rigorous rounds of editing and finessing used to try and create the best
content pieces possible. During that process, and in the provided comments of the experiment
itself, respondents from both sides of the ideological aisle praised the humor audio as being
enjoyable and funny. However, in trying to create equivalent materials, it is possible that the
setup or style was actually more indicative of conservative media than liberal media. As such, some
of the potential effects humor can have on liberals may have been subdued.
Also, it is critical to note that there was a stark difference between the starting points of
liberals and conservatives in pre-existing support. This will be discussed at more length in
subsequent sections; however, looking at the first measure of support, which was taken prior to
podcast exposure, liberals actually had less opinion strengthening possible than conservatives
did. Their support was already elevated to begin with. Their mean score fell between ‘somewhat
support’ and ‘strongly support.’ This strong support became marginally more pronounced in the
second measure collected in the post-test, though at a smaller rate than could be deemed
significant. Meanwhile, conservatives moved from a neutral point to a moderately oppositional
one. They moved from a 'neither support nor oppose' score to a 'somewhat oppose' score,
meaning their opinion was still less strong than liberals, even though their movement was so
much more substantial. Nevertheless, the analysis undertaken to answer RQ2a ultimately found
that humor framing was more successful in strengthening political attitudes with conservatives
than it is with liberals.
Opinion Framing
RQ3: How successful is opinion framing in strengthening political attitudes as compared to
‘hard news’ framing?
The analysis for RQ3 was completed in a nearly identical way to the work done for RQ2.
Using a repeated measures ANOVA of the levels of support, the opinion framing group had a
substantial change in political attitude from an initial mean of 2.22 to a secondary mean of 2.55.
Their total movement was thus 0.33 points on a five-point scale (Table 13). This change between
the pretest and posttest means that there was a statistically significant change in political attitude
when opinion framing tactics were used. As with the humor tactics, opinion tactics on the whole
proved to be a successful means of strengthening political attitudes and hard news produced
relatively no change. The overarching findings of the study indicate that this infotainment tactic
is a better persuasive mechanism than exposure to traditional hard news. The findings of this
ANOVA rejected the null hypothesis (which would state that there is no change in political
attitude), bolstering the claim that opinion framing does strengthen political attitudes when
applied in a pro-attitudinal way.
Similarly to RQ2, RQ3 also poses a sub-question about the differences in framing effects
between liberals and conservatives, which was not preregistered, but represents an added
exploratory inquiry within this research:
RQ3a: Is opinion framing more successful in strengthening political attitudes with liberals or conservatives?
Table 15. Support Means in Pre-test and Post-test
(N) Support Pre-test Support Post-test Total Change
Condition 1 – Opinion 386 2.22 2.55 0.33**
Conservative Opinion 163 2.96 3.96 1.00**
Liberal Opinion 223 1.70 1.52 0.18**
However, while humor tactics were found to be significant rhetorical tools only for
conservatives in the humor treatment, both liberals and conservatives displayed statistically
significant movement when exposed to pro-attitudinal opinion frames compared to the hard news
mean score. Individuals in the conservative opinion group had a pre-test support mean of 2.96
and a post-test support mean of 3.96. This was the most pronounced movement of any condition
throughout the study, and the full point shift in attitude shows just how much strengthening
power opinion framing had amongst conservative participants. The p value for this movement
was <0.005, fully rejecting the null hypothesis and establishing that significant strengthening of
political attitude did occur.
Meanwhile, the movement in the liberal opinion group was less pronounced but was still
significant when compared to the hard news group mean. The liberal opinion group’s support
measure checked in at 1.7, and their second support measure moved to 1.52. This meant they
moved 0.18 points. The repeated measures ANOVA established a p value of .044 when
compared to the larger hard news group, also rejecting the null hypothesis and showing
significant attitude strengthening. However, an interactions test breaking down the movement of
all participants based on ideology actually shows that the political attitude strengthening liberals
experienced when exposed to hard news and to opinion was nearly equivalent (Table 16, Table
22). As such, this test cannot definitively claim that opinion framing affects liberal listeners in a
more significant way than hard news framing.
The difference in means between the two ideological groups in the opinion treatment
demonstrates how different the movement of opinion is between conservatives and liberals. Still,
it is important to consider that liberals appeared to begin with more strongly held beliefs than
conservatives. While the movement for liberals was still significant, they had less space available
in terms of gaining support, for both the humor and opinion conditions, as their group mean
started from a place that more than somewhat supported the bill. Conservatives, meanwhile, had
a mean that essentially made their group undecided prior to listening, allowing for much more
movement in their overall assessments. What this means for the current research is that these
opinion tactics work for both groups, but further study with other topics that enjoy less support
out of the gate might reveal more similar rates of susceptibility between conservatives and liberals.
Political Attitude Findings Summary
On the whole, the findings suggest that infotainment framing tactics work better than
hard news tactics to strengthen political attitudes. Across the board, each treatment group that
was exposed to an infotainment treatment moved in the expected direction based on the frames
they were given. But while this study has looked at the breakdown between conservatives and
liberals, the larger takeaway should be how pronounced the movement is generally when
participants are exposed to infotainment framing tactics. Both humor and opinion tactics,
common elements of political infotainment podcasts, persuaded listeners in the direction that
podcast hosts wanted. Table 16 demonstrates this dynamic with post-support findings that also
account for an ideological interaction. In this chart, the liberal ideology and the control condition
were the statistical reference variable. Appendix D contains a pre and post ANOVA chart and
more descriptive mean change findings that were critical to this analysis of political attitude
strengthening.
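The reference-variable setup used in the interaction model can be illustrated with a small dummy-coding sketch. This is a hypothetical reconstruction of the design matrix logic, not the study's actual code: the category names mirror the study, but the function and its structure are assumptions for illustration.

```python
# Hypothetical sketch of the dummy coding behind an interaction model like
# Table 16: liberal ideology and the control condition serve as reference
# levels (their rows are all zeros), so every other category gets a 0/1
# indicator, plus ideology-by-treatment product terms.

TREATMENTS = ["opinion", "humor", "hard_news"]  # "control" is the reference level

def design_row(ideology: str, treatment: str) -> dict:
    """Build one participant's row of dummy and interaction indicators."""
    row = {"conservative": int(ideology == "conservative")}
    for t in TREATMENTS:
        row[t] = int(treatment == t)
        row[f"conservative_x_{t}"] = row["conservative"] * row[t]
    return row

print(design_row("conservative", "opinion"))
print(design_row("liberal", "control"))  # all zeros: the reference cell
```

Because the liberal/control cell is all zeros, its effect is absorbed into the intercept, which is why those parameters appear as "set to zero because it is redundant" in the SPSS-style output.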
Table 16. ANOVA Results for Political Attitude Among Infotainment and Hard News Podcasts
(Ideological Interaction and General Knowledge Covariate)
Dependent Variable: Support

Parameter               B      Std. Error  t      Sig.  95% CI Lower  95% CI Upper  Partial Eta Sq.  Noncent. Param.  Observed Powerb
Intercept               1.616  .072        22.32  .000  1.474         1.758         .246             22.321           1.000
General Knowledge       .038   .027        1.409  .159  -.015         .091          .001             1.409            .291
Conservative            1.259  .116        10.82  .000  1.031         1.487         .071             10.820           1.000
Liberal                 0a     .           .      .     .             .             .                .                .
Opinion                 -.166  .092        -1.80  .072  -.347         .015          .002             1.803            .437
Humor                   -.063  .090        -.698  .485  -.240         .114          .000             .698             .107
Hard News               -.126  .092        -1.36  .171  -.307         .055          .001             1.369            .277
No Treatment            0a     .           .      .     .             .             .                .                .
Conservative Opinion    1.177  .155        7.591  .000  .873          1.481         .036             7.591            1.000
Conservative Humor      .734   .152        4.822  .000  .435          1.033         .015             4.822            .998
Conservative Hard News  .200   .157        1.271  .204  -.109         .508          .001             1.271            .246

a. This parameter is set to zero because it is redundant. b. Computed using alpha = .05
Meanwhile, the smaller rate of change for the hard news group represents its own finding.
Between the pre-test and the post-test measures of support, there was only a 0.02 point change in
opinion. However, a deeper dive into this data showed that this was the result of liberals and
conservatives cancelling each other out. A test of ideological interaction in the study showed that
there was attitude strengthening that happened on both sides when participants were exposed to
hard news (for significant mean changes, see Table 22 in this chapter’s summary). For
conservatives, these rates of movement were not nearly as pronounced as in the infotainment
groups, but both liberals and conservatives solidified their opinions when exposed to hard news.
That movement was much more proportionate in liberal participants, and further study would be
needed to substantiate whether or not infotainment or hard news tactics are more impactful for
these listeners. Overall, the larger difference in movement shows that while exposure to
information alone may sway people’s opinions to some extent, it does not pack the same power
of persuasion that infotainment frames have in some segments of the public. This claim cannot
be made for more polarizing topics in the political arena, where opinions may already be so
charged that the dynamics of persuasion change significantly, but the current findings do suggest
that infotainment podcasts are more effective agents of political attitude change or strengthening.
Political Knowledge
The second of the dependent variables selected for study in this experiment was political
knowledge/recall. The goal in this portion of the experimental design was to see if there were
significant differences between infotainment and hard news framing tactics in how well they
inform people. To test this, three questions were posed to all of the treatment groups following
their exposure (or lack of exposure) to the audio clips. All of the questions were multiple choice
and a simple frequency analysis showed what percentage of people in each group got the
answers right. This frequency check allowed for a more nuanced look at what questions were
easier or harder for each treatment group. A breakdown of these percentages across the four
condition groups in this experiment can be found in Table 29 on page 202. Table 17 shows the
calculated knowledge measure, or how many right answers each treatment group averaged.
Calculation of this score made conducting an ANOVA possible. This analytic plan was outlined
in chapter three and preregistered. An ANOVA table for political knowledge with an interaction
for ideology and a general knowledge covariate can be found in Table 18.
Table 17. Political Knowledge Scores (Average of Right Answers for 3 Story Based Questions)
N Knowledge Score
Condition 1 – Opinion (386) 1.74**
Condition 2 – Humor (416) 1.56
Condition 3 – Hard News (370) 1.49
Condition 4 – No Audio (372) 0.99
Table 18. Political Knowledge ANOVA (with Ideological Interactions and General Knowledge
Covariate)
Parameter               B      Std. Error  t       Sig.  95% CI Lower  95% CI Upper  Partial Eta Sq.  Noncent. Param.  Observed Powerb
Intercept               .787   .059        13.389  .000  .672          .902          .105             13.389           1.000
PKScoreStatic           .127   .022        5.825   .000  .084          .170          .022             5.825            1.000
Conservative            .038   .094        .407    .684  -.147         .224          .000             .407             .069
Liberal                 0a     .           .       .     .             .             .                .                .
Opinion                 .691   .075        9.218   .000  .544          .837          .053             9.218            1.000
Humor                   .597   .073        8.137   .000  .453          .741          .041             8.137            1.000
Hard News               .387   .075        5.177   .000  .241          .534          .017             5.177            .999
Control                 0a     .           .       .     .             .             .                .                .
Conservative Opinion    .052   .126        .414    .679  -.195         .299          .000             .414             .070
Conservative Humor      -.158  .124        -1.280  .201  -.401         .084          .001             1.280            .249
Conservative Hard News  .132   .128        1.033   .302  -.119         .382          .001             1.033            .178

a. This parameter is set to zero because it is redundant. b. Computed using alpha = .05
Humor Framing
RQ4: How successful is humor framing in increasing political knowledge as compared to ‘hard
news’ framing?
Based on an analysis of the knowledge scores assessed in this experiment, more
knowledge was gained from the humor treatment than the hard news treatment. Participants in
the humor treatment answered an average of 1.56 questions correctly, while hard news
participants answered 1.49 questions correctly. However, while there was a higher score accrued
by participants in the humor treatment, the recall rates are still close together, amounting to a
difference of only 0.07, and ultimately do not vary enough to be deemed statistically significant.
This means that on the whole, humor and hard news tactics produced roughly the same level of
political knowledge, or at least no difference that could not be explained by chance.
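The knowledge score itself is a simple calculation: the average number of the three story-based recall questions a group answered correctly. The sketch below uses invented answer keys and synthetic responses, and the question labels are placeholders, not the study's actual items.

```python
# Synthetic illustration of the knowledge-score measure: average count of
# correct answers (0-3) per treatment group. Keys and responses are made up.

answer_key = {"q_easy": "B", "q_medium": "A", "q_hard": "D"}

responses = {
    "humor": [
        {"q_easy": "B", "q_medium": "C", "q_hard": "D"},  # 2 correct
        {"q_easy": "B", "q_medium": "A", "q_hard": "A"},  # 2 correct
    ],
    "hard_news": [
        {"q_easy": "B", "q_medium": "A", "q_hard": "A"},  # 2 correct
        {"q_easy": "C", "q_medium": "A", "q_hard": "A"},  # 1 correct
    ],
}

def knowledge_score(group):
    """Average number of correct answers across a group's participants."""
    totals = [
        sum(1 for q, correct in answer_key.items() if person[q] == correct)
        for person in group
    ]
    return sum(totals) / len(totals)

scores = {name: knowledge_score(group) for name, group in responses.items()}
print(scores)  # humor averages 2.0, hard news 1.5 in this toy data
```

Once each group has a single mean score of this kind, comparing groups with an ANOVA (as preregistered in the analytic plan) becomes straightforward.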
A more nuanced investigation based on each of the three measured recall questions
provides supplementary insights that a simple assessment of knowledge scores cannot provide.
Looking at a comparison of the easy, medium, and hard questions, there is a clear difference
between the humor treatments and the hard news treatments on some of the questions. However,
the differences found were inconsistent, as demonstrated in the data provided in Appendix D.
Individuals in the humor treatment vastly outpaced the hard news treatment for the easy question
(76% correct compared to 63% correct) and had superior recall of the hardest question (23%
compared to 17%). These questions asked what the overall goal of the Universal School Meals
Program is and what two interest groups were consulted in the drafting of the bill. However, for
the medium question, which asked about what specific meals were included in the proposed
legislation, individuals in the humor treatment were less likely to select the right answer than the
hard news participants (57% compared to 67%).
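As a purely illustrative aside (this question-level test was not part of the study's analysis), the reported easy-question rates can be plugged into a standard two-proportion z statistic, using the treatment group sizes from the tables that follow (416 humor, 370 hard news):

```python
# Illustrative two-proportion z statistic for a single question's recall
# rates. The 76% / 63% figures are the reported easy-question rates; this
# per-question test is a sketch, not one of the study's reported analyses.
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled success rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z_easy = two_prop_z(0.76, 416, 0.63, 370)
```

A large z on one question alongside a null composite-score result is consistent with the "clear but inconsistent" per-question differences described above.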
The general takeaway from this portion of the analysis is that there was no significant
difference between hard news and humor framing tactics when it came to informing participants.
The implications of these findings will be addressed in chapter six. RQ4
was also interested in more than humor generally, however, and its sub-question confronts any
differences that may exist based on political ideology.
RQ4a: Is humor framing more successful in increasing political knowledge with liberals or conservatives?
Table 19. Political Knowledge Scores (Average of Right Answers for 3 Story Based Questions)
N Knowledge Score
Condition 2 – Humor (416) 1.56
Conservative Humor (179) 1.52
Liberal Humor (236) 1.58
Despite the lack of significant change on a treatment-wide basis, analysis based on
ideological leanings did show that liberals recalled slightly more than conservatives. Liberals
exposed to a humor treatment had an overall knowledge score of 1.58 and conservatives had a
score of 1.52. Ultimately, this slim margin of change was not enough to reject the null
hypothesis, meaning the difference cannot be distinguished from chance. What this means for the
study is that no evidence exists that conservatives and liberals have a different relationship with
humor-based infotainment and information recall ability. Participants, regardless of party,
remembered key facts at similar rates.
Opinion Framing
The final segment of this experimental research hinged on RQ5, which looks at opinion
framing as a potential influence on political knowledge. This question asks:
RQ5: How successful is opinion framing in increasing political knowledge as compared to hard news framing?
Unlike the humor group, the individuals in the opinion treatment answered all questions
correctly at a higher rate than the hard news group (Table 15). The group as a whole scored
1.74, compared to the hard news group's 1.49. The difference in
scores, 0.25 points on a three-point scale, represented a marked distinction in effect. This
difference easily rejected the null hypothesis that no difference would exist between opinion and
hard news. Ultimately, opinion framing tactics were shown to increase political knowledge,
better informing listeners, at least in an immediate test of recall. A deeper dive on the individual
questions in the opinion group is likewise helpful to see the stark differences between opinion
and hard news, but also between opinion and humor.
As a whole, the opinion treatment group scored 80% on the easy question, 74% on the
medium question, and 21% on the hard question. The hard news group, comparatively, answered
63%, 67% and 17% correctly on each question respectively. For this group, the difference in
recall is more consistent, and the findings suggest that opinion tactics may increase political
knowledge when compared to hard news. Again, a one-way ANOVA was used to compare the
means and standard deviations of the opinion and hard news treatments and to ascertain a
statistical significance level based on the calculated political knowledge scores. The results of those tests
also suggest a more consistent difference in knowledge and recall between the hard news and the
soft news treatments. However, there was a large disparity between the conservative and liberal
conditions, the results of which answer the question posed by RQ5a.
RQ5a: Is opinion framing more successful in increasing political knowledge with liberals or conservatives?
Table 20. Political Knowledge Scores (Average of Right Answers for 3 Story Based Questions)
N Knowledge Score
Condition 1 – Opinion (386) 1.74**
Conservative Opinion (162) 1.82**
Liberal Opinion (223) 1.69
Once again, the breakdown of liberals and conservatives illustrated that conservatives
answered the easy question and medium question with a very high level of accuracy (88% and
90%). Liberals, meanwhile, fared better than they did in the humor treatment, but
underperformed when compared to conservatives (74% on easy question and 62% on medium).
For the final question, however, the gap reversed dramatically: only 4% of conservatives
answered correctly, compared to 33% of liberals. These wider discrepancies across all questions
resulted in a larger difference in total political knowledge scores (1.82 for conservatives and
1.69 for liberals), a difference that a significance test confirmed was statistically
significant. The results indicate that conservatives learned more and
could recall more information when exposed to opinion framing tactics than liberals.
This varying result prompts questions about what outside factors may contribute to
differences between liberal and conservative listeners of opinion-based political infotainment.
Looking specifically at recall and attitude strength, opinion-based infotainment had a much
higher impact on conservatives. However, political knowledge as a variable in this study was
measured in a way that precluded some of the potential effects seen in the political attitude
measures. In this test, there was no pre- and post-treatment measure of knowledge; there was
simply a post-treatment recall measurement, to which conservatives responded correctly at a higher rate.
This raised the question of whether conservatives within the hard news grouping did better than
liberals in the same segment, but no significant difference was identified between those groups.
Conservatives in the hard news group recalled less from hard news than the conservatives in the
opinion group. As such, there is reason to assume that the framing mechanisms are the key
component causing the substantial recall effect.
As with the creation of the humor treatments, careful consideration and deliberate
planning and piloting went into the creation of the study's opinion audio materials. That rigorous
work and high production standard was undertaken to mitigate outside effects, and the final
products were as equal and comparable as possible, yet the change in effects remains undeniable.
Potential reasons for this change will be discussed at more length in chapter 6. That said, both
conservatives and liberals learned more
from the opinion treatments than the members in the hard news groups did. This was true when
analyzing these groups as a collective and separately. Opinion tactics also had more impact on
recall than the humor tactics analyzed above for RQ4.
Importantly, this experiment also conducted a second wave of political knowledge
questions based on static, or foundational facts. These were questions that most Americans may
already have answers to in their personal knowledge base. The calculated percentages for each
treatment group are listed below, and showed that conservatives had a generally higher rate of
knowledge than liberals (highlighted in Table 21). This could have been a more worrisome trend,
because one group was more informed than another on a basic level. If that group had proceeded
to dominate across all tests and conditions, this measure may have given some context as to why
and harmed the integrity of this experiment’s findings. But while the conservative pool outpaced
liberals when it came to political knowledge in the opinion treatment, the fact that liberals were
the group who learned more from humor frames, and that conservatives in the hard news
treatment learned as much as their liberal counterparts, indicates base knowledge may not be
directly relevant or constitute a confounding variable.
Table 21. Political Knowledge Percentages (President, Senate, Spending Questions)
N President Senate Spending
Treatment 1 - Opinion (386) 87% 59% 40%
Condition 1 – C. Opinion (162) 96% 63% 44%
Condition 2 – L. Opinion (223) 80% 56% 38%
Treatment 2 – Humor (416) 83% 57% 33%
Condition 3 – C. Humor (179) 91% 67% 38%
Condition 4 – L. Humor (236) 77% 49% 30%
Treatment 3 – Hard News (370) 83% 58% 32%
(Condition 5)
Treatment 4 – No Audio (372) 77% 49% 32%
(Condition 6)
Phase Two Summary
In the second phase of this dissertation research, findings yielded answers to all four of
the remaining research questions, which were focused on effects. Through recreating the tone,
language, and style of the studied political podcasts and hard news podcast in phase one, this
experiment provided the first academic examination of what exposure to infotainment framing
tactics in political podcasts might mean for listeners. The results were varied, but all of the data
collected provides important additions to previous academic research. The main points of these
findings have been summed up in Table 22 below.
Table 22. Descriptive ANOVA Results for Attitude Strengthening and Knowledge Scores
Measure                      Opinion   Humor    Hard News   Control   F(1, 541)   η²
Support (Mean Change)        0.33**    0.30**   0.02        0.00      14.98       0.028
Political Knowledge Score    1.74**    1.56     1.49        0.99      46.68       0.084
**p < .001. Statistical tests conducted between infotainment and hard news groups.
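For readers interested in how the reported effect sizes relate to the F values in Table 22, partial eta-squared can be approximated from an F statistic and its degrees of freedom. The small discrepancies from the table's reported values likely reflect rounding and the full model's exact sums of squares.

```python
# Sketch of the F-to-eta-squared relationship behind Table 22. For an effect
# with df1 numerator and df2 error degrees of freedom, partial eta-squared
# can be recovered as F*df1 / (F*df1 + df2).

def eta_squared_from_f(f, df1, df2):
    """Partial eta-squared recovered from an F statistic and its dfs."""
    return (f * df1) / (f * df1 + df2)

support_eta = eta_squared_from_f(14.98, 1, 541)    # ~0.027 (reported: 0.028)
knowledge_eta = eta_squared_from_f(46.68, 1, 541)  # ~0.079 (reported: 0.084)
```

Both recovered values land near the table's entries, confirming the internal consistency of the reported statistics.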
Firstly, phase two grappled with the impact of humor and opinion framing tactics on
political attitude. The experiment looked at both attitude change and attitude strength, and final
results showed a statistically significant difference between the effects of infotainment and
harder news framing tactics. Opinion framing was particularly impactful, causing the most
substantive change in political attitudes. Both liberals and conservatives showed responsiveness
to the created audio stimuli in the opinion treatment, and they further solidified their support or
opposition to the Universal School Meals Program Act. Humor framing, in comparison, was also
a successful means of changing political attitude. The humor treatment as a whole experienced a
statistically significant change in attitude when compared to the hard news group, but that
change was driven by the conservative condition. Liberals who were exposed to the humor
treatment did adjust their attitudes, but not to a degree that was strong enough to reject the null
hypothesis.
These results, which tackled the questions posed by RQ2 and RQ3, indicate that using
humor and opinion frames in a way that reinforces people’s existing beliefs further persuades
them towards one side of an argument. All net change documented among these four groups
represented a directional shift towards the anticipated side, indicating that political infotainment
podcasts have the potential to persuade their listeners and further polarize their positions, as long
as those listeners already hold the same political beliefs. Participants in the hard news treatment,
meanwhile, did not show the same level of movement from one side to another. This suggests
that, at least on topics like the one chosen, which have not garnered much public attention,
hard news exposure produces more limited attitude change. Facts alone do not
seem to change minds to such a high degree, at least on low salience issues, but infotainment
tactics can readily do so.
A second variable analyzed in this phase of research was political knowledge and the
immediate recall ability people had when asked questions about the simulated podcast stories
they were exposed to. The findings from this segment of the study show that only opinion
framing prompted increased recall when compared to hard news framing. In the humor
condition, the difference in recall rate did not amount to a sizeable enough variance to rule out
chance as a potential cause. Meanwhile, the difference between opinion and hard news was
prominent. Between the hard news and opinion treatments, there was a 0.25 point difference on a
three point scale, which represents a significant increase in recall among those participants. The
fundamental takeaway from the analysis of political knowledge is that opinion-based
infotainment can be a more effective informer than hard news. Humor as a framing mechanism
was not found to be superior, but further testing may show that the slight increase found here is
more meaningful than this study could establish.
CHAPTER 6
DISCUSSION AND CONCLUSIONS
At its most fundamental level, the analysis in this dissertation grapples with questions of
infotainment and the effects that changing content styles have in political news spaces.
Preexisting academic conversation about this topic has remained mixed, and findings on the
impact of soft news and infotainment have been varied (Barnett, 1998;
Baum, 2003; Stockwell, 2004; Baym, 2014; Boukes, 2018; Boukes, 2019). Scholars and
practitioners alike have debated the merits and shortcomings of softening news structures, but
given today’s high-choice media environment, and the market pressures that are pushing more
and more outlets towards some degree of softness, this debate is still as relevant as ever (Prior,
2005; D’Amato, March 2018).
That relevance is especially glaring when looking at this debate as part of a larger fabric
concerned with democracy, the powers and responsibility of the media, and how citizens must
use factual news content to participate in the political realm and to actualize their citizenship
(Schudson 1999; Boczkowski, Mitchelstein and Walter, 2012; Schudson, 2018). That practice of
interacting with news to stay informed has never been more complicated. Given the existence of
the hybrid media system, and the ever-changing media landscape people encounter online, news
trends and habits are transitioning all the time (Chadwick, 2017; Chadwick, Vaccari and
O’Laughlin, 2018). Academia has been reactive to this, attempting to build on a wealth of
knowledge while weeding out past findings and arguments that may no longer bear much fruit.
But despite progress trending in the right direction, there is still so much to be deliberated and
documented in this area of communication studies.
What this project seeks to add to that overwhelming discussion is multifaceted. On a
primary level, this work provides more layers and more contexts through which to examine the
phenomena of softening news. This work specifically looked at political infotainment in
podcasts, and documented how the hybrid genre of infotainment has been adapted by political
podcasters in the US. To date, research on podcasts has been expanding, but its output has not
matched the steadily increasing hold that podcasts have on the country and
the world (Park, 2017; Funk, 2017; Llinares, Fox and Berry, 2018). This medium is only
growing, with more and more creators and shows coming online and entering the mainstream
every day (Podcast Insights, 2020). Popular podcasts have increased their ratings, drawing
regular listeners who number in the millions.
This stamp of public approval makes podcasts worthy of increased investigation more
broadly. With roughly one million podcasts available for download now around the world, not
every podcast is deserving of academic consideration, but there should be more conscious work
by scholars to see which podcasts are transforming communicative styles and prompting audience
engagement and community building. Some studies have begun this work, but the continuously
increasing hold that podcasts have on Americans and people around the world indicates there is
still much left to learn (McClung and Johnson, 2010; Markman, 2015; McHugh, 2016;
Sienkiewicz and Jaramillo, 2019). The driving force behind the present study is a belief that it is
especially important to consider podcasts that actively seek to change public opinion and interact
in the political sphere. The overt attempts at persuasion, and the competing desire of some
political podcasts to inform, entertain, and sway audiences towards certain attitudes poses a
number of serious questions, many of which have been investigated at length in this dissertation.
Discussion
To begin understanding these podcasts, and to see how infotainment framing tactics have
been adapted from older mediums and transformed in this digital space, there first needed to be a
descriptive and exploratory investigation of these podcasts on a more granular level. Such a
baseline has yet to be established in academic research, and this project provides one of the first
content analyses ever conducted on news podcasts (Funk, 2017).4 Through looking at three of
the most popular political podcasts in the US, while also analyzing one of the world’s most
listened to hard news podcasts, phase one of this project dug into the ‘what’ and ‘how’ of
political news podcasts. The resulting codebook is informative and wide ranging. On the whole,
12 topic types were categorized as common among political podcasts, while 22 general codes
were identified. This significantly expanded existing coding schemes for soft and hard news
studies. Further, a deeper look into the themes, popular language choices, and framing tactics
used amongst these podcasts provided increased clarity about how political conversations are
taking place on podcasts. The way that podcast hosts make meaning and tell stories about the
news of the day is familiar, drawing on content like talk radio and late-night comedy news
shows, but there is also a certain podcast-specific language that all three studied shows possess.
This shared process of communicating was present across the podcasts, even when the
opinions and ideology of the shows varied. Indeed, many stories from the Ben Shapiro Show had
glaringly different topics than those of Pod Save America or Political Gabfest, but the casting of
characters and the chosen figureheads and important touchstones were largely shared across the
board. Many good actors and bad actors were flipped, as political ideology largely dictated who
was seen as friend and foe, but a thematic analysis that looked beyond just the presence of codes
and framing tactics revealed that all of the podcasts chose to tell stories in relatively similar
ways. This meant that not only did all of these podcasts share similar rates of hard and soft news
4 While Funk (2017) conducted a qualitative analysis of podcasts, and assessed things like topic and tone, the methods used were less rigorous and did not involve any kind of intercoder reliability. This was not a content analysis, but a more general examination of podcasts on the whole. The work was still incredibly informative, and represents a tangible first step in research of this kind, worthy of mention in this report.
tactics, but they also relayed their content in a way that was somewhat formulaic. This formula
draws on older storytelling logics from TV and radio, making it accessible to newcomers, but
also reflects unique attributes of podcasting including emphasis on intimacy, authenticity, and
informality.
These findings proved illustrative and provided a better context for scholars about how
language and arguments are molded and shaped on these kinds of shows. At the same time,
phase one's findings affirmed that phase two of the research would be a manageable
undertaking. One potential issue when attempting to translate the framing tactics and styles of
these podcasts to an experimental model was that not all participants would have the podcast
literacy of more active podcast users. However, the findings of phase one showed that there is
both a legacy of opinion/entertainment news in this country that precedes podcasting, and that
the mechanisms used by podcasting are inclusive enough so as not to alienate new audiences, at
least when it comes to general style and tone.
Phase two also controlled for the potential rejection of these podcast framing tactics and
stylings by sorting recruited participants assigned to an infotainment treatment into a pro-
attitudinal condition. This choice represents a change from most previous studies, but it better
reflects the high-choice media environment in which podcasts exist (Prior, 2005; Van Aelst et al,
2017). Many people in the digital age, either through choice or through filtered exposure, are
interacting with content that reaffirms or mirrors their previously held beliefs, whether that
content is true or not, further polarizing their attitudes and ideologies (Hameleers and van der
Meer, 2020). Podcasts as a content offering are predicated on this choice-model, because they
require audiences to actively seek them out and continuously subscribe or stream specific shows
(Llinares, Fox, and Berry, 2018). There is no publicly available data about the political identities
of the audiences of the infotainment podcasts examined in this study, but the trend for selecting
media which aligns with one’s preexisting preferences indicates most of these audiences already
possess the given political beliefs of the podcasts they listen to. There may be some individuals
who listen to PSA, PGF, or BSS across the political aisle, however the branding, framing, and
opinions broadcast in these shows likely deter most dissimilar thinkers from investing in these
shows long term.
Regardless of which political affiliation these podcasts claim, however, all of these shows
have built an episode formula in which they educate listeners about a given issue, and
build an agenda for consideration, but do so through entertainment and opinion frames. This
dependency on elements like excessive punditry, purposeful jokes, and pop culture references
undercuts a number of traditional fact-based journalistic ideals. One of the most important of
these is the idea that fact-based reporting should be defined by objectivity, and that all sides
should be considered, thus limiting bias and promoting truth (Siebert, Peterson and Schramm,
1956). However, these standards are not universal in legacy journalism. Editorial content and
opinion-news both have strong traditions in hard news circles, but they move beyond a mere
agenda-setting functionality, telling readers what news is important and which stories to pay
attention to, and instead add in elements of agenda building as well. These segments of news
production ultimately argue a given point and advocate a position within a certain context or
event (Kiousis et al., 2006), much as the podcasts in this study do.
However, the political infotainment in this study is still separate and distinct from
editorial content because of the way its creators conceive of this content. Importantly, none of
the three political infotainment podcasts claim to be a news show or to be engaging in any kind
of hard-hitting news reporting, not even PGF, which is hosted by three actively working
journalists. In this study, only the BBC’s Global News Podcast was classified as a hard news
product. Instead, the infotainment podcasts all promote their status as political talk shows, or
shows designed around public discussion and analysis, rather than something built off of their
own independent investigative reporting. Nevertheless, there is reason to believe that these
shows could be primary sources of news gathering for many listeners, and these podcasts’ entire
design necessitates the use of hard news and facts that are then framed with creator opinions on
that topic in one direction or another.
The experimental portion of this project demonstrates some of the more tangible effects
that political infotainment podcasts may have on their listeners when they frame issues in these
entertaining and opinionated ways. One series of effects include strengthening political attitudes,
and further polarizing listeners towards one more fixed stance on a given issue. The findings
showed that both humor and opinion framing tactics were effective persuasive tools that
solidified participant attitudes. The most substantial attitude change happened among
conservative participants; however, some of that stark difference may be attributed to the fact
that conservatives started the experiment undecided on the USMPA while liberals were already
halfway between somewhat supportive and very supportive. More testing would need to be done
with other topics to better analyze the difference in those effects along ideological lines,
including topics on which all participants started from a more undecided place. Changing the
analytical model from a repeated measures ANOVA to a 2x2 analysis could also prove useful,
because it would provide a larger and more detailed look at the interactions present when
participants are exposed to these framing tactics.
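A minimal sketch of the quantities such a 2x2 (ideology x frame) analysis would examine can be built from cell means. The opinion cell means below come from Table 20; the hard news cell means are hypothetical placeholders, since the study found no significant ideological split in that condition, and a full factorial ANOVA would additionally need cell variances and sizes.

```python
# Hedged sketch of a 2x2 cell-means contrast. Opinion cells are from
# Table 20; hard news cells by ideology are HYPOTHETICAL placeholders.

means = {
    ("conservative", "opinion"): 1.82,  # Table 20
    ("liberal", "opinion"): 1.69,       # Table 20
    ("conservative", "hard"): 1.47,     # hypothetical
    ("liberal", "hard"): 1.51,          # hypothetical
}

# Main effect of frame: average opinion mean minus average hard news mean
frame_effect = ((means[("conservative", "opinion")] + means[("liberal", "opinion")]) / 2
                - (means[("conservative", "hard")] + means[("liberal", "hard")]) / 2)

# Interaction contrast: does the ideology gap differ across frames?
interaction = ((means[("conservative", "opinion")] - means[("liberal", "opinion")])
               - (means[("conservative", "hard")] - means[("liberal", "hard")]))
```

A nonzero interaction contrast is what would signal that framing effects depend on ideology, which is exactly the pattern the factorial model would formally test.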
The other studied variable in this project was political knowledge, and the findings were
again impactful. Both groups of infotainment framing tactics were as effective or more effective
when it came to informing participants compared to hard news. Humor news trended slightly
higher in recall than hard news did, but not to a degree where one could be perceived as more
effective than the other. However, opinion news prompted better recall than hard news did, and
this higher recall was driven by conservative listeners. This finding may relate to studies about
traditionally conservative media, and how opinion news has long been a style of content that is
popular in conservative circles (Mort, 2012). If, for example, conservative audiences are more
regularly exposed to opinion framing in their news intake, or if they consume more media that
blends news with opinion, it may explain why their recall was higher than their liberal peers.
However, this study did not measure the media diets of participants or their news consumption
habits. As such, this is only one possible area of connection, and not a definitive explanation. As
stated in chapter 5, there is also some possibility that tweaks to the audio treatments used may
have prompted more response across the board. Perhaps the style and tone did not reflect liberal
infotainment expectations to the same degree. If this is the case, further experimentation with
other topics and scripted audio treatments would help in solidifying findings.
Implications for Scholars and Research
From a persuasive standpoint, the studied use of infotainment in political news spaces
appears to be a successful rhetorical tool, as demonstrated by the findings collected in phase two
of this project. Both humor and opinion framing tactics were found to prompt attitude
strengthening, and opinion tactics were particularly powerful, moving both conservatives and
liberals to a statistically significant degree. As discussed in chapter 5, conservatives especially
were moved in their thinking when exposed to pro-attitudinal framing tactics, but there was
movement by both liberals and conservatives in every test towards their given poles of thinking.
What this means then, is that if the goal of these podcasts is to persuade their audiences or to
strengthen their degree of belief towards a certain partisan leaning, they are likely achieving that.
The findings of this study indicate that using infotainment framing tactics as opposed to a hard
news model can sway and strengthen people’s political attitudes. Audiences are persuaded both
by humor and by opinion, and all of these shows use both of these tactics to supplement their
discussion of facts and figures. As such, there is reason to infer that political infotainment
podcasts can be forces of political persuasion, and that they already utilize and rely on tools that
strengthen the political attitudes of their audiences.
This persuasive possibility alone is important, but a smaller finding identified over the
course of phase one now appears to be a much more prominent source of potential power than it
previously did. This finding was discussed briefly in chapter 4 and has to do with a smaller
cluster of framing tactics found within all of the infotainment podcasts, which the study titled
‘Mobilization.’ At first glance, the impact of the mobilization grouping and the lone studied
tactic in that group (Call to Action) seems somewhat negligible. While each infotainment
podcast used a Call to Action at least once, the only podcast to regularly draw on the tactic was
Pod Save America. Many of the calls from the podcast hosts to their audience had to do with
public process. They urged listeners to vote, to canvass for political candidates, to donate money,
and to call their representatives. Standing on their own, those calls to action may not be entirely
meaningful, but when viewed in conjunction with this study’s findings on political attitude
strengthening, they begin to represent more possible power. While this study did not have the
means to empirically test the Call to Action tactic, its demonstration that listeners can end an
episode more cemented in an opinion or attitude indicates that these podcast hosts may have the
potential to inspire real-world action. Studies on partisan news in other media forms
have indicated the potential for this, but further study specifically geared at podcasts should be
conducted in future (Wojcieszak et al., 2016).
Previous research on political attitudes has also demonstrated that having stronger
attitudes makes people more inclined to act in the interest of that attitude (Petty and Krosnick,
1994; Rhodes, Toole and Arpon, 2016). The podcast hosts studied in this inquiry have already
attempted to tap into this possibility. On Pod Save America and the Ben Shapiro Show
specifically, listeners have been asked to show support by joining different online and offline
movements, like protests and boycotts. There is indication online that these calls to action may
be heard, at least by some audience members, as listeners of multiple shows have discussed their
attendance at real-world rallies because of podcast endorsements. Listeners of BSS have promoted
their attendance at events like the March for Life (Graham, January 2019), and PSA listeners
have been a part of protests like the Women's March and current Black Lives Matter rallies,
documenting their attendance and support on social media. However, bold claims about these
kinds of podcast calls to action can only stem from future research that has yet to be undertaken.
The results of this study also inform academic research not just about the persuasive
power of infotainment, but also about the continued necessity of hard news and good
investigative reporting. While infotainment news in this study was shown to strengthen political
attitudes, that strengthening is not, by definition, a good thing. For politicians and political
operatives, yes, it is desirable to change people’s minds, and if both parties are successful, this
can increase the polarization of the electorate, but in terms of public deliberation, increased
polarization represents a potential threat. Polarization can fray shared public understandings and
trust in institutions that assemble needed facts all citizens should operate with (Ingraham, 2018).
This in turn dilutes productive dialogue between members of society and makes constructive
action that much more difficult. When divides between citizens are heightened and exploited, it
maintains an us-versus-them political cycle that leads to negative public sentiment and continued
disengagement (Norris, 1999; Blumler, 2015; McCoy, Rahman and Somer, 2018).
Increasing polarization is also not a part of the normative ideal of good, fact-driven
reporting, which has long prioritized informing people in an objective way. In this understanding
of the role of journalism, there is a desire to be socially responsible actors guided by objectivity
and truth telling (Zelizer, 2009). Not all journalists hold this ideal, and not all facets of
journalistic life reflect these norms. Notably, exceptions exist in the profession for editorial
and opinion-based content, but safeguards are usually implemented to delineate between
hard-hitting, fact-oriented news and opinion pieces (Kahn and Kenney,
2002). When it comes to a normative understanding of journalism, many scholars, as mentioned,
have also bemoaned both the social responsibility theory of the press and the idea that news can
be truly objective in any form or iteration (Nerone, 1995; Christians et al, 2009; dunningfelds,
2011). Still, the concept that hard-hitting news outlets should be an ethically conducted public
service remains in the understanding of practitioners, academics, and the public. Indeed, this
fact-driven pursuit is intrinsically tied into the very notions of citizenship and democracy baked
into the American process, which seek to give citizens the information they need to make
decisions for themselves.
In the current high-choice media environment, citizens are, at best, monitors of news.
They interact with information and facts about people and events in their own ways and to their
own ends (Schudson, 2018). Even so, people still need facts, and cohesive public deliberation
and debate require that society's collective understanding of what is fact and what is fiction
aligns. This is especially apparent when considering the political knowledge and
recall variables studied in phase two of this research. Phase two demonstrated that opinion
framing tactics can increase the recall ability of the audiences who hear them, and while the
political knowledge scores were not as large as the attitude strengthening measures, more
learning did occur in these conditions than among the hard news group. Unfortunately, while this
experiment made sure to investigate and source all of the facts cited in the study, not all
softer news and infotainment programming is willing to do the same. Listening to the 30 selected
infotainment podcasts in this study demonstrated as much: while lies and falsehoods were not
explicitly tallied, it is safe to say that heavily opinion-driven storytelling often undercut or
completely sidestepped substantiated fact.
On the whole, what this study shows is that the continued existence of infotainment
stories and outlets in the US media ecosystem can prove polarizing and that they are powerful
forces not only for attitude change, but also for learning. If people are susceptible to these softer
news frames as mechanisms of persuasion, and if they learn from the content that they choose, it
stands to reason that Americans may continue to move further and further towards opposing
poles, forsaking objectivity in their media content and explicitly choosing biased sources that
reflect their worldview. This can amplify distrust of the media and discontent with the political
system, which in turn exacerbates the problem in a cyclical fashion (Norris, 1999).
This threat was especially apparent during the first phase of analysis. On both sides of the
political aisle there was dissatisfaction with 'the media' and how long-heralded news outlets
cover stories and fulfill their role of societal informers. The conservative podcast, The Ben
Shapiro Show, took a more directed stance against the media, frequently claiming that the media
was untrustworthy, biased, unfair and so on. The left-leaning Pod Save America, in comparison,
largely supported the media, but made their numerous complaints about certain stories and
coverage tactics well known to their audience. In both cases there is a narrative, though to
admittedly varying degrees, about the credibility of hard news sources. Almost all of the
criticism of the media, on the part of both podcasts, usually came when the media’s stated facts
and findings differed from the political or ideological viewpoint of the podcast in question. The
rate of this media endorsement or criticism was discussed at length in chapter 4, but the larger
takeaway should be that these two infotainment podcasts, and others like them, see the
media as producing building blocks that they can then use to frame events from their ideological
perspective. These political podcasts use the work of hard news outlets to provide facts to their
audiences, but subsequently add a level of opinion or satire that has real effects on listeners. As
such, journalism scholars should be aware of this, and should note that the increasing popularity
of podcasts like these may affect the perceived credibility of fact-driven reporting for listeners.
Limitations
While every precaution was taken over the course of this project to bolster the reliability
and validity of its findings, limitations do exist that deserve acknowledgement. Firstly, and
arguably most importantly, this study was conducted in a unique and unprecedented time. When
the experiment first launched, U.S. news coverage of COVID-19 was only beginning, and the
virus seemed more an abstract concept than a real threat to the public. By the end of the data
collection, however, lockdowns had been instituted around the country, and there were already
increasing signs of struggle and instability in the US and global economy. This bore out in
interesting ways in the study, and while no mitigating effects could be tracked per se, there were
some comments from participants about how schools were not currently open (thus making the
proposed policy a bit less relevant in their eyes). Asking people about their opinions on
government spending at this time was also interesting because the nation was grappling with
huge questions of how to provide relief to the public and private business in this uncertain time.
Similarly, much of the crafting of this project and the surrounding discussion stemmed
initially from a very normative understanding of news reporting and the role of journalism in
society. The spring and summer of 2020, however, have illuminated how fragile that
understanding is and how those norms need to be reevaluated, adjusted, and more concretely
defined. Even as these norms must be questioned, there is still so much worth protecting in the
American imagining of a free press system, not the least of which is the purveying of facts from
quality sources to the larger public. The service that many news reporters provide to the public is
undeniable, and through chronicling the issues and events of the moment, journalists are often
the best providers of facts that all Americans should know.
Despite the intentions of the profession, however, recent years have shown a rising
hostility between the US government, the media, and the public. That tension has been more
clearly demonstrated during coverage of recent big events like COVID-19 and the Black Lives
Matter protests in this country. In this time, showing the truth and covering facts has put pressure
on entrenched systems that have long been broken or misused, and those systems have shown
resistance to being exposed and talked about in the wider public. Indeed, the US Press Freedom
Tracker received more than 279 claims of assault on journalists between May 26 and June 3,
compared to a normal range of 100-150 claims per year, many of which were committed by police
officers or government workers (Reilly, Veneti, Lilleker, June 2020). A disproportionately high
percentage of the journalists assaulted by police have been African American or journalists of
color, and the resurgence of Black Lives Matter into the spotlight has shown on a macro and
micro level how much work and change there needs to be in systems of practice, and in systems
of knowledge and understanding.
On the part of this project, another serious limitation stems from the selection process of
the infotainment podcasts. In hindsight, popularity and political leanings did not need to be the
only metrics to select cases. Better representations across multiple levels could also have
bolstered findings. Diversifying the voices chosen, not just based on political ideology but on
lived experience and membership in different subcultures of American life, would have helped
tremendously. As it stands, of the eight regular hosts on the infotainment podcasts, all are white
and only one is female. Retrospectively, including a more diverse pool of infotainment
perspectives and voices would only have strengthened the validity of these findings. One potential
means of doing that would have been replacing Crooked Media's flagship podcast, Pod Save
America, with one of their newer infotainment shows, Hysteria, which is hosted by three women
including one woman of color. The show only launched in summer of 2018, but it represents one
of the few female-led popular podcasts involving politics. Yet even though Crooked Media has
tried building up its company to include more women and people of color, that standard has not
been fully mainstreamed across the medium as a whole.
Since its inception, podcasting has struggled in terms of representation amongst hosts,
especially when it comes to race (Friess, March 2017). This may be due, in part, to the racial
disparities in listening that existed in the early days of podcasting. As of 2008, roughly 75% of
podcast listeners in America were white, but by 2018 those numbers had changed significantly
and more closely represented actual demographic breakdowns in the US (Webster, November
2018). With increased popularity, and with the visibility that platforms like Apple and Spotify
have given podcast hosts of color in light of the Black Lives Matter movement, more diverse
podcasts will hopefully grow their audiences and continue to manifest in the political
infotainment genre specifically. Regardless, increasing consideration for the diversity of studied podcasts
should be the work of future research so as to better round out the findings stated here and in past
studies.
Design-wise, this study was bolstered by its mixed methods approach, but there are lingering
criticisms of qualitative content analysis that hold the methodology to be too lax and subjective.
Such criticism about the potentially reductive nature of qualitative content analysis might gain
traction considering that the kappa score establishing intercoder reliability in this study came
in at 0.7 rather than the traditional 0.8 threshold. However,
much of the subjectivity of this first phase of analysis was grounded and made more
academically rigorous by the inclusion of the second phase of research. By conducting an
experiment as well as a qualitative content analysis, this project was able to establish the context
and a kind of ‘world order’ seen in popular American political podcasts while also using
empirical means to test the effects of the framing tactics these podcasts actually use.
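For readers less familiar with the intercoder reliability statistic referenced above, Cohen's kappa corrects raw percent agreement for the agreement two coders would reach by chance, using each coder's marginal label frequencies. As an illustration only (the frame labels and segment counts below are hypothetical, not drawn from this study's codebook), a minimal sketch in Python:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical frame labels assigned to four podcast segments by two coders.
a = ["humor", "humor", "opinion", "opinion"]
b = ["humor", "humor", "opinion", "humor"]
print(round(cohen_kappa(a, b), 2))  # 0.5
```

Under common rules of thumb, a kappa of 0.7, as reported in this study, indicates substantial but not near-perfect agreement, which is why it sits below the conventional 0.8 benchmark.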
Even with this emphasis on adding empirical testing, this project’s focus on objectivity in
journalism does prompt some needed queries about objectivity in social science methodologies
and in the construction of this project. Chapter 3 made a clear disclaimer about the built-in
subjectivity of phase one of this research. Qualitative content analysis, being descriptive and
exploratory, offers a level of creativity and partiality naturally (Mayring, 2004), but this project
did hope to counteract much of that subjectivity by using experiments as a second method. The
reasoning for this was grounded in the advantages of experimentation, and the stringent
standards that are expected in communications research using this method. Nevertheless, there is
still some level of subjectivity even in more precise and rigorous methods like experimentation
(Mayo and Spanos, 2009). This subjectivity can be attributed to the habits and actions of the
researcher, and to the method itself, which often deals with raw data that has a theory-dependent
interpretation (Culp, 1995). However, the experimental design of this project took every
precaution to pull its measurement scales from tested and retested sources (like public polling
apparatuses), and to preregister the entire process of the experiment. That preregistration largely
locked the data collected and the analyses run into one plan, and adherence to that plan hopefully
helped to stave off much of the subjectivity that may otherwise have appeared.
Further, despite the more stringent standards and empiricism that experimentation supplied, the
use of Mechanical Turk as a platform also represents a potential limitation. Chapter 3 discussed
how recent literature has demonstrated that Mechanical Turk is still a helpful tool for
experimental research, but Turkers are known to vary from the larger public in a number of ways
(Mullinix, Leeper and Freese, 2015). These include increased levels of education, higher
technological literacy, an ideological skewing in a liberal direction, higher rates of
unemployment, and lower rates of income (Hitlin, July 2016). Some of these factors were better
controlled for in the experiment, like the eventual change in recruitment to target conservative
Turkers, better balancing the distribution of liberals and conservatives in the study. However, use
of Mechanical Turk precludes this study from making wider-reaching, generalized claims
about infotainment framing tactic effects on the whole American electorate. While Americans
were the ones surveyed, they came from a pool that is not fully representative, and other, more
expensive means of recruitment may have been more demographically consistent.
Retrospectively, this project’s experimental design was also crafted solely with the core
research questions in mind. The addition of a breakdown of change based on participants being
liberal or conservative took place after the collection of data and the running of the experiment.
Preliminary data indicated that there were some interactions between liberals and conservatives
happening that may prove interesting for this project and beyond, but a different analytical
framework would have better reflected such an investigation. Chapters 3 and 5 mention how a
2x2 design where infotainment conditions were measured against ideological background may
have proven more beneficial. However, given the original intent of this project, the larger
objective of the stated research questions, and constraints of funding, the detailed four treatment
design was the best choice to represent the current high-choice media environment and to
compare the effects of infotainment framing tactics with hard news tactics.
On the whole, the most limiting practical factor for this doctoral research had to do with
funding. In order to guard against any criticism about the findings involving political attitudes
and political knowledge being a one-time fluke, it would have been ideal to run a number of
studies with different topics to make sure that results remained consistent. These studies could
have also diversified the starting points for political attitude, including some tests where attitude
formation was factored in (as in this project, where many people started with no attitude and
moved towards an attitude), and some where people's weaker, pre-existing attitudes were
taken into account. Unfortunately, budgetary constraints precluded more testing from being
conducted, and limited the amount of piloting/pre-testing that could be conducted for the audio
clips used in this research. That being said, the experiment cited in this research was
meticulously crafted to prevent as many unwanted effects and unanticipated problems as
possible. Whenever possible, each question was based on existing measures used by political
polling apparatuses and academics alike, and the treatments themselves were scripted, piloted to
check that humor tactics were actually funny and that the style and tone were accurate, tested for
audio consistency, and finally redrafted and completed to make the highest quality (and most
comparable) product possible.
Future Studies and Practical Prescriptions
Moving forward, scholars and practitioners alike have a number of avenues to consider
and explore. One area in need of expansion involves the social media communities that can form
around podcasts. While preparing for this current research, it was clear that social media has a
host of uses for podcasters and listeners alike. There is a sense of community around these
podcasts that expands past the reviews on distribution platforms like Apple and Spotify. All of
the podcasts studied have a presence on social media platforms like Twitter and Facebook and
they use these platforms not just to market and circulate their episodes, but also to create
fan/listener discussion about the content and the podcast as a whole. As such, more study of
social media’s impact could have at least two tracks – the marketing and branding side of social
media and podcasts, and the community building and meaning making surrounding podcast
content that happens among fans in the comments and on discussion boards.
Methodologically speaking, studies of podcasts would also benefit from more
experimental analysis. Experimental studies involving podcasts have been few and far between,
and there are many effects that could be studied, beyond a political context. The majority of
podcasts consumed in America have nothing to do with news or politics, but those entertainment
genres and other kinds of storytelling still deserve to have their components and effects studied if
they are widely popular or if unique and different kinds of audience meaning-making surround
the content. Similarly, there is space for academic research that can
better simulate the actual process of finding and downloading podcasts. Experiments that can
somehow recreate the technological affordances of this medium, and the different ways that
people listen to them and engage with them, would also be undeniably beneficial. Beyond
experiments, there are other methodologies that deserve expansion in a podcast setting as well:
interviews with avid listeners, for one, or with creators who are on the front lines of what it
means to make a successful podcast and who can speak to what this format and medium mean
from a production standpoint and through a creative lens. In short, any and all research on podcasts that expands on
the currently limited amount of work in this area should be welcome, and, if the current
popularity of this medium holds, it would be a grave academic oversight to not take more note of
podcasts and their legacy.
Finally, while a tremendous number of studies have been conducted on hard and soft
news, the review of existing literature undertaken for this project demonstrated a lack of
cohesive explanation and understanding. The scholarly community has established that, yes, soft
news exists, and it continues on thanks to both audience wants and market pressures (Patterson,
2000; Baum, 2003; Boukes and Boczkowski, 2015). However, the results of different analyses
have shown pros and cons to softer news approaches, and there have been no all-encompassing
studies that try to bind together findings and make more general claims. Accomplishing this will
by no means be an easy task, but it is necessary for scholars of communication to try to do this
work.
Journalism and political communication scholars have begun to rise to this need, and a
significant amount of work has been done that begins the process of categorizing soft news and
creating mechanisms to identify and classify different types of softness (Reinemann et al., 2012;
Otto, Glogger and Boukes, 2017). However, seeing that softness (in the form of political
infotainment) has the ability to sharply strengthen attitudes, it stands to reason that these framing
tactics can be, and have been, used to influence our current political landscape. Numerous studies of
opinion news and the ideological leanings of certain outlets have been published; however,
communication scholarship on the whole would likely benefit from a more cohesive union of
journalism and political communication studies when continuing to investigate this phenomenon.
Recommendations for Journalists
The findings outlined in this project provide important insights for a host of practitioners
working in political news spaces, including hard-hitting reporters and journalists. The
experimental phase of this research showed that infotainment based in opinion and humor can be
a more effective persuasive mechanism than the facts and figures of hard news. This finding
should comfort journalists who see their job as being based in informing the public, and not
steering the public towards one belief or another. However, an interaction analysis based on
ideology did show that even just hard news, purposefully crafted without charged language,
opinions, or humor, still moves people towards their predisposed ideology. This movement was
especially pronounced for liberal participants, who strengthened their attitudes at a similar rate
when exposed to humor, opinion, and hard news tactics. Conservative participants who
encountered hard news also strengthened their opinions, but to a significantly lesser degree than
they did when exposed to softer news tactics.
These findings can be taken for their good and bad aspects. On the one hand, it is
promising to see that hard news can dull the effects of polarization among certain segments of
the population, and that covering stories in a hard-hitting style and tone can keep people more
open-minded about an issue or cause at large. Unfortunately, there is still some evidence of
polarization happening when participants are exposed to hard news, and the increase of
polarization among the electorate has been documented in this dissertation as not only a threat to
the news media ecosystem, but also a larger conflict for democracy. If this polarization
continues, and if the self-selecting mechanisms of media consumption also remain prominent,
hard news outlets trying to utilize the medium of podcasts or expand their digital brands may
face significant competition from infotainers making use of these platforms. This is because
infotainment as a content style, with its conversational and entertaining qualities, lends itself
exceptionally well to podcasting. Also, some political infotainment podcasts are already
differentiating themselves from mainstream media by attacking the credibility of hard news
sources. This is not radically new discourse from conservative circles, as distrust of the
mainstream has been cited across mediums and content types for decades. However, critique of
the media, and at times roasting of the press, was present across both sides of the aisle.
Improving the public’s perception of the media and hard-hitting reporting is something
all journalists must grapple with, and while it would be preferable to offer tangible solutions, this
research does not truly begin that incredibly difficult work. Indeed, in the given political and
social climate, such a task is even more monumental and daunting than it ever was. Distrust in
institutions remains high, and these are uncertain times for many Americans and people around
the globe. Nevertheless, in terms of practical necessity, it is clear that purveyors of hard news
and investigative journalism need to find a way to endear their work and their service to the
public again. This means more than simply getting facts out into the world. It requires rebuilding
the store of shared facts, which appears to be shrinking steadily.
Accomplishing this task will be difficult, and arguably the market and political pressures
surrounding the media do not make much room for such an endeavor from within the media
itself. Regardless, journalists should be working each day to advocate for their role and
importance to the democratic health of this country. This can and should be done in conjunction
with scholars, with educators, and with organizations focused on media literacy in the internet
age. Similarly, journalists should be working with and pressuring social media platforms as
much as they can to help enact better standards of information sharing with the public. While this
study only tangentially dealt with issues of fake news and misinformation, the studied podcasts
ran the gamut in their claims, from cited truth to arguments mired in hefty spin that made
them more false than accurate. Social media should, in some capacity, have mechanisms to alert
their users of this range in information credibility and accuracy, but accomplishing that while
also preserving First Amendment considerations will be an incredible challenge.
These numerous roadblocks and problem points in no way lessen the critical need
America and the world have for hard news that is created and shared with credibility and integrity.
Quite the opposite. If anything, the findings of this research merely show that hard news outlets
have real competition from infotainment sources like these podcasts, and that journalists and
media companies should seek other ways to endear themselves to the larger society or to
demonstrate the value of unbiased news to an increasingly polarized public.
Notes for Infotainment Creators
On the whole, some of the most powerful findings of this project directly benefit political
infotainment podcasters. Both phase one and phase two of this work use these infotainment
products as the focal point of study, and the end result shows that political infotainers can have
effects on their audiences both in strengthening their political attitudes and in increasing their
political knowledge. The studied podcasts, and others like them, are undertaking strategic
persuasive tactics that can be used with credible effect. People are swayed by opinion and by
humor, and there is reason to believe they may be especially open to persuasive and informative
efforts if those come from podcasts they subscribe to and follow. This may be attributable to the
medium of podcasting itself, which requires an opt-in from listeners who seek content out and
choose it to fill their wanted and needed content diet. Importantly, this study did not recreate
the medium of podcasting, and future studies would be required to better gauge how some of the
affordances of podcasting may lend themselves specifically to infotainment styles. However, the
framing and storytelling tactics frequently used by the infotainment podcasts at the center of this
study were recreated and empirically tested.
The findings of this research demonstrate that humor and opinion framing tactics really
can change minds and can strengthen opinions while also informing an audience with basic facts.
This studied attitude change was much higher amongst conservatives; however, continued study
is necessary to make claims about liberals because in this project they started from a higher
support base than their conservative peers. Subsequent experiments would want to make sure
that liberals were as undecided as conservatives and then see how much movement each side
made in the face of infotainment framing tactics. Regardless, the movement demonstrated across
subjects shows the potential power of infotainment tactics. For political podcasters, who express
their opinions and try to persuade listeners to one pole of thinking or another, this is a good
thing, as it means that they are likely successful in strengthening opinions among their listeners,
especially loyal audience members who already subscribe to their given ideology.
A final note must be included for infotainers, reminding these content creators that their
genre of political infotainment relies on the harder hitting and more rigorous work of
investigative news outlets. Indeed, all of the studied infotainment podcasts, regardless of politics
and personal opinions, understood this, citing and sourcing from hundreds of news stories over
the course of the thirty studied episodes. Providing these citations, facts, and figures is critical
to an infotainer's job because it establishes credibility. Hosts of these podcasts appear to be
knowledgeable and authentically informed when they call upon established facts and reporting.
These facts then bolster their opinion or humor techniques, helping to create a consistent voice
and narrative. All of these podcasts, and others like them, benefit from this dynamic, and it is
important to realize how necessary hard news is to that process. This is not to say infotainers must always
agree with the media, or must appreciate the ways in which stories are covered, but there should
be value placed on the factual products provided by journalists and reporters. Infotainers do not
have a necessary staple of their own content without these facts and sources, and even though
polarization and softening news may be beneficial to infotainers, decline in the production of
hard news and established facts would be harmful to the genre of infotainment over time.
Notes for Political Practitioners
Political practitioners are other notable beneficiaries of the phenomena studied in this
dissertation and of the apparent effects that soft news tactics in political infotainment podcasts have.
Clearly there is power in the frames of opinion and humor, and while those frames are currently
more pronounced for one side than the other, that could be the result of more exposure to these tactic
types over time, or a result of a skewed starting point for the specific topic studied in this
research. What this research can say is that it is likely beneficial for campaigns and practitioners
to tap into the reach and resources of political podcasts when trying to gain favor for a given piece
of policy or to bolster their campaigns' credibility. This has been done with regularity and
consistency on Pod Save America, where each episode features at least one interview, usually
with a currently serving politician, or an expert on a given topic or issue of the day. The Ben
Shapiro Show similarly conducts regular interviews on its Sunday programming, though
Political Gabfest favors the outside counsel of other journalists over politicians.
If political practitioners are seeking to energize potential voters and to have a more
personalized piece of outreach with citizens, podcasting is also a favorable medium for that
endeavor. The intimacy embodied by the style and tone of these podcasts, as well as the
informality that comes from their conversational style, help facilitate a sense of authenticity,
which is an important index for voters, especially in younger generational cohorts. Political
infotainment podcasts especially provide a chance for lesser known politicians, or politicians
who need to build good will and energy on their own side, to get their name and ideas out there.
Inclusion in these podcasts may personalize politicians while also bolstering their credibility
thanks to overt or subtle endorsement by important political influencers (the podcast hosts). Most
listeners of these podcasts tune in because they value the thoughts and opinions of these
podcasters, and as such, these podcasters' goodwill and time may be an exceptional resource for
political practitioners. More studies would be needed to provide definitive answers, but from this
specific research, the findings about both framing effects and podcasts/audio as a medium should
be seen as potentially impactful; indeed, a successful recreation and utilization of these frames
could prove rhetorically and persuasively useful.
Prescriptions for Citizens
In closing, the great hope of this project has been to not only expand academic
understanding, but to bridge scholarship with the practical lived experience of Americans today.
This dissertation chronicles a few ways that words, style, tone, and framing have power. At the
core, these studied political infotainment podcasters are storytellers, cultivating a collection of
conversations about topics that matter to Americans and to people around the world. Sometimes
these stories are driven by fact and truth, and sometimes they are driven by humor and/or
opinion. As infotainment, they are, by definition, a blend of genres, designed to inform and to
entertain with equal measure. But as listeners and audience members, it is critical to remember
that the ‘inform’ part of that equation is subjective. When dealing with politics especially, facts
are not always so clear, and opinions can often cloud perception one way or another. The
charged nature of politics in America long predates the podcast age, but also informs the content
seen in these shows and others like them. On the whole, infotainment podcasts can vary in their
commitment to honesty, transparency, and sourcing, and this dissertation makes no claims as to
the intention of these works. It only chronicles how popular this genre is becoming, and how
these narrative styles may affect listener attitudes and knowledge.
To be clear, this is not a dissertation intent on criticizing the larger genre of infotainment
or political podcasting at all. It is an investigative response by someone who has
appreciated the genre personally, and who sees real value in having conversations about politics
in ways that podcasts specifically provide. The aural tradition of podcasts, the intimacy of the
medium, and the more casual stylistic choices for language and tone are all strengths to be
valued even while they may present new and different effects. These creative choices can bring
people into the fold, prompt more political dialogue, and may perhaps allow citizens to converse
with each other in more productive and proactive ways. However, the danger of infotainment
programs lies in the potential complacency and lack of curiosity that continuous consumption of
like-minded media may produce. People should be critical of all the media they consume,
including political podcasts, enjoying their best qualities while also remembering
to stay vigilant about what is really true and what may just be prettily spun punditry.
APPENDIX A
COMPREHENSIVE CODEBOOK
Code Book Legend (descriptive tactics and their codes, grouped by focused theme)
Hard News Elements:
  Directed Question (DQE)
  Direct Quotation (DQO)
  Expert Sources (ES)
  Community Sources (CS)
  Facts and Figures (FF)
  Personal Testimonial (PT)
  Media Citation (MC)
  Audio Clip (AC)
Entertaining Tactics:
  Purposeful Jokes and Sarcasm (PJS)
  Pop Culture Reference (PCR)
  Meme Reference (MR)
  Roasting (R)
  Celebrity Feature (CF)
Mobilizing Tactics:
  Call to Action (CTA)
Punditry and Opinion:
  Democrat Assessment (Positive) (DAP)
  Democrat Assessment (Negative) (DAN)
  Republican Assessment (Positive) (RAP)
  Republican Assessment (Negative) (RAN)
  Media Coverage Criticism (MCC)
  Media Coverage Endorsement (MCE)
Advertising:
  Self-Promotion (SP)
  Non-ad Sponsor Shout Out (SSO)
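For readers who work with exported coding programmatically, the legend can be expressed as a lookup table. The following is a hypothetical Python sketch, not part of the study's procedure; representing coded segments as a flat list of code abbreviations is an assumption about the export format.

```python
# Hypothetical sketch: the codebook legend as a code-to-theme lookup table,
# usable for tallying coded segments by focused theme after export from NVivo.
# The flat list of code abbreviations as input is an assumed export format.
from collections import Counter

CODE_THEMES = {
    "DQE": "Hard News Elements", "DQO": "Hard News Elements",
    "ES": "Hard News Elements",  "CS": "Hard News Elements",
    "FF": "Hard News Elements",  "PT": "Hard News Elements",
    "MC": "Hard News Elements",  "AC": "Hard News Elements",
    "PJS": "Entertaining Tactics", "PCR": "Entertaining Tactics",
    "MR": "Entertaining Tactics",  "R": "Entertaining Tactics",
    "CF": "Entertaining Tactics",
    "CTA": "Mobilizing Tactics",
    "DAP": "Punditry and Opinion", "DAN": "Punditry and Opinion",
    "RAP": "Punditry and Opinion", "RAN": "Punditry and Opinion",
    "MCC": "Punditry and Opinion", "MCE": "Punditry and Opinion",
    "SP": "Advertising", "SSO": "Advertising",
}

def theme_counts(codes):
    """Tally a list of coded segments by their focused theme."""
    return Counter(CODE_THEMES[c] for c in codes)

print(theme_counts(["FF", "FF", "PJS", "DAP"]))
```

A tally like this would show, for example, that two of the four segments fall under Hard News Elements.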
Code Description and Breakdowns
Baseline and Topic Coding
• Which podcast it is – this will be coded by selecting the entirety of the given file and
labeling it in the case section designated for the podcast being studied.
o Example: for a Pod Save America episode, right-click the file in the file viewer
in NVivo, code at the existing case (not node), and select the Pod Save America case.
• How long the story runs for – in a separate document (or on a piece of paper), calculate the
time that elapses between the NEW STORY time stamp and the END STORY time stamp.
Do this for each story (make sure you have the title of each story displayed prominently so you
know which time goes with which story).
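As a minimal illustration of the arithmetic above, the run-time calculation can be sketched in Python. The "MM:SS" / "HH:MM:SS" stamp format is an assumption; the actual transcripts may format their time stamps differently.

```python
# Hypothetical sketch of the story run-time calculation. Assumes time stamps
# are colon-separated, e.g. "12:30" or "1:02:03"; real transcripts may differ.

def to_seconds(stamp: str) -> int:
    """Convert a 'MM:SS' or 'HH:MM:SS' time stamp to total seconds."""
    total = 0
    for part in stamp.split(":"):
        total = total * 60 + int(part)
    return total

def story_duration(new_story_stamp: str, end_story_stamp: str) -> int:
    """Seconds elapsed between the NEW STORY and END STORY stamps."""
    return to_seconds(end_story_stamp) - to_seconds(new_story_stamp)

print(story_duration("12:30", "26:05"))  # 815 seconds, i.e. 13 minutes 35 seconds
```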
• How many people are speaking – in the same document where you are charting time of
story, also note how many speakers there are. To prevent confusion when assessing how
many speakers are in an audio clip, gather this number from the identified names or
initials in the transcript. You may count identified speakers that have been coded in the
audio clip section, but only if they have a unique identifier in the transcript. Some audio
clips do not have accompanying transcript, so do not account for these speakers.
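A rough Python sketch of this speaker tally follows, assuming speakers are identified by a "Name:" prefix at the start of transcript lines; that convention is hypothetical, since actual transcript formatting may vary.

```python
# Hypothetical sketch: counting unique speakers from transcript identifiers.
# Assumes each speaking turn starts on a new line with "Name:" or initials,
# which is an assumed transcript convention, not a documented format.
import re

def count_speakers(transcript: str) -> int:
    """Count unique speaker labels appearing as 'Name:' at line starts."""
    labels = re.findall(r"^([A-Za-z]+):", transcript, flags=re.MULTILINE)
    return len(set(labels))

sample = "J: Welcome back.\nM: Thanks for having me.\nJ: Let's get started."
print(count_speakers(sample))  # 2
```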
• Code for individual stories – open the file and highlight each story (only segments of
each podcast transcript that start with a (NEW STORY, title, time) stamp). Include this New Story
stamp and the End Story stamp at the end, highlighting everything in between. Right-click
and code at the existing case (not node) – ‘individual stories.’ Do this with all stories in
a given file before you start coding.
• Code for ad segments – highlight each ad segment from start ad to end ad and code it in
the ‘ad segments’ case.
o Exception: ads that fall within a story – see the process in the ‘things not to code’ section below.
• Topic Classification – highlight the title of the given story (located within the New Story
stamp) and choose one topic classification from the list below.
o Domestic Story – A story that takes place in the US that does not have to do with
policy, elections and political campaigns, tech and business, other media
coverage, pop culture and the arts. Some examples would be a major hurricane is
happening in US territory, a protest took place on US soil, a mass shooting in a
US school, and many others. This is the largest umbrella a US-based story can fall
into, and if a topic does not neatly align with one of the other categories below
and takes place in the US, it is coded as a domestic story.
o International Story – A story that takes place outside of the US that does not
directly have to do with the foreign policy of another country. This story can be
centered around foreign individuals or parties, some kind of natural disaster, or
even around Americans abroad, as well as numerous other focal points. However,
the defining characteristic of these stories is that they take place outside of
America, and that they are not centered on policy, tech and business, other media
coverage, or pop culture and the arts.
o U.S. Elections and Political Campaigns – A story that centers on US elections,
politics, or a political campaign. Examples include analysis of a particular
political candidate or their actions, looks at campaign tactics, and commentary on
upcoming, current, or previous US elections.
o U.S. Policy – story focused on US policy, can be foreign or domestic, but must
center on the US and not other actors/parties.
o Trump Coverage – story focused on Donald Trump and/or his cabinet and
government. This does not include stories about the Trump campaign; those fall into
the U.S. Elections and Political Campaigns category.
o Policy of Foreign country – story focused on the policy of a foreign country or
government. May overlap with US policy, but if the story primarily focuses on the
foreign power and their stance, the story is coded in this category.
o Tech and Business – stories about technology and business. Can include stories
about big tech mergers or regulations from governments, happenings in large
corporations, commentary on the markets and how the economy is affecting
business, and other stories in this area.
o Sciences and Medicine – stories about science related fields. Can include stories
about any realm of science, including medicine, but the defining element of this
code is that the whole story or a majority of the story must focus on the science
and perhaps the impact of that science.
o Other Media Coverage – this kind of story looks at media coverage and the
media’s analysis of a given issue. It can center on one particular story, or it can
look at media coverage on the whole for a given issue. For example, if a podcast
host is analyzing the merits or failings of some sort of media coverage, or if they
look critically at the media on the whole, and that is the defining through line of a
given story segment, those would be the kinds of stories that fall into this
category.
o Pop Culture and the Arts – any stories having to do primarily with celebrities, pop
culture, arts, and entertainment. Be aware that celebrity commentary on an issue
is not enough to qualify a story as pop culture and the arts. The focus of the
story must predominantly be about a given pop culture happening or element, or
some achievement in the arts or a given celebrity themselves to qualify for this
coding. Examples include a story covering the recent death of a celebrity, or
commentary and review of a television show or movie taking up a whole story
segment.
o Interview with guest – any story in which the whole story is just the podcasters
asking repeated questions of a specified guest. Many stories have interviews, or
some sort of direct source material, but in order to be labeled an interview with
guest, the focus must be on the guest and their opinions and not a larger story.
This code is often indicated already in the title of a story stating, ‘interview with
X’ and the interviewed party is always asked more than one question, usually
having to do with a variety of topics, not just insights on one issue or topic.
Things not to code:
• Anything that falls outside of an identified story (for example, snippets at the beginning
or the end of a podcast that tell what’s coming in the podcast or give credit to producers).
Starting points for coding should happen directly after (NEW STORY: TIME) stamp and
should conclude for that story before the (END STORY: TIME) stamp. However, these
stamps are included in the highlighted, coded text when identifying the ‘individual
stories’ cases
• Advertising segments – be aware that sometimes there is self-promotion within an
otherwise acceptable story that has been marked for coding, but we are not coding the
actual ad segments. However, in some podcasts, like The Ben Shapiro Show, ads can
happen in the middle of a story. For the purposes of this project, all of the story is
highlighted, including the ad segment, until the End Story stamp occurs, and coded as one
individual story. The ad segment within that story is then highlighted and coded as ‘other
(not coded).’ You do not need to apply this code to ad segments that fall outside of stories,
as NVivo will allow me to disregard those through the case function.
• Introductions to a reporter, journalist in the field, podcast hosts, or guests. The only
exception to this is if that introduction exists in a sentence that would be coded for
something else.
o Example: “There are currently more than 53,000 people without power on the
coast of South Carolina, and Kerry Andrews, our US correspondent, is on the
ground there now.” (coded as facts and figures)
• A person’s name or initials in the transcript. There are exceptions to this rule. For
example, if a purposeful joke spans the work of more than one speaker, include their
names, because we want to get all of the joke in one code. Also, when coding for an
audio clip, you literally code from the start of the clip until the end. This will include the
person’s name who is speaking. Finally, when you are listening to a podcast with people
who interrupt each other or who give short answers to a longer dialogue (thus splitting a
person’s segment of punditry and opinion, personal testimonial and so on) you can
highlight that person’s name and include their interruption in the same code as the other
person’s larger narrative. When doing this, if possible, include the original speaker’s
name as well, but only if all of their dialogue fits in one type of code. Also, if someone is
just agreeing with a short sentence (thus endorsing another speaker's point), code that
agreement as part of the code of the speaker they agree with.
Unit of Analysis:
• In coding the actual text of each story, we are largely using the sentence as the unit of
analysis. There are a few exceptions of codes that are not applied to a whole sentence
(these are described in the sections below), but for the most part a code, unless otherwise
specified, is applied to a whole sentence. Coders read the sentence and select the closest
possible code from the code list. Every sentence within a story is coded, unless it falls
under the ‘things not to code’ section seen above.
• While the analysis is being done at the sentence level, for the purposes of NVivo
coding, multiple sentences in a row that constitute one code may all be grouped together.
o For example, if a podcast host is giving a breakdown of facts and figures which
goes on for a whole monologue, and each individual sentence would be
considered codeable for facts and figures, highlight the whole block of sentences
from start of that code to end of that code and code all as facts and figures.
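The grouping convention above can be sketched as a small Python helper. Representing sentence-level coding as a flat list of code abbreviations is an assumption made purely for illustration.

```python
# Hypothetical sketch: collapse consecutive sentences that share one code into
# single spans, mirroring how a run of same-code sentences is highlighted as
# one block in NVivo. Input is an assumed per-sentence list of abbreviations.
from itertools import groupby

def group_codes(sentence_codes):
    """Return (code, run_length) pairs for consecutive same-code sentences."""
    return [(code, len(list(run))) for code, run in groupby(sentence_codes)]

print(group_codes(["FF", "FF", "FF", "DQE", "PJS", "PJS"]))
# [('FF', 3), ('DQE', 1), ('PJS', 2)]
```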
Hard News Elements
• Directed Question: when a podcast host asks a question either of their cohost, news
correspondent, story source, or a guest of the show, seeking an answer (i.e., not
rhetorical). Directed questions are usually prefaced with or followed up with story
context, which can be coded as facts and figures, punditry and opinion, personal
testimonial and other codes. But the actual question is all coded as ‘directed question.’ Be
advised, due to some hiccups in transcription, there may be a few questions that end with
periods instead of question marks. Use tone context to deduce whether or not a statement
is indeed a question.
o Example: ‘Emily, the Democratic Party has long struggled with messaging to
their base and to the larger public. What are some strategies that could be helpful
to them as they look for new ways to do that this election?’
• Direct Quotation: when anyone is cited on the podcast as having said something word-for-word
(or written it, like in a tweet or article). Highlight any portion that says who said the
quote and then the quote in its entirety. In doing this, you are not necessarily highlighting
the whole sentence preempting the quote. You just want to get who said it if you can.
Also note – there is no recoding of a direct quote into some other code. So if the quote
itself cites facts and figures or uses expert sources or personal testimonial, it is all just
coded as a direct quotation. There is only one exception to this rule – if the person cited is
a journalist or the quote comes from a media story, then the media citation code is also
used to make note of that media citation.
o Example: ‘John Marris of the New York Times writes, quote, there is a lot to be
learned still about the recent diseases killing off millions of Asian pigs. Scientists
believe that there is a chance this could go fully apocalyptic, causing irreparable
harm to the pork industry and endangering the food security of billions
worldwide.’
• Expert Sources: This involves reference to some sort of expert on the issue (for example,
police investigating a crime, a doctor on a medical issue, a lawyer specializing in a
specific area of law, etc.). They may give detail, naming the expert or their field, or they
could just say something along the lines of ‘Experts are saying…’ At times they also only
refer to the expert as a source, but context in the rest of the statement hints that it’s a
government source or police source, both of which would fall in the expert category. This
expert source can talk directly with the podcaster/correspondent, or they can be cited
through a quote or paraphrased in story summary. Expert sourcing also does not require
that the source material was directed at the podcast at all, so citing a press release from a
certain office is also permitted as an expert source. Code all of the facts that come from an
expert source as expert sources. This code is analyzed at the sentence level.
• Community Sources: sources that are not expert (as defined above) but who have
witnessed or experienced the events or issues covered in the story. Their testimony,
whether factual or opinion-based, is all coded as community source material and is not
coded for other codes that may exist within that material (like personal testimonial).
Also, community sources do not have to give direct quotes. For example, even statements
like ‘locals are saying’ can be included in community source coding as well. Be careful
with this, however, because sometimes, especially in material that might otherwise be
coded as punditry and opinion, a speaker might say something like ‘everyone is saying,’
or ‘people are saying’ and that is not inherently credible as a community source. Use your
best context clues to infer if this information actually stems from a speaker taking
account of people’s real feelings, or if this kind of claim is used to back a speaker’s own
opinionated argument. This code is analyzed at the sentence level.
• Facts and figures: These are quantifiable stats and happenings/events that can be checked
and verified. This code is analyzed at the sentence level, and the facts and figures code
can range from one sentence to a large monologue. The key with this code is that biased
framing is minimal. This is the imparting of facts and context updates on a given story
but there is no mention of the speaker’s opinion or phrasing like ‘I think,’ ‘what might
happen,’ ‘it’s interesting to note,’ and so on. There is also sparing use of charged
language and limited evaluative phrasing. In essence, facts and figures codes are used for
baseline information that can be verified but does not imbue some kind of personal bias
or argument.
o Recollections of past issues are another example of facts and figures – however,
once that past example is used to hypothesize what could happen in the future, it
becomes punditry and opinion.
o Example of facts and figures segment: “In Namibia today, 24 cars were at risk of
falling into a large river when the bridge they were on became compromised.
Police and emergency workers responded to the scene, saving all passengers on
the bridge at the time. Ultimately the bridge suffered a partial collapse, resulting
in the loss of 7 cars into the water. Recovery efforts are still underway at this
time.”
• Personal testimonial: either podcaster, news correspondent, or guest on show relays a
personal experience. This code requires explicit ownership of the event happening to
them personally. Different from expert source because this person is not basing their
testimonial on experience in a more credible sphere. As such, it is possible for an expert
source to give a personal testimonial, but at the sentence level they need to make clear
that this was their personal experience and they need to emphasize that it was a personal
thing to them over their expert knowledge of a situation. This is not a very common
occurrence, but does happen if the expert is an interview segment guest (where a whole
story is dedicated to an interview with a given person, and they answer multiple questions
on a number of topics, thus being more than a source on a story, but they themselves
becoming the story). Analyzed at the sentence level.
• Media Citation: citing a particular news outlet’s story or journalist to lend credibility and
inform or to make the point of a podcaster/correspondent’s argument on a given issue.
Analyzed at the sentence level. In order to qualify, however, the mention of this media
article must not bear any real endorsement or criticism of the piece. Though there is
endorsement in the sense that they use the piece to convey more facts, there is no
evaluative language that says the piece was something like ‘good,’ ‘thorough,’ ‘lacking,’
‘confusing,’ and so on. Media mentions with those kinds of evaluative claims are coded
with codes falling under the ‘Punditry and Opinion’ theme.
o Example: “Chris Macklan at the Washington Post has broken a story regarding
the vacancies currently in the State Department. Macklan and his sources have
identified eighty key spots that are currently empty in the administration, resulting
in a substantial amount of backlogged work and some diplomatic missteps.”
• Audio Clip: some kind of audio material (or video in the case of The Ben Shapiro Show,
which has a live stream, though it manifests only as audio for podcast listeners) is
being shared as supplementary material to provide context and insight into a given story
for listeners. This material is not coded as more than an audio clip. In order to code it, the
entire chunk of clip-related material is highlighted. Sometimes this has a transcript
accompanying it, and sometimes it does not (if not, it often looks something like *audio clip
of people cheering*).
o Important to differentiate, especially with the BBC podcast – some of the
correspondents create story segments where they do not speak with the host
directly. This is not coded as an audio clip. Audio clips are presented by the
podcast hosts and are then analyzed in some capacity, often without direct
quotes from those featured in the clips. Further, some interviews on all of the
podcasts are more packaged, with no directed questions asked and instead a
variety of clips organized together to make a guest segment. These clips are not
the same as audio clips, and should be treated as another interview, and thus coded
for the full array of possible categories.
Entertaining Tactics:
• Purposeful Jokes and sarcasm: use of intentional jokes, resulting in laughter or not
(humor studies). This subject can be up for interpretation, but there are usually tonal hints
that someone meant for something to be a joke and laughter after the fact can be an
indicator. Some shows (The Ben Shapiro Show) do not have more than one host – thus laughter will not be
any kind of after-the-fact indicator. This code does not have to be analyzed at the
sentence level. Purposeful jokes can span more than one sentence (if they use some kind
of comedic build up) or can consist of just a few words. This code is also one that can
overlap with another code. For example, purposeful jokes often fall in the midst of some
sort of opinion discussion, and while a whole segment may be opinion, a few words or
one sentence in that opinion can also be a joke. Also, one coding of a purposeful joke can
include more than one speaker. If podcast hosts are interacting with each other over the
joke, or building on the joke together, it is all coded as one purposeful joke.
o ‘J: Mike I have a joke. You ready?
M: Hit me with it
J: Vote like the Democrats will win. Buy life insurance like the Democrats will
lose.’
o ‘I just can’t understand how anyone can be this dumb. It’s so incredibly stupid to
act this way. This is no way to run an administration. It honestly feels like we as a
country elected a semi-aware cheese puff to be the president, but only if a cheese
puff could also be a racist, narcissistic a**hole.’
§ The highlighted section is all opinion, but the final sentence is also coded as a purposeful joke.
• Pop culture references: reference to pop culture, celebrities, movies, television, etc. This
is another code that is not necessarily analyzed at the sentence level and can be applied to
segments that are already coded. Can range from the name of a celebrity to a joke that
uses pop culture in it.
o Example: “So here we are, all waiting with an air of desperation on any kind of
news, with Jon Snow style puppy eyes and abandonment issues.”
§ Coded all as a purposeful joke, but only the last section is a pop culture
reference.
• Meme reference: I want to differentiate between online fads and memes and the larger
sphere of pop culture, because while memes might be a part of pop culture, internet-based
fads and memes are big for many of these podcasts. All of the hosts are on Twitter or
some type of social media, and social media is not just influencing the news they see or
their conversation with listeners, but also the language they’re using. These references can be
more or less obscure, and they’re used differently by conservative hosts and liberal ones.
o Example: “Watching that press conference we are all the gif of the guy who just
blinks a lot. Nothing sums it up quite like that. I’m shook and I’m blinking and
that’s all I can do.”
• Celebrity feature: a celebrity (non-politician) is featured on the podcast in an interview or
passing cameo (celebrity is communicating with the podcast, it can’t just be a clip of the
celebrity). This is not analyzed at the sentence level. You just highlight the celebrity’s
name when they are introduced on the show. This code tracks how often these shows
invite celebrities (not journalists, politicians, and so on) to join them for entertainment
value.
• Roasting: verbally dragging or haranguing a person, group, or idea. Defined by its
hyperbole and loaded language, roasting requires taking something seen as bad and
blowing it up into something terrible. However, to be classified as a ‘roasting’ the
speaker must say they are going to roast someone, or can use a synonym like ‘drag,’
‘crucify,’ or ‘humiliate.’ There needs to be a stated intention of roasting; otherwise a bad
assessment of someone is classified within other codes in the punditry and opinion theme.
Analyzed at the sentence level, but a roast often involves a monologue of multiple
sentences.
o Example: ‘Let’s just stop a minute here to drag the media, because they make it so
damn easy. What a bunch of worthless, diabolical animals. They’re terrible. The
worst kind of shit on the bottom of a shoe. They’re the kind of people even a
mother couldn’t love. A saint would condemn them. God himself would take a
pass on forgiving these idiots. Awful. Terrible. I hate the media.’
Mobilizing Tactics:
• Call to Action: calls to some sort of political action. This includes, but is not limited to
voting, donating money to political causes, protesting, boycotting, calling senators,
knocking on doors, working at phone banks, and volunteering time for a campaign or a
political event.
o Example: “Don’t forget to vote on Tuesday and in the meantime you need to
convince all your friends to do the same. I don’t care if they’re your overachiever
friends or the stoners who barely get out of bed in the morning. Get them to the
polls, make them go canvass with you, have them skip the Starbucks coffee that
day and donate to some last-minute fundraising. Whatever they do, it all helps.
Every last bit helps, so get out there and let’s win this thing.”
Punditry and Opinion:
• Punditry and Opinion: Lines up with other studies of political news. Opinion is usually
made clear because the respondent has been asked for their opinion, or they preface in
some way with ‘In my opinion,’ ‘I think,’ and other personal qualifiers. However,
sometimes opinion is more subtle and comes down to tone and bias of framing. A
podcaster or correspondent can be giving the facts of a situation, but if they frame it
towards one side more heavily, even without disclosing that is what they are doing, the
text is coded as opinion. Punditry is similar, and mostly comprises the answers of podcast
cohosts or correspondents when they are not just regurgitating cited facts. Punditry can
weave in checkable sources or statements, but on the whole those sources are being made
to put forward an argument as opposed to strictly inform. Punditry also often deals with
hypotheticals. Phrases similar to ‘if this were to happen,’ ‘if I were running this
campaign’ and others can signal that the speaker is participating in a pundit-like fashion
(Karidi, 2018). Other examples of punditry also include extensive paraphrasing, so
basically, a reporter or speaker is trying to convey the happenings of a story or the
opinions of someone being covered in the story without using direct quotes, instead
saying things that are similar to what might have been said but aren’t checkable and
are in some way leading. This category is analyzed at the sentence level.
o Keep in mind, there are codes in this section that are more detailed and they
should not overlap with punditry and opinion. So, if the opinion is evaluating a
given political party or a person in a political party, consult the below codes and
choose one of those. Those codes cannot be double coded with ‘punditry and
opinion.’ Also, if the opinion is evaluating the media as a whole, their coverage of
a certain issue, or even a single story or journalist, that likewise falls within its
own coding category that is separate from this broader category of punditry and
opinion.
o Example: “The idea that we are in the 21st century and we are still arguing about
the science behind climate change is batshit crazy. There is no room for debate on
this. Facts are facts, and yet you have people who just don’t believe in them or
choose not to give a shit. This is a crisis that we are all facing, and ignoring it is
not an option, in my opinion.”
• Democrat Assessment Positive: Assessment of the Democratic party in a positive way.
This analysis goes further than a stated fact, for example, ‘Democrats are expected to win
nationally by seven points,’ is not in itself analysis about the party. There needs to be
some sort of assessment about whether the Democratic party or a figure in that party is
doing well. This is analyzed at the sentence level, but the assessment itself (something
like ‘The Democrats are crushing it right now’) is not the only thing coded. Any
justification that precedes or follows an assessment is also coded in this category. Also
keep in mind, a rule about all of the assessment categories for political parties – if there is
hypothetical analysis, like ‘this could happen’ or ‘if they do this’ that is punditry. Dealing
in hypotheticals automatically grounds the code as punditry and opinion. The evaluation
needs to be of current and past actions/stances to fall into an assessment category.
o Example: “It makes me so happy to see the Democrats getting out there and really
talking about healthcare and housing and issues that people care about. This is a
great message for them, but it’s also just the kind of stuff that matters to everyday
citizens. Healthcare is the number one thing people care about because it is
something we all need at one time or another and the system is broken. So there
needs to be this conversation, and so far the Democrats are walking the walk and
talking the talk.”
• Democrat Assessment Negative: Assessment of the Democratic party or an individual or
group that identifies with the party in a negative way. As with the code above, this
analysis goes further than stated fact. There needs to be some sort of judgment about the
Democrats or one of their members doing poorly. Can include justifications for that
argument as well in this code. Analyzed at the sentence level, but as with many codes in
this theme, an argument or assessment may span more than one sentence. Also keep in
mind, a rule about all of the assessment categories for political parties – if there is
hypothetical analysis, like ‘this could happen’ or ‘if they do this’ that is punditry. Dealing
in hypotheticals automatically grounds the code as punditry and opinion. The evaluation
needs to be of current and past actions/stances to fall into an assessment category.
o Example: “Democrats have long been hypocritical on this issue. You can’t just
run around claiming that we all get free healthcare and everything is puppies and
rainbows in your grand new plan and then say but don’t worry about how we pay
for it. It’ll all work out. We are worried. We’re worried because we’re talking ten
trillion dollars over ten years. You expect us to believe the government, with its
hardly flawless reputation on spending is trustworthy on this? Nah. Not buying
it.”
• Republican Assessment Positive: Assessment of the Republican party in a positive way,
this includes praise of Trump, particular Republican members, or the party on the whole.
Needs to go further than just a fact about the party or person and give some judgment
about them being good. Can include extended justifications for why they are good or
have done something good. Analyzed at the sentence level, but can span more than one
sentence. Also keep in mind, a rule about all of the assessment categories for political
parties – if there is hypothetical analysis, like ‘this could happen’ or ‘if they do this’ that
is punditry. Dealing in hypotheticals automatically grounds the code as punditry and
opinion. The evaluation needs to be of current and past actions/stances to fall into an
assessment category.
o Example: “Trump’s strategy of making this election about the economy is
excellent. The economy has literally never been better. Unemployment is at
historic lows, people are working and keeping food on the table, and that’s a good
thing. We don’t want to mess with what’s working so republicans who are
pushing this are playing the smart game.”
• Republican Assessment Negative: Assessment of the Republican party in a negative way.
This includes criticism of Trump, particular Republican members, or the party as a whole.
Needs to go further than just a fact about the party or person and give some
judgment about them being bad or lacking. Can include justifications to that argument as
well in this code. Analyzed at the sentence level, but can expand for multiple sentences.
Also keep in mind, a rule about all of the assessment categories for political parties – if
there is hypothetical analysis, like ‘this could happen’ or ‘if they do this’ that is punditry.
Dealing in hypotheticals automatically grounds the code as punditry and opinion. The
evaluation needs to be of current and past actions/stances to fall into an assessment
category.
o Example: “The Republicans right now are desperate to make this election about
the economy. They want to run on the economy because the numbers are good.
The numbers are in their favor. But what does Trump do? He pivots to
immigration. He makes it about racial resentment. He stokes fear and holds up the
caravan as the number one issue. He says his wall is still top priority. It’s
astounding, because he’s got this perfectly viable path to victory and he fumbles
by going for the easy, low hanging, and yet divisive fruit.”
• Media Coverage Criticism: Requires the podcaster or guest to openly criticize either the
media on the whole, coverage on a certain issue, or an individual story or journalist.
Similar to the media citation code in the hard news section, but this code is defined by an
evaluation of the media that is negative. An assessment of the media is made, and once
that assessment is made in a negative fashion, the code ‘media citation’ is no longer
applicable and this code is used instead. Analyzed at the sentence level but can cover
many sentences together. May double code this with another code, a good example being
any assessment code about a political party, as political parties are often discussed in
conjunction with media.
o Example: “The New York Times put out an op-ed this weekend entitled, “Is
Trump plotting or just plain crazy?” and I just want to know how the fuck that’s
helpful. We’re two years into this presidency. The time of that kind of talk is over.
The world is on fire, so let’s not hypothesize about the guy who started it and
whether he’s malicious or just stupid. I want updates on the freaking fire.”
• Media Coverage Endorsement: Requires the podcaster or guest to openly praise or
endorse the media on the whole or a particular example of media coverage. Similar to the
media citation code in the hard news section, but this code is defined by an evaluation of
the media that is positive. An assessment of the media or article is made, and once that
assessment is made in a positive fashion, the code ‘media citation’ is no longer applicable
and this code is used instead. Analyzed at the sentence level, but can cover many
sentences together when the discussion runs through all of the insights of a given article or
piece of media coverage. May double code this with another code, a good example being any assessment
code about a political party, as political parties are often discussed in conjunction with
media.
o Example: “You know it’s a rare thing for me to say that I think the media is handling
something well, but this time I have to give it to them. We’re seeing, for the first
time, a willingness to just call a lie a lie. We’re letting go of those old models of
what should be. Don’t exaggerate, don’t use loaded language. But guess what? A
lie is a lie, and when you call it something else you fail to actually tell the story in
a truthful way. So good on them for making this change. Took them long
enough.”
Advertising:
• Self-promotion: podcast host or guests shout out their own businesses, network affiliates,
or upcoming events, thus advertising themselves or their peers at whatever network
makes their content.
o Example: “Don’t forget to check out other Slate podcasts by going to slate.com/podcast. There’s
all sorts of amazing content up there from our friends throughout Slate. They’re doing great
work, and they’re doing it for you, so be sure to check that out.”
APPENDIX B
EXPERIMENT DESIGN AND FORMAT
Participant Informed Consent: Hello! You are being asked to participate in a research study conducted by Emily O’Connell from American University. The purpose of this study is to assess the effects of popular framing techniques in political podcasts. This study will contribute to the student’s completion of her PhD dissertation. To participate in this study, you will need to respond from a device that has the ability to play audio recordings. This study consists of a survey experiment that will be administered to individual participants online. You will be asked to provide answers to a series of questions related to news and politics, and may be exposed to short podcast clips. Participation in this study will require approximately 5 minutes of your time.

The investigator does not perceive more than minimal risks from your involvement in this study. However, your participation will advance academic research on the impact of political podcasts and podcast news.

While demographic information such as age, gender, race, and educational background will be collected, this is an anonymous survey experiment (no names or geographic locations are required). The results of this research will be presented in a student doctoral defense and published in that student’s doctoral thesis. The results of this project will be coded in such a way that the respondent’s identity will not be attached to the final form of the study. The researcher retains the right to use and publish non-identifiable data. All data will be stored in a secure location accessible only to the researcher. Upon completion of the study, all information that matches individual respondents with their answers will be destroyed.

Your participation is entirely voluntary. You are free to choose not to participate. Should you choose to participate, you can withdraw at any time without consequences of any kind. You may also refuse to answer any individual question without consequences.
If you have questions or concerns during the time of your participation in this study, or after its completion, or if you would like to receive a copy of the final aggregate results of this study, please contact:

Emily O’Connell, School of Communication, American University, [email protected]
Ericka Menchen-Trevino, School of Communication, American University
Giving of Consent
I have read this consent form and I understand what is being requested of me as a participant in this study. I freely consent to participate. I have been given satisfactory answers to my questions. The investigator provided me with a copy of this form. I certify that I am at least 18 years of age. (Participant checks yes on first page, moving forward to study)
Pretest

*Sound check question where a SoundCloud audio clip is played containing a 4-digit number and
participants must input the number into the answer box*
Are you a US citizen?
a) Yes
b) No
Some people think the government should provide fewer services, even in areas such as health
and education, in order to reduce spending. Other people feel that it is important for the
government to provide many more services even if it means an increase in spending. Where
would you place yourself on this scale, or haven't you thought much about this?
(Scale from 1-7, 1 being ‘cut services/spending’ to 7 being ‘more services/spending.’ 4, the
neutral middle point, is ‘don’t know, haven’t thought’)
**Only people who answer ‘don’t know, haven’t thought’ are sorted to a secondary question as
follows:
Thanks for your answer. Now choose from the remaining options. (Scale of 1 ‘cut services and
spending’ to 6 ‘more services and spending,’ no longer including a ‘don’t know’ option)
Do you support The Universal School Meals Program Act?
a) Strongly support
b) Somewhat support
c) Neither support nor oppose
d) Somewhat oppose
e) Strongly oppose
Four Test Treatments

Scenario 1: Humor (Liberal or Conservative)
• 90 second audio clip with two speakers framing the argument towards a liberal or
conservative lean using humor
Scenario 2: Punditry and Opinion (Liberal or Conservative)
• 90 second audio clip with two speakers framing the argument towards a liberal or
conservative lean using opinion
Scenario 3: Hard News
• 90 second audio clip discussing facts of the policy and the issue it’s hoping to address.
Gives the arguments of both sides, ultimately not framed in any opinion direction
Scenario 4: No treatment
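The four scenarios above yield six conditions once the ideological lean of the Humor and Opinion clips is counted. The actual randomization was handled by the survey platform and is not described here; as a minimal illustrative sketch, uniform assignment might look like:

```python
import random

# Six conditions implied by the four treatments: Humor and Opinion each
# split into liberal and conservative versions; Hard News and the
# no-treatment control do not.
CONDITIONS = [
    "Conservative Opinion",  # Condition 1
    "Liberal Opinion",       # Condition 2
    "Conservative Humor",    # Condition 3
    "Liberal Humor",         # Condition 4
    "Hard News",             # Condition 5
    "No Treatment",          # Condition 6
]

def assign_condition(rng=random):
    """Assign one participant to a condition, uniformly at random."""
    return rng.choice(CONDITIONS)
```

Uniform `random.choice` is only one plausible scheme; survey platforms often use balanced (quota) randomization instead, which would explain the roughly even group sizes in Appendix D.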
Post Test

Baseline Questions
Was the podcast story clear? (seven-point scale, ranging from ‘not at all’ to ‘very much’) (Xu, 2014).
Do you think the podcast contained false or made up information? (seven-point scale, ranging from ‘not at all’ to ‘very much’)

Policy Attitude and Knowledge Questions
Prior to participation in this study, were you already aware of The Universal School Meals
Program Act?
a) Yes
b) No
c) Unsure
What is the main goal of The Universal School Meals Program Act?
a) Provide free school meals to poor children in American public schools
b) Provide free school meals to all children in American public schools
c) Provide reduced cost meals to poor children in American public schools
d) Provide reduced cost meals to all children in American public schools
e) I don’t know
What meals would be covered for students under The Universal School Meals Program Act?
a) Just lunch
b) Breakfast and lunch
c) Lunch and dinner
d) Breakfast, lunch, and dinner
e) I don’t know
What two outside groups of people did lawmakers consult when creating The Universal School
Meals Program Act?
a) Farmers and parent-run advocacy groups
b) Teachers unions and doctors
c) Doctors and farmers
d) No groups were consulted by lawmakers
e) I don’t know
Is The Universal School Meals Program Act important to the country? Matrix (5-point scale from
‘very important’ to ‘very unimportant’)
Do you support The Universal School Meals Program Act?
a) Strongly support
b) Somewhat support
c) Neither support nor oppose
d) Somewhat oppose
e) Strongly oppose
Political Demographics
We have a number of parties in the United States, each of which would like to get your vote.
How probable is it that you will ever vote for the following parties? Please specify your views on
a ten-point scale where 1 means ‘not at all probable’ and 10 means ‘very probable.’
• Democratic Party (1-10 scale)
• Republican Party (1-10 scale)
• Green Party (1-10 scale)
• Libertarian Party (1-10 scale)
Podcasts and News
When was the last time you listened to a podcast?
a) Within the last two days
b) Within the last week
c) Within the last month
d) Within the last year
e) More than a year ago
f) I have never listened to a podcast
Have you ever listened to a political podcast?
a) Yes, I listen regularly
b) Yes, but I do not listen regularly
c) No
How closely do you follow the news on television, in print, on the radio, through mobile devices
or online?
a) Very closely
b) Moderately
c) Somewhat
d) Not very closely
e) Not at all
Basic Demographic Questions
Choose one or more races you consider yourself to be:
a) White
b) Black or African American
c) American Indian – Alaskan Native
d) Asian
e) Native Hawaiian or Pacific Islander
f) Other (offer text-box)
Are you Spanish, Hispanic, or Latino?
a) Yes
b) No
What is your gender?
a) Male
b) Female
c) Non-conforming (offer text box)
d) Do not wish to say
What is the highest level of education you have completed?
a) None, or grades 1-8
b) High school incomplete (grades 9-11)
c) High School Graduate (grade 12 or GED Certificate)
d) Technical, trade or vocational school AFTER high school
e) Some college, no 4-year degree (includes associate degree)
f) College graduate (B.S., B.A., or other 4-year Degree)
g) Post-graduate training/professional school after college (towards a Master’s degree or
Ph.D., Law or Medical School)
In what year were you born? -text entry for self-input
Comments Question
APPENDIX C
PODCAST CLIP SCRIPTS
Humor Scripts Conservative Humor
Speaker 1: Okay, so let’s talk briefly about this crap bill congress just pitched.
Speaker 2: The lunch one?
Speaker 1: The Universal School Meals Program Act.
Speaker 2: See that right there is why crap like this bothers me. Like it’s one thing to give out
free lunches, which I already think is a reach, but meals? As in multiple a day? Let’s just really
go for it and incinerate the money. As soon as the taxes come in, make a huge pile, bring in some
dragons, hit it with a little Dracarys, and watch it burn. At least we’d have a little entertainment.
Speaker 1: Well let me explain to the audience what we’ve got here. This bill, the USMPA,
proposes that the American taxpayer subsidize not just the lunches of poorer kids who may
actually need it, but the lunches, and presumably the breakfasts and dinners, of all kids. If Jeff
Bezos, richest man in the entire world, has kids, they get free meals. If Mark Cuban manages to
knock somebody up, despite the somewhat severe personality disorder he’s grappling with,
there’s free meals for them too. Mark Zuckerberg, the Google guys, whoever that Jack dude is
from Twitter, their kids all ride the free meal train.
Speaker 2: So what’s the argument here? How can politicians justify this?
Speaker 1: When have politicians ever needed to justify anything?
Speaker 2: Touché.
Speaker 1: I don’t know, some of them are claiming it reduces ‘stigma’ which is political speak
for we have to overly regulate because some people feel left out. A few others make the half-way
decent argument that through standardizing school meals, they’ll be ensuring better health
benefits for all students -,
Speaker 2: But would they really do that, or, would they just throw old fruit cups and a chicken
breast on a plate and call it healthy? I mean we all went to school. We know what we’re working
with here, and it ain’t some wholesome, organic display.
Speaker 1: They’re supposedly working with doctors and, local farmers to source better
ingredients.
Speaker 2: Bullshit. They’re not helping farmers – they’re raising taxes, and last time I checked
farmers paid taxes too. This bill is trash. Scrap it and move on already.
Speaker 1: Agreed.
Liberal Humor
Speaker 1: Okay, so let’s talk briefly about this new bill pitched in congress. The Universal
School Meals Program Act.
Speaker 2: God, have they got no one over there with some flair? Where’s the branding? I mean
I guess they tell you exactly what they’re going to do, but damn, with everything going on you
think they’d go with some pizzazz to gain some attention.
Speaker 1: Well what were you expecting?
Speaker 2: Picture this – three different schools around the country. Normal students, normal
lunches, maybe you pan in on some kids who aren’t eating, and then – out of nowhere, BAM!,
Lizzo appears. She’s got the flute, she’s rapping, she’s dancing. Everyone gets their lunch and
everyone’s happy, and we fade to black and it just says ‘Get that Bread Bill.’
Speaker 1: Uh… I guess that’s one idea. But I doubt they’ve got any A-list celebrities chomping
at the bit to make policy reveal videos.
Speaker 2: Agree to disagree.
Speaker 1: Anyway, congress has proposed this law which wouldn’t just include lunch, but
would provide up to three free and nutritious meals per day to all kids in the US school system.
Speaker 2: But that’s awesome. I’m not getting why you scrapped my celebrity idea. We could
find someone. Like Oprah – Oprah loves helping people. Someone call Oprah!
*laughter*
Speaker 1: Okay regardless of who is or isn’t potentially endorsing this, the lawmakers behind
the bill are claiming that by making school meals free they can remove stigma for poorer kids
and families, boost the health of all students, and improve children’s energy, concentration
ability, and moods.
Speaker 2: They’re saying it would boost kids’ moods?
Speaker 1: Uh, yeah?
Speaker 2: Well what the hell are we waiting for? I mean 13-year-olds are the single most
ruthless population on the planet and you’re saying we could fix that, even just a little bit? Let’s
get on this ASAP.
*laughter*
Speaker 1: I mean I agree with you. The bill seems to cover its bases. They’ve got doctors and
farmers on board –
Speaker 2: Farmers?
Speaker 1: Yeah.
Speaker 2: Well then wrap it up, we’re done here. If you’ve got farmers you’ve got a law.
Nothing politicians love more than placating farmers.
Speaker 1: It’s not that simple. There’s still some debate about money and funding, despite the
fact that this bill could help millions of kids.
Speaker 2: Eh I wouldn’t worry about that. They’ve got money for endless wars, they can throw
some change at feeding kids, right?
Speaker 1: Here’s hoping.
Speaker 2: Plus we’re calling Oprah. She’ll figure this out.
Speaker 1: God Bless Oprah.
Punditry and Opinion Conservative Punditry and Opinion
Speaker 2: So every episode we do a ‘lightning legislation’ spotlight. Today I want to talk about
the Universal School Meals Program Act. It’s a law currently being floated that would provide
up to three free meals per day to all US school children regardless of how much money their
parents make.
Speaker 1: What a joke.
Speaker 2: You’re not a fan?
Speaker 1: Hell no. This is just another useless bill designed to spend way more money than we
need to. I could in theory understand free lunches. But three free meals a day? We’re just now
collectively going to be paying for every child’s meals? Parents are supposed to do that. If you
have a kid, it’s your job to feed them, and the people who can’t afford it already get free
meals. So we’re just going to pay for free meals for middle class and rich kids? Why would we
ever do that?
Speaker 2: Some doctors have argued that there’s this stigma for kids who can’t afford the meals.
There’s that whole ‘meal shaming’ thing the media has been running with…
Speaker 1: And there we go! This is a case of the media needing a story, overblowing a situation,
and siding with whoever in congress helps them make better ratings. These guys don’t want to
help this country – they want to bolster welfare programs that keep people dependent and cost us
tax dollars. Now, do I believe some kids have felt embarrassed over a school lunch debt before?
Sure, I think all of us as kids had a day where we forgot our lunch or forgot our money, but that’s
on the parents. We already have laws that give people who are really in need access to free
lunches. But we’ve got a huge deficit and we want to add to it by feeding people who can feed
themselves? Tell me how that makes sense.
Speaker 2: It doesn’t. And what makes it worse is that they’re branding this as something that
farmers want. There’s a stipulation that says if you use local products schools get an extra
kickback from the government, but there are way more effective ways to help farmers than this.
It’s a pathetic play to get support from people in this country that are already hurting and who
need more than a little bit of business from public schools.
Speaker 1: Honestly, this bill is a joke. We’ve got serious issues to deal with. Congress needs to
actually work on those.
Liberal Punditry and Opinion
Speaker 1: So every episode we do a ‘lightning legislation’ spotlight. Today I want to talk about
the Universal School Meals Program Act. It’s a law currently being floated that would provide
up to three free meals per day to all US school children regardless of how much money their
parents make.
Speaker 2: Okay, see, this is the kind of stuff I want congress working on.
Speaker 1: So you feel strongly about this?
Speaker 2: Hell yeah! I mean we’ve seen stories in the news for months about student meal debt
and all sorts of shaming happening across the country; little kids having to go hungry because
they can’t pay a bill or because they’re embarrassed. It’s heartbreaking, and it’s happening
everywhere. I think a good response is to just give everyone the meals. If you make it equal and
a non-issue, kids can get on with learning.
Speaker 1: So what do you say to people who criticize how much it would cost?
Speaker 2: Ugh! It’s always ‘how much will it cost?’ Like do we forget we’re in wars that have
cost trillions of dollars? Why do we always have money for that, but feeding kids the most basic
meal is going to break the bank? Those arguments are petty and partisan. They don’t really look
at this bill. The people who are sponsoring this wrote it with doctors and with local farmers and
there’s provisions in there that not only guarantee better nutrition for kids, but also stimulate
local economies. The schools get incentives to use local products so they undoubtedly will.
Speaker 1: Right.
Speaker 2: So it’s maybe costing us some money, but we’re helping kids and we’re investing a
chunk of that money back into other areas that need it. Couldn’t be more of a win-win.
Speaker 1: I agree. Plus, how many parents are making a healthy lunch for their kids for 3 bucks
a day?
Speaker 2: Zero. The answer is zero. Three bucks is a steal.
Speaker 1: Okay, so the verdict is?
Speaker 2: We want this bill. It’s going to help all kids be healthy and do better in school. This is
important, Congress, so make it work!
Hard News
One Speaker –
Members of congress have recently introduced the Universal School Meals Program Act, a law
which would provide up to three free meals per day to all U.S. school children.
Currently, a family of four in the US would need to earn less than $48,000 annually to be eligible
for state funded meals. Other lower-income families may apply for reduced rate meal assistance,
but the USMPA would eliminate these requirements in favor of making free meals available to
all students.
Cosponsors of the bill claim that providing free meals can create standardized health benefits for
all children, reduce stigma for poorer students who may currently struggle to pay for school
meals, and improve student participation, concentration, and learning. In order to ensure that
offered meals would be healthy and create these projected benefits, the bill also increases the
amount schools can spend on each meal and provides monetary incentives to school districts for
sourcing at least 30 percent of their meals locally.
The USMPA has the support of a number of doctors’ and clinicians’ groups, who believe that
standardizing school lunches will improve the quality of food consumed by students across the
board. Because the Universal School Meals Program would dictate a certain level of nutrition,
supporting doctors believe that it could improve the energy, mood, and general health of students
across America.
While the bill has been presented on the House floor, no consensus has yet been reached in the
House and no vote has been called. Currently, discussions on plan payment and the role of the
federal government are continuing, but the legislators behind the proposed act are hopeful the
bill will pass sometime soon.
APPENDIX D
TOP LINE DATA
Table 23. Political Party Support Scores Ranked from 1-10 (10 being most Support)
(N) Dem Rep Green Lib
Treatment 1 – Opinion 386 4.14 5.8 2.66 3.6
Condition 1 – C. Opinion 163 1.5 7.76 1.3 3.7
Condition 2 – L. Opinion 223 6 4.3 3.6 3.5
Treatment 2 – Humor 416 4.2 5.8 2.92 4.05
Condition 3 – C. Humor 180 1.89 7.27 1.98 4.46
Condition 4 – L. Humor 236 5.9 4.7 3.6 3.74
Treatment 3 – Hard News 368 4.65 5.8 2.3 3.68
(Condition 5)
Treatment 4 – No Audio 372 4.6 5.8 2.85 3.6
(Condition 6)
Table 24. Podcast Listening Habits
Question N Percent
When was the last time you listened to a podcast?
- Within the last two days (433) 28%
- Within the last week (379) 25%
- Within the last month (365) 24%
- Within the last year (196) 13%
- More than a year ago (99) 6%
- I have never listened to a podcast (74) 5%
Have you ever listened to a political podcast?
- Yes, I listen regularly (358) 23%
- Yes, but I do not listen regularly (660) 43%
- No (528) 34%
Figure 5. News Frequency Graph
News Frequency - How closely do you follow the news on television, in print, on the radio,
through mobile devices or online?
# Answer % Count
1 Very closely 30.80% 483
2 Moderately 41.90% 657
3 Somewhat 21.05% 330
4 Not very closely 5.68% 89
5 Not at all 0.57% 9
Figure 6. Education Level Graph
What is the highest level of school you have completed or the highest degree you have received?
# Answer % Count
1 Less than high school degree 0.32% 5
2 High school graduate (high school diploma or equivalent including GED) 9.09% 142
3 Some college but no degree 17.08% 267
4 Associate degree in college (2-year) 11.84% 185
5 Bachelor's degree in college (4-year) 46.19% 722
6 Master's degree 12.67% 198
7 Doctoral degree 1.02% 16
8 Professional degree (JD, MD) 1.79% 28
Total 100% 1563
APPENDIX E
SURVEY EXPERIMENT DATA: ANOVA TABLES AND OUTPUTS
Table 25. Repeated Measures ANOVA (Ideological Interaction Political Attitude)
(Columns: Parameter, B, Std. Error, t, Sig., 95% Confidence Interval [Lower, Upper], Partial Eta Squared, Noncent. Parameter, Observed Power(b))

DV: Support Pre
Intercept                 1.714   .060   28.41    .000   [1.595, 1.832]   .345   28.41   1.000
Conservative              1.149   .111   10.32    .000   [.930, 1.367]    .065   10.32   1.000
Liberal                   0(a)
Opinion                   -.046   .089   -.512    .608   [-.220, .129]    .000   .512    .081
Humor                     -.030   .088   -.345    .730   [-.202, .141]    .000   .345    .064
Hard News                 -.008   .089   -.088    .930   [-.183, .167]    .000   .088    .051
Control                   0(a)
Conservative Opinion      .146    .150   .976     .329   [-.148, .441]    .001   .976    .164
Conservative Humor        -.050   .147   -.340    .734   [-.339, .239]    .000   .340    .063
Conservative Hard News    -.101   .152   -.663    .507   [-.400, .198]    .000   .663    .102

DV: Support Post
Intercept                 1.668   .062   26.81    .000   [1.546, 1.790]   .319   26.81   1.000
Conservative              1.286   .115   11.20    .000   [1.061, 1.511]   .076   11.20   1.000
Liberal                   0(a)
Opinion                   -.152   .092   -1.659   .097   [-.332, .028]    .002   1.659   .382
Humor                     -.056   .090   -.622    .534   [-.233, .121]    .000   .622    .095
Hard News                 -.116   .092   -1.260   .208   [-.296, .064]    .001   1.260   .242
Control                   0(a)
Conservative Opinion      1.161   .155   7.506    .000   [.858, 1.465]    .035   7.506   1.000
Conservative Humor        .722    .152   4.750    .000   [.424, 1.020]    .015   4.750   .997
Conservative Hard News    .189    .157   1.204    .229   [-.119, .497]    .001   1.204   .225
a. This parameter is set to zero because it is redundant.
b. Computed using alpha = .05
Table 26. Multivariate Tests for Political Attitude
                     Value   F         Hypothesis df   Error df   Sig.
Pillai's trace       .030    47.928a   1.000           1536.000   .000
Wilks' lambda        .970    47.928a   1.000           1536.000   .000
Hotelling's trace    .031    47.928a   1.000           1536.000   .000
Roy's largest root   .031    47.928a   1.000           1536.000   .000
Table 27. Multiple Comparisons of Treatment Groups for Political Attitude
(I) TC   (J) TC   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   95% CI Upper Bound
Opinion Humor .07 .082 1.000 -.15 .28
HardNews .25* .084 .017 .03 .47
Control .33* .084 .000 .11 .56
Humor Opinion -.07 .082 1.000 -.28 .15
HardNews .19 .083 .152 -.03 .40
Control .27* .083 .007 .05 .49
HardNews Opinion -.25* .084 .017 -.47 -.03
Humor -.19 .083 .152 -.40 .03
Control .08 .085 1.000 -.14 .31
Control Opinion -.33* .084 .000 -.56 -.11
Humor -.27* .083 .007 -.49 -.05
HardNews -.08 .085 1.000 -.31 .14
*. The mean difference is significant at the .05 level.
Table 28. Multiple Comparisons of Condition Groups for Political Attitude
(I) Condition   (J) Condition   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   95% CI Upper Bound

Tukey HSD:
1.00 2.00 1.87* .101 .000 1.58 2.16
3.00 .26 .106 .135 -.04 .57
4.00 1.82* .100 .000 1.53 2.10
5.00 1.33* .092 .000 1.07 1.60
6.00 1.41* .092 .000 1.15 1.68
2.00 1.00 -1.87* .101 .000 -2.16 -1.58
3.00 -1.61* .098 .000 -1.89 -1.33
4.00 -.06 .092 .990 -.32 .21
5.00 -.54* .083 .000 -.78 -.30
6.00 -.46* .083 .000 -.69 -.22
3.00 1.00 -.26 .106 .135 -.57 .04
2.00 1.61* .098 .000 1.33 1.89
4.00 1.55* .097 .000 1.28 1.83
5.00 1.07* .089 .000 .82 1.33
6.00 1.15* .089 .000 .90 1.41
4.00 1.00 -1.82* .100 .000 -2.10 -1.53
2.00 .06 .092 .990 -.21 .32
3.00 -1.55* .097 .000 -1.83 -1.28
5.00 -.48* .082 .000 -.72 -.25
6.00 -.40* .082 .000 -.63 -.17
5.00 1.00 -1.33* .092 .000 -1.60 -1.07
2.00 .54* .083 .000 .30 .78
3.00 -1.07* .089 .000 -1.33 -.82
4.00 .48* .082 .000 .25 .72
6.00 .08 .072 .865 -.12 .29
6.00 1.00 -1.41* .092 .000 -1.68 -1.15
2.00 .46* .083 .000 .22 .69
3.00 -1.15* .089 .000 -1.41 -.90
4.00 .40* .082 .000 .17 .63
5.00 -.08 .072 .865 -.29 .12
1.00 = Conservative Opinion, 2.00 = Liberal Opinion, 3.00 = Conservative Humor, 4.00 =
Liberal Humor, 5.00 = Hard News, 6.00 = Control
Table 29. Political Knowledge Percentages (3 Story Based Questions)
N Easy Med Hard
Treatment 1 - Opinion (386) 80% 74% 21%
Condition 1 – C. Opinion (162) 88% 90% 4%
Condition 2 – L. Opinion (223) 74% 62% 33%
Treatment 2 – Humor (416) 76% 57% 23%
Condition 3 – C. Humor (179) 79% 64% 9%
Condition 4 – L. Humor (236) 73% 51% 34%
Treatment 3- Hard News (370) 63% 67% 17%
(Condition 5)
Treatment 4 – No Audio (372) 58% 32% 9%
(Condition 6)
Table 30. Political Knowledge Scores (Average of Right Answers for 3 Story Based Questions)
N Knowledge Score
Treatment 1 – Opinion (386) 1.74
Condition 1 – C. Opinion (162) 1.82
Condition 2- L. Opinion (223) 1.69
Treatment 2 – Humor (416) 1.56
Condition 3- C. Humor (179) 1.52
Condition 4 – L. Humor (236) 1.58
Treatment 3 – Hard News (370) 1.49
(Condition 5)
Treatment 4 – No Audio (372) 0.99
(Condition 6)
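The knowledge scores in Table 30 are per-condition averages of the number of correct answers (0 to 3) on the three story-based questions. A minimal sketch of that computation, assuming per-respondent correctness flags for the easy, medium, and hard items (`knowledge_score` is an illustrative name, not from the analysis code):

```python
def knowledge_score(responses):
    """Mean number of correct answers per respondent.

    `responses` is a list of per-respondent tuples of booleans for the
    easy, medium, and hard story questions; the score is the average
    count of correct answers and ranges from 0 to 3.
    """
    if not responses:
        return 0.0
    # True counts as 1, so summing each tuple counts correct answers.
    return sum(sum(r) for r in responses) / len(responses)
```

For example, one respondent with all three items correct and one with only the easy item correct average to a score of 2.0, on the same 0-3 scale as Table 30.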
Table 31. Multiple Comparisons of Treatment Groups for Political Knowledge Scores Tukey HSD
(I) Treatment   (J) Treatment   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   95% CI Upper Bound
Opinion Humor .20100* .05853 .003 .0505 .3515
HardNews .28106* .06034 .000 .1259 .4362
Control .75389* .06021 .000 .5990 .9087
Humor Opinion -.20100* .05853 .003 -.3515 -.0505
HardNews .08006 .05927 .531 -.0724 .2325
Control .55288* .05914 .000 .4008 .7050
Hard News Opinion -.28106* .06034 .000 -.4362 -.1259
Humor -.08006 .05927 .531 -.2325 .0724
Control .47283* .06093 .000 .3161 .6295
Control Opinion -.75389* .06021 .000 -.9087 -.5990
Humor -.55288* .05914 .000 -.7050 -.4008
HardNews -.47283* .06093 .000 -.6295 -.3161
*. The mean difference is significant at the 0.05 level.
Table 32. 2x2 Analysis Political Attitude with General Knowledge Covariate
(Columns: Parameter, B, Std. Error, t, Sig., 95% Confidence Interval [Lower, Upper], Partial Eta Squared, Noncent. Parameter, Observed Power(b))

Intercept                   1.617   .072   22.344   .000   [1.475, 1.758]   .246   22.344   1.000
PKScoreStatic               .037    .027   1.389    .165   [-.015, .090]    .001   1.389    .284
Conservative                1.259   .116   10.826   .000   [1.031, 1.488]   .071   10.826   1.000
Liberal                     0(a)
Opinion                     -.166   .092   -1.801   .072   [-.347, .015]    .002   1.801    .436
Humor                       -.063   .090   -.697    .486   [-.240, .114]    .000   .697     .107
Hard News                   -.124   .092   -1.345   .179   [-.304, .057]    .001   1.345    .269
No Treatment                0(a)
Conservative Opinion        1.177   .155   7.591    .000   [.873, 1.481]    .036   7.591    1.000
Conservative Humor          .734    .152   4.822    .000   [.435, 1.032]    .015   4.822    .998
Conservative Hard News      .197    .157   1.256    .209   [-.111, .506]    .001   1.256    .241
Conservative No Treatment   0(a)
a. This parameter is set to zero because it is redundant.
b. Computed using alpha = .05
REFERENCES
Abel, A. D., & Barthel, M. (2013). Appropriation of Mainstream News: How Saturday Night Live
Changed the Political Discussion. Critical Studies in Media Communication, 30 (1), 1-16.
Agirre, I. A., Arrizabalaga, A. P., & Espilla, A. Z. (2016). Active audience? Interaction of
young people with television and online video content. Communication & Society, 29
(3), 133-147.
Al-Rawi, A. (2019). Viral news on social media. Digital Journalism, 7 (1), 63-79.
Allan, S. (2010). Journalism and the Culture of Othering. Brazilian Journalism Research, 6 (2),
26-40.
Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on social
media. Research & Politics, 6 (2), 1-8.
Alhabash, S., & McAlister, A. R. (2015). Redefining Virality in Less Broad Strokes: Predicting
viral behavioral intentions from motivations and uses of Facebook and Twitter. New
Media & Society, 17 (8), 1317-1339.
Ames, K. (2016). Talk vs. chat-based radio: A case for distinction. Radio Journal: International Studies in Broadcast & Audio Media, 14 (2), 177–191.
Antunovic, D., Parsons, P., & Cooke, T. R. (2018). ‘Checking’ and Googling: Stages of news
consumption among young adults. Journalism, 19 (5), 632-648.
Armstrong, C. B., & Rubin, A. M. (1989). Talk radio as interpersonal communication. Journal of Communication, 39 (2), 84-94.
Associated Press. (May 9, 2019). Singapore Outlaws Fake News, Allows Govt to Block, Remove
it. https://apnews.com/76bb290db7724086b449071145c98d58
Audio and Podcasting Fact Sheet. (2017). Pew Research Center. Accessed April 17, 2019:
https://www.journalism.org/fact-sheet/audio-and-podcasting/
Baek, Y. M., & Wojcieszak, M. E. (2009). Don't Expect Too Much! Learning From Late-Night
Comedy and Knowledge Item Difficulty. Communication Research.
https://doi.org/10.1177/0093650209346805
Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., ... &
Volfovsky, A. (2018). Exposure to opposing views on social media can increase political
polarization. Proceedings of the National Academy of Sciences, 115 (37), 9216-9221.
Barabas, J., Jerit, J., Pollock, W., & Rainey, C. (2014). The question(s) of political knowledge.
American Political Science Review, 108 (4), 840-855.
Barker, D., & Knight, K. (2000). Political talk radio and public opinion. Public opinion
quarterly, 64 (2), 149-170.
Barker, P. W., Chod, S. M., & Muck, W. J. (2020). PS: Political Science & Politics, 53(2), 326-
327.
Barnett, S. (1998). Dumbing Down or Reaching Out: Is it Tabloidisation wot done it?. Political
Quarterly, 69 (2), 75-90.
Barton, J., Castillo, M., & Petrie, R. (2014). What persuades voters? A field experiment on
political campaigning. The Economic Journal, 124(574), F293-F326.
Baum, M. A. (2002). Sex, Lies, and War: How Soft News Brings Foreign Policy to the
Inattentive Public. American Political Science Review, 96 (1), 91.
Baum, M. (2003). Soft News Goes to War: public opinion and American foreign policy in the new media age. Princeton, N.J: Princeton University Press.
Baum, M. A. (2003). Soft News and Political Knowledge: Evidence of Absence or Absence of
Evidence? Political Communication, 20 (2), 173.
Baum, M. A., & Jamison, A. S. (2006). The Oprah Effect: How Soft News Helps Inattentive
Citizens Vote Consistently. Journal of Politics, 68 (4), 946-959. doi:10.1111/j.1468-
2508.2006.00480.x
Baym, G. (2007). Representation and the Politics of Play: Stephen Colbert's Better Know a
District. Political Communication, 24 (4), 359-376.
Baym, G. (2014). Stewart, O’Reilly, and The Rumble 2012: Alternative Political Debate in the
Age of Hybridity. Popular Communication, 12 (2), 75-88.
Baym, G. & Shah, C. (2011) Circulating Struggle. Information, Communication & Society, 14
(7), 1017-1038.
Becker, A. B. (2011). Political humor as democratic relief? The effects of exposure to comedy
and straight news on trust and efficacy. Atlantic Journal of Communication, 19 (5), 235-
250.
Bell, E. J., Owen, T., Brown, P. D., Hauka, C., & Rashidian, N. (2017). The platform press: How
Silicon Valley reengineered journalism. Tow Center for Digital Journalism Report.
Columbia University.
Benkler, Y. (2006). Political Freedom Part 2: Emergence of the Networked Public Sphere. In The
Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale
University Press, (pp. 212-272).
Bennett, W. L., & Entman, R. M. (2000). Mediated politics: Communication in the future of
democracy. New York; Cambridge, U.K: Cambridge University Press.
Benson, R., Blach-Ørsten, M., Powers, M., Willig, I., & Zambrano, S. V. (2012). Media Systems
Online and Off: Comparing the Form of News in the United States, Denmark, and
France. Journal of Communication, 62 (1), 21-38.
Berry, R. (2015). A Golden Age of Podcasting? Evaluating Serial in the Context of Podcast
Histories. Journal of Radio & Audio Media, 22 (2), 170–178.
Berry, R. (2016). Podcasting: Considering the evolution of the medium and its association with
the word ‘radio’. The Radio Journal: International Studies in Broadcast and Audio Media, 14, 7-22.
Berthon, P. R., Pitt, L. F., Plangger, K., & Shapiro, D. (2012). Marketing meets Web 2.0, social
media, and creative consumers: Implications for international marketing strategy.
Business Horizons, 55 (3), 261-271.
Bimber, B. (2017). Three Prompts for Collective Action in the Context of Digital Media.
Political Communication, 34 (1), 6–20.
Binstock, R. H., & Davidson, S. (2012). Political marketing and segmentation in aging
democracies. In Routledge handbook of political marketing (pp. 36-49). Routledge.
Blasi, V. (1977). The Checking Value in First Amendment Theory. American Bar Foundation
Research Journal, 2 (3), 521-649.
Blumler, J. G. (2015). Core Theories of Political Communication: Foundational and Freshly
Minted. Communication Theory, 25 (4), 426–438.
Blumler, J. G., & Gurevitch, M. (1995). The Crisis of Public Communication. London: Routledge.
Blumler, J. G., & Kavanagh, D. (1999). The third age of political communication: Influences and
features. Political Communication, 16 (3), 209–230.
Boczkowski, P. J. (2009). Rethinking Hard and Soft News Production: From Common Ground
to Divergent Paths. Journal of Communication, 59 (1), 98-116.
Boczkowski, P., Mitchelstein, E., & Walter, M. (2012). When Burglar Alarms Sound, Do
Monitorial Citizens Pay Attention to Them? The Online News Choices of Journalists and
Consumers During and After the 2008 U.S. Election Cycle. Political Communication, 29
(4), 347–366.
Bode, L. (2016). Political news in the news feed: Learning politics from social media. Mass communication and society, 19 (1), 24-48.
Bode, L. (2020). Words that matter: How the news and social media shaped the 2016
Presidential campaign. Brookings Institute Press.
Booth, A. (1970). The recall of news items. The Public Opinion Quarterly, 34 (4), 604-610.
Borgesius, F. J. Z., Trilling, D., Möller, J., Bodó, B., Vreese, C. H. de, & Helberger, N. (2016).
Should we worry about filter bubbles? Internet Policy Review. Retrieved from:
https://policyreview.info/articles/analysis/should-we-worry-about-filter-bubbles
Boukes, M. (2019). Infotainment. In T. P. Vos, F. Hanusch, D. Dimitrakopoulou, M. Geertsema-
Sligh & A. Sehl (eds.), International Encyclopedia of Journalism Studies; Forms of
Journalism. Hoboken (NJ): Wiley-Blackwell.
Boukes, M. (2018). Agenda-Setting With Satire: How Political Satire Increased TTIP’s Saliency
on the Public, Media, and Political Agenda. Political Communication, 36 (3), 1–26.
Boukes, M., & Boomgaarden, H. G. (2015). Soft News With Hard Consequences? Introducing a
Nuanced Measure of Soft Versus Hard News Exposure and Its Relationship With
Political Cynicism. Communication Research, 42(5), 701-731.
Boukes, M., Boomgaarden, H. G., Moorman, M., & de Vreese, C. H. (2015). Political News with
a Personal Touch: How Human Interest Framing Indirectly Affects Policy Attitudes.
Journalism & Mass Communication Quarterly, 92(1), 121-141.
Bowyer, B. T., Kahne, J. E., & Middaugh, E. (2017). Youth comprehension of political messages
in YouTube videos. New Media & Society, 19(4), 522-541.
Brader, T. (2005). Striking a responsive chord: How political ads motivate and persuade voters
by appealing to emotions. American Journal of Political Science, 49(2), 388-405.
Brants, K. (2005). Who's Afraid of Infotainment? In D. McQuail, P. Golding, & E. de Bens
(Eds.), Communication Theory & Research: An EJC Anthology (pp. 103-117). London:
SAGE.
Brants, K, & Neijens, P. (1998). The Infotainment of Politics. Political Communication, 15 (2),
149-164.
Brenan, M. (September 2019). Americans Trust in Mass Media Edges Down to 41%. Gallup
Polling: https://news.gallup.com/poll/267047/americans-trust-mass-media-edges-
down.aspx
Brown, R. D. (1997). The strength of a people: The idea of an informed citizenry in America, 1650-1870. Univ of North Carolina Press.
Browning, N., & Sweetser, K. D. (2014). The Let Down Effect: Satisfaction, Motivation, and
Credibility Assessments of Political Infotainment. American Behavioral Scientist, 58(6),
810-826.
Buhl, F., Günther, E., & Quandt, T. (2019). Bad News Travels Fastest: A Computational
Approach to Predictors of Immediacy in Digital Journalism Ecosystems. Digital
Journalism, 7(7), 910-931.
Burgers, C., & de Graaf, A. (2013). Language intensity as a sensationalistic news feature: The
influence of style on sensationalism perceptions and effects. Communications: The
European Journal of Communication Research, 38(2), 167-188.
Burroughs, B. (2019). House of Netflix: Streaming media and digital lore. Popular Communication, 17(1), 1-17.
Cacciatore, M. A., Scheufele, D. A., & Iyengar, S. (2016). The End of Framing as we Know it …
and the Future of Media Effects. Mass Communication and Society, 19(1), 7–23.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116-131.
Cammaerts, B., & Couldry, N. (2016). Digital journalism as practice. The SAGE handbook of
digital journalism, 326-340.
Cantor, J. R. (1976). Humor on television: A content analysis. Journal of Broadcasting & Electronic Media, 20(4), 501-510.
Cantril, H., & Allport, G. W. (1935). The psychology of radio. Oxford, England: Harper.
Cappella, J. & Jamieson, K. (1997) Spiral of cynicism. The press and the public good. New
York: Oxford University Press.
Cardo, V. (2006). Entertaining Politics -- New Political Television and Civic Culture. European Journal Of Communication, 21(3), 414-416.
Carney, N. (2016). All lives matter, but so does race: Black lives matter and the evolving role of
social media. Humanity & Society, 40(2), 180-199.
Carpenter, S. (2007). U.S. Elite and Non-Elite Newspapers’ Portrayal of the Iraq War: A
Comparison of Frames and Source Use. Journalism and Mass Communication
Quarterly, 84(4), 761-776.
Carpentier, N. (2011). The concept of participation. If they have access and interact, do they
really participate? In N. Carpentier & P. Dahlgren (Eds.). Interrogating audiences:
theoretical horizons of participation. Communication Management Quarterly, 21, 13-36.
Carpentier, N. & Dahlgren, P. (2011). Introduction: Interrogating audiences – theoretical
horizons of participation. In N. Carpentier & P. Dahlgren (Eds.). Interrogating audiences:
theoretical horizons of participation. Communication Management Quarterly, 21, 7-12.
Casella, P. A. (2013). Breaking news or broken news? Reporters and news directors clash on
“black hole” live shots. Journalism Practice, 7(3), 362-376.
Castells, M. (2008). The New Public Sphere: Global Civil Society, Communication Networks
and Global Governance, in Annals of the American Academy of Political and Social
Science, 616 (1), 78-93.
Cavazza, N., & Guidetti, M. (2014). Swearing in Political Discourse: Why Vulgarity Works.
Journal of Language & Social Psychology, 33(5), 537-547.
Chadha, M., Avila, A., & Gil de Zúñiga, H. (2012). Listening In: Building a Profile of Podcast
Users and Analyzing Their Political Participation. Journal of Information Technology & Politics, 9(4), 388-401.
Chadwick, A. (2013). The Hybrid Media System. Politics and Power. Oxford University Press.
Chadwick, A. (2017). The Hybrid Media System. Politics and Power. Second Edition. Oxford
University Press.
Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do tabloids poison the well of social
media? Explaining democratically dysfunctional news sharing. New Media & Society, 20
(11), 4255-4274.
Chang, M., & Gruner, C. R. (1981). Audience reaction to self-disparaging humor. Southern Communication Journal, 46, 419-426.
Park, C. S. (2017). Citizen news podcasts and engaging journalism: The formation of a
counter-public sphere in South Korea. Pacific Journalism Review, 23(1), 245-262.
Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review Political Science, 10,
103-126.
Christians, C., Glasser, T., McQuail, D., Nordenstreng, K., & White, R. (2009). Normative
Theories of the Media: Journalism in Democratic Societies. University of Illinois Press.
Christensen, L. H. (2008). “Stored Before an Online Audience”: When Ordinary People Become
Media Personas. Conference Papers -- International Communication Association, 1.
Cobb, R., Ross, J. K., & Ross, M. H. (1976). Agenda building as a comparative political process.
The American political science review, 70(1), 126-138.
Cobb, M. D., & Kuklinski, J. H. (1997). Changing minds: Political arguments and political
persuasion. American Journal of Political Science, 88-121.
Cohen, J. E. (2017) Law for the Platform Economy. UC Davis Law Review.
Coleman, S. (2012). Believing the news: From sinking trust to atrophied efficacy. European Journal of Communication, 27(1), 35–45.
Colorado State University. “Disadvantages of Content Analysis.” Accessed November 25, 2018:
https://writing.colostate.edu/guides/page.cfm?pageid=1319&guideid=61
Culp, S. (1995). Objectivity in Experimental Inquiry: Breaking Data-Technique Circles.
Philosophy of Science, 62 (3). 438-458.
Cunningham, S., & Craig, D. (2016). Online Entertainment: A New Wave of Media
Globalization? International Journal of Communication, 10, 5409-5425.
Currah, A. (2009). What's happened to our news? Challenges Series. Oxford: Oxford
University's Reuters Institute for the Study of Journalism.
D’Amato, E. (March 2018). Fake News May Be a Problem, But Soft News is a Crisis. Medium:
https://medium.com/@erikdamato/fake-news-may-be-a-problem-but-soft-news-is-a-
crisis-f3ba434811b1
Davis, J. L., Love, T. P., & Killen, G. (2018). Seriously funny: The political work of humor on
social media. New Media & Society, 20(10), 3898-3916.
De Vreese, C. H. (2005). News framing: Theory and typology. Information Design Journal & Document Design, 13(1).
Delli Carpini, M. X., & Williams, B. A. (2001). Let us infotain you: Politics in the new media
age. In W. L. Bennett & R. M. Entman (Eds.), Mediated politics: Communication in the future of democracy (pp. 160-181). Cambridge, UK; New York: Cambridge University
Press. Retrieved from http://repository.upenn.edu/asc_papers/14
Delli Carpini, M. X. and Keeter, S. (1993). Measuring Political Knowledge: Putting First Things
First. American Journal of Political Science, 37 (4), 1179-1206.
DeNardis, L. (2019). The Social-Media Challenge. In M. Graham & W. H. Dutton (Eds.), Society
and the Internet: How Networks of Information and Communication are Changing Our Lives (pp. 389-402). Oxford: Oxford University Press.
Denscombe, M. (2008). Communities of practice: A research paradigm for the mixed methods
approach. Journal of Mixed Methods Research, 2(3), 270-283.
Dermody, J., Hanmer-Lloyd, S., & Scullion, R. (2010). Young people and voting behaviour:
alienated youth and (or) an interested and critical citizenry?. European Journal of Marketing, 44(3/4), 421-435.
Deuze, M. (2004). What is multimedia journalism?. Journalism Studies, 5(2), 139-152.
Deuze, M., & Witschge, T. (2018). Beyond Journalism: Theorizing the transformation of
journalism. Journalism, 19 (2), 165-181.
Dickinson, T. (2017, January 13). Meet the Leaders of the Trump Resistance. Rolling Stone.
http://www.rollingstone.com/politics/features/meet-the-leaders-of-the-trump-resistance-
w460844
Dixson, A. D. (2018). “What’s Going On?”: A Critical Race Theory Perspective on Black Lives
Matter and Activism in Education. Urban Education, 53(2), 231-247.
Djerf-Pierre, M., & Shehata, A. (2017). Still an agenda setter: Traditional news media and public
opinion during the transition from low to high choice media environments. Journal of Communication, 67(5), 733-757.
Deuze, M. (2008). The changing context of news work: Liquid journalism for a monitorial
citizenry. International journal of Communication, 2 (18), 848-865.
Draznin, H. (September 2018). “How the Skimm Founders are Inspiring Millennials to Get Out
and Vote.” CNN Money: https://money.cnn.com/2018/09/13/news/companies/boss-files-
the-skimm-millennials-vote/index.html
Dvir-Gvirsman, S., Tsfati, Y., & Menchen-Trevino, E. (2016). The extent and nature of
ideological selective exposure online: Combining survey responses with actual web log
data from the 2013 Israeli Elections. new media & society, 18(5), 857-877.
Earl, J. and Kimport K. (2011). Where Have We Been and Where are We Headed?. In Digitally Enabled Social Change: Activism in the Internet Age, MIT Press.
Edgerly, S., Vraga, E. K., Bode, L., Thorson, K., & Thorson, E. (2017). New Media, New
Relationship to Participation? A Closer Look at Youth News Repertoires and Political
Participation. Journalism & Mass Communication Quarterly, 95(1), 192–212.
Edison Research. (April, 2019). ‘The Podcast Consumer 2019.’
https://www.edisonresearch.com/the-podcast-consumer-2019/
Ellingsen, S. (2014). Seismic Shifts: Platforms, Content Creators, and Spreadable Media. Media International Australia, 150, 106-113.
Ellison, S. (2014). God and Man at a Southern Appalachian Community College: Cognitive
Dissonance and the Cultural Logics of Conservative News Talk Radio Programming.
Review Of Education, Pedagogy & Cultural Studies, 36(2), 90-108.
Eltantawy, N., & Wiest, J. B. (2011). The Arab spring, Social media in the Egyptian revolution:
reconsidering resource mobilization theory. International Journal of Communication, 5
(18), 1207-1224.
Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of
communication, 43(4), 51-58.
European Commission Joint Research Centre. (2018, April 27). The digital transformation of
news media and the rise of online disinformation. ScienceDaily. Retrieved April 20, 2019
from www.sciencedaily.com/releases/2018/04/180427144724.htm
Euritt, A. (2019). Public circulation in the NPR Politics Podcast. Popular Communication, 17(4),
348–359.
Evans, A. (2020). ‘Rocking Our Priors’: Fun, Enthusiastic, Rigorous, and Gloriously Diverse.
PS: Political Science & Politics, 53(2), 320-322.
Faccoro, L. B. & DeFleur, M. L. (1993). A cross-cultural experiment on how well audiences
remember news stories from newspaper, computer, television, and radio sources.
Journalism Quarterly, 70, 585–601.
Feldman, L. (2011) The Opinion Factor: The Effects of Opinionated News on Information
Processing and Attitude Change, Political Communication, 28(2), 163-181.
Fields, E. E. (1988). Qualitative content analysis of television news: Systematic techniques.
Qualitative Sociology, 11(3), 183-193.
Figueiras, R. (2019). Punditry as a reward system: audience construction and the logics of the
punditry sphere. Critical Studies in Media Communication, 36(2), 171-183.
Finley, A. J., & Penningroth, S. L. (2015). Online versus in-lab: pros and cons of an online
prospective memory experiment. In A. M. Columbus (Ed.), Advances in Psychology Research, vol. 113 (pp. 135-162). Hauppauge, NY: Nova Science Publishers, Inc.
Fischer, V. K. (2019). Unaided and Aided Brand Recall in Podcast Advertising. An Experiment
in the Role of Source Credibility's Impact on Brand Message Efficacy. Dissertation,
Texas State University.
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter Bubbles, Echo Chambers, and Online News
Consumption. Public Opinion Quarterly, 80, 298-320.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the
question of digital communication platform governance. Journal of Digital Media &
Policy, 10(1), 33-50.
Fortunato, D., Stevenson, R. T., & Vonnahme, G. (2016). Context and political knowledge:
Explaining cross-national variation in partisan left-right knowledge. The Journal of
Politics, 78(4), 1211-1228.
Fraser, N. (1990). Rethinking the public sphere: A contribution to the critique of actually
existing democracy. Social Text, 25/26, 56-80.
Freiss, S. (March 2017). Why are #PodcastsSoWhite? Columbia Journalism Review.
https://www.cjr.org/the_feature/podcasts-diversity.php
Fuchs, C. (2014). Social media and the public sphere. Triple C: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 12(1), 57-
101.
Funk, M. (2017). Decoding the Podaissance: Identifying Community Journalism Practices in
Newsroom and Advocational Podcasts. ISOJ Journal, 7(1), 67–88.
Geiger, A. W. (September 11, 2019). Key Findings About the Online News Landscape in
America. Pew Research Center. https://www.pewresearch.org/fact-tank/2019/09/11/key-
findings-about-the-online-news-landscape-in-america/
Giomelakis, D., & Veglis, A. (2015). Employing search engine optimization techniques in online
news articles. Studies in media and communication, 3(1), 22-33.
Gerber, A. S., Huber, G. A., Doherty, D., Dowling, C. M., & Ha, S. E. (2010). Personality and
political attitudes: Relationships across issue domains and political contexts. American
Political Science Review, 104(1), 111-133.
Gil de Zúñiga, H., & Hinsley, A. (2013). The Press Versus the Public. Journalism Studies, 14(6),
926-942.
Gil de Zúñiga, H., Weeks, B., & Ardèvol-Abreu, A. (2017). Effects of News-Finds-Me Perception
in Communication: Social Media Use Implications for News Seeking and Learning
About Politics. Journal of Computer-Mediated Communication, 22(3), 105–123.
Gitlin, T. (1980). The whole world is watching: Mass media in the making & unmaking of the
new left. Univ of California Press.
Gitlin, T. (2012). Occupy nation: The roots, the spirit, and the promise of Occupy Wall Street (p.
64). New York: itbooks.
Goldstein, J. H. (1976). Theoretical notes on humor. Journal of Communication, 26(3),104-112.
Goode, L. (2009). Social news, Citizen Journalism, and Democracy. New Media and Society, 11
(8), 1287-1305.
Grabe, M. E., Zhou, S., Lang, A., & Bolls, P. D. (2000). Packaging television news: The effects
of tabloid on information processing and evaluative responses. Journal of broadcasting &
Electronic media, 44(4), 581-598.
Graham, E. E., Papa, M. J., & Brooks, G. P. (1992). Functions of humor in conversation:
Conceptualization and measurement. Western Journal of Communication, 56, 161-183.
Graham, R. (January 2019). This Will Be A Weird Year for the March for Life. Slate. (Accessed
May 11, 2020). https://slate.com/human-interest/2019/01/march-for-life-ben-shapiro.html
Graves, L. (2017). The Monitorial Citizen in the “Democratic Recession.” Journalism Studies,
18(10), 1039–1250.
Grimmelmann, J. (2018). The Platform is the Message. Georgetown Law Technology Review.
http://james.grimmelmann.net/files/articles/platform-message.pdf
Gruner, C. R. (1967). Effect of humor on speaker ethos and audience information gain. Journal of Communication, 17(3), 228-233.
Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence from
the consumption of fake news during the 2016 US presidential campaign. European Research Council, 9, 1-49.
Habermas, J. (1964). The public sphere: An encyclopedia article. New German Critique, 3
(Autumn, 1974), 49-55.
Hagar, N., & Diakopoulos, N. (2019). Optimizing content with A/B headline testing: Changing
newsroom practices. Media and Communication, 7(1), 117-127.
Hall, A., & Cappella, J. N. (2002). The Impact of Political Talk Radio Exposure on Attributions
About the Outcome of the 1996 U.S. Presidential Election. Journal of Communication,
52(2), 332.
Hameleers, M., & van der Meer, T. G. (2020). Misinformation and polarization in a high-choice
media environment: How effective are political fact-checkers?. Communication Research, 47(2), 227-250.
Hamilton, J. (2004). All the news that's fit to sell: How the market transforms information into
news. Princeton University Press.
Hanitzsch, T. (2007). Deconstructing journalism culture: Toward a universal theory.
Communication Theory, 17(4), 367-385.
Harcup, T. (2016). Alternative journalism as monitorial citizenship? A case study of a local news
blog. Digital Journalism, 4(5), 639-657.
Harris, A. (2008). Young women, late modern politics, and the participatory possibilities of
online cultures. Journal of Youth Studies, 11(5), 481-495.
Harrigan, N., Achananuparp, P., & Lim, E. P. (2012). Influentials, novelty, and social contagion:
The viral power of average friends, close communities, and old news. Social Networks,
34(4), 470-480.
Hatemi, P. K., & Verhulst, B. (2015). Political attitudes develop independently of personality
traits. PloS one, 10(3),
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118106#sec001
Henderson, K. (2019). The seriousness of storytelling: What constraints to professional news
routines reveal about the state of journalistic autonomy in local television newsrooms.
Journalism, 1464884919854651.
Hermida, A. (2009). The blogging BBC: Journalism blogs at “the world's most trusted news
organisation”. Journalism Practice, 3(3), 268-284.
Hitlin, P. (July, 2016). Research in the Crowdsourcing Age, a Case Study: How scholars,
companies, and workers are using Mechanical Turk as a ‘gig economy’ platform, for
tasks computers can’t handle. Pew Research:
https://www.pewresearch.org/internet/2016/07/11/turkers-in-this-canvassing-young-well-
educated-and-frequent-users/
Hmielowski, J. D., Beam, M. A., & Hutchens, M. J. (2017). Bridging the partisan divide?
Exploring ambivalence and information seeking over time in the 2012 US presidential
election. Mass Communication and Society, 20(3), 336–357.
Hoffman, L. H., & Young, D. J. (2011). Satire, Punch Lines, and the Nightly News: Untangling
Media Effects on Political Participation. Communication Research Reports, 28(2), 159-
168.
Holbert, R. L. (2005). A Typology for the Study of Entertainment Television and Politics.
American Behavioral Scientist, 49(3).
Holton, A. E., Coddington, M., Lewis, S. C., & Gil de Zúñiga, H. (2015). Reciprocity and the
News: The Role of Personal and Social Media Reciprocity in News Creation and
Consumption. International Journal of Communication, 9, 2526–2547.
Hooker, J. (2016). Black Lives Matter and the paradoxes of US Black politics: From democratic
sacrifice to democratic repair. Political Theory, 44(4), 448-469.
Horning, M. A. (2017). Interacting with news: Exploring the effects of modality and perceived
responsiveness and control on news source credibility and enjoyment among second
screen viewers. Computers in Human Behavior, 73. 273-283
Horten, G. (2003). Radio goes to war: The cultural politics of propaganda during World War II. Univ of California Press.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven,
CT: Yale University Press.
Howe, L. C., & Krosnick, J. A. (2017). Attitude strength. Annual review of psychology, 68, 327-
351.
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis.
Qualitative Health Research, 15(9), 1277-1288.
Humprecht, E., & Esser, F. (2018). Mapping digital journalism: Comparing 48 news websites
from six countries. Journalism, 19(4), 500-518.
Hutchens, M. J., Hmielowski, J. D., & Beam, M. A. (2019). Reinforcing spirals of political
discussion and affective polarization. Communication Monographs, 86(3), 357-376.
Ince, J., Rojas, F., & Davis, C. A. (2017). The social media response to Black Lives Matter: how
Twitter users interact with Black Lives Matter through hashtag use. Ethnic and Racial
Studies, 40(11), 1814-1830.
Innocenti, B., & Miller, E. (2016). The Persuasive Force of Political Humor. Journal of Communication, 66(3), 366–385.
Ingram, M. (September 2018). Most Americans Say They Have Lost Trust in the Media.
Columbia Journalism Review: https://www.cjr.org/the_media_today/trust-in-media-
down.php
Iyengar, S. (1987). Television news and citizens' explanations of national affairs. American
Political Science Review, 81(3), 815-831.
Iyengar, S., & Krupenkin, M. (2018). The strengthening of partisan affect. Political Psychology,
39, 201-218.
Iyengar S. and Westwood S. J. (2015) Fear and loathing across party lines: New evidence on
group polarization. American Journal of Political Science, 59, 690–707.
Jacobs, L., & van der Linden, M. (2018). Tone matters: Effects of exposure to positive and
negative tone of television news stories on anti-immigrant attitudes and carry-over effects
to uninvolved immigrant groups. International Journal of Public Opinion Research, 30(2), 211-232.
Jamieson, K. H., & Cappella, J. N. (2008). Echo chamber: Rush Limbaugh and the conservative
media establishment. Oxford University Press.
Jenkins, H. (2006). Convergence culture: where old and new media collide. New York: New
York University Press.
Jenkins, H. (2013). Textual poachers. Television fans and participatory culture (20th anniversary
edition). Oxon: Routledge.
Jennings, F. J., Bramlett, J. C., & Warner, B. R. (2019). Comedic Cognition: The Impact of
Elaboration on Political Comedy Effects. Western Journal of Communication, 83(3),
365–382.
Judd, C. M., & Krosnick, J. A. (1982). Attitude centrality, organization, and measurement.
Journal of personality and social psychology, 42(3), 436.
Jungherr, A. (2014). The logic of political coverage on Twitter: Temporal dynamics and content.
Journal of Communication, 64(2), 239-259.
Kahn, K. F., & Kenney, P. J. (2002). The slant of the news: How editorial endorsements
influence campaign coverage and citizens' views of candidates. American political
science review, 96(2), 381-394.
Kaid, L. L., & Wadsworth, A. J. (1989). Content analysis. In P. Emmert & L. L. Barker (Eds.),
Measurement of Communication Behavior (pp. 197-215). White Plains, NY: Longman.
Kaplan, D. (2013). Programming and editing as alternative logics of music radio production.
International Journal of Communication, 7, 21.
Karpf, D. (2016). Analytic activism : digital listening and the new political strategy. Oxford
University Press: Oxford.
Kazee, T. A. (1981). Television exposure and attitude change: The impact of political interest.
Public Opinion Quarterly, 45(4), 507-518.
Keith, M. C. (Ed.). (2008). Radio cultures: The sound medium in American life. Peter Lang.
Kellow, C. L., & Steeves, H. L. (1998). The role of radio in the Rwandan genocide. Journal of
Communication, 48(3), 107-128.
Kennedy, M. J., Hart, J. E., & Kellems, R. O. (2011). Using enhanced podcasts to augment
limited instructional time in teacher preparation. Teacher Education and Special
Education, 34(2), 87-105.
King, L. (2015). Innovators in digital news. IB Tauris.
Kirk, R. E. (2007). Experimental design. The Blackwell Encyclopedia of Sociology.
Klaidman, S. (1987). The Virtuous Journalist. Oxford University Press.
Klich, T. (February 2018). “The Betches Founders Rebrand: Building a Media Venture Beyond
Instagram.” Forbes: https://www.forbes.com/sites/tanyaklich/2018/02/01/the-betches-
founders-rebrand-building-a-media-venture-beyond-instagram/#6b3cc75693a4
Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online
Speech. The Harvard Law Review. Vol. 131.
Knobloch, S., Patzig, G., Mende, A. M., & Hastall, M. (2004). Affective news: Effects of
discourse structure in narratives on suspense, curiosity, and enjoyment while reading
news and novels. Communication Research, 31(3), 259-287.
Kowalewski, J. (2012). Does Humor Matter? An Analysis of How Hard News versus Comedy
News Impact the Agenda-Setting Effects. Southwestern Mass Communication Journal, 27(3), 1–29.
Kreiss, D. and Ananny, M. (2013). “Responsibility of the State: rethinking the case and
possibilities for public support of journalism,” First Monday 18(4).
Kreiss, D., & McGregor, S. C. (2019). The “Arbiters of What Our Voters See”: Facebook and
Google’s Struggle with Policy, Process, and Enforcement around Political
Advertising. Political Communication, 36(4), 499-522.
Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., & Carnot, C. G. (1993). Attitude
strength: One construct or many related constructs? Journal of Personality and Social
Psychology, 65(6), 1132.
Lambeth, E. B. (1992). Committed Journalism: An Ethic for the Profession. Bloomington:
Indiana University Press.
Lambeth, E. B. (1995) Global Media Philosophies. In Global Journalism: Survey of
International Communication, 3rd edition ed. John Merrill, 3-18. New York: Longman.
Lang, A. (Ed.). (2014). Measuring psychological responses to media messages. Routledge.
Lazarsfeld, P. F. and Kendall, P. (1948). Radio Listening in America: The People Look at Radio—Again (New York: Prentice-Hall).
Lecheler, S., Keer, M., Schuck, A. R. T., & Hänggli, R. (2015). The Effects of Repetitive News
Framing on Political Opinions over Time. Communication Monographs, 82(3), 339–358.
Lee, C. S., & Ma, L. (2012). News sharing in social media: The effect of gratifications and prior
experience. Computers in human behavior, 28(2), 331-339.
Lee, T.-T. (2018). Virtual Theme Collection: Trust and Credibility in News Media. Journalism
& Mass Communication Quarterly, 95(1), 23–27.
Lee, T. T., & Wei, L. (2008). How newspaper readership affects political participation.
Newspaper Research Journal, 29(3), 8-23.
Levendusky, M., & Malhotra, N. (2016). Does media coverage of partisan polarization affect
political attitudes?. Political Communication, 33(2), 283-301.
Lewis, S.C. and Molyneux, L. (2018). A Decade of Research on Social Media and Journalism:
Assumptions, Blind Spots, and a Way Forward. Media and Communication, 6 (4) 11.
Lilleker, D. G., & Koc-Michalska, K. (2017). What Drives Political Participation? Motivations
and Mobilization in a Digital Age. Political Communication, 34(1), 21–43.
Lindgren, M. (2016). Personal narrative journalism and podcasting. Radio Journal: International Studies In Broadcast & Audio Media, 14(1), 23-41.
Lipsky, M. (June 2019). Radio’s Top Talkers for 2019. The Radio Agency:
https://www.radiodirect.com/radios-top-talkers-for-2019/
Llinares, D., Fox, N., & Berry, R. (Eds.). (2018). Podcasting: New Aural Cultures and Digital Media. Switzerland: Palgrave Macmillan.
Loader, B. D., Vromen, A., & Xenos, M. A. (2016). Performing for the young networked
citizen? Celebrity politics, social networking and the political engagement of young
people. Media, Culture & Society, 38(3), 400-419.
Lombard, M., Snyder‐Duch, J., & Bracken, C. C. (2002). Content analysis in mass
communication: Assessment and reporting of intercoder reliability. Human communication research, 28(4), 587-604.
MacDougall, R. C. (2011). Podcasting and Political Life. American Behavioral Scientist, 55(6),
pp. 714–732.
Malka, A., & Lelkes, Y. (2010). More than ideology: Conservative–liberal identity and
receptivity to political cues. Social Justice Research, 23(2-3), 156-188.
Maras, S. (2013). Objectivity in Journalism. Polity Press, Cambridge.
Markman, K. M. (2015). Considerations—Reflections and Future Research. Everything Old is
New Again: Podcasting as Radio's Revival. Journal of Radio & Audio Media, 22(2),
240–243.
Martine, T., & Maeyer, J. D. (2019). Networks of Reference: Rethinking Objectivity Theory in
Journalism. Communication Theory, 29(1), 1–23.
Mason, L. (2015). “I disrespectfully agree”: The differential effects of partisan sorting on social
and issue polarization. American Journal of Political Science, 59(1), 128–145.
Mason, L. (2016). A cross-cutting calm: How social sorting drives affective polarization. Public
Opinion Quarterly, 80(S1), 351–377.
Mathson, S. (May 2019). Current Top Podcasts in US Updated for 2020. Plink HQ. Accessed
May 2020. https://plinkhq.com/blog/toppodcasts/2019/05/08/top-48-podcasts-us-itunes-
charts-2019/
Mayo, D., & Spanos, A. (Eds.). (2009). Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge:
Cambridge University Press.
Mayring, P. (2004). Qualitative content analysis. A companion to qualitative research, 1, 159-
176.
McChesney, R. W. (2013). Digital Disconnect: How Capitalism is Turning the Internet Against
Democracy. New York: The New Press.
McClung, S., & Johnson, K. (2010). Examining the Motives of Podcast Users. Journal of Radio & Audio Media, 17(1), 82–95.
McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public
Opinion Quarterly, 36(2), 176-187.
McCombs, M. E., & Shaw, D. L. (1993). The evolution of agenda-setting research: Twenty-five years in the marketplace of ideas. Journal of Communication, 43(2), 58-67.
McCombs, M. E., Shaw, D. L., & Weaver, D. H. (2014). New Directions in Agenda Setting
Theory and Research. Mass Communication & Society, 17(6), 781–802.
McCoy, J., Rahman, T., & Somer, M. (2018). Polarization and the global crisis of democracy:
Common patterns, dynamics, and pernicious consequences for democratic polities.
American Behavioral Scientist, 62(1), 16-42.
McHugh, S. (2016). How podcasting is changing the audio storytelling genre. Radio Journal:
International Studies in Broadcast & Audio Media, 14(1), pp. 65–82.
McNair, B. (2017). After Objectivity? Schudson’s sociology of journalism in the era of post-
factuality. Journalism Studies, 18(10), 1318–1333.
Melinescu, N. (June 2015). How Much is Infotainment the New News? PCTS Proceedings
(Professional Communication & Translation Studies), 8, 3-10.
Merle, P., & Craig, C. (2012). Experiment Shows Higher Information Recall For Soft Rather
than Hard Business News. Newspaper Research Journal, 33(3), 101–109.
Merrill, J. C. (1974). The Imperative of Freedom: A philosophy of journalistic autonomy. New
York: Hastings House.
Meyer, J. C. (2000). Humor as a Double-Edged Sword: Four Functions of Humor in
Communication. Communication Theory (1050-3293), 10(3), 310–331.
Miczo, N. (2019). A review of communication approaches to the study of humor. Annals of the International Communication Association, 43(4), 257–272.
Mihailidis, P. (2014). The civic-social media disconnect: exploring perceptions of social media
for engagement in the daily life of college students. Information, Communication & Society, 17(9), 1059-1071.
Mitrokostas, S. (October, 2019). 10 Popular Podcasts That Inspired TV Shows and Specials.
Insider. https://www.insider.com/big-podcasts-that-became-tv-shows-2019-10
Montanaro D. (January 2018). “Here’s Just How Little Confidence Americans Have in Political
Institutions.” NPR, All Things Considered:
https://www.npr.org/2018/01/17/578422668/heres-just-how-little-confidence-americans-
have-in-political-institutions
Mort, S. (2012). Tailoring Dissent on the Airwaves: The Role of Conservative Talk Radio in the
Right-Wing Resurgence of 2010. New Political Science, 34(4), 485-505.
Moy, P., Xenos, M. A., & Hess, V. K. (2005). Communication and Citizenship: Mapping the
Political Effects of Infotainment. Mass Communication & Society, 8(2), 111-131.
Mullinix, K. J., Leeper, T. J., Druckman, J. N., & Freese, J. (2015). The Generalizability of
Survey Experiments. Journal of Experimental Political Science, 2(2), 109–138.
Mutz, D. C., Sniderman, P. M., & Brody, R. A. (Eds.). (1996). Political persuasion and attitude change. University of Michigan Press.
Nelson, J. L. (2020). The enduring popularity of legacy journalism: An analysis of online
audience data. Media and Communication, 8(2), 40-50.
Nerone, J. C. (1995). Last Rights: Revisiting Four Theories of the Press. Urbana: University of
Illinois Press.
Neuendorf, K. A. (2016). The content analysis guidebook. Sage.
Newman, N., & Gallo, N. (2019). News podcasts and the opportunities for publishers. Accessed
April 1, 2019: http://www.digitalnewsreport.org/publications/2019/news-podcasts-
opportunities-publishers/
Newton, K. (1999). Mass media effects: mobilization or media malaise?. British Journal of
Political Science, 29(4), 577-599.
Ngai, E. W., Tao, S. S., & Moon, K. K. (2015). Social media research: Theories, constructs, and
conceptual frameworks. International Journal of Information Management, 35(1), 33-44.
Norris, P. (1999). On Message: Communicating the Campaign. London: SAGE.
North, L. (2016). The Gender of “soft” and “hard” news: Female journalists' views on gendered
story allocations. Journalism Studies, 17(3), 356-373.
Nosek, B. A., Ebersole, C. R., DeHaven, A.C., Mellor, D. T. (March 2018). The Preregistration
Revolution. Proceedings of the National Academy of Sciences, 115 (11) 2600-2606; DOI:
10.1073/pnas.1708274114
Nyhan, B., and Reifler, J. (2010). “When Corrections Fail: The Persistence of Political
Misperceptions.” Political Behavior 32 (2), 303–30.
Onwuegbuzie, A. J., Slate, J. R., Leech, N. L., & Collins, K. M. (2007). Conducting mixed
analyses: A general typology. International Journal of Multiple Research Approaches,
1(1), 4–17.
Otto, L., Glogger, I., & Boukes, M. (2017). The Softening of Journalistic Political
Communication: A Comprehensive Framework Model of Sensationalism, Soft News,
Infotainment, and Tabloidization. Communication Theory, 27(2), 136-155.
Pan, Z., & Kosicki, G. M. (1993). Framing analysis: An approach to news discourse. Political
Communication, 10(1), 55-75.
Panek, E. (2016). High-Choice Revisited: An Experimental Analysis of the Dynamics of News
Selection Behavior in High-Choice Media Environments. Journalism & Mass
Communication Quarterly, 93(4), 836–856.
Pariser, E. (2011). The Filter Bubble: what the Internet is hiding from you. New York: Penguin
Press.
Patterson, T. E. (2000). The United States: News in a free-market society. Democracy and the
media: A comparative perspective, 241-65.
Patterson, T. E. (July, 2016). News coverage of the 2016 presidential primaries: Horse race
reporting has consequences. Harvard Kennedy School Shorenstein Center Report.
https://shorensteincenter.org/news-coverage-2016-presidential-primaries/
Pauwels, K., & Weiss, A. (2008). Moving from Free to Fee: How Online Firms Market to
Change Their Business Model Successfully. Journal of Marketing, 72(3), 14–31.
Park, C. S. (2017). Citizen News Podcasts and Engaging Journalism: The Formation of a
Counter Public Sphere in South Korea. Pacific Journalism Review, 23(1).
PBS. “Democracy on Deadline: Who Owns the Media?” Independent Lens. Accessed April 14,
2019. http://www.pbs.org/independentlens/democracyondeadline/mediaownership.html
Pearson, G. D. H., & Knobloch-Westerwick, S. (2018). Perusing pages and skimming screens:
Exploring differing patterns of selective exposure to hard news and professional sources
in online and print news. New Media & Society, 20(10), 3580–3596.
Pedersen, R.T. (2012). The game frame and political efficacy: Beyond the spiral of cynicism.
European Journal of Communication, 27(3), 225-240.
Peiser, J. March 6, 2019. “Podcast Growth is Popping in the US, Survey Shows.” The New York
Times: https://www.nytimes.com/2019/03/06/business/media/podcast-growth.html
Perks, L. G., Turner, J. S., & Tollison, A. C. (2019). Podcast Uses and Gratifications Scale
Development. Journal of Broadcasting & Electronic Media, 63(4), 617–634.
Perloff, R. (2013). The Dynamics of Political Communication. New York, NY: Routledge.
Perloff, R.M. (2013). The Dynamics of Persuasion: Communication and Attitudes in the 21st Century. New York: Routledge
Perrin, A. & Jiang, J. (March 2018). About a quarter of US adults say they are ‘almost
constantly’ online. Pew Research Center: http://www.pewresearch.org/fact-
tank/2018/03/14/about-a-quarter-of-americans-report-going-online-almost-constantly/
Peters, J. (2017). The Sovereigns of Cyberspace and State Action: The First Amendment’s
Application – or Lack Thereof – to Third-Party Platforms. Berkeley Technology Law Journal, 32(2), 989-1026.
Petty, R., & Krosnick, J. (Eds.). (1994). Attitude strength: Antecedents and consequences.
Hillsdale, NJ: Erlbaum.
Plasser, F. (2005). From hard to soft news standards? How political journalists in different media
systems evaluate the shifting quality of news. Harvard International Journal of
Press/Politics, 10(2), 47-68.
Political Gabfest. (Accessed February 2020). https://slate.com/podcasts/political-gabfest
Popkin, S. L. (1991). The reasoning voter: Communication and persuasion in presidential campaigns. Chicago: University of Chicago Press.
Postman, N. (1986). Amusing Ourselves to Death: Public Discourse in the Age of Show
Business. London: Methuen.
Potter, D. (2006). iPod, you pod, we all pod. American Journalism Review, 28(1), 64.
Power of Pop: Media Analysis of Immigrant Representation in Popular TV Shows. (2017). The
Opportunity Agenda. Accessed December, 2018:
https://www.opportunityagenda.org/explore/resources-publications/power-pop
Price, V., & Zaller, J. (1993). Who gets the news? Alternative measures of news reception and
their implications for research. Public Opinion Quarterly, 57(2), 133-164.
Prior, M. (2003). Any good news in soft news? The impact of soft news preference on political
knowledge. Political Communication, 20(2), 149-171.
Prior, M. (2005). News vs. entertainment: How increasing media choice widens gaps in political
knowledge and turnout. American Journal of Political Science, 49(3), 577-592.
Prior, M. (2007). “Is Partisan Bias in Perceptions of Objective Conditions Real? The Effect of an
Accuracy Incentive on the Stated Beliefs of Partisans.” In Annual Conference of the
Midwestern Political Science Association.
Questionnaire Design. Pew Research Center. Accessed: April 9, 2019:
https://www.pewresearch.org/methods/u-s-survey-research/questionnaire-design/
Rainie, L. and Perrin, A. (July 2019). Key Findings About Americans’ Declining Trust in
Government And Each Other. Pew Research: https://www.pewresearch.org/fact-
tank/2019/07/22/key-findings-about-americans-declining-trust-in-government-and-each-
other/
Ramsey, M. C., Knight, R. A., Knight, M. L., & Meyer, J. C. (2009). Humor, organizational
identification, and worker trust: An independent groups analysis of humor's
identification and differentiation functions. Journal of the Northwest Communication
Association, 38, 11-37.
Rauch, J. (2019). Comparing Progressive and Conservative Audiences for Alternative Media and
Their Attitudes Towards Journalism. Alternative Media Meets Mainstream Politics:
Activist Nation Rising, 19.
Reeves, A., McKee, M., & Stuckler, D. (2016). ‘It's The Sun Wot Won It’: Evidence of media
influence on political attitudes and voting from a UK quasi-natural experiment. Social
Science Research, 56, 44-57.
Raeijmaekers, D., & Maeseele, P. (2017). In objectivity we trust? Pluralism, consensus, and
ideology in journalism studies. Journalism, 18(6), 647–663.
Reilly, P., Veneti, A. and Lillecker, D. (June 2020). Violence against journalists is not new, but
attacks on those covering #BlackLivesMatter protests is a bad sign for US press freedom
London School of Economics US Centre. Accessed July, 2020.
https://blogs.lse.ac.uk/usappblog/2020/06/12/violence-against-journalists-is-not-new-but-
attacks-on-those-covering-blacklivesmatter-protests-is-a-bad-sign-for-us-press-freedom/
Reinemann, C., Stanyer, J., Scherr, S., & Legnante, G. (2012). Hard and soft news: A review of
concepts, operationalizations and key findings. Journalism, 13(2), 221-239.
Resnick, B. (October 2018). “9 Essential Lessons From Psychology to Understand the Trump
Era.” Vox: https://www.vox.com/science-and-health/2018/4/11/16897062/political-
psychology-trump-explain-studies-research-science-motivated-reasoning-bias-fake-news
Rhodes, N., Toole, J., & Arpan, L. M. (2016). Persuasion as reinforcement: Strengthening the
pro-environmental attitude-behavior relationship through ecotainment programming.
Media Psychology, 19(3), 455-478.
Rickford, R. (2016, January). Black lives matter: Toward a modern practice of mass struggle. In
New Labor Forum (Vol. 25, No. 1, pp. 34-42). Sage CA: Los Angeles, CA: SAGE
Publications.
Ridout, T. N., Franklin Fowler, E., Franz, M. M., & Goldstein, K. (2018). The long-term and
geographically constrained effects of campaign advertising on political polarization and
sorting. American Politics Research, 46(1), 3-25.
Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and
constructed week sampling in newspaper content analysis. Journalism quarterly, 70(1),
133-139.
Robinson, M. (1976). Public Affairs Television and the Growth of Political Malaise: The Case of
‘Selling the Pentagon.’ American Political Science Review, 70, 409-432.
Rosenblatt, B. (March, 2020). New Podcast Listeners are Coming From Radio, Not Music.
Forbes. https://www.forbes.com/sites/billrosenblatt/2020/03/29/new-podcast-listeners-
are-coming-from-radio-not-music/#33b04a2c6790
Saeed, N., & Yang, Y. (2008, January). Incorporating blogs, social bookmarks, and podcasts into
unit teaching. In Proceedings of the tenth conference on Australasian computing
education, 78, 113-118. Australian Computer Society, Inc.
Saldaña, J. (2009). The Coding Manual for Qualitative Researchers (2nd ed). Los Angeles:
SAGE.
Sanders, K. (2003). Ethics & Journalism. Sage Publications.
Sawyer, M. (May, 2020). It’s Boom Time For Podcasts – but will going mainstream kill the
magic? The Guardian. https://www.theguardian.com/tv-and-radio/2020/may/03/its-boom-
time-for-podcasts-but-will-going-mainstream-kill-the-magic
Scheufele, D. A. (1999). Framing as a theory of media effects. Journal of Communication, 49(1),
103–122.
Scheufele, D. A., & Nisbet, M. C. (2012). Online news and the demise of political debate. In C.
T. Salmon (Ed.), Communication yearbook (Vol. 36, pp. 45–53). Newbury Park, CA:
Sage.
Scheufele, D. A., & Tewksbury, D. (2006). Framing, agenda setting, and priming: The evolution
of three media effects models. Journal of Communication, 57(1), 9-20.
Schmitt, J. B., Debbelt, C. A., & Schneider, F. M. (2018). Too much information? Predictors of
information overload in the context of online news exposure. Information,
Communication & Society, 21(8), 1151-1167.
Schroeder, R. (2018). Towards a theory of digital media. Information, Communication & Society, 21(3), 323-339.
Schuck, A.R.T. (2017). Media malaise and political cynicism. In: The International
Encyclopedia of Media Effects, (eds.) Patrick Rössler, Cynthia Hoffner, & Liesbet von
Zoonen, Hoboken, NJ: John Wiley and Sons.
Schudson, M. (1999). The good citizen: a history of American civic life. Cambridge: Harvard
University Press.
Schudson, M. (2001). The objectivity norm in American journalism. Journalism, 2(2), 149-170.
Schudson, M. (2008). Why Democracies Need an Unlovable Press. Malden, MA: Polity Press.
Schudson, M. (2018). Why journalism still matters. John Wiley & Sons.
Schütz, A. (1946). The well-informed citizen: An essay on the social distribution of knowledge.
Social Research, 463-478.
Seawright, J., & Gerring, J. (2008). Case Selection Techniques in Case Study Research: A Menu
of Qualitative and Quantitative Options. Political Research Quarterly, 61(2), 294–308.
Serazio, M. (2018). Producing Popular Politics: The Infotainment Strategies of American
Campaign Consultants. Journal of Broadcasting & Electronic Media, 62(1), 131-146.
Sharma, R. (2008). Your Moment of Zen?: Exploring the Possibility of Political Enlightenment
via Infotainment. Ohio Communication Journal, 46, 95-108.
Sharon, T., & John, N. A. (2019). Imagining An Ideal Podcast Listener. Popular Communication, 17(4), 333-347.
Shearer, E. (December, 2019). Social Media Outpaces Print Newspaper in the US as a news
source. Pew Research Center. https://www.pewresearch.org/fact-tank/2018/12/10/social-
media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/
Sienkiewicz, M., & Jaramillo, D. L. (2019). Podcasting, the intimate self, and the public sphere.
Popular Communication, 17(4), 268–272.
Siebert, F., Peterson, T., & Schramm, W. (1956). Four Theories of the Press. Urbana: University
of Illinois Press.
Sherman, L. E., Payton, A. A., Hernandez, L. M., Greenfield, P. M., & Dapretto, M. (2016). The
Power of the Like in Adolescence: Effects of Peer Influence on Neural and Behavioral
Responses to Social Media. Psychological Science, 27(7), 1027–1035.
Shin, D., An, H., & Kim, J. H. (2016). How the second screens change the way people interact
and learn: the effects of second screen use on information processing. Interactive Learning Environments, 24(8), 2058-2079.
Shirky, C., (2008). Here Comes Everybody: The Power of Organizing Without Organizations.
New York: Penguin Press.
Shirky, C. (2011). The political power of social media: Technology, the public sphere, and
political change. Foreign affairs, 28-41.
Silverman, C., & Singer-Vine, J. (2016). “Most Americans who see fake news believe it, new
survey says.” BuzzFeed News.
Skoric, M. M., Sim, C., Han, T. J., & Fang, P. (2009). Podcasting and politics in Singapore: An
experimental study of medium effects. Journal of Contemporary Eastern Asia, 8(2), pp.
27–43.
Singer, J. (2006). The Socially Responsible Existentialist. Journalism Studies, 7(1), 2–18.
Slater, M. D. (2007). Reinforcing Spirals: The Mutual Influence of Media Selectivity and Media
Effects and Their Impact on Individual Behavior and Social Identity. Communication
Theory, 17(3), 281–303.
Smith, K. T., Blazovich, J. L., & Smith, L. M. (2015). Social media adoption by corporations:
An examination by platform, industry, size, and financial performance. Academy of
Marketing Studies Journal, 19(2), 127.
Social Media Fact Sheet. (2019). Pew Research Center.
https://www.pewresearch.org/internet/fact-sheet/social-media/
Sobieraj, S., & Berry, J. M. (2011). From Incivility to Outrage: Political Discourse in Blogs,
Talk Radio, and Cable News. Political Communication, 28(1), 19–41.
‘State of the Media: Audio Today in 2018.’ (April, 2018). Nielsen:
https://www.nielsen.com/us/en/insights/reports/2018/state-of-the-media--audio-today-
2018.html
Stephens, D. (2018). The Ides of Laughs: The Politicisation of American Late-Night Talk Shows
Over Time and Under Trump. Journal of Promotional Communications, 6(3).
Stockwell, S. (2004). Reconsidering the Fourth Estate: The functions of infotainment.
Proceedings, Australian Political Studies Association, University of Adelaide.
Street, J., Inthorn, S., & Scott, M. (2012). Playing at Politics? Popular Culture as Political
Engagement. Parliamentary Affairs, 65(2), 338-358.
Stringer, P. (2018). Finding a Place in the Journalistic Field: The pursuit of recognition and
legitimacy at BuzzFeed and Vice. Journalism Studies, 19(13), 1991–2000.
Stringer, P. (2020). Viral Media: Audience Engagement and Editorial Autonomy at BuzzFeed
and Vice. Westminster Papers in Communication and Culture, 15(1).
Strömbäck, J. (2017). Does public service TV and the intensity of the political information
environment matter?. Journalism Studies, 18(11), 1415-1432.
Sülflow, M., Schäfer, S., & Winter, S. (2019). Selective attention in the news feed: An eye-
tracking study on the perception and selection of political news posts on Facebook. New
Media & Society, 21(1), 168-190.
Tandoc, E. C. (2018). Five ways BuzzFeed is preserving (or transforming) the journalistic field.
Journalism, 19(2), 200–216.
Taniguchi, M. (2011). The Electoral Consequences of Candidate Appearances on Soft News
Programs. Political Communication, 28(1), 67-86.
Theocharis, Y., & Quintelier, E. (2016). Stimulating citizenship or expanding entertainment?
The effect of Facebook on adolescent participation. New Media & Society, 18(5), 817-
836.
Thorson, E. (2014). Beyond opinion leaders: How attempts to persuade foster political awareness
and campaign learning. Communication Research, 41(3), 353-374.
Thussu, D. K. (2015). Infotainment. The International Encyclopedia of Political Communication,
1-9.
Tiggemann, M., Hayden, S., Brown, Z., & Veldhuis, J. (2018). The effect of Instagram “likes”
on women’s social comparison and body dissatisfaction. Body Image, 26, 90–97.
Tomer, I. (2018). “Making Content Anywhere a Reality.” Broadcasting & Cable, 148(2), 19.
Tuchman, G. (1972). Objectivity as a strategic ritual. American Journal of Sociology, 77,
660-679.
Tuchman, G. (1978). Making news. New York: Free Press.
Tuggle, C. A., & Huffman, S. (2001). Live reporting in television news: Breaking news or black
holes?. Journal of Broadcasting & Electronic Media, 45(2), 335-344.
Valkenburg, P. M., Semetko, H. A., & De Vreese, C. H. (1999). The effects of news frames on
readers' thoughts and recall. Communication Research, 26(5), 550-569.
Van Aelst, P. (2017). Media Malaise and the Decline of Legitimacy. In Myth and reality of the legitimacy crisis: Explaining trends and cross-national differences in established
democracies, 95.
Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1(1), 2-14.
Vinton, K. (June 2016). “These 15 Billionaires Own America’s News Media Companies.”
Forbes: https://www.forbes.com/sites/katevinton/2016/06/01/these-15-billionaires-own-
americas-news-media-companies/#bf18597660ad
Vos, T. P., Eichholz, M., & Karaliova, T. (2019). Audiences and Journalistic Capital: Roles of
journalism. Journalism Studies, 20(7), 1009-1027.
Wahl-Jorgensen, K., Williams, A., Sambrook, R., Harris, J., Garcia-Blanco, I., Dencik, L.,
Cushion, S., Carter, C., & Allan, S. (2016) The Future of Journalism, Digital Journalism,
4(7), 809-815.
Watson, J. C. (2002). Times v. Sullivan: Landmark or Land Mine on the Road to Ethical
Journalism? Journal of Mass Media Ethics, 17(1), 3–19.
Watson, J. (2008). Journalism Ethics by Court Decree: The Supreme Court on the Proper
Practice of Journalism. LFB Scholarly Publishing LLC.
Weaver, D. H. (2007). Thoughts on agenda setting, framing, and priming. Journal of Communication, 57(1), 142-147.
Weaver, D., & Elliott, S. N. (1985). Who sets the agenda for the media? A study of local agenda-
building. Journalism Quarterly, 62(1), 87-94.
Webster, J. G., & Ksiazek, T. B. (2012). The dynamics of audience fragmentation: Public
attention in an age of digital media. Journal of Communication, 62(1), 39-56.
Webster, T. (November 2018). Podcasting and race: The state of diversity in 2018. Edison
Research. Accessed July 20, 2020. https://www.edisonresearch.com/podcasting-and-race-
the-state-of-diversity-in-2018/
Wells, C., Shah, D. V., Pevehouse, J. C., Yang, J., Pelled, A., Boehm, F., ... & Schmidt, J. L.
(2016). How Trump drove coverage to the nomination: Hybrid media campaigning.
Political Communication, 33(4), 669-676.
Westwood, S. J., Iyengar, S., Walgrave, S., Leonisio, R., Miller, L., & Strijbis, O. (2017). The tie
that divides: Crossnational evidence of the primacy of partyism. European Journal of Political Research.
Williams, B. & Delli Carpini, M. X. (2011). After Broadcast News: Media Regimes, Democracy,
and the New Information Environment. Cambridge University Press.
Williams, L. (2020). Political Science and Podcasts: An Introduction. PS: Political Science & Politics, 53(2), 319-320.
Winn, R. (April 2020). 2020 Podcast Stats & Facts (New Research from Apr 2020). Podcast
Insights. Accessed June 1. https://www.podcastinsights.com/podcast-statistics/
Witschge, T., & Harbers, F. (2018). Journalism as practice. Journalism, 19, 105-123.
Wojcieszak, M., Bimber, B., Feldman, L. & Jomini-Stroud, N. (2016) Partisan News and
Political Participation: Exploring Mediated Relationships. Political Communication, 33(2),
241-260.
Wolfsfeld, G. (2011). Making Sense of Media and Politics. New York: Routledge.
Women’s Media Center. (2019). “The Status of Women in the U.S. Media 2019.”
Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes’ steadfast factual
adherence. Political Behavior, 41(1), 135-163.
Wook Ji, S. & Waterman, D. (2014). “The Impact of the Internet on Media Industries: An
Economic Perspective,” in Society and the Internet: How Networks of Information and Communication are Changing our Lives. Oxford University Press, Oxford.
Wrather, K. (2016). Making 'Maximum Fun' for fans: Examining podcast listener participation
online. Radio Journal: International Studies In Broadcast & Audio Media, 14(1), 43-63.
Wrather, K. (2019). Writing Radio History as it Happens: The Challenges and Opportunities of
Collecting Podcast Histories. Journal of Radio & Audio Media, 26(1), 143–146.
Xu, J. (2014). The impact of entertainment factors on news enjoyment and recall: Humour and
human interest. Journal of Applied Journalism & Media Studies, 3(2), 195-208.
Yang, J. (2016). Effects of popularity-based news recommendations (“most-viewed”) on users'
exposure to online news. Media Psychology, 19(2), 243-271.
York, C. (2013). Overloaded by the news: Effects of news exposure and enjoyment on reporting
information overload. Communication Research Reports, 30(4), 282-292.
Young, D. G., and Tisinger, R. M. (2006). Dispelling Late-night Myths: News Consumption
among Late-night Comedy Viewers and the Predictors of Exposure to Various Late-night
Shows. Harvard International Journal of Press/Politics 11 (3), 113-134.
Ytre-Arne, B., & Moe, H. (2018). Approximately informed, occasionally monitorial?
Reconsidering normative citizen ideals. The International Journal of Press/Politics,
23(2), 227-246.
Zelizer, B. (2009). Journalism and the academy. In K. Wahl-Jorgensen and T. Hanitzsch (Eds.)
The Handbook of Journalism Studies (New York: Routledge).
Zengerle, J. (2017, November 22). The Voices in Blue America’s Head. The New York Times
Magazine. https://www.nytimes.com/2017/11/22/magazine/the-voices-in-blue-americas-
head.html
Zenor, J. (2014). Parasocial politics: audiences, pop culture, and politics. Lanham: Lexington Books.
Zornick, G. (2018). “The Disrupters.” Nation, 306(13), 12-15.
Zyoud, S. H., Sweileh, W. M., Awang, R., & Al-Jabi, S. W. (2018). Global trends in research
related to social media in psychology: mapping and bibliometric analysis. International
Journal of Mental Health Systems, 12.