Quinn Gabrielle Cantor’s Reflection

Quinn wrote this reflection to accompany her essay, “Race and Rhetoric: Examining How the Audience’s Race Creates Rhetorical Constraints and Influences Rhetoric,” in her Composition I class in fall 2021.

The writing process for this essay was completely different from that of any other essay I’ve ever written. Much of this difference comes from the drafting process we followed, where we had to generate and develop our ideas, then create an audience-facing draft, and then finally create our revisions, versus my high school writing process, where we had to develop a thesis statement first, create an outline, and then write the essay. I think the writing process this time made writing the essay much easier for me, because it made sense. I came up with different ideas during the idea-generating part, developed one specific idea further, which is actually what became my essay topic, and then discussed it with my professor during our conference. I think the idea generating and developing really helped me create the framework for my essay; because the main idea was already there, I simply had to develop it further and connect it with the previous texts we read (Halevi, Dirk, Allen).

I think the hardest part of the essay was making the audience-facing draft, because at that point it was already more like an essay. I mainly struggled with organizing my thoughts and ideas, since I had a lot of ideas and connections that could work, but I didn’t know how to pick out which ones actually worked and which ones didn’t. What helped with my struggle was when we did the checklist for our essay and assessed what things we did successfully and what things we were still missing. I was able to see which ones I still needed to work on, so I added them to my to-do list, and my professor’s comments on my introduction also helped a lot, since I would have missed those issues otherwise. I also found the peer review very helpful, since my partner pointed out things that I had to do better on, and it was really helpful even just reading his work, since I was able to get some idea of what worked in his essay that I hadn’t done so well in mine. Another person I received feedback from was a classmate during class. Since we had chosen the same text, we mostly talked about our ideas with each other and discussed what our individual topics were about. Just like with my peer review, I found reading and learning about other people’s ideas and texts really helpful, since it allowed me to assess my own writing as well.

I think my final essay is pretty solid. I cut out a lot of the ideas I had written originally, which I struggled with, but in the process I was able to expand on my remaining ideas more deeply. I really like my body paragraph about Villarosa’s use of scientific objectivity, because I was able to directly connect it to two texts, Dirk and Allen, since they respectively talk about genre awareness and scientific writing. I think the most difficult part of my essay was the introduction, since I wasn’t sure how to introduce the Rhetorical Situation Model without revealing my entire essay (which is actually what my professor had pointed out). I ended up simply stating what I was going to talk about in my essay, since that directly introduced my topic.

After the entire process of writing this essay, I think my understanding of the rhetorical situation model (RSM) and rhetorical theory grew so much. Even though we read a lot of texts on the RSM and did a bit of analysis in class, I didn’t really know how to apply it until I had to use it myself in the essay. I really didn’t know how to analyze from a rhetorical perspective because I was so used to writing research essays in high school. However, once I started labeling the elements of the RSM, I think that’s when the ideas started flowing out, which is why I wrote so much in my idea-generating draft. That’s when I realized that what we discussed in class is true: writing makes us think deeper and makes us write more. I actually began to appreciate the use of the RSM in analyzing texts, because it automatically makes us think deeper than the surface meaning or message of the text, and I know for sure that I will be using it a lot more when encountering new texts.

Karina Silva’s Reflection

Karina wrote this reflection to accompany her essay, “The Brain, the Block, and the Bummed Writer,” in her Composition I class in fall 2021.

The point I wanted to focus on in this essay was connecting my sources, or rather my theories, to one another and to the “understanding of the neuroscience behind writing.” I wanted to focus on connecting my theories together because the purpose of my essay was to assess the theories. I also wanted to show that writer’s block, the exigence of this paper, is complex, as it has several “symptoms.” In order to emphasize the purpose and the exigence, I had to demonstrate how my sources overlapped and contrasted. Basically, connecting the “everything is in everything,” “flow,” and “zone of proximal development” theories was the entirety of my paper.

Tyler Tran, the Undercurrents writer behind “It’s Your Voice, Why Not Use It?” connects two sources that both show how difficult it is for students to break out of writing habits:

Since students are taught to believe that there is only one correct way to write from the beginning as Kinloch mentioned, it is difficult for them to break this habit. When a criminal is released from jail, it takes a certain amount of time for them to adjust to living in the free world. […] Gemmell, who wrote about her students’ own experiences with this, observed that ‘many students resisted this new focus’ (Gemmell 65). It is surprising that these students were not openly joyous about being able to write with their own voice, but it is understandable. (Tran)

Tran connected Rebecca Gemmell’s experience of having her students change their writing with Valerie Felita Kinloch’s observations of students’ use of “standard English.” This connection was Tran’s way of emphasizing how students are conditioned to use “standard English” and how that type of language was preventing them from using their own voice in their writing.

Taking from Tran’s example, connecting two sources gives one the ability to emphasize certain messages that contribute to and lie beneath the purpose of the paper. By focusing on connecting sources, one can also concentrate on discussing the uses and limitations of each source. These uses and limits also help contribute to the purpose of the essay. For example, in my paper I found that the limits of each of my theories revolved either around effectiveness, like the “flow” and “everything is in everything” theories, or around brain activation, like the “zone of proximal development” theory. By examining the uses of my theories, I was able to elaborate on the purpose of my paper by emphasizing that writer’s block is a complex condition and that there are several aspects to it.

2021 Editor’s Introduction

To slow the spread of COVID-19, UMass Boston opted to operate remotely for the entire 2020-2021 academic year. As instructors quickly adapted to teaching online and via Zoom, students attended class meetings from their bedrooms, kitchens, and living rooms. Residing across time zones, some students attended Zoom classes during night hours, while other members of their household slept. Remote learning life brought moments of delight, as we introduced each other to our pets and the quirks of our home spaces, as well as frequent moments of frustration (“You’re on mute!”). For some, operating remotely came to mean putting in the effort to learn despite the increasing weight of isolation, as the months of social distancing stretched from fall to spring. For others, remote learning came to mean trying to find time and energy for academic life while coping as an essential worker and/or as the parent of a child learning from home. While the Undercurrents editorial team has been impressed by the large number of high-quality submissions we receive each and every year, which makes the selection process an enormous challenge, we are especially proud of this year’s nine honorees. At the same time, we also celebrate the thousands of students and more than sixty faculty members in our program who, despite the disruptions and setbacks in their own lives, still found ways to connect, write, and reflect.

While the acquisition of new rhetorical knowledge and practice is a central goal of the program, such learning has implications for identity, as new discourses bring new ways of writing and speaking, and therefore new ways of belonging. Hadassa Soussou takes up this tension by raising a concern that the rich language repertoires of multilingual students would be impoverished by the wrong pedagogical approach. Soussou makes the case for incorporating a narrative approach to the multilingual writing classroom, in which students can develop language and literacy proficiencies—including strategies for organization, structure, and audience engagement—while cultivating a sense of confidence as novice members of an American English academic discourse community. Seemingly demonstrating a product of such a pedagogical strategy, John Nobile Carvalho’s literacy narrative captures his own journey as a multilingual student pursuing higher education in a cross-cultural contact zone: “I feel responsible to share the American culture with my friends and family who live in Brazil,” he writes. “Likewise, I am responsible for sharing Brazilian culture with American society.”

But the development of new discourses and cross-cultural contact is not always met with unbridled enthusiasm, since encounters with new discourses can bring clashes with familiar communities, beliefs, and identities. Tyler Tran identifies a writerly strategy of avoiding such clashes: “self-censorship,” or “the act of students replacing their true voice, with a voice that is not entirely their own”—aptly echoing the title of Jacqueline Jones Royster’s now-famous essay on her own experiences with crossing discourse boundaries as a Black rhetorician, “When the First Voice You Hear Is Not Your Own.” In another examination of the impact of language learning on the self, Adia Samba-Quee’s critical engagement with a well-known pedagogical debate between Stanley Fish and Vershawn Young affectingly traces the push-and-pull of entering academic discourses. While Samba-Quee is drawn in by Young’s argument for codemeshing—that is, the strategic blending of discourses, such as Black rhetorical traditions and (White) American academese—Samba-Quee also sees risks inherent to such blending: “I’m not one to believe in gatekeeping, and what I want most of all is to be understood, I am curious to what extent will letting outsiders into our world, a world the product of years of mistreatment and oppression, come back to hurt us.”

Outlining one example of the oppressive systems by which marginalized communities are excluded, Sarah Islam examines the inherent biases of archival holdings and the histories that are generated from them. Since “documentation tools have only been available to . . . those in power,” Islam argues for structural and methodological reform in archival research in order to save the materials that are historically ignored and elevate the stories that have previously been silenced. Navasz Hansotia, too, has questions about exclusionary rhetorical practices, as she responds to an essay by James Warren, who contends that students’ lack of familiarity with the rhetorical situation of college application essays presents an undue challenge to students. Hansotia asserts that the challenge is especially problematic for international student applicants, who may be doubly disadvantaged due to lack of familiarity with the cultural contexts of their application. Exposing the hidden agenda of the application essay, Hansotia extends Warren’s argument to raise a call for change in the ways that application essay prompts are designed.

Taking a somewhat different tack that is no less concerned about questions of access and inclusion in academic contexts, Maggie Buyuk asks an especially timely question: “Can scientific rhetors ease the confusion and frustration that scientific research causes the general public?” As the spread of misinformation about vaccines and the COVID-19 virus seems to expand, Buyuk presents rhetorical strategies for going public with complex (and life-saving) scientific information. Also in favor of techniques for educating through open communication rather than restricted access, Hannah Ortiz casts doubt on the wisdom of censoring discriminatory works of literature—including those expressing overtly racist views, as doing so might “attract interest without context.” Rather, controversial or offensive texts might be put to use as occasions for teachers and students to openly and actively examine the contexts and consequences of those perspectives. Likewise calling for more expansive perspectives on literate activity, Alex Quadros makes the case for increased attention to orality in the composing process; reminding us that the acts of writing are not solely restricted to the inscription of words on a page by human hands, Quadros calls for the recognition of conversation and speech-to-text tools as legitimate resources in the writing process. On behalf of the editorial team, I welcome readers to the pandemic issue of Undercurrents and thank our nine honorees for sharing their outstanding work with the world.

-Lauren M. Bowen, Editor-in-Chief of Undercurrents and Director of the Composition Program

2021 Honorees

The works below were written by first-year students in the Composition Program at the University of Massachusetts Boston, selected for publication by Composition Program faculty serving on the Undercurrents editorial board. Please see our Editor’s Introduction to learn more about our 2021 issue, click About the Journal to learn more about Undercurrents, or click the links below to enjoy our 2021 selections.


Maggie Buyuk’s The Battle of Science and the Public: How to Make Scientific Writing Friendly

“Clear language and structure can build a trusting relationship with the public and scientific discourse community, while still adhering to the standard rules and regulations of scientific research writing.”

John Nobile Carvalho’s Becoming Bilingual: An Experience That Changed My Life

“Now this language is not only a source of curiosity; it has practically become the tool that allows me to connect with the world, and in a way even with myself.”

Navasz Hansotia’s The Ignored Insights Of An International Student

“Very often, as international applicants do not have the opportunity of their schools explaining to them the implicit nature of the prompt and the counsel’s expectations, they are eliminated at the first level of the ‘game.’”

Sarah Islam’s Ethics in the Archives

“We must uncover the truths hidden in the archives and make them known to the world. But along with providing the voiceless with a platform, we must strive to fix the system that allowed for the silencing of oppressed communities.”

Hannah Ortiz’s The Repurposing of Biased Literature for Moral Development

“With the correct educational intervention, biased or controversial literature can be read and discussed, furthering the moral development of every person in the classroom.”

Alex Quadros’s Paradise Found? Orality in the Composition Process

“Although the majority of research on orality in the composition process does focus on writers with disabilities, I see no logical reason for the role of orality to be reserved for a select few if it could benefit writers en masse.”

Adia Samba-Quee’s Texts in Conflict

“Gee’s writing taught me how to have more critical relationships with the language I use and what it says about the groups I am a part of, but Young’s writing reminded me of the very institutions Gee upholds with his Discourse theory.”

Hadassa Sossou’s Using a Narrative Approach to Cater to Multilingual Student Writers

“Encouraging multilingual students to use narrative thus can help them to organize their ideas and take a position that not only explains their claims and conveys their thinking, but is compelling to an audience.”

Tyler Tran’s It’s Your Voice, Why Not Use It?

“Supporting students’ efforts to write with our own voices continues the flow of original ideas, allowing the conversation to thrive and continue, and the positive cycle of writing lives on.”

Breaking Free from Gender Norms: Adolescent Constructions of Femininity Through the Patriarchy and High School Musical

by Ina Tolentino

Ina is a double major in nursing and psychology from Elk Grove, CA. Ina “always had trouble embracing myself with the way masculinity and femininity are pitted against each other” and found meaning in writing this essay, as they believe “there’s so much value in painting ourselves however we want … regardless of the expectations of gendered roles.” As a non-binary individual, Ina feels that writing this piece was a healing experience and a reminder to embrace all aspects of who they are. They love reading and writing, which has led to a bedroom flooded with different kinds of books. Now and in the future, Ina hopes to reach people and help them, whether it is through the medical field, psychology, or writing. They note that “life is full of people, relationships, and stories worth sharing, and there’s something very special about being a part of that process—giving or receiving.”


Pink and blue. Sparkly and strong. Through these common oppositional stereotypes, gender is so easily understood as a dichotomy – perpetuating femininity and masculinity as mutually exclusive. This extreme divide of gender can be attributed to patriarchy, the principle that quantifies worth based on gender. bell hooks in “Understanding Patriarchy” defines it as “a political-social system that insists males are inherently dominating, superior to everything and everyone deemed weak, especially females” (1). The patriarchy’s sexist ideologies are upheld and enforced “through various forms of psychological terrorism and violence,” especially prevalent in media with stigmatized depictions of gender expression (hooks 1). If these discriminations are internalized, an understanding and construction of one’s gender identity can easily become distorted. The goal of this paper is to expose the patriarchal undermining of femininity and analyze its effect in adolescent media on young girls in order to advocate for more inclusive, accepting, and even fluid gender expression.

hooks describes the patriarchal influence on her own childhood, specifically how her and her brother’s behaviors were expected to fall in accordance with a “predetermined, gendered script” that was commonly “assigned…as children” (1). This script establishes rules about gender expression that crucially hinder early enactment of identity, causing “confusion about gender” at a young age (1). The script’s basis in “patriarchal values and beliefs” forces children to be and act according to its definitions of gender, regardless of their own natural dispositions. hooks anecdotally shares a time she broke free from this script, aggressively playing marbles, “a boy’s game” (1). However, she was punished with both verbal and physical abuse by her patriarchal father for displaying masculinity; he belittled her as “‘just a little girl’” and repeatedly beat her with a board (2). She was “banished – forced to stay alone in the dark,” symbolic of the way she was diminished to and trapped within her “natural” place of patriarchal femininity (2). Because the patriarchy sets such strict standards for the “natural” roles of genders, females and males are diminished to the confines of feminine submissiveness and masculine domination. The indoctrination and reinforcement of these roles in childhood is traumatizing, and though the physical violence hooks experienced in the fifties might not be as prevalent in today’s context, the psychological terrorism she describes still is.

hooks, quoting therapist Terrence Real, outlines the patriarchal destruction and convolution of views on gender: “‘Psychological patriarchy is the dynamic between those qualities deemed ‘masculine’ and ‘feminine’ in which half of our human traits are exalted while the other half is devalued’” (6). By imposing what is “right” and “natural” against what is “wrong” and “unnatural” solely based on gender, the patriarchy uses gender stereotypes to dictate what identities are socially acceptable. On a fundamentally universal, human level, the patriarchy constructs stigmas of gender that deny wholeness of identity. This discrimination between the genders has been shown to create confusing relationships with identity, as scholar Adam Rogers, whose studies emphasize competence and gender development in adolescence, researched:

The subjective experience of oppression (e.g., discrimination) elicits feelings of social and psychological dissonance that are fundamentally distressing, and which demand a coping response from the individual. This coping response involves the reshaping of a person’s social identity as they try to make sense of their relationship to the systems of power in which they are embedded. (Rogers et al. 336)

Understanding, shaping, and claiming gender identity becomes a complicated process in a social context that not only emphasizes the rift between the genders, but places “greater inherent valuing of masculinity compared to femininity” (Rogers et al. 336).

Thus, if “to be immersed in any culture is to learn to see the world through the ideological lenses it validates and makes available to us,” immersion in a patriarchal culture involves enacting identity as a response to what is understood and validated by patriarchy (Scott et al. 48). Childhood and adolescence are crucial time periods for establishing identity within the context of ideology since gender is one of the earliest learned social constructs. Patriarchal gender portrayals, especially when popularized and perpetuated throughout childhood, then become key influencers for children to understand gender in the world and in themselves.

And one of the most popular and arguably most patriarchal pieces of media targeted toward children is the widely accepted fan-favorite Disney Channel Original Movie, High School Musical (HSM). Its overwhelming success prompted two immediate sequels and a more recent television spin-off. Its millions of viewers leave no room for doubt about its substantial impact on 2000s tween pop culture (Keveney). The plot follows basketball jock Troy and nerdy brainiac Gabriella, “star-crossed” lovers bonded by a passion for singing, who must break free from the status quo of high school social archetypes to express their multifaceted identities. Its overarching message is one of liberation, urging the audience to dissolve clique-y, stereotypical perspectives. Yet this theme questionably does not seem to apply to gender, since the film has very different ways of embodying and quantifying femininity.

Sharpay Evans – pink “It Girl” and literal drama queen – is the story’s antagonist, actively working to keep Troy and Gabriella from invading the school’s drama program. Sharpay displays some typically masculine traits, such as aggressiveness, assertiveness, and strong-minded outspokenness, breaking the aforementioned patriarchal script (Stevenson 107). However, she does so with a conflicting, excessive performance of femininity to compensate, leading her to “serve a hegemonic rather than subversive function” (Stevenson 109). Sharpay’s outward appearance of the epitomized girl is actually so inflated that it is essentially treated as a source of absurd comic relief, since she is pointedly decked in glittery pink everything, including her microphone and locker. Especially when she’s placed in complete opposition to the film’s leading female, Gabriella (who possesses a quiet personality and a more muted femininity), Disney’s intentions for gender portrayal can be challenged. Why does Disney’s polarization of femininity have to correlate to its protagonist-antagonist relationship? Why are their contrasting traits shown as good versus bad and right versus wrong, perhaps even natural versus unnatural? Even if Sharpay’s gender performance is an over-exaggeration, Disney’s acceptance of femininity can still be called into question. The High School Musical cinematic universe in which she resides, one often naively valorized by youth as the “dream” high school reality, reveals itself to be one that not only ridicules femininity, but antagonizes it.

Thus, as young viewers correlate antagonism with Sharpay, and Sharpay with hyper-femininity, discrimination against femininity can easily be internalized. Maura Leaden, in her thesis “Unlearning Disney,” speaks on her consumption of Disney Channel and its effect on her own gender identity. Leaden has described HSM as a treasured childhood “safe-haven” (24), but revisiting its themes with a feminist lens has complicated her attachment; she now realizes that it had “restricted…aspects of [her] femininity, sexuality, and emotions” (26). She recalls a confusing and discouraging “inadequacy,” since her tween self did not know where to fall in comparison to Disney’s femininities (35). There was no compromise in Disney’s opposing portrayals; girls were either bashfully quiet or unashamedly loud, smart scientists or over-the-top fashionistas, “innocent maidens” or “sinister witches” (35), so Leaden had serious difficulty choosing how to embody her femininity and sexuality:

The binary, being either [good or bad, right or wrong, natural or unnatural, alluding to Gabriella and Sharpay respectively] erases the possibility of anything in between. There is no image of young women negotiating a sexuality that is self-possessed and self-satisfying, yet also kind and loving and profoundly mutual. (51)

The messages HSM and Disney send have clearly had a pronounced effect on Leaden’s ability to understand her own gender identity, since her self-comparison, as a form of self-discrimination, has inhibited her from comfortably claiming femininity.

While Leaden’s anecdotal experience with HSM is highly personal, it is not necessarily exclusive. Common Sense Media, a research-based organization focused on educating parents about media/technology’s effect on kids, studied gender-typed television portrayals and how they contribute to children’s worldviews (Ward et al. 6). Their extensive research revealed that watching TV and movies that reinforce specific gender roles leads children to have much stricter beliefs about what their gender can and should do (Ward et al. 38). Overall, popular media consumption has been shown to be a prominent force in the ideology and identity formation of children. Taking this data and HSM’s popularity into account, Leaden’s experience with HSM can be read as more than just a singularity; HSM’s confusing and harmful ideologies about femininity could easily permeate the ideologies of all its younger viewers, as they did with her. Leaden’s experience can be seen as a microcosm, encapsulating something much larger about general tween culture surrounding Disney Channel.

Adolescent ability to claim femininity then becomes relative to mainstreamed views of patriarchal gender performance like HSM’s, since “the degree to which a [feminine] identity is stigmatised or valorised, seen as part of a wide spectrum of possible femininities or regarded as aberrant, will depend on the norms and understandings prevalent” (Paechter 24). If the most HSM has to offer in terms of femininity is just pink villainy, how can girls be expected to understand, let alone embrace, their own femininity? Considering how Sharpay and everything she represents is antagonized, even accepting femininity can be a struggle, since “girls who experience discrimination might come to perceive that identifying with femininity is a liability” (Rogers et al. 337, emphasis added). So, as a response to social spheres existing in reality, reinforced by patriarchal mass media that is constantly devaluing and opposing the feminine, girls might be led to understand femininity as a hindrance, a burden, a fault – something to dissociate from. In fact, Adam Rogers and other scholars focusing on gender socialization conducted a longitudinal study with adolescent females to observe the effects of gender discrimination on gender identification. An inverse relationship was found between the two: “Girls who reported more frequent experiences with discrimination…reported one year later that…they felt less similar to other girls, and that they felt more similarly to boys” (Rogers et al. 344). It seems that, in this case, femininity has become an object to reject.

As girls navigate this negativity surrounding femininity, because there is so little gray in between the black and white of patriarchy’s masculine and feminine, it makes sense that these over-essentialized identities are all they have to compare themselves to. Carrie Paechter, who studied embodiments of femininities in elementary schools, describes how children’s understanding of femininity has been divided into two “co-constructed oppositional identities”: the girly-girl and the tomboy (225). With patriarchal standards that make femininity and girly-girl-ness so easily recognized as a liability, seeking refuge in masculinity and enacting a tomboy identity becomes a coping mechanism of “psychological protection” (Rogers et al. 344). This rejection might even be viewed as an act of resistance against femininity’s patriarchal connotations; for example, being a tomboy might act as a way to reject the more traditional values of banishment (hooks was punished and banished for displaying masculine traits) or of hegemony (Sharpay covers her masculine traits with extreme femininity and is ridiculed for it); embracing masculinity might disprove the patriarchal stigma that girls are weak.

However, taking up a tomboy identity in opposition to femininity only reinforces harmful patriarchal values. After all, the “central aspect of claiming [a tomboy] identity” (Paechter 228) is not just embracing the masculine, but also “embracing the expulsion of the feminine” (Paechter 231). Embracing masculinity is so harmful in this sense since it is constructed only in contemptuous relationship to femininity; femininity is further devalued, othered, and misunderstood. A binary then forms within the binary, further splitting genders and gendered traits into unnatural, seemingly irreconcilable divisions. Girls believe they “hav[e] to opt for one identity or the other,” since gender is perceived as mutually exclusive (Paechter 234). Psychological terrorism still persists, since “de-identifying with their gender collective may only serve to further isolate girls” from their emotional well-being and gender identity (Rogers et al. 345). This confusion, this dissonance within female identity remains a result of the patriarchal “‘tortured value system’” that cyclically contorts both genders (Real qtd. in hooks 6).

So perhaps the only way to claim natural-ness is through wholeness. Revisiting “Understanding Patriarchy” and hooks’s anecdote, she describes with heavy awe the marbles she saw while playing: “All sizes and shapes, marvelously colored, they were to my eye the most beautiful objects” (hooks 1). These marbles should be understood as a larger metaphor in hooks’s narrative, symbolizing the rich and colorful diversity that lies within gender, which should be accessible to all, regardless of whether they “belong” to one gender and exclude the other. hooks combats the patriarchal tyranny that denies access to this “openheartedness and emotional expressiveness that is the foundation of well-being” (6). She suggests the only way to be free from the unnatural oppression of the patriarchy is by accepting natural identities, discarding the expectations and even the existence of gendered roles, and accepting all forms of gender expression. It is paradoxical to create through destruction, so gender should not be constructed through denying or rejecting, but through welcoming and embracing.

Patriarchy shatters identities, forcing their broken shards into dictated gender roles. Gender expression must be liberated to actively combat popular media’s patriarchal gender norms. Accepting gender unconditionally is the only way to achieve whole and natural inherency. We must pick up, recollect, reclaim the pieces of gender we’ve lost, the pieces patriarchy has taken from us, and piece ourselves back together.

Works Cited
hooks, bell. “Understanding Patriarchy.” Louisville Anarchist Federation Federation, 2010.

Keveney, Bill. “Can ‘High School Musical’ Do It Again?” USA Today, 2017.

Leaden, Maura. “Unlearning Disney: Developing a Feminist Identity While Critiquing Disney Channel Original Movies.” 2020. Rollins College, Honors Program Theses.

Ortega, Kenny. High School Musical. Walt Disney Studios Home Entertainment, 2006.

Paechter, Carrie. “Tomboys and Girly-Girls: Embodied Femininities in Primary Schools.” Discourse: Studies in the Cultural Politics of Education, vol. 31, no. 2, 2010, pp. 221–235.

Rogers, Adam A., et al. “Is My Femininity a Liability? Longitudinal Associations Between Girls’ Experiences of Gender Discrimination, Internalizing Symptoms, and Gender Identity.” Journal of Youth & Adolescence, vol. 51, no. 2, 2022, pp. 335–347.

Scott, Tony, et al. “Writing Enacts and Creates Identities and Ideologies.” Naming What We Know: Threshold Concepts of Writing Studies. Utah State University Press, 2016.

Stevenson, Lesley. “‘Bad Bitch’ or Just a ‘Bitch’: The Mean Girls of High School Films.” Through Gendered Lenses, vol. 7, 2016, pp. 105–122.

Ward, L. Monique, and Jennifer Stevens Aubrey. “Watching Gender: How Stereotypes in Movies and on TV Impact Kids’ Development.” Common Sense Media, 2017.

Rage Against the Machine: How Screen Time is Impairing Our Intelligence and What We Can Do About It

by Jillian Steeves

Jillian is a history major from Danvers, MA. Her essay was inspired by an article she read in her composition class about how screens could damage cognitive functioning, and she thinks that “for a lot of people, myself included, overindulgence in tech is increasingly affecting our quality of life.” Jillian noted the irony of a history major writing about a current topic, but she “found there to be a lot of overlap between my history studies and the topic of my essay,” and she was able to “take many of the skills I’ve picked up as a historian — research, analysis, written communication — and apply them during my writing process.” She considers learning to be a lifelong journey, and “wants to continue exploring, researching, and acquiring knowledge long after I’ve graduated.” She writes that, “the more we understand about the world, the easier it is to use our knowledge to make positive changes.”


“It’s because you’re always on that phone.” Members of my generation will be all too familiar with this adage; it seems to have become a sort of mantra for older adults. Whenever a problem arises – mental health, social issues among peers, or declining performance in school – it seems to always be chalked up as just another side effect of smartphones and computers. For the modern teen, these concerns are usually brushed off as older generations just being old-fashioned.

But it is much to my chagrin that I have to admit: Mom and Pop may not be entirely wrong. Nicholas Carr, for one, certainly seems to think so. His article for The Atlantic, “Is Google Making Us Stupid?” is what introduced me to the idea that the Internet could be rewiring our brains. In the article, Carr describes a process by which modern technology is reshaping our brains at the cost of many cognitive functions, such as reading comprehension and the ability to focus for long periods of time. The Internet exposes us to so many different things at once, he explains, that our brains have had to sacrifice cognitive quality for quantity.

Carr is not alone in this belief either. In recent years, evidence of this phenomenon has begun to emerge in the fields of psychology and neuroscience. A quick search into the relationship between digital technology and cognition turns up no shortage of scientific publications on the topic. One such study, titled “Brain Health Consequences of Digital Technology Use,” was published in 2020 in the journal Dialogues in Clinical Neuroscience. Gary W. Small and the other authors state that, while computers and smartphones do have some positive effects, such as improved memory and multitasking skills, they come at the cost of many other skills. “Potential harmful effects,” the authors write, “of extensive screen time and technology use include heightened attention-deficit symptoms, impaired emotional and social intelligence, technology addiction, social isolation, impaired brain development, and disrupted sleep” (Small et al. 2020). Because you’re always on that phone indeed.

Carr’s outlook, and that of many of his neo-Luddite peers, seems rather bleak. In Carr’s original article, he writes of his fears that “we will sacrifice something important not only in ourselves but in our culture” (Carr 2008). The consensus seems to be that, if screens really are making us stupid, then humanity must be doomed; as technology continues to progress, so will the downfall of humanity, until we are no longer able to engage in art, literature, culture, and deep thinking – those very things that make us human in the first place. But is this pessimistic frame of mind a reasonable one? Are these changes to the brain causing irreversible damage? Will the end of human intelligence really be brought about by the advent of the smartphone? Probably not.

The idea that excessive screen time can alter our brain wiring is based on the concept of neuroplasticity. Neuroplasticity can be described simply as the brain’s ability to change its structures and functions in response to new situations. Because the Internet provides information and sensory stimuli differently than other forms of media do, the neural networks of our brains will inevitably change in order to better process this information. This physical restructuring and rewiring of the brain’s biology is what accounts for the cognitive changes associated with Internet usage.

However, what Carr fails to mention is that neuroplasticity is not a one-way street. If the brain can rewire itself in response to the Internet, it can also rewire itself in the opposite direction. The key to reversing the cognitive effects of technology seems to be reversing the behavior that caused them in the first place. To put it simply: if it’s the Internet that is causing the problem, then logging off the Internet is the solution.

This principle is not mere speculation either. While research on the topic is still in its beginning stages, there are a few studies which have examined case studies related to decreasing technology use. One such study, published in the American Journal of Family Therapy, examined twenty-nine individual case studies in which the subjects had completed a “digital detox,” analyzing common trends among them. The authors reported that most, if not all, participants experienced increased attention span, better sleep, and improved interpersonal communication during their period of unplugging (Morris and Cravens-Pickens 2017). Overall, the outcomes yielded mainly positive results.

While the scientific research looks promising, it is also somewhat underdeveloped, due to the relative newness of the subject. With this short supply of research, many individuals have taken the matter into their own hands, experimenting with eliminating digital technology from their own lives. For instance, journalist Johann Hari recently published an article in The Guardian documenting a three-month vacation from technology. While he struggled to adjust to tech-free life at first, he writes that, “Within a few days, I started to flow, and hours of focus would pass without it feeling like a challenge…I had feared my brain was breaking. I cried with relief when I realised that in the right circumstances, its full power could come back” (Hari 2022).

The student-directed documentary Disconnected turned up similar results. The film follows the day-to-day life of three Carleton College students, who were challenged to give up computers for one month as an experiment for a class. Much like Hari, the students – Andrew, Chel, and Caitlin – found themselves struggling at first; the students had to teach themselves how to use a typewriter, a library card catalog, and even how to send snail mail. However, after the period of initial adjustment, the positive side effects of the technological detox became apparent. In a talking head, Caitlin explains that, “it’s just kind of an inconvenience. But at the same time, I’m finding myself spending more time on things that I should have been doing. Like homework” (Disconnected 24:33–41).

It is worth noting that, despite how different the subjects of each case study are, the results are strikingly similar. Hari, born in the late 1970s, would not have been introduced to computers until his brain had fully developed, and smartphones until much later still; the Carleton College students, on the other hand, were born and raised in the digital age. And yet, their technological upbringings had seemingly little effect on how easily they were able to adapt to a tech-free life, and to restore their attention spans, productivity, and deep thinking skills. The principle of the technological detox seems to work both for those wanting to return to a previous state of mind and for younger people wanting to achieve an entirely new one. The verdict is clear: eliminating technology from our lives is the key to increasing our intellectual capacity.

So, this means that we should all completely banish modern technology from our lives, right? Well, that’s easier said than done. Even if you do believe in this anti-technology solution, I’m willing to bet that nobody is leaping up to throw their smartphones and computers away. We have school assignments, work-related documents, and bills to pay that simply aren’t accessible without the Internet. While using the Internet may come at the cost of our intelligence, the cost of not using it at all is even greater. To eliminate modern technology from one’s life may come with the cost of a good grade or a job. There’s no easy way around it: our lives exist online. This all-or-nothing solution takes the original dilemma, of intellectual quality being sacrificed for quantity, and turns it on its head. Digital technology definitely can be helpful, and without it, we would be forced to resort to tasks that are time-consuming and inconvenient, if we are even able to do said tasks in the first place. It becomes just as problematic when intellectual quantity is sacrificed in the name of quality.

However, this either-or approach to intellectual quality and quantity presents a false dichotomy. If our goal is not to focus on one or the other, but rather to achieve a balance between both, it becomes possible to reap the benefits of both worlds, relearning certain brain functions without having to give up the convenience of modern technology. The solution, then, is not to eliminate smartphones and computers, but simply to decrease the amount that we use them.

There are a few ways that we can go about decreasing our screen time. In an article for Time Magazine titled “9 Ways to Finally Stop Spending So Much Time on Your Phone,” author Catherine Price describes some techniques to help keep us off of our devices. The first is to set specific goals when it comes to using technology. “Before you do anything else,” Price writes, “ask yourself: What things do you do on your phone that make you feel good? Which activities make you feel bad? What behaviors or habits would you like to change?” (Price 2018). Turning on the computer or cell phone with a goal already in mind helps to prevent mindless scrolling. Instead, you can log in, focus on the specific task you need to accomplish, and log out. Setting specific goals might also look like limiting screen time. This applies more to technology’s recreational functions. If you don’t want to give up social media altogether, it is useful to set a limit for yourself so that you know when to stop.

The second technique is to create a schedule for your time spent offline. This will help to prevent you from feeling bored and keep you from turning on your device as a means of curing your boredom. Price suggests creating a list of activities to keep yourself occupied without looking at your screen. She explains, “You’re also likely to find yourself with longer periods of time to fill. In order to keep yourself from reverting to your phone to entertain you, it’s essential that you decide on several activities you’d like to use this time for” (Price 2018). This includes typical to-do list activities, such as work and school assignments, or household chores, but it also includes potential hobbies. Having a list handy with alternative, non-digital forms of entertainment will stave off the urge to check your phone out of boredom.

Finally, Price recommends eliminating technological distractions. This means purging your smartphone of any apps that are known to distract you, especially social media. Even if you don’t delete every social media app on your phone, it is still helpful to reduce their number – for instance, one may decide to keep Facebook, but delete Snapchat and TikTok. Reducing distractions also means turning off notifications. Turning off the notifications for particular apps ensures that only the most important messages get through to your phone; a text message, for example, could be important, while a notification from Instagram is an unnecessary distraction. Putting your phone on silent mode during periods of extended focus, such as while working on homework assignments, can also help to improve concentration.

It won’t be easy, and it will certainly require some difficult sacrifices, but with a little hard work, it is possible to rewire our brains. Our attention spans, decision making abilities, communication skills, and all of our other various cognitive capabilities are not, as Nicholas Carr suggests, a lost cause. Our brains’ neuroplasticity means that we can make deliberate lifestyle choices that affect the way we think and behave. There is hope for humanity’s intelligence yet, and the solution lies in what our parents have been telling us all along. Structure and schedule the time you spend online, so that you don’t overdo it. Occupy yourself with hands-on hobbies, like reading (real books) or playing sports. And for goodness sake, get off that phone!

Works Cited
Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic, Jul.-Aug. 2008.

Disconnected: A Month Without Computers. Directed by Reed Langton-Yanowitz, et al. APT Worldwide, 2010. Film.

Hari, Johann. “Your Attention Didn’t Collapse. It Was Stolen.” The Guardian, Guardian News and Media, 2 Jan. 2022.

Morris, Neli, and Jaclyn D. Cravens-Pickens. “‘I’m Not a Gadget’: A Grounded Theory on Unplugging.” The American Journal of Family Therapy 45.5 (2017): 264–282. Web.

Price, Catherine. “9 Ways to Finally Stop Spending So Much Time on Your Phone.” Time, Time USA, 8 Feb. 2018.

Small, G. W., Lee, J., Kaufman, A., Jalil, J., Siddarth, P., Gaddipati, H., Moody, T. D., & Bookheimer, S. Y. (2020). “Brain Health Consequences of Digital Technology Use.” Dialogues in Clinical Neuroscience, 22(2), 179–187.

The Brain, The Block, The Bummed Writer

by Karina Silva

Karina is a psychology major with a biology minor from Winthrop, MA. Karina says that writer’s block has “always been an obstacle for me, especially whenever I have to write papers for classes.” She notes that this paper was one of the first that she felt passionate about writing, and she feels that “researching the neurology behind writer’s block” enabled her to “further appreciate the wonders of the brain” and “develop an interest in research surrounding neuroscience and psychology.” Karina is a Brazilian-American who speaks Portuguese as well as English. Her hobbies include drawing, and she notes that art helps her “describe any thoughts that I am unable to describe in either English or Portuguese.” Her art and writing have helped her take note of “how much I’ve learned and developed throughout my time at UMass Boston so far.”

Karina’s reflection written in class to accompany this essay is available at this link.


There is no doubt that writing can be frustrating. Writing a story is similar to a love-hate relationship. There are times when you feel like blessing the pages you write with your creativity. At other times, you want to rip out pages out of pure frustration at the mere lack of ideas. Writer’s block is seen as a vile disease, and many individuals suffer from it. You’re probably one of those people, and that’s why you’re reading this. Therefore, looking into some theories on how to cure writer’s block may help you come across a solution.

Understanding The Neuroscience Behind Writing
When developing a vaccine that fights against viral diseases, researchers look into the viruses themselves and examine how they impact our cells. While writer’s block isn’t a biological condition, there is some psychology that underlies the matter. So before reviewing some methods, knowing the reasoning behind writer’s block may be essential in the evaluation process.

Writing has been a human skill for centuries, but the physical ability to write is not the only way our brain is involved in writing. James Levy reviews Dr. Alice Flaherty’s The Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain, and he draws attention to an argument that Flaherty makes about the development of writing: “…human beings have been able to engage in verbal communication for an estimated 100,000 years or so […] On the other hand, we are not hardwired to write […] In evolutionary terms, a widely accepted theory posits that human beings acquired the ability to write only within the last 5,000 years or so” (2). Compared to speech production and comprehension, the human brain has only recently acquired the ability to write – hence the development of the frontal lobe and even part of our temporal lobe. The ability to speak and participate in communication is due to the fact that we have specific regions in the brain called Broca’s and Wernicke’s areas. “The Broca’s area is involved in speech production while the Wernicke’s area functions in verbal comprehension” (Guy-Evans). Writing, however, is considered a learned skill by our brain, since it involves making judgements and organizing thoughts. Even though taking part in writing can activate a majority of the frontal lobe, it still requires practice, due to the fact that humans do not have a neurological predisposition towards writing.

Now, even though writing is a newly adapted skill, there are certain features of writing that are considered unique and can explain the sensations that we face during our writing process. Researchers from the University of Greifswald conducted an experimental study in which volunteers wrote a continuation of a short story for two minutes. In those two minutes, Martin Lotze, who was one of the researchers, found brain activation in the occipital lobe and hippocampus for beginner writers. Only in expert writers, who practice writing frequently, did the researchers find activity in the speech areas (Zimmer). Zimmer explains that low activation levels in these brain areas may be the reasoning behind writer’s block.

In summary, there are certain areas of the brain that are activated when we write. The more we practice writing, the more we are able to become fluent and our brain can even draw connections to speech articulation and comprehension. When we undergo writer’s block, certain areas in our brains are not as activated as they should be while we are writing. Most, if not all, techniques and solutions may revolve around increasing levels of activation in the frontal and occipital lobes.

Joseph Jacotot’s “Everything is in Everything”
The “everything is in everything” theory by Joseph Jacotot, which Geoffrey Carter refers to in “Writer’s Block Just Happens to People,” states that “it is always easier to utilize what [learners] already [know]” (Carter 101). Jacotot’s theory holds that by using real-life examples that relate to that person or a specific background, learners are able to further comprehend what is being taught to them. In writing, and more specifically in literary analysis, you may see objects as emblems or even as a turning point. However, Carter states that “it might be useful to experiment with playing with names to get one’s writing process underway” (101). In order to generate ideas and facilitate writing, we should embrace the blank page and observe the names of objects and how they relate to a storyline. These real-life examples can even include names, and by playing around with these names, we can eventually inspire our writing (101).

Geoffrey Carter’s take on the “everything is in everything” theory can slowly help the frontal lobe activate – as one is consciously naming objects and making judgements up until one relates them to the story being written. Due to the fact that we are relating whatever we brainstorm to concepts that we use daily, our ideas will be easier to remember. It’s similar to how mnemonics work.

The “everything is in everything” theory does not guarantee that someone would understand the extent to which their idea may be significant, or even contradictory, to their writing. Bartosz Czekala states that “Mnemonics don’t guarantee understanding. Learning with mnemonics lacks context” (Czekala). Similar to how mnemonics function, working with the “everything is in everything” theory leads to the generation of ideas, but it is unable to provide any significance behind these ideas. For example, let’s say that I think of a flower and then relate it to a relationship I am writing about with the “everything is in everything” theory. With this theory, I can only make the comparison between the flower and the relationship. I can’t come up with an explanation as to why the relationship I am writing about is like a flower. Is it blossoming? Can it easily fall apart? Is the appearance of the relationship like a flower?

Overall, the “everything is in everything” theory relates to writing in that it indirectly states that metaphorizing real-life examples in our stories is a way of brainstorming. The process of comparing and contrasting objects to the story one is writing can help cure writer’s block, but only to an extent. This extent includes brain activation; however, it does not include contributing to the significance of a text, which is a goal writers consider important but may struggle with.

Mihaly Csikszentmihalyi’s “Flow Theory”
Mihaly Csikszentmihalyi explains that while we are doing an activity that we enjoy, such as writing, we enter a state called optimal experience. “Most enjoyable activities are not natural; they demand an effort that initially one is reluctant to make. But once the interaction starts to provide feedback to the person’s skills, it usually begins to be intrinsically rewarding” (Csikszentmihalyi 68). Csikszentmihalyi’s definition of “optimal experience” is when we are able to concentrate on the activity and therefore gain experience that we find rewarding. Certain skill sets that one may find beneficial may be different from what others think. When we write, especially based on an idea we are passionate about, we encounter the optimal experience, even if only for a short period of time.

The optimal experience that Csikszentmihalyi refers to is similar to a high concentration of neurons firing in the frontal lobes, releasing dopamine. Dopamine is a neurotransmitter that allows us to feel motivation. “At the broadest level, dopamine facilitates psychological plasticity, a tendency to explore and engage flexibly with new things, in both behavior and thinking” (Kaufman and Gregoire). During the surge of dopamine, our ability to make judgements is enhanced due to our neurons changing their network connections.

So, what does flow theory suggest we do about writer’s block? When we face a writer’s block crisis, the chances of getting motivated are low. Rather than focusing on generating ideas, as the “everything is in everything” theory does, flow theory is about regaining the willingness to write.

When you look up flow in the dictionary, it is defined as the continuation of something. In flow theory, flow is the process of “[keeping] the flow going. There is no possible reason for climbing except the climbing itself; it is a self-communication” (Csikszentmihalyi 54). In simple terms, flow is the duration of an optimal experience. Through Csikszentmihalyi’s flow theory, we can achieve flow through our body, mind, and memory. We can use our bodies by participating in physical activities and discovering our body’s potential. By using our minds, we can hyperfocus on the stimuli that we encounter daily and use them to describe the sensations we experience. Using our memory to achieve flow is similar to retrieving past experiences, including explicit knowledge (Csikszentmihalyi 33). One may argue that the sensations we take in with our mind, memory, and body can be used to brainstorm ideas. In terms of the writing process, flow theory can be helpful because we can easily explore ourselves and the associations we encounter daily.

Lev Vygotsky’s “Zone of Proximal Development”
If you’ve taken a psychology class, whether in high school or in college, you may have heard of the zone of proximal development. The theory describes three concentric regions, of which the middle one is “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem-solving under adult guidance, or in collaboration with more capable peers” (Vygotsky 86). There is a set of skills that we are able to master because we have the knowledge required; however, we are unable to execute these skills because we don’t know in what manner the knowledge should be applied. By having another individual guide us, we can master the skill and therefore know how to use the information we have in order to demonstrate it.

Most of the time, you have the ideas for your writing, but you are unable to find the right method for expressing them. The zone of proximal development relates to writer’s block in that writers dealing with the condition require a peer to help them express their ideas on paper. More specifically, the writer should receive guidance from “[the] presence of someone with knowledge and skills beyond that of the learner” (McLeod). This way, the writer can find the most effective way to express their idea without straying from the story’s message. The zone of proximal development not only breaks down writer’s block but also allows a writer to express their full potential because they have external support and a source of critique.

The only limit of the zone of proximal development is that, in practice, one may not always have a mentor to guide them, and there is a chance, regardless of how experienced the mentor is, that the writer might be misled. The theory also requires one to have judgments already developed, as it revolves around placing those judgments in order rather than creating them. Therefore, it does not correlate with frontal lobe activation to the same high extent as the “everything is in everything” and flow theories. Overall, the zone of proximal development focuses on the positive effects of collaboration as a solution to writer’s block.

So, is There an Effective Solution to Writer’s Block?
In simple terms, there is not an effective solution to writer’s block. The theories that revolve around getting rid of writer’s block each focus on very specific aspects, and the loss of those particular features can be considered the symptoms of writer’s block. The “everything is in everything” theory focuses on regaining the ability to brainstorm, while flow theory emphasizes utilizing our bodies to gain motivation. The zone of proximal development discusses how we can use collaboration as a way to break out of writer’s block.

Though contrasting in various ways, most of these theories relate back to our brains, as they participate in helping the frontal lobe activate. Both the flow and “everything is in everything” theories allow us to use “metaphorical thinking, which is at the root of all human artistic activity [and] is a complex function involving several regions of the brain. Some people are better at it than others because of their particular brain ‘wiring’” (Levy 3). Most of the judgments, if not all, involved in the “everything is in everything” and flow theories are versions of metaphorical thinking. With metaphorical thinking, these theories allow for frontal lobe activation and therefore the rewiring of neural networks that allows for advanced thinking. The zone of proximal development, on the other hand, can allow for frontal lobe activation, but only indirectly, because one would already have judgments before collaboration. However, collaboration can allow for judgment revision and therefore also allows for neural networking, just not to the extent that flow and “everything is in everything” would allow.

Many writers and psychologists will continue to theorize effective ways to cure writer’s block, and there will always be limitations that come with these theories. The main takeaway is that certain techniques may be helpful for others but not for you. The “everything is in everything” theory may help you when you are dealing with brainstorming troubles. Flow theory can provide support on how to gain motivation. The zone of proximal development can allow others to help you along your writing journey. Whatever aspect of writer’s block you are dealing with, there is always a theory available. Our minds are advanced and require specific treatments for neural network reorganization. Do not be frustrated if something does not work as well as you hoped; instead, try to gain insight from what you have learned.

Karina’s reflection written in class to accompany this essay is available at this link.

Works Cited
Carter, Geoffrey V. “Writer’s Block Just Happens To People.” Bad Ideas About Writing. West Virginia University, 2017.

Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. Harper and Row, 2009.

Czekala, Bartosz. “The Truth About Effectiveness and Usefulness of Mnemonics in Learning.” The Universe Of Memory, 28 May 2020.

Flaherty, Alice. The Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain. Boston, Houghton Mifflin, 2004.

Guy-Evans, Olivia. “Wernicke’s Area Location and Function.” Simply Psychology. 2021.

Kaufman, Scott Barry, and Carolyn Gregoire. “How to Cultivate Your Creativity [Book Excerpt].” Scientific American, 1 Jan. 2016.

Levy, James. “A Neurologist Suggests Why Most People Can’t Write – A Review of the Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain.” Review of The Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain. 2004.

McLeod, Saul. “The Zone of Proximal Development and Scaffolding.” Simply Psychology, 2019.

Vygotsky, L. S. Mind in Society: The Development of Higher Psychological Processes. Edited by Michael Cole et al., Harvard University Press, 1978.

Zimmer, Carl. “This Is Your Brain on Writing.” The New York Times, 20 June 2014.

Fake News’ Negation of a Useful Education

by Vance Naftal

Vance is an international relations major from Baltimore, MD. Vance chose to write about the negative effects that misleading/false news stories can have on an individual, especially when in college, because he “sees society moving farther and farther away from truth and leaning more into sentiments based on emotions — which is directly at odds with the point of a higher education.” Vance started college at 16 years old. At 17, he decided to take a gap year (which became a gap of 4 years) to enter the workforce. At 21, he found himself yearning to be a part of an academic institution again and transferred to UMass Boston, as he “missed learning in a social setting surrounded by peers with different opinions and cultures.” He writes that “transferring to UMass Boston was the best decision I’ve ever made” and that “the community at UMass Boston is truly an accurate picture of what the community of Eastern Massachusetts looks like, and I’m proud to be a part of it.”


During the events leading up to the presidential election of 2016, many of the problems endemic to the early 21st century were unearthed and displayed for all Americans to see. On one side we had cold, hard facts, and on the other side we had a full-scale attack on those facts and the people who benefited from them. The campaign of Donald Trump suspended the neoliberal-era style of policy debate and replaced it with unorganized attacks on credibility. The election of Trump may have been a shock to many Americans, but his disregard of facts in order to appease a voter base largely uninformed on the issues of the election was an example of opportunism within the new media environment, one that politicians should have started paying attention to long ago.

The reality of the modern era is that the bulk of an individual’s free time is spent on the internet. The internet does not treat all people equally. It has been crafted so that each person’s experience is tailor-made specifically for them. A series of algorithms shows consumers content that makes them feel comfortable and does not at any point challenge their beliefs. This means that if a person starts using the internet with a certain political stance, their internet experience will revolve around content that shares those same views. This, more often than not, leads consumers to absorb poorly sourced, false information, or “fake news.” This fake news is seriously harmful to society and in many cases negates or skews the knowledge students absorb while undergoing higher education, which has led modern society to become more polarized than ever.

Let me present an example of how fake news exists within a conversation with the average person. Last week, while I was working my shift managing a CVS, I had a conversation with an older woman who was very concerned for her safety. When I inquired why she was concerned, she informed me that there were thousands of Haitian “drug traffickers” at our Texas border on the verge of invading our country. I respectfully declined any further talk of this; I could tell the conversation was about to take a racist turn for the worse, but I couldn’t help but find myself curious at the notion of thousands of invaders at the Texas border. Upon researching this topic further, I found that around 12,000 Haitian refugees were seeking asylum at the time of this writing, a result of significant civil unrest following the assassination of the Haitian president (Alden). As a member of the United Nations, the United States guarantees the right of asylum to refugees, so why was this event framed as an “invasion” by this frightened woman’s source? The simple answer is fake news. Unfortunately, the infiltration of fake news within society does not stop with people who have never received a higher education. Fake news has, in fact, rooted itself within the minds of students, who should theoretically question these sources as well.

People have always found ways to exploit the gullibility of the masses to their benefit, but this is far simpler in the 2020s than it has ever been before. We now live in a world where most people learn the bulk of their common knowledge from informal online sources rather than peer-reviewed journals. Stephan Lewandowsky and his coauthors write in the Journal of Applied Research in Memory and Cognition, “In this world, power lies with those most vocal and influential on social media: from celebrities and big corporations to botnet puppeteers who can mobilize millions of tweetbots or sock puppets… experts are derided as untrustworthy or elitist whenever their reported facts threaten the rule of the well-financed or the prejudices of the uninformed” (Lewandowsky et al. 355). Simply put, the political and social climate has become such that to disagree with a galvanized individual using facts is treated as arrogance, while simply believing whatever your news source of preference provides you is “free thought.” This is inherently backwards, but reason has nothing to do with it. It is the result of an American populace that has experienced more crises in recent memory than it can keep track of. Most people have very little conception of the roots of these crises, so they turn to news sources that make them feel comfortably informed.

In order to understand the context in which modern American students live, we must first discuss the standard of information accepted as knowledge in the 2020s. So how does an uninformed person find a source, and why are many of these sources harmful? The answer lies within a phrase that has become an idiom over the last 20 years: Google it. Nowadays, when most people do not know something, they simply conduct a Google search to find an answer. For questions with simple answers, like “How many feet are in a yard?” or “What year did World War II begin?” there will be pretty unanimous answers. On the other hand, for questions whose answers involve multiple, at times subjective factors, like “Why did Russia annex Crimea in 2014?” or “Was Abraham Lincoln an honest person?” Googling leads to often convoluted, opinion-based answers.

The problem with this is not rooted in the fact that there is bad information; there has always been bad information out there. In fact, many of the presidential elections of the late 19th and early 20th centuries resembled the chaos of the 2016 election. The issue is that, in a world where nearly all people have access to information that is correct, false information still finds a way to prevail. This is a result of something called “search engine optimization,” or “SEO,” a term that essentially describes the way content is shaped so that algorithms decide what people “should” consume on the internet. Search engines like Google are companies; they are not charities. While many people may assume that the ease of access to Google means that Google does not draw profits from simple searches, that is not entirely correct. Google’s search engine profits are mainly derived from ads that companies submit to be displayed to people using Google. Companies want to see profits coming in from their Google ads in order to continue paying Google for its services. In order for those companies to have a higher chance of making a sale from their ads, Google has to do some legwork. This is what the algorithms are used for. Google ads are not going to try to push swim trunks to someone who lives in Alaska. They would instead display ads prioritizing cold-weather items like road salt or antifreeze (Pennycook and Rand 2522).

This works the same way for news services. Chances are that if someone uses Google to read left-wing news sources like NBC and CNN, they would stay away from right-wing news sources like FOX. Google knows this. If someone regularly reads FOX articles, Google will advertise more right-wing sources, giving the consumer a very one-sided view of how the world works and eliminating all unsolicited access to alternative opinions. At first, this may not seem like an issue. After all, if someone likes FOX, chances are they would never click on NBC or CNN anyway. I would tend to agree with this too. The problem begins when the news service advertisements gradually become more and more polarized, leading people toward sources that are downright conspiratorial, which is exactly how the Google algorithm is programmed. When this becomes the standard of information that floods a person’s Google search, quick Google searches no longer provide factual answers to somewhat ambiguous questions. They instead provide highly partisan, often poorly sourced answers that hold no bearing in the realm of academia or truth in general (Pennycook and Rand 2523). This will not be apparent to the consumer, though. They will think that their sources are as accurate as an academic journal. They looked for an answer: they Googled it! That is the standard of finding answers in the 2020s.

One such fallacious source is the popular right-wing outlet called Prager University, or PragerU. Despite the name, PragerU is not an accredited institution and does not provide any sort of academic classroom setting online or in person. Its main message opposes immigration and downplays crises like the coronavirus, climate change, and institutionalized discrimination. These political motives result in PragerU essentially being a right-wing tabloid rather than a credible news source. Joseph Hall-Patton, a University of New Mexico PhD student and historian with an MA in history from California Polytechnic State University, took an in-depth look at one of PragerU’s videos about “myths” commonly associated with slavery. In his examination of the video in question, Hall-Patton pointed out that nearly every point made in the video was historically incorrect and strewn with made-up facts, or fake news. One of Hall-Patton’s major points was that PragerU’s sources were either nonexistent or lacking credibility altogether. For example, one source claimed to be referencing a “renowned historian” whom Hall-Patton, a professional historian, had never heard of. Upon looking up the historian, he found that the source was a highly controversial figure with little renown to speak of within the academic community. Another issue Hall-Patton ran into while checking PragerU’s sources was that they had cited material that was completely irrelevant to their video while claiming that it backed up the facts stated (Hall-Patton). For example, let’s say that I was talking about Amazonian army ants for a nature video, but when looking at my sources for said video, it was discovered that I had sourced an academic journal about North American fire ants. For an individual rushing to finish a high school paper, that may be understandable if not forgivable, but for a self-proclaimed news source, that is an unacceptable error that dismantles academic integrity and prestige.

When it comes down to it, what does this mean for the college student of the 21st century? Well, a Boston-based news station, WGBH, surveyed a sample group of students and found that 59% of them believed there was a partisan divide on campus. Of that 59%, 77% identified as liberal and 15% identified as conservative (Parker). These students did not develop their political views on campus; they arrived with them. Before even stepping foot on a college campus, they had been fed material by Google’s revenue-driven algorithm that led them to their current political stance. There is one major reason why this is problematic. University is supposed to be a space where learning is nurtured and grown, but when students arrive with closed minds (no matter how open-minded their sources tell them they are), unable to receive any information contrary to what they have absorbed as literal children, they are unable to nurture or grow any new perspectives at all. While news has always been highly partisan, we are now living within an era in which the tools we use to obtain news are programmed to push a malleable mind further and further away from reason into a realm of conspiracy (Rhodes 14).

Let’s take a look at the subject of international relations as an example. There are three main political theories within international relations: Liberalism, Realism, and Marxism. Liberalism tends to be a moderate to center-right perspective, Realism is conservative, and Marxism is far left. The goal of an international relations curriculum would be to teach the functions, roles, and beliefs of all of these theories as objectively as possible, leading students to eventually discover their own beliefs. But when students arrive with fallacious notions of what these theories actually are and have already shut down any desire to engage in discourse on the matter, the purpose of education has essentially gone down the drain.

The thing about polarity is that it eliminates opportunities for innovative compromise. Universities are supposed to create a safe space for thinking outside of the box before one enters the professional world. Many times, these thoughts can lead to relationships and innovations that shift the way society functions for the better, but this can only occur if the student body as a whole is open to thinking outside of the box. The reality of the internet is that it strives to put everyone within a box to drive profits within its own sector. With the exponential growth of and pull towards fake news outlets, this has begun to create a closed-minded society without any real desire to change its bad habits and commence social progress. I believe that if this trend of polarization does not change soon, constructive education may enter a dark age in which very few actually receive any benefit from higher learning at all.

Works Cited
Alden, Edward. “Why Are Haitian Migrants Gathering at the U.S. Border?” Council on Foreign Relations, 1 Oct. 2021.

Hall-Patton, Joseph. “Debunking PragerU’s ‘History of Slavery’ With Candace Owens.” YouTube, uploaded by The Cynical Historian, 14 Oct. 2021.

Lewandowsky, Stephan, et al. “Beyond Misinformation: Understanding and Coping with the ‘Post-Truth’ Era.” Journal of Applied Research in Memory and Cognition, vol. 6, no. 4, 2017, pp. 353–69.

Parker, Kim. “The Growing Partisan Divide in Views of Higher Education.” Pew Research Center, 2019.

Pennycook, Gordon, and David G. Rand. “Fighting Misinformation on Social Media Using Crowdsourced Judgments of News Source Quality.” Proceedings of the National Academy of Sciences, vol. 116, no. 7, 2019, pp. 2521–26.

Rhodes, Samuel C. “Filter Bubbles, Echo Chambers, and Fake News: How Social Media Conditions Individuals to Be Less Critical of Political Misinformation.” Political Communication, vol. 39, no. 1, 2021, pp. 1–22.

Your Teachers Are Bullshitting You

by Kylie Medeiros

Kylie is a biology major from Fairhaven, MA. Because she was specifically told for this assignment that students “couldn’t bullshit this essay,” she was reminded of all the times throughout her academic career when other teachers said things of a similar nature. “Being short on time, and desperately in need of a research topic,” she found that it would be “incredibly ironic to bullshit my paper in the form of a research on the topic of BS itself.” While writing her essay Kylie found it interesting how much creative freedom she was given to research and write about something she truly cared about. This essay is special to her because of its significance in her life and the ways in which it can be meaningful to others. Kylie is extremely passionate about science and is looking forward to becoming a forensic pathologist in the future. She looks forward to meeting new people from all walks of life during her time at UMass Boston.


Teachers are known to be infamous bullshit detectors; every student has heard the familiar phrase “we can tell when you write bullshit” from at least one of their teachers at some point in their academic career. As students, we are told that we can’t do a number of things when it comes to our assignments and, more specifically, our essays. You can’t write an essay the day before it’s due, you can’t write a paper on something you aren’t interested in, you can’t go into a project blind. Throughout my years in school, I have heard each and every one of these claims. And to that I have always said…watch me.

On Friday, March 25, 2022, I found my exigence for our biggest writing assignment yet. Earlier that same week, the assignment was introduced to us; it was a research paper. And that was it – we the students got to choose our research topic as long as it fell within the realm of linguistics. It was an attempt to make the project easier on us, following the concept that if we chose our own research topics, we may be more motivated and interested in the copious work that goes into a research paper. While the sentiment was there, for indecisive people like me, it felt like a nightmare. What was I passionate about? What did I thirst to learn more about? Quickly I could feel myself spiraling downwards into a pit of questions that I knew would only trap me further in writer’s block. The sinking feeling was all too familiar and with research needing to be at least started for homework, I was beginning to feel desperate for a topic – until I thought about my past writing experiences in depth. I thought about how my most recent piece of writing was so different from any other English piece I had done before, and it was because of my teacher and the relationship promoted in the classroom – we were encouraged to be ourselves in our writing and to be bold in our writing decisions. And so, I wondered, how does the relationship between a student and a teacher impact the quality of a student’s writing, if at all?

Finally! I had done it – I had my inquiry. With the basis of my research discovered, I set forth using the library’s database for scholarly articles in search of sources to act as secondary research. In doing my research for homework, I soon realized that I had run into an issue: there was a significant lack of sources with any usable type of data. I could find nothing that would help me formulate an essay in the coming weeks. I made an effort to find the articles most relevant to my inquiry, to get my homework done at the very least, but I knew I could not continue. My topic had to change.

After watching a presentation in the library about how to search for the best sources in the easiest ways, I felt taunted, knowing that I would not be moving forward in my search but back to square one, thinking about my topic. When I expressed my issues to my professor, she sympathized and offered help by asking what I was passionate about or found interesting. The problem was, I wasn’t sure what I wanted to know; I couldn’t pinpoint a topic that could hold my interest for the coming weeks. “You can’t write about something you’re not passionate about,” my professor said after I voiced my lack of interest. Well, that certainly wasn’t true; I knew I could write a paper on a topic I didn’t find interesting. In fact, I had done it many times before and likely would again. Her statement got me thinking about the countless times I had heard teachers assume that we can’t write papers on books we didn’t read or about topics we barely understand; they assume we can’t bullshit our way through it. Once again, I felt the cerebral glow of an idea forming and realized I had found another inquiry: “How can a teacher tell if a student is bullshitting, if they can at all?”

Bullshitting to get through an essay isn’t a rare occurrence: it’s something that nearly everyone has done or will do, whether it be babbling to reach word counts or lying about the amount of information known about a topic. In fact, when asked, 30 out of a total of 36 people (83.3%) admitted to having bullshitted a paper before, much like me (Medeiros 2022). So, if bullshitting is so common, surely teachers mean it when they say they can tell? But it may not be so simple. If teachers were certain in their bullshit-detecting abilities, students who do it should, in turn, receive lower grades on their essays. This, however, was not consistent with the data collected in my survey, or with my own past grades: 87.9% of students surveyed reported average grades in the A to B range. Why then, if bullshit is so discouraged, do teachers continue to reward those students with good grades? Would that not only incentivize them to keep doing what they are doing? Maybe the truth is that teachers aren’t so sure what bullshit means to them.

Bullshit is certainly a difficult term to define, especially when it’s more of an idea or concept that varies from person to person rather than a concrete adjective that can be used to explain the quality of a paper. In “Antecedents of Bullshitting,” the author, John Petrocelli, as he searches for the causes of B.S. writing, offers insight from philosopher Harry Frankfurt to create a partial definition of bullshit. He states: “…bullshitting is defined here as communications that result from little to no concern for truth, evidence and/or established semantic, logical, systemic, or empirical knowledge. When people intentionally or unintentionally express ideas or information in ways that are disconnected from a concern for evidence or established knowledge, they are in essence bullshitting” (Petrocelli 250). This idea is further explored by Joshua Cruz, a Texas professor, when he concurs that “…students play by the rules of the classroom and produce a piece of writing that meets academic standards, but they care nothing for the topic that they have chosen to write about” (Cruz 8). It can be inferred that bullshit is what the author believes the audience wants to hear, and that the process of writing it is centered around convincing the reader to believe them. Essentially, bullshit is a type of falsification in which the author must appear more credible than they actually are in order to form an opinion about something. This is a researcher’s definition, though, so what do the students qualify as bullshit? What exactly do they believe makes their essays “bullshit”?

To gain a better understanding of the concept, I asked the bullshitters themselves (mostly college students) what they considered bullshit. Based on the survey I conducted to gauge how students actually view bullshit, I found that they tend to believe that bullshit writing is work that is passionless, unrevised, and likely done with little to no preparation, or even right before class. There seemed to be a general consensus that bullshit was not synonymous with poor writing and that bullshitting did not include plagiarism. In a sense, bullshit writing is a last resort: it is written with little concern for the truth of the information presented and focuses more on convincing the reader that the writer knows what they are talking about. The survey revealed that 18 out of 36 students (50%) focused more on what the teacher, who is also the audience, wanted than on the actual information they were presenting or directly applying what they had learned (Medeiros 2022).

Unsurprisingly, the definition of bullshit provided by a few anonymous teachers varied from the student definition and did not hold the same sentiments. Educators tended to interpret bullshit as writing that is “good at first glance” but lacking structure, flow, and substance, with one teacher stating, “Often bullshitters connect ideas from different sources but there is no flow to the paper. Also, the evidence may not support the idea because there is little understanding of the topic” (Medeiros 2022). While there may be some truth to this, it is not unlikely that a bullshitter would be able to provide evidence that supports their idea. In fact, in “Pulling Essays From Your Ass: A Guide on How to Bullshit Your Way to an A,” Emily Wilburn shows how bullshitters may even change their initial claim to match what their chosen evidence reports. It is possible and likely that there is not only one way to bullshit, but the way most students do it tends to follow a pattern of trying to impress the audience for a good grade, rather than simply trying to fill a page regardless of the sense it makes. Having a better idea of what bullshitting meant to the perpetrators and victims alike, I could dive deeper into the phenomenon, which had me wondering: why do they do it? Despite the deterrent that teachers use, telling students they’ll know what is bullshit, students take that risk, so why?

Why anybody does anything is a challenging thing to determine, especially with the variety of situations and differences in the way people think; even so, there tend to be trends in behavior. Bullshit appears to be a defense mechanism of sorts, a last-ditch effort for a situation. In “Understanding Undergraduate Bullshit as a Function of Language and Subject Position,” Joshua Cruz, a professor at Texas Tech University and Doctor of Education, offers the idea that bullshit happens as a result of a lack of knowledge or interest. He states that “several empirical studies contain interviews with students who openly admit to bullshitting an assignment when they felt they simply could not respond in any other way…” and that “bullshit is a response to something that perhaps we do not want to admit as educators: we can be boring” (Cruz 2). Because students are generally uninterested in the material they are being forced to write about, and tend to be substantially uninformed about the topics they are forming opinions on, bullshit is produced to fill in the gaps. The reasons provided by Cruz are strong possibilities, and I agree that they are the primary causes of bullshitting. When I asked college students about the reasoning behind their own experiences with bullshit, most said their motive came from the fact that they felt forced. While individual responses varied from procrastination to lack of interest to poor understanding, the broad range of explanations could be boiled down to the simple fact that they felt obligated to turn something in.

Writing bullshit to complete an assignment is still taking the effort to complete the assignment; in fact, bullshitting and trying to make it look like one knows what they are talking about sounds incredibly difficult. If so, is there a possibility that bullshit may be useful in some ways? Journalist Emily Wilburn, in “Pulling Essays From Your Ass: A Guide on How to Bullshit Your Way to an A,” explains how to properly bullshit a paper and shows that it is more work than one may think: “By the time you stop writing, your goal is to have a set of complete body paragraphs. Each paragraph should be focused on one idea and should contain all the information and quotes that demonstrate that idea. Again, your writing does not have to be fancy, it only has to be able to explain your ideas to anyone who reads it” (Wilburn). Her article shows that bullshitting is a skill in itself and takes its own type of thought. Because of the manipulation of information and the knowledge of the rhetorical situation used to formulate good bullshit, it can nearly be compared to rhetoric. Cruz backs up this idea, claiming that the use of bullshit as rhetoric comes from an absence of power in the students and that “the use of bullshit itself is a gesture of power” (9). Surprisingly, in the survey I conducted, the opinions of the students on how beneficial bullshit could be were extremely varied, despite over half of them having already admitted to bullshitting before. Even more surprising was that 18 out of 30 responses were certain that bullshit had the opportunity to be useful (Medeiros 2022).

Using this information, coupled with my sources and prior experiences, I have determined that bullshit is a form of rhetoric and can be very advantageous. As Linda Flower and John Hayes indicate in “The Cognition of Discovery: Defining a Rhetorical Problem,” the best writers are those who consider the audience: “This difference matters because, in our study, one of the most powerful strategies we saw for producing new ideas throughout the composing process was planning what one wanted to do to or for one’s reader” (Flower and Hayes 27). The authors believe that manipulating the audience for a specific literary goal is an example of rhetoric because of the awareness of the entire rhetorical situation necessary to do so – much like when bullshitting. Because writing coherent bullshit requires the author to use information to formulate an opinion that can convince the reader that the author knows enough about the topic, and requires knowledge of the project’s rhetorical situation, it can be used to show one’s understanding or to develop a better understanding of rhetoric.

Having a better idea of what exactly bullshit is, and under what circumstances it tends to occur, leads me back to my initial inquiry: how do teachers know when students bullshit? To put it simply, they don’t. Bullshit has become a word synonymous with laziness and a lack of effort; it can mean many things, but at its core, bullshit is a falsity, whether it is done well or not. Teachers are not able to detect bullshit; they are only able to detect poor writing and poor skill. Just because a paper was bullshitted doesn’t mean that it is bad; in fact, the point of bullshit is to be good! The entire premise of bullshit is to convince the audience of your knowledge and authority, and if the teacher has seen through that, then you have done a bad job, either in the linguistic aspect or in the rhetorical aspect. Bullshitting is forming an opinion on the basis of knowledge that is assumed to be true rather than proven, meaning that in order for a professor to fully be able to detect bullshit, there must be an inherently correct answer. As long as the paper contains the data necessary to back up one’s claims and provides appropriate reasoning in a well-written and effective paper, a teacher will not be able to detect bullshit. I am a firm believer that teachers should seek out poor understanding of the rhetorical situation and weak writing abilities, rather than “bullshit.”

Works Cited
Cruz, Joshua. “Understanding Undergraduate Bullshit as a Function of Language and Subject Position.” ResearchGate, Sept. 2018.

Flower, Linda, and John R. Hayes. “The Cognition of Discovery: Defining a Rhetorical Problem.” College Composition and Communication, vol. 31, no. 1, 1980, pp. 21–32.

Medeiros, Kylie. “Bullshit!” Student Questionnaire, 6 Apr. 2022.

Petrocelli, John V. “Antecedents of Bullshitting.” Journal of Experimental Social Psychology, vol. 76, 2018, pp. 249-58.

Wilburn, Emily. “Pulling Essays from Your Ass: A Guide on How to Bullshit Your Way to an A.” Medium, 11 Jan. 2018.

Moral Punishment

by Anna Krasnoslobodtseva

Anna is a biochemistry major from Milton, MA. She decided to write about the prison system and punishment because of conversations about morality during her Composition II class, and because she believes “the prison system in the United States is often discriminatory and does not work toward the rehabilitation of people like it should.” She felt that writing this was “more free than a traditional structured high school essay” and that it “involved a lot of learning, by means of stepping outside of what I am comfortable with writing and exploring new research methods, going to the library, and taking on a voice.” Anna speaks three languages and loves to spend time outside, walking, hiking, and exploring. She is inspired by nature and has a goal to visit all the National Parks and hike the Appalachian Trail.


I’m sure we have all heard the famous saying, “An eye for an eye, a tooth for a tooth.” We all inherently understand and want people to get what they deserve; it is the basis of our justice system. Well, Gandhi is credited with adding a key idea to the phrase: “An eye for an eye will make the whole world blind.” If we return someone’s pain directly back to them, everyone suffers. And yet, if someone punches us, our only natural instinct is to punch them back, or even punch them back harder, knock out a few eyes or teeth. That want for vengeance is a part of every human, especially those with siblings. In his essay “The Moral Instinct,” Steven Pinker cites Bertrand Russell, who said, “The infliction of cruelty with a good conscience is a delight to moralists— that is why they invented hell” (2). We want revenge; we want evil people to rot in hell because it makes us feel better and morally superior. Revenge is a natural human emotion, but just because it is natural, does that make it correct? If someone does something morally wrong, that does not mean others can do something morally wrong to punish them. How do we punish someone in a moral way, and do they deserve to be punished in a moral way?

Before examining how to punish someone, we must analyze why humans punish in the first place. As Russell puts it, punishment is just a delight to moralists, a fun activity, like going to an amusement park. However, I think the idea is a little more complicated than just wanting a fun excuse to feel morally superior. Punishment can be a source of vengeance and a want for justice, but is there more to the story than just this idea? If you ask parents why they put their child in time-out, they will likely tell you it is because the child did something wrong and needs to learn their lesson. The same logic applies to our justification for punishing adults and criminals. In her book The Case Against Punishment, Deirdre Golash questions our foundations of punishment and whether it is even necessary. Golash explains, “The idea that punishment does more good than harm corresponds to the purpose of preventing crime. And the idea that punishment benefits the offender corresponds to the purpose of making the offender a better person” (5). We like to justify that the reason someone gets punished is so that crime can be prevented, or to make the offender a better person by helping them “learn their lesson.” It makes sense: if you ask anyone to justify why they are being punished or punishing someone, they will likely explain it using the same words. But is this the true reason why we punish, or is it merely a justification?

Think again about the punching example, if your fists are not too tired. Our innate response to being punched is to punch back. We would even do it without thinking or justifying it. It is a purely emotional, even animalistic response. Maybe our response to wrongdoing, punishment, is more emotional than the calculated response we want to believe it is. Rob Canton comes to the same conclusion. In his article “Crime, Punishment and the Moral Emotions: Righteous Minds and Their Attitudes Towards Punishment,” Canton reasons that, “Since there are no adequate grounds for punishment, he looks for origins instead and finds them in innate ‘retributive emotions’ that favour reciprocity and tit-for-tat, and in the social and cultural adaptations that modulate and refine these inherited and ‘hard-wired’ dispositions” (57). Retributive emotions. We all have them. We punch our sister, and when she starts to cry we tell our parents that she punched us first. First we act, then we justify. Another important issue arises when Canton says, “there are no adequate grounds for punishment.” But what about murder, rape, and kidnapping? Do none of these crimes require punishment? If nothing requires punishment, then why do we require prisons?

You might be thinking: well, of course we need prisons; where else do we put those who are dangerous to our society? Prisons are necessary and important for punishing those who did harm, keeping immoral people out of society, and teaching them morality. We may believe that prisons are meant to teach morality to those who are incarcerated, to teach them their lesson and make them into better people. But this is all, unfortunately, a dream. James Logan questions the need for and effectiveness of the United States prison system in his book Good Punishment. Logan notes, “Each year some 644,000 persons are incarcerated for various offenses while some 625,000 are released onto the streets. It is widely estimated that about 50 to 75 percent of released inmates will be returned to prison within a few years” (62). Recidivism is a big issue with the prison system, as those who have been released often commit new crimes and are placed back into prison. If the prison system were working properly, teaching people their lessons on how to be more moral, then there would be no problem of recidivism. But, as shown by the statistics, more than half of the people going to prison do not learn their lesson and gain a new sense of morality as we would like to assume. This is only further proof of the retributive emotions that Canton explains. Prison does not work to better people but rather to make those who put them there feel better.

Recidivism might not seem like a large issue. If someone did not learn their lesson the first time, then they can try again. After all, they are getting a second chance to be a better person, right? Wrong. Logan explains, “A serious social consequence of all of this is that a significant fraction of offenders will find the obstacles to obtaining basic shelter, education, and employment (all of which enhance the establishment of stable family, communal, and societal relationships) insurmountable” (95). Prisons do not help to rehabilitate people in the slightest. It seems that they do quite the opposite, leaving those previously incarcerated homeless, illiterate, and even less productive members of society than when they were locked up. Is it moral to force people to pay such a high price for a crime that they committed? If someone made a bad decision, does that mean they deserve to be confined to a life of suffering even outside of prison?

If the reason for prison is not to morally correct someone, then the true reason for prison is the want for justice. We can look at the famous Stanford Prison Experiment (SPE), conducted in 1971 by Philip Zimbardo in a mock jail in the Stanford University basement. The experiment involved creating a simulated prison with college students acting as prisoners and guards. The guards quickly turned violent, abusing and torturing the inmates. The results of the experiment can reveal much about human psychology regarding prison and the treatment of inmates. In her essay, “The Stanford Prison Experiment’s Torture Hermeneutics: Difference and Morality in the US University, 1968 to 9/11,” Danielle Bouchard reasons, “By Zimbardo’s own reckoning, the SPE demonstrated that evil is not a phenomenon of individual pathology, but rather of extreme social situations that could cause anyone (or those Zimbardo refers to as ‘good people’) to engage in acts they would otherwise find abhorrent” (407). The college boys who were chosen to be prison guards eventually tortured the prisoners because of their positions of authority, so much so that the experiment needed to be ended prematurely. It shows that high positions of authority and feelings of superior morality can lead even “good people” to do unthinkable and immoral things. The guards’ quick turn to torture demonstrates that prison is not entirely about correcting and making people better, but also about revenge. As Golash observes, “Justice is not limited by personal responsibility or proportionality to the original offense; it is enough that the person on whom vengeance is taken is on the side of the enemy” (6). We want to punish those who have taken the side of the enemy, not just those who have done something morally wrong per se. Many times prisoners are viewed as the enemy because they acted in a way that our own morality would not allow us personally to act. They are seen as the “other,” and we ourselves feel morally superior to them.

The question is: are we truly morally superior to the prisoners? Bouchard examines this question by stating that “[a]ny deed that any human being has ever committed, however horrible, is possible for any of us under the right or wrong situational circumstances” (410). It is a terrifying thought that anyone, even you yourself, could commit a terrible crime. It raises the question of the existence of free will, but we will not venture into that dark corridor. Instead, I think it is important to understand that although we may not currently be in a prison, we are not morally superior to those who are. It may be delightful, as Russell asserts, for moralists to inflict pain with a good conscience, but our conscience is no better than that of those we may view as immoral. You or I may not be any better than or superior to someone currently locked up in prison; we may have just stumbled into this place through the alignment of the right circumstances. That is to say, in punishing others, we should have no moral superiority or want for revenge guiding us in our actions, but rather a want for the person incarcerated to become a better or more moral person than the one they came to prison as. As hard as it may seem, we must abandon our retributive emotions, because the second we get a sense of moral superiority is the second that we become the prison guards in the SPE, torturing those below us because chance decided that we would be above and they would be below.

What must we do to fix the prison system so that it rehabilitates those in it instead of simply punishing them? Prisons in the United States, as they are today, are not a good solution; they simply do not work to rehabilitate and teach people morality as we all hope. There must be a better system that actually works to rehabilitate prisoners. Halden Prison in Norway is designed to be a more humane prison. A video created by Christophe Haubursin highlights the ways in which Halden is designed to be more humane for the prisoners, ultimately so that it feels less like a prison. Designers chose more natural, less rugged materials, like glass. Halden has a campus design with many windows that take advantage of natural lighting. It is designed so that there is less conflict between the prisoners and more interaction with the guards. The video explains, “Being imprisoned is the punishment, the architecture does not have to be” (Haubursin). Someone imprisoned for a crime already does not have a normal human life, trapped in an institution; the least they can get is a little dignity, to feel like they are in a decent place and not a concrete barricade. Not only does the prison look and feel less prison-like, but it also works. Norway’s recidivism rate dropped to 20% from the 60-70% high seen in the 1990s (Dorjsuren). Bolorzul Dorjsuren highlights the successes of Halden and similar prisons in Norway in his article “Norway’s Prison System Benefits Its Economy.” Dorjsuren remarks, “The main reason for these statistics is due to a focus on ‘restorative justice,’ an approach that identifies prisons in the same category as rehabilitation facilities” (Dorjsuren). There, prisons are places of rehabilitation rather than places of punishment, of the assertion of dominance over prisoners, or of the permanent or long-term confinement that, as seen in the United States, leaves prisoners unable to reenter society. If the prison system in the United States were changed to be more like the one in Norway, prisoners could emerge as better and more moral people, which is what I, for one, want for those who have committed a crime: everyone deserves an education and a second chance at life.

Even if some people can and will be rehabilitated in a prison like Halden, some people might not “deserve” this treatment. The problem still stands: what do we do with the serial killers, the murderers, the child rapists, and those who kill and torture? Do they deserve the same treatment back? Currently, 27 states say yes, poke them with a needle or electrocute them (Death Penalty Information Center). Or, as the prisoner Richard Moore in South Carolina has recently chosen, a firing squad (Bogel-Burroughs). Moore killed a store clerk by shooting him in the heart, and now Moore will receive the same treatment at the hands of state correctional officers. This brings us back to the original question of “an eye for an eye.” Does Moore, or any other prisoner on death row, deserve to die because they killed someone? And is it moral to kill someone under any circumstances? Do the correctional officers deserve to go on death row because they also killed someone, or is it all fine because Moore killed someone first? Most people would say yes. A report from the Pew Research Center on the matter found that “[a]mong the public overall, 64% say the death penalty is morally justified in cases of murder, while 33% say it is not justified. An overwhelming share of death penalty supporters (90%) say it is morally justified under such circumstances, compared with 25% of death penalty opponents” (Pew Research Center). Over half of the people polled say that it is not only okay, but also morally justifiable, to kill someone if they killed someone else. Most people, myself included, would say that killing is immoral. But in the case of someone who has done something immoral, most people also say that killing the criminal is not only okay, but morally justified.

Talk about an eye for an eye. If you kill someone, people then consider it okay to kill you. So in what ways do our morals influence decisions of lethal importance, such as capital punishment? Capital punishment is supposed to be about fairness. In his article, Canton explains this using psychologist Jonathan Haidt’s work on the foundations of human morality. One of Haidt’s six pillars of morality is fairness/cheating, which explains our approval of capital punishment. Canton observes: “‘The language of balance, equilibrium, and geometry pervade analyses and descriptions of retribution.’ While what counts as proportionate punishment varies across cultures ‘what is clear is that the principle of retribution is tied to a principle of proportionality’” (63). As humans, we want the punishment to be fair; we want the offender to receive the same treatment they gave. We have a lot of phrases for this: the golden rule, a taste of their own medicine, “an eye for an eye.” It is an inherent understanding among humans and one of the pillars of our moral foundations. According to Dorjsuren, however, the current longest prison sentence for a case of murder in Norway is only 15 years. In Norway they feel that 15 years of life is enough punishment; it is fair enough for murder. So, maybe the capital punishment system is incorrect in the United States. Maybe our system needs to be abolished, and people should not need to pay for a crime with their life. Indeed, capital punishment might not be correct, but it is important to understand that it is only human, justifiably moral even. Capital punishment, a seemingly immoral practice, can be justified with the moral foundations. And in our eyes, even if killing is seen as immoral, our moral pillar of fairness often overrides this for the sake of equity.

Moral punishment: the phrase almost seems contradictory. It seems nearly impossible to punish someone in a way that is seen as moral. Prisons ultimately cause more harm to the inmates and leave them unable to rejoin society, and recidivism shows that punishment does not help at all to improve a person’s morality. In Norway, by contrast, prisons are designed to rehabilitate the inmates rather than punish them, which works to reduce the rates of recidivism. However, prisons in the United States are not likely to change anytime soon because of our retributive emotions and the pillars of morality. It is a human idea to want fairness even if it means something morally wrong is being done. We can create a laundry list of reasons why it is okay to kill someone; we can morally justify almost anything. But ultimately, our reason for wanting to punish is our feeling of retributive emotions, our want for revenge. It’s time to invest in some protective goggles, because we will still want an eye for an eye, even if it means the whole world will need to be blind.

Works Cited
Bogel-Burroughs, Nicholas. “South Carolina Prisoner Chooses to Be Executed by Firing Squad.” The New York Times, 15 Apr. 2022.

Bouchard, Danielle. “The Stanford Prison Experiment’s Torture Hermeneutics: Difference and Morality in the US University, 1968 to 9/11.” Journal of American Studies, vol. 53, no. 2, Cambridge University Press, 2019, pp. 401–27.

Canton, Rob. “Crime, Punishment and the Moral Emotions: Righteous Minds and Their Attitudes Towards Punishment.” Punishment & Society, vol. 17, no. 1, SAGE Publications, 2015, pp. 54–72.

Death Penalty Information Center. “State by State.” 2021.

Dorjsuren, Bolorzul. “Norway’s Prison System Benefits Its Economy.” The Borgen Project. Jan. 2021.

Haubursin, Christophe. “How Norway Designed a More Humane Prison.” YouTube, Vox, 12 April 2019.

Golash, Deirdre. The Case Against Punishment : Retribution, Crime Prevention, and the Law. New York University Press, 2005.

“Learning English – Moving Words Mahatma Gandhi.” BBC News, BBC.

Logan, James Samuel. Good Punishment?: Christian Moral Practice and U.S. Imprisonment. William B. Eerdmans Publishing, 2008.

“Most Americans Favor the Death Penalty Despite Concerns about Its Administration.” Pew Research Center, 13 July 2021.

Pinker, Steven. “The Moral Instinct.” The New York Times, 2008.