The Yale Herald

The Future Fund Runs Out: EA at Yale

Design by Etai Smotrich-Barr

Interviews have been lightly edited for clarity.

In December of my freshman year, my friend and I dragged our feet into the last meeting of the Effective Altruism Intro Fellowship for the Fall 2022 semester. During our dinner a few hours earlier, she had explained to me that she was a member of the Effective Altruism organization—EA for short—where every Wednesday, in a basement room in WLH, she spent an hour discussing everything from animal rights to utilitarianism to the existential risk of artificial intelligence. 

Tonight’s topic: critiques of EA. At the head of the table, the fellowship leader, a Yale College undergraduate, sat with four of his fellows. I sat at a desk in a corner of the room. He was quick to remind everyone that the reading group was far more effective when everyone did the readings, and then he directed everyone’s attention to a Vox article from the column titled “Future Perfect” written by journalist and Effective Altruist Dylan Matthews. The leader opened up his Yale EA Intro Fellowship document guide, which was modeled after other EA curricula around the country. The room fell silent as he posed questions to the fellows. 

Since its start nearly 15 years ago, EA has built a global brand for itself, spanning over 200 chapters worldwide that focus on facilitating reading groups, mobilizing activists, and guiding Effective Altruists to their perfect career paths. According to its own website, EA “is a project that aims to find the best ways to help others, and put them into practice.” If that sounds vague, that’s because it is. Members of EA don’t unify around a particular solution to the world’s problems but around a way of thinking. Effective Altruists are passionate about a wide range of issues: preventing the next pandemic, providing basic medical supplies to those in need, ending factory farming, improving decision making, and helping create the field of AI alignment research. But the ways they go about this are far richer in thought than in action. 

There is a particular archetype that joins EA. Following the lead of philosophers Will MacAskill and Toby Ord, who founded the movement at Oxford in 2009, Effective Altruists are typically young, highly educated, agnostic, left-of-center, white men who are dissatisfied with politics and self-identify as intellectuals. For these people, EA promises a chance to become the change they wish to see in the world by discussing their ideas and finding ways to rationalize issues in order to solve them effectively. Sam Bankman-Fried, the disgraced founder of the crypto trading platform FTX, fit this archetype to a T. 

Bankman-Fried was a student at MIT in 2013 when he met MacAskill through MIT’s Epsilon Theta (a co-ed fraternity where many of the original members of FTX first met). MacAskill was visiting MIT in hopes of spreading Effective Altruism to potential believers. A gifted mathematician, Bankman-Fried also held a deep passion for animal welfare, so much so that he was seriously considering a career in the field. MacAskill explained to Bankman-Fried that he could do far more good for animal welfare if he used his skills more effectively by going into finance and donating the money he made to charities focused on the cause. This idea, known in EA circles as the “earn to give” principle, was a key aspect of EA in the early days but has been slowly phased out of the movement. 

During Bankman-Fried’s time operating Alameda Research (whose CEO was Bankman-Fried’s then-partner Caroline Ellison) and FTX, a trading firm and cryptocurrency exchange, respectively, EA helped him gain credibility and investors. For everything EA had given him, Bankman-Fried tried to give back. In February of 2022, Bankman-Fried created The Future Fund, a subsidiary of FTX that made grants and investments to “improve humanity’s long-term prospects.” By June, The Future Fund had donated $132 million to charity and was projected to donate $1 billion by the end of the year. Over a quarter of the grants paid out by The Future Fund went to charities controlled by Effective Ventures, the U.K.-based charity chaired by MacAskill; $14 million went to the Center for Effective Altruism. Additionally, Bankman-Fried paid out $200,000 to Vox in order to encourage it to write articles on EA, prompting the creation of the column “Future Perfect.” In November 2022, both FTX and Alameda filed for bankruptcy. It was revealed that Bankman-Fried and other executives had used billions of dollars in customer money from FTX to cover up financial failures at Alameda. 

The very public implosion of Bankman-Fried’s businesses has brought a lot of negative publicity to the global EA movement, amplifying criticisms of EA that have existed since its creation. Outsiders to the organization often view EA’s principles as very rigid. EA skeptics use a popular hypothetical scenario to highlight this idea: Let’s say you are walking down the street with five dollars in your pocket, and an unhoused person asks you for money. EA purists would say that you should not give that money to the person, but, in fact, keep it and donate it to an organization where you can be sure of how the money will be used. Giving the money to the unhoused person would be ineffective because you do not know how the recipient intends to use it. 

Shelly Kagan, Clark Professor of Philosophy at Yale, presents an alternative approach: “There’s room for the moral view that says it is more important to give the five dollars to the unhoused person in front of you than to some charity, even if you were confident that [the latter would] do more good. There’s something about our essential humanity that you should not turn your face away from somebody.” Kagan, although not an Effective Altruist himself, has been looked to by the national EA community because of a seminar he taught during the spring of 2021 called “Ethics and the Future.” Kagan told me, “Some of my philosophical views have…broad affinities with EA. And, occasionally, my stuff shows up on reading [lists].” The seminar’s reading list has been posted on the EA forum (a popular platform where many Effective Altruists communicate) and has been added to a compilation of EA syllabi and teaching materials. Additionally, Kagan has two children who have attached themselves to the movement. 

Yet Kagan’s class challenged long-termism—the ethical view adopted by many Effective Altruists that focuses on bettering the long-term future—by providing readings that both support and deny the idea. Long-termism argues that existential risks, even extremely unlikely ones, are where we should be focusing most of our efforts, because the near-infinite value of every possible future human life, multiplied by even a near-zero probability of catastrophe, still means that reducing those risks does enormous good in expectation. This thinking underlies much of EA’s emergent activity, specifically in the field of AI alignment—reducing the extinction risk AI poses to humanity. Skeptics of long-termism question our ability to foresee the future accurately and argue that mathematically calculating future happiness results in what philosopher Derek Parfit called the “repugnant conclusion”: prioritizing a future where one trillion people live minimally happy lives over a present where eight billion people can live well. Effective Altruists saw Kagan’s class as merely another platform to continue talking about their ideas in a more legitimate context. EAs thrive off being recognized by and associated with esteemed academics at brand-name institutions; it gives them reputability—just look to MacAskill and Ord at Oxford in 2009. 

Following the fall of FTX, critics have called attention to the somewhat strange behaviors that some members of EA follow. Derek Thompson of The Atlantic even called EA a “cult.” They wish to spread their movement to every corner of the world; they want to educate those unfamiliar with EA by giving them free books and running fellowship programs, and they increasingly target young people. Kagan holds a milder view: “[EA] is a group of people who have a shared outlook on life…I’d probably call it a philosophical view…Most of them believe in something like utilitarianism. The broader category is called consequentialism.” Utilitarianism is the idea that we must act in a way that will maximize benefit for the greatest number of people; consequentialism is the theory in normative ethics that the moral value of an action or decision should be judged by its consequences. Whatever you call it, EA appears insular to those outside the movement, with a unified ideology, tight-knit community, and set curricula. Bankman-Fried’s collapse did not start the issues within EA but, rather, exposed them to the public’s eye. 

Yale’s relationship with EA goes far beyond Kagan’s teaching. The Yale chapter of EA began in 2014 after one of the founders attended a lecture at the Yale Law School delivered by Peter Singer, an Australian philosopher and principal leader of the movement, on how to donate to charity effectively. A principal member of Yale’s EA chapter, whom I will call John, told me that “The talk spoke to me…[Singer] referenced…students who were doing a very early version of earning to give, so giving away a pretty high fraction of what they were making to charity. I was inspired by that, because I thought, this is something I could do with my life. I was still trying to figure out my career. But the idea that no matter what I did, I should be looking to support some of the best charities on earth spoke to me. And things followed from there.” John described the early days of the chapter as “very scrappy,” saying, “It was a few philosophy professors and some college kids who were excited, for the most part.” Although the movement was incredibly young, EA groups were starting to sprout up at other colleges like Harvard, Princeton, and Denison.

John explained Yale EA’s early activities as “raising awareness. We held a few fundraisers on campus. We did…giving games, which is sort of like handing people money as they walk by and inviting them to donate to one of a few charities. We used that as an opportunity to talk about how you would choose a charity…” He continued to tell me that the reception of EA in 2014 was fairly neutral, given that the movement hadn’t yet gained a public reputation because of how new it was. He maintained, however, that “the biggest benefit of the club was introducing a bunch of people…to these ideas.” Among the founding group of EA members at Yale, “almost all of them are still involved to some capacity with EA.” John told me he “worked at Open Philanthropy, which is a very large funder in the Effective Altruism space.” In fact, he told me he “met Sam [Bankman-Fried] a few times. I was invited to work at Alameda to research. I turned it down. It seemed a little bit too risky at the time. I don’t regret that now.” 

Before we got off the phone, John sincerely thanked me for “telling his story.” The tension between wanting to be recognized and wishing to remain anonymous seems to fall in line with a larger phenomenon in the EA movement: a misunderstanding of how to take accountability. 

Nearly 10 years later, Yale EA still closely follows its original intention. I sat down with two members of the current Yale EA board, Arjun Warrior, TD ’26, and Asavari Saigal, GH ’26, who explained that Yale EA currently functions to facilitate reading groups that introduce students to EA’s ideas. Yale EA offers the Intro Fellowship every semester and summer, and according to the website, “Applicants should be driven to do as much good as they can, open-minded and eager to update their beliefs in response to critical discussion and holistic evidence, and ready to commit about 3 hours per week across 8 weeks.” 

Saigal was recruited to join the Intro Fellowship the summer before her first year. She explained, “For me, doing that fellowship was a lot less like being introduced to new ideas and more like, Oh, my God, other people are thinking the way I am thinking. So, I didn’t really think of it as learning, either. I thought of it more as finding it.” Saigal was living in India at the time, so she often joined the discussions in the middle of the night, finishing them in the early hours of dawn. 

Warrior is currently the coordinator of the Intro Fellowship and tells me that the fellowship attracts all kinds of people: “There are a lot of people who have a really strong quantitative background, and [like] this idea that you can sort of use logic, use reason, use evidence to figure out what’s the best way to make an impact. And then there are people, [like me, who have] this real sense of urgency about making change better…You open up the newspaper, and there are tragedies happening left and right. It can be paralyzing to recognize that there’s so much going on [as] you’re 19 years old, on the cusp of stepping into trying to figure out how to make a difference.” 

Not all people who complete the fellowship will continue with EA, and that is not considered a failure. Saigal said, “If you continue with EA or not, that’s really not an indicator of the success of the fellowship. So as long as you leave with some sort of opinion on how to do good and start thinking about it more often. Just let that idea permeate into your life. That is when it’s successful.”

Yale EA also ran a semester-long In-Depth Effective Altruism (IDEA) Fellowship, which was discontinued in the fall of 2023. Beyond the reading groups, Yale EA organizes community-building events, like weekly dinners and retreats, in order to give those who have gone through the fellowship an outlet to continue to engage with EA in a less structured setting. Saigal said that recruitment did get harder after the FTX collapse, but she was mainly frustrated with the fact that Bankman-Fried was tied to the organization in the first place. Warrior shared a similar sentiment regarding Bankman-Fried: “He’s a bad actor, he did terrible things, and he deserves this negative attention. That doesn’t mean that the rest of us should stop trying to make the world better.” EA’s global brand shares a similar attitude and has made a show of arguing, through blog posts and shifting rhetoric in its chapters, that the ideas of EA are strong enough on their own to prevail through this period of unfavorable attention. The official line is that Bankman-Fried was a bad person who did bad things and would have been a bad actor wherever he attached himself. In their view, EA is the victim, not the perpetrator. 

Campuses in and around Silicon Valley are known to have a very strong EA presence, given that most of the work being done around them is related to, if not directly connected with, EA’s areas of interest. In comparison, Yale’s chapter is small. On the EA forum, there are 175 posts that mention Yale’s chapter, whereas Berkeley and Stanford have nearly 500 each. The bigger chapters have action groups “focused on running alternative protein research and…artificial intelligence alignment,” Warrior told me. At Yale, members “mainly just run…reading groups.” However, some members of Yale EA take it upon themselves to do more than attend their weekly reading groups. Saigal told me, “I am part of a lot of Dwight Hall’s organizations. I am also on the board at the homeless shelter and the soup kitchen… It’s kind of hard to believe that I’m making a difference when I’m just like, talking about things. When you’re actually doing them, it’s different.” The choice to take action at Yale is in the hands of the individual member rather than that of the organization. 

Even as they claim a separation between Bankman-Fried and EA, many EAs are clearly hesitant to identify themselves with the movement. For this article, I contacted several people who were once closely associated with EA at Yale and beyond. Some responded; more didn’t, and a few declined outright, saying that they were distancing themselves from the organization. But even the people who picked up my calls were hesitant to label themselves “Effective Altruists”; to call oneself an “Effective Altruist,” one must embody and absorb the principles completely. It feels cowardly not to be able to stand behind an organization’s values. The Effective Altruists I spoke to talked about EA as if it were a spectrum, but if it is, in fact, a spectrum, there should be no issue attaching oneself to a movement that theoretically allows for a good bit of nuance. The reluctance lingers even though the EAs of Yale (and everywhere else) insist that membership hasn’t declined and that they believe in the brand’s ability to persist against bad media attention.

There is a strange party line within EA: devoting your time to the EA movement does not necessarily mean you believe in the organization’s core values, nor does it make you a true “Effective Altruist.” In the world of EA, your beliefs are separated from your actions—this is how EAs explain how they feel about Bankman-Fried. Although Bankman-Fried supported the movement by creating The Future Fund and actively identifying himself with the EA community, they say he never truly believed in the principles of EA, or else he would never have embezzled customers’ money to save his own business. In fact, Saigal expressed a general frustration in the EA community about Bankman-Fried’s actions: “It started conversations among us about why people feel like it’s okay for them to use the term EA to do fraud and crime.” This is curiously naive—Time reported in March 2023 that EA leaders, including MacAskill, had been warned about Bankman-Fried’s questionable business practices. But as long as the getting was good, EA was happy to have him as an exemplar of how much good it was possible to do while also being rich. 

Just two years ago, Yale EA had an office in the Bank of America building on Church Street that looked over the New Haven Green. High up in the tower, Yale Effective Altruists would meet with a sprawling view of campus behind them. Now, in a post-Bankman-Fried world, EAs gather in the depths of WLH, where the windows barely reach the sidewalk. 
