The Baltic Way

Demonstrators in Estonia join hands and wave the flag of the dissolved (but now restored) Republic of Estonia. (Jaak Künnap)

At around 7:00 PM local time on 23 August 1989, an ambitious photojournalist finds himself in a helicopter flying over a major highway in Latvia. Peering out of the helicopter’s window with his camera in hand, he can hardly believe his eyes. On the ground below lies a massive human chain of demonstrators holding hands along the length of the road, its ranks stretching as far as the horizon. The historic protest this photojournalist witnessed is a key moment in the history of Eastern Europe, and an important step towards dismantling the old social and political order imposed by the now-weakening Soviet Union.

Known today as the three “Baltic states”, Estonia, Latvia, and Lithuania were, for much of their recent history, under the thumb of a larger, more powerful political entity. They were, after all, situated close to several major historical powers, including Sweden, Germany, and Russia. Still, these countries had long been adamant in preserving their national identities and cultures, even in the face of powerful opposition. In few other instances has this been more true than during the late stages of the Baltic states’ membership in the Soviet Union.

On 23 August 1939, Nazi German leader Adolf Hitler and Soviet leader Josef Stalin came to an agreement known as the Molotov-Ribbentrop Pact. Under this agreement, the land lying between the two countries, which included Poland, the Baltic states, and others, would be divided into “spheres of influence” for Germany and the Soviet Union. While Poland was split between the two powers, the Baltic states were left entirely to the Soviets. Using both military and political means to pressure them, Stalin essentially bullied the Baltic states into forming the Estonian, Latvian, and Lithuanian Soviet Socialist Republics. The Sovietization of the three nations included the deportation of tens of thousands of citizens who were deemed “hostile” by Soviet officials.

Fast forward fifty years from that fateful agreement and one finds a Baltic region weary of decades of Soviet control, which, among other perceived violations of Baltic autonomy, included the purposeful introduction of Russian migrants (who influenced local policy in favor of the central Soviet authority in Moscow) and policies that suppressed the languages and cultures of the three Baltic nations. Meanwhile, the Soviet Union officially maintained that all three nations had voluntarily joined the USSR, rather than being forced to do so by Nazi or Soviet political maneuvering. Leading up to the 50th anniversary of the Pact, tensions and rhetoric surrounding reform or even independence in the Baltic states surged. Soviet authorities condemned these movements as harmful “nationalism” and prepared for a possible military crackdown on the region.

With the significance of the 50th anniversary in mind, local officials in all three nations began to plan one massive demonstration they hoped would capture the attention of the entire world. Though it is not clear who came up with the idea of a massive human chain, the concept was communicated to political and social organizations across Estonia, Latvia, and Lithuania, and a plan was approved. The chain would link Tallinn, Riga, and Vilnius, the three capitals of the Baltic nations. Organizers determined that in order for the plan to work, around 1,500,000 participants would be needed. 

According to most estimates, the expected numbers were met, and perhaps even exceeded, on that Wednesday evening. Along the highways connecting the Baltic capitals, demonstrators held hands and sang national hymns. The flags of all three nations flew proudly in the wind as reporters swarmed to capture the spectacle. Solidarity demonstrations were held in cities across the world, including Moscow (although police quickly dispersed that gathering). Though the event itself lasted little more than 15 minutes, up to two million people were estimated to have taken part in this historic demonstration, amounting to a quarter of the total population of the three Baltic nations.

The event was highly publicized in media across the world, bringing international attention to the issue of Baltic independence. Though the demonstration was initially denounced by Soviet media, which again claimed that the protest was little more than a manifestation of harmful nationalist rhetoric, it did cause Soviet leader Mikhail Gorbachev to reconsider the issue of Baltic independence. In December 1989, mere months after the protest, an official condemnation of the secret protocols of the Molotov-Ribbentrop Pact was signed by Gorbachev. After free elections were held in the Soviet Union for the first time in 1990, pro-independence candidates in the Baltic countries were voted into public office. And finally, by the end of the following year, the Republic of Estonia, the Republic of Latvia, and the Republic of Lithuania were established and internationally recognized as free and independent countries.

Today, the Baltic Way remains one of the largest peaceful demonstrations in history. It has been a source of inspiration for protesters in places such as Catalonia and Hong Kong, who have emulated its human chain. The independence movements of the Baltic nations contributed to the total collapse of the Soviet Union and its stranglehold on Eastern Europe. All three Baltic nations now sit comfortably outside the Russian sphere of influence as members of both NATO and the EU, and are among the most developed and prosperous countries in the world.

Charles Hamilton Houston: “The Man who Killed Jim Crow”

Houston delivers his argument. (Charles Hamilton Houston Institute)

The era of “Jim Crow”, a period of American history with widespread societal and legal discrimination against African-Americans (especially in the South), is generally considered to have ended with the Civil Rights Movement in the 1960s. Famous leaders from this period, such as Dr. Martin Luther King Jr., Malcolm X, and others are widely celebrated, since their actions took place at the climax of the movement. Less appreciated, however, are the many leaders who paved the way for this celebrated generation of activists, many of whom, including the subject of today’s article, never lived to see the fruits of their labor.

Charles Hamilton Houston was born in 1895 to a middle-class African-American family in Washington, D.C. Houston’s father, William, was an attorney. Houston was described as a brilliant child, graduating from Dunbar High School at the age of 15. He then went on to Amherst College in Massachusetts, where he graduated as one of six valedictorians in his class. Following a brief stint teaching English at Howard University, Houston applied to serve as an officer in the United States Army upon the country’s entry into World War I. This was a formative experience for the young Houston, who witnessed the constant bigotry and racism present in a still-segregated army. He became determined to use the law to right the wrongs he saw everywhere. Houston returned to the United States in 1919, shortly after the war’s end. He enrolled in Harvard Law School, became the first African-American to serve as an editor of the Harvard Law Review, and graduated with honors in 1923. Houston was soon admitted to the District of Columbia Bar, and began to practice law alongside his father. He also aided in the creation of the National Bar Association, which, unlike the dominant American Bar Association, recognized and accredited African-American attorneys.

Beginning in 1924, Houston returned to Howard University, this time teaching law instead of English. Mordecai Johnson, the university’s president, saw potential in Houston and allowed him a significant role in reforming Howard Law School. Although it was responsible for training three fourths of the country’s Black lawyers, Howard Law School still only held part-time night classes. After Houston was appointed vice-dean (effectively with the powers of a dean) of the law school in 1929, he helped bring about its transition into a full-time law school. In his new role at the head of the central institution of African-American legal education, Houston envisioned a new generation of Black lawyers who could use their skills for the advancement of their people. Among his students were James Nabrit, Oliver Hill, Spottswood Robinson, and Thurgood Marshall. Houston’s role in fighting Jim Crow, however, was not limited to the classrooms of Howard University. By working with the attorneys he himself had trained at Howard, Houston was able to make considerable strides towards racial equality under the law.

Resigning from his post at Howard in 1935, Houston would spend the remainder of his life working on civil rights law. He assumed the position of first special counsel to the National Association for the Advancement of Colored People (NAACP). One of his first cases following his departure from Howard was Hollins v. State of Oklahoma, which concerned a Black man sentenced to death by an all-white jury. Houston and his all-Black defense team were able to prevent the man from being executed. Though it was a goal of Houston’s to rid American juries of racial exclusion, it would be decades before that became a reality. Another of Houston’s primary concerns was the segregation of public schools, which had been deemed constitutional by the 1896 Supreme Court case Plessy v. Ferguson under the doctrine of “separate but equal”. He would dedicate much of his work to attacking this doctrine, which he believed was the keystone of much of Jim Crow’s stranglehold on the South. Alongside Thurgood Marshall and the Baltimore branch of the NAACP, Houston argued Murray v. Pearson before the Maryland Court of Appeals. The case concerned Donald Gaines Murray, an applicant to the University of Maryland School of Law who was rejected due to his race. The court ruled in Murray’s favor and ordered the school to admit him. This ruling, however, did not mean the end of segregation in America’s, or even in Maryland’s, schools. The court noted that the Equal Protection Clause of the Fourteenth Amendment applied only because the University of Maryland School of Law was the only law school in the state. In theory, a separate Black-only law school could legally exist in Maryland. Nonetheless, the case was heralded as a victory for Houston and his devoted followers.

The precedent of outlawing segregation in institutions that were the only ones of their kind within a state was carried to the federal level thanks to Houston’s work in Missouri ex rel. Gaines v. Canada. This case was very similar in background to Murray, with the added impact that it began to raise doubt within the Supreme Court of the United States about the legitimacy of “separate but equal”. Still, the doctrine remained the official legal precedent in American law. Concurrent with his struggle to desegregate American schools was Houston’s battle against racist housing covenants. These were legally binding contracts attached to properties that restricted who could purchase them, which often meant discrimination against prospective Black homeowners. Using these covenants, real estate developers could directly control the demographics of the neighborhoods they built. In 1948 the Supreme Court ruled in Shelley v. Kraemer that the enforcement of these covenants by state or local authorities was unconstitutional, ending a decades-long battle by Houston and the NAACP. Though Houston himself did not argue before the court, his advice to, and connections with, the Howard Law School alumni who did are another example of his vital role in dismantling Jim Crow on multiple fronts.

Charles Hamilton Houston died of a heart attack on April 22, 1950, at the age of 54. Just four years after his death came the landmark decision Brown v. Board of Education, which successfully overruled the doctrine of “separate but equal”. The case was headed by the director-counsel of the newly established NAACP Legal Defense Fund, and one of Houston’s most loyal disciples, Thurgood Marshall. In 1967, Marshall would be appointed by Lyndon B. Johnson as the first African-American justice to serve on the Supreme Court of the United States.

We owe it all to Charlie.

– Thurgood Marshall

The Indonesian Genocide

Detained members of PKI’s youth branch await their fate (The New York Times)

There were few parts of the world left unaffected by the global political and ideological struggles of the Cold War. Countless nations faced both external and internal conflicts that–though often seen at the time as events relevant only to the apparent parties–were really the result of larger machinations by the two global superpowers of the time, the United States and the Soviet Union. Lives were lost, peoples were uprooted, and countries suffered lasting damage to their social fabric, all in the name of power games played by those who were completely removed from the destruction they allowed and insulated from the consequences they created. In few other instances was this more true than in the mass killings that took place in Indonesia in the 1960s, in which upwards of a million suspected communists or opposition sympathizers, ethnic minorities, and even alleged religious apostates were systematically murdered by the Indonesian government. While much of the Indonesian leadership under the prominent general Suharto is undeniably and hugely responsible for the atrocities that took place, the role of several Western countries that facilitated the killings out of political interest cannot be ignored.

Now the fourth most populous country in the world, Indonesia did not take its modern form until 1945. The nation’s many islands were under Dutch colonial control for well over three centuries, dating back to the lucrative spice trade that contributed to the Dutch Golden Age in the 1600s. Intense political pressure from the United States (which pushed a heavily anti-colonial agenda following WW2), as well as surging internal conflicts, forced the hand of the Dutch leadership, which granted absolute sovereignty to the Indonesian people in 1949. However, even before receiving full international recognition, the government of Indonesia was already taking form under Sukarno, a seasoned and idealistic statesman who had played a major role in earlier independence movements. Sukarno would be Indonesia’s president for the next two decades, and his policies would be significant in setting the stage for the events to come. His most consequential plan was that of “Guided Democracy”, which, after being implemented in 1959, aimed to resolve the political tension that plagued the early days of Indonesia. Though Sukarno would assume near-dictatorial power, he promised that each of Indonesia’s three main factions, the nationalists, the Islamists, and the communists, would have representation in his cabinet, similar to how the elder leadership of an Indonesian village would operate. The plan, however, failed. In the end, it was conflict between the factions that unraveled Sukarno’s Indonesia, with plenty of outside parties willing to take advantage of the chaos.

The first nail in the coffin for Sukarno came when he began an intense anti-Western political campaign, especially against the United States, which ensured that any sympathy he had from the Western bloc was now gone. Not long after Sukarno’s first denunciations of Western imperialism, a radical militant group executed six top Indonesian generals suspected of harboring pro-Western sympathies, under the impression that Sukarno himself would support them in stopping a potential coup. This incident is now known as the 30 September Movement. Despite the movement’s hopes, Sukarno distanced himself from the attacks, leading many to wonder who was responsible. Suharto, the well-respected general who dispersed the militants from Jakarta’s central square, immediately blamed the PKI, by far the largest communist faction in Indonesia and, with membership nearing four million, the third largest communist party in the world at the time. Nationalist, Islamic, and even Christian groups joined in blaming the PKI, distributing propaganda that demonized the party. Sukarno, himself harboring Marxist sympathies and wanting to keep the peace, discouraged any violence against the PKI. However, with Suharto rapidly gaining influence over the army and other factions, Sukarno was powerless to stop the events that would soon unfold.

Beginning in 1965, with the military firmly under his personal control, Suharto set in motion a brutal campaign that would claim the lives of more than one million people. The Indonesian Army, with the support of various local groups, captured and summarily executed suspected PKI sympathizers, as well as members of other factions that were deemed enemies of Indonesia. Besides the PKI itself, political groups with strong ties to the party, such as Gerwani (the Indonesian Women’s Movement) and the BTI (the Indonesian Peasants’ Front), and even members of the PKI’s youth branch, had their ranks decimated. Some ethnic populations, such as Chinese-Indonesians, were also targeted due to their suspected anti-Indonesian or pro-communist sentiments. In addition to political or ethnic groups, some units within the Indonesian military had PKI affiliations or connections to the 30 September Movement, making them targets for internal purges. It should be noted, however, that the mass killings in Indonesia from 1965 to 1966 did not constitute a civil war. Open combat between opposing factions was very rare, and the groups that carried out the violence were given complete political legitimacy.

Victims of the killings included women, children, and the elderly, all of whom met a variety of brutal ends. After victims were interrogated, tortured, bayoneted, clubbed, and/or shot, their corpses were often left mutilated in the streets, or dumped into nearby caves or rivers. Though higher-ranking PKI members or sympathizers almost always faced execution, up to 750,000 people were also imprisoned in long-term jails, some of whom were not released until the 1980s. In addition to the mass killings and detainments, damage to homes, farms, and other property was common, leaving surviving victims with little to return to.

While Suharto’s vicious power grab is without a doubt the main reason the atrocities were committed, he was not the only one who stood to gain from the removal of the PKI and its supporters from Indonesia. The collapse of Sukarno’s government and the rise of Suharto happened to occur during one of the most tense periods of the Cold War. The Western powers, particularly the United States and the United Kingdom, had much to gain if Indonesia’s massive communist support base were eliminated. The American interest in the matter was fairly obvious: an opportunity to destroy the PKI was an opportunity to ensure that Indonesia would be loyal to the United States and hostile to the Soviet Union. Even before the walls started to cave in on his administration, the CIA sought to remove the communist-sympathetic Sukarno through a variety of means, even resorting to making a fake pornographic film that featured his likeness. But before long, Suharto was doing much of the CIA’s work (that is, killing communists) on its behalf, and American operatives were more than willing to assist in propaganda efforts that strengthened the general’s cause. The CIA provided Suharto with equipment that would aid in spreading anti-PKI sentiment among the Indonesian people, as well as hit lists containing the names of thousands of PKI members. Meanwhile, US embassy officials offered to suppress internal media narratives about Suharto’s massacres, in order to ensure that Suharto would not be portrayed as a butcher. While the US certainly played its part in facilitating the killings, it was not the only country with a stake in the matter. The Indonesian government under Sukarno had long been in conflict with Malaysia, which at the time was a member of the British Commonwealth, and thus a valued ally of the UK. Like their American allies, British and Australian foreign officials used inflammatory propaganda to further the destruction of Sukarno and the PKI.

In the end, Sukarno’s presidency collapsed completely, with Suharto seizing power in 1967 and completely reorganizing the Indonesian government under his “New Order”. He would rule Indonesia for the next 31 years. The chief perpetrators of the mass killings were never punished, since Suharto and his top officials became the very authority that would have had to punish them. In 2016, an international human rights tribunal in The Hague ruled that the Indonesian government was guilty of crimes against humanity, while also calling out the United States, United Kingdom, and Australia as complicit in those crimes. The tribunal had no actual legal authority over any of the parties in question, so those deemed responsible have yet to face any consequences. The Indonesian killings of the 1960s are both a cautionary tale about the corruption that a quest for power can create, and a call to action to make world leaders more accountable for their actions and more aware of the grave consequences their decisions can have.

Rani of Jhansi

A 1901 British sketch of Rani, described as India’s Joan of Arc (Indian Express)

A near-mythical figure in India, Rani of Jhansi is closely associated with the struggle for independence from Britain, and yet she is seldom remembered in the West beyond perhaps a footnote in the centuries-long occupation of the Indian subcontinent. From her bold defiance of her country’s traditional gender norms to her martyrdom at the hands of the British colonizers, her story is one from which all can draw inspiration.

Manikarnika Tambe was born on November 19, 1828, in the city of Varanasi in the Benares State of Northern India. At the time of her upbringing, much of the Indian subcontinent was under the control of the British East India Company (EIC), either through direct rule or through suzerainty over local autonomous kingdoms and states. With a series of trading forts along India’s coastal cities, the EIC began its presence in India as one of several European trading corporations interested in lucrative commerce with the dominant Mughal Empire. But thanks to internal weakness within its opponent’s ranks, and the adoption of a clever “divide and conquer” strategy of winning over local lords, the EIC was able to expand its control of India slowly over the course of a century, defeating the otherwise militarily and even technologically superior Mughal Empire. Thus, by the time Manikarnika was growing into a young woman, the banner of the “Company Raj” (the name given to the EIC’s territory) flew over much of India.

Born to an advisor of a local ruler, Manikarnika enjoyed a family status that allowed her to receive an education and gave her firsthand exposure to the type of strong leader she would aspire to be. In addition to reading and writing, Manikarnika learned marksmanship, horseback riding, and other arts traditionally taught to young men. At the age of 14, she was married to Gangadhar Rao Newalkar, the ruler of the independent princely state of Jhansi. Following the Indian tradition of women taking a new name after marriage, the new bride was now known as Rani Lakshmibai. Despite being the official queen consort of Jhansi, Rani was far from the ideal image of femininity of her time. She did not wear a veil over her face, nor did she shy away from public interactions with commoners and officials alike. Rani’s reign, however, was to be short-lived, as the issue of succession soon became pressing for her and others. Under a policy mandated by the EIC called the Doctrine of Lapse, any state without a clear and legitimate heir was to be absorbed into British control. Unfortunately for Rani, she and her husband were unable to produce an heir, and as Rao’s health worsened, the couple adopted a five-year-old son to serve as heir to the throne. After the death of Rao in 1853, however, Lord Dalhousie, the incumbent British Governor-General of India, rejected the legitimacy of the new heir and annexed Jhansi into EIC territory anyway. A shocked and betrayed Rani was removed from the Jhansi royal palace, given a pension, and expected to live the rest of her life in relative obscurity.

Rani’s fury and desire for justice made inaction impossible, and luckily for her, such sentiments were widespread throughout India, especially in the North. British-imposed social reforms, harsh taxes, and an overall opposition to the presence of a colonizing foreign power all helped ignite the Indian Rebellion of 1857, the largest of its kind since British hegemony began. Rebelling factions included Mughal remnants, mutineers from the EIC’s armies, and the armies of various states, kingdoms, and rulers, including Rani’s own. Alongside the EIC, several jurisdictions, such as Bombay, Madras, and Bengal, provided support against the rebels, while many others remained neutral. Regardless, Rani was determined to restore the rightful ruler of Jhansi to his throne. As fighting raged throughout the subcontinent, and mutineers massacred the British garrison in Jhansi, Rani reassumed control of her state in the summer of 1857, with the intention of holding it until a deal could be made with the British. But when General Hugh Rose and his forces arrived in March 1858, accounts claim that Rani had a change of heart and refused to surrender the Jhansi fortress to Rose. A massive bombardment and siege ensued, followed by brutal street fighting. While Jhansi’s troops, led and inspired by their queen, fought valiantly, Rose eventually won the day.

From Alisha Haridasani Gupta of the New York Times:

As the town burned, the queen escaped on horseback with her son, Damodar, tied to her back. Historians have not reached a consensus on how she managed to pull this off. Some contend that her closest aide, Jhalkaribai, disguised herself as the queen to distract the British and buy time for her to get away.

In the end, the British took the town, leaving 3,000 to 5,000 people dead, and hoisted the British flag atop the palace.

After fleeing the Jhansi fort, Rani met with other rebel leaders at the town of Kalpi, where another clash with British forces would take place. A defeat there forced Rani and her allies to flee and regroup in nearby Gwalior, where another army was raised to face the British once more. In what would become one of the final acts of the rebellion, Rani’s forces engaged British cavalry in June 1858, with Rani herself leading the charge. The force was defeated, and Rani was mortally wounded in combat. According to legend, she was leading a charge while dressed in male military garb when she was struck by enemy fire. Hugh Rose, her longtime military adversary, commented that “The Indian Mutiny had produced but one man, and that man was a woman.” Per Rose’s personal account, Rani was given a burial with full honors. Not long after the defeat at Gwalior and Rani’s death, the rebellion was quelled, and British rule over India would continue for nearly another century, under the Crown rather than the EIC.

Though she was unable to secure all of her aims, Rani of Jhansi’s legacy is one of defiance, both in the face of a great colonial power and of a stringently patriarchal society. Besides being immortalized in Indian history, she has been the inspiration for countless films, novels, and songs. Curiously enough, she is also the namesake of the Rani of Jhansi Regiment, an all-female guerrilla force raised by Indian nationalists during WW2 to aid in the Axis fight against the British Raj. As present-day news headlines discuss the controversy of British colonialism, as well as the injustices of the Indian patriarchy, Rani of Jhansi is a captivating figure who may yet serve as a symbol for a new India.

Morally “Ambiguous” Genocides

Illustration of the 1804 Haitian Massacre (Wikimedia Commons)

Genocide, in both the public conscience and international law, is considered among the worst crimes for which an individual or state can be responsible. It is a crime perhaps unmatched in its barbarity, destruction, and divergence from the ideal of global peace and tolerance. In the eyes of some, however, there are genocides whose circumstances allow a certain level of justification, or even moral plausibility. While this article will continue to condemn genocide in any form and without regard for historical context, it will examine the origins of such sentiments, and explore how they can likewise arise in crimes with absolutely no claim to morality, except in the eyes of their perpetrators.

The Haitian Revolution, and the subsequent massacre of the nation’s white population in 1804, was an event that shocked the world, and is one example of a genocide that retains some level of moral controversy even more than two hundred years after the fact. Saint-Domingue, as the French colony in modern-day Haiti was then known, consisted almost entirely of sugar and coffee plantations that were owned by a small white French elite and worked by an African slave majority. The colony was among the most valuable in the world, producing a nearly unrivaled amount of wealth for the French, especially given its relatively small area. No different from their counterparts in other parts of the Americas (although work on sugar plantations was often more brutal than on those producing other crops), slaves in Saint-Domingue were treated as little more than property, enduring inhumane and abusive conditions that typically led to a slave’s death within a matter of years. In fact, by the dawn of the Haitian Revolution, the number of new slaves being imported into Saint-Domingue each year had grown larger than its entire white population. By 1790, with only six percent of Saint-Domingue’s population being white, five percent being free people of color, and the rest slaves, it was only a matter of time before resentment towards such a massive imbalance of power boiled over into full-scale revolution. Though the Haitian Revolution technically started as a dispute involving free people of color, lower-class whites, and planters, the chaos caused by that conflict, as well as the French Revolution, sparked a massive slave revolt in 1791.

The Haitian Revolution developed first into a war for emancipation, then into a struggle between rival Haitian factions for total control of the island, and finally into a war for total independence from a now Napoleonic France. By 1804, this final goal was reached, and Jean-Jacques Dessalines, a prominent military leader of full African descent, was named Governor-General, and later Emperor, of Haiti. Passionately recalling the countless acts of brutality committed by the white French plantation owners and military leaders both before and during the Revolution, Dessalines called for all whites remaining in Haiti to be put to death. Through the winter and spring of 1804, Haitian soldiers (often former slaves) under the personal supervision of Dessalines marched between Haiti’s cities and massacred up to 5,000 white civilians. By April, Dessalines’ goal of a complete removal of whites was met, with the only full whites remaining in the country being women who had married into black families. The events of 1804 made slaveowners in other parts of the Americas–especially in the American South, where many surviving Haitian planters fled–fearful of a similar event happening on their own plantations. As white American and European leaders condemned what they viewed as barbarity, the Haitian leadership defended the massacre as a necessary act of justice and retribution, an idea which persisted long after the last bullet had been fired.

From C.L.R. James’ 1938 historical account of the Haitian Revolution, The Black Jacobins:

“The massacre of the whites was a tragedy; not for the whites…. for these there is no need to waste one tear or one drop of ink.”

James and others sympathetic to Dessalines’ actions cite the generational cruelty inflicted by the whites of Saint-Domingue, the brutal, scorched-earth tactics used by French forces during the Haitian Revolution, and the threat of counterrevolution as factors that ultimately justify the 1804 massacres. Whatever opinion readers actually hold, they will certainly see how these factors could be regarded, to an extent, as justifying the actions of Dessalines and his men. What is more important, however, is recognizing how the core principles used to justify the 1804 Haiti Massacre can be seen in instances of genocide whose evil and senselessness are seldom contested.

Let us now look at the most famous of genocides, the Holocaust, a crime which no reasonable person would ever defend as just or necessary. After WW1, Germany was left humiliated by its defeat, becoming as politically and economically weak as the relatively young nation had ever been. From a desire to restore German dignity and standing in the world rose the Nazi party under Adolf Hitler, and with it the establishment of the Third Reich. Besides the unfair Versailles Treaty imposed on them by the Allied Powers of Britain, France, and others, Nazi ideology blamed various groups within German society for the nation’s failures. No group was blamed to a greater extent than the Jews, whom the Nazis claimed had sabotaged Germany’s efforts in favor of promoting their own personal and political interests. This element of blame was key, since it created a sense that not only should the Jews be removed for the betterment of German society, but that the expulsion or extermination of Jews was a righteous act of justice and retribution for the crimes they had supposedly committed against the nation. So, despite clear evidence that no such Jewish conspiracy existed, and that German Jews had served as heroically in WW1 as their “Aryan” counterparts, the emotional satisfaction the Nazi ideology and plan of action offered to the German people was sufficient to allow Hitler and his party to take power.

Though different in many ways, the Haitian Massacres and the Holocaust seem to have one key idea in common: both were framed by their perpetrators as acts of justice rather than crimes. During the Haitian Massacres, Dessalines and his sympathizers justified the slaughter of thousands of civilians by reminding the people of Haiti of the atrocities they had faced at the hands of the white planters. The Nazi propaganda machine made similar claims about the Jews: that the suffering and weakening of the Germans was to be blamed on acts of sabotage and treachery committed by Jews, or by other groups the Nazis wished to exterminate. In predicting and preventing future acts of genocide, the international community must watch for countries whose social conditions may generate a situation similar to that of Haiti or Nazi Germany; that is, countries in which a controlling group is in a position to blame a potential victim group for certain wrongs, however justified that blame may seem.

Discrimination Against Afro-textured Hairstyles

Rock musician and pop culture icon Chuck Berry in 1958, sporting the “conk” hairstyle popular with many African-American men at the time. (NPR)

Caused by the uniquely flat shape of the follicle, Afro-textured hair is nearly universal in ethnic groups across the African continent, and can also be found among certain peoples in the South Pacific and Oceania. It is thought that the curly shape is an adaptation to hot climates, since it allows for a better dissipation of heat, as opposed to straight hair which is better at retaining heat. Though there is no true consensus on the origins of the hair’s unique characteristics, the societal significance of Afro-textured hair is very clear. Discrimination against hairstyles natural to African-Americans has been of particular concern in recent years, but the practice has existed since the very beginning of the African diaspora in North America.

The various tribes of West Africa, the place of origin of the vast majority of African-Americans, had varying traditions and customs associated with one’s hair. Long, intricate braided hairstyles were often used to denote wealth or status within the tribe, and could be a source of pride for an individual. Not unlike in other cultures, the more ornate one’s hairstyle, the more important that person likely was. However, after the capture and forced relocation of millions of West Africans during the Transatlantic Slave Trade, male slaves would often have their hair shaven by their new European masters. In addition to being a response to the terribly unsanitary conditions endured by the slaves, the shaving of Black hair was meant to strip away both individual and cultural identity. Female slaves would sometimes also have their heads shaven, especially those employed in outdoor work. These female field slaves commonly used headwraps to protect against the harsh sunlight, a garment that even became mandatory in some jurisdictions or plantations. The discrimination against and regulation of Black hair became just one of many tools employed by slaveowners to erase a slave’s identity, and to reduce him or her to mere property rather than an individual.

After the Civil War and the emancipation of slaves in 1865, Afro-textured hair, like many aspects of African-American life, continued to face discrimination long after the chains had been broken. Minstrel shows, a common form of entertainment in America both before and after emancipation, typically featured a white actor donning blackface and an Afro-textured wig. These shows featured songs, plays, sketches, and other acts which caricatured people of African descent. Jokes made at the expense of Afro-textured hair were among the several ways in which minstrel shows and other racist popular entertainment portrayed Black Americans as strange, inferior, or even subhuman. Beginning in the 1920s, a new hairstyle known as the “conk” became popular among male African-American leaders and public figures. The style featured an aggressive straightening of the hair, usually achieved through the use of a homemade hair relaxer containing lye, a substance capable of causing severe chemical burns if handled improperly. Civil rights leader Malcolm X famously denounced the hairstyle, as he believed the desire to reject one’s natural hair texture, even at the risk of serious physical harm, symbolized a subservience to white society. Regardless, the notion of straight hair being more “proper” or “professional” continued to exist in American society through the Jim Crow era and beyond.

The rise of the Black Power and Black Pride movements in mid-20th century America prompted a widespread change in attitudes towards Afro-textured hair in many Black circles. The increasingly popular “Afro” hairstyle became a symbol of pride in one’s heritage and of defiance towards the status quo. Public figures such as the popular singer Billy Preston passively helped usher the Afro into popular culture, while others, such as political activist Angela Davis, explicitly used the hairstyle as a way of representing the struggle towards racial equality. Discrimination against hairstyles natural to Black Americans, activists such as Davis argued, was symbolic of and fundamentally equivalent to other forms of discrimination. The Afro was perhaps the first major movement against an American beauty standard largely shaped by white culture and white individuals. Though the Afro eventually fell out of fashion, flat-tops, dreadlocks, cornrows, and a plethora of other natural hairstyles entered mainstream Black culture, thanks mainly to Black public figures who embraced them, even as they faced intense backlash from both inside and outside the community.

Even decades after the initial movement towards embracing Afro-textured hair, it can be argued that discrimination against natural hairstyles continues to exist in schools and workplaces throughout America. These institutions typically claim a certain standard of proper dress which, in their eyes, makes no room for certain hairstyles that embrace an individual’s natural hair texture. Rogers v. American Airlines, a federal court case in 1981, upheld an American Airlines dress code that banned the braided cornrows of one of its employees, Renee Rogers. Rogers cited Title VII of the Civil Rights Act of 1964, which bans workplace or employment discrimination on the basis of race, and argued that a ban on her cornrows constituted racial discrimination*. Cornrows, Rogers argued, were culturally significant to Black Americans due to their history and their showcasing of natural African hair texture. The United States District Court for the Southern District of New York ruled in favor of American Airlines, rejecting the cultural significance of cornrows and asserting that because Rogers’ hair could technically be altered, it could be subject to regulation by an employer. The ruling in Rogers is now subject to much criticism and debate from modern scholars, many of whom see the court’s interpretation of Title VII as too narrow and as failing to appropriately consider the cultural significance of Rogers’ hairstyle.

The CROWN (Create a Respectful and Open Workplace for Natural Hair) Act was signed into law by California governor Gavin Newsom in 2019, making it the first law in any state to explicitly ban discrimination based on natural hair texture and hairstyles. Similar legislation has since been passed in various other states, including New York, the state in which Renee Rogers first faced a ban against her hair. Louisiana, a state with a troubling history of race relations, became the first to mandate training in and familiarity with Afro-textured hair in order to become a licensed barber. As of August 2022, a CROWN Act at the federal level has passed through the House of Representatives and is awaiting a vote in the Senate. This is the second time the CROWN Act has been introduced to the US Congress; a Senate vote against it in 2021 ended the bill’s first attempt at being passed into law. Though the fight against discrimination targeting Afro-textured hair may seem to be reaching its end, the significance of hair in shaping Black culture and American race relations will likely continue for the time being, just as it has for generations.

*Discrimination on the basis of sex was also claimed, as the banning of cornrows was alleged to have affected neither white nor Black men.

Environmental Warfare

Terraces in Peru; terrace farms were once key for sustaining the Inca Empire’s large population (National Geographic)

Whether he is a nomad who wanders the desert or a city dweller who settles the fertile banks of a river, man has always been defined by the land he lives on. But just as natural environments give humanity life, their destruction can spell death for the very same. Warfare, a concept as old as humanity itself, is typically understood as being waged through the destruction of individuals. However, many of history’s military leaders have seen the value in targeting natural environments as a way to destroy or displace combatant and civilian targets.

Environmental warfare has been seen throughout human history, but our first example demonstrates environmental destruction on a scale that was hitherto rarely seen. America’s great pre-colonial civilizations, including the Pueblo, Maya, Aztec, and Inca peoples, maintained environmental infrastructure as sophisticated as that of their Eurasian counterparts. When Spanish conquistadors first witnessed the intricate dams, dikes, and canals which sustained the livelihood of millions of indigenous Americans, they were thoroughly impressed. But as their eyes turned from admiration to the prospect of conquest, the Spaniards realized that the destruction of this infrastructure meant the destruction of their new adversaries’ capacity to resist. There was a second incentive as well: the Spanish armies were thousands of miles from home, and thus lacked the ability to sustain themselves through anything other than pillage. It was therefore doubly beneficial for the conquistadors, mainly with the help of their massive cohort of indigenous allies, to destroy as much environmental infrastructure as they could. For example, during the 1521 siege of the Aztec capital of Tenochtitlan, Spanish-allied soldiers cut the dikes surrounding the city, flooding the streets. The city’s aqueduct was also severed, leading to the city’s surrender in just a few months. During Francisco Pizarro’s conquest of the Inca Empire in the 1520s and 1530s, he, like his fellow conquistadors, sustained his army by raiding local Inca food stores. Ironically, Inca resisters used many of the same scorched-earth tactics to slow the Spanish advance, targeting imperial warehouses and generally destroying any infrastructure, whether man-made or natural, which might be useful to their colonial adversaries. Though the Spanish crown, which sought to develop its American colonies into consistent streams of income, attempted to limit the destruction done by its conquistadors because of its long-term economic consequences, the regulations it put in place did little to prevent the widespread destruction of local environmental infrastructure. In the end, the destruction of indigenous American land and infrastructure proved essential for asserting Spanish dominance in the region and forcing its many local peoples into submission.

A far more recent and controversial use of environmental warfare is seen in the United States’ intervention in Vietnam. The infamous “Agent Orange” was a chemical herbicide utilized during a military operation known as Operation Ranch Hand. The goal of the operation was to destroy parts of the dense Vietnamese jungle that served as valuable hiding places for Viet Cong troops and infrastructure throughout the country. Concerns about the legality and morality of the operation were sidelined thanks to a precedent established by the British military just a decade prior, which had used the same chemicals against communist insurgents in British Malaya. Content with this fact, American planes went on to destroy 5 million acres of forest, with the operation affecting 20% of all forests in South Vietnam. Though Agent Orange is the most well-known tool of Operation Ranch Hand, it was actually part of a family of agents known as the Rainbow Herbicides–each named after a different color of the rainbow. Agent Blue, for example, was an herbicide used to destroy rice paddies deemed valuable to enemy forces. An herbicide identical to ones still used on many American farms and lawns, it was dispersed throughout the Vietnamese countryside after burning or shelling food stores and paddies was deemed inefficient. From the beginning of Operation Ranch Hand, there was significant concern and pushback from the scientific community, which was only amplified when press coverage of the war relayed such sentiments to the general public, further eroding the war’s popularity. The use of Agent Orange has received particular condemnation due to the millions of Vietnamese civilians and tens of thousands of American veterans who have suffered a variety of medical issues due to exposure to the substance. Even more common than the immediate or long-term illnesses suffered by those directly exposed are severe birth defects in children born to them. The ecological and medical problems caused by Operation Ranch Hand continue to the present day, and are among the major factors that contribute to the Vietnam War’s poor legacy in the public eye.

The destruction of the environment as a tool of war, known to some as “ecocide” or “environcide”, is not in and of itself considered a crime in international law. However, the practice often overlaps with legal definitions of crimes against humanity or war crimes, especially if the act was carried out with the intent of causing the destruction it created. Regardless of how it is codified in international law, one can be certain that the practice of environmental warfare has existed, and will continue to exist, so long as humans settle conflicts with violence and war.

The Suicide of Danny Chen

Danny Chen (left) with his cousin after completing basic training in Fort Benning, Georgia; Chen would die by suicide six months after this photograph was taken. (NBC News)

Walk far enough along Canal Street through the heart of Lower Manhattan’s Chinatown, and you will eventually find a notable street marked with two different names: Elizabeth Street and Private Danny Chen Way. A quick Google search will reveal the tragic story of the road’s secondary namesake. Private Danny Chen died by suicide in Afghanistan after receiving intense racial abuse at the hands of his fellow soldiers. Chen, who hailed from that very neighborhood, was honored by his Chinatown community shortly after his untimely death. While the practice of naming landmarks after fallen soldiers is by no means unique, neither is Chen’s story of facing racism within the ranks of the United States military. Since Washington’s initial refusal to enlist Black soldiers in his Continental Army, racism in the American armed forces has been a notable subtopic within the larger study of American race relations. Though significant progress has been made since the country’s first battles, recent incidents such as the suicide of Danny Chen have drawn attention to the status of racial minorities in the military, as well as to how incidents of racial abuse should be addressed, especially when the abusers have direct authority over the victim.

Born in 1992 in the largest overseas community of ethnic Chinese in the world, Danny Chen grew up much like many of his peers. His mother was a seamstress and his father was a chef, both of whom had immigrated from Taishan, China. Chen was the tallest in his family, standing over six feet tall. He was bright, social, and devoted to his family. He worked hard both in and out of school, and considered attending Baruch College, from which he received a full scholarship offer. However, as his high school days came to a close, his interests shifted towards a career in the military instead. He planned to enlist in the Army, and dreamed of later becoming an officer for the NYPD. His family, on the other hand, was not so enthusiastic about his new goals. Because he was his parents’ only child, they hoped he would pursue a safer career by getting a college education, believing he had a bright future. In the end, however, Chen chose to pursue his dream of serving his country and community, and enlisted in the military shortly after graduating high school.

Chen first reported for basic training at Fort Benning, Georgia, where he got his first taste of military life. Like many new recruits, Chen was eager for the weeks ahead. He wrote to his parents often, describing to them how toilet paper was particularly scarce on base, or how intrigued he was when he shot a gun for the first time. But as time went on, and the stress among his fellow recruits mounted, his attention was increasingly drawn to a peculiar fact: he was the only Chinese-American man in his platoon. Week after week, recruits would call him “chink”, “Ling Ling”, and a variety of other racial insults. While the words themselves did not bother him, it became more and more apparent that he was being singled out within his platoon because of his race. At the time he joined, Asian-Americans of any background made up just four percent of the entire US military. This fact, in addition to his being the first in his family to join the military, meant that Chen had no one to turn to, and had to handle the situation without anyone truly on his side. Chen was a rather shy and unassuming figure, and tried to deflect the verbal abuse as best he could with humor. In the end, Chen made it through basic training, and in April 2011 he was assigned to the 21st Infantry Regiment of the 25th Infantry Division, based at Fort Wainwright, Alaska.

While his status as the only Chinese-American soldier in his platoon did not change, he tried to make the most of his situation in Alaska. After his expected deployment to Afghanistan was delayed, Chen spent his time with friends in and around base, hoping to overcome the barrier that seemed to always divide him from his fellow soldiers. But in August 2011, after months of impatient waiting, he and his unit were finally deployed to Kandahar Province, Afghanistan. As eager as Chen was to serve his country and prove his worth, the racial abuse, coupled with extreme hazing, only became worse. He was berated constantly with racial slurs, and was even forced to communicate orders to his comrades in Taishanese, the Chinese dialect of his parents. He was frequently singled out for extra guard duty, to the point where he would fall asleep on the job, and would be brutally beaten by his fellow soldiers as punishment. In one incident, Chen was dragged naked across 50 feet of gravel after misusing a water heater. Chen’s final day of service would be similarly humiliating. On October 3, 2011, he forgot his helmet for guard duty, leading to him being pelted with rocks by his platoon as he was forced to crawl back to his trailer to retrieve it. Other soldiers observed that Chen seemed hardly fazed by the ordeal, until the truth revealed itself later that morning, when Chen, age 19, was found dead from a self-inflicted gunshot wound to the head.

The eight men charged in connection with Chen’s death served a combined total of 11 months in prison. The majority of the accused, including the lieutenant who commanded the platoon, had their charges dropped or received no prison sentence, and were instead demoted or discharged.

Chen’s death was not the first of its kind. Hazing in the military, especially against soldiers belonging to racial and ethnic minorities, has long been a concern. Around the same time Chen completed basic training, a Chinese-American marine named Harry Lew died by suicide in Helmand Province, Afghanistan, after being beaten and having sand thrown in his face by a superior. Five years after Chen’s death, Pakistani-American marine recruit Raheel Siddiqui fell to his death from a barracks stairwell after receiving abuse at the hands of a drill instructor with an established history of mistreating Muslim-American recruits.

As reports of racial discrimination continue to come in from concerned soldiers, many are troubled about how the culture of the military should adapt to the ever-increasing diversity among its ranks. While history has shown the ugly side of American race relations across many of its institutions, it appears that within the United States Armed Forces, the concerns of the past continue to be felt in the present.

Creationism and Evolution in American Schools: A Brief History

A cartoon mocking the theory of evolution by overlaying Charles Darwin’s head onto a monkey (Harvard Museum of Natural History)

Throughout history, mankind has always sought the truth. Education allows humans to pass the truths we discover on to the next generation. As discussed in last week’s article, education is one of a civilization’s most important tools for shaping itself, as the central principles that guide any people are the ones passed on through learning. Charles Darwin’s theory of evolution, first published in his 1859 book On the Origin of Species, became far more than a scientific breakthrough, soon serving as a catalyst for larger debates about the relationship between science and religion.

Though the United States was one of the first modern societies to be founded with an adherence to the idea of the separation between church and state, its ties to Christianity were inevitable, given that an overwhelmingly large proportion of its population was Christian. As a result, Bible-oriented, theistic sciences were almost universally taught in American schools, though this can also be attributed to there being few other contemporary explanations for the natural world. With regard to the origins of humans and the natural world, things were explained through an ideology known as creationism: the belief that both humans and nature were created by a divine, intelligent deity. This deity, of course, fit neatly into the description of the Abrahamic God worshipped in Christianity, and could therefore be taught in schools without contradicting the religious beliefs that American schoolchildren, in all likelihood, held in their personal lives. However, scientific developments in the 19th century brought about fierce debates surrounding the secularization of education, particularly in the natural sciences.

While Charles Darwin is often remembered as the "discoverer" of evolution, many thinkers from centuries prior shared similar beliefs and observations. The first recorded discussions questioning the permanence of species can be traced to ancient Greek philosophers such as Empedocles. The 17th-century English naturalist John Ray, known for his significant work in early taxonomy, classified humans as primates, implying that humans were part of the natural world. Even Charles Darwin's own grandfather, Erasmus Darwin, believed that evolution occurred in all species, although his ideas remained vague and speculative. Regardless of where its ideas were originally drawn up, evolution did not gain widespread acceptance from the scientific community until Charles Darwin's famous theory was published. Darwin's theory was quite simple: all biological species, including humans, evolved over time through random variation. If a variation was beneficial to an organism's survival, it was more likely to be passed on to offspring and would eventually become common throughout the population. If a variation was harmful, it would tend to die out. While Darwin's work was no doubt significant from a scientific perspective, its implications with respect to humans and society were also massive.

As with any new scientific idea, evolution was subject to intense criticism and debate from Darwin's peers. But while doubts about evolution were raised on intellectual grounds by many of the world's respected thinkers, such as those aired in the famous 1860 Oxford evolution debate, the rejection of Darwin's ideas also came as a result of adherence to religious beliefs. According to more traditional interpretations of the Bible, evolution contradicted scripture. While the stories of the Bible had been contradicted by a number of scientific discoveries in the centuries prior, evolution posed one particularly important challenge to fundamentalist Christian belief: it claims that humans were not made intelligently in the image of God, but developed gradually over time through random variation. As evolution became more widely accepted toward the end of the 19th century, and interpretations of the Bible began to change, American schools began to teach theistic evolution, a version of Darwinism that was not considered to be at odds with a belief in God.

The early days of evolution's introduction in American schools were surprisingly tame, with few objecting to the harmony between science and faith that was being taught. It was not until after the First World War that a rising Christian fundamentalist faction of Americans began to assert that evolution, in any form, directly contradicted scripture. By the 1920s, a decades-long series of legal and political battles over evolution and creationism in American schools had begun, as some states attempted to ban the teaching of evolution outright. Religious and social tensions, differing judicial philosophies surrounding the First Amendment's stance on religion, and the fact that education is a power delegated to the individual states all contributed to the complex and drawn-out nature of the issue in the United States.

One of the first major events in the debate occurred shortly after Tennessee banned the teaching of evolution. In what is now known as the Scopes Trial, a Tennessee teacher was found guilty of violating the ban; he accepted the $100 fine, as the case had brought national attention to the issue. A later appeal to the Supreme Court of Tennessee overturned the conviction, but only on an unrelated technicality; the court still held that the ban on teaching evolution was permitted by the constitutions of the United States and Tennessee. This victory allowed creationists to continue their campaign to remove evolution from public school textbooks. It was not until decades later that progress in the other direction began. In 1968, the Supreme Court of the United States, under the famously progressive Warren Court, ruled in Epperson v. Arkansas that the state's prohibition on teaching evolution was unconstitutional because it violated the Establishment Clause of the First Amendment: banning evolution essentially promoted a religion on behalf of the state. Less than two decades later, in 1987, Edwards v. Aguillard held that Louisiana's law requiring creationism to be taught alongside evolution was unconstitutional for the same reason.

While educational standards in the United States that either mandate creationism or ban evolution have long since been removed, a degree of hesitation about the topic remains, as policymakers have put in place more subtle ways to discourage the teaching of evolution. In 1999, for example, Kansas temporarily removed the origin of life from its educational standards entirely, leaving individual school districts to decide whether to teach it at all. In several states, textbooks that contained teachings about evolution were marked with disclaimers telling the reader that their contents were not scientific fact, but theory.

Though the measures taken by some jurisdictions to limit the teaching of evolution are not unconstitutional, they do represent the legacy of fundamentalist Christianity and religious zeal that still exists in many aspects of American society. Today, creationism continues to be taught in some schools, but instead of being presented as fact, it is offered as one perspective students can consider in order to better understand the world around them and to arrive at their own conclusions through independent reasoning.

Fatima al-Fihri and the University of al-Qarawiyyin

University of al-Qarawiyyin in the afternoon (CNN)

Places of higher learning have long been the backbone of history's most prosperous civilizations. From the Platonic Academy of Athens to England's University of Oxford, these centers of learning serve not only as meeting places for different ideas and great thinkers, but as enduring cultural icons for the people they serve. But one university, considered to be the oldest of them all, comes from a seemingly unlikely place, and an even more unlikely person. Nestled in the old walled district of Fes, Morocco, the University of al-Qarawiyyin, in addition to holding what is often described as the world's oldest working library, is considered by many to be the longest continuously operating place of higher learning in the world. Though the exact period when it began teaching is uncertain, it is generally presumed that instruction began shortly or immediately after its founding as a mosque in 859 CE. Its founder, an ambitious yet benevolent heiress named Fatima al-Fihri, created al-Qarawiyyin as a religious and educational meeting place for Fes' diverse populace. Today, the university is still an iconic landmark of the former Moroccan capital, and one of the nation's most respected academic institutions.

Though few details of her early life are documented, Fatima al-Fihri is thought to have been born at the beginning of the 9th century CE in Tunisia. Her father, Muhammed al-Fihri, was a successful merchant who left a considerable fortune to Fatima and her sister, Maryam. Before his passing, however, Muhammed al-Fihri and his family moved from their village in Tunisia to the city of Fes in search of a new life. Fes was a bustling city, a blend of North African, Arab, and European cultures thanks to the frequent migration of the period. As a result, both of the al-Fihri sisters grew up surrounded by people of vastly different origins. As the sisters grew older, they realized that the rapid flow of migrants, especially those coming from Spain, was beginning to crowd many of Fes' mosques. In response, both used a significant share of the wealth inherited from their father to found mosques that would be open to people of all backgrounds. While Maryam's al-Andalusiyyin Mosque, also called the Mosque of the Andalusians, is known throughout Fes, Fatima's al-Qarawiyyin is one of the premier sites in all of North Africa. Motivated to create a place of learning in addition to a place of worship, Fatima was thoroughly involved in al-Qarawiyyin's entire construction process. According to some sources, she fasted throughout the eighteen-year build, a further testament to her dedication to the project.

Finally, in 859, al-Qarawiyyin opened its doors to the world. Though initially similar to many large mosques of the time, where educational functions supplemented the building's main religious purpose, al-Qarawiyyin soon became a fully fledged educational institution. As the centuries passed, the university's curriculum broadened, peaking around the 13th and 14th centuries. In addition to its specialization in religious studies and Islamic law, the university expanded its scholarly work into astronomy, mathematics, philosophy, and geography. This period, during which Morocco was under the rule of the Marinid Sultanate, was also near the height of the Islamic Golden Age. The Golden Age, a period of rapid cultural, scientific, and intellectual advancement, was far different from the European Dark Ages often associated with the same period in Western history.

Though the Islamic Golden Age encompassed many nations and empires throughout the Muslim world, the University of al-Qarawiyyin is no doubt a large part of the legacy left behind by this bright period of human advancement. Its alumni reflect both the success of the Muslim civilizations of the time and Fatima's original goal of making it a place of true cultural and intellectual exchange. Early forms of algebra were studied and developed at al-Qarawiyyin, and some even claim that Pope Sylvester II first introduced Arabic numerals to Europe after a brief visit to the university. Maimonides, who attended the university some time in the 12th century, went on to become one of the most renowned Jewish philosophers and Torah scholars of his time. Ibn Khaldun, one of the most accomplished social scientists of the Middle Ages, is another of the many prominent Muslim scholars who studied at the university. Over a career spanning the latter half of the 14th century, Ibn Khaldun made tremendous strides in political science, history, and economics, and his work continues to gain recognition from scholars across the globe today. Finally, the famed explorer, merchant, and diplomat Leo Africanus grew up in Fes and studied at al-Qarawiyyin as a young man in the early 16th century. Following his time at the university, his many adventures (which included being sold into slavery and later baptized by the Pope himself) culminated in the production and widespread printing of Description of Africa, which remained the most comprehensive work on the subject until the 19th century.

As Fes, Morocco, and the Islamic world as a whole declined from their former glory, al-Qarawiyyin began to show its age. Its collections were greatly reduced, as was the breadth of its teachings; subjects such as astronomy and medicine were dropped entirely. When Morocco came under French control in 1912, the university continued to suffer, with many of the country's elite being sent instead to Western-style colleges elsewhere in Morocco or in France itself. However, after Morocco regained its independence, the government added al-Qarawiyyin to its state university system in 1963, giving it adequate support to continue operating as a multi-disciplinary university.

Be it through students walking the university's ornately decorated tile hallways, or through distant passersby admiring the mosque's silhouette on the Fes skyline, the legacy of Fatima al-Fihri lives on through her commitment to advancing humanity through connection and learning.