Diplomacy and Destiny

It has been said that war is politics by other means, and few would disagree with the Clausewitzian sentiment, but one might also say that diplomacy is warfare by peaceful means. Diplomacy often seeks to gain without violence the same objectives that empires of old sought to gain through war. Relying upon the Machiavellian precepts of being feared rather than loved, and of letting the ends justify the means, great diplomats have doggedly pursued national interests, sometimes believing destiny had already prescribed a greater future than present circumstances provided. One such diplomat was William Henry Seward (1801-1872). In 1853, seven years before becoming U.S. Secretary of State for the Lincoln administration, Senator Seward stated in a speech titled The Destiny of America, “Nevertheless it is not in man’s nature to be content with present attainment or enjoyment. You say to me, therefore, with excusable impatience, ‘Tell us not what our country is, but what she shall be. Shall her greatness increase? Is she immortal?’”[1] Seward believed the answers to these questions were affirmative, and he would spend his career seeking to increase the greatness of the nation he served.

Like other expansionists, Seward linked U.S. commercial strength with the acquisition of foreign markets and territorial holdings. When Mexico and British Canada proved infertile soil for acquisition, Seward looked elsewhere. He believed that the United States had a destiny to spread its notions of liberty to the new nations breaking free from European imperialism, particularly those liberating themselves from Spain. Unfortunately, he also believed, as many did, that shaking off imperial control did not necessarily mean the people of Latin America were prepared to govern themselves.[2] Seward believed the southern neighbors would be better served if they became part of the United States. He achieved a piece of his goal by pushing for the purchase of Alaska, and while it was considered folly at the time, the discovery of gold changed how most viewed the acquisition. He had less success in his efforts to secure other territories in the Caribbean and Central America. However, he would be remembered for the tenacity with which he sought U.S. expansion, a tenacity that often diverged from diplomacy and bordered on bullying.[3] Those unfortunate enough to have sparred with Seward would have felt bombarded and under attack, and would have wondered at the fine line Seward drew between diplomacy and war. With a focus firmly on the destiny of U.S. greatness, Seward behaved more like a commanding general than a diplomat. He believed the destiny of the United States was not limited to the contiguous land of North America but reached far beyond it. Eventually Seward’s tenacious diplomacy would be replaced by combat in a war that would acquire some of the territory he had desired. His vision of U.S. expansion, while not achieved during his time in office, did influence the direction of U.S. expansion as the nineteenth century drew to a close. Whether through diplomacy or warfare, men like Seward were determined to see the United States fulfill its destiny of greatness.

Endnotes

[1] Frederick Seward, Seward at Washington as Senator and Secretary of State: A Memoir of His Life, with Selections from His Letters, e-book (New York: Derby and Miller, 1891), 207.

[2] William Henry Seward, Life and Public Services of John Quincy Adams Sixth President of the United States with Eulogy Delivered before the Legislature of New York, e-book (Auburn, NY: Derby, Miller and Company, 1849), 122-123.

[3] George C. Herring, From Colony to Superpower: U.S. Foreign Relations Since 1776 (New York: Oxford University Press, 2008), 255-257.

Off the Battlefield and Socks

An old photo depicting men and women knitting socks flashes before my mind’s eye. Young and old, men and women, even the wounded. Knitting socks was a way to support the troops of World War I. Today a trip to Walmart can easily supply a package of cotton socks. Wool socks, sturdy and durable, might take a bit more searching to find, but a visit to a good sporting goods store, especially one selling skiing supplies, will do the trick. The days when proper foot care required handmade socks are long gone, and with the passage of time the memory of the dedicated service provided by the sock makers has faded. It is estimated that sixty-five million men were mobilized to fight in WWI, and each soldier would have needed socks as he went to war, and then more socks to replace the ones worn out by long marches or damp trenches. On the home front, knitting campaigns called people to action. Idle hands at home meant soldiers on the battlefield would suffer.

The technological advancements of the early 1900s did not eliminate the need for handmade socks, and as the world entered a second war, the patriotic call again went out for more socks. However, technology had made war far more destructive. The bombing campaigns of WWII left towns in rubble and displaced an estimated sixty million Europeans. When the war ended, the hardships of war did not. Basic essentials for survival were still desperately needed. The infrastructure destroyed by military campaigns had to be rebuilt before the suffering could end. Battlefields had to be cleared and communities reestablished. Unfortunately, the humanitarian efforts of busy hands and caring hearts ran into political roadblocks. Decimated nations could not process and deliver the goods effectively. A care package from a distant relative or a long-distance friend had an easier time getting through to a family in need than did the large-scale aid from relief organizations.

By the end of the twentieth century, handmade socks were a novelty rather than a necessity, and nations had learned valuable lessons about the effects of war both on and off the battlefield, and about the need for post-war recovery efforts to end humanitarian crises once war had ceased. As the century ended, the severity of war had not necessarily diminished, but the percentage of the population directly affected by war had. War still displaced, disrupted, and decimated local populations, but it seldom reached the distant homelands of the foreign nations providing military support for weak governments. Therefore, the patriotic call to serve those who sacrificed and suffered in the name of liberty, freedom, or national interest was easily drowned out by the pleasurable distractions of life in a homeland untouched by war. By the end of the twentieth century, war, much like handmade socks, was a novelty rather than a reality – something other people might do, but not something that had a place in the modern, fast-paced, safer world many were sure the new century would bring.

Intervention: Ideology Versus National Interest

It has been twenty-four years since the First Gulf War,[1] a short war that might better be categorized as an international intervention in a conflicted region than as a truly international war.[2] Military interventions were not uncommon during the twentieth century, but the First Gulf War was unique in that it found support from parties who, only months and years prior, had been locked in the seemingly endless power struggle known as the Cold War. The international community, appalled at the blatant disregard for the national sovereignty of Kuwait, rallied support for military intervention when other means of international pressure failed to stop the ongoing invasion. As 1990 drew to a close, debates raged in Washington, D.C. and elsewhere over the justifications for and against intervention. At the very heart of the debate was the question of whether the international outrage over Iraq’s aggression was due to the economic national interests of oil-consuming nations or whether the ideology of international cooperation and peacekeeping was the justification for intervening in a conflict between two parties.

World War II demonstrated that it is unwise to overlook a hostile nation’s disregard for the national sovereignty of its neighbors. Yet as clear as the lessons of WWII were to the international community, going to war to protect another nation’s sovereignty was not an easy choice. The argument was made that the protection of oil resources, rather than the ideological desire to defend a nation’s right to go unmolested by its neighbor, was the real reason for the call to action. Oil, despite all other justifications for intervention, was at the center of the First Gulf War. It had been the catalyst for the invasion of Kuwait, and it was undeniably of great economic national interest to many of the nations that rallied to Kuwait’s defense. It would be foolish to argue that oil wasn’t the issue at the heart of the war, but it would be incorrect to argue that it was the only issue. With the end of the Cold War, international focus had turned to the increased promotion of cooperation among nations and to greater support of international law. Sanctions were seen as a better option than military action in most cases. Whether or not all other non-violent means had been exhausted, it was decided that military action was needed to enforce international law and protect international interests.

Not all hostile violations of sovereignty have received the attention the invasion of Kuwait did, and the reasons for the lack of international intervention are seldom debated with the vigor seen in 1990. The First Gulf War is one of the few examples where a shared economic national interest and the ideology of international cooperation stood together to provide the justification needed for intervention.

Endnotes

[1] The Gulf War (2 August 1990 – 28 February 1991)

[2] Operation Desert Storm (17 January 1991 – 28 February 1991)

Victory in the End

Just as declarations of war seldom mark the moment conflict begins, peace treaties seldom mark the end. One of the most famous examples of a battle fought after a war had ended occurred 200 years ago: the Battle of New Orleans.[1] The war had been unpopular in the United States, and victory on the battlefield scarce.[2] Pressure had been mounting to settle the war even without clear victory having been achieved. On December 24, 1814, a diplomatic contingent agreed to the Treaty of Ghent. The objectives for having gone to war had not been met, but the United States had proven to itself and the world that it could wage a war without the assistance of outside nations.

Four days after the treaty was signed, the Battle of New Orleans commenced. Under the command of Andrew Jackson, militia numbering around 4,700 faced off against 5,300 British army regulars supported by naval contingents. In the end, the British would suffer 2,400 casualties, the Americans only 70.[3] Having occurred after the peace negotiations had concluded, the Battle of New Orleans would have no effect on the end of the war, but it would have a lasting effect on the American psyche.[4] In 1959 Johnny Horton recorded “The Battle of New Orleans,” and the song reached number one on the charts. Although a humorous version of the battle, the song reintroduced the public to a war often overlooked in U.S. history, a war that had solidified the independence hard won a generation prior. The Battle of New Orleans may not have ended the War of 1812, but it did end questions of U.S. independence, viability, and sovereignty.

The War of 1812 often gets overlooked, but the little war is the stuff of legends. From the burning of Washington, to the battle to save Baltimore, and finally to the Battle of New Orleans, the War of 1812 changed the United States. Diplomatically and militarily, the United States proved it could fight and survive without the aid of Europe. In the end, it mattered little that the victory at New Orleans occurred after peace negotiations had technically ended the war. In the end, it mattered little that the war as a whole had been a stalemate. In the end, what mattered was that victory had been possible, and decisive victory on the battlefield had been achieved. Ragtag or not, a nation set upon survival and independence had not been defeated. In the end, that was victory.

Endnotes

[1] (December 28, 1814 – January 8, 1815)

[2] George C. Herring, From Colony to Superpower: U.S. Foreign Relations Since 1776 (New York: Oxford University Press, 2008), 128.

[3] John Whiteclay Chambers, ed. The Oxford Companion to American Military History (Oxford: Oxford University Press, 2000), 496-497.

[4] Herring, 132.

Cuba and the United States

I have long found the U.S.-Cuba situation fascinating, particularly in light of the fact that many nineteenth and early twentieth century U.S. politicians and businessmen wished to annex Cuba, or at least to keep Cuba a friendly U.S. playground. Cuba, so close to the United States, was often a hoped-for prize. Many power brokers in the United States felt sure Cuba would eventually choose to join its neighbor to the north. The fact that it never did, but instead rejected the United States during the Cold War, makes the story all the more interesting and raises the question of why Cuba chose such a different path from the one hoped for by men like Theodore Roosevelt, President McKinley, and many others.

In 2002, historian Louis A. Pérez, Jr. wrote an article for the Journal of Latin American Studies titled “Fear and Loathing of Fidel Castro: Sources of US Policy toward Cuba.” The following is a short paper I wrote after reading this and other articles discussing theories as to why the United States persisted with Cold War policies towards Cuba even after the end of the Cold War.

Loathsome Rejection: Cuba and the United States

Masked behind a cloud of Cold War fear, Cuba’s rejection of the United States was the loathsome reality of a failed U.S. attempt at imperial influence and a direct blow at the very heart of the Monroe Doctrine. Fidel Castro was “inalterably held responsible,” and according to Louis A. Pérez Jr. in “Fear and Loathing of Fidel Castro: Sources of US Policy Toward Cuba,” Castro became a problem that would blind policy makers for over forty years, even after the end of the Cold War.[1]

“Castro was transformed simultaneously into an anathema and phantasm, unscrupulous and perhaps unbalanced, possessed by demons and given to evil doings, a wicked man with whom honourable men could not treat.”[2]

Pérez stated that the “initial instrumental rationale” for U.S. policy toward Cuba, particularly the policy of sanctions, may have become “lost” over time, but that it was initially created under the precepts of containment.[3] However, in the case of Cuba, the practice of applying economic pressure through embargoes was undermined by the Cuban Adjustment Act of 1966, which allowed political asylum to any Cubans who made it to U.S. shores. This act became a release valve for the pressures created by the embargoes. While poor Cubans remained poor, the middle-class Cubans, who were most affected by U.S. sanctions, could attempt to seek refuge elsewhere. “The logic of the policy required containing Cuban discontent inside Cuba,” but this logic was lost amid the emotional reaction the United States had toward Fidel Castro and his rejection of the United States. This rejection was compounded by the challenge to “the plausibility of the Monroe Doctrine” and to the United States’ “primacy in the western hemisphere.”[4] If rejection was not enough to engender such resentment, inviting the Soviet Union to become a military as well as an economic ally was more than U.S. policy makers could stand without seeking retribution.

Cold War fear and rhetoric do not sufficiently account for the continued and virulent animosity between the United States and Cuba, and Pérez was not the only scholar to take note. As the Soviet system crumbled and the Cold War came to an end, “the antagonism displayed by the U.S. government toward Cuba and Castro … intensified.”[5] The continued containment of Cuba in the post-Cold War era negated decades of U.S. assertions that the Cuban policy was the direct result of Cuba’s status as a Soviet satellite. While others would write about the illogical continuation of Cold War policy, Pérez argued that U.S. policy toward Cuba had less to do with Cold War fear and containment, and more to do with loathing and retaliation for the rejection of the United States and the embarrassment such a rejection caused.

Certainly there was a real national threat in having Soviet missiles located so close to U.S. shores, but that threat does not account for U.S. policy before and after the missiles. Wayne S. Smith, who was stationed in Cuba as a vice-consul during the Cuban Revolution, claimed that Castro and his revolutionaries were not communist threats in 1956.

“We found no credible evidence to indicate Castro had links to the Communist party or even had much sympathy for it. Even so, he gave cause for concern, for he seemed to have gargantuan ambitions, authoritarian tendencies, and not much in the way of an ideology of his own. He was also fiercely nationalistic. Given the history of U.S. military occupations, the Platt amendment, and the outsized U.S. economic presence in Cuba, he did not hold the U.S. in high regard.”[6]

Without a doubt, the United States needed to address the threat posed by Castro, but to bypass speaking softly and proceed straight to wielding the big stick was a move that would ensure crisis rather than avoid it, especially when the Soviet Union was more than happy to lend Cuba a hand. The Soviets’ willing assistance, especially after the embarrassment of the Bay of Pigs, was all the justification President Kennedy needed to pick the moment of crisis rather than give Nikita Khrushchev the opportunity.[7]

Pérez does not argue against the notion that there was a real threat posed by Cuba, but instead he points out that the United States was handed a “trauma” when the U.S. playground turned into a war zone, and then into a dangerous Cold War threat.[8] This trauma affected the U.S. ability to rationally create and implement a policy that would stabilize relationships and reduce threat. “Dispassionate policy discourse on Cuba … was impossible”[9] as long as Castro remained Cuba’s leader, because he was “a breathing, living reminder of the limits of U.S. power.”[10]

Endnotes

[1] Louis A. Pérez, Jr. “Fear and Loathing of Fidel Castro: Sources of US Policy toward Cuba,” Journal of Latin American Studies 34, no. 2 (May 1, 2002): 227, http://www.jstor.org/stable/3875788 (accessed February 20, 2013).

[2] Ibid., 250.

[3] Ibid., 228.

[4] Ibid., 233.

[5] David Bernell, “The Curious Case of Cuba in American Foreign Policy,” Journal of Interamerican Studies and World Affairs 36, no. 2 (July 1, 1994): 66, http://www.jstor.org/stable/166174 (accessed February 19, 2013).

[6] Wayne S. Smith, The Closest of Enemies: A Personal and Diplomatic Account of U.S.-Cuban Relations Since 1957 (New York: W. W. Norton and Company, 1987), 15-16.

[7] Philip Zelikow, “American Policy and Cuba, 1961-1963.” Diplomatic History 24, no. 2 (Spring 2000): 325. http://web.ebscohost.com.ezproxy1.apus.edu/ehost/detail?sid=39889c50-22ab-48a2-b2e4-cd8946fd73a9%40sessionmgr15&vid=1&hid=18&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=2954415 (accessed February 19, 2013).

[8] Pérez, 231.

[9] Ibid., 250.

[10] Ibid., 251.

Other Readings

Dominguez, Jorge I. “U.S.-Cuban relations: From the Cold War to the colder war.” Journal of Interamerican Studies and World Affairs 39, no. 3 (Fall 1997): 49–75. http://search.proquest.com.ezproxy2.apus.edu/docview/200219310/13BF83A38607C999D8F/7?accountid=8289 (accessed January 31, 2013).

Herring, George C. From Colony to Superpower: U.S. Foreign Relations Since 1776. New York: Oxford University Press, 2008.

Paterson, Thomas G. “U.S. intervention in Cuba, 1898: Interpreting the Spanish-American-Cuban-Filipino war.” Magazine of History 12, no. 3 (Spring 1998): 5. http://search.proquest.com.ezproxy2.apus.edu/docview/213739998/13BF824CD53256D7D45/11?accountid=8289 (accessed January 31, 2013).

Williams, William Appleman. The Tragedy of American Diplomacy. 1972 New Edition. New York: W. W. Norton and Company, 1988.

Humanity on the Battlefield

There is a popular story that goes around at Christmastime about soldiers all along the Western Front calling a truce and singing “Silent Night” on Christmas Eve. What is often left out of the story is the anger this show of humanity caused among the higher leadership. During war, a reminder that the enemy is not the monster which propaganda depicts can interfere with morale, and with a soldier’s determination to win at all costs. Yet on that Christmas Eve, men on opposing sides of a futile war remembered that only politics separated them. Christmas marked the fifth month of war and the third month in the trenches. World War I was still in its early days, and there was still hope for victory and for the short war the generals and politicians on both sides had promised the soldiers. The peace hoped for on Christmas Eve 1914 would not be found until Christmastime 1918. The brutality of the war and the anger of generals would squelch attempts to repeat what had sprung up so naturally along the Western Front in 1914. However, the legend of the first Christmas of WWI would remind generations that even in war, humanity can survive.

Paranoia and Insecurity: A Lesson from WWII

“On a morning in December 1941, a small nation which the United States had sought to contain and squeeze into submission through economic and diplomatic pressure, attacked with crippling force a naval base belonging to one of the largest nations of the world. Japan’s aerial attack on Pearl Harbor shook the United States and its sense of security.”[1] In the movie 1941, director Steven Spielberg created a comical portrayal of a population driven to protect its coastline from Japanese attack. In Spielberg’s outlandish film, the insecurity caused by the attack on Pearl Harbor fed paranoia and panic and resulted in chaos. The movie was a comical spoof on the real paranoia that existed during World War II, a paranoia which allowed a nation to justify its own attack on liberty.

On February 23, 1942, a Japanese submarine entered the coastal waters near Santa Barbara, California, and bombarded an oil field at Ellwood. Just days before the attack, President Roosevelt had issued Executive Order 9066, which authorized the creation of policies that would lead to the internment of U.S. citizens. Coupled with propaganda films portraying the enemy as barbaric and animalistic, the events of late 1941 and early 1942 created an insecurity within the population that seemed to justify the civil rights violations that would follow.

Terror is an effective tool in war and can have a much greater effect on a population than physical attack alone. An enemy will try to strike fear into the hearts and minds of its opponent in the hope that terror will weaken it. Modern technology made it possible for fear to spread rapidly through media, and media played a vital role in spreading propaganda messages during World War II. The U.S. government worked hard to control propaganda, both the enemy’s and its own, but public fear was also used as a tool to garner support. Justifiable actions of a nation at war, actions which deliberately heightened public fear and restricted civil liberty, seem less justifiable when the war ends but the insecurity remains. After World War II ended, the fear generated by the physical attacks on the nation diminished, but the fear created by the pervasive use of propaganda during the war remained embedded in the public psyche. History seems to indicate that nations can quickly recover from the physical challenges of war, but the psychological challenges, often heightened by the use of politically motivated propaganda, take much longer to repair. Long after the physical attack becomes just a memory, paranoia and insecurity can linger, continuing to justify the restriction of liberty.

Endnotes:

[1] Jessie A. Hagen, “U.S. Insecurity in the Twentieth Century: How the Pursuit of National Defense Ingrained a State of National Insecurity,” American Military University, 2014.

Additional Reading:

Conley, Cornelius W. “The Great Japanese Balloon Offensive.” Air University Review XIX, no. 2 (February 1968): 68–83. http://www.airpower.maxwell.af.mil/airchronicles/aureview/1968/jan-feb/conley.html.

Dower, John W. War Without Mercy: Race and Power in the Pacific War. New York: Pantheon, 1986.

Roosevelt, Franklin D. “Executive Order 9066 – Authorizing the Secretary of War to Prescribe Military Areas,” February 19, 1942. Papers of Franklin Roosevelt. The American Presidency Project. http://www.presidency.ucsb.edu/ws/index.php?pid=61698.

———. “Fireside Chat, December 9, 1941,” December 9, 1941. http://www.presidency.ucsb.edu/ws/index.php?pid=16056.

———. “Fireside Chat, February 23, 1942,” February 23, 1942. http://www.presidency.ucsb.edu/ws/index.php?pid=16224.

“Civil Rights.” PBS: The War. Last modified 2007. http://www.pbs.org/thewar/at_home_civil_rights_japanese_american.htm.

“George Takei Describes His Experience in a Japanese Internment Camp.” io9. http://io9.com/george-takei-describes-his-experience-in-a-japanese-int-1533358984.

National Security: the Value of Nutrition and Education

In the years leading up to World War I, many progressive thinkers began to campaign for social reform. The industrial revolution had changed society in many ways, not all of which were good for the nation or for national security. Unskilled and skilled laborers alike were susceptible to the ills of urban life. Just as the war in Europe was igniting, one group of progressive reformers was introducing home economics textbooks and coursework into schools. Proper hygiene and good nutrition began to be taught alongside other subjects. Malnutrition and disease were viewed as ills which not only weakened society but undermined national well-being. The reformers who pushed for better living conditions and education for urban families gained a powerful ally when the United States entered WWI: the U.S. Army. When faced with a modern war, modern in both weaponry and technology, the U.S. Army quickly discovered that it was no longer beneficial to go to war with illiterate soldiers. Modern war demanded healthy soldiers who could communicate efficiently with one another. Basic health and literacy became necessities for the modern army. The ground gained in understanding this truth was not easily won. The soldiers who fought in the war learned firsthand the value of both a healthy body and the ability to communicate with their fellow soldiers. A common language, coupled with the ability to read and write in it, was something the returning soldiers would seek for their own children. These veterans would push for change. By the end of World War II, the realities of modern war mandated a nation populated with citizens possessing basic health and education. Education and proper nutrition had become a matter of national security.

Additional Reading:

  • Keene, Jennifer D. Doughboys, the Great War and the Remaking of America. Baltimore: The Johns Hopkins University Press, 2001.
  • National Security Act of 1947, Title X.
  • There were various publications designed to introduce Home Economics in the schools. Some have been scanned and can be found in different e-book collections. Original copies can be found through used bookstores. My favorites were authored by Helen Kinne and Anna M. Cooley.

No Man’s Land

A term older than World War I but popularized during that war, no man’s land refers to a stretch of land under dispute by warring parties, but it can also refer to lawless areas with little or no governing control. A buffer zone, on the other hand, is an area which provides a sense of protection from the enemy. When physical fortifications offer little protection, buffer zones can provide a perception of security. Nations great and small seek the perception of security when security is elusive. Treaties and alliances are traditional means of creating a sense of security, as is the creation of buffer zones. During the Cold War, the competing nations sought to expand their spheres of influence, thereby creating buffer zones between themselves and their enemies as their spheres grew. When the Cold War ended and the buffer zones were no longer needed, many of the buffer nations found themselves with fewer friends and with fewer resources to prevent lawlessness. These nations found it difficult to avoid the development of no man’s land within their borders.

The United States reasoned, even in its earliest days, that oceans made excellent buffer zones against the conflicts of Europe. Unsettled territories were adequate as buffers, but only to a point. While unsettled territories didn’t pose a direct European threat, they were still loosely under the influence of powerful countries. Additionally, they often attracted outlaws fleeing justice and smugglers seeking a base of operation near their markets. In 1818, Andrew Jackson decided to pursue a group of raiders into Florida. The problem was that Florida belonged to Spain, and Spain had little ability to prevent lawlessness in the territory. When Jackson’s army crossed into Florida, he invaded a foreign nation; without the consent of Spain, such an action created an international incident. Fortunately, Secretary of State John Quincy Adams was able to capitalize on Jackson’s actions and convinced Spain that a treaty was better than a war. His reasoning for defending Jackson’s violation of Spanish sovereignty was that “it is better to err on the side of vigor.”[1] It was certainly not the first time a nation chose a declaration of strength as its response to an international crisis of its own making, but it was possibly the first time such a response became national policy. As Secretary of State, Adams greatly influenced the foreign policy decisions of the president and authored much of what President Monroe presented to Congress. In March 1818, President Monroe declared to Congress that when a nation no longer governed in such a way as to prevent its lawlessness from spilling onto its neighbors, then the neighbors had the right to protect themselves and to seek justice, even if it meant violating the sovereignty of another nation.[2] In other words, when an area became no man’s land, it was to the benefit of all nations for the lawlessness to be eliminated by whoever had the strength and will to do so.

Eliminating no man’s land in North America was a task that occupied the United States for more than a century. Eventually, the United States would reach from ocean to ocean and would gain the military might of a great nation. However, even as the twentieth century dawned, the United States struggled to bring law to all of its territory. During the century of expansion, some in the United States saw potential in the acquisition of territory to the south, particularly in Central America. Others recognized the difficulty of governing such a vast nation. Faced with lawlessness due to revolt in Mexico during World War I, President Wilson authorized the U.S. Army’s invasion of Mexico. However, Wilson recognized the value of having a buffer zone south of the border and eventually withdrew the army. In order to ensure that the southern nations created a friendly buffer zone, the United States supported governments that kept the peace, even though keeping the peace came at the expense of basic human rights. Like many leaders before and since, President Wilson put aside ideology and accepted peace-by-force as better than lawlessness.

Reflecting on history, some leaders have sought security by building huge empires, some by establishing buffer zones, and others by the targeted elimination of no man’s land. Regardless of the method men and nations have chosen, it is clear that international law, notions of liberty and self-determination, and hope for world peace are always secondary to the goal of eliminating the threat posed by no man’s land.

Endnotes:

[1] Samuel Flagg Bemis, John Quincy Adams and the Foundations of American Foreign Policy (New York: Alfred A. Knopf, 1956), 315-316.

[2] James Monroe, “Spain and the Seminole Indians,” American Memory, Library of Congress, (March 25, 1818),  http://memory.loc.gov/cgi-bin/ampage?collId=llsp&fileName=004/llsp004.db&Page=183.

At the End: the Cold War

Twenty-five years ago the Berlin Wall was opened. Unplanned and unauthorized by the powers who controlled the border between east and west, the opening of the Wall signified an end of the Cold War and the beginning of a new era. While the momentous act of opening a gate and letting people pass from east to west gained much attention at the time, other factors had been at play that would pave the way to peace and solidify the end of the Cold War in ways which went relatively unnoticed by the general public. Much has been written on the subject, but not by authors with huge public followings. In honor of the twenty-fifth anniversary, we should look back. The following is a short essay* on one aspect of the end of the Cold War – just enough to pique your interest.

Historian John Lewis Gaddis has written that “the Cold War itself was a kind of theater in which distinctions between illusions and reality were not always obvious.”[1] It was fitting, then, that thespians took the stage for the final act. While it is common knowledge that President Ronald Reagan graced the silver screen in his younger days, it is less well known that other important actors of the final act had theatrical experience prior to their Cold War roles. Mikhail Gorbachev had been an “aspiring actor”[2] in his youth, and the influential Pope John Paul II “had been an actor before he became a priest.”[3] The success of such actors on the Cold War stage was not due simply to their arrival upon the stage, but in great part to the stage setting they inherited.

As with the origins of the Cold War, the end of the Cold War is not precise. Unlike hot wars, which tend to end with the signing of peace treaties and a clear chain of events preceding peace settlements, the Cold War ended ambiguously. As historian George C. Herring pointed out, there is a myth that Reagan’s strong posturing and rhetoric were the direct cause of Soviet defeat.[4] Yet to subscribe to such a myth negates the important roles of the other actors and of those who set the stage on which the thespians performed. Gaddis wrote, “it took visionaries – saboteurs of the status quo – to widen the range of historical possibility.”[5] More importantly, it took actors well versed in the art of improvisation, actors who could recognize the changing dynamics of the Cold War and grasp the opportunities of change. While there are numerous scenes in the last act of the Cold War, each of the three actors played a key role.

First, after the Able Archer exercises, President Reagan “drew the obvious – but for Cold War adversaries often elusive – conclusion that the Soviets feared the United States as much as Americans feared them.”[6] This shift led Reagan to adjust his strategy. While on one hand he ratcheted up the rhetoric, on the other he became more amenable to negotiations because he knew the United States had the upper hand.

Second, Mikhail Gorbachev recognized that public language was not really the same as diplomatic language, and that politicians like Reagan were playing to an audience. While Reagan certainly had an image to keep and a role to play, Gorbachev had an equally crucial, if not more crucial, part to play. He had to convince his people that glasnost and perestroika were positive changes, and that negotiations with the West were not signs of weakness.

The third actor, Pope John Paul II, helped “expose disparities between what people believed and the systems under which the Cold War had obliged them to live.”[7]

The Pope’s visit to Poland revealed that the USSR’s satellites enjoyed no popular legitimacy: they were puppet regimes hated by their subject populations. But Pope John Paul went further. He demystified the power of those regimes. With his words, his presence, and his injunction not to feel afraid, the Pope was for a while the real government of Poland.[8]

It would be wrong to assert that the Cold War, even in its final decade, lacked any real cause for fear, but Pope John Paul II defused the overwhelming and consuming fear that had dominated the public since the days of Stalin.

The three actors took the world stage and improvised rather than continuing the Cold War script, in which the bipolar status quo was viewed as “more stable than multipolar systems.”[9] The final act of the Cold War commenced once these three actors realized that President Roosevelt had been correct that fear itself was the thing most to be feared, and that the political divide could be cracked and relations normalized once the people stopped feeling the oppression of that fear. While Cold War theatrics occasionally resurfaced, particularly when Reagan gave his famous “tear down this wall” speech in 1987, they did not deter the movement toward normalization between the United States and the Soviet Union.[10] The real success of the final act of the Cold War play is that while tough talk and grand speeches still maintained the public perception of strength, real changes were occurring within the Soviet Union. The stage had been set by the policies of containment, the “collapse of détente,” the inherent weaknesses of the Soviet system, and mutual overspending on deadly war machines, but the final act was the result of leaders desiring a change in the status quo.

* Due to unexpected issues this week, I am recycling an old essay rather than creating something new to commemorate the anniversary of The Fall of the Berlin Wall.

Endnotes:

[1] John Lewis Gaddis, The Cold War: A New History (New York: The Penguin Press, 2005), 195.

[2] George C. Herring, From Colony to Superpower: U.S. Foreign Relations Since 1776 (New York: Oxford University Press, 2008), 894.

[3] Gaddis, 195.

[4] Herring, 894.

[5] Gaddis, 196.

[6] Herring, 896.

[7] Gaddis, 196.

[8] John O’Sullivan, “Warm Cold Warrior,” National Review 57, no. 7 (April 25, 2005): 38.

[9] Gaddis, 196.

[10] Herring, 898.