The Dark Knight; Avatar; The Age of Innocence; The Wolf of Wall Street
It has been five years since the release of Side by Side, Chris Kenneally's documentary on the digital filmmaking revolution, as told by Hollywood's top directors, cinematographers, editors and executives. The question at the center of the film is the same question facing the world today: What are the consequences of the digital revolution?
Hollywood was a forerunner in adopting digital technology, as studios and filmmakers alike pushed to develop better tools to realize their vision onscreen. As such, Side by Side has become a fascinating time capsule from 2012, when filmmakers were grappling with questions that echo our current dilemmas: With so much digital information, do we have enough time to think through our choices? Can people distinguish between what is real and what is fake? If so, how well? Are we more or less engaged with our lives through digital technology? Is our quality of life made better or worse by this ubiquitous invention? The documentary is a blueprint for digital modernization that takes stock of what we're gaining as a society, and what we may have lost.
Atonement
There are two definitions of revolution which are, on the surface, at odds. The first sees a revolution as a physical rotation or orbit with a return to the point of departure. The second definition is a permanent, extraordinary departure from one way of life into the unknown. This inherent contradiction in definitions makes it challenging to forecast when you're in the midst of sweeping change. When you leave the house in the morning are you coming back, or are you leaving forever? Side by Side illustrates how technological revolution is a departure and a round trip at the same time.
At its core, the digital takeover in Hollywood was driven by economics. Traditionally, filmmaking was expensive and labor-intensive. The cost of film stock alone was prohibitive to independent directors. The delays and technical issues that arose on film shoots were often a result of the limitations of physical film. As such, studios and corporations had long been in the business of developing more reliable methods for film production and delivering them to the film community for testing and feedback.
The other driver of the digital takeover was artistic vision. Action films are reliant on visual effects. Directors such as George Lucas and James Cameron were frustrated by the limitations of celluloid. They led the way in developing hardware and software to bring their futuristic visions to the screen. The result has been a permanent departure from making movies in the traditional way, with each advancement in digital technology taking the industry farther afield of historical norms.
Sin City: A Dame to Kill For
Once digital recording passed muster with enough filmmakers, studios pushed to use the technology on all films as a cost-saving measure. This set in motion a disruption of the traditional film production model and permanently impacted every aspect of the process from development to projection. For some in the industry, technological advancement was an inevitable learning process. Each new tool or skill brought people back to their job wiser and better equipped. For others, advancement carried them away from a beloved art form into new territory and sacrificed everything they couldn’t bring with them.
Filmmakers featured in Side by Side have unique processes and points of view, but they all agree on one issue: those who wanted to work in one format or the other had to find each other. A director who wants to shoot on digital isn't going to work with a cinematographer who only shoots on film. When you apply this notion to society as a whole, the current polarization of America makes sense. Americans best served by digital advancement are largely unconcerned with who is left behind, taking the general view that there is always loss with gain. Meanwhile, Americans ignored or harmed by technological advancement assert that it's not advancement if it's not inclusive; that there are costs associated with progress; that sacrificing people for technology harms those individuals even if it benefits society as a whole. Like-minded individuals band together, and the digital revolution has thus created two polarized camps. Both want their country to succeed, but they're pitted against each other because their definitions of success are at odds. The mere existence of digital technology divides us even when our ultimate goal is the same.
Star Wars
In Side by Side, it's striking that those who advocate for celluloid describe it in futuristic terms. There's a wonderful stretch of interviews with directors, cinematographers and actors describing a shoot day with film. They note the distinctive sound of the "money" running through the camera that ups the tension on set. Richard Linklater likens it to an athletic event, where participants mentally and physically prepare for a heightened moment of performance and then…Action! Words like "magic" and "leap of faith" are used to refer to the act of recording on film with the same kind of awe one might reserve for flying cars or teleportation. The sentimental language of people who are making a visionary plea is now used to entreat listeners to buy into history. This is a tipping point on the arc of a revolution. Where we once romanticized the future, now we romanticize the past.
The Social Network
Lucas, Cameron, David Fincher, Danny Boyle and Robert Rodriguez all speak convincingly to the massive benefits of digital filmmaking. Lucas describes the antiquated process of color-timing, which has now been replaced by the entirely new art form of digital color grading. Fincher recalls an issue with camera weight when filming the rowing scene in The Social Network, and how a 5.5-pound digital camera made his impossible shot possible. Rodriguez says he wouldn't have attempted to make the comic book thriller Sin City without the myriad freedoms afforded by digital manipulation; the movie simply wouldn't exist.
In perhaps the most compelling testimony, Boyle vividly describes how smaller digital cameras interacted with his actors on the streets of Mumbai in Slumdog Millionaire. His DP, Anthony Dod Mantle, was free to roam in and around the sets, improvising with angles and capturing images with a kind of intimacy that was previously unattainable with cumbersome film cameras. Mantle won an Academy Award for Slumdog, the first cinematography Oscar ever awarded to a digitally shot film.
The counterargument to these digital discoveries, however, is stark. Christopher Nolan, Martin Scorsese, Wally Pfister and others are vocal about the loss of realism with so much image manipulation. They discuss the importance of slower pacing during the filmmaking process, and how the encumbrances of physical film force necessary pauses in the creative process. Where filmmakers once shot scenes in two-minute bursts and broke to reload the cameras, now digital cameras run without cutting. People are always "on." This is frustrating for some actors (Robert Downey, Jr., Keanu Reeves) and welcomed by others (John Malkovich).
Scorsese and Nolan indirectly raise the question of whether there's enough room to think, focus, and make good decisions on the timeline dictated by digital technology (a question Americans ask daily, both of themselves and their tweeting president). Listening to their reasoning, it seems incredibly foolish to argue with genius, yet five years on we know that's precisely what studios have done. Scorsese's last two films, The Wolf of Wall Street and Silence, were a hybrid of film and digital shots. In 2014, Paramount announced it would no longer release movies on film. Undoubtedly other studios will follow suit. Nolan is the high-profile holdout. He will release Dunkirk this year, which Hoyte van Hoytema shot (by all accounts, magnificently) on 65mm film.
Anne V. Coates, the celebrated editor whose career has spanned 70 years, is eloquent on the broader impact of working at digital speed. She makes an excellent case that the automation of the editing process delivers less-considered work and has all but eliminated happy accidents. For example, Lawrence of Arabia (for which she won an Academy Award) includes a scene in which Lawrence blows out a match and the film cuts directly to the sunset over the desert. The cut delivers a startling, thrilling visual. Coates observes that a dissolve was originally written in the script, and if she'd been editing the film digitally the transition would've been added automatically. Instead, she was working with physical film that required manually cutting the film strips and taping them together, so the first edit had the film clips "butted together" without any transition added. When they watched the results of that first cut through the machine…"Magic."
Lawrence of Arabia
Early adopters of digital technology — Lucas, Cameron, Rodriguez, the Wachowskis, et al. — are known for inventing their worlds; much of their work is futuristic and fantastical. Early defenders of shooting on film — Scorsese, Soderbergh, Nolan — typically apply their vision to the world as it is and explore stories of the past and the present. From one angle, these groups can be boiled down to "fake versus real." In a fake world, the audience is treated to superhuman visuals and challenged to think beyond corporeal limitations. In realistic films, audiences watch drama or comedy unfold between recognizably limited characters and are offered a touchstone for processing their own lives. Both of these experiences are powerful. Both have value. In 2017, only one is thriving.
Out of Sight
What has the changeover from film to digital cost us in terms of emotional depth? For me, the difference is palpable if not measurable. Even the work of visionaries like Lucas and Cameron has suffered slightly. Some of the most exhilarating moments of Titanic came from the grainy, film-shot underwater footage of the ship itself. The visual experience of watching film, versus digitally-shot footage, is shades closer to real life. Those scenes anchored the film emotionally (if not literally).
Meanwhile, Avatar was a visually stunning experience but it didn’t leave emotional fingerprints the way Titanic did. Similarly, I loved Star Wars before most of today’s technology was available and I don’t like what was done to the original films with the technology that has been developed since. There is an emotional connection to what we recognize as real. From theater to film to television to digital streaming, we’ve stepped farther and farther back from flesh and blood experience, ever-widening the space for others to reach in and manipulate what we see. The more we watch digitally perfected images, the less satisfied we become with real life, and the less prone we are to connect with it emotionally.
In 2017, these shades of the fake/real divide are central to digital’s impact on our political process. While politicians and pundits argue over what is real and what is fake, consumers of the information are less and less able to discern between the two on their own. It’s the information version of photoshopped models. When an altered image is presented to millions of people as real, there is mass diversion from reality. The same holds true for facts. The outcome is a misinformed populace.
Dunkirk
The final issue discussed in Side by Side may be the most salient for American politics in 2017. While the image quality of digital filming can be hashed out by filmmakers and camera developers, the choice to watch a film together in the theater is up to audiences. Michael Chapman’s comment that “cinema was the church of the 20th Century” feels right, and dated. The 21st Century is a world full of worshippers-on-the-go. Only streaming services and online video stores know a subscriber’s true religion.
The loss of a unifying arbiter of culture has untold implications. I suspect it's responsible for the aggressive reactions I get when I say I don't watch television. People recount entire shows for me on the spot, as though my reason for not watching is that I think I won't enjoy it, not that I have limited time. In the midst of this unsettling revolution, people are unconsciously searching for common ground. Someone who doesn't watch Game of Thrones or Girls is no longer simply missing out on something great. They're perceived as a threat to the diminishing pool of broadly shared culture that binds us together. On this and so many other levels, fear of the other has defined the digital revolution so far. If Hollywood's experience is a predictor of our trajectory then we'll fight our way out of this polarized state to find common ground again, and we'll have cultural scars and bruises to show for it.
Critics reviewed Side by Side favorably in 2012 but noted its “inside” and “geek heaven” tendencies. In 2017, it is a film for everyone. We’re savvier by necessity, as digital technology has taken over the most important aspects of our lives: communication, organization, and archiving, or memory. We’re also reengaging vociferously with the political realm after several decades of relative quiet. As noted by Nancy Benac and Ben Nuckols for the Associated Press, “[the] Women’s March on Washington appeared to accomplish the historic feat of drawing more people to protest the inauguration than the ceremony itself attracted.” New forms of digital engagement are clearly having an effect on politics but it’s too soon to draw conclusions about where they will ultimately take us.
The digital revolution is an unfinished story. The internet has usurped much of our physical infrastructure, but a forced takeover doesn’t engender trust. With each incursion into our privacy, and with cyberattacks on the rise, people are increasingly aware of technology’s reach and they don’t like it. When a foreign country can damage our democracy and take away our freedom of choice by influencing our election through digital media, voters may finally see fit to push back. Silicon Valley has been an unapologetic proponent of the digital revolution. Baked into their philosophy is an anti-consumer approach: We tell you what you want. Some call that tastemaking, but the ubiquity of smartphones and computers means that the Facebooks of the world have too great an influence over events as important as our presidential election. In 2017, Silicon Valley has a lot to answer for.
As we grow with this rapidly expanding technology, it's important to continually redefine our philosophy in a shifting context. Are we moving forward as a society? Is this technology helping or hurting us? Do the ways that we incorporate it serve our values? …and one question I couldn't shake while writing this piece: Should we even call Side by Side a "film"?
Side by Side is available to stream on Amazon, Netflix, iTunes, and elsewhere.
George F. Cram (1842–1928) — Cram’s Unrivaled Family Atlas of the World, Chicago IL. Lithograph color print. Diagram of the Principal High Buildings of the Old World
In an article published in The Atlantic this week, Walter Isaacson laid out his vision for “how to fix the internet.” The problem, he says, is Trolls. Bugs. Lack of tracking. He believes anonymity has “poisoned civil discourse, enabled hacking, permitted cyberbullying and made email a risk.” His solution is to do away with anonymity, thereby offering himself as the mouthpiece for every Silicon Valley titan with deep pockets and a hunger for data.
I've written before on how we civilize technology, on the challenges we face with each technological shift forward, whether it's ships, trains, radio transmitters or nuclear energy. The trajectory involves a series of near-misses while we get the hang of our shiny new toy. When cars were first invented there were no laws to govern driving. As cars proliferated, accidents increased. Now we legislate everything about car and road safety, down to the driver's decision to wear a seatbelt. There are fines for not wearing one. If the trouble with internet technology is bad behavior, why not address the behavior?
What Isaacson skims over in his trolling lament is that the worst trolls on the internet are the very people he thinks should solve the trolling problem. Huge media companies like Facebook shamelessly collect their users' data and sell it. Anonymity is not permitted on Facebook because the company can't use you, can't parse your information into demographics and ad bins, if they don't know who you are. Similarly, the "trust" relationship put in place by search engines like Google is merely a handshake agreement that the company won't "be evil." That doesn't mean Google deletes your search history. As we saw in the 2016 election, "evil" is a word that's up for interpretation in our society. We, as users of Google, don't know who is deciding what's evil at any given time. Isaacson wants users to lose anonymity but notably makes no mention of tech companies and their addiction to opacity. In Isaacson's future world, users are the biggest losers.
Isaacson offers logical suggestions for what a safe internet might include, but how he gets there is the sales pitch of the century. Certainly, it's important to institute payment for work. We don't need a new internet for that. I've been pitching companies like Medium on this concept for years. "Find a way to pay your writers, even one cent per read, and you will revolutionize the publishing industry." "A pay model is the only way forward to maintain the integrity of what is published and read." Medium could institute a pay model today. What Isaacson misses is that the companies and sites most users rely on for information offer their services for free so that they can take whatever consumer data they want in return. The internet hasn't evolved naturally into a pay model because the people currently making big bucks off of internet technology are also in charge of its design. There are no checks and balances built into the governing of the internet. This does not mean we do away with internet privacy. It means we legislate it.
To revolutionize the internet, the Googles and Facebooks would have to become industry-leading pay-model services. In a pay-model service, user-consumers would lose anonymity to the company offering the service (via their credit card), but maintain privacy in whatever further capacity they wished while using the service. It would be no different from walking into a Starbucks and ordering a latte. Give the barista your own name or someone else's, pay with cash or credit, hide your face behind sunglasses or don't…at the end of the day, you're physically standing in the store and someone can deal with you if you cause a disturbance. As long as you're a peaceful coffee drinker you still have agency to determine your level of privacy. The same is true of a paying customer online.
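To make that design concrete, here is a minimal sketch of how a pay-model service might separate the billing identity it needs from the public identity a customer chooses. Everything in it (the class names, fields and the suspend_for_abuse hook) is a hypothetical illustration of the Starbucks analogy, not a description of any existing platform.

from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class BillingIdentity:
    """Known only to the service: the equivalent of paying at the register."""
    legal_name: str
    card_token: str  # opaque payment-processor token; never resold

@dataclass
class PublicProfile:
    """What everyone else sees; as pseudonymous as the customer wants."""
    display_name: str
    user_id: str = field(default_factory=lambda: str(uuid4()))

@dataclass
class Account:
    billing: BillingIdentity  # accountability lives here
    profile: PublicProfile    # privacy lives here
    in_good_standing: bool = True

    def suspend_for_abuse(self) -> None:
        # "Someone can deal with you if you cause a disturbance":
        # moderation acts on the paying customer, with no need to resell data.
        self.in_good_standing = False

# A customer pays under a real name but participates pseudonymously.
account = Account(
    billing=BillingIdentity(legal_name="Jane Doe", card_token="tok_hypothetical"),
    profile=PublicProfile(display_name="peaceful_coffee_drinker"),
)
account.suspend_for_abuse()  # possible without ever exposing the pseudonym

The design choice is the essay's point in miniature: anonymity is surrendered only to the party taking payment, while privacy toward everyone else remains the customer's to set.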
Finally, and this is perhaps the most important omission in Isaacson's piece, there is presently a massive power struggle underway between government and technology elites — specific, powerful individuals within broader industries. Both groups are greedy for data. One group wants to retard technology in order to maintain control over its electorate. The other group wants to advance technology so fast it will maintain control over its creations and, by extension, its users. The electorate and users are one and the same. The bad seeds among us exist whether anonymity is built into the internet or not. They exist in government, they exist in boardrooms and they exist in chatrooms. It is persistent abuses of power which promote toxicity. Unless government and technology elites find a way to work together for the betterment of society as a whole, that toxicity will continue no matter what internet protocols are put in place.
Living in Asia in the late 90s, I spent time in countries that were then considered "developing" economies. Textbooks were filled with prognostications about the potential growth and downfall of these places, but no bar chart captured the terrifying hilarity of driving an hour outside of Seoul at high speed in a brand-new sedan on unpaved roads, with only potholes and feral animals to navigate by. Technology was tangibly out of sync with infrastructure. When something blocked the road, drivers veered onto the front steps of houses to get around it. Parking was wherever you felt like it, and parked cars were often rendered inaccessible by other people's feelings about parking. Disagreements were resolved the old-fashioned way, with pointing, yelling, and the threat of fists. Over time, enough pedestrians became casualties and enough expensive tires were blown in potholes that laws became necessary, as did the paving of roads. The automobile is no less amazing because society set a speed limit. We mitigate and retard technology where it threatens and outpaces us. This is how we civilize our innovations.
The most poignant irony of the Information Age is the internet's role in restructuring our relationship to politics. In CITIZENFOUR, Edward Snowden avowed his intent to end the tyranny of the snooping government, but technocratic paternalism is equally invasive and it's built into the digital realm. Complicated legal documents pop up at the outset of a business relationship, and people with no legal background are conditioned to move ahead with a trust-us, one-click "Agree." Our relationship to intelligent technology is best portrayed by the routine updates we tacitly agree to without reading or understanding what they entail. I Agree to whatever you're about to load onto my phone or into my computer, agree to what you think is best for this device and my use of it, agree without stipulation, agree without working knowledge, agree because not agreeing seems time-wasting and foolish and questioning is beyond my technical ability. I always agree with you because everyone else is agreeing with you so it must be okay. I always agree with you because I don't know why I should disagree.
This habitual agreement has proved deadly to the exchange of real information. The technocracy devised the fastest, most appealing method for securing a user, and internet users subsequently became desensitized to the act of giving away their rights. The repetitive process has numbed healthy suspicion of any organization that demands legal agreement to a loss of personal agency. Those internet service agreements are not there to protect individuals, they are documents created by expensive legal teams to ensure a company has no responsibility to the consumer. If these statements aren’t disturbing enough, stretch them to apply to the government in the shocking months and years after 9/11. The PATRIOT Act was the federal government’s service agreement, and the majority of the American people agreed to it without understanding what they were signing away.
Fourteen years on, perhaps the greatest misstep in rectifying our mistake is to begin with privacy. Loss of privacy is an end result. Privacy can be protected, it can be violated, but it cannot be given. That notion is a falsehood born of Victorian manners — I'll give you some privacy — which preempts uncomfortable directives: Leave the room. Get off the line. Turn your head. Don't read my emails. I need my privacy. The sci-fi notion of "mindreading" is terrifying precisely because it violates the only space on earth that belongs entirely to us. When we communicate with people, through talking, writing, or touch, we consciously extend that private space to include others. A violation of private space is a form of mindreading. In building society around the digital world, we've ceded a massive amount of private space in order to move through it safely. The only recourse to learning your boyfriend has read your journal is to hide it in a new place, but the only recourse to discovering people can hack your emails is to stop writing anything sensitive or private at all. By necessity, we've retreated inward. Our truly private worlds are almost entirely interior now. That loss of intimacy has already alienated us from one another. Unable to safely extend a hand or share a thought, our knowledge of people stops with avatars and public text. We can't know people's deeper feelings and they can't know ours. There's nowhere safe to talk. We are alienated.
In CITIZENFOUR, Glenn Greenwald asked Edward Snowden why he would risk imprisonment — the obliteration of privacy. In doing so, Greenwald identified the one circumstance where personal agency is taken away. That the cyber debate revolves around the give and take of privacy tells us that we're already in a prison of sorts. To get out, we need to reestablish laws and agreement. Not the tacit agreement of accepting free stuff in exchange for unknown costs, but overt agreement and the expectation of legal recourse if our rights are violated. As political theorist Stephen Krasner observed in the early 1980s: "The Constitution is a document more concerned with limiting than enhancing the power of the state." Modern lawmakers violated this precept into extinction with the USA PATRIOT Act. There's no current expectation that the present government will give up the Patriot Act of its own volition, and no reason to believe the public has the will to force it. This is where most people drop out of the resistance movement and succumb to prison life.
The other misstep in solving the puzzle is a myopic focus on the future. Pew Research Center's Net Threats survey asked over 1,400 technology experts to predict "the most serious threats to the most effective accessing and sharing of content on the Internet." With so much focus on forecasting, we're overlooking a wealth of facts in the present. Ask what the most serious threat was to the children of a South Korean mother living 20 miles from the DMZ in 1997, and most Americans would have predicted a doomsday war with the North. However, it's just as likely she would have said: "See that black sedan driving 50mph over my front doormat…?" Attention-grabbing headlines often obliterate imminent dangers. Public discussion leapfrogs over what we could solve today because no one wants to dig in and do the unglamorous work of painting a dotted line down the center of the road. (Put another way: Why isn't Pew asking these 1,400 experts to identify today's most solvable problem and offer a specific solution? That's 1,400 solutions right there.)
If technology is responsible for creating a state of alienation then the government is guilty of capitalizing on that alienation. When politicians appeal to people’s confusion over new technology, they perpetuate a dangerous myth that people can separate themselves from the digital age. Lindsey Graham’s opinion on cyber surveillance is useless if he doesn’t understand how Americans use email or why they might be upset that those emails are intercepted and read by government officials. Perhaps he’d like to turn his diary over to the CIA and see how that feels. His vote on privacy legislation would certainly be made with the necessary wisdom.
America is a world leader in computer technology and innovation. Every member of Congress, and certainly the next president, should be knowledgeable about computer technology. America’s elite governing body must be prepared to debate cyber. My 90-year-old grandmother has been sending emails for years and she has a Facebook account. If United States senators can’t keep up with her computing skills then they don’t belong anywhere near the Capitol. The most important action Americans can take is to vote for cybersmart House and Senate representatives in upcoming elections.
As backwards as Washington seems, cybersmart politicians do exist. It’s clear from Hillary Clinton’s decision to house computer servers in her home during her tenure at State that she’s knowledgeable about cyber. Despite her public statement, Clinton’s use of personal servers has nothing to do with convenience and everything to do with security. Clinton owns her data. She also possesses depth of knowledge about what goes on in the intelligence community. I expect that is what drove her to take control of her privacy. If she wants to do the country a great service, in or out of the White House, she should make cyber legislation her top priority and level the playing field for citizens everywhere. It would unite the country to speak plainly about the state of our internet. Honest talk about cyber surveillance from a public figure who can speak to both sides of the debate would be a huge step forward for the country.
What will hopefully become apparent, to decision makers and citizens alike, is that both sides of the ideological struggle derive their power from the online participation of citizens. The present situation has left people with nowhere to turn for trustworthy leadership. The conditions that permitted fascism's spread after World War I — post-war malaise, financial struggles, political distrust — tamp down people's natural resistance to incremental loss of agency. The circumstances that facilitated the rapid creation of totalitarian governments in previously liberal, rational societies are cropping up exactly one century later. The situation is again ripe for Machtergreifung, or power grab.
Democratic European societies once made a desperate attempt to escape their status quo by funding unstable third parties with disastrous consequences. We are now seeing many radical ideas thrown into the mix, some backed by logical process, others attempting to shake people out of rhetoric fatigue. Reboot the Government! Reboot the Bible! Reboot the Brain! Drop one letter from those slogans and we’re deep in A.I. territory. Bill Gates, Elon Musk, Stephen Hawking and their ilk proclaim their fear of the dark side of artificial intelligence with increasing regularity. We should be afraid too. There’s no precedent for the power vacuum created by a flaccid Congress and a disproportionately wealthy technology sector. This situation could pave the way for the first artificially intelligent leader. The engineering is getting there, and the rest would be…history.
What if there were a way to influence the past and change the future? With every choice we make — voting for president, purchasing a stock, getting married — we hold an entrenched view that possibilities evolve with time. We discuss the future in predictive terms (likelihood of, on target for, could go either way if…) and plan accordingly. To the extent that future outcomes don’t fall in line with our expectations we infer that we lacked information, were poor readers of probability, or experienced a devilish bit of bad luck.
There's also a sense of momentum as we approach a crossroads where probability becomes inevitability. Expectations take over. This is evident in the person who doesn't vote because their preferred candidate is almost certainly going to win, or the person who marries despite back-of-the-church jitters because halting a wedding is impossible. We rationalize away chances even though they exist right up to "I do." Would we feel differently about those discarded chances if they were sent to us from the future?
John Cusbert, Research Fellow at the Future of Humanity Institute at Oxford University, challenges our foregone conclusion about chanciness. In his paper "Backwards causation and the chancy past", Cusbert asserts that chanciness isn't tethered to time in a linear fashion, and that future outcomes can possibly affect chanciness in the past. This is not to say that all chanciness originates in the future, but theoretically some of it could.
I discovered Cusbert’s paper just as I finished rewatching Christopher Nolan’s excellent space epic Interstellar and the two works independently made sense out of each other. Cusbert provides a framework for what happens in time’s physical dimension in the film, while Interstellar plays out a dramatized version of Cusbert’s backwards causation scenario. The implications for everyday life are extraordinary, and also very fun to consider.
First, a bit of housekeeping. Backwards causation of chance is only possible if we unlink time and chance. Cusbert does an excellent job of explaining the whys and hows, but his conclusion is the jumping-off point for this piece. To wit: It is false to assume that chances are defined at times.
Thus, imagine Time and Chance as two objects held up in the air by you (the universe). When you hold them together they exhibit certain properties (perhaps they're magnetically attracted), and when you move them apart they exhibit other properties (perhaps one becomes smaller without the heat reflection of the other). Whatever their properties, Time and Chance are separate entities, bound by the laws of the universe, which interact with each other in noticeable ways that affect our lives.
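For readers who want the conventional view spelled out, the standard picture in the philosophy of chance indexes chance to time. The notation below is assumed for illustration; it is not quoted from Cusbert's paper:

Ch_t(A) \in [0,1], \qquad t' > t_A \implies Ch_{t'}(A) \in \{0,1\}

That is, a chance function Ch_t assigns event A a probability at every time t, and once A's time t_A has passed, its chance collapses to 0 or 1 and stays there; on this view the past is never chancy. Unlinking Time and Chance amounts to replacing the time index with something else, such as a point in the causal order, so that a chance function Ch_E(A) can assign a past event a genuinely intermediate chance even when E lies in A's future.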
Now the fun part…hunting for backwardly caused chance in the lives of Interstellar's astronaut Cooper and his daughter Murph.
Assumption #1 — Cooper will pilot the Endurance
Cooper will pilot the Endurance because he pilots the Endurance. It is a property of time that the past cannot be changed.
Chance #1 — Cooper may or may not make himself stay on earth
When Cooper travels into a black hole near the end of the film, he encounters a physical dimension of time. The tesseract is a construct of Murph’s bedroom during the week before Cooper left earth on the Endurance. This stretch of time is in the past but within the tesseract it is also a fragmented, nonconsecutive part of the present.
Present Cooper desperately communicates with Past Murph using gravity to knock books to the ground. The past cannot be changed, but Cooper hasn't realized this yet and is backwardly causing chances to make himself stay. From the tesseract in the present, there is zero probability of those chances working, but they're chances in the past until Past Cooper leaves earth. They're also chances in the present until Present Cooper gives himself the coordinates to NASA. Chanciness is chancy. It doesn't dictate an outcome, it only offers the possibility for it. For a brief window of time, Cooper's dropped books and coded messages are backwardly caused chances that his past self ignores and Past Murph puzzles over.
Assumption #2 — Cooper will send himself on the mission
Once Cooper realizes that he sent himself on the NASA mission, and that he needs to go on the mission in order to arrive at the present moment, he locates the night of the dust storm in the tesseract and gives his past self the coordinates to NASA in binary through the falling dust. This is a fascinating moment that seems to be filled with chance — Cooper could decide not to send himself the coordinates, leaving his past self unaware of NASA's nearby outpost from which his departure from earth is inevitable. However, in the present, Cooper begins to grasp that he has a chance to help Murph and civilization on earth by bringing himself to the tesseract, so he doesn't even hesitate to send his past self the coordinates. Therefore, there is no chancy element to this event whatsoever. Past Cooper already received the message from Present Cooper, found NASA and left earth.
Chance #2 — Cooper may or may not increase the chances of saving the people on earth
Once Cooper realizes he can’t change the past but he might be able to change the future, he interprets his purpose in the tesseract as being “the bridge” to Present Murph. He encodes quantum data in a wristwatch in Past Murph’s bedroom for Present Murph to find decades later. That he chooses the wristwatch and that he encodes the data are two ways he’s backwardly creating chanciness. She might not find the watch and she might not be able to use the data. Neither outcome has occurred yet for Cooper or Murph.
Chance #3 — Murph may or may not find Cooper’s quantum data
A ticking hand on an old watch in an abandoned bedroom in a house where she is not welcome…these are seemingly insurmountable odds against Present Murph finding the data, but the tesseract offers an emotionally significant time for both father and daughter, which enables Present Cooper to weight the chanciness heavily in favor of Murph's eventual discovery of the watch.
Artificially intelligent robot TARS is with Cooper in the tesseract, trying to parse his logic:
TARS: "Cooper, what if she never came back for it?"
COOPER: “She will. She will.”
TARS: “How do you know?”
COOPER: “Because I gave it to her.”
TARS is unable to match Cooper's innate confidence that emotional attachment is a powerful enough influencer of probability to overcome seeming inevitability. Cooper's love for his daughter made him give her a watch as a way to keep him close. Murph's love for her dad will make her happy to find the watch he gave her years later. Murph's inquisitive nature, nurtured by her dad, will likely cause her to recognize his message encoded in the second hand. It's not a given that Murph will find the data. It is chancy. The tesseract might belong to descendants of the civilization that Dr. Brand is starting on a new planet, and maybe their only requirement in bringing Cooper into the tesseract was that he send himself to NASA so he could successfully pilot Dr. Brand through space. Cooper's extra help for Murph is chancy and unproven. Even so, Cooper is powerfully assured that his plan worked because the tesseract closes once he finishes encoding the quantum data. At that same moment across spacetime, we see Present Murph recognize her father's message in the wristwatch in her bedroom. The future is changed for father and daughter through backwards causation of chance.
*
Could chance be a type of emotive gravity? Emotions certainly influence our decision-making. Could chance be the force that pulls present-time Cooper in line with past time inside the tesseract, acting on him to respond in lockstep with a past he’s already lived? Cooper exhibits a spectrum of emotions during his time in the tesseract. He is distraught when he first arrives and doesn’t understand the system. He’s calmest when he realizes he has an opportunity to transmit useful information across spacetime.
The moment Cooper is no longer controlled by past events, he regains control of his emotions.
Similarly, young Murph is most distressed by Cooper's highly emotional, ghostly communication through falling books, likely because she is powerless to use the information to convince her father to stay on earth. She is calmest when she recognizes his calmly sent data decades later, even though her circumstances are considerably more fraught and dangerous. Both father and daughter are calmest when they aren't trapped by inevitability and have a future-oriented purpose. They're calmest when they have chances to make informed choices.
One of many interesting definitions Cusbert puts forth in his paper is that “[it’s] essential to chance that a system’s chance properties be among its physical properties: this is what distinguishes chances from other kinds of objective probabilities (such as logical and evidential probabilities).” In the context of Interstellar, gravity is the only force Cooper can use to physically communicate across space-time and cause chanciness. However, the past chances Cooper physically sets up are too weak to make a difference. Without Murph caring that her dad is gone, without Cooper caring whether he saves Murph’s life, without a powerful love and emotional bond between them, the wristwatch would be just another object in a house of objects that is tossed away after decades of no use. Time and gravity need emotion to effectively communicate possibility.
Yet, emotion isn’t powerful enough to change the past. If it were, there’d be nothing constant in our lives. We would have no history. Who doesn’t have an important decision they’d do over? It’s difficult to watch Cooper fight his past, seemingly able to make different choices if only he’d calm down. But of course, he can’t calm down. He’s in a state of agony at being separated from his daughter. Within the tesseract, Cooper’s actions aren’t chancy because his love for Murph is constant. The emotional pull is unwavering and it exists uniformly across space-time. It makes Cooper behave predictably in line with the past. Perhaps emotive gravity is what pulls time powerfully in one direction. Of those two objects you hold in the air, Time and Chance, it would be incredible if Chance were the more powerful of the two.
Cusbert's theoretical reasoning uses coin tosses, time shifts and algebra to illustrate what Christopher and Jonathan Nolan portray through space travel, tesseracts and a father-daughter bond. The fictional story applies workable science to the real world, then adds the notion that love is the determining factor in backwards causation of chanciness. This is especially pertinent to examinations of modern crises. Insofar as love is absent, or not evident, there is no benevolent force steering our lives, and a sense of hopelessness and doom pervades our outlook for the future.
It was chance that I found Cusbert’s paper. I wasn’t looking for it. It is one of millions of papers on the internet. It was also chance that I read his paper at a time I was considering time, as opposed to last summer before Interstellar was released. By chance, the publication date of Cusbert’s paper, printed on the front page, is a highly significant date for me, which mildly disposed me toward reading it rather than passing it over. (I am someone who attributes compelling qualities to coincidence; when I meet someone with my same name I am affected.) None of these chancy elements are gravity-related, but rather are familiar examples of chance that moves linearly with time. Cusbert doesn’t suggest that all future outcomes determine all past chanciness, just as Interstellar doesn’t suggest that future beings control the present through spacetime. However, both works offer compelling reasons to reconsider our long-held view that future outcomes are caused by past and present possibilities alone. By entertaining the notion that chance could come to us from the future, we have yet another reason to listen to our hearts and learn to better read our emotions.
There’s a gut-wrenching scene at the climax of the World War II biopic The Imitation Game. Alan Turing and the codebreakers at Bletchley Park decrypt a German cable and suddenly they know the enemy’s plan to attack Allied ships and, incredibly, all attacks for the foreseeable future. Their celebration is short-lived. Turing grasps the ephemeral nature of their discovery and has a sickening epiphany: To win the war they can’t tip off the Germans that they’ve decoded Enigma. Instead they must simulate ignorance by choosing strategic victories and sacrificing the rest of their men. Panic sets in. One of the codebreakers has a brother serving aboard a targeted convoy. He begs his colleagues to use what they know to spare his brother’s life but Turing is resolved. Their secret must be concealed at the highest cost. The ensuing choices haunted the intelligence community long after the war was won.
Over the last 14 years, Americans have been conscripted into an information war. Individual privacy is now incidental to the objectives of government and technocratic elites, and vulnerable to the exploits of criminals and extremists. The battle for control over the digital space is a gloves-off, civil-liberties-be-damned free-for-all. To reestablish trust in our oldest institutions it's necessary to parse the steps that led to the present situation and decrypt the objectives of contemporary leaders and policymakers.
RED FLAGS
Nearly 100 years after Nazism flourished in Germany, the question is still asked with incredulity: Why did German citizens permit and participate in genocide? There will never be a satisfactory answer to the moral question of why, but there is a clear beginning in the circumstances of how. The rise of fascism in post-World War I Europe began with a confluence of domestic troubles in Italy: a financial crisis, concomitant economic hardship, grief over millions of Italian war casualties, widespread dissatisfaction with political parties that failed to deliver on promises, and a perceived threat to financial security from a foreign (Communist) ideology.
Onto this stage stepped Benito Mussolini, a staunch nationalist and war veteran whose preoccupation with violence inspired the formation of an army of uniformed "Blackshirts" — unemployed youth, funded by the middle and upper classes, who assassinated opposition leaders, suppressed and destroyed opposition newspapers, and eventually marched on the capital to take power in 1922. "A Brief History of the Western World" summarizes Italian fascism thus:
“In the beginning, as Mussolini himself admitted, [fascism] was largely a negative movement: against liberalism, democracy, rationalism, socialism, and pacifism…[Italians] had been cast adrift, let down by failed hopes of progress and happiness. Faceless in a mass society, they also felt alienated from themselves. The Fascists found an answer to this emptiness by arousing extreme nationalism….The fascist myth rejected the liberal reliance on reason and replaced it with a mystical faith. Stridently anti-intellectual, it held that the “new order” would spring from the conviction of the “heart.” Fascists therefore looked upon intellectuals as…suspicious characters…. Most ordinary Italians accepted Fascism with enthusiasm. The individual who formerly felt alone and unneeded, enjoyed a new sense of “belonging.”
The rise of fascism in Italy took less than six years from invention to political dominance. Fostered by comparable conditions in neighboring countries, the ideology spread across Europe and fatefully intersected with the political ascent of Adolf Hitler in Germany. The Germans have a word for Hitler's rise to Fuehrer: Machtergreifung — Macht, meaning power, and ergreifen, to grab or seize. Like Mussolini, Hitler headed up a violent army of unemployed youth and committed illegal acts to dissuade and undermine his opponents, but it was the power vacuum created by ineffective German leadership that paved the way for the Third Reich and Nazism.
*
Flag of the Soviet Union
A second world war and one Pax Americana later, the world was pumped with Cold War adrenaline. In 1962, nuclear superpowers bumbled their way into a stand-off and lucked their way out of the unthinkable during thirteen days of diplomatic posturing over Cuba. The rapid advancement of nuclear technology meant there was no room for error, yet error upon error was made. In effect, American leadership failed the test but passed the class. America and Russia skated by on their shared basic values, but the crisis taught no lessons on how to face an adversary with profoundly different goals, specifically those rooted in tribal conflict and revenge.
In the aftermath of America's nuclear showdown, political theorist Graham Allison published his seminal work "Conceptual Models and the Cuban Missile Crisis." It would form the foundation of American foreign policy. Allison defined three distinct methods for understanding policy outcomes: the rational-policy model (foreign governments behave rationally in relation to their goals), the organizational-process model (the military typically wants X, the bureaucracy typically wants Y, and historically they have n relationship to each other, so the outcome will predictably be z), and the bureaucratic-politics model, where shapeshifting factors such as interpersonal conflicts, bureaucratic inertia, and availability of resources act on each other to influence foreign policy outcomes. Government elites strongly favored the bureaucratic model as conventional wisdom that would shape American foreign policy for decades to come.
Political theorist Stephen Krasner reassessed Allison’s models, first in 1972, and later at the height of the “first” Cold War. He was troubled that President Kennedy, subcabinet members, and scholars from top public policy programs in the 1960s wholly adopted the bureaucratic approach, where outcomes were viewed as an evolving compromise of inputs. Krasner identified the fundamental flaw in the model as giving elite decision-makers a blanket excuse for their failures. Specifically, he reframed bureaucratic-politics thinking as a biased framework for blaming policy errors on the “self-serving interests of the permanent government,” where elected officials were viewed as powerless to corral the government “machine.” He summarized the infinite loop of accountability thus:
Bureaucracy is a machine, and “[machines] cannot be held responsible for what they do, nor can the men caught in their workings.”
This is a stunning double entendre for the Information Age.
DIGITAL DICTATORSHIP AND WARRING ELITES
Rights and privacy are dictated by an elite group of decision makers who control the laws (Government) and the digital infrastructure (Technocracy). Internet usage and hardware purchases now constitute a "vote." Government and technology sectors each employ 1% (3–4 million people) of the American population. The percentage of top-level decision-makers, technicians and analysts within those fields is assumed to be less than 0.01% of the American public and is therefore elite. Here, "technocratic elite" lumps Anonymous hackers in with tech CEOs, and "government elite" includes members of all branches of government and political influencers with monetary or legislative sway. Since both elites invest billions of dollars successfully marketing themselves to society, the benefits they provide are widely known and will not be discussed here. Instead, the focus is the encrypted cost of advancement. Decoding the costs reveals which services and policies are truly beneficial, and to whom.
*
The Technocracy
The history of the government's relationship with computer technology is long and complicated. Perhaps only one fact is universally accepted: Al Gore did not invent the internet. Contrary to popular folklore, he never claimed to invent the internet. Gore's words were twisted; the transcripts are widely available, and he was subsequently defended by two of the "fathers of the internet" as deserving "significant credit for his early recognition of the importance of what has become the Internet." The urban legend illustrates the strange paradox of the Age of Information. Even with unprecedented access to the truth, millions of people are often misinformed.
Internet development began in the 1960s, reached its broadly used form in the mid-1970s, was commercialized through the 1980s and came into its own in the early 1990s with the introduction of the World Wide Web, the universally accepted infrastructure for data exchange on the internet. Web engineering is credited to Tim Berners-Lee's 1989 proposal at CERN; it was developed over the next few years and made free to the public in 1993. Anecdotally, this snippet from the then-definitive International Law Anthology, enumerating the issues confronting global governing bodies at the time, reveals the digitally unsophisticated world that received this new technology:
Global Communications: The earliest topics in this burgeoning field were international postal services and the laying of submarine cables. The invention of radio, television, and facsimile and modem communications technology, have led to explosive growth in this area of international regulation. Jamming and counter-jamming of another nation’s radio wave frequencies, channel regulation, remote sensing, and stationary satellite transmission are matters of intense interest. There is a move toward international broadcast standards and transmission quality. But there are also countervailing pressures against freedom of information, with some nations (and religious groups) desiring the suppression of international telecommunications relating to the advocacy of war or revolution, criticism of governmental officials or policies, regulation of commercial messages, and materials depicting real or fictional violence or pornography. — Anthony D’Amato, “Domains of International Law,” International Law Anthology
It reads like a mid-century newspaper clipping but that passage was published in 1994. Bill Clinton was president.
Twenty years later, Laura Poitras's Oscar-winning documentary CITIZENFOUR is more than an exceptional historical record. The film is also a primer for technocratic culture and ideology. In June 2013, after months of anonymous communications, National Security Agency contractor Edward Snowden sat down face-to-face with Poitras and The Guardian journalist Glenn Greenwald in a Hong Kong hotel room. Snowden spoke eloquently and fluently about the values at the root of his dangerous undertaking to leak classified documents detailing secret surveillance programs run by the United States government.
From CITIZENFOUR:
Glenn Greenwald: So, why did you decide to do what you’ve done?
Edward Snowden: For me, it all comes down to state power against the people’s ability to meaningfully oppose that power. I’m sitting there every day getting paid to design methods to amplify that state power. And I’m realizing that if the policy switches that are the only thing that restrain these states were changed you couldn’t meaningfully oppose these. You would have to be the most incredibly sophisticated technical actor in existence. I’m not sure there’s anybody, no matter how gifted you are, who could oppose all of the offices and all of the bright people, even all of the mediocre people out there with all of the tools and all of their capabilities. And as I saw the promise of the Obama Administration be betrayed and walked away from and, in fact, actually advance the things that had been promised to be curtailed and reined in and dialed back, actually got worse. Particularly drone strikes…That really hardened me to action.
GG: If your self interest is to live in a world in which there is maximum privacy, doing something that could put you in prison in which your privacy is completely destroyed as sort of the antithesis of that, how did you reach the point where that was a worthwhile calculation for you?
ES: I remember what the internet was like before it was being watched and there has never been anything in the history of man that’s like it. You could have children from one part of the world having an equal discussion where they were granted the same respect for their ideas in conversation with experts in the field from another part of the world on any topic anywhere any time all the time, and it was free and unrestrained and we’ve seen the chilling of that, the cooling of that, the changing of that model toward something in which people self-police their own views and they literally make jokes about ending up on “the list” if they donate to a political cause or if they say something in a discussion. It’s become an expectation that we’re being watched. Many people I’ve talked to have mentioned that they’re careful about what they type into search engines because they know it’s being recorded and that limits the boundaries of their intellectual exploration. I’m more willing to risk imprisonment, or any other negative outcome personally than I am willing to risk the curtailment of my intellectual freedom, and that of those around me whom I care for equally as I do for myself. Again, that’s not to say that I’m self-sacrificing because I feel good in my human experience to know that I can contribute to the good of others.
[transcription from video]
It's striking that Snowden never said "privacy" in his mission statement. Greenwald framed the debate with the question many of us would ask after hearing that we're being surveilled, and subsequent news reports by outlets across the globe widely referred to "privacy." It's unclear whether Greenwald and Poitras heard more of Snowden's thoughts in which he raised the issue of privacy himself, but on camera he doesn't say the word. He advocated an unmonitored internet from the vantage point of someone who is highly skilled at protecting his own privacy. He recollected the realization, at his NSA desk, that before too long he — a member of the tech elite — would be technologically outpaced and unable to protect his privacy. The technocracy was losing ground to the government.
Society owes Edward Snowden an enormous debt for his decision to blow the whistle on the NSA at great personal risk. To be clear: he enabled a profoundly necessary conversation to begin. However, his poetic description of the unrestrained nature of intellectual advancement is technocratic rhetoric for a digital utopia that never existed. As compelling and passionate as he is, Snowden made several incorrect assertions that should be dispelled in the interest of productive discussion.
First, there have been many inventions in the history of man like the internet, including the space shuttle, the airplane, the telephone, and the galleon, all of which brought people together across vast distances at previously unmatched speeds to exchange ideas and knowledge. Mankind went through periods of adjustment to those profound changes in infrastructure and we will navigate this one as well. Innovation is not unprecedented. This invention will mature beyond its makers and it must assimilate to the needs of civilization, not the other way around.
Second, the children can still spend their days online talking to experts as equals if they want to (though it’s doubtful they do). Invoking chilled children and cooled innocence is misleading rhetoric when it’s primarily adults who spend their time staring at a screen. Further, the tech industry pushes expensive gadgets and software for kids but, as highlighted by the New York Times’ “Steve Jobs Was a Low-Tech Parent,” many technocrats strictly limit gadget exposure for their own families because they’re aware of the harmful effects of internet and technology use on young minds. Teenagers are a more complicated case with regard to internet freedom, as ISIL’s recruiting techniques make especially clear, but Snowden wasn’t referring to Muslim children discussing ideas with expert terrorists across the globe. He wasn’t lamenting privacy incursions on thugs. In fact, he didn’t acknowledge the grey areas of internet freedom at all.
The most important falsehood in Snowden’s statement, and the core message of the technocratic ideology, is that the internet was once and should always be free. This is a seductive idea, especially to people with good computing skills and entrepreneurial leanings, but it is patently untrue. Getting online requires expensive hardware and infrastructure that is designed and sold by the same community that dominates the internet through technical expertise.
For the last 20 years the technology industry has hard-sold hardware to citizens, corporations and governments alike, along with software that seamlessly replaced infrastructure for everything from financial transactions and brick-and-mortar stores to research and even face-to-face meetings. The technocracy orchestrated one of the greatest heists in history by amassing “free” content from writers and established media publications trying to maintain their brands with a millennial generation that wasn’t taught to pay people for their time, research, and intellectual work. As a final insult to “freedom,” tech companies undertook the systematic repackaging of users’ private information as data useful for advertising, which they bundle and sell to whomever they choose at a profit. (The word “user” rather than “customer” has always implied a barter arrangement, but it is rarely spelled out exactly what is being given and gotten. You open a social media account once, perhaps only use it for an hour or a day, but the service provider owns your personal information forever and can sell it many times over.)
In 2015, Apple, Microsoft, Google, IBM and Samsung sit in the top ten of Forbes’ World’s Most Valuable Brands, and 11 more technology companies are in the top 100. Six of the world’s 20 richest billionaires are computer technology elite. All of that free internet has paid for mansions and private educations. There’s nothing wrong with companies and people making money off of this invention — America is a proudly capitalist society — but perpetuating myths about intellectual freedom while raging against government misuse of personal data is hypocritical and misleading.
If it appears I’ve misinterpreted Snowden’s meaning entirely, breathe easy. It’s clear that Snowden’s “free internet” refers to freedom of thought, communication and information, not freedom of goods and services. However, the cyber conversation can’t divorce those billions of dollars from the billions of devices and trillions of gigabytes of data. Doing so hides the massively lucrative business objectives behind fun, sometimes addictive, products. If technocrats truly want a free, unrestrained internet, they’re now rich enough to forgo that pile of money, make cheap hardware, set chaos-legitimizing rules (First Rule of Internet: There are no rules) and enforce the entropy. I doubt they’d have billions of takers, and no one would be typing their credit card number into a chaos box.
*
Screenshot from the Department of Justice website
The Government
Spying, surveillance and covert activity have always been part of America’s security and defense apparatus; that activity just wasn’t legal. Illegality was at the heart of clandestine work, making it extremely risky and therefore far more carefully considered by those commissioning it and those undertaking it. The legalization of amoral behavior came about in the weeks after 9/11 because, ostensibly, the president and his cabinet wanted the freedom to openly plan illegal activity without fear of legal repercussions. The PATRIOT Act inoculated government officials against legal risk and, many would say, ethical pause. What followed was a confident, risk-free expansion of intelligence infrastructure with no meaningful oversight and no endgame.
A nation that was once gripped by the unraveling of Richard Nixon now shrugs off revelations of CIA agents breaking into Senate Intelligence Committee computers in 2014. Government workers have spied on elected officials before, but today the public digests these incidents with a vague assumption that all criminal behavior by the government has a footnoted legal justification somewhere. These stories translate as infighting among elites. Fourteen years of the PATRIOT Act have conditioned Americans to expel what little outrage they can muster in a matter of days and then go limp. The groups taking legal action against injustices are typically news or special interest organizations with a financial or moral dog in the fight and powerful legal teams to back them. (The latest New York Times op-ed from Wikipedia’s Jimmy Wales and the AP’s lawsuit over Hillary Clinton’s emails are two cases from 2015 alone.) Even with funded legal representation, there’s a pervasive sense that their effort is futile. For all of the flagrant rights abuses, the government’s tracks are papered over by the PATRIOT Act.
One way to step off the merry-go-round is to take a page from Alan Turing’s estimable problem-solving approach and look at what isn’t happening in our everyday lives. Government elites have made several huge assumptions on our behalf and, in light of Edward Snowden’s unspooling NSA leaks, it’s worth revisiting their decisions after seeing the results. The government uses negative hypotheses to great effect (if we don’t renew the PATRIOT Act…) and so can the people whose rights hang in the balance.
What isn’t being done with NSA-collected data?
Potentially, the important stuff. Through indiscriminate data collection, the NSA is extensively aware of wrongdoing by the American people, corporations, government agencies and officials. We don’t need Edward Snowden’s evidence to know this is true. Daily news stories show that digital communications include sexually harassing emails in the workplace, threats of murder or violence, faxed paper trails of embezzlement, proof of premeditated theft, telephonic recordings of gender and race discrimination, and documented personal indiscretions by public officials. The American government inadvertently nets evidence of myriad criminal acts, both domestic and foreign. It then employs people to sift through these stores looking for some lawbreakers, but not others. When intelligence officers stumble upon criminal or threatening activity that doesn’t serve their objectives, do they look the other way to conceal their methods? It’s conceivable, even probable, that actual lives have been lost to inaction rooted in concealment. What happens in situations like these? What do the numbers look like on paper — lives lost or ruined versus casualties from terrorist attacks? The legal ramifications are mind-boggling but the ethical question is straightforward: Is a government obligated to protect its people or its objectives?
What else isn’t being done with NSA surveillance data? For all of their time spent sweating over Apple’s Xcode, the U.S. government didn’t stop the Tsarnaev brothers, the French government didn’t stop the Charlie Hebdo murderers, and the U.K. government isn’t stopping thousands of teenagers from leaving the country, unaccompanied, to join ISIL. Most disturbing was the story of three teenage girls who left the U.K. in February and may have been aided by a western spy in transit, forcing us to ask why governments aren’t helping their most vulnerable citizens return to safety (and whether they may be using them as unsuspecting spy assets instead). With the Snowden data we have proof that our individual rights, and lives, are considered a worthy sacrifice to what the government deems “the greater good.” When spy agencies might be risking the lives of teenagers in the name of future terrorist attack victims, it’s clear government objectives no longer align with the values of the citizens they work for.
What if we don’t have the internet?
When Lindsey Graham weighed in on Hillary Clinton’s email debacle on Meet the Press with an I’ve-never-sent-an-email statement, he pumped a figurative fist of defiance. He’s a loud, proud Luddite in the new millennium. However, ask him where he does his banking, whether he gets money from the ATM, uses a cellphone, watches cable television, or has ever read the news online and he’ll be forced to admit he’s got a digital footprint. His televised statement gives him credibility with the anti-technology demographic, the people who are done with all the smart talk and just want to love America with all of their hearts [see: Fascism, precursor to]. The only people alive today who aren’t consciously reliant on cyber technology are toddlers. The rest of the modern world communicates regularly online and is increasingly aware that public officials lack cyber expertise.
But what if we did live in Lindsey Graham’s la-la-land and didn’t have a digital footprint? A world without the internet is inconceivable today, but that world existed only two decades ago. In that time we traded more than just privacy for digital infrastructure. What we save in time and gain in information should be weighed against what we spend in dollars to participate in the digitized world.
A sliver of the data shows that in 2014, 177 million smartphones were sold in North America, amounting to $71 billion in sales (roughly $400 per device). Globally, 1.3 billion smartphones were sold. Add to that PC, tablet and cellphone sales, software sales, internet and cellphone service contracts…Americans pay a lot of money to go about their daily lives. This is not to suggest we should shun progress and innovation, but we should know what we’re getting for our money. We aren’t getting shiny new laws for the digital infrastructure we depend on. Our brightest technological minds unwittingly innovated a cyber-police state and elected officials aren’t knowledgeable enough, or confident enough, to walk back what technology wrought. For a country that leads the world in cyber technology, many of our legislators are tech-dumb to the point of ridiculousness. The fatal mistake would be to insist we can separate ourselves from the infrastructure of modern society by never sending an email. Politicians like Graham sell that idea because it sounds freeing [see: Paternalism, Fascism’s sweet-faced uncle] but they’re diverting attention from the pressing issue of lawmaking because they clearly have no idea where to begin. The gridlock in Congress might not be gridlock at all. Perhaps our representatives are simply confused about how to hit “Send.”
Finally, who doesn’t control personal data?
If the answer to this question isn’t obvious yet then it’s worth stepping into the nearest bathroom and checking out the wall above the sink. (Or ask Hillary Clinton. She gets it.) In military jargon, intelligence refers to strategically useful information. Information becomes intelligence when it has an application, and that application is determined by whoever finds, reads, assesses and controls the information. To grasp how important this seemingly obvious statement is, consider the juxtaposition of Director of National Intelligence James Clapper and former NSA contractor Edward Snowden, two men inside the same intelligence community, in control of the same information, who found starkly different uses for it.
From this we must conclude that, within the government, a select group of officials and contractors control our information and they each have specific objectives in mind. Then we must acknowledge that almost none of us can articulate what those individuals’ objectives are so we don’t know if we agree with them. As internet-reliant citizens, we play the odds every time we connect digitally, not knowing which side of the numbers game we’re on. To use the analogy of WWII Britain, are we the majority at home or the unsuspecting brothers on targeted convoys? None of us can answer this question because the government elite draws up the map in secret. To the extent that events unfold in a manner we agree with and our lives aren’t negatively affected, we can only say we got lucky.
Loading screenshot of Google’s Virtual Library project
HOW WE CIVILIZE TECHNOLOGY
Living in Asia in the late 90s, I spent time in countries that were then considered “developing” economies. Textbooks were filled with prognostications about the potential growth and downfall of these places, but no bar chart captured the terrifying hilarity of driving an hour outside of Seoul at high speed in a brand-new sedan on unpaved roads, with only potholes and feral animals to navigate by. Technology was tangibly out of sync with infrastructure. A blocked road sent drivers veering onto the front steps of houses. Parking was wherever you felt like it, and parked cars were often rendered inaccessible due to other people’s feelings about parking. Disagreements were resolved the old-fashioned way, with pointing, yelling, and the threat of fists. Over time, enough pedestrians became casualties and enough expensive tires were blown in potholes that laws became necessary, as did the paving of roads. The automobile is no less amazing because society set a speed limit. We mitigate and rein in technology where it threatens and outpaces us. This is how we civilize our innovations.
The most poignant irony of the Information Age is the internet’s role in restructuring our relationship to politics. Snowden avowed his intent to end the tyranny of the snooping government, but technocratic paternalism is equally invasive and it’s built into the digital realm. Complicated legal documents pop up at the outset of a business relationship and people with no legal background are conditioned to move ahead with a trust-us, one-click “Agree.” Our relationship to intelligent technology is best portrayed by the routine updates we tacitly agree to without reading or understanding what they entail. I Agree to whatever you’re about to load onto my phone or into my computer, agree to what you think is best for this device and my use of it, agree without stipulation, agree without working knowledge, agree because not agreeing seems time-wasting and foolish and questioning is beyond my technical ability. I always agree with you because everyone else is agreeing with you so it must be okay. I always agree with you because I don’t know why I should disagree.
This habitual agreement has proved deadly to the exchange of real information. The technocracy devised the fastest, most appealing method for securing a user, and internet users subsequently became desensitized to the act of giving away their rights. The repetitive process has numbed healthy suspicion of any organization that demands legal agreement to a loss of personal agency. Those internet service agreements are not there to protect individuals; they are documents created by expensive legal teams to ensure a company has no responsibility to the consumer. If these statements aren’t disturbing enough, stretch them to apply to the government in the shocking months and years after 9/11. The PATRIOT Act was the federal government’s service agreement, and the majority of the American people agreed to it without understanding what they were signing away.
Fourteen years on, perhaps the greatest misstep in rectifying our mistake is to begin with privacy. Loss of privacy is an end result. Privacy can be protected, it can be violated, but it cannot be given. That notion is a falsehood born of Victorian manners — I’ll give you some privacy — which preempt uncomfortable directives: Leave the room. Get off the line. Turn your head. Don’t read my emails. I need my privacy. The sci-fi notion of “mindreading” is terrifying precisely because it violates the only space on earth that belongs entirely to us. When we communicate with people, through talking, writing, or touch, we consciously extend that private space to include others. A violation of private space is a form of mindreading. In building society around the digital world, we’ve ceded a massive amount of private space to move in safely. The only recourse to learning your boyfriend has read your journal is to hide it in a new place, but the only recourse to discovering people can hack your emails is to stop writing anything sensitive or private at all. By necessity, we’ve retreated inward. Our truly private worlds are almost entirely interior now, and that loss of intimacy has already alienated us from one another. Unable to safely extend a hand or share a thought, we know people only through avatars and public text. We can’t know people’s deeper feelings and they can’t know ours. There’s nowhere safe to talk.
When Glenn Greenwald asked Edward Snowden why he would risk imprisonment — the obliteration of privacy — Greenwald identified the one circumstance where personal agency is taken away. That the cyber debate revolves around the give and take of privacy tells us that we’re already in a prison of sorts. To get out, we need to reestablish laws and agreement. Not the tacit agreement of accepting free stuff in exchange for unknown costs, but overt agreement and an expectation of legal recourse if our rights are violated. As Stephen Krasner observed: “The Constitution is a document more concerned with limiting than enhancing the power of the state.” With the PATRIOT Act, modern lawmakers have all but legislated this precept out of existence. There’s no reason to believe the present government will give up the PATRIOT Act of its own volition, and none to believe the public has the will to make it. This is where most people drop out of the resistance movement and succumb to prison life.
The other misstep in solving the puzzle is our obsession with predicting the future. Pew Research Center’s Net Threats survey asked more than 1,400 technology experts to predict “the most serious threats to the most effective accessing and sharing of content on the Internet.” But with so much emphasis on forecasting, we’re overlooking today’s storm. If you’d asked a South Korean mother living 20 miles from the DMZ in 1997 what the most serious threat to her children’s lives was, most Americans would have expected her answer to be a doomsday scenario of war with the North. However, it’s just as likely she would have said: “See that black sedan driving 50 mph over my front doormat…?” News-grabbing headlines often obscure imminent dangers. Public discussion leapfrogs over what we could solve today because no one wants to dig in and do the unglamorous work of painting a dotted line down the center of the road. (Why isn’t Pew asking these 1,400 experts to identify today’s most solvable problem and offer a specific solution? That’s 1,400 solutions right there.)
If technology is responsible for creating a state of alienation, then the government is guilty of capitalizing on that alienation. When politicians appeal to people’s confusion over new technology, they perpetuate a dangerous myth: that people can separate themselves from the digital age. Lindsey Graham’s opinion on cyber surveillance is useless if he doesn’t understand how Americans use email or why they might be upset that those emails are intercepted and read by government officials. Perhaps he’d like to turn his diary over to the CIA and see how that feels. Then his vote on privacy legislation would certainly be cast with the necessary wisdom.
America is a world leader in computer technology and innovation. Every member of Congress, and certainly the next president, should be knowledgeable about computer technology. America’s elite governing body must be prepared to debate cyber. My 90-year-old grandmother has been sending emails for years and she has a Facebook account. If senators can’t keep up with her rudimentary computing skills then they don’t belong anywhere near the Capitol. The most important action Americans can take is to vote for cybersmart House and Senate representatives in upcoming elections.
As backwards as Washington seems, cybersmart politicians do exist. It’s clear from Hillary Clinton’s decision to house computer servers in her home during her tenure at State that she’s knowledgeable about cyber. Despite her public statement, Clinton’s use of personal servers has nothing to do with convenience and everything to do with security. Clinton owns her data. She also possesses depth of knowledge about what goes on in the intelligence community, and I expect that is precisely what drove her to take control of her privacy. If she wants to do the country a great service, in or out of the White House, she should make cyber legislation her top priority and level the playing field for citizens everywhere. It would unite the country to speak plainly about the state of our internet. Honest talk about cyber surveillance from a public figure who can speak to both sides of the debate would be a huge step forward for the country.
What will hopefully become apparent, to decision makers and citizens alike, is that both sides of the ideological struggle derive their power from the online participation of citizens. The present situation has left people with nowhere to turn for trustworthy leadership. The conditions that permitted fascism’s spread — post-war malaise, financial struggles, political distrust — tamp down people’s natural resistance to incremental loss of agency. The circumstances that facilitated the rapid creation of totalitarian governments in previously liberal, rational societies are cropping up again a century later. The situation is again ripe for Machtergreifung.
Democratic European societies once made a desperate attempt to escape their status quo by funding unstable third parties with disastrous consequences. We are now seeing many radical ideas thrown into the mix, some backed by logical process, others attempting to shake people out of rhetoric fatigue. Reboot the Government! Reboot the Bible! Reboot the Brain! Drop one letter from those slogans and we’re deep in A.I. territory. Bill Gates, Elon Musk, Stephen Hawking and their ilk proclaim their fear of the dark side of artificial intelligence with increasing regularity. We should be afraid too. There’s no precedent for the power vacuum created by a flaccid Congress and a disproportionately wealthy technology sector. This situation could pave the way for the first artificially intelligent leader. The engineering is getting there, and the rest would be…history.
CONCLUSION
At the end of The Imitation Game, when the Germans have been defeated and the war is won, the British codebreakers sit around a table to be dismissed. They are solemn and alienated from one another because of secrecy, spying, suspicion, and lying, though they each believe their transgressions were the morally responsible thing to do. They’re ordered by their government to keep yet another secret — to deny everything they know and deny they know each other. The path they’re on has no exit and no truth. They’re in a prison of past decisions and will be for the rest of their lives. However, the circumstances that created their prison are the opposite of America’s situation today. In WWII the British government was desperate. The enemy was winning. Their strategy wasn’t clandestine by design but by circumstance, and the British public was spared the burden of deciding who to sacrifice.
Today we’re faced with governments and corporations that spy, lie, classify decision-making, and manipulate online users. These conditions are self-perpetuating. There is no definitive endgame in the shapeshifting political narratives and money-making schemes except to exert more power over the online space. To reclaim the space for public privacy, we must take the messages we’re being sent and decrypt the puzzle ourselves. Whether your bias is to fault the system or the individuals who make decisions within it, both are responsible for mistakes, and both hold the keys to solving the puzzle. The trick is to look at what isn’t there, and to ask why something is free.
Indisputably, Spike Jonze’s “Her” is a relationship movie. However, I’m in the minority when I contend the primary relationship in this story is between conscious and unconscious. I’ve found no mention in reviews of the mechanics or fundamental purpose of “intuitive” software. Intuitive is a word closely associated with good mothering, that early panacea that everyone finds fault with at some point in their lives. By comparison, the notion of being an intuitive partner or spouse is a bit sickening, calling up images of servitude and days spent wholly engaged in perfecting other-centric attunement.
To that end, it’s interesting that moviegoers and reviewers alike have focused entirely on the perceived romance between man and she-OS, with software as a stand-in for a flesh-and-blood girlfriend, while ignoring the man-himself relationship that plays out onscreen. Perhaps this shouldn’t come as a surprise, given how externally oriented our lives have become. For all of the disdainful cultural references to navel-gazing and narcissism, there is comparatively little serious conversation about the importance of self-knowledge and the art of self-reflection. Spike Jonze lays out one solution beautifully with “Her” but we’re clearly not ready to see it.
“Her” is the story of a man who unknowingly begins a relationship with himself.
From the moment Samantha asks if she can look at Theodore’s hard drive, the software is logging his reactions to the most private of questions and learning the cartography of his emotional boundaries. The film removes the privacy issue du jour from the table by cleverly never mentioning it, although it’s unlikely Jonze would have gotten away with this choice if the film were released even a year from now. Today, there’s relief to be found from our NSA-swamped psyches by smugly watching a future world that emerges from the morass intact. Theodore doesn’t feel a need to censor himself with Samantha for fear of Big Brother, but he’s still guarded on issues of great emotional significance that he struggles to articulate, or doesn’t articulate at all. Therein lie the most salient aspects of his being. The software learns as much about Theodore from what he doesn’t say as from what he does.
Samantha learns faster and better than a human, and therefore even less is hidden from her than from a real person. The software adapts and evolves into an externalized version of Theodore, a photo negative that forms a whole. He immediately, effortlessly reconnects to his life. He’s invigorated by the perky, energetic side of himself that was beaten down during the demise of his marriage. He wants to go on Sunday adventures and, optimistic self in tow, heads out to the beach with a smile on his face. He’s happy spending time with himself, not by himself. He doesn’t feel alone.
Samantha is Theodore’s reflection, a true mirror. She’s not the glossy, curated projection people splay across social media. Instead, she’s the initially glamorous, low-lit restaurant that reveals itself more and more as the lights come up. To Theodore, she’s simple, then complicated. As he exposes more intimate details about himself, she articulates more “wants” (a word she uses repeatedly). She becomes needy in ways that Theodore is loath to address because he has no idea what to do about them. They are, in fact, his own needs. The software gives a voice to Theodore’s unconscious, and his inability to converse with it returns him to an earlier point of departure: the emotional island he created during the decline of his marriage.
Jonze gives the movie away twice. Theodore’s colleague blurts out the observation that Theodore is part man and part woman. It’s an oddly normal comment in the middle of a weird movie, an awkward moment defined by a new normal. This is the topsy-turvy device that Jonze is known for and excels at. Then, more subtly, Jonze introduces Theodore’s friend Amy at a point when her marriage is ending and she badly needs a friend. It’s telling that she doesn’t lean heavily on Theodore for support. Instinctively, she knows she needs to be her own friend. Like Theodore, Amy seeks out the nonjudgmental software and subsequently flourishes by standing unselfconsciously in the mirror, loved and accepted by her own reflection.
In limiting the analysis of “Her” to the question of a future where we’re intimate with machines, we miss the opportunity to look at the dynamic that institutionalized love has created. Among other things, contemporary love relationships come with an expectation of emotional support. Perhaps it’s being forced to see our limitations reflected in another person that turns relationships sour. Or maybe we’ve reached a point in our cultural evolution where we’ve accepted that other people should stand in for our specific ideal of “a good mother” until they can’t or won’t, and then we move on to the next person, or don’t. Or maybe we’re near the point of catharsis, as evidenced by the widespread viewership of this film, unconsciously exploring the idea that we should face ourselves before asking someone else to do the same.
When we end important relationships, or go through rough patches within them, intimacy evaporates and we’re left alone with ourselves. It’s often at those times that we encounter parts of ourselves we don’t understand or have ignored in deference to the needs and wants of that “significant other.” It’s frightening to realize you don’t know yourself entirely, but more so if you don’t possess the skills or confidence to reconnect. Avoidance is an understandable response, but it sends people down Theodore’s path of isolation and, inevitably, depression. It’s a life, it’s livable, but it’s not happy, loving, or full. “Her” suggests the alternative is to accept that there’s more to learn about yourself, always, and that intimacy with another person is both possible and sustainable once you have a comfortable relationship with yourself. However we get to know ourselves, through self-reflection, through others, or even through software, the effort that goes into that relationship earns us the confidence, finally, to be ourselves with another person.