Saturday, February 20, 2021

Suggested reading: Technocracy unbound

Image source: The Guardian

1. Graded on where you live

You may not have followed the story about A-level results in Britain last summer; after all, for non-Brits another country's college entrance exams may seem pretty remote from their own daily crises. But it's worth taking a closer look because it starkly illustrates how, under the guise of "objectivity," algorithms can perpetuate injustice.

The General Certificate of Education Advanced Level exam in the UK is roughly equivalent to the SAT or ACT in the US: its results determine what university or other post-secondary education a student qualifies for, if any. For students, obviously, the stakes are huge.

As has been well documented, "standardized" tests are anything but: scores on such tests in the US correlate more strongly with factors such as race, family income and parental education level than they do with success in college and beyond (non-normalized high school grades correlate more strongly with college success than SAT scores). Increasingly colleges and universities in the US are de-emphasizing or even (as in the case of the University of California system) eliminating the SAT or ACT as requirements for admission, since their role in perpetuating systemic educational inequities is so clear. [1]

In the UK last spring the A-levels had to be cancelled due to the COVID-19 pandemic and the closing of schools. Universities were faced with making admission decisions based only on the A-level grades as assessed by teachers. Britain's Department for Education directed that, to prevent grade inflation, the assessed grades should be adjusted by the Office of Qualifications and Examinations Regulation (Ofqual). Ofqual developed an algorithm to adjust the assessed A-level grades based on the A-level grades earned by students at the same schools over the previous three years, how those grades related to national averages, and whatever adjustments were needed to fit scores to a pre-determined national distribution. [2]

The results were disastrous, and disastrously unfair: nearly 40% of teachers' assessments of their students were downgraded. High-performing students from disadvantaged schools were more likely to see their assessed grade reduced, while students attending schools with historically above-average results were more likely to see their grades raised. As Kenan Malik writes,

Algorithms are, as the writer and broadcaster Timandra Harkness puts it, "prejudice engines". The data with which they are fed is inevitably tainted by the prejudices and biases of the human world. Unchecked, that feeds into the results they produce. And where algorithms make predictions, those prejudices and biases are projected into the future.

The reason the exam algorithms penalised pupils from disadvantaged schools is that this is the algorithm built into real life. The education system has long served to thwart the ambitions of working-class pupils and to ease the path of the more privileged ones. The results debacle is but a sharper expression of what usually happens year after year. [3]

In one school, over the previous three years 8 out of 64 students (12.5%) earned the highest grade (A*), and none received a U ("unclassified," or fail). If those patterns had held for the 27 students from that school taking A-levels in 2020, you would expect 3 students to earn an A* and none to receive a U. The algorithm, however, predicted that only 5.7% (or 1.5 students) from that school would earn an A*, and that 2.3% (or 0.62 students) would receive a U. Obviously you can't assign a grade to a fraction of a student, so the number of algorithmically assigned A* grades was rounded down to 1, and the number of failing grades was rounded up to 1. This process was applied at every school: if the algorithm predicted a non-zero probability of a U, the student with the lowest teacher-assessed grade had to be assigned a U, even if their assessed grade was much higher.
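To make the mechanics concrete, here is a minimal sketch in Python of the kind of allocation just described. It is not Ofqual's actual algorithm: the grade labels, the floor-then-force-a-U rounding rule, and the intermediate grade fractions are illustrative assumptions; only the 12.5%/5.7%/2.3% figures and the cohort size of 27 come from the reporting.

```python
import math

def allocate_grades(historical_dist, cohort_ranked):
    """Assign grades to a school's cohort from a predicted distribution.

    historical_dist: {grade: fraction of students}, best grade first.
    cohort_ranked: this year's students, ordered best-first by teacher
    assessment. Note that the teacher-assessed grades themselves never
    enter the calculation -- only each student's rank within the school.
    """
    n = len(cohort_ranked)
    # Convert fractional expectations (e.g. 1.5 students) into whole counts.
    counts = {g: math.floor(frac * n) for g, frac in historical_dist.items()}
    # Any non-zero predicted chance of a fail means someone must fail,
    # however strong their teacher-assessed grade.
    if historical_dist.get("U", 0) > 0:
        counts["U"] = max(counts.get("U", 0), 1)
    # Give any unallocated students the middle grade (crude reconciliation;
    # the real process fit scores to a national distribution).
    grades = [g for g, c in counts.items() for _ in range(c)]
    mid = list(counts)[len(counts) // 2]
    grades += [mid] * (n - len(grades))
    grades.sort(key=list(counts).index)  # best grades first
    return dict(zip(cohort_ranked, grades))

# The school described above: the adjusted prediction was 5.7% A* and
# 2.3% U for 27 students. The middle fractions are invented placeholders.
predicted = {"A*": 0.057, "A": 0.30, "B": 0.35, "C": 0.24, "U": 0.023}
students = [f"student_{i:02d}" for i in range(27)]  # ranked best-first
result = allocate_grades(predicted, students)
print(result["student_00"], result["student_26"])  # -> A* U
```

Under these assumptions the top-ranked student gets the single A*, and the lowest-ranked student is assigned the forced U regardless of what their teacher predicted.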

As you might guess, the downgraded results inspired outrage. Following days of vociferous protests by students, parents, teachers, and school administrators, the education secretary Gavin Williamson made a complete U-turn: the algorithm's assignments would be discarded and the teachers' assessments would be the A-level grades of record. But by then it was too late: universities had awarded places in their incoming classes based on the algorithm. Although the change back to teacher assessments meant more students qualified for university entrance, there were no places left for them.

Jessica Johnson. Image source: The Guardian

"I've fallen into my story": One of the affected students was Jessica Johnson, who had won an Orwell Foundation Youth Prize in 2019 for a story about school results being assigned by a biased algorithm based on social class. Then in 2020 her own teacher-assessed English A-level grade of A was reduced to a B by Ofqual's algorithm. She had been admitted to her first-choice school, the University of St Andrews, but the offer was withdrawn when her grade was lowered. "I've fallen into my story. It's crazy," Johnson said.

Her real-life story, at least, ended well: after the A-level grades were reverted to teacher assessments, her withdrawn place in the incoming class at the University of St Andrews was reinstated. Not everyone was so lucky, though. An unfair algorithm had changed the lives of thousands of students.

"We got told you can go wherever you want in life if you work hard enough," Johnson said before the U-turn, "but we've seen this year that no matter how hard you worked, you got given a grade based on where you live." [5]

Eyal Weizman, founding director of Forensic Architecture. Image source: Goldsmiths

2. Human rights as a security threat

Last February Eyal Weizman was scheduled to give a talk at the opening of an exhibit at the Museum of Art and Design in Miami devoted to the work of Forensic Architecture, the London-based nonprofit organization of which he is founding director. But Weizman never made it onto the plane. Two days before his flight his visa was revoked with no reason given and without the possibility of appeal. When he went to the US Embassy to reapply, he was told that "the 'algorithm' had identified a security threat."

Forensic Architecture uses the tools of computer image and audio analysis and 3D modelling to investigate suspected human rights violations. The MOAD exhibit featured Forensic Architecture's work on a CIA drone strike in Pakistan, a Chicago police killing, the Israeli bombing of Rafah, and ICE's Homestead detention center in Florida.

Weizman, a Fellow of the prestigious British Academy, was asked by the US Embassy to provide 15 years of his travel history and the names of anyone he'd been in contact with who might have triggered the algorithm. He writes,

we are being electronically monitored for a set of connections—the network of associations, people, places, calls and transactions—that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders and staff are being targeted by the US government as security threats. 

Weizman refused to comply, but the information is likely known to government agencies anyway. As has long been the case, those who monitor governments are themselves placed under intensified governmental surveillance.
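What Weizman describes corresponds to what analysts call contact chaining: flag everyone within a few hops of a watchlisted node in a graph of associations. Here is a purely hypothetical sketch of that idea. The names, the graph, and the hop threshold are invented; no real system's logic is shown. It makes the guilt-by-association problem he identifies easy to see.

```python
from collections import deque

def flag_within_hops(graph, watchlist, k):
    """graph: {person: set of associates}. Returns everyone reachable
    from any watchlisted person in at most k hops (breadth-first)."""
    flagged = set(watchlist)
    frontier = deque((p, 0) for p in watchlist)
    while frontier:
        person, dist = frontier.popleft()
        if dist == k:
            continue
        for contact in graph.get(person, ()):
            if contact not in flagged:
                flagged.add(contact)
                frontier.append((contact, dist + 1))
    return flagged

# The trouble: an investigator whose job requires talking to a flagged
# source inherits the flag, and then passes it on to their colleagues.
graph = {
    "watchlisted_source": {"investigator"},
    "investigator": {"watchlisted_source", "colleague"},
}
print(flag_within_hops(graph, {"watchlisted_source"}, 2))
# -> {'watchlisted_source', 'investigator', 'colleague'}
```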

Since the MOAD exhibit opened, Forensic Architecture has continued its investigations: they have produced reports on the massive Beirut port explosion, police brutality at Black Lives Matter protests, and the exposure of personally identifiable data collected by COVID-19 contact-tracing software. As Weizman writes, "Our work seeks to demonstrate that we can invert the forensic gaze and turn it against the actors—police, military, secret service, border agencies—that aim to monopolise information." [6]

Image source: The Guardian

3. Eliminating cars, increasing traffic

Speaking of technocratic solutions gone awry: London has done much that is promoted by "green city" advocates. It has virtually eliminated private cars in the central city at peak times, offers a ubiquitous public transportation network and has built "cycle superhighways." And yet, as Gwyn Topham notes in The Guardian, by the time the pandemic hit, London traffic had reached the point of gridlock. The reasons are many:

  • Uber and Lyft: Although ride-sharing promised to take private cars off the road, in fact it increased them: the number of licensed private-hire vehicles in London went up more than 75% to nearly 90,000 between 2013 and 2017. And ride-shares seem to cannibalize public transportation and taxis rather than reducing individual car use. Not to mention that at any given moment, about a third of ride-share vehicles on the road are empty of passengers. The Union of Concerned Scientists has found that due to "deadheading" (travelling without a passenger) and the use of ride-sharing instead of lower- or non-emission alternatives such as public transit or walking, "ride-hailing trips produce 47 percent more carbon emissions than a similar trip taken in your own private car. . .[and] are 69 percent more polluting on average than the transportation options they displace." [7]
  • Delivery vans: Supposedly a more efficient way for us to receive stuff, our delivery system is in fact an overlapping network of companies, routes and drivers. A single household might receive three or four separate deliveries over the course of a day; an office building might receive dozens. Amazon Prime and other next-day shipments are a particular problem: instead of being efficiently consolidated, deliveries are made one item at a time.
  • Bicycle lanes: Dedicated bicycle lanes are safer for cyclists (though not for the pedestrians who have to dodge them), but they reduce the road space available to buses, slowing them down. And buses are far more efficient at moving people around the city than bikes. Not to mention that buses, unlike bikes, are broadly accessible to disabled riders and can transport people in all sorts of weather. However, when buses are slowed by traffic they lose ridership. One London bus route measured in 2016 had an average speed of 4 mph, which is only slightly faster than most people walk.
  • Decentralized decisionmaking: Local government is better and more responsive than remote central government, no? But some issues are best resolved at a higher level than the local. London is a patchwork of 32 boroughs plus The City, leading to contradictory policies and congestion-creating inefficiencies. The head of a London bus company states, "Regent Street [which runs for little more than a mile between Regent's Park and Pall Mall] had 36 different operators doing recycling. Each person was doing the right thing, but add them all up together. . ." [8]

This is not to say that these things are all bad. But a lack of coordination among entities with different priorities leads to cross-purposes and standstill, realized quite literally on urban streets.

Image source: Huffington Post

4. Management by algorithm: Uber and Lyft's business model

In November 2020 California voters, swayed by a relentless $200 million ad campaign paid for by Uber, Lyft, Doordash, and Instacart, overwhelmingly passed Proposition 22. Prop 22, authored by those same companies, exempted their "independent contractor" drivers from California Assembly Bill 5 (passed in 2019), which had reclassified them as employees with rights such as unemployment insurance, health insurance, minimum wage, and collective bargaining. Uber and Lyft had never complied with the provisions of AB 5.

The majority of voters might have been less susceptible to Uber and Lyft's manipulative ads had they understood more about the companies' business model and practices. As researcher Aaron Benanav has written in The Guardian,

One might assume that misclassifying drivers as independent contractors enables rideshare companies such as Uber to make exorbitant profits. The reality is far weirder. In fact, Uber and Lyft are not making any profits at all. On the contrary, the companies have been haemorrhaging cash for years, undercharging users for rides in a bid to aggressively expand their market shares worldwide. Squeezing drivers' salaries is not their main strategy for becoming profitable. Doing so merely slows the speed at which they burn through money.

The truth is that Uber and Lyft exist largely as the embodiments of Wall Street-funded bets on automation, which have failed to come to fruition. These companies are trying to survive legal challenges to their illegal hiring practices, while waiting for driverless-car technologies to improve. The advent of the autonomous car would allow Uber and Lyft to fire their drivers. Having already acquired a position of dominance within the rideshare market, these companies would then reap major monopoly profits. There is simply no world in which paying drivers a living wage would become part of Uber and Lyft's long-term business plans. [9]

This has become clear to drivers since Prop 22 went into effect. While the proposition supposedly guaranteed that drivers would earn at least 120 percent of minimum wage and receive health care stipends, it is full of loopholes that enable the companies to evade these promises. A study by Ken Jacobs and Michael Reich of the UC Berkeley Labor Center found that under Prop 22's provisions:

  • waiting or travel time between passengers is not counted as work time,
  • mileage and insurance costs while drivers are between passengers are unreimbursed, and while they are engaged with passengers are under-reimbursed,
  • the majority of drivers don't qualify for the health care stipends,
  • as independent contractors, drivers must pay both the employer and employee share of payroll taxes and do not receive paid rest breaks, paid meal breaks, paid sick leave, unemployment insurance, or workers' compensation insurance.

The authors calculate that the true guaranteed wage paid by Uber and Lyft under Prop 22, after subtracting unpaid time and unreimbursed or under-reimbursed costs, is actually $5.64 per hour. California's minimum wage is currently $13 per hour, 130% higher. [10]
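The arithmetic behind a result like this is easy to sketch. The back-of-the-envelope calculation below is an illustration, not a reproduction of Jacobs and Reich's model: the engaged-time and mileage inputs are assumptions, and the tax treatment is crudely simplified; only the 120% guarantee, the $0.30 engaged-mile reimbursement, and the $13 minimum wage come from the sources above.

```python
# Why a "120% of minimum wage" headline can shrink toward something like
# the study's $5.64/hour. Inputs marked "assumed" are placeholders.

MINIMUM_WAGE = 13.00              # California minimum wage at the time ($/hr)
guarantee = 1.20 * MINIMUM_WAGE   # Prop 22's floor, paid on "engaged" time only

engaged_fraction = 0.67           # assumed: a third of road time has no passenger
miles_per_hour = 12.0             # assumed: miles driven per hour on shift
true_cost_per_mile = 0.575        # 2020 IRS mileage rate, a proxy for real cost
reimbursed_per_mile = 0.30        # Prop 22 reimburses engaged miles only, at $0.30
payroll_tax = 0.0765              # employer share of payroll tax, shifted to drivers

gross = guarantee * engaged_fraction                 # pay accrues on engaged time
costs = miles_per_hour * true_cost_per_mile          # costs accrue on every mile
reimbursed = miles_per_hour * engaged_fraction * reimbursed_per_mile
net = (gross - costs + reimbursed) * (1 - payroll_tax)   # crude tax treatment
print(f"effective wage: ${net:.2f}/hr vs ${MINIMUM_WAGE:.2f}/hr minimum")
# -> effective wage: $5.51/hr vs $13.00/hr minimum (under these assumptions)
```

The structural point survives any reasonable choice of inputs: because the guarantee applies only to engaged time while costs accrue on every mile, the effective hourly wage sits far below the advertised floor.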

One scare tactic in the ad campaign was the claim that, if Prop 22 didn't pass, Uber and Lyft rides would have to become more expensive to pay for the employee benefits mandated under AB 5. Prop 22 passed, and Uber and Lyft have raised their passenger charges anyway. Meanwhile, drivers report that their pay has fallen. [11]


  1. Richard V. Reeves and Dimitrios Halikias, "Race gaps in SAT scores highlight inequality and hinder upward mobility," Brookings Institution, 2017: https://www.brookings.edu/research/race-gaps-in-sat-scores-highlight-inequality-and-hinder-upward-mobility/. Unfortunately, as Reeves and Halikias point out, simply making college admissions more equitable does not close racial gaps in degrees awarded. Long-term inequities in K-12 education and college preparation must also be addressed.
  2. Angelique Richardson, "Elbowed off the pavement," LRB Blog, 20 August 2020: https://www.lrb.co.uk/blog/2020/august/elbowed-off-the-pavement
  3. Kenan Malik, "The cruel exams algorithm has laid bare the unfairness at the heart of our schools," The Guardian, 23 August 2020: https://www.theguardian.com/commentisfree/2020/aug/23/the-cruel-exams-algorithm-has-laid-bare-the-unfairness-at-the-heart-of-our-schools
  4. Alex Hern, "Do the maths: Why England's A-level grading system is unfair," The Guardian, 14 August 2020: https://www.theguardian.com/education/2020/aug/14/do-the-maths-why-englands-a-level-grading-system-is-unfair 
  5. Jessica Murray, "Student who wrote story about biased algorithm has results downgraded," The Guardian, 18 August 2020: https://www.theguardian.com/education/2020/aug/18/ashton-a-level-student-predicted-results-fiasco-in-prize-winning-story-jessica-johnson-ashton
  6. Eyal Weizman, "The algorithm is watching you," LRB Blog, 19 February 2020: https://www.lrb.co.uk/blog/2020/february/the-algorithm-is-watching-you
  7. Jiayu Liang, "Ride-hailing: Convenience at what cost?" Catalyst, Spring 2020, pp. 9-11,21: https://www.ucsusa.org/sites/default/files/2020-05/catalyst-spring-2020.pdf
  8. Gwyn Topham, "How London got rid of private cars—and grew more congested than ever," The Guardian, 11 February 2020: https://www.theguardian.com/politics/2020/feb/11/how-london-got-rid-of-private-cars-and-grew-more-congested-than-ever
  9. Aaron Benanav, "Why Uber's business model is doomed," The Guardian, 24 August 2020: https://www.theguardian.com/commentisfree/2020/aug/24/gig-economy-uber-lyft-insecurity-crisis
  10. Ken Jacobs and Michael Reich, "The Uber/Lyft Ballot Initiative Guarantees only $5.64 an Hour," UC Berkeley Labor Center Blog, 31 October 2019: https://laborcenter.berkeley.edu/the-uber-lyft-ballot-initiative-guarantees-only-5-64-an-hour-2/
  11. Michael Sainato, "'I can't keep doing this': Gig workers say their pay has fallen after Prop 22," The Guardian, 18 February 2021: https://www.theguardian.com/us-news/2021/feb/18/uber-lyft-doordash-prop-22-drivers-california
