Formal policy analysis concerning the regulation of facial recognition technology.
One cause that I devote much of my time and effort to is technological justice. As new technologies are developed and deployed, their effects are asymmetrical: people with access to societal power leverage technology to improve their standing, while marginalized people are often pushed even further to the edges. In particular, AI, machine learning, and other systems that make decisions in place of a human being are often used as an excuse to reproduce and enforce the boundaries that oppress people every day. New technology demands new regulation, and the design of this legislation is no simple process. As a supporter of Design Justice and community-helmed processes, I believe that decisions about the regulation of new technology must be democratic to prevent the misuse of our digital space for profitable and authoritarian ends.
One example of this is Facial Recognition Technology (FRT), which is used to identify individuals in public spaces, often for the purposes of law enforcement. This technology is often racist and transphobic in its implementation, frequently misidentifying individuals with darker skin tones and conflating them with one another. As a final project for a Public Policy class with my esteemed professor, Dr. Shobita Parthasarathy, I produced a pair of advisory documents for the US Senate.
The first document is a Backgrounder that explains a piece of legislation related to FRT, and the second is a Governance Recommendation that advises a Senate committee on how to proceed with community-informed design processes to ensure that sound decisions are made regarding the regulation of FRT. Because lawmakers' time is scarce and valuable, these documents have been edited to be concise, precise, and formal.
You can view the Backgrounder report at the following link.
You can view the Governance Recommendation report at the following link.
Facial Recognition Technology Governance Recommendation Report (PDF)