• Gunjan Syal

Risk-taking Deepfakes: TED Circles (May 2021)

Updated: Jun 15

This insights report is a recap of the TED Circles: Risk-taking Deepfakes workshop held on Saturday, May 15, 2021, 10–11:30 am ET. TED Circles are a free, online, safe, and inclusive place to connect with innovators from all over the globe. This workshop included representation from at least five countries, including Canada, the USA, Norway, Italy, and China. Our special guest in attendance was cybersecurity expert Roozbeh Taheri-Nia.


Workshop Goals

Deepfakes are all around us and are becoming increasingly good at impersonating 'real humans' in the digital world. Let's meet and discuss:

  1. What is the motivation and value behind Deepfakes?

  2. Can Deepfakes cause real harm?

  3. Can we balance the risk vs. benefits from Deepfakes?

For inspiration, we shared a TED Talk and an HBR article that represent the topic.



Workshop Insights

With the rapid advancements in deepfakes technology around the world, we wondered whether these technologies increase prosperity or pose a disastrous threat to society at large.


It was enlightening to host this discussion in the pandemic landscape with representation from at least five countries, including Canada, the USA, Norway, Italy, and China. The variety of industry backgrounds and perspectives enhanced our discussion, which ranged from blockchain's uses and relevant policies to cybersecurity and ethics.


We began the discussion by considering an MIT Technology Review article wherein the author detailed Facebook's decision to release 100,000 deepfakes to help train artificial intelligence to identify them. This development opened up the conversation about deepfakes and their impact on society.



With respect to the cybersecurity industry, Roozbeh underscored three elements involved in identity access management of humans online. Requesting the following information from users before granting access helps authenticate them as human users with a verified identity:

  1. What do you know? (passwords)

  2. What do you have? (special tokens or dynamic password generators)

  3. Who are you? (biometrics)
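The three factors above can be sketched as a minimal multi-factor check. This is a deliberately toy illustration, not a real authentication system: the user record, the token scheme, and the biometric "embedding" with its distance threshold are all illustrative assumptions.

```python
import hashlib
import hmac

# Illustrative user record: a password hash (what you know), a registered
# device secret (what you have), and a stored biometric template (who you are).
USERS = {
    "alice": {
        "pw_hash": hashlib.sha256(b"correct horse").hexdigest(),
        "token_secret": b"alice-device-secret",
        "face_template": [0.12, 0.80, 0.33],  # toy face embedding
    }
}

def check_knowledge(user, password):
    """Factor 1: what you know."""
    expected = USERS[user]["pw_hash"]
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(expected, candidate)

def check_possession(user, token_code, challenge):
    """Factor 2: what you have - a code derived from a device secret."""
    expected = hmac.new(USERS[user]["token_secret"], challenge, "sha256").hexdigest()[:6]
    return hmac.compare_digest(expected, token_code)

def check_inherence(user, live_embedding, threshold=0.05):
    """Factor 3: who you are - compare a live biometric sample to the template.
    A deepfake attack targets exactly this step by synthesizing a sample
    close enough to the stored template."""
    template = USERS[user]["face_template"]
    dist = sum((a - b) ** 2 for a, b in zip(template, live_embedding)) ** 0.5
    return dist < threshold

def authenticate(user, password, token_code, challenge, live_embedding):
    """Multi-factor authentication: require all three factors."""
    return (check_knowledge(user, password)
            and check_possession(user, token_code, challenge)
            and check_inherence(user, live_embedding))
```

In this sketch, a convincing deepfake could defeat the biometric check on its own, which is why combining all three factors matters: the attacker would still need the password and the physical device.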


Over the years, authentication methods have moved beyond strictly usernames and passwords. As Roozbeh pointed out, there are now security questions and the incorporation of biometrics like fingerprint recognition and retina scanners. Mobile device manufacturers like Apple have Face ID, and Amazon Echo is starting to implement voice recognition to conduct transactions. Multi-factor authentication features can also lay the groundwork for deepfake attacks, with the potential for other people to be authenticated with falsified information. The common threat vector is synthetic identity, whereby cyber-criminals use deepfakes to falsify their identity, mimicking the facial features and speaking patterns of others to conduct fraud.


Roozbeh: "Almost all authentication methods can fail when dealing with deepfakes."


As we discussed Danielle Citron's TED talk, we reviewed the impact of deepfakes on our lives. We discussed the Muslim journalist Rana Ayyub's fight for justice following a deepfakes attack. Gunjan shared several factors from the TED talk that explain why deepfakes can easily threaten our society and democracy. Through the aid of technology, the 'hidden perpetrator' is able to maintain anonymity and avoid traceability.


This gives way to the 'liar's dividend': the ability to escape accountability and continue unethical behaviors while escaping consequences that would otherwise be punishable. Gunjan summarized that this perfect storm is enabled by the legal system's inability to trace deepfakes back to their perpetrators, compounded by the lack of current policies for tech-based crimes. The TED talk mentions specific areas where society lacks actions to minimize the impact of tech-based crimes. This is where we arrived at a critical question:


Gunjan: "How can we, as individuals, influence and mediate these impacts?"

Vikas pointed out the importance of fact-checking as we take in the information that is being shared with us online. We also need to consider the sources of information we consume.


Vikas: "We are living in a world where we have narratives from both sides of the spectrum. It eventually becomes very important for a particular individual to go back and objectively analyze what they have been told before accepting that reality."

Vikas mentioned that the media may be contributing to the problem. The media can present narratives and information aimed at what is considered controversial, or what society may want to see. In the current climate, the remote world makes confirmation bias especially prevalent. One foundational step in preventing the adverse impact of deepfakes is asking ourselves, 'Is what I am seeing authentic?'


At the same time, given the overwhelming amount of information available to us, rapid advances in technology, and improving deepfakes quality, it may be impossible to identify all falsities when fact-checking. Stephen offered his insights on deepfakes development and their impact. In particular, Stephen reflected on the Facebook article mentioned at the start of our discussion and suggested that companies and societies should not be complacent in believing that they already have the tools to recognize deepfakes. He drew attention to the fact that quantum computing can be used to accelerate the generative adversarial networks (GANs) behind deepfakes, making development much more straightforward, with the possibility of generating them 'live'.
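The adversarial loop inside a GAN is itself a miniature arms race: a generator learns to produce fakes while a discriminator learns to catch them, each improving against the other. As a rough illustration only (a deliberately minimal 1-D sketch with a linear generator and logistic discriminator, nothing like a real deepfake model), the loop looks like this:

```python
import numpy as np

# Toy GAN on 1-D data: the generator learns to mimic "real" samples drawn
# from N(4, 0.5); the discriminator learns to tell real from fake.
rng = np.random.default_rng(0)

a, b = 1.0, 0.0   # generator: x_fake = a*z + b, with noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

z = rng.normal(0.0, 1.0, 1000)
print(np.mean(a * z + b))  # drifts from 0 toward the real mean of 4
```

The key point for our discussion: neither side ever "wins" permanently. Each improvement in the discriminator pressures the generator to produce more convincing fakes, which is exactly the dynamic Stephen and Roozbeh described between deepfakes and the platforms trying to detect them.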


Stephen: "We are in an era where the truth is eroding, and that’s going to be a real problem for society."

As the conversation continued, we realized that a thorough solution to tackle deepfakes through cybersecurity does not yet exist.


Roozbeh: "This will be an arms race between deepfakes and the platforms that use authentications like facial recognition. We will have to learn what the criminals and bad actors are coming up with, and then counter them."

Amid the rising cybersecurity breaches powered by deepfakes, we once again questioned what we can do to protect ourselves from these vectors of attack. It seems as though there are no absolute procedures in place to prevent such harm. Roozbeh mentioned that no level of secure technology is without risk. It is about the level of risk that we choose to accept. There is no guarantee, even with the reputation of a technology, that we're immune from any error or breach of security.


What then becomes important is clarity surrounding who makes decisions and influences them. This, in turn, paves the way for the conversation around public policy. Although policies and regulations are continuously improving, there are always going to be bad actors in society. Ultimately, it is challenging to strike a balance between the implementation of policies and the creation of a society with absolute openness and freedom.


This also poses a question about whether or not technology companies should influence outcomes. For example, in light of Twitter, Facebook, and YouTube's recent actions banning specific individuals who incite conflict, we find ourselves wondering whether everyone can have the freedom to say what they want.


Who will be the policy-makers, and who will take responsibility for deepfakes' real impact on society?

There is no right or wrong answer to this question. Each stakeholder will have different perspectives and beliefs. Even so, one thing we can do as a society is to hold wrongdoers accountable for their actions in the digital sphere.

In general, deepfakes have a reputation of being used for malicious activities. This raises a question:


Are there use cases where deepfakes add positive value, and what are they?

Despite concerns about how the notion of 'accessibility' is defined within the digital world, we can see deepfakes coming into positive use through Google's Lookout and Microsoft's Seeing.AI. There are many other positive use cases, such as those shared here: https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387

Amid the current pandemic, facilitating educational learning has been a sizable challenge. Deepfakes can potentially provide much-needed creative support in this area.



There are many additional deepfakes use cases in areas such as creative arts, entertainment, and even public safety: https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387

In summary, although we devoted significant time to the negative implications of deepfakes, including policy gaps and cybersecurity's ability to assess and prevent such threats, we can conclude that no technological innovation is absolutely good or evil. It is a matter of perspective, and of how we use and influence the technology at hand, while proactively educating our stakeholders and audiences to counter wrongdoers.


We hope you will join us again on June 19 at 10am ET (Juneteenth) - this time to discuss the use of our body's data for innovation and the ethical considerations involved.