The Present Should Be Signed

Aug 9, 2023

When I wrote The Future Will Be Signed almost six years ago, the latest in AI advancements was Google Duplex. If you're like me and have never used Google Duplex, it was a feature of Google Assistant that could make calls on behalf of a person and automatically perform a task, such as booking restaurant tables. While you may have never heard of Google Duplex, there's a good chance you've used a generative AI tool like ChatGPT, Midjourney, or GitHub Copilot.

Authenticity

We’re going to need a way to prove the authenticity of a piece of digital content, everywhere, in a simple manner. This is where public key cryptography comes in. Our current solutions are noble efforts, but remain too complex.

It's quite an understatement to say that AI has come a long way since 2018, and yet the blog post's core thesis is even stronger today than when it was written. At the time I was concerned about a future where deepfakes, audio manipulation, and text generation spread across the internet. We're now living in the beginning of that future; this is our present. It has never been faster or easier to generate inorganic content, and the tools to do so are more usable and accessible than ever.

AI already has us questioning what we see on the internet, and the problem isn't going away. Fake news articles are being written by ChatGPT, fake books are being written with ChatGPT, and of course fake reviews made up by ChatGPT are being used to sell all of this.

Trust

This infrastructure is going to have to be baked directly into the software that developers build, in a way that is transparent to the end user. A politician (or anyone) needs to be able to sign a tweet, audio recording, or video clip to prove authenticity of what they are saying. With the creation and fabrication of content being so easy, we’re going to need a model where the person creating the content can prove it is trustworthy, and otherwise it should be treated as inauthentic.
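The cryptographic mechanics for this already exist and are well understood; the missing piece is product integration. As a minimal sketch of what signing and verifying a piece of content looks like, here's an example using Swift's CryptoKit (the key handling and message are illustrative, not a real product's implementation):

```swift
import Foundation
import CryptoKit

// A minimal sketch of content signing with public key cryptography.
// In practice the private key would live somewhere secure (like the
// Secure Enclave) and only the public key would be published.
let privateKey = Curve25519.Signing.PrivateKey()
let publicKey = privateKey.publicKey

// The content to authenticate: a tweet, audio recording, or video clip.
let content = Data("A statement I want to prove I made.".utf8)

// The author signs the content with their private key.
let signature = try! privateKey.signature(for: content)

// Anyone holding the author's public key can verify the signature.
// If the content was altered after signing, verification fails.
if publicKey.isValidSignature(signature, for: content) {
    print("Authentic: signed by the holder of the private key.")
} else {
    print("Inauthentic: don't trust this content.")
}
```

The hard part isn't the math, it's everything around it: distributing public keys, tying them to real identities, and making the whole flow invisible to someone who just wants to read a tweet.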

When I worked on Twitter's Societal Health team I spent a lot of time thinking about misinformation, disinformation, abuse, harassment, and civic integrity. These issues often took the form of coordinated inauthentic behavior, large groups of accounts working together to manipulate individuals and the public conversation. The scale of the problem seemed enormous then; now it's larger than ever, and only getting bigger. We still need tools to help us differentiate authentic and inauthentic behavior or content, but there haven't been many meaningful efforts to build authenticity into the products people use.

Arguably the largest advancements have come from a technology I personally have few positive feelings about: cryptocurrencies. When you believe everyone is an adversary, you need to build systems for trust. Bitcoin, Ethereum, and other crypto projects have shown that you can build a system based on public key cryptography that establishes a shared sense of truth. You may not like what that truth is, and it's easy to feel that way given how "Web3" technologies have been hilariously misused and abused in a seemingly unending number of ways. I'm not pinning my hopes on the blockchain solving our trust problem, but I appreciate that much better user experience paradigms for trustless systems have emerged over the last five years, because they were necessary for crypto to succeed.

Scale

In some ways the problems are actually worse than ever. Anyone can buy verification on X (formerly Twitter) and impersonate their favorite brand. People have grown hostile and treat platforms as adversaries because platforms no longer care about the people using their products. Platforms are even stealing usernames from active users. How can anyone trust what they read online when they don't know who's writing it?

Platforms are treating their users as adversaries as well. If you get locked out of your Google account you might as well consider your digital life gone. A company like Google doesn't and can't scale support to the level of personal help we've historically been accustomed to. Protecting user safety means support agents must assume that someone writing in for help is a scammer, fraudster, or hacker trying to break into someone else's account. The incentive structures for helping people are all backwards, because the risk of Google turning over someone's Gmail account to the wrong person far outweighs the positives of helping thousands of people. This may only affect 1 in 100,000 people, but when you're that 1 person, losing your entire digital identity is a horribly destructive experience.

People need a sense of trust, some shared truth, and we're still in search of that online. As more of our lives happen on an inherently untrustworthy internet, the status quo becomes more and more untenable; something has to give. Things will either get better or they will get worse, and based on our approach of trying nothing and being all out of ideas, they are likely to get worse. The guardrails are coming off the system, and if we wait too long then trust in our systems, online and offline, may fully erode.

It's discouraging that we can't figure out a way to solve the problems we have today, but an even bigger indictment of the status quo is that we don't even talk about this large systemic risk, and probably won't until it's too late.
