Welcome to The Martech Weekly, where every week I review some of the most interesting ideas, research, and latest news. I look at where the industry is going and what you should be paying attention to.
The ultimate knowledge tools for Martech have arrived. We have ONE day left for our Black Friday / Cyber Monday offer, giving you the opportunity to access TMW PRO Advantage: The industry’s most powerful resource at our lowest price ever. CLICK HERE to get 50% off for the life of your subscription (offer ends 11:59 pm PST, Monday 27th November).
OpenAI and the decline of trust
“Trust is the confidence in one’s expectations”
- Niklas Luhmann
Luhmann was a German sociologist, philosopher, and pioneer of systems thinking whose work rose to prominence in the 1970s and 80s. The quote above is one of the clearest definitions of what creates trust between people. But can we extend that definition to machines?
Trust is something you spend a lifetime building but can be lost in a minute, goes the common saying. But I want to give another spin on that: trust is something that can be regained, and strengthened after losing it. Redemption is a powerful thing.
I think what we've seen this week, with the chaos at OpenAI that kicked off last weekend, is a complete loss of trust. Sam Altman was unceremoniously and unexpectedly removed as CEO by the OpenAI board over what they claimed were communication issues, which sparked a hellish week of Twitter speculation.
After severe backlash, it looked by the end of the weekend as though Altman would regain his position at OpenAI. But the two sides couldn't come to an agreement, he and co-founder Greg Brockman were hired by Microsoft, and roughly 60% of OpenAI's employees threatened to leave unless the board resigned (including Ilya Sutskever, the board member and co-founder who initially backed ousting Altman). Barely 24 hours later, Altman was reinstated as OpenAI CEO.
The board will now be shuffled again, ousting the initial perpetrators of a schism that has cost the company dearly.
OpenAI’s board came dangerously close to incinerating $100 billion in value in the most roughshod way possible. It was an enormous failure of corporate governance. And it's even worse given that this is the leading company in tech's hottest and fastest-growing trend: OpenAI had won the trust and endorsement of the tech community, and taken pride of place in the product strategies of the vast majority of Martech companies.
And yet the board offered no clear explanation, no offboarding for Altman, and no real vision for the future of OpenAI without Altman’s leadership. Microsoft – OpenAI’s biggest backer – found out minutes before it was announced, and the staff had no warning at all. What a hot mess.
This essay is about how Generative AI is creating more mistrust in the digital ecosystem. It was inspired by how much trust in OpenAI has evaporated because of a reckless board, an entrepreneur gunning to speed-run the next platform shift, and a business structure destined for failure.
The paradigm shift
OpenAI came seemingly out of nowhere to become one of the fastest-growing companies in the world. Sam Altman announced on the 6th of November that OpenAI had passed 100 million weekly active users, and the company is closing in on $1 billion in revenue next year. And yet, it is reportedly burning hundreds of thousands of dollars a day just to meet the demand on its servers.
In 2021, the company was worth $14 billion; by 2023 it had grown to $29 billion; and it is currently fundraising a deal that would value the business at $86 billion, roughly triple its valuation from the start of the year. Even by Silicon Valley standards, this is an unbelievable valuation curve for a company that launched its first real consumer product only 12 months ago. And there's a reason why.
Over the past year, the company has expanded its offering and optimized its products at an incredible pace. Starting with ChatGPT back in November 2022, it went on to launch GPT-4, which brought a big boost in accuracy and capability; then the plugin marketplace, which lets ChatGPT integrate with other websites and data sets; then ChatGPT Enterprise, which gives larger companies their own internal GPT tool; and then the ability for ChatGPT to interact with voice and images. The new DALL-E 3 also came out this year, with unparalleled image generation abilities. And lastly, just before Sam Altman's ouster, OpenAI used its Developer Day to announce a product that lets people create their own mini models for specific use cases.
With more than 300 new generative AI-focused startups launched this year and 13 companies reaching a billion-dollar valuation, speculation about the commercial opportunity in generative AI is at an all-time high. According to Crunchbase, 1 in every 4 dollars invested in US startups has gone to an AI-focused company this year.
Among public companies, generative AI has a special place in future-facing value projections. It’s every CEO’s favorite stock booster.
But it’s not all just talk. Around 40 to 50% of enterprise companies are rolling out or scaling up their deployment of AI products and tools this year – a big leap forward in AI adoption.
Without OpenAI, we wouldn’t have seen the explosion of AI adoption from the Martech industry. I’ve lost count of the vendors that have fully embraced this new paradigm of automation in the Martech stack. As I detailed in the long tail of generative AI, we haven’t seen anything as impactful to a Martech company’s product strategy since the iPhone.
Across the creative and efficiency value chain, and across use cases spanning marketing, sales, and customer service, huge companies are shipping Generative AI products at a rapid pace.
And marketers are getting creative with how they apply Generative AI to their workloads, doing everything from SEO optimization and video creation to design, illustration, and copywriting.
The world has been changed by OpenAI; there's no doubt about it. The team there figured out how to bring LLMs to the mainstream, and is succeeding at it. So much so that companies in other countries, such as France, Germany, and China, are building their own proprietary models to compete in what looks like an arms race to the next platform shift online.
The cat’s out of the bag and there’s no returning from here. When it comes to the future of automation in Martech, it’s OpenAI’s world now, and we’re just living in it.
You might think that all this growth is a good thing, until you look around at what I've often described as the growing sewer of low-quality Generative AI gibberish on all of our major platforms. The advent of AI tools is only accelerating the decline of online trust.
This week in our Wednesday Martech briefing newsletter, we covered how both Amazon and YouTube are working to counteract the problem of generative AI-powered fake content. Amazon has the bigger issue: a prevalence of fake reviews easily enabled by tools like ChatGPT. Amazon will use AI to weed out fake reviews by feeding it proprietary data points, including seller advertising spend, customer-submitted complaints, user behavioral patterns, and review history. It will also use an LLM to flag unnatural patterns in the language used in reviews. They are cracking down.
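Amazon hasn't published how its detection model actually works, but to make the idea concrete, here's a toy sketch of how signals like the ones reported (seller behavior, complaints, account history, a language flag from an LLM) might be combined into a simple risk score. Every feature name, weight, and threshold here is entirely made up for illustration:

```python
# Toy illustration only: Amazon's real fake-review detection is proprietary.
# This sketches how reported signal types (behavioral patterns, complaints,
# account history, LLM language flags) could feed a simple risk score.
# All weights and thresholds below are hypothetical.

from dataclasses import dataclass

@dataclass
class ReviewSignals:
    reviews_per_day: float   # posting velocity of the reviewing account
    account_age_days: int    # brand-new accounts are riskier
    complaint_count: int     # customer-submitted complaints on the seller
    language_flag: bool      # e.g. an LLM flagged unnatural phrasing

def risk_score(s: ReviewSignals) -> float:
    """Combine signals into a 0..1 risk score (hypothetical weights)."""
    score = 0.0
    if s.reviews_per_day > 5:
        score += 0.3                           # unusually high velocity
    if s.account_age_days < 30:
        score += 0.2                           # very new account
    score += min(s.complaint_count, 5) * 0.06  # capped complaint signal
    if s.language_flag:
        score += 0.3                           # unnatural language pattern
    return min(score, 1.0)

def is_suspicious(s: ReviewSignals, threshold: float = 0.5) -> bool:
    return risk_score(s) >= threshold

suspect = ReviewSignals(reviews_per_day=12, account_age_days=3,
                        complaint_count=4, language_flag=True)
normal = ReviewSignals(reviews_per_day=0.2, account_age_days=900,
                       complaint_count=0, language_flag=False)
print(is_suspicious(suspect), is_suspicious(normal))  # True False
```

The interesting design point is the combination: no single signal (a new account, a flagged turn of phrase) is damning on its own, which is why platforms blend many weak indicators rather than banning on one.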
YouTube also announced a new policy requiring creators to label AI-generated content that uses real events, people, and IP. It's mostly an honor system that will not work, but establishing rules makes it easier for the platform to ban bad actors, given how scant existing policies on AI deepfakes are. Facebook has banned Generative AI tools from political advertising to quell rising misinformation around elections, and TikTok has banned deepfakes of non-public figures to protect the safety and privacy of regular people using the video-sharing app.
Despite all the hysteria about Artificial General Intelligence (AGI) and claims that OpenAI is building a Terminator that will destroy us all, AGI is not the problem here. The problem is right in front of us. The real risk is removing the friction that once made it hard for people to do things with tangible consequences for the quality and reliability of the platforms we rely on online.
Bruce Reed, the official behind the recent US Executive Order on AI, clearly detailed what he sees as the risks of the technology, and they are all centered on how it can significantly damage our information economy:
“The meeting, Reed says, hardened his belief that generative AI is poised to shake the very foundations of American life. ‘What we’re going to have to prepare for, and guard against,’ Reed says, ‘is the potential impact of AI on our ability to tell what’s real and what’s not.’”
The irony of OpenAI’s growth is that multiple reports, including the MITRE-Harris Poll and a recent Salesforce Connected Consumer survey, agree that consumers trust AI systems, and what they produce, less and less.
Let’s illustrate this for a minute. Bernard Marr, writing for Forbes, gives us a very likely scenario of what will happen as Generative AI tools continue unabated:
“Just picture a scenario like this in the year 2025: The world's gaze is fixed on an impending international summit, a beacon of hope amidst rising tensions between two global powerhouses. As preparations reach a fever pitch, a video clip emerges, seemingly capturing one nation's leader disparaging the other. It doesn't take long for the clip to blanket every corner of the internet. Public sentiment, already on a razor's edge, erupts. Citizens demand retribution; peace talks teeter on collapse.
As the world reacts, tech moguls and reputable news agencies dive into a frenzied race against time, sifting through the video's digital DNA. Their findings are as astounding as they are terrifying: the video is the handiwork of cutting-edge generative AI. This AI had evolved to a point where it could impeccably reproduce voices, mannerisms and the most nuanced of human expressions.
The revelation comes too late. The damage, though based on an artificial fabrication, is painfully real. Trust is shattered, and the diplomatic stage is in disarray. This scenario underscores the urgent need for a robust digital verification infrastructure in an era where seeing is no longer believing.”
So if you’re staring down at your screen, how can you tell what’s real and what’s not? Today, you can’t be so sure anymore. And that is a huge problem. The reason Meta, Amazon, and YouTube, along with a swag of large online platforms, are working on integrity and trust is that mistrust hurts the bottom line. No trust in content or commerce means more friction to engage or buy, which means less commercial opportunity, which means the decline of the very platforms we’ve trusted to deliver trustworthy information, products, and services.
OpenAI and its competitors threaten to undo years of goodwill and trust directed at large online platforms, simply by making it incredibly easy for anyone, including millions of bad actors, to create and scale new information.
We used to have to worry about moderating humans on these platforms, but as AI becomes more sophisticated in evading detection, it creates an infinite game of moderation checkmates until the heat death of the universe. Sure, OpenAI is accelerating the technology, products and commercial opportunity, but they are also accelerating the sowing of mistrust in our information platforms.
On the other side of this is the perennial problem of how much we should trust using these systems. Already, marketers trust algorithms to deliver ads and content, recommend products in emails and on websites, make decisions, and more. But those are things we can more or less control as small parts of a bigger strategy.
Would you let AI create and execute a marketing strategy for you? Most marketers would say no, mostly because marketers, unlike these tools, don’t hallucinate. If they did, they’d be out of a job and shipped to the asylum post haste!
But the big leap in AI is not when the technology stops hallucinating; it’s when – going back to Luhmann – we believe it can meet our expectations. The great leap from trusting another sentient human being with growing your business to trusting an AI to do it for you is perhaps a leap too far. At least in our generation.
Losing trust is losing everything
Ironically, the leadership of the company most responsible for bringing these tools into the world damaged the trust of its own consumers and partners by ousting Sam Altman. And Altman, as CEO, had lost the trust of his board.
Which brings me to the main problem here: How can you trust a company to bring us this amazing AI future when its board and CEO make such foolish mistakes?
Just look at the corporate structure of this company. Is it a non-profit, or a for-profit? It’s actually both. The rebounding CEO, Sam Altman, is not a shareholder, and the company has been governed by a board of mostly academics. And yet the median salary per employee is over $900,000 a year. Not bad for a non-profit job, huh? Confusion abounds for a company whose trajectory has shifted from a non-profit seeking a safe path to AGI, to quickly deploying a for-profit business model that is playing a significant role in the decline of online trust.
Ben Thompson describes the dysfunction of a dual for- and non-profit business model at OpenAI:
“Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence.”
With the latest rounds of funding, Microsoft now has a near-controlling stake in the for-profit arm of OpenAI, and as part of the $10 billion deal has enmeshed the data and server infrastructure of both businesses, which further complicates OpenAI’s incentives.
Clearly, the tensions here have something to do with safety versus commercialization. All companies that grow big create some kind of negative externality, and OpenAI is a capital-intensive business that needs enormous resources to make the breakthroughs it has made. Sure, a for-profit model that seeks to be first to commercialize the technology makes commercial sense, but set against the company’s mission statement to benefit all humanity, it doesn’t make a lot of sense at all.
I think what’s underlying all of this is that the human systems that make decisions for companies (boards and leadership structures) do not match the advancements of the technology.
Think about this for a minute: OpenAI has created a system that is getting close to human-level cognition, but the incentives, board, and leadership structure are all ancient. Sam Altman’s firing reflects this in totality: the people running the company were clearly not aligned on why it exists, which created an environment of manipulation and mistrust.
Perhaps the real innovation here is figuring out new models to align people’s interests to better balance safety and commercial opportunity in the radical new challenges that OpenAI presents to the world.
After reading all of this, you might be rethinking the value exchange of bringing Generative AI to market in the first place. We get an internet we trust less so we can have AI systems we can’t fully trust for the things that matter most, but at least they can write our emails and finish our sentences.
Is this really what we need? If you talk to anyone at the end of their career, they will likely tell you that earning the trust of others is the real currency of any industry. It’s the great lubricant for financial opportunity and upward mobility.
And beyond the technology innovation, this poignant example from OpenAI serves as a lesson that we need to think less about the technology and more about the incentives that drive it. The problems compound not because of the tech, but its specific application, and that application is driven by what people want from it and who gets to call the shots.
Trust will continue to erode on digital platforms until we find a way to regain and redeem it. And right now, there’s no clarity on what will maintain the quality and veracity of what we see online. But still I’m hopeful that trust lost can be trust regained and strengthened.
But for the moment, OpenAI seems OK with eroding trust in itself while making everything else we see online less trustworthy. In Altman’s worldview, putting ChatGPT in everything is the only viable path to maximizing shareholder returns, but we stand to lose far more than we gain if that happens. We will lose trust, which is everything.
P.S. Don’t forget about our Black Friday / Cyber Monday offer!
If you want access to the crème de la crème of the Martech world, if you want to truly transform and future-proof your career, if you want to join one of the most curious and knowledgeable Martech communities worldwide and connect and network with some of the brightest minds in the industry, then CLICK HERE to get 50% off for the life of your subscription (offer ends Monday)!