TMW #203 | The (A)ick

Dec 1, 2024

200 editions. 4 years of insights. All yours.

We recently published our 200th edition of The Martech Weekly…

That’s four years of delivering the most valuable, noise-free insights in the marketing technology industry.

With a TMW Pro subscription, you’ll unlock access to all past editions and the next year’s worth of deep dives and market briefings — keeping you fully informed on how Martech is evolving.

And for Black Friday, you can get it all for 50% off.

Click here to grab the deal before it’s gone.


Welcome to The Martech Weekly, where every week I review some of the most interesting ideas, research, and latest news in marketing technology. I look to where the industry is going and what you should be paying attention to.

👋 Get TMW every Sunday

TMW is the fastest and easiest way to stay ahead of the Martech industry. Sign up to get TMW newsletters, along with a once-a-month complimentary Sunday Deep Dive. Learn more here.


It’s OK to feel uncomfortable about Gen AI

Hey everyone, Juan here back for the weekly Sunday essay. Keanu’s pretty dang good, isn’t he? I’ve been heads down preparing new products for VISION DAY but I’ve ducked back in for an essay this week for our end-of-the-month free feature. It’s great to be back writing for you all, I hope you enjoy it. xoxo 

The (A)ick 

Last week, I posted a reaction to a new campaign from AI startup Artisan, and it blew up. I was disgusted by what I saw, which was the message “STOP HIRING HUMANS” plastered everywhere and anywhere this startup was marketing. To me, it seems a little anti-human at its core, and it turns out that quite a few people agreed with me. 

I chalk this little campaign up to classic Silicon Valley growth-brained culture, where the message doesn’t matter as long as you’re racking up impressions and clicks. But the message is still an important one as we contemplate what’s to come with the arrival of Generative AI.

Right now, the anti-human algorithmic culture of scale is leaving a bad taste in people’s mouths. Not just the regular folks trying to get on with their jobs, but investors, startup founders, and you – the marketing technologist.

There’s no mistaking that Generative AI is causing a huge financial bubble and an extremely overinflated hype cycle. But as the dust settles, the questions about these technologies are becoming clearer, at least for our corner of the Martech world: Is this stuff actually useful? Can it help me make an extra million in revenue next year? Can it get any better? Will it diminish my career as a marketer?

This essay will investigate the (A)ick – that feeling of something not quite right in the Generative AI space – from three perspectives: the promise and limitations of scale, the purported and actual usefulness of generative tooling, and the species risk it introduces for us mere mortals. What’s giving us the (A)ick?

A psychological scientist explains what ‘the ick’ really is (news.com.au)

Let’s get to it. 

Scaling Ick

We all know there are issues with LLMs and Generative AI technologies. They frequently get things wrong, hallucinations run rampant, and the promise of the technology becoming agents and useful interns is always “just around the corner.”

The amount of progress since 2020 has been breathtaking, don’t get me wrong. We now have machines that can reasonably pass the Turing test and can take and easily pass most exams. OpenAI’s o1 model is showing an exam pass rate of 83% for mathematics, 89% for coding, and 78% for PhD-level science questions.

But the past does not predict the future here. What got us here was the ability to stack transformer layers on top of each other and train LLMs on as much publicly available web data as (in)humanly possible.

The path to improving the technology seems to be adding more compute and data to these pre-existing systems. Nothing could be further from the truth. Timothy B. Lee argues from the evidence that AI has hit a scaling limit with the existing models:

“When OpenAI released GPT-4 in March 2023, it helped to cement the conventional wisdom about “scaling laws.” GPT-4 was about 10 times larger than the model that powered the original ChatGPT, and its larger size yielded a significant jump in performance. It was widely assumed that OpenAI would soon release GPT-5, an even larger model that would deliver another big jump in performance.” 

Ten times larger to go from generation 3 to generation 4 – a tenfold increase in hardware, data and compute. AI researcher and author of Rebooting AI, Gary Marcus, lays out the challenges awaiting investors and the industry at large when we hit the inevitable scaling ceiling:

“LLMs will not disappear, even if improvements diminish, but the economics will likely never make sense: additional training is expensive, the more scaling, the more costly. And, as I have been warning, everyone is landing in more or less the same place, which leaves nobody with a moat. LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even NVidia might take a hit, when people realize the extent to which its valuation was based on a false premise.”

Multiple outlets, such as The Information and Reuters, are now corroborating this point of view, reporting in unison that OpenAI and other companies are hitting scaling limits with the technology. The only way to improve the models is to ask for ungodly sums of money, even by Silicon Valley standards.

The key reason for this is the sheer energy consumption and cost of continuing the scaling crusade. OpenAI is set to reach $7 billion in costs with a $5 billion loss this year; that pays for 350,000 servers containing NVIDIA A100 chips, with around 290,000 of those servers used for ChatGPT alone.

The cost to maintain LLMs at the current scale is a fascinating inversion of the market principles of the previous generation of large-scale internet products.  

You see, SaaS, social media and search leveraged the open network of the internet to achieve the kind of scale that makes companies like Salesforce, Google and Meta extremely valuable – the user, not the company, wears the cost of using the technology through the principles of a distributed network.

The telecommunications networks, undersea cables, the open web, billions of websites creating content, individuals posting stuff to social media, and data centres that push the cost of hosting content and commerce as close to $0 as you can get? All of these things conspire to put technologies into the hands of billions of people at an extremely low cost.

The classic example is Google Search. Google hosts very little outside of blue links; it’s the responsibility of those billions of websites to create content and host it themselves. Yet Google achieves a rare kind of scale where most of the energy to power it comes from users: the more people search Google and the more people create content, the more valuable the business becomes.

Generative AI is a sharp departure from this. Yes, there are some scaling benefits that reinforce the AI system by further training it through its own use. But the responsibility to scale the system sits squarely with the model owner: AI uses other people’s content and IP to improve its outcomes and pattern matching, but the storage and compute is all happening on the provider’s side. It’s a closed system where the more people use it, the more expensive the system becomes to run. 

Ed Zitron makes the point that there’s not enough money in the entire world to fuel the energy, hardware and compute costs to onboard another billion people onto what already exists in Generative AI, let alone the new models that require multiples more investment to reach the next plateau. In fact, Zitron argues, OpenAI will need to triple revenue to more than $11 billion to meet investor expectations, and even then it will lose money just servicing the increased demand:

“To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.

OpenAI loses money every single time that somebody uses their product, and while it might make money selling premium subscriptions, I severely doubt it’s turning a profit on these customers, and certainly losing money on any and all power users. As I've said before, I believe there's also a subprime AI crisis brewing because OpenAI's API services — which lets people integrate its various models into external products — is currently priced at a loss, and increasing prices will likely make this product unsustainable for many businesses currently relying on these discounted rates.” 
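
Taking Zitron’s ratio at face value, a little arithmetic shows why growth makes the hole deeper rather than shallower. Here’s a minimal sketch in Python; the linear cost-scaling assumption is mine, purely for illustration, and the revenue figures are the roughly $3.7 billion implied by “triple revenue to more than $11 billion” and the tripled target itself:

```python
# Toy unit economics based on Zitron's figure that OpenAI spends
# $2.35 to make $1. The linear cost-scaling assumption is for
# illustration only.

def annual_loss(revenue_bn: float, cost_per_dollar: float = 2.35) -> float:
    """Loss (in $bn) implied by the cost ratio, if costs scale with usage."""
    return revenue_bn * (cost_per_dollar - 1.0)

# ~Current revenue implied above, and the tripled ~$11bn target.
for revenue_bn in (3.7, 11.0):
    print(f"${revenue_bn:.1f}bn revenue -> ${annual_loss(revenue_bn):.1f}bn loss")
```

At the quoted ratio, roughly $3.7 billion of revenue implies around $5 billion of losses – consistent with the loss figure above – and tripling revenue at the same ratio roughly triples the hole.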

OpenAI’s fantastical financial projections – hoovering and hogging a significant amount of capital out of the industry with no clear path to a profitable, let alone sustainable, business model – betray the laws of scaling on the web.

This stifles the startup ecosystem, much of which is building at the intersection of marketing, advertising and technology. Once the investor fervour passes – as Uber, Airbnb and Meta have experienced in years past – the only way to recover costs is to pass them on to the customer. So if you’re building an AI startup that sits on a foundation model, expect to see price increases in the tenfold range in the coming years.

Unlike SaaS, which found scale by leveraging an existing network that no single company owned, with foundation models you’re either dead before you start if you try to compete head-on, or totally risk-exposed by relying on another platform that can change wildly and vary in quality and uptime – something you can’t control.

Imagine something going wrong with your customer service AI chatbot – like what happened with Air Canada, whose chatbot offered a customer a discount that didn’t exist, and a court of law forced the company to honour it anyway. In an example like this, the AI foundation model can simply diverge from the script because of an update pushed to it or a change in training data. Air Canada cannot control what is likely its vendor’s vendor – the makers of the foundation model. And that’s a huge problem.
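
None of this solves the underlying problem, but there is a defensive pattern teams can use: pin the exact model snapshot you tested, and validate anything policy-sensitive against your own source of truth before a customer sees it. A minimal sketch, where `call_model`, the model name and the policy table are all hypothetical:

```python
import re

# Pin a dated model snapshot (not a floating alias) and post-validate
# policy-sensitive output. All names here are hypothetical.
PINNED_MODEL = "chat-model-2024-08-06"
APPROVED_DISCOUNTS = {10, 15}  # discount percentages that actually exist in policy

def call_model(model: str, prompt: str) -> str:
    """Stand-in for whatever chat API your vendor exposes."""
    raise NotImplementedError

def safe_reply(prompt: str) -> str:
    reply = call_model(PINNED_MODEL, prompt)
    # If the model invents a discount you don't offer, don't send it.
    for pct in re.findall(r"(\d+)\s*%", reply):
        if int(pct) not in APPROVED_DISCOUNTS:
            return "Let me connect you with a human agent about discounts."
    return reply
```

Pinning only protects you until the vendor retires that snapshot, which is rather the point: the control still isn’t yours.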

There are some attempts at heading off the obvious platform risks for startups and companies looking to adopt Generative AI tooling. One example is Salesforce’s Agentforce Testing Center that allows business users to run models in a sandbox environment, test “utterance controls” over what AI agents are saying to consumers, and track spending. 
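
To make that concrete, here’s a toy version of the kind of pre-deployment check such a sandbox is getting at – replay test prompts against an agent, flag banned utterances, and tally token spend. Every name is invented for illustration; this is not Salesforce’s actual API:

```python
from typing import Callable, List, Tuple

# Phrases the agent must never say to a consumer (illustrative only).
BANNED_PHRASES = ["guaranteed refund", "legal advice", "free upgrade"]

def run_sandbox(agent: Callable[[str], Tuple[str, int]],
                prompts: List[str]) -> None:
    """Replay prompts, flag banned utterances, and report token spend."""
    total_tokens = 0
    for prompt in prompts:
        reply, tokens = agent(prompt)  # agent returns (reply text, tokens used)
        total_tokens += tokens
        hits = [p for p in BANNED_PHRASES if p in reply.lower()]
        print(f"{prompt!r}: {'FLAGGED ' + str(hits) if hits else 'ok'}")
    print(f"estimated spend: {total_tokens} tokens")
```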

You see where I’m going. The limit of scale in AI is one of the biggest (A)icks out there, because we’re all even more reliant on a small group of companies to power what are arguably the most powerful tools available today, and as the costs go up, they will most definitely be passed on… along with all the changes you never asked for.

Usefulness Ick 

The scale question directly impacts my second question – is this stuff actually useful? 

Let me show you one chart that gives us the answer: 

You might be asking yourself: If LLMs and Generative AI are such a home run, why in the world are so many brands spending so much on consultants to try and work out basic use cases?

As with all technology, we’ve made it complex and hard to use for ourselves. There are some very interesting products and services built off the back of the Generative AI craze. But as Benedict Evans points out, most people are only using AI tooling once a month – if at all in many countries – and more than 70% haven’t even heard of it. Outside the realm of white-collar desk work, Generative AI still has limited application.

One of the reasons why is that we haven’t made up our minds yet on whether AI should be a platform in its own right, or a feature to add to something else. Is it the whole enchilada or just the hot sauce? There’s an identity problem for these products: We don’t know how to approach them, and the vendor landscape makes things even more confusing. 

From a Martech perspective, there’s a whole host of applications for AI across the stack: automated content creation, automated charts, synthetic data, and data management. Many of these promise efficiency gains, the ability to find new customers at scale, personalization improvements and more.

You can find a whole list of fascinating products that are helping customers drive real marketing returns in the fields of customer experience and data and analytics in our archive. You don’t need to look far to see the ROI. 

But in some cases, AI tooling is just forced upon us, as is the case with Microsoft Office and Google Search. The speed with which companies are making us use these tools is a self-defeating tactic: adding underbaked, hallucinating, and often just plain wrong features to your product roadmap surely must be a textbook way to drive customers away?

Perhaps not. 

There is another useful application of AI – vendor lock-in. TMW’s Head of Research, Keanu Taylor, suggests in TMW #195 that AI is just a lock-in strategy by another name: 

“I’ll illustrate this with a hypothetical. Let’s say I’m a Salesforce customer who has purchased the whole suite. I want to create an Agentforce agent that’s only goal is to increase customer lifetime value. To do this, it would need to have decision-making control across the suite, as well as the ability to engage customers via specific channels. If a customer raised a complaint via Service Cloud, the agent might ping a customer service representative to call the customer and offer a goodwill gesture whilst deciding to drop them out of a particular marketing journey, which could require it to reach into Data Cloud and Marketing Cloud.

The idea here is to tie platforms together with capabilities that only work when they have access to the whole picture; AI agents are that tying mechanism.” 
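
To see the tying mechanism in code, here’s a minimal sketch of Keanu’s hypothetical. Every class and method name is invented for illustration – this is not Salesforce’s API – but it shows the coupling: the agent’s single goal only works while it holds handles to every cloud at once.

```python
# Stubs standing in for three separate products in one vendor's suite.
class ServiceCloud:
    def open_complaints(self, customer_id: str) -> list:
        return []

class MarketingCloud:
    def remove_from_journey(self, customer_id: str, journey: str) -> None:
        pass

class DataCloud:
    def lifetime_value(self, customer_id: str) -> float:
        return 0.0

class CLVAgent:
    """Agent whose one goal is to lift customer lifetime value."""
    def __init__(self, service: ServiceCloud, marketing: MarketingCloud,
                 data: DataCloud) -> None:
        self.service, self.marketing, self.data = service, marketing, data

    def handle(self, customer_id: str) -> None:
        # A complaint in one cloud triggers actions in the other two.
        if self.service.open_complaints(customer_id):
            self.marketing.remove_from_journey(customer_id, "winback")
            clv = self.data.lifetime_value(customer_id)
            print(f"ping a rep to call with a goodwill gesture (CLV ${clv:,.0f})")
```

Remove any one of those dependencies and the capability collapses, which is exactly what makes leaving the suite so expensive.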

AI adoption will slow down significantly as we all realise that the tooling, while better than Clippy, is still light years away from AGI-level intelligence that can create content, handle marketing tasks and do things like personalization at scale much faster than humans can.

And part of the discomfort I’m getting comes from the massive push by large vendors to get companies to adopt their AI tooling. Whether it’s a CDP, a marketing automation platform or an analytics suite, it seems as though the SDRs at these companies have been briefed to sell AI tooling as a wonderful complement and upgrade for brands at all costs.

There are three ways to think about the usefulness of AI tooling: bundle, build or break. 

Bundle is the stuff you’re seeing with Salesforce Agentforce, Microsoft Copilot and Adobe Firefly – it’s adding AI features to existing software paradigms in an effort to use those tools in an altogether different way. 

Build is the entirely new software and platforms built on top of the AI tools. Things like Perplexity, an entirely new model for search, or Midjourney, the first consistently good image generator in the early days. The aim is to introduce entirely new concepts to the market; this has seen some benefits, but most new-world thinking does take time for adoption.

Break is a disruptive force against an existing industry. Reducing an entire customer experience down to a chat window with an LLM agent is one example; replacing designers with automated content creation tools is another. In a way, ChatGPT itself is a total disruptor of the existing way people do a myriad of knowledge work.

The main question about this technology revolves around its reliability. The LLM and generative imagery tooling feels seriously incomplete compared with what it purports to do. Why are we all getting so excited by expensive and yet incomplete software we cannot fully rely on to do anything? 

The recent software update failure from CrowdStrike – halting planes, public transport, hospitals, banks and broadcasters to the tune of $5 billion in costs for Fortune 500 companies – illustrates the point. CrowdStrike is valuable because its security solutions can underwrite business-critical use cases like taking payments, directing flights or storing key information, and when they can’t, it becomes obvious very fast.

If LLMs and AI agents want to become critical pieces of infrastructure, we’ll have to move a long, long way from random chat and the occasional meme generator. 

AI is just not there yet, and it’s very unlikely to reach the kind of stability that would let marketers embrace it with confidence any time soon.

The Stupid Ones

The last ick might be misunderstood as a screed against AI taking our jobs. Far from it. The real issue is the cultural implications when AI content floods our newsfeeds, conversations, and everyday interactions with other people. 

It’s not about our replacement, it’s about our augmentation. 

First, there are the new Apple ads. Have you seen them? Apple is out here championing people who deceive others and embrace a lazy, low-effort approach to the things that matter – like your work and family – with its suite of AI tools.

Take this video showing the kind of staff member most of us would absolutely hate to work with. Watch him literally while away his entire day as everyone else is being productive around him.

He writes a slobbering email filled with stupid slang, and then asks Apple Intelligence to make it more collegial before sending it to a coworker, asking him to take work off his plate in the most politically correct doublespeak you could muster.

No, I am not making this up – this is how Apple envisages how we should be using their new AI products. 

CNET’s Bridget Carey reviewed Apple’s new Apple Intelligence features on the iPhone, Mac and iPad and says that “Apple Intelligence is for the Stupid Ones,” unpacking this bewildering positioning and arguing that it turns the products into tools for the lazy – for people who don’t want to put effort into life and, at worst, become incredibly dishonest. What happened to the “bicycle for the mind”?

In effect, Apple’s positioning is similar to that of our friends at Artisan, the “Stop Hiring Humans” company featured in my LinkedIn post. These companies are increasingly against humanity: If your job is to automate human thought and abstract away skills and knowledge, rendering the human effectively useless in all the situations that matter, I’m sorry, but that’s fundamentally an anti-humanistic ideology and a big, big (A)ick.

If successful, this kind of positioning will create a new wave of consumer behaviour change, and guess what? Interactions will become increasingly transactional, people will become lazier, and more of your content will be obscured by AI summarisation tools.

It seems as though we’re doing our best to remove real people from almost every online interaction. Whether it’s AI-generated gibberish polluting our newsfeeds, AI chatbots inserted into recruitment processes, or people using AI search to get information about products and services, all of it is a huge ick, because underlying it is a fundamentally anti-humanistic culture that denies us connection, authenticity and the ability to put real effort behind something and be rewarded for it.

The naked revelation from tech companies that AI tools are here to take away your freedom of thought – and the visceral reaction from the public – only cements the idea that Generative AI is quickly developing a brand and reputation problem. 

False prophets

The core question of any marketing technology program is whether to embrace new technology. Will this new solution, idea or strategy lead to a better future? Many folks who manage large, complex marketing technology programs have millions of dollars riding on these ideas already – optimism is big business. 

AI has been sold by big tech, VCs, and tech influencers as the unstoppable force of the future that you cannot miss. As the now-popular but incredibly stupid catchphrase puts it, “AI isn’t going to take your job, but someone using AI will” – clearly a sign that you must master these tools and invest in them in order to future-proof your own career.

Seriously? We’ve seen this story play out many times before. And in almost all cases, nothing happens. We play with the tools, and then get back on the job doing the important stuff of marketing, like finding revenue and improving the customer’s experience. 

Can AI tools help with that? Probably. But at this point, they represent nothing more than a distraction. 

A great way to discern whether it’s worth investing your time and dollars in a technology is whether or not its product marketing is making prophetic claims about the future. Avoid at all costs companies that don’t speak about the real, demonstrable value that can be shown today – I can’t believe I have to write that sentence.

If we’re not careful, we might just sleepwalk into the haze of greater vendor lock-in, tools that don’t quite work but cost us like they do, and a bunch of crayons we might use once a month… and not much more. 


Stay Curious,

Make sense of marketing technology.

Sign up now to get TMW delivered to your inbox every Sunday evening, plus an invite to the Slack community.


Want to share something interesting or be featured in The Martech Weekly? Drop me a line at juan@themartechweekly.com.
