How much content is too much? Agencies are starting to ask that question


Agencies are beginning to rethink their approach to creating content for clients, thanks to the growing volume of content and more intense competition for eyeballs. From statistical analysis to influencer marketing strategies, the content business is changing as agencies evaluate the quantity, ethics and impact of the content they make for clients. But how much is too much?

Brand-driven content has become a major way for marketers to reach consumers, generating awareness and loyalty along the way. Short articles or posts and videos were the top two content types that B2C marketers used in the past 12 months, per Content Marketing Institute in 2022.

The demand keeps rising, too. In the U.S., average time spent with digital media was 8 hours and 14 minutes per day in 2022, driven by consumption on devices like smart TVs, gaming consoles and other connected devices, according to Insider Intelligence. This was up 1.9% compared to the previous year’s 8 hours and 5 minutes per day. While the average isn’t increasing as quickly as 2020 pandemic rates, digital media time is still taking up a bigger share of our overall time spent consuming media.

Ethics and effectiveness of content creation

There comes a point in content strategizing when brands need to weigh ethics and purpose alongside other more concrete goals, said Amy Luca, EVP, global head of social at Media.Monks. The goal is not to create as much content as possible just for the sake of producing content, to say nothing of the mental health toll it could take on people.

“I’m really trying to push my teams and the clients that we work with to really think about whether that content that we’re producing is adding value and is worth spending time with,” Luca told Digiday. “Are the imagery, topics, conversations, doing anything that will detract from mental health and or wellbeing of the consumers that we’re approaching?”

Luca believes the way to balance this is through analyzing the fit of the content, the audience and the brand’s goals. To improve this effectiveness, Media.Monks does statistical regression analysis for clients to determine the optimal amount of content. And clients are thinking more about long-term brand equity over the short-term views in social, Luca added.

“The algorithms don’t reward us for the content — we see a lot of diminishing returns from the algorithms if we’re just putting tons and tons of content that is content for content sake,” Luca said.
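Media.Monks has not published its models, but the diminishing returns Luca describes can be sketched with a simple regression. The data, column meanings and logarithmic functional form below are illustrative assumptions, not the agency's actual methodology:

```python
import numpy as np

# Hypothetical weekly data: posts published vs. total engagement.
# A logarithmic fit captures diminishing returns: each extra post
# adds less engagement than the one before it.
posts = np.array([5, 10, 20, 40, 80], dtype=float)
engagement = np.array([1100, 1900, 2600, 3200, 3700], dtype=float)

# Fit engagement ≈ a * ln(posts) + b via least squares.
a, b = np.polyfit(np.log(posts), engagement, 1)

def marginal_gain(n):
    """Approximate extra engagement from publishing one more post at volume n."""
    return a * (np.log(n + 1) - np.log(n))

# The marginal value shrinks as volume grows: going from 10 to 11
# posts is worth far more than going from 80 to 81.
print(marginal_gain(10), marginal_gain(80))
```

A fit like this gives a concrete way to argue for "optimal amount of content": stop adding posts once the marginal gain drops below the cost of producing one.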

The influencer business

With a lot of social media content generated by influencers, influencer marketing agencies and firms are similarly having to strike the right balance between the quantity and quality of their content. Ryan Detert, CEO of influencer marketing company Influential, said influencers have to consider their content on an individual basis, taking into account both the type of content and the platform it is made for.

“There isn’t going to be a one-size-fits-all answer when producing content for multiple platforms,” Detert said. “The same content that goes viral on TikTok may not go viral on YouTube Shorts and vice versa.”

Detert insists that quality content is not just high production value — it also needs to factor in relevance for that creator’s audience. The main elements for influencers trying to grow an audience are “consistency, authenticity and cadence,” he added.

At influencer management firm Cycle, the focus is on using certain lo-fi or low-resolution content that often drives more impactful results and makes the content feel more organic. Bea Iturregui, vp of creator and brand partnerships at Cycle, said the firm relies on influencers to know the best tactics for their particular audience.

“Sometimes this means having their Instagram Reel loop continuously or syndicating their in-feed post to their story,” Iturregui said. “Other times it means polling their followers or a quick piece of lo-fi content created in a home kitchen.”

“And it’s typically never a game of quantity,” added Corey Smock, Cycle’s VP of business development. “Influencer marketing isn’t about being the loudest in the room. It’s about making personal connections and cultural impact. That’s often accomplished through less, not more.”

Developing a content discipline

Some agencies are also focusing on their content offerings and working with clients on new approaches. Stagwell’s Instrument, a multidisciplinary digital and creative company, this month updated its brand positioning to bring together its product, digital design and brand marketing capabilities with two new core disciplines — content innovation and experience innovation. Last November, Instrument joined forces with digital agency Hello Design within the Stagwell network.

Instrument’s units will work with clients to scale across their content and digital experiences, focusing on creating stories it hopes will have impact. Paul Welch, executive director at Instrument, who leads content innovation, pointed out that the content landscape has changed a lot since the pandemic. There will always be new platforms, channels and types of media, Welch added, but Instrument focuses on partnering with the right communities and on producing a smaller quantity of higher-value content.

“It’s a lot of mid funnel work – we needed to have impact, we needed to have meaning and we needed to essentially move the needle or have an impression for our consumers,” Welch said. “So it isn’t necessarily about the highest quantity of viewership, it’s more about connecting more closely with whatever audience we want to talk to.”

Even though there is a lot of content in the market, consumers also have higher expectations now. J.D. Hooge, chief creative officer at Instrument, explained that consumers and clients are “more discerning” these days – and they also have a lot of options to watch something else if the content doesn’t resonate.

“They are going to call brands on their bullshit. They are going to hold brands to really high expectations as well,” Hooge said.

Luca of Media.Monks added: “[Marketers and social agencies] are going to erode brand equity, and at the end of the day, the consumers will switch. It will be high switching, because whatever gets their attention is the thing that they’re going to gravitate to.”


The pitfalls and practical realities of using generative AI in your analytics workflow


Over the last few months, we’ve heard much about how generative AI is set to change digital marketing. As consultants, we work with brands to harness technology for innovative marketing. We quickly delved into the potential of ChatGPT, the most buzzworthy large language model-based chatbot on the block. Now, we see how generative AI can act as an assistant by generating initial drafts of code and visualizations, which our experts refine into usable materials.

In our view, the key to a successful generative AI project is for the end user to have a clear expectation for the final output so any AI-generated materials can be edited and shaped. The first principle of using generative AI is that you should not trust it to provide completely correct answers to your queries.

ChatGPT answered just 12 of 42 GA4 questions right.

We decided to put ChatGPT to the test on something our consultants do regularly — answering common client questions about GA4. The results were not that impressive: Out of the 42 questions we asked, ChatGPT only provided 12 answers we’d deem acceptable and send on to our clients, a success rate of just 29%.

A further eight answers (19%) were “semi-correct.” These either misinterpreted the question and provided a different answer to what was asked (although factually correct) or had a small amount of misinformation in an otherwise correct response.

For example, ChatGPT told us that the “Other” row you find in some GA4 reports is a grouping of many rows of low-volume data (correct) but that the instances when this occurs are defined by “Google machine learning algorithms.” This is incorrect. There are standard rules in place to define this.

Dig deeper: Artificial Intelligence: A beginner’s guide

Limitations of ChatGPT’s knowledge — and its overconfidence

The remaining 52% of answers were factually incorrect and, in some cases, actively misleading. The most common reason is that ChatGPT does not use training data beyond 2021, so many recent updates are not factored into its answers. 

For example, Google only officially announced the deprecation of Universal Analytics in 2022, so ChatGPT couldn’t say when this would be. In this instance, the bot did at least caveat its answer with this context, leading with “…as to my knowledge cut off is in 2021…”

However, some remaining questions were wrongly answered with a worrying amount of confidence, such as the bot telling us that “GA4 uses a machine learning-based approach to track events and can automatically identify purchase events based on the data it collects.”

While GA4 does have auto-tracked “enhanced measurement” events, these are generally defined by listening to simple code within a webpage’s metadata rather than through any machine learning or statistical model. Furthermore, purchase events are certainly not within the scope of enhanced measurement.

As demonstrated in our GA4 test, the limited “knowledge” held within ChatGPT makes it an unreliable source of facts. But it remains a very efficient assistant, providing first drafts of analyses and code for an expert to cut the time required for tasks. 

It cannot replace the role of a knowledgeable analyst who knows the type of output they are expecting to see. Instead, time can be saved by instructing ChatGPT to produce analyses from sample data without heavy programming. From this, you can obtain a close approximation in seconds and instruct ChatGPT to modify its output or manipulate it yourself.

For example, we recently used ChatGPT to analyze and optimize a retailer’s shopping baskets. We wanted to analyze average basket sizes and understand the optimal size to offer free shipping to customers. This required a routine analysis of the distribution of revenue and margin and an understanding of variance over time. 

We instructed ChatGPT to review how basket sizes varied over 14 months using a GA4 dataset. It then suggested some initial SQL queries for further analysis within BigQuery and some data visualization options for the insights it found.
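The actual queries and data aren't shown here, but the shape of the basket-size analysis can be sketched in a few lines of pandas. The order values, column name and threshold heuristic below are illustrative assumptions, not the retailer's real figures or our final method:

```python
import pandas as pd

# Illustrative stand-in for purchase data; in practice this would be
# pulled from the GA4 BigQuery export rather than typed in by hand.
orders = pd.DataFrame(
    {"basket_value": [18, 22, 27, 29, 33, 35, 41, 48, 55, 60]}
)

# Distribution of basket sizes: where do most orders fall?
summary = orders["basket_value"].describe(percentiles=[0.25, 0.5, 0.75, 0.9])

# One simple heuristic: set the free-shipping threshold a bit above the
# median, nudging the large group of mid-sized baskets to add an item.
threshold = round(orders["basket_value"].median() * 1.2)

print(summary)
print("candidate free-shipping threshold:", threshold)
```

A first pass like this is exactly the kind of draft ChatGPT can produce quickly; the analyst's job is then to swap in real data, question the heuristic and check the distribution assumptions.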

While the options were imperfect, they offered useful areas for further exploration. Our analyst adapted the queries from ChatGPT to finalize the output. This reduced the time for a senior analyst working with junior support to create the output from roughly three days to one day.

Dig deeper: 3 steps to make AI work for you

Automating manual tasks and saving time

Another example is using it to automate more manual tasks within a given process, such as quality assurance checks for a data table or a piece of code that has been produced. This is a core aspect of any project, and flagging discrepancies or anomalies can often be laborious.

However, using ChatGPT to validate a 500+ row piece of code to combine and process multiple datasets — ensuring they are error-free — can be a huge time saver. In this scenario, what would normally have taken two hours for someone to manually review themselves could now be achieved within 30 minutes. 

Final QA checks still need to be performed by an expert, and the quality of ChatGPT’s output is highly dependent on the specific parameters you set in your instructions. However, a task that has very clear parameters and has no ambiguity in the output (the numbers either match or don’t) is ideal for generative AI to handle most of the heavy lifting. 
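A "numbers either match or don't" check of the kind described above is easy to specify unambiguously, which is why it suits generative AI well. Below is a minimal sketch of such a validation, with hypothetical table and column names; a real check would compare a pipeline's input and output tables:

```python
import pandas as pd

# Two versions of a table that should agree: the raw source data and
# the output of a transformation pipeline.
source = pd.DataFrame(
    {"region": ["EU", "US", "APAC"], "revenue": [120.0, 340.0, 95.0]}
)
processed = pd.DataFrame(
    {"region": ["EU", "US", "APAC"], "revenue": [120.0, 340.0, 95.0]}
)

def qa_check(a: pd.DataFrame, b: pd.DataFrame) -> list[str]:
    """Flag unambiguous discrepancies: row counts and per-region revenue."""
    issues = []
    if len(a) != len(b):
        issues.append(f"row count mismatch: {len(a)} vs {len(b)}")
    merged = a.merge(b, on="region", suffixes=("_src", "_out"))
    mismatched = merged[merged["revenue_src"] != merged["revenue_out"]]
    for _, row in mismatched.iterrows():
        issues.append(
            f"{row['region']}: {row['revenue_src']} != {row['revenue_out']}"
        )
    return issues

print(qa_check(source, processed))  # an empty list means the tables agree
```

Because the pass/fail criteria are fully spelled out, a generated check like this leaves little room for the model's overconfidence to do damage; the expert only has to confirm the criteria themselves are right.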

Treat generative AI like an assistant rather than an expert

The progress made by ChatGPT in recent months is remarkable. Simply put, we can now use conversational English to request highly technical materials that can be used for the widest range of tasks across programming, communication and visualization.

As we’ve demonstrated above, the outputs from these tools need to be treated with care and expert judgment to make them valuable. A good use case is driving efficiencies in building analyses in our everyday work or speeding up lengthy, complex tasks that would normally be done manually. We treat the outputs skeptically and use our technical knowledge to hone them into value-adding materials for our clients.

While generative AI, exemplified by ChatGPT, has shown immense potential in revolutionizing various aspects of our digital workflows, it is crucial to approach its applications with a balanced perspective. There are limitations in accuracy, particularly concerning recent updates and nuanced details. 

However, as the technology matures, the potential will grow for AI to be used as a tool to augment our capabilities and drive efficiencies in our everyday work. I think we should focus less on generative AI replacing the expert and more on how it can improve our productivity.

The bottom line is clear: ChatGPT and other LLM-based AI tools will become more and more common in our daily routines. Having said that, it’s important to have professionals managing your content and taking care of your analytics workflow.