Amidst the wave of tech-bro enthusiasm and proclamations of global transformation triggered by GenAI, it’s easy to get lost in the noise.


AI-generated content poses some clear questions around ethics and transparency for an agency like Content Nation. While it can expedite processes, we must also navigate the challenges it raises. At Content Nation, our commitment to responsible creativity is central to what we do. GenAI offers a chance to reinforce that commitment through innovation deployed in a conscientious and ethical manner.
Ferdinand Pereira,
Creative Director, Content Nation


Establishing authority
GenAI works off large language models that incorporate huge datasets from across the digital world. It’s like SEO on steroids, capturing huge swathes of information that we wonderful marketers can then utilise for our own content needs. However, much like search-engine-based research, there is a need for caution.
Search engines essentially work by matching a user’s search with the answer their algorithm assesses as the closest fit… but the algorithm is not flawless. The same is true of GenAI. What’s more, the sourcing and insight behind GenAI is arguably (currently) even more opaque than web search, which at least links you to the page where the information is presented.

Originality and product context are the key challenges [of GenAI]. But I think it’s only a matter of time until these key challenges will be overcome.
Mohan Kalyana,
CEO, Swipey


Timeliness
Leading on from authority, the question of timeliness is particularly pertinent for ChatGPT’s free, open-access version. This GenAI model is ‘trained’ on a huge, static dataset that reaches up to a specific (and periodically updated) knowledge cutoff, but no further. If you’re looking for answers about genuinely time-sensitive data (and frankly, that’s a lot of data), then you need to understand those limitations. The Pro and Enterprise versions do go beyond this, but with limitations around the search engine of choice. It’s likely this will change over time, but right now you need to think carefully about which answers are timely. Gemini is not immune to this challenge either: checks are required to ensure the result it provides is genuinely the most recent and up-to-date answer to the given question.
Truthfulness and accuracy
You have to verify everything that comes out of a GenAI response; that’s just an unshakeable fact. It’s possible to prompt ‘give me the source of this information’ or ‘give me citations’, and you should always do so, but frankly it can be rather hit and miss whether a particular GenAI model will actually comply. Sometimes when asking for sources you simply receive a placeholder [source] without an actual link.
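If your team is verifying GenAI output at any scale, even a tiny script can help triage responses. As a minimal, illustrative sketch (not a production tool, and the function name is our own invention), this flags responses where the ‘citation’ is only a placeholder marker rather than a real link:

```python
import re

def audit_citations(response_text):
    """Triage a GenAI response's 'sources'.

    Returns (urls, placeholder_count): the real links found, and how many
    bare placeholder markers like [source] or [citation] appear instead.
    Every URL returned still needs a human to actually check it.
    """
    urls = re.findall(r"https?://[^\s)\]]+", response_text)
    placeholders = re.findall(r"\[(?:source|citation)\]", response_text,
                              re.IGNORECASE)
    return urls, len(placeholders)

reply = ("Our market grew 12% [source]. "
         "See https://example.com/report for details.")
urls, fakes = audit_citations(reply)
# urls holds the one real link; fakes counts the one bare [source] marker
```

A non-zero placeholder count is an instant signal that the ‘sourced’ claim was never actually sourced, before anyone spends time clicking links.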
This challenge aligns with industry perception: accuracy and quality were a prominent concern, raised by 31% of respondents to the Salesforce GenAI Snapshot Survey. This is a real challenge for marketers to navigate, balancing public perception with the reality of GenAI authority and accuracy. Research by Capgemini reveals 73% of consumers globally trust content created by GenAI. It’s your job not to let them down, guys!
It can be a particularly frustrating task to verify information from ChatGPT due to the nature of its learning process. There’s no guarantee a website that predates the model’s knowledge cutoff has survived in our savage digital landscape. That makes it extremely difficult to cross-check facts that were demonstrably true when the model was trained, but may be impossible to evidence now.
Gemini sometimes pretends not to be an all-powerful job-stealing robot AI, and claims it can’t help with sources as it’s just a simple language model… we see you Gemini!

Bias
AI might pretend it’s an all-inclusive joy-fest, but it’s fundamentally built on the value of its inputs, and a lot of the global conversation is heavily biased towards certain cultures and demographics. There is a long history of tech skewing towards European/US and English-standard data models.
“Hey, it’s just information,” says the tech-bro defending this paradigm, but ultimately you need to consider the biases that may be built into GenAI, and indeed your own biases in how you prompt it to respond. That means potentially biased, insensitive, or (at the most extreme) illegal content in some markets. You need to understand that risk when leveraging GenAI content.


Convergence of content
AI does not have infinite creativity. Fundamentally, it is an incredibly impressive set of algorithms that can utilise data to generate a ‘unique’ response which aligns with your prompt. That data is (for all intents and purposes) the endless array of human-generated data produced through our digital lifetimes.
Even if we discount the self-fulfilling prophecy of GenAI creating GenAI responses from GenAI responses… there’s a real challenge with ‘all content looking the same’. If you’ve worked with GenAI, you can probably spot when content has been generated using a GenAI platform. If we’re all using it, is all our content going to look the same? That’s why you can’t run a GenAI-only strategy: you need people. Take that, boss! You can’t fire me yet.

My primary concern about the future of genAI is how can it be trained with novel and useful input, even as the input is increasingly becoming just other gen AI input?
CEO/CTO,
Software-as-a-Service Industry

Legal concerns
Content Nation is not qualified to give you legal advice, sorry. What we can say is that GenAI comes with some pretty weighty unanswered questions about legal implications.
The most obvious is copyright and content ownership. GenAI uses existing data to generate responses… but who owns the original data, and whether its use is legally permitted, remains… let’s just say, murky. If you use GenAI to generate a response and it somehow quotes or copies legally protected material, you may find yourself in hot water. This is an unlikely scenario, but not one that can be discounted.
Samsung has banned its workers from using GenAI on company devices for this very reason, following the discovery that sensitive code had been uploaded to the platform by employees.
What’s even more complex is that you need to consider those legal implications across Southeast Asia’s diverse regional landscape. Governments across the region have introduced a number of ‘fake news’ or misinformation laws in recent years. If you use GenAI to create content but don’t check that it complies with the local legal framework, you can’t blame the nasty robot for putting you or your client on the wrong side of the law.

Ethics and good practice
Linking into legal concerns is the fundamental question of ownership and good practice. If you’re using GenAI to generate a large share of your content, you need to communicate this fact clearly to any relevant stakeholders.
Good communication is just good practice. Have a clear policy for communicating significant AI use, and ensure all team members know how to respond if a stakeholder directly queries your use of the platform.

In everything we do, we need to think what is the right way to do this, and how will it reflect on our client and our business. With GenAI, that means taking that obligation for clear communication and transparent practice a step further, to genuinely consider how, where, and why it is ethical to apply to our practice.
Loshini,
COO, Content Nation


Top Tip: Identifying and Assessing if Content is AI-generated
With the concerns raised above, you might be worried about a third-party contractor or remote worker ‘sneaking’ GenAI content into a submission without telling you. While by no means foolproof, there are some helpful AI content detectors online that can help you assess suspect content.
