8 October 2025

Invisible ink in the Age of AI

| By Peter Strong
Start the conversation
man's hand holding a pen

Big business is exploiting AI, but so are ‘clever’ job applicants to get around AI. Photo: Nes.

Technology has always reshaped how we compete and communicate. From the printing press to social media, each innovation has raised questions of fairness. Artificial intelligence (AI) is no different.

As machines increasingly influence decisions, from hiring to publishing, some are finding subtle ways to game the system.

These tricks may be technical, but the motivation to get ahead without being noticed is old school. Which raises an age-old ethical question: just because you can, does it mean you should?

One tactic involves invisible characters, special Unicode marks that don’t appear on the screen but are read by machines, such as using a white font on a white background. A resume might look normal to a recruiter, but an AI scanner could detect hidden cues designed to push the candidate higher up the shortlist.

It’s similar to an old search engine trick, where web pages are stuffed with hidden keywords to rank higher in a search.

Then there are traps to help differentiate chatbots from humans. For example, a techno geek may add hidden text that a chatbot would respond to, whereas a human wouldn’t even see it. It may, for example, be an absurd recipe instruction like ‘add 0.0001 teaspoons of salt’.

A human might laugh or question it. A chatbot, lacking common sense, can be made to echo it back. Machines can be fooled.

READ ALSO Video didn’t kill the radio star – will AI?

If this feels new, it isn’t. Humanity has always played this game.

Spies once used lemon juice as invisible ink. Renaissance painters hid symbols in religious art to incite rebellion or communicate with special groups of people. WWII operatives used microdots to send information, tiny photos shrunk to the size of a full stop.

The art of hiding messages in plain sight, known as steganography, is centuries old. The difference now? The audience is no longer just human; it includes machines.

Big business, too, has found ways to exploit AI. Some companies manipulate algorithms to favour certain demographics or suppress others. For example, a 2018 Forbes Magazine report found that Amazon’s hiring AI was downgrading resumes that included the word “women’s”. In other cases, biased training data has led to discriminatory outcomes in credit scoring, insurance, and even the criminal justice system.

What else are big businesses doing? Do their directors even know what they are doing?

Meanwhile, fraudulent job applicants are using AI to fabricate entire identities. Deepfake videos, voice manipulation and AI-generated resumes are being used to deceive hiring managers. The IT company Gartner predicts that by 2028, one in four job candidates could be fake.

Some scams are so sophisticated that they’ve infiltrated companies to steal data or funnel money to foreign governments.

No wonder our public service is being very careful with using AI – it’s a fabulous tool and also a danger to us all.

READ ALSO CIT on wrong path cutting courses like remedial massage

Defences are emerging. One is ‘normalisation’, software that strips documents of invisible characters and converts everything to plain text, or ‘detection’ tools that scan for suspicious formatting or hidden code points, much like plagiarism checkers.

Some systems look for statistical watermarks in AI-generated text, patterns too subtle for humans but detectable by machines.

And then there’s the oldest defence of all: our human eyes. Some organisations are going back to using people in the process. An AI might scan thousands of resumes, but a human still makes the final call.

It’s slower, yes, but humans can spot absurdities and bring judgment that machines can’t.

Still, none of these defences is foolproof. Attackers adapt. The cycle continues.

The bigger issue isn’t technical; it’s ethical. Gaming a chatbot with a recipe may be harmless fun, but slipping hidden code into a resume or bypassing filters undermines trust.

If people believe success comes not from merit but from secret digital tricks, confidence in the system collapses.

This matters because AI now shapes hiring, publishing, credit scoring and even government decisions. The integrity of those processes affects real lives.

For policymakers, employers and technologists, the challenge is to balance openness with integrity. Job applicants should know how their resumes are scanned. AI providers should be transparent when using hidden markers to detect machine-written text. Transparency, on both sides, is the first step toward fairness.

This can and will affect the millions of small businesses across the nation, as well as charities, sports clubs, local governments, schools, and everyone else.

Invisible ink, microdots and zero-width characters are all part of the same human urge, which is to get ahead, to compete, to test boundaries. But fairness isn’t just about the rules written down. It’s about the spirit with which we choose to play.

And that, in the end, is an ethical choice no algorithm can make for us.

Or can it?

Free Daily Digest

Want the best Canberra news delivered daily? We package the most-read Canberra stories and send them to your inbox. Sign-up now for trusted local news that will never be behind a paywall.
Loading
By submitting your email address you are agreeing to Region Group's terms and conditions and privacy policy.

Start the conversation

Daily Digest

Want the best Canberra news delivered daily? Every day we package the most popular Region Canberra stories and send them straight to your inbox. Sign-up now for trusted local news that will never be behind a paywall.

By submitting your email address you are agreeing to Region Group's terms and conditions and privacy policy.