7 April 2025

Ready or not, robots are here – so how can we trust them?

By Dione David

Bluerydge’s Adam Haskard says the company’s ‘watershed’ secure robotics technology Sierra Blue gives consumers a much-needed look under the hood. Photo: Thomas Lucraft.

From self-driving cars to medical diagnosis and even military decision-making, AI and robotics are increasingly weaving themselves into our daily lives. As the dialogue around trust moves beyond theoretical, experts are sounding the alarm on the need for security.

A newly published paper, co-authored by Bluerydge’s Adam Haskard and the University of Canberra’s Damith Herath, dives deep into the trust and security issues surrounding the systems becoming omnipresent in our hospitals, factories, roads and battlefields.

Secure Robotics: Navigating Challenges at the Nexus of Safety, Trust, and Cybersecurity in Cyber-Physical Systems explores the crucial role of security in ensuring that the humans who rely on the robotics we build, whether doctors, soldiers or everyday consumers, can trust those systems as we continue to adopt and operate them.

Canberra cyber security and technological capability company Bluerydge is also leading a high-profile project, Sierra Blue, that exemplifies the broader trends highlighted in the paper.

“Any industry using automated machinery or cyber-physical systems [CPS] that produce a net gain for the economy is at risk of physical harm and environmental damage due to robotic malfunctions or cyberattacks – particularly where autonomous robots require high built-in safety measures,” Mr Haskard says.


Mr Herath says Australia has already witnessed the “devastating effects of cyber malfeasance in purely digital contexts”.

“When applied to CPS, these risks could have an exponential impact, affecting not just data but also physical infrastructure and essential services,” he says.

“In a follow-up study on vulnerabilities in industrial robotics, we examined a widely used collaborative robot arm (Cobot) – a robotic system designed to work alongside humans unlike earlier generations that were confined to isolated work cells for safety. Shockingly, this Cobot was easily compromised by simply inserting a USB loaded with malicious code.

“Given these robots operate directly in human environments, the risks are considerable. Once compromised, an attacker could take full control, potentially causing severe physical harm, operational disruptions and significant economic damage.

“This underscores the pressing need for stricter cybersecurity protocols in industrial automation to safeguard people and infrastructure. This goes beyond traditional, technically oriented cybersecurity measures, hence the new ‘Secure Robotics’ paradigm.”


Bluerydge recently shared insights on the secure robotics paradigm at a University of Canberra symposium. Photo: Thomas Lucraft.

While the reality of a future robotics or AI malfunction probably won’t resemble a Will Smith movie, Professor Phillip Morgan says real-world applications today could pose a threat to everything from the global economy to human life.

As the chair in Human Factors Psychology and Cognitive Science, director of research at the Centre for Artificial Intelligence, Robotics and Human Machine Systems at Cardiff University, and director at the Airbus Centre of Excellence in Human-Centric Cyber Security, Prof Morgan is considered a preeminent authority on interactions between humans and machines and the processes and training involved in building trust between them.

He points to the example of autonomous vehicles.

“If we deploy them too early, and they get hacked in a less serious (but still very bad) scenario, imagine them coming to a halt on a Wednesday at 12 pm in London and throwing the capital into gridlock. In a worse one, imagine them running red lights and endangering pedestrians. Are we prepared?” he says.

“Anything that’s connected to the internet is potentially at major cyber and privacy risk, whether it’s from cyber criminals intent on causing disruption, financial gain or simply parading their skills, or people who wish to access our data to use it for anything from marketing to exploiting us.

“How do we put ourselves in a future where we’re ready, such that if and when it does happen, it won’t mean complete erosion of public trust and the subsequent end of that entire industry?”

Prof Morgan says that as technology advances at breakneck speed, “security by design”, which addresses the greatest vulnerability in robotics and AI – human interaction – will be one of the most effective ways of mitigating the risks.

“We are not just talking hardware and software but the people designing, building and using the technology,” he says.


The good news: people are actively working on the problem and some measures already exist – including Sierra Blue.

Backed by Australia’s Defence Trailblazer program – a federal initiative to accelerate sovereign tech capabilities – the project focusses on “localised, decentralised” AI models for Defence and healthcare that can function reliably even in tough conditions.

As Mr Haskard points out, it can be deployed in disconnected, denied or degraded internet and information spaces.

“It can be directly plugged into the decentralised Sierra Blue model we’ve developed, allowing cyber-physical systems to collect data and telemetry for real-time applications. That’s watershed, because we’re talking about real-time security assessments using the model as opposed to point-in-time assessments using a spreadsheet,” he says.

“Look at a robotic surgical system used in a theatre in a hospital, for example. Because patient data is collected during the operation, it cannot be hooked into the internet. But if those robot systems were connected to a local AI solution, we could have a human on the loop gauging the telemetry without risk of the patient data being exposed on the internet.”

Sierra Blue also serves as a security measurement model: a quantification tool secure robotics practitioners can use to reach a designated value known as an “R value” – a composite of the safety, ethical and human factors in an operational context that is acceptable to consumers.

The higher the R value, the more secure a system must be.
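The article does not publish Sierra Blue’s actual R-value formula, so the following is purely an illustrative sketch of how a composite score could combine safety, ethical and human-factors ratings into a single figure; the function name, weights and 0–1 scale are all assumptions, not Bluerydge’s method.

```python
# Hypothetical illustration only: Sierra Blue's real R-value
# calculation is not published in the article. This sketches one
# generic way to combine three assurance scores into a composite.

def r_value(safety: float, ethics: float, human_factors: float,
            weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted composite of three scores in [0, 1] (illustrative only)."""
    scores = (safety, ethics, human_factors)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each score must lie in [0, 1]")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    # Higher composite -> higher assurance demanded of the system
    return sum(w * s for w, s in zip(weights, scores))

# e.g. a surgical robot scored on safety, ethics and human factors:
print(round(r_value(0.9, 0.8, 0.85), 3))
```

Under this toy scheme, an operational context that scores high across all three factors yields a high R value, matching the article’s point that a higher R value demands a more secure system.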

“It gives the industry a deeper insight into the requirements and for consumers to get a look under the hood of AI and cyber-physical products,” Mr Haskard says.

To learn more or to book into a Sierra Blue demo day, visit Bluerydge.

REGION MEDIA PARTNER CONTENT

