
Australia debates the future of AI laws 


“Business will be handed a major say on the future of artificial intelligence in Australia, with the government planning to revamp the expert group helping set the direction of regulation on the pivotal technology,” reported the Australian Financial Review.

They added that “Two visions of regulation for high-risk AI are being considered behind the scenes, according to sources familiar with the matter: a stringent European Union-style AI law or legislation relying on broad principles.”

Earlier, the Australian government considered new laws to regulate the use of artificial intelligence (AI) in “high-risk” areas like law enforcement and self-driving vehicles. Voluntary measures were also explored, including encouraging companies to label AI-generated content. The country outlined its plan to respond to the rapid rise of artificial intelligence.

Rhonda Robati, Asia Pacific Executive Vice President at Crayon, said: “It is welcome news that Australian businesses will be able to contribute their recommendations on the future of AI in this country. AI is a continually evolving, highly complex field with millions of use cases – many of which are still unknown. While the industry is making progress and finding new and innovative ways of using the technology, the regulation around how the technology is used is still evolving.
 
“AI can give businesses a competitive advantage if utilised in the right way. It can transform solutions by taking in telemetry data to create better products, using operational data to make operations more efficient, using employee inputs to improve the experience for workers, and using customer signals to build deeper relationships with customer contacts. When talking about AI regulations, it is important to remember that businesses will find it critical to infrastructure modernisation and creating data-driven value.
 
“A key area that must be decided is who is accountable for decisions AI systems make – at the moment this is somewhat of a grey area and it needs clarification. We understand and support the need for ethical responsibility to ensure AI is being used in the right way, supporting the privacy and security of people, customers and end users.”

The Australian Financial Review (AFR) reported that one topic being considered by the advisory body is the model for future AI laws, which could impact various industries perceived as higher risk, including healthcare, finance, and housing. These laws might either ban specific practices such as creating social scores for customers, similar to what the European Union has implemented, or establish general standards like prohibiting discrimination.

According to sources familiar with the government’s plans, the new advisory body would be permanent and would include more business representatives. This shift is because companies will primarily be the ones deploying AI tools. Currently, business groups like the Business Council of Australia only serve as observers on the existing body.

Under the Canberra government’s plan, safeguards would have been applied to technologies that predicted the chances of someone committing a crime again, or that analysed job applications to find a suitable candidate.

Australian officials had said that new laws could also have mandated that organisations using high-risk AI ensure a person was responsible for the safe use of the technology. The Canberra government also wanted to minimise restrictions on low-risk areas of AI to allow their growth to continue. An expert advisory committee was to be set up to help the government prepare legislation.

Ed Husic, Australia’s federal minister for industry and science, told the Australian Broadcasting Corp. on Wednesday that he wanted AI-generated content to be labelled so it could not be mistaken for genuine material. “We need to have confidence that what we are seeing we know exactly if it is organic or real content, or if it has been created by an AI system. And, so, industry is just as keen to work with government on how to create that type of labelling,” he said.

“More than anything else, I am not worried about the robots taking over, I’m worried about disinformation doing that. We need to ensure that when people are creating content that it is clear that AI has had a role or a hand to play in that.”



Yajush Gupta

Yajush is a journalist at Dynamic Business. He previously worked with Reuters as a business correspondent and holds a postgrad degree in print journalism.
