Mar 27, 2025

What the Australian Government’s ban of DeepSeek says about the future of AI

DeepSeek was banned in Australia due to national security concerns over data privacy and foreign surveillance risks.


Why was DeepSeek banned in Australia?

The Australian Department of Home Affairs has directed all federal departments and public offices to identify, remove and prevent all installations and use of DeepSeek, citing cybersecurity threats and the risk of foreign government surveillance.

While private Australian citizens are still permitted to use the app, experts and government institutions alike have raised concerns about issues common to large language models, such as data privacy, confidentiality and the truthfulness of generated content.

What is the reason for the ban?

Department of Home Affairs Secretary Stephanie Foster cited an ‘unacceptable level of security risks’ to the Australian government as the reason for the ban.

DeepSeek’s Terms of Use and Privacy Policy mention the collection and use of vast amounts of user data, as well as the right to share this data with third parties, which has sparked controversy about the safety of the app.

What risks are there to privacy in the age of AI?

Recent developments in consumer software, and their resulting popularity, have placed data privacy and security at the forefront of many discussions. While the advent of AI-enabled software has not fundamentally changed the nature of the privacy risks users face, the scale at which AI systems can process and distribute data has challenged our control over what information is collected and how it is used.

What impact does this have on users?

The Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) has released a whitepaper that details how these risks can translate into threats to users and society at large:

  • Fraud: AI systems can use personal information shared through social channels to impersonate individuals, enabling identity theft and other malicious activity.
  • Bias: Predictive and analytical systems are trained on vast quantities of data, and while these data sets may be representative of a population, that does not necessarily eliminate bias in the output. Such biases can make tools prejudiced against certain social groups and skew performance standards.

How can data security be preserved?

Stanford HAI argues that the scale at which AI systems operate means security reform must occur not at the individual level but at a collective level, one that gives consumers more leverage over how their information is handled.

However, with organisations and governments alike still navigating a rapidly evolving AI landscape, such reform may be far from realisation. In the meantime, measures must be put in place across numerous aspects of the AI data chain:

  • Data Minimisation: One fundamental approach to managing privacy in AI is restricting collection to the minimum data necessary for the task; see the sketch after this list.
  • Transparency and Consent: Organisations must clearly communicate what data is being collected, how it is used, and who has access to it. Users must be well informed about these mechanisms and given clear means to grant or withhold consent.
  • Strong Encryption and Security Hygiene: Organisations must ensure that their AI systems are equipped with robust encryption and security protocols to maintain the integrity of data.
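
As a concrete illustration, here is a minimal sketch of the first and third measures: an allow-list that minimises what a user record exposes to a model, and symmetric encryption of the record at rest. It is a hypothetical Python example; the field names, the commented-out send_to_model() call, and the choice of the cryptography package's Fernet recipe are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: data minimisation plus encryption at rest.
# Assumes the `cryptography` package is installed; fields and the
# send_to_model() call are hypothetical.
import json
from cryptography.fernet import Fernet

ALLOWED_FIELDS = {"query_text", "preferred_language"}  # the minimum the model needs

def minimise(record: dict) -> dict:
    """Keep only allow-listed fields; everything else never leaves the system."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

user_record = {
    "query_text": "Summarise my last invoice",
    "preferred_language": "en-AU",
    "full_name": "Jane Citizen",       # dropped: not needed for the task
    "tax_file_number": "123 456 789",  # dropped: sensitive identifier
}

payload = minimise(user_record)
assert "tax_file_number" not in payload
# send_to_model(payload)  # hypothetical model call receives only minimal data

# Encrypt the full record before storing it, so data at rest stays protected.
key = Fernet.generate_key()  # in practice, load from a key management service
vault = Fernet(key)
stored = vault.encrypt(json.dumps(user_record).encode())
restored = json.loads(vault.decrypt(stored))
assert restored == user_record
```

The design point is that minimisation happens before any external call, so sensitive fields never leave the organisation's systems, while keys for encrypted data at rest would come from a managed key service rather than being generated inline.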

What does this mean for you and your business?

As consumers and businesses alike navigate the increasingly complex world of AI, careful consideration must be given to several key areas to ensure readiness for an AI-engaged future:

  • Legal and Regulatory Compliance: As data privacy regulations like GDPR, CCPA, and others become more stringent, businesses must ensure their AI systems comply with these laws. Businesses must implement robust data protection practices and regularly audit their AI systems to ensure they’re aligned with current laws and regulations.
  • Balancing Innovation with Ethics: Businesses must find a balance between pushing the boundaries of AI innovation and ensuring ethical use of personal data. The design and deployment of AI systems should account for fairness, transparency, and accountability.
  • Increased Control over Personal Data: AI's reliance on vast amounts of personal data puts consumers at the center of privacy concerns. However, stronger privacy regulations, like the GDPR and emerging data privacy laws, empower consumers to have more control over their information.

How can InLogic help keep your AI tools secure?

Ethical Framework for AI Development

We abide by a set of ethical principles and guidelines that govern AI development. They address fairness, transparency, accountability, privacy, and non-discrimination. We make decisions based on these principles to ensure that the tools we create align with societal and business values.

Data Privacy First

We can integrate retrieval-augmented generation (RAG) to give your employees more effective access to, and comprehension of, your internal knowledge management system, surfacing timely, relevant information without extensive manual searching while keeping proprietary data within your own systems.
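
As a rough illustration of the pattern, the sketch below retrieves the most relevant internal document for a query and grounds the model's prompt in it. It is deliberately simplified: the bag-of-words retriever stands in for a real embedding model, the sample documents are invented, and call_llm() is a hypothetical placeholder for whichever model endpoint a deployment actually uses.

```python
# Deliberately simplified RAG sketch: a bag-of-words retriever stands in
# for a real embedding model, and call_llm() is a hypothetical placeholder.
import math
from collections import Counter

DOCUMENTS = [
    "Staff must store client records in the encrypted document vault.",
    "Leave requests are submitted through the HR portal by the 25th.",
    "All AI tool usage must be logged for the quarterly security audit.",
]

def vectorise(text: str) -> Counter:
    """Term counts as a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k internal documents most similar to the query."""
    q = vectorise(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, vectorise(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client."""
    return f"[model response grounded in]\n{prompt}"

def answer(query: str) -> str:
    """Ground the model's answer in retrieved internal context."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("Where do I store client records?"))
```

Because retrieval runs against the organisation's own document store, answers can be grounded in internal knowledge without that knowledge being handed over to train an external model.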

Foster Transparency and Explainability

We provide clear documentation and explanations of how our systems work, what data is used, and the decision-making processes behind them.
