DeepSeek has been banned from Australian government devices and systems due to national security concerns over data privacy and foreign surveillance risks.
The Australian Department of Home Affairs has directed all federal departments and public offices to identify, remove and prevent any installation or use of DeepSeek, citing cybersecurity threats and the risk of foreign government surveillance.
While private Australian citizens are still permitted to use the app, experts and government institutions alike have raised concerns about issues common to large language models, such as data privacy, confidentiality and the accuracy of the content these tools generate.
Department of Home Affairs Secretary Stephanie Foster cited an ‘unacceptable level of security risk’ to the Australian Government as the reason for the ban.
DeepSeek’s Terms of Use and Privacy Policy describe the collection and use of a vast amount of user data, as well as the right to share this data with third parties, sparking controversy about the safety of the app.
Recent developments in consumer software, and their rapid uptake, have placed data privacy and security at the forefront of many discussions. While the advent of AI-enabled software has not fundamentally changed the nature of the privacy risks users face, the scale at which AI systems can process and distribute data has challenged our control over what information is collected and how it is used.
The Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) has released a whitepaper detailing how these risks can translate into threats to users and society at large.
Stanford HAI argues that the scale at which AI systems operate means security reform must occur not at the individual level but at the collective level, giving consumers more leverage over how their information is handled.
However, with both organisations and governments still navigating a rapidly evolving AI landscape, such security reform may be far from being realised. In the meantime, safeguards must be put in place across numerous points of the AI data chain.
As consumers and businesses alike navigate the increasingly complex world of AI, careful consideration must be given to a few key areas to ensure they are ready for an AI-engaged future.
We abide by a set of ethical principles and guidelines that govern AI development. They address fairness, transparency, accountability, privacy, and non-discrimination. We make decisions based on these principles to ensure that the tools we create align with societal and business values.
We can integrate retrieval-augmented generation (RAG) to give your employees more effective access to, and comprehension of, your internal knowledge management system, enhancing productivity and decision-making by surfacing timely, relevant information without the need for extensive searching (see the sketch at the end of this section).
We provide clear documentation and explanations about how our systems work, what data is being used, and the decision-making processes behind them.
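To make the RAG pattern concrete, the minimal sketch below pairs a simple TF-IDF retriever (standing in for a production embedding model and vector store) with a placeholder model call. The sample documents, the `call_llm` stub and the `answer` helper are illustrative assumptions, not a description of any particular deployment.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: documents are short plain-text snippets from an internal
# knowledge base, and call_llm is a placeholder for whichever hosted or
# on-premises language model the organisation chooses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative internal knowledge snippets (hypothetical content).
DOCUMENTS = [
    "Leave requests must be submitted through the HR portal two weeks in advance.",
    "The VPN client must be updated to version 5.2 before connecting remotely.",
    "Quarterly expense reports are due on the first Friday after the quarter closes.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(DOCUMENTS)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query (TF-IDF cosine)."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(range(len(DOCUMENTS)), key=lambda i: scores[i], reverse=True)
    return [DOCUMENTS[i] for i in ranked[:top_k]]


def call_llm(prompt: str) -> str:
    """Placeholder for the model call; swap in your provider's client here."""
    return f"[model response to a prompt of {len(prompt)} characters]"


def answer(query: str) -> str:
    """Ground the model's answer in the retrieved snippets only."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("How do I submit a leave request?"))
```

In a production integration the retriever would typically query a vector database of embedded documents and `call_llm` would use the chosen provider's client, but the core of the pattern is the grounding step: the model is asked to answer only from the retrieved internal context.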