Fair Use — A New Beginning

SRC Innovations · Aug 4, 2023

We stand at the threshold of a new era — a world powered by AI systems. It’s a world that promises increased productivity, efficiency, safety, transformation, and personalisation. We’ve all experienced the remarkable capabilities of technologies like ChatGPT and Midjourney, but amidst the excitement lies a darker side that may be as detrimental as it is beneficial.

Current Risks

Concerns such as deep fakes, mass surveillance, discrimination, privacy breaches, accountability, and job security pose tangible risks in this environment. To navigate these dangers, it is crucial to establish safeguards against misuse and address inherent software flaws.

These are the biggest concerns around AI systems (as identified by Forbes):

  1. Lack of transparency
    The degree of openness and clarity in understanding how the AI system functions, makes decisions, and processes data. See Lack of transparency could be AI’s fatal flaw.
  2. Bias and discrimination
    The potential for AI algorithms and models to exhibit unfair or prejudiced behaviour, leading to unequal treatment or negative impacts on certain individuals or groups based on their characteristics, such as race, gender, ethnicity, age, or other protected attributes. See Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.
  3. Privacy
    The protection and control of individuals’ personal information and data. See Why artificial intelligence design must prioritise data privacy.
  4. Ethical dilemmas
    This covers issues such as the autonomous nature of AI systems, the potential impact of their decisions on individuals or society, and their handling of sensitive data. Watch the TED video on the self-driving car dilemma.
  5. Security risks
    Like any other software or technology, AI systems are susceptible to security breaches, attacks, and exploits if not adequately protected. See FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy.
  6. Concentration of power
    The accumulation and centralisation of decision-making authority, control, and influence within a small number of entities or organisations that possess advanced AI capabilities. See How to solve AI’s inequality problem.
  7. Job displacement
    The risk that automation and AI technologies will replace or eliminate certain job roles traditionally performed by humans. See AI and robots fuel new job displacement fears.
  8. Dependence on AI
    The level of reliance of individuals, organisations, or society as a whole on artificial intelligence technologies for various tasks, decision-making processes, and functions.
  9. Economic inequality
    The disparity in wealth, income, and economic opportunities that can be exacerbated or perpetuated by the adoption and deployment of artificial intelligence technologies.
  10. Legal and regulatory challenges
    The complex legal and regulatory issues that arise due to the rapid advancement and widespread adoption of artificial intelligence technologies.
  11. AI arms race
    The competition among countries, organisations, or entities to develop and deploy advanced artificial intelligence technologies for strategic, economic, or military advantage.
  12. Loss of human connection
    The potential decrease or erosion of genuine emotional or social interactions between individuals due to increased reliance on artificial intelligence and technology for communication and engagement. See My Weekend With an Emotional Support A.I. Companion.
  13. Misinformation and manipulation
    The potential for AI, particularly in the context of social media and information dissemination, to spread false or misleading information and to be exploited for nefarious purposes. See How AI will turbocharge misinformation — and what we can do about it.
  14. Unintended consequences
    The unforeseen outcomes or effects that arise from the deployment and use of artificial intelligence technologies.
  15. Existential risks
    The potential threats that advanced artificial intelligence technologies could pose to the continued existence of humanity or to the preservation of civilisation as we know it. This issue is captured in sci-fi movies like 2001: A Space Odyssey, Westworld and I, Robot.

Governments worldwide are racing to create legal frameworks that regulate AI and mitigate its potential risks. The European Union (EU) stands at the vanguard of this effort. In April 2023, it finalised a proposed framework that will undergo refinement and is expected to become law by year-end, setting the stage for responsible AI regulation.

By proactively addressing these challenges and implementing effective protections, we can ensure that the benefits of AI are harnessed responsibly, safeguarding our society and shaping a future where technology thrives hand in hand with human well-being.

The EU Proposal

The goals of the EU framework are to ensure AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly. Additionally, the EU wants AI systems to have human oversight as a safeguard against harmful outcomes.

The EU’s regulatory framework adopts a risk-based approach, categorising AI systems based on the level of risk they present to users: unacceptable risk, high risk, limited risk, and minimal or no risk. Each category is subject to corresponding regulations, with higher-risk systems facing more extensive oversight and control.
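To make the tiering concrete, here is a rough Python sketch of how the four categories might be modelled. The tier names mirror the proposal, but the example systems and the one-line obligation summaries are simplifications of the rules described in the sections below, not text from the proposal:

```python
from enum import Enum

class RiskTier(Enum):
    """The four categories in the EU proposal, encoded as an enum."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # extensive oversight and registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no additional regulation

def obligations(tier: RiskTier) -> str:
    """One-line summary of each tier's regulatory treatment (simplified)."""
    return {
        RiskTier.UNACCEPTABLE: "banned from deployment",
        RiskTier.HIGH: "conformity assessment and EU database registration",
        RiskTier.LIMITED: "must meet transparency requirements",
        RiskTier.MINIMAL: "no extra obligations beyond existing law",
    }[tier]

# Illustrative classifications only; the proposal defines tiers by use case.
examples = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "generative chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {obligations(tier)}")
```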

Unacceptable Risk

AI systems deemed to pose an unacceptable risk are those considered dangerous to individuals; they will be prohibited. Such systems include:

  1. Cognitive behavioural manipulation of individuals or vulnerable groups, like voice-activated toys that encourage unsafe conduct in children.
  2. Social scoring, involving categorising people based on behaviour, socio-economic status, or personal traits. This is intended to prevent state control of the kind seen in the Chinese city that rated aspects of residents’ behaviour.
  3. Real-time and remote biometric identification systems, such as facial recognition.

Certain exceptions may be permitted. For instance, “post” remote biometric identification systems, wherein identification occurs after a considerable delay, may be allowed for prosecuting serious crimes, but only with court approval.

High Risk

AI systems categorised as high risk are those that have an adverse impact on safety or fundamental rights. These high-risk systems are further divided into two distinct categories:

  1. AI systems within the scope of the EU’s product safety legislation. This includes products such as toys, aviation equipment, automobiles, medical devices, and elevators.
  2. AI systems falling into eight specific areas that will have to be registered in an EU database:
  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

Limited Risk

This largely refers to generative AI, like ChatGPT and Midjourney. These systems would have to comply with transparency requirements (a sketch of one possible disclosure mechanism follows the list):

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training.
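
The proposal states what must be disclosed but not how. As a purely hypothetical illustration (none of this comes from the proposal), a service might attach provenance metadata to each model response and render a disclosure notice from it; the class, field names, and model name below are invented for this sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """Hypothetical wrapper pairing model output with provenance details."""
    text: str
    model_name: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def disclosed(self) -> str:
        """Return the content with an explicit AI-generation notice appended."""
        return (
            f"{self.text}\n\n"
            f"[Generated by {self.model_name} at {self.generated_at}. "
            "This content was produced by an AI system.]"
        )

reply = GeneratedContent(text="Here is a summary of your order...",
                         model_name="example-llm-v1")
print(reply.disclosed())
```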

Minimal or No Risk

AI systems in this category face, at most, minimal transparency requirements intended to let users make informed decisions. After interacting with an application, users can choose whether to continue using it. Users must be informed when they are engaging with AI, including systems that generate or alter image, audio, or video content, such as deepfakes.

Alternative Approach

The UK government is pursuing a pro-innovation approach to legislation and regulation. Given how quickly AI systems evolve, it wants an agile, iterative approach that can react to rapid advancements in the field. Industry has praised this pragmatic and proportionate approach.

The approach is defined by five principles to promote responsible development and usage of AI systems:

  1. Safety, security and robustness
    AI systems in the UK need to have been trained and built on robust data.
  2. Appropriate transparency and explainability
    The users should be able to understand how the system operates.
  3. Fairness
    AI must not compromise the legal rights of individuals.
  4. Accountability and governance
    There should be adequate oversight and clear lines of accountability for the usage of AI systems.
  5. Contestability and redress
    It is essential to have mechanisms for seeking redress in case an AI system causes harm.

This approach is designed to allow AI system development to flourish while putting in safety guardrails for the public.

The EU and UK have both defined their objectives clearly (the “what”), but they have yet to determine the specific methods (the “how”), which presents a more significant challenge for lawmakers.

The Challenges

Governments face numerous challenges in framing an effective legal and regulatory framework, one that strikes a balance between promoting innovation and protecting citizens.

The most pressing challenges are:

  • Clear definitions and terminology
  • Adaptable regulations that can keep pace with technical advancements
  • Auditing and certifications
  • Effective enforcement
  • Defining ethical guidelines and impact assessments
  • Ensuring transparency through full disclosure of training materials and methodologies
  • Creating accountability mechanisms for handling complaints or appeals
  • Interoperability and consistency across international borders

Australia

In 2018, Australia unveiled a voluntary ethics framework. Since then, advances in AI systems have snowballed, and Australia now faces the real challenge of putting proper legislation and regulation in place.

On 1 June 2023, the Australian government published a consultation paper as a first step on this path.

Ed Husic, the industry and science minister, said “People want to think about whether or not that technology and the risks that might be presented have been thought through and responded to in a way that gives people assurance and comfort about what is going on around them. Ultimately, what we want is modern laws for modern technology, and that is what we have been working on.”

At the time of writing, the government has invited public feedback on how to mitigate the potential risks of AI and support safe and responsible AI practices (submissions closed 26 July 2023).

No official time frame has been set for enacting legislation and regulation.

AI in the Lawmakers’ Cross-hairs

Already in 2023 there have been several major incidents that have drawn the attention of global lawmakers. Here are two of the biggest stories.

ChatGPT and Privacy

In April 2023, the Italian government briefly banned ChatGPT over privacy concerns and threatened to investigate whether it complies with the General Data Protection Regulation (GDPR), which governs how personal data can be used, processed and stored.

The tipping point for the Italians (and for organisations like Samsung) was a 20 March incident in which the app suffered a data breach involving user conversations and payment information.

Other nations, Ireland and Germany among them, are watching events in Italy closely to determine if they should also ban ChatGPT on similar grounds.

The Italian Data Protection watchdog disapproved of the “mass collection and storage of personal data to train algorithms” in the platform, citing no legal basis, and expressed concerns about exposing minors to inappropriate content due to lack of age verification.

The ban has since been lifted after OpenAI (ChatGPT’s owner) introduced several privacy-related changes, including making it clearer to European users how they can delete their personal data from the chatbot.

Read more:

  • Italy orders ChatGPT blocked citing data protection concerns
  • Samsung bans use of generative AI tools like ChatGPT after April internal data leak

Stable Diffusion v Getty Images

In February 2023 Getty Images brought a lawsuit against Stability AI, accusing the company of using 12 million images without authorisation or compensation to train its AI model.

Not only do the generated images bear a strong likeness to Getty’s photos, they even reproduce the Getty Images watermark, a brazen trademark infringement.

Read more on this story: Getty Images lawsuit says Stability AI misused photos to train AI.

AI and SRC

At SRC, our Srchy product, an eCommerce product catalogue search service, uses machine learning to personalise search results based on a customer’s behaviour. Our data collection is anonymised: collected information is stripped of any identifying characteristics, making it impossible to link the data to specific individuals. This practice is common where organisations want to analyse trends and patterns without compromising individuals’ privacy. Customers receive the full benefit of AI without any risk to their privacy. A simplified sketch of this kind of anonymisation step is shown below.
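
As a minimal sketch of the general pattern (not Srchy’s actual pipeline; the field names and salted-hash approach are assumptions for illustration), anonymising a behavioural event before analysis might look like this:

```python
import hashlib

# Hypothetical set of fields treated as identifying and dropped outright.
IDENTIFYING_FIELDS = {"name", "email", "ip_address", "phone"}

def anonymise_event(event: dict, salt: str) -> dict:
    """Strip identifying fields and replace the user ID with a salted hash,
    so behavioural events can be analysed without pointing back to a person."""
    cleaned = {k: v for k, v in event.items() if k not in IDENTIFYING_FIELDS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode())
        cleaned["user_id"] = digest.hexdigest()[:16]
    return cleaned

event = {
    "user_id": "u-1042",
    "email": "shopper@example.com",
    "query": "running shoes",
    "clicked_product": "sku-8831",
}
print(anonymise_event(event, salt="rotate-this-secret"))
```

Strictly speaking, a salted hash is pseudonymisation rather than full anonymisation; dropping the identifier entirely gives a stronger guarantee, at the cost of cross-session analysis.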

Forewarned is Forearmed

Serious concerns have been raised over the risks AI poses to society and humanity.

One is a letter issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Elon Musk and Apple co-founder Steve Wozniak. The letter calls for a pause on advanced AI development until shared safety protocols are developed, implemented and audited by independent experts.

Another statement, issued by the Centre for AI Safety and signed by the likes of the heads of OpenAI and Google DeepMind, warns of the existential threat AI poses to humanity.

Could this be exaggeration driven by insincere motives? Is this just another Y2K-like hysteria? Maybe. But given the stature of the technology figures endorsing these letters, it is probably to our advantage to pay heed and temper our scepticism. What if they are right? Can the risk be ignored?

Effective legislation and regulation is a countermeasure to these risks. At the rate AI technology is evolving, governments must act soon, with an approach that is both effective and maintainable, lest all innovation be pushed offshore.

Originally published at https://blog.srcinnovations.com.au on August 4, 2023.
