Artificial Intelligence Regulation: What Laws Do Countries Apply to This Tech?


Over the past decade, the exploitation of personal data has been at the forefront of public concerns about digital technologies. However, as more and more businesses use AI, the debate is entering a new phase.

The focus is now on how data is utilized by software, in particular by complex, evolving algorithms that might diagnose cancer, drive a car, or approve a loan. Furthermore, the range of ways in which AI may be trained and deployed complicates the picture. For example, AI systems can be hosted in the cloud and accessed remotely by anyone with an internet connection.

However, the more data that companies need, the more pressing the requirement to be transparent about the following:

  • How the data will be used
  • Whether there is a legitimate basis for using it
  • Whether the people the data describes have given their consent

Here is where artificial intelligence regulations come into play.

Since 2017, 69 countries have collectively adopted more than 800 AI policy initiatives. We’ll now turn our attention to some of these and find out how they impact businesses around the globe.

Why is AI regulation needed? The risks of using AI


AI is a powerful asset, but it can also pose a threat to data privacy and security, especially as it constantly evolves and learns on its own. Let’s take a closer look at each of these risks.

Bias and discrimination

AI systems that generate biased results have frequently made the news of late. Amazon’s automated resume screener, which systematically downgraded female candidates, is one notorious example.

Apple’s credit card algorithm has likewise been accused of discriminating against women, with men receiving higher credit limits than women with equal credit qualifications. This prompted an investigation by New York’s Department of Financial Services.

Profound impact on people’s lives

A study by UC Berkeley researchers found that a risk prediction tool used in healthcare, affecting millions of people in the United States each year, had a considerable racial bias: at a given risk score, black patients were significantly sicker than white patients, as evidenced by signs of uncontrolled illnesses. Another relevant study discovered that software used by prominent hospitals to prioritize recipients of kidney transplants discriminated against black patients.

A further highly controversial example was COMPAS, a risk assessment system used in state court systems throughout the United States. ProPublica, a Pulitzer-winning nonprofit news organization, found that although the algorithm predicted recidivism for black and white defendants at roughly the same overall rate, black defendants were far more likely to be incorrectly flagged as high risk, while white defendants were more likely to be incorrectly labeled low risk.

Privacy and security issues

Perhaps the most significant challenge faced by the AI industry is reconciling AI’s need for large amounts of structured or standardized data with the human right to privacy.

A prominent example would be facial recognition technology deployed in cities and airports around the U.S. As a result of privacy concerns, numerous cities, including Oakland, San Francisco, and Brookline, have adopted bans on the technology.

The “nature vs. nurture” argument is a timeless one, especially when it comes to human bias. The answer is not simple: stakeholders in any given situation may have very different notions of what constitutes fairness, so any attempt to design it into software will be fraught. However, it is within our power to implement AI regulations that address the most problematic aspects of the technology.

Check out our portfolio of projects where we successfully implemented AI technology

How is AI regulated in various countries?


Despite the transnational nature of AI technology and the extent of the ethical concerns that surround it, there is still no unified policy approach to artificial intelligence regulation or data use. So how are governmental bodies around the world approaching the issue?

European Union

In 2021, the EU reported that 6% of the Union’s small enterprises, 13% of medium enterprises, and 28% of large enterprises used AI. The differences might be explained by, for instance, the complexity of implementing AI technologies, economies of scale, or costs.

However, we can say this for sure: the Union aims to reap the economic and societal benefits of AI technologies while remaining committed to a balanced approach.

Here is how it intends to achieve that.

The Artificial Intelligence Act (the AI Act)

The AI Act, a proposed European law on artificial intelligence, would be the first AI law enacted by a major regulator anywhere. The regulation groups AI applications into three risk categories (illustrated in the sketch after the list):

  • Applications and systems that pose an unacceptable risk, such as Chinese government-run social scoring, are banned
  • High-risk applications, such as a CV-scanning tool that evaluates job applicants, must follow strict regulatory guidelines
  • Applications that are not explicitly prohibited or labeled as high-risk are mostly unregulated
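
To make the three-tier logic concrete, here is a minimal Python sketch of how a compliance team might triage its systems. The tier names mirror the Act’s categories, but the lookup tables and the function itself are hypothetical illustrations, not the Act’s actual classification mechanism, which assigns risk based on a system’s intended purpose as enumerated in the regulation’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations apply"
    MINIMAL = "largely unregulated"

# Hypothetical lookup tables for illustration only
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"cv screening", "credit scoring", "biometric identification"}

def triage(intended_use: str) -> RiskTier:
    """Map an AI system's intended use onto the Act's three-tier scheme."""
    use = intended_use.strip().lower()
    if use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(triage("CV screening"))  # RiskTier.HIGH
```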

At the same time, the regulation is accompanied by a robust funding policy to support AI: the Digital Europe and Horizon Europe programs are expected to channel a combined €1 billion per year into AI initiatives. Furthermore, 20% of the EU’s recovery scheme funds must be dedicated to the digital transition, which includes AI projects.

The AI Act also has extraterritorial reach: any AI system whose output is used within the European Union would be subject to it, regardless of where the provider or user is located.

General Data Protection Regulation (GDPR), Article 22

The EU already has in place Article 22 of the General Data Protection Regulation (Regulation (EU) 2016/679), which broadly protects people’s privacy and data and has direct implications for AI. Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects for individuals, unless the individual has given explicit consent or another narrow exception applies, such as the decision being necessary for a contract.
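
Here is a minimal Python sketch of that decision logic, with hypothetical names throughout. A production system would also need to cover the remaining Article 22(2) exception (automation authorized by EU or member-state law) and offer human intervention on request.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionContext:
    has_legal_or_similar_effect: bool  # e.g., a loan approval or rejection
    explicit_consent: bool             # data subject consented to automation
    necessary_for_contract: bool       # an Article 22(2) exception

def route_decision(ctx: DecisionContext,
                   automated: Callable[[], str],
                   human_review: Callable[[], str]) -> str:
    """Route a decision according to the Article 22 logic described above."""
    if not ctx.has_legal_or_similar_effect:
        return automated()  # Article 22 is not triggered
    if ctx.explicit_consent or ctx.necessary_for_contract:
        return automated()  # an exception applies
    return human_review()   # otherwise a human must make the call

# Example: a legally significant decision with no consent escalates to a person
ctx = DecisionContext(True, False, False)
print(route_decision(ctx, lambda: "auto-approved", lambda: "sent to reviewer"))
```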

United States

Unlike the EU with its more comprehensive regulatory framework, the United States doesn’t yet have a comprehensive federal privacy law. Instead, regulatory guidelines have been proposed by federal agencies and by several state and local governments.

State lawmakers are, of course, weighing AI’s benefits and challenges, and a growing number of measures have been introduced to study the technology’s impact and the room policymakers have to maneuver.

National AI Initiative Act (U.S. AI Act)

The National AI Initiative Act (U.S. AI Act) was enacted in January 2021. It was established to provide “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies.”

The United States AI Act established offices and task forces to implement a national AI strategy involving various federal agencies. These include the Federal Trade Commission (FTC), the Department of Defense, the Department of Agriculture, the Department of Education, and the Department of Health and Human Services.

Meet AIRA — an AI-powered retina disease diagnosis tool developed by PixelPlex

The NIST AI Risk Management Framework (AI RMF)

The NIST AI Risk Management Framework (AI RMF) is designed to improve organizations’ ability to incorporate trustworthiness considerations into the design, development, usage, and evaluation of AI products, services, and systems.

The Framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other avenues for public feedback. It is intended to help organizations manage the enterprise and societal risks associated with the design, development, deployment, evaluation, and usage of AI systems through greater understanding, detection, and preemption of those risks.

Local Law 144 (the AI Law)

Local Law 144 is the first law in the United States to address the use of AI and other automated technology in the hiring process. It requires businesses to conduct bias audits of automated employment decision tools, including those that use artificial intelligence and related technologies, and to publish specific notices about such tools to employees and job candidates in the city.
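
At the heart of such an audit is the impact ratio: the selection rate for each demographic category divided by the rate of the most-selected category. Below is a simplified Python sketch with made-up data. Note that Local Law 144 requires publishing the ratios rather than meeting a fixed threshold; the 0.8 benchmark mentioned in the comment is the EEOC’s informal “four-fifths” rule of thumb, not part of the law.

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate of each group divided by the highest group's rate."""
    totals: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {group: selected[group] / totals[group] for group in totals}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Toy data: 60% of men vs. 40% of women advanced by the screening tool
data = ([("men", True)] * 60 + [("men", False)] * 40
        + [("women", True)] * 40 + [("women", False)] * 60)
print(impact_ratios(data))  # women's ratio ≈ 0.67, below the informal 0.8 mark
```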

New York City thereby joined Illinois, Maryland, and several other jurisdictions that have implemented AI regulations addressing hiring and promotion bias in the workplace. However, due to the volume of public comment on the issue, the New York City Department of Consumer and Worker Protection has announced that enforcement will be delayed until April 15, 2023.

The California Privacy Rights Act (CPRA)

The CPRA, which became effective on January 1, 2023, directly addresses automated decision-making. Under the Act, consumers have the right to understand (and opt out of) automated decision-making technologies, which include profiling consumers based on their “work performance, economic status, health, personal preferences, interests, reliability, behavior, location or movements.”

Canada

Canada is investing heavily in AI: as of August 2020, $1 billion in contributions had been awarded across the country.

AI regulations entered a new era following the Canadian Government’s announcement of a digital charter as part of a bigger revamp of the country’s data privacy landscape. Part three of Bill C-27, the Digital Charter Implementation Act, 2022, seeks to create the Artificial Intelligence and Data Act (AIDA), which would be Canada’s first AI legislation. We’ll now take a closer look at it.

Artificial Intelligence and Data Act (AIDA)

In general, AIDA and the EU AI Act are both focused on limiting the risks of bias and harm caused by AI while attempting to strike a balance with the need to encourage technical innovation. Both AIDA and the EU AI Act define “artificial intelligence” in a technology-neutral manner, so as to be “future-proof” and to keep up with breakthroughs in AI.

AIDA takes a more principles-based approach. In contrast, the EU AI Act is more prescriptive in categorizing “high-risk” AI systems and harmful AI practices and in limiting their development and deployment.

Under the EU AI Act, systems posing low or no risk are largely exempt from regulation, apart from transparency requirements. AIDA, for its part, places requirements only on “high-impact” AI systems and does not explicitly prohibit systems that pose an unacceptable level of risk. Most of AIDA’s substance and specifics are being left to future regulations, including the definition of the “high-impact” systems to which most of its requirements attach.

Need help building a custom machine learning model? Turn to our ML consulting and development services

United Kingdom

The UK is already home to a thriving AI sector, with research suggesting that more than 1.3 million UK businesses will use artificial intelligence and invest over £200 billion in the technology by 2040.

At the same time, when it comes to AI regulations, there is more to do to address the complex challenges that emerging technologies present. In its National AI Strategy, published in 2021, the government committed to developing a pro-innovation national position on governing and regulating AI.

Instead of delegating responsibility for AI governance to a single regulatory body, as the EU is doing through its AI Act, the UK government’s proposals allow different regulators to take a tailored, sector-specific approach to the use of AI, with the aim of boosting productivity and growth.

The core principles require developers and users to:

  • Ensure that AI is used in a safe way
  • Ensure that AI is technically secure and performs as intended
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legally liable person to be responsible for AI
  • Clarify routes to redress or contestability

What should businesses that use AI do?


Companies that use AI should:

  • Develop policies and procedures across the organization to create a compliance-by-design program that promotes AI innovation and ensures systems’ transparency and explainability
  • Audit and review AI usage regularly
  • Document these processes, for example with an audit log like the sketch below, to satisfy regulators who may demand further information
  • Avoid any kind of bias in their systems even if AI-related regulations are still not in place
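
As a concrete illustration of the audit and documentation items, here is a minimal Python sketch of a per-decision audit log. All names (the model version, input fields, and file path) are hypothetical; a real program would add access controls, retention policies, and secure storage.

```python
import datetime
import hashlib
import json

def log_decision(model_version: str, inputs: dict, output: str,
                 log_path: str = "ai_audit.log") -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Store a hash of the inputs so the log itself holds no personal data
        "input_fingerprint": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-1.4.2", {"income": 52000, "tenure": 3}, "approved")
```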

The underlying purpose of enacted and pending AI regulations is to keep AI accountable, transparent and fair. Nevertheless, two aspects of AI decision-making make monitoring particularly difficult:

  • Users can manipulate data inputs and observe outputs, but they frequently cannot explain how, or on the basis of which data points, the system reaches a conclusion
  • The system adapts frequently, with its decision processes evolving as it learns (one common monitoring technique is sketched below)
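
One common way to keep watch on that second problem is to monitor the model’s output distribution for drift. The sketch below uses the population stability index (PSI), a standard industry metric rather than anything mandated by the regulations discussed here; all data is synthetic.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a model's baseline and current score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    current = np.clip(current, edges[0], edges[-1])  # keep scores in range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) and 0-division
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
scores_at_launch = rng.normal(0.50, 0.10, 10_000)
scores_today = rng.normal(0.55, 0.12, 10_000)  # the system has adapted
psi = population_stability_index(scores_at_launch, scores_today)
print(round(psi, 3))  # a common rule of thumb flags PSI above 0.25 as drift
```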

Overlaps, inconsistencies, and gaps in regulators’ current approaches can make it harder for organizations to remain compliant. However, the message worldwide is clear: AI regulations are here to stay, and the checklist above is the place to start.

Find out how artificial intelligence is used in the insurance industry and what benefits it brings

Closing thoughts

As AI becomes embedded in every aspect of our lives, you’ll want your business to be operating from a position of strength. And as you navigate the treacherous waters of AI regulations, you have no time to lose on costly micromanagement.

PixelPlex, your firm ally on this journey, continuously fosters and promotes cutting-edge technology that makes AI accessible in more diverse and inclusive ways. As a leading AI development company, we deliver cross-industry AI solutions, and our developers will build whatever you need within your business domain.

To help you enhance client satisfaction and achieve a better bottom line, our AI development team devises smart systems that generate interest-based offers and handle 24/7 customer support. So think ahead and reach out!


Anastasiya Haritonova

Technical Writer
