Here’s how a ‘laissez-faire’ approach to AI regulation in the U.S. could affect development

Charlie Meek

For better or for worse, the White House’s new set of principles for artificial intelligence will set the tone for its development in the United States, experts say.

The proposed principles, which were released earlier this week, will be used to regulate the future development of artificial intelligence in the U.S.

The principles encourage agencies to promote “fairness, non-discrimination, openness, transparency, safety, and security” in all AI development.

The new principles are specifically limited to how federal agencies devise new AI regulations for the private sector — the rules won’t affect how federal agencies like law enforcement use facial recognition and other forms of AI.

“Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” the principles read. “Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”

The government’s attempt to “avoid overreach,” as the memo puts it, that could impede the technology’s expansion is heavily influenced by the need to stay competitive with authoritarian rivals like China and Russia, which have released bold AI strategies of their own.

The Trump administration wants the U.S. to become a global leader in the AI industry, so much so that last year, President Donald Trump signed an executive order establishing the “American A.I. Initiative,” meant to encourage AI research and help build an AI-competent U.S. workforce.

There’s a 60-day public comment period before the rules take effect, but critics have already begun arguing that AI requires robust regulation to ensure the safety and privacy of citizens.

Early reactions from experts suggested the White House was taking a “laissez-faire” stance on AI development.

Arran Stewart, CVO and co-founder of a rewards-based AI recruitment platform, told Global News that whether or not developers are required to comply with the principles, which are not the same as laws, is “slightly wooly.”

“There’s nothing really enforced, or guidelines or actual spectrum that can test whether or not artificial intelligence built by anybody as being compliant with the principles that they’ve laid out,” said Stewart.

Stewart, who has worked in the industry for over a decade, said testing AI compliance in a field like cybersecurity would be difficult unless enough people complained to warrant a full investigation.

“AI plays such a large role in so many things that we do day-to-day,” he said. “When you deal with a computer system, you just kind of accept what it tells you.”

With that in mind, Stewart said AI’s pervasiveness reinforces the United States’ case for looser regulation.

“They’re very aware that the global race for success and economic power is now underpinned by technology,” he said. “If they restrict themselves now, they may not continue to have maybe, an economic supremacy over everybody else.”

AI development, said Stewart, is highly dependent on having the freedom to push the envelope and create without unnecessary restrictions. According to Stewart, a “laissez-faire” approach to AI could be exactly what the industry needs.

“Any form of regulation makes business harder, whether it’s in artificial intelligence or the financial banking industry. As soon as you make things less strict, and less so the current of success can only flow one way, you inspire potentially creativity, innovation, the ability to have flexibility on the way that you do things.”

But the principles are vague and unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are tasked with taking on human roles in high-risk social settings, like mortgage lending or job recruitment.

AI tools are built by human beings, who are not known for being inherently objective.

In 2018, Amazon scrapped its AI recruiting tool when the company realized its technology harboured biases against women.

Reuters first reported Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

As a result, Amazon’s system taught itself that male candidates were preferable. Points were deducted for resumes that included the word “women’s,” as in “women’s debate team captain,” and graduates of all-women’s colleges were downgraded.
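The mechanism described above can be illustrated with a toy sketch. The data and scoring scheme below are entirely invented for illustration; this is not Amazon’s actual system, only a minimal example of how a model trained on a male-dominated pool of past hires can learn a negative weight for a word like “women’s.”

```python
from collections import Counter

# Hypothetical historical data: resumes of past hires, dominated by men.
hired = [
    "chess club captain software engineer",
    "software engineer debate team",
    "software engineer chess club",
    "women's debate team captain software engineer",
]
rejected = [
    "women's chess club captain software engineer",
    "women's debate team software engineer",
]

def term_weights(pos, neg):
    """Naive weight: how much more often a term appears (per resume)
    in the hired pool than in the rejected pool."""
    pos_counts = Counter(w for doc in pos for w in doc.split())
    neg_counts = Counter(w for doc in neg for w in doc.split())
    vocab = set(pos_counts) | set(neg_counts)
    # Normalize by pool size so the weights compare frequencies, not raw counts.
    return {w: pos_counts[w] / len(pos) - neg_counts[w] / len(neg) for w in vocab}

weights = term_weights(hired, rejected)

def score(resume):
    # Sum the learned weights of every word in the resume.
    return sum(weights.get(w, 0) for w in resume.split())

# "women's" appears mostly in the smaller, rejected pool, so it gets a
# negative weight -- penalizing any resume that contains it.
print(weights["women's"])  # negative
print(score("women's debate team captain software engineer"))
```

Nothing in the training signal mentions gender explicitly; the penalty emerges purely because the word correlates with the under-represented group in the historical data, which is exactly the failure mode reported in the Amazon case.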

“AI systems would have an unprecedented ability to discriminate against historically marginalized communities, people of colour, women and non-binary individuals,” said Albert Fox Cahn, an attorney who leads the Surveillance Technology Oversight Project.

“When we take a hands-off regulatory stance, it gives industry a free rein to implement programs that are having a horrible discriminatory impact.”

Cahn emphasized that AI systems affect our everyday lives, and most of us don’t even realize it.

AI technology can decide whether someone gets hired for a job, whether someone is approved for a mortgage, whether they get the apartment they apply for and even whether they go to jail or go free.

While the principles are more like guidelines for future policy, Cahn said they “are setting the tone for dozens of different federal agencies regulating every area of life.” AI’s development, he said, needs to be regulated to ensure it doesn’t unintentionally promote sexism and racism.

“As we see these systems get implemented in everything from transportation to the medical sector, AI will have life-and-death consequences for the American people,” said Cahn.

Cahn said the guidelines speak positively about AI development without discussing any of the potential risks. Most concerning, he added, is that a hands-off approach promoting self-regulation will pressure agencies to hold back from stopping forms of AI development that could hurt people.

“The reason we have regulations of all of these industries is because, left on its own, the industry will typically rush to develop products that aren’t in the best interests of the public that have unintended consequences,” said Cahn.

“They make really problematic decisions. And so I think effective regulation is necessary to get effective, equitable and beneficial AI systems.”

— With files from Reuters and The Associated Press.

Read the full article at Global News