Regulating AI: 3 experts explain why it’s difficult to do and important to get right
#fakenews #misinformation #hoaxes #ChatGPT #GenerativeAI #airegulation
Slides from #gikii23 on #AIregulation vs #InternetLaw 'Party like it's 1999' https://www.slideshare.net/EXCCELessex/gikii23-marsden-260695123
#gikii23 #airegulation #internetlaw
#AI #AIRegulation #Startups #BigTech #GenerativeAI: "Big AI — a term that’s long overdue for adoption — has been actively guiding potential AI policies. Last month, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon signed an agreement with the White House promising to invest in responsible AI and develop watermarking features to flag AI-generated content. Soon after, OpenAI, Microsoft, Anthropic, and Google formed the Frontier Model Forum, an industry coalition targeted to “promote the safe and responsible use of frontier AI systems.” It was set up to advance AI research, find best practices, and share information with policymakers and the rest of the AI ecosystem.
But these companies only account for one slice of the generative AI market. OpenAI, Google, Anthropic, and Meta all run what are called foundation models, AI frameworks that can either be language-based or image-focused. On top of these models, there’s a booming sector of far smaller businesses building apps and other tools. They face many of the same forms of scrutiny, but as AI rules are being developed, they worry they’ll have little say in the results and, unlike Big AI, with large war chests it can tap in the event of noncompliance fines, cannot afford disruptions in business."
https://www.theverge.com/2023/8/8/23820423/ai-startups-regulation-big-tech
#ai #airegulation #startups #bigtech #generativeAI
#AI #GenerativeAI #USA #AIRegulation: "All of the experts I spoke with agreed that the tech companies themselves shouldn’t be able to declare their own products safe. Otherwise, there is a substantial risk of “audit washing”—in which a dangerous product gains legitimacy from a meaningless stamp of approval, Ellen Goodman, a law professor at Rutgers, told me. Although numerous proposals currently call for after-the-fact audits, others have called for safety assessments to start much earlier. The potentially high-stakes applications of AI mean that these companies should “have to prove their products are not harmful before they can release them into the marketplace,” Safiya Noble, an internet-studies scholar at UCLA, told me.
Clear benchmarks and licenses are also crucial: A government standard would not be effective if watered down, and a hodgepodge of safety labels would breed confusion to the point of being illegible, similar to the differences among free-range, cage-free, and pasture-raised eggs."
#ai #generativeAI #usa #airegulation
#USA #AI #AIRegulation #GenerativeAI: "Indeed, the US government’s record to date on AI has mostly involved vague calls for “continued United States leadership in artificial intelligence research and development” or “adoption of artificial intelligence technologies in the Federal Government,” which is fine, but not exactly concrete policy.
That said, we probably are going to see more specific action soon given the unprecedented degree of public attention and number of congressional hearings devoted to AI. AI companies themselves are actively working on self-regulation in the hope of setting the tone for regulation by others. That — plus the sheer importance of an emerging technology like AI — makes it worth digging a little deeper into what action in DC might involve.
You can break most of the ideas circulating into one of four rough categories:
Rules: New regulations and laws for individuals and companies training AI models, building or selling chips used for AI training, and/or using AI models in their business
Institutions: New government agencies or international organizations that can implement and enforce these new regulations and laws
Money: Additional funding for research, either to expand AI capabilities or to ensure safety
People: Expanded high-skilled immigration and increased education funding to build out a workforce that can build and control AI"
https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable
#usa #ai #airegulation #generativeAI
#Canada #AI #AIRegulation #AIDA: "AI conversations have the characteristics of a hype cycle, which is one reason why we should slow down how we approach the matter from a policy and regulatory perspective. Unfortunately, Canada’s Ministry of Innovation, Science, and Economic Development (ISED) is operating in urgency mode. ISED has a mandate to establish Canada as a world leader in AI, and, apparently, to accelerate AI’s use and uptake across all sectors of our society.
The confidence with which ISED is asserting societal consensus on AI’s uptake is troubling. Very few of us have had a chance to think about if and how we do and don’t want AI to become installed in our society and culture, our relationships, our workplaces, and our democracy.
Though lacking any type of informed public demand for it, ISED has created a draft bill called the Artificial Intelligence and Data Act (AIDA), which, as part of Bill C-27, is making its way to the Standing Committee on Industry and Technology (INDU) in a few months, on the heels of a successful second reading in the House of Commons.
AIDA is an AI law for the private sector. Canada has an existing policy directive for the use of AI in the public sector, called the Directive on Automated Decision-Making, but this is notably a policy rather than law."
https://monitormag.ca/articles/were-in-an-ai-hype-cycle-can-canada-make-it-a-responsible-one/
#canada #ai #airegulation #aida
OpenAI, Google will watermark AI-generated content to hinder deepfakes, misinfo - Enlarge (credit: MediaNews Group/East Bay Times via Getty Images / Cont... - https://arstechnica.com/?p=1955816 #bidenadministration #airegulation #generativeai #midjourney #deepfakes #microsoft #chatgpt #policy #amazon #dalle2 #google #openai #gpt-4 #meta #ai
#ai #meta #gpt #openai #google #DALLE2 #amazon #policy #chatgpt #microsoft #deepfakes #midjourney #generativeAI #airegulation #bidenadministration
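The watermarking the companies pledged above is usually understood to mean statistical watermarking of generated text. One published approach ("green list" watermarking, in the style of Kirchenbauer et al.) biases the model toward a pseudorandom half of the vocabulary at each step, so a detector can flag the bias later without access to the model. A minimal illustrative sketch of the detection side follows; the function names and hash scheme here are assumptions for illustration, not how OpenAI or Google actually implement their pledge:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    A watermarking generator biases sampling toward this 'green' subset;
    a detector that knows the hash scheme can recompute the same subset
    without any access to the model.
    """
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Share of tokens that fall in the green list seeded by their predecessor.

    Unwatermarked text hovers near the green-list fraction (0.5 here);
    a significantly higher share suggests the text was watermarked.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab)
    )
    return hits / (len(tokens) - 1)
```

In practice the detector computes a z-score over the green-token count rather than a raw fraction, and the scheme only works for text the model generated with the bias applied; it says nothing about image watermarks like those planned for DALL-E or Midjourney outputs.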
[en] Schneier: Automatically enforce laws, sue ... at mass scale?
"Imagine a future in which AIs automatically interpret - and enforce - laws"
"Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law."
"... This system would present an unprecedented threat to freedom. ..."
https://www.schneier.com/blog/archives/2023/07/ai-and-microdirectives.html
#microdirective #freedom #lawenforcement #airegulation #ai #llm #gpt4 #artificialintelligence #chatgpt #MediaHighlights
[en] White House announcement: AI companies pledge to mitigate the risks of the emerging technology
"... most influential companies building artificial intelligence have agreed to a voluntary pledge to mitigate the risks of the emerging technology ..."
"The companies ... vowed to allow independent security experts to test their systems before they are released to the public ..."
https://www.washingtonpost.com/technology/2023/07/21/ai-white-house-pledge-openai-google-meta/
#airegulation #ai #llm #gpt4 #artificialintelligence #chatgpt
#EU #Asia #AI #AIRegulation #AIAct: "The European Union is lobbying Asian countries to follow its lead on artificial intelligence in adopting new rules for tech firms that include disclosure of copyrighted and AI-generated content, according to senior officials from the EU and Asia.
The EU and its member states have dispatched officials for talks on governing the use of AI with at least 10 Asian countries including India, Japan, South Korea, Singapore and the Philippines, they said.
The bloc aims for its proposed AI Act to become a global benchmark on the booming technology the way its data protection laws have helped shape global privacy standards.
However, the effort to convince Asian governments of the need for stringent new rules is being met with a lukewarm reception, seven people close to the discussions told Reuters.
Many countries favour a "wait and see" approach or are leaning towards a more flexible regulatory regime.
The officials asked not to be named as the discussions, whose extent has not been previously reported, remained confidential." https://www.reuters.com/technology/eus-ai-lobbying-blitz-gets-lukewarm-response-asia-officials-2023-07-17/
#eu #Asia #ai #airegulation #AIAct
#AI #AIRegulation #HumanRights: "We know that AI has the potential to be enormously beneficial to humanity. It could improve strategic foresight and forecasting, democratize access to knowledge, turbocharge scientific progress, and increase capacity for processing vast amounts of information.
But in order to harness this potential, we need to ensure that the benefits outweigh the risks, and we need limits.
When we speak of limits, what we are really talking about is regulation.
To be effective, to be humane, to put people at the heart of the development of new technologies, any solution – any regulation – must be grounded in respect for human rights."
#ai #airegulation #humanrights
SEC Chair Advocates Increased AI Use for Market Surveillance and Enforcement - U.S. Securities and Exchange Commission Chairman Gary Gensler says the SEC could b... - https://news.bitcoin.com/sec-chair-advocates-increased-ai-use-for-market-surveillance-and-enforcement/ #artificialintelligence #secchairgarygenslerai #aifinancialstability #airegulation #regulation #secchair #secai #sec #ai
#ai #sec #secai #secchair #regulation #airegulation #aifinancialstability #secchairgarygenslerai #artificialintelligence
#China #AI #AIRegulation: "China published measures on Thursday to manage its booming generative artificial intelligence (AI) industry, softening its tone from an earlier draft, and said regulators would seek to support development of the technology.
The rules, set to take effect on Aug. 15 and which Beijing described as "interim", come after authorities signalled the end of their years-long crackdown on the tech industry, whose help they seek to spur an economy recovering more slowly than expected after the scrapping of COVID-19 curbs.
Analysts said they were far less onerous than measures outlined in an April draft, and that the final rules also took care to stress that China wanted to be supportive of the technology while at the same time ensure security."
New interview with #euronewschool Professor @philipphacker and Sue Hendrickson of the Berkman Klein Center for Internet & Society at Harvard University, recorded at last week's #AIforGood Global Summit in Geneva, which gathered 2,500 experts, entrepreneurs, policy-makers and UN officials to discuss #AIregulation⬇️
https://genevasolutions.news/science-tech/un-touts-initiative-to-regulate-ai-at-tech-summit-in-geneva
#euronewschool #AIforGood #airegulation
#AI #GenerativeAI #LLMs #Chatbots #AIRegulation #BigTech #Oligopolies: "Fourth, will the regulation reinforce existing power dynamics and oligopolies? When Big Tech asks to be regulated, we must ask if those regulations might effectively cement Big Tech’s own power. For example, we’ve seen multiple proposals that would allow regulators to review and license AI models, programs, and services. Government licensing is the kind of burden that big players can easily meet; smaller competitors and nonprofits, not so much. Indeed, it could be prohibitive for independent open-source developers. We should not assume that the people who built us this world can fix the problems they helped create; if we want AI models that don’t replicate existing social and political biases, we need to make enough space for new players to build them."
https://www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype
#ai #generativeAI #LLMs #Chatbots #airegulation #bigtech #oligopolies
#AI #AIRegulation #EU #USA: "As I’ve said before, there are many good aspects to the fact that we’re having this discussion. It’s miles away from the traditional way that new technology grows, in which no one thinks about regulatory questions until much later. But that doesn’t mean that all regulation is magically good regulation. There are so many ways this could go wrong — including in writing legislation that just locks in the big players and stifles the smaller, more innovative ones.
Unfortunately, from what I’ve seen so far, I have little faith that the regulation is going to play out in a manner that is helpful, and fear that it will be actively harmful to developing AI tools in a useful and helpful manner."
"Mustafa Suleyman, co-founder of #DeepMind, calls for accountability in #AI. As AI's influence grows, so does the need for robust regulation. Let's ensure AI serves us, not the other way around. Stay informed with [AI for Dinosaurs](https://aifordinosaurs.substack.com) #AICompliance #AIRegulation"
#deepmind #AI #aicompliance #airegulation
For anyone looking to read up on AI risk assessment and how the government is being advised to respond, NIST has you covered: multiple meetings, free and publicly available online
Here's the first meeting
https://www.nist.gov/video/nist-conversations-ai-generative-ai-part-one-full-discussion
#ai #airegulation #nist #policy #government
#UK #AI #GenerativeAI #AIRegulation: "The public want regulation of AI technologies, though this differs by age.
The majority of people in Britain support regulation of AI. When asked what would make them more comfortable with AI, 62% said they would like to see laws and regulations guiding the use of AI technologies. In line with our findings showing concerns around accountability, 59% said that they would like clear procedures in place for appealing to a human against an AI decision.
When asked about who should be responsible for ensuring that AI is used safely, people most commonly choose an independent regulator, with 41% in favour. Support for this differs somewhat by age, with 18–24-year-olds most likely to say companies developing AI should be responsible for ensuring it is used safely (43% in favour), while only 17% of people aged over 55 support this.
People say it is important for them to understand how AI decisions are made, even if making a system explainable reduces its accuracy. For example, a complex system may be more accurate, but may therefore be more difficult to explain. When considering whether explainability is more or less important than accuracy, the most common response is that humans, not computers, should make ultimate decisions and be able to explain them (selected by 31%). This sentiment is expressed most strongly by people aged 45 and over. Younger adults (18–44) are more likely to say that an explanation should only be given in some circumstances, even if that reduces accuracy.
Taken together, this research makes an important contribution to what we know about public attitudes to AI and provides a detailed picture of the ways in which the British public perceive issues surrounding the many diverse applications of AI. We hope that the research will be useful in helping researchers..."
https://www.adalovelaceinstitute.org/report/public-attitudes-ai/
#uk #ai #generativeAI #airegulation
Tech (Global News): Cloning laws could be a model for AI boundaries, Champagne says https://globalnews.ca/news/9799093/ai-laws-canada-francois-champagne-cloning/ #globalnews #TechNews #Technology #ArtificialIntelligence #digitalchartercanada #FrancoisChampagne #digitalcharter #AIRegulation #Collision #BillC-27 #cloning #Canada #AILaws #Tech #AI
#globalnews #technews #technology #artificialintelligence #digitalchartercanada #francoischampagne #digitalcharter #airegulation #collision #billc27 #cloning #Canada #ailaws #Tech #ai