The Oxford summit focused on generative AI regulation

Other speakers included Michael Kratsios, who served as the U.S. chief technology officer during the Trump administration; Michael Bronstein, DeepMind Professor of AI at the University of Oxford; Dame Wendy Hall DBE, Professor of Computer Science and Member of the United Nations High-Level Advisory Panel on Artificial Intelligence; and Baroness Joanna Shields, OBE, who served as Britain's cyber security minister under David Cameron and Theresa May. Executives from Google, TikTok, OpenAI and other technology firms were also present.

GenAI explained

As a reminder, generative AI (or GenAI) is artificial intelligence that can create “original” content, including text, images, video, audio and software code, in response to a prompt or query entered by a human. It has been around for a couple of years but has gained traction lately because major players like OpenAI, Google, Microsoft and Meta are pouring huge resources into GenAI development. I put “original” in quotes because although the AI model generates the content, it is based on training data gathered online and from other sources. So although the wording is original, the knowledge comes from many other places. Of course, this also applies to content created by humans, but reputable journalists and scientists normally cite their sources, which is not necessarily the case with AI systems.
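To make that point concrete, here is a toy sketch (my own illustration, not how any production system works): a bigram model that "generates" text by only ever emitting word pairs it saw during training. Real GenAI systems use large neural networks trained on vastly bigger corpora, but the principle — new-looking output recombined from training data — is the same in miniature.

```python
import random
from collections import defaultdict

# Tiny training "corpus"; real models train on billions of documents.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram table: for each word, every word that followed it in training.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """Produce text by repeatedly sampling a word seen after the current one."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: word never appeared mid-sentence in training
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The sentence it prints may never appear verbatim in the corpus, yet every adjacent word pair does — the "original" output is a recombination of its training data, which is the article's point writ small.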

Regulation is vital

My panel focused on AI regulation. I was joined by Markus Anderljung from the Center for the Governance of AI, Rafaela Nicolazzi from OpenAI, Joslyn Barnhart, senior research scientist at Google DeepMind, and moderator Keegan McBride from the Oxford Internet Institute.

My panel and other speakers agreed almost unanimously that regulation of AI is inevitable and vital. Most people seemed to agree with my comment that regulation needs to be targeted and nuanced to prevent negative consequences without stifling the potential benefits of generative AI, which is still in its infancy, at least as far as mainstream products are concerned. It should focus on actual harms and be flexible enough to accommodate inevitable technological changes. As we have seen over the past few decades, the technology industry evolves faster than governments. Therefore, it is crucial that governments provide general guidelines without attempting to regulate the technology in every detail.

There is a risk that jurisdictions will adopt contradictory laws

Some speakers worried aloud about the balkanization of AI regulation, as several countries and U.S. states are considering or adopting laws that sometimes conflict with regulations in other jurisdictions.

In an interview at the conference, Linda Lurie, who worked in the Biden White House Office of Science and Technology Policy and now works at Westexec Advisors, told me: “What will happen is that every company with a presence will have to follow the strictest regulations, which is somehow unfair and undemocratic.” She argued that many jurisdictions already have laws in place that can protect against misuse of AI. “We don’t need to put an AI stamp on every other law in a country. Make sure you know what is currently on the books to identify the gaps and do this at a harmonized level. This includes contributions from both governments, but also from companies and civil society. Only then will you be able to achieve real regulation that is effective and doesn’t kill AI.”

Risks

Numerous people expressed concern that giant firms, based largely in the United States, are dominating generative AI in a way that could exclude other countries, particularly in Africa, Latin America and other regions where the economy and technical infrastructure are not as developed as in the U.S., Great Britain and large parts of Europe.

The risks lie not only in these regions being excluded from any economic and social benefits of GenAI, but also in the biases that can be built into AI models, particularly those based on web data drawn largely from wealthier countries and dominant groups within those countries. Don't just take my word for it. ChatGPT itself admits: “Countries with less internet infrastructure or lower rates of digital content creation (e.g. in media, academia, or user-generated platforms) contribute less to the training datasets of AI models.” I suppose I was glad to see that even a bot can be self-critical when confronted with the question of its own potential bias.

Optimism

Most speakers expressed cautious optimism. A British politician spoke about how generative AI will help level the playing field not only for adults but also for young people. When I asked if she feared large firms would dominate generative AI because of their power over social media, search and other parts of the web, she expressed hope that regulations could prevent that. I hope she's right, but I'm not convinced.

Although many participants and speakers expressed concerns about negative consequences, including employment disruption, bias, misinformation, deepfakes, privacy and security issues, lack of accountability, and intellectual property disputes, nearly everyone agreed that generative AI offers tremendous benefits to humanity and potential for economic growth.

Oxford Ph.D. student Nathan Davies, who moderated the event's panel discussions, said: “It's rare that policymakers, academics and business people come together in one place.”

Although there were the expected disagreements, I was left with a strong sense of hope around some shared values, which is impressive considering conference attendees ranged from Donald Trump's former CTO to current Labour MPs.

After the conference, I walked around the campus, which is more than 1,000 years old. I'm sure its founders had no idea about artificial intelligence, but they helped lay the foundation for the advancement of human intelligence that led us to this place.

image credit : www.mercurynews.com