
THE UNITED NATIONS 2045 ROUNDTABLE - A DISTINGUISHED CITY TO HONOR THE UNITED NATIONS' FIRST CENTURY

Co-organizers: The United Nations Academic Impact and Boston Global Forum
8:30-10:00 EDT / 19:30-21:00 ICT, March 17, 2021

Moderator: Mr. Ramu Damodaran, Chief of the United Nations Academic Impact and Editor-in-Chief of the United Nations Chronicle magazine.

Speakers/Panelists:
Bui Thanh Nhon, chairman of Nova Group, Vietnam
Kamal Malhotra, the United Nations Resident Coordinator in Vietnam
Michael Dukakis, three-term governor of Massachusetts
Le Tuan Phong, governor of Binh Thuan province, Viet Nam
Thomas Patterson, Harvard University
Alex Pentland, MIT
John Quelch, University of Miami

AIWS City

AIWS City is a digital virtual city founded on the principles stated in the “Social Contract for the AI Age,” “People Centered Economy,” “Trustworthy Economy,” “Intellectual Society,” and “AI-Government.”
AIWS City was introduced on August 21, 2020 at the United Nations 2045 Roundtable, co-organized by the United Nations Academic Impact and the Boston Global Forum.

The members of the AIWS City Board of Leaders are: Governor Michael Dukakis, Chairman of the Boston Global Forum; Nguyen Anh Tuan, CEO of the Boston Global Forum; Professor Alex Pentland of MIT; Vint Cerf, Chief Internet Evangelist of Google; Vaira Vike-Freiberga, former President of Latvia and of the Club de Madrid; Zlatko Lagumdzija, former Prime Minister of Bosnia and Herzegovina; Professor Nazli Choucri of MIT; Professor David Silbersweig of Harvard University; Professor Thomas Patterson of Harvard University; and Marc Rotenberg, Director of the Center for AI and Digital Policy at the Michael Dukakis Institute.

AIWS City includes distinguished world leaders, inventors, and innovators, as well as faculty from universities such as Harvard, MIT, Stanford, Princeton, Yale, Columbia, UC-Berkeley, Carnegie Mellon, Oxford, and Cambridge.


Location – NovaWorld Phan Thiet

Although virtual in concept, AIWS City will have a physical location – Phan Thiet, Vietnam. The city is known for its white sand beaches, temperate climate, and proximity to an international airport and port of call. Less than 100 miles from Ho Chi Minh City, Phan Thiet was historically home to the Champa people and their enlightened culture, remnants of which still exist in the area.

Phan Thiet is emerging as a worldwide destination. At Phan Thiet, Novaland Group is creating a “World Beach City” for vacationers that will also be an international hub for world leaders, creators, innovators, and scholars.

AIWS City will bring to NovaWorld Phan Thiet a rich set of activities designed to highlight intellectual and creative talent and progress. Together, AIWS City and NovaWorld Phan Thiet will serve as a model for sustainable development and high standards, embodying the ideals that marked the founding of the United Nations and that will sustain it as it moves toward its centennial year.

GLOBAL AI POLICY NEWS (MARCH 2021)

With this issue of the CAIDP Update, we provide a survey of recent AI policy news around the globe. More AI policy news from CAIDP is available here.
 
The Court of Justice of the European Union heard legal arguments about the use of AI techniques in the EU-funded iBorderCtrl project. MEP Patrick Breyer filed a transparency lawsuit seeking access to documents about the project. Breyer called the pilot project, with its lie-detecting avatars that quiz travelers at the borders, “pseudo-scientific security hocus pocus.” At issue in the case is whether research funded by the EU must comply with EU fundamental rights. (“Orwellian AI lie detector project challenged in EU court: Transparency suit highlights questions of ethics and efficacy attached to the bloc's flagship R&D program,” Feb. 5, 2021)
 
According to Reuters, Japanese companies are ramping up the use of artificial intelligence and other advanced technology to reduce waste and cut costs during the pandemic, while looking to score some sustainability points along the way. (“Japanese companies go high-tech in the battle against food waste,” Feb. 28, 2021)
 
UNESCO has launched an Artificial Intelligence Needs Assessment Survey in Africa. The survey highlights the need to strengthen policy, legal and regulatory knowledge for AI governance in Africa. The survey notes that as AI policies are developed across Africa, countries will benefit from greater coordination and expertise to address similar and shared challenges. (UNESCO, March 4, 2021)
 
Digital Privacy News reports that facial-recognition technology is now one of the fastest-growing and most widely dispersed technologies in the world. But nowhere has high-resolution facial recognition become more prevalent than inside China. According to experts in the global technology-surveillance industry, China has approximately 170 million closed-circuit cameras around the nation, including at 200 airports. (“Mainland Chinese Fear Growing Use of Face Recognition,” Feb. 24, 2021)
 
The European Commission has launched a consultation on “improving the working conditions in platform work” and algorithmic management. The inquiry noted that the Digital Services Act called for transparency in “algorithms and recommender systems used by online platforms.” (European Commission, Feb. 24, 2021)
 
In Kazakhstan President Kassym-Jomart Tokayev said that the pandemic has accelerated digitalization with 90% of public services now switched to e-format. President Tokayev also described a new educational initiative based on AI. He spoke at the international forum Digital Almaty 2021, which has become a key platform for discussing the digital policy agenda. (“Kazakh President addresses 4th edition of Digital Almaty int’l forum,” Feb. 5, 2021)
 
TechCrunch reports that Sweden’s data protection agency has fined the local police authority approximately $300,000 for unlawful use of the controversial facial recognition software Clearview AI. The police will be required to educate staff and prevent any future processing of personal data in breach of data protection rules and regulations. (“Sweden’s data watchdog slaps police for unlawful use of Clearview AI,” Feb. 12, 2021)
 
In Mexico, Dr. Ricardo Monreal Ávila has introduced legislation to regulate social media. The proposal aims to ensure human review of automated decisions by AI systems that could limit access to the Internet, such as the permanent cancellation of user accounts. Senator Monreal is encouraging public comment on his proposal.
 
In the United States, the Chinese tech firm ByteDance, the operator of TikTok, has agreed to pay $92 million to settle a class action privacy lawsuit. The lawsuit charged, among several other claims, that the company provided user data to the Chinese government to assist in meeting two “crucial and intertwined state objectives: (a) world dominance in artificial intelligence and (b) population surveillance and control.” Also at issue in the case was TikTok’s use of AI techniques for facial recognition.
 
Uzbekistan is accelerating the introduction of artificial intelligence technologies. According to the Trend News Agency, an Institute for the Development of Artificial Intelligence will be created in Uzbekistan after President Shavkat Mirziyoyev signed a decree on measures to create conditions for the accelerated adoption of AI technologies.
 
Marc Rotenberg, Director
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.

THIS WEEK IN THE HISTORY OF AI AT AIWS.NET - MARVIN MINSKY WAS QUOTED IN LIFE MAGAZINE, "IN FROM THREE TO EIGHT YEARS WE WILL HAVE A MACHINE WITH THE GENERAL INTELLIGENCE OF AN AVERAGE HUMAN BEING"

This week in The History of AI at AIWS.net - in 1970 Marvin Minsky was quoted in Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”

Marvin Minsky was interviewed in 1970 by Life journalist Brad Darrach, who was writing an article on Shakey the Robot, an early mobile robot built at the Stanford Research Institute. Minsky made this bold claim and added that “If we’re lucky, they might decide to keep us as pets.” The latter statement was meant as a warning not to let intelligent computers control vital systems.

Marvin Minsky was an important pioneer in the field of AI. He co-authored the research proposal for the Dartmouth workshop, which coined the term “Artificial Intelligence,” and took part when it was held the following summer. Minsky also co-founded the MIT AI Lab, which went through several names over the years, and the MIT Media Laboratory. In popular culture, he served as an adviser on Stanley Kubrick’s acclaimed film 2001: A Space Odyssey. He received the Turing Award in 1969 for his influence on AI.

Marvin Minsky’s quote and this interview highlight the popularity and mainstream attention AI received in the 1960s and 1970s. Minsky was one of the most important figures, if not the most important, in the development of Artificial Intelligence, and his writings and statements from the rise of AI in the 1960s are still debated to this day.

MICHAEL DUKAKIS INSTITUTE ON POLITICO ABOUT ARTIFICIAL INTELLIGENCE

Politico published the article "China wants to dominate AI. The U.S. and Europe need each other to tame it" on March 2, 2021, with a cameo from MDI and CAIDP:

“Strategically, both the U.S. and the EU are concerned about China, so they need a tech policy that acknowledges a very aggressive position that China has taken in AI,” said Marc Rotenberg, director of the Center on AI and Digital Policy at the Michael Dukakis Institute, a technology and leadership think tank in Boston.

Rep. Robin Kelly (D-Ill.), who has championed a U.S. national strategy on AI, asked her European counterparts during testimony to be “narrow and flexible” while pushing ahead with their “desire to be the first to write regulations.” The U.S. and Europe need to stand together as China seeks to write the global playbook, she added.

“Nations that do not share our commitment to democratic values are racing to be the leaders in AI and set the rules for the world,” Kelly said. “We cannot allow this to happen."

GOVERNOR MICHAEL DUKAKIS AND CO-FOUNDERS OF BOSTON GLOBAL FORUM WILL SPEAK AT "A DISTINGUISHED CITY TO HONOR THE UNITED NATIONS' FIRST CENTURY"



Speakers:
Kamal Malhotra, UN Resident Coordinator in Vietnam: “The United Nations and the Symbolic Importance of NovaWorld Phan Thiet and AIWS City.”

Governor Michael Dukakis, Co-founder and Chair of the Boston Global Forum (BGF): “Imagining the City of the Future.”

Governor Le Tuan Phong, Binh Thuan, Viet Nam: “Responsible Development of Binh Thuan Province and Phan Thiet City.”

Professor Thomas Patterson, Co-founder of the BGF: “AIWS City as Concept and Reality.”

Professor Alex Pentland, Member of the AIWS City Board of Leaders: “Application of New Economic and Financial Ideas to AIWS City.”

Chairman Bui Thanh Nhon, Nova Group: “NovaWorld Phan Thiet as a World Model.”

Professor John Quelch, Co-founder of the Boston Global Forum: “AIWS City Supports NovaWorld Phan Thiet in Becoming a Global Brand.”

PRIME MINISTER ZLATKO LAGUMDZIJA TEACHES AIIA TO AIWS LEADERSHIP MASTER DEGREE STUDENTS

On March 6, 2021, students of the AIWS Leadership Master Degree Program at Saint Petersburg Electrotechnical University (ETU “LETI”) were taught by Professor Zlatko Lagumdzija, former Prime Minister of Bosnia and Herzegovina, on the topic “Building an International Accord on Artificial Intelligence.” He introduced the Social Contract for the AI Age as the foundation of the Framework for the AI International Accord.

The students’ homework is to find solutions to persuade leaders and governments to reach consensus on the Framework for the AI International Accord.

The lecture was hosted by AIWS University at AIWS City.
 
Link: https://www.youtube.com/watch?v=1zwpCO0WkU0

WHO SHOULD STOP UNETHICAL AI?

In computer science, the main outlets for peer-reviewed research are not journals but conferences, where accepted papers are presented in the form of talks or posters. In June, 2019, at a large artificial-intelligence conference in Long Beach, California, called Computer Vision and Pattern Recognition, I stopped to look at a poster for a project called Speech2Face. Using machine learning, researchers had developed an algorithm that generated images of faces from recordings of speech. A neat idea, I thought, but one with unimpressive results: at best, the faces matched the speakers’ sex, age, and ethnicity—attributes that a casual listener might guess. That December, I saw a similar poster at another large A.I. conference, Neural Information Processing Systems (Neurips), in Vancouver, Canada.

Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the “possible long-range effects of applying knowledge gained in the research,” lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.

A few years ago, a number of A.I.-research organizations began to develop systems for addressing ethical impact. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (sigchi) is, by virtue of its focus, already committed to thinking about the role that technology plays in people’s lives; in 2016, it launched a small working group that grew into a research-ethics committee. The committee offers to review papers submitted to sigchi conferences, at the request of program chairs. In 2019, it received ten inquiries, mostly addressing research methods: How much should crowd-workers be paid? Is it O.K. to use data sets that are released when Web sites are hacked? By the next year, though, it was hearing from researchers with broader concerns. “Increasingly, we do see, especially in the A.I. space, more and more questions of, Should this kind of research even be a thing?” Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, told me.

Shilton explained that questions about possible impacts tend to fall into one of four categories. First, she said, “there are the kinds of A.I. that could easily be weaponized against populations”—facial recognition, location tracking, surveillance, and so on. Second, there are technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, there is automated-weapons research. And fourth, there are tools “to create alternate sets of reality”—fake news, voices, or images.

To support AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure ethical values and to help people achieve well-being and happiness, as well as address important issues such as the SDGs. On AI ethics, AIWS.net initiated and promoted the design of an AIWS Ethics framework with four components for the constructive use of AI: transparency, regulation, promotion, and implementation. In this effort, MDI invites participation and collaboration with think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.

Copyright © 2021 Boston Global Forum, All rights reserved.

