

At 9:00 EST/15:00 CET/17:00 MSK on Saturday, March 6, 2021, Prime Minister Zlatko Lagumdzija, co-author of the Social Contract for the AI Age and Member of the History of AI Board, will teach in the AIWS Leadership Master's Degree Program. This program is part of the cooperation between Saint Petersburg Electrotechnical University ETU “LETI” and the AIWS University of the Michael Dukakis Institute for Leadership and Innovation. He will speak about building an AI International Accord and the challenges and obstacles involved. Students will attend to learn the process of building the framework of an AI International Accord and to practice how to persuade governments to accept the framework.

Professor Dr. Zlatko Lagumdžija was Prime Minister, Acting Prime Minister, twice Deputy Prime Minister, twice Minister of Foreign Affairs, a Member of Parliament, and the leader of the largest multi-ethnic political party in Bosnia and Herzegovina between 1992 and 2015. He is a member of the Club de Madrid-The World Leadership Alliance and the World Academy of Art and Science, and the founder of the Shared Societies and Values Foundation Sarajevo. Since 1989, Dr. Lagumdžija has been professor of Management and Information Technologies at the University of Sarajevo, as well as a visiting professor at universities in Europe, Asia, and America. He is a member of numerous international boards and missions and Ambassador for Dialogue among Cultures and Civilizations of ISESCO.


The Center for AI and Digital Policy (CAIDP) at the Michael Dukakis Institute (MDI) has provided detailed recommendations for the National Commission on AI. The recommendations follow from the CAIDP report Artificial Intelligence and Democratic Values, a comprehensive review of AI policies and practices in 30 countries.

The NSCAI is scheduled to release its final recommendations for Congress on Monday, March 1, 2021. CAIDP Director Marc Rotenberg and Michael Dukakis Institute CEO Tuan Nguyen wrote, "We believe it is vitally important for the United States to pursue a policy for artificial intelligence that reflects democratic values."
The CAIDP Statement to the NSCAI noted favorably the US support for the OECD/G20 AI Principles, the Presidential Executive Orders on AI, and legislation in Congress to establish a national AI strategy that addresses concerns about bias and fairness. But the CAIDP Statement raised concerns about the “opaque policy process” in the US, the reluctance of the Commission to conduct open meetings, and the absence of a data protection agency in the United States.

Regarding the report of the NSCAI, the CAIDP acknowledged “the substantial work of the Commission over a two-year period on this complex and important issue.” CAIDP also supported the International Digital Democracy Initiative. However, the CAIDP raised several concerns. “Although we appreciate the brief opportunity to comment on the draft of the final report, there was too little input from the general public in the work of the Commission and too few opportunities for formal comment. The US Commission on AI did not even assess whether the US had taken steps to implement the OECD AI Principles or the G20 AI Guidelines, formal international commitments that the United States has already made.”

“We are also concerned by the decision of the Commission not to support a global prohibition of AI-enabled and autonomous weapon systems. . . . our recent review of country policies strongly indicates support among democratic nations for limits on these systems.”

CAIDP made several recommendations for the final NSCAI report:
- implement the OECD AI Principles
- establish a process for meaningful public participation in the development of national AI policy
- establish an independent agency for AI oversight
- establish a right to algorithmic transparency
- support the Universal Guidelines for AI
- support the Social Contract for the AI Age
- support an International Accord for AI
- reconsider the opposition to a ban on lethal autonomous weapons
The NSCAI event will be cybercast on Monday, March 1, 2021 at 12:00 EST. Registration is open to the public. Comments on the NSCAI report may be sent here.

Marc Rotenberg, Director
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.


This week in The History of AI: Arthur Samuel popularises the term “machine learning” in 1959 in his article “Some Studies in Machine Learning Using the Game of Checkers”.

Arthur L. Samuel was an American computer scientist and a pioneer in the fields of computer gaming and artificial intelligence. Born in 1901, he studied at the College of Emporia for his bachelor's degree and at MIT for his master's. His Samuel Checkers-playing Program was one of the first successful self-learning programs and was the basis for the article that coined “machine learning”. He was notable for his work at IBM and Stanford. Samuel won the Computer Pioneer Award in 1987 for his contributions to computer science and AI. He passed away in 1990.

“Machine learning” is the study of computers that can self-improve through time and experience. It is often considered part of the development of artificial intelligence. Although the field existed before 1959, this program, along with developments such as the Dartmouth Conference, helped it become more active. A subset of machine learning, deep learning, has also been gaining traction lately.

The article that Samuel wrote on this program and “machine learning” can be read and downloaded here.

This is important to the history of AI in that it popularises the term “machine learning”, which is an important aspect of artificial intelligence. Arthur Samuel is also one of the pioneers in computer science and AI.


Now, leaders of the Boston Global Forum (BGF) and the AIWS Innovation Network are nominating candidates for the World Leader in AI World Society Award 2021. The first recipient was Angel Gurria, Secretary-General of the OECD, for 2018; the second recipient was Vint Cerf, father of the Internet, for 2019; and the third and most recent recipient was Professor Judea Pearl of UCLA for 2020.

The World Leader in AI World Society Award 2021 will be presented at the Quad Roundtable on AI International Accord in late April 2021.

Winners of the World Leader in AI World Society Award become members of the board of leaders of the network of more than 100,000 professors, scholars, innovators, and experts from top universities across the world.


The NSCAI is scheduled to release its final recommendations for Congress on Monday, March 1, 2021. The CEO of the Michael Dukakis Institute (MDI), Nguyen Anh Tuan, together with the Director of the Center for AI and Digital Policy (CAIDP), Marc Rotenberg, has provided detailed recommendations for the National Commission on AI.

"We believe it is vitally important for the United States to pursue a policy for artificial intelligence that reflects democratic values."

“We also support the proposal of the Commission to bring together democratic nations in support of the International Digital Democracy Initiative (IDDI). We believe it is vitally important for democratic governments to collaborate on AI policies and practices. And we appreciate the recognition that data minimization techniques are fully compatible with AI innovation, a point that has also been made by Professor Judea Pearl, one of the honorees of the Michael Dukakis Institute.”
The recommendation calls for:

  • support for the Social Contract for the AI Age
  • support for an International Accord for AI
  • reconsidering the opposition to a ban on lethal autonomous weapons.
The full recommendations and comments can be read here.


As in previous years, the Michael Dukakis Institute sponsors the AI World Executive Summit on July 14, 2021. The AI World Society City (AIWS City) of the Michael Dukakis Institute will be a strategic alliance partner of this event.

This year’s AI World Executive Summit: The Future of AI will help keep people ahead of the curve and focus on how the best and brightest enterprises are truly innovating and achieving high-performance results from AI.

There is a special session at the AI World Executive Summit: “AIWS City, a city for 100 years of the United Nations”.

Speakers: Vint Cerf, father of the Internet; Governor Michael Dukakis; Mr. Ramu Damodaran, Chief of the United Nations Academic Impact, Editor in Chief of United Nations Chronicle Magazine; Professor Thomas Patterson, Harvard Kennedy School; Professor John Quelch, Harvard Business School; and Mr. Bui Thanh Nhon, Chairman of Novaland.

Read more about the Summit here.


These days, every business is a software business. As companies try to keep up with the rush to create new software, push updates, and test code along the way, many are realizing that they don’t have the manpower to keep pace, and that new developers can be hard to find. But many don’t realize that it’s possible to do more with the staff they have by making use of new advances in AI and automation. AI can be used to address bugs and help write code, but its greatest time-saving opportunity may be in unit testing, in which each unit of code is checked — tedious, time-consuming work. Using automation here can free up developers to do other (more profitable) work, but it can also allow companies to test more expansively and thoroughly than they would have before, addressing millions of lines of code — including legacy systems that have been built on — that may have been overlooked.
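To make the per-unit checks concrete, here is a minimal sketch of a unit test in Python's standard `unittest` framework. The function `normalize_whitespace` and its tests are hypothetical, not from any tool named in this article; they only illustrate the small, repetitive checks that automated test generation aims to take off developers' hands.

```python
import unittest

# Hypothetical function under test: a tiny "unit" of the kind that
# automated testing tools exercise at scale.
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

class TestNormalizeWhitespace(unittest.TestCase):
    def test_collapses_internal_runs(self):
        self.assertEqual(normalize_whitespace("a   b\t c"), "a b c")

    def test_trims_ends(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

    def test_empty_string(self):
        self.assertEqual(normalize_whitespace(""), "")

if __name__ == "__main__":
    unittest.main()
```

Writing dozens of such cases per function is exactly the tedious work the article describes; automated tools can generate and run them across a codebase, including legacy modules that were never covered.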

Not all of the software development workflow can be automated, but gradual improvements in technology have made it possible to automate increasingly significant tasks: Twenty years ago, a developer at Sun Microsystems created an automated system — eventually named Jenkins — that removed many of the bottlenecks in the continuous integration and continuous delivery software pipeline. Three years ago, Facebook rolled out a tool called Getafix, which learns from engineers’ past code repairs to recommend bug fixes. Ultimately these advances — which save developers significant time — will limit failures and downtime and ensure reliability and resilience, which can directly impact revenue.

But as AI speeds up the creation of software, the amount of code that needs to be tested is piling up faster than developers can effectively keep up with. Luckily, automation — and new automated tools — can help with this, too.

Automation is coming to all parts of the software development process, some sooner than others — as AI systems become increasingly powerful, the options for automation will only grow. OpenAI’s massive language model, GPT-3, can already be used to translate natural human language into web page designs and may eventually be used to automate coding tasks. But eventually, large portions of the software construction, delivery, and maintenance supply chain are going to be handled by machines. AI will, in time, automate the writing of application software altogether.

In support of positive AI development for society, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF) have established the Artificial Intelligence World Society Innovation Network. In this effort, MDI and BGF invite participation and collaboration from governments, think tanks, universities, non-profits, firms, and other entities that share their commitment to the constructive development of full-scale AI for world society. This initiative aims to develop positive AI that helps people achieve well-being and happiness, relieves them of resource constraints and arbitrary or inflexible rules and processes, and solves important issues such as the SDGs.

Copyright © 2021 Boston Global Forum, All rights reserved.
