The French MP and Fields Medal winner Cédric Villani formally interviewed Constance Bommelaer de Leusse, the Internet Society's Senior Director of Global Internet Policy, last Monday as part of a hearing on national strategies for the future of artificial intelligence (AI). The Internet Society was also asked to submit written comments, which are reprinted here.
* * *
“Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful [...] Once in use, successful AI systems were simply considered valuable automatic helpers.”
Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence
AI is not new, nor is it magic. It’s about algorithms.
“Intelligent” technology is already everywhere—such as spam filters or systems used by banks to monitor unusual activity and detect fraud—and it has been for some time. What is new and creating a lot of interest from governments stems from recent successes in a subfield of AI known as “machine learning,” which has spurred the rapid deployment of AI into new fields and applications. It is the result of a potent mix of data availability, increased computer power and algorithmic innovation that, if well harnessed, could double economic growth rates by 2035.
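To make the point that this is "about algorithms" concrete, here is a minimal, purely illustrative sketch of the kind of spam filter mentioned above: a small statistical model fitted to a handful of invented example messages. The messages, labels and library choice are our own assumptions for illustration, not anything specified in the consultation.

# A minimal, illustrative sketch of the kind of "intelligent" system
# mentioned above: a naive Bayes spam filter. The messages and labels
# below are invented examples, not real data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",          # spam
    "Claim your cash reward",        # spam
    "Meeting moved to 3pm",          # legitimate
    "Can we have lunch tomorrow?",   # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# The "intelligence" is statistics, not magic: word counts feed a
# probabilistic classifier that learns which words tend to appear in spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize now"]))  # expected: ['spam']

The same pattern, fitting a model to data and letting it make predictions, underlies the fraud-detection and filtering systems banks and email providers have run for years; the recent change is the scale of data and computing power available to such models.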
So, governments’ reflection on what good policies should look like in this field is both relevant and timely. It’s also healthy for policymakers to organise a multistakeholder dialogue and empower their citizens to think critically about the future of AI and its impact on their professional and personal lives. In this regard, we welcome the French consultation.
Our recommendations
I had a chance to explain the principles the Internet Society believes should be at the heart of AI norms, whether driven by industry or governments.
You can read more about how these principles translate into tangible recommendations here.
The hearing organised by the French government also showed that the debate around AI is currently too narrow. So, we'd like to propose a few additional lenses to frame the debate about the future of AI in a more helpful way.
Think holistically, because AI is everywhere
Current dialogues around AI usually focus on applications and services that are visible and interact with our physical world, such as robots, self-driving cars and voice assistants. However, as our work on the Future of the Internet describes, the algorithms that structure our online experience are everywhere. The future of AI is not just about robots, but also about the algorithms that sort and prioritise the overwhelming amount of information in the digital world. These algorithms are intrinsic to the services we use in our everyday lives and a critical driver of the benefits the Internet can offer.
The same algorithms are also part of systems that collect and structure information, shaping how we perceive reality and make decisions in much subtler and more surprising ways. They influence what we consume, what we read, our privacy, and even how we behave or vote. In effect, they place AI everywhere.
Look at AI through the Internet access lens
Another flaw in today's AI conversation is that much of it focuses solely on security implications and how they could affect users' trust in the Internet. As shown in our report on the future of the Internet, AI will also influence how you access the Internet in the very near future.
The growing size and importance of “AI-based” services, such as voice-controlled smart assistants for the home, mean they are likely to become a main entry point to many of our online experiences. This could create or exacerbate the challenges we already see, including on mobile platforms, in terms of local content and access to platform-specific ecosystems for new applications and services.
Furthermore, major platforms are rapidly organising themselves, leveraging AI and the Internet of Things (IoT) to move into traditional industries. Hardly any aspect of our lives will remain outside these platforms, from home automation and car infotainment to health care and heavy industries.
In the future, these AI platforms may become monopolistic walled gardens if we don’t think today about conditions to maintain competition and reasonable access to data.
Create an open and smart AI environment
To be successful and human-centric, AI also needs to be inclusive. This means creating inclusive ecosystems, leveraging interdependencies between universities and businesses so that research fuels innovation, and having governments provide access to high-quality, non-sensitive public data. Germany sets a good example: its well-established multistakeholder AI ecosystem includes the German Research Center for Artificial Intelligence (DFKI), a multistakeholder partnership considered a blueprint for top-level research. Industry and civil society sit on the DFKI's board to ensure research remains application- and business-oriented.
Inclusiveness also means access to funding. There are many ways for governments to be useful here, such as funding areas of research that are important to long-term innovation.
Finally, creating a smart AI environment is about good, open and inclusive governance. Governments need to provide a regulatory framework that safeguards responsible AI, while supporting the capabilities of AI-based innovation. The benefits of AI will be highly dependent on the public’s trust in the new technology, and governments have an important role in working with all stakeholders to empower users and promote its safe deployment.
Learn more about Artificial Intelligence and explore the interactive 2017 Global Internet Report: Paths to Our Digital Future.
Take action! Send your comments on AI to Mission Villani and help shape the future.