
By Mei Lin Fung, Cochair & Cofounder, People-Centered Internet

The sudden rise of Artificial Intelligence worries many people, for plenty of reasons. Some fret about truly existential challenges, such as the possibility that AIs might develop consciousness and even turn on their human creators. My concern is more here and now: I worry that, once again, business leaders are rushing to show they are cutting edge by deploying a technology they barely understand.

I've seen this movie before. As one of the early pioneers of Customer Relationship Management 30 years ago, I have closely tracked CRM and subsequent rollouts of innovative digital technologies, rollouts that have all too often been done in ways with harmful consequences. I was one of those who watched in horror in 2016 when the CEO of Wells Fargo was confronted in Congress over the "cross-selling scandal." The bank paid billions in fines and suffered substantial reputational damage that endures to this day. Much of that scandal was enabled by the disastrous use of the CRM technology I helped invent at Oracle.

Right now, the consensus among the bosses of business, especially in Silicon Valley and other tech centers, is that AI's long-awaited moment has arrived. But too often, when new tech gets installed before the people in charge really understand it, they flail about trying to figure out exactly how it will improve their company's operations, sales or service. More important for the rest of society, they also don't know how to think ahead about the risks that innovations will be used for evil, illegal or harmful purposes.

I have too often heard CEOs make jargon-laden endorsements of new technology to signal they are on the cutting edge. Leaders then found themselves on the "bleeding edge" with the crash of the dot-com boom in 2000 and the Great Recession in 2008. I have seen first-hand the damaging effects on cashflow in companies up through the Fortune 500, and the subsequent layoffs, from betting on technology with high hopes and insufficient concern for unforeseen consequences.

Right now, bosses feel tremendous pressure from shareholders and their peers to have a cool, future-forward AI strategy. These are fertile conditions for needless and thoughtless technology adoption, with potentially large negative consequences for employees, customers and shareholders alike.

CEOs and political leaders can constructively engage with customers and citizens, and with tech companies, to find clear positive uses for AI with truly concrete benefits. Indeed, it may be helpful to think of AI less as artificial intelligence than as augmented human intelligence.

Rather than getting carried away by the seemingly unlimited, almost mystical, yet all too often imprecise transformational power of AI, leaders should focus on identifying specific ways in which it can improve things for humans.

Too often such technologies are deployed "top-down," with disastrous or unfortunate consequences. So another piece of advice: take practical steps to balance that tendency by engaging stakeholders in the AI rollout from the beginning, bottom-up. More fundamentally, those deploying AI must learn from how previous phases of the digital revolution went wrong in crucial ways.

Next year the Internet turns 50. In many respects, it has brought huge benefits to the world, especially in democratizing connectivity and access to knowledge. Yet, especially in the decade since its 40th birthday, the way it has evolved has had terribly destructive side effects for our societies. These range from severe mental health effects (especially for teenage girls) to the pernicious spread of misinformation and the consequential social polarization that is now undermining trust in important institutions, especially in democracies.

A decade ago, with Vint Cerf, one of the original fathers of the Internet, I co-founded an organization called the People-Centered Internet. It aims to address these downsides and ensure we achieve the people-centered vision that was central to the non-commercial Internet at its origin, when it spread organically from university to university and from country to country, animated by a central, intrinsic presumption.

Our mission at PCI has been to work to deliver an Internet that works for the people and with the people, not against them and without them. The rise of AI makes this ever more urgent: the remarkable power AI has to process and learn carries the potential to make the downsides far worse. Vint and I have now partnered with Jascha Stein, an expert in AI and psychology, to expand PCI's mission beyond the Internet to a people-centered AI.

Done right, in an inclusive, people-centered, energy-efficient way, the strengths of AI and other digital technology can help us reverse the widening digital divide and enable a thriving society and a flourishing planet. We are not pessimistic about AI's power, only about how it is overseen and managed: PCI served as the chair of Digital Regulation for the UN General Assembly Science Summit in 2023 and will be co-chair in 2024.

One priority should be to ensure equality of access to AI. Under the next-generation leadership of 40-year-old Jascha Stein, PCI and its partners are launching a global campaign promoting the importance of people's participation, entitled "Without You, the Future of the Internet and AI Will Be Lost." Greater digital equity can be achieved by designing applications that usefully augment our social and human intelligence, such as population and precision health and learning. AI can affordably help the whole world become healthier and better educated. But we need to ensure these tools are available all over the world via mobile phones, and not just the newest, smartest ones. If AI can be deployed in ways that demonstrate clearly how it benefits humanity, that will increase trust both in this incredible technology and in the businesses that deploy it well.

Multilingual access to services is a great example of how AI can expand both services and markets. Forward-looking companies are engaging their employees and customers in fine-tuning context-sensitive language translation. In the process, they gain greater insight into customer intentions and needs.

I recently visited Bangladesh, which introduced a critical and much-needed concept to the United Nations General Assembly in September this year. Bangladesh is harnessing digital tech to set a clear path to becoming a middle-income country. To reverse the decline of trust at the highest levels, the divide between digital haves and have-nots must be bridged, and Bangladesh is showing us a pathway to do it.

For instance, Google, working with Bangladesh's a2i program, developed an AI flood-forecasting initiative called FloodHub. It tracks how rivers ebb and flow, as well as tide anomalies, and can give local authorities early warnings. The system has already enabled up to 40 million people to take prompt action for collective evacuation. It also aids the protection of water resources.

Second, society at large must be deliberately engaged in the debates and discussion about how to deploy AI, and in providing feedback on how it is rolled out. The giant platform companies that have come to dominate the Internet, and are well placed to dominate AI, talk often about their cultures of data-centric experimentation, with in-house processes for running thousands of parallel experiments daily. By sharing those approaches through public and private data cooperatives, their processes, tests and procedures could make a huge positive difference in how we design, implement and adapt AI and other technology so that it serves people and planet.

The rise of AI makes even clearer the need for greater transparency of use and broader stakeholder governance of data and experimentation, giving a meaningful say to users and the broader community, not just to providers. At the People-Centered Internet, we call these strategies "community learning and living labs."

Models of such labs exist in other parts of the economy and could be adapted to democratize the rollout of AI and ensure it is more people-centered. In the U.S., for example, there are Federally Qualified Health Centers in 10,000 locations. These centers work together in Breakthrough Collaboratives to improve the quality of community health. In the European Union, leaders are working to engage public participation in understanding and meeting the challenges of online disinformation, with tools for content verification and for empowering people to become active creators of trustworthy information.

Such community learning and living labs require the enthusiastic participation of the businesses that are developing and deploying AI. All those innovative startups and hard-charging Fortune 500 companies require digital public infrastructure in order to do their business. Engaging in such community-centric initiatives would be one way of paying back the favor. Tech companies often say they are serious about stakeholder capitalism. This is a way to show they mean it. Any other approach would simply continue the old profit-maximizing, shareholder-centric model that has caused so many problems until now.

Advances in digital public infrastructure (DPI) in the wake of Covid-19 add up to one of the biggest business opportunities in generations. It is fueled by an ongoing surge of investment in digital transformation by the nations of the G7 and G20 and supported by substantial lending in emerging economies by the World Bank, the IMF, other multilateral development banks and the United Nations Development Program. The opportunities highlighted at the AI+DPI Summit in Bangladesh included: India's Unified Payments Interface, which facilitates 12 billion transactions monthly; Indonesia's digital identity system, which has reduced registration time at 6,000 financial institutions from 60 minutes to five; Uganda's Accessible Digital Textbook, developed with UNICEF, which has helped hundreds of children with disabilities graduate from primary school; and India's Open Network for eCommerce, which expanded to 230 cities and added 36,000 merchants in its first year.

If we manage this right and deploy it alongside systems of public participation and stakeholder input, such spending will enable the world to avoid potentially costly mistakes. It will help generate trust among the public that in the long-run AI will be a force for good. And what better year to launch this new approach to governance than 2024, as we celebrate the Internet鈥檚 50th birthday?


Trust, Populism and the Psychology of Broken Contracts

Eric Beinhocker, Professor of Public Policy Practice, University of Oxford