Mapping the Digital Economy in 2020
From mobile Internet to artificial intelligence, blockchain to big data, digital technologies have the potential to bring about dramatic improvements in human wellbeing. But they also pose serious risks to communities and individuals in their roles as consumers, workers, and citizens. Reaping the digital revolution’s benefits, and avoiding its pitfalls, will require us to manage an unprecedented structural transformation for which the world is woefully unprepared.
Given the transformational effects of digitization, it may seem prudent to think through the risks before allowing new technologies to take hold. But, with digital technologies proliferating at an unprecedented rate, that may not be an option. Automobiles existed for 62 years before reaching 50 million users, and electricity took 46 years to reach that level of penetration. But mobile phones took just 12 years, and the Internet seven. The augmented-reality mobile game Pokémon GO had 50 million users after 19 days.
This is partly because, unlike industrialization, digitization is sweeping across the planet practically simultaneously; more than 60% of people in low-income countries already own a mobile phone. And, unlike the advanced economies, developing countries are adopting mobile Internet at the same time as they acquire smartphones, computers, and even electricity.
Some influential figures, such as Bill Gates, have advocated measures aimed at slowing the spread of automation, to give countries more time to adapt. But in today’s globalized world, falling behind technologically carries major costs. That is why we need a comprehensive overview of the digital revolution’s effects on individual and social welfare as soon as possible.
Measuring these effects will not be easy. Technology’s impact on productivity and growth is often elusive, and standard economic accounting does not register quality improvements with negligible incremental costs or non-material dimensions of wellbeing, such as health. But, given the speed and scale of digital-technology penetration, devising the right metrics is essential to enable governments and societies to guide the digitization process in ways that maximize the benefits and minimize the risks.
THE COMMON GOOD
One area where the potential of digitization is particularly promising is the pursuit of sustainable and inclusive growth. With their low threshold for adoption, non-rival nature, and low information costs, digital technologies are intrinsically inclusive. The most active users of digital technology globally are not necessarily those with higher socioeconomic status.
But there is always the risk that technology – or power over it – can be misused. Consider financial inclusion. Already, digital technologies are making some financial services more accessible, affordable, and sustainable. In many countries, mobile payments are rapidly replacing cash and plastic cards, and data are becoming the “new loan collateral.” Blockchain could accelerate this progress, by enabling established financial institutions to lower costs, speed up transactions, improve transparency, audit operations, and much more.
Yet, as the global financial crisis starkly demonstrated, unfettered financial innovation can create serious risks, not least because innovators often exploit informational asymmetries for their own gain. To realize digital technologies’ potential to advance financial inclusion, government action is thus critical. The same is true, more broadly, when it comes to ensuring that digitization drives broadly shared prosperity, without compromising sustainability.
The first step is access. Digitization has given rise to entirely new approaches to coordination, transforming how economic actors – from consumers to corporations – behave and engage with one another. With digital platforms forming the foundations of an interconnected economic system, ensuring equitable access to them is crucial, as is defining the legal responsibilities of various actors in this new organizational context.
TECHNOLOGY FOR WORKERS
Ensuring that the digital revolution spurs broadly shared prosperity will also require action in another key area: employment. The problem is not, as many fear, that automation will eliminate jobs. There is no evidence that automation has destroyed more jobs than it has created, or that it has driven up unemployment rates in the long term. In fact, countries with higher digital-technology penetration, greater automation, or more advanced AI tend to have lower unemployment.
But, while net employment opportunities will not decline, at least not significantly, the types of jobs that are available will change. Those who are able to seize these new employment opportunities – for example, in cutting-edge industries or the “gig economy” – will benefit; those who aren’t properly equipped will suffer.
The challenge thus lies in helping automation’s “losers” to adapt to change in the labor market. This demands not only the strengthening of traditional education and skills-training programs, but also collaborative efforts among companies, universities, and governments to devise and implement more flexible and modern solutions, especially (technology-enabled) on-the-job training.
BIG DATA, LITTLE PRIVACY
Any examination of the impact of the digital revolution must focus on its implications for privacy. Given their non-rival nature, low costs, and network effects, data are fundamentally different from traditional economic inputs. On the production side, they can be volunteered, observed, or inferred, making ownership difficult to define; on the consumption side, unlike oil, they can be “burned” an unlimited number of times. And they become more valuable when they are aggregated, analyzed, and disseminated in some form. With the right arrangements, every participant becomes a producer and user of data. As information sharing has increasingly become a driver of the digital age, the right attitude is not to lock up data, but to ensure that privacy is properly protected.
Recent experience suggests that market forces may not be sufficient to ensure data security. Facebook, with its 2.4 billion users, is a prime example. Last year, it was revealed that an app had harvested more than 50 million Facebook users’ private information and shared it with the political consultancy Cambridge Analytica, which used it to aid US President Donald Trump’s 2016 election campaign. A few months later, it was reported that Facebook had given some of the world’s largest tech companies more access to its users’ personal data than it had disclosed.
Striking the right balance between privacy protection and data sharing requires a comprehensive understanding of both, as well as financial incentives, appropriate governance, and the joint efforts of both regulators and market participants. On the regulatory front, the European Union has taken the lead, implementing the General Data Protection Regulation last year. But questions about the GDPR’s approach and enforcement remain, and outside Europe, meaningful protections are lagging. In the United States, Senate Democrats recently unveiled their own online privacy bill, but its passage is far from certain.
The good news is that effective technology-based solutions are emerging. Guided by the right set of customer-centric principles, existing tools – such as trusted execution environments (TEEs), on-device processing, multi-party computation (MPC), and zero-knowledge proofs – can largely alleviate privacy concerns.
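The intuition behind one of these tools, multi-party computation, can be conveyed with a toy sketch. The snippet below (illustrative only; the values and function names are invented, and real MPC protocols must also handle malicious parties, networking, and fixed-point arithmetic) uses additive secret sharing: each sensitive value is split into random shares, so no single party ever sees an individual’s data, yet the parties can still jointly compute an aggregate.

```python
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime


def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]


def reconstruct(shares):
    """Recombine shares to recover the shared value."""
    return sum(shares) % MODULUS


# Three users each secret-share a private income figure.
incomes = [52_000, 61_500, 48_250]
all_shares = [share(x, 3) for x in incomes]

# Each "party" holds one share per user and sums only what it holds;
# an individual share is statistically independent of the true value.
party_sums = [sum(user_shares[i] for user_shares in all_shares) % MODULUS
              for i in range(3)]

# Combining the partial sums reveals only the total, not any input.
total = reconstruct(party_sums)
print(total)  # 161750 -- the aggregate, with no individual income disclosed
```

The design point this illustrates is the one made above: data can be analyzed and aggregated without any party having to “lock up” or surrender the raw records.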
UPDATING THE RULES OF THE GAME
The regulatory challenge, however, extends far beyond data privacy. Central to economic progress is “adaptive efficiency”: institutions that are productive, stable, fair, and credible, yet flexible enough to be updated in response to political and economic feedback.
Today, governance mechanisms must be updated to account for the interests of all stakeholders, which requires giving them a voice and increasing transparency. Moreover, at a time when digital platforms have become market gatekeepers, regulators may need to rethink competition policy. And with economic change outpacing broad regulatory interventions, the role of closer-to-the-pulse decentralized governance should be expanded.
Regulating Internet content is particularly tricky territory. Not so long ago, many opposed such regulation, viewing the Internet as a democratizing force. But it has become apparent that the Internet – and the information, true or not, that is disseminated through digital platforms – has powerful political effects and can even undermine social cohesion. The 2016 US presidential election, which was marked both by “fake news” and mistrust of real news, is a case in point.
But emerging frameworks for regulating data access and Internet content are inconsistent across jurisdictions, often reflecting divergences in fundamental values. Decentralized governance can go too far, threatening to lead to the Balkanization of the Internet, with serious consequences for international business and commerce.
Mitigating this risk – and, more broadly, realizing the digital economy’s potential – requires mechanisms for greater global cooperation. As it stands, most platforms for digital cooperation and regulation are local, national, or regional. But digital technologies do not respect borders in the way hardware does, and they are changing fast.
In this context, digital cooperation must be flexible, adaptable, and broadly inclusive, not least to ensure that technologies align with shared values, including fairness and respect for human rights. Special attention must be paid to lower-income countries and to disadvantaged groups within countries.
This does not mean that we should pursue one-size-fits-all solutions. The harmonization of standards boosts economic efficiency, but it must not be pursued without regard for the cultural and economic needs of different countries and regions. Here, again, thoughtful tradeoffs will be needed.
THE DIGITAL ECONOMY’S ETHICAL FOUNDATIONS
Such decisions must be guided, above all, by ethical considerations. Advances in digital technology, as in genetics or medicine, raise new questions that often challenge our values and perspectives. Who is responsible for identifying and mitigating a technology’s adverse social consequences? What happens when digital technologies are found to reinforce prejudice, advance injustice, or facilitate exploitation? How should governments and businesses promote public welfare in the information age?
Answering such questions requires a shared ethical framework – and not only for senior policymakers and corporate strategists. Values and ethical norms can influence economic and social outcomes, both by shaping the design of new technologies and by guiding user behavior. While new technologies are bound to have unintended effects, and there will always be “bad” actors, the dangers can be contained if the majority uses the technology responsibly.
This includes the application of our fast-growing, increasingly low-cost computing power to longstanding social challenges. Computers today can collect and store massive data sets, compute summary statistics, and recognize patterns automatically. These developments have transformed once-impractical statistical methods into useful tools for recognizing patterns in all types of data. With pattern recognition being the first step in any type of scientific research – physical, biological, or social – applications of new AI-enabled methods are booming.
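To make the idea of automated pattern recognition concrete, here is a deliberately minimal sketch (the data and parameters are invented for illustration, and no external libraries are assumed): a bare-bones k-means clustering pass that discovers two groups in unlabeled one-dimensional data, the kind of structure a human would otherwise have to spot by eye.

```python
import random

# Generate synthetic, unlabeled data: two hidden groups of measurements.
random.seed(0)
data = [random.gauss(2.0, 0.5) for _ in range(50)] + \
       [random.gauss(8.0, 0.5) for _ in range(50)]

# k-means with k=2: alternate between assigning points to the nearest
# center and moving each center to the mean of its assigned points.
centers = [min(data), max(data)]          # crude initialization
for _ in range(10):                       # a few refinement rounds
    clusters = [[], []]
    for x in data:
        i = min((abs(x - c), i) for i, c in enumerate(centers))[1]
        clusters[i].append(x)             # assign to nearest center
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

# The algorithm recovers the two underlying groups on its own.
print(sorted(round(c, 1) for c in centers))  # roughly [2.0, 8.0]
```

As the passage above notes, this kind of pattern-finding is only a first step; deciding what the recovered groups mean, and what to do about them, still requires analysis and interpretation.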
But, as valuable as pattern recognition is, it means little without analysis, understanding, and application. Johannes Kepler elegantly described the solar system’s patterns, but it was Isaac Newton who showed us how to understand and interpret them. Similarly, financial and business analysts and social scientists need “structural models” that enable them to interpret patterns – including predicting potential disruptions.
Technological advances have now reached the point that they may be able to provide such models, enabling us to find answers to questions that were previously beyond the scope of our knowledge. Guided by a robust ethical framework, our priorities should include devising policies that foster macroeconomic and financial stability, and building efficient and sustainable social safety nets.
THE NATIONAL SECURITY TRUMP CARD
The final area of inquiry into the digital economy that must be emphasized in 2020 is security. Digital technologies have become central to national security and defense strategies. But many of them, such as AI and quantum computing, also have important economic and social applications; indeed, they were largely developed for this purpose, and only later applied to defense. (In the past, the development of advanced technologies typically flowed in the opposite direction.)
Dual-use technologies have always generated tensions between national-security goals and other (often benign) objectives. In the digital age, however, the scope of the overlap is unprecedented. If governments maintain a single-minded focus on national security, their economies will suffer, and progress in highly consequential areas, such as medicine, could slow considerably. It is not unreasonable to imagine, for example, that international exchanges that could contribute to a breakthrough in cancer research are scaled back or halted.
Yet the world may be on precisely this path, with countries – especially Trump’s US – citing national-security concerns to justify increased protectionism, including tighter constraints on international technology transfer. A wave of such digital nationalism could have a first-order negative effect on economic and social welfare in the long term. The question of how to balance national-security imperatives with the broader public good must therefore feature prominently in any examination of digitization trends.
THE LONG ROAD AHEAD
None of these topics, issues, or challenges admits of a quick fix. But meticulous research and in-depth analysis – of the kind increasingly carried out by leading thinkers from a wide range of fields, backgrounds, and geographies, and a central goal of the Luohan Academy – can produce a reasonably comprehensive understanding of the risks and opportunities that digital technologies present.
Progress will be slow, and mistakes will be made. But as we begin a new decade, we still have a chance to learn, adapt, and, ultimately, make the digital revolution work for us all.
(Michael Spence is a member of the academic advisory board of Luohan Academy.)
Published Date: Friday, December 6th, 2019 | 05:17 PM