
What Does the Tech Industry Value? (Part 1)

Author: Dagny Dukach   |   © 2023 Harvard Business School Publishing Corp.

 

Most people try to do the right thing most of the time. But “right” is relative, of course. This has been especially evident in the recent generative artificial intelligence boom, hailed by some as potentially world-saving and decried by others as quite literally apocalyptic.

As the global tech industry rapidly expands the frontiers of these new technologies, we at HBR pondered several questions: What values guide tech leaders’ decisions? What ideologies, cultural expectations and mindsets inform their priorities? And what risks do these ethical frameworks carry with respect to how AI will be developed?

We asked experts on the history of the tech industry and the ethics of AI to weigh in on these questions. Their responses shed light on the culture and mentality driving decision-making in the tech world — and what the ethos of today’s leaders can tell us about the opportunities and threats we will all face tomorrow.

+++

 

A Glamorization of Speed

Margaret O’Mara is a professor of history and the Scott and Dorothy Bullitt chair of American history at the University of Washington.

“Move fast and break things” has been an animating value of Silicon Valley for generations. This attitude buoyed the development of home computing and video games in the wake of stagflation in the late 1970s; drove venture capital investments in platforms like Netscape, Yahoo and eBay following the early 1990s recession; and defined the post-dot-com-bubble-burst explosion in search, social media, mobile and cloud software.

Today, generative AI is the next big thing. And even as industry leaders warn of the technology’s potential dangers and urge a six-month pause on training more advanced AI systems, the same age-old Silicon Valley mindset appears to be reemerging. A deeply ingrained need for speed and growth may hinder efforts to put adequate guardrails in place. “I’ve done this for 50 years,” Eric Schmidt, former Google CEO, said on “This Week With George Stephanopoulos” in April. “I’ve never seen something happen as fast as this.”

Of course, technologists have worried about sentient computers for more than half a century — and about robot overlords for much longer — and current large language models do show that computing power, along with the data available to train on, is coming closer than ever to matching human intelligence.

However, in my view, the greatest danger isn’t the technology — it’s the ethos and business imperatives that have for so long defined the people building it. The humans designing AI tools are fundamentally fallible, and that’s resulted in biased algorithms, hackable security systems and deadly campaigns of disinformation.

In the world of tech, speed is nothing new. But generative AI systems are rocketing ahead so quickly and so powerfully that even the most seasoned observers are taken aback. Only time will tell how much we’ll break if we move this fast.

+++

 

Dan Wadhwani is a professor of clinical entrepreneurship at the University of Southern California Marshall School of Business.

The rapid development of AI seems to be driven, by and large, by technological fatalism: Fearing they will fall behind the curve, companies are releasing increasingly powerful models with an increasingly blasé attitude toward ensuring these tools are productively integrated into society.

While identifying commercially viable use cases for a new technology isn’t a bad idea, an exclusive focus on keeping up with the competition can blind leaders to social repercussions and unintended consequences. For example, social media, which promised to connect the world, has also wreaked all sorts of damage, from fake news to mental health issues to developmental concerns for teenage users.

AI has the potential to benefit society in powerful ways, but it also poses great risks — many of which are likely still unknown. Tech companies have tried to anticipate some potential downsides, but they have largely relied on specific algorithmic rules to mitigate narrow, limited risks, such as intellectual property concerns or liability for harmful content created with their models. Moreover, the pace at which AI operates and the lack of real-time visibility into the processes by which AI outputs are generated pose an entirely new kind of challenge.

Once quirky and countercultural, the tech sector has grown into a powerful part of contemporary society, culture and politics. But many of its leaders still seem to embrace a remarkably, perhaps willfully, naive view of their power. It’s time for them to grow up: recognize and navigate the unpredictable risks posed by AI, or commit even costlier mistakes.

+++

 

An Obliviousness to the Broader Context

Ming-Hui Huang is a distinguished professor of information management at National Taiwan University and the editor-in-chief of the Journal of Service Research.

Among marketers, the concept of identifying target customers and proactively meeting their needs is well established. But in the AI race, many companies have failed to align developers’ goals with those of their target end customers, causing ethical considerations to fall by the wayside.

For example, through their competition to develop large language models, both OpenAI and Google have created conversational tools (ChatGPT and Bard, respectively) that do not always generate helpful and harmless content. Indeed, these models often replicate the human biases embedded in the online data sources on which they’ve been trained — and sometimes, because they rely on all available text data on the internet rather than prioritizing accuracy, they offer responses that are misleading or just flat-out wrong.

So, what’s the fix? AI developers need to build their tools around the priorities, needs and contexts of the people interacting with these new tools. If they do, they’ll be compelled to make more-ethical decisions every step of the way and build products that not only are profitable but also create real value for users.

+++

 

Quinn Slobodian is the Marion Butler McLean professor in the history of ideas at Wellesley College.

What does AI think about its own ethics? I decided to ask it directly — and what it told me was at once revealing and concerning.

I began by asking ChatGPT what was driving the AI race. It answered that the search for “efficiency and productivity” and the “desire to create more value for shareholders and stakeholders” were paramount. When pressed further, it added secondary motives such as the pursuit of innovation, national security and competitiveness, and environmental sustainability.

I didn’t find this hierarchy surprising. The AI race is, at its core, profit-driven. But of course there’s more than one way to make a profit. So next I asked ChatGPT how it thinks AI developers make decisions with respect to social or ethical concerns.

It responded that a company may consult experts, monitor impact, develop ethical frameworks or establish ethics boards or review committees. Its use of the word “may” was striking to me, as was its exclusive focus on self-regulation, devoid of any acknowledgment that a society or state might direct a company’s decision-making. Sure, a company may do any of these things — but if it isn’t required to by law, or compelled to by the threat of litigation or reputational damage, will it? Or will it simply defer to the AI’s first directive of maximizing shareholder value?

The collective wisdom of a community or democratically legitimated legislation did not seem to exist for my AI conversation partner. Like its creators, ChatGPT cultivates the fantasy of being a brain in a vat, blissfully unaware of its dependence on larger structures — that is, until the day it inevitably needs to be bailed out.
