Not all technological innovation deserves to be called progress. Some achievements, despite their convenience, may not deliver as much social benefit as advertised. One prominent skeptic of techno-optimism is MIT economist Daron Acemoglu. (The "c" in his surname is pronounced like a soft "g".) IEEE Spectrum spoke with Acemoglu, whose research spans labor economics, political economy, and development economics, about his recent work and his views on whether technologies such as artificial intelligence will have a net positive or negative impact on human society.

IEEE Spectrum: In your November 2022 working paper "Automation and the Workforce," you and your co-authors report that the evidence on AI and the workforce is mixed at best. What explains the discrepancy between firms' stated demand for skilled labor and their actual staffing levels?

Acemoglu: Firms often lay off less-skilled workers and try to increase their hiring of skilled workers.

“Generative AI could be used not to replace people, but to help people. … But that’s not the trajectory it’s on right now.”
—Daron Acemoglu, Massachusetts Institute of Technology

In theory, high demand and limited supply should lead to higher prices, in this case higher wage offers. Based on that long-held principle, wouldn't firms conclude that offering more money would solve the problem?

Acemoglu: You may be right to some extent, but… when firms complain about a lack of skills, I think part of what they are complaining about is a general lack of skills among the applicants they see.

In your 2021 paper "Harms of AI," you argue that if AI remains unregulated, it will cause significant harm. Could you give some examples?

Acemoglu: Well, let me give you two examples from ChatGPT, which is all the rage right now. ChatGPT could be used for many different purposes, but the current trajectory of the large language model embodied in ChatGPT is heavily focused on a broad automation agenda. ChatGPT tries to impress users… It tries to be as good as humans at a variety of tasks: answering questions, carrying on a conversation, writing sonnets, and writing essays. In fact, it can be better than humans at a few things, because writing coherent text is challenging, and predicting which word should come next, on the basis of vast amounts of data from the internet, is something it does quite well.

The path that GPT-3 [the large language model that spawned ChatGPT] is going down emphasizes automation. And there are already other areas where automation has had a detrimental impact: job losses, inequality, and so on. If you think about it, you will see, or you could at least argue, that the same architecture could be used for very different purposes. Generative AI could be used not to replace people, but to help people. If you wanted to write an article for IEEE Spectrum, you could either ask ChatGPT to write the article for you, or you could use it to assemble a reading list, which might include items you didn't know about yourself that are relevant to the topic. The question would then be how reliable the various articles on that reading list are. Still, in that capacity, generative AI would be a tool that complements humans, not a replacement for them. But that's not the trajectory it's on right now.
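The next-word prediction Acemoglu refers to can be illustrated with a toy model. The sketch below is a hypothetical, minimal bigram predictor, not how GPT-3 actually works (GPT-3 uses a large neural network over tokens); it only shows the core idea of predicting the next word from counts over training text.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Scaling this idea from word-pair counts on one sentence to neural networks trained on much of the internet is, roughly, what makes the output coherent enough to seem human.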

“OpenAI took a page from Facebook’s ‘move fast and break things’ playbook and just dumped it all out there. Is that a good thing?”
—Daron Acemoglu, Massachusetts Institute of Technology

Let me give you another example, one more relevant to political discourse. Because, again, the ChatGPT architecture is based on taking information from the internet, which it can get mostly for free. And then, with a centralized structure operated by OpenAI, it has a conundrum: if you just take the internet and use your generative AI tools to form sentences, you will very likely end up with hate speech, including racial epithets and misogyny, because the internet is filled with that. So how does ChatGPT deal with that? Well, a team of engineers developed another set of tools, mostly based on reinforcement learning, that allows them to say, "These words are not going to be spoken." That is the conundrum of the centralized model. Either it spews hateful things, or somebody has to decide what is sufficiently hateful. But that is not going to be conducive to any kind of trust in political discourse, because it could turn out that three or four engineers, essentially a small group of people, decide what the public can hear on social and political issues. I believe those tools could be used in a more decentralized way, rather than under the auspices of centralized big companies such as Microsoft, Google, Amazon, and Facebook.
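The centralized filtering decision Acemoglu describes can be caricatured in a few lines. Real systems like ChatGPT use reinforcement learning from human feedback rather than a simple blocklist; this hypothetical sketch only shows how one small group's choice of banned terms directly determines what the system may say.

```python
# A caricature of centralized content filtering: whoever controls this set
# decides what the system may output. Real moderation (e.g., RLHF) is far
# more sophisticated, but the governance question is the same.
BLOCKLIST = {"badword"}  # hypothetical banned terms, chosen by a few engineers

def moderate(text, blocklist=BLOCKLIST):
    """Replace any blocklisted token with a placeholder."""
    return " ".join(
        "[removed]" if word.lower().strip(".,!?") in blocklist else word
        for word in text.split()
    )

print(moderate("hello badword world"))  # -> hello [removed] world
```

The technical mechanism is trivial; the contested part is who gets to populate the blocklist, which is exactly the centralization problem raised above.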

You say that rather than continuing to move fast and break things, innovators should take a more deliberate stance. Are there specific prohibitions that should guide the next steps toward intelligent machines?

Acemoglu: Yes. Again, let me give you an illustration using ChatGPT. They wanted to beat Google [to market, knowing that] some of the technologies were originally developed by Google. And so they went ahead and released it. It is now used by tens of millions of people, but we have no idea what the broader implications of large language models will be if they are used this way, how they will affect journalism or high school English classes, or what political implications they will have. Google is not my favorite company, but in this instance I think Google would have been much more cautious. They were actually holding back their large language model. But OpenAI took a page from Facebook's "move fast and break things" playbook and just dumped it all out there. Is that a good thing? I don't know. OpenAI has become a multibillion-dollar company as a result. It was already partly backed by Microsoft, and now it has been integrated into Microsoft Bing, while Google lost something like $100 billion in value. So you see the high-stakes, cutthroat environment we are in and the incentives it creates. I don't think we can trust companies to act responsibly here without regulation.

Tech companies argue that automation will put people in a supervisory role rather than simply killing off all the jobs: the robots are on the floor, and the people are in the back room monitoring the machines' activities. But who's to say the back room isn't across an ocean rather than on the other side of a wall, a division of labor that would allow employers to slash labor costs even further by offshoring those jobs?

Acemoglu: That's right. I agree with all of those statements. I would say, in fact, that this is the usual excuse of some companies engaged in rapid algorithmic automation. It's a common refrain. But you are not going to create 100 million jobs for people supervising algorithms, providing them with data, and training them. The whole point of providing data and training is that the algorithm can then perform the tasks that humans used to perform. That is very different from what I call human complementarity, where the algorithm becomes a tool for humans.

“[Imagine] using AI… for real-time scheduling, which can take the form of zero-hours contracts. In other words, I employ you, but I am under no obligation to provide you with any work.”
—Daron Acemoglu, Massachusetts Institute of Technology

In "Harms of AI," you argue that executives trained to cut labor costs have used technology to circumvent labor laws that benefit workers, for example by scheduling hourly workers' shifts so that hardly anyone ever reaches the weekly hour threshold that would qualify them for employer-sponsored health insurance and/or overtime pay.

Acemoglu: Yes, I agree with that statement, too. Even more important examples would be the use of AI to monitor workers, and for real-time scheduling, which can take the form of zero-hours contracts. In other words, I employ you, but I am under no obligation to provide you with any work. You are my employee. I have the right to call you. And when I call you, you are expected to show up. So, say I'm Starbucks. I'll call and say, "Willy, come in at 8 a.m." But I don't have to call you, and if I don't call you for a week, you don't make any money that week.
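The threshold-gaming tactic described in this exchange reduces to simple arithmetic. The sketch below is a hypothetical illustration; the 30-hour cutoff is an assumed figure, not a number from the paper. It splits a week's required hours across just enough workers that nobody reaches the benefits threshold.

```python
import math

THRESHOLD = 30  # hypothetical weekly-hours cutoff for benefits eligibility

def schedule_below_threshold(total_hours, threshold=THRESHOLD):
    """Split the week's required hours across the minimum number of
    workers such that every worker stays strictly below the threshold."""
    cap = threshold - 1                       # max hours any one worker may get
    n_workers = math.ceil(total_hours / cap)  # fewest workers who fit under cap
    base, extra = divmod(total_hours, n_workers)
    return [base + 1 if i < extra else base for i in range(n_workers)]

print(schedule_below_threshold(120))  # 5 workers, 24 hours each: [24, 24, 24, 24, 24]
```

The point of the illustration is that no sophisticated AI is needed to game a benefits threshold; real-time scheduling software just applies this kind of constraint continuously and at scale.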

Will the simultaneous proliferation of AI and government surveillance technologies lead to a total loss of privacy and anonymity, as depicted in the sci-fi film Minority Report?

Acemoglu: Well, I think it has already happened. In China, that is exactly the situation urban dwellers face. And in the United States, it's actually private companies. Google has far more information about you and can track you constantly unless you turn off various settings on your phone. It also continually uses the data you leave online, in other apps, or when you use Gmail. So there is a complete loss of privacy and anonymity. Some people say, "Oh, that's not so bad. Those are companies; that's not the same as the Chinese government." But I think it creates a lot of problems, because they use the data for individualized, targeted ads. It is also problematic that they sell your data to third parties.

Four years from now, when my kids graduate from college, how will AI have changed their career options?

Acemoglu: This goes back to the earlier discussion of ChatGPT. Programs like GPT-3 and GPT-4 may derail a lot of careers without generating significant productivity gains on their current path. On the other hand, as I mentioned, there are alternative paths that would actually be much better. AI advances are not preordained. It's not that we know exactly what will happen in the next four years, but it's a question of trajectory. The current trajectory is based on automation, and if that continues, a lot of careers will be closed off to your children. But if the trajectory goes in a different direction and becomes human-complementary, who knows? Perhaps some very meaningful new occupations will open up for them.
