AI, AGI, and Future of Tech: MIT's Neil Thompson and BCG's Faisal Hamady

Runtime

Topics

In this episode of More than Meets the AI, MIT's Neil Thompson and BCG's Faisal Hamady delve into the future of technology, discussing AI, AGI, and the evolving landscape of data integration and global collaboration.

With projections estimating artificial intelligence’s (AI) economic impact to reach as much as $15 trillion by 2030—rivaling the GDP of China or the EU—it’s clear that the technology will play a transformative role in the global economy. But how will businesses harness this potential, and what challenges must they overcome to unlock AI’s full value?

“Tasks AI traditionally struggled with are now being tackled effectively by deep learning systems. This shift is transformative and immensely powerful,” says Neil Thompson, Director of the FutureTech Project at MIT’s Computer Science & AI Lab. 

“There’s a vast universe of untapped possibilities we’ve yet to explore,” adds Faisal Hamady, Managing Director & Partner at BCG, as he reflects on the exciting future of AI. 

In the inaugural episode of More than Meets the AI—a web series tailored for top executives and decision-makers in the Middle East, in collaboration with Boston Consulting Group (BCG), Neil and Faisal discuss AI, AGI, and the Future of Tech—exploring topics from data integration to global collaboration.

Transcript

Liji Varghese: From your perspective, Faisal, where do you see AI going in the next 10 years?

Faisal Hamady: I think AI is at a very interesting point today. There’s a significant market opportunity; estimates vary between one and two trillion dollars by 2030, with a significant economic impact as well. 15 trillion dollars by 2030 is the current estimate, give or take. To put that into perspective, that’s equivalent to the GDP of China or the GDP of the EU. So there’s a significant opportunity, both in terms of the market size and also the economic opportunity. 

Now, where would that come from? From a left-brain, right-brain thinking, there’s a significant improvement when it comes to efficiency and doing things in a more efficient way, especially repetitive tasks early on, but going beyond that into the future. Things like drug discovery, optimization of transport routes, deliveries, and so on. 

There’s a second aspect, which is the creativity that AI brings into play. And especially when you couple that with new technologies such as metaverse or spatial computing, the idea to create more synthetic voices, synthetic text, synthetic images, videos, synthetic humans, and overall environments, there’s going to be a significant change in the way we think about creativity and all of that part.

I would think about it as a more short-term to medium-term type of advancement in AI. Similar to the physical changes that have happened before around the exertion of power or changes to the industrial revolutions, there is a point in which we do not know what’s going to come in, and there’s probably a big universe of possibilities that we’re not exploring so far, but it’s going to be exciting to see. On top of the efficiency, productivity, personalization, and use cases that we’re seeing now, the creativity part: what are those new use cases that will be coming into the future that we’re not imagining at the moment? 

Liji Varghese: So Neil, what would you add to that? What do you see happening in the next five to ten years with AI?

Neil Thompson: It’s certainly going to be a very exciting time; there’s no question about that. I think what we’re likely to see over the next little while is that over the short run, many of the tasks that we’ve traditionally done—that we’ve said actually AI systems have been traditionally not very good at—the new form of AI, this deep learning, is actually much more effective at, and so that’s going to be, I think, very, very powerful. 

That’s going to mean that a bunch of jobs are going to be automated. As Faisal was already saying, I think we’re going to see a huge number of new tasks being done and new jobs being created. I think that’s going to be very exciting.

Liji Varghese: That’s something you also talk about in your new AI task automation model, which you co-authored. Could you explain what’s new in this model?

Neil Thompson: Traditionally, when people think about AI automation, they’ve often had to think about it from the point of view of saying, “Well, you know, what does AI do, and what do humans do, and how do those potentially match up?”

And so, typically, what they would say is something like, “Well, we know that there’s been a lot of progress in, for example, computer vision—so computers looking at pictures, looking at videos, and being able to interpret them.” And we say, “Well, if we take a look at the tasks that humans do, where they’re looking at images, we think that maybe those could be automated.” If you do those kinds of estimates, you get very, very high percentages of humans that could actually be automated by AI today. And so that might be initially very worrisome.

What I think, however, we can say is that for some of these things, it’s true that you could imagine having an AI system do such a thing, but you might not actually want to.

The cost of building that system, maintaining it, and training it the right way—all of those things might be much more expensive than having a human do it. And so, in the work that we did, what we actually did was break that apart, and we said, “Okay, well, if we take into account all of those costs—all those costs that, for example, we’re dealing with every day implementing projects—and you actually look at it, you can see that in some cases you really don’t want to do it.”

And what we find is that of that very large set of tasks that could potentially be automated, we find that only about 20 to 25 percent of them are attractive today to automate. And so, what I think that means in practice for most organizations right now is that there’s going to be, initially, a big flurry of automation that’s going to happen.

In that automation that is happening, there are a whole bunch of things that are attractive right today. And then after that, there’s going to be this more gradual rollout that’s going to happen, because at that point, sort of those initial tasks are already done, and now the remaining tasks will become attractive as our chips get better, as our algorithms get better, and as our data gets better.

But all those things will take time, and so there will be that second phase, which will happen at a more gradual point in time.

Liji Varghese: Faisal, do you agree that there should be a gradual adoption across industries when it comes to AI?

Faisal Hamady: I very much agree with Neil also on the fragmentation of how the models will evolve and what the different algorithms will look like. Definitely, cost is one of them. What are the things that you’d want the AI models to do, to look at? Because that might be costly both from developing models and deploying models, but also the cost of computation and the running of infrastructure.

There’s a second aspect of this around. What are the applications, and what do you actually need from those applications? Sometimes deploying things at the edge with user devices that might require much quicker latency, but with less centralized intelligence or limited proprietary data or proprietary models, that might be one model. Another model might be bigger, centralized versions where it’s about more proprietary data.

You might be looking a little bit into background computing and so on. I think over the similarity to the software industry and how that evolves with time, it would be very hybrid, very heterogeneous. There will probably be, and this is all speculation right at this moment in time, but our view is that there will be a proliferation of models.

Some models are big and heavy and require much more computation. Some models will be very specific and very small and deployed at edge devices. And some others will be maybe AI agents that are doing specific tasks at a specific instance in time or a specific situation.

And I think if you think about this in the proliferation of models—a combination of small and big, a combination of closed source and open source, a combination of domain-specific but also foundational—I think that’s where the trend is going on, and we’re very excited to see how you can look into all of that complexity and deploy it back into your business.

Liji Varghese: When you talk about AI right now at this stage, data integration is something that is a challenge for most of the companies today. Do you have any strategies that you recommend to organizations to overcome these challenges?

Faisal Hamady: Look, I mean, today, organizations, I would say, are challenged with three main things based on the work that we’ve done with a number of them. I think, number one is having an end-to-end solution incorporating AI, so all the way from vision and business strategy, cascading this down to the business units with individual target plans, and cascading that further into the processes that the company is running, right? So that end-to-end thinking with AI at the heart is very important. It’s not an add-on to what you’re doing. It’s more reimagining what you’re doing using AI. So that’s number one. 

I would say the second thing about that is, how do you deploy that at scale? Today, in a lot of use cases, we’re seeing specific use cases that are relatively contained within either a specific domain, a specific business, or a specific situation. The idea of who can actually deploy AI at scale is going to be the second biggest challenge, right? 

And I would say third, and back to your point on data, our rule generally with our clients is like a 10:20:70 rule, whereby we say 10 percent is the effort and the resources you should put into algorithms and how do you decide on those infrastructure algorithms overall. Twenty percent goes into data and through the entire value chain – how do you capture, source, clean, consolidate, and treat the data overall until you are able to do (work) with it.

And seventy percent is your people and culture, because, at the end of the day, with those transformations with AI at the heart, it’s making sure that people are aware of what can be done, working hand in hand more symbiotically with the different AI systems, in a way that creates value. That is where the most concentration and challenge are for companies to both comprehend and act upon.

Liji Varghese: Neil, how do you suggest businesses should adapt when it comes to data integration from your perspective?

Neil Thompson: I think this is really going to be a very tough issue for a lot of companies. So I was a management consultant in a prior life as well. And I’ve also, as a professor, interacted with many businesses. Many businesses, for example, will have a model that is very decentralized where they’ve said, “Well, we want to have local incentives,” and so maybe each individual unit will have its own P&L (profit and loss), and they’ll all be managing their own data.

That can be very hard to work with if you’re thinking about AI because you really do need that kind of centralization to come together. And so, for many of these organizations, there’s going to be a lot of organizational change and process change that has to happen before they can do it. And so, I think, for those organizations, they’re going to end up often having to start with very small AI models that might be just related to particular business units.

If they want to have a more holistic view of AI as a company, they’re going to need to be thinking about it as part of a broader transformation that they’re doing in the whole organization. 

Liji Varghese: You also talk about the post-Moore’s Law era in businesses. How do you see computation powers matching up to the AI technology?

Neil Thompson: Yeah, so this is a really important question. I think it’s a little bit hard to get your head around just how much computation is being used for some of these very, very cutting-edge models. And so, to give some sense of it, when, for example, OpenAI trained their most recent language model, GPT-4, it’s estimated that they used tens of thousands of GPUs—the chips that people use in these areas—and they ran them for more than three months.

So this is, you know, something you should be thinking of, like data centers filled with chips running for long, long periods of time to train this model. And that, you know, that’s a very big challenge for folks. Now, traditionally, you know, we’ve often thought about saying, “Well, actually, if we just wait a little while, progress in chips gets better.”

And so that means, yes, it’s true. We need this many chips today, but maybe two years from now, we would need half as many chips. Two years after that, it would only be a quarter as much, and so on. And that traditionally has been known as Moore’s Law. The challenge is that the traditional version of Moore’s Law is slowing down. That was really based on miniaturizing what was going on in the chip. So you took the things that were on the chip, you miniaturized them, and you could get more onto the next version of the chip, and so on. So that miniaturization, unfortunately, we’re really very, very close to the end already with that.

Liji Varghese: So now you say that we have to work from top to bottom?

Neil Thompson: Yes, that’s right. That’s exactly it. So now we have to say, “Well, okay, if we can’t get it from miniaturizing, we can still say there are a whole bunch of things that sit above that.”

So there’s the actual design of the chip, and NVIDIA, for example, has been very successful in designing new specialized chips that do things that have been very, very fruitful and to a remarkable degree. That, however, is also going to be a limited well. As we progress there, we will sort of get closer and closer to the optimal chip for designing these things.

And we’re going to then get to a situation where we need to say, “Well, are there other big innovations we could do that might be, for example, computing with light instead of electrons, computing with other features?”

For example, spintronics is another example. Some of these areas where we’re sort of rethinking these fundamental levels in order to get that last bit of performance. It would also be remiss of me not to point out that when we’re talking about improving how we use computation, a lot of this is that we’ve been talking about the hardware side of it.

There’s also a side of it of improving algorithms, where we say that’s sort of how well we can use the chips we actually have. That’s also been an area with remarkably fast progress right now that my lab has done a lot of work on.

Liji Varghese: Right now, we are talking about all the challenges like data integration and chips. Do you, Faisal, by any chance, see AGI coming into play anytime soon?

Faisal Hamady: Look, there’s first a definition of what AGI is. I think, for a number of years, people have been speaking about AGI and ASI and a number of others. And one quote that I like to repeat always is to say AI is what we don’t have today.

So people always kind of think of today as a frontier. So the next thing is AGI. And once we had things like, for example, the ChatGPT and others, we thought, okay, there’s some new frontier about AI, and so on. And I think that’s an interesting thought to think about: how much are we pushing the frontier, and what would that AGI look like?

There’s quite a bit of thinking about that; that it would mimic human behavior and have other features that current AI systems do not have. To some extent, you can see that in specific fields, right? So in math exams or some computerized exams, the current AI systems are already performing as much, if not more, than humans and others.

I think the interesting part to look for is, as you take that forward, what would that look like from moving beyond, like what we started earlier into the physical acceleration of human potential—which was more industrial revolutions—today to where we are into accelerating the intelligence. Is the next era about more intelligence, or is it about additional aspects added to intelligence? 

Those could be about emotions and empathy, could be about consciousness, could be about reasoning and intuition, right? So the more we push into that, what would that next era be? Personally, I’m not sure what that would look like. But I can say it’s not going to be only more of the same. It’s going to be more, but we also added more layers of things that we just spoke about as well.

Liji Varghese: Another challenge that we see right now is that everything happens in silos. Neil, you come from an academic background. How do you see educational institutes, governments, and businesses working together on AI? Because there is a lot of information that is disintegrated on a lot of levels. So how do you think this can become a reality when collaboration comes into play?

Neil Thompson: Yeah, so I think, particularly traditionally, you definitely do see innovation happening. The U.S. government and many others have taken a lot of steps to try to get that integration to be as good as possible. I think what we do see with AI, though, is that this is just moving so fast that they’re already, like, people are reaching out in a way that is sort of unprecedented.

For example, we wrote an article a little while ago where we documented just the enormous amount of collaboration on papers that is already happening between academia and industry. The enormous number of scientists from academia that are being hired by industry. And so, if anything, this may be a little bit of a challenge for us in academia because, in fact, we’re having a bit of a brain drain in some of our areas.

But I think there is a lot of push. Nevertheless, I think it’s going to be a broader challenge in terms of the human capital, because there are just so many opportunities to use AI, and, you know, even though we are, of course, training people as fast as we can, there’s going to be much more demand than there is supply for the next little while.

And so, I think it’s going to be quite a wild west of human capital, and, I imagine we will still see these very high salaries that we’ve been seeing for a while. But I am confident that the integration we are seeing between industry and academia will continue for a while yet.

Liji Varghese: I wanted to know from you specifically about the Middle East; what challenges do you see when it comes to collaboration? 

Faisal Hamady: Look, I mean, in the Middle East specifically, there is no, I would say, lack of ambition when it comes to technology and innovation, and specifically in the areas of AI. I think there’s a number of countries within the Middle East that are at the forefront of thinking: where do we take that technology forward? And where does that technology take us? 

Going forward, especially in the context of diversifying from our traditional economies based on resources, based on oil and gas, and others, I think there are a number of comparative advantages that the region has in terms of being able to push that agenda. We spoke about the centralization, the relative centralization of decision-making, and access to data from policymakers.

So that’s definitely one advantage: to have the intent of the policymaker to push that agenda in a centralized view. And also, on the other side, to be able to capture the different data streams that are coming from the various sectors to be able to consolidate that. The second aspect is about proprietary data, be it in fields of oil and gas, in healthcare, and others.

There are specific data sets that the region as a whole has a hold of that can be used to develop specialized models that are domain-specific and might be used in other places for, say, resource extraction, scanning for resources, and so on. And on top of that, there are a number of things, such as access to cheap energy today, which is a significant challenge, as you spoke earlier, around how to run the computational clusters to fuel that modeling, both from training and then later on from reactions to them.

And then also in the future, with renewable energies coming into play, there are specific spots in the region, in the north of Saudi Arabia or in the south of the Arabian Peninsula, which are very well suited naturally, both from the flows of wind, from the exposure to sun, and so on, to be much cheaper and competitive on a global scale when it comes to renewables.

So there’s a number of advantages that the region has. I think the challenges continue to be about the concentration of ideas and talent. Specifically, we spoke earlier about the premise of innovation being the fluid, connected mobility of ideas, talent, and capital. I think there is a relatively significant amount of capital that is being deployed and has the willingness to be deployed within the region. Ideas and talent continue to be a challenge.

That is a dynamic, by the way, we have seen more broadly in digital and then more specifically in AI. If you look today into the top ideas, be it around papers, citations, patents, and others, they’re concentrated in one or two countries: the U.S. and China. And then the top five countries consolidate more—around 80 percent of the world’s supply of the top ideas, and so on.

The same thing applies when it comes to talent. Specialized talent in specific areas is within the U.S. and China, broadly speaking, and then within the top five or six countries, there is 80 percent of that concentration.

Liji Varghese: We understand we still have a lot of challenges when it comes to regional collaboration, but can you suggest some key components that are necessary to establish regional or international collaboration? Any strategy that you have, Faisal?

Faisal Hamady: Look, this is a big debate at the moment, right? Whether we will have unified global standards and governance around AI. I believe it’s two sides of the coin. On one hand, there is a base regulation that needs to happen around specific models, around fairness of the models, explainability, transparency, removal of bias, and so on.

So those, I think, are, let’s say, themes that most of the countries around the globe have agreed to, to some extent, in one way or another. I think there are two challenges that will be very significant going forward. And those are around, number one, specifically with Gen AI, that triggered this thought, right? People have different standards around what ethical, for example, means across different jurisdictions. 

So, for example, in some areas in the world, say for the U.S., you need to make sure that you ensure there is limited bias and equality on the individual level. In places like Japan, at the societal level, you might want to look more for social cohesion, or what is the benefit for the group rather than the individual, despite that it might come at the expense of the individual as well, right? And I’m just giving generalizations, but if I reflect on that about ethical frameworks for algorithms, that might not be the same across different jurisdictions. And technology does not know physical borders, right? So the models deployed in one place or the service deployed in one place will be applied globally, and that’s why one of the other things we say is that within this proliferation of models, there might be specialized models with different frameworks around how you govern and what are the ethical standards that are put in place that could be specific to a specific country or a region or a number of countries together.

The second aspect that we see is very difficult, and I think that’s more future-looking. I think today we’re thinking about the technology as if it is what it is today: a number of models, a number of companies driving these, and a number of services using AI. Reflecting again back to the software industries, if you think about the number of microservices, APIs, and libraries that are out there, it’s virtually impossible for anyone to go look into all of these things and say, how do I regulate, how do I ensure standards for each, both from the capabilities of the governments to be able to look into that, and then as well from the capacity to be able to dedicate as much time and resources to go into this.

One idea we were exploring earlier is whether we will have AI agents also helping to regulate other AI agents that the governments are deploying in a specific way, right? And I think that might be an intriguing thought to think of a hybrid future where there might be centralized institutions globally, there might be distributed institutions globally, but there might also be augmented with different AI agents to help, scan, check, and regulate some of these AI algorithms going forward.

Liji Varghese: Neil, I want your thoughts also on this.

Neil Thompson: So the question of regulation is definitely going to be a challenging one. And I think I would sort of divide it up into three different areas. So there’s one, which is that there are going to be a whole bunch of implications of AI that just come from it being a tool that allows us to do more and better (and I’m going to maybe put better in quotations because, of course, it’s going to be out in the world.)

In many of those cases, sort of, the best kind of AI regulation is just going to be good regulation in general, right? If we have a system that is harming some people, we need to make sure that we have good regulations. That’s not about it being an AI system; that’s about harming the people, and we should have good regulations for that.

I do think, however, that as these systems get more powerful, they are going to interact with the rest of the world. So we can already see people building systems, for example, that can access corporate databases and stuff like that to pull in information and maybe make edits. This raises the possibility of, of course, them interacting in the world in a larger way, and that’s that idea of how do you contain, how do you have fail safes, how do you have all of these things to make sure that when a system goes off the rails — some of them will go off the rails, right? — we need to be able to contain them in a way that manages the harm.

And so, for example, I think we can look to places like the financial markets, where we have a lot of automated trading that is going on. You know, people have already started to think about how to build these kinds of robustness checks. So I think that’s one place that we can look for it as well.

I think another thing that businesses are also just going to have to face is that we are used to designing rule-based systems with our IT systems. We have it, and we say, like, it will always only do the following things because we built in a rule that says it can only do that. That is not the way these systems are trained. These systems are trained, and because of that, they can produce things. And even if we say, well, we’ve designed it in this way, and we don’t want it to do this, so we may try and tweak the training so it won’t do that. And people have found ways around it.

The sort of famous one is, with some of these large language models, of course, you don’t want objectionable content coming up on them. But people have found ways to hack it. Just saying something like, “Oh, if I had a relative who was, you know, objectionable in some way, what would they say?” And then the system, which, you know, knows it shouldn’t say that, was trained not to do it. But if you put it in a slightly other context, suddenly it will give you a sort of terrible answer.

You know, you can imagine that applying not only to objectionable content but also to making, you know, wrong inferences. If you’re advising one of your employees or insulting one of your customers, there are some really bad consequences there. And that’s something that both regulators and, sort of, at the government level but also regulation at the level of a company is going to need to think about — how do you monitor those things?

Liji Varghese: How do you make sure that you’re controlling them? How do you not worsen these existing, you know, inequalities or create new ones?

Neil Thompson: I think in the short run, you know, we should think of AI as being a powerful tool, and that will mean actually, I think that in some cases we are going to get exacerbated. We’re going to get the people who can build these systems that can deploy them, the companies with the most scope and the most, um, resources. And that is going to, at least for the moment, exacerbate it. And then the question is, what’s going to happen after that? So, there are some categories of things where we’re going to need to think of, like, social safety nets in order to protect people, retraining, and those kinds of issues. We can, however, also rely on the pace of innovation to change things. So, in the same way that cell phones were initially a technology that sort of made some people much more mobile and other people not, as that became cheaper, it democratized, and it became useful to all of us. The progress in making these models more efficient is happening so rapidly that it is also the case that some of those models will democratize, and that will also be a powerful tool for everyone. I think we’re all getting more powerful tools, and exactly what that distribution has looked like, well, we’re going to have to see.

Liji Varghese: And Faisal, any key components that you suggest?

Faisal Hamady: Definitely, I agree with Neil that a lot of aspects will need to be democratized, and a lot of aspects would need to be regulated around fairness and not creating inequalities at the individual level and then also within different nations.

We spoke also about the differences across nations when it comes to what equality, bias, and ethics mean in different jurisdictions. I also think that there will be winners and losers in this race, similar to how the previous digital platforms and the, uh, and the previous, let’s say, digital eras created concentration of power and concentration of values in different places. I believe that that will continue to be the fact for the future. We are already seeing it from, as we said before, concentrations of ideas, concentration of talent, concentration of capital, and concentration of value within countries, corporates, but also startups such as unicorns.

Overall, the message is that individual nations will need to make sure that they are investing in this technology and that they are putting this at the forefront of their agendas to make sure that they are able to reap the benefits, but also to have a say in the global order of AI that is shaping together, that is shaping at the moment and will have a significant implication for the future.


Our next episode will focus on Is the Middle East ready for AI? We will be talking about talent, infrastructure, and more. 



Topics

More Like This