
Learning about AI with Google Brain and Landing AI founder Andrew Ng

This interview has been condensed and edited for clarity.

MIT Technology Review: I’m sure people often ask you, “How do I build my first AI business?” What do you usually say to that?

Andrew Ng: I usually say, “Don’t do that.” If I go to a team and say, “Hey everyone, please be AI-first,” that tends to focus attention on technology, which can be great for a research lab. But in terms of how to run a business, I tend to be customer-led or mission-led, almost never technology-led.

You now have a new venture called Landing AI. Can you tell us what it is and why you chose to pursue it?

After leading the AI teams at Google and Baidu, I realized that AI has transformed consumer internet software, such as web search and online advertising. But I wanted to take AI to all of the other industries, which are an even bigger part of the economy. After looking at a lot of different industries, I decided to focus on manufacturing. I think many industries are ready for AI, but one of the patterns that makes an industry more ready is having already gone through some digital transformation, so there is some data. That creates an opportunity for AI teams to use the data to create value.

One of the projects I’ve been excited about recently is visual inspection for manufacturing. Can you look at a picture of a smartphone coming off the manufacturing line and see if there’s a defect in it? Or look at an auto component and see if there’s a scratch? One huge difference from consumer internet software: there you might have a billion users and a vast amount of data. But no factory has manufactured a billion, or even a million, defective smartphones. Thank goodness for that. So the challenge is, can you get an AI to work with just a hundred images? It turns out that often you can. In fact, I’ve frequently been surprised by how much you can do with even a small amount of data. So while all the hype and excitement and PR about AI centers on huge data sets, I think this is also an area where we need to grow in order to open up these other applications.

How do you do that?

The mistake that CEOs and CIOs often make is to say: “Hey, Andrew, we don’t have that much data, and my data is a mess. So give me two years to build a great IT infrastructure. Then we’ll have all this great data on which to build AI.” I always say, “That’s a mistake. Don’t do that.” First, I don’t think any company on the planet today (maybe not even the tech giants) thinks its data is completely clean and perfect. It’s a journey. Spending two or three years building a beautiful data infrastructure means you lack feedback from the AI team to help you prioritize what IT infrastructure to build.

For example, if you have a lot of users, should you prioritize asking a question in a survey to get a little more data? Or in a factory, should you prioritize upgrading a sensor that registers vibration every 10 seconds so that it registers 100 times a second? An AI project usually starts with the data you already have, which allows the AI team to give feedback that helps prioritize what additional data to collect.

In industries where we don’t have the scale of consumer internet software, I feel we need to shift our mindset from big data to good data. If you have a million images, go ahead and use them; that’s fabulous. But there are many problems that can be solved with much smaller data sets that are well labeled and well curated.

Could you give an example? What do you mean by good data?

Let me first give you an example from speech recognition. When I was working on voice search, you would get audio clips where someone says, “Um, today’s weather.” The question is, what is the right transcript for that clip? “Um, today’s weather” (with a comma), “Um… today’s weather” (with an ellipsis), or do we not transcribe the “um” at all? Any one of these is fine, but what is not fine is if different transcribers each use a different one of the three labeling conventions. Then your data is noisy, and that hurts the speech recognition system. When you have millions or billions of users, you can tolerate noisy data; the learning algorithm will average over it and do just fine. But in settings where you have a smaller data set, say a hundred examples, this kind of noisy data has a big impact on performance.
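As a minimal sketch of this idea (the function names and examples here are mine, not from the interview), one way to remove convention noise is to normalize every transcript to a single convention before training, and to measure how often annotators agree before and after:

```python
import re

def normalize_transcript(text: str) -> str:
    """Collapse all three 'um' conventions to one canonical form."""
    text = text.lower()
    text = re.sub(r"\bum\b[,.\s…]*", "", text)  # drop the filler word entirely
    text = re.sub(r"[^\w\s']", " ", text)       # drop remaining punctuation
    return " ".join(text.split())               # collapse whitespace

def agreement_rate(label_sets):
    """Fraction of clips on which every annotator produced the same label."""
    agree = sum(1 for labels in zip(*label_sets) if len(set(labels)) == 1)
    return agree / len(label_sets[0])

# Three annotators transcribing the same two clips, each using a different
# "um" convention on the first clip.
annotator_a = ["Um, today's weather", "turn up the volume"]
annotator_b = ["Um… today's weather", "turn up the volume"]
annotator_c = ["today's weather", "turn up the volume"]

raters = [annotator_a, annotator_b, annotator_c]
before = agreement_rate(raters)  # 0.5: the raters disagree on the first clip
after = agreement_rate([[normalize_transcript(t) for t in r] for r in raters])  # 1.0
```

With a hundred examples rather than a billion, lifting agreement from 50% to 100% on a convention like this can matter more than any change to the model.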

Another example, from manufacturing: we did a lot of work on steel inspection. If you drive a car, the sides of your car were once a sheet of steel. Sometimes there are little wrinkles, bumps, or specks on the steel, so you can use a camera and computer vision to check for defects. But different labelers will label the data differently. Some will put a huge bounding box around the whole region; some will put small boxes around the individual specks. When you have a modest-sized data set, making sure the different quality inspectors label the data consistently is one of the most important things.

In many AI projects, the open-source model you download from GitHub (the neural network you can get from the literature) is good enough. Not for every problem, but for the main ones. So I’ve gone to many of my teams and said, “Hey everyone, the neural network is good enough. Let’s not mess with the code anymore. The only thing you’re going to do now is build processes to improve the quality of the data.” It often turns out that this yields faster improvements in performance.

What is the size of the data you are thinking of when you say smaller data sets? Are you talking about a hundred examples? Ten examples?

Machine learning is so diverse that it has become really hard to give one-size-fits-all answers. I’ve worked on problems where I had about 200 to 300 million images. I’ve also worked on problems where I had 10 images, and everything in between. When I look at manufacturing applications, I think something like tens or maybe a hundred images for a defect class is not unusual, but there’s a huge variance even within one factory.

It seems to me that AI practice changes as the size of the training set drops below roughly 10,000 examples; below that threshold, an engineer can basically examine every example themselves, design the labels by hand, and make decisions about individual examples.

I recently chatted with a very good engineer at one of the large tech companies. I asked, “Hey, what do you do if the labels don’t agree?” And he said, “Well, we have this team of several hundred people doing the labeling. So I write the labeling instructions, have three people label every example, and then take the average.” And I said, “Yes, that’s the right thing to do when you have a giant data set.” But when I work with a smaller team and the labels don’t agree, I find the two people who disagree with each other, get them both on a Zoom call, and have them talk to each other to try to reach a resolution.
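The two strategies can be sketched side by side; this is my own illustration (the labels and function names are hypothetical). Keep the majority label when the three annotators mostly agree, and route examples with no majority to a discussion queue, which is the small-data path Ng describes:

```python
from collections import Counter

def resolve_labels(votes_per_example):
    """Majority-vote each example; queue ties for annotator discussion."""
    resolved, needs_discussion = [], []
    for i, votes in enumerate(votes_per_example):
        label, count = Counter(votes).most_common(1)[0]
        if count > len(votes) // 2:
            resolved.append((i, label))   # big-data path: averaging works
        else:
            needs_discussion.append(i)    # small-data path: get on a call
    return resolved, needs_discussion

votes = [
    ["scratch", "scratch", "dent"],   # clear majority
    ["scratch", "dent", "speck"],     # three-way disagreement
]
print(resolve_labels(votes))  # ([(0, 'scratch')], [1])
```

With billions of examples the discussion queue is impractical and the occasional bad majority washes out; with a hundred examples, each disputed label is worth a conversation.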

I want to turn now to your thoughts on the AI industry as a whole. The Algorithm is our AI newsletter, and I was able to send some questions to our readers in advance. One reader asks: AI development seems to have focused on academic research or on large-scale, resource-intensive programs at big companies such as OpenAI and DeepMind. That doesn’t leave much room for small startups to contribute. What do you think are some practical problems that small businesses can realistically tackle to help drive real commercial adoption of AI?

I think a lot of the media attention goes to the big companies, and sometimes to the big academic institutions. But if you go to academic conferences, plenty of the work is done by small research groups and labs. And when I talk to people at different companies in different industries, I think there are a lot of business applications they could tackle with AI. I usually go to business leaders and ask, “What are your biggest business problems? What are the things that worry you most?” That way I can better understand the goals of the business, and then see whether or not there is an AI solution. Sometimes there isn’t, and that’s okay.

Maybe I’ll mention a couple of gaps I find exciting. Building AI systems today is still very manual. You have a few brilliant machine-learning engineers and data scientists doing things on their computers and then pushing them into production, with many manual steps along the way. So I’m excited about MLOps [machine learning operations] as an emerging discipline that helps make the process of building and deploying AI systems more systematic.

Also, if you look at a lot of common business problems, across every function from marketing to talent, there’s a lot of room for automation and efficiency improvements.

I also hope the AI community can tackle the biggest social problems: seeing what we can do about climate change, homelessness, or poverty. In addition to the very valuable business problems, we should work on the biggest social problems too.

How do you go about identifying whether something is achievable with machine learning for a business?

I’ll try to learn a little bit about the business, and try to help the business leaders learn a little bit about AI. Then we usually brainstorm a set of projects, and for each idea I’ll do both technical due diligence and business due diligence. Technical due diligence means checking: Do we have enough data? What accuracy can we reach? Is there a long tail of rare cases once we scale into production? How do we feed data back and close the loop for continual learning? That establishes whether the problem is technically feasible. Business due diligence means making sure the project delivers the ROI we’re hoping for. After this process, it’s common to scope out resources and milestones and then go into execution.

Another suggestion: it’s more important to start quickly and start well than to start big. The first significant business application I worked on at Google was speech recognition, not web search or advertising. Helping the Google speech team make speech recognition more accurate gave the Brain team credibility, and that support grew into further collaborations. Our second big collaboration was with Google Maps, using computer vision to read house numbers and geolocate houses on the map. Only after those first two successful projects did I have a more serious conversation with the advertising team. So I think more companies fail by starting too big than by starting too small. It’s fine to do a smaller project as an organization to learn what it feels like to use AI, and then go on to bigger successes.

What should our audience start doing tomorrow to implement AI in their companies?

Jump in. AI is causing a shift in industry dynamics. So if your company isn’t already making fairly aggressive and smart investments in AI, now is a good time to start.
