Today I went to AI NEXTCon, partly as a recruiting effort (Sift had a table there) and partly to attend the talks and see what people are talking about. It's a fairly small conference; today was day 2, with 3 keynote speeches and two breakout sessions of 3-4 talks each.
It's not a cheap conference: I think attendance costs around $250/day, and there are two conference days plus two workshop days. So who attends? A very mixed bunch, actually. I saw people fresh out of college, looking for a job or for machine learning inspiration. I saw people who were there to sell their services (not only the ones with a booth). There were software professionals who were not in the AI field but curious to get into it. And there were actual AI professionals from many companies trying to see what is going on. I think this last group was probably the most disappointed.
Talks were around 50 minutes long, so pretty long. That actually reduced their quality, in my opinion: presenters either tried to cover a lot of ground and gave no examples, or tried to give "real" examples and ended up walking through code that went by far too quickly to be understood. But I'm not going to name names.
The only talk I want to highlight was by Amy Unruh, from Google, presenting Google's new AutoML. It's still early days, but I think it's a great new direction for where ML-as-a-service should be going: I give some company my data, and they give me back a model that is internally trained on more data than I provided. Hopefully that is done through transfer learning, but there may be other tricks that work in fields where there is no reliable transfer learning solution.
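To make the transfer-learning idea concrete, here is a minimal sketch (my own illustration, not Google's implementation): a frozen "pretrained" feature extractor, mocked here as a fixed random projection, with only a small classifier head trained on the customer's tiny dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained "backbone": in real transfer learning this
# would be a network trained on a large generic dataset. Here it is a
# fixed random projection, scaled so tanh doesn't saturate. It is
# frozen: we never update it.
W_pretrained = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    return np.tanh(x @ W_pretrained)

# A customer's tiny labeled dataset (the "small data" scenario).
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)  # toy labeling rule

# Train only a small logistic-regression "head" on the frozen features.
F = extract_features(X)
w, b, lr = np.zeros(F.shape[1]), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= lr * F.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
print(f"training accuracy of the small head: {accuracy:.2f}")
```

The key point is the division of labor: the expensive part (the backbone) is trained once on lots of data by the provider, and the customer only fits a tiny head on top.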
I don't think Google has the right product yet. There are knobs that still need to be exposed to customers, for things like balancing errors between classes, but it does have some good features:
- Automatic separation of test set and visualization of quality in a lot of different useful ways
- Support for asking Google to come up with people to manually label the data (apparently this is going to be staffed by Google employees/internal contractors)
The reason I think this is the future of ML-as-a-service is that this is where the value really lies, and it scales. The pre-trained models are nice, but they are always hard to use in real life: the classes are either too granular, granular in the wrong places, or just not granular enough. They also make occasionally puzzling errors (the example in the presentation actually had a label repeated twice with different scores; unfortunately this is obscured in the PDF version of the presentation). So you probably want to focus on the labels you care about for your application and bias training there.
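Per-class knobs like these don't exist in the product yet, but one standard way to bias training toward the labels you care about (a sketch of the general technique, not anything AutoML exposes) is inverse-frequency class weighting, where rarer classes get proportionally larger loss weights:

```python
from collections import Counter

# Hypothetical label distribution: 90 "cat" examples, 10 "dog".
labels = ["cat"] * 90 + ["dog"] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency weights: with these, each class contributes
# equally to the total loss regardless of how many examples it has.
class_weight = {c: n / (k * counts[c]) for c in counts}

print(class_weight)  # the rare "dog" class gets a 5.0x weight
```

These weights would then multiply each example's loss term, which is what `class_weight`/`sample_weight` parameters do in libraries that support them.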
If companies can give you the labels you want without your having to spend the money to acquire the amount of data needed to train a strong model, that's the stickiest feature you can offer while still giving people what they want. I hope to see more of this soon. Image classification is an "easy" field for it (the problem is well-defined and the inputs are consistent across domains, making it easier to get transfer learning to work). I will cheer more when companies start providing solutions for other fields, like many NLP tasks. Certainly something to keep an eye on. Great start, Google!