Monday, March 21, 2016
Bengio, LeCun, Jordan, Hinton, Schmidhuber, Ng, de Freitas and OpenAI have done reddit AMAs. These are nice places to start to get a sense of the Zeitgeist of the field.
Hinton and Ng lectures at Coursera, UFLDL, CS224d and CS231n at Stanford, the deep learning course at Udacity, and the summer school at IPAM have excellent tutorials, video lectures and programming exercises that should help you get started.
The online book by Nielsen, notes for CS231n, and blogs by Karpathy, Olah and Britz have clear explanations of MLPs, CNNs and RNNs. The tutorials at UFLDL and deeplearning.net give equations and code. The encyclopaedic book by Goodfellow et al. is a good place to dive into details. I have a draft book in progress.
Theano, Torch, Caffe, ConvNet, TensorFlow, MXNet, CNTK, Veles, CGT, Neon, Chainer, Blocks and Fuel, Keras, Lasagne, Mocha.jl, Deeplearning4j, DeepLearnToolbox, Currennt, Project Oxford, Autograd (for Torch), and Warp-CTC are some of the many deep learning software libraries and frameworks introduced in the last 10 years. convnet-benchmarks and deepframeworks compare the performance of many existing packages. I am working on an alternative, Knet.jl, written in Julia, which supports CNNs and RNNs on GPUs and makes it easy to develop original architectures. More software can be found at deeplearning.net.
Deeplearning.net and the homepages of Bengio and Schmidhuber have further information, background and links.
Monday, March 7, 2016
What does your company do?
When we started Snips a couple of years ago, all we knew was that we wanted to use Artificial Intelligence to solve real, everyday problems.
A year ago, we started thinking about one of the most pressing problems of today: an abundance of programming and technology, but a real scarcity of time. With all of our connected devices competing for our attention, we knew it was only going to get worse. Imagine hundreds of connected fridges, cars, watches, tablets, alarms, light bulbs and TVs talking to you at once, unaware of your or each other's context, and careless about what you were doing while they interrupted you.
But then we thought: what if all of those devices had an AI in them? What would that change? We realized it would change everything. This idea, called “Context-Awareness”, is how we can minimize, and perhaps even automate, most of our interactions with technology. We figured out that in the long run, Context-Awareness is going to be the reason why the world around us will feel unplugged again.
Our mission at Snips is to make technology so smart and context-aware that it disappears from your consciousness into the background.
What's your background?
I've been involved with technology since I was a kid: I started coding when I was ten years old and created my first start-up, a social network, when I was fourteen. At fifteen, I created a web development agency, and at eighteen I decided to attend University College London to study computer science. Shortly afterwards, I started my PhD in bioinformatics.
After my PhD, I went to the US for a few months to attend a program at Singularity University. At the same time, I was working on using machine learning to personalize nutrition, which eventually inspired me to create Snips.
Can you describe what a customer experiences when they're using Snips?
As our short-term goal, the team at Snips is focusing on making mobile devices context-aware by aggregating data from all the apps and turning it into a contextual knowledge graph of the user. Besides the fact that mobile is the most ubiquitous device, we also believe that, moving forward, our mobile phone will sit at the center of the Internet of Things.
Today, the trouble with the smartphone is that all the apps you're using are essentially scattered, which means you constantly need to keep a mental model of where your data is located and which app serves it best. Thus, as a smartphone user, you're constantly going back and forth on your home screen, linking those things together yourself. For example, if you want to go to your next meeting, you need to open your calendar, search for and copy the address of the location, go back to your home screen, open the Uber app, paste in the address, and so on and so forth. This process limits the number of actions you can do with your phone to whatever your brain can process, which is not that much. When you think about it, the fact that people use about twenty apps is no coincidence, because that's exactly the number of icons you have on your home screen.
What we do with Snips is try to solve this problem by aggregating the data from your calendar events, your location, your emails, and your text messages to create the user's contextual knowledge graph. A knowledge graph is basically a representation of how different pieces of data are linked to each other and how they relate to your life. Once that is done, we start analyzing the way you're using your different apps to figure out patterns such as: every time a user has a meeting, she opens her calendar and then takes an Uber to get there. So, eventually, whenever you type in Snips "take me to my next meeting," it will simply launch Uber with the address of your meeting pre-filled. What we are offering is an entirely new way of interacting with the smartphone: we eliminate the unnecessary steps the user has to go through by providing shortcuts with the information pre-filled.
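To make the idea concrete, here is a minimal, hypothetical sketch of the pattern described above: linking data from different apps into one graph, then resolving a shortcut like "take me to my next meeting" by walking the graph instead of the user copy-pasting between apps. All names, entities and the graph representation are illustrative assumptions, not Snips' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # edges maps (entity, relation) -> value,
    # e.g. ("meeting_42", "location") -> an address
    edges: dict = field(default_factory=dict)

    def add(self, entity, relation, value):
        self.edges[(entity, relation)] = value

    def get(self, entity, relation):
        return self.edges.get((entity, relation))

def take_me_to_next_meeting(graph, user):
    """Resolve the shortcut by following links across app domains:
    calendar -> meeting -> address -> preferred ride app."""
    meeting = graph.get(user, "next_meeting")
    address = graph.get(meeting, "location")
    preferred_app = graph.get(user, "preferred_ride_app")
    # Return the intent a launcher could execute with fields pre-filled.
    return {"app": preferred_app, "destination": address}

kg = KnowledgeGraph()
kg.add("alice", "next_meeting", "meeting_42")        # from the calendar
kg.add("meeting_42", "location", "12 Rue de Rivoli") # from the event body
kg.add("alice", "preferred_ride_app", "Uber")        # from usage patterns

print(take_me_to_next_meeting(kg, "alice"))
```

The point of the sketch is that once the links exist in one place, a single query replaces the copy-paste loop across apps.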
Can you give more details on the AI in the application?
For starters, there isn't one particular type of AI we're using at Snips; there's a whole range of them.
Though there is much happening at Snips right now, I would say there are two key pieces of technology we are concentrating on. The first is the knowledge graph aggregation I just discussed. For that, we're using around twenty-five different algorithms: everything from simple processing and machine learning classification to figure out the user's transportation modes, to a lot of natural language processing to understand the user's context from their dialogue. I think the big difficulty in what we do is that we need to be precise 99 percent of the time, so as not to miss key pieces of the user's context, and to be able to deliver good service at all times.
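As one toy illustration of the "simple processing and machine learning classification" step mentioned above, transportation mode can be guessed from average speed with a nearest-centroid rule. The centroid values and mode labels here are invented for the example; this is not Snips' pipeline.

```python
# Typical average speeds in km/h, invented for this sketch.
CENTROIDS = {"walking": 5.0, "cycling": 15.0, "driving": 50.0}

def classify_mode(avg_speed_kmh):
    # Pick the mode whose typical speed is closest to the observation.
    return min(CENTROIDS, key=lambda mode: abs(CENTROIDS[mode] - avg_speed_kmh))

print(classify_mode(4.2))    # walking
print(classify_mode(47.0))   # driving
```

A production classifier would use many more features (accelerometer, GPS jitter, stop frequency), but the nearest-centroid idea is the same.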
The second, and very important, piece of technology we are working on is privacy by design. One hundred percent of everything we do at Snips is either running locally on the user's smartphone or using a form of cryptography called homomorphic encryption, which allows us to compute on encrypted data directly, without ever decrypting it. Snips will never, ever have access to the user data. I think this was, and remains, one of the biggest scientific and technical challenges we had to crack.
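To show what "computing on encrypted data" means in principle, here is a classic toy demonstration: unpadded ("textbook") RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is not the scheme Snips uses, and textbook RSA with tiny primes is insecure; it only illustrates the homomorphic property.

```python
# Tiny textbook-RSA parameters, demo only (insecure).
p, q = 61, 53
n = p * q                  # modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(6), encrypt(7)
# A server holding only ciphertexts can multiply them...
c_prod = (c1 * c2) % n
# ...and the owner of the private key recovers the product of plaintexts.
assert decrypt(c_prod) == 6 * 7
print(decrypt(c_prod))     # 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what makes general computation on encrypted data possible.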
At what state is your company?
Snips is three years old, so the technology is quite mature. The product has been in beta for a couple of months now, tested with a few thousand people to gather insights and early metrics. Now we feel more confident about it, so the product will officially launch in May 2016.
Can you tell a story about something that one of your customers has done?
One of the first things we did at Snips, for iOS users, was to analyze their location patterns and predict their next destination. For example, we can infer at what time you're most likely to go home from work, or where you like to go on weekends to have a drink. Once we know this, we can directly suggest actions, like taking an Uber or seeing the location on Google Maps. I think the predictive part of Snips was one of the first things that people really liked, because it created this magical moment of serendipity.
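A minimal version of the pattern above can be sketched as a frequency model: count where a user goes for each (day type, hour) bucket, and predict the most common destination. The bucketing, place names and API are illustrative assumptions, not Snips' actual model.

```python
from collections import Counter, defaultdict

class NextPlacePredictor:
    def __init__(self):
        # (day_type, hour) -> Counter of destinations seen in that bucket
        self.counts = defaultdict(Counter)

    def observe(self, day_type, hour, place):
        self.counts[(day_type, hour)][place] += 1

    def predict(self, day_type, hour):
        bucket = self.counts[(day_type, hour)]
        # Most frequent destination for this context, or None if unseen.
        return bucket.most_common(1)[0][0] if bucket else None

model = NextPlacePredictor()
for _ in range(5):
    model.observe("weekday", 18, "home")     # usually heads home at 6 pm
model.observe("weekday", 18, "gym")
for _ in range(3):
    model.observe("weekend", 20, "bar")      # weekend-evening drinks

print(model.predict("weekday", 18))   # home
print(model.predict("weekend", 20))   # bar
```

Given such a prediction, the app can pre-fill the suggested action (a ride, a map view) before the user asks.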
For Android users, there is something else we do which is really pretty cool. Say you and I are having a conversation and we're talking about going for dinner at a restaurant. We're doing this in natural language, right? So, if you trigger Snips at that point, we can analyze the contents of your screen, analyze the language, figure out the place you're talking about and what that place is, and pull up the apps on your phone, as well as reviews, that you can use. I think the convenience and ease of using Snips is something that people really like.
Just to be clear, you said that all of the processing is occurring on the phone?
How do you go about improving the AI in Snips? What's your research program?
That's a very good question, because research is a huge part of our company culture. Essentially, there are two things we do. First, every model starts with a model trained offline, which we ship to the device as a base model. That base model then has components which learn online from the individual user's everyday behaviour. Think of it as two layers: a base model, and then a very individual model that gets better as you use the product. The second thing we do, for new research initiatives, is create research programs where people can contribute data to our efforts so that we can experiment with new things. At no point are we actually using data from the product itself, because, as I've mentioned, Snips is private by design. The only data accessible to us is what people give voluntarily.
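The two-layer setup described above can be sketched as a frozen base model plus a per-user component updated on-device. Here, a toy logistic model's prediction combines fixed base weights (trained offline) with personal weights adjusted by online gradient steps; the features, weights and learning rate are invented for the example, not Snips' real architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class PersonalizedModel:
    def __init__(self, base_weights):
        self.base = list(base_weights)             # frozen, trained offline
        self.personal = [0.0] * len(base_weights)  # learned on-device

    def predict(self, x):
        # Effective weight is base + personal for each feature.
        z = sum((b + p) * xi for b, p, xi in zip(self.base, self.personal, x))
        return sigmoid(z)

    def update(self, x, label, lr=0.5):
        # One SGD step on the personal component only; the base stays fixed.
        err = self.predict(x) - label
        self.personal = [p - lr * err * xi for p, xi in zip(self.personal, x)]

model = PersonalizedModel(base_weights=[0.1, -0.2])
x = [1.0, 1.0]                  # e.g. (has_meeting, is_evening)
before = model.predict(x)
for _ in range(50):
    model.update(x, label=1.0)  # this user always takes the action
after = model.predict(x)
print(before < after)           # personalization raised the score
```

The base model gives everyone a sensible starting point, while the personal layer captures habits the offline training could never see.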
When you talk about your company do you talk about the AI or do you focus on the customer problem that you're solving?
AI is a tool that we use to solve user problems. Our real goal is to create interfaces between people and technology that are so smart and integrated that you don't even have to think about them anymore. This is the problem we're solving. Before anything else, this is really a user problem. The AI is just the way we do it right now.
Why do you think there's so much excitement about AI and startups now?
I think the excitement partially comes from the fact that, for the first time in history, we've got huge amounts of data we can use to train computers. Despite popular opinion, I personally don't think there was any kind of technological breakthrough that created this hype around AI. It's all a question of having access to data and newly found ways to process it.
How do you stay up to date on what's happening with AI? Are there any websites that you go to or any authors or academics that you track?
Well, for starters, I read your blog. In general, for me, there are two ways to keep up to date on AI news: 1) follow the more general discussion around it, which is not necessarily something I find to be as impactful, and 2) stay in touch with the community and the technologies around AI. There is no particular blog or website that covers it all; I think it's more the open discussion you see on social media that gives you great insights. Sources like Twitter, Hacker News, Reddit, and all the new blogs and techniques that are published are great ways to keep up to date. As a team, we also read every single paper that comes out on the topic, as you can imagine. But one thing I find extremely useful is that, internally at Snips, we have weekly meetings where people present new papers or techniques they've been reading about or working on to the entire team.
How do you go about finding AI talent?
As a French company, we've been extremely fortunate in terms of talent acquisition. The number of applicants we get daily is simply unbelievable for a company of our size. When we hire, we normally look for three things in particular: 1) The first and most important factor is cultural fit. If you're going to be working with our team, you need to be one hundred percent convinced that what you're doing is what you want to be doing. You need to feel strongly about the vision of making technology disappear.
2) The second thing we're looking for is people who have experience applying theoretical techniques to real products. So, if you've never actually built a product using AI, if you've only done the Coursera class and never built anything practical with that knowledge, it's probably not going to work. Making a product with AI that works is so much harder than just running a couple of examples to get something training. We're looking for people who have experience working on real products.
3) And finally, we are looking for candidates who have the curiosity and drive to constantly learn new techniques, and the autonomy to decide how to apply them. We don't want people to just do whatever we tell them to. We want people who can inspire other team members to learn new techniques because they're so passionate about it themselves.
So, whether you come from a mathematical or a pure science background, it doesn't matter, as long as you have those three qualities.
Make a prediction about where Snips will be five years from now.
We want to embed Snips in every single connected device on the planet to make them truly intelligent. The intelligence of a device should be a consequence of plugging Snips and the device together and analyzing how the user uses that device on a daily basis. For example, if you use your thermostat every day at six pm and set it to a specific temperature, then Snips should learn that and automatically take care of it for you. AI's capabilities will come to far outweigh the friction of connected devices, and when that happens, the friction we perceive as humans will gradually fade away until it disappears completely.