Why and How to Build Self-Optimizing Software with AI - Dev Week SF 2018

AI sounds great, but we just want to build better software. In this talk we aim to cut through the endless hype and jargon to answer the key question of how to build better software using AI. We show how self-optimization, a surprisingly simple and intuitive paradigm, can deliver all the tangible benefits of AI without having to deal with complex and brittle pipelines, algorithms, models, or even data. We present a complete walkthrough and live demonstration as well as many real-world results and comparisons against traditional methodology. Finally, we review various relevant market dynamics, including increasingly AI-driven acquisition channels (search engines, app stores, social feeds, ad networks, etc.), that are rapidly making effective optimization crucial for software products and developers to achieve their full potential in the marketplace.



Sounds good. Hi. Before we start, can I get a quick show of hands?

How many product people do we have here?

You know, product managers, product engineers?

Oh, none? Oh, one? Okay, great.

How about marketing people- people who work on marketing?

No? None?

How about growth people? You know, growth hackers- that kind of thing?

None? Okay, interesting.

How many people here do A/B testing? Okay, a few.

How about other ways of improving metrics, or optimizing metrics?

A couple, okay. All right. Thanks, guys.

Let’s get started.

So my name is Olcan, the C sounds like a J. It’s a Turkish name. I am the founder and CEO at Scaled Inference. And I have been writing code for, well, over 25 years now. About half of that was at Google. I have built websites, mobile apps, search engines, analytics services, and the platforms that power them. So, you know, I like to call myself a developer. I think I have earned that by now. And I was always interested in AI.

From the time I heard about it, when I was 15 or 16, I thought it was very exciting. But I didn’t really get into it until about 10 years ago at Google Research, and then later with Google Brain. And by that time, you know, my perspective on it was that of a developer. How do we use AI to build better software? How does it help us do that? I believe that AI is going to give us a much better future, but it is also going to do that by empowering us to realize our full potential. And that is the mission of my company, Scaled Inference.

So in this talk, there are three parts. First we are going to start with the “Why?” You know, why optimize? Why self-optimization?

And then we are gonna get into the “How?” How do we do self-optimization?

And in the end, we are going to look at some code. We are going to optimize some metrics. So we are really going to dig in by the end.

Well, let me start with the “Why?” So what I would like to do is to walk you guys through what is called a customer funnel. If you build any kind of software product or service, then what you are probably going to need to do is to acquire users from one of these, what you might call “acquisition channels” or “acquisition sources”.

You know that you want people to come to your website from Google. Or, you want people to download your app from the app store, right? But you don’t want them to stop there. You want them to actually complete some kind of signup process, or onboarding process on your mobile app.

And you don’t want them to stop there, either. You want them to actually start using your product, to engage your product- you know, read articles, send messages, whatever it is your product does. And, you want them to keep using the product. You want them to come back the next day, or the next week. And ultimately, your goal is to deliver as much value as you can with as little cost as you can.

None of these levels is optional. If you don’t do well at the deeper levels, you will fail elsewhere in the process. You have to have good metrics across the entire funnel to succeed long term.

Now, a lot of us are probably familiar with optimizing the acquisition layer. We have acronyms for that. You know, search engine optimization, app store optimization, social media optimization-- that kind of thing.

It’s mostly manual, but, we still get the idea of trying to improve our acquisition metrics. What many of us might not realize though, is that these acquisition sources or channels are increasingly preferring to send people to websites or apps that have much better downstream metrics below the acquisition level.

So, Google will prefer to send people to content that is more engaging. If people immediately leave your site, that is going to hurt your Google rankings. Same thing with the app stores: they prefer to send people to download apps that they are going to actually keep on their device, that they are going to complete the onboarding for, and actually come back to the next day.

So, there is an increasing trend for acquisition channels to prefer better downstream metrics. So what we can do is we can start trying to optimize our metrics at the next level down, the activation level. We might start doing some kind of personalization, some kind of A/B testing on our web pages, or mobile apps. And this will help a lot. This will actually not just help metrics at that level, but actually at the level above for the reason I described earlier. Search engines will prefer your website because you are actually engaging your users better. And app stores will prefer your apps because you are actually retaining your users much better.

But, what if we could actually optimize across the entire funnel? What if we could optimize at every level, and in every piece of software that we have that affects our metrics there? And what if that was actually easy to do? That is why we are here.

So let me start with the current state of the art, or current methodology, as I call it sometimes. And it all starts with actually defining and measuring our metrics. We can’t improve our metrics if we don’t even measure them! And this is traditionally called analytics. There are standards around this, around how to track our events and define our metrics.

If you want to measure click-through rates, you are going to have to track the click event, for example. So, this is pretty standard. And of course, we don’t want to stop there, you know. The idea is not just to watch our metrics go up and down; we actually want to do something about it. We want to improve our metrics.

This is what we call experimentation. There are many names for it: A/B testing, split testing, hypothesis testing. But all of those are essentially the same. The idea is that we have two or more variations of our product deployed. Variation A is probably what we are doing now, and variation B is what we are considering switching to if it has better metrics. So we deploy both of those, we measure their metrics. And if B turns out to be better than A, we are going to switch to B. And we just have to be a little bit patient, you know. We want the result to be significant. We do not want to act unless it is a significant result. Typically, we are looking for 90-95% confidence before we act on it. And this makes us feel good, you know. We are not arbitrarily making changes to our software. We are actually looking for evidence before we make changes. It makes us feel good, but not everything is good.
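The significance check just described can be sketched with a standard two-proportion z-test. This is a minimal illustration with made-up numbers, not the tooling any particular A/B platform uses:

```python
import math

def ab_confidence(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that variation B's conversion rate
    beats variation A's, via a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)        # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # normal CDF of z

# Hypothetical numbers: B converted 240/2000 sessions vs A's 200/2000.
confidence = ab_confidence(200, 2000, 240, 2000)    # ~0.98, so we'd deploy B
```

With roughly 98% confidence that B is better, the traditional process says: switch to B.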

You know, maybe our metrics are not improving as much or as quickly as we hoped, right? Or maybe the improvements are not sticking, or not adding up somehow. If your experiments are taking longer and longer, coming back inconclusive or negative, or starting to feel like a waste of time, you might start wondering, “Should we just move quickly based on hunch? Would we be better off?” The problem is not experimentation.

Experimentation, as you know, is why science works. It has hundreds of years of theory behind it, so there is no problem with that. The problem is in trying to optimize or improve our software metrics using experimentation. There are many, many technical problems with that, but I just want to go over three of them.

The first one is that metrics change continuously, constantly. What this means is that the underlying truth that we have a hypothesis about is changing even as we gather evidence for it, okay? This is about as bad as it sounds. And there are many reasons why metrics change, you know. Your user population might change, their behavior might change, and you are actually changing your application, right? That is the whole point of optimizing. We are making changes to our app. So the metrics are always changing. How do we know that the experiments and the conclusions we had from earlier are still valid, right?

Are we going to run those experiments again to make sure? Probably not, right? That sounds terrible, so this is a big, big problem. The second problem is that A/B testing is simply a very poor optimization strategy. What’s a good optimization strategy? Well, it’s one that is explicitly designed to improve your metrics as quickly as possible, okay? A/B testing is definitely not that. I won’t go into the details, but I’ll just show you some data later that compares against A/B testing on real-world examples, okay?

Now, the third thing I want to mention is that A/B tests are generally oblivious to contextual wins and losses. What I mean by that is that in running an A/B test, we are aggregating over groups that may behave very differently. We have to do that because we want to compute averages that have statistical significance to them, right? But concluding that A is better than B on average does not in any way rule out a large subpopulation where A may actually be much worse (that is called a contextual loss), or where B may actually be much better (that is called a contextual win), okay?
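To make the contextual win/loss point concrete, here is a tiny made-up example: B wins on average, yet is a contextual loss for the largest segment, and a per-context choice beats both averages. The segments and rates are invented for illustration only:

```python
# Hypothetical traffic split and per-segment conversion rates.
traffic = {"mobile": 0.7, "desktop": 0.3}
rate_a  = {"mobile": 0.10, "desktop": 0.10}
rate_b  = {"mobile": 0.08, "desktop": 0.20}

def average(rate):
    return sum(traffic[seg] * rate[seg] for seg in traffic)

avg_a, avg_b = average(rate_a), average(rate_b)   # 0.100 vs 0.116: "B wins"
# But B is a contextual loss on mobile (0.08 < 0.10), hidden in the average.
# A contextual policy takes the better action per segment instead:
contextual = sum(s and traffic[s] * max(rate_a[s], rate_b[s]) for s in traffic)  # 0.130
```

Deploying B everywhere on the strength of the average would quietly hurt 70% of the traffic; picking per context does better than either variation alone.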

These are all hidden in the averages in A/B testing and, as developers, I mean, imagine, you know, how much more insight, how much more value you could derive from your work if only we could take advantage of those contextual wins, right? And on the flip side, I mean, the whole point of experimenting was that we wanted to be careful, right? We don’t want to hurt our metrics or users. We wanted to act based on evidence. But the contextual losses actually get worse and worse as we blindly optimize these averages, okay? I mean, imagine how that defeats the whole purpose of experimenting in the first place.

So, these three problems are really big problems. I will just stop there as far as talking about problems. So what do we do about this, you know? Are we just gonna act based on hunch?

No, there is a much better way. And, the much better way that we have found is to take analytics and A/B testing exactly as they are. We do not want to reinvent the wheel there, right? But to introduce automation and machine learning into the process. What kind of automation do we get? Well, we can actually automatically discover and surface the contextual wins and losses that we want to know about. We can automatically find and execute the most effective policies, the most effective optimization strategies for our problem. And we can do all of that with much better statistical rigor than we think we have with A/B testing, okay?

The best part, I think, is that all of this, it turns out, can happen behind the scenes, kind of under the hood of the traditional analytics and A/B testing that we may already know about, or that we may already be doing. If you are already doing analytics and A/B testing today, you probably do not have to do any additional work to get started with this new concept. For the new concept, which we call self-optimizing software, we use this cute little bot icon that we lovingly named Amp.

Amp refers to both the new concept, a new paradigm, and also the new platform, the Amp.ai platform, that makes it very easy for developers. So let me just review the benefits of this. What do we get by doing self-optimization? There are two things.

First of all, as I mentioned this is a much more effective way to optimize our metrics compared to experimentation, right? The other thing, though, is that this is a way to take advantage of AI and machine learning in virtually all of our software for all of our key metrics. I mean, how cool is that, right? Arguably, all we need from machine learning and AI at the end of the day is for our own key metrics to improve, right?

If we could get that, we’re getting all the tangible benefits. And if that is easy to do, we are getting that without all the baggage, okay? And the baggage of machine learning these days under the traditional paradigm, I won’t get into any details, but I'll just point out some papers published by companies like Google about the technical debt of machine learning, the “high-interest credit card” of technical debt, or the 43 rules of practical machine learning. I mean, I think 43 rules of practical anything means that thing is not very practical, right? And at the bottom here we have a quote from a recent major machine learning conference. Ali Rahimi, one of the top researchers in the field, was receiving an award for work that he did 10 years ago. He said that machine learning has turned into alchemy. And this actually stirred a lot of debate in the community, and a lot of self-reflection, and so on.

But as developers, you know, alchemy sounds terrible, right? It sounds like it will be hard to debug. It would be hard to maintain, right? So I don't think any of us would want alchemy in our software. So one of the main benefits is that we get AI in all of our software to improve our key metrics without resorting to alchemy, okay? Let’s focus now on the comparison against experimentation.

So, we keep claiming that it is much more effective. How do we measure that? How do we know that? So this is the kind of chart we like to look at when we look at the effectiveness of an optimization strategy, okay? The way to do that is to fix the amount of time or number of sessions that we give each strategy. One of the strategies here is traditional A/B testing. So we wait until a certain significance is achieved, and then we immediately deploy the better variation to our product, which by the way is very generous to A/B testing because there is usually additional delay. Someone has to look at the data, someone has to deploy the variant, and so on. But let’s actually be very generous and automate all of that away. And what we do with the strategies is we look at how much the metric improves over that period, and we look at the cumulative gains over that period.

The way to think about that is the additional signups, or additional check-outs, or additional conversions, or whatever you're interested in over that period, right? And in every case that we've been able to look at from real-world A/B tests, we found a similar result: there is an enormous inefficiency in trying to optimize your metrics using an A/B test. And I should say, you know, the metric gain is actually like the return on investment. It's actually more important than the metric that’s achieved here. I'll show you in a minute what the metric improvement was, but that’s actually less important. Because you can always invest more time, you know, more effort, more money, or whatever, and get better metrics. But what is the return on that investment you're making? That’s what this chart measures: the gain factor, okay? So another way to visualize the gain factor, you know, is something like this. This is what 1X might look like, and this is what 8.4X would look like in comparison. I think it looks a lot better. And at Scaled Inference, you know, we love to look at this kind of data visualization, because it reminds us of our mission which, like I mentioned at the beginning, is to empower people ultimately to reach their full potential. You know, AI is just the means to that end, right? So let’s look at some more results.
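The idea behind that chart can be sketched in simulation: fix a session budget, run a "test then deploy" A/B strategy against an adaptive policy (a simple epsilon-greedy bandit here, standing in for whatever the platform actually does, which is not disclosed in the talk), and compare cumulative conversions. All rates and numbers are invented:

```python
import random

def run(policy, true_rate, horizon, seed=0):
    """Simulate `horizon` sessions; return cumulative conversions."""
    rng = random.Random(seed)
    counts = {"A": [0, 0], "B": [0, 0]}   # per arm: [conversions, sessions]
    total = 0
    for t in range(horizon):
        arm = policy(t, counts, rng)
        counts[arm][1] += 1
        if rng.random() < true_rate[arm]:
            counts[arm][0] += 1
            total += 1
    return total

def ab_test(split_at):
    # 50/50 alternation until split_at sessions, then commit to the better arm.
    def policy(t, counts, rng):
        if t < split_at:
            return "A" if t % 2 == 0 else "B"
        rate = lambda a: counts[a][0] / max(counts[a][1], 1)
        return "A" if rate("A") >= rate("B") else "B"
    return policy

def epsilon_greedy(eps=0.1):
    # Explore with probability eps, otherwise exploit the current best arm.
    def policy(t, counts, rng):
        if t < 2 or rng.random() < eps:
            return rng.choice(["A", "B"])
        rate = lambda a: counts[a][0] / max(counts[a][1], 1)
        return "A" if rate("A") >= rate("B") else "B"
    return policy

true_rate = {"A": 0.05, "B": 0.20}    # hypothetical conversion rates
gain_ab  = run(ab_test(split_at=5000), true_rate, horizon=10000)
gain_eps = run(epsilon_greedy(), true_rate, horizon=10000)
```

The adaptive policy starts shifting traffic to the better arm almost immediately, while the A/B strategy pays full price for the inferior arm during the whole test phase; the gap between the two totals is the kind of cumulative-gain difference the chart reports.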

Again, in bold is the gain factor. Each of these is a different use case, and the gain factor is what tells you how effective the optimization strategy is. We also report the metrics here, but remember we can always invest more time and effort to improve those metrics further. So that’s not as important. And we don't always have permission to disclose company names, but the one at the bottom, I'd like to tell you a lot more about that. That company is Teespring. Teespring is a major platform for buyers and sellers of custom apparel, including t-shirts. And they are also a major e-commerce business by now. And they will tell you that they owe much of that to the aggressive and extensive A/B testing that they do throughout their site. And they have been doing that for many years.

They wanted to apply self-optimization using our AI platform on their check-out flow. That’s also where they concentrated all of their A/B testing efforts over the years. They felt that it was highly optimized. They had recently hired conversion optimization experts who went in and tried to improve things further, but their conclusion was that they seemed to be at some kind of local optimum: they were not going to make it with small changes; they would need to start making big changes to hopefully get improvements. So, after all of that, they were quite skeptical about trying out a new idea like this. And they have been optimizing, or self-optimizing, I should say, a lot of different variables throughout their site and in the check-out flow. But I want to concentrate on one of them today.

By the way, we call variations actions. It reminds us that this is the part of the user flow that we have control over, okay? And in this case, action A is their current checkout flow, which kind of seems to encourage people to add more things to their cart, okay? And we're introducing an action B to the system that we think encourages people to go directly to check-out more. If your metric is revenue, it is not obvious at all which one of these is better, okay? You can imagine action A being better, because more items would get added to the cart, right? But you can imagine action B being better, because more people might actually finish the check-out flow, right? So it's not obvious, and if you run an A/B test, the A/B test might tell you that on average, you know, maybe A is slightly better, or B is slightly better. And then you might deploy that to all of your users for all of the sessions, and then get a slight improvement.

But what we found when we applied self-optimization is that it actually depends on the context, okay? So it depends on the connection speed, for example. It depends on the device that is being used, the screen size of that device, the time of day, even weekend versus weekday, where the user is coming from, what website or app, you know. So all these things presumably are correlated with how impulsive users are in that particular session. And depending on that, which action wins will change, right? So if you do that, then you are able to achieve much better metrics, and that’s what Teespring did in this use case.

So that about sums up the why, for today. I want to start getting into the how. Like I promised, we are gonna see a demo, and we are going to see some code, and optimize some metrics. But let me give you the big picture a little bit. Now, there are three things that are involved here. There is our software. You know, again, this is not just our websites and mobile apps but our back-end systems, really anything that is involved in our customer funnel. And we have our platform in the middle, the amp.ai platform. And we have people. You know, this is probably mostly developers, but it could be product managers, and marketers, and others, anybody who cares about metrics.

So, the first thing is the platform collects events from our applications. Like I said, this is just traditional analytics. And if you already have instrumentation to collect events using other providers like Google Analytics or Mixpanel or anything like that, we can use it. And, based on those events, we can provide certain insights. Obviously, the insights will depend on how you are using our platform. But even with a minimal integration, you know, a one-line copy/paste integration, we can actually tell you what your key contexts are, and specifically in which contexts you are most underperforming, or under-serving. So Teespring would have found that their checkout flow is actually under-serving slow connections, okay? Maybe they need to encourage those people to check out faster.

So, that is the kind of insight you can get with a very minimal integration. And then you define your metrics, and this is an iterative process. You might start with just a couple of metrics that you think you care about. Over time, you are expected to have more metrics that you care about: some metrics you want higher, some lower, some you just want to keep an eye on because you are not sure yet. Some metrics you want to keep within a certain range; they are sometimes called guardrail metrics. And there is a sense of priority, you know, some metrics you care a lot more about. Like for Teespring, of course it is important that more items are added to the cart. You know, that increases their revenue. But somehow, it seems much more important that users actually finish the check-out process, right? So that metric seems slightly more important than the other metric, but both are important. So that is an iterative process. But as soon as you define your metrics, another iterative process can start.

This is our platform starting to learn and deploy what we call policies. Policies are very simple. They are just a bunch of instructions or rules that tell your application which action to take in which context. They can be randomized, meaning we might tell your application to take either A or B randomly in this context. And that is necessary for our system to explore new actions, right? If you introduce a new action, especially at the beginning, we are going to need to experiment with that. And as soon as the policies start getting deployed, the self-optimization loop starts. But like I said, both of these iterations are important: both the self-optimization loop and the human loop where we refine our metrics. And remember, when we run experiments, we actually cut the optimization loop to look at the data and the results. And that actually hurts our metrics and hurts our users.
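A policy in this sense might be pictured as something like the following. This is purely illustrative, not the platform's actual policy format; it reuses the child/adult contexts and text-style actions from the Hello, World! example:

```python
import random

# Illustrative only: a policy as a list of rules, each mapping a context
# predicate to a probability distribution over actions. The randomized
# entries are how the system keeps exploring while it exploits.
policy = [
    (lambda ctx: ctx.get("userType") == "child", {"fun": 0.9, "regular": 0.1}),
    (lambda ctx: ctx.get("userType") == "adult", {"classic": 0.8, "regular": 0.2}),
    (lambda ctx: True,                           {"regular": 1.0}),  # fallback
]

def decide(ctx, rng=random):
    """Take the first matching rule and sample an action from its distribution."""
    for matches, dist in policy:
        if matches(ctx):
            actions = list(dist)
            return rng.choices(actions, weights=[dist[a] for a in actions])[0]
```

The application just calls `decide` with whatever context it has observed; the learning system's job is to keep replacing this rule table with a better one.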

So instead of doing that, we can let the system just continuously optimize and let the system tell us what it can about our users and about our application, and use that to improve the process by refining our metrics or other things.

The last thing I want to tell you about is that you don’t have to run these iterations on all of your users or all of your sessions. You can start with a subset. We call it the optimized group, or the amped group. And this is very similar to experimenting with a subset of your users or sessions. But instead of experimenting on them, you are actually continuously trying to improve metrics for them, and to learn from them, right? So that sounds a lot better than actually experimenting on them. And even with a small group on which you are optimizing, you can get extremely valuable information.

You can learn about which policies are achieving better metrics. You can learn about what are the contextual wins and losses, and the contexts are also automatically learned. You can learn about which context and which actions seem to matter more, you know, which ones are giving you the bigger wins.

Using all that information, you can then go back to the system and specify better metrics, specify better actions. And it’s not just about, you know, adding new actions, which might be a lot of work. It might be about actually taking actions away, right? If you realize that an action is actually not giving you the metric gains you hoped for, it may not be worth the maintenance cost or the complexity cost of that action. So you might just take it away. And better context: like I said, this is a contextual policy-learning system. So the more context we can provide, the more effective the optimization becomes, right?

Okay, I think that is enough talk for all of us. So I am just gonna jump into a little demo here. We call this the Hello, World! of self-optimizing software. And I hope you guys can see this. Okay, great. So there’s a bunch of things going on here. So let me just quickly explain. On the left, we have our website. I am going to log into the amp console in a moment. On the right, we have an example app that I am going to talk about. And then on the upper right, you have the code for that app, okay? I am going to explain all these things in a moment, but there are three windows going on here.

So this example we are going to look at, the Hello, World! example, is actually on our website under the developers section, okay? So if you go there, you can actually see all the client libraries that we have for different platforms. And yeah, the same example is actually there, and you can run it a bunch of times. And every time you run this example, it is like a new instance of this Hello, World! application. The part that the application controls is the action in the middle. And in this case, we are determining what style to use for Hello, World!. The rest of it, the context and the outcome, we have to simulate, because remember we are just running the app on the same device repeatedly. So to give it different contexts, we have to actually simulate that context.

And by running this a bunch of times, you can kind of get a sense of the different contexts that are possible. In this case, there are really only two, adult and child, and you can also see that the action is changing, which is what I was talking about earlier. Self-optimizing software has the ability to vary its behavior depending on the context in order to optimize for outcomes, right? And you can also see outcomes here can be happy or not happy. So it is a very simple example that is meant to illustrate the key concepts and the simple API.

So at the bottom right here, we have another instance of the same app. This one just has a few additional features that I am going to need for this demo. On the top right, as I mentioned, is the code for the app, and it is extremely simple. There are three parts to it that actually correspond to these three sections. First, we observe some context. In this case, there are just two kinds of context: we either observe child or adult. There is a single function call here for that. And next, we decide the action, which is the text style in this case. That happens on this line. There are three possibilities: regular, fun, or classic. And depending on that, we change the style of the Hello, World! text on the page.

At the end, we observe an outcome. If the user is happy, we will observe the user happy event. And this would be, you know, whatever key outcome we have in our application, obviously. You can't always observe whether the user is happy or not; that tends to be tricky. Okay, great. So let me actually log in to our console here, on a special account just for you guys. Just gonna call this “demo for talk”. And it just occurred to me that our booth is using the same account. I hope my guys do not delete the demo. So I created a project. I am just gonna grab the project key here, and let me resize this a little bit. Then I just plug the project key over here into this Hello, World! app down here. And I am just going to run this a few times; let me resize this a little bit more.
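Stepping back, the three-part call pattern of that Hello, World! app (observe context, decide action, observe outcome) might be sketched like this. Note this is a toy stand-in written for illustration; the class, method names, and signatures here are assumptions, not the actual Amp client API:

```python
import random

class ToyAmp:
    """Toy stand-in for an amp-style client, just to show the call shape."""
    def __init__(self):
        self.events = []                      # would be sent to the platform
    def observe(self, name, props=None):
        self.events.append((name, props or {}))
    def decide(self, name, candidates):
        # Until optimization is enabled, simply return the first option:
        # the zero-risk "default action" described in the talk.
        self.events.append((name, {"candidates": candidates}))
        return candidates[0]

amp = ToyAmp()
amp.observe("userType", {"type": random.choice(["child", "adult"])})  # 1. context
style = amp.decide("TextStyle", ["regular", "fun", "classic"])        # 2. action
amp.observe("userHappy")                              # 3. outcome, when it occurs
```

The point of the shape is that the integration is just these three calls; everything else (policies, learning, analytics) happens on the platform side.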

What I am trying to do is run it until I get a happy face, okay? What that means is that the user happy event is submitted to the back-end. And by running it a few times, I can do two things. I can go to the console, first of all. I can look at these sessions as they were received by our back-ends. You know, I see here there was an amp session event which is a standard event that we have with a whole bunch of contexts that we automatically extract, so you do not have to report all of that. And here is the user type event that this Hello, World! application is reporting. And here is the text style that was decided, in this case, regular, and so on. So you can make sure that the integration is working correctly as you expect.

As I mentioned, since we got the happy face there, we can actually go in and define our metric that we want to optimize. Let’s call it happiness! And what we want to do is increase the occurrences of the user happy event, right? So we just select that. We want to maximize it with high priority. We don’t have any other metrics, so let’s just do our best with this one. We will just create that metric.

Now you may notice also that as I run this app, the action is not changing; it is just showing the regular action. And the reason for that is although the code is there for this action to be optimized, we have not yet enabled optimization. And before we enable optimization, all it does is return the first option that it has. We call that the default action. So this makes it very easy and zero-risk to introduce decision points all over your applications. There is no risk, there is very little effort. Whatever you are doing now, you just call that the first option, you give it a name. And then whatever alternatives you want to introduce, you just add them as you go, okay?

So anyway here, since we have already submitted some sessions with this decision point, we can actually go ahead and enable optimization there. So I am just gonna go in, and look at the decisions that were received so far. I see it here as text style, so I am gonna enable that. I can actually take a look at the values it has: regular, fun, and classic.

Great, it looks like what I am trying to optimize so let me select that one. Save that. Okay, good. So I actually authorize the system to start optimizing that decision point. And the next thing I am going to do is allocate some percentage of my users. As I mentioned, you can pick a small percentage if you want to see what happens initially. Here, since this is a demo, I am gonna go with 80%, okay? So that’s that. And that is it.

So at this point, once I start running these sessions, it’s actually going to immediately start learning and immediately start deploying policies to our web application, okay? So one of the features here is that I can just click a button, and this is going to repeatedly run these sessions at a high rate. And all we have to do now is wait, maybe 30 seconds hopefully, and see the results in the console.

And what’s happening right now is, as I mentioned, every new instance of the app is actually benefiting from all the prior instances. You know, which might have been on different devices, for different users, different times, and so on. So, yeah, actually we are starting to get some results here. I’ll give it some more time. You can see the app actually experimenting with different variations now. You know, it’s not always showing the same thing. And what we are really interested in is what is the metric that the app is achieving on user happiness- the frequency of sessions where user happy event occurs.

Alright, there you go. So we are starting to get some meaningful numbers here. You can see on the right side... on the right edge of the left side, sorry. You can see the overall... actually, let me switch to Amp. You can see the amped policy is starting to do better than the baseline already. That was about 30 seconds or so, I think. You can see it’s already up over 32% over this time period. By the way, this happened in real time, okay? The platform did not know anything about our app before. And not only is the average much better right now, but we actually found two contexts: user type adult and user type child. And in both of those contexts, we are up significantly. On the adult context we are up 30%, on the child context we are up 40%, right?! So not only are we optimizing the average, but we see these important segments, and we are optimizing those individual segments. Now I’ll pause this at this point. And so in this case, we were able to just jump right into optimization because in this example, we already had the actions available. You know, the regular, fun, and classic. If you don’t have the actions available to you yet, or you don’t want to optimize them yet, then you can still get started under the analyze section of the console.

So what the Analyze section can do is tell you not only your average metric, in this case happiness at about 52 on average, but it can actually automatically discover these key contexts for you. This is what I was referring to earlier as learned key contexts. It can tell you that you have two contexts, child and adult, and that you are significantly underperforming on the child segment while doing relatively well on the adult segment. So this is new information you can get without even introducing any actions to your system, just by reusing existing instrumentation you may have.
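At its simplest, discovering underperforming contexts means splitting the metric by each context attribute and flagging segments that fall well below the overall average. The data and attribute names here are hypothetical; the real platform presumably does this with proper statistical significance checks:

```python
# Hypothetical instrumented sessions: each has a context attribute and
# a 0/1 happiness outcome (whether the "user happy" event fired).
sessions = [
    {"user_type": "adult", "happy": 1},
    {"user_type": "adult", "happy": 1},
    {"user_type": "adult", "happy": 1},
    {"user_type": "child", "happy": 0},
    {"user_type": "child", "happy": 1},
    {"user_type": "child", "happy": 0},
]

def segment_rates(sessions, key):
    """Average happiness per value of a context attribute."""
    counts = {}
    for s in sessions:
        n, total = counts.get(s[key], (0, 0))
        counts[s[key]] = (n + 1, total + s["happy"])
    return {value: total / n for value, (n, total) in counts.items()}

overall = sum(s["happy"] for s in sessions) / len(sessions)
rates = segment_rates(sessions, "user_type")
# Segments well below the overall average are optimization candidates.
underperforming = [v for v, r in rates.items() if r < overall]
print(underperforming)  # → ['child']
```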

So that's our demo. I'm just going to go back here. And thank you for listening; that was actually the whole talk. Please visit us at either amp.ai or scaledinference.com. We are hiring for all positions, and don't forget to request your free account. These are not limited in any way for a while, and it's first-come, first-served. Thanks. Any questions?

Q & A

Man: Is there a number of runs you have to do, in general, to get enough data?

Olcan: Sir, can you speak up a little bit?

Man: About how many runs do you need to get the data?

Olcan: Oh yeah, that's a great question. So: how many runs, how many sessions, are needed before the metric can improve? That depends primarily on how impactful your actions are. If you have two actions and one of them has a much better metric in some context, let's say 20% or 30% better, that is much easier to discover and exploit. That might take only a few hundred runs or sessions. But if your actions are not impactful, at least contextually, so they only differ by a few percent, you are going to need a lot more data. You might be familiar with this from A/B testing, where you would need a larger sample. And you won't know how impactful they are before starting out. But we find that people are often surprised, especially contextually: there is usually some context where an action is actually much better.
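The dependence on effect size can be made concrete with the standard A/B-testing rule of thumb: for roughly 80% power at 5% significance, a two-arm test needs about 16·p(1−p)/δ² sessions per arm, where p is the baseline rate and δ the absolute difference to detect. This is a textbook approximation, not something from the talk:

```python
def sessions_per_arm(baseline_rate, relative_lift):
    """Rough rule of thumb (~80% power, 5% significance) for sessions
    needed per variant to detect a given relative lift in a rate."""
    p = baseline_rate
    delta = p * relative_lift          # absolute difference to detect
    return 16 * p * (1 - p) / delta ** 2

# A 30% lift on a 0.5 baseline is cheap to detect...
big = sessions_per_arm(0.5, 0.30)     # ≈ 178 sessions per arm
# ...while a 3% lift needs 100x more data: sample size scales as 1/δ².
small = sessions_per_arm(0.5, 0.03)   # ≈ 17,778 sessions per arm
```

This matches the answer above: impactful actions show up within a few hundred sessions, while few-percent differences need orders of magnitude more.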

Man: Thank you.

Olcan: Go ahead.

Man 2: So, yeah. I have a question. Between context and actions, and the other things you mentioned, there is a really large sample space that you are optimizing over.

Olcan: That's right.

Man 2: Not to mention the difficulty in statistical inference.

Olcan: Yeah.

Man 2: 95% confidence-

Olcan: Yeah.

Man 2: -like, going over-

Olcan: Right.

Man 2: -hundreds of different combinations-

Olcan: That’s right.

Man 2: What sort of methods do you use? Did you-?

Olcan: Yeah. So the question was: with all the contexts we have, and all the actions we might have, the search space, the optimization space, is huge. How do we deal with that?

And the answer is that this is where we actually transition from statistics, where you can get guarantees just from your training data, into machine learning, where you have to set aside some testing or hold-out data, and do things like cross-validation or other forms of testing your hypotheses. You have to be careful about how you explore that space. In general, this type of system has to be quite agnostic to the specific methods used. It needs to be able to explore a wide range of strategies and just evaluate its performance on hold-out or test data. So that is what makes this a machine learning problem and not just a statistical problem. Any other questions? Go ahead.
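The hold-out idea can be sketched in a few lines: select the best-looking strategy on a training split, then report its performance on data it never saw, since the training score of a strategy chosen from a large space is optimistically biased. Everything here (action names, rates) is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical logged sessions: (action shown, happy outcome).
actions = ["regular", "fun", "classic"]
true_rates = {"regular": 0.50, "fun": 0.55, "classic": 0.45}
logs = [(a, random.random() < true_rates[a])
        for a in actions for _ in range(500)]
random.shuffle(logs)

# Hold out half the data BEFORE selecting a strategy.
train, test = logs[:750], logs[750:]

def rate(data, action):
    """Observed happiness rate of one action in a data split."""
    outcomes = [happy for a, happy in data if a == action]
    return sum(outcomes) / len(outcomes)

best = max(actions, key=lambda a: rate(train, a))
# The honest estimate of the chosen strategy comes from the hold-out
# set, not from the training data it was selected on.
print(best, rate(train, best), rate(test, best))
```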

Man 3: Does your team offer licenses for self-hosting of the service, or is it entirely a service model?

Olcan: Yes, that's a great question. We just launched the platform a couple of weeks ago, and this is actually the first public presentation for you guys. We are open to anything at this point; we are very flexible about licensing. Please just reach out to us.

Man 3: For things specifically around concerns like user privacy, and the proliferation of user data out into...

Olcan: Right, yes. That is definitely something I would love to talk to you about.

Any other questions? No? All right. Thanks, everyone.