A Few Useful Things to Know About Machine Learning


Contents

• Key Insights
• Basic Definitions
• Learning = Representation + Evaluation + Optimization
• It's Generalization that Counts
• Data Alone May Not Be Enough
• Overfitting has Many Faces
    • Powerful versus simple
• Intuition Fails in High Dimensions
• Theoretical Guarantees Are Not What They Seem
• Feature Engineering Is The Key
• More Data Beats a Cleverer Algorithm
• Two Types of Learners
• Learn Many Models, Not Just One
    • Model ensembles
    • Bayesian model averaging
• Simplicity Does Not Imply Accuracy
• Representable Does Not Imply Learnable
• Correlation Does Not Imply Causation


Key Insights

 

Basic Definitions

 

Learning = Representation + Evaluation + Optimization

The accompanying table shows common examples of each of these three components. Of course, not all combinations of one component from each column of the table make equal sense. For example, discrete representations naturally go with combinatorial optimization, and continuous ones with continuous optimization. Nevertheless, many learners have both discrete and continuous components, and in fact the day may not be far when every single possible combination has appeared in some learner!
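As a concrete illustration of the three components (my own sketch, not an example from the paper), here is a minimal logistic-regression learner: the representation is a hyperplane (a weight vector and bias), the evaluation function is the negative log-likelihood, and the optimizer is plain gradient descent. The toy data and hyperparameters are made up.

```python
import numpy as np

# Representation: a hyperplane, i.e. a weight vector w and bias b.
# Evaluation: average negative log-likelihood (cross-entropy).
# Optimization: batch gradient descent.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, b, X, y):
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def fit(X, y, lr=0.1, steps=1000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)   # gradient of the evaluation function
        grad_b = np.mean(p - y)
        w -= lr * grad_w                  # optimization step
        b -= lr * grad_b
    return w, b

# Toy data: two Gaussian blobs (made up for the sketch).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = fit(X, y)
print("learned hyperplane:", w, b)
print("training loss:", neg_log_likelihood(w, b, X, y))
```

Swapping any one column of the table (say, gradient descent for a combinatorial search, or log-likelihood for squared error) gives a different learner built from the same template.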

 

It's Generalization that Counts

The fundamental goal of machine learning is to generalize beyond the examples in the training set. This is because, no matter how much data we have, it is very unlikely that we will see those exact examples again at test time.

Contamination of the test data (for example, by using it to tune parameters) can be mitigated by holding it out and instead running cross-validation on the training set, for example leave-one-out (LOO) or k-fold cross-validation.
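A minimal sketch of both procedures, assuming scikit-learn is available; the data and the choice of logistic regression are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression()

# K-fold: split the training data into k folds; each fold serves once as a
# held-out set while the model is trained on the remaining k-1 folds.
kfold_scores = cross_val_score(
    model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Leave-one-out: the extreme case where each fold is a single example.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())

print("5-fold accuracy:", kfold_scores.mean())
print("LOO accuracy:", loo_scores.mean())
```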

 

Data Alone May Not Be Enough

Every learner must embody some knowledge or assumptions beyond the data it is given in order to generalize beyond it. The most useful learners in this regard are those that do not just have assumptions hardwired into them, but allow us to state them explicitly, vary them widely, and incorporate them automatically into the learning.

 

Overfitting has Many Faces

If we don't have the necessary data we run the risk of just hallucinating a classifier (or parts of it) that is not grounded in reality, and is simply encoding random quirks in the data. This problem is called overfitting.

Understanding overfitting by decomposing generalization error into bias and variance can be helpful. The paper visualizes this with an analogy to throwing darts at a board: bias means the darts consistently land off-target in the same direction, variance means they scatter widely around the target.

Bias: the tendency to consistently learn the wrong thing.
Variance: the tendency to learn random things irrespective of the real signal.

… beam search has lower bias than greedy search, but higher variance, because it tries more hypotheses. Thus, contrary to intuition, a more powerful learner is not necessarily better than a less powerful one. Figure 2 illustrates this. Even though the true classifier is a set of rules, with up to 1,000 examples naive Bayes is more accurate than a rule learner. This happens despite naive Bayes’s false assumption that the frontier is linear!

Powerful versus simple
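A hypothetical experiment in the spirit of Figure 2: compare a simple, high-bias learner (naive Bayes) with a more powerful one (an unpruned decision tree standing in for a rule learner) as the training set grows, on data whose true classifier is a set of rules. The dataset, noise level, and learners are all stand-ins; where (and whether) the crossover happens depends on the data.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_rule_data(n):
    X = rng.integers(0, 2, size=(n, 10))
    # True classifier is a set of rules (not linear):
    y = ((X[:, 0] & X[:, 1]) | (X[:, 2] & (1 - X[:, 3]))).astype(int)
    flip = rng.random(n) < 0.10            # flip 10% of labels to simulate noise
    return X, np.where(flip, 1 - y, y)

X_test, y_test = make_rule_data(5000)
for n in [25, 100, 1000, 10000]:
    X_train, y_train = make_rule_data(n)
    nb = BernoulliNB().fit(X_train, y_train)
    tree = DecisionTreeClassifier().fit(X_train, y_train)
    print(n, "naive Bayes:", nb.score(X_test, y_test),
          "tree:", tree.score(X_test, y_test))
```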

 

Cross-validation can be helpful to avoid overfitting but it does not solve all problems.

Other types of overfitting mitigation techniques:

  1. Add a regularization term to the evaluation function (see the sketch after this list).

    • This can, for example, penalize classifiers with more structure, thereby favoring smaller ones with less room to overfit.
  2. Perform a statistical significance test (such as chi-square) before adding new structure.

    • This helps decide whether the distribution of the class is really different with and without the new structure.
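A minimal sketch of option 1: the evaluation function becomes a data-fit term plus a complexity penalty, so classifiers with larger weights score worse. The names and the lambda value are illustrative (this is ridge-style L2 regularization, one of many possible penalties).

```python
import numpy as np

def evaluation(w, X, y, lam=0.1):
    """Squared error plus an L2 penalty on the weights."""
    data_fit = np.mean((X @ w - y) ** 2)   # how well we fit the training data
    complexity = lam * np.sum(w ** 2)      # penalty for larger/more complex models
    return data_fit + complexity

def fit(X, y, lam=0.1, lr=0.01, steps=2000):
    """Gradient descent on the regularized evaluation function."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=100)  # only one feature matters
print(fit(X, y, lam=0.0)[:3])   # unregularized weights
print(fit(X, y, lam=1.0)[:3])   # regularized weights are shrunk toward zero
```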

It is easy to avoid overfitting (variance) by falling into the opposite error of under-fitting (bias). Simultaneously avoiding both requires learning a perfect classifier, and short of knowing it in advance there is no single technique that will always do best.

A common misconception is that overfitting is caused by noise; in fact, severe overfitting can occur even when the data are noise-free.

Multiple testing is closely related to overfitting: conducting many statistical tests increases the chance of a Type I error (a false positive).

For example, a mutual fund that beats the market 10 years in a row looks very impressive, until you realize that, if there are 1,000 funds and each has a 50% chance of beating the market on any given year, it is quite likely that one will succeed all 10 times just by luck.
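The arithmetic behind the example, as a quick check:

```python
# With 1,000 funds, each with an independent 50% chance of beating the market
# in a given year, how likely is it that at least one fund beats the market
# 10 years in a row purely by luck?
p_one_fund = 0.5 ** 10                         # ~0.001 for a single fund
p_at_least_one = 1 - (1 - p_one_fund) ** 1000  # at least one of 1,000 funds
print(p_at_least_one)                          # ~0.62, i.e. quite likely
```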

 

Intuition Fails in High Dimensions

After overfitting, the biggest problem in machine learning is the curse of dimensionality.

Generally speaking, human intuition does not extend past three physical dimensions, so in high-dimensional spaces our intuitions are useless at best and misleading at worst.

Naively, one might think that gathering more features never hurts, since at worst they provide no new information about the class. But in fact their benefits may be outweighed by the curse of dimensionality.

For example, every added feature is another dimension, and the amount of data needed to cover the instance space grows exponentially with the number of dimensions.
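A small illustration of how intuition breaks down (my own sketch; the dimensions and sample size are arbitrary): as dimensionality grows, a random point's nearest and farthest neighbors become almost equally far away, so the notion of "nearby" that many learners rely on loses its meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    X = rng.random((1000, d))          # 1,000 points in the unit hypercube
    q = rng.random(d)                  # a random query point
    dists = np.linalg.norm(X - q, axis=1)
    # The ratio approaches 1 as d grows: all points look equally distant.
    print(f"d={d:4d}  nearest/farthest distance ratio: {dists.min() / dists.max():.3f}")
```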

Fortunately, there is an effect that partly counteracts the curse, which might be called the “blessing of non-uniformity.” In most applications examples are not spread uniformly throughout the instance space, but are concentrated on or near a lower dimensional manifold.

 

Theoretical Guarantees Are Not What They Seem

This section discusses probabilistic guarantees on the chance that a particular classifier is bad, as a function of how much training data is used.

This provides some type of "bound" on how much data is needed.

The author suggests taking these bounds with a grain of salt: they tend to be extremely loose, and for interesting hypothesis spaces the number of examples they require can still grow exponentially with the number of features.

This is explained more fully in the original text, but the following point is the most valuable:

Further, we have to be careful about what a bound like this means. For instance, it does not say that, if your learner returned a hypothesis consistent with a particular training set, then this hypothesis probably generalizes well. What it says is that, given a large enough training set, with high probability your learner will either return a hypothesis that generalizes well or be unable to find a consistent hypothesis. The bound also says nothing about how to select a good hypothesis space. It only tells us that, if the hypothesis space contains the true classifier, then the probability that the learner outputs a bad classifier decreases with training set size. If we shrink the hypothesis space, the bound improves, but the chances that it contains the true classifier shrink also.
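To make the shape of such a bound concrete, here is one standard form for a finite hypothesis space H (the notes do not spell out the exact formula, so treat this as an illustrative stand-in): with probability at least 1 - delta, any hypothesis consistent with n i.i.d. training examples has true error below epsilon, provided n >= (1/epsilon) * (ln|H| + ln(1/delta)).

```python
import math

def sample_size_bound(ln_hypothesis_space_size, epsilon, delta):
    """Examples needed so that, with probability >= 1 - delta, any hypothesis
    consistent with them has true error < epsilon (finite hypothesis space)."""
    return math.ceil((ln_hypothesis_space_size + math.log(1 / delta)) / epsilon)

# A small hypothesis space of 1,000 candidate classifiers:
print(sample_size_bound(math.log(1000), 0.05, 0.05))           # ~200 examples

# All boolean functions of 20 binary features: |H| = 2**(2**20),
# so ln|H| = (2**20) * ln(2), and the bound blows up.
print(sample_size_bound((2 ** 20) * math.log(2), 0.05, 0.05))  # ~14.5 million examples
```

Note how the required n grows only logarithmically in |H|, yet because interesting hypothesis spaces are astronomically large, the resulting numbers can still be far beyond any realistic dataset.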

Another common type of theoretical guarantee is asymptotic: given infinite data, the learner is guaranteed to output the correct classifier.

In practice, we are seldom in the asymptotic regime (also known as “asymptopia”).

 

Feature Engineering Is The Key

Constructing the right features is where most of the effort in a machine learning project goes, so there is ultimately no replacement for the smarts you put into feature engineering.
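A hypothetical sketch of what this looks like in practice: raw records rarely come in a form a learner can use directly, so we construct features that expose the signal. The records and derived features below are made up for illustration.

```python
from datetime import datetime

raw = [
    {"timestamp": "2024-06-03T14:22:00", "price": 19.99, "quantity": 3},
    {"timestamp": "2024-06-08T09:05:00", "price": 4.50,  "quantity": 12},
]

def engineer(record):
    ts = datetime.fromisoformat(record["timestamp"])
    return {
        "hour": ts.hour,                                      # time-of-day effects
        "is_weekend": int(ts.weekday() >= 5),                 # weekday vs. weekend
        "total_spend": record["price"] * record["quantity"],  # interaction feature
    }

print([engineer(r) for r in raw])
```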

 

More Data Beats a Cleverer Algorithm

Clever algorithms are valuable; however, it is often more pragmatic to simply gather more data.

As a rule of thumb, a dumb algorithm with lots and lots of data beats a clever one with modest amounts of it. (After all, machine learning is all about letting data do the heavy lifting.)

This, however, raises the issue of scalability. There is lots and lots of data, but no one has the time to process it all.

This leads to a paradox: even though in principle more data means that more complex classifiers can be learned, in practice simpler classifiers wind up being used, because complex ones take too long to learn.

Part of the reason using cleverer algorithms has a smaller payoff than you might expect is that, to a first approximation, they all do the same.

This is surprising when you consider representations as different as, say, sets of rules and neural networks. But in fact propositional rules are readily encoded as neural networks, and similar relationships hold between other representations. All learners essentially work by grouping nearby examples into the same class; the key difference is in the meaning of “nearby.”
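For instance, the propositional rule "IF x1 AND x2 THEN class = 1" can be written as a single threshold unit with hand-chosen weights; a minimal sketch (my own example, not one from the paper):

```python
import numpy as np

def rule_as_unit(x, weights=(1.0, 1.0), threshold=1.5):
    """Each rule condition gets weight 1; the threshold sits just below the
    number of conditions, so the unit fires only when all conditions hold."""
    return int(np.dot(x, weights) > threshold)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", rule_as_unit(np.array(x)))   # fires only for (1, 1)
```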

Two Types of Learners

Learners can be divided into two major types: those whose representation has a fixed size, like linear classifiers, and those whose representation can grow with the data, like decision trees. Fixed-size learners can only take advantage of so much data, while variable-size learners can in principle learn any function given enough data, though in practice they often do not, because of algorithmic limitations or computational cost. For these reasons, clever algorithms (those that make the most of the data and computing resources available) often pay off in the end, provided you are willing to put in the effort. There is no sharp frontier between designing learners and learning classifiers; rather, any given piece of knowledge could be encoded in the learner or learned from data. So machine learning projects often wind up having a significant component of learner design, and practitioners need to have some expertise in it.

 

Learn Many Models, Not Just One

We have begun to find out that combining many models, rather than relying on a single best one, is often the best way to improve performance.

Model ensembles

Three simple versions are bagging, boosting, and stacking.

Many other techniques exist, and the trend is toward larger and larger ensembles.
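A minimal sketch of bagging, the simplest of the three: train each model on a bootstrap resample of the training data and predict by majority vote. The base learner (a scikit-learn decision tree) and the toy data are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_models=25, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.stack([m.predict(X) for m in models])  # (n_models, n_examples)
    return (votes.mean(axis=0) > 0.5).astype(int)     # majority vote (binary labels)

# Toy usage with made-up data:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
models = bagging_fit(X, y)
print("training accuracy:", (bagging_predict(models, X) == y).mean())
```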

Bayesian model averaging

Model ensembles should not be confused with Bayesian model averaging (BMA)—the theoretically optimal approach to learning.

BMA weights are extremely different from those produced by (say) bagging or boosting: the latter are fairly even, while the former are extremely skewed, to the point where the single highest-weight classifier usually dominates, making BMA effectively equivalent to just selecting it. A practical consequence of this is that, while model ensembles are a key part of the machine learning toolkit, BMA is seldom worth the trouble.

 

Simplicity Does Not Imply Accuracy

This whole section is dedicated to pointing out that simpler ML models are not necessarily more accurate.

However, from a practical perspective, simpler models are often more valuable because we can more easily understand and learn from them.

 

Representable Does Not Imply Learnable

Just because a function can be represented by a learner does not mean the learner can actually learn it, given finite data, time, and memory.

It can be valuable to try different representations and different learners to see which perform better or worse.

Therefore the key question is not “Can it be represented?” to which the answer is often trivial, but “Can it be learned?” And it pays to try different learners (and possibly combine them).

 

Correlation Does Not Imply Causation

The point in the header above is by now a familiar one.

That said, strong correlational findings can point researchers toward new areas of research in which causality can then be tested.

Many researchers believe that causality is only a convenient fiction. For example, there is no notion of causality in physical laws. Whether or not causality really exists is a deep philosophical question with no definitive answer in sight, but there are two practical points for machine learners. First, whether or not we call them “causal,” we would like to predict the effects of our actions, not just correlations between observable variables. Second, if you can obtain experimental data (for example by randomly assigning visitors to different versions of a Web site), then by all means do so.

 

 


Notes by Matthew R. DeVerna