Suppose you’ve got a really nifty way to measure a certain physical property. It could be height, weight, temperature, pressure, and so on. This measurement wows everyone who takes a few moments to get the gist of it.

Due to TANSTAAFL (“there ain’t no such thing as a free lunch”), a measurement this nifty will of course have drawbacks. In this case the drawback is that the measurement takes a pretty long time relative to other measurements of interest.

So what do you do when you want the benefit of the nifty measurement without the cost of waiting for it everywhere you’d like to take it?

Enter interpolation. In mathematics, interpolation “fills in” a function given N or more known values. There are lots of ways to interpolate a function, some of which perform better in some circumstances than others (that is, they more closely approximate the underlying function over the interval of interest). Among them are linear interpolation, quadratic interpolation (or, more generally, polynomial interpolation), piecewise interpolation, and splines. Interpolation itself is a special case of the more general concept of approximation.
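To make the simplest of these concrete, here is a minimal sketch of linear interpolation between known sample points. The measurements are made up purely for illustration, as is the `lerp` helper name:

```python
def lerp(points, x):
    """Linearly interpolate at x from a sorted list of (x, y) points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            # Fraction of the way from x0 to x1, then blend the y values.
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x is outside the interval covered by the known values")

# Hypothetical "expensive" measurements taken at x = 0, 1, and 2.
known = [(0.0, 0.0), (1.0, 2.0), (2.0, 3.0)]

print(lerp(known, 0.5))  # halfway between (0, 0) and (1, 2) -> 1.0
print(lerp(known, 1.5))  # halfway between (1, 2) and (2, 3) -> 2.5
```

Each query costs a cheap arithmetic blend instead of another slow measurement; the price is that you trust the function to behave roughly linearly between samples.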

In a sense, neural networks also perform approximation: they are trained on existing data sets (these are the “known values”) and are later used to produce outputs for inputs where the outputs are not known.

A common theme among all of these approaches to “filling in” functions is the parameter game. They’ve all got parameters, some more than others, and tuning those parameters for a specific application is usually more art than science.
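A small illustration of why the parameter game matters: with polynomial interpolation, even the choice of how many known values to run the polynomial through changes the answer. The sketch below uses the classic Lagrange form, with made-up samples of the underlying function f(x) = x²:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical samples of f(x) = x**2 at x = 0, 1, 2.
samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]

# Using all three points (a quadratic) recovers f exactly at x = 1.5:
print(lagrange(samples, 1.5))       # -> 2.25, which is 1.5**2

# Using only the two nearest points (a line) gives a different answer:
print(lagrange(samples[1:], 1.5))   # -> 2.5
```

Here the “right” choice is obvious because we secretly know f, but in practice you don’t, and picking the degree, the knot placement, or the network architecture is exactly the art the paragraph above is talking about.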
