**Introduction**

There are multiple ways to describe the mathematical model underlying multinomial logistic regression, all of which are equivalent. This can make it difficult to compare different treatments of the subject in different texts. The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have equivalents in the multinomial logit model.

The idea behind all of them, as in many other statistical classification techniques, is to construct a linear predictor function that computes a score for each outcome by linearly combining a set of weights with the explanatory variables (features) of a given observation, using a dot product:

score(**X**_{i}, *k*) = **β**_{k} · **X**_{i}

where **X**_{i} is the vector of explanatory variables describing observation *i*, **β**_{k} is a vector of weights (or regression coefficients) corresponding to outcome *k*, and score(**X**_{i}, *k*) is the score associated with assigning observation *i* to category *k*. In discrete choice theory, where observations represent people and outcomes represent choices, the score is considered the utility associated with person *i* choosing outcome *k*. The predicted outcome is the one with the highest score.
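The scoring rule above can be sketched as follows. All numbers here are invented for illustration: a single observation with two explanatory variables and three possible outcomes.

```python
# Hypothetical feature vector X_i for observation i (values invented).
x_i = [2.0, -1.0]

# One weight vector beta_k per outcome k (values invented).
betas = [
    [0.5, 1.0],   # beta_0
    [-0.2, 0.3],  # beta_1
    [1.1, -0.4],  # beta_2
]

def score(x, beta):
    """Linear predictor: dot product of the weight vector and the features."""
    return sum(b * xj for b, xj in zip(beta, x))

# Score each outcome; the predicted outcome is the one with the highest score.
scores = [score(x_i, beta) for beta in betas]
predicted = max(range(len(scores)), key=scores.__getitem__)
```

Here `scores` comes out to roughly `[0.0, -0.7, 2.6]`, so outcome 2 is predicted.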

The difference between the multinomial logit model and numerous other methods, models, algorithms, etc. with the same basic setup (the perceptron algorithm, support vector machines, linear discriminant analysis, etc.) is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. In particular, in the multinomial logit model, the score can directly be converted to a probability value, indicating the probability of observation *i* choosing outcome *k* given the measured characteristics of the observation. This provides a principled way of incorporating the prediction of a particular multinomial logit model into a larger procedure that may involve multiple such predictions, each with a possibility of error. Without such a means of combining predictions, errors tend to multiply. For example, imagine a large predictive model that is broken down into a series of submodels where the prediction of a given submodel is used as the input of another submodel, and that prediction is in turn used as the input into a third submodel, etc. If each submodel has 90% accuracy in its predictions, and there are five submodels in series, then the overall model has only 0.9⁵ ≈ 59% accuracy. If each submodel has 80% accuracy, then overall accuracy drops to 0.8⁵ ≈ 33%. This issue is known as error propagation and is a serious problem in real-world predictive models, which are usually composed of numerous parts. Predicting probabilities of each possible outcome, rather than simply making a single optimal prediction, is one means of alleviating this issue.
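The conversion from scores to probabilities uses the softmax function, which is the standard mechanism in the multinomial logit model; a minimal sketch (the input scores are invented example values):

```python
import math

def softmax(scores):
    """Map a list of scores to probabilities that sum to 1.
    Subtracting the max score before exponentiating keeps exp()
    numerically stable for large scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example scores for three outcomes (invented values).
probs = softmax([0.0, -0.7, 2.6])
# The probabilities sum to 1, and the outcome with the highest
# score also receives the highest probability.

# The error-propagation arithmetic from the text: five chained
# submodels, each 90% accurate, yield about 59% overall accuracy.
overall = 0.9 ** 5  # ≈ 0.59
```

Because softmax is monotone, picking the most probable outcome always agrees with picking the highest-scoring one; the gain is that the probabilities themselves can be passed downstream instead of a single hard prediction.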

