# Decision Theory

The core idea of probabilistic decision theory in compbio is something like this.

Suppose that you are trying to predict a value for some variable, X. Your performance is being measured with reference to the true value of this variable, X'. The true value is assumed to be supplied as part of some reference/benchmark dataset; of course, for the purposes of prediction, you don't know X'.

The measurement is via some reward function R(X,X') that will be maximized when X=X' and will, in general, give high values if X is "similar to" X'. Of course, the precise definition of "similar to" is quite important and that's why you need decision theory.

Suppose you have a probabilistic model for X'. Let's write this as P(X'). The general idea of decision theory is to choose X as follows:

$$X^* = \operatorname{argmax}_X \sum_{X'} P(X') \, R(X,X')$$

i.e. choose the solution that maximizes your *expected reward*.
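As a minimal sketch of this idea, here is a toy discrete example (the three values, their posterior probabilities, and the reward matrix are all hypothetical, chosen only for illustration). Note the reward gives partial credit for "similar" values:

```python
values = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}   # P(X'): posterior over the true value

# R[(x, xt)] = reward for predicting x when the truth is xt.
# "Adjacent" values earn partial credit 0.6 (an illustrative choice).
R = {("a", "a"): 1.0, ("a", "b"): 0.6, ("a", "c"): 0.0,
     ("b", "a"): 0.6, ("b", "b"): 1.0, ("b", "c"): 0.6,
     ("c", "a"): 0.0, ("c", "b"): 0.6, ("c", "c"): 1.0}

def expected_reward(x):
    """E[R(x, X')] under the posterior P(X')."""
    return sum(p[xt] * R[(x, xt)] for xt in values)

best = max(values, key=expected_reward)
print(best)  # "b": its partial credit from both neighbors beats the mode "a"
```

Here the decision-theoretic choice is "b" even though "a" is the most probable value, because "b" collects partial credit whichever way the truth falls.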

Note that this is not, in general, the same as choosing the maximum likelihood (ML) solution, which is $\operatorname{argmax}_X P(X'=X)$. The only reward function for which the ML solution is the same as the optimal decision-theoretic solution is

$$R(X,X') = \begin{cases} 1 & \text{if } X = X' \\ 0 & \text{if } X \neq X' \end{cases}$$
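Reusing the toy posterior from above, one can check this special case directly: under the 0/1 reward, the expected reward of predicting x collapses to P(X' = x), so maximizing it just picks the mode of the distribution.

```python
p = {"a": 0.5, "b": 0.3, "c": 0.2}   # P(X'), same toy posterior as before

def delta_reward(x, xt):
    """The 0/1 reward: full credit only for an exact match."""
    return 1.0 if x == xt else 0.0

def expected_reward(x):
    # E[R(x, X')] = sum_xt P(xt) * [x == xt] = P(X' = x)
    return sum(p[xt] * delta_reward(x, xt) for xt in p)

best = max(p, key=expected_reward)   # decision-theoretic optimum
ml = max(p, key=p.get)               # maximum likelihood (mode)
print(best == ml)  # True: with 0/1 reward the two criteria coincide
```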

Concrete examples of this can be found in the following alignment papers:

- Holmes & Durbin: Dynamic programming alignment accuracy. *J. Comput. Biol.* 1998;5:493-504.
- Do *et al.*: ProbCons: Probabilistic consistency-based multiple sequence alignment. *Genome Res.* 2005;15:330-40.
- Schwartz & Pachter: Multiple alignment by sequence annealing. *Bioinformatics* 2007;23:e24-9.
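In the alignment setting, the reward is typically the number of correctly aligned residue pairs, and the expected reward is maximized by a Needleman-Wunsch-style dynamic program over posterior match probabilities. The following is only a rough sketch of that idea, not any one paper's algorithm; the posterior matrix here is a made-up toy (in practice it would come from a pair HMM's forward-backward algorithm):

```python
def mea_align(post):
    """Maximum-expected-accuracy alignment sketch.

    post[i][j] = (assumed given) posterior probability that residue i of
    sequence x aligns to residue j of sequence y. Finds the alignment
    maximizing the summed posterior of its matched pairs, with gaps
    scoring zero.
    """
    n, m = len(post), len(post[0])
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + post[i-1][j-1],  # match i-1 : j-1
                          F[i-1][j],                      # gap in y
                          F[i][j-1])                      # gap in x
    # Traceback to recover the matched pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if F[i][j] == F[i-1][j-1] + post[i-1][j-1]:
            pairs.append((i-1, j-1)); i -= 1; j -= 1
        elif F[i][j] == F[i-1][j]:
            i -= 1
        else:
            j -= 1
    return F[n][m], pairs[::-1]

# Toy posterior for a length-2 vs length-3 comparison (hypothetical numbers).
post = [[0.9, 0.1, 0.0],
        [0.1, 0.2, 0.7]]
score, pairs = mea_align(post)
print(score, pairs)
```

With this toy matrix the DP matches residue 0 of x to residue 0 of y and residue 1 of x to residue 2 of y, skipping the low-posterior middle column, for an expected accuracy of 1.6.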

See also

-- Ian Holmes - 03 Oct 2007