Updating probabilities with data and moments
This would correspond to coming up with a really big model and then using Bayes’s theorem given all the data you can think of.
However, this is impossible in practice, so we compress everything we aren't explicitly modelling into a few summary quantities, in much the same way that, in physics, we reduce the whole rest of the universe to just a single damping coefficient or a driving-force term.
The uniform distribution might be justified by another argument (e.g. symmetry).
Suppose we start with some probability distribution q(x), and then learn that, actually, our probabilities should satisfy some constraint that q(x) doesn’t satisfy.
The most common conflict is demonstrated in this short discussion by David MacKay.
The problem posed is the classic one, along these lines (although MacKay presents it slightly differently): given that a biased die averaged 4.5 over a large number of tosses, assign probabilities for the next toss, x.
We need to choose a new distribution p(x) that satisfies the new constraint – but we also want to keep the valuable information contained in q(x).
If you seek a general method for doing this that satisfies a few obvious axioms, ME is it – you choose your p(x) such that it is as close to q(x) as possible (i.e. it minimizes the Kullback–Leibler divergence from q) while satisfying the new constraint.
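For the die problem, starting from a uniform q(x), the ME solution takes the exponential-family form p(x) ∝ exp(λx), with λ chosen so the mean equals 4.5. The sketch below, using only the standard library, finds λ by bisection; the function names and the bisection bracket are illustrative choices, not part of MacKay's presentation.

```python
import math

def mean_of(lam, xs=range(1, 7)):
    # Mean of the tilted distribution p(x) proportional to exp(lam * x)
    weights = [math.exp(lam * x) for x in xs]
    total = sum(weights)
    return sum(x * w for x, w in zip(xs, weights)) / total

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    # The mean is monotone increasing in lam, so bisect on the
    # Lagrange multiplier until the moment constraint is satisfied.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_of(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    weights = [math.exp(lam * x) for x in range(1, 7)]
    total = sum(weights)
    return [w / total for w in weights]

p = maxent_die(4.5)
print([round(pi, 4) for pi in p])
```

The resulting probabilities increase monotonically from face 1 to face 6, tilting just enough mass toward the high faces to shift the mean from the uniform value of 3.5 up to 4.5, while staying as close to uniform as the constraint allows.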