Basically all Markov models have two special properties called the 'Markov assumptions'.
- The model only takes the current state into account when determining transition probabilities, not any previous states.
This is called the 'limited horizon assumption', and Markov models that adhere to it are called 'first-order' models. These are the most useful and by far the most common. However, some texts say that they are the only valid Markov models. I find this dubious, firstly because any 'higher-order' model can be converted into an equivalent first-order model (by expanding the state space), and secondly because lots of people use higher-order models anyway.
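To back up that first point of dubiousness, here's a toy Python sketch (the weather states and probabilities are invented for illustration) of converting a second-order chain into a first-order one by making each new state a *pair* of original states:

```python
# Hypothetical second-order transition table:
# P(next | state two steps back, previous state)
second_order = {
    ("sunny", "sunny"): {"sunny": 0.8, "rainy": 0.2},
    ("sunny", "rainy"): {"sunny": 0.4, "rainy": 0.6},
    ("rainy", "sunny"): {"sunny": 0.7, "rainy": 0.3},
    ("rainy", "rainy"): {"sunny": 0.3, "rainy": 0.7},
}

# Equivalent first-order chain over pair-states: (a, b) -> (b, c).
first_order = {}
for (a, b), dist in second_order.items():
    first_order[(a, b)] = {(b, c): p for c, p in dist.items()}

# The pair-state chain only ever looks one step back, so the
# limited horizon assumption holds again.
print(first_order[("sunny", "rainy")])
# {('rainy', 'sunny'): 0.4, ('rainy', 'rainy'): 0.6}
```

The trick generalises: an nth-order model becomes first-order over n-tuples of states, at the cost of a bigger state space.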
- The transition probabilities of each state do not change with time.
This is called the 'stationary process assumption'. Same deal as the limited horizon assumption: many websites say that this is a necessary criterion for every Markov model, but researchers still use non-stationary models.
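As a toy illustration of what "non-stationary" means here (every name and number below is invented), a two-state chain whose switching probability drifts with time might look like:

```python
import random

# A non-stationary chain: the transition probabilities are a function of
# the time step t, so the chain gets "twitchier" the longer it runs.

def transition_probs(state, t):
    p_switch = min(0.1 + 0.01 * t, 0.9)  # grows with t, capped at 0.9
    other = "B" if state == "A" else "A"
    return {state: 1.0 - p_switch, other: p_switch}

def step(state, t, rng):
    states, weights = zip(*transition_probs(state, t).items())
    return rng.choices(states, weights=weights)[0]

# Run the chain for a few steps.
rng = random.Random(0)
state = "A"
for t in range(20):
    state = step(state, t, rng)
```

At each step the model still conditions only on the current state, so the limited horizon assumption holds even though the stationary one doesn't.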
|This woman is upset because her megaphone is so tiny. She should probably just get a blog.|
A few of the resources I've found on Markov models include these two properties in the definition of what constitutes a valid model. While I'm sure their authors are all more qualified than me, I find it very counterintuitive to treat the assumptions as being so rigid. Just because you change your transition probabilities over time (or look back two states instead of one) doesn't mean you've abandoned the underlying logic of Markov models: predicting future states based on current information.
I would treat both these assumptions as very good guidelines; your model will almost certainly be more elegant and more useful if you conform to them. Also, it seems convenient to assume these properties are true of a Markov model unless told otherwise (like, in an exam). But what if the state sequence you're working with is such that breaking an assumption makes the model better? Of course you should break the assumption, and the resulting model will still be extremely Markovian.
There’s no concrete reason why you couldn’t have transition probabilities be a function of more than one previous state and/or a function of time, if it most accurately describes the system you’re trying to represent.
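For concreteness, here's a minimal Python sketch of that "maverick" version, where the transition distribution depends on the two previous states *and* the time step (the states and every number are my own invention):

```python
def other(state):
    return "B" if state == "A" else "A"

def transition_probs(prev2, prev1, t):
    # Look two states back (higher-order behaviour)...
    base = 0.9 if prev1 == prev2 else 0.6
    # ...and let time erode the tendency to repeat (non-stationary).
    p_repeat = base * (0.99 ** t)
    return {prev1: p_repeat, other(prev1): 1.0 - p_repeat}
```

It breaks both assumptions at once, yet it still just maps what you currently know onto a distribution over next states.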
|I just hope The Man is ready for my maverick views.|
P.S. If you have any thoughts on this I’d love to discuss it with you (leave a comment)!