Algorithm: map if-then-else rules to probabilities
I need an algorithm that transforms game rules (pen-and-paper role playing) into probabilities. The rules are conditional constructs built from if-then-else conditions, made of boolean operators (not, and, or), relational operators (==, >=, <=, <, >), dice rolls, and boolean values.
Example:
    var a = diceroll(d8, d10, d12)  // total of the dice: an 8-sided, 10-sided, and 12-sided die, values added
    var w = true
    var result = ( if (a >= 20) 10.3994
                   else if (a >= 14 and w) 8.23
                   else if (a >= 8 and diceroll(d6) > 3) 5.22
                   else 0 )

This should be transformed programmatically into a formula for the expected average result, like
    var result = diceprobabilitygreaterthan(a,20)*10.3994
               + (diceprobabilitygreaterthan(a,14) - diceprobabilitygreaterthan(a,20))*8.23
               + ..

I know how to map a single relational operator on a single dice roll to a probability (diceprobabilitygreaterthan), and I know how to transform this specific simple example by hand, but I have problems finding a general transformation scheme for a given rule. The hard part of the problem for me is the dependent probabilities (like a>20 ... a>10).
More background:
I know I could use the Monte Carlo method, but I tried it and it's too slow for my use case. The rules are already data structures, so there is no parsing required. The dice may be exploding, meaning a six-sided die that lands on 6 is rolled again and the results are added up, so the maximum roll total is not bounded by a finite number. The rules contain no loop constructs such as while or for; they only form possibly nested if-then-else trees. The boolean and number values in the conditions are constants. The solution may be limited to one dependent probability variable (as in the example), but I'm also interested in the existence of a general solution for any number of dependent variables. This question is a clone of https://math.stackexchange.com/questions/842458/map-if-then-else-to-probability because it was marked off-topic there.
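As a side note on the exploding dice: their support is unbounded, but the tail probability shrinks geometrically, so an exact-fractions PMF can be computed up to a truncation point with a quantifiable omitted mass. A sketch under that assumption (the function name and the truncation parameter are mine, not from the question):

```python
from fractions import Fraction

def exploding_die_pmf(sides, max_explosions=10):
    """PMF of an 'exploding' die: rolling the maximum face means rolling
    again and adding.  We truncate after max_explosions chained maxima;
    the omitted tail mass is exactly (1/sides)**max_explosions."""
    pmf = {}
    p_chain = Fraction(1)   # probability of having rolled only maxima so far
    base = 0                # value accumulated from those maximum faces
    for _ in range(max_explosions):
        for face in range(1, sides):  # any non-maximum face ends the chain
            total = base + face
            pmf[total] = pmf.get(total, Fraction(0)) + p_chain * Fraction(1, sides)
        p_chain *= Fraction(1, sides)  # rolled the maximum: keep going
        base += sides
    return pmf
```

Note that an exploding d6 can never total exactly 6, 12, 18, ..., since a 6 always triggers another roll.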
What you want is to calculate the expected value of the function. This can be done recursively.
I assume you have the rules in a tree-like data structure. The initial call is root.calculateExpectedValue().
There are three kinds of nodes:

- Leaf nodes (that specify an actual value). calculateExpectedValue() should return the value for leaf nodes.
- Variable definitions. These nodes have one child and return child.calculateExpectedValue(). However, they introduce a variable declaration along with a probability mass function. The probability mass functions of all active variables must be passed as a parameter to calculateExpectedValue(). More on probability mass functions below.
- Decisions. These nodes have two children. The probability p of both cases can be calculated, given the probability mass functions of the active variables. These nodes should return p * trueChild.calculateExpectedValue() + (1 - p) * falseChild.calculateExpectedValue(). Furthermore, they have to adjust the probability mass functions of the involved variables.

A probability mass function for a variable defines how likely the variable is to take a given value. For a simple six-sided die it is 1 -> 1/6, 2 -> 1/6, 3 -> 1/6, and so on. The easiest way is to store the function as a dictionary or map.
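The three node kinds above might be sketched as follows (all class and method names are illustrative, not from the question's data structures; conditions are modeled as predicates on a single named variable, and the PMF adjustment for decision nodes is the conditioning step described further down):

```python
from fractions import Fraction

class Leaf:
    def __init__(self, value):
        self.value = value
    def calculate_expected_value(self, pmfs):
        return self.value  # a leaf simply returns its value

class VarDef:
    def __init__(self, name, pmf, child):
        self.name, self.pmf, self.child = name, pmf, child
    def calculate_expected_value(self, pmfs):
        # Add this variable's PMF to the set of active PMFs, then recurse.
        return self.child.calculate_expected_value({**pmfs, self.name: self.pmf})

class Decision:
    def __init__(self, name, predicate, true_child, false_child):
        self.name, self.predicate = name, predicate
        self.true_child, self.false_child = true_child, false_child
    def calculate_expected_value(self, pmfs):
        pmf = pmfs[self.name]
        # Probability that the condition holds for this variable.
        p = sum(prob for v, prob in pmf.items() if self.predicate(v))
        if p == 0:
            return self.false_child.calculate_expected_value(pmfs)
        if p == 1:
            return self.true_child.calculate_expected_value(pmfs)
        # Each branch gets a *new* PMF conditioned on its outcome,
        # renormalized by the branch probability.
        t = {v: prob / p for v, prob in pmf.items() if self.predicate(v)}
        f = {v: prob / (1 - p) for v, prob in pmf.items() if not self.predicate(v)}
        return (p * self.true_child.calculate_expected_value({**pmfs, self.name: t})
                + (1 - p) * self.false_child.calculate_expected_value({**pmfs, self.name: f}))
```

For instance, evaluating the rule "if (a >= 4) 10 else 0" for a plain d6 walks the tree once and yields 5.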
For a diceroll with more than one die, you have to be able to add two probability mass functions (e.g. the pmf of d8 plus the pmf of d10, and later plus the pmf of d12). In order to do so, create a new empty pmf. For each pair of elements from the two input distributions, calculate the resulting sum (element1.value + element2.value) and its probability (element1.probability * element2.probability), and accumulate these into the new pmf.
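The pairwise summing just described is a discrete convolution; a minimal sketch (the function name is mine):

```python
from fractions import Fraction

def add_pmfs(pmf1, pmf2):
    # Distribution of the sum of two independent variables:
    # combine every pair of entries, adding values and
    # multiplying probabilities.
    out = {}
    for v1, p1 in pmf1.items():
        for v2, p2 in pmf2.items():
            out[v1 + v2] = out.get(v1 + v2, Fraction(0)) + p1 * p2
    return out

d6 = {v: Fraction(1, 6) for v in range(1, 7)}
two_d6 = add_pmfs(d6, d6)   # two_d6[7] == Fraction(1, 6)
```

Chaining the calls handles any number of dice, e.g. add_pmfs(add_pmfs(pmf_d8, pmf_d10), pmf_d12).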
Now you can create and modify pmfs for variable declaration nodes. What is still missing is the behavior of decision nodes.
The first thing is to calculate the probability of the decision. That's rather easy: pick the pmf of the variable in question, iterate over its entries, and sum up the probabilities of the entries for which the condition holds.
For the true child, you have to modify the pmf in such a way that the entries for which the condition is false are removed. For the false child, you have to remove the other entries instead. Afterwards, you have to re-normalize the pmf (i.e. divide by the sum of the remaining probabilities). Be sure to create new pmfs for this; you don't want these modifications to interfere with other parts of the tree.
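To make the conditioning concrete, here is the nested rule "if (a >= 5) 10 else if (a >= 3) 6 else 0" for a single d6, worked through by hand with exact fractions (the rule and its payoff numbers are made up for illustration):

```python
from fractions import Fraction

d6 = {v: Fraction(1, 6) for v in range(1, 7)}

# Probability of the outer condition a >= 5.
p_outer = sum(p for v, p in d6.items() if v >= 5)         # 2/6 = 1/3

# The false child sees a new PMF with the a >= 5 entries removed,
# re-normalized by dividing by the remaining mass (1 - p_outer).
false_pmf = {v: p / (1 - p_outer) for v, p in d6.items() if v < 5}

# The inner condition a >= 3 is evaluated against the conditioned PMF;
# this is exactly what handles the dependence between the conditions.
p_inner = sum(p for v, p in false_pmf.items() if v >= 3)  # 2/4 = 1/2

expected = p_outer * 10 + (1 - p_outer) * (p_inner * 6 + (1 - p_inner) * 0)
# expected == Fraction(16, 3), matching the direct computation
# P(a>=5)*10 + P(3<=a<=4)*6 = (2/6)*10 + (2/6)*6.
```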
You could also propagate the cumulative probability down to the leaf nodes. However, that is not necessary for calculating the expected value.