Thursday, 15 August 2013

functional programming - How does term-rewriting based evaluation work?

The Pure programming language is apparently based on term rewriting, instead of the lambda calculus that traditionally underlies similar-looking languages.

...what qualitative, practical difference does this make? In fact, is there a difference in the way it evaluates expressions?

The linked page provides a lot of examples of term rewriting being useful, but it doesn't describe what it does differently from function application, except that it has rather flexible pattern matching (and pattern matching as it appears in Haskell and ML is nice, but not a fundamental evaluation strategy). Values are matched against the left side of a definition and substituted into the right side - isn't that just beta reduction?

The matching of patterns, and substitution into output expressions, superficially looks a bit like syntax-rules to me (or even the humble #define), but the main feature of those is that they happen before rather than during evaluation, whereas Pure is fully dynamic and there is no obvious phase separation in its evaluation scheme (and in fact Lisp macro systems have otherwise made a big noise about how they are not different from function application). Being able to manipulate symbolic expression values is cool'n'all, but it also seems like an artifact of the dynamic type system rather than something core to the evaluation strategy (pretty sure you could overload operators in Scheme to work on symbolic values; in fact you can even do it in C++ with expression templates).

So what is the mechanical/operational difference between term rewriting (as used by Pure) and traditional function application as the underlying model of evaluation, given that substitution happens in both?

Term rewriting doesn't have to involve function application; languages like Pure emphasise that style because a) beta-reduction is simple to define as a rewrite rule and b) functional programming is a well-understood paradigm.
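
To make (a) concrete, here is a minimal Haskell sketch, assuming a made-up Term type: beta-reduction expressed as a single rewrite rule over syntax trees (capture-avoiding renaming is glossed over for brevity):

    data Term = Var String
              | Lam String Term
              | App Term Term
              deriving Show

    -- Naive substitution: replace free occurrences of v with arg.
    subst :: String -> Term -> Term -> Term
    subst v arg (Var x)
      | x == v    = arg
      | otherwise = Var x
    subst v arg (Lam x body)
      | x == v    = Lam x body              -- v is shadowed below this binder
      | otherwise = Lam x (subst v arg body)
    subst v arg (App f a) = App (subst v arg f) (subst v arg a)

    -- The beta rule as one rewrite step: (\v. body) arg ~> body[v := arg]
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam v body) arg) = Just (subst v arg body)
    betaStep _                      = Nothing

For example, betaStep (App (Lam "x" (Var "x")) (Var "y")) gives Just (Var "y").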

A counter-example would be the blackboard or tuple-space paradigm, which term-rewriting is also well-suited for.
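
As an illustration of that paradigm (a sketch only, with made-up Fact constructors, not Pure syntax): here the rules rewrite a whole bag of facts rather than a single applied function:

    data Fact = Account String Int | Deposit String Int
      deriving Show

    -- One rewrite step over the bag: an Account plus a matching Deposit
    -- are rewritten into a single updated Account.
    step :: [Fact] -> Maybe [Fact]
    step facts = case matches of
        []              -> Nothing
        ((n, b, a) : _) -> Just (Account n (b + a)
                                   : removeFirst (isDeposit n)
                                       (removeFirst (isAccount n) facts))
      where
        matches = [ (n, b, a) | Account n b <- facts
                              , Deposit n' a <- facts, n == n' ]
        isAccount n (Account m _) = m == n
        isAccount _ _             = False
        isDeposit n (Deposit m _) = m == n
        isDeposit _ _             = False

    removeFirst :: (a -> Bool) -> [a] -> [a]
    removeFirst _ [] = []
    removeFirst p (x:xs)
      | p x       = xs
      | otherwise = x : removeFirst p xs

    -- Apply steps until no rule matches.
    run :: [Fact] -> [Fact]
    run fs = maybe fs run (step fs)

For example, run [Account "alice" 100, Deposit "alice" 5, Deposit "alice" 10] rewrites down to [Account "alice" 115].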

One practical difference between beta-reduction and full term-rewriting is that rewrite rules can operate on the definition of an expression, rather than just its value. This includes pattern-matching on reducible expressions:

    -- functional style
    map f nil = nil
    map f (cons x xs) = cons (f x) (map f xs)

    -- compose f and g before mapping, to prevent traversing xs twice
    result = map (compose f g) xs

    -- term-rewriting style: spot double-maps before they're reduced
    map f (map g xs) = map (compose f g) xs
    map f nil = nil
    map f (cons x xs) = cons (f x) (map f xs)

    -- double maps are automatically fused
    result = map f (map g xs)

Notice that we can do this with Lisp macros (or C++ templates), since they are a term-rewriting system, but this style blurs Lisp's crisp distinction between macros and functions.
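
For comparison, GHC exposes a limited form of this through its RULES pragma; a minimal sketch (myMap and the rule name are made up for illustration, and the rule fires at compile time rather than dynamically as in Pure):

    {-# NOINLINE myMap #-}  -- keep myMap from inlining before the rule can match
    myMap :: (a -> b) -> [a] -> [b]
    myMap _ []     = []
    myMap f (x:xs) = f x : myMap f xs

    -- Spot double-maps and fuse them, exactly as in the example above.
    {-# RULES
    "myMap/myMap" forall f g xs. myMap f (myMap g xs) = myMap (f . g) xs
      #-}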

CPP's #define isn't equivalent, since it's not safe or hygienic (syntactically-valid programs can become invalid after pre-processing).

We can also define ad-hoc clauses for existing functions as we need them, e.g.

    plus (times x y) (times x z) = times x (plus y z)
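
A minimal sketch of how that clause reads as a rule over symbolic terms, assuming a made-up Expr type (Haskell needs an explicit equality guard, since its patterns cannot mention x twice):

    data Expr = Var String
              | Plus Expr Expr
              | Times Expr Expr
              deriving (Eq, Show)

    -- x*y + x*z ~> x*(y + z), when the left factors coincide
    factor :: Expr -> Expr
    factor (Plus (Times x y) (Times x' z))
      | x == x' = Times x (Plus y z)
    factor e = e

For example, factor (Plus (Times (Var "x") (Var "y")) (Times (Var "x") (Var "z"))) gives Times (Var "x") (Plus (Var "y") (Var "z")).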

Another practical consideration is that rewrite rules must be confluent if we want deterministic results, i.e. the same result regardless of the order in which we apply the rules. No algorithm can check this for us (it's undecidable in general) and the search space is far too big for individual tests to tell us much. Instead we must convince ourselves that our system is confluent by some formal or informal proof; one way is to follow systems which are already known to be confluent.
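
A tiny sketch of the problem, assuming a made-up Rule type: two overlapping rules reach different normal forms depending on which is tried first, so the system is not confluent:

    type Rule t = t -> Maybe t

    -- Apply the first matching rule repeatedly until none fires.
    normalise :: [Rule t] -> t -> t
    normalise rules t = case [ t' | r <- rules, Just t' <- [r t] ] of
        (t' : _) -> normalise rules t'
        []       -> t

    ruleA, ruleB :: Rule String
    ruleA "f" = Just "a"
    ruleA _   = Nothing
    ruleB "f" = Just "b"
    ruleB _   = Nothing

    -- normalise [ruleA, ruleB] "f" == "a"
    -- normalise [ruleB, ruleA] "f" == "b"   -- the result depends on rule order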

For example, beta-reduction is known to be confluent (via the Church-Rosser theorem), so if we write all of our rules in the style of beta-reductions then we can be confident that our rules are confluent. Of course, that's exactly what functional programming languages do!

Tags: functional-programming, evaluation, rewriting
