I used Evrete at my last job to run insurance quoting rules! Good software, very straightforward to get going, although in my experience “lightweight” seemed a somewhat relative term (though I don’t think we were throwing all that much data at it).
I’m curious about how this gets used in practice. How many rules did you have? How many facts? Did you use stateless or stateful sessions? That is, did you load some data in a session, fire it, and throw it away, or was there a reason to keep a persistent session? Did your actions have side effects or mostly tweak fields in your data? Did you have a lot of rules/actions that “chained”, like when you update an object it triggers new rule evaluations? How was debugging? Did you use the Java DSL or the text files I saw in one of the sample projects? Did you have a separate deployment flow for your Evrete code? If so, did it help you iterate faster?
Twentyish, IIRC (this was a pretty specialized insurance process), most of them doing quite a bit, including applying tabular lookups stored as facts.
How many facts?
At minimum as many as rules, with no ceiling. For example, when quoting a customer with multiple locations, each location represented many facts; we noticed a heavy memory footprint with many locations, but I can’t give a useful threshold here.
Did you use stateless or stateful sessions? That is, did you load some data in a session, fire it, and throw it away, or was there a reason to keep a persistent session?
Stateless: putting information in the hopper, finding a resolution, and storing the output.
Did your actions have side effects or mostly tweak fields in your data?
The latter
Did you have a lot of rules/actions that “chained”, like when you update an object it triggers new rule evaluations?
I found it more usefully framed as effects rippling outward rather than the linear, sequential process implied by “chaining”, but yes, we did: you can’t get final pricing until you’ve figured out deductibles, and you can’t do that without determining limits, et cetera. Lots of rules had conditions like $coverage.deductible != null, for example. I don’t think we had any that would have triggered cascading reevaluation, though.
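For what it’s worth, that ripple can be sketched outside of any rule engine as a plain fixpoint loop. This is not Evrete’s API — every class name, field, and number below is invented for illustration — but it shows the shape: each rule’s guard mirrors a condition like $coverage.deductible != null, and rules are re-evaluated until none fire.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hand-rolled sketch of rules "rippling outward": re-run all rules
// until quiescence. NOT Evrete's API; all names and numbers are made up.
public class RippleDemo {
    static class Coverage {
        Integer limit;       // set by the first rule
        Integer deductible;  // derivable only once limit is known
        Integer price;       // derivable only once deductible is known
    }

    record Rule(Predicate<Coverage> when, Consumer<Coverage> then) {}

    static Coverage quote() {
        Coverage c = new Coverage();
        List<Rule> rules = List.of(
            new Rule(x -> x.limit == null, x -> x.limit = 100_000),
            new Rule(x -> x.limit != null && x.deductible == null,
                     x -> x.deductible = x.limit / 100),
            new Rule(x -> x.deductible != null && x.price == null,
                     x -> x.price = 500 + x.deductible / 2)
        );
        boolean fired = true;
        while (fired) {                  // loop until no rule changes anything
            fired = false;
            for (Rule r : rules) {
                if (r.when().test(c)) {
                    r.then().accept(c);
                    fired = true;
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        Coverage c = quote();
        System.out.println(c.limit + " " + c.deductible + " " + c.price);
    }
}
```

The null-check guards are what keep each rule from re-firing, which matches the observation above that the rules rippled but never cascaded into reevaluation loops.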
How was debugging? Did you use the Java DSL or the text files I saw in one of the sample projects?
Not bad! The Java DSL is great for debugging: you set breakpoints and work normally. Breakpoint conditions help if you’ve got a complex context, like I did with many locations and lines of coverage for each location.
Did you have a separate deployment flow for your Evrete code? If so, did it help you iterate faster?

No, it was integrated into a service library.

Thank you for the detailed answer, that’s super helpful!
I wonder how many evaluations their prime numbers example performs. It seems like a “bad” example, like recursive Fibonacci, but if it actually runs in less than n^3 that would be cool. Although I don’t really think that’s the point of rule engines. I’ve never worked with them before, but it seems like their value mostly comes from partial re-evaluation of the working set when inserting, updating, or deleting facts. This post by Martin Fowler helped me understand more about them. It seems like Enterprise spooky action at a distance to me.
Update: the performance section of their advanced topics docs indicates that indexes are created for equality testing. So this example should run in n^2 for the multiplications, with the equality handled by index lookup, if their docs are accurate. However, changing the .where() to a Java method predicate that performs the same task runs in the same amount of time as their text predicate, so that doesn’t seem to be the case.
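To make the n^3 vs. n^2 gap concrete, here is the same computation outside any rule engine, in plain Java (no Evrete; the method names are mine): the brute-force version checks every (i1, i2, i3) triple, while resolving the “== $i3” half of the condition with a hash lookup leaves only the n^2 products to enumerate.

```java
import java.util.HashSet;
import java.util.Set;

// Plain-Java sketch of the prime-numbers rule "$i1 * $i2 == $i3 => delete $i3",
// comparing evaluation strategies. Not Evrete code; names are illustrative.
public class PrimeJoin {
    // Brute force: test every triple -- O(n^3) condition evaluations.
    static Set<Integer> primesNaive(int n) {
        Set<Integer> facts = new HashSet<>();
        for (int i = 2; i <= n; i++) facts.add(i);
        for (int i1 = 2; i1 <= n; i1++)
            for (int i2 = 2; i2 <= n; i2++)
                for (int i3 = 2; i3 <= n; i3++)
                    if (i1 * i2 == i3) facts.remove(i3);
        return facts;
    }

    // With an equality "index" (here just a HashSet): enumerate the n^2
    // products and resolve "== $i3" with an O(1) lookup instead of a loop.
    static Set<Integer> primesIndexed(int n) {
        Set<Integer> facts = new HashSet<>();
        for (int i = 2; i <= n; i++) facts.add(i);
        Set<Integer> composites = new HashSet<>();
        for (int i1 = 2; i1 <= n; i1++)
            for (int i2 = 2; i1 * i2 <= n; i2++)
                if (facts.contains(i1 * i2)) composites.add(i1 * i2);
        facts.removeAll(composites);
        return facts;
    }

    public static void main(String[] args) {
        // Both strategies agree; only the amount of work differs.
        System.out.println(primesNaive(100).equals(primesIndexed(100)));
        System.out.println(primesIndexed(100).size());
    }
}
```

If the engine really indexed this condition, you would expect timings closer to the second method; the observation above (text predicate and Java predicate running in the same time) suggests the arithmetic expression defeats the index.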
Their docs also say:
Try avoiding conditions that reference fact instances instead of their fields.
So I tried making an IntegerHolder wrapper class to activate their field indexing, but that didn’t help. Both a public field and a private field with a getter worked, but neither was faster. Perhaps it only works for simple equality as in their example, and not expressions like $i1 * $i2 == $i3?