I was thinking about the mathematical problem behind this.
Given K samples from a distribution that is, say, AdB+n, where n is unknown but A and B are known, what can you learn about n?
For weapons with 1 die, interestingly, *only* the maximum and minimum samples matter. If your 1d10+n produced a bunch of samples in the range, say, 5 to 11, that behavior is equally likely to have come from 1d10+1 as from 1d10+4 or any value in between, no matter how many samples there are: every consistent n assigns each individual sample the same probability 1/10, so the likelihoods are identical. The more samples you get, the bigger the observed range is likely to be. For a 1dx+n weapon, it should take about 4(x-1) hits to have a 95% chance of having seen the full interval.
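The 4(x-1) figure is easy to check by inclusion-exclusion: the chance of having missed the top face or the bottom face after k rolls is 2(1-1/x)^k - (1-2/x)^k. Here's a quick Python sketch of that check (the function name is mine, just for illustration):

```python
# Probability of having observed both the minimum and maximum face of a
# single x-sided die after k rolls, via inclusion-exclusion:
#   P(both seen) = 1 - 2*(1 - 1/x)**k + (1 - 2/x)**k
def p_full_interval(x: int, k: int) -> float:
    return 1 - 2 * (1 - 1 / x) ** k + (1 - 2 / x) ** k

x = 10
k = 1
while p_full_interval(x, k) < 0.95:
    k += 1
print(k)  # 35 rolls for 1d10, right next to 4*(x-1) = 36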
For weapons with 2+ dice, it's trickier. You would learn n for sure once you have seen both extremes, but as has been mentioned, it can take a long time to generate both. For 2d6, for instance, each extreme total (2 or 12) comes up with probability 1/36, so you wait 18 rolls on average for the first extreme and another 36 for the second: about 54 hits in total.
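A throwaway simulation (nothing here is from any game's code) confirms that waiting time:

```python
import random

# Monte Carlo estimate of the expected number of 2d6 rolls needed to
# observe both extremes (a 2 and a 12) at least once.
def rolls_until_both_extremes() -> int:
    seen2 = seen12 = False
    n = 0
    while not (seen2 and seen12):
        total = random.randint(1, 6) + random.randint(1, 6)
        n += 1
        if total == 2:
            seen2 = True
        elif total == 12:
            seen12 = True
    return n

trials = 100_000
avg = sum(rolls_until_both_extremes() for _ in range(trials)) / trials
print(avg)  # ~54: 18 rolls on average for the first extreme, then 36 more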
Here's a sample: say 2d6+n produced 9, 10, and 15 as its values. We know for sure that n >= 3 (the 15 can't exceed n+12) and n <= 7 (the 9 can't fall below n+2). For these particular rolls, the probabilistic estimate would be n ~= 4.6. But what if the rolls were 9, 14, 15? Then the estimate would be n ~= 5.4. So the internal values matter, and they matter a lot. Quantity matters too: with rolls of 9, 9, 10, 10, 15, 15, things change, and we get n ~= 4.4.
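Those estimates come out of Bayes' rule with a flat prior over the feasible n: weight each candidate n by the product of the 2d6 probabilities of (sample - n), normalize, and take the posterior mean. A Python sketch (function names are my own):

```python
from collections import Counter
from itertools import product
from math import prod

def dice_pmf(num_dice: int, sides: int) -> Counter:
    """Exact distribution (as counts) of the sum of fair dice."""
    return Counter(sum(r) for r in product(range(1, sides + 1), repeat=num_dice))

def posterior_over_bonus(samples, num_dice=2, sides=6):
    """Posterior over n for (num_dice)d(sides)+n, flat prior over feasible n."""
    pmf = dice_pmf(num_dice, sides)
    lo = max(samples) - num_dice * sides  # n >= max sample - highest roll
    hi = min(samples) - num_dice          # n <= min sample - lowest roll
    weights = {n: prod(pmf[v - n] for v in samples) for n in range(lo, hi + 1)}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

for rolls in [(9, 10, 15), (9, 14, 15), (9, 9, 10, 10, 15, 15)]:
    post = posterior_over_bonus(rolls)
    mean = sum(n * p for n, p in post.items())
    print(rolls, round(mean, 1))  # 4.6, 5.4, 4.4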
Experimenting by hand, I found that with 25 samples from 2d6+n, the most likely n was the correct one about 5/6 of the time. But it was not unusual to see wrong bonuses get high percentages when the rolls ran higher or lower than normal. So 25 is probably not enough, and it's too messy to work out exactly. And 3 dice will behave differently, and 4 differently again.
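To push past hand experiments, the same machinery can be simulated: draw K rolls from a known 2d6+n, pick the most likely n, and count how often it matches. A sketch along those lines (the true_n, K, and trial counts are just illustrative choices):

```python
import random
from collections import Counter
from itertools import product
from math import prod

PMF = Counter(a + b for a, b in product(range(1, 7), repeat=2))  # 2d6 counts

def map_bonus(samples):
    """Most likely n for 2d6+n under a flat prior over feasible values."""
    lo, hi = max(samples) - 12, min(samples) - 2
    return max(range(lo, hi + 1),
               key=lambda n: prod(PMF[v - n] for v in samples))

def hit_rate(true_n=4, k=25, trials=20_000):
    hits = 0
    for _ in range(trials):
        samples = [random.randint(1, 6) + random.randint(1, 6) + true_n
                   for _ in range(k)]
        hits += map_bonus(samples) == true_n
    return hits / trials

print(hit_rate())  # compare against the ~5/6 hand estimate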