With many thanks to Derakon for his unfailing patience, I've done a lot of work on item generation in Pyrel. I've written a fairly detailed description of how it works on the Pyrel wiki.
We have a choice to make about artifact generation. The way I've written it is the way it's done in v4, where every time an item is generated we do a single random check against what we now call artifactChance. If that chance is passed, you're guaranteed to get an artifact (if any are legal for your depth and not yet generated). If it's not, you get a normal item (which could then be magical or ego etc.). In v4, this chance is 1/400 for 'normal' drops, 1/50 for 'good' drops and 1/15 for 'great' drops. In Pyrel you can set it to whatever you want using the new loot system. If you so desire, you can set it differently for each monster.
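For concreteness, here's a minimal sketch of the single-roll approach. The names (`artifact_chance`, `legal_artifacts`) are illustrative, not Pyrel's actual API, and the "normal item" path stands in for the usual affix/theme generation:

```python
import random

def generate_item(artifact_chance, legal_artifacts):
    """Method 1: one up-front roll decides artifact vs. normal item.

    artifact_chance is e.g. 1/400 for 'normal' drops, 1/50 for 'good',
    1/15 for 'great'. legal_artifacts is assumed to be pre-filtered to
    artifacts legal for this depth and not yet generated.
    """
    if legal_artifacts and random.random() < artifact_chance:
        # Passed the check: guaranteed an artifact.
        return random.choice(legal_artifacts)
    # Failed the check: a normal item, which would then be
    # checked for magic/ego/affixes in the usual way.
    return "normal item"
```

Because the roll happens once, up front, P(artifact) is exactly `artifact_chance` (given any legal artifacts exist), regardless of how the base-item table is weighted.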
Derakon had envisaged a completely different way of doing this, which is to put artifacts and base items together in a single allocation table. Obviously the artifacts would have much lower commonness than normal items. If you get a normal item it is still checked for affixes and themes (magic, ego etc.) in the normal way.
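The single-table approach might look like the following sketch, with entries and commonness values purely illustrative. A standard weighted pick over one table that mixes artifacts and base items:

```python
import random

def pick_from_table(table):
    """Method 2: artifacts and base items share one allocation table.

    table is a list of (name, commonness) pairs; artifacts simply get
    much lower commonness values than base items. Picks one entry with
    probability proportional to its commonness.
    """
    total = sum(commonness for _, commonness in table)
    roll = random.uniform(0, total)
    for name, commonness in table:
        roll -= commonness
        if roll <= 0:
            return name
    return table[-1][0]  # guard against float rounding at the top end

# Illustrative table: the One Ring is 500x rarer than a dagger.
table = [("dagger", 500), ("The One Ring", 1)]
```

A normal-item result would still go through the affix/theme checks afterwards, exactly as in the text above.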
Both will work, and there would be no difference at all to the player. Sometimes you get an artifact, most times you don't.
The difference is in how devs and source-divers and other numerate players understand the probabilities. The second method allows you to say, unequivocally, that The One Ring is X times rarer than a dagger (though that dagger could be plain +0,+0 or an awesome Holy Avenger with extra dice). X is simply the ratio of the One Ring's commonness against the dagger's commonness. The same comparison can be made between any artifact and any base item.
The first method doesn't allow us to do that easily. We can still estimate it using Monte Carlo simulations, but that doesn't give the same degree of certainty at all. What the first method does allow us to do is know exactly what the likelihood is of generating *an* artifact. The second method obscures this - you can calculate it (sum of artifact commonnesses / sum of all commonnesses) but it's not readily available, and will differ for each allocation table.
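That calculation is trivial to script, though. A sketch (the table layout and `is_artifact` predicate are hypothetical) of recovering the overall artifact probability under the single-table method:

```python
def artifact_probability(table, is_artifact):
    """Under method 2, P(artifact) = sum of artifact commonnesses
    divided by the sum of all commonnesses in the table."""
    total = sum(commonness for _, commonness in table)
    artifact_total = sum(c for name, c in table if is_artifact(name))
    return artifact_total / total

# Illustrative: one artifact at commonness 1 against 399 total base-item
# commonness reproduces the 1/400 'normal' drop chance from v4.
table = [("The One Ring", 1), ("dagger", 399)]
```

The catch the post identifies remains: this number isn't a single tunable constant, it falls out of the table, so it changes whenever any entry's commonness changes and differs between allocation tables.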
Does anyone else have a view on whether either of these is important? Are there any other reasons for choosing one method over the other? Is this all a monstrously abstruse waste of everybody's time?
(N.B. Both methods allow fine-tuning of *relative* artifact rarities, so I haven't mentioned this. You can tweak how often Nimthanc appears relative to Soulkeeper in both cases. The difference is about how we manage artifact rarity relative to non-artifacts.)