Topic: Improving the AI
Tibor
Posted at: 2019-11-29, 17:06
I would use some debug printfs - this is how I developed it.
Productionsites are rotated. Second, if you e.g. dismantle a building, it is dismantled later and the AI needs some short time to update its own statistics - this is done ex-post, of course (the dismantle is scheduled, not executed during the AI's thinking). This is why the AI should not go on to the next productionsite check if an important decision (dismantle, upgrade) was made...
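Roughly what I mean, as a simplified sketch with made-up names (not the real defaultai.cc code):

```cpp
#include <vector>

struct ProductionSiteObserver {
	bool needs_dismantle = false;
	bool needs_upgrade = false;
};

// Check sites one by one, but stop as soon as one cardinal decision
// (dismantle, upgrade) is scheduled: the command is only executed later,
// and the AI statistics are updated ex-post, so checking further sites
// in the same turn would work with stale data.
bool check_productionsites(std::vector<ProductionSiteObserver>& sites) {
	for (ProductionSiteObserver& site : sites) {
		if (site.needs_dismantle || site.needs_upgrade) {
			// here the dismantle/upgrade command would be scheduled
			return true;  // important decision made - stop for this turn
		}
	}
	return false;  // nothing cardinal happened, all sites were looked at
}
```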
hessenfarmer
Posted at: 2019-11-29, 17:28
That is my plan, but it isn't that easy to get proper logs on Windows.
Thanks for the information. However, this would only be the case if the same type of building is processed next, right? So if we checked the decision time of the bo, it would be ok to check a different type of building.
Tibor
Posted at: 2019-11-29, 20:05
Exactly, but such a check would add complexity to already too complex AI code. Doable, though. It should also include pairs of "basic" + "enhanced" productionsites, because the statistics of the enhanced building are considered when enhancing. So the question is whether the complexity is worth it. The basic question is how frequently a productionsite should be checked. Also note that usually no cardinal decision is made, so multiple buildings are processed in one turn anyway...
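Something like this, just to show the shape of it (names invented, not how defaultai.cc actually looks):

```cpp
#include <cstdint>
#include <map>
#include <string>

constexpr uint32_t kDecisionCooldownMs = 60 * 1000;  // arbitrary example value

struct BuildingObserver {
	std::string name;
	std::string enhancement;          // name of the enhanced building, if any
	uint32_t last_decision_time = 0;  // gametime of the last dismantle/upgrade
};

// A building type may only get a new cardinal decision when neither it nor
// its "enhanced" partner was decided on recently - the statistics of the
// enhanced building are considered when enhancing.
bool may_decide_now(const BuildingObserver& bo,
                    const std::map<std::string, BuildingObserver>& all_buildings,
                    uint32_t gametime) {
	if (gametime - bo.last_decision_time < kDecisionCooldownMs) {
		return false;
	}
	const auto pair = all_buildings.find(bo.enhancement);
	if (pair != all_buildings.end() &&
	    gametime - pair->second.last_decision_time < kDecisionCooldownMs) {
		return false;
	}
	return true;
}
```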
hessenfarmer
Posted at: 2019-12-01, 15:01
I think I have found the culprit for never reaching a third upgraded tribe building. I believe it is in the calculation of the AI crude statistics. I already tried not calculating them if we skip, but this does not seem to be sufficient to reach more than 69% (the highest value seen in my tests, although in game both showed 100%).
Tibor
Posted at: 2019-12-01, 20:44
If the crude statistics is wrong, then that is bad. Just keep in mind that the crude statistics is time-based, while the official statistics is attempt-based.
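To illustrate the difference with made-up numbers (not the real code):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
	// Attempt-based ("official") statistics: 7 of the last 10 program runs
	// succeeded -> 70 %.
	const uint32_t attempts = 10;
	const uint32_t successes = 7;
	printf("attempt-based: %u%%\n", 100 * successes / attempts);

	// Time-based (crude) statistics: the site was actually producing for
	// 5 of the last 8 minutes -> 62 %, even though most attempts succeeded.
	const uint32_t window_ms = 8 * 60 * 1000;
	const uint32_t busy_ms = 5 * 60 * 1000;
	printf("time-based:    %u%%\n", 100 * busy_ms / window_ms);
	return 0;
}
```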
hessenfarmer
Posted at: 2019-12-01, 23:05
Actually, the first problem was that in the crude statistics skipped counted as failed, which led to too low stats for sites with multiple programs. After commenting this out (I left an explaining comment) it seemed to be better, but still not good. I really don't know why, but changing the time base of the calculation to 8 minutes delivered reasonable results. However, this still needs testing with all tribes. I just pushed my changes to my fork; I would be glad if you could have a look and provide comments. Next step is to kick out trained workers. Will work on that now. For the beginning I just provided more upgraded workers in the start condition to see if it works at all.
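Roughly what the changed calculation does now, as a simplified sketch with invented names (not the exact code in my fork):

```cpp
#include <cstdint>

enum class ProgramResult { kCompleted, kFailed, kSkipped };

struct CrudeStats {
	uint32_t crude_percent = 0;  // 0..10000, i.e. percent * 100

	void update(ProgramResult result, uint32_t duration_ms) {
		if (result == ProgramResult::kSkipped) {
			return;  // skipping no longer counts as failing
		}
		const uint64_t window_ms = 8 * 60 * 1000;  // new 8-minute time base
		const uint64_t d = duration_ms < window_ms ? duration_ms : window_ms;
		const uint64_t sample = (result == ProgramResult::kCompleted) ? 10000 : 0;
		// Time-weighted exponential smoothing over the window.
		crude_percent = static_cast<uint32_t>(
		   (crude_percent * (window_ms - d) + sample * d) / window_ms);
	}
};
```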
Tibor
Posted at: 2019-12-02, 08:19
This is what the time statistics is - the site is working 69% of the time, so there is some room for higher production. This is an idiosyncrasy of our productionsites; we have discussed it many times. I was trying to cope with this with different thresholds for sites with a different number of outputs. It might be 90% for a site with one output and 50% for a site with 3 and more outputs. I was also considering separate stats for each output, e.g. 100%, 0%, 0% - the time statistics could be 69%, but the first "100%" would indicate that we need another building of that type. I am not very glad about the change; if you like the official statistics more you can use it here...
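The threshold idea in code form - only the 90% and 50% numbers are from above, the two-output value is just interpolated and the name is invented:

```cpp
#include <cstdint>

// Minimum time-statistics percentage at which another site of this type
// would be considered useful, depending on how many outputs the site has.
uint32_t busy_threshold(uint32_t number_of_outputs) {
	if (number_of_outputs <= 1) {
		return 90;  // a single-output site should be busy nearly all the time
	}
	if (number_of_outputs >= 3) {
		return 50;  // frequent skipping keeps the time statistics low
	}
	return 70;  // two outputs: somewhere in between (interpolated)
}
```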
hessenfarmer
Posted at: 2019-12-02, 10:43
Ok, I got your point. I will try another round with skipped enabled again. I still don't understand the influence of the time base and why it delivered such weird results. However, I am not sure whether it is reasonable to combine two completely different issues in one value. From a decision-making point of view there is a difference between a site failing (something is missing) and a site skipping (one ware is superfluous); the resulting necessity for a building is equal only for buildings with one product. If we keep it this way, we might cover it to some extent by calling the programs in relation to the wares needed (which we did for some productionsites already). The remaining question is whether there is some place in the code where we really need the information that a site is skipping. Otherwise we could try to train new values for the evaluation of current_stats, which in theory should adapt to the solution I currently made. Finally, I think we need more analysis and testing before we find a final solution. So thanks for your opinion and the discussion.
Tibor
Posted at: 2019-12-02, 12:35
This provides the information whether the site can produce more. We can even make a shortcut: if > 50% the site is busy and we need a new one, if < 50% we do not need another site...
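As a tiny sketch (invented name, just the rule stated above):

```cpp
// The 50 % shortcut: above it the site is kept busy and another one would
// help, below it the existing site still has spare capacity.
bool another_site_needed(unsigned current_stats_percent) {
	return current_stats_percent > 50;
}
```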
No, the AI does not even need such information.
hessenfarmer
Posted at: 2019-12-02, 16:16
The problem for me still is why the values I have seen with instrumented code were so weird and so far from the displayed stats. Skipping is only one thing that had an effect, but there must be another, as in theory failing and skipping shouldn't affect the values that strongly, since their duration should be short. At one point I almost believed the values got reset from time to time. I think we need to understand all root causes to make a decision here, as bo.current_stats is used in many evaluations, not only in the very simple upgrade mechanism.
What I meant is: does the AI depend anywhere on the skipping information included in current_stats? If yes, I thought we might introduce it exactly there.