Little explorations with HP calculators (no Prime)

03-31-2017, 09:51 PM
(This post was last modified: 03-31-2017 09:55 PM by pier4r.)
Post: #97




RE: Little explorations with the HP calculators
(03312017 09:41 PM)DavidM Wrote: So if I understand correctly (and I certainly may not ), you are essentially doing a table lookup using some target value (that corresponds to column 1) and returning the result (column 3). You do this by sequentially looking through each value until you find the appropriate stopping point. yes. I thought my overview was clear, damn me and my language skills. Quote: If there's a functional relationship between columns 1 and 3 in the table, you could simply apply the function to the target, round the result as needed, and you're done. This should be significantly faster than a lookup. Nice idea, but there is none. The third column are modifiers, from 400 to 400 in steps of 25. The second column is the probability (in integers) that I want of those modifiers. I tried to get a triangle, so the probability starts from 25 at the end, to be 425 in the middle and then down to 25 again. The first column is the cumulative probability (the one that I use for searching), so 25, 25+50, 25+50+75, etc. Actually I think that if I get a uniform random value from 1 to X, and then I have custom probabilities modeled on the same range, I should be able to "cast" the uniform probability in the probability that I want. Quote: Alternatively, you could map the target value to an index, then grab the column 3 value with that index. I would think that would also be faster, though not quite as much as the above. It would still require knowing the functional relationship of the target value to the index, of course.I'm not sure I am following here. I do have already the index that is the row number. Quote: Because the table is presorted, you could use a binary search of the column1 value instead of sequential. Messier code, but would most likely be faster in the long run due to fewer comparisons being required. I'd try the others first, though. That would be possible yes, I normally discard optimizations for things that are small, like 30 entries. 
But if on average the sequential lookup requires ~15 steps, while a binary search needs only about log_2(30) ≈ 5 comparisons, then with 480 calls per major iteration I could save a lot. In other words, I forgot that I should consider not only a single call, but all the calls to the function.

edit: for the last version of the code, one can check here: https://app.assembla.com/spaces/various...ournComp.s

Wikis are great, Contribute :)
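The binary-search variant DavidM suggests can be sketched like this, again assuming the same reconstructed table layout (modifiers -400..+400 in steps of 25, triangular weights); since the cumulative column is sorted by construction, the standard-library `bisect` does the search in O(log n) comparisons.

```python
import bisect
import random

# Same assumed table as described in the post: modifiers -400..+400
# in steps of 25, triangular weights, cumulative column as search key.
modifiers = list(range(-400, 401, 25))
half = len(modifiers) // 2
weights = [25 * (half + 1 - abs(i - half)) for i in range(len(modifiers))]

cumulative = []
total = 0
for w in weights:
    total += w
    cumulative.append(total)

def draw_modifier_bisect(rng=random):
    """Find the first row whose cumulative value >= target using a
    binary search (~5 comparisons for 33 rows) instead of an O(n)
    sequential scan (~16 comparisons on average)."""
    target = rng.randint(1, cumulative[-1])
    i = bisect.bisect_left(cumulative, target)
    return modifiers[i]
```

At ~480 calls per major iteration, saving roughly 10 comparisons per call adds up to thousands of comparisons per iteration, which is the point of the "all the calls, not just one" remark above.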

