Little explorations with HP calculators (no Prime)
03-30-2017, 07:33 PM (This post was last modified: 03-30-2017 07:57 PM by Gerson W. Barbosa.)
Post: #81
RE: Little explorations with the HP calculators
(03-30-2017 06:36 PM)Han Wrote:  
(03-30-2017 05:47 PM)Gerson W. Barbosa Wrote:  That's what I'd done too, but the solution is very simple, I realize now. No need to solve any linear system.

I suspected there had to be a simple approach. I'll try your hints and also rework my solution to see where I may have made mistakes.

EDIT (after some initial thoughts):

Quote:PS: From E draw a perpendicular line to line AB, which intersects it at F. Now we have two similar right triangles: BEF and EFD. Angle DEF = angle EBF = 18 degrees, as we already know. Angle BDE, which you have named 'u', is its complement, 72 degrees. Finally, angle EDC, or your 'x', can be easily determined as 180 - 72 - 48 = 60 degrees.

Are you saying to draw EF so that \( \angle BFE \) is 90 degrees? If so, how does it follow that triangle BEF is similar to triangle EFD without also knowing that \( \angle BED \) is 90 degrees (which does not seem obvious to me)?

Or are you suggesting we make \( \angle BEF \) equal to 90 degrees? This would make it even less obvious how triangle EFD is even a right triangle.

EDIT 2: Your solution seems to suggest that we only need angles A and C. Am I missing another simple observation?

I hope the picture is not too cluttered:

[Image: triangle_zpsnlgbqqzs.jpg?t=1490815394]

Edit: I could have done that ONLY if I were sure BED was a right triangle, which obviously we cannot be sure of.
Back to previous step!
03-30-2017, 08:19 PM (This post was last modified: 03-30-2017 08:38 PM by Han.)
Post: #82
RE: Little explorations with the HP calculators
Here's what I came up with using basic trigonometry. It turns out I made a typo in my calculations when I posted 76.32 earlier (I used 38 instead of 39 for \( \angle CEA \) ). Maple claims the angle is approximately 81.00000016; I am wondering if the exact solution is actually 81 (changed typo from 80 to 81)


[Attached thumbnail(s)]

Graph 3D | QPI | SolveSys
03-30-2017, 08:41 PM (This post was last modified: 03-30-2017 09:29 PM by pier4r.)
Post: #83
RE: Little explorations with the HP calculators
Unless otherwise stated, the problems on Brilliant normally accept only integer answers.

The funny part is that you solved a level 5 problem (the highest difficulty on Brilliant, aside from community extras) with "basic trigonometry". It is true that it drew a few more comments than the earlier (level 3) problems, and it may be a simpler case because it was reported by me (1), but I still thought those were a bit harder to solve.

Anyway, when I need a refresher on triangle relationships I use a little handbook, like the compact ones found in bookshops. Not everything is in there, but it helps at times.

(1) I do not copy and paste everything here, only what I attempt to solve, so I may be selecting the simpler problems.

Wikis are great, Contribute :)
03-30-2017, 09:22 PM (This post was last modified: 03-30-2017 09:25 PM by Dieter.)
Post: #84
RE: Little explorations with the HP calculators
(03-30-2017 08:19 PM)Han Wrote:  Here's what I came up with using basic trigonometry. It turns out I made a typo in my calculations when I posted 76.32 earlier (I used 38 instead of 39 for \( \angle CEA \) ). Maple claims the angle is approximately 81.00000016; I am wondering if the exact solution is actually 81 (changed typo from 80 to 81)

For the record: I tried an... err... "very conventional" approach and did a drawing to scale in OO Draw. ;-) From this I obtained an angle of 81° as well (say ±0.2°). So this seems to be correct. But I wonder if there isn't an easier solution to this problem.

@Gerson: So \( \angle BED \) is not 90° but the omnipresent 111°. Maybe this gives one more clue here.

Dieter
03-30-2017, 10:38 PM
Post: #85
RE: Little explorations with the HP calculators
(03-30-2017 09:22 PM)Dieter Wrote:  For the record: I tried an... err... "very conventional" approach and did a drawing to scale in OO Draw. ;-) From this I obtained an angle of 81° as well (say ±0.2°). So this seems to be correct. But I wonder if there isn't an easier solution to this problem.

@Gerson: So \( \angle BED \) is not 90° but the omnipresent 111°. Maybe this gives one more clue here.

I did search for my protractor but I couldn't find it. At least I've finally found my long lost three-cell HP-12C, so a positive result after all :-)

Gerson.
03-30-2017, 10:41 PM (This post was last modified: 03-30-2017 10:42 PM by Han.)
Post: #86
RE: Little explorations with the HP calculators
For an exact answer, my hunch is that this problem probably makes use of some basic trig identities like \( \sin(2A) = 2\sin(A)\cos(A) \) and properties of sine and cosine (e.g. \( \sin(A) = \sin(180^\circ - A) \)), and possibly more (e.g. the sine and cosine of sums and differences of angles), in addition to the law of sines.

Graph 3D | QPI | SolveSys
03-31-2017, 07:35 AM (This post was last modified: 03-31-2017 07:40 AM by pier4r.)
Post: #87
RE: Little explorations with the HP calculators
Since I see that there are still doubts, I share one solution from Brilliant (not sure whether it is correct, but it seems so). I had to give up my chance to solve the problem, but that is all right, since I can still force myself to come up with my own solution (maybe considering concave pentagons, as written before).

So as not to spoil it, I post the written solution and the image created by one user as links, so they do not show up immediately.

http://i.imgur.com/hyiTDIg.png - written solution.
http://i.imgur.com/METbYu2.png - construction.

I still have to analyze it a bit better by doing my own construction, though; otherwise I cannot be sure I followed it properly.


side note: I thought I was getting random problems from the Brilliant collection for the whole Geometry topic. In fact, I was getting random problems from the collection for _one_ subtopic of the Geometry topic, which has 10+ subtopics. So there are even more problems and more variety than I expected.

Wikis are great, Contribute :)
03-31-2017, 10:01 AM
Post: #88
RE: Little explorations with the HP calculators
I need help. Can someone run the program that I posted in post #74 of this thread on an HP 50g/49g, with the variable "maxTournRepV" set to 1 (or 2) instead of 1000?

My HP 50g is still computing: 35 hours and counting so far. With the timing of one iteration (or two) I could estimate how long the whole run will take; I did not measure this at the start, so I have no idea whether it might finish at any moment.

If it does finish, I will surely frame the result as an example of bad estimation.

In any case, thanks for the help.

Wikis are great, Contribute :)
03-31-2017, 03:12 PM
Post: #89
RE: Little explorations with the HP calculators
(03-31-2017 10:01 AM)pier4r Wrote:  I need help. Can someone run the program that I posted in post #74 of this thread on an HP 50g/49g, with the variable "maxTournRepV" set to 1 (or 2) instead of 1000?

I changed the relevant part of your code as requested:

Code:
  maxTournRepV
  @1000
  1


...then I added one more program to your directory at the top:

Code:
timeit
\<<
    'main' TEVAL
\>>

Executing 'timeit' on my 50g generated the following results:

Code:
{ 8. 6. 9. 10. 10. 10. 12. 13. 18. 19. 18. 22. 19. 22. 22. 22. }
{ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 4. 0. 0. 0. 0. }
:s: 1163.2958

So it appears that one iteration of rrFunc took about 19.4 minutes.

Extrapolating to 1000 iterations, it would appear that your program would require almost 2 weeks to complete.
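(Roughly: \( 1000 \times 1163.3\,\mathrm{s} \approx 323\,\mathrm{h} \approx 13.5\,\mathrm{days} \).)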

I haven't tried to analyze your code, but it's likely that there are some loops which are slowing you down. SEQ appears quite a few times, and I'd take a look at those for a start. While they are nice syntactically, they can be slow -- especially if you are giving them an algebraic to execute. I think I saw at least one of them being used to simply load the stack with a series of placeholders. NDUPN would be much simpler (and faster) to use for something like that. Also consider passing an RPL program object instead of algebraics, as the algebraics have to be translated to RPL sequences anyway behind the scenes.
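As a minimal sketch of the placeholder idea (the ten zeros here are just an illustration, not taken from your code):

Code:
@ SEQ with an algebraic: concise, but the expression is evaluated on every step
\<< '0' 'k' 1 10 1 SEQ \>>      @ -> { 0 0 0 0 0 0 0 0 0 0 }

@ NDUPN: duplicate the placeholder N times, then collect the stack into a list
\<< 0 10 NDUPN \->LIST \>>      @ -> { 0 0 0 0 0 0 0 0 0 0 }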

GET and PUT can also be slow if called repeatedly (I saw them, but didn't trace the code to see how often they are called).

The WHILE-REPEAT loop is also something I'd look at closely. What's the likelihood of it needing a significant number of iterations?

Again, I haven't traced through your code to see what it's actually doing. These are just my "gut check" ideas after a quick scan.
03-31-2017, 05:27 PM (This post was last modified: 03-31-2017 05:43 PM by pier4r.)
Post: #90
RE: Little explorations with the HP calculators
(03-31-2017 03:12 PM)DavidM Wrote:  -

Super thanks! You cannot believe how helpful you have been. My goodness, two weeks.

First and foremost, I am stealing your timeit program; I had not thought of that. Pretty neat.

Second, the results that you report show a bug in my program. Every corresponding place in the lower list should have a 1 where the main list has 22 (because it is the maximum). It looks like I messed up with POS and TAIL. I can debug it "unit testing" style, though, thanks to the DIR structure instead of one big nested program.

I did not see that in my tests, so great.

Third: 2 weeks, too long.

Fourth, the optimizations. Yes, I used the SEQ syntax as seen in a couple of examples on the internet; I may write a routine to replace it with a plain FOR, although in the end I still need a list. Or maybe, as you suggested: stack duplicates, count, \->LIST.

About the algebraics. Sometimes I use algebraics to improve the readability of the code (like 'matrix(row,column)', or expressions that are - for me - ugly in RPL: I prefer 'n^2' to n 2 ^). Otherwise I always end up staring at the code and asking "what the heck did I want to do here?", even if I add comments.

In general I tend to avoid direct stack operations unless they are very small; otherwise I need to put long comments in the code to remind myself how the operations work (see the comments near the STREAM command in the code that you copied).

So I suppose there is room for improvement, but not if I want to stick to certain principles of code readability and maintainability. For example, message #8 in this thread makes heavy use of the stack, and to me it is unreadable until I debug it and add comments.

Sure, I know I lose quite a lot of speed. But even once I fix the bug that your report showed me, I cannot wait 2 weeks for the result of just one tournament format when I have to test 4 of them. (That would mean 2 months of activity for the 50g, and I would need a very stable power source to avoid losing the computation.)

So, many thanks for your input once again. The program was certainly a neat way to learn about the directory structure and some other things. Now I will fix some passages and then archive it.

Or maybe I can give it a try and ask here how people would do certain parts, if I split my code into very small generic pieces.

Ok, I killed the execution on the 50g. It had fully completed 74 iterations in 42h and 42m, so an average of about 34.6 minutes per iteration on my side.

Wikis are great, Contribute :)
03-31-2017, 06:34 PM
Post: #91
RE: Little explorations with the HP calculators
(03-31-2017 05:27 PM)pier4r Wrote:  First and foremost, I am stealing your timeit program; I had not thought of that. Pretty neat.

Well, it's not really much of a program. Since it appears that you are possibly using Debug4x for some of your explorations, you should note that the results of TEVAL on the emulated 50g in that environment aren't what you would expect if you are running with "authentic calculator speed" turned off. Relative timings should give you an idea how different algorithms might compare, but the actual reported timings aren't real in that case.

(03-31-2017 05:27 PM)pier4r Wrote:  Fourth, the optimizations. Yes, I used the SEQ syntax as seen in a couple of examples on the internet; I may write a routine to replace it with a plain FOR, although in the end I still need a list. Or maybe, as you suggested: stack duplicates, count, \->LIST.

It really depends on the specifics as to how big an impact this can have on performance. It's just something to keep in mind, especially for loops which have a high count.

Also worth mentioning, based on looking at your code: there's a definite loop structure in RPL that can be used when you don't need the counter's value for a computation: START ... NEXT/STEP. Take a look at that in the reference manuals.
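For instance, a minimal sketch (the loop body here is just an illustration):

Code:
\<< 0             @ running total
  1 100 START     @ the body runs 100 times; no counter is put on the stack
    2 +
  NEXT
\>>               @ leaves 200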

(03-31-2017 05:27 PM)pier4r Wrote:  About the algebraics. ... In general I tend to avoid direct stack operations unless they are very small; otherwise I need to put long comments in the code to remind myself how the operations work. ... So I suppose there is room for improvement, but not if I want to stick to certain principles of code readability and maintainability. ... Sure, I know I lose quite a lot of speed. But even once I fix the bug that your report showed me, I cannot wait 2 weeks for the result of just one tournament format when I have to test 4 of them.

Believe me, I hear you. I share your views on the importance (and value) of code readability, and I generally feel that this issue is far too often neglected in these discussions at the expense of code compactness. Both are important, of course, but they sometimes work against each other and trade-offs have to be considered. Code size is easily measured and is thus objective. Readability, however, is often subjective and dependent on an individual's preferred coding style. This makes it all the more difficult to reach a consensus about readability assessments.

Similar to code size, performance is usually easily measured. In this particular case it seems apparent that performance is unacceptable with the current coding methods. While harder to read and edit, the stack-based operations will almost always perform better. This would seem to dictate that recoding at least parts of your application will be required to achieve reasonable results.

(03-31-2017 05:27 PM)pier4r Wrote:  Ok, I killed the execution on the 50g. It had fully completed 74 iterations in 42h and 42m, so an average of about 34.6 minutes per iteration on my side.

That's longer than I experienced, and I'd guess that you are probably running your 50g in exact mode as opposed to approximate. Mine was set to approximate when I ran the test. That could explain the difference. If the lists are growing over each iteration, that could also slow things down. I haven't looked at the code closely enough to see if that is the case, though.
03-31-2017, 07:34 PM (This post was last modified: 03-31-2017 08:04 PM by pier4r.)
Post: #92
RE: Little explorations with the HP calculators
Thanks for the reply.

In the past I used the emulator, but now that I want to give the 50g a workout I'm using it for everything (so I need to come up with smart debugging tricks, otherwise I spend too much time). So TEVAL is always run on the real device. At the moment I'm trying to profile the individual programs.

Moreover, when a loop runs only once or twice I ignore it (as in a big-O view its cost is constant). I will also consider START ... NEXT, although I use WHILE ... REPEAT to avoid break conditions: I do a "break" by toggling a variable, and I try to keep the inner code of the loop small (I think you agree here). The FOR loops so far are all needed; I will see with the profiling, although it takes time.

When I identify something slow (in terms of commands, like NDUPN vs SEQ or structures), I will ask here if the community has suggestions.

In general I agree that when the computation runs long, readability comes second; but code maintenance can be quantified too, in approximate terms, for example as the gross hours I spend before the program completes one iteration as intended.

Sure, 2 weeks of execution is still a lot. I will see what I can do; if necessary I will go back to less intensive tasks and put this code aside while waiting for newRPL.

edit: I also verified that, yes, flags -3 and -105 make a big difference to the execution time (with them clear, i.e. in exact mode, it is 2x slower).
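For reference, setting both flags (numeric results and approximate mode) before a run is just:

Code:
\<< -3 SF -105 SF \>>   @ numeric results on, approximate mode on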

Wikis are great, Contribute :)
03-31-2017, 08:33 PM (This post was last modified: 03-31-2017 08:34 PM by pier4r.)
Post: #93
RE: Little explorations with the HP calculators
Ok, with the help of the "profiling subprogram" (1) I found the most time-consuming function. With flags -3 and -105 set (so, no exact mode), each call takes an (eyeballed) average of 3 seconds. There are 480 calls to this function, which makes a total of 24 minutes (480 × 3 s = 1440 s). There is other code to execute, but the time spent on the rest of the code is less than the time spent in this function.

What the function does is the following.
- Given this matrix http://i.imgur.com/moh6eSP.png
- produce a random value between 1 and 7225, call it curProbV
- from the first row to the last row of the matrix, check whether curProbV is smaller than the element in the first column (I thought I could start from the bottom, but since curProbV is random, it would not matter).
- if you find that the value is smaller, take the third element in the row and return it.

I use some algebraic notation so that it is easy to see in the code that what I am doing is reading a matrix. Could it be that this is what stretches the execution of the function to 3 seconds?

The RPL code is the following:
Code:

getVarFunc
  \<<
    @program to extract the variance out of the built matrix
  
    @output, the variance of the strength points out of the matrix

    0 @curProbV
    0 @tmpV
    \->
    @external inputs
    @local var
    curProbV
    tmpV
    \<<
      \<-maxSumProbV RAND * 1 + IP 'curProbV' STO
        @ we get a random number that has to be compared with the probability sums
      
      1 'tmpV' STO
        @with tmpV we will use it as a counter
      \<-uFmatchingProbFound CF
      WHILE
        \<-uFmatchingProbFound FC?
        tmpV \<-numProbV \<=
        
        AND
      REPEAT
        IF
          curProbV '\<-mVarProb(tmpV, \<-colSumProb)' EVAL \<=
        THEN
          @ found a probability sum bigger than the actual random value.
          \<-uFmatchingProbFound SF
        ELSE
          1 'tmpV' STO+
        END
      END
      
      '\<-mVarProb(tmpV, \<-colVar)' EVAL
    \>>
  \>>

(1)
Code:

profiling
  \<<
    @profiling single blocks. This code is called as the profiled function.
    'getVarFunc' TEVAL
    
    @savings TEVAL values
    'timings' STO+
    " 
    " @breakline
    'timings' STO+
    
    @leaving the original value on the stack.
  \>>

Wikis are great, Contribute :)
03-31-2017, 09:19 PM (This post was last modified: 03-31-2017 09:21 PM by Han.)
Post: #94
RE: Little explorations with the HP calculators
Since you are already placing comments in your code, why not leave the code in RPL form and add an appropriate comment? For example,

Code:

        IF
          curProbV '\<-mVarProb(tmpV, \<-colSumProb)' EVAL \<=
        THEN

could be written as

Code:

        @ check to see if curProbV <= mVarProb(tmpV,colSumProb)
        IF
          curProbV tmpV \<-colSumProb \<-mVarProb \<=
        THEN

You gain speed without having to over-compromise on legibility. By the way, I have not checked the code; I am merely assuming mVarProb is a function; modify accordingly to fit your needs.

Graph 3D | QPI | SolveSys
03-31-2017, 09:41 PM
Post: #95
RE: Little explorations with the HP calculators
So if I understand correctly (and I certainly may not :) ), you are essentially doing a table lookup using some target value (that corresponds to column 1) and returning the result (column 3). You do this by sequentially looking through each value until you find the appropriate stopping point.

Here are a couple of thoughts:

- If there's a functional relationship between columns 1 and 3 in the table, you could simply apply the function to the target, round the result as needed, and you're done. This should be significantly faster than a lookup.

- Alternatively, you could map the target value to an index, then grab the column 3 value with that index. I would think that would also be faster, though not quite as much as the above. It would still require knowing the functional relationship of the target value to the index, of course.

- Because the table is pre-sorted, you could use a binary search on the column-1 values instead of a sequential one. Messier code, but most likely faster in the long run because fewer comparisons are required (a rough sketch follows below). I'd try the others first, though.
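Something along these lines, as an untested sketch (it assumes the first column is sorted ascending and the target never exceeds the last cumulative sum; the name bfind is made up):

Code:
bfind
\<<
  @ inputs:  target on level 2, matrix on level 1
  @ output:  the column-3 entry of the first row whose
  @          column-1 value is >= target
  \-> t m
  \<<
    1                       @ lo
    m SIZE 1 GET            @ hi = number of rows
    \-> lo hi
    \<<
      WHILE lo hi <
      REPEAT
        lo hi + 2 / IP      @ mid = floor((lo+hi)/2)
        \-> mid
        \<<
          IF t m mid 1 2 \->LIST GET \<=
          THEN mid 'hi' STO
          ELSE mid 1 + 'lo' STO
          END
        \>>
      END
      m lo 3 2 \->LIST GET
    \>>
  \>>
\>>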
03-31-2017, 09:42 PM
Post: #96
RE: Little explorations with the HP calculators
Thanks! mVarProb is a matrix, though (a matrix of variances and related probability values), the one described in the picture.

And yes, in general with comments I can write down what is happening; the problem is that more often than not the comments go out of sync with the code. So the more generic the comment, the better.

Maybe for this case I could do it; the line would just be longer with GET.

Wikis are great, Contribute :)
03-31-2017, 09:51 PM (This post was last modified: 03-31-2017 09:55 PM by pier4r.)
Post: #97
RE: Little explorations with the HP calculators
(03-31-2017 09:41 PM)DavidM Wrote:  So if I understand correctly (and I certainly may not :) ), you are essentially doing a table lookup using some target value (that corresponds to column 1) and returning the result (column 3). You do this by sequentially looking through each value until you find the appropriate stopping point.

Yes. I thought my overview was clear; damn me and my language skills.

Quote:- If there's a functional relationship between columns 1 and 3 in the table, you could simply apply the function to the target, round the result as needed, and you're done. This should be significantly faster than a lookup.

Nice idea, but there is none.

The third column holds the modifiers, from -400 to 400 in steps of 25.
The second column is the (integer) probability weight that I want for each modifier. I tried to get a triangle, so the weight starts at 25 at the ends, reaches 425 in the middle, and comes down to 25 again.
The first column is the cumulative probability (the one I use for searching), so 25, 25+50, 25+50+75, etc.
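(Quick check: with 33 modifiers and triangular weights \( 25, 50, \ldots, 425, \ldots, 50, 25 \) the total is \( 425 + 2\,(25 + 50 + \cdots + 400) = 425 + 2 \cdot 3400 = 7225 \), which matches the 1-7225 range of the random draw.)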

Actually, I think that if I draw a uniform random value from 1 to X, and I have custom probabilities modeled on the same range, I should be able to "cast" the uniform distribution into the distribution that I want.

Quote:- Alternatively, you could map the target value to an index, then grab the column 3 value with that index. I would think that would also be faster, though not quite as much as the above. It would still require knowing the functional relationship of the target value to the index, of course.
I'm not sure I am following here. I already have the index, which is the row number.

Quote:- Because the table is pre-sorted, you could use a binary search of the column-1 value instead of sequential. Messier code, but would most likely be faster in the long run due to fewer comparisons being required. I'd try the others first, though.

That would be possible, yes. I normally discard optimizations for small things, like ~30 entries. But if on average the sequential lookup requires about 15 steps instead of the \( \log_2(33) \approx 5 \) comparisons of a binary search, then with 480 calls per major iteration I could save a lot. In other words, I forgot that I should consider not just a single call, but all the calls to the function.

edit: for the last version of the code, one could check here: https://app.assembla.com/spaces/various-...ournComp.s

Wikis are great, Contribute :)
04-01-2017, 02:27 PM
Post: #98
RE: Little explorations with the HP calculators
(03-31-2017 09:51 PM)pier4r Wrote:  Yes. I thought my overview was clear; damn me and my language skills.

There's nothing wrong with your language skills. I was simply questioning my comprehension skills. :)

(03-31-2017 09:51 PM)pier4r Wrote:  
Quote:If there's a functional relationship between columns 1 and 3 in the table...
Nice idea, but there is none. ... Actually, I think that if I draw a uniform random value from 1 to X, and I have custom probabilities modeled on the same range, I should be able to "cast" the uniform distribution into the distribution that I want.

I was thinking that you might be able to model the relationship between columns 1 and 3 fairly closely with polynomials across a couple of ranges of input. Doing so would come close to (but not match exactly) the table lookup, which may or may not be acceptable in your situation. If it is, I would guess it to still be a faster method than sequential lookups.

(03-31-2017 09:51 PM)pier4r Wrote:  
Quote:Alternatively, you could map the target value to an index, then grab the column 3 value with that index.
I'm not sure I am following here. I already have the index, which is the row number.

Similar to the first option mentioned above, I was envisioning creating a function of some kind that would take the given target value and produce an integer in the range 1-33 representing the index of the column-3 value it maps to. This would then require the additional step of GETting the final value from the array. I wasn't sure whether it would be simpler/faster/more accurate to model the input->index relationship vs. the input->final result one.

I'm not trying to suggest that these are the best options, of course. I don't know the tolerances you're working with here, and my ignorance of the constraints simply makes it easier for me to come up with suggestions. :)

This is an interesting application! I look forward to seeing what you do with it.
04-01-2017, 07:58 PM
Post: #99
RE: Little explorations with the HP calculators
Thanks! Actually you are right: if you check the second column of the matrix, it has a triangular form, so I may find something to model the sum of those values. I will have to check when I have time, and then I hope to post the results here.

Wikis are great, Contribute :)
04-01-2017, 09:45 PM
Post: #100
RE: Little explorations with the HP calculators
(04-01-2017 07:58 PM)pier4r Wrote:  Thanks! Actually you are right: if you check the second column of the matrix, it has a triangular form, so I may find something to model the sum of those values. I will have to check when I have time, and then I hope to post the results here.

But isn't your independent variable in this case the probability (curProbV)? If that's the case, it seems that you should instead model the variance result.

As an example, instead of doing lookups for the variance, you could use the following formulas.

For curProbV values of 25-3825, the variance shown in the array is approximately: \[variance=5.4 \times curProbV^{0.53}\]

and from 3826-7225, the variance is approximately:
\[variance=7.3 \times (7250-curProbV)^{0.5}\]

The resulting values aren't exactly the same as the table lookups, but they're very close. And it's not clear to me if those differences are even a problem. For example, instead of intermediate values all being "ceiled" to the next variance in the table, they will return a unique result. Is that a good or bad thing?
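In RPL the mapping might look something like this untested sketch (the name varApprox is made up; the constants are just the fitted values above):

Code:
varApprox
\<<
  @ input:  curProbV (1..7225) on level 1
  @ output: approximate variance per the fitted formulas above
  IF DUP 3825 \<=
  THEN 0.53 ^ 5.4 *
  ELSE 7250 SWAP - 0.5 ^ 7.3 *
  END
\>>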