HP Prime - Using TRIANGLE and TRIANGLE_P

11-13-2015, 07:16 PM
(This post was last modified: 11-25-2016 01:21 AM by Han.)
Post: #1




The commands TRIANGLE and TRIANGLE_P are very powerful commands on the HP Prime, as they can be used to render 3-dimensional objects on-screen. A few forum users asked me to describe the advanced form of these commands. The two commands are essentially the same, except that the _P version works in pixels whereas the non-_P version works in floating point Cartesian coordinates. Below is the description for the _P version.
TRIANGLE( ptdefs, tridefs, triparms ) or TRIANGLE_P( ptdefs, tridefs, triparms )

ptdefs - this is the "points definition" and is a list. The contents of this list are smaller lists, each containing up to four values: the coordinates \( x \), \( y \), and \( z \) of a point, and the color \( c \) of that point. These points are the vertices of the triangles that will be drawn. The color is optional. For example (the colors here are written with the built-in RGB function; plain integer color values work as well):

Code:
ptdefs:={
  {0, 0, 0, RGB(0,0,255)},
  {0, 1, 0, RGB(255,0,0)},
  {1, 0, 0, RGB(0,255,0)}
};

creates a list of three points: \( (0,0,0) \) colored blue, \( (0,1,0) \) colored red, and \( (1,0,0) \) colored green.

tridefs - this is the "triangles definition" and is also a list. The contents of this list are smaller lists (each defining one triangle) containing up to five values: indices \( i_1 \), \( i_2 \), and \( i_3 \) referencing points from ptdefs, a color value \( c \), and an alpha value \( a \) (where \( 0 \le a \le 255 \)). The integers \( i_1, i_2, i_3 \) are each indices to points in ptdefs. That is, if \( i_1 = 2 \), \( i_2 = 5 \), and \( i_3 = 7 \), then the corresponding triangle would have vertices located at the coordinates stored in ptdefs(2), ptdefs(5), and ptdefs(7) - the second, fifth, and seventh points inside ptdefs. This triangle would be drawn with color \( c \) and alpha value \( a \). If \( c = -1 \) then the triangle will be colored using the colors of the vertices (the points inside ptdefs). If the vertices have different colors, then the triangle's color will be blended in a gradient manner. Both the color and alpha values are optional. If colors are specified in both the triangle definition and the point definition, then the color from the triangle definition has higher priority (except if \( c = -1 \)). If all triangles are to be of the same color (and alpha value), then one may optionally use the form:

tridefs:={ color, tridef_1, tridef_2, ... }; OR tridefs:={ color, alpha, tridef_1, tridef_2, ... };

where tridef_n is of the form: { i1, i2, i3 }

Before explaining the last parameter triparms, we need a brief overview of how 3-dimensional objects are projected onto the screen. A 3D point is of the form \( (x,y,z) \). The easiest way to obtain a 2D projection is to simply delete a dimension (for example, delete the \( z \)-coordinate) so that the point \( (x,y,z) \) is drawn as \( (x,y) \). Points are simply "flattened" onto the screen located on the plane \( z=0 \). However, this projection does not provide any sense of depth or distance - what we normally call perspective. To add perspective, we need a notion of "distance from the eye point." For simplicity's sake, imagine that the object to be viewed is centered at the origin, and that we are viewing the object through the LCD screen of a camera located at \( (0,0,z_\text{eye}) \). The viewing screen, then, is located on the plane \( z=z_\text{eye} \). Objects closer to the screen should appear large, whereas objects farther from the screen should appear small. So we use the \( z \)-coordinate of a point to determine its distance from the viewing screen. Thus, the point \( (x,y,z) \) could be "flattened" to the point \[ \frac{1}{z_\text{eye}-z} \cdot (x,y) = \left( \frac{x}{z_\text{eye}-z}, \frac{y}{z_\text{eye}-z} \right) \] The astute reader will observe that this formula requires some care, because a sign inversion (or division by zero) occurs should \( z_\text{eye} \le z \). Thus, the value \( z_\text{eye} \) must be selected so that the entire object sits "in front of" the viewing screen. If we imagine that the object fits inside a rectangular box with corners at \( (\pm X, \pm Y, \pm Z) \), then we could simply set \( z_\text{eye} \) to at least \( \sqrt{X^2 + Y^2 + Z^2} \) so that, should we rotate the object (and its containing rectangular box), the corner of the box farthest from the origin would never appear "behind" the viewing screen located on the plane \( z = z_\text{eye} \).
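The projection formula above is easy to experiment with off-calculator. Here is a short Python sketch (my own helper names, not HP Prime code) of the perspective flattening and the safe choice of \( z_\text{eye} \):

```python
import math

def safe_z_eye(X, Y, Z):
    """Smallest eye distance keeping a box with corners (+/-X, +/-Y, +/-Z)
    in front of the viewing plane under any rotation about the origin."""
    return math.sqrt(X * X + Y * Y + Z * Z)

def project(x, y, z, z_eye, k=1.0):
    """Perspective-flatten (x, y, z) onto the viewing plane z = z_eye."""
    if z >= z_eye:
        raise ValueError("point is not in front of the viewing screen")
    s = k / (z_eye - z)
    return (s * x, s * y)

# A point on the near face of a unit box projects larger than one on the far face.
z_eye = safe_z_eye(1, 1, 1)          # sqrt(3), about 1.732
near = project(1, 1, 1, z_eye, k=10)
far = project(1, 1, -1, z_eye, k=10)
print(near[0] > far[0])              # True: closer points appear larger
```

Note how the guard in project() enforces the \( z_\text{eye} \le z \) caveat from the text.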
Another point of concern is that dividing by \( z_\text{eye} - z \) might leave the projected point too small to see on the screen. So we could introduce a "zoom factor" \( k \). Thus, \[ (x,y,z) \to \frac{k}{z_\text{eye}-z} \cdot (x,y) = \left( \frac{kx}{z_\text{eye}-z}, \frac{ky}{z_\text{eye}-z} \right) \] In order to view the object from all different angles, we can first apply a sequence of rotations to the coordinates of the object so that \( (x,y,z) \) is transformed into \( (x',y',z') \). Then, we apply the projection described above to \( (x',y',z') \). A rotation about any single axis is achieved by multiplication by a \( 3\times 3 \) matrix. I will leave out much of the linear algebra and simply state (as an example) that rotation by an angle of \( \theta \) around, say, the \( x \)-axis can be represented as \[ \left[ \begin{array}{ccc} 1 & 0 & 0\\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{array} \right] \cdot \left[ \begin{array}{c} x\\ y \\z \end{array} \right] =\left[ \begin{array}{c} x'\\ y' \\z' \end{array} \right] \] and that similar matrices exist for rotations around the \( y \)-axis and \( z \)-axis. So a complete set of rotations about each axis would be the product of the corresponding \( 3\times 3 \) matrices, which we will simply denote as \( R \). And now for the last parameter triparms ...

triparms - this is a list containing information about the rotation, viewing window, clipping information, etc. It takes the form (the later entries are optional):

Code:
{ RP, Xwin, Ywin, k, Xmin, Xmax, Ymin, Ymax, Zmin, Zmax, zstring, "N" }

\( RP \) is a \( 3\times 4 \) matrix and consists of the rotation sequence \( R \) defined above as well as an additional column: \[ RP = \left[ \begin{array}{cc} & 0 \\ R & 0\\ & z_\text{eye} \end{array} \right] \] This is the matrix that will be used both to rotate the object being viewed and to project the 3D coordinates down to 2D. The values Xwin and Ywin are the upper left coordinates of the 2D viewing window (the LCD screen on our camera).
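To see how the pieces fit together, here is a plain-Python sketch (my own helper names; this mirrors the math above, not the firmware's actual implementation) that composes the axis rotations into \( R \), appends the extra column to form \( RP \), and maps a point to window coordinates. The window-shift convention is my own reading of Xwin/Ywin:

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def make_RP(ax, ay, az, z_eye):
    """Compose R = Rx*Ry*Rz and append the extra column (0, 0, z_eye)."""
    R = mat_mul(rot_x(ax), mat_mul(rot_y(ay), rot_z(az)))
    return [row + [extra] for row, extra in zip(R, (0, 0, z_eye))]

def to_pixel(RP, point, Xwin, Ywin, k):
    """Rotate with R, perspective-divide by z_eye - z', shift by the window corner."""
    xp, yp, zp = (sum(RP[i][j] * point[j] for j in range(3)) for i in range(3))
    z_eye = RP[2][3]
    return (k * xp / (z_eye - zp) - Xwin, k * yp / (z_eye - zp) - Ywin)

# No rotation, z_eye = 5; with Xwin = -160 and Ywin = -120 the origin
# lands at the center of a 320x240 screen.
RP = make_RP(0, 0, 0, 5.0)
print(to_pixel(RP, (0, 0, 0), -160, -120, 1.0))  # (160.0, 120.0)
```

Note that to_pixel() ignores the downward pixel-y direction of the real screen; it is only meant to illustrate the rotation-then-projection order.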
When using the _P form, the viewing window defaults to pixels inside the rectangle whose upper left corner sits at \( (0,0) \) and bottom right corner sits at \( (320,240) \). Since the object presumably sits at the origin, we would want to shift the viewing window (i.e. set Xwin to -160 and Ywin to -120) so that \( (0,0) \) is at the center of the screen. The value \( k \) is the zoom factor we described earlier. The values Xmin, Xmax, Ymin, Ymax, Zmin, and Zmax describe the clipping box. That is, only coordinates that lie inside this box are drawn, whereas coordinates (after rotation) that lie outside of this imaginary box will not be drawn.

The last option zstring is a string that holds the \( z \)-clipping information. If it is specified, then objects will be drawn with "hidden line removal" - points not visible because they are blocked from view by other points "in front" would therefore not be drawn. This string can be initialized by simply calling TRIANGLE() without any arguments: zstring:=TRIANGLE(); or zstring:=TRIANGLE_P(); The "N" parameter is optional. If specified, it will convert all floating point values to integers from 0 to 255 so as to speed up the \( z \)-clipping.

A few words about the notation written here vs. the notation in the help screen on the HP Prime. The Xwin, Ywin, and k values in the article above are the xeye, yeye, and zeye values listed in the help screen on the HP Prime. The x, y, and z variables in the help screen are actually the points \( ( \overline{x}, \overline{y}, \overline{z}) = ( x', y', z' - z_\text{eye} ) \) where \[ R \cdot \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} \] and the x, y, and z values in the vector above are the actual \( (x,y,z) \) coordinates of a point from ptdefs.

ADDITIONAL NOTE: It is possible to render several objects onto the same viewing screen, with each object rotated by its own rotation transformation.
Each object would have its own point definition and triangle definition. Drawing each object would simply be a matter of calling several TRIANGLE commands - one for each object - and specifying the same triparms value (except possibly the rotation \( R \); the \( z_\text{eye} \) value must be kept the same). To render multiple objects with \( z \)-clipping, however, one must be careful with the "N" option. When using "N", the points are normalized (in particular, the \( z \)-values are normalized to values from 0 to 255) _after_ rotation. If the largest \( z \)-value from object 1 is 10, then 10 is converted to 255. On the other hand, if the largest \( z \)-value from object 2 is only 5, then 5 is converted to 255 as well. Herein lies the problem with combining multiple objects with normalized \( z \)-values when drawing with \( z \)-clipping: in order to properly cull occluded objects, we must make sure that all the \( z \)-values are normalized in the same manner. The solution is quite simple - ensure that the point definitions of each object contain the "corners" of a common "container" box. That is, imagine that all the objects fit inside a single rectangular box whose corners are at \( ( \pm X, \pm Y, \pm Z) \), where \( X \), \( Y \), \( Z \) are all larger than any \( x \), \( y \), \( z \) coordinate (in absolute value) of any point within any point definition of any of the objects. This way, all the \( z \)-values are normalized so that \( -Z \) always maps to 0 and \( Z \) always maps to 255. By reusing the same zstring parameter in sequential TRIANGLE calls, we will have proper occlusion culling, and only visible pixels are drawn. We could alternatively drop the "N" option, though the \( z \)-clipping is then not as fine (and supposedly slower).

Graph 3D | QPI | SolveSys
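As a numeric illustration of the normalization pitfall above, here is a hedged Python model of the 0-255 mapping (my own function, not the firmware's exact quantization). Normalizing each object over its own \( z \)-range makes unequal depths collide at 255, while normalizing both against a shared container box keeps depths comparable:

```python
def normalize_z(zs, z_lo=None, z_hi=None):
    """Map z-values to integers 0..255 over [z_lo, z_hi] (defaults: own range)."""
    z_lo = min(zs) if z_lo is None else z_lo
    z_hi = max(zs) if z_hi is None else z_hi
    return [round(255 * (z - z_lo) / (z_hi - z_lo)) for z in zs]

obj1_z = [-10, 0, 10]   # deeper object
obj2_z = [-5, 0, 5]     # shallower object

# Normalized independently, z = 10 and z = 5 both become 255: depths disagree.
print(normalize_z(obj1_z)[-1], normalize_z(obj2_z)[-1])   # 255 255

# Against a shared container box with corners at z = +/-10, depths stay comparable.
print(normalize_z(obj2_z, -10, 10))   # [64, 128, 191]
```

Including the shared box corners in every object's point definition forces the firmware's normalization into the second behavior.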

