Solid Rendering
This lesson provides an introduction to NST's OpenGL-based 3D rendering
facilities. You will learn
How to view 3D scenes with the view3d unit
The view3d unit is an operator
unit that provides an interactive window into a three-dimensional scene
of objects. The view3d unit draws these objects while executing
its operands. A set of special ``object rendering units'' make an object
appear in the view3d window when they are executed as operands of a
view3d unit. A second set of ``geometric
transformation units'' applies geometric transformations (such
as translations, rotations, or size scaling) to the objects that follow
them in the operand sequence. Additionally, any ``normal'' NST unit among
the operands is executed in the usual (operand-style) way. Here
is a simple example#1,
where the object rendering unit is the make_mesh
unit,
which simply renders a 3-dimensional mesh or mesh surface (the specification
of the mesh is analogous to that of the draw_mesh
unit, i.e., one defines a mask specifying which elements of an array are
accessed as x, y, or z coordinates of a mesh point, or as further
attributes, such as the mesh point color). In example#1, the mesh
is of the simplest possible type, namely just a set of 20x20 z-values
(if no x,y coordinates are provided, the mesh unit assumes them
to lie equidistantly on a grid range that can be set in the parameter window
of the mesh unit). Try replacing the 'S' option character with an 'M'
to see a different mesh variant.
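For orientation, here is a minimal sketch in plain C/OpenGL of what such a
20x20 z-value mesh amounts to. It is not the NST API: make_mesh receives the
z-array from its operand, and the surface formula and grid range below are
arbitrary choices made only for illustration.

    /* Minimal sketch (plain OpenGL, not the NST API): a 20x20 grid of
     * z-values rendered as a wire mesh over an equidistant [-1,1]x[-1,1]
     * grid.  The surface function is an arbitrary choice. */
    #include <GL/gl.h>
    #include <math.h>

    #define N 20

    static float z[N][N];

    void fill_mesh(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                float x = -1.0f + 2.0f * i / (N - 1);   /* equidistant grid */
                float y = -1.0f + 2.0f * j / (N - 1);
                z[i][j] = 0.3f * sinf(3.0f * x) * cosf(3.0f * y);
            }
    }

    void draw_mesh(void)
    {
        /* one line strip per grid row, then one per grid column */
        for (int i = 0; i < N; i++) {
            glBegin(GL_LINE_STRIP);
            for (int j = 0; j < N; j++)
                glVertex3f(-1.0f + 2.0f * i / (N - 1),
                           -1.0f + 2.0f * j / (N - 1), z[i][j]);
            glEnd();
        }
        for (int j = 0; j < N; j++) {
            glBegin(GL_LINE_STRIP);
            for (int i = 0; i < N; i++)
                glVertex3f(-1.0f + 2.0f * i / (N - 1),
                           -1.0f + 2.0f * j / (N - 1), z[i][j]);
            glEnd();
        }
    }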
How to display and color 3d surfaces
Example#2 illustrates
how the mesh unit can use additional attributes, such as an RGB triple for
each mesh point (a set of nicely distributed RGB values for z-dependent
blue/yellow coloring is computed in the prog_unit).
The mesh is now also rendered in a mode where each iteration step only
adds a single contour z(x, y=const), i.e., only a 20-point mesh line
instead of a full 20x20 mesh. Each new mesh line is pushed into the mesh
at one side, shifting the entire mesh by one mesh line (with the farthest
opposite mesh line being lost). Since the view3d unit continuously
executes its operands during viewing, one has the impression of a wave
running over the mesh.
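The kind of computation done in the prog_unit of example#2 can be sketched as
follows. The wave and coloring formulas are assumptions, chosen only to
illustrate shifting in one new contour per step and assigning a z-dependent
blue/yellow color; they are not the formulas actually used in the example.

    /* Sketch (plain C): scroll a 20x20 mesh by one contour line per step
     * and recompute a z-dependent blue/yellow color for every point. */
    #include <math.h>
    #include <string.h>

    #define N 20

    static float z[N][N];          /* mesh z-values, one row = one contour */
    static float rgb[N][N][3];     /* per-point color attribute            */

    void push_new_contour(float t)
    {
        /* shift the whole mesh by one line; the farthest line is lost */
        memmove(&z[1][0], &z[0][0], (size_t)(N - 1) * N * sizeof(float));

        /* new 20-point contour entering at one side */
        for (int j = 0; j < N; j++) {
            float y = -1.0f + 2.0f * j / (N - 1);
            z[0][j] = 0.5f * sinf(4.0f * y + t);
        }

        /* blue for low z, yellow for high z */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                float u = z[i][j] + 0.5f;        /* map z to [0,1]   */
                rgb[i][j][0] = u;                /* red              */
                rgb[i][j][1] = u;                /* green            */
                rgb[i][j][2] = 1.0f - u;         /* blue             */
            }
    }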
How to specify the location and orientation of objects
The next example#3 illustrates
how the geometric transformation units act on an object to bring
it into a specified position and orientation. In the present case, the
object is a simple ``tripod'' of three perpendicular coordinate axes (red=x,
green=y, blue=z), initially aligned with the world coordinate system
and centered at the origin. The four transformation units (one translation
and three rotation units) in front of the tripod unit allow the
object to be moved in various ways. The translate_frame
unit will translate it by a vector that is determined with the first three
sliders. Each of the three rot_frame
units will rotate the object about its current x-, y-, or z-axis.
Note, however, that the first (leftmost) rotation will already change the
axis directions, the second rotation will then be with respect to these
changed axis directions (changing the axes even further), and the rightmost
rotation will be relative to the axes resulting from the combination of the
previous two.
Therefore, as soon as rotations are involved, the effect of the transformations
becomes order-dependent. E.g., if you move the translation unit into the
rightmost position (but still to the left of the tripod), the translations
will be along the axis directions of the rotated object and no longer along
the axis directions of the "world coordinate system". This behavior generalizes
to any combination of transformations and objects: any transformation acts
on all objects and transformations to its right, and any transformation
is relative to the coordinate axes set up by the combined action of all
transformations to its left. If you wish to limit the effect of a group
of transformations, you can enclose them in a begin_frame
- end_frame pair. The
entire group then has no transformational effect as a whole; the
transformations are felt only within the group. This construction can
be arbitrarily nested for arranging objects in a hierarchical manner.
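These rules mirror the current-matrix model of plain OpenGL. A minimal C
sketch (using the standard glTranslatef/glRotatef/glPushMatrix calls rather
than the NST units; the specific numbers are arbitrary) may help to see the
correspondence, with glPushMatrix/glPopMatrix playing the role of the
begin_frame/end_frame pair.

    /* Sketch (plain OpenGL): each transformation call multiplies the
     * current matrix from the right, so it acts on everything drawn after
     * it and is itself expressed in the axes set up by the calls before it. */
    #include <GL/gl.h>

    void draw_scene(void)
    {
        glTranslatef(1.0f, 0.0f, 0.0f);      /* along the world x-axis      */
        glRotatef(90.0f, 0.0f, 0.0f, 1.0f);  /* about the shifted z-axis    */

        glPushMatrix();                      /* like begin_frame            */
        glTranslatef(0.0f, 2.0f, 0.0f);      /* felt only until the pop     */
        /* ... draw an object here ... */
        glPopMatrix();                       /* like end_frame              */

        /* objects drawn here see only the first two transformations;
         * swapping those two calls would place them differently. */
    }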
Example#4 illustrates
the use of transformation units with time-varying parameters (computed
in a prog_unit) to let a little hexagon object follow a circular
hopping path, tumbling over and rotating between hops.
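The time-varying parameters for such a path could, for instance, be computed
as in the sketch below; the formulas are assumptions made for illustration,
not the ones actually used in example#4.

    /* Sketch (plain C): time-varying translation and rotation parameters
     * for a point that circles the origin, hops, and keeps turning. */
    #include <math.h>

    void hop_parameters(float t, float *x, float *y, float *z, float *angle_deg)
    {
        const float R    = 2.0f;             /* radius of the circular path */
        const float hops = 8.0f;             /* hops per revolution         */

        *x = R * cosf(t);
        *y = R * sinf(t);
        *z = 0.5f * fabsf(sinf(hops * t));   /* hopping height              */
        *angle_deg = t * 57.29578f;          /* radians to degrees          */
    }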
How to build articulated objects such as robot arms
Given suitably shaped objects it becomes easy to build articulated objects
with movable parts, such as robot arms. The make_link
unit offers three parameterized shapes that are useful for building
robot manipulators. Example#5
illustrates their use to render a simple robot arm: first a base part
(link type 1) is positioned, then a vertical translation is made into the
upper joint point of the base part, plus a rotation such that the new z-axis
becomes perpendicular to the disk at the top of link1. The link type 2
part, when created in such a coordinate system, will then abut nicely with
the base part to form the middle link of a simple robot arm. Again, a translation
by the length of the link type 2 part leads to the next joint, where the
distal arm element is created as another link type 2 part. Additionally, there
are rot_frame units in front of the three link parts, making them
rotatable around the corresponding joint axes. Everything is packaged into
a container unit, so that a modular "arm unit" results. Its inputs are the
three joint angles, fed from the attached window unit with its three sliders.
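The transformation chain behind such an arm can be sketched in plain OpenGL C
as follows. The link lengths, rotation axes, and drawing routines are
placeholders, not the actual make_link geometry; the point is the alternation
of joint rotations and translations into the next joint.

    /* Sketch (plain OpenGL): a three-joint kinematic chain. */
    #include <GL/gl.h>

    void draw_arm(float q1, float q2, float q3)   /* joint angles, degrees */
    {
        const float base_height = 1.0f, link_len = 2.0f;   /* placeholders */

        glPushMatrix();

        glRotatef(q1, 0.0f, 0.0f, 1.0f);       /* base joint, about world z */
        /* draw_base(); */

        glTranslatef(0.0f, 0.0f, base_height); /* up to the first joint     */
        glRotatef(90.0f, 0.0f, 1.0f, 0.0f);    /* reorient the frame for    */
                                               /* the next link (placeholder) */
        glRotatef(q2, 0.0f, 0.0f, 1.0f);       /* middle joint              */
        /* draw_link(); */

        glTranslatef(link_len, 0.0f, 0.0f);    /* along the middle link     */
        glRotatef(q3, 0.0f, 0.0f, 1.0f);       /* distal joint              */
        /* draw_link(); */

        glPopMatrix();
    }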
How to render objects with the prog_unit
Many of the object rendering and geometric transformation facilities also
become available in the prog_unit
when the shared library "solid" is #imported. Example#6
shows the previous example, but now the parts of the arm and their transformations
have been implemented as corresponding function or method calls in a prog_unit.
The "solid" library offers a subset of the OpenGL
functions (their names are glXxxx) plus some additional NST functions
or objects (with names nstXxxx). Most things are implemented as
function calls; however, some more complex objects (such as the robot parts)
must be handled as objects that are first instantiated and then rendered
by invoking the parameterless method instanceName.draw().
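As a rough illustration of this instantiate-then-draw pattern (in plain C;
the type and function names below are purely hypothetical placeholders and
not the real nstXxxx objects, which this text does not list):

    /* Sketch: mix plain glXxxx calls with an object that is created first
     * and rendered afterwards via its own draw routine. */
    #include <GL/gl.h>

    typedef struct { int type; float length; } Link;   /* hypothetical part */

    void link_draw(const Link *l)
    {
        /* ... issue the glXxxx calls that render this part ... */
        (void)l;
    }

    void render(void)
    {
        Link middle = { 2, 2.0f };            /* instantiate the object     */

        glRotatef(30.0f, 0.0f, 0.0f, 1.0f);   /* plain function-call style  */
        link_draw(&middle);                   /* then render via its draw   */
    }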
How to find the position of object points in world coordinates
Where is the tip of the robot arm (or any other object point) in world
coordinates? To answer questions like these, one needs the transformation
from the object coordinate system of a general object (such as the last
link of the robot arm, where the arm tip has the simple position (L,0,0),
with L denoting the length of the last link) to the world coordinate
system. This transformation changes with the configuration of the arm (or
generally, with the transformations that act on the object of interest).
The obj_to_world unit,
when positioned as a successor to the object of interest, will do the necessary
transform, i.e., when fed with the vector (L,0,0) its output will
return the desired world coordinates. Example#7
extends example#6 in a corresponding fashion to illustrate this
technique for obtaining the location of the tip of the robot arm. As a
visual check, we draw a 3d cursor (this is conveniently done with
a make_mesh unit for a 1x1 mesh of a single error cross).
However, since the input coordinates to the mesh unit are in world coordinates,
we must protect it from the coordinate transforms that positioned the parts
of the arm. This provides an example of the use of the begin_frame
and end_frame units explained above (the example uses the corresponding
calls in the prog_unit instead of the units). If you wish to transform
into a different coordinate frame than the world, you can use the define_world
unit: it temporarily establishes the object coordinate frame in whose scope
it is placed as the new world coordinate frame. Any subsequent invocations
of an obj_to_world unit will then transform into that coordinate frame.
Similarly, the world_to_obj unit provides the inverse transformation.
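In terms of plain OpenGL, the object-to-world transform boils down to reading
the accumulated modelling matrix at the object of interest and applying it to
the object-frame point (L,0,0). The sketch below assumes the modelview matrix
holds only the modelling transformations (no camera part) and is not intended
as the NST implementation.

    /* Sketch (plain OpenGL): transform the object-frame point (L,0,0) into
     * world coordinates using the current modelview matrix. */
    #include <GL/gl.h>

    void tip_in_world(float L, float world[3])
    {
        float m[16];                             /* column-major 4x4 matrix */
        glGetFloatv(GL_MODELVIEW_MATRIX, m);

        /* world = M * (L, 0, 0, 1)^T  =  L * column0 + column3 */
        world[0] = m[0] * L + m[12];
        world[1] = m[1] * L + m[13];
        world[2] = m[2] * L + m[14];
    }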
How to view a scene from different points with the camera unit
(To be written. Until then, you can probably see most things by inspecting
the example circuit that comes with the camera_unit.)