Gerald Hernandez
10/14/2004 4:26:00 PM
I believe we are probably speaking of the same thing.
The original bounding box / range collision test to determine which elements
(polylines) to consider is the basis of any location test so I forget to
mention it sometimes, as for me it is a given.
I am talking mostly about the hit testing within the function that locates
the closest segment to the point per element. Within this function the hit
radius is dynamic and shrinks based on the minimum distance found so far.
The "optimization" is based on axis aligned infinite strips and/or axis
separation on a per segment basis. Basically, you are creating virtual
bounding boxes based on the hit radius and the segment extents on the fly.
You must still test each vertex/segment, but you have quite a few conditions
that will quickly reject a segment from further processing. The benefit is
that there are no expensive division or square root calculations during
rejection, and you minimize the overall number of calculations necessary;
the expensive ones are only performed for segments that could actually be
closer.
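A minimal sketch of that rejection idea in Python (the function name and
structure are my own illustration, not from any particular product): the
current best squared distance doubles as the squared hit radius, so the
axis tests compare squared extents and never need a square root.

```python
def point_segment_dist2(px, py, ax, ay, bx, by, best_dist2):
    """Squared distance from point (px, py) to segment (ax, ay)-(bx, by),
    capped at best_dist2. Returns best_dist2 unchanged if the segment is
    provably no closer."""
    # Cheap rejection: a virtual bounding box built on the fly from the
    # segment extents. best_dist2 is a *squared* radius, so we compare
    # squared axis distances -- no sqrt, no division.
    lo_x, hi_x = (ax, bx) if ax <= bx else (bx, ax)
    lo_y, hi_y = (ay, by) if ay <= by else (by, ay)
    dx = lo_x - px if px < lo_x else (px - hi_x if px > hi_x else 0.0)
    dy = lo_y - py if py < lo_y else (py - hi_y if py > hi_y else 0.0)
    if dx * dx > best_dist2 or dy * dy > best_dist2:
        return best_dist2  # rejected without the expensive test

    # Full test (one division) only for segments that survived rejection:
    # project the point onto the segment and clamp to its endpoints.
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    seg_len2 = vx * vx + vy * vy
    t = 0.0 if seg_len2 == 0.0 else (wx * vx + wy * vy) / seg_len2
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * vx, ay + t * vy
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    return min(d2, best_dist2)
```

As the minimum distance found so far shrinks, so does the virtual box, and
more and more segments fall to the cheap comparisons.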
When you exit the function, one of the returned parameters is the squared
distance (distance2). That value becomes an input parameter to the test for
the next element, serving as its starting hit radius.
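That carry-over can be sketched like this (again my own illustration;
polyline_dist2 here is a simplified per-element test without the rejection
conditions shown above, just to keep the example self-contained):

```python
def polyline_dist2(px, py, pts, best_dist2):
    """Minimum squared distance from a point to a vertex-list polyline,
    capped at the caller's best_dist2."""
    for (ax, ay), (bx, by) in zip(pts, pts[1:]):
        vx, vy = bx - ax, by - ay
        wx, wy = px - ax, py - ay
        len2 = vx * vx + vy * vy
        t = 0.0 if len2 == 0.0 else max(0.0, min(1.0, (wx * vx + wy * vy) / len2))
        d2 = (px - ax - t * vx) ** 2 + (py - ay - t * vy) ** 2
        best_dist2 = min(best_dist2, d2)
    return best_dist2

def closest_element(hit_x, hit_y, polylines, hit_radius):
    """Return (index, distance2) of the closest polyline within hit_radius,
    or (-1, hit_radius**2) if none qualifies. Each element's returned
    distance2 becomes the shrinking hit radius for the next element."""
    best_dist2 = hit_radius * hit_radius
    best_index = -1
    for i, pts in enumerate(polylines):
        d2 = polyline_dist2(hit_x, hit_y, pts, best_dist2)
        if d2 < best_dist2:
            best_dist2, best_index = d2, i
    return best_index, best_dist2
```

Elements tested later face an ever-tighter radius, so they are rejected
earlier and earlier in their own per-segment loops.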
Theoretically, it might be possible for the optimization to be more
expensive than other methods, but that would require the unlikely scenario
in which each successive segment is closer to the hit point than the last.
Of course, this all assumes that you are defining and processing your
polylines as a vertex list and not via some other method such as Symmetric,
Relative Vector, etc., in which case much precomputation is already done at
a cost of memory. In those cases, other, even faster methods could be
employed, but they would be very specialized and come at a cost of
processing time and memory at some point.
Finally, the numbers Michael is talking about are relatively tiny compared
to the scenarios I encounter. And this method performs quite well for me, so
I foresee no problems in his situation.
Gerald
"Frank Hileman" <frankhil@no.spamming.prodigesoftware.com> wrote in message
news:uTf4%237fsEHA.3556@TK2MSFTNGP10.phx.gbl...
> Hello Gerald,
>
> The segment rejection process you speak of is probably best performed by
> keeping a bounding box surrounding a set of segments. Presumably it is less
> expensive to test the bounding box first, especially if there are much fewer
> bounding boxes than segments.
>
> The first problem with this approach is the size of the bounding box. If the
> "hit radius" (minimum hit distance) is fixed, it is easy to compute the
> bounding box sizes and cache the bounding boxes. If the hit radius is a
> parameter to the hit testing function then a fixed size bounding box must be
> cached and the distance to the bounding boxes will have to be computed on
> each hit test.
>
> VG.net already stores a bounding box per visual element, and distance
> computations can be performed on these before proceeding to the more
> intensive per-segment test. However I imagine a contour map will have many
> shapes with overlapping bounding boxes, meaning that many shapes will pass
> the first hit test. So it is unclear if the "optimization" would be a help
> or a hindrance.
>
> Additionally the numbers Michael has given are not especially large.
>
> If you had a different optimization approach in mind I would love to hear
> it! Even if only for theoretical interest, and not this specific case.
>
> Regards,
> Frank
>
> "Gerald Hernandez" <Cablewizard@spam_remove@Yahoo.com> wrote in message
> news:%23nSIViasEHA.1308@tk2msftngp13.phx.gbl...
> > Generally speaking, the method Frank suggested is the best way.
> > However, your observation that comparing the distance to each segment
> > separately could be inefficient and basically a brute force approach is well
> > founded. But it is worth mentioning that there are a few ways to optimize
> > the algorithm to minimize the actual number of expensive distance
> > calculations employed. In addition to Frank's comment of the "trick" being
> > to compare the distance to each segment separately, I would add that the
> > real "trick" is to employ less expensive segment rejection testing within
> > the process. This minimizes the number of more expensive calculations by
> > rejecting segments we know we don't need to test.
> >
> > Given Frank's obvious experience in this arena, I can only assume that these
> > methods are implemented within their algorithms. This is indeed a more
> > elegant solution than the brute force method. I have not used their product,
> > but from my own experience with optimizing this and similar algorithms, with
> > a little tweaking, the performance of this method is much better than one
> > might expect.
> >
> > Gerald
> >
> >
>
>