I once heard this saying from Jon Blow (though I’m not sure if it was originally his):

Programming isn’t math, it’s physics

I think that’s absolutely true: someone coming from a math background must learn to be wary of all sorts of things you would never expect in math. In this post I will give an example of that.

Background

Suppose I’m making a 2D game involving rectangular shapes. Let’s say I have a 2D vector type and a struct representing a rectangle, which look something like this

struct v2
{
    r32 x, y;
};

struct rect2
{
    v2 min;
    v2 max;
};

Elaboration: v2 is a straightforward packing of two 32-bit floats. rect2 represents a rectangle by a pair of (min, max) coordinates.
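
A quick note for anyone following along at home: the snippets below also use + and - on v2, plus the r32 and b32 aliases, none of which are shown above. Here’s a minimal sketch of what I assume they look like (the aliases and operators are my guesses, not the original code; in a real file the typedefs would come before the struct definitions):

#include <stdint.h>

typedef float   r32; // 32-bit float (assumed alias)
typedef int32_t b32; // 32-bit boolean (assumed alias)

inline v2
operator+(v2 a, v2 b)
{
    v2 result = {a.x + b.x, a.y + b.y};
    return result;
}

inline v2
operator-(v2 a, v2 b)
{
    v2 result = {a.x - b.x, a.y - b.y};
    return result;
}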

The problem is: given these

  1. a rectangular entity at origin with dimension dim,
  2. a point p,

find out whether p is inside the entity.

The naive approach

I think many people would, like me, start with something like this

inline b32
checkOverlapBad(v2 origin, v2 dim, v2 p)
{
    rect2 hitbox = {origin, origin + dim};
    return inRectangle(hitbox, p);
}

So hitbox is the rectangle representing the hitbox of the entity in question. Then we check whether p is inside the hitbox, using the function inRectangle below

inline b32
inRectangle(rect2 rect, v2 point)
{
    // half-open test: min is inclusive, max is exclusive
    return ((point.x >= rect.min.x) && (point.y >= rect.min.y)
            && (point.x < rect.max.x) && (point.y < rect.max.y));
}

This is already a working solution, but it has one major flaw. The flaw is caused not by the programmer, but by the limitations of floating-point numbers at approximating real-number arithmetic.

For example, a math person would expect this loop to go on for a long time.

    for (r32 x = 1.0f; ; x *= 2)
    {
        v2 origin = {x, x};
        v2 p = {x+3.0f, x+3.0f};
        v2 dim = {3.005f, 3.005f};
        b32 bad_result = checkOverlapBad(origin, dim, p);
        assert(bad_result);
    }

You don’t expect the assertion to fire unless x gets absurdly large. Instead, the assertion fires at x = 131072.0f, which is by no means a large number. The explanation is that in checkOverlapBad, we add up origin and dim. When origin gets too large, the result of the addition loses precision, and you end up with a hitbox like this

hitbox = {min={x=131072.000 y=131072.000}
          max={x=131075.000 y=131075.000}}

The CPU has knocked out the 0.005f fraction in order to make room for the number’s magnitude, which means that our poor rectangle’s dimension has shrunk from 3.005f to 3.000f. No good.
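
You can reproduce this rounding in isolation with a few lines; here’s a little standalone sketch of my own (not part of the game code):

#include <stdio.h>

int main()
{
    float big = 131072.0f;    // 2^17: float spacing here is 1/64
    float sum = big + 3.005f; // the exact answer 131075.005 is not representable
    printf("%.3f\n", sum);    // prints 131075.000: the 0.005 is rounded away
    return 0;
}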

Can we do better?

Definitely. The point of this post is not to point out the obvious fact that “computers have a finite number of bits”, but to show that you can maneuver around it by rearranging the calculation. Instead of doing the arithmetic in “absolute coordinates” (that is, the coordinates that p and origin are in), we can do it relative to origin, as follows

inline b32
checkOverlapGood(v2 origin, v2 dim, v2 p)
{
    v2 delta = p - origin;
    rect2 hitbox = {{0,0}, dim};
    return inRectangle(hitbox, delta);
}

The same example, now going through checkOverlapGood, fares better

    for (r32 x = 1.0f; ; x *= 2)
    {
        v2 origin = {x, x};
        v2 p = {x+3.0f, x+3.0f};
        v2 dim = {3.005f, 3.005f};
        b32 good_result = checkOverlapGood(origin, dim, p);
        assert(good_result);
    }

The above assertion fires at x = 16777216.0, which is satisfactory considering that 16777216.0 = 2^24, so we’ve literally run out of bits to represent p at that point (in this case p = 16777220.0, not the correct value of 16777219.0). Our example failed because our coordinate system can no longer represent the coordinates precisely, and not because of the combined magnitude range of position plus dimension, as in the case of checkOverlapBad.
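
To confirm that this time it’s p itself that can no longer be represented, here’s another standalone sketch of my own:

#include <stdio.h>

int main()
{
    float x = 16777216.0f; // 2^24: float spacing here is 2.0
    float p = x + 3.0f;    // the exact answer 16777219 falls between representable values
    printf("%.1f\n", p);   // prints 16777220.0: p is off by one before we even subtract
    return 0;
}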

Plus, notice that our hitbox is now a constant, so if I hadn’t placed p right at the edge (say, if I set p={x+2.0f, x+2.0f}), the loop would continue to work until x reaches inf (Update (9.4.2022): well, it would only work accidentally).

A significant improvement, obtained simply by doing the same math differently.

How does one find out about this?

I remember finding out about this by comparing my collision detection code with Casey’s: his code doesn’t register collisions involving entities on different floors, while mine did. Had I not had example code to compare my solution against, I would never have found out about the defect in my own.

In an alternative universe, I could have overlooked the whole thing. I could have easily chalked it up to floating-point math being twiddly and ignored the problem. There’s no guarantee that you’ll ever find out about stuff like this on your own.

That’s why I think in programming you need a bit of luck, as well as proper guidance (which is also luck). I can’t deny that this sounds discouraging, but on the flip side, it would be boring if everything was straightforward.

Sidenote: Weekly update

I’ve just finished episode 100 of Handmade Hero today. Stuff gets even harder now with lighting. The hardest part is that everything is fake:

  • the perspective is fake
  • the ground is fake
  • the sky is fake
  • the z value is fake

It’s all about making stuff up… but still it’s the most fun I’ve ever had!

