Hello and welcome back to my blog!

This article is a companion to Physics Engines for Dummies and covers the process of actually detecting a collision between two shapes.

This article assumes the reader has a basic grasp of maths and geometry.

**Definitions**

I will write matrices in bold uppercase, vectors in uppercase and scalars in lowercase. The dot product will be shown as . and the cross product as x.

||A|| means the magnitude of A.

|A| means the absolute value of A.

When I want to express individual components of a vector, I will write A_x to mean the x component of A.

I will use * to denote multiplication, but also note that sV might be used to show the scalar s multiplied by the vector V.

## Collision Detection

Briefly, collision detection is the process of detecting when two shapes are about to collide or have already collided. There is a distinction there because there are chiefly two different types of collision detection methods:

**Discrete**

This type will only tell you whether two shapes are intersecting at the current time.

At frame 1, the objects are not colliding, but by frame 2 they are.

**Continuous**

This type tries to predict the exact point and time of collision between the two objects.

At frame 1, the objects are *predicted* to collide before frame 2 arrives. The system estimates the point and time of the collision so that when frame 2 does arrive the objects are not intersecting.

Obviously the continuous method seems the better choice, since we don’t want our objects to be intersecting on screen – however, before we can understand the way this method works, we must first explore the other method.

## Discrete collision detection

**Penetration distance**

In discrete collision detection, it’s very important to know exactly how much one shape has penetrated into another and what the shortest direction to push the two apart again is.

This is important because we will need to be able to *resolve* the collision and push the two shapes apart so they are just touching.

In Figure 1, the penetration distance is *d* and the direction to push is *n*.

Let’s start having a look at some common collision detection cases.

**Circle vs circle**

Two circles *A* and *B* are defined as having a centre and a radius. They collide when the distance between the two centres is less than the sum of their radii:

||A-B|| < (r1+r2)

Of course you can speed this up by not having to compute the magnitude ||A-B||, which involves a sqrt, and instead squaring both sides of the inequality:

||A-B||² < (r1+r2)²

So to calculate a measure of the penetration we can say:

p = ||A-B|| – (r1+r2)

You can see that this is a simple rearrangement of the collision test inequality:

||A-B|| < (r1+r2)

which is equivalent to:

||A-B|| – (r1+r2) < 0

All I have done is subtract (r1+r2) from both sides of the inequality, and the left-hand side is handily also the measure of penetration between the two circles. So, when

p < 0

The two circles are penetrating by distance p. We would also like the *penetration vector* so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not just any vector that does this: it is the *only* vector which corrects the penetration by moving the minimum amount. This matters because we only want to correct the error, not introduce more by moving too much, or too little, when we correct.

N = (A-B) / ||A-B||

P = N*p

Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance.
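The whole circle vs circle test condenses into a few lines. Below is a minimal Python sketch (the function name and return convention are my own, not taken from the accompanying source): it uses the squared-distance early-out, and only pays for the square root once an intersection has been found.

```python
import math

def circle_vs_circle(ax, ay, ra, bx, by, rb):
    """Return (p, nx, ny) where p < 0 is the penetration distance and
    (nx, ny) is the collision normal N = (A-B)/||A-B||, or None if the
    circles are separate. Assumes the centres are not coincident."""
    dx, dy = ax - bx, ay - by
    dist_sq = dx * dx + dy * dy
    r_sum = ra + rb
    if dist_sq >= r_sum * r_sum:
        return None                      # cheap rejection: no sqrt needed
    dist = math.sqrt(dist_sq)            # intersecting: now do the heavier maths
    p = dist - r_sum                     # p = ||A-B|| - (r1+r2), negative here
    return p, dx / dist, dy / dist       # penetration vector P = N*p
```

For example, two unit circles with centres 1.5 apart give p = -0.5 and a normal of (-1, 0) pointing from B towards A.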

At this point you may have noticed that I haven’t tried to optimise this part of the calculation by using squared lengths, etc. This is generally OK because we can still do the initial intersection test using squared maths, and then, once we have detected an intersection, go for the heavier maths, since that will happen far less often.

You may also have noticed that during our calculations we have calculated the collision normal N.

**AABB vs AABB**

Almost as useful as the mighty circle, the Axis Aligned Bounding Box, or AABB, is defined by its corner points and represents an unrotated rectangle in 2d. These are generally used to bound more complex objects so that a simple overlap check can be done to determine broad overlap, with a more expensive fine-grained check performed later.

*Figure 2* shows complex object *A* bounded by its AABB.

For two AABBs *A *and *B *defined by their centres and half-extents (half width, half height), overlap can be determined quite simply:

D = |centreB-centreA| – (halfExtentsA+halfExtentsB)

(where |…| is taken component-wise)

binary overlap = D_x < 0 && D_y < 0;

In the example below the half extents are shown in red.
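As a quick Python sketch of the test above (the names are mine), with the absolute value applied per component:

```python
def aabb_overlap(centre_a, half_a, centre_b, half_b):
    """Discrete AABB vs AABB test; boxes are given as (x, y) centre and
    half-extents tuples. D = |centreB - centreA| - (halfExtentsA +
    halfExtentsB); the boxes overlap when both components are negative."""
    dx = abs(centre_b[0] - centre_a[0]) - (half_a[0] + half_b[0])
    dy = abs(centre_b[1] - centre_a[1]) - (half_a[1] + half_b[1])
    return dx < 0 and dy < 0
```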

Fun AABB facts:

To build an AABB from a complex object, just loop through all the vertices, keeping track of the minimum and maximum coordinate in each axis:

```actionscript
static public function BuildAABB( points:Vector.<Vector2> ):AABB
{
	var min:Vector2 = new Vector2( Number.MAX_VALUE, Number.MAX_VALUE );
	var max:Vector2 = min.m_Neg;

	for each ( var p:Vector2 in points )
	{
		min = p.Min( min );
		max = p.Max( max );
	}

	return new AABB
	(
		min.Add( max ).MulScalar( 0.5 ),  // centre
		max.Sub( min ).MulScalar( 0.5 )   // half extents
	);
}
```

To build an AABB from two separate AABBs, do the same thing, but form the min and max points of each AABB and use them as the vertices:

```actionscript
static public function CombineAABBs( a:AABB, b:AABB ):AABB
{
	const minA:Vector2 = a.m_centre.Sub( a.m_halfExtents );
	const maxA:Vector2 = a.m_centre.Add( a.m_halfExtents );
	const minB:Vector2 = b.m_centre.Sub( b.m_halfExtents );
	const maxB:Vector2 = b.m_centre.Add( b.m_halfExtents );

	const min:Vector2 = minA.Min( minB );
	const max:Vector2 = maxA.Max( maxB );

	const centre:Vector2 = min.Add( max ).MulScalar( 0.5 );
	const halfExtents:Vector2 = max.Sub( min ).MulScalar( 0.5 );

	return new AABB( centre, halfExtents );
}
```

**Circle vs AABB**

Not terribly useful in the real world, but very useful as a tool for understanding, the circle vs AABB test is also reasonably simple – the trick is to convert this problem into the simpler problem of circle vs circle. For an AABB A and a circle B:

The first step is to get the centre of the circle and project it on to the boundary of the AABB. Fortunately this is as easy as forming the vector *V* between the AABB centre and the circle centre and simply clamping it against the half extents of the AABB. This yields point B_{c}. In *Figure 3*, the clamped x, y region of V is shown in red.

B_{c} is actually the closest point on the AABB to the circle, which means the distance from the AABB to the circle is simply the length of the vector from B_{c} to B minus the radius of the circle.

D = B – B_{c}

distance = ||D|| – radius
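A hypothetical Python version of the whole test might look like this (my own names; note it does not handle the case where the circle centre is inside the box, where D degenerates to zero):

```python
import math

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def circle_vs_aabb(box_centre, half_extents, circle_centre, radius):
    """Signed distance from an AABB to a circle; negative means penetration."""
    # form the vector V from the AABB centre to the circle centre
    vx = circle_centre[0] - box_centre[0]
    vy = circle_centre[1] - box_centre[1]
    # clamp V against the half extents: this yields Bc, the closest
    # point on the AABB to the circle (in box-centred coordinates)
    cx = clamp(vx, -half_extents[0], half_extents[0])
    cy = clamp(vy, -half_extents[1], half_extents[1])
    # D = B - Bc, distance = ||D|| - radius
    dx, dy = vx - cx, vy - cy
    return math.hypot(dx, dy) - radius
```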

**Circle vs OBB**

Circle vs Oriented Bounding Box, or OBB, is much more useful in the real world, but trickier because the box now has an orientation to worry about. Once again, though, we can convert this problem into a simpler one, that of circle vs AABB, by simply transforming the circle into the space of the OBB first.

```actionscript
public class OBB
{
	public override function Collide( vis:Visual ):Contact
	{
		if ( vis is Circle )
		{
			const c:Circle = Circle( vis );

			// transform circle into OBB space
			const v:Vector2 = m_matrix.TransformIntoSpaceOf( c.m_Pos );
			const clamped:Vector2 = v.Clamp( m_halfExtents.m_Neg, m_halfExtents );

			// form point on OBB
			const point:Vector2 = m_matrix.TransformBy( clamped );

			// get distance to circle
			const d:Vector2 = c.m_Pos.Sub( point );
			const dist:Number = d.m_Len - c.m_Radius;

			return new Contact( d.m_Unit, dist, point );
		}
		else
		{
			throw new NotImplementedException( );
		}
	}
}
```


**OBB vs OBB**

Now we’re starting to get into the interesting stuff. Oriented Bounding Box vs Oriented Bounding Box is actually pretty much the same complexity as convex polygon vs convex polygon. A lot of people advocate using the Separating Axis Test in order to solve this problem, but personally I’ve never been able to visualise it and generally, if I can’t visualise something, I’m not going to use it as a solution.

A concept I have been able to visualise, and the one I’m going to demonstrate here, is the Minkowski Difference. Once again, the idea behind this technique is that it enables us to convert a difficult problem into a simpler one.

The Minkowski Difference consists of the convex hull of every point on the boundary of one shape *subtracted* from every point on the boundary of the other shape. It’s easier to imagine this as shrinking one of the shapes down until it becomes a point, and expanding the other shape by the shape of the first. *Configuration space* is the space that the Minkowski difference resides within: the set of vector differences of two objects.

*Figure 4* shows objects *A* and *B*, and then also shows object *B* shrunk down to a point, while *A* grows by the shape of *B*. Of course, because it’s a subtraction of one from the other, the position of the resulting shape changes, and it’s the relationship with the origin (0,0) which then becomes very interesting.

It turns out that when the Minkowski Difference (written *MD* from now on) of two shapes contains the origin, the two shapes are intersecting. Furthermore, the distance from the origin to the MD is actually the distance between the two shapes, and the vector between the two gives you the *Minimum Translational Distance*: the globally shortest translation you can apply to either shape to move it from penetration to just touching.

In the above demo, you can manipulate the OBBs *A* and *B* by moving them, or you can drag the handles to rotate them. Note the MD *B-A* shown and how it reacts as the objects are moved. The origin is shown as the black dot in the centre – note that when *A* and *B* intersect, the MD contains the origin. The *Minimum Translational Distance* between the two shapes is shown as the blue line.

The observant among you will also notice the coloured edges in the MD – these represent the shape they originated from; red edges come from the red shape *A* and green from *B*.

So, how do you build this MD for two OBBs, then? Each edge in the MD is made up of a vertex from one shape subtracted from an edge from the other. In order to know which vertices we subtract from which edges, we need to define the concept of a *supporting vertex*.

A *supporting vertex* is simply a vertex which is “most in the direction of” a given direction vector. Mathematically, this can be found as the vertex which has the greatest dot product with a given direction vector.
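A sketch of the supporting-vertex query in Python (my own names; it returns a list so that the exactly-aligned case, where two vertices tie, is representable):

```python
def get_support_vertices(vertices, direction, eps=1e-9):
    """All vertices 'most in the direction of' direction, i.e. those
    whose dot product with it is (within eps of) the greatest."""
    dots = [vx * direction[0] + vy * direction[1] for vx, vy in vertices]
    best = max(dots)
    return [v for v, d in zip(vertices, dots) if best - d <= eps]
```

For a square, a diagonal direction selects a single corner, while an axis-aligned direction returns both vertices of the facing edge.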

Once we have this, we can simply loop through all the edges of *A*, finding the supporting vertex in *B* for each edge by using the reversed edge normal. The edge normal is reversed because we want to find the vertex “most opposite” this edge. As we do so, we project the origin onto each edge and keep track of the distance; we’re looking for the least negative distance, which will be the *Minimum Translational Distance*.

Once we’ve gone through the edges of *A*, we do the same thing with *B*, this time using supporting vertices from *A*, again tracking the least negative distance. If we’re only concerned about penetration, we can terminate our search as soon as we find a positive distance, since this represents a separating axis.

```actionscript
var p0:Vector2 = new Vector2( );
var p1:Vector2 = new Vector2( );
var leastPenetratingDist:Number = -Number.MAX_VALUE;
var leastPositiveDist:Number = Number.MAX_VALUE;

// face of A, vertices of B
for ( var i:int = 0; i<A.m_numPoints; i++ )
{
	// get world space normal
	var wsN:Vector2 = A.GetWorldSpaceNormal( i );

	// world space edge
	var wsV0:Vector2 = A.GetWorldSpacePoint( i );
	var wsV1:Vector2 = A.GetWorldSpacePoint( ( i+1 )%A.m_numPoints );

	// get supporting vertices of B, most opposite face normal
	var s:Vector.<SupportVertex> = B.GetSupportVertices( wsN.m_Neg );

	for ( var j:int=0; j<s.length; j++ )
	{
		// form point on plane of minkowski face
		var mfp0:Vector2 = s[j].m_v.Sub( wsV0 );
		var mfp1:Vector2 = s[j].m_v.Sub( wsV1 );

		var faceDist:Number = mfp0.Dot( wsN );

		// project onto minkowski edge
		var p:Vector2 = ProjectPointOntoEdge( new Vector2( ), mfp0, mfp1 );

		// get distance
		var dist:Number = p.m_Len*Scalar.Sign( faceDist );

		// track negative
		if ( dist>leastPenetratingDist )
		{
			p0 = p;
			leastPenetratingDist = dist;
		}

		// track positive
		if ( dist > 0 && dist<leastPositiveDist )
		{
			p1 = p;
			leastPositiveDist = dist;
		}
	}
}

// face of B, vertices of A
for ( i = 0; i<B.m_numPoints; i++ )
{
	// get world space normal
	wsN = B.GetWorldSpaceNormal( i );

	// world space edge
	wsV0 = B.GetWorldSpacePoint( i );
	wsV1 = B.GetWorldSpacePoint( ( i+1 )%B.m_numPoints );

	// get supporting vertices of A, most opposite face normal
	s = A.GetSupportVertices( wsN.m_Neg );

	for ( j=0; j<s.length; j++ )
	{
		// form point on plane of minkowski face
		mfp0 = wsV0.Sub( s[j].m_v );
		mfp1 = wsV1.Sub( s[j].m_v );

		faceDist = -mfp0.Dot( wsN );

		// project onto minkowski edge
		p = ProjectPointOntoEdge( new Vector2( ), mfp0, mfp1 );

		// get distance
		dist = p.m_Len*Scalar.Sign( faceDist );

		// track negative
		if ( dist>leastPenetratingDist )
		{
			p0 = p;
			leastPenetratingDist = dist;
		}

		// track positive
		if ( dist > 0 && dist<leastPositiveDist )
		{
			p1 = p;
			leastPositiveDist = dist;
		}
	}
}

if ( leastPenetratingDist<0 )
{
	// penetration by leastPenetratingDist
}
else
{
	// separated by leastPositiveDist
}
```

The above snippet also tracks positive distance, so it can be simplified if you only want to track penetration. One caveat to watch out for is the case where the two shapes are exactly aligned – as they are in the demo above. In this case, you have to deal with the possibility of there being two supporting vertices for an edge instead of one, and it’s important to re-project onto the edge from the origin to make sure you don’t accidentally pick the wrong edge-vertex pair. The above code deals with this case.

**Poly vs Poly**

The really good thing about this technique is that it applies directly to arbitrary convex polygons as well.

Again, manipulate the live demo by dragging the shapes and their handles…

So, we can now calculate the penetration distance for two arbitrary convex polygons.

## Continuous

So, finally we get around to talking about continuous collision detection. Now we start to reap the rewards of all this *Minkowski Difference* nonsense!

**Continuous Linear**

It turns out that if all we want to do is a linear sweep of our shapes to find the first time of collision ignoring rotation, the only thing we have to do is to ray-cast from the origin towards the MD using the relative velocity of the two objects as the ray segment end!

In *Figure 5*, the first time of collision of *B* with *A* can be found by shooting a ray from the origin towards the MD of *B-A*.

For polygons this is the same amount of work as calculating the penetration distance – you fire the ray at the edges of the MD, which you build as usual. You can even avoid testing edges if the ray is back-facing with respect to the edge normal.
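The ray vs MD-edge step is just a 2d ray vs segment intersection. A minimal Python sketch (my own formulation, solving origin + t*(dx, dy) = a + s*(b - a) with 2d cross products):

```python
def ray_vs_edge(dx, dy, ax, ay, bx, by):
    """Intersect a ray from the origin along (dx, dy) with segment
    (ax, ay)-(bx, by). Returns the ray parameter t at the hit, or
    None on a miss."""
    ex, ey = bx - ax, by - ay            # edge direction
    denom = dx * ey - dy * ex            # 2d cross product of ray and edge
    if denom == 0:
        return None                      # ray parallel to edge
    t = (ax * ey - ay * ex) / denom      # parameter along the ray
    s = (ax * dy - ay * dx) / denom      # parameter along the edge
    if t >= 0 and 0 <= s <= 1:
        return t
    return None
```

The smallest t over all (front-facing) MD edges gives the first time of collision, with t in units of the relative velocity, so a hit within the frame has t between 0 and 1.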

In the above demo, A is being swept against B along A’s velocity; the pink A’ shows where A hits B. The MD shows what this looks like in Configuration Space – don’t forget to rotate/move A and B to see what the result looks like. Notice how the ray at the origin fired against the MD corresponds exactly to when A hits B?

One caveat is that A must not intersect B to start with, or the test will fail. Gino Van Den Bergen has generalised this approach to work with the GJK algorithm; it is well worth taking a look at if you want to cast against non-polygonal shapes, like circles or capsules.

The basic idea is this:

You provide a function which can return the closest distance between two shapes and the normal at that point. These two return values are also the distance from the MD to the origin and the direction that distance is in. In this example, the function used is returning distance and normal between two circles – shown in the diagram is the MD of two circles.

In *Figure 6*, two iterations of Gino Van Den Bergen’s algorithm are shown, I_{1} and I_{2}. The black point represents the current estimate, and starts at the origin in I_{1} and moves along the velocity shown as the arrow. The green point represents part of the information returned by the function – the closest point on the MD, which is at the distance returned, also the normal at that point has been found as well (again, returned by the function).

The algorithm intersects the velocity ray against the returned normal and produces the red point shown in the diagram. Then it moves on to the next iteration. The red point becomes the new estimate and the function is queried again for the distance to the MD and the normal. The information returned results in the new red intersection point in I_{2} and at this point the diagram becomes less useful because we’re so close to the visible tolerance. In practice the algorithm will terminate when the distance returned from the function is less than some predefined tolerance.

I’m describing this algorithm because it will be useful when we think about intersection including angular velocity.

**Continuous Angular/Linear**

Things start to get a little more tricky when we want to calculate the exact hit time where there is rotation present. To solve this problem, we must turn to the work of Brian Mirtich, Gino Van Den Bergen and Erwin Coumans.

Brian Mirtich’s thesis presents some of the fundamental building blocks used in solving this problem:

In *Figure 7* (lifted directly from his thesis), he shows two bodies, with their closest points shown, and the two actual contact points *x1* and *x2* for this collision. Given that these bodies are convex, he shows that the distance between the closest points yields a bound on the first possible time of collision between the two shapes under **no rotation**. Further, he shows that, given the point with the largest distance from the centre for each body, it’s possible to bound the earliest time of collision between two rotating shapes.

You can see why by looking at *Figure 8*: on the left, the thin rectangle rotates and extreme point p moves to p’ following the rotation. The length of the arc travelled by p can be found by simply using the circular arc length formula from geometry, L=theta*r, as shown on the right in *Figure 8*. Therefore, if we know the maximum distance from the centre of the shape and we know the angular velocity, we can find an *upper bound* for the linear distance moved due to the rotation of a shape, and we can simply add this distance on to the bound that we had for non-rotating shapes.
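One step of this bound in Python (hypothetical names: dist is the current closest distance, vel_n the relative linear velocity along the contact normal, r_a/r_b the maximum vertex distances from each centre, av_a/av_b the angular velocities):

```python
def conservative_step(dist, vel_n, r_a, av_a, r_b, av_b):
    """Largest time step guaranteed not to miss the first contact: the
    closest distance divided by an upper bound on the approach speed,
    which adds the arc-length rate r*|omega| for each rotating body."""
    d_rel = vel_n + r_a * abs(av_a) + r_b * abs(av_b)
    return dist / d_rel
```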

Erwin Coumans took this idea and, using a method similar to the linear cast of Gino Van Den Bergen, produced a new algorithm which determines the time of impact between translating/rotating bodies:

The live demo above shows A and A’, the position of A after some linear and angular motion, 90 degrees to be precise. The various iterations of the MD are shown on the right. Don’t forget to move the shapes around!

The thing to note is that the **shape **of the MD changes under rotation of either body, unlike with linear only motion where it just translates.

Each of the MDs shown represents one iteration of the algorithm. The algorithm itself is exactly the same as the one I described using Figure 6 above, except that there is an extra distance term added, which means that instead of the ray intersecting the plane and the estimate being moved directly there, it only approaches it, taking care not to advance more than the extra distance term allows each iteration. This way, the correct result is approached each iteration, or the algorithm terminates with a miss.

The actual code behind the algorithm is quite small:

```actionscript
public function AngularCast( B:Polygon, velA:Vector2, velB:Vector2, avA:Number, avB:Number ):LinearCastResult
{
	const A:Polygon = this;
	const origin:Vector2 = new Vector2( );
	var t:Number = 0;

	// relative linear velocity
	const ray:Vector2 = velB.Sub( velA );

	var mA:Matrix23 = new Matrix23( A.m_Pos.Add( velA.MulScalar( t ) ), A.m_angle+avA*t );
	var mB:Matrix23 = new Matrix23( B.m_Pos.Add( velB.MulScalar( t ) ), B.m_angle+avB*t );

	var contact:Contact = GetDistance( B, mA, mB, parent, screenCentre );

	while ( contact.m_dist>kAngularTollerance )
	{
		// intersect velocity against normal
		const rayN:Number = ray.Dot( contact.m_normal );
		const dRel:Number = rayN + A.m_r*Math.abs( avA ) + B.m_r*Math.abs( avB );

		// compute conservative advancement
		t += contact.m_dist/dRel;

		if ( t<0 || t>1 )
		{
			// never hit
			return new LinearCastResult( false, t, new Vector2( ) );
		}

		// interpolate
		mA = new Matrix23( A.m_Pos.Add( velA.MulScalar( t ) ), A.m_angle+avA*t );
		mB = new Matrix23( B.m_Pos.Add( velB.MulScalar( t ) ), B.m_angle+avB*t );

		// get distance again
		contact = GetDistance( B, mA, mB, parent, screenCentre );
	}

	return new LinearCastResult( true, t, contact.m_normal );
}
```

As you play around with the demo, you’ll notice it sometimes takes this algorithm a while to converge; in production code you would most likely limit the number of allowed iterations. Having it terminate if it’s not making ‘enough’ progress in each iteration might also be a good idea.

## Conclusion

In this article, we have explored the various forms of collision detection and the algorithms behind them, from the simplest possible case of discrete circle vs circle to the complex case of continuous translating/rotating polygons. I hope this article has been intelligible and will inspire people to start diving in and exploring these techniques in more detail.

## Source code

As always, you can buy the source code accompanying this article if you wish – it is written in Actionscript 3.0, but the techniques are applicable to all OO languages. All purchases are very much appreciated and ensure that I can afford to make more articles like this one.

You get code for every single demo on this page, containing useful functions for computing the distance between arbitrary convex polygons, penetrating or not, and for computing the exact time of contact between rotating/translating polygons. Best of all, the licence allows you to use this code for commercial and non-commercial applications alike.

USD 5.99

Subscribers can access the source here

Until next time, have fun!

Cheers, Paul.


This is good stuff, I’m working on a twisted Pong game right now and can use a couple of things learned from this thread, as I won’t be using ordinary shapes.


Interesting post, although I will have to read a bit more about the Minkowski Difference. The timing was interesting as well, as I had just begun my own attempt at collision detection without any previous knowledge of the subject. While my implementation *works*, it’s far from perfect and this post will really help.

Excellent, glad it helped you, or at least inspired you to find out more

Article body seems to be gone o_O

Yes, sorry, it’s been restored now

Thank you very much for writing this article. I never took linear algebra, so sometimes it’s hard to conceptualize matrix transformations just reading through the math. Your article (actually the Physics Engines For Dummies article) motivated me to bust out the javascript and see if I can make some sense of this stuff.

Hey great! No problem – I’m really happy it inspired you!

The content seems to be missing.

Hey Paul just wanted to let you know that on my iPhone’s safari I can’t read your post. It just shows me the comments. And I want to read it!

Hi Eric,

There was a horrible auto-save problem last night which overwrote the article with a blank. It’s been restored now

Cheers, Paul.

Where has this article gone? I can’t see it

Apologies, there was an auto-save problem last night, apparently. It’s been restored now

Your separating axis test is somewhat inefficient. There is a linear time algorithm based on rotating calipers (at least for 2D). What you do is find the tangent cone for the two polyhedra, then take any vector within the cone as your separating axis.

http://cgm.cs.mcgill.ca/~orm/cslines.html

Hi Mikola,

I’ve not heard of rotating callipers before, I will definitely take a look – one point though, the test I describe is linear time already if you ignore the supporting vertex lookup which can be constant time in the case of regular solids.

The question is, does the rotating callipers technique cope with the case of penetrating polys?

Cheers, Paul.

Yes, it should. In fact, you can relate the rotating caliper test for the separating axis back to the Minkowski sum. Implicitly, you should think about it as lazily traversing the hull of the configuration space obstacle, and searching for a pair of extremal vertices on the convex hull of the obstacle. Taking the lines passing through these vertices and the origin determines a cone which is the space consisting of all separating axes. If this cone is empty, which will happen if the origin is contained in the obstacle, then the two objects are colliding. The main difference between doing it this way vs. explicitly evaluating the Minkowski sum is that you avoid having to explicitly compute the object or allocate any extra temporary memory.

Edit: I meant to say that it is the complement of the cone for the separating axis. If the cone is empty (i.e. if the origin is in the object), then there is no separating axis.

According to their paper it doesn’t handle penetration…

Additionally, I’m confused why you think you need to allocate any extra memory when computing the MD; it’s just a loop over the faces of each object

Cheers, Paul.

Just saying… best collision detection post I’ve seen. This is why the intertubes are awesome. Thanks for sharing!

Hi Dan,

Thanks, that means a lot. Glad you enjoyed it…

Cheers, Paul.


Beautiful and easy to follow. Thank you.

No problem

I’m an absolute novice at best in the programming field and still harbor a phobia-grade fear of maths, but to critique your writing alone, it’s pretty decent. I rarely come across tech docs that read this easily. I wish half of the documentation I have attempted to read were this well voiced and explained, really.

Thank you very much. Means a lot

As a novice programmer I must say this is very well written. Clear for chumps like me to be able to translate and apply ourselves! One thing that I’m puzzled by though is how do you project the origin onto the edge of a shape (OBB vs OBB)? I’m familiar with projection using a unit vector, but unsure of how to project the origin onto an edge built by two vectors, something I can’t find explained on the internet.

Hi Murray,

Glad you enjoyed it

Projecting a point onto an edge is quite simple, you just do the regular maths that you’re familiar with to project onto a unit vector – if you do it right you end up with the coordinate of the point in edge space (0->1) so you can then clamp this against the bounds of the edge (0->1) and recompute the point on the edge.

http://www.pfirth.co.uk/dotproduct.html#Projection

Cheers, Paul.

Hi Paul,

I have almost the same question, but it seems your link is dead. I need your confirmation about this: given a point P and an edge constructed from vertices V1 and V2, if the projected point P′ is outside the edge, then the distance from the point to the edge is actually the length of the shorter of the vectors PV1 and PV2.

Is it right?

Thanks, Xiaoyu

Your article helped me a lot. Words can’t express my appreciation!!

I hope you write more about this theme!!




Hi Paul,

In the ‘Circle vs AABB’ section, you said that

“The first step is to get the centre of the circle and project it on to the boundary of the AABB. Fortunately this is as easy as forming the vector V between AABB and circle and simply clamping it against the half extents of the AABB. This yields point Bc. In Figure 2, the clamped x, y region of V is shown in red”

How do you “simply clamp against the half extents”? What was the formula used there?

Sorry for the newbie question

Thanks!

Hi walter,

I too was confused by this part of Paul’s article. I think the following explanation might be a little more clear:

Consider the components of V (see Fig. 3): the x component (Vx) is parallel to hx and the y component (Vy) is parallel to hy. The action Paul refers to as “clamping” is simply chopping the components of V so that they are at most equal to the corresponding h-values. In Fig. 3, both Vx and Vy are longer than hx and hy, respectively, so both must be cut down to size. Trimming the components of V in this way gives you a new vector which ends on the point Bc.

If the circle was situated such that a corner of the square was not the closest point (e.g. if the circle’s center point has the same y value as the square’s center point) then one of the components of V (in my example, it’s Vy) would be shorter than its corresponding h-value. This component of V wouldn’t need to be chopped down. As a result, the “clamped” version of V would point at a Bc which lies along the edge of the square instead of its corner.

It’s a troublesome concept to explain with words (which, I imagine, is why Paul was compelled to invent a term to describe it), but if you draw it on paper you may get a better understanding.

talrnu

Hi Talrnu,

Nice explanation, thanks

Cheers, Paul.

A bit late but anyway: thanks! That helped me. I had a hard time understanding it. In the end it was merely a few lines:


Hey Paul,

thanks for this article. This helped a lot for my project.

I have a question regarding your code snippet for OBB vs OBB collision:

Is it possible to convert that algorithm to handle OBBs in 3D space? I tried to implement it for the 3D scenario. The only difference I could think of is that for the 3D case we have to deal with planes instead of edges. So I changed the code to take that into account, but it still doesn’t work correctly. Is there anything more that needs to be altered to get 3D OBBs working?

Best regards

Morty

Hi Morty,

Glad you enjoyed it

3d is a lot more complex than 2d when it comes to OBB vs OBB, because there are now also edges to deal with – in fact every edge of one OBB crossed with every edge of the other gives you the missing planes. It’s a little more complex than that because some edge combinations can be optimised away. It’s been a while since I’ve thought about this problem in 3d. It might be worth doing a Google search for some reference material

Cheers, Paul.

Thanks for your fast reply.

So, the problem lies in the construction of the MD?

As far as I understood the algorithm, the premise holds that each edge of the MD is just a translated edge from one of the boxes (in 2D). At least that’s what I extracted from the code snippet and the demo. So I assumed for the 3D case that each face of the MD is just a translated face from one of the boxes. Is that a false assumption? I have to admit that I have some problems visualising the MD of two OBBs in 3D

Unfortunately I can’t find any good reference material covering the MD approach AND the calculation of the minimum translational distance. And this distance is exactly what I am interested in.

Best regards

Morty

Hi Morty,

Your assumption about how the 2d MD is computed is correct.

In 3d however, there are a bunch more faces to deal with; these extra faces come from the edges of the 3d shapes (in 2d, the ‘edges’ are actually faces, so there is no equivalent). The normal of these new faces is the cross product of edges from one shape and edges from the other.

For two objects P and Q, you can compute the vertices which make up the polygon of the new faces like this:

`p = current_p_edge.a - current_q_edge.a;`

`q = current_p_edge.b - current_q_edge.a;`

`r = current_p_edge.b - current_q_edge.b;`

`s = current_p_edge.a - current_q_edge.b;`

where .a and .b are simply the start and end of each edge. As for reference material, I found this paper, but I’m afraid I’ve not read it fully:

http://www.cs.tau.ac.il/~danha/papers/exact_mink_3d.pdf

Cheers, Paul.

Hi Paul,

Nice article. I’m having a problem with the OBB vs OBB code though. I don’t understand the A.GetSupportVertices() method exactly. The explanation mentions finding a single vertex, but the method returns a collection. Does this method sort the returned values in order of dot product result (or some other way)?

I’m an artist by trade, so sorry if I’ve been dumb and missed something.

Cheers, G.

Hi Gareth,

If you look at the last paragraph of the OBB vs OBB section you should find the answer. It’s there to handle a special case where the OBBs are exactly aligned, in which case there will be two support vertices for a given direction.

Cheers, Paul.

Thanks Paul, makes sense now :).

Hi Paul,

Fantastic article; curiously, I could only ever visualise SAT before – so I had the reverse experience.

I am wondering how, in the case of exact alignment, you get 2 contacts from an edge, as it seems to me you only ever get one… Sorry to appear dense, but I am interested in your method.

Thanks a lot for this great article :) Jon

Hi Jon,

I recommend you take a look at my angry-birds article, which takes a close look at the contact generation following discovery of the closest features:

http://www.wildbunny.co.uk/blog/2011/06/07/how-to-make-angry-birds-part-2/

Cheers, Paul.

As a quick addition… should I use a poly clipping algorithm to accomplish this? Or work with projecting the supporting vertices?

thanks again for your article

Jon

Paul, you saved my day! The best article about CD. Thanks for sharing it.

No problem – glad it helped you!

Current student doing games for fun. Loved the article, thanks. Quick question:

“Fortunately this is as easy as forming the vector V between AABB and circle and simply clamping it against the half extents of the AABB”

More specifically the “clamping it against the half extents of the AABB” portion. Could you clarify this phrase at all? Thanks man!

Hi Zachary,

Sure, once you’ve got your vector V,

ensure that

-halfExtents.x < V.x < +halfExtents.x

and

-halfExtents.y < V.y < +halfExtents.y

by clamping V so it lies between those values.

Cheers, Paul.
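
In code, the clamp Paul describes might look something like this – a minimal sketch, assuming a simple `Vector2` type (the names here are mine, not from the article’s source):

```cpp
#include <algorithm>

struct Vector2 { float x, y; };

// Clamp the vector V (from the AABB centre to the circle centre) so it
// lies within the box's half extents; the result is the closest point
// on the AABB to the circle, in the AABB's local space.
Vector2 ClampToHalfExtents(Vector2 v, Vector2 halfExtents)
{
    Vector2 r;
    r.x = std::max(-halfExtents.x, std::min(v.x, +halfExtents.x));
    r.y = std::max(-halfExtents.y, std::min(v.y, +halfExtents.y));
    return r;
}
```

The circle then intersects the AABB if the distance from the clamped point to the circle centre is less than the circle’s radius.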

Paul, what is a collision normal?

The link that explains it is broken.

Hi Alejandro

A collision normal is a unit length vector representing the contact plane of the collision – for example, for a sphere resting on the floor, the contact normal would be perpendicular to the floor.

Cheers, Paul.

Hi Paul,

Thanks for the deep, accurate and yet simple explanation, which shows a certain element of mastery. Do you know an efficient way to compute the interpenetration (and especially the direction) of two convex objects of any form (a capsule for instance), please? (Since you can’t browse the edges and faces of the MD in this case to find the closest point, I guess.)

Hi there,

You can compute the interpenetration of any two convex objects provided you know what the Minkowski Difference of those shapes is; then it’s simply a case of finding the direction to and distance from the origin to the MD.

In the case of two capsules, the MD is a rounded parallelogram, so the distance between them is the distance from a point to the ‘surface’ of the parallelogram.

Hope that helps!

Cheers, Paul.

Thank you very much, however, the links seem to be broken. I had to google resources due to this fact: just bringing this to your attention.

In kinder words, this has helped me quite a bit with collision detection. Thank you!

Hi Ben,

Awww, I’d forgotten about those links when I let my personal website expire… Guess I’ll have to go fix them!

Cheers, Paul.

What’s AABB?

Axis Aligned Bounding Box

Hi my name is James and I am completely self taught in programming so I don’t necessarily understand everything.

The one thing I can’t wrap my head around is this function.

`GetDistance( B, mA, mB, parent, screenCentre );`

What am I trying to get the distance to, and from what? It looks like you have the Origin, new A, new B, old B, and something called “parent”?

GetDistance() gets the distance from ‘this’ object which is effectively object A, to the given object B. parent and screenCentre are just for bookkeeping purposes and don’t feature in the calculation.

Cheers, Paul.

That makes much more sense. I assume distance refers to the closest point of A to the closest point of B, and not their centres?

I’ll figure it out.

Thanks. Very enlightening.

Yes, the closest points are on the surface of both polygons, not their centres.

Well on the bright side I got it to work…

On the dark side… It takes several hundred iterations to complete…

I guess the question is why? I’m using this code:

`rayN = getDot(rayx, rayy, (minDistX / minDist), (minDistY / minDist));`

`dRel = rayN + (Ar * fabs(Aav)) + (Br * fabs(Bav));`

//Aav & Bav (A/B Angular Velocity) are in degrees

//Ar & Br: Distance from body center to farthest point

I have also come to notice that dRel hugely depends on whether you are using degrees or radians for angular velocity. However, degrees iterate too slowly and radians iterate too quickly.

All my angular coordinates are in radians since that’s most natural for the maths.

100s of iterations sounds wrong in general – it can take a while to converge in edge cases, but in general it should be much faster than that. Sounds like you have a bug in either your distance code or the code which moves the iteration closer.

Solved!

rayN is sometimes negative; adding the absolute-value “angular compensation” resulted in dRel getting smaller, not larger, i.e. producing the opposite effect to the one desired. A simple fabs(rayN) did the trick.

This still makes me question how your code works, because the dot product can be negative and the angular compensation is always positive…

If rayN is negative, it means the closest point is already inside the object, IIRC. On a convex shape it should only ever bring you closer to the surface, or terminate.

Well I do not know, but it works 100% of the time now with correct iterations now that i added the fabs() and converted to radians.

A wise man once told me “If it ain’t broke, don’t fix it.”

Hmmm, I’m not sure I approve of the use of fabs() – seems like you’re hiding the problem.

It might bear further investigation – like I say, if rayN is negative it’s a sign the point is on the wrong side of the plane, i.e. the intersection point has ‘passed’ the object.

My ‘minDist’ is always positive since it is a distance, and ‘minDistX/Y’ are not, since they represent the vector from A to B.

Is this the problem? If so, then how do you determine a negative ‘minDist’?

No, that’s fine – but your distance function should be able to return a negative distance when the two objects are penetrating each other.

dot product is defined as

`(x1 * x2) + (y1 * y2)`

so, say for example, you have 2 non-intersecting objects A and B with the velocities and respectively. minDist is + (not intersecting) and (let’s say) so are its components. Then…

`rayx = 1 - 1 = 0`

`rayy = -1 - 1 = -2`

`rayN = (0 * minDistX) + (-2 * minDistY) = negative number`

However if you switch A and B’s velocities, then rayy = 1 - (-1) = 2 and the dot becomes positive…

I’m clearly missing something and it is driving me insane.

The velocities didn’t show up properly…

`A : (1,1)`

`B : (1,-1)`

If the objects are non-intersecting and you still get a negative rayN it must mean they are heading away from each other.

You know, this entire problem is probably due to me neglecting to mention I used my own distance function…

I just found yours; it looks better…

Hey Paul! Thank you for a great article and a great site! I really like the way you check AABBs, but I was wondering if it’s possible to get a “minimum translation vector” out of it also? Somehow I got the idea that since you subtract the centre of box A from the centre of box B (which puts it close to 0) and then subtract the half extents of both boxes, the negative (= “overlapping”) result should be the translation vector, or at least the translation magnitude. But I can’t get it to work like that in my code, so perhaps it’s just a wild goose chase..?

Keep up the good work!

Hi Fredrik,

Yes, you can get the minimum translation vector using something similar, although it’s slightly more involved. If you have a look at this site:

http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm

Particularly the signed distance to box function – that should give you some clues.

Cheers, Paul.

I’m having a hard time visualising how to build the MD with the supporting vertex, so I’d like to know if I’ve understood this right.

So we first start with an edge on a shape and find its normal pointing into the shape? Then we look for the supporting vertex on the other shape most in the direction of the normal.

Once we have the supporting vertex, how do you subtract it from the edge? Do you subtract it from the vertices that form the edge, and does it mean moving the edge to the place where it belongs on the MD?

Why do we need to loop through the other shape? Do we first construct the red edges of the shape and then the green ones, to reference a figure in the article?

Also, how do I get the closest feature-pair from this? Is the edge with the Minimum Translational Distance to the origin part of the pair? And what is the vertex in the pair?

Hi there,

Yes you have it mostly right, you subtract the supporting vertex from the candidate edge’s vertices, and this operation transforms the resultant edge into configuration space in the MD. You need to loop through the other shape because the MD is composed of feature-pairs which are {vertex of A, edge of B} and {vertex of B, edge of A}. Without doing both you’d end up with only half the necessary edges in the MD.

The closest feature pair is simply the edge in the MD which is nearest the origin – you just do a little book-keeping to remember which pair on which objects formed that MD edge.

Hope that helps!

Cheers, Paul.
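
For anyone following along, the subtraction step Paul describes might be sketched like this – assuming simple `Vector2`/`Edge` types of my own, not the article’s actual source:

```cpp
struct Vector2 { float x, y; };
struct Edge { Vector2 a, b; };

Vector2 Sub(Vector2 p, Vector2 q) { return { p.x - q.x, p.y - q.y }; }

// Transform a candidate edge of one shape into configuration space by
// subtracting the other shape's supporting vertex from both of its
// vertices, yielding one edge of the Minkowski Difference.
Edge MakeMdEdge(Edge candidate, Vector2 supportVertex)
{
    return { Sub(candidate.a, supportVertex), Sub(candidate.b, supportVertex) };
}
```

Running this for every {edge of A, support vertex of B} pair and then every {edge of B, support vertex of A} pair gives the full set of MD edges.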

Yes that helps a lot! Thank you very much!

Sorry to bother again but I forgot to ask about this one:

“As we do so, we project the origin onto each edge, and keep track of the distance – we’re looking for the least negative distance, which will be the Minimum Translational Distance.”

A totally elementary question but how do you get a negative distance? Isn’t a distance always positive?

Well, it’s a signed distance; more like a 1D vector, actually.

I gave it a bit more thought. So is the sign of the distance determined by which side of the edge vector the origin lies on? The negative distance is on the left-hand side and the positive is on the right-hand side of the vector?

Yes, exactly – and in general it tells you whether the two shapes are inter-penetrating or not (negative final result = inter-penetrating by that distance).

Ok. Thanks again!

Sorry for coming up with a question yet again.

Is the signed distance calculated so that you first divide the normal by its length, making it a unit normal, so that the dot product of the unit normal and the vector pointing from the closest point on the edge to the origin gives you the distance?

Or should you calculate the distance with the ordinary magnitude formula and then get the sign of the distance from the dot product of the normal and the edge-origin vector?

Or is there a better way to do it?

Hi there,

The normals are already unit length, so you calculate the signed distance as the dot product between the normal and the vector from the origin to the edge.

Cheers, Paul.
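
As a sketch (the helper names are mine, not the article’s), the signed distance Paul describes is just one dot product:

```cpp
struct Vector2 { float x, y; };

float Dot(Vector2 a, Vector2 b) { return a.x * b.x + a.y * b.y; }

// Signed distance from the origin to an MD edge: the dot product of the
// edge's unit normal with the vector from the origin to any point on
// the edge. A negative result means the origin is behind the edge,
// i.e. the shapes are inter-penetrating by that distance.
float SignedDistance(Vector2 unitNormal, Vector2 pointOnEdge)
{
    return Dot(unitNormal, pointOnEdge);
}
```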

Why are they already unit length? Doesn’t perping a vector keep its length the same? Or am I missing something?

It does keep its length the same, yes, but the normals for the edges start out normalised, so the perp means they’re still unit length.

I don’t understand why they start out normalised. Shouldn’t the perped vector be normalised to make it a unit length normal instead of just a normal?

Also, is the ray-cast in practice a line-line intersection (or more specifically a segment-segment intersection) when tested against a polygon?

The edge normals are pre-calculated and normalised (and stored in local space), then whenever the object moves, these are rotated into world space. The perp operator doesn’t affect the length of the vector.

The raycast is indeed a segment vs segment test.

Ah, now I understand! Many many thanks for helping me!

Fantastic

Hi Paul,

I did buy the source code associated with this article, but I lost it during a hard-drive failure. Is there any chance of retrieving the download?

Thanks for a great site.

Jon

Hi Paul,

Congrats on an excellently prepared paper on continuous vs discrete collision detection.

I’ve started to get into game development, and I started off with discrete collision detection without using vectors – just the x/y co-ordinates – and I checked to see if any objects shared the same area on screen, or if any co-ordinates penetrated the boundary planes (e.g. posx > screen.width ? true : false). Checking boundary penetration was easy, but checking object overlap was inefficient and didn’t work. I was trying to perform point checks to see if any x,y co-ordinates for each point on a bounding box had penetrated a target bounding box, without rotation. It failed, so I turned it off.

I started to read some articles about using vectors to measure velocity, position and direction, and using algebra to calculate distances between objects for overlap. A small brainwave put me very close to using the OBB method – calculate the vector between the OBB centre points, find the angle of the vector, clamp to the nearest vertex for OBBs A & B according to the angle, and if the distance between vertices in the next step will be less than 0, then there’s a collision. I would then apply the methods you described in Physics engines for dummies, applying impulse vectors to reduce the velocity enough to avoid impact.

However, the overlap would only work if the vertices overlapped directly, and would only work with circle or square shapes, not ellipses, rectangles, etc.

Hopefully I can solve these problems efficiently using the Minkowski difference theorem.

Hello Paul.

Great article. The interactive demos are really useful, for me it’s a lot easier to understand having visual support.

I’m having a little trouble with the part about OBB vs OBB. The edge normal is a vector perpendicular to the edge we’re checking, right? How can I find it? I was only able to find a formula that requires the equation of a line (-1/m).

Hi Daniele,

In 2d, the edge normal can be found by using the perp operator on the edge vector, then normalising it:

http://www.wildbunny.co.uk/blog/vector-maths-a-primer-for-games-programmers/vector/#Perp

The direction the edge points (i.e. A->B, or B->A) will determine the flip direction of the perp, so make sure it’s consistent in your calculations.

Hope that helps!

Cheers, Paul.
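
As a concrete sketch (the type and function names here are mine), the perp-then-normalise step might look like:

```cpp
#include <cmath>

struct Vector2 { float x, y; };

// 2d perp operator: rotates a vector by 90 degrees. Choosing (-y, x)
// versus (y, -x), together with the shape's winding order, decides
// whether the resulting normal points into or out of the shape.
Vector2 Perp(Vector2 v) { return { -v.y, v.x }; }

// Edge normal: the perp of the edge vector A->B, normalised to unit length.
Vector2 EdgeNormal(Vector2 a, Vector2 b)
{
    Vector2 n = Perp({ b.x - a.x, b.y - a.y });
    float len = std::sqrt(n.x * n.x + n.y * n.y);
    return { n.x / len, n.y / len };
}
```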

That’s exactly what I needed. Thank you very much!

I don’t know if it’s relevant, but by doing some tests on paper I noticed that I need to use the reversed edge normal if the vertices of a shape are considered in counterclockwise order, and the straight normal if considered in clockwise order.

Hi Daniele,

Yes, that is relevant – a reversed winding order will flip the normals of a shape, so it’s important to use a consistent order. You can detect flipped winding if the area calculation for a shape returns a negative number.

Cheers, Paul.

A number of links are broken

Thanks for letting me know – I’ve fixed them up

Hey Paul,

Brilliant article, if I may say so. You’ve helped explain a great deal to me, and although I can’t quite visualise everything just yet, I feel like I’m getting a grasp on it.

Just a quick question if I may. I’ve designed my game engine to 1) update object positions, 2) check collisions, then 3) draw objects, in that order. Essentially any collisions that occur would not be visible to the player, as they are rectified in stage 2. Is it OK for me to continue using discrete collisions, or am I missing something here?

Thanks

Hi Moad,

I would recommend the following integration order:

Integrate velocities

Collision detection/resolution

Integrate position

Draw objects

Apparently, this is called semi-implicit Euler and has served me very well in the past.
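
A minimal sketch of that ordering (my own types; the collision step is elided):

```cpp
struct Vector2 { float x, y; };
struct Body { Vector2 pos, vel; };

// One step of semi-implicit (symplectic) Euler: integrate velocity
// first, then advance position using the *new* velocity. Collision
// detection/resolution runs between the two stages.
void Integrate(Body& b, Vector2 accel, float dt)
{
    b.vel.x += accel.x * dt;   // 1) integrate velocities
    b.vel.y += accel.y * dt;
    // 2) collision detection / resolution would run here
    b.pos.x += b.vel.x * dt;   // 3) integrate positions
    b.pos.y += b.vel.y * dt;
}
```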

That’s great thanks.

I’ve skimmed this and I plan to read it later. Will this article give me the knowledge needed to figure out if a semi-circle collides with circle OR if a semi-circle collides with a square? I searched for your usage of the word “circle” throughout the article and nothing jumped out to me as a direct solution to this problem.

This article doesn’t, but an earlier one does give a good method for getting the distance from a point to an arc-length, which you can adapt for your purposes. Check the bottom of this article:

http://www.wildbunny.co.uk/blog/2011/02/23/2d-isometric-land-breaking-out-of-the-grid/

Hope that helps!

Cheers, Paul.

I read your link but I’m still not sure how to detect collision of these shapes. I’d love it if you’d update your article with a section dedicated to this. But I understand that might take a lot of time. Either way, thanks for the article.

Hi Paul,

Thanks for this great article, really inspires me a lot:)

I have a question about finding the supporting vertex for the MD though, and I am wondering if you can shed some light on this: by “vector V that is most in the direction of N”, are you talking about the minimal projection of V onto N, or the minimal angle between V and N? Or, more exactly, V.dot(N) or (V/|V|).dot(N)?

Best Regards,

Xiaoyu

It’s the vector which, when dotted with N, has the largest positive scalar result.

Oh, that makes sense. I didn’t reverse the normal, so my supporting vertex is found by the minimal dot product of V and N. Anyway, it works in my little demo.

Thanks for your clarification

Ooops, I mean maximum projection of V onto N.

Okay, I figured it out, it’s the minimal projection.

Hi Paul

Could I ask you what the .m_v operation does?

I see you used it 4 times but cannot figure out what could be its meaning.

Thank you for the great articles!

Hi Kaeel,

That ‘m_’ just means member variable, and the ‘v’ is its name – in this case denoting a vector.

Cheers, Paul.

Great article, Paul.

About OBB vs OBB: why the need to do the same test (B vs A)? Shouldn’t it return the same thing as A vs B, only with a reversed contact normal? (Also, why the ‘-’ in `faceDist = -mfp0.Dot( wsN );`? Shouldn’t this calculation be exactly the same as with A vs B?)

Thanks.

Hi there,

This is because A (verts) vs B (faces) and B (verts) vs A (faces) do not produce the same results.

Cheers, Paul.

Hello Paul, I implemented SAT in my 2d game (AABB vs tile) for the detection of and response to collisions, and your tutorial helped me a lot. Now I would like to do AABB vs OBB. I can detect the collision, but I do not see how to apply the response to move just the AABB, and later both boxes. Thanks in advance,

Sorry for my English level.

Hi Vince,

Easiest thing to do is to treat this as OBB vs OBB, there isn’t much advantage in having a special case to deal with AABB vs OBB.

Cheers, Paul.

Hello, thanks. My code is OK for OBB vs OBB detection, but I don’t see how to displace the OBB after detection. I wanted to go through the rotation angle to determine the slope, but apparently this is not a good idea. Can you put me on the right path?

thanks in advance

The minimum translational vector returned from the collision code is the minimum translation required to bring both bodies into touching contact.
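
As a sketch of what that resolution might look like (the names are mine; this assumes the MTV points from A towards B, which may differ from your convention):

```cpp
struct Vector2 { float x, y; };

// Separate two bodies along the minimum translation vector (MTV),
// splitting the correction equally so they end up just touching.
void Resolve(Vector2& posA, Vector2& posB, Vector2 mtv)
{
    posA.x -= mtv.x * 0.5f;  posA.y -= mtv.y * 0.5f;
    posB.x += mtv.x * 0.5f;  posB.y += mtv.y * 0.5f;
}
```

In a real simulation you would typically weight the split by inverse mass rather than 50/50.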

However, I get the feeling you’re asking about physics simulation, rather than collision detection?

You have a good feeling – it’s about the physics simulation. My logic for AABB vs AABB is OK, but for OBB I need help.

Have a read of my article on physics simulation if you’ve not explored it already: http://www.wildbunny.co.uk/blog/2011/04/06/physics-engines-for-dummies/

Cheers, Paul.

Hello Paul, I have read your physics engine tutorial, but I still don’t see it. If I buy it and Speculative Contacts, would I then understand and succeed with my OBB vs OBB collision response? Then I would be happy, and you can eat

thanks in advance

Hi Vince,

I can’t say in advance whether you will be able to solve your problem if you buy the code. I would recommend reading through and trying the examples one by one to make sure you understand the process.

Cheers, Paul.