Ok, I admit it; I love the title of this post so much I want to get it printed on a T-shirt! But that’s a job for later, right now there’s some serious work to be done.
It may surprise you to discover that I don’t just plonk down at my keyboard, take a swig of tea, and rattle off fifteen-hundred words in half-an-hour, before hitting the “submit” button and sauntering off for a glass of wine and a snooze. Nope, ninety-nine times out of a hundred my untrustworthy memory fails me and I’m forced to do some actual research in order to maintain the appearance of knowing what I’m talking about… and this is definitely one of the ninety-nine.
I put a great deal of time and effort into learning tensors and Classical Lamination Theory (CLT), but annoyingly I doubt I’ll ever be able to regurgitate it on-the-fly, at least not in a coherent manner! So I’m hitting the books again with a goal of distilling out the fundamentals required for design, hopefully without getting lost in the theoretical “long grass”. The trade-off is that some detail is going to get skimmed over, and one or two mathematicians may be harmed in the process – possibly due to falling off their chairs in horror at my blasé attitude to their craft!
So what-on-earth is a tensor?
A quick Google search turned up the following: “A mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space”. Phew! There’s quite a bit of assumed knowledge packed in there, which makes it a less-than-great description for our purposes, but it’s a starting point, so I’m going to run with it. To do that however, I’m going to chop it up and deal with it in parts, starting with: “A mathematical object … represented by an array of components…” which sounds strangely familiar.
Cast your mind back to the matrices we discussed in the last post. Whilst a matrix is limited to 2 dimensions (i.e. having rows and columns), and occasionally has just 1 dimension (if it is a single row or single column), a tensor may also be zero-dimensional (which is just a number, called a scalar), or 3-dimensional (notionally, numbers arranged in a cube), or even “4, 5, 6, 7, 8 etc.”-dimensional (at which point a small trap-door in the back of my head opens, through which my brain attempts to squeeze and make its escape).
Avoiding the difficulty of imagining a 4-D object for a moment, I’m going to take a quick diversion into terminology. Unfortunately the word “dimension” is already in use, so tensors aren’t described as being 1-D, 2-D etc.; the word “rank” is used instead (or “degree”, or “order” – yep, once again universal conventions are hard to come by!). So, for example, a 2-D tensor is commonly described as a “second-rank” tensor.
For our purposes a tensor is going to look exactly like a matrix – an array of numbers or symbols surrounded by square brackets. It’s worth noting, however, that just because a tensor can be written as a matrix, it doesn’t have to be written in matrix form, and a matrix is not always a tensor. We’re simply choosing to write our tensors as matrices, because it’s a convenient way to package and manipulate the data.
A Sense of Direction
Getting back to our Googled definition, the next part notes that tensors are: “… analogous to but more general than a vector …” Any mention of vectors and you can be pretty sure that direction is going to be important, but what does “more general than a vector” even mean?!
I previously mentioned that a zero-rank tensor is just a single number called a scalar. A scalar has no direction, only a magnitude. Speed is a scalar (it tells you how fast you are going but not the direction); mass, temperature and absolute pressure are scalars too; they all have magnitude but no direction.
If we go up a step from a scalar and combine magnitude and direction together, you get a vector. Vectors are very common in engineering and physics, force being a good example – with the difference between tensile, compressive and shear forces being an excellent illustration of the importance of the direction component! Vectors are first-rank tensors; they are often depicted as a line, with an arrow to indicate direction and the length denoting the magnitude. Break a vector down into coordinates and it can be represented as a single-column matrix:
Figure 1 – A First-Rank Tensor (i.e. a vector)
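If you want to play along at home, a first-rank tensor is trivially easy to represent in code. Here’s a minimal NumPy sketch – the force components are made up purely for illustration:

```python
import numpy as np

# A first-rank tensor (i.e. a vector) in 3-D space: three components in a
# single column. Illustrative values only.
force = np.array([3.0, 4.0, 0.0])  # components along x, y and z

magnitude = np.linalg.norm(force)  # the length of the arrow
direction = force / magnitude      # a unit vector giving the direction

print(magnitude)  # 5.0
```

Note how the magnitude and direction both come out of the same three stored components – the array really is just a “filing cabinet” for the vector.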
Stepping up again from vectors we get a second-rank tensor. They don’t get a special name like scalars and vectors; they are just “second-rank tensors”, and you can represent them as a 2-D matrix. Stress and strain are examples of second-rank tensors – which we’ll be going into in more detail later. If you imagine a vector as a single arrow, then the best way I can think of to visualise a second-rank tensor is as a grouping of interrelated vectors. Here’s one way of presenting the second-rank stress tensor:
Figure 2 – A Second-Rank Tensor (in this case representing stress)
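In the same vein, the nine components of a 3-D stress tensor fit naturally into a 3×3 NumPy array. This is only a sketch with made-up numbers, following the usual convention of normal stresses on the diagonal and shear stresses off it:

```python
import numpy as np

# A second-rank stress tensor in 3-D space: nine components in a 3x3 matrix.
# Diagonal entries are the normal stresses; off-diagonal entries are the
# shear stresses. Moment equilibrium makes the matrix symmetric.
stress = np.array([
    [120.0,  30.0,   0.0],  # sigma_xx, tau_xy, tau_xz
    [ 30.0,  80.0,   0.0],  # tau_yx, sigma_yy, tau_yz
    [  0.0,   0.0,  50.0],  # tau_zx, tau_zy, sigma_zz
])  # illustrative values in MPa

print(np.allclose(stress, stress.T))  # True - the stress tensor is symmetric
```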
Now we get to the mind-bending part. If we want to relate stress and strain in an isotropic material – such as most metals – we can use the stiffness (i.e. Young’s Modulus) relation and, providing the loading is simple, do a straightforward calculation:
εE = σ
(i.e. strain × stiffness = stress)
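In code, this special case is nothing more than a single multiplication. A minimal sketch, with made-up numbers (the modulus is in the right ballpark for steel):

```python
# The special-case (one-dimensional) stiffness relation: treat stress and
# strain as plain numbers. Illustrative values only.
E = 200e9        # Young's modulus, Pa (roughly steel)
strain = 0.001   # dimensionless

stress = strain * E  # strain x stiffness = stress
print(stress)        # 200000000.0 Pa, i.e. 200 MPa
```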
What’s not immediately obvious though, is that the above calculation is actually presenting a special case – we are treating stress and strain as scalars (or at best, vectors pointing in the same direction) when really they aren’t. As I just mentioned, stress and strain are both actually second-rank tensors, making the stiffness relationship between them (oh the horror!), a fourth-rank tensor.
Fortunately we have found a way to write fourth-rank tensors in a 2-D matrix, so there’s no need to buy special “4th dimension” paper and pencils, but they are not exactly intuitive. I may have already stretched your visualisation abilities well beyond breaking point, but if you do want to try and visualise a fourth-rank tensor, think of it as holding the mathematical relationships between two groups of interdependent vectors. (Ugh! On second thoughts, maybe don’t bother.)
The final part of our Googled tensor definition stated that our array of components, “… are functions of the coordinates of a space.” By my reasoning this statement has more than one aspect to it, the first being that the number of dimensions of “space” is important. The size of the matrix required to represent a tensor depends on the number of components the tensor has, and that in turn depends on the number of spatial dimensions the tensor is working within. The rule is that the number of tensor components is equal to the number of dimensions raised to the power of the tensor’s rank. A picture really helps here, so have a look at the diagram below. I skipped zero-rank tensors in the diagram as they always have only one component (any number to the power zero is equal to one):
Figure 3 – The size of matrix required to describe a tensor depends on the rank and number of dimensions
I didn’t illustrate a fourth-rank tensor in the above diagram, but following the same logic a fourth-rank tensor in two-dimensional space will have 2⁴ = 16 components (arranged in a 4×4 matrix) and a fourth-rank tensor in three-dimensional space will have a whopping 3⁴ = 81 components (in a 9×9 matrix). Fortunately for us we can make some assumptions that cull the number of components in our fourth-rank tensors considerably, so they won’t be totally unmanageable, but once again – that’s a subject for later!
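The counting rule is easy enough to sketch in a couple of lines of code (the helper name is my own invention, not a standard one):

```python
def tensor_components(dimensions: int, rank: int) -> int:
    """Number of components = dimensions raised to the power of the rank."""
    return dimensions ** rank

# The cases discussed above:
print(tensor_components(2, 4))  # 16 - fourth rank in 2-D space
print(tensor_components(3, 4))  # 81 - fourth rank in 3-D space
print(tensor_components(3, 0))  # 1  - a scalar, whatever the dimensions
```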
… But It’s The Direction You Point It In That Really Counts
We’ve now established that, when written down, tensors look like matrices – but that also means we can’t tell whether a matrix is a tensor simply by looking at it, which leaves the question: “What makes a tensor a tensor?”
There is a notably tongue-in-cheek quote attributed to physicist Anthony Zee, who, on being asked a similar question by a student, “So what exactly is a Tensor?” replied:
“A tensor is something that transforms like a tensor.”
This description is annoyingly self-referencing, but it does make a valid point – transformation is what makes a tensor a tensor. Remember how I previously described a matrix as a “filing cabinet” for numbers? Well, if we use that filing cabinet to store material properties or state data (e.g. stiffness, stress, strain, etc.) at a particular location and in a particular direction within a material, we can then use tensor transformation to calculate what the material properties or stress/strain state is at that same location, but in any other direction of our choosing. This is very useful indeed! (I’ll be explaining how to do it later.)
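As a small taster of what that transformation looks like, here’s a hedged NumPy sketch of rotating a 2-D stress tensor into a new set of axes using the standard relation σ′ = Q σ Qᵀ, where Q is a rotation matrix. The numbers are made up; the proper explanation is still to come:

```python
import numpy as np

# Rotate a 2-D stress tensor through 45 degrees: sigma' = Q @ sigma @ Q.T.
# Illustrative values only - equal tension and compression, in MPa.
theta = np.radians(45.0)
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[ c, s],
              [-s, c]])

stress = np.array([[100.0,    0.0],
                   [  0.0, -100.0]])

# Viewed from axes rotated 45 degrees, this state becomes pure shear:
# zeros on the diagonal, +/-100 MPa off it.
stress_rotated = Q @ stress @ Q.T
print(np.round(stress_rotated, 6))
```

Same physical state, same location – just a different point of view, which is exactly the trick we’ll be relying on later.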
Taking a second look at our definition, I think maybe the description that tensor components “… are functions of the coordinates of a space” is back-to-front. What tensors allow us to do is change the “coordinates of space”, i.e. move our reference point, and see how the thing we are examining looks from a different point-of-view.
That Wasn’t So Bad, Was It?
I’m going to pause here. We’re not done with tensors yet; we still need to cover transformation in more detail and maybe have another look at the tensor versions of stress and strain. But hopefully our “mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space” is now slightly less of a mystery.
I’m sure you’re all familiar with the old showbiz adage insisting that you should, “Always leave them wanting more.” Unfortunately by this point I suspect you’ve already had way more than enough. Which only leaves me to hit the “submit” button and ponder which bottle of wine to open.