Into The Vortex

Aerodynamics is a strange thing. On the one hand familiar, but also mysterious. We’ve all been outside on a windy day or stuck our hand out of a moving car window, so we’re naturally acquainted with the general effects of fast-moving air, but what the air itself is actually doing remains invisible, the realm of wind tunnels and high-powered computer simulations.

Many, many years ago I was one of those slightly annoying children who asked, “Why?”, all of the time, and growing up near a golf course, it wasn’t long before I enquired of my father, “Why are golf balls lumpy?”. Pleased to be furthering his child’s education, my father confidently replied, “Son, it makes them go further”. Intrigued, and frankly somewhat suspicious (I was old enough to know that speedy things like fast cars and aeroplanes were smooth and streamlined, and also that parents were not a reliable source of information… after all, they appeared to believe in both the tooth fairy and Santa!), I paused, cocked my head to one side, furrowed my brow and launched my second-most-favourite question, “How?”.

Now, I don’t remember the exact response, but I’m 100% sure it didn’t involve the words, “Delaying turbulent boundary layer separation”, although there just may have been a mumbled mention of, “less drag”, immediately followed by dad disappearing behind the newspaper or going off to do something ‘urgent’ in the shed.

Why am I telling you this? Mostly because I want to talk about vortex generators, and they fall into the same category for pilots as ‘golf ball dimples’ do for golfers: that is to say, most are familiar with them, a good portion know what they do, but far fewer know how they actually work.

Before we plunge headlong into the details of how vortex generators work, let’s first have a look at what they are and what they do:

VGs (to save ink/pixels I’m going to call them VGs from now on) come in many different shapes and sizes, but in their most common form they are thin, usually triangular tabs attached perpendicular to a surface and at an angle to the oncoming airflow (see Fig.1). Invariably used in groups, when applied to aerofoils they are usually arranged in pairs along the span, set back from the leading edge.

Figure 1 – Typical vortex generator application

So we know what VGs look like, but what do they do? The obvious answer is, “Exactly what their name suggests”: they generate vortices. Behaving like tiny wings, each VG creates a small amount of lift perpendicular to the oncoming airflow and in the process sheds a trailing vortex downstream from its tip. This explanation is all well and good, but sadly not very enlightening, so a more practical answer is that VGs “fix aerodynamic problems”.

Separation Anxiety

You can be pretty confident that VGs were nowhere to be seen in the original designs for almost every aircraft they are now attached to. In fact, you can almost guarantee they were added later, after something unsavoury turned up during flight testing.

As far as possible, aircraft designers like the airflow to stay firmly stuck to the surface of their aeroplanes. Depending on the location, detached flow can result in a multitude of effects, from additional drag and early stall to ineffective control surfaces and stability problems. None of these traits is desirable, but unfortunately detached flows are hard to avoid. As soon as an aerodynamic body starts to narrow, such as at the rear portion of an aerofoil or fuselage, the airflow wants to separate from the surface. Gentle tapering of surfaces helps (giving familiar ‘streamlined’ shapes), but is not always practical and is ineffective at higher angles-of-attack or where surface discontinuities occur, such as at flaps or control surfaces.

Hitting a Boundary

Flow separation occurs thanks to the behaviour of the air in a thin layer immediately adjacent to the aircraft’s surface (See Fig.2). Air has some viscosity – it’s not in the same league as honey, but nonetheless it possesses a degree of ‘thickness’ or internal friction. What this means is that when air flows over a surface some molecules stick to it whilst the others rub against each other as they flow past and are slowed down. This area of friction-affected air is called the boundary layer and it starts off very shallow, but thickens as the air travels further along the surface.


Figure 2 – The boundary layer and flow separation

When air flows over a tapered surface such as the rear portion of an aerofoil there is a combined effect of viscosity and an adverse pressure gradient (the pressure is lowest over the front portion of an aerofoil, where most of the lift is produced, and then increases as the surface tapers). In this case the air immediately adjacent to the surface experiences both viscous drag and a pressure differential trying to push it back towards the lower pressure area at the nose. This can cause the airflow at the rear of the aerofoil to turn back on itself, reversing direction and acting like a wedge that forces the oncoming airflow to separate from the surface.

A Quick Fix

If an aircraft design demonstrates flow separation problems the obvious solution is to tweak the aerofoil shape or re-contour the fuselage profile to solve the problem, but if the aircraft is already at the flying prototype stage, or is a one-off design, significantly altering the outline of the aircraft will be expensive at best, and at worst completely impractical. This is where VGs come to the rescue. Because they are simply attached to the existing surface, aerodynamic problems can be fixed without the need for re-tooling or major structural changes.

Vortex generators work because sluggish air in the boundary layer is at the root of most separation problems. Correctly dimensioned VGs extend slightly above the boundary layer and create vortices at their tips that grab fast moving air in the free stream and mix it into the boundary layer. The now highly turbulent and energy-rich boundary layer is far more resistant to flow separation and so will follow more sharply tapered surfaces, better negotiate sharp discontinuities caused by deflected control surfaces, and resist aerodynamic stall to higher angles of attack (Fig.3).


Figure 3 – Re-energising the boundary layer

No Such Thing as a Free Lunch

Of course it can’t all be good news or our aircraft would be peppered with VGs. In reality you have to pay somewhere, and for VGs that penalty is drag. Whilst they avoid the large drag increases that come with flow separation, the drag generated by VGs occurs in all flight regimes, and so adds to the total parasitic drag of the aircraft whenever it is flying – even in conditions where flow separation may not actually be a problem.

To control the drag VGs create, their dimensions and positioning are critical. If VGs are well proportioned and well positioned then the bulk of the VG (around 80%) will sit inside the boundary layer and the drag penalty incurred will be modest. Make VGs too tall and unnecessary drag will result with no added benefit; make them too short and they simply won’t work.
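To get a feel for the sizes involved, the boundary layer thickness can be roughly estimated with the classic turbulent flat-plate correlation δ ≈ 0.37x/Re_x^0.2. The sketch below is purely illustrative (a textbook approximation, not a VG design method – real installations are sized from wind-tunnel or flight-test data), and the speed and position values are assumptions:

```python
# Rough estimate of turbulent boundary-layer thickness on a flat plate,
# using the classic Schlichting approximation. Illustrative only -- real
# VG sizing is done against measured data for the actual surface.
def boundary_layer_thickness(x_m, speed_ms, nu=1.46e-5):
    """delta ~ 0.37 * x / Re_x**0.2, with nu = kinematic viscosity of air."""
    re_x = speed_ms * x_m / nu          # local Reynolds number
    return 0.37 * x_m / re_x ** 0.2     # thickness in metres

# Hypothetical example: 30 cm back from the leading edge at 30 m/s (~60 kt)
delta = boundary_layer_thickness(0.30, 30.0)
print(f"Boundary layer roughly {delta * 1000:.1f} mm thick")
# A well-proportioned VG would be on this order of height, with ~80% of it
# inside the layer and the tip reaching into the faster free stream.
```

Running the numbers gives a thickness of a few millimetres, which squares with the small tab sizes seen on real wings.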

In the final analysis, if VGs are taming poor stall behaviour or a loss of control authority at high angles of attack, then a modest increase in drag is a small price to pay. Similarly, curing a fuselage flow separation problem during cruising flight is almost guaranteed to give a net drag reduction and so be well worthwhile. Provided VGs are the right size and in the right place, there is very little down-side to them. I suppose they are quite delicate, which makes them prone to damage, but that’s about it. In fact, for me at least, their biggest drawback is that whenever I see them I immediately ask myself, “Is that a clever piece of design, or just a band-aid solution to an unforeseen problem?”


To ‘V’ or not to ‘V’

That is the question… But don’t worry, unlike Shakespeare’s Hamlet I’m not having an existential crisis, just pondering one of the mysteries of aircraft configuration: Why aren’t we all flying aircraft with V-tails?


Figure 1 – V-tailed ultralights are out there, like this SV11 for example

The history of aircraft design is littered with innovations which at the time of their inception were heralded as being ‘game-changing’ or even ‘revolutionary’. However, with the definite exception of the jet engine, the vast majority have failed to live up to their promises. This shouldn’t come as a surprise. Combine over-enthusiastic engineers excitedly pursuing a novel idea, with a marketing mentality keen to make grand attention-grabbing claims and it’s easy to see where the hype comes from. But whilst you can fool people, you can’t fool nature, and many a promising idea has fallen foul of the laws of physics.

I’d argue that V-tails fall into this category. On paper they have a huge amount of promise and they turn up quite regularly on UAVs and jet-fighters. However, in the ultralight and GA world they remain something of a curiosity; but why is this?

‘V’ Good

Theoretically V-tails have a lot going for them, especially in the drag department. Firstly, a V-tail reduces both wetted and frontal area. The theory goes that the two diagonally mounted aerofoils of a V-tail can perform the same job as the three separate surfaces of a conventional empennage, but with a smaller combined area. Hey presto – smaller area, less drag. But wait, there’s more! Because a V-tail only has two fins, there is one less intersection between surfaces and one less wing tip too, so you get a bonus reduction in interference and tip drag as well. Clearly, if you are chasing speed, a V-tail is the way to go.


Figure 2 – Scale comparison of equivalent conventional and V-tails

Next on the list of V-tail benefits is control. Conventional tails can be subject to “Rudder Lock”, a phenomenon where large yaw angles, such as those occurring during a spin, generate massive aerodynamic forces on the rudder, pinning it hard over with more force than the pilot’s legs can overcome. Obviously this is an undesirable trait, and one which should be avoided if possible (a requirement for certified aircraft and certainly recommended elsewhere!). V-tail geometry limits the aerodynamic forces on the control surfaces during a spin, providing some resistance to rudder lock.

V-tails have two other potential control benefits, both stemming from the V configuration raising the tail surfaces relative to the fuselage. Firstly, the V-tail is less exposed to ground effect, meaning it won’t suffer the same loss of elevator effectiveness conventional horizontal tails experience when close to the ground, i.e. when flaring for landing or raising the nose for take-off. Secondly, a raised position places the centre-of-pressure of the control surfaces above the centre-of-gravity of the aircraft. The benefit here is greater pitch-up authority (albeit at the cost of reduced pitch-down authority) at large control surface deflections, because the drag generated by the deflected control surfaces creates a supplementary pitch-up moment in addition to the primary pitch-up due to control surface lift.


Figure 3 – V-tail pitch control


Figure 4 – V-tail yaw control

‘V’ Bad

So that’s the good points wrapped up, but what about the bad stuff? First up, there are some drawbacks to combining the rudder and elevator functions. In aircraft that have manual flight controls (i.e. pretty much all ultralights), aerodynamic forces acting on the controls get fed directly back to the pilot. For pilots used to conventional aircraft this can make for some unexpected interactions between control forces, notably when large amounts of trim are applied, or when applying large amounts of ‘rudder’ input, such as when sideslipping for a cross-wind landing. Control forces are not the only problem; there is also potential for the controls themselves to interact, such as increased drag from large rudder inputs causing a secondary pitch-up effect.

The next problem is also control related. Mechanically combining conventional stick and rudder control inputs to give differential control surface movement for rudder, and coincident movement for elevator, requires a mechanical mixer assembly. This not only adds weight but represents a complex mechanical linkage which is also a single point of failure for the control system, effectively putting the elevator and rudder control “eggs in one basket”. Trim can also be an issue. Providing a trim system on the pilot side of the mixer assembly is relatively straightforward, but removes the benefit of having a trim system which is independent of the primary controls. A separate trim system, on the other hand, will provide redundancy (required if an aircraft is to meet FAR Part 23), but is heavier and more complex to implement.
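The mixing itself is conceptually simple, even if the mechanical linkage isn't: coincident ruddervator movement gives pitch, differential movement gives yaw. A minimal sketch of the arithmetic a mixer performs (signs and units are illustrative assumptions, not any particular aircraft's rigging):

```python
def ruddervator_mix(elevator, rudder):
    """Combine pitch and yaw demands into left/right ruddervator
    deflections. Moving both surfaces together commands pitch;
    moving them opposite ways commands yaw. Arbitrary units,
    illustrative sign convention."""
    left = elevator + rudder
    right = elevator - rudder
    return left, right

# Pure pitch input: both surfaces deflect together
print(ruddervator_mix(5, 0))   # -> (5, 5)
# Pure yaw input: surfaces deflect in opposite directions
print(ruddervator_mix(0, 3))   # -> (3, -3)
# Combined input: each surface carries the sum of both demands
print(ruddervator_mix(5, 3))   # -> (8, 2)
```

The last case shows why large combined inputs are awkward: one surface has to absorb both demands at once, which is exactly where the control-force interactions described above show up.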

On the subject of weight, you might imagine that having fewer and smaller surfaces would produce a weight saving. Somewhat surprisingly, this turns out not to be the case. Whilst there is inevitably some saving from the reduced overall surface area, each V-tail fin does duty as both horizontal and vertical tail and so tends to be larger in area than any single conventional tail surface. The end result is greater aerodynamic loads, which in turn require a stronger and thus heavier structure, eroding much of the weight benefit.

The final drawback for V-tails is adverse roll. We are all familiar with adverse yaw, the tendency for the nose of an aircraft to yaw away from the direction of bank when rolling (caused mainly by a difference in drag on each wing due to aileron deflection). The usual piloting response to adverse yaw is to apply rudder to counteract the yawing moment, but with a V-tail the act of applying rudder to counteract the yaw generates a rolling moment which tries to roll the aircraft out of the turn, i.e. adverse roll.

It’s Not Wrong, It’s Just Different

There are a few aspects of V-tails that don’t fall into the realm of advantage or disadvantage; they are just differences that need to be considered. One of these is dihedral effect. A V-tail, by definition, has a lot of dihedral and this supplements the dihedral effect of the main wing. This tends to make the aircraft more laterally stable, but also makes it more prone to Dutch Roll. With a conventional tail the solution would be to increase the directional stability by increasing the vertical tail area. This isn’t an option for a V-tail, as reducing the ‘V’ angle to give more ‘vertical’ tail area also increases the dihedral effect and so doesn’t fix the problem. In fact the usual solution is a ‘Y-tail’ which adds a small fixed vertical tail surface to improve directional stability.

Finally, a claim often made is that V-tails are easier/cheaper to manufacture – as there is one less fin and one less control surface to build. This simplicity argument is certainly true for servo-controlled systems, but for manual systems it’s not so clear cut, having to be balanced against a control system which is significantly heavier, more complex and costly.

V. Ugly?

In summary, V-tails do have their place. If you have jet wash or water spray to avoid; or are fanatical about minimising drag, they may just be the way to go. V-tails make even more sense if your craft is unmanned or fly-by-wire – thereby avoiding the control feedback quirks. However, for an Ultralight, I don’t really see the point. As a comparison between the Sonex and Waiex demonstrates, there is no real performance or weight difference to be had between the two tails. In the end it really comes down to aesthetics, so if you like the look, why not? Just don’t expect miracles in the performance department.

Battle Fatigue

Last month we had a look at metal fatigue and why we are not immune to it despite the low number of hours the average homebuilt acquires. This month we’ll get onto more of a design footing and look at how aircraft designers tackle the fatigue problem.

Taking a Gamble

One of the key challenges when designing for fatigue is the probabilistic nature of fatigue itself. It simply isn’t possible to predict failure after an exact number of load cycles. The fact is there is a large variation in fatigue life – even for apparently identical parts. All fatigue design ultimately boils down to a gamble, albeit one with the odds heavily stacked in your favour.

If you take a collection of steel samples, all of the same dimensions and polished to the same surface finish, and expose them repeatedly to a loading equivalent to 75% of their ultimate strength, you’ll find they break after somewhere between 10,000 and 100,000 cycles. Do the same test at 55% of the ultimate strength and the samples will last somewhere between 250,000 cycles and infinity – that’s a fair amount of uncertainty!

Add to the mix real-world loading, which varies considerably in both magnitude and frequency; and then pity the poor engineer who has to answer the superficially simple question, “Will it break?”

Personally, my preferred response to the above question has always been, “Yes, eventually! How long would you like it to last?” which gets to the crux of the problem, especially when combined with, “…and how confident would you like me to be?”


Figure 1 – A generic S-n diagram

For a more in-depth explanation take a look at Figure 1. Known as an S-n diagram, it plots stress against load cycles for a given material. To produce the chart, a large number of identical samples are repeatedly loaded and unloaded at a variety of stress levels and the number of cycles to failure at each level is recorded (on a logarithmic scale, which allows data from ten thousand to one hundred million cycles to be shown in one chart). With enough samples a best-fit curve can be drawn to give an idea of the average life of a sample at any given stress level.

This is useful information, but from a design point of view you don’t really want to know the stress level at which 50% of your parts have already failed! Instead some clever statistical analysis is required, producing another curve of the stress level or number of cycles at which 99% of parts can be expected to survive. In addition, for some materials an endurance limit can also be determined, giving a stress level below which no fatigue failures should occur irrespective of how many load cycles are experienced.
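To see what that best-fit curve looks like numerically, S-n data is often approximated by Basquin's relation, S = a·N^b, which plots as a straight line on log-log axes. The toy fit below uses entirely made-up sample points (a real curve comes from dozens of physical test specimens, plus the statistical reduction described above):

```python
import math

# Made-up (stress as %UTS, cycles-to-failure) pairs, for illustration only.
samples = [(75, 20_000), (65, 90_000), (55, 400_000), (50, 1_500_000)]

# Basquin's relation S = a * N**b is linear in log-log space, so fit a
# straight line to (log N, log S) by ordinary least squares.
xs = [math.log10(n) for _, n in samples]
ys = [math.log10(s) for s, _ in samples]
count = len(samples)
x_bar, y_bar = sum(xs) / count, sum(ys) / count
num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
den = sum((x - x_bar) ** 2 for x in xs)
b = num / den              # slope: negative, since life grows as stress falls
log_a = y_bar - b * x_bar  # intercept

def cycles_to_failure(stress_pct):
    """Mean life predicted by the fitted curve at a given stress level."""
    return 10 ** ((math.log10(stress_pct) - log_a) / b)

print(f"Predicted mean life at 60% UTS: {cycles_to_failure(60):,.0f} cycles")
```

Note this gives the *average* life; the design curve a real engineer would use sits well below it, shifted down to the stress at which 99% of parts survive.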

Armed with this data, plus the desired reliability and predicted loading, and with additional allowances made for effects such as temperature, corrosion, surface finish and stress concentrations (among others!), a designer should finally have the information required to produce an acceptably durable part… although in a commercial environment analysis alone is not considered enough, and the final judgement invariably comes down to testing.

So How Long Will It Last?

There is an old adage in motor racing that the perfectly engineered race-car should, “Break down just as it crosses the finish line”, thus demonstrating it has enough durability to finish the race, but is carrying no more weight than the absolute minimum necessary to complete the task. Aircraft designers find themselves in a similar situation; there’s no question that aircraft have to meet their design-life requirements, but any excess strength means excess weight and a corresponding loss of performance, range or payload. This quest for a happy-medium has historically led to four different approaches to fatigue design:

Design for Infinite Life – Components are designed to be stressed below their endurance limit (sometimes called the fatigue limit), plus a margin of safety, the goal being to provide an unlimited fatigue life. In the case of materials such as aluminium, which have no clearly defined endurance limit, a limitless fatigue life is not possible, so a design life well beyond anything that could be expected in service is selected instead, effectively negating the risk of a fatigue failure.

Safe-Life Design – A finite life is deliberately included in a component’s design, after which it is required to be replaced. A suitable margin of safety is applied to the required design life of the part, to the expected loading and operating conditions, and also to account for the statistical uncertainty of fatigue properties, resulting in an acceptably small probability of failure during the part’s lifetime. Safe-life design results in ‘lifed’ parts: components required to be replaced during scheduled maintenance before reaching a specified number of hours in service.

Fail-Safe Design – Rather than attempting to avoid fatigue failures altogether, fail-safe design accepts that part failures may occur and instead focuses on making the system as a whole ‘failure tolerant’. Structures are designed with multiple redundant load paths, allowing loads to be safely transferred around a failed part without causing further damage. Of course failed parts still need to be detected and replaced, but in the meantime the aircraft will still be safe to operate, albeit with a reduced margin of safety. As an example, large aircraft skin panels are typically designed with “crack stoppers” – stiffeners directly attached to the skin, dividing it into ‘bays’. If a crack occurs in the skin it will only be able to grow as far as an adjacent stiffener, limiting the maximum crack size to a single fuselage bay. To meet the fail-safe requirement the skin in the adjacent bays is then designed to be capable of carrying the additional load incurred should a panel fail.

Damage-Tolerant Design – Extends the fail-safe design concept to include catching fatigue failures before they occur (and thus minimising the demands on the fail-safe structure!). Much like fail-safe design it acknowledges that fatigue cracks will develop, but based on a knowledge of crack growth behaviour, and an ability to reliably detect cracks using non-destructive inspection, the intent is that failing parts will be identified and replaced before they endanger the aircraft. By calculating the predicted rate of crack growth in a component, maintenance intervals can be set such that a crack will be discovered by inspection before the part’s residual strength is reduced to a dangerous level.
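The crack-growth prediction behind damage-tolerant inspection intervals is commonly modelled with the Paris law, da/dN = C(ΔK)^m, where ΔK is the stress-intensity range at the crack tip. The sketch below integrates it numerically; the material constants, geometry factor and crack lengths are made-up illustrative values, not data for any real part or alloy:

```python
import math

def cycles_to_grow(a0_m, a_crit_m, stress_range_pa,
                   C=1e-11, m=3.0, Y=1.12, step=1e-5):
    """Numerically integrate the Paris law da/dN = C * (dK)**m, where
    dK = Y * stress_range * sqrt(pi * a) in MPa*sqrt(m). All constants
    here are illustrative placeholders, not real material data."""
    a, cycles = a0_m, 0.0
    while a < a_crit_m:
        dK = Y * stress_range_pa * math.sqrt(math.pi * a) / 1e6
        da_dN = C * dK ** m        # crack growth per cycle (metres)
        cycles += step / da_dN     # cycles needed to grow by 'step'
        a += step
    return cycles

# Hypothetical case: a 1 mm (reliably detectable) crack growing to a
# 10 mm critical length under a 100 MPa stress range.
n = cycles_to_grow(0.001, 0.010, 100e6)
print(f"Roughly {n:,.0f} cycles from detectable to critical")
# An inspection interval would be set at a fraction of this figure, so
# the crack is seen at least once (usually twice) before going critical.
```

The key property the integration exposes is that growth accelerates with crack length, which is why most of a crack's life is spent while it is still small and hard to find.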

Which Method is Best?

None of the above approaches is inherently better than the others. Part of the skill of the designer is the ability to select the most applicable method for the task in hand, and all of the above approaches have their strengths and weaknesses.

Starting with the Infinite Life approach, the primary drawback is that it produces components that are heavier than strictly necessary. This is generally not a good thing for an aeroplane, but it is certainly a practical solution for engine components such as valve springs, which can see billions of load cycles in a lifetime and which don’t lend themselves to regular inspection or frequent replacement.

Safe-life design will save weight compared to the Infinite Life approach, but an accurate knowledge of both the loading and the conditions a part will experience in service is critical if premature failures are to be avoided. Safe-life has a financial impact too: “lifed” parts either need to be economically replaceable (although you wouldn’t guess it from the cost of overhauling a TBO-expired engine!), or the forced retirement of the aircraft when the hours are up has to be accepted.

Fail-safe design should similarly save some weight compared to an Infinite Life approach, although it inherently involves redundant structure and thus ‘extra’ weight by definition. Careful analysis on the part of the engineer is also required to ensure all single points of failure have been identified and eradicated from the design – not necessarily a simple task. For example, a wing with multiple spars may have adequate residual strength to accommodate a single spar failure, but if the accompanying reduction in wing stiffness leads to aeroelastic problems like flutter, the design may appear to be fail-safe when it actually isn’t.

Finally, Damage-Tolerant Design has the greatest potential for weight saving, but comes at a heavy price in the form of analysis, testing and especially ongoing non-destructive inspections. For a commercial airliner these costs are easily justified by the lifetime savings in fuel and/or corresponding increase in payload, but this is certainly not the case for your average homebuilt.

Making The Most Of It

We’ve covered the overall design approach, but achieving the best possible fatigue life on an individual component level is just as critical; so what are the tools available to a designer to really get the best from his parts?

Firstly, surface finish counts. Surface imperfections are just tiny cracks waiting to happen, so removing them by polishing or surface grinding can massively improve fatigue life. If there are initially no cracks in a part they must form at a microscopic level before they can grow. This “crack nucleation” process can take a long time and so represents a significant opportunity to extend the total fatigue life of a part. In the same vein, residual compression forces on a part’s surface inhibit crack initiation, so even if polishing is impractical the life of a part can be usefully extended through surface treatments such as shot peening or burnishing which leave residual surface compression stresses.

Cracking is bad, but once a crack has started the battle is not entirely lost. A part can have considerable life remaining, provided the crack grows slowly and the critical crack length (beyond which rapid failure will occur) is not too short. This is where material properties, in particular fracture toughness, become vital. Materials with high fracture toughness are tolerant of cracking, giving slow crack growth and a long critical crack length. Now, I’m not going to plunge into the details of fracture mechanics here, but it’s worth noting that this is not a simple case of selecting steel over aluminium, or even selecting a particular aluminium alloy. These choices do have an impact, but simply selecting a different type of heat treatment can change the fracture toughness by more than 50%. The devil, as they say, is in the detail.

Summing Up

As a final word of warning, fatigue damage is cumulative and for the most part occurs at a microscopic level where it is not readily apparent to a visual inspection. It is a brave (or foolhardy) maintainer that uses a part beyond its stated life, even if it still looks, “As good as new”.


Figure 2 – Feather edges produced by countersunk holes and large stress concentrations in the vicinity of square windows proved to be fatal flaws in the design of the de Havilland Comet.

Chronic Fatigue

I was at an air show recently and was more than a little taken aback to overhear someone expressing a view that, “Fatigue isn’t an issue for homebuilts, they just don’t fly enough hours for it to be a problem.”

It’s easy to see where this kind of opinion comes from. Certified airframes have fatigue lives equivalent to tens, or even hundreds of thousands of flying hours, whereas most homebuilts will be lucky to collect more than a couple of thousand hours in a lifetime; so they should be fine, right?

Unfortunately this belief misses the crucial point that certified aircraft are carefully designed and tested to achieve a certain design life, and even then there have still been a few occasions when the big boys got it wrong (Aero Commander wing spars and de Havilland Comet windows spring to mind). Designing for fatigue life is tricky, and just because an aircraft is only going to accumulate a few thousand hours certainly does not mean fatigue can be conveniently ignored. But before we look at how long your beloved homebuilt is going to last, let’s go back to the origins of fatigue.

An Age Old Problem

In the first half of the 19th century it was noticed that some railway carriage axles would fail unexpectedly after relatively short periods in service, despite the fact that they were operating at loads well below their designed and tested strength. By the 1850’s there was a growing appreciation in the engineering community that metal components exposed to cyclic loading displayed a tendency to weaken over time. They dubbed this phenomenon ‘fatigue’ as it was postulated that the material was somehow ‘tiring’ through use, and losing its strength. Systematic investigation followed, revealing that fatigue failures actually result from the progressive growth of initially microscopic cracks. These cracks develop gradually over repeated loading cycles until a part is so weakened that catastrophic failure occurs at well below the designed strength.

Three Steps to Failure

Fatigue failure occurs in three stages. Firstly, a crack needs to initiate; this will typically occur at a pre-existing surface defect such as a tooling mark, an area of damage, or a material defect such as a void or contamination in a casting. However, even apparently defect-free, highly polished parts will initiate cracks eventually, triggered by tiny imperfections in the material microstructure. It’s worth noting that, for a part with a good surface finish and no damage, the first 90% of the fatigue life can pass with no cracking visible to the naked eye.

Once a crack has initiated it will then go through a period of slow growth extending by a tiny amount with each load cycle. Despite being damaged a cracked part can remain serviceable in this state; provided the design loads are not exceeded and the crack is shorter than the critical length the part will not fail catastrophically. Finally, after a period of crack growth which may last months, or even years in service, the length of the crack will reach a point where the rate of growth increases exponentially, rapidly leading to final failure of the part.

This fatigue process is clearly visible when examining the failure surface of a broken part, as shown in Figure 1:

Figure 1 – A Classic Fatigue Failure

A crack has initiated from an area of damage (in this case a tool mark) and has then grown slowly over repeated loading cycles, creating a fairly smooth but subtly ‘beach-marked’ region similar in appearance to tree rings. Finally, the crack has grown large enough that the remaining material lacked the strength to carry the load, and the part has failed, creating a large rough area indicative of rapid fracture.

An Old Age Problem?

It should be clear by now that fatigue life is not actually about age, instead the primary criteria involved are the number and magnitude of the loading cycles. Low stress loading cycles are much less damaging than high stress ones and will cause failure to occur far more slowly. To give a couple of examples: Landing gear legs see large stress variations with each take-off and landing, which is much more arduous from a fatigue point of view, but the number of cycles will be low – maybe a few thousand in a plane’s entire lifetime. On the other hand an engine mount is exposed to constant vibration whenever the engine is running, the magnitude of the stress variation is low but the exposure is huge – even if you consider only the vibration directly due to the firing of the cylinders, a Rotax four stroke produces well over 500 million loading cycles for every 1000 hours it runs!
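The engine-mount figure is easy to sanity-check. In a four-stroke engine each cylinder fires once every two crankshaft revolutions, so a four-cylinder engine produces two firing pulses per revolution. The arithmetic, with an assumed round-number cruise rpm (not a quoted Rotax figure):

```python
# Sanity-check the firing-cycle count for a four-cylinder four-stroke.
cylinders = 4
rpm = 5000                       # assumed cruise rpm, illustrative
firings_per_rev = cylinders / 2  # four-stroke: each cylinder fires every 2 revs
firings_per_hour = rpm * 60 * firings_per_rev
total = firings_per_hour * 1000  # over 1000 flying hours

print(f"{total:,.0f} firing cycles per 1000 hours")  # -> 600,000,000
```

Six hundred million cycles from the firing pulses alone, which comfortably backs up the "well over 500 million" claim, and that's before counting the other vibration sources on the mount.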

Smoothly does it

If you’ve built a metal aeroplane you’ll be well aware of the mantra to “smooth edges and deburr holes”, and not without good reason. Burrs and rough edges provide a multitude of tiny ‘notches’ – sites for cracks to initiate and propagate from – which can dramatically reduce fatigue life. But notches are not just a builder’s problem. From a design point of view, holes, corners, and changes in thickness should always be treated with suspicion; after all, they are basically blunt cracks deliberately included in the design! Now, I’d challenge anyone to design an aircraft without using any holes, but the placement of these features is critical and can be the difference between a part that lasts weeks and one that lasts years.

Fatigue Fig2

Figure 2 – Fatigue Cracks frequently initiate at holes

So why do notches cause problems? Firstly, they cause stress concentrations – small localised areas of higher stress – and secondly, by definition, they are on the surface of the part and so are likely to already be in a high stress area, especially for parts loaded in bending.
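To put a rough number on “stress concentration”, the classical infinite-plate result for an elliptical notch gives a feel for how severe it can be. This is a sketch only, not a substitute for proper stress concentration charts:

```python
import math

def kt_elliptical_hole(a: float, rho: float) -> float:
    """Classical (Inglis) stress concentration factor for an elliptical
    hole in an infinite plate under remote tension: Kt = 1 + 2*sqrt(a/rho),
    where a is the notch half-width across the load path and rho the
    radius at the notch tip."""
    return 1 + 2 * math.sqrt(a / rho)

# A circular hole (tip radius equals half-width) concentrates stress 3x:
print(kt_elliptical_hole(a=2.0, rho=2.0))              # 3.0
# Sharpen the notch tip and the local stress climbs rapidly:
print(round(kt_elliptical_hole(a=2.0, rho=0.02), 1))   # 21.0
```

A sharp scratch or tool mark behaves like the second case, which is why deburring matters so much.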

As a side note, it is this same property of notches that makes corrosion such a problem. Loss of material to corrosion obviously reduces a part’s strength, but the surface damage, cracking and pitting that corrosion creates can be far more critical and have a huge impact on fatigue life.

Fatigue Fig3

Figure 3 – Corrosion not only weakens the base material but provides an opportunity for cracks to develop

Material Matters

Correct material selection is vital for good fatigue performance. For some metals, such as steel, there is a fatigue endurance limit – a stress level below which cracks won’t initiate or grow – giving a theoretically infinite fatigue life providing stress levels stay low enough. Unfortunately aluminium doesn’t display this property, and even at very low stress levels fatigue failures can still theoretically occur – albeit at massive numbers of load cycles. This doesn’t make aluminium useless, but it does mean that high frequency vibration and aluminium don’t play together nicely, and probably explains why you don’t see many aluminium engine mounts.
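The stress/life trade-off shows up in the shape of the S-N curve. As a feel for the numbers, here is a toy Basquin-style relation; the constants are invented purely for illustration and are not real material data, but they show how halving the stress amplitude buys orders of magnitude more life:

```python
# Illustrative S-N (Basquin) relation: N = (S/A) ** (1/b).
# A and b below are made-up constants, not real material properties;
# the point is the shape of the curve.
def cycles_to_failure(stress_mpa: float, A: float = 900.0, b: float = -0.12) -> float:
    return (stress_mpa / A) ** (1 / b)

for s in (300, 150, 75):
    print(f"{s} MPa -> {cycles_to_failure(s):.3g} cycles")
```

For an aluminium-like curve the line just keeps sloping downwards; for steel it flattens out at the endurance limit.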

Do we Need to Worry?

Getting back to my “Air Show Expert”, was he right? I guess time will tell. Fatigue may well be a lesser problem at our end of aviation, but we are not immune and it’s certainly not something we should simply ignore. So, with that in mind, how do designers combat the problem of fatigue? We’ll find out next time.

Riveting Stuff

Rivets, and in fact metal construction in general, get a bit of a bum deal. Wooden aircraft trigger bouts of misty-eyed nostalgia, conjuring up images of master craftsmen labouring with hand tools and the evocative smells of sawdust and doped fabric. Composite aircraft, on the other hand, with their sexy compound curves, high speeds and ‘cutting-edge’ technology, have the glamour end of the market pretty much sewn up. So where does that leave the humble metal aeroplane? Ask most people to name a riveted metal aircraft and they are likely to come up with a Cessna or an airliner; the aeroplane equivalents of a Toyota Corolla and a bus! It’s also hard to escape the lingering perception that riveted construction reached its zenith during WW2 and has been becoming increasingly irrelevant ever since.

It’s fair to say that riveted construction, much like the aforementioned Toyota, is a victim of its own success, after all it’s difficult to be exciting when you are so ubiquitous, but the very fact that riveted metal aircraft are so common speaks volumes. They may not be sexy, but they clearly have something going for them.

Now, I’m going to address some of the features of riveted joints, but before I plough into the details I will point something out. I’m not going to tell you how to buck a rivet, or even how to design a riveted joint – you can find that information easily enough elsewhere. What I really want to explore is what makes riveted construction special, and why it deserves a little more respect!

Hammering the Point Home

One thing that is not commonly realised is that riveted semi-monocoque construction is about as weight-efficient as you can get when it comes to light aircraft. Now, before I get shouted at and accused of talking nonsense, there are some caveats. First, I’m talking about GA sized aircraft. If you are willing to limit yourself to high drag and low speed ‘minimum aircraft’ like a Drifter, or even tube and fabric types, then you can certainly go lighter, albeit at a cost in performance. Secondly, I am assuming industry standard factors of safety. Composites theoretically have better specific strength and stiffness, but once you add in the additional factors of safety required to cover undetectable defects, damage, moisture absorption and temperature effects, riveted aluminium will come out ahead on weight unless the structure is almost entirely carbon fibre (and thus eye-wateringly expensive).

Where Have All The Rivets Gone?

There was a time when rivets were everywhere – the Sydney Harbour Bridge, for example, contains 6,000,000 of them – and yet these days they have pretty much vanished across all areas of engineering, with the obvious exception of aircraft. So where did they go? The simple answer is welding, which has completely superseded rivets for steel structures and the vast majority of aluminium ones. The only reason riveted joints still turn up in metal aircraft is that they are made from high strength aluminium alloys, which require heat treatment to obtain their superior properties – a process which is reversed by the heat involved in welding. This is not too much of a problem for smaller components, as they can be welded first and then heat treated afterwards, but the same cannot be said for something as bulky as a whole airframe, which won’t usually fit in a furnace!

The other main process riveting has to compete with is adhesive bonding. Adhesives have advanced massively in the last 70 years and bonding can now be successfully and reliably achieved even on notoriously difficult substrates like aluminium. However, for the time being at least, rivets have the edge in terms of reliability, cost and ease of inspection. I have yet to see a bonded metal to metal structural joint in a homebuilt aircraft, and if I did I’d probably refuse to fly in it. Not so long ago the FAA had a requirement that bonded lap joints had to contain sufficient rivets “to carry ultimate design loads without benefit of the adhesive”, which must make you wonder why you’d bother using adhesive in the first place!

Pros and Cons

When it comes to joining thin sheets of aluminium, rivets have a lot going for them. On the manufacturing side they are lightweight, cheap to produce, simple to install and easily inspected afterwards. From a mechanical point of view large numbers of small fasteners serve to distribute loading over a large area; an essential property when joining thin sheet materials. Rivets also provide a clamping force in the joint area which allows some of the load to be transferred between the riveted sheets by friction, (a property usually conservatively ignored in structural calculations). Combined with the fact that rivet shanks expand when they are bucked, completely filling their installation hole, you can pretty much guarantee a riveted joint will allow virtually no relative movement between sheets, which ensures the rivets will share the load in a predictable way.

Rivet Figure1

Figure 1 – Schematic representation of a riveted single lap joint

By way of comparison, loose fitting fasteners will allow a joint to move (once the friction due to the clamping force holding the joint together has been exceeded) transferring the load onto the shanks of the fasteners, but not necessarily sharing the load between fasteners in an entirely predictable way. This also highlights why rivets should not be combined with regular bolts in the same joint, or alternatively, why only interference fit fasteners should be used in conjunction with rivets; otherwise the rivets will carry all the load until they have deformed enough to take up the clearance in the bolt holes, at which point the bolts will belatedly start to pick up some of the load:

Rivet Figure2

Figure 2 – Mixing rivets and bolts in a joint is not a good idea

Just like any fastening system, rivets have to be used with some thought for their limitations. Their very nature means that they are good at carrying shear loads but have poor tensile capacity. At first glance it’s relatively simple to design joints to carry loads in shear, but the would-be designer needs to be aware of situations where tensile loads can turn up unexpectedly. The classic example is our old friend the unsupported single lap joint, which rotates under load, but there are others. Sheet metal aircraft are often designed such that the skin structure will not visibly buckle below the limit load – this is especially true in commercial aircraft, where buckling tends to alarm the passengers! However, between limit and ultimate loading, allowing buckling can save considerable weight whilst still meeting strength requirements; the designer just needs to be aware that a buckled sheet, such as a spar shear web, can easily place tensile loads on rivets that would normally only see shear.

Doubling Up

A single row of rivets is limited in how much load it can transfer, so for heavier loads multiple rows of rivets are often used to increase the number of fasteners sharing the load whilst avoiding going below the minimum fastener spacing. In many ways multiple rivet rows start to behave like bonded joints. When more than two rows of rivets are used, the first and last rows will bear proportionately more of the load than those in the centre due to uneven strain across the joint. This is only avoidable by stepping down the material thickness across the joint to match the strain, leading to a considerable increase in complexity and cost.
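The uneven sharing can be sketched with a simple spring model: each sheet segment between rivets is an axial spring and each rivet a shear spring. The stiffness values below are invented for illustration (a real joint needs proper analysis), but the characteristic pattern – end rows loaded hardest – falls straight out:

```python
# Minimal spring model of load sharing along a single line of n rivets
# in a lap joint. Stiffness values are illustrative, not real data.

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rivet_loads(n=5, ks=5.0, kr=1.0, P=1.0):
    # Unknowns: u_0..u_{n-1} (top sheet), v_0..v_{n-1} (bottom sheet).
    N = 2 * n
    A = [[0.0] * N for _ in range(N)]
    b = [0.0] * N
    def add(i, j, k):      # spring of stiffness k between dofs i and j
        A[i][i] += k; A[j][j] += k; A[i][j] -= k; A[j][i] -= k
    for i in range(n - 1):
        add(i, i + 1, ks)              # top sheet segments
        add(n + i, n + i + 1, ks)      # bottom sheet segments
    for i in range(n):
        add(i, n + i, kr)              # rivets (shear springs)
    b[0] = P                           # load enters top sheet at the left
    A[N - 1] = [0.0] * N               # ground the bottom sheet's far end
    A[N - 1][N - 1] = 1.0
    x = solve(A, b)
    return [kr * (x[i] - x[n + i]) for i in range(n)]

loads = rivet_loads()
print([round(f, 3) for f in loads])    # end rivets carry the most
```

Making the rivets stiffer relative to the sheets makes the distribution even more lopsided, which is exactly why simply adding more rows gives diminishing returns.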

Separation Anxiety

Riveted joints provide considerable redundancy and have some inherent safety features. For a start, a single rivet failure will never cause a catastrophic failure. In a properly designed structure, rivet size and spacing are such that adjacent rivets have the capacity to pick up the extra load should a rivet unexpectedly fail at less than the design load. If this design requirement is not met, a progressive failure could occur where the extra load from a failed rivet is passed along a joint line, triggering the rivets to fail sequentially and effectively ‘unzipping’ the structure. A similar requirement applies to cracks occurring in the sheet metal between rivets. Fatigue cracks typically initiate from existing holes, so a single crack between rivet holes is a fairly common failure mode – one which obviously must not jeopardize the whole structure.

On the subject of cracks, there is seldom much discussion about fatigue in the ultralight arena, especially compared to the huge issue of ageing GA aircraft. The relative youth of ultralight aviation means the RA-Aus fleet is for the most part quite young, but as we and our aircraft mature ‘graceful degradation’ is going to become a far greater issue, something I’m going to explore next time.


Shear Excitement

Last time we looked at bolts used in tension, but when it comes to ultralight aircraft this type of joint is firmly in the minority, particularly when it comes to heavily loaded areas such as wing and strut attachments. For these fittings you are much more likely to find a bolt or pin loaded in shear, and even more so if the wings happen to be folding or removable. Why is this? Let’s find out…

At first glance it may seem strange to use a bolt in shear rather than tension, especially considering the shear strength of most bolts is somewhere between two thirds and three quarters of their tensile strength. In fact the old AN bolt standard uses a shear strength allowable of only 60% of the tensile strength, which is reassuringly conservative but makes it clear that bolts are undoubtedly weaker in shear than tension. So why use bolts in shear? The explanation comes from the fact that we are concerned with the failure stress, and not simply the load. As you will recall from my previous articles (you did read them, didn’t you?), stress is the applied load divided by the affected area, so a larger area results in a lower stress. For a bolt in tension the critical area for calculating the stress is the cross sectional area of the bolt, or more specifically, the cross sectional area of the threaded portion of the bolt. The minor thread diameter is used in the calculation as it is smaller than the nominal diameter of the bolt grip and so gives a slightly higher stress value in the region of the thread.

BoltShearFig1

Figure 1 – A typical “Fork and Blade” double shear type joint

Now comes the clincher. When a bolt is used in shear the fittings are, wherever possible, designed to ensure the bolt is loaded in what is known as a double shear configuration (as shown in Figure 1), sometimes called a “fork and blade” arrangement. This ensures the load on the bolt is shared over two separate parts of the bolt’s cross section (as shown in Figure 2). Also, providing the correct bolt has been used for the task, the shear loaded area will be located on the bolt’s grip, safely away from the threaded section, so the full cross section of the bolt is being utilised. As a result the load is shared over a combined area of more than twice that of the same bolt in tension, so the stress is better than halved. The result of all this is that the same diameter bolt will carry at least 20% more load in double shear than in tension. The good news doesn’t end there either; using a more complex fitting and a longer bolt, triple, quadruple or even more shear planes can be achieved, permitting either progressively heavier loads or thinner and thinner bolts. Taking this to the logical extreme, a very thin bolt in multiple shear results, and what you actually end up with is familiar to all homebuilders – a piano hinge!

BoltShearFig2

Figure 2 – Exploded view of a bolt in double shear
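The arithmetic behind that comparison can be sketched quickly. The diameters below are approximate figures for a 10-32 (AN3-sized) bolt and the 125,000 psi tensile strength is a typical AN-bolt value; treat all of them as illustrative:

```python
import math

# Rough comparison of the same bolt's capacity in tension vs double
# shear, using the 60% shear/tensile strength ratio from the AN
# standard. Dimensions are approximate, for illustration only.
d_nom, d_minor = 0.190, 0.156          # inches (illustrative 10-32 thread)
ftu = 125_000                          # psi, typical AN bolt tensile strength

def area(d):
    return math.pi * d * d / 4

tension_cap = area(d_minor) * ftu                  # fails at the thread
double_shear_cap = 2 * area(d_nom) * 0.60 * ftu    # two planes, full shank

print(f"tension:      {tension_cap:,.0f} lb")
print(f"double shear: {double_shear_cap:,.0f} lb")
print(f"ratio: {double_shear_cap / tension_cap:.2f}")
```

The ratio comes out comfortably above 1.2, which lines up with the conservative “at least 20% more” figure, with margin to spare.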

Fitting In

For a bolt loaded only in shear the head of the bolt and the nut don’t see a significant load – they are really only there to stop the bolt from falling out – this means you can save some weight by using lighter bolts with skinny heads and thin nuts. In a perfect world this suggests that a shear fitting will work out lighter than a tension fitting, but sadly that’s seldom the case. The bolt may be lighter, but fitting design is application dependent and any weight saved in the nut and bolt can easily be taken up by an increase in the weight of the more complex fitting.

A Pivotal Moment

Shear fittings have some other tricks up their sleeve. They are versatile and can double up as a hinge; plus, if you are confident there will be no tension load, you can do away with bolts altogether and use a simple pin instead. Both are attractive features for fittings such as wing fold mechanisms, where rotational movement or frequent speedy removal are desirable. Rotational freedom has other benefits too – counterintuitively, even in fittings that are not obviously required to rotate. I’ve talked before about load paths and the importance of knowing where the loads go within a structure. Pinned type shear joints are extremely useful in this regard as they will transfer forces perpendicular to their axis but cannot transfer a torque (because they are free to rotate). As an example, this is incredibly useful if you want to design a wing carry-through that keeps all the bending loads in the wing spars whilst transferring the lift load to the fuselage. A pinned joint on the spar centreline allows the spar to flex under load and so only transfer the shear load to the fuselage whilst the bending load stays in the spar caps:


Figure 3 – Pinned shear joints allow rotation and so do not transfer torque

And Now The Bad Points

Surely shear joints aren’t all good news? Well no, they do have some drawbacks; for a start, they are not tolerant of loose fits. In a tension joint the fit of a bolt in a hole is typically not that critical; in fact a small amount of clearance is a positive advantage, especially if there are multiple bolt holes in a fitting, as the free-play allows a small amount of misalignment to be accommodated. For a double shear fitting this is not the case. There is no clamping force, so the components are not prevented from moving relative to one another; as a result, any free-play in the joint not only gives a sloppy connection but will also cause the parts to rattle or fret against the bolt, leading to accelerated wear. To avoid this problem close tolerance fits are desirable, but they are more labour intensive, requiring reamed holes and/or close tolerance fasteners, and they can also make installation a pain, as parts will have to be accurately aligned for assembly – not easy if you are trying to precisely position something the size and weight of a wing! Tapered pins rather than straight pins or bolts can alleviate the assembly problem, but won’t solve the problem of manufacturing tolerance. If you have multiple holes in the same part, all requiring tight fits, even tiny inaccuracies in hole spacing can render assembly impossible, or result in damage or unexpected stresses when the parts are forced together.

Staying Single

Last of all I should really mention single shear joints. For all the reasons mentioned earlier in this article, and in previous articles on joints in general, single shear joints are seldom optimal. They don’t make efficient use of the bolt material, and their inherent asymmetry leads to secondary loading and indirect load paths, both of which lead to heavier fittings. But that’s not to say you never see them. Small, lightly loaded fittings often simply don’t justify the expense and complexity required for a double shear arrangement, and larger fittings carrying mixed tension and single shear loading are sometimes a necessary evil. The bolt may be heavier, but if the fitting is lighter it can still result in a good solution. Of course if you really want to maximise single shear performance then it may be time to give up on bolts altogether; after all, for the true king of single shear connections you need look no further than the humble rivet… but that’s a subject for next time.


A Bolt From The Blue

The humble metal bolt. It’s been quietly going about its business for the last few hundred years, pretty much unheralded. Cheap to produce, incredibly strong for its weight, durable, and requiring only the simplest of hand tools to install, this workhorse of the engineering world is so ubiquitous it goes virtually unnoticed. At least until you try to remove one with a corroded thread!

Bolts may be common, but they are definitely not well understood, at least not beyond professionals and those of us with unusual ‘engineering geek’ tendencies. The prime reason for this is that for run-of-the-mill, non-aviation uses, installation is seldom critical; bolts just get done up “tight” and “she’ll be ‘right”. For more exacting applications an installation torque may be specified and a conscientious mechanic will get out a torque wrench and tighten to spec – a more demanding process no doubt, but one that requires very little additional thought. It’s only if you happen to design bolted joints that you really need to understand them, and then you quickly discover that they are not quite as simple as they appear.

Feel the Tension

Bolts are typically loaded in one of two ways: tension or shear (or a mixture of both). Shear will have to wait until the next post, as I’m going to kick off by looking at bolts loaded in tension. The critical point to grasp when thinking about tension joints is that all the parts actually behave like springs; they may not look like springs, because the extensions and compressions involved are very small, but they are springs nonetheless. When you insert a bolt and tighten the nut it compresses the material around the hole whilst simultaneously putting the shank of the bolt in tension (as shown in Figure 1). Just like springs, the material around the hole squashes slightly and the bolt shank stretches slightly. Exactly how much squashing/stretching occurs depends on the stiffness of the parts and the preload in the joint, and it is these characteristics which are critical to the joint’s performance.

Bolted Joint Fig1

Figure 1 – Forces in a Tension Joint

Bolted Joint Fig2

Figure 2 – Extension vs Force Graph

Figure 2 is a graph of Extension vs Force. Force is on the vertical axis, which may look a little strange, especially if you are familiar with force vs extension diagrams (for engineering materials) where force is usually on the horizontal axis, but bear with me. The joint material, shown in blue, is being compressed so its ‘extension’ is actually negative. Also, in this particular case, the joint is much stiffer than the bolt, so there is less compression in the joint material than extension in the bolt at any given load (i.e. the blue line is steeper than the red line). To make this information more useable we can rearrange the graph into Figure 3. By moving the compression data to the right hand side we create something called a “Joint Diagram”, which turns out to be very useful, especially if, like us, you want to see what happens when a joint is loaded in tension.

Bolted Joint Fig3

Figure 3 – Example Joint Diagram

Intuition sometimes fails us, and bolted joints are one of those instances. It’s easy to imagine that, for a joint in tension, the bolt will simply pick up any applied load – increase the load by 1000 Newtons and the bolt will experience an extra 1000 Newtons load – but unfortunately intuition is wrong, and it’s just not that simple! Going back to our spring analogy, by tightening the nut and applying a preload the material sandwiched in the joint behaves like a compressed spring and the bolt itself like a stretched spring. If you now apply an external load to this system it causes the bolt stretch to increase slightly, but it also relaxes the compressed joint material. The compressed joint material is stiffer than the bolt so the majority of the applied load is taken up by a loss of clamping force, whilst only a small increase in bolt load occurs. Figure 4 illustrates this and also highlights why high preload is a good thing; preload maximises the external load that can be applied before all the clamping force is lost and the joint separates. Most people would agree that bolt breakage constitutes joint failure, but joint separation is no less serious. Separation allows fluids to leak, and parts to move relative to each other. The latter usually redistributes the load and bends fasteners, a problem often closely followed by them snapping off!

Bolted Joint Fig4

Figure 4 – Joint Diagram for a Tension Joint

At this point I’m sure some of you are going, “Hold on a moment. He just said the joint material is stiffer than the bolt, but steel is stiffer than aluminium, so surely for aeroplanes the bolt is usually stiffer than the joint?”

Now it’s true that steel is stiffer than aluminium, but remember stiffness is a combination of the material property (its modulus of elasticity) and the volume of material affected, so a steel bolt can be less stiff than the volume of aluminium it is clamping, especially if there is a washer to spread the clamping load over a larger area of aluminium (you do always use a washer, don’t you?).
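That stiffness argument is easy to put into numbers. Treating the bolt and the clamped material as springs, an external load divides between them in proportion to stiffness; every dimension below is invented purely for illustration:

```python
# Joint-diagram arithmetic as a spring calculation: k = E * A / L for
# each "spring", and an external load P splits in proportion to
# stiffness. All dimensions are assumed, for illustration only.
E_steel, E_alu = 200e9, 70e9          # Pa
L = 0.020                             # 20 mm grip length
A_bolt = 20e-6                        # bolt cross section, m^2
A_joint = 80e-6                       # effective clamped area (washer helps!)

k_bolt = E_steel * A_bolt / L
k_joint = E_alu * A_joint / L

P = 5_000.0                           # external load, N
dF_bolt = P * k_bolt / (k_bolt + k_joint)   # extra load picked up by the bolt
dF_clamp = P - dF_bolt                      # lost clamping force

print(f"bolt picks up {dF_bolt:.0f} N, clamp load drops by {dF_clamp:.0f} N")
```

Even with a steel bolt in aluminium, the larger clamped area makes the joint the stiffer spring, so most of the applied load appears as lost clamping force rather than extra bolt load.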

Going Soft

So what happens if your joint material is ‘soft’? Assuming you can’t just use a bigger washer (and you thought those big washers in wooden structures were just there to stop the fibres crushing!), Figure 5 shows the problems with a soft (low stiffness) joint material or an excessively stiff bolt. The bolt will carry much more of any applied load. This is great from a preload point of view, as the loss of clamping force due to applied load will be much less (so you can get away with less preload); however, you will need your bolt to be beefy or it will fail long before the joint separates, and of course a beefy bolt will be even stiffer, further exacerbating the problem. Soft joints aren’t necessarily a complete disaster, but they do need to be designed for.

Bolted Joint Fig5

Figure 5 – Joint Diagram for a ‘Soft’ Joint

Chasing Perfection

By now you have probably come to the conclusion that an ideal joint is extremely stiff with a very stretchy bolt, preferably torqued to just below its proof strength. You have also probably noticed that there aren’t many rubber bolts around, so it doesn’t take much imagination to realise that chasing this ideal, with real world materials, is quite a challenge. Real bolts are quite stiff, so you can’t set the preload to just below the bolt’s strength; the bolt is always going to carry some of the applied load and you’ll need some margin of safety. In addition the preload won’t stay nicely fixed where you want it. Imagine the material being clamped is aluminium; the bolt is steel; the joint is thick; and there are significant temperature changes involved. Differential thermal expansion will have a huge impact (just ask anyone who has designed through bolts for an aluminium engine block), and achieving enough preload when cold, but not overstressing the bolts when hot, is going to be a challenge. Plus, if the bolt does get loaded beyond its yield strength it will stretch permanently, so even if the bolt doesn’t break, when the load is removed some of the clamping force will be lost; not ideal if you are using it to hold down a cylinder head.
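The thermal effect can be estimated with the same spring picture; the numbers below are assumptions chosen only to show the order of magnitude involved:

```python
# Differential thermal expansion in a steel bolt clamping aluminium.
# The aluminium expands more, so heating the joint increases bolt
# preload; the extra force depends on both spring stiffnesses in
# series. All values are assumed, for illustration only.
alpha_steel, alpha_alu = 12e-6, 23e-6   # expansion coefficients, /K
L = 0.050                               # 50 mm grip length
dT = 80.0                               # K, cold start to hot engine
k_bolt, k_joint = 2.0e8, 6.0e8          # N/m, assumed stiffnesses

mismatch = (alpha_alu - alpha_steel) * L * dT   # extra squeeze, m
dF = mismatch / (1 / k_bolt + 1 / k_joint)      # preload increase, N

print(f"preload rises by about {dF:,.0f} N")
```

Several kilonewtons of extra preload from an 80 °C temperature swing is not a rounding error, which is why through-bolt design for aluminium engines is such a delicate balancing act.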

Sit Tight

Let’s assume you’ve got around the above problems and know exactly how much preload you want; now you hit an even bigger problem: how do you actually get the desired preload? A calibrated and correctly used torque wrench, applied to a clean and within-tolerance nut and bolt, will give a preload accuracy of around ±25%. Yes, you read that correctly; if you do everything right you could still be out by a quarter of the intended value! Surface finish, coatings, contamination and lubricants can all massively affect the torque-preload relationship. Get some oil or other lubricant on a thread, for example, and you could easily break something long before reaching the specified torque; forget to include a washer and the increased friction from uneven bearing could result in inadequate preload and a joint which fails to develop its full strength. All this uncertainty inevitably leads to conservative design, and it’s a brave engineer who specifies a torque that will give a design preload above 85% of the bolt’s rated proof load.
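The scatter comes through clearly in the common short-form torque equation T = K·F·d, where K is the “nut factor” that lumps all the friction effects together. The K values below are typical textbook figures, not data for any particular fastener:

```python
# Short-form torque equation: T = K * F * d, rearranged for preload F.
# K ("nut factor") values are typical textbook figures for illustration,
# not data for any specific fastener or finish.
T = 10.0        # N*m, the specified torque
d = 0.008       # 8 mm nominal bolt diameter

for condition, K in [("dry, as-received", 0.30),
                     ("typical, plated", 0.20),
                     ("well lubricated", 0.12)]:
    F = T / (K * d)
    print(f"{condition:18s} K={K:.2f} -> preload ~{F:,.0f} N")
```

Same torque, same bolt, yet the lubricated case develops roughly two and a half times the preload of the dry one, which is exactly why an oily thread can snap a bolt well before the wrench clicks.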

Back in the Real World

The preceding discussion has all been quite theoretical, but what are the implications for your average maintainer? If someone asked me what I thought was important when bolting something together, my first answer would be, “Don’t forget the washers”. These unassuming disks of metal are truly the unsung heroes of the fastening world (I’d have subjected you to an entire article on washers if I thought anyone would read it!). My second answer would be, “If a torque is specified, go get your torque wrench and torque the nut correctly!”. Tightening by feel risks sacrificing some of the joint’s strength, and there aren’t many joints in an aeroplane where you have that luxury. Finally, the strength of a joint is about far more than just the strength of the bolt – bolts should always be replaced by bolts of the same grade, and whilst using a lower grade bolt is definitely bad, stronger certainly does not always mean ‘better’.