The Threshold Question
Defining the Question

In any discussion of baseball and PEDs, it seems obvious that the threshold question must be the degree to which a given PED does or does not actually enhance performance. If a supposed "enhancer" has in reality little or no effect, is effectively a placebo, we need to be asking hard questions about whether and why we should be upset by its use. If, to make a silly but demonstrative example, some large number of professional ballplayers somehow became convinced that eating peanuts would greatly enhance their performance, would that constitute rational grounds for widespread and intense condemnation of peanuts and peanut-eaters?

Yet, despite what would seem the overwhelming obviousness of such questions, almost no one involved in the furor over steroids, human growth hormone, and like substances seems to have done the least shred of investigation into that threshold question: do these things in fact work? The casual observer must be forgiven ignorance of the very existence of the question, in that it has become essentially axiomatic that such substances do indeed have great, almost magical effects on performance. That is because a small cadre of ignorant, moralizing fanatics long ago began making such claims, and repeated them so loudly and so often that any contrary opinions were drowned in and swept away by the tide of rhetoric. Moreover, by adopting a moral rather than a medical stance, those fanatics put PEDs on a footing with marijuana, cocaine, and the rest of the panoply of illegal drugs totally unrelated to athletic performance (except detrimentally): to use the modern term, they "demonized" PEDs.

If one seeks to make an independent evaluation of reality, the volume of the chorus makes it difficult. It is necessary to use the greatest care in selecting sources, to avoid those--on either side of the issue--who either have an axe to grind or who are simply parroting claims they have no credentials to understand or evaluate. For example, on the one side are presentations, usually including some carefully selected supporting quotations, by such places as body-building web sites, whose axes and credentials (or rather, lack of credentials) are highly visible; on the other side are various professional anti-drug scare-mongering sources that lump all illicit substances under the simplistic Reefer Madness heading of Evil.

Another complication derives from the overlooked point that there are muscles and there are muscles. Musculature improvements that might be critical to, say, an Olympic speed skater might not matter at all to a swimmer, and vice-versa. Which muscles a given PED affects is as important as how much it affects musculature. Sticking to baseball, muscle augmentation that might be a great help to a pitcher might mean little to a batter, or vice-versa. Till we can and do analyze the biology and physics of the performance requirements and relevant anatomy, we are grossly ill-equipped to pass judgments.

Answering the Question

Some Medical Background

There is a tediously thorough examination of the medical effects of various PEDs elsewhere on this site. From it, I here abstract the bit critically relevant to the business of this page:
In the linked page appear a good few citations from the scientific literature that amply support that statement; for brevity, I will not repeat them here. But keep this fact clearly in mind: it is important.

Looking For Footprints

Bear Hunter #1

If one claims that there is a huge bear prowling the winter woods, one is under some obligation to point at pawprints somewhere in the snow if one is to have any credibility at all. Because the focus in recent years has been chiefly on batting, let's start by examining those data--actual MLB statistics--to see if there are indeed any visible footprints.

To search for possible effects from PED use, we first need to understand what PEDs might or might not do for players. No one has ever claimed that any PED improves visual acuity or reflex response speed; all that PEDs can possibly do is increase muscularity. In baseball terms, that means power--the distance balls are hit. If PEDs have a discernible effect in baseball, then that effect must be on power, and only on power.

To properly measure power levels in baseball, we need something that is independent of other performance data. We cannot, for example, simply count home runs--for a batter, a league, or all of major-league baseball--because home-run figures can change substantially with no change in power. To understand that, realize that power determines how far a ball will go when struck well; for a given level of power, with all other factors constant, a certain proportion of all hits will be home runs. Still keeping all else fixed, more power means more home runs, less means fewer. But suppose all else is not constant. Suppose, for example, that the strike zone as called by umpires were to change materially one way or the other over time (which has actually happened, as with the rapid and substantial 2001 expansion); clearly, the number of hits gotten would also change materially. So, even with no change in actual power, batters would get materially more or fewer home runs as a consequence.

Moving from a crude count to a rate measure is no improvement. If hits were to go up for reasons unrelated to power--as, for example, by strike-zone size changes--so would the rate of home runs as measured by home runs per plate appearance or home runs per at-bat. (So also, we must remember, would total scoring.)

To successfully measure power per se, what we need to do is relate well-powered balls to hit totals. We could use the ratio of home runs to hits, and that works quite well. But not all "well-powered" balls necessarily leave the yard: doubles and triples are also, to some extent, indicators of power. Thus, the best measure of sheer power is Total Bases per Hit, a figure aptly known as the Power Factor. (You can verify that the PF tracks the same thing as home runs per hit by popping up this graph of the two.)
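To make the definition concrete, here is a minimal Python sketch of the Power Factor computation; the season line used is made up purely for illustration, not taken from any real player:

```python
def total_bases(singles, doubles, triples, homers):
    """Total bases: 1 per single, 2 per double, 3 per triple, 4 per home run."""
    return singles + 2 * doubles + 3 * triples + 4 * homers

def power_factor(hits, doubles, triples, homers):
    """Power Factor (PF) = total bases per hit."""
    singles = hits - doubles - triples - homers
    return total_bases(singles, doubles, triples, homers) / hits

# Illustrative (made-up) season line: 150 hits, of which 30 doubles, 3 triples, 25 homers.
print(round(power_factor(150, 30, 3, 25), 2))  # -> 1.74
```

Because both the numerator and the denominator are built from hits alone, the measure is insensitive to how many hits were gotten, which is exactly the property the argument above requires.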
The PF, when looked at in historical context, is quite illuminating. Take a leisurely look at the graph below (in which the slightly tilted red lines are the smoothed averages for the years they span): The annotations make the graph largely self-explanatory, but here are a few notes anyway, reading left to right. (I have put a smaller duplicate of the graph below the notes, so that you needn't keep scrolling back up if the big one has rolled up off your screen as you read.)
The purpose of so full a graph, and the notes, is to demonstrate that the Power Factor indeed measures pretty well just what it purports to: the force with which the average ballplayer hits the ball when he hits it within the lines.

If we pull back and look at the graph as a whole, two things quickly become obvious: one, for most of the century there is a clear and fairly steady upward trend to power; and two, at certain points there are sudden, discontinuous jumps. (We ignore the expected dips and jumps that mark the start and end of WW I and WW II.) The discontinuities separate readily recognized distinct eras in the game. The slightly upsloped red lines represent the long-term averages of the years they span, smoothing out the minor year-to-year zigs and zags. Because the discontinuities are sudden, "overnight" (well, over-winter) jumps, and in all cases align exactly with a known physical change in the baseball, we must wonder what the graph would look like if it weren't for those artificial quantum jumps. Well, we can easily find out.

First, though, let's take a moment to look more closely at those sudden jumps, here attributed to changes in the baseball itself, because they are important to our understanding. There will always be skeptics who deny, with much handwaving and little data, anything they choose not to believe. But, regrettably for their cause in this instance, there are definite, hard scientific data to prove the point. While a precis of the studies appears below, for a much richer elaboration on just what was done, how it was done, and what the results were and signify, visit the page here wholly dedicated to the science of the changing baseball.

First, in 2000, scientists at the University of Rhode Island physically examined baseballs from several widely separated seasons. Their conclusions?
That last is noteworthy, because MLB's defenders routinely claim that balls from different years cannot meaningfully be compared owing to "aging effects". But, as we see, competent scientists sharply disagree. The entire article, linked above, richly repays the brief reading time required. (It also shows that the yarn in the windings is out of spec as well.) And that is scarcely the only scientific examination of the physical ball itself to reach the same conclusions. A CT scan of 1998 baseballs done by Pennsylvania State University in conjunction with Universal Medical Systems also found, um, interesting things about them:
The examination also looked at other baseballs from 1998, so McGwire was not getting personal favors from MLB. The scientists doing those examinations were constrained by what baseballs from which years were actually available to them for inspection. But the PF, as graphed above, allows us to see the exact years in which the changes were made. It would be fascinating to reconstruct the graph above by, in effect, "slicing out" the gains from changes in the ball; we will do just that in a moment, taking care that the splicing not reek of subjectivity.

And there is nothing subjective about the graph's interpretation: on several occasions in baseball history, including one within the last 15 years, the ball has been expressly and significantly juiced. Whether the juicings were by deliberate calculation and directive (almost certainly the case in 1920), or just the results of occasional changes in the manufacturing process (the obvious cause in 1977 and the likely cause in 1993), is immaterial; that there were such changes is undeniable in the face of the evidence. It is absolutely essential that any analysis of batting performance take explicit cognizance of that fact. Batting stats from the period 1977 - 1992 and the period 1993 - 2007 are simply incommensurable: comparisons can only be made within those eras, not across them.
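Mechanically, the splice is simple; here is a minimal Python sketch of the idea. The series and the jump year below are toy numbers, not the actual league figures, and using the prior season as the baseline for each jump is a simplification of the era-average (red trend line) approach reflected in the graphs here:

```python
def splice_out_jumps(pf_by_year, jump_years):
    """Remove between-season discontinuities from a year -> PF series.

    For each year in jump_years, the over-winter jump is estimated as the
    difference from the previous season and subtracted from that year and
    every later year, leaving only within-era movement.
    """
    years = sorted(pf_by_year)
    adjusted = {}
    offset = 0.0
    for i, yr in enumerate(years):
        if yr in jump_years and i > 0:
            offset += pf_by_year[yr] - pf_by_year[years[i - 1]]
        adjusted[yr] = round(pf_by_year[yr] - offset, 2)
    return adjusted

# Toy example: a 0.15 jump over the 1992/1993 winter gets sliced out.
series = {1991: 1.55, 1992: 1.56, 1993: 1.71, 1994: 1.72}
print(splice_out_jumps(series, {1993}))
# -> {1991: 1.55, 1992: 1.56, 1993: 1.56, 1994: 1.57}
```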
Now, to avoid those incommensurability problems, here's the same graph with all the discontinuities (except the 1910 cork-core jump, which is too small to bother about) sliced out, and the data spliced together to make a true showing of actual power changes, independent of changes in the ball:
All of a sudden, we see that baseball has really had, so far as power goes, three major eras. Naturally, within each there are jigs, both up and down, from year to year, but they are (saving perhaps the 1986 - 1987 bump that no one seems to understand) relatively small jigs on the overall scale of the graph. The tilted red lines are the intra-era average movements in PF.
If we look at those intra-era average movements, one thing should jump forcefully out at anyone: from 1962 on, true power has been declining. (Here, "true power" means simply power exclusive of artificial boosts from isolated, big-jump changes in the baseball itself.) The rate of decline from 1962 through 1981 was dramatic. From 1983 through the present, it has been nearly flat (it looks like a slight downtrend because the bizarre 1986/1987 spike "front-loads" the average, but I didn't want to arbitrarily smooth out those years). It thus becomes quite impossible to believe in any theory that speaks of "boosting" power in modern times, simply because there has been no such boost.

Here's a blow-up graph of the so-called "steroid era", starting at 1982 (because 1981 was strike-shortened and thus not a good data point). Understand that in this graph nothing has been "spliced out" save the single ball juicing of 1993/1994 (whether 1993 was or was not post-juicing is still debated); the numbers on the left would change were the earlier splicings and wartime smoothings dropped, but the shape and scale of the graph would be unchanged.
(OK, OK, there was 1986/1987--does anyone think that relevant?)

If the "Power Factor" is still too exotic for you, you can look at the straightforward runs-scored-per-game numbers. Here is a graph--quite unadjusted--of the Rawlings-ball era--the past 30 years--with both Runs per Game and Power Factor plotted on comparable scales:

The two measures, PF and R/G, obviously track very closely (in 2001, R/G took a small downtick vs PF, else they'd match almost exactly). The "steroids era" is supposed to have begun about 20 years ago, and to have picked up steam thereafter. Oh? Where's the beef? Who in his right mind can look at these graphs and say "look, there, there's the evidence"? Anyone who wants to try to build a case that the 1993/1994 jump was something other than the ball juicing (most observers feel that the newer juiced balls weren't fully in use until 1994) is going to have to explain a zero effect before and a zero effect after: that is, every single "juicer" in MLB would have to have started at the same time, the winter of 1992/1993, with none before or since. Give us a break, please; it is to make a cat laugh.
The end.

Bear Hunter #2

The Method:

Let's come at this in a totally different way, using physics. As the now-classic book The Physics of Baseball by Robert K. Adair (Sterling Professor Emeritus of Physics, & Senior Research Scientist in Physics, Yale University) shows--as do many other sources on the physics of batting--the speed of a fences-bound baseball as it leaves the bat is the sole determinant (other than wind effects) of the distance it will travel for a given angle of departure (the optimum being about 35° upward). Moreover, that speed can be calculated from a fairly straightforward equation derived from basic physics as tempered by data from experimental results, laboratory and field (yes, there are scientists who do research on batting). From another paper, Possible effects of steroids on home run production in baseball (R. G. Tobin), we find that "for balls hit near the bat's 'sweet spot'--which is the case for most home runs--the results are well approximated by a simple one-dimensional, partially elastic collision, with the bat treated as a rigid body," so that the speed of the outbound batted ball, v, is given by the equation shown here, where M is the mass of the bat (assumed to be 34 ounces); m is the mass of the ball (assumed to be 5 ounces); CR is the coefficient of restitution, set at the widely accepted value of 0.50; vpitch is the incoming pitch speed (assumed to be 90 mph); and vbat is the bat speed of the swing.

The pea under the shell in that equation is vbat, which is the one thing affected by the strength of the batter. A reasonable base value for a typical major-league ballplayer is about 67 mph (more exactly, 30 meters/second), and that is what we will use as our baseline. Solving the equation is straightforward if tedious arithmetic. The result is a ball speed leaving the bat of 114.6377 mph. As a rough guide, if we further assume the ball has been hit at the optimum flight angle and is not materially wind-affected, then--by eyeballing a graph in Professor Adair's book--the ball will travel about 440 feet. The accuracy of the exact distance is not material here, so the various assumptions need only be individually reasonable and give us a plausible result. The reason exactness is not crucial is that what we really want to look at is small variations in the calculated distance as we assume variations in the strength of the batter. Whether the baseline is 430 feet or 440 feet is immaterial, because the increments we calculate will be sufficiently accurate. (Think of your car's speedometer: when it reads 65, you might be doing 61 or 68, but if it goes from a reading of 65 to a reading of 70, you can be sure that you have sped up by just about exactly 5 mph.)

We can calculate bat speed from another straightforward equation, as found in Professor Nathan's paper:

The symbols here do not mean the same things as in the first equation. I apologize for the confusion, but it is very difficult to reproduce mathematical equations on a web page, so I lifted these directly, as images, from their source papers, in which two different authors are talking about two different issues. Here, v is what was vbat in the first equation, bat speed, and the M here is the mass (weight) of the batter himself, while the m here is the mass of the bat. The value ε (or rather its square) is an empirical constant whose value derives from laboratory and field investigations. That value is given by Adair as 0.012345679; Nathan, in a paper titled Swing Speed vs. Bat and Batter Mass, suggests a value of 0.007751937. I have used a simple average of those two values, which happens to come out to a nice, neat .01; the effects on the results of differences within this range of values are not great (see Nathan's paper).
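Since those equations appear above only as images, here is a minimal numerical sketch of the first of them in its standard textbook form (a rigid-bat, one-dimensional, partially elastic collision), using the assumptions stated in the text. It lands within about one mph of the 114.6 mph figure quoted above; the small difference presumably comes from rounding in the assumed ball weight and bat speed:

```python
def batted_ball_speed(v_bat, v_pitch=90.0, bat_oz=34.0, ball_oz=5.0, cor=0.50):
    """Outgoing ball speed (mph) for a 1-D partially elastic collision with a rigid bat.

    v_bat and v_pitch are in mph; bat and ball weights are in ounces; cor is the
    coefficient of restitution. This is the standard textbook form, not a copy of
    the image equations in the cited papers.
    """
    total = bat_oz + ball_oz
    return ((1 + cor) * bat_oz / total) * v_bat + ((cor * bat_oz - ball_oz) / total) * v_pitch

FEET_PER_MPH = 5.0  # rough slope eyeballed from Adair's distance-vs-speed chart

print(round(batted_ball_speed(67.0), 1))  # -> about 115 mph, i.e. roughly a 440-foot drive
```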
We proceed by calculating the value of the right-hand expression omitting the constant k; we then increase M by some percentage and re-calculate; then we take the ratio of the second (increased-weight) result to the first (baseline) and multiply it against the original baseline bat speed that corresponded to the assumed baseline player weight. We thus get the increment of bat speed corresponding to the increment of muscle mass. So far, so good, and others have gone the same route. We can further use Professor Adair's chart of distance travelled versus initial ball speed, which shows us roughly 5 feet gained for every extra mph of speed. Here, then, are the results, on calculated ball-drive distance, of adding functional (note that word) muscle mass in increments of 1% of body weight, up to 10%:
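The table itself came from the equations above; as a stand-in, here is a sketch of the procedure just described. Because the bat-speed-versus-batter-mass expression appears above only as an image, the sketch assumes, purely for illustration, that bat speed grows roughly as the cube root of body mass; that assumption is mine, not Adair's or Nathan's, though it yields gains of the same general size as the figures used later on this page (a couple of percent of added functional mass buying a few feet of carry):

```python
def batted_ball_speed(v_bat, v_pitch=90.0, bat_oz=34.0, ball_oz=5.0, cor=0.50):
    total = bat_oz + ball_oz
    return ((1 + cor) * bat_oz / total) * v_bat + ((cor * bat_oz - ball_oz) / total) * v_pitch

FEET_PER_MPH = 5.0       # rough slope from Adair's distance-vs-speed chart
BASE_BAT_SPEED = 67.0    # mph, the baseline assumed in the text

def bat_speed_after_gain(pct_gain, exponent=1.0 / 3.0):
    """Stand-in scaling: bat speed grows as (body mass) ** exponent.

    The exponent is an illustrative assumption, NOT the formula from the
    Adair or Nathan papers (those appear above only as images).
    """
    return BASE_BAT_SPEED * (1.0 + pct_gain / 100.0) ** exponent

baseline = batted_ball_speed(BASE_BAT_SPEED)
for pct in range(1, 11):
    gain_ft = (batted_ball_speed(bat_speed_after_gain(pct)) - baseline) * FEET_PER_MPH
    print(f"{pct:2d}% functional muscle mass -> about {gain_ft:4.1f} extra feet")
```

With this stand-in, 2% of added functional mass buys roughly 3 feet and a full 10% buys roughly 14 feet, which is the order of magnitude the discussion below depends on.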
The Trap:

Now we come to the crux. For a 200-pound man, 1% of body weight is 2 pounds, and 10% is 20 pounds. So can we say that if a player goes from 200 pounds to 220 pounds, he can then hit a ball more than 10 feet farther? No. Here's the critical point: in doing analyses like this, we have to be very, very careful not to make the elementary mistake that more than one would-be analyst has made by simplistically plugging numbers into these sorts of equations. That mistake is to assume that all added muscle mass is being used to further power a struck baseball. It's wrong. Adair makes a crucial statement that some have failed to consider:
That is, the approximation deals with situations like that illustrated here, where the two men being compared with the equations are assumed to be perfectly proportional to one another, just different in size (and hence mass). If we want to compare the probable long-ball power of two batters, we assume that both, as trained major-league ballplayers, have identically shaped bodies. We treat them, in simplified calculation, as if one were just a slightly shrunken (or, as the case may be, expanded) copy of the other, with just enough "air let out" (or pumped in) that the weights correspond. We know that that proportionality assumption is not, in general, correct--what bizarre power numbers would one get trying to scale up from 160-pound Joe Morgan to 204-pound (at the least) John Kruk? But however wrong it may occasionally be when looking at different men, it is absolutely, positively guaranteed to be wrong when we are dealing with two different versions of the same man, one before and one after adding muscle; we can never assume--because it will never be true--that the two versions will be exactly proportional (there is no growth in height, for example). Above all, we cannot assume that the significant (lower-body) muscle mass increases in proportion to overall muscle-mass gain; and if steroids assisted the muscle growth, the differential will in fact be drastic.

Steroidal Effects on Musculature:

This point--made near the top of this page--cannot be hammered hard enough: steroids have a markedly greater influence on upper-body musculature than on lower-body musculature. I will not here repeat the many probative citations available, because they already appear on the medical-effects page of this site. But it is sheer, indubitable fact. That being so, it is wrong to assume that some x pounds of added total-body musculature represents the extra muscle available to power a baseball. All that counts, all that is functional for this particular, exacting task, is lower-body muscle. Let me say it again for emphasis: Batting power is all about lower-body strength. Bulging biceps and triceps may wow the baseball Annies--and perhaps scandal-sniffing reporters--but they mean essentially nothing to long-ball hitting. If we just blindly plug total-body muscle-weight gains (especially if we think them steroid-augmented) into the bat-speed formula, we are breaching one of its inherent assumptions, bodily proportionality, and hence we will assuredly get meaningless and thus misleading results.
Realistic Results:

So, when we consider whether a ballplayer who has added weight in the form of muscle has added to his ability to power a ball, we can really only count, in these equations, the amount of muscle showing up in lower-body strength. While I suppose it's anatomically impossible, a batter who added, say, 5 pounds of muscle to his lower body and zero to his upper body would be just as power-enhanced as a batter who added 20 pounds of total muscle of which 5 was lower body and 15 upper body. Thus, if we want to get a handle on potential steroid gains, we have to make some assumption about just what the growth-assisting differential ratio actually is. That, regrettably, is hard to do: such phrases as "more than", "differentially", "greater", "most marked", and the like are awfully imprecise. Let's try looking at two cases: a 4:1 ratio and a 3:1 ratio of upper-body to lower-body development. We will assume that the player in question has added a full 10% of body weight as sheer muscle--that's a 200-pound man adding 20 pounds of pure muscle. In our first case, that means he has presumably added 4 of those pounds to his lower body (which comports well with the stereotyped "triangle" body of broad shoulders with huge biceps, triceps, delts, and so on). That is an effective addition, for the batting-distance equations, of 2%. From the tabled calculation results above, that would work out, roughly, to adding between 2 and 3 feet to his optimally batted ball distance (circa 2.5 feet, or 30 inches). If we assume instead a 3:1 upper/lower ratio, we have 5 extra pounds, a 2.5% increase, and the added distance becomes about 45 inches.

None of those numbers are anything like exact, and they are not meant to be. What they are meant to be is a demonstration that even a man who adds hugely (and 20 pounds of sheer muscle is not small potatoes) to his musculature is not suddenly going to go from gap power to moon shots. A plausible estimate is that he can add from 2 to at most 4 feet to the average distance of his most solidly hit drives. The number of balls that a given batter hits in a season that are, say, a yard or less short of just going out is impossible for anyone other than Stats, Inc. to know with any precision. One very, very rough indicator comes from their 1995 Baseball Scoreboard, in which they note that the number of "home-run-saving catches" for the prior season was 64 across all of MLB. Because many parks have fences too high to allow such catches, we can arbitrarily double that number and get 128 just-over-the-fence balls, which is roughly 4 marginal balls a season per club. Heck, double that again to allow for catchable over-fence balls not gotten to. That's still around one a year per man. I grant at once that that is very far from any kind of exact number, but it does suggest--strongly, in my opinion, but you judge--that few men hit many balls a year for which an extra 30 or even 40 inches is going to make the difference between in and out of the park. So Bear Hunter #2 hasn't exactly proven anything, but he has sure as shootin' made a good circumstantial case for "no bars in them thar woods".
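The arithmetic of those two estimates is easy to check; here is a small sketch using the figures quoted above (the club and regular counts are round assumptions of mine, chosen to match the "roughly 4 per club" and "one a year per man" figures in the text):

```python
BODY_LBS, MUSCLE_GAIN_LBS = 200, 20              # the 10%-of-body-weight case discussed above

for upper, lower in [(4, 1), (3, 1)]:            # assumed upper:lower split of the gain
    lower_lbs = MUSCLE_GAIN_LBS * lower / (upper + lower)
    print(f"{upper}:{lower} split -> {lower_lbs:.0f} lb of lower-body muscle "
          f"= {100 * lower_lbs / BODY_LBS:.1f}% of body weight")

# Marginal home runs: Stats, Inc. counted 64 HR-saving catches league-wide that season;
# the two doublings below are the deliberate overestimates described in the text.
marginal_balls = 64 * 2 * 2
clubs, regulars_per_club = 28, 9                 # round assumptions for the era
print(round(marginal_balls / clubs, 1), "marginal balls per club,",
      round(marginal_balls / (clubs * regulars_per_club), 1), "per regular per season")
```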
Other Bear Hunters

I have sought here to present the essential information in a way simple and clear enough that even a non-expert in statistical analysis must realize that there is no denying it. I have focussed on a graphical presentation, because it is easier to literally see than to visualize from numbers, and equations are--for most people--even more discouraging. But for those who would like to see some examinations with considerably more mathematical rigor, each using a different fundamental approach, let me point you to these (all of which reach pretty much the same conclusions as the analysis here--the snow in the woods is virgin snow):
I could, did I have the patience, roll out yet more quotations from suitably qualified experts, but to anyone not more or less insanely dedicated to believing that steroids have had a perceptible--much less dramatic--effect on baseball numbers, that should suffice; for those who are so dedicated, nothing will ever suffice.
Some Further Considerations

Pitching

For most of the history of the PED furor, the focus in baseball has been on batters, and particularly on home-run hitters. The grotesque injustices done to--and still being done to--Barry Bonds are all too well known. Only very recently has mention of pitching been anything more than a whisper, and even that talk was not loud till the explosive effect of the Mitchell Report and its pages on Roger Clemens.

For pitching, the musculature argument that works against batters--steroids develop almost entirely upper-body strength--would seem to suggest that pitchers have much to gain from using steroids. There are two facts, though, that need to be considered. The first is that while one normally thinks of pitching in connection with arms, it is by no means all arm: ask any coach who has thrown BP what tires first, and he'll invariably tell you "the legs". Lower-body strength has a great deal to do with pitching. So while steroids could be expected to improve an important part of pitching, upper-body strength, they would not be augmenting 100%, or anything close to it, of what makes for power pitching.
The second point, and it is indubitable fact, is what we already saw above: the actual, real-world statistical records in the so-called "steroid era" simply do not admit of any sort of special, artificial influence. The best a steroids-corrupted-baseball advocate can do is to try the "it helped batters and pitchers equally, so the stats are a wash" approach. That has two fatal defects. First, even were it so, it still utterly wipes out the argument that "steroids have tainted records". One so arguing is in the position of saying "Uh, I'm right, so that makes me wrong." Second, it is wildly improbable: supposed improvements to batting skills, which are by their very nature not going to be much affected by steroids, somehow curiously exactly balance off supposed improvements to pitching skills. Deary me. In an extensive article in the April 30, 2006 Washington Post titled "Do Steroids Give A Shot in the Arm? Benefits for Pitchers Are Questionable", Amy Shipley includes comments from numerous expert sources, from Dr. Frank Jobe to Dr. Mike Marshall, who uniformly feel that steroids do not help pitchers to any material extent. Nor is there anything to make medical personnel feel that steroids have benefits other than sheer muscularity:
That steroids don't help pitchers any more than they do batters is an idea borne out by at least one study, "More Juice, Less Punch", Cole & Stigler, which analyzed the ERAs of 23 pitchers expressly identified by the Mitchell document as steroid users, and found that:
More virgin snow.

Augmented Playing Time

Though this really belongs on the Medical Effects of PEDs page here, I should note that some partisans, confronted with the arguments above and seeking some validation of their cherished beliefs, argue that even if PEDs don't boost power rates, they distort power counts (that is, home-run totals, seasonal and career) by helping players to heal faster from injuries, and thus to get in more playing time than they could unaided. Consideration of nothing else but the numerical fact that even the best home-run hitters produce an average of one home run every three games ought to discourage this belief, but a closer examination, as presented here on a separate page, PEDs as Healing Agents, more thoroughly dispels this folly. In particular, it shows that average playing time for regulars has decreased through the so-called "steroids era", the exact reverse of what the "more playing time" argument postulates.

The Barry Bonds Case

Case Analysis

What this man has endured justifies, to me, an entire separate section on this page. We have seen, with data that only the most mindless fanatic can deny, that the overall results for major-league baseball just do not admit of any external, artificial effect except periodic juicings of the ball itself, in the "steroid era" or at any time. We need, though, to see if perhaps--as was argued just above about pitching--PED use has, in the cases of a select few older men, allowed "false" cumulative, career results. I suggest that an ideal test case would be the man the BBWAA loves to hate, Barry Bonds. First the facts, then the analysis.

Here is a graph of Bonds' Power Factor over his career. The graph is divided into time segments representing the three different ballparks he played in (ballparks have substantial effects on all baseball stats). Now let's see what we're looking at. In Pittsburgh, as a young player, he had what were already quite remarkable stats, a PF over 1.8; in 1992, his last Pittsburgh year, it went up to a stunning 2+ figure, though that was, in fact, only a mild uptick from his performance till then. Arriving in San Francisco, he continued at about his established pace. The single notable year was 1999, when he again had an uptick, this time a somewhat larger one. After the move to the new park-of-many-names (originally PacBell), his PF generally--just as one might expect in a player in his late 30s--began a zig-zag but obvious decline. The single exception, of course, was the famous one-shot spike in 2001, when he shattered the home-run record. The extent to which that year was a freak, even for Bonds, is glaringly obvious from the graph above.

So where is the "strange" behavior for whose explanation we are supposed to turn to PEDs? The detractors have only two places to pick from: the step up in 1999 and the spike in 2001. Everything else is (as such things go) so smooth, it's practically a model. I suggest we look first at the spike, as it's most easily dealt with. First off, it is clearly just that, a spike: draw a straight line from his 1999 datum to his 2007 datum and it is obvious that every other year in that span is just a normal fluctuation over or under that line (actually, one would expect another little uptick in 2008, should Bonds play then). Is such a spike evidence of anything? Well, here's another player's power spike; I have chosen the X and Y scales to be comparable to the Bonds chart above. Pretty suspicious, hm?
I guess Roger Maris started taking steroids in 1960; that bizarre spike in 1961 proves it, doesn't it? Then, when everyone got suspicious, he laid off, and look how his results plummeted. I suppose that is just my sarky little way of saying that children shouldn't try to play with adults' toys, and if one does not have some grasp of statistical analysis, one shouldn't try to prove points with it.
In any career of sufficient length, it is highly likely that there will be an "inexplicable" career year, as well as an equally "inexplicable" off year. Stuff happens. That's just how it works. If a player has the innate abilities, then in some season it can all just come together--there is so much luck in baseball results--and you get a "career year". Don't forget that 1961 also brought us career-.271-hitter Norm Cash's .361 average. More 1960 steroids? Pfui.

Yet another datum, as if more were needed, is what happened to Bonds when his knee went bad. His power fell off substantially, because his swings were not good, which was because his lower body could not generate the needed torque. His highly muscled upper body was totally unaffected, yet there was the fall-off.

As a sidebar thought-experiment, let's consider what we would get for career numbers if we were to "saw off" that Bonds spike, say to a 2.30 PF. He had 156 hits that year, so that would mean about 359 total bases, versus the 411 he actually had, a loss of 52 total bases. The worst case is that those lost bases all come off home runs--meaning (we're scarcely being exact here, just playing) that we convert home runs to doubles, each conversion costing two total bases. That means we would subtract 26 home runs from his 73 (and jack his doubles up by that many), leaving 47 homers in 2001--perfectly in line with the 49 the year before and the 46 the year after, which gives us some confidence. If that had been the case, Bonds' career total right now, even after the lost time from the knee injuries, would be 736--still well past Ruth and well within striking distance (19) of Aaron's 755 (and, without the knee, probably already past it anyway). The point is that the not-at-all-unlikely spike in Bonds' power in 2001 got him the record a little earlier--but it didn't get him the record, period. His career did.
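The arithmetic of that thought-experiment is simple enough to verify in a few lines, using only the totals quoted above:

```python
hits_2001, actual_tb_2001, actual_hr_2001 = 156, 411, 73
career_hr_at_writing = 762                     # Bonds' career total when this was written

shaved_tb = round(hits_2001 * 2.30)            # PF capped at 2.30 -> about 359 total bases
lost_tb = actual_tb_2001 - shaved_tb           # about 52 bases to give back
hr_to_doubles = lost_tb // 2                   # each HR-to-double conversion costs 2 bases

print(actual_hr_2001 - hr_to_doubles, "home runs in the 'shaved' 2001")          # -> 47
print(career_hr_at_writing - hr_to_doubles, "career home runs,",
      755 - (career_hr_at_writing - hr_to_doubles), "short of Aaron's 755")      # -> 736, 19 short
```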
That deals with the spike. More credible as grounds for consideration is Bonds' jump in PF from 1998 to 1999, a 17% increase. That year, 1999, is claimed by Bonds' attackers as his first post-steroids season; do the data justify the suspicion? To evaluate the claims, we need to consider three things:
Let's take a closer look at that last point, which may well be the key. All modern ballplayers work out regularly, in season and out. But Bonds, who had long relied chiefly on his innate abilities and thus on ordinary training, came to the conclusion after the 1998 season that if he were to prolong his career, a much more strenuous regimen would be necessary. There is no doubt about that well-known decision: it is the very basis of the claims that that was when he started using steroids. But while whether he folded steroids into his augmented regimen is open to question, the augmentation of the regimen itself is not: it is commonplace knowledge. And "augmentation" is much too mild a term. In John Bloom's book Barry Bonds: A Biography, we find this:
Does all this prove that Bonds did not take steroids? No. But in evaluating the matter, we need to remember that it is not a binary yes/no decision. There are three quite distinct questions we need to be asking:
We have already seen that it is highly unlikely that, even if Bonds did take steroids, they had any material effect on his performance. The massive increase in the rigor of his training regimen appears by itself quite adequate to account for the moderate post-1998 boost in his power, especially considering--yet again--that steroids do not much (if at all) boost the crucial lower-body musculature that gives a batter power, whereas endless squats and like workout exercises do. And the one-time spike in 2001 is a classic statistical "burp" of the sort seen in many men's careers (but not so often noticed, because even their one spiked year's output is not newsworthy). Finally, note that since 1999, Bonds' PF has, saving that spike, on average been moving rather obviously and relentlessly down--not down in an off-the-cliff way, as if he had lost some boost, but gradually, exactly as we would expect from the inevitable ravages of time. If he supposedly started steroids after the 1998 season, then we are expected to believe that they pumped him up and then let him right back down. Say what?

So, though the data do not--and cannot--disprove the steroids allegations, they do demonstrate clearly that Bonds' stats do not need any artificial, exterior explanations, because there are no artificial influences to be seen in them. He may or may not have taken steroids, knowingly or unknowingly--but it doesn't matter to his performance. As to the two other questions, they--as with all steroids allegations--are evidential matters that should be resolved in some forum other than analysis of performance. If, to hark back to our silly analogy of peanut-eating, we were to ban peanut-eating by ballplayers, whether Fred Feckless did or did not eat peanuts might then be of interest to legalists, but not to responsible baseball analysts (or to baseball fans)--unless, of course, they had all become hypnotized by propaganda into believing that peanuts really do confer mystic powers on those who eat them. It is much like the old corked-bat nonsense: superstition. Mind, the question of whether we should attach opprobrium--and, if so, how much--to a player who apparently sought to "cheat", even if his "cheating" was inherently ineffective, is a totally different question, one involving ethics, not baseball performance, and it is considered on a separate page of this site.

Some Deeper Questions

Something else urgently needing consideration is why there has been this incessant hammering on Bonds. Doubtless part of it was his nearing the home-run record; but the history of Bonds-bashing is a lot older than the chase or the steroids allegations. A thorough reading of the Cosellout web site's two-part article Sports Illustrated's Curious COVERage of Barry Bonds yields intensely interesting data (as, indeed, does the entire site). If you who are reading this are a sportswriter and feel you have been punctilious in your research and fair in your coverage of PEDs in general and Bonds in particular, jolly good, I salute you. But you know perfectly well that 98% of your colleagues have been neither punctilious in their research nor fair in their coverage of PEDs and of players suspected--usually on no palpable grounds--of using PEDs, and that is putting it in very much more polite language than the matter justifies.
A political reporter who would regularly refer to President Bush as an alcoholic with obvious drinking problems had best be able, if challenged, to supply some sort of evidence--if not definitive, at least enough to go to trial with, so to speak--or be laughed (or thrown) out of a job; but a sportswriter who regularly tosses out equally gross libels about "obvious" PED users seems to be under no such onus. Just as the political reporter cannot say "Well, obviously he's a little goofy at times, and look at his bad judgement" and expect to be taken seriously, so should the sports reporter be obliged to say something more than "Well, obviously he's pretty muscular, and look at his home-run totals".
But it's not just writers--after all, sticks and stones and all that. It's a lot worse, as the blog article "Breaking Bonds" by author Matt DeMazza shows. The government's whole case against Bonds is the tale of a weirdo Inspector Javert chasing his personal Jean Valjean. The whole thing stinks. And the smell has nothing to do with the late-arriving question of whether Bonds used PEDs.