The idea of putting your best hitter second, rather than third, is still a novel one within baseball and has yet to gain widespread acceptance, even though the evidence in favor of such an arrangement is pretty strong. Using metrics such as batting runs to estimate the runs gained or lost by changing a lineup, shifting to an optimal order is only worth about 10-15 runs, or just over a win, over the course of a full season. That said, the marginal gain from getting your best hitter another handful of at-bats, including extra at-bats at the end of games, makes it worth trying to capture value that would otherwise be squandered.
The Reds are the best example this year of a team that is giving away offense by putting their worst hitter, Zack Cozart, ahead of their best hitter, Joey Votto -- an example of archaic thinking that persists within the game simply because that's how we've always done it.
Same as it ever was
Traditionally, the No. 2 hitter is supposed to be a table-setter who can put the ball in play, drop a bunt, hit behind a runner, and so on. This is all hogwash, of course: The No. 2 hitter has the same basic job as all of the other guys in the lineup -- to get his posterior to first base any way he can.
To put it another way, his job is to avoid making an out. As an industry, we spend too much time praising players for doing "little things" -- every round of applause a player gets for grounding out and getting a guy from second to third makes me die a little inside -- and that glorification has led to this piffle about the two-hole hitter being a certain type of player, rather than just a really good hitter, period.
Each lineup spot gets about 2.5 percent more appearances than the one after it over the course of a season, or roughly another plate appearance every 8-9 games. (That stat, and much of the information in this article, comes from the very useful "The Book: Playing the Percentages in Baseball," by Tom Tango, Mitchel Lichtman, and Andrew Dolphin, which has a whole chapter examining lineup construction that goes well beyond what I'm discussing here.)
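To see how that 2.5 percent falloff translates into the "extra plate appearance every 8-9 games" figure, here's a quick sketch. The 750-PA season total for the leadoff spot is an assumption for illustration; the 2.5 percent step per spot is the figure cited above.

```python
# Sketch of the plate-appearance falloff by lineup spot.
# Assumption: the leadoff spot gets roughly 750 PAs over a 162-game season.
leadoff_pa = 750
pa_by_spot = [leadoff_pa * (0.975 ** i) for i in range(9)]  # ~2.5% fewer per spot

# Extra PAs per season, one spot over the next
step = pa_by_spot[0] - pa_by_spot[1]

# How often that extra plate appearance comes up
games_per_extra_pa = 162 / step

print(f"{step:.1f} extra PAs per spot, or one every {games_per_extra_pa:.1f} games")
```

With those assumptions, the gap works out to roughly 19 plate appearances per spot per season, or one extra PA about every 8-9 games, consistent with the figure above.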
The obvious implication is that you want to load your best hitters up at the top of the lineup, and to put your worst hitters near the bottom of it. Even flipping Cozart and Votto (not that you'd hit Cozart third, but just hypothetically) would convert about 10 outs per year into times on base, assuming 150 games played for each player and using their OBPs of the past few years.
Of course, you don't want to put your best overall hitter, someone who gets on base but also hits for power, in the leadoff spot, because his first plate appearance of each game comes with the bases empty and the rest come with fewer men on base, since he's hitting behind the No. 8 and 9 hitters. Tango et al. showed that, historically, the leadoff spot has far fewer PAs with men on base (36 percent, with no other spot below 44 percent) than any other lineup position, making it an ideal spot for a high-OBP but low-power hitter.
For the Angels, that's not Trout, who slugged .564 last season with 30 homers and belongs in a position where he can deploy that power to knock some guys in, while also getting on base for the hitters behind him. (Sadly, the Angels don't really have a high-OBP guy for the leadoff spot; Peter Bourjos' .370 mark this season is way out of line with his career OBP of .307.)
Conventional wisdom says you put your best overall hitter third, and to this day most teams still do just that. Tango et al. point out, again with historical data, that when you consider the plate appearances each lineup spot receives, as well as the frequency with which each spot encounters each base-out situation*, a team's best hitter belongs in the No. 2 spot: It comes up about 2.5 percent more often over the course of a year than the No. 3 spot, and generates more value with almost every way of reaching base, thanks to who's typically on base and with how many outs. That is, a single or a double or a walk from the No. 2 hitter is worth more in run-scoring potential than the same event from the No. 3 hitter. The numbers are all very close, but the No. 2 hitter gets those extra 15 or so plate appearances a year, and the No. 3 spot, on average, leads off the fewest innings, which is another reason not to put your highest-OBP guy there.
* There are 24 base-out situations: 0, 1, or 2 outs, combined with the eight configurations of runners on base, from bases empty to bases loaded. These are the 24 scenarios found in a run expectancy table, which tells you how many runs you can expect to score given the number of outs and the men on base. It also tells you that giving up an out via a sacrifice bunt is generally stupid.
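A run expectancy table makes the bunt point concrete. The values below are rounded, approximate MLB averages in the spirit of the figures Tango et al. use (exact numbers vary by era), arranged as the 24 base-out states described above:

```python
# Run expectancy: expected runs scored from each base-out state to the
# end of the inning. Values are approximate MLB-average figures, rounded;
# exact tables vary by run environment and era.
RUN_EXPECTANCY = {
    # (runner on 1st, on 2nd, on 3rd): [0 outs, 1 out, 2 outs]
    (0, 0, 0): [0.55, 0.30, 0.12],
    (1, 0, 0): [0.95, 0.57, 0.25],
    (0, 1, 0): [1.19, 0.73, 0.34],
    (0, 0, 1): [1.48, 0.98, 0.39],
    (1, 1, 0): [1.57, 0.97, 0.47],
    (1, 0, 1): [1.90, 1.24, 0.54],
    (0, 1, 1): [2.05, 1.47, 0.63],
    (1, 1, 1): [2.42, 1.65, 0.82],
}

# A "successful" sacrifice bunt: runner on 1st with 0 outs
# becomes runner on 2nd with 1 out.
before = RUN_EXPECTANCY[(1, 0, 0)][0]
after = RUN_EXPECTANCY[(0, 1, 0)][1]
print(f"Run expectancy change from the bunt: {after - before:+.2f}")
```

Even when the bunt "works," the table says the trade costs you a fraction of a run on average: the out is worth more than the extra base.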
These gains are small but real, and freely available to any team. What's a little less evident immediately from these studies -- again, I refer you to "The Book" for the data itself -- is the very real, almost binary benefit a team may get once or twice a year in the ninth inning from batting, say, Joey Votto second instead of Zack Cozart.
According to Dan Szymborski, in nine-inning games the past 10 years, the last out was made by the No. 2 batter 11.7 percent of the time, about what you'd expect given nine lineup spots with a slight skew toward spots near the top. (A straight 1-in-9 shot would be 11.1 percent.)
In other words, in about 19 games a year, the No. 3 hitter was left standing in the on-deck circle, forever alone. With one-run games accounting for about a quarter of each team's schedule last year -- the Reds were 31-21 in such games, so nearly a third of their games were decided by a run -- that would mean on average about five games a year where the team's best hitter doesn't get a last chance to bat. It might be only one or two such games, and it could be more than five, but the point is that there is never a game where you should be comfortable losing by a run while your best hitter stands on deck watching a clearly inferior two-hole hitter make the final out.
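The arithmetic behind that "about five games a year" estimate is simple enough to sketch, using Szymborski's 11.7 percent figure and the rough one-in-four rate of one-run games cited above (treating the two as independent, which is a simplification):

```python
# Rough arithmetic for how often the No. 3 hitter is stranded on deck
# in a game decided by one run. Both rates come from the article;
# assuming independence between them is a simplification.
GAMES = 162
p_last_out_no2 = 0.117   # share of games ending with the No. 2 hitter's out
p_one_run_game = 0.25    # roughly a quarter of games are one-run games

games_no3_on_deck = GAMES * p_last_out_no2
close_games = games_no3_on_deck * p_one_run_game

print(f"~{games_no3_on_deck:.0f} games a year end with the No. 2 hitter's out;")
print(f"~{close_games:.1f} of those are one-run games")
```

That back-of-the-envelope math lands on roughly 19 games with the No. 3 hitter on deck for the final out, about five of them decided by a single run.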
And a win coming from that situation isn't a hypothetical win from 10 runs produced on aggregate over a season -- it's a binary variable, a loss turned into a win, the kind that shows up in the standings and that people who work with baseball statistics are often absurdly accused of ignoring.
If you can get one more win a year from optimizing your lineup this way, with no downside whatsoever, shouldn't you do it? And shouldn't any manager who hits a guy with a career .283 OBP second (Cozart), ahead of a guy with a career .417 OBP (Votto, .445 this year, .474 last year), be held accountable for that decision? Put your best hitter second, your next-best hitter fourth, and your high-OBP/low-power guy first, and you get, in effect, free runs. It may be just a handful over the course of a season, but that one marginal at-bat in the ninth inning might turn into a very real, tangible win, the kind that teams are supposed to be pursuing anyway.
The conventional wisdom here is wrong, and all it took to explain why was a few guys willing to question it and look at the data.