There is some dispute about the precise year and place in which limited overs cricket began. A one-day tournament initiated at the Tripunithura Cricket Club, near Cochin, India, in 1950 claims the honor.
The Pooja Knockout tournament is still played each year and its 72nd edition is currently underway.
In professional cricket, the first one-day competition was a Midlands Knock-out Cup in 1962 between four English counties that happened to have spare days in their schedules. The following year, a 65-over tournament, the Gillette Cup, was launched for all 17 counties. Gillette's sponsorship lasted until 1980, when NatWest Bank took over, and the competition survived under a variety of sponsors and formats, although it is now a shadow of its former self. In 1969, a 40-over Sunday competition, sponsored by the tobacco company John Player, was introduced. A third competition, a 50-over tournament sponsored by Benson & Hedges, was introduced in 1972 and ran for 30 years before T20 cricket took over.
Limited overs cricket spread into other countries. The Gillette format and sponsorship, for example, were extended to Australia, South Africa and the West Indies on a rather tentative basis. Ironically, the first international limited overs match was a direct result of rain. In 1971, a Test match between England and Australia in Melbourne was abandoned because of heavy rain on the first three days. The loss of revenue and lack of cricket for the players led to an agreement to play a match on the Gillette Cup format. Despite being played on a Tuesday, the match attracted 45,000 spectators. Few knew that a historic moment had occurred. Once again, that moment had been driven by financial considerations.
Unlike Test cricket, limited overs cricket does not allow a draw. In the format’s early years, schedules were not so crowded, so reserve days were available in case of interruption. The first ever Gillette Cup match at Manchester in May 1963 was played over two days because of rain. As schedules became fuller, it became necessary to think about how to deal with the effects of interruption, usually by rain, on the outcome of matches that must be completed in one day.
The first method used was based on average run rate (ARR). If the team batting second lost some overs, its revised target was the average runs per over of the team batting first multiplied by the number of overs available to the team batting second, plus one run. The problem with the method was that it favored the team batting second, which only had to match the first team's scoring rate over a shorter innings, often with more wickets in hand.
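The ARR arithmetic described above can be sketched in a few lines. This is an illustrative implementation with made-up match figures, not an official formulation:

```python
# Sketch of the Average Run Rate (ARR) method. The team batting second
# chases the first team's average runs per over, scaled to the overs it
# has available, plus one run to win.

def arr_target(team1_runs: int, team1_overs: int, team2_overs: int) -> int:
    """Revised target for the team batting second under ARR."""
    average_run_rate = team1_runs / team1_overs
    # Scale the rate to the reduced innings, then add one run to win.
    return int(average_run_rate * team2_overs) + 1

# Hypothetical example: team one scores 250 from 50 overs (ARR = 5.0);
# rain cuts the chase to 30 overs.
print(arr_target(250, 50, 30))  # 151
```

The bias is visible in the example: the chasing side needs only the same five-an-over rate, but over 30 overs with all ten wickets intact.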
This was apparent in a match between Australia and the West Indies in 1989. The West Indies' target was reduced to an average run rate below that which Australia had achieved in its innings. This caused uproar, and the Australians set about developing an alternative method, called the Most Productive Overs (MPO), which was adopted for the 1992 World Cup.
According to MPO, if an interruption occurs while team two is batting and its innings is reduced to x overs, its target is revised according to the runs scored by team one in its highest scoring x overs. It was not long before flaws in this method emerged, one of them in spectacular fashion. In the semifinal of the 1992 World Cup in Sydney, South Africa required 22 runs from 13 balls to beat England when rain stopped play. The rain relented within 10 minutes, by which time South Africa's target was announced as 21 from a single delivery, much to everyone's incredulity.
This happened because the umpires judged that two overs had been lost to rain. Under MPO, this meant deducting from the target the runs England had scored in its two least productive overs. England had scored no runs in those overs, so the target remained virtually the same, but 12 balls were deducted from the 13 that had been available prior to the rain. Despite subsequent revisions, the method could not escape its bias toward the team batting first, the mirror image of ARR's bias toward the team batting second.
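The Sydney anomaly follows directly from the deduction rule described above. The sketch below uses a contrived, hypothetical over-by-over innings (the function name and figures are illustrative, not from the official playing conditions):

```python
# Sketch of the Most Productive Overs (MPO) deduction. When overs are
# lost from the chase, the target is reduced only by the runs team one
# scored in its *least* productive overs, while the full allotment of
# balls still disappears from the chase.

def mpo_revised_target(team1_over_scores: list[int],
                       original_target: int,
                       overs_lost: int) -> int:
    """Revised target after losing `overs_lost` overs from the chase."""
    # Identify team one's cheapest overs and deduct their runs.
    least_productive = sorted(team1_over_scores)[:overs_lost]
    return original_target - sum(least_productive)

# Contrived innings in which team one's two cheapest overs were maidens:
over_scores = [0, 0, 4, 6, 2, 8] + [5] * 44   # 50 hypothetical overs
print(mpo_revised_target(over_scores, 253, 2))  # 253
```

Because the two lost overs were maidens, the target does not move at all, yet two full overs vanish from the chase: the same arithmetic that turned 22 from 13 balls into a near-impossible ask in Sydney.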
Into this unsatisfactory situation entered two British statisticians, Frank Duckworth and Tony Lewis, whose names are now woven into cricket’s rich tapestry. Their D/L solution, first used in 1997, prior to the advent of T20 cricket, was based upon the notion that teams have two resources to build a score — overs and wickets. The combination of these resources, which are left for a team at any point in its innings, will determine its ability to score more runs. Analysis of many previous 50-over matches revealed patterns of scoring.
Based on this, the method converts all of the possible combinations of overs and wickets left into a table which expresses these combinations as resource percentages. If rain interrupts, the target score for the team batting second can be adjusted relative to the total achieved by the team which batted first, so as to reflect the loss of resources. The overall aim is to set a mathematically fair target for the team batting second, one that presents the same degree of difficulty as the original target, or to calculate a result if the match has started but cannot be completed.
The D/L method has been modified on several occasions to address minor criticisms. The data on which the table is based is updated every year to take account of recent matches. In 2015, the custody of the method passed to an Australian professor of data science, Steven Stern, with the method being renamed DLS.
A major concern expressed about DLS is its potential unsuitability for T20 cricket, on the grounds that the format's scoring patterns differ from those of ODIs. Stern compared patterns in T20s with those in the last 20 overs of ODIs and found no significant difference. DLS has been called into use four times so far in the current T20 World Cup and, as is usually the case, has produced fair revised targets and outcomes. This is a remarkable achievement.