Saturday, January 25, 2003

Kerry Packer, one day cricket, and the impact of television.

A month or two back, I read an Australian newspaper article about the 25th anniversary of World Series Cricket, the commercial breakaway cricket competition that occurred in 1977. Various Australian players were quoted as saying that media mogul Kerry Packer, who sponsored the competition, had turned the sport into something fully professional, and that if he hadn't come along, the game of cricket would have faded into obscurity.

As a claim, I think this is excessive, but still, there is some truth in it.

International cricket has traditionally consisted of so-called "Test matches", which go for a number of days each. Each member of each of the two teams gets to bat twice, and when it is all over, the side with the most runs is the winner. If the amount of time scheduled for the match runs out, then the game is declared a draw. (This is distinct from a "tie", which is the result if both sides end up scoring the same number of runs). Over 126 years of international cricket, the number of scheduled days per match has varied from three days to no limit, but for the last few decades it has been standardised at five days. Typically, the national team of one country tours another country for two to three months, and in that time they play three to six of these test matches with the host side.

A disadvantage of this type of game is that spectators who attend for a single day see only part of a match. It was believed that more spectators might be interested if an entire game could be played in one day, and in 1963 the Gillette Cup was founded in England. Top level domestic cricket between the county sides in England was normally limited to three days per game. However, this was a competition of one day cricket: teams batted once each instead of twice, and each side was allowed to bat for only a maximum number of overs, so that the match could be completed in one day.

This competition was quite successful, and the number of domestic one day games played in England and elsewhere steadily increased over the years. However, these were considered largely social games, and at international level, test cricket continued to be the only thing that was played until 1971. In 1971, rain ruined a test match being played between Australia and England in Melbourne, but the weather was fine on the last scheduled day. Rather than waste the day, the teams agreed to play a one day, limited overs game between the two sides. A large crowd turned up to see that match, Australia won, and international one day cricket was born. For the next few years, most international tours followed a test series with between one and three one day matches. In 1975, the first World Cup of one day cricket was played in England, with the West Indies defeating Australia in a very exciting final. One day cricket was on the rise, but was still the poor relation of test cricket.

The highest rating television network in Australia, Channel 9, was in the 1970s owned by Kerry Packer. (Packer owns the network today, too, although he spent some of the time in between not owning it). Packer was smart enough to observe that cricket, when it has a large following, is almost a perfect sport for commercial television. The game goes for long periods of time, and can thus fill up many hours and days of television schedule time, and it does this in summer, when finding other programming can often be difficult. In addition, cricket has breaks in play between overs every three or four minutes. Normally, these breaks are just slightly longer than a standard 30 second television commercial. (Cricket also has slightly longer breaks for drinks once an hour and after batsmen get out). For this reason, Packer attempted to bid for the television rights to cricket in Australia. He was rebuffed, not for financial reasons but because the governing body was more interested in the national coverage that the ABC could offer and that commercial networks could not. (Commercial television networks did not have a national reach in Australia in the 1970s, although they do today).

The administrative body of cricket in Australia was not especially commercially minded: although cricket was theoretically a professional sport, it had the structure of an amateur sport. (In many ways it still does. Professional cricket consists of games between teams representing states, countries, counties, and other geographical and political entities, rather than the types of clubs and franchises that exist in soccer or baseball). In the mid 1970s, relations between the Australian board and the players were not good, and the players were badly underpaid. Packer was aware of these grievances, and did a deal directly with the players. He signed up all the best players, and set up his own rebel cricket competition called "World Series Cricket" (WSC) in 1977, consisting of three teams: Australia, the West Indies, and one representing the rest of the world. (An advantage of doing it this way was that he was able to sign up some of the best South African players, who were otherwise banned from international cricket). For two years this organisation played its own matches in competition with the official matches, and (at least in Australia) WSC had most of the best players. WSC was originally intended to consist mainly of five day "Supertests", but was much more willing to experiment than the traditional cricketing authorities. It turned out that there was great interest in one day matches, so the number of these in the schedule was steadily increased.

The Sydney Cricket Ground (SCG) is owned by the state government of New South Wales. This government had no wish to upset the most powerful media mogul in the country, and so the government gave permission for WSC matches to be played at the SCG.
However, the Melbourne Cricket Ground (MCG) actually belongs to the Melbourne Cricket Club, which would not give Packer permission to play his games there. Therefore, another venue had to be found, and the games were played at VFL Park, a ground which was built for Australian Rules Football. Football matches were often played at night. As the lights were already there, WSC decided to experiment with night cricket, and scheduled one day games starting in the middle of the afternoon, and concluding just after 10pm. (This was also excellent for prime time television). The traditional red ball did not show up well against artificial light, and a white ball was substituted. This white ball did not show up well against the white clothing traditionally worn by cricketers, and therefore coloured uniforms were substituted.

The night matches were a huge success. In Sydney, with the cooperation of the state government, lights were erected (in the middle of the night, without warning, so as to foil potential protests from local residents) and night games were played in Sydney too. By the second season of World Series Cricket, a format had been established: a mixture of five day matches and a triangular series of three teams playing one day matches, in which players wore coloured uniforms, played with a white ball, and in which many matches were played in the evenings under lights. This product was extremely popular with fans, and this success was a factor in the peace settlement that Packer signed with the cricketing authorities in 1979. This was essentially a surrender to Packer. Packer was given the television rights to cricket, a company called PBL Marketing (which belonged to Packer) would market the game and essentially control its administration, and each season would consist of test matches between Australia and either one or two other teams as well as a triangular one day series with Australia and two other sides. The one day games would be played with a white ball and coloured uniforms, and many games would be played under lights. The one day game was marketed more heavily than the five day product, and the number of one day games was much greater than anywhere else in the world. Purists (such as myself) claimed that test cricket was a better and more interesting game, but the crowds, sponsors, and television ratings suggested that most people didn't feel that way. In Australia, from time to time we got "Is test cricket dying?" articles in the newspapers.

Thus, by 1979, international cricket in Australia had gone through two years of rapid change, and ended up with a format that remains essentially unchanged nearly 25 years later. Australia was playing a great many more one day games than any other nation. However, it wasn't especially good at them, and its win/loss ratio was only so-so. While most other countries were still using the old format of a test series and two or three one day games, triangular tournaments in Australia featured as many as 19 games. Crowds were large for one day games, but were much smaller for test matches.

What was in retrospect another key factor in changing the game occurred in 1983. The third World Cup was played in England, and it was quite unexpectedly won by India. Interest in cricket was already high, but winning the World Cup (in combination with the spread of television in India, which was just starting to occur) caused it to explode. Inevitably this led to an increase in interest in one day cricket in India, and so to an increase in the proportion of one day cricket played there. The 1987 World Cup was played in India and Pakistan, and was a great success (although the hosts were a little disappointed when their own teams were eliminated in the semi-finals). Australia ended up beating England in the final.

In most nations the changes were gradual. The number of one day games played on a tour steadily increased. Lights were gradually installed at a few grounds, which led to a greater number of games in coloured uniforms played with white balls. There was a gradual move towards triangular tournaments rather than series between two sides. (Stand-alone triangular tournaments at neutral venues were also played, largely for the benefit of television). These triangular tournaments started off containing fewer games than the immense tournaments played in Australia, but they steadily grew.

Australia hosted the World Cup in 1992. Whereas the 1987 event in India had been played using a similar format to the earlier events in England (white clothes, red balls, day matches), the 1992 event was one day cricket in the Australian style. This tournament also marked the return of South Africa to international cricket after sanctions against them had ended. Unbeknownst to most people outside South Africa, a culture of cricket had developed there that was if anything even more brash than what existed in Australia. Lots and lots and lots of one day games, many played under lights with a white ball, lots of loud music, you name it. Tours of South Africa were like tours of Australia, only more so.

From about 1992, the Australian model was adopted virtually everywhere with the exception of England. Enormous triangular tournaments were being played in South Africa, India, Pakistan, and Sri Lanka. (Not coincidentally, this is about the time when large amounts of money started being paid for television rights, particularly in India). Whereas the 1987 World Cup in India and Pakistan had been played under an English model, the 1996 World Cup, again played on the subcontinent, was an event featuring all the features of cricket that had been adopted in Australia fifteen years before. The final was played in Lahore under lights, where Sri Lanka beat Australia.

As late as 1997, England had played more one day internationals at the Melbourne Cricket Ground than they had on any ground in England. They still played matches in the day, in white clothes and with a red ball, and had breaks for lunch and tea. However, even they eventually succumbed. Coloured uniforms and a white ball were introduced for some of their domestic matches. The 1999 World Cup, the first in England for 16 years, was played using a format similar to what was done in the rest of the world. These were the first internationals played in England in coloured uniforms with a white ball, but they weren't the last. A year or so later, England signed a television deal with BSkyB which called for two touring sides every year, and a mixture of test cricket and a large triangular one day tournament, which was to be played with a white ball etc etc. The deal was extremely similar to the one that Australian cricket had signed with Kerry Packer 20 years before.

So, cricket was transformed. Teams were playing huge numbers of one day matches, were playing nine months of the year, and were being paid far better than had been the case 25 years earlier. Some would say that the whole world copied the format invented by Kerry Packer in Australia in 1978, but to be truthful I doubt that. Television transformed the sport, but television would likely have transformed the sport in exactly the same way had Packer not arrived. It simply would have taken longer. The contrast is between what happened in Australia, where all the commercial reforms happened at once, and everywhere else, where they came gradually. Had Packer not come along, they would have come gradually in Australia as well. I think it extremely unlikely that they would never have come along. (It might be worth comparing with rugby, which became fully commercial in about 1995).

One question that is worth asking is simply whether the earlier commercialisation that occurred in Australia helped Australia build their subsequently great teams. This one is tricky. Three or four years after the end of WSC, Australia had one of the worst teams in their history. This may have been a consequence of the disruption to the game caused by WSC, or it might have just been bad luck. However, Australia responded to this crisis by thoroughly reforming its system of player development. The commercialism of the game that existed by then, and the money that was in the game by then, certainly made it easier for them to do this. And of course, the game was not being controlled by traditional administrators at that point: it was being controlled by businessmen who reported to Kerry Packer. Certainly these people were much less sentimental than the sorts of people who ran (and run) the game in, say, England.

Finally, there has been one other interesting transformation in the game in Australia in the last decade. In the 1980s, one day cricket was promoted much more heavily than test cricket, and this showed up in television ratings and crowd figures. However, in the second half of the 1990s, this trend reversed itself dramatically. Crowds for test cricket started to rise, and for one day cricket to fall (although one day crowds remain good). It seems that the large numbers of new cricket fans who had started following the game due to its commercialism were now interested enough in it to appreciate test cricket, and in fact to prefer it. (It may be that once you have watched a few one day games they all seem the same, whereas test matches are subtle and extraordinary things). This trend has been visible for a while, but by this season it had become clear that test cricket was once again the pre-eminent form of the game in Australia and one day cricket was the poor relation. (This trend may have been strengthened by the rise of legspin bowler Shane Warne, whose skills are shown off more by test cricket than one day cricket). A South African friend of mine who visited Australia a year ago was astonished by just how high the profile of test cricket in Australia was.

However, for now, one day cricket is still king in South Africa, in India, and elsewhere. (England is kind of complicated). Certainly in the rise of commercialism and one day cricket, everyone else followed where Australia led. So an interesting question is whether Australia is leading the world again in a trend back to test cricket, or whether this is something uniquely Australian. I will watch this with interest.

Well, we seem to have the worst piece of internet sabotage since the Great Worm of 1988. (A vulnerability in a piece of Microsoft software has been taken advantage of. Where have I heard that before?) If we didn't know it already, the internet is pretty vulnerable. I tend to think that the Al Qaedas of the world as a general rule lack the technical capabilities to pull something like this off, however. As for the Aum Shinrikyos of the world, they are perhaps more worrying. (Dave Winer has some thoughts on the relative speeds with which the blogosphere and online media got the story compared to the conventional media. Of course, this is the type of story we really need the conventional media to be good at covering. In a situation only a little worse than this, we might have to rely on the conventional media, because the online media would simply not be functioning). By itself, this sort of attack is mainly irritating. If it were to happen in parallel with a series of military or terrorist attacks, however, its consequences could be very serious.

Friday, January 24, 2003

This piece at Wired talks about the schizophrenia at Sony: the company is simultaneously one of the largest consumer electronics manufacturers and the owner of a Hollywood studio (Columbia Pictures) and a major record label.

The basic point is that people who buy electronics want products that are as flexible as possible: products that can communicate with each other, products that can record and duplicate software, and products that don't have silly limitations built into what they can do. The content companies, on the other hand, want to restrict what electronics hardware can do because they are terrified of piracy. Therefore, Sony has difficulty releasing really cutting edge electronics products, because this would be seen as undermining the music and movie business. As Wired puts it, the contradiction is thus:

Instead, it's tried to play both sides. As a member of the Consumer Electronics Association, Sony joined the chorus of support for Napster against the legal onslaught from Sony and the other music giants seeking to shut it down. As a member of the RIAA, Sony railed against companies like Sony that manufacture CD burners. And it isn't just through trade associations that Sony is acting out its schizophrenia. Sony shipped a Celine Dion CD with a copy-protection mechanism that kept it from being played on Sony PCs. Sony even joined the music industry's suit against Launch Media, an Internet radio service that was part-owned by - you guessed it - Sony.

Of course, Sony's competitors are not so constrained. The typical strategy of big companies (Panasonic, Toshiba, etc) is to pay lip service to the content industry's concerns, but to largely give their customers what they want. There are more and more smaller competitors, largely Chinese, and these companies have no scruples at all. What is the likely long term consequence of this? Well, Sony loses its preeminence in consumer electronics, possibly. They bought the content companies in the first place so that they could supposedly get "synergies" out of selling both hardware and software. They never actually succeeded (although they did not lose as badly as Panasonic, who purchased MCA/Universal Pictures), and the content companies are now a big obstacle to Sony's core businesses. They need to sell them, although they are not likely to get a very good price in the current environment.
The good thing about this Economist leader is that it is written by someone who clearly gets the fact that copyright law was introduced to encourage creation and distribution of work, and not to ensure that copyright holders gained complete control of how a work is (and all derivative works are) used for all time. They also clearly get that copyright law, when extended too far, can actually hinder creativity rather than help it.

Then, however, the leader writer demonstrates that he doesn't understand technology at all, and doesn't know the history of the PC industry, by rather glibly stating that although copyrights should be shorter, laws enforcing technological copy protection should be enacted to allow copyright holders to protect those copyrights. There are a number of problems with this. The first is simply that it is easier said than done: nobody has ever invented a copy protection system that nobody is able to break. There is one basic problem, which is that however you encrypt, scramble and lock your content when you store it on a DVD or whatever, you have to decrypt, unscramble and unlock it to play it. And once you have decrypted, unscrambled and unlocked it to play it, you can also copy it.
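The point can be made concrete with a toy sketch in Python. The XOR "cipher" here is a deliberately trivial stand-in for any real scrambling scheme, and all the names are invented for illustration; the structural point is the same regardless of the cipher: whatever a player must do to recover the content for playback, a copier can do equally well.

```python
# Toy illustration of the structural flaw in copy protection: whatever
# scrambling is applied for storage must be undone before playback, and
# the unscrambled bytes are then just as available to a copier as to the
# speakers and screen. The XOR "cipher" is a stand-in, not a real scheme.

KEY = 0x5A  # stand-in for whatever secret the player holds

def scramble(content: bytes) -> bytes:
    """What the publisher does before pressing the disc."""
    return bytes(b ^ KEY for b in content)

def play(protected: bytes) -> bytes:
    """What any player must do: recover the plain content to output it."""
    return bytes(b ^ KEY for b in protected)

disc = scramble(b"feature film")   # what ships to the customer
plain = play(disc)                 # needed in order to play it at all
perfect_copy = bytes(plain)        # ...and trivially saved once recovered

assert perfect_copy == b"feature film"
```

A stronger cipher changes nothing here: because the player is in the customer's hands, the key (or the decrypted output) ultimately is too, which is why determined pirates are inconvenienced rather than stopped.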

This simple fact means that technologically savvy and commercial pirates are never going to be stopped. What having such copy protection in place will do is greatly inconvenience legitimate users, and make legally legitimate uses of content (such as the various things that fall under "fair use") impossible without the consent of the copyright holders. (The Digital Millennium Copyright Act (DMCA) makes it illegal to disable or tamper with copy protection, even if the use to which the copyright material is being put is otherwise legal). It is worth observing that the software industry used copy protection a lot in the early 1980s, but gave up because its customers hated it so much.

Recent attempts at copy protection law from the content industry are attempting to make it mandatory that all computers and other electronic devices sold have so called "digital rights management" software built into them: that is, software that refuses to play content unless it is owned legitimately. This is deeply problematic. To see why, it is necessary to think about what a PC is. A PC is a general purpose electronic device. It has a screen, a keyboard, a set of speakers, various peripheral devices such as DVD-ROM drives, hard disks and the like, plus the ability to connect new devices such as digital cameras, scanners, etc, all connected to a processor and some memory that can manipulate information to do with those devices in any way. The point is that how it does this is not defined in advance. A programmer can find new ways for data from one of these devices to be manipulated and then played, used, edited, redisplayed etc etc on any of these devices. The reason we can do all these extraordinary things on our PCs is precisely because of this flexibility. And we should be able to. If I buy a DVD, then I should be able to play it on my PC, and then do anything I like with the data from it on my PC. (Once I have done something on my PC, then copyright law should come into play if I want to give this to other people, but it should not restrict what I can actually do on my PC). The nature of the PC is that people other than the vendor can design the applications for it. Virtually every useful PC application has arisen this way.

Compare this with what, say, a DVD player does. It is designed for reading DVDs, and playing them back on a television. It is a special purpose computer, rather than a general purpose computer. It has one very specific use. It does not allow the data on a DVD to be played in interesting ways. It is designed precisely to be used in the way that Sony (or whoever) intended. This is true of virtually all non-PC electronics devices, from calculators to hi-fi systems to most mobile phones.

And what does the content industry want DRM software to do? It wants to look at everything my PC does, check whether this is within the small set of uses that is permitted, and not allow the PC to do anything that falls outside the narrow description of what is allowable under copyright law. In short, it wants to remove the general purpose aspect of the computer. It wants to turn PCs into special purpose devices which can only do things that were specified in advance. It wants to turn PCs into glorified DVD players (or whatever). If you mandate "digital rights management", you are mandating that a PC contain software from a particular vendor, or a small set of vendors. (This is why Microsoft has spent a huge amount of money on various versions of Windows Media Player with all sorts of DRM software built into it. Microsoft wants to be that vendor). This will dramatically restrict the development of new applications. Essentially the content industry finds the general purpose PC so threatening that it wants it outlawed. It wants to give you a small list of applications, and put each in a special box that cannot talk to any other boxes. This way piracy is hard. However, innovation is also hard. The idea that innovation can come from the ground and filter up was what made the PC revolution so special. Eliminating it would quite simply be a catastrophe.



Next time I put up a photo, I will go for one of me having had a shave and a haircut, and wearing a nice suit, I think. (I actually do look respectable at least some of the time).

Just out of interest, in this photo I am sitting at the top of Mt Lobuje East, and I am around 6100 metres above sea level. As you can see, Everest is somewhat higher (approx 8800 metres). The photo is taken with an extremely wide lens (19mm), so you can see the valleys below as well as the mountains above. As a consequence, Everest is actually nearer than it looks. It is about three kilometres above me, and also about three or four kilometres away in the horizontal sense.

Thursday, January 23, 2003

Dumb Movies and Environmentalism

In the 2000 movie Red Planet , which I caught on DVD the other day for some reason, there is a brief prologue in which it is explained that some time in the future, climate change has occurred on Earth, and a combination of the ozone layer being gone and global warming has rendered the earth nearly uninhabitable. Therefore, astronauts go to Mars, and release genetically modified green algae as the first step in a terraforming process that will warm Mars up, unfreeze the Martian ice caps, and eventually release oxygen from underground so that there will be a breathable atmosphere.

Now as this was revealed, one curious question unfolded in my mind. If technology has been developed that can terraform Mars, a hunk of rock with frozen ice caps and only a very thin, oxygen-free atmosphere, into an inhabitable world, why can't the same technology be used on Earth to restore its atmosphere and climate to what they were before the crisis occurred? Surely that is rather easier than transforming a hunk of rock into an inhabitable world. Either I am missing something, or this is the dumbest movie plot point since the human batteries in The Matrix. And how is it possible for Hollywood to release a movie like this without anyone asking this one simple question? And why didn't anyone in the audience ask it either? (The film got lots of bad reviews, but not for this).

I suppose one possibility is that a UN dominated by banana-starved Europeans has prevailed on earth, genetically modified algae are banned on the planet, and therefore although terraforming technology exists and climate change can be controlled, there are laws preventing this from actually being done because "GM is wrong". But this couldn't be. Nobody would allow huge numbers of people to die for reasons of Romantic Luddism, would they?

Seriously, though. The mental disconnect that led to this plot illustrates an interesting point. On earth, the question of environmental damage is often thought about in isolation. The damage is considered, but the question of how technological advance can help clean it up is not. However, when we look at something completely divorced from the everyday, such as the potential colonisation of Mars, we suddenly can once more think of the technology. Curious.
I think that "Jay Manifold" sounds like a name that the square jawed weapons officer should have on an interstellar warship in a vaguely militaristic 1950s science fiction novel, possibly one written by A.E. van Vogt. But that could be just me.
To state the fairly obvious, the war is about to start. Four carrier battle groups, a large USAF base in Qatar, bombers flying in from Diego Garcia, Turkey, and elsewhere. The amount of hardware in play is clearly enormous. The initial bombing onslaught is clearly going to be fierce.
Sherlock Holmes and the Anglosphere

Andrew Sullivan quotes Sherlock Holmes as apparently an early proponent of the Anglosphere.

"It is always a joy to meet an American, Mr. Moulton, for I am one of those who believes that the folly of a monarch and the blundering of a minister in far-gone years will not prevent our children from being some day citizens of the same world-wide country under a flag which shall be a quartering of the Union Jack with the Stars and Stripes."


It isn't just this one story, either. The Holmes books are very kind and complimentary to Americans, even if they sometimes have the usual British prejudices towards them.

"Tomorrow it will be but a dreadful memory. With my hair cut and a few other superficial changes I shall no doubt reappear at Claridge's tomorrow as I was before this American stunt - I beg your pardon, Watson; my well of English seems to be permanently defiled - before this American job came my way."

( His Last Bow. 1917).

Most of the Holmesian canon is short stories, but in two of the four novels (A Study in Scarlet and The Valley of Fear) about half the action takes place in America. (In both cases, Holmes is in London, and figures out what happened later). American characters, usually sympathetic ones, crop up in quite a few of the other stories. (German and French characters tend to be much less straightforward). Sir Henry Baskerville, the principal character (apart from Holmes and Watson) of The Hound of the Baskervilles, is British but supposedly grew up in America. (Intriguingly, though, both Sir Henry and Holmes himself were played by Australians in the most recent BBC television adaptation, so there is one more for the Anglosphere).

And as one final observation, Conan Doyle once wrote a play entitled Angels of Darkness , which was never published and is lost. However, it apparently features Watson living in San Francisco prior to meeting Holmes, and married to an American.

As a somewhat ludicrous aside while talking about Dr Watson in San Francisco, it is worth spending some time on the question of Dr Watson's wives, merely because it is amusing. Conan Doyle rather rushed the writing of the Holmes stories, and as a consequence the details of the stories are not always entirely consistent from one to another. The chronology of the stories appears to be that Watson lives with Holmes in 221b Baker Street, then gets married to Mary Morstan and leaves in 1888. Sherlock Holmes apparently dies. At some point in the next couple of years, Mary dies. Holmes returns in 1894, and Watson then returns to live with Holmes in Baker Street. (Holmes' brother Mycroft has kept the rooms in Baker Street exactly as Holmes left them, which is convenient, although it must be said that Mycroft, unlike Watson, was aware that Holmes was not dead). Watson then gets married again in 1902, and leaves Baker Street again. Some years later, Holmes (like Conan Doyle himself) retires to Sussex for a career in beekeeping, but still meets up with Watson to take cases from time to time.

However, there are various details of chronology that are not entirely consistent, and if you take it literally, Watson is leaving and returning rather more often than makes any sense. One way to interpret it all is to conclude that Watson was married more than twice, and taking this to extremes, some people have found evidence that Watson had as many as seven wives.

If so, what actually happened to the wives?


The second dubious explanation is that Watson was a serial killer.

How hard would it be for a doctor to procure poisons or administer deadly infections? Watson does admit to having "another set of vices" in "A Study in Scarlet"--could he be referring to a murderous streak a mile wide? This would make Watson one of the most diabolical, cunning, and daring killers of all time, to stay so brazenly close to the world's greatest detective and yet defy discovery at every turn. One would surmise that Holmes would get suspicious by the fourth or fifth time he was asked to present a ring as the best man.


This is all very silly, but much fun has been had discussing the subject.

Wednesday, January 22, 2003

My reaction to the Academy Awards is generally that the wrong films win virtually every year. Still, though, I love movies and I think that tipping the Oscar race is kind of fun, so I follow it and give my thoughts on it every year.

Two days ago the Oscar season started in earnest with the Golden Globe Awards. For the Oscars themselves, I will give my thoughts gradually on a category by category basis, and a set of predictions the day before the ceremony. (I gave some thoughts on Best Animated Feature last week). Golden Globe results are normally reasonably good indicators of the Oscars. However, the Golden Globes divide their main awards up into two categories: "Drama" and "Musical or Comedy". This doesn't usually matter much, because the Academy seldom gives awards to comedies, and musicals have been close to extinct. However, there has been a resurgence of musicals in the last couple of years. Moulin Rouge received lots of nominations last year and looks to have come close to winning Best Picture. Chicago seems a serious contender this year. This means that we have to look at the Musical or Comedy categories at the Golden Globes as well as the Drama categories. This is a change.

Unusually, this year the key to what is going on seems accessible by looking at the Best Actress and Best Supporting Actress categories, so that is what I will look at first. We start with the film The Hours, which apparently features distinguished performances from three very fine actresses: Meryl Streep, Nicole Kidman, and Julianne Moore.

The Oscar rules for acting awards state that every female performance is eligible for either the Best Actress or the Best Supporting Actress award. Which category a performance falls into is entirely up to the voters. Normally, what happens is that the studio that made the film takes out "For Your Consideration" advertisements in the Hollywood trade papers, and perhaps in the New York and Los Angeles press, which essentially tell the voters in which category to vote for a particular performance. Usually lead performances are suggested for the lead category, and supporting performances for the supporting category, but it varies depending on the strength of the field, the egos of the people involved, and the strength of the performance. (Anthony Hopkins won Best Actor for playing Hannibal Lecter, and was only on screen for about 15 minutes in total, but it was such a powerful performance that the studio decided to go for the big one, successfully). It can also depend on what other performances the same actor has produced in the same year. An actor may not be nominated twice in the same category for different performances, or in different categories for the same performance, but may be nominated in different categories for different performances.
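Just to make that last rule concrete, here is a toy sketch in Python of the nomination constraints as described above. The names, categories, and function are purely illustrative; this is obviously not anything the Academy actually runs.

```python
def nominations_valid(nominations):
    """Check (actor, category, performance) nominations against the rules
    as described: no actor twice in the same category, and no single
    performance nominated in two different categories."""
    actor_category = set()
    actor_performance = set()
    for actor, category, performance in nominations:
        if (actor, category) in actor_category:
            return False   # twice in the same category for different roles
        if (actor, performance) in actor_performance:
            return False   # same performance pushed in two categories
        actor_category.add((actor, category))
        actor_performance.add((actor, performance))
    return True

# Streep in both categories for two different performances: allowed.
assert nominations_valid([
    ("Streep", "Best Actress", "The Hours"),
    ("Streep", "Best Supporting Actress", "Adaptation"),
])
# The same performance in both categories: not allowed.
assert not nominations_valid([
    ("Moore", "Best Actress", "Far From Heaven"),
    ("Moore", "Best Supporting Actress", "Far From Heaven"),
])
```

This is exactly why the studios' category campaigning matters: the "For Your Consideration" advertisements steer each performance into one category so the constraints never bite.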

This is necessary to explain what is going on in the two actress categories. Julianne Moore put in a possibly even more lauded performance in Far From Heaven. Meryl Streep put in a fun performance in Adaptation, and therefore, although the three actresses in The Hours have performances of similar size, Miramax are campaigning for Streep and Kidman for Best Actress, and for Moore as Best Supporting Actress. This way, the actresses don't have to compete with their own performances in different films. It seems likely to me that Streep and Moore will each get nominated in both categories.

So who will win? We are in the odd position that there are two awards, and I think the members of the Academy would really like to give an award to all three actresses. All three are seen as overdue for an award, and all three are perceived as having done particularly good work this year. In the Golden Globe awards, Best Actress (Drama) went to Nicole Kidman and Best Supporting Actress to Meryl Streep. My feeling is that the Oscars will go the same way.

Nicole Kidman was a very familiar actress in Australia (both in the movies and on television) ten to fifteen years ago. Everyone thought she was going to go to Hollywood and make it as a very big star. Of course, instead, she went to Hollywood and married Tom Cruise, and then spent the next decade more famous as Tom Cruise's wife than as an actress. Even when she put in really good performances, such as in To Die For (1995) and The Portrait of a Lady (1996), she didn't get the credit she deserved. When she broke up with Cruise, she almost instantly got the success and recognition she had never quite had before. Last year she was nominated for Moulin Rouge (although many people thought her performance in The Others was better) and this year she will clearly be nominated again. I think the general perception is that people overlooked how good an actress she was because of the Cruise business, and now is the time to make up for it. At this point, therefore, my money is on Kidman to win Best Actress.

Meryl Streep is of course one of the best regarded actresses in Hollywood, and is greatly admired by other people in the profession. She has been nominated for awards a lot. (Oscar nominations in 1979, 1980, 1982, 1983, 1984, 1986, 1988, 1989, 1991, 1996, 1999, 2000, and a total of 18 Golden Globe nominations). She won awards early in her career, but prior to this year had not won an Oscar or a Golden Globe since 1983. She did pick up a Golden Globe award for Best Supporting Actress for Adaptation the other night, and gave a speech that clearly indicated she was astonished to finally win something. (Across the two sets of awards, she had been nominated 22 times since last actually winning). My feeling is that this is the year where the Academy will decide it really is time to give her an award again, and she will pick up Best Supporting Actress. Plus she is supposed to be extremely good in the movie. It is a fun performance, and the Best Supporting Actress award often goes to fun performances.

Julianne Moore seems likely to miss out. For Best Actress, she is likely to be nominated for Far From Heaven, which seems the slightly less accessible of her two movies, but the better of her two performances. (It would help if I had seen the actual movies, clearly). I think her vote will be split, and although she has been nominated twice before, I don't think her career is going to be seen as quite as distinguished as the other two's. So she may miss out, although this will be largely because it is such a strong year.

Do I think there are any other contenders? There is the Chicago factor. Renee Zellweger will likely be nominated for Best Actress. She was nominated last year for Bridget Jones's Diary and appears to be generally quite liked, but I don't think she is perceived as "serious" enough to win in the Best Actress category. Catherine Zeta-Jones will probably be nominated for Best Supporting Actress, but again I doubt she will win, largely on the basis that she isn't in the class of Meryl Streep or Julianne Moore. (Zellweger did win the Golden Globe for Best Actress, Musical or Comedy. Zeta-Jones was also nominated in that lead category, but lost to her costar).

As for non-Chicago other actresses, Diane Lane for Unfaithful is likely to get the last Best Actress slot, and Kathy Bates for About Schmidt looks likely in the supporting category.

As for people who will miss out, I am with Harry Knowles in thinking that it is a travesty that Emily Watson has not won an Oscar. At one point it looked like she might have a chance for Punch Drunk Love, but this faded away (probably due to the lack of commercial success for that movie: Adam Sandler fans hated it because it wasn't a normal Sandler film, and people who go and see art films didn't go and see it because it had Adam Sandler in it). This is annoying.

Just as an aside, I think that the one sheet for Punch Drunk Love is the most beautiful movie poster of the year. I need to get a copy to get framed and put up on my wall.



In a week or two I will talk about the races for Best Picture and Best Director. I am fairly sure this is going to be one of those years where they go to different films. Then after that I will talk about the Best Actor and Best Supporting Actor categories. (I am putting these ones off, as I have the least idea about them at this point). I may even talk about the technical categories at some point, if I am that way inclined, too.
Shane Warne is going to retire from one day cricket after the World Cup so as to prolong his test career. This could mean either that his body is in a pretty bad way and he just wants to eke out another year or two, or that he is still in decent shape, knows that it is the test career and not the one day career that matters, and was a bit shocked by the shoulder injury before Christmas and doesn't want to risk his test career by doing anything stupid. I hope it is the latter. Given that before the injury he was bowling better than he had in years, I am hopeful.

Warne is capable of being an idiot, but he is one of those extraordinary sportsmen who appear to have been blessed by the almighty. He comes on to bowl, and the crowd is silent, or everyone crowds around the television to watch. I wish he could play forever.

Tuesday, January 21, 2003

By taking bribes from bookmakers to lose matches, former South African captain Hansie Cronje disgraced the game of cricket. I can forgive players who have done a lot of stupid things, but not this. I am sorry that Cronje is dead, but a tribute to the man? Give me a break.
Suspension bridges, and cable stayed bridges

I have discussed this at least in passing before.


A suspension bridge (such as the first Severn Crossing, above) is a bridge where the towers are connected to each other by cables. (The main cables of a loaded suspension bridge actually hang in a shape very close to a parabola; it is a cable hanging under its own weight alone that forms the related curve called a "catenary"). The deck is then held up by vertical cables that connect to the main cables. The key point is that the only stresses on the deck are vertical.


A cable stayed bridge (such as the Second Severn Crossing, above) is a bridge where the cables are connected directly from the towers to the deck. If you build a bridge this way, the towers have to hold a lot less weight, and the bridge can be much less massive, and therefore much cheaper to build. However, the stresses on the deck are horizontal as well as vertical, and therefore the deck has to be made out of something stronger than is the case for a suspension bridge. For this reason, it was not practical to build large cable stayed bridges until materials that could withstand greater lateral stress than traditional materials could were developed in the 1980s, and this sort of bridge only really became a big deal in the 1990s. However, you now see them everywhere. For the very largest bridges (spans longer than 1000m), cable stayed bridges are still impractical.
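To get a feel for the loads involved, here is an illustrative Python sketch using the standard parabolic approximation for a suspension cable carrying a uniform deck load. The span, sag, and load figures are round numbers invented for the example, not data for either Severn crossing.

```python
import math

def suspension_cable_forces(span_m, sag_m, deck_load_kN_per_m):
    """Parabolic approximation for a main cable under a uniform deck load.

    Returns (horizontal_tension, max_tension_at_tower) in kN.
    """
    w, L, f = deck_load_kN_per_m, span_m, sag_m
    H = w * L ** 2 / (8 * f)    # horizontal component, constant along the cable
    V = w * L / 2               # vertical load carried at each tower
    return H, math.hypot(H, V)  # tension is greatest at the towers

# Invented round numbers, purely for illustration:
H, T = suspension_cable_forces(span_m=1000, sag_m=100, deck_load_kN_per_m=200)
# H works out to exactly 250,000 kN here. Most of the cable tension is
# horizontal, which is why the towers and anchorages do the hard work
# while the deck itself only has to carry vertical loads.
```

The analogous calculation for a cable stayed bridge would resolve each stay's tension into a vertical component (holding the deck up) and a horizontal component pressing the deck towards the tower, which is exactly the extra stress on the deck described above.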

When I was in Normandy over Christmas, I went and saw the Pont de Normandie across the mouth of the Seine connecting Le Havre and Honfleur. I took some photographs, but unfortunately it was an overcast day and the light coloured bridge didn't show up very well. I scanned one of these and was going to post it with yesterday's article, but it wasn't a very good photo so I refrained from doing so. I should have found a photo somewhere else but it was time for bed.

Monday, January 20, 2003

Paul Marks over at Samizdata comments that it is possible to contest the claim that we live in a period of exceptionally rapid technological progress. He quotes a speech to that effect given by John Jewkes in 1972. At that point, Jewkes asked where the battery operated cars were, along with the typewriter that can take dictation, much cheaper ways of digging tunnels, a cure for the common cold, substantially new and much more efficient desalination techniques, et cetera. Most of the things Jewkes asked for, we still do not have.

I think there is some truth in this. With respect to the technology of large things, we haven't got very far in fifty years. Look at transport. Rail, road, and air travel have not advanced much at all. From 1903, when the Wright Brothers made their first flight, to 1958, when transatlantic Boeing 707 services commenced, the advances were immense. (I will consider the De Havilland Comet to be essentially an experiment, and the Boeing 707 the mature product). Since 1958, there has been essentially nothing. The time it takes to cross the Atlantic has in fact increased slightly, as aircraft today are slightly slower than those of 1958. Aircraft today are more efficient, and travel costs a lot less, but the product itself is the same one we had in 1958. As for railways, widespread electrification is also something that dates back about 50 years, but since then there has been no widespread technical transformation. As for roads, the car was a mature product by the second world war. Since then, cars have again improved in quality and have become more widespread and cheaper, and the road networks they run on have become larger, but the car itself has not really changed. The speeds of cars on our roads have not really become higher. As for physical infrastructure, as Jewkes said, the cost of, for instance, digging tunnels hasn't dropped appreciably. Digging an underground railway through London was done in the 1860s, and the difficulty of doing so seems today about the same. Engineers commenced digging a Channel Tunnel in 1880, and there doesn't appear to have been any technical obstacle to their succeeding at the time. The project was shut down for purely political reasons.

As for bridges, the length of the longest bridge span in the world increased dramatically from about 1800 to about 1930. In 1800 the longest bridges in the world had spans of about 150 metres. In 1937, the longest span in the world was 1280 metres (the Golden Gate Bridge in San Francisco). After this dramatic series of advances, by 1997 the record had only advanced to 1410 metres, hardly any further advance at all.
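Those figures make the stagnation easy to quantify. A quick Python sketch, using the span figures from the paragraph above (the attribution of the 1997-era record to the Humber Bridge is my addition):

```python
# Longest bridge span on record, in metres, at a few points in time.
record_spans = [
    (1800, 150),   # approximate longest spans of the time
    (1937, 1280),  # Golden Gate Bridge, San Francisco
    (1997, 1410),  # record as of 1997 (the Humber Bridge, opened 1981)
]

# Metres of record growth per year between successive data points:
growth = [
    (y2, (s2 - s1) / (y2 - y1))
    for (y1, s1), (y2, s2) in zip(record_spans, record_spans[1:])
]
# Roughly 8 metres per year of record growth up to 1937,
# but barely 2 metres per year in the sixty years after it.
```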

And of course there is rocketry. The Germans invented the V2 in the second world war. Since then we have had bigger rockets, culminating with the Saturn V in the 1960s and the Russian Energia in the 1980s, but no real technical advance. If anything, rockets have actually got smaller since. The technology is basically that of 1950.

Robert Zubrin makes a similar argument towards the end of The Case for Mars. Zubrin argues that mankind lacks frontiers, that we need a new one, and that in the meantime we are becoming steadily more introspective, and large scale technical advance has suffered. Zubrin believes that we won't really get it going again until we have a new frontier, and that the planets in general and Mars in particular are the frontier we should look at. Yes, this will boost rocketry, but it will also increase the scale of the things we are trying to do, and get us out of what Zubrin perceives as a rut. (I don't have a copy of the book handy. If I did I would quote from it).

My interpretation is a little different though. While I would like to see humans visit and colonise Mars at least as much as Zubrin would, I don't think the absence of a frontier is what has caused this curious pause in physically large technology. What I actually think is that most of the technologies listed above had essentially reached the limits that the human mind and analogue technology could manage on their own. These technologies had reached limits of complexity that were hard to go beyond.

Since 1950, technological progress has instead concentrated on the very small instead of the very large. We have had an electronics revolution, and a resultant communications revolution, basically. And of course we have had a computer revolution. This in my mind only really got going in a big way in about 1980, with the beginnings of the widespread adoption of the personal computer. (I will consider the engineering workstation to be a form of PC, also). This put a tool on everyone's desk that allowed us to get through the complexity limits of analogue technology. Suddenly, this additional computational power was available, and this changed the rules dramatically. It immediately made technologies such as the mobile phone much more practical. (Essentially, a huge amount of computational effort is needed to prevent different calls from interfering with each other, plus computation had key roles in various other parts of the "communications revolution"). And it had the potential to revolutionise almost all the technologies mentioned above. It is only just starting to do so, but this is the lag as people learn to use new technology. (Economists spent decades worrying about the "productivity paradox". All this money was being spent on computers, but it wasn't showing up in productivity statistics. And then, in the late 1990s, it suddenly did. There was a lag, but it came through. We have had the lag in lots of other technologies too, but everything is suddenly coming through).

So let's look at some of the technologies mentioned above. Air transport. Much higher speeds require stronger materials, more powerful engines, and really complicated modelling of the fluid dynamics equations that describe what is going on at Mach 6. Modelling the fluid dynamics equations is getting better and better, but you cannot do it without a computer. As for stronger materials, well, we now pretty much have these. We have them because materials science uses some really fancy computational modelling techniques to make them. (Big advances in materials science are crucial to all sorts of things. We wouldn't have them without the computer revolution). Car transport. Well, at the highest performance end, Formula 1 racing, we have a situation where the technology is advancing at an incredible speed. Composite materials, fancy air flow models, computer based suspension and gear changes and a lot of other things lead to a situation where the cars become faster and faster every year. The administration of the sport has to change the rules every year in a desperate attempt to slow the cars down, but nonetheless they still get faster every year. The bottleneck is the driver's reflexes. On public roads, the issue is the driver, and congestion. Well, computer modelling may eventually solve these problems too. It is certainly being worked on. As for fixed infrastructure, well, tunnel building is still hard. As for bridges, in the last 15 years we have seen an utter revolution. More advanced materials science means that cable stayed bridges have become practical where suspension bridges were needed before. Cable stayed bridges can be built for a fraction of the price, so we have a golden age of bridge building. In December I visited the Pont de Normandie, the second longest cable stayed bridge in the world (856 metres). 20 years ago, a suspension bridge, which would have cost several times as much to build, would have been required.

The longest bridge span in the world was 1410 metres in 1997. It is now 1991 metres, the record holder being a particularly economically pointless bridge in Japan (the Akashi Kaikyo Bridge). It seems likely that a bridge connecting Italy and Sicily will soon be built, with a span of approximately 3000 metres. After 60 years of stagnation, suddenly we have a huge advance. As for rocketry, the computation-led advance in materials science may mean that we can do without rockets almost entirely, and use space elevators for getting payloads into orbit. To do this, we need modern materials again, and we need complicated computer systems to prevent the elevator cable from colliding with space debris.

Almost all the technological fields I have discussed above advanced dramatically between 1800 and 1950, but then more or less stopped. The decades since have looked like stagnation, but I would contend the stagnation is ending.

This is only a small part of the story. The spread of computational power has led to dramatic developments in nanotech and biotech, also. These revolutions will be at least as big, and may give us things like cheap desalination, as well as other things unimaginable. This, fundamentally, is why (as I mentioned once before), I dislike the expression "information technology" or "IT" so much. I think the computer revolution is, at heart, about computation, not about information. The benefits of improved information processing and improved communications are of course very important, but they are only a small part of the revolution that computers have unleashed (and they mostly wouldn't be possible without the computation anyway). Computation exists at a lower level, and it enables many things besides information processing. The larger revolution is that improved computation will give us a dramatically improved ability to control the physical world. The expression "IT" is just far too narrow an understanding of what computers are for.
Josh Marshall has started following the leads on just why George Bush has been sending wreaths to the grave of Jefferson Davis, once president of the Confederate States of America. Okay, the truth isn't very interesting or flattering to the president (it seems that Bush is repaying a favour done him by some not especially savoury southerners in the 2000 presidential primaries). However, everything seems to lead to an organisation named the "Sons of Confederate Veterans". These chaps are concerned with preventing "Heritage Violations" (their capitalisation), that is, "Any attack upon our Confederate Heritage, or the flags, monuments, and symbols which represent it". If one detects such a Heritage Violation, it should apparently be reported like this:

Whom do you report it to? Your first contact should be your Camp Commander or Heritage Officer. They should in turn report the heritage violation to the Heritage Chairman in your Brigade. The Brigade Heritage Chairman should then contact his Brigade Commander and the Division Heritage Chairman. Heritage violation responses are best handled at the local level, in cooperation with Brigade and Division level officers. A plan of action to deal with the heritage violation should be developed by these Brigade and Division officers, acting in concert with the local camp and member (or other person) that initially reported the violation.
The Division Heritage Chairman should report the violation to the Division Commander, and the SCV’s Chief of Heritage Defense. The Chief of Heritage Defense can call upon the national organization to respond to the violation, if such action is required. The Chief of Heritage Defense is assisted by a members of a Heritage Defense Committee, appointed by the Commander-in-Chief.

For the Chief of Heritage Defense to have a heritage situation officially deemed as a violation by the Sons of Confederate Veterans, he must have consent from the Commander-in-Chief and such other members of the General Executive Council as the Commander-in-Chief may designate, as well as a consensus of the Heritage Defense Committee.

These guys are apparently serious. (Or they at least think they are). Does the Ministry of Silly Walks come into this anywhere? Does it have anything to do with International Whacking Day, perhaps?

(Josh has lots more. I apologise for duplicating so much of his post, but it is simply too funny).
This article in the LA Times on food in the US Navy's submarines (via aldaily) is quite interesting. Basically, the Navy goes to great trouble to serve submarine crews the finest food possible: submarine cooks are trained in some of the finest restaurants and cooking schools in America (and sometimes elsewhere), and after serving in submarines many of them go on to become chefs in top restaurants themselves, to cook for the President in the White House, or even to teach in top cooking schools. The deal is essentially that when you coop 140 men up in a tiny space for three months, during which time they are mostly unable to communicate with anyone off the sub, keeping up morale is tremendously important. Still, this sounds good.

Rico says the Jefferson City's cooks try to keep up an eclectic menu -- a fusion of Asian, European and American cuisine that could have easily been lifted from any upscale restaurant.

Breakfast is hearty, with bacon, sausage, eggs, pancakes, French toast and grilled steaks, depending on the day. Grits and oatmeal made from scratch are standard offerings, as are fresh-baked doughnuts and omelets made to order.

A lunch menu on a recent Monday consisted of French onion soup, spinach lasagna and Italian sausage, followed by a dinner that included egg drop soup, teriyaki steak, Cajun blackened fish and pork fried rice.

A salad bar is standard for lunch and dinner as well as ice cream and a variety of cakes baked each day for dessert.

On Tuesday, the main lunch dishes were grilled steaks and broiled lobster, with seasoned wax beans and sauteed mushrooms with onions. For dinner, the crew had Dijon baked pork chops with natural pan gravy, simmered pasta and sesame glazed green beans.


Plus there is the simple fact that the number of people who serve in submarines is small, and the total effort in feeding them well isn't all that large and doesn't cost all that much. Assuming that half of the 73 submarines are at sea at any one time, that is a total of about 5000 people who have to be fed this way. That is much less than the crew of a single aircraft carrier. I was given a tour of the carrier USS Constellation once. We did briefly stick our heads into the mess room, and the sailors were being given plates of mass produced spaghetti bolognese. It looked perfectly edible, and perfectly nourishing, but it looked like institutional food everywhere - not all that exciting. Presumably this is par for the course for the rest of the navy.

The article also has this tantalising glimpse of another country.

British submarine crews have the added luxury of a small bar with ale on tap, but alcohol is prohibited on U.S. Navy vessels.

It doesn't say anything about the food the British submariners are served, however. Fish and chips, anyone? And do French submarines have large wine cellars?

Sunday, January 19, 2003

Okay, as a further explanation of why the Australian cricket team are as successful as they are, take a look at the photograph that goes with this article . The picture is of 21 year old Michael Clarke , playing in his first game for Australia. It was not an important game: Australia had already qualified for the finals of the one-day tournament in question. And of course it is a one day game, and most players would rather play a test. Clarke has been mentioned a bit over the last few months as someone who would play for Australia sooner or later, but he was realistically third or fourth in the queue of batsmen waiting for a place in the side. However, Australia had qualified for the finals, and so a couple of players were rested for this game. On top of that, just to do his best to confirm various other stereotypes about Australian cricketers in the minds of foreigners everywhere, Darren Lehmann managed to get himself suspended for five games for racial insensitivity. So, Clarke was a last minute inclusion in the side. Like many people in Australia, Clarke has clearly dreamed of playing for Australia since he was about six years old, and consequently we get this photograph: Clarke out in the middle in front of a capacity crowd, playing England, looking at the peculiar yellow coloured cap that is part of the Australian one day team uniform, with a smile on his face suggesting some mixture of delight and amazement. Even coming into the side in inauspicious circumstances in a relatively unimportant match it is that big a deal to the guy. I don't think I have ever seen an expression like that on the face of an English player. I wait eagerly to see the expression on his face when he gets the much more aesthetically pleasing Australian test cap. (It may be that this attitude on the part of Australians towards cricket and sport in general is excessive - in fact I believe it is - but it does lead to Australia having an excellent cricket team).

And what did Clarke do in the match? Well, he bowled seven pretty tight overs, and took one wicket for 24. He ran one of the English batsmen out through fine fielding. And with the bat, he scored an unbeaten 39 that took Australia from 6-104 to the winning total of 153. That is, he played a match winning performance. He isn't going to play in the world cup, because the team has already been selected and he isn't in it. He may not even play in the next match. However, I suspect he will be in the touring party to the West Indies, at least in the one day side.
Okay, the blog has been redesigned. Sadly, I think I am going to have to change the picture to one in which I take up a larger portion of the photograph. There may be one or two little further fiddles over the next week or two, but this will do for this evening. I have spent enough time hacking HTML when I would rather be writing this weekend already.
Jay Manifold has a good piece on nanotech. Particularly interesting is his discussion of the potential environmental uses.

With replicating assemblers, we will even be able to remove the billions of tons of carbon dioxide that our fuel-burning civilization has dumped into the atmosphere. Climatologists project that climbing carbon dioxide levels, by trapping solar energy, will partially melt the polar caps, raising sea levels and flooding coasts sometime in the middle of the next century. Replicating assemblers, though, will make solar power cheap enough to eliminate the need for fossil fuels. Like trees, solar-powered nanomachines will be able to extract carbon dioxide from the air and split off the oxygen. Unlike trees, they will be able to grow deep storage roots and place carbon back in the coal seams and oil fields from which it came.

(That's actually Jay quoting Eric Drexler's Engines of Creation rather than Jay directly).

This has obvious relevance to what I was saying about environmental issues and sustainability the other day.
A question to readers

Sometimes, when I load this webpage on my laptop, the top of the page appears (with "Michael Jennings" in large letters) and then there is a delay for twenty or thirty seconds in which nothing appears to be loading, before the rest of the page suddenly appears. Does anybody else notice this, or is it just something specific to do with my laptop and internet connection? The other possibility is that it has something to do with the assortment of features that I have attached to this page that come from outside sources (comments, counter, geographical locator, pictures, search function etc), in which case I may want to figure out which feature is responsible and change it. If this is happening to you, it would be useful if you could let me know the combination of browser and operating system you are using. (I am using Internet Explorer 6 running on Windows ME, for instance).

Update It looks like it may have been that this photo of Miranda Otto was causing the problem. I was linking to the file in the Internet Movie Database, and it is now instead hosted at my ISP, so that may be an improvement. Plus I have shrunk the picture, although it was not a large file to start with. (As to why the globe in the top right hand corner of IE was not spinning when loading was still going on, I don't know. I will blame Microsoft for that one). If anyone is still having problems, please let me know.
I see also that the Mt Stromlo astronomical observatory near Canberra has been destroyed. This belonged to the Australian National University (ANU) and was the most important astronomical observatory in Australia in the first half of the twentieth century, but from an astronomical point of view it has become less useful in recent decades as the city of Canberra, and the resulting light pollution, have grown. The most important optical astronomical site in Australia today is at Siding Springs, near Coonabarabran, which hosts both the ANU's main observatory and the Anglo-Australian Observatory, which belongs jointly to the British and Australian governments. Both observatories at Siding Springs have a number of telescopes. At least one of the ANU telescopes was previously sited at Mt Stromlo but was moved to Siding Springs for its darker skies.

This all meant that Mt Stromlo was in recent years mostly a training facility. It also played a major role in getting young people in Australia enthused about astronomy. As someone who attended a National Science Summer School in Canberra as a high school student quite a few years ago, I remember a night tour of the observatory being one of the highlights.

This is terrible.
As an interesting follow up to my comments on the reliability of SMS messages, some relatives of mine have been affected by the terrible fires in Canberra. In many instances they have not been able to make mobile or fixed line calls, due to networks being overloaded and damaged. However, they have consistently been able to get SMS messages in and out. Because SMS messaging uses only a tiny amount of bandwidth, and because it doesn't have to be delivered in real time, SMS capabilities survive when a network is too overloaded or damaged to handle voice calls. (A workable SMS service requires far fewer base stations than a workable voice service). Using SMS rather than voice calls also preserves battery life in instances where stranded people have no access to power.

This suggests a way for telcos to promote the use of SMS in countries where it isn't yet widespread (i.e. the US): promote it as something that people should learn how to do for the sake of safety. Hopefully, having learned how to do it for safety reasons, people will start using it for other things as well.

Of course, learning how to use it clearly is a good idea from a safety point of view, so in this case everyone wins.
