More on Nutrition

September 30, 2009

In a recent post about Prof. Richard Wrangham’s theory that the discovery or invention of cooking played a key role in human evolution, I mentioned that standard references on the nutritive value of foods — such as the USDA Bulletin HG-72, Nutritive Value of Foods — are compiled on the assumption that the calories in a particular food item, such as a carrot, remain the same whether the carrot is eaten raw or cooked.  As Prof. Wrangham has pointed out, cooking of some foods significantly changes the degree to which calories and other nutrients are absorbed by the body.  In some cases, such as proteins or starches, cooking improves the availability of nutrients; in other cases, heat may have the effect of reducing availability, or actually of altering the nutrient itself, as happens with some vitamins.

This is actually one symptom of a larger problem with nutritional labeling as it is done currently.  In essence, the composition values (e.g., X mg of vitamin Q-17) are arrived at by a static analysis on a lab bench, and do not take into account the availability of the nutrient to the body (bioavailability).  Of course the information provided is a lot better than nothing, but it is potentially misleading in some cases.  For example, dairy products are generally an excellent source of calcium; but if your cheese is being eaten in a sauce on some other foods (spinach, or whole grain products), you will not get nearly the normal benefit from the calcium in the cheese.  This happens because certain organic compounds in the spinach (oxalic acid) or the grains (phytates) bind with calcium ions in a structure that is indigestible.  Tetracycline antibiotics can do the same thing, and in that case the effectiveness of the antibiotic is compromised as well.

There are also cases where the established testing procedures can be fooled, either accidentally or on purpose, by being taken “out of context”.  You may remember the incidents of pet food and infant formula from China that were contaminated with melamine.  Melamine is a plastic that was added deliberately to these items so that, when they were tested for protein content, the readings would be higher (better) than they really were.  This trick works because the standard method of testing foods for protein content is, in effect, a test for nitrogen content.  When we know we are testing foods, this works OK, because of the three macro-nutrient groups (carbohydrates, fats, and protein), only protein contains any significant amount of nitrogen.  When the test is applied to something else, it still detects nitrogen just fine, but  there is no guarantee that the nitrogen is in protein.  (This same test applied to a chemical fertilizer, which is rich in nitrogen, will typically show that it contains an enormous amount of protein.)
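The arithmetic behind that trick is easy to sketch.  The classic nitrogen-based assay (the Kjeldahl method) multiplies measured nitrogen by a conversion factor, commonly 6.25, on the assumption that protein is about 16 % nitrogen by mass.  The short calculation below is my own illustration, not any official testing procedure, but it shows why melamine — which is about two-thirds nitrogen — inflates the result so dramatically:

```python
# Sketch of how a nitrogen-based protein assay (the Kjeldahl method)
# can be fooled: it measures nitrogen, then multiplies by a standard
# conversion factor (commonly 6.25, i.e., protein assumed ~16% nitrogen).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def nitrogen_fraction(formula):
    """Mass fraction of nitrogen for a composition dict like {'C': 3, ...}."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return ATOMIC_MASS["N"] * formula.get("N", 0) / total

def apparent_protein(n_fraction, factor=6.25):
    """'Protein' content the assay would report, as a mass fraction."""
    return n_fraction * factor

melamine = {"C": 3, "H": 6, "N": 6}   # C3H6N6
f = nitrogen_fraction(melamine)
print(f"Melamine is {f:.0%} nitrogen by mass")
print(f"Apparent 'protein' content: {apparent_protein(f):.0%}")  # well over 100%
```

Run against melamine’s formula, the calculation reports an “apparent protein” content of over 400 % — which is exactly why a little melamine makes a diluted product look protein-rich.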

The moral of the story, I suppose, is that nutrition labels are useful, but they have their limitations.  You should probably also keep in mind what your grandmother told you about eating a balanced diet.

Down-Sizing for Real

September 29, 2009

I’ve talked before about how technology, like music players and cell phones, has gotten smaller, lighter, and more capable during the last two or three decades.  But to really appreciate how far we’ve come, it’s sometimes useful to look back further still.

I have recently been re-reading Alan Turing: The Enigma, the wonderful biography of Turing by Andrew Hodges, and was struck by a short passage about some then-state-of-the-art equipment.  During World War II, in early 1943, Turing made a visit to the US, in part to help coordinate use of the signals intelligence gained from the breaking of the German Enigma encryption, and in part to explore and assist with some new communications security projects.  One of these, being carried out at Bell Laboratories, was to produce a secure voice telephony system for communication between the United States and the United Kingdom.  In those days there were no submarine fiber-optic cables, so trans-Atlantic telephone calls had to go via radio, making them vulnerable to interception.

A system was built around a technology called the vocoder, originally developed at Bell Labs in the mid-1930s.  This system used digitized samples of the audio signal, taken at different frequencies with an early form of pulse-code modulation, to produce an intelligible digital voice signal that required only about 300 Hz of bandwidth.  The Bell Labs scientists had developed an encryption system, “System X”, which Turing inspected.  It was very far from being a model of miniaturization:

A terminal occupied over 30 of the standard 7-foot relay rack mounting bays, required about 30 kW of power to operate, and needed complete air conditioning in the large room housing it.

The device wasn’t terribly energy-efficient; all that input power produced about 1 milliwatt of encrypted audio output.  The great news, though, was that the system actually worked, so that FDR and Churchill were able to talk on the phone as the war developed.

Party Time, Microsoft Style

September 29, 2009

As I mentioned in a post a few days ago, sometimes I almost feel sorry for Microsoft.   But fortunately, they usually do something that helps me snap right out of it.

Microsoft is preparing for the launch of the next version of its Windows® operating system, Windows 7, on October 22.  This involves the usual flurry of press releases and so on, but this time there’s a new twist.   Apparently some bright spark in the marketing group decided that encouraging people to hold Windows 7 “Launch Parties” would be the latest, greatest thing in viral marketing.  People who signed up to have a party (yes, the theory was that people would volunteer to do this, and would not have to be taken before a judge and sentenced) could get a Launch Party Kit containing (I am not making this up):

  • One limited Signature Edition Windows® 7 Ultimate
  • One Deck of Playing Cards with Windows® 7 Desktop Design
  • One Puzzle with Windows® 7 Desktop Design
  • One Poster with Windows® 7 Desktop Design
  • Ten Tote Bags with Windows® 7 Desktop Design for hosts and guests
  • One table top centerpiece for decoration
  • One package of Windows® 7 napkins

Apparently, at least some US customers also got streamers and balloons.   (I have been unable to confirm that the centerpiece is a marzipan sculpture of Bill Gates and Steve Ballmer holding a Blue Screen of Death.)

Microsoft has even released a promotional video on YouTube, with helpful hints on staging your very own party.   Featuring four of the most desperately untalented actors ever seen (although in a politically-correct assortment), it is chock-full of really good ideas; for example, you should plan to install Windows 7 a “couple of days” before the party, so you have time to play with it (or, as we say in English, try to get it to work).

Ian Douglas, a blogger for the Daily Telegraph, wrote:

I’m beginning to think that no one involved with Microsoft’s advertising has ever left the house or spoken to a real person.

Rob Pegoraro at the Washington Post also has a post on the video:

By two minutes into the video, I could only hold my head in my hands, cringing and saying, “No, no, no, this can’t possibly be real!” before giggling helplessly at how high these six minutes and 14 seconds of video ranked on the Unintentional Comedy Scale.

And Charlie Brooker of the Guardian wonders whether Microsoft’s robots or the Brotherhood of the Mac is worse.

If you are a real glutton for this sort of thing, there are also several dozen companion videos on YouTube that illustrate “fun activities” you can do at your Launch Party.  On second thought, maybe I’ll just have a “Lose Your Lunch” party instead.

Turn Down the TV!

September 28, 2009

Although reasonable people understand that it is advertising, in the form of commercials, that pays for broadcast television that is free to watch, we still find commercials to be distinctly annoying at times.  I can remember, when I was still a kid, asking why the commercials on TV were louder than the program, and being assured that they weren’t — that FCC rules required them to be no louder than the program.  Although there was some truth in that claim, my ears were telling the truth, too.  Now, according to the “Physics Buzz” blog at the Physics Central Web site, there is a move afoot to make this aspect of commercials less annoying.

As I mentioned, there was some truth in the claim that FCC rules mandated that commercials be no louder than the programs associated with them.  As is often the case with regulations, however, what I call the “Chicago Election Axiom” came into play: it’s not the voting that counts, it’s the counting that counts.  What the rules actually say is that the peak audio amplitude in the commercials must be no higher than the peak amplitude in the program.  (Both must be less than an overall limit.)  The problem is, it is entirely possible to stick to the letter of the rule while making the commercials seem louder (and therefore, presumably, more attention-getting), because the rule does not reflect how human hearing works:

The problem with this approach is that the peak level of the sound does not accurately reflect how loud something sounds to the listener. Our brains judge loudness by averaging all of the waves that roll by — big and small.

Furthermore, our hearing is not uniformly sensitive at all frequencies.  Notionally, the sounds audible to humans range from 20 Hz to 20 kHz.  But very few adults can hear out to the highest frequencies, and everyone’s hearing is most acute in the middle frequencies.

Audio engineers also recognize that human beings have evolved to pay more attention to certain pitches that have been important for our survival.

“We are most sensitive in the mid-range, in the range of babies crying,” said David Weinberg, chair of the Washington D.C. chapter of the Audio Engineering Society.

The producers of commercials can boost these frequencies, without raising the peak audio level, to make the commercial subjectively louder.  They can also compress the dynamic range (between soft and loud sounds), again raising subjective loudness without breaking the rules.  Although audio processed in this way sounds harsh, and is unpleasant to listen to for any length of time, we can tolerate it for the 30 or 60 seconds of the typical commercial.
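A toy numerical example makes the peak-versus-average point concrete.  The sketch below is my own illustration, not any broadcast measurement standard (real loudness meters are considerably more sophisticated): it builds a signal with quiet and loud passages, runs it through a crude dynamic-range compressor, brings the peak back up to its original level, and then compares peak and RMS (average) levels:

```python
# Illustrative sketch: two signals with the same peak amplitude can have
# very different average (RMS) levels -- and RMS tracks perceived
# loudness far better than the peak level the old rules measured.
import math

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def compress(samples, ratio=4.0, threshold=0.25):
    """Crude dynamic-range compressor, then re-normalize the peak to 1.0."""
    def squash(s):
        a = abs(s)
        if a <= threshold:
            return s
        return math.copysign(threshold + (a - threshold) / ratio, s)
    squashed = [squash(s) for s in samples]
    scale = 1.0 / peak(squashed)          # bring the peak back up
    return [s * scale for s in squashed]

# A "program-like" 440 Hz tone at 8 kHz sampling, mostly quiet with
# occasional loud passages; peak amplitude about 1.0.
program = [math.sin(2 * math.pi * 440 * t / 8000)
           * (0.2 if t % 2000 < 1500 else 1.0)
           for t in range(8000)]
commercial = compress(program)

print(f"peaks: {peak(program):.2f} vs {peak(commercial):.2f}")  # same peak
print(f"RMS:   {rms(program):.3f} vs {rms(commercial):.3f}")    # "commercial" is louder
```

The compressed version passes a peak-amplitude test just as well as the original, but its average level — and hence its subjective loudness — is noticeably higher.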

The new rules, if approved, will require digital audio content to be tagged, so that electronics in the receiver can “undo” any unnatural signal processing:

The new audio recommendations, soon to be sent out to broadcasters for approval, provide a way to measure the loudness of television content based on current scientific understandings of how human hearing works. Shows and commercials would be tagged with information about their loudness that TVs and audio receivers could use to counteract the audio tricks that make commercials jump out at us.

If nothing else, adoption of the new rules might produce some slight reduction in fighting over the TV remote.

What’s Cooking?

September 28, 2009

Besides the use of written language, one of the most distinctive characteristics of humans, as a species, is that we cook our food.  Dr. Richard Wrangham, who is a Professor of Biological Anthropology at Harvard University, and is the director of the Kibale Chimpanzee Project in Uganda, has a theory that cooking is not only a distinctly human trait, but something that helped shape human evolution in the first instance.

The NPR Web site has an interview with Dr. Wrangham [transcript and audio] in which he discusses his theory.  Although the origins of cooking are, as he says, lost in pre-history, there is some good evidence that people would have immediately liked cooked food, even if its discovery was accidental:

…  the reason for saying that is we have done tests on the great apes, and the great apes uniformly show a preference for cooked food over raw or sometimes have no preference for cooked over raw in the case of one or two things, but they never prefer raw to cooked.

The evolutionary question Dr. Wrangham is trying to answer is what happened that led to the development of modern humans, first Homo erectus and then Homo sapiens (us), distinct from our common ancestors with the great apes.  The most obvious biological difference is that we have very much larger brains; and this is an issue because, not to put too fine a point on it, the brain is a metabolic pig, using something like 20-25 % of the calories burned by the body.  Yet, despite the increased energy requirements of a big brain, humans have significantly smaller jaws, teeth, and digestive systems than our great ape relatives.

Dr. Wrangham’s theory is that cooking accounts for the difference.  Cooked food is more efficiently digested for several reasons:

  • It makes the food softer, so that fewer calories need to be expended to mechanically break down the food in the gut.  (Interestingly, some chimpanzees will mash some foods before eating them, presumably for the same reason.)
  • Cooking makes protein more available for digestion, by relaxing its molecular structure, making the amino acids more accessible to digestive enzymes.
  • Cooking also makes some carbohydrates (starches) more accessible, by opening up the structure of amylose and amylopectin sugar chains to digestive action.

It’s estimated, for example, that if an egg is eaten raw, about 55-60 % of the protein can be digested; if the egg is cooked before eating, then about 95 % of the protein can be digested.  So, in essence, cooking increases the “rate of return” on eating, allowing us to get enough fuel to run our large, expensive brains with a reasonable expenditure of effort.  Confirming this, it’s observed that people who try to eat a diet composed entirely of raw “natural” foods have difficulty getting enough calories.
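The egg example is worth doing as explicit arithmetic.  Assuming a round figure of about 6 grams of protein per egg (my number, for illustration; the digestibility percentages are the ones quoted above):

```python
# Quick arithmetic on the egg example: digestible protein from a raw
# vs. a cooked egg.  The ~6 g protein per egg is an assumed round
# number; the digestibility figures are those cited in the post.
protein_per_egg_g = 6.0
raw_digestibility = 0.55      # low end of the 55-60% range for a raw egg
cooked_digestibility = 0.95   # figure for a cooked egg

raw_yield = protein_per_egg_g * raw_digestibility
cooked_yield = protein_per_egg_g * cooked_digestibility
print(f"raw: {raw_yield:.1f} g, cooked: {cooked_yield:.1f} g "
      f"({cooked_digestibility / raw_digestibility:.1f}x more)")
```

In other words, cooking the same egg yields roughly 70 % more usable protein — a substantial return on the energy spent cooking it.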

(This also points up a flaw in the way foods are currently labeled for their nutritional value.  The implicit assumption in this labeling is that the number of calories from, say, a carrot is the same whether it is eaten raw or cooked.  Some of this newer research indicates that assumption is false.  Interestingly, this may explain in part why the increased prevalence of highly-processed foods in our diet is correlated with excess weight.)

So the next time you grill a steak, or make some scrambled eggs, be reassured that you’re just doing what comes naturally.  Of course, if you want to be an evolutionary rebel, you could always have some sushi.

Safety Theater

September 27, 2009

Back in June, I posted a note following the crash on the Washington DC Metro, which sadly caused the deaths of nine people, injuries to a number of others, and considerable inconvenience to thousands of commuters and other travelers.  One train ran into another train that was stopped, probably because of a failure in a safety system that is supposed to keep trains apart.  In my original post, I noted that some of the initial reports of the accident suggested a physically improbable chain of events.

Today’s Washington Post has a follow-up article on one of the short-term “fixes” that was put in place by Metro shortly after the accident.  Some of the cars involved in the crash were of an older type, which is known to have structural deficiencies that might prove dangerous in a collision (as indeed they did).  This was originally discovered some time ago; because funds were (and are) tight, the decision was made to replace the old cars with new ones as they became available, rather than to retro-fit safety improvements to the old cars.

Shortly after the accident, Metro announced that it was reconfiguring its trains, so that the older cars would not be used at the ends of the train, but only in the middle, surrounded by newer (and stronger) cars.  The Post story says that this change was made in an attempt to improve public confidence, not as the result of any specific analysis:

One of the first moves Metro officials made after a subway crash killed nine people this summer was to sandwich older rail cars, similar to one crushed in the accident, between newer, sturdier cars. While repeatedly portraying the move as one that might improve safety, interviews and newly obtained documents show Metro conducted no engineering analysis before launching the initiative.

It was an example of what Bruce Schneier  calls “security theater”: something that has little or no effect on actual security, but is designed to make people feel better.  Perhaps we should call this “safety theater”.

I am, in a strange way, somewhat relieved to know that the car “sandwiching” decision was not taken on the basis of any engineering analysis, because, if it had been, I would have grave doubts about that analysis.  The suggestion that was made when the decision was announced was that the stronger, stiffer cars on the ends of the trains would protect the less-robust ones in the middle from damage.

Now admittedly it is an over-simplification of the conditions, but if we assume those stronger cars to be perfectly rigid, interposing them between the colliding object and the weaker cars would make essentially no difference.  The kinetic energy of the colliding object has to go somewhere; a perfectly rigid body would transfer essentially all of that energy to the weaker car(s) in the middle of the train. (If, on the other hand, the newer cars have energy-absorbing “crumple zones”, like modern automobiles, then the change would help.)
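The energy argument can be put in rough numbers.  The figures below are purely hypothetical (they are not Metro specifications), but they show the shape of the reasoning: a perfectly rigid lead car passes essentially all of the collision energy through to the cars behind it, while an energy-absorbing design dissipates some fraction of it first:

```python
# Back-of-the-envelope sketch of the energy argument.  All numbers are
# hypothetical round figures for illustration, not Metro specifications.

def kinetic_energy(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms ** 2

train_mass = 40_000.0      # hypothetical colliding mass, kg
speed = 15.0               # hypothetical impact speed, m/s (~33 mph)

impact_energy = kinetic_energy(train_mass, speed)

# Fraction of energy dissipated before it reaches the weaker middle cars
rigid_car_absorption = 0.0     # idealized perfectly rigid lead car
crumple_zone_absorption = 0.5  # hypothetical energy-absorbing design

for label, absorbed in [("rigid lead car", rigid_car_absorption),
                        ("crumple zones", crumple_zone_absorption)]:
    passed_on = impact_energy * (1 - absorbed)
    print(f"{label}: {passed_on / 1e6:.2f} MJ reaches the middle cars")
```

With the idealized rigid car, every joule of the collision energy still has to be dissipated somewhere behind it — which is precisely the weaker cars the arrangement was supposed to protect.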

Imagine putting an egg on the counter, and hitting it with a hammer; obviously, it will break.  Now suppose that you put the egg on the counter, and suspend over it a 0.25 inch thick steel plate, on springs so it is just touching the egg.  If you then hit the plate with a hammer, do you think the egg will be protected?  Neither do I.

I do understand that the dynamics of an actual collision are much more complicated.  What I find a bit unsettling is that the original “improvement” was accepted pretty much uncritically by the media and the public, except for a few other old curmudgeons I know who apparently remember something of Physics 101.

Correspondence Courses

September 25, 2009

When I moved, not too long ago, one of the things I was sorting through was a bunch of old files containing letters to and from friends and colleagues.  (For younger readers who are unfamiliar with this idea, we used to actually write messages on paper, put them in envelopes, and send them via snail-mail to people we knew.  It was fun to get and send them, providing a break from sharpening our stone axes and hunting mastodons.)  Looking back on that, it is really amazing how much the technology of personal communications has changed; it’s tempting to think that the technological change has produced a corresponding change in our habits.

However, people’s communicating habits have stayed remarkably consistent, according to a Web article reporting a study by researchers at Northwestern University, published today in Science [abstract]:

A new Northwestern University study of human behavior has determined that those who wrote letters using pen and paper — long before electronic mail existed — did so in a pattern similar to the way people use e-mail today.

The study examined the correspondence history of sixteen well-known historical personalities, ranging from Sir Francis Bacon, as far back as 1574, to writer Carl Sandburg, as recently as 1966.  It has been suggested that people’s use of E-mail is driven primarily by the need to respond to others (and that may be the case for a certain amount of business E-mail), but the study found that personal correspondence by E-mail followed the same patterns as pen-and-ink mail.

No matter what their profession, all the letter writers behaved the same way. They adhered to a circadian cycle; they tended to write a number of letters at one sitting, which is more efficient; and when they wrote had more to do with chance and circumstances than a rational approach of writing the most important letter first.

The researchers found that, with some adjustments to time scales, the same behavior models could describe both the historical correspondence and contemporary E-mail.

(As an aside, the time scale adjustment may in some cases be less than you might think.  In the late 19th and early 20th centuries, it was possible, even commonplace, for someone in London to send a letter to a friend in Oxford, inviting him to dinner that evening — and to receive a reply by a later post that day.)

People in some ways are amazingly adaptable when it comes to using technology; but there are some parts of our psychological make-up that tend to be pretty stable.
