20 December 2012
For a little while there, ethanol was being pushed as the fuel of the future. The corn lobby was its biggest advocate, but other groups with environmental-sounding credentials jumped in. The future of the ethanol-powered automobile seemed bright, and even after corn was recognized as a poor feedstock, switchgrass and other approaches remained popular for a while. The hype now seems to have died off, which is a good thing: much of the support was coming not from people concerned with improving the environment, but from front groups sponsored by fossil fuel companies.
Professor Patzek puts the energy inputs needed to produce corn ethanol at about seven times the energy the fuel delivers. Today, nearly all of that input energy comes from fossil fuels, mostly petroleum and natural gas. The inputs include agricultural chemicals--fertilizer and pesticides, which are substantially petroleum products--fuel to operate machinery, harvesting, processing (distillation being the big one), and transport. Ethanol absorbs water and is corrosive, so it can't be shipped in pipelines; it has to travel in tanks, on trucks or trains. (The water absorption is also why gasoline blended with ethanol--typically E10--goes bad after a few months.) All of these inputs are ignored, and ethanol is treated as a zero-emissions biofuel for computing subsidies and tax breaks, over and above the crop subsidies for growing corn. This is clearly not true, and it is bad policy at many levels. The process could be improved: no-till agriculture eliminates much of the fertilizer and pesticide use and some of the fuel consumption, agricultural waste could be burned to power some of the distillation (for corn, there's not enough of it to do it all), and converting to switchgrass would improve much of this. But even under the most optimistic assumptions, the energy required to produce ethanol is about four times the energy produced.
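To make the bookkeeping concrete, here's a toy sketch of the energy balance. Only the roughly 7:1 total comes from the text; the split among input categories is my own illustrative assumption, not Patzek's data.

```python
# Toy energy-balance sketch for corn ethanol. The category breakdown below is
# an assumption for illustration; only the ~7x total input figure is from the
# essay (Patzek's pessimistic estimate).

ethanol_energy = 1.0  # energy delivered by the fuel (normalized)

fossil_inputs = {
    "fertilizer_and_pesticides": 1.5,
    "machinery_and_harvesting": 1.0,
    "distillation": 3.0,
    "transport_by_truck_or_rail": 1.5,
}

total_input = sum(fossil_inputs.values())  # 7.0
eroei = ethanol_energy / total_input       # energy returned on energy invested
print(f"EROEI: {eroei:.2f}")               # 0.14 -- below 1.0 means a net energy loss
```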
The bottom line is that ethanol, especially corn ethanol, is a boon to the fossil fuel and corn companies. The subsidies have distorted the market, and the corn and fossil companies have put up a big environmental-appearing front to try to improve their own profits. But it's a fraud.
11 December 2012
The Failure of Banking
In the past few decades, banks and similar institutions have been allowed to consolidate, to get involved in businesses that had previously been out of bounds for them, and to engage in a wide variety of scams, cons and Ponzi schemes, all with impunity. Anyone familiar with the work of the Pecora Commission, John Maynard Keynes, and a host of others could have predicted the consequences, and many did, from Byron Dorgan to Paul Krugman to Brooksley Born to Nouriel Roubini. But they proved to be Cassandras. Five years after the bubble collapsed, the big banks are even bigger, yet instead of providing better services at lower cost, they're engaging in an ever-growing list of petty scams--instead of offering interest and free checking, banks are charging for everything. A series of probably criminal and certainly incompetent screwups at Chase, Citi, HSBC and others indicates that this is not a successful business model. This is a race to the bottom. An astonishing series of unchallenging interviews by supposedly real reporters reveals that the men in charge do not understand and do not care how the economy works, nor do they care what the consequences of their behavior are for 99.999% of the population. They're only interested in feathering their own nests.
The right thing at this point would be to do what Simon Johnson recommended four years ago and nationalize the banks. It might not be necessary to take them all, but certainly anybody with more than a handful of branches--assets over around $500M or so. The local banks and credit unions are often providing reasonably good service but are being swept up in the downdraft of the giants. All senior management of the giants should be banned for life from any business related to banking. (I'm thinking burning at the stake would be about right, but sadly, the constitution bans that.) The "national" bank would renegotiate mortgages and other loans and begin lending at traditional market rates. Like any banking, it could be a profitable business, but as a government bank it would be forbidden from making a profit, which would eliminate most of the scams that the big banks now employ. As the economy improves, it could begin spinning off more and more small banks, which would engage strictly in commercial banking under rules analogous to the FDIC rules about capitalization, leverage and types of investment. But whenever one of them exceeds the size threshold, back into the national bank it goes. If the banks are smart, they'll split themselves up before this happens.
It's probably not necessary to put such an onerous size limit on investment banks. They deal with larger amounts of money--frequently a single investment is larger than the cap on bank size would be. They're also not as subject to capitalization requirements. But they should still be small enough that if they fail, they fail, period, and the government can never feel tempted to step in. I think this puts an absolute limit of about $50B on them--about 1/3% of GDP.
In 1911, Standard Oil, which until then had about 70% of the US petroleum market, was split into 33 smaller companies. Within 5 years, 9 of them were larger than Standard had ever been. Petroleum was at the time a growing business, while banks are necessarily a finite share of the economy, but this is suggestive of what could happen if we broke up the banks. Imagine banks gaining customers because they're providing better service! I remember that happening in the 1970s and even into the 1980s a little. I know it could happen again. It doesn't need to be a race to the bottom, and we must not allow it to happen again.
Unfortunately, we probably missed our moment--the year or so after the Sept 2008 collapse. During the '30s, President Hoover and his allies did almost exactly the wrong thing, so Roosevelt had a mandate to really fix the problem. Bush, Paulson, Bernanke and Obama didn't actually compound the problem the way Hoover had, and although they didn't actually fix it, the TARP, the ARRA, Quantitative Easing and the symbolic but feckless "stress tests" put enough of a band-aid over it to start a slow recovery. We may need another bank-driven recession to give us the political will. The consequences of this will be terrible, but it may make us stronger in the end.
06 December 2012
Bus Fares and Extrinsic Costs
In a busy society there are a lot of things which have extrinsic benefits and costs. The most popular example is pollution: in a laissez-faire world, a company can dump whatever it wants into the environment. If the company were living in a closed environment (think of a space station), it would have to deal with this pollution itself, but for centuries, they were allowed to pollute the (seemingly infinite) atmosphere, wetlands and waterways. During the 1970s it started to become clear that these were far from infinite and governments instituted various measures to force the companies to clean up. The companies resisted, of course. Now that it's becoming clear that CO2 is such a problem with particularly dire consequences, the polluters are resisting again.
Not all extrinsic effects are negative. Public transit is a benefit to everybody, but especially to companies whose employees use it to commute to and from work. Many communities installed transit in the late 19th and early 20th centuries, generally subsidized by real estate developers. For a few decades, the fares collected from riders were sufficient to pay for operating costs, but as demands for expansion and maintenance grew, fares became insufficient. Generally, city fathers have been farsighted enough to subsidize transit from the broader tax base, but in the cases where they have cut back, the loudest complaints have come from corporate executives: employees can't get to work! They're getting the benefit but not paying for it--very much like what was happening with their ability to pollute. Ultimately, the broader society has to pay, through taxes.
As a society, we need to face these extrinsic costs and benefits and be willing to pay to make them work as well as practical. There are lots of them--road construction and maintenance, public safety such as police and fire, building inspection, bank regulation, the postal service, lots more. These things are necessary. They don't always have to be government run: for example, most road construction and maintenance is done under government contract by private businesses, although a lot of emergency services--snowplows, sanding, pothole repair--are government workers. When it's practical, we can charge user fees to put as much of the costs as we can onto the people using the facility the most. Gas taxes and bus fares are good examples of this.
But we must not let the fact that some of the costs can be paid this way confuse us into thinking that these are like private businesses. They are not; they are public utilities. User fees should be the highest they can be without discouraging use, but no higher. Bus fares for poor people getting to work should not eat significantly into their take-home pay. On the contrary, we want to encourage people to use transit by making it cheap and desirable.
28 November 2012
Going Halfway
Someone1 once said, "There is nothing so useless as half of a bridge. You've wasted resources that could have been used on something useful, and you still can't get across the river." A lot of things are like bridges, and are worse than useless if you don't finish them. But a lot of other things are worth starting even if you can't finish. If you're starving, it's worth eating a little, even if you can't afford a full meal. Likewise, if you've got a federal deficit, raising taxes on rich people will shrink it and relieve some of the political pressure, without hurting the economy in any way that is borne out by economic history, even though it won't completely solve the problem.
There are lots of cases where going halfway is a lot worse than finishing the job. The space shuttle was conceived as a much bigger project, with more, and larger, vehicles. Cost cutting reduced the efficiency of the ultimate design to the point that the total cost of the program was 450% (correcting for inflation) of the predicted cost of the more ambitious program, while delivering less than 10% of the projected payload to space. Most experienced engineers have been involved in projects where cutbacks intended to reduce development time or costs had the effect of increasing them--while damaging the product more than could possibly be made up for by the reduced costs2.
When deciding to cut back a project, be it a small engineering project or a national health care system, it's important to figure out whether you're saving money or producing half of a bridge. The evidence is pretty overwhelming: single-payer healthcare, such as US Medicare or the British National Health Service, is far cheaper and produces better outcomes than piecemeal "market-based" systems. People must have adequate health care. Cutbacks to Medicare and Medicaid will either make healthcare more expensive, or kill thousands.
1 I think this may have been the military strategist Carl von Clausewitz, but I haven't been able to find the reference.
2 My own experience with this was Microsoft C6. We'd planned an 18 month development cycle, but about 4 months in, this was shortened to 6 months. Three years later, we finally shipped a vastly inferior product to the one we'd have built on the original schedule, at perhaps triple the cost, to great loss of market share and prestige, and provoking most of the technical talent to leave the team.
12 October 2012
Social Security Viability
It's a common theme of the right that Social Security will collapse if we don't do something to cut back on the terrible problem of giving money to people who earned it. The right has been fighting Social Security since it was first proposed (although a few Republicans did vote for it in 1935), and many have been trying to privatize it ever since. Just last night, Paul Ryan claimed that "Social Security [is] going bankrupt. These are indisputable facts". Well, no. Social Security is fully funded for at least another 25 years if we do nothing, and there are some trivial things we can do to make it viable for much longer. For example, raising the contribution cap can bring in a lot of money: if the cap were eliminated entirely, it'd bring in about $130B a year--enough to pay $6K a year to every beneficiary, or to add about 50 years to the viability of the trust fund. (Current revenues are about $500B.) The people who would be paying mostly don't really need Social Security, nor is the cost a big part of their income. That they don't need it is often used as an excuse for them not to pay, and as a rationale for "means testing", meaning they wouldn't be eligible. This is wrongheaded. There's a big group who need Social Security a little bit: although they have other income, it helps them, and this group is the largest share of the contributors.
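Here's the cap arithmetic as a quick sketch, using the post's round numbers; these are the post's 2012-era estimates, not official actuarial data.

```python
# Back-of-envelope Social Security cap arithmetic, using the post's figures.

current_revenue = 500e9       # annual payroll-tax revenue (post's estimate)
cap_elimination_gain = 130e9  # extra annual revenue with no contribution cap

boost = cap_elimination_gain / current_revenue
print(f"removing the cap boosts revenue by about {boost:.0%}")  # ~26%
```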
So how is it that the supposed actuarial problems of dramatically increasing life expectancy and the baby boom are not eating up the entire trust fund? Well, they are, a little. But only a little. It turns out that life expectancy increases have come in two broad areas. First, life expectancy for children is dramatically higher than it was in 1935. This actually adds to the labor force: all those kids who were dying of whooping cough, smallpox, polio and the rest aren't anymore, and are now helping to pay for it all. Life expectancy for oldsters has gone up too--but not by much. The figure that matters, the average remaining life expectancy of a 65-year-old, has increased from about 14 years to about 18. That's a lot of new seniors, for sure: in 1935 there were about 8 million people over 65, and today there are about 40 million--a factor of 5. But wait: there were fewer than 30M people paying payroll tax in 1935, and today there are almost 150M--coincidentally, also a factor of 5. So it all balances.
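The "factor of five" balance is easy to check with the post's round numbers (a sketch, not census data):

```python
# The post's demographic "factor of five", as arithmetic (round numbers).

seniors_1935, seniors_today = 8e6, 40e6     # Americans over 65
workers_1935, workers_today = 30e6, 150e6   # people paying payroll tax

print(seniors_today / seniors_1935)   # 5.0: five times as many beneficiaries...
print(workers_today / workers_1935)   # 5.0: ...but five times as many contributors

# The worker-to-beneficiary ratio is the same in both years, which is why the
# growth in retirees alone doesn't sink the system:
print(workers_1935 / seniors_1935)    # 3.75 workers per senior in 1935
print(workers_today / seniors_today)  # 3.75 workers per senior today
```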
04 October 2012
There You Go Again
I think the largest reason Obama seemed overwhelmed by his opponent's aggressive, dishonest attacks in last night's debate was that he was afraid of one line: Ronald Reagan's quip "There you go again". It was dishonest in 1980 (Carter argued that Reagan had fought Medicare and was fighting Carter's plan, which was true, but Reagan claimed he had an alternative, which he didn't), but that didn't seem to matter. Over and over, Romney lied about Obama's policies and his own proposals, or blamed him for things which were clearly not his fault. For example, a few days after he was inaugurated, Obama promised to halve the deficit. This was before the magnitude of the collapse was understood, and when TARP was still expected to be used to buy toxic assets and minimize foreclosures rather than to subsidize banks. Nevertheless, he has brought the deficit from $1.5T to $1.2T and is on course for $900B in 2014 (FY 2013 has already started). Why hasn't it fallen further? Almost entirely because of Republican obstructionism.
Ending the Bush tax cuts on schedule after 2010 would have subtracted about $400B from the deficit, but it's an article of faith for Republicans and some Democrats that this would have sent the fragile economy into a dive. Many economists disagree, but let's ignore that. Obama wanted to end the cuts only for incomes over $250K--about 2% of tax filers, but about 30% of all income. Instead of ending the Bush tax cuts on schedule, the 2010 "deal" added about $80B a year to the deficit--where doing nothing would have subtracted almost $500B, and Obama's plan about $230B. Had the Bush cuts expired and nothing else been done, the deficit would be half what the president inherited, just as promised; had Obama's plan been used, it'd be about two-thirds. Instead, it's about five-sixths.
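A quick sketch of the three scenarios, using the post's per-year figures (the post's estimates, not CBO scoring):

```python
# Deficit scenarios from the post, per year, in 2012 dollars (post's figures).

inherited = 1.5e12        # deficit Obama inherited
actual = 1.2e12           # deficit at the time of writing

full_expiration = 400e9   # savings from ending all Bush tax cuts on schedule
obama_plan = 230e9        # savings from ending them only above $250K

print(f"actual: {actual / inherited:.0%} of inherited")                  # 80%, the post's 'about five-sixths'
print(f"full expiration: {(actual - full_expiration) / inherited:.0%}")  # 53%, about half
print(f"Obama's plan: {(actual - obama_plan) / inherited:.0%}")          # 65%, about two-thirds
```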
What's going to happen after the election? Panic about the "fiscal cliff", which was caused by this foolishness and the even more foolish debt-limit fight last year, will probably push all of this off another few years. What should happen? Abandon the "sequester". Rescind the Bush cuts for high incomes and keep them for low incomes. Eliminate the payroll tax cap completely, and restore rates to their pre-2010 level (6.2%; the temporary holiday cut it to 4.2%) for incomes over $100K.
Had the original House health care plan (the one with the public option, which, oddly, Romney endorsed last night) been passed, it would already be in effect, saving $200B or more, while the most important parts of the ACA for the budget, the mandate and the exchanges, don't go into effect until 2014 and won't have much impact for some years to come. Add this to Obama's tax hike for millionaires, and we'd have been almost to the half he promised. But the Republicans filibustered it, with help from a couple of right-wing Democrats. (The Democrats only sort of had 60 votes, for about three months: Franken wasn't seated until Kennedy was out of the picture, and the window between the seating of Kennedy's interim replacement (Kirk) and Brown's election was the only period when the Democrats might have had the votes to break a filibuster. Even then, Lieberman and Ben Nelson sided with the Republicans on health care.)
Romney had the gall to blame all of this on Democrats refusing to work with Republicans. All night he was baiting Obama to blame the Republicans for their extreme obstructionism. Obama never fell for it, but he seemed intimidated out of fighting back with the facts, which are completely in his favor. Romney even had the gall to accuse Obama of lying, of "using his own facts"--which was exactly what he himself was doing.
Romney is right that congressional Democrats work with Republican presidents much better than Republicans work with Democratic presidents. But this is not the fault of the Democrats. The fact is, the deficit is almost entirely the fault of irresponsible Republican policies, and it is maintained by irresponsible Republican obstructionism.
13 September 2012
Cell Phones on Airplanes
It's against FCC regulations to use a cell phone on an airplane. Most people seem to think the reason is that they're afraid it'll interfere with the navigation of the airplane. Here's the FCC's policy statement on the subject. Note in particular the second statement: they're worried about interference with the ground.
To understand this, it's important to understand how a cell phone works. Each phone has a radio transmitter/receiver, aka transceiver. It's capable of being tuned to any one of (depending on protocol) dozens or hundreds of channels. Whether it's digital or analog, 3G, 4G, TDM, packet, etc., what exactly constitutes a channel is immaterial to the issue. What's important is that there are a limited number of them.
The ground part of the system is a network of transceivers mounted on poles or other high places--cell towers--that tune to the same channels. The towers divide the ground into regions called cells and connect to each other and to the rest of the phone system by wire or microwave link. Each cell can use, at most, the number of channels in the protocol. Phones are low-power radios, and they actively reduce power so that they can't be heard by more than one or two cell towers at once, because any bandwidth a phone uses must be reserved by every tower that can receive its signal. (This is not strictly true--there's some collision recovery in several of the digital protocols--but collisions reduce bandwidth, so the essential problem remains. For simplicity, let's pretend it's all channels and ignore packets and collisions and such.)
Most of the time, most phones are not in use--each keeps a handshake with its local tower so the system knows which phone to ring, but this doesn't take much bandwidth. Phones that are in use take a lot of bandwidth. If an area has a lot of phone traffic--a big building, for example--extra towers are installed to handle it.
But it's important to recognize the two-dimensional design of the network. Most of the time you're fairly close to the center of a cell; you use two cells only when you're near the border of both. But if you're up in the sky, you're a lot farther from the closest cell, and the second closest might be only a little farther yet. Worse, there might be several others still close enough that any bandwidth your phone uses must be reserved in all of them. Over a dense city, for a plane at 10,000 feet, this could be a hundred cells or more.
Now imagine that a plane is flying over our group of cells, carrying 100 passengers who simultaneously want to call their spouse or client or whoever, saying the plane is about to land, time to come pick me up at the airport. They've now used 100 channels in every cell within range--a band 10 or 15 miles wide and 50 miles long under the landing pattern, usually over a big city. Not only that, there's another plane coming along a minute or two later doing the same thing, with a footprint that overlaps the first plane's. Two or three such planes could exceed the entire capacity of the entire group of cells.
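The multiplication is the whole story, so here's a toy model of it. The per-cell channel budget is an assumed number; real budgets vary by protocol.

```python
# Toy model of airborne phone load: every active call reserves a channel in
# every cell that can hear it. CHANNELS_PER_CELL is an assumption; real
# channel budgets vary by protocol.

CHANNELS_PER_CELL = 800

def reservations(callers: int, cells_in_range: int) -> int:
    """Total channel reservations consumed across all cells in radio range."""
    return callers * cells_in_range

# A caller on the ground is heard by only one or two towers:
print(reservations(callers=1, cells_in_range=2))      # 2 -- negligible

# A plane full of callers at altitude is heard by ~100 cells:
print(reservations(callers=100, cells_in_range=100))  # 10,000 -- 100 channels
                                                      # gone in each of 100 cells

# Three overlapping planes would claim 300 of the 800 channels in every cell
# under the flight path, on top of all the ordinary ground traffic.
```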
So they ban it. If one or two people call, it's no big deal; that steals a couple of channels from all those cells, but there's enough capacity to handle it. If half of them leave their phones in ground mode by mistake, so little bandwidth is used that it hardly matters. It's only when a lot of them use a lot of bandwidth simultaneously that there's a problem. Adding more cell towers doesn't solve it: they'd be in range too. What does solve it is a cell repeater on the airplane, which uses a different protocol to talk to the ground. The FCC is happy to have people think a phone will interfere with the airplane they're on, because that keeps them from cheating.
Similarly, the flight attendants don't have time to figure out which electronic devices might cause ground interference and which are harmless. The fact is that most are harmless, but there's a small chance that one of them could produce a lot of RFI, and a real problem, either because it's not FCC-compliant or because it's malfunctioning. The one they don't recognize might be the one that causes the problem. It's simpler and safer to have a single rule and ban them all.
06 September 2012
Rainbows and Green Flashes
I've seen the "green flash" three times, all at sunset from west coast beaches: once from Cannon Beach, in Oregon, and twice from Pajaro Dunes, in California--all in the winter. There's a reason for this, as I'll get into later. Many people are skeptical that it's a real thing. It is, but things have to line up just exactly right for you to see it:
You need clear viewing all the way to the horizon and past it--no clouds, no haze, and especially no mountains. Typically these conditions happen only when it's been raining and it's cleared up an hour or two before sunset. It helps if there's a little bit of haze right at the horizon, but not much.
The water needs to be warmer than the air, which is why it tends to be a winter phenomenon. The green flash is a member of the broad class of optical phenomena called "mirage", which results from layers of air at different temperatures refracting different colors differently. The bottom layer of air needs to be quite a bit warmer than the air at eye level, which really only happens when it's being heated by the water. Because blue and green light are scattered more than red, sunsets tend to look red. But at the last moment, the red image of the sun has dipped below the horizon while the green image, refracted a little more strongly, is still reaching your eye.
The reason I think rainbows are similar is that they too take a combination of several things to happen: it needs to be clear enough that the falling raindrops are well lit. But rainbows aren't very bright, and it's hard to see them against a clear blue sky: there needs to be something behind them, usually clouds or something else darker than the sky. Rainbows are the result of different colors of light being refracted differently by water but all being reflected together inside drops of water. The primary rainbow is about 2 degrees thick and about 80 to 84 degrees wide. (To see all of it with a 35mm camera, you need a 19mm lens, and usually you can't see all of it. To see all of a double rainbow, you need a 13mm lens.)
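Those lens numbers check out against the standard field-of-view formula for a rectilinear lens on a 36mm-wide full frame; here's the arithmetic as a sketch (the ~102-degree double-bow width is my assumption for the outer bow's diameter):

```python
import math

# Horizontal field of view of a rectilinear lens on 35mm film:
#   fov = 2 * atan(frame_width / (2 * focal_length)), with a 36mm frame width.

def fov_deg(focal_mm: float, frame_mm: float = 36.0) -> float:
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))

print(f"19mm lens: {fov_deg(19):.0f} degrees")  # ~87: just spans an ~84-degree primary bow
print(f"13mm lens: {fov_deg(13):.0f} degrees")  # ~108: spans a double bow (~102 degrees across)
```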
My favorite rainbow appeared when I was driving almost due north late on a winter afternoon. I was at the southern edge of a storm--behind me was well lit, but the hood of my car was in light rain. The rainbow appeared to wrap around the hood of my car. Todd Newman was with me and we both saw it, but we didn't have a camera handy and it didn't last long.
Bay States
When I moved to Massachusetts from the San Francisco Bay area, I thought it was funny that Massachusetts was the Bay State, while California, which has many more bays, is not.
A bay is a broad inlet of the sea where the land curves inward. It can be quite a shallow inlet, or a deep one. Bays are often called harbors when they're confined enough to provide some protection. A sound lies between two bodies of land and is open at both ends.
Here's a list of the Bay States:
Maine: The biggest is Penobscot Bay, but there are zillions of others.
New Hampshire, with a very short coastline, nevertheless has a number of bays.
Massachusetts has three main bays: Massachusetts Bay, Cape Cod Bay, and Buzzards Bay. There are also a number of smaller bays and sounds.
Rhode Island is basically one big bay, Narragansett Bay, with a bunch of peninsulas and islands in it.
Connecticut has many things that would be called bays, but most of them seem to be unnamed, or are called harbors.
New York has many bays: New York Bay, Jamaica Bay, several more, even not counting most of NY's shoreline on Long Island.
New Jersey: Sandy Hook Bay, Newark Bay, Delaware Bay, many others.
Delaware is largely defined by its giant bay, which forms the boundary between it and New Jersey.
Maryland is similarly defined by its giant bay, the Chesapeake, which continues on to Virginia.
Virginia's Atlantic seaboard is largely Chesapeake Bay, although there's a short section with a few bays of its own, and an isolated part of Virginia sits on the Delmarva Peninsula.
North and South Carolina's coastlines consist of a series of large, wide bays.
The coastal area of Georgia is made up of a bunch of low islands, so most of its inlets are sounds, not bays.
Florida has many famous bays: Tampa Bay, Biscayne Bay, Pensacola Bay, many others.
Alabama has a very short coastline, but it's basically all bays. Mobile Bay is the biggest.
Mississippi also has a short coastline: Pascagoula Bay, Biloxi Bay, St Louis Bay are all on it.
The Mississippi Delta area of Louisiana has so many bays it's silly to try to count. I gave it a start and realized I was making mistakes around the time I got to 100.
Texas has quite a few too: Galveston Bay, Corpus Christi Bay, etc.
California starts at the south with San Diego Bay, then Mission Bay, Alamitos Bay, Morro Bay, Monterey Bay, San Francisco Bay, Drakes Bay, Tomales Bay, and Humboldt Bay. Lots more.
Oregon: Coos Bay, Alsea Bay, Tillamook Bay, Youngs Bay--all small bays.
Washington: Willapa Bay, Grays Harbor, Discovery Bay, Puget Sound (which is really a bay with some big islands in it)
I won't try to do Alaska and Hawaii.
My ranking is based on how big a part of the state the bay or bays are:
#1: Rhode Island
#2: Delaware
#3: Maryland
#4: Massachusetts
#5: California
#6: Florida
If California didn't also have a bunch of big mountains and valleys, it'd be #1.
30 August 2012
Steam Engines in Space
Much current power generation is done with steam. Water is confined in a "boiler" and exposed to some heat source (burning coal, nuclear fission, concentrated solar energy, etc.) and boiled. This produces steam, which is passed through a pipe to a turbine or reciprocating piston(s) to produce rotary motion. Usually this is used to power a generator, but sometimes the rotary motion is used directly--a steam locomotive for example.
An important but subtle component of a steam locomotive is what's called the "steam dome". This is a dome on top of the boiler, where the steam collects. It relies on gravity--water stays in the boiler, while the "dry" steam floats on top and in the dome. It's important to keep water out of the steam tubes and pistons: it cools the steam, reducing pressure, and if there's enough of it, can prevent the piston from moving and break something.
As far as I can tell, conventional steam locomotives need gravity or something like it (e.g. centrifugal force) to operate. There are heat engines which do not require this separation of dry steam, such as the Stirling Cycle, so it's still possible to produce rotary power in space. It's also possible to make a direct ejection steam rocket which could operate in zero-g.
21 August 2012
Civics Teaching and Testing
It's very clear that a large majority of Americans don't understand enough about civics, history, public policy, or the constitution to make informed political decisions. For example, when voters, including tea party folks, are asked about Obamacare, they are somewhat opposed, and the opposition has been growing. But when they are polled about the specific policy changes imposed by the ACA, without any mention that they are the ACA, they are overwhelmingly supportive of nearly all of them, and even substantially supportive of the mandate. There's clearly some distortion and probably propaganda involved. Then you get tea partiers carrying signs saying "Keep Your Government Hands Off My Medicare"--not an isolated incident, and it can only be a symptom of deep misunderstanding. Then you get Romney and Ryan, both of whom say they want to do things that could only cut Medicare deeply, screaming about Obamacare cutting Medicare by $716B--no, the ACA moves that $716B toward the parts of Medicare that work, and away from the Medicare Advantage overpayments that didn't. Nearly all tea party members can't find "separation of church and state" in the constitution, so they believe the constitution doesn't require it. They ignore, or are unaware of, the fact that the people who wrote the first amendment said that separation is how we achieve the religious freedom it requires.
Former Supreme Court Justice Sandra Day O'Connor points out that many states have completely dropped all civics teaching requirements and worries that this results in an electorate that can't make informed decisions about important things...such as the above.
I think you should be required to pass a civics competency test before you can vote, and it should be constructed to eliminate "low information voters". Here's a (possibly extreme) strawman for comment: "How many Supreme Court justices are there? Name five of them and say a little about their philosophies. What is the scientific method, and what about it gives scientists high confidence in their results? Give an economic, political, and sociological description of each of the following societal models: communism, European socialism, capitalism, oligarchy, monarchy."
I think a class that teaches something like this should be part of the standard high school curriculum, and you should have to pass the test every few years to keep voting (with changes reflecting public events). I took a class in the 8th grade that would have gotten me to about 50% on this test, and I think a high school graduate should score over 70%. Right now, though, only a tiny percentage would pass my test, and that's a problem.
18 August 2012
The Scientific Method
People often make bizarre claims about science. For example, last week Ben Waide, a Kentucky state legislator, said "The theory of evolution is a theory, and essentially the theory of evolution is not science — Darwin made it up. My objection is they should ensure whatever scientific material is being put forth as a standard should at least stand up to scientific method. Under the most rudimentary, basic scientific examination, the theory of evolution has never stood up to scientific scrutiny."
It's amazingly clear that Mr. Waide doesn't have the foggiest clue what science is, and wouldn't pass a fourth grade test on the subject.
The scientific method, as it has been practiced since Newton's day, consists of five phases:
Question: Why is some particular thing about the universe the way that it is?
Hypothesis: you come up with a model for how that thing works.
Prediction: based on that model, you predict something the model suggests would be true, but that is not already obvious.
Experiment: you see whether the prediction is true by trying it.
Analysis: you figure out whether you've really proven your theory, and publish.
The most crucial parts of this process are falsifiability and repeatability. Does your experiment really prove what you set out to prove, or did you crock it (possibly unwittingly) in a way that would succeed no matter what? The point is that the prediction should be a little surprising--Maxwell's equations predicting radio, General Relativity predicting the curvature of light in strong gravity, and so on. And can other people, in completely different situations, duplicate your work? Very often, failed experiments lead to new, better questions. Most scientists spend most of their time in the experiment and analysis phases.
Evolution has stood up to some of the most intense scrutiny of any theory in history and has come through with flying colors. For example, countless experiments have shown that you can force a species to change by altering its environment to benefit some trait or other. Most of these are done in the lab, but we've also created numerous strains of antibiotic-resistant bacteria by overusing antibiotics. As another example, evolution predicts that "link" species will appear in the fossil record. Most transitional forms don't last long enough to leave a trace, but plenty do, including in the fossil record of our own species: Homo habilis, Homo erectus, Zinjanthropus, and many others.
Read more here: http://www.kentucky.com/2012/08/14/2298914/gop-lawmakers-question-standards.html#storylink=cpy
13 August 2012
What isn't a Free Market
The right is constantly harping on how it's important that we let the free market work, that government not pick winners and losers, that the free market is always the best possible system and we must never try to replace free market systems with national ones.
All of this is wrong. Or rather: most of it is correct in principle, but the right misapplies it in almost all cases.
The theory is that the market correctly evaluates all ideas and the best ones are the ones that succeed. There's a lot of truth in this, but there are some big caveats. The most important is that there are a lot of cases where the market fails to be free for some reason. More than a century ago, it was recognized that monopolies have the power to manipulate the market for their own purposes, and with very few exceptions, when they did this it was to the detriment of everybody. Railroads, oil companies, banks, the phone company and more were all either broken up or strictly regulated to assure that the good of the public remained the first priority.
About 30 years ago we forgot what the problems had been, and set about reversing this: deregulating all of these industries and more. That mistake has led to the current high unemployment, congressional gridlock and corruption.
There's actually a fairly good metric for whether a market is free. A market is only free if it can be seen embracing new ideas. If there are better ways that are not succeeding, there has to be some reason, and usually it's that there's something preventing the free market from working.
For example, US health care is capable of providing the best service anywhere, but at a sufficiently high cost that most people can't afford that excellent standard of care, and many people can't afford any health care at all. Our rank by "outcomes" is among the lowest of any advanced country, and we're the most expensive per capita by far--about double all those countries with better outcomes. We have a collection of tacit collusion, perverse incentives, corruption, and several other things that allow insurers and providers to maximize profits without improving service. Most regions have only a handful of providers, and many have only one. Until recently, insurers were allowed to reject customers for pre-existing conditions, which prevented customers from changing insurers when they found out their coverage was bad. There are obviously several better ways out there: Japan, Switzerland, Great Britain, and many others have a variety of different approaches. All have found that eliminating or strongly regulating the market is what works. Nationalizing seems to work a little better than strongly regulating, but I think it's significant that it's close, and that costs for these others are broadly similar, with the difference mainly being in the program's generosity.
Another example: During the first years of the 20th century, Standard Oil controlled about 70% of the US market for petroleum, and almost 90% of the refining. In 1911, the Supreme Court ruled that Standard was a monopoly and required it to be broken up into 34 "baby Standards". Within a few years, 9 of them were each bigger than Standard had ever been:
Standard of New Jersey (later Exxon)
Standard of New York (later Mobil)
Standard of California (later Chevron)
Standard of Indiana (later Amoco)
Atlantic (later part of ARCO)
Continental (later Conoco)
Standard of Kentucky (later Kyso)
Standard of Ohio (later Sohio)
Ohio Oil (later Marathon)
Part of this growth was the rise of the private automobile, but the success of creating a free market is undeniable.
04 August 2012
Romney Tax Returns Case Analysis
Case 1: there's nothing there, apart from opportunistic exploitation of the tax law. Upside of releasing returns: disarms the attacks. Downside: it's a weapon that can only be used once. No matter what, Romney makes a lot of money and pays an embarrassingly small amount of taxes compared to what average Americans pay. Upside of holding on: he looks determined/stubborn in the face of opposition, slightly countering the extensive evidence of flip-flopping/etch-a-sketch. Downside: his enemies have a mystery weapon they can hold against him.
Case 2: he paid 0% taxes in one or more years because of legal but unfair exploitation of the tax law. Upside of releasing: it disarms the attacks. Downside: the unfairness of tax law becomes a top issue in the campaign, with Romney a flag bearer for the bad guys. Upside of holding: this doesn't happen. Downside: the attacks continue.
Case 3: he hasn't tithed appropriately to his church. He's supposed to give 10%. If he hasn't done it, there may be some quid pro quo with the church or another Mormon politician, but for this case I'll pretend there wasn't. Upside of releasing: most people will see that there's nothing really important there, although it's very embarrassing for him. The attacks are disarmed. Downside: a lot of religious conservatives will look at him with some disdain. This won't make anybody vote for his opponent, but it may make a bunch of his current supporters not vote at all. Upside of not releasing: these votes stay with Romney. Downside: the attacks continue.
Case 4: he hasn't tithed appropriately in return for some quid pro quo, presumably from someone connected with LDS. This spreads the damage to people Romney may value and may lead to real jail time for them, while leaving Romney unscathed, except for failing to protect his friends. Upside of releasing: none. Downside of releasing: Romney's friends go to jail, blaming Romney. Upside of holding: he keeps his friends. Downside of holding: the attacks continue.
Case 5: Romney's tax returns show evidence of some especially rapacious but legal behavior, most likely in connection with the economic crisis of 2007-08. Upside of releasing: none. Downside: Romney's credibility as a businessman who plays fair is completely destroyed, he becomes the face of the crisis and economic collapse, and he loses 48-50 states in the election. Upside of holding on: this doesn't happen. Downside: the attacks continue.
Case 6: Romney's returns show evidence of some crime by himself or those close to him. Upside of releasing: none. Downside: he or his close ally goes to jail. Upside of holding on: this doesn't happen and he maintains his determined look. Downside: the attacks continue.
One of the big advantages of being president is that you get to pick your attorney general and federal prosecutors. If you or those close to you are guilty of some crime, you can steer the Justice Department away from prosecuting those crimes. It will go badly for such a president if he's found out, and controlling the Justice Department may be the only way to prevent that. This is almost certainly the reason Sheldon Adelson is so determined to buy somebody a presidency: there's pretty good evidence that he, personally, is guilty of violating the Foreign Corrupt Practices Act in Macau, against the advice of his lawyers, so it's a knowing violation.
28 July 2012
How American is Mitt Romney?
Mitt has been making mumblings about how President Obama doesn't understand our Anglo-Saxon heritage. So I thought I'd look into figuring out how American, and how Anglo-Saxon, each of them is. It turns out there's a really great website for this sort of information, with a fairly complete genealogy for both men available on-line:
Wargs:romney.html
Wargs:obama.html
Obama was born in Honolulu, Romney in Detroit.
Romney's Mom was born in Logan, Utah, in 1908; Obama's in Wichita, Kansas, in 1942.
Romney's Dad was born in Chihuahua, México, in 1907; Obama's in Kenya in 1936.
So, their parents are exactly equally American (each man has one US-born parent and one foreign-born parent), although George Romney was born of American parents trying to evade polygamy laws by moving to Mexico.
Romney's paternal grandparents were born in Utah Territory.
His maternal grandfather was born in England, and his maternal grandmother was born in Idaho Territory.
So NONE of his grandparents were born in America, although 3 were born in US territories. 2 of Romney's great-grandparents were born in England, one in what is today Germany, one in Canada, and the rest in the US. Only 5 of his 16 great-great-grandparents were born in the US, and 8 of his 32 great-great-great-grandparents. The rest were born in Scotland, England, Ireland, Canada, or what is today Germany. Most came after the Revolution, although he has ten ancestors who participated in it. At least 25% of his ancestry cannot be regarded as Anglo-Saxon. At least half of his ancestors moved to Utah before it was a US territory, and later to Mexico, specifically to avoid being considered American.
All of Obama's paternal ancestors were born in Kenya.
All 6 of Obama's maternal grandparents and great-grandparents were born in Kansas. On his mother's side, basically all of his ancestors were born-in-America WASPs, with very few exceptions: he has a great-great-grandmother who was born in Ireland, and a G^6 grandfather who was born in Switzerland. All the rest were born in America or the colonies, and apart from those few exceptions, essentially all of his maternal ancestors seem to have emigrated to America in the 17th century. At least 14 of them fought on the American side in the Revolution. At least 55% of his ancestry is not Anglo-Saxon.
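For what it's worth, the arithmetic behind these percentages is just powers of two: an ancestor n generations back contributes 1/2^n of your ancestry. A quick sketch using the exceptions listed above (the generation counts are my reading of the genealogy):

```python
# An ancestor n generations back contributes 1/2**n of someone's ancestry:
# a parent is 1/2, a great-great-grandparent is 1/16, and so on.
contributions = {
    "father born in Kenya (1 generation)": 1 / 2**1,
    "great-great-grandmother born in Ireland (4 generations)": 1 / 2**4,
    "G^6 grandfather born in Switzerland (8 generations)": 1 / 2**8,
}
total = sum(contributions.values())
print(f"not Anglo-Saxon: at least {total:.1%}")  # at least 56.6%
```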
So: Romney is indeed more Anglo-Saxon than Obama, but only by a little. It seems to me that leaving America to try to avoid its laws (at least two generations of Romney's ancestors did this) undercuts his Americanism a little.
25 July 2012
Al Gore Invented the Internet!
Well, not really, and he didn't say he did either. But he did play a crucial role.
The internet is a collection of protocols--TCP/IP, DNS, HTTP, RFC-822, lots more. Nearly all of them were invented by, or under the aegis of, some US or European research grant. The most important of these came from the US Defense Department's Advanced Research Projects Agency, known as DARPA or ARPA. A number of computer networks were invented in the late 60s, but nearly all of them were fundamentally centralized. DoD saw great value in the networking, but realized that centralized networks were extremely susceptible to attacks on the critical links. At the time, the Cold War and the risk of nuclear attack were at the top of DoD's mind. Two ARPA researchers, Vinton Cerf and Bob Kahn, were among those who realized that a highly decentralized network would be immune to this problem; the ARPA-funded teams designed some rudimentary protocols, collectively called NCP, and commissioned other researchers to begin implementing them. It didn't take long to come up with something useful, and BBN (Bolt Beranek and Newman) built the first router under DARPA contract, which they called an IMP (Interface Message Processor).
The nascent network, called the ARPANet, became immediately popular with academic and military users. ARPA, which was footing the entire bill, quickly put its foot down and banned commercial traffic lest the bill get too large. Since it was fundamentally a research network that had become far more popular than expected, it included no billing mechanism at all. At the same time, a number of other companies were selling various other types of networks. The most popular were Bulletin Board Systems and Local Area Nets. Nearly all of these used the Star organization--a central hub with all communication passing through it--although a few decentralized networks grew up, such as UUCPNet, and a few LANs used token passing. Token passing is decentralized but limited in scale, and Star is not decentralized: to make it bigger, the central hub needs to be bigger, and it pretty quickly becomes an unaffordable bottleneck. But the completely decentralized ARPANet did not need any big hubs. IMPs got cheap fast, and a lot of people quickly realized that service providers could be small and independent and could all work together, making a whole that was much larger than the sum of the parts. But there was a problem of how to charge the users, to make the thing scalable and commercial.
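The difference between the two architectures is easy to demonstrate. Below is a toy Python comparison (hypothetical 100-node networks, nothing to do with real IMP or ARPANet code): remove the hub from a star and the network evaporates; remove a random node from a decentralized mesh and it barely notices. The same asymmetry applies to growth, since every new star user adds load to the one hub.

```python
import random

def reachable(adj, start):
    # Walk the graph: every node we can reach from `start`.
    seen, stack = {start}, [start]
    while stack:
        for neighbor in adj[stack.pop()]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen

def largest_component(adj):
    # Size of the biggest group of nodes that can still talk to each other.
    nodes, best = set(adj), 0
    while nodes:
        comp = reachable(adj, next(iter(nodes)))
        best = max(best, len(comp))
        nodes -= comp
    return best

def kill(adj, node):
    # Remove one node (and its links) from the network.
    return {k: v - {node} for k, v in adj.items() if k != node}

N = 100

# Star: every node talks only through node 0, the hub.
star = {i: {0} for i in range(1, N)}
star[0] = set(range(1, N))

# Mesh: each node links to 3 random peers, no hub at all.
mesh = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample([x for x in range(N) if x != i], 3):
        mesh[i].add(j)
        mesh[j].add(i)

print("star after losing its hub: ", largest_component(kill(star, 0)))  # 1: everyone isolated
print("mesh after losing one node:", largest_component(kill(mesh, 0)))  # ~99: barely hurt
```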
Cerf and Kahn and a few others set about solving the problem, and came up with a new set of protocols, which they called TCP/IP (Transmission Control Protocol/Internet Protocol). TCP/IP improved a lot of things and included some sourcing information, which allowed service providers to implement billing. Since the ARPANet had been around for over ten years and was very popular by this point, it took a lot of politicking to impose this change, and it was Al Gore who was its champion in congress. (I was one of the skeptics at the time. I was wrong. Cerf was right.) He got the bill passed, and on New Year's Day 1983, called "Flag Day" by internet users, NCP was abandoned and TCP/IP replaced it. The internet was born. At the same time, the Domain Name System (DNS) was imposed--it might equally have been called the Domain Name Architecture, but the obvious acronym would be confusing.
Over time, a number of competing protocols have been in play. Some of them, such as X.25, are potentially superior, but none have been able to compete with the incredible flexibility and installed base of TCP/IP. All of the Star/Hub networks have been forced to accommodate the internet or go out of business. Many are still in use as LANs or ISPs, but the majority have switched to using TCP/IP. During the late 80s/early 90s, a great many BBS-like networks grew up, including Compu$erve, MSN, Prodigy, AOL, and more. All of these used the Star architecture. It made billing simple and forced users to mainly use content of the provider's choice, which of course created more profit centers. Each had incompatible ways of generating content, although they began to focus on what were called Markup Languages and HyperLinks. On the ARPANet, you had to have some level of sophistication about files and data in order to use things; the network providers provided, among other things, a high level of user-friendliness. They used HyperText, and so did most of the non-network CD-ROM content, such as HyperCard and Microsoft Encarta.
HyperText had been invented in the mid 60s by a brilliant eccentric named Ted Nelson. His idea was that all the information in the world could be brought together and linked, by placing it all on a centralized set of computers and putting pointers in the various documents to one another. He saw there being a centralized editorial board to make sure all the content was correct and consistent. Nelson's idea was consistent with the Star/Hub idea. He wrote a fun book about it and lots of other things, called Computer Lib/Dream Machines, in 1974, which caught the imagination of thousands of computer scientists. One of them was Tim Berners-Lee, an Englishman and researcher at CERN, the European nuclear research center (paid for by a consortium of European governments). Berners-Lee realized that by modifying a markup language, making a browser to use it, and providing a simple internet protocol layer--which he called HyperText Markup Language and HyperText Transfer Protocol (HTML and HTTP) respectively--he could create a version of Ted Nelson's vision. His motive was to provide a flexible host for CERN researchers to document their experiments, apparatus, and more, but it didn't take long before he realized that everybody could use it. He called it the World Wide Web.
14 July 2012
Fungible workers
I just woke from a very dark, almost Dickensian dream. I was working in a large factory which consisted of many work teams using metal worktables, doing various metalworking sorts of things. Some of them had machine tools, and many of them were working on greasy things. The lunchroom was in a separate room, to keep it clean, but it had the same sort of metal tables.
One day the factory was shut down for reorganization. As most of the workers stressed over not being able to do their jobs, other workers were reconfiguring their work tables. After several days of this, we found out why. All the worktables had been converted to lunch counters!
Being a dream, this was an exaggeration, but only a little. To a lot of management thinking, workers are fungible, and the ones who work in the lunch counter are the most fungible of all: everybody understands what they do, and many other workers could do their job, so they are probably the lowest paid workers on site. Moreover, the workers who are responsible for bringing in all the company's income, the sales force, do most of their work at lunch. So management had made what is, to them, the obvious optimization: emphasize the parts that bring in the money, and make them cost as little as possible.
Most workers who have more than a few months of training are not very fungible. This includes nearly the entire manufacturing and engineering staff. However, the sales staff, large parts of management, some clerical workers, janitors, and yes, the lunch counter workers, are quite fungible. They can quickly switch from doing what they do for one type of company, to doing the same thing for a different type of company. There's some specialization for sure, but selling a car and selling a computer are more similar than different. But building a car and building a computer are quite different.
The people that want to believe that workers are fungible have outsourced the majority of manufacturing in this country. Some of it went to outside contractors working in America, but a lot has gone to countries where labor is much cheaper. But the parts that can't be outsourced--the lunch counter, health care, sales, etc.--are still here and taking an ever larger share of the national income. Meanwhile the workers, the non-fungible majority, are being forced to split the shrinking remainder. Each individual company has done this to maximize its own profits. But like a tragedy of the commons, if everybody does it, we destroy the core of the economy.
Most workers are not fungible, but money is. This leads to a very interesting and slightly counter-intuitive consequence: when there is demand, workers find employment. The average American taxpayer's income is over $80K. The median family income is about $50K, and the median individual income is under $40K. What's going on? A few people have extremely high incomes--many, many times the average--and many families have multiple earners. Imagine we did a little redistribution--not enough to make everybody equal, but enough to give work to a lot of the workers who are now making under $20K. Many of these people have skills which are going to waste. Employ them doing useful things. These people will need things: groceries, cars, places to live. Some of the underemployed will create new businesses, and if they have customers, the new businesses might survive.
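The gap between those numbers is easy to reproduce. A few invented figures, chosen only to mirror the shape just described (most people in the middle, a few far above), show how a small number of very high incomes drags the mean well above the median:

```python
import statistics

# Made-up income distribution shaped like the one described above:
# most people in the middle, a small slice far above everyone else.
incomes = ([30_000] * 40       # lower earners
           + [50_000] * 40     # the middle
           + [100_000] * 18    # comfortable
           + [1_500_000] * 2)  # the very top

print(f"mean:   ${statistics.mean(incomes):,.0f}")    # $80,000 -- pulled up by the top
print(f"median: ${statistics.median(incomes):,.0f}")  # $50,000 -- the typical earner
```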
05 July 2012
Things the Free Market Can't Do Well
It's an article of faith on the right that the Free Market will always do better than government-run business. This is certainly true in a great many cases, but there are lots of counterexamples. Here are a few:
USRA: In the late 19th and early 20th centuries, the railroads were engaged in a "race to the bottom", trying to outcompete each other with lower fares, lower operating costs, emphasizing popular routes at the expense of less popular ones, and so forth--in a lot of ways like the post-deregulation airlines of today. They kept maintaining locomotives, track, and rolling stock at the minimum level that would get the job done, and as a consequence, all had very high maintenance costs and poor performance. When the US entered World War I, the extra demand placed on the railroads immediately exposed huge deficiencies, which the private railroads were plainly incapable of fixing on their own. So the government nationalized all of them into an institution called the United States Railroad Administration. It immediately set about rationalizing and repairing routes, designing a new set of state-of-the-art locomotives and freight cars, streamlining management, and more. The railroads never ran better or were more profitable, and when the war ended, the railroads kept all the improvements and continued many of the patterns established by USRA, even dragging their feet on un-nationalizing until almost two years after war's end. It wasn't until the government agreed to fairly major support that the old management finally agreed to go its own path.
When World War II broke out, many of the railroads essentially volunteered to be nationalized again. Since the reforms installed 20 years earlier were still working, the government didn't see the need, but the railroads willingly submitted to fairly extensive government control throughout the war. This was effective too: the railroads continued to work well. Much as before 1917, loads were fairly high and management was loath to try changes that might improve things but might also cause headaches. As soon as the war ended, many of the innovations that had been deferred were instituted, such as switching from steam to diesel-electric locomotives, which dramatically reduced the time locomotives spent in the shop, at tremendous savings in operating costs. They also made some foolish errors: the "mechanical" refrigerator car allowed huge efficiency improvements over its ice-cooled counterpart, but the railroads almost universally stuck with ice. Private truckers on the new and rapidly expanding highways were more willing to try the newfangled mechanicals and soon completely took over the market. Not having to pay for roads, and fuel that was close to free, also helped the truckers compete.
Interstate Highways, Bridges, and Dams. All of these things had been invented and installed at one time or another by small operators, charging tolls of some sort to recoup their initial investment. But it wasn't until government got involved that these things really took off. No business is big enough to fund even a single interstate or big dam on its own. But building them created millions of jobs directly, and the improvements to the economy they generated created millions more, and in nearly every case completely paid for their costs within a decade or so. Not always directly. For example, the economy of the "East Side" of Lake Washington is about $50B/year. Without the bridges (one is currently being replaced and substantially upgraded at a cost of $4.6B), it would be closer to $5B.
Health Care Insurance: I'm not aware of a single case where private insurers provide better coverage than public. US health care costs are nearly twice those per capita of the second most expensive country, and we leave at least 20% of the population completely without insurance and that many again with inadequate insurance. V.A. and Medicare costs are 20-30% less than the equivalent private insurance, even using what is otherwise largely the same basic health care system. And our outcomes are much worse than most of the countries that pay half or less what we do, even if you account for the poor outcomes of those who have no insurance at all. It's true that in America if you can afford it, there is no better medicine available, but only the 1% can afford it.
The Internet: Between 1970 and 2000 we unintentionally ran a very interesting experiment comparing the private and public markets for networking. The thing known today as the Internet was developed (by a consortium of universities and private companies) at the behest of the Defense Advanced Research Projects Agency (DARPA), hence its original name: "ARPA-Net". For military reasons, it used a decentralized, many-hubbed architecture, reasoning that if part of it were destroyed (by a nuclear strike or sabotage, for example), the rest would continue to function uninterrupted. It was strictly non-commercial and had no billing mechanism, although many private companies joined up and mostly followed the rules. This had tremendous advantages: decentralization meant that nobody was in charge, so new sites could come on-line and expand with a minimum of interference. Very "Free Market". Meanwhile, many computer companies were inventing their own network systems. Nearly all of these (e.g. Compuserve, Prodigy, MSN, AOL) used the "Star" architecture, meaning logging into a central hub, with all the users communicating only through the hub. This made billing very natural, but made it awkward for users of competing networks to communicate with each other. In 1983, a Senate committee headed by Al Gore set about "commercializing" the ARPANet, changing a number of protocols to support billing and larger numbers of users, and renaming it the "Internet". Within very few years, all of the Star networks found that providing internet access improved nearly everything about the user experience, and after the invention of the World Wide Web (also by a government-funded researcher), very few people even remember that there ever was a serious market for Star architectures.
The uniting theme of these examples is that the government institution did what was most effective at solving the problem for the long term, while business always did what was most profitable for the short term. In every case, doing the right thing was vastly more effective in the long term and with only the one exception (health care) vastly more profitable for the businesses that adapted to the government mandate.
25 May 2012
Pre-Blibbet
From my collection, before we used email for phone messages or had the Blibbet logo. 1981, I think. The paper and glue are starting to get a little brittle.
21 May 2012
Two Maps
Here are two maps, county by county, put together by the New York Times. The top one shows what percentage of all income comes from various government programs: Medicare, food stamps, VA benefits, unemployment, etc.
The second is the 2008 presidential election. In both cases, if you go to the original source, you can drill down by county or go to the results from different years.
The thing that strikes me is how similar they are. The deepest blue areas in the electoral map mostly get the least benefits, and many of the places that get the most are very "red" politically. There are some conspicuous exceptions: Apache County in Arizona, where most people live on an Indian reservation; DC, where most people are poor and black; along the river in Mississippi, where most people are poor and black; and along the Texas/New Mexico border, where most people are poor and Mexican. But by and large, the people that benefit the most consistently vote for the party that wants to take those benefits away.
The places that benefit the most are all deeply poor, with very few people that are even middle class. The difference, it appears to me, is the color of your skin: poor whites vote Republican, poor everybody else votes for Democrats.