Government, Political Influence, and Wealth
Technology and the Uncertain Foundations of Anglo-American Wealth
excerpted from the book
Wealth and Democracy: A Political History of the American Rich
by Kevin Phillips
Broadway Books, 2002, paper
p230
By 2000, Europe had its own European Central Bank, and together
with the Bank of Japan and the U.S. Federal Reserve, the three
dominated the global financial system, controlling upwards of
80 percent of growth in the developed world. "Governments
have largely ceded to these three institutions the responsibility
of controlling world inflation, and to do this they must necessarily
influence the near-term path for GDP and unemployment," noted
Goldman Sachs economist Gavyn Davies. "Rarely, if ever, can
so much power have been wielded by such a small number of institutions
sitting outside the direct democratic process."
The International Monetary Fund, in turn,
was the global agency to which nations, usually poor or embattled
ones, turned for loans and assistance in economic crises. The
conditions of those loans usually involved austerity and measures
to make the local economy safer for foreign investors. The U.S.
Treasury Department was influential in IMF decisions, and one
economics professor, Rudiger Dornbusch of MIT, stated simply that
the IMF was "a tool of the United States to pursue its policy
offshore."
In collaboration with U.S. multinational
banks and corporations, the U.S. government, on a bipartisan basis,
was indeed closely involved in writing the rules of the new global
investor economy, especially through two new frameworks brought
into existence in the 1990s: the North American Free Trade Agreement
(1993) and the World Trade Organization (1995). The bottom line,
from the standpoint of American multinational banks and corporations,
was that the U.S. market had lost its old importance. Investment
opportunities, production facilities, workers, and markets also
had to be sought elsewhere, which would require the creation of
a protective international legal and regulatory framework, one
able to secure investment by overriding contrary local parochialisms
and procedures.
Although criticism had forced the tabling
in 1998 of a proposed Multilateral Agreement on Investment, the
legislation enacting the North American Free Trade Agreement and the World
Trade Organization included sections authorizing similar protections.
Both agreements were pushed through Congress under so-called "fast
track" procedures. When fast track was in place, the House
and Senate were required to consider major trade legislation on
a take-it-or-leave-it basis, with amendments prohibited. Otherwise,
amendments, including ones to strike provisions thought to trespass
on U.S. sovereignty, might well have passed.
However, new transnational enforcement
procedures helped to explain why the World Trade Organization
was superseding the General Agreement on Tariffs and Trade. NAFTA,
too, had a section that established a system of arbitration under
which investors from one of the other two nations could bring
claims against the U.S., Canadian, or Mexican governments. Investors
were allowed to demand compensation should the profit-making potential
of a venture be injured by national, state, or local government
decisions. The broader WTO, whose standards for members former
director-general Renato Ruggiero called "a new constitution
for a single global economy," permitted governments to bring
actions against other nations before special WTO tribunals for
interfering with the flow of goods and capital.
Several decisions by these three-member
panels, which routinely operated behind closed doors and were generally staffed
by former government or corporate trade officials, illustrated
the transfer of power. One ruling against the United States required
amendment of the Clean Air Act to permit the entry of Venezuelan
gasoline that did not meet federal standards. Thailand, for its
part, was told to give up manufacturing a cheap AIDS drug after
the U.S. threatened a WTO suit on behalf of an American pharmaceutical
firm. Critics in the U.S. Congress pointed to the large potential
for WTO panels to overturn state and local laws in the United
States. Each year, they said, Japan, the European Union, and Canada
publish lists of American laws that each considers WTO-illegal.
In 1999, according to the Georgetown University Law Center, ninety-five
such laws were tentatively identified in California alone.
In terms of procedure, no appeal to any other
body was allowed from tribunal decisions, which rested on the premise
that free trade, economic growth, and enhanced financial returns
outranked divergent local values. This fueled criticism. Journalist
William Greider, a latter-day muckraker, charged that, "The
WTO aspires, in effect, to create a Bill of Rights for capital,
crafted one case at a time by the corporate lawyers filing their
confidential pleadings in Geneva. It is not hyperbole when critics
say the system defines property rights and common social concerns
as irrelevant to trade."
To the AFL-CIO, the rules of the new global
economy were being "created by government muscle, wielded
behind closed doors, largely on behalf of the most powerful corporate
and financial interests." Democratic U.S. senator Fritz Hollings
of South Carolina, who became chairman of the Senate Commerce
Committee in 2001, charged that "the WTO puts our social
contract in jeopardy; its one-size-fits-all capitalism threatens
to destroy America's standard of living."
Even corporations had some second thoughts
when a WTO tribunal ruled in 2001 that a $4 billion U.S. tax break
for exporters violated the new international rules; affected
U.S. companies howled. By and large, though, the
new framework was one that U.S. multinational corporations promoted
and favored. So did investors who understood that American wealth
principally rested on stock market valuations tied to corporate
profits.
p244
Radio had its own intertwining with government. During World War I
the navy took over all radio patents and speeded radio's development.
After the war the navy, General Electric, and Westinghouse set
up a new company, the Radio Corporation of America, to hold all
the patents and steer technological development. An admiral served
as an ex officio member of the board of directors and functioned
as a liaison with government. In the words of one chronicler,
"Every leading technician or official of RCA was a reserve
officer of the Army or Navy, and the company was geared to instant
conversion to war duty as an arm of the government." At first
it was not clear whether radio and the limited electromagnetic
spectrum would be publicly or privately owned; one plan was to
set aside 25 percent of the spectrum for public service. However,
the end result of the Radio Act of 1927 and the Communications
Act of 1934 was to give broadcasters the airwaves. The public
service strings attached were scarcely more effective than those
attached a half century earlier to the public lands given railroads.
p245
By 1943, as the wartime mathematicians analyzing trajectories and
computing artillery firing tables for the Army's Ballistic Research
Laboratory found themselves falling behind the military's needs,
Army Ordnance funded a crash program at the University of Pennsylvania
to produce the ENIAC (Electronic Numerical Integrator and Computer).
Completed in 1945 and generally described as the first electronic
computer, the wall-sized ENIAC, with its 18,000 vacuum tubes,
spent only a few weeks calculating firing tables before Los Alamos
mathematicians were allowed to use it for calculating the hydrodynamics
of hydrogen bombs.
Its follow-up, the EDVAC, was the first
stored-program computer, and its details were disseminated so widely
that army lawyers ruled that they passed into the public domain.
The next group of computers were funded or commissioned as follows:
SEAC (1949) for the National Bureau of Standards; IAS (1951) for
the army, navy, and RCA; Whirlwind (1949) for the SAGE strategic
air-defense system; Univac (1953) by Remington Rand for the Census
Bureau, other government agencies, and business buyers; and the
IBM 701 (1953) for the Defense Department. By this point commercial
demand was catching hold.
The transistor in the meantime had been
invented in 1947 at the American Telephone & Telegraph Company's
Bell Laboratories, and AT&T, because of a federal antitrust
suit filed in 1949, was encouraged to disseminate information,
spurring development. In 1954 the silicon junction transistor
was produced for the U.S. military for use in radar and missile
applications. The invention in 1958 of the integrated circuit
(IC), a leap forward that combined a number of transistors on
a single silicon chip, had not been undertaken for the armed forces,
but federal military and space applications became the IC's market
and proving ground.
Technology specialists Nathan Rosenberg
and David Mowery have tabulated the importance of federal
procurement. Chart 5.4A, below, shows the growth of semiconductor
production, with the Department of Defense taking over one-third
of total semiconductor production until 1963. Chart 5.4B shows
the huge initial dependence of IC producers on military sales.
Just as U.S. antitrust officials pushed
for diffusion of critical technology, so, Mowery and Rosenberg emphasize,
did the Department of Defense, with its own goal of ensuring
a "second source" of the technology it was buying.
Compliance meant that firms had to exchange designs and share
enough process knowledge to ensure that the components would match.
The emergence of the microprocessor in
1971, which led to personal computers and workstations, owed
less to Washington. Not so the development of software. Antitrust
pressure induced IBM to "unbundle" its hardware and
software, opening up space for independent software producers.
Mowery and Rosenberg also conclude, despite lack of a single time
series, that, "Much of the rapid growth in custom software
firms during the period from 1969 through 1980 reflected expansion
in federal demand, which in turn was dominated by Department of
Defense demand."
The Internet, of course, began as a project
of the Defense Department's Advanced Research Projects Agency
(DARPA). In a 1968 essay, DARPA computer pioneer J. C. R. Licklider
discussed how a few weeks earlier he and others "participated
in a technical meeting held through a computer." He correctly
predicted in that same essay that being "on line" through
a network of multi-access computers had the potential to "change
the nature and value of communication even more profoundly than
did the printing press and the picture tube." By 1969 the
idea of a network got a $1 million budget at DARPA, and by the
early 1970s, ARPANET was wired to twenty-three sites with some
connection to government-funded computer research.
Taken over after some years by the National
Science Foundation, ARPANET had 100,000 sites when it was shut
down in 1989, with its sites becoming part of other networks. Collectively
these networks assumed the name Internet, gaining a new potential
in 1993 when Marc Andreessen, a code writer at the federally funded
National Center for Supercomputing Applications, came up with
Mosaic, a graphical browser that opened Web access to the multitudes. That year
the number of commercial Web sites jumped from fifty to over ten
thousand, and Andreessen and his friends went on to help found Netscape.
p248
The telecommunications industry, like radio, received spectrum
from the government under pledges of public service but at no
serious cost. "The free distribution of the public-owned
electromagnetic spectrum to U.S. radio and television companies,"
according to one critic, "has been one of the greatest gifts
of public property in history, valued as high as $100 billion."
The Telecommunications Act of 1996 alone, which gave each existing
television broadcaster an additional six megahertz of spectrum
so that it could broadcast simultaneously in digital
and analog, drew fire for giving away spectrum worth $40-$100 billion
in return for a loose promise of public service programming. But
most politicians were silent, too well aware of the power the
major media conglomerates exercised over their careers, another
parallel to the influence of the railroads a century earlier.
p269
... the gains of India and China may well represent the early-stage
emergence of the two nations, and the continent, most likely to challenge
the twenty-first-century technological advantage of the United
States.
India boarded the Internet and software
express quite successfully in the 1990s, drawing on its 15-30
million-strong English-speaking and well-educated middle class.
Even so, one-third of the 900 million population still lived in
absolute poverty. Hyderabad, whose Nizam had once been the wealthiest
man in British India, led the tech parade, along with nearby Bangalore.
Although just 280,000 Indians worked in the technology sector
at decade's end, the benefits to India's elite were broader. India's
software exports jumped from about $1 billion in 1995 to some
$5 billion in 2000, principally to the United States. At the height
of the boom, a survey by McKinsey & Company predicted a rise
to $50 billion in a decade. John Wall, president of Nasdaq International,
enthused that, "There are potentially 100-plus companies
in India that could list on the Nasdaq." Indeed, between
the start of 1999 and the spring of 2000 the market value of the
software sector on the Bombay Stock Exchange rose from $4 billion
to a peak of more than $50 billion. Employees of Infosys alone
included 270 millionaires in dollar terms, at least before the
bubble broke.
India's millions of engineers also became
the principal foreign source of skilled technicians for Silicon
Valley under the U.S. government's H-1B visa program. In 2000,
the total of skilled Indian workers in the U.S. approached 75,000,
including more than one-quarter of the workforce at Cisco, the
networking giant ... Thousands more Indians, it should be noted,
held lucrative niches in the U.S. financial sector, and roughly
a score of U.S. computer, Internet, and software firms had Indian-born
chief executives. All of these ties greatly assisted India's fledgling
software industry. In wealth distribution terms, the result was
yet another set of glamorous digerati, bubbly stock markets, and Western
high-rise buildings in a poor and more polarized society.
Still, even though India's software industry
and related fortunes suffered from the post-millennial tech crash,
few serious observers saw the country as anything but an ascending
force in twenty-first-century technology thanks to its large number
of English-speaking engineers and low wage rates. Software engineers
commanding $75,000 a year in the U.S. could be hired for one-fifth
as much in Hyderabad or Bangalore, and by 2001 the Indian government
reported nearly one thousand foreign tech firms, several hundred
of them American, with operations in Bangalore.
p278
Comments on the speculative tendencies of the Dutch, the British, and
Americans have been legion. As for the repetitive implosions,
doubters can consult economist Charles Kindleberger's book Manias,
Panics, and Crashes. Of his twenty-eight major examples between
1720 and 1975, twenty-one originated wholly or partially in the
avid stock market cultures of Holland, Britain, and the United
States. Initially, of course, close relations between technology
and finance produced benefits by enlisting the capital needed
to support innovation. The periodic national bane, alas, was the
frequent subsequent distress, most prominently in the United States,
from the implosion of technologically fed speculative bubbles.
Chapter 9 will take a larger approach
to the culture and politics of speculation in the United States.
But the repetition of eighteenth- and nineteenth-century technology
booms that bubbled and burst is instructive, especially because
it has rarely been pursued. There is no need to revisit the small
British speculative booms in diving engines, gas lighting, or
canals. However, America's three great technologically nurtured
nineteenth-century boom-bust sequences (1857, 1873, and 1893)
can be summed up in one fast-speeding, revolutionary word: railroads.
None of these matched the opening-round psychological delusion
of so many Britons in the Great Railway Mania of 1844-48, but
the repetitive damage of the several railroad implosions to the
larger U.S. economy was particularly vivid on that side of the
Atlantic.
p286
Where elites prosper on a political economy of profitable openness,
technology and capital move easily, and when the hour of disappointment
or adversity comes, it arrives surprisingly quickly... Spain had
little manufacturing to safeguard, but its financial and commercial
capacities were further drained in the seventeenth century as
Genoese, Frenchmen, Flemings, and Portuguese Jews left for greener
pastures. Hollanders should have seen the straws in the wind when
some of Amsterdam's foreign moneymen followed William of Orange
to London in the 1690s. Capital transfer was close behind, and
the migrating skills of Dutch carpenters and engineers only another
generation or two.
The essence of the successive Dutch, British,
and American internationalisms, intensified in the rentier cosmopolitanism
that flourished decades beyond each golden age, has been to oppose
restraints on the flow of capital, labor, and trade with an insistence
that borders on theology and brooks no argument. The self-interest
of the prevailing elites has been obvious, however. Leading powers
formerly committed to protection and mercantilism back when that
approach profited them (Britain from the sixteenth century to the
early nineteenth, the United States from the 1790s to the 1930s) elicit
cynicism with their new insistences.
The changes have been well cataloged.
In the Dutch case, industrial decline, especially in textiles,
was closely tied to two waves of European protectionism and new
or higher duties on imports. The prohibitive tariffs put by England
and France on Dutch cloth and finished textiles in the late seventeenth
century were followed by similar measures in Russia, Prussia,
Denmark, Norway, and Spain in the first quarter of the eighteenth
century. These cumulatively devastated the Dutch industry. The
British, in turn, lost hope of spreading free trade and laissez-faire
in the late nineteenth century as France, Germany, the United
States, and Russia imposed or increased duties on imported goods.
Not a few historians have wondered how
the British, in particular, could have so misread the ebb and
flow of nations as to dismiss warning signals. The example of
Holland was only 40 miles across the Channel and just 150 years
in the past. Much of the answer probably lay in the hubris, even
self-deception, that attends leading world economic power status:
the ideological and geopolitical equivalent of technological mania
and irrational exuberance.
The British, as they tried to sell the
rest of Europe on their new credo of economic openness, infused
their insistence with the moral fervor prominent in Victorian
society. The Liberal Party became its unswayable upholder, the
City of London its principal interest group, and allied intellectuals
its theorists. Moreover, in downplaying the economic jeopardies
apparent in the 1900s, party leaders like Lord Rosebery, the former
prime minister, and future prime ministers H. H. Asquith and Sir
Henry Campbell-Bannerman touched on virtually every argument heard
again in the United States of the 1980s and 1990s. Demands for
reciprocity would provoke foreign countermeasures. Britain's future
lay in finance and services (where we have a favorable balance).
Official statistics were not clear enough to act on. And perhaps
most of all, the real answer was to achieve "better education,
better training, better methods, larger outlooks" (Asquith,
1903) and to "fight them [tariffs] by a more scientific and
adaptive spirit-by better education" (Rosebery, 1903).
One is tempted to say that the United
States, caught up in a somewhat similar situation, has been no
better at marking the British lesson than Britons had been in
heeding the Dutch. True, there were important differences. A century
after the earl of Rosebery urged "a more scientific and adaptive
spirit" on his countrymen, Americans-or at least a higher
percentage of them-seemed to still display it, witness the role
of Silicon Valley as the Mecca of global technology. Early-twentieth-century
Britain boasted nothing so cutting-edge. Besides, the Britain
of 1900 had two obvious challengers in the United States and Germany.
The United States of the millennium had no immediate rival that
kept it looking over its shoulder.
p289
The conspicuously limited job creation of the major U.S. technology
firms bespoke several warnings. Firm after firm was increasing
its reliance on overseas production and software engineers and
suppliers, especially, as we have seen, in India, China, Taiwan,
and the rest of relatively low-wage Asia. Even more to the
point, many U.S. firms were dependent on foreign nations, mostly
Asian, to fill American-based jobs with skilled engineers and
programmers unavailable in the U.S. labor pool.
This reliance extended to the highest
levels of management. Of the four or five hundred top U.S. Internet,
telecom, chip, and networking firms, dozens had Chinese, Indian,
or Asian-American chief executives, and Silicon Valley was home
to large numbers of Indian, Chinese, and Taiwanese executives
and engineers. A group with major representation in the Valley,
Indus Entrepreneurs, estimated that 30 percent of the software
engineers there were of Indian origin. An economist at the University
of California at Berkeley used a Dun & Bradstreet database
to count 750 local companies run by Indians. The workforce at
Cisco Systems' San Jose headquarters was 45 percent Asian; Santa
Clara County as a whole had a nonwhite majority and a 24 percent
Asian population.
The Valley, indeed, seemed to foretell
Asia as the next leading world economic region. A New York Times
profile in 2000 explained that, "The defining character of
Silicon Valley today is not the pasty-faced plaid shirt-wearing
aerospace engineer, but a young geek from Taipei or Bangalore
with an H-1B visa...." American universities, too, were educating
hundreds of thousands of foreigners to man rival economies. Others
who had come much earlier were going back, thousands of scientists
returning to Taiwan alone, drawn by East Asian pride, growth, and
prospects. On all counts, partnership arrangements were proliferating.
Moreover, behind the U.S. computer-industry
tie to Taiwan, which in 2000 made 39 percent of the world's disk
drives, 54 percent of its monitors, and 93 percent of its scanners
as well as 53 percent of the laptops and 25 percent of the personal
as well as 5 3 percent of the laptops and 25 percent of the personal
computers, lay the unnerving prospect of Taiwan's manufacturing
absorption into the fast-growing Goliath of mainland China. More
and more Taiwanese firms were moving production there. Labor on
the mainland cost only one-quarter to one-third as much as on
Taiwan, and China had two other lures: graduation of new engineers
at a rate of 145,000 a year and a domestic computer market growing
40 percent annually. The upshot was that of the computers and
laptops sold in the United States by such companies as Compaq,
Dell, and Gateway, a growing ratio of the components and even
final products came from China, not Taiwan, putting the U.S. into
what the New York Times described as an "odd position: its
main supplier of PC's and other information-technology, or I.T.,
gear will be its main strategic adversary." Most businessmen
just shrugged. Back in the 1990s, the chief executive of Boeing,
a major U.S. defense contractor with assembly lines in China,
had dismissed technology transfer in aerospace production as something
Washington, not Boeing, would have to deal with.
p290
In 2000, Asia was on the verge of passing the United States in
number of Internet users. Officials talked of China itself pulling
ahead by 2005, and one China watcher noted that were China to
grow at 7 percent a year, it would surpass a U.S. economy growing
at 3 percent sometime between 2020 and 2030.
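The arithmetic behind that crossover estimate is simple compound growth. As a rough sketch only, and assuming purely for illustration a starting ratio not given in the text (China's economy at roughly 40 percent of the U.S. level around 2000, in purchasing-power terms), the crossover time t follows from equating the two growth paths:

\[
C_0 (1+g_C)^t = U_0 (1+g_U)^t
\;\Longrightarrow\;
t = \frac{\ln(U_0/C_0)}{\ln\!\left(\frac{1+g_C}{1+g_U}\right)}
  = \frac{\ln(1/0.4)}{\ln(1.07/1.03)}
  \approx \frac{0.916}{0.038}
  \approx 24 \text{ years},
\]

that is, a crossover in the mid-2020s counting from 2000, which falls inside the "between 2020 and 2030" window quoted above; a larger or smaller assumed starting ratio shifts the date accordingly.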