Archive for the ‘Inspired by a Book’ Category

The Future of Success

Sunday, June 5th, 2011

“The Future of Success,” now an age-old classic, is one of the plethora of books written by Robert Reich, a reputed labor economist known for his analysis of the “New Economy” and its ramifications.

Golden: Reich tells us knowledge workers - a term coined by Peter Ferdinand Drucker - will be the prized workers of the 21st century.

Never suffering from a dearth of observations, Reich proclaims in “The Future of Success” that the “New Economy” is fundamentally different from the “Old Economy.” While the latter focused on the mass production of labor-intensive goods, the former describes an economy centered on knowledge-intensive goods.

The shift from labor-intensive to knowledge-intensive goods may not seem all that radical. But labor economist Robert Reich will be quick to tell you that it has grave consequences for the labor market.

Under the “Old Economy,” many a skilled hand could be put to use in manufacturing goods. The secondary sector (economic jargon for the industrial sector) was able to handle the swelling of its ranks because an increase in consumption was linked to an increase in the number of employees.

Unfortunately for job-seekers, the rules of the game have fundamentally changed under the “New Economy.” Knowledge-intensive goods do not require a large number of workers; in fact, a small number of innovative thinkers can propel a company towards success. What does this mean for companies’ employment strategies?

Shed. Shed. Shed.

Reich lists the changing rules of employment in his taxonomy as follows:

“Old Economy (mid-twentieth century)
-Steady work with predictably rising pay
-Limited effort
-Wage compression, and the expansion of the middle class

New Economy
-The end of steady work
-The necessity of continuous effort
-Widening inequality”
(pp. 93-101).

Reich also notes that the New Economy has lasting implications for how we perceive education:

“The real value of a college education to one’s job prospects has less to do with what is learned than with who is met. The parents of one’s classmates, and the friends of their parents, provide connections to summer jobs and first jobs, then later to clients and business customers. Loyal alumni offer further deals. The more prestigious the university, the more valuable such connections are likely to be. To the extent that an Ivy League education has superior value, that value has less to do with the grandeur of its libraries or the cleverness of its professoriat than with the superiority of its connections.” (p.134)

His observations go beyond the merits of an education at a prestigious university:

“[P]eople at or near the top are doing remarkably well, to be sure. They possess just the right combination of talents and connections, and have sold themselves adeptly. But they are not winning it all; they are sharing some of their winnings with talented people arrayed around them on whom they depend, and those people in turn are sharing some of their winnings with others on whom they depend, and so on, extending outward and downward in a vast network of interconnections. As talented people make names in their fields, they’re worth more.”

To top it all off, Reich draws upon a quote from Tom Peters, whose article “The Brand Called You” provides readers with a maxim telling of just how commoditized our world has become:

“Starting today you are a brand. You’re every bit as much a brand as Nike, Coke, Pepsi, or the Body Shop […] the most important job is to be head marketer for the brand called you.”
(Fast Company, August-September 1997. pp. 83-94).

Now, more than ever, the competition to succeed is becoming increasingly ruthless. Those marginal few who make it to the top will, according to Reich, reap astronomical rewards, while the great majority of people will toil in semi-skilled and manual labor, earning pennies compared to their super-lavish, sophisticated counterparts.

So much for equality.



Justice Comme Justice

Tuesday, March 1st, 2011

The Magna Carta—a landmark English charter that granted certain liberties to Englishmen in the 13th century—reads: “Nulli vendemus, nulli negabimus aut differemus, rectum aut justitiam,” or in English, “To no man will we sell, or deny, or delay, right or justice.” From what the Magna Carta mandates, it is obvious that the charter considers justice to be of superlative import.

But what, exactly, does justice mean?

William Shakespeare put it poetically in King John:

“Well, whiles I am a beggar, I will rail
And say there is no sin but to be rich;
And being rich, my virtue then shall be
To say there is no vice but beggary.”
(King John, II. i. 593-596)

From this we can observe that justice, like beauty, is largely in the eye of the beholder. It is a disputed virtue, and thus holds great relevance in public discussion, for the ideals of justice that imbue our social institutions must first be agreed upon by the people.

In Japan, justice has until recently been seen as a cold virtue, and most Japanese associated justice with retributive justice (lex talionis). In addition, justice has also been viewed with some degree of skepticism, and a considerable number of people refuse to hold justice in high regard because they associate it with hero’s justice—namely, the idea that the strong define justice as they see fit.

But many of these sentiments have given way to renewed discourse since the arrival of Michael Sandel—a Harvard professor of political and moral philosophy—in Japan last year. His lectures at Sanders Theatre at Harvard enjoyed a strong viewership, and NHK, a Japanese media company, invited him to deliver a lecture at The University of Tokyo on August 25th last year.

By using simple case studies that present moral dilemmas, Sandel forced the participants in his lecture at The University of Tokyo to face their own subjective conceptions of justice. One particular example Sandel uses is the flute.

Sandel is not alone. Amartya Sen—an authority on normative economics and a recipient of the Nobel Memorial Prize in Economics—uses the “flute example” in his book The Idea of Justice to present a common moral dilemma:

“Let me illustrate the problem with an example in which you have to decide which of the three children—Anne, Bob and Carla—should get a flute about which they are quarreling. Anne claims the flute on the ground that she is the only one of the three who knows how to play it (the others do not deny this), and that it would be quite unjust to deny the flute to the only one who can actually play it. If that is all you knew, the case for giving the flute to the first child would be strong.

In an alternative scenario, it is Bob who speaks up, and defends his case for having the flute by pointing out that he is the only one among the three who is so poor that he has no toys of his own. The flute would give him something to play with (the other two concede that they are richer and well supplied with engaging amenities). If you had heard only Bob and none of the others, the case for giving it to him would be strong.

In another alternative scenario, it is Carla who speaks up and points out that she has been working diligently for many months to make the flute with her own labor (the others confirm this), and just when she has finished her work, ‘just then,’ she complains, ‘these expropriators came along to try and grab the flute away from me’. If Carla’s statement is all you had heard, you might be inclined to give the flute to her in recognition of her understandable claim to something she had made herself.” (p.13)

Sen illustrates a divisive issue with great clarity; the case for each of the three seems strong. So then, how would this problem be framed in academic terms?

“Bob, the poorest, would tend to get fairly straightforward support from the economic egalitarian if he is committed to reducing gaps in the economic means of people. On the other hand, Carla, the maker of the flute, would receive immediate sympathy from the libertarian. The utilitarian hedonist may face the hardest challenge, but he would certainly tend to give weight, more than the libertarian or the economic egalitarian, to the fact that Anne’s pleasure is likely to be stronger because she is the only one who can play the flute.” (p.13)

We now see that egalitarians, libertarians, and utilitarians, who each base their reasoning on impartial, non-arbitrary claims, would be unable to reach a shared resolution to this problem.

In order to overcome such problems, Sen says, thinkers such as Adam Smith and John Rawls have advanced unique notions of impartiality. Smith, in his The Theory of Moral Sentiments, draws upon the notion of an ‘impartial spectator’:

“In solitude, we are apt to feel too strongly whatever relates to ourselves… The conversation of a friend brings us to a better, that of a stranger to a still better temper. The man within the breast, the abstract and ideal spectator of our sentiments and conduct, requires often to be awakened and put in mind of his duty, by the presence of the real spectator: and it is always from that spectator, from whom we can expect the least sympathy and indulgence, that we are likely to learn the most complete lesson of self-command.” (The Theory of Moral Sentiments, III. 3.38, p. 153-154)

Sen calls Adam Smith’s approach one of open impartiality that relies on enlightenment relevance. In contrast, John Rawls employs closed impartiality that relies upon membership entitlement:

“My aim is to present a conception of justice which generalizes and carries to a higher level the familiar theory of the social contract as found in say, Locke, Rousseau, and Kant.” (A Theory of Justice, p.10).

Both Sen and Rawls make a committed attempt to overcome divisive conceptions of justice. In this regard, the study of justice is one that requires deliberation, and must remain a continuous and arduous process. Sen concludes by saying,

“To ask how things are going and whether they can be improved is a constant and inescapable part of the pursuit of justice.” (The Idea of Justice, p.86)

The pursuit of justice is a continuous affair that must be deliberated by anyone who seeks to grasp justice comme justice—or justice as justice. Without a deliberative process, we would be putting an honorable virtue at serious risk of falling to something of vulgar value.


Why Japan Can’t Woo More Moo’s

Saturday, November 27th, 2010

Cash Cows. They’re the kind of killer-products that every company craves.

Yet Japan’s premier blue-chip companies have become increasingly unable to provide “the next big thing.”

Take Sony, for example—a decade ago Sony seemed impervious to skeptics of its continued growth. Sony’s Walkman and high-resolution televisions were taking the world by storm, and it seemed like no competitor could match Sony’s sleek, hip products.

But now Sony’s grip on all-things-electronic has been attacked on all fronts: Apple has taken the lead in portable music players, Samsung and LG have taken home the gold in preferred television units, and to add insult to injury, Sony’s Vaio laptops have become increasingly MIA from store shelves around the world (no, they aren’t going Dell’s way of online custom-orders; they’re simply deep in the red).

For all of Sony’s dismal performance of late, Sony still has incredible technological capabilities. The PlayStation 3, like the PlayStation 2 before it, set the world standard in next-generation video playback, this time through its Blu-ray drive. Sony’s laptops, though increasingly harder to come across, are so sleek they’d serve as paper cutters. The company’s R&D labs have some of the world’s finest engineers, many of them with decades of experience in the audio, visual, and entertainment industries.

Why then, are Sony’s products doing so poorly? Seth Godin, a marketing guru, says that to conceptualize, create, and market a cash cow, one has to get rid of all the P’s in marketing (like Pricing, Promotion, Positioning, Packaging, etc.)

All that companies have to focus on is the new P—the Purple Cow.

Here’s Seth’s anecdote about Purple Cows, in—surprise!—his book Purple Cow:

“When my family and I were driving through France a few years ago, we were enchanted by the hundreds of storybook cows grazing on picturesque pastures right next to the highway. For dozens of kilometers, we all gazed out the window, marveling about how beautiful everything was. Then, within twenty minutes, we started ignoring the cows. The new cows were just like the old cows, and what once was amazing was now common. Worse than common. It was boring.” (p.2).

He then goes on to drive his point home ruthlessly, just in case anyone missed it the first time around:

“Cows, after you’ve seen them for a while, are boring. They may be perfect cows, attractive cows, cows with great personalities, cows lit by beautiful light, but they’re still boring. A Purple Cow, though. Now that would be interesting.”  (p.2).

The point that Seth Godin is trying to make is that a good product just doesn’t quite cut it anymore. The product has to be remarkable. Sony’s Walkman had good design and offered good sound. Apple’s iPod may not have delivered better sound quality (in fact, it was probably worse), but the clickwheel? Now that was remarkable, and that was worthy of being called a Purple Cow.

What do today’s remarkable companies all have in common? They’ve got Purple Cow mindsets. They aren’t playing it safe. They aren’t kissing up to the status quo. They are, as Seth Godin observes, “outliers. They’re on the fringes. Super-fast or super-slow. Very exclusive or very cheap. Very big or very small […] the leader is the leader because he did something remarkable.” (p.20).

Sony’s predicament provides a case in point for Japan’s economy as a whole—Japan has all the technological expertise to make a plethora of remarkable products. Yet it just can’t seem to deliver, and it’s because Japan played it safe for the past two decades.

It’s time Japan decided to take one big, audacious gamble.
It’s time Japan decided to become a Purple Cow.


Democracy’s Growth Pains

Friday, November 19th, 2010

One of the things that Nel Noddings analyzes in the opening pages of her book “Educating Citizens for Global Awareness” is social and cultural diversity. Noddings states that “diversity” involves “racial, ethnic, and religious differences” while disregarding the physical appearances of individuals. In other words, Noddings considers “diversity” along lines of cultural heritage—which, of course, is defined by the social, historical, and cultural context of the people in question.

In her book, Noddings states that recognizing the importance of “diversity” is paramount to the creation of “pluralism,” that is, “sharing power with all those affected by policies and decisions.” By this Noddings means that in order to construct a rich political sphere that is representative of the myriad differences that make up the populace, we must recognize that the “public” is not one homogenous mass but rather one made up of people of eclectic backgrounds.

The thrust of Noddings’s argument concludes with the remark that “diversity, pluralism, and multiculturalism—rightly understood—protect us from our worst social/political impulses.” Although Noddings does not provide historical examples of such cases, one can easily make a link between her argument (which is an abstract truism) and, say, some of the real, harrowing events that verify her claim (like the Holocaust and the oft-overlooked yet no less horrifying genocide of the Chinese committed by Japan during WWII).

Yet one cannot help but question the limits of Noddings’s rosy vision of a public sphere where minorities and marginalized people can freely express their opinions. Noddings seems to accept the deliberative democracy envisioned by Jürgen Habermas in his book The Structural Transformation of the Public Sphere (1962)—that is, a democracy that functions healthily by taking into account the multitude of opinions of a non-homogenous populace.

There are many thinkers today who challenge such optimism. Chantal Mouffe is one such thinker, and her argument about agonistic pluralism—that is, pluralism where differences are the source of friction rather than deliberation—is convincing enough. After all, is it really possible to completely ignore conspicuous disparities between people of different cultural heritage and view each other as equal citizens who share a common heritage?

Mouffe’s antithesis to Habermas’s claims can also be applied to the argument put forth by Noddings—a deliberative democracy assumes goodwill and well-reasoned, cool-headed (yet passionate) deliberation amongst people of differing backgrounds. Yet Mouffe says that this is impossible; as human beings, we cannot help but recognize our differences, and it is through recognizing these differences in an agonistic way that we can really express, and hope to overcome, our grievances.

At present, the deliberative democracy camp and the agonistic pluralism camp have dug in their heels to challenge each other on ideological and conceptual grounds. All this goes to show that present structuralized forms of democracy are well overdue for a serious update, and that their flaws must once again be unearthed.

Once such flaws are unearthed and tended to, we may finally be able to water down the stereotypes and biases so prevalent in the world today and strive for a truly global, peaceful coexistence amongst people (and hopefully on a green earth too!)


The Faults of Reconciliation: Stuffing Words in a Dead Man’s Mouth

Saturday, October 23rd, 2010

In many newspapers, the term “conflict resolution” is often used interchangeably with “reconciliation.” However, while the former indicates an end to state-level discord, the latter is the object of a branch of peace studies that is rapidly gaining followers.

From an academic perspective, “reconciliation” connotes a deeper level of attitude-change amongst the parties involved. It is not merely a change of diplomatic stance but a deeper change in which simmering animosities are relieved, progressing to benign coexistence and finally, it is hoped, to a relationship that is mutually intimate and symbiotic.

Yet for all the buzz in the academic sphere, for all the hype amongst International Relations majors, reconciliation as a conceptual framework for establishing peace is and remains flawed. Reconciliation counts amongst its tools the seeking of justice, truth, restitution, reform, and oblivion (“time heals all wounds.”) These tools are used to ameliorate hostilities with the aim of normalizing and establishing amicable relations between the parties involved in conflict.

All of this sounds good in theory. But there remains something evidently disturbing about reconciliation.

To realize just what’s so disturbing about this notion, one must first question who is the most disenfranchised when conflicts occur.

Needless to say, it’s those who have lost their lives.

The crucial fault of a posteriori claims for justice after conflicts occur lies in the fact that we are essentially acting as agents for the dead; we are representing people who have lost the ability to voice their opinions. What we ought to bear in mind, then, hypothetically, is the rights of the dead.

Some of those killed may have sought vengeance, yet others more docile of heart may not have sought retributive justice. As survivors of conflict, we can only surmise what the dead (the most disenfranchised of all) would have wanted us to do.

But reconciliation is a scary science, and it’s a scary science because it justifies the act of putting words in a dead man’s mouth.

Considering what to do afterwards, a form of retrospective analysis, is by its nature subjective. This leaves a great margin of interpretation that the victor can capitalize upon. Hence the term “victor’s justice.”

To make matters worse, reconciliation’s benefits are dubious. The fact that conflict continues to occur despite the work being done on reconciliation shows that historical reconciliation, as a study, has no preventative qualities; the essence of the study is thus not an answer to conflict, or even a preventative measure, but rather a form of a posteriori opinion survey—a framework for how conflicts ought to be dealt with after they occur.

Besides, why do we need to reconcile? Are not the relatives of those who have been killed retaining the identity of their deceased by harboring deep resentment towards the aggressors?

As John, one of the main characters in Aldous Huxley’s masterpiece Brave New World, states, “I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”

To which another character replies, “In fact, you’re claiming the right to be unhappy.”

John’s response?

“All right then, I’m claiming the right to be unhappy.” (p.240).

Reconciliation may have lofty ideals, but killing resentment may be the same as ridding the world of the last remaining memories of the dead. Which, might I add, is a form of memory genocide.


Adam Smith’s Invisible Hand and Visible Misinterpretations

Monday, October 4th, 2010

Famously absentminded and an avid player of whist, he roamed the campuses of Glasgow and Oxford in the mid-18th century. This man was also frequently overheard talking to himself. Nonetheless, this eccentric of a man was a preeminent thinker who held the chair of Moral Philosophy at Glasgow University. He counted amongst his friends great intellectuals such as David Hume, D’Alembert, Turgot, Voltaire, and even Francois Quesnay.

This man, of course, is Adam Smith.

Today, Adam Smith is one of the most well-known figures in economics. Most textbooks begin with an excerpt from his book The Wealth of Nations, enlightening young students about how an invisible hand tends supply and demand towards equilibrium in the long run. Hailed as the founder of economics, Adam Smith has certainly made lasting contributions to modern economics.

But do we really understand Adam Smith and his insights? Can we be certain that we did not misunderstand him?

Ironically, Adam Smith did not see himself as a “founder of economics,” nor did he even consider The Wealth of Nations to be his greatest work.

The former claim is an easy one to verify: Smith had intended to dedicate The Wealth of Nations to Francois Quesnay, the French thinker who authored the Tableau Economique—an early model of the macroeconomy.

The latter claim can be deduced from Smith’s life history: considering his long-standing reputation as an authority on moral philosophy, it is quite probable that Adam Smith died thinking that his book The Theory of Moral Sentiments, published in 1759, was his greatest work.

So then, what about the bloated fanfare about his notion of the invisible hand? The term appears on page 572, where Smith writes,

“[The market participant] intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.”

When one reads The Wealth of Nations, one cannot help but wonder if Adam Smith wrote of the invisible hand in passing. After all, Smith makes sure to repeat important claims several times within his work to reinforce them. In contrast, the term “invisible hand” appears only once. Perhaps it’s just a metaphor.

It seems reasonable to conclude that Adam Smith considered himself more an authority on philosophy than a voice for economics, and we ought to entertain the possibility that the invisible hand is just overblown hype.

But with all of that said, Adam Smith’s Wealth of Nations is certainly a bible of macroeconomics. The book hints at Smith’s stance as a Rawlsian long before John Rawls established his notion of the “difference principle” and also contains Smith’s insights on population growth, statistics, and speculation—just to name a few. Smith also draws liberally from French thinkers, which makes The Wealth of Nations a rich and enjoyable read.

Students of economics would do well to read The Wealth of Nations—contenting oneself with the notion of an invisible hand may in fact lead to very visible misinterpretations not only of Adam Smith but of economic theory in general, especially if one’s attitude towards studying is characterized by lazy inquiry.

Thus the fork in the road between sound economics and vulgar economics seems to lie in whether or not an individual sees the continuity of economics as a discipline.


Who Started the Fire?

Monday, September 6th, 2010

In 1989, Billy Joel released his song “We Didn’t Start the Fire,” a song that catalogues the events that took place throughout Mr. Joel’s lifetime. The overall message of the song is clear: the baby boomer generation—of which Mr. Joel himself is a part—was not to be blamed for the downsides and shortcomings of society. After all, these societal ills were around before the baby boomers were born; thus, argues Mr. Joel, his generation should not be held accountable for historical responsibilities.

The song’s lyrics reference dozens of historical accounts, events, and people, from Marilyn Monroe to the failed Bay of Pigs Invasion, at a dizzying speed, adding to the songwriter’s case that societal events occur in such a manner that no particular generation can be singled out and found at fault.

Two decades have passed since Mr. Joel’s song charted at #1 on the U.S. Billboard Hot 100, and now a new generation, most popularly christened the children of the “digital age,” has come into existence. Though the finger-pointing has eased over the years, the question still remains: are historical responsibilities inter-generational?

This question was posed this year on August 25th at The University of Tokyo by Harvard professor of philosophy Dr. Michael Sandel. He asked the mostly Japanese audience of 300—picked by NHK from an 8,000-strong applicant pool—whether or not today’s generation of Japanese bears any responsibility for the crimes committed by previous generations.

Dr. Sandel, of course, was talking about the wartime atrocities committed by Japanese forces during World War II.

The audience found itself divided into two camps: one claimed that historical responsibilities are inter-generational, since each generation is built upon the achievements and faults of the previous one. In contrast, the other camp asserted that there are social paradigmatic shifts brought about by galvanizing change, which makes the idea of a Darwinian-Marxist model of an “evolutionary path of society” untenable.

This question is particularly interesting because the interpretations of the link between the concept of time and the concept of society clash most often at the country-level.

Both views of historical responsibility raised at Dr. Sandel’s lecture at The University of Tokyo—each based upon a particular interpretation of time—are mentioned by Benedict Anderson in his book “Imagined Communities.”

“The idea of a sociological organism moving calendrically through homogenous, empty time,” he observes, “is a precise analogue of the idea of nation, which is also conceived as a solid community moving steadily down (or up) history.” (p.26)

This lies in stark contrast to the opposing notion, namely a “more Foucauldian sense of abrupt discontinuities of consciousness.” (p.28).

The crucial fact that we must realize here is that historical responsibilities where the parties involved are at the nation-state level are mostly issues of restorative justice, and have little to do with true academic inquiry into the time-society link.

What we ought to be analyzing, therefore, is whether or not restorative justice really eases the pain of the victimized party, or rather leaves the victimized party grinning after he has successfully capitalized on the descendants of the relenting aggressor.

Historical texts seem to indicate that events in history underscore man’s inability to maintain amicable relations at the nation-state level. Large-scale wars have decreased in the past half-century only because of the deterrence offered by nuclear weapons and the birth of supranational organizations, however infantile and largely powerless they may still be.

The time-society link, and the larger philosophical context in which it ought to be analyzed, should first and foremost be examined by consulting the notions of collective memory advanced by Maurice Halbwachs, a French philosopher. Whereas the “history” shared by nation-states inevitably introduces politics, the shared experiences and collective memory of mankind as a species are factual and actual.

Perhaps, then, the notion of collective memory may be the nitrogen that finally extinguishes the “fire” Mr. Joel mentioned in his masterwork.


Why We Ought to Read

Wednesday, August 25th, 2010

Today, very few people feel the need to read the dusty, classical texts of ancient writers. Or perhaps a more accurate account is that they are unable to do so, what with the zeitgeist of contemporary life being one where people are overloaded with societal duties. It seems as if people today are often forced to multitask to incredible extremes. As Nicholas Carr points out in his book “The Shallows: What the Internet Is Doing to Our Brains,” we are becoming increasingly inept at focusing on any one particular task.

Technology has been no savior in this regard. As a matter of fact, as Carr has noted, technology is the prime culprit in preventing people from detaching themselves from society and engaging in leisurely activities.

So then, what do I mean by leisurely activities, and how do they pertain to reading? Well, the concept of “leisure” envisioned by, say, Hannah Arendt is a deliberate act of “contemplation.” Thus when we are robbed of time, robbed of time to reflect upon ourselves, robbed of time to read, we are losing the time we can spend to “contemplate,” or to be inquisitive about the world around us. When men are robbed of their ability to be inquisitive, they are effectively blinded to the faults of the established zeitgeist and are washed away with the times.

This is by no means a new phenomenon. Ray Bradbury pointed out in his book “Fahrenheit 451,” published in 1953, that though technology shaves time off chores, it also erodes people’s time to contemplate—for example, as dressing for the day becomes easier, “the man lacks just that much time to think while dressing at dawn, a philosophical hour, and thus a melancholy hour.” (p.74).

“Fahrenheit 451,” one of the most well-known novels depicting a dystopian society, tells of a chilling alternate history where firemen burn books. The story unfolds as a fireman proudly proclaims, “Monday burn Millay, Wednesday Whitman, Friday Faulkner, burn ‘em to ashes, then burn the ashes. That’s our official slogan.” (p.15)

As the story unfolds, the protagonist, a fireman by the name of Guy Montag, begins to have doubts about whether or not burning books will really increase society’s aggregate happiness, as he had been taught by his superiors. Montag is led to realize that books must have enormous significance when an old woman commits suicide upon learning that her books must be burnt. In a sudden bout of enlightened discourse, Montag proclaims, “I thought about the books. And for the first time I realized that a man was behind each one of the books. A man had to think them up. A man had to take a long time to put them down on paper […] we need to be really bothered once in a while. How long is it since you were really bothered? About something important, about something real?” (p.68-69).

Yet Montag is challenged by another character, who reminds him that “the public itself stopped reading of its own accord […] in any event, you’re a fool. People are having fun.” (p.113) In other words, if the public doesn’t care about grave issues, wouldn’t it be better to let them remain blissfully unaware of all societal woes?

Ray Bradbury was not the only author to pose this question to the reader. F. Scott Fitzgerald, the author credited with coining the term “Jazz Age,” has one of the characters in his most celebrated work, “The Great Gatsby,” point out that “the best thing a girl can be in this world [is] a beautiful little fool […] everything’s terrible anyhow, everybody thinks so—the most advanced people.”

So then, the question comes down to this: ought the public to read books and become aware of societal woes, or remain ignorant?

Ignorance is bliss.
Or is it?

One thing we can observe is that reading books is not a lost cause. As a matter of fact, many of Japan’s topmost business “elites” have read classic texts, consciously aware of the books’ significance. For example, Katsunobu Onogi, well known in Japan as the former president of the Long Term Credit Bank, was reputed to be a voracious reader. Gillian Tett, former bureau chief in Japan for the Financial Times, reveals in her book “Saving the Sun,” an account of Japan’s failure to modernize its financial institutions, that “In London, Onogi happily roamed around secondhand bookshops, devouring European and American works by Weber, the German political scientist, John Milton, the English author, and Charles Lamb, the English essayist who had written about the dangers of financial speculation and asset bubbles back in nineteenth-century London.”

So then books, through their ability to store the collective knowledge of mankind, have the ability to give us the wisdom to make better decisions.

Once again, in “Fahrenheit 451,” Montag is made aware of the significance of books when Faber, an academic-in-hiding, tells him, “the books are to remind us what asses and fools we are. They’re Caesar’s praetorian guard, whispering as the parade roars down the avenue, ‘Remember Caesar, thou art mortal.’ Most of us can’t rush around, talking to everyone, know all the cities of the world, we haven’t time, money or that many friends. The things you’re looking for, Montag, are in the world, but the only way the average chap will ever see ninety-nine per cent of them is in a book.” (p.112)

Even the most seemingly infallible of us make mistakes. But if we decide to retain our collective knowledge through books, which are tangible relics of our experiences and arguably our greatest treasure, we can lessen the severity of these mistakes, make better decisions, and create a system of informed decision-making handed down from one generation to the next.


Jihad, McWorld, and Bureaucratic Officialdom

Sunday, August 8th, 2010

In the March 1992 edition of the Atlantic Monthly, Benjamin Barber, an American political theorist, published his essay “Jihad vs. McWorld.” In it, he claims, with great brevity, that the forces of Jihad and the forces of McWorld are the two primary forces vying for the hearts and minds of men in today’s world.

In his opening paragraph he remarks: “Just beyond the horizon of current events lie two possible political futures – both bleak, neither democratic. The first is a retribalization of large swaths of humankind by war and bloodshed: a threatened Lebanization of national states in which culture is pitted against culture, people against people, tribe against tribe – a Jihad in the name of a hundred narrowly conceived faiths against every kind of interdependence, every kind of artificial social cooperation and civic mutuality. The second is being borne in on us by the onrush of economic and ecological forces that demand integration and uniformity and that mesmerize the world with fast music, fast computers, and fast food – with MTV, Macintosh, and McDonald’s, pressing nations into one commercially homogeneous global network: one McWorld tied together by technology, ecology, communications, and commerce. The planet is falling precipitantly apart AND coming reluctantly together at the very same moment.”

Thus, Barber sees McWorld and Jihad pitted against each other as they exert their influence across the four corners of the world. McWorld, a gruesome patchwork of multinational corporations trumpeting blind, voracious consumerism, has birthed a resurgence of corporate symbolism and a decline in the influence of traditional culture in our daily lives.

In contrast, Jihad is traditional culture turned avenging-angel-incognito, committing sporadic acts of violence as symbolic resistance against McWorld’s strengthening clutches upon our daily lives. Barber does not, however, see Jihad as justified; rather, he sees it as little more than a movement by myriad small groups trying to salvage whatever morsels of identity they can.

Barber puts it most succinctly when he observes: “neither McWorld nor Jihad is remotely democratic in impulse. Neither needs democracy; neither promotes democracy.”

It would be stating the obvious to say that Japan, as a nation, has largely sold itself to the seductive luminosity of what Barber calls “McWorld.” One stroll through Shibuya’s notorious pedestrian intersection will dizzy the unsuspecting tourist with a relentless bombardment of corporate symbolism.

Though Japan is recognized as a democratic country, in essence, vested interests, largely protected by bureaucratic red tape, keep the Japanese citizenry from garnering true political representation. Ever since the end of World War II, true political power has rested firmly in the hands of bureaucrats, and despite a strong albeit short-lived campaign by former prime minister Yukio Hatoyama to wrest control from them, bureaucrats are still calling the shots today.

If Japan’s bureaucrats were a well-intentioned bunch with noble ideals, true civil servants in every sense of the word, then perhaps a spoonful of bureaucratic paternalism might be palatable to the general public. But such has too often not been the case. This is perhaps most perceptible in the excessive and unnecessary public works projects that bureaucrats and their allied construction companies (through which they make hefty sums of money) have proposed and carried out over the years.

Of course, there have always been public protests, no matter how feeble and ignored by the media they may have been. The efforts, though, were mostly in vain. As Alex Kerr, a critic of contemporary Japan, notes in his book “Dogs and Demons,” “so weak is Japan’s democracy in the face of [bureaucratic] officialdom that in twenty-five out of thirty-three such cases, between 1995 and 1998, legislatures have refused to conduct referendums.”

In his book, Alex Kerr laments the damage that has been wrought to Japan’s environment. Kerr illustrates the ghastly reality of contemporary Japan in excruciatingly vivid detail: “Japan has become arguably the world’s ugliest country. To readers who know Japan from tourist brochures that feature Kyoto’s temples and Mount Fuji, that may seem a surprising, even preposterous assertion. But those who live or travel here see the reality: the native forest cover has been clear-cut and replaced by the industrial cedar, rivers are dammed and the seashore lined with cement, hills have been leveled to provide gravel fill for bays and harbors, mountains are honeycombed with destructive and useless roads, and rural villages have been submerged in a sea of industrial waste.”

Much of this damage is irreversible, or reversible only at very high cost. The public has been so detached from policymaking by bureaucratic officialdom, and so blinded to relevant matters by total immersion in the labyrinth of McWorld’s objectified symbols, that the flowering of true democracy in Japan seems the wishful thinking of a fool.

The kind of democracy Japan should strive to achieve, if it still has the capability to strive for democracy at all, is the kind of “Open Society” advanced by the late Karl Popper in his 1945 book “The Open Society and Its Enemies.” Popper, disillusioned with top-down government after his fellow socialist friends were shot dead in the name of the greater societal good, became a strong advocate of liberal democracy. According to Popper, it is the impossibility of predicting the future of society through any viable scientific means that necessitates a bottom-up approach to governmental decision-making, and therein lies the latent need for true democratic participation by all citizens.

Whether or not Japan’s citizens will garner true political representation depends on their ability to rally under the battle cry for true representation, which is possible only when they realize that they must represent themselves.


Inequality, Intelligence, and the Post Crisis World

Saturday, June 12th, 2010

Amartya Sen, winner of the 1998 Nobel Prize in Economics, writes in his book “Inequality Reexamined” that when we think about inequality, we first ought to ask ourselves “inequality of what?”

Until Sen posed this question, policymakers often talked of “a more equal society” in a rough, slipshod way. As Sen notes, it is pivotal to debate what kind of inequality one is focusing on and how it ought to be addressed.

Sen proposes that the best way to gauge socioeconomic inequalities within a particular society is by measuring each individual’s “capabilities,” the set of “functionings” available to that person. For example, a child starving in Africa and a man engaged in a hunger strike are both deprived of food, but the latter has the option to eat should he decide to do so, while the former does not. In this regard, the latter retains the “functioning” of being nourished, while the former does not enjoy such a “functioning.”

We see here that starvation has two distinct forms when analyzed through Sen’s “capability approach”—“chosen starvation” and “forced starvation.”

This observation is crucial when it comes to policymaking: especially when the policy is geared towards lessening a particular inequality. Combating a particular inequality is usually a problem of distribution, and this is where the notion of “capabilities” becomes particularly important. Though distributing food to poverty-stricken African countries may help, it doesn’t do much good to distribute food to people fasting in Islamic countries, because they’re engaged in a form of “chosen starvation” out of a religious belief.

This problem of prudent distribution is also a problem of “Pareto optimality,” named after the Italian economist Vilfredo Pareto. In short, a distribution is “Pareto optimal” when no further redistribution can make any individual better off without making someone else worse off. (Economists measure “better off” in terms of “utility,” a term akin to “happiness,” advanced by the fathers of utilitarianism, J. Bentham and J.S. Mill.)
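For readers who prefer the definition stated precisely, here is the standard textbook formalization of Pareto optimality, in notation of my own choosing rather than anything from Sen’s or Pareto’s texts:

```latex
% Standard welfare-economics formalization (assumed notation, not from the book).
% F   = the set of feasible allocations
% u_i = the utility of individual i under a given allocation
\[
  x^{*} \in F \text{ is Pareto optimal} \iff
  \neg\,\exists\, y \in F \;\text{such that}\;
  u_i(y) \ge u_i(x^{*}) \;\; \forall i
  \;\text{ and }\;
  u_j(y) > u_j(x^{*}) \;\text{ for some } j.
\]
```

In these terms, giving food to the forcibly starving is a Pareto improvement, since it raises their utility without lowering anyone else’s; giving it to those fasting by choice raises no one’s utility, which is the force of the “chosen” versus “forced” starvation distinction above.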

Today, the world is increasingly divided into “haves” and “have-nots.” This problem is most conspicuous in the “North-South problem,” the enormous wealth disparity between the global North and the global South, an ugly phenomenon that has deepened under globalization’s polarized, partisan governance.

It is time we develop a better system of global governance. It is time we establish new guidelines for economic prosperity in which every country is entitled to the nectars of growth. It is time we move beyond mere awareness of unequal global distribution of wealth, and move towards amending it.

As Amartya Sen observes in his essay “How to Judge Globalization,” globalization “deserves a reasoned defense, but it also needs reform.”

But without an intellectual infrastructure, in other words an academic infrastructure to inform global policymaking, any hope of better global governance and a better global distribution of wealth would largely be in vain.

The first realistic step toward establishing a post-Westphalian epistemic community, that is, a truly globalized intellectual brain-cloud that goes beyond the mere cathartic expression of today’s blogosphere, would be to network all academic institutions into one giant, intertwined forum.

Some progress has been made since Joseph Nye indirectly affirmed the growing importance of intellectual persuasion by coining the phrase “soft power” in his book “Bound to Lead: The Changing Nature of American Power” (1990). The relevance of intellectual persuasion has gradually risen over the past several decades, and there are growing signs of change within the ranks of many government bodies.

A prominent voice in international relations, Akihiko Tanaka agrees with Nye in his book “The Post Crisis World” that there is now a much greater emphasis on “soft power” rather than “hard power” as the political realm shifts towards intellectual brawling and the economy also shifts towards knowledge-intensive industries.

For example, Obama’s cabinet, which some have branded “Obama University” because Obama has amassed an impressive echelon of brains wielding M.D.’s and Ph.D.’s, shows early signs of the growing brain-clouds that will soon hover over much of the political realm.

Unfortunately, Obama’s example has not been followed by countries such as Japan, whose political leaders have shown a marked inability to lead through intellectual discourse. Prime Minister Hatoyama shocked the world when he resigned, the sixth Japanese prime minister to do so in five years.

What Japan and much of the world needs today is to follow Obama’s example and bring about a renewed intellectual discourse on foreign policy, one that emphasizes the establishment of a global public sphere to tackle tomorrow’s problems.