Schizophrenia and Eastern religions

A Buddhist Zen master who lives in Tokyo wishes to fly to Kyoto on a private plane. When he arrives at the airport, he is offered two planes: one that is faster but aeronautically questionable, and one that is slower but aeronautically sound. He is informed by the airport authorities that the faster plane violates some of the basic principles of aeronautical mechanics, and the slower plane does not.

The aeronautical or technological deficiencies of the faster plane represent underlying mistakes in physics. The Zen master, in his teaching, asks his disciples questions the right answers to which require them to embrace contradictions. To do so is the path to wisdom about reality, which has contradictions at its core. But the Zen master does not waver from upholding this teaching about reality while, at the same time, he chooses the slower, aeronautically sounder and safer plane because it accords with a technology and a physics that makes correct judgments about a physical world that abhors contradictions.

If there is scientific truth in technology and physics, then the unity of the truth should require the Zen master to acknowledge that his choice of the slower but safer plane means that he repudiates his Zen doctrine about the wisdom of embracing contradictions.

He does not do so and remains schizophrenic, with the truth of Zen doctrine and the truth of technology and physics in logic-tight compartments. On what grounds or for what reasons does he do this if not for the psychological comfort derived from keeping the incompatible “truths” in logic-tight compartments? Can it be that the Zen master has a different meaning for the word “truth” when he persists in regarding the Zen doctrine as true even though it would appear to be irreconcilable with the truth of technology and physics he has accepted in choosing the slower plane? Can it be that this persistence in retaining the Zen doctrine does not derive from its being “true” in the logical sense of truth, but rather in a sense of “true” that identifies it with being psychologically “useful” or “therapeutic”?

In other words, Zen Buddhism as a religion is believed by this Zen master because of its psychological usefulness in producing in its believers a state of peace or harmony. In my judgment, this view of the matter does not reduce or remove the schizophrenia of Zen Buddhism.

Mortimer J. Adler, Truth in Religion, pp. 75-6

The end is near!

From Fox News:

In a sign of pessimism about humanity’s future, scientists today set the hands of the infamous “Doomsday Clock” forward one minute from two years ago.

“It is now five minutes to midnight,” Bulletin of the Atomic Scientists (BAS) director Kennette Benedict announced today (Jan. 10) at a press conference in Washington, D.C.

That represents a symbolic step closer to doomsday, a change from the clock’s previous mark of six minutes to midnight, set in January 2010.

The clock is a symbol of the threat of humanity’s imminent destruction from nuclear or biological weapons, climate change and other human-caused disasters. In making their deliberations about how to update the clock’s time, the Bulletin of the Atomic Scientists focused on the current state of nuclear arsenals around the globe, disastrous events such as the Fukushima nuclear meltdown, and biosecurity issues such as the creation of an airborne H5N1 flu strain.

The Doomsday Clock came into being in 1947 as a way for atomic scientists to warn the world of the dangers of nuclear weapons. That year, the Bulletin set the time at seven minutes to midnight, with midnight symbolizing humanity’s destruction. By 1949, it was at three minutes to midnight as the relationship between the United States and the Soviet Union deteriorated.

In 1953, after the first test of the hydrogen bomb, the Doomsday Clock ticked to two minutes until midnight.

The Bulletin — and the clock — were at their most optimistic in 1991, when the Cold War thawed and the United States and Russia began cutting their arsenals. That year, the Bulletin set the clock at 17 minutes to midnight.

From then until 2010, however, it was a gradual creep back toward destruction, as hopes of total nuclear disarmament vanished and threats of nuclear terrorism and climate change reared their heads. In 2010, the Bulletin found some hope in arms reduction treaties and international climate talks and nudged the minute hand of the Doomsday Clock back to six minutes from midnight from its previous post at five to midnight.

With today’s decision, the Bulletin repudiated that optimism. The panel considers a mix of long-term trends and immediate events in the decision-making process, said Benedict. Trends might include factors like improved solar energy technology to combat climate change, she said, while political events such as the recent United Nations climate meeting in Durban play a role as well. This year, the Fukushima nuclear disaster made a big impression.

“We’re trying to weigh whether that was a wake-up call, whether it will make people take a closer look at this new and very powerful technology, or whether people will go on with business as usual,” Benedict told LiveScience on Monday in an interview before the announcement of the “doomsday time” decision.

Other factors that played into the decision included the growing interest in nuclear power from countries such as Turkey, Indonesia and the United Arab Emirates, Benedict said.

The Bulletin panel found that despite hopes of global agreements about nuclear weapons, nuclear power and climate change in 2010, little progress has been made.

“The world still has over 20,000 deployed nuclear weapons with enough power to destroy the world’s inhabitants many times over,” said Lawrence Krauss, an Arizona State University professor and the co-chair of the BAS Board of Sponsors. “We also have the prospect of nuclear weapons being used by terrorist non-state actors.”

Likewise, talks on climate change have resulted in little progress, the panel found. In fact, politics seemed to trump science in discussions over the last two years, said Robert Socolow, a Princeton professor of mechanical and aerospace engineering and a member of the Bulletin’s Science and Security board.

“We need the political leadership to affirm the primacy of science as a way of knowing, or problems will be far worse than they are already,” Socolow said.

And, for those who disagree with this move, here’s why:

Algorithm could untangle authors of Torah

From ScienceBlog:

In both Jewish and Christian traditions, Moses is considered the author of the Torah, the first five books of the Bible. Scholars have furnished evidence that multiple writers had a hand in composing the text of the Torah. Other books of the Hebrew Bible and of the New Testament are also thought to be composites. However, delineating these multiple sources has been a laborious task.

Now researchers have developed an algorithm that could help to unravel the different sources that contributed to individual books of the Bible. Prof. Nachum Dershowitz of Tel Aviv University’s Blavatnik School of Computer Science, who worked in collaboration with his son, Bible scholar Idan Dershowitz of Hebrew University, and Prof. Moshe Koppel and Ph.D. student Navot Akiva of Bar-Ilan University, says that their computer algorithm recognizes linguistic cues, such as word preference, to divide texts into probable author groupings.

By focusing exclusively on writing style instead of subject or genre, Prof. Dershowitz and his colleagues sidestepped several methodological hurdles that hamper conventional Bible scholarship. These issues include a potential lack of objectivity in content-based analysis and complications caused by the multiple genres and literary forms found in the Bible — including poetry, narrative, law, and parable. Their research was presented at the 49th Annual Conference of the Association for Computational Linguistics in Portland.
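The paper itself isn’t reproduced here, but the general idea — sorting text segments into probable author groupings by word preference rather than by subject matter — can be illustrated with a small sketch. Everything below is illustrative: the toy segments, the marker-word list, and the two-cluster assumption are my own stand-ins, not the researchers’ actual method, features, or data.

```python
# Illustrative sketch only: cluster short text segments into two putative
# "author" groups using word-preference features (which synonym a writer
# habitually favors), deliberately ignoring topic. This is NOT the
# Dershowitz/Koppel/Akiva algorithm, just a toy of the stylometric idea.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical segments; in practice these would be verses or larger units.
segments = [
    "and he made a garment for the lad",
    "and he fashioned a robe for the boy",
    "the lad took the garment and went forth",
    "the boy wore the robe and departed",
]

# Marker words chosen as synonym pairs a single writer tends to prefer
# consistently -- a stand-in for the "word preference" cues in the article.
markers = ["garment", "robe", "lad", "boy", "made", "fashioned"]

# Count only the marker words, so the clustering reflects stylistic choice
# rather than what the passage happens to be about.
vectorizer = CountVectorizer(vocabulary=markers)
X = vectorizer.fit_transform(segments)

# Two clusters encodes a working assumption of two underlying sources.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for segment, label in zip(segments, labels):
    print(f"source {label}: {segment}")
```

The actual research worked on Hebrew text with far richer linguistic features; the sketch only shows why style-based cues sidestep the content-analysis objections mentioned above.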

Continue reading…

Are jobs obsolete?

I don’t generally offer much commentary on the articles that I post, allowing them to speak for themselves, but I have to say that I agree emphatically and completely with this one. A modern, technological form of Distributism is really the only reasonable, just, and practical economic model for the world we are steadily moving toward and, in many respects, already live in.

From CNN.com:

The U.S. Postal Service appears to be the latest casualty in digital technology’s slow but steady replacement of working humans. Unless an external source of funding comes in, the post office will have to scale back its operations drastically, or simply shut down altogether. That’s 600,000 people who would be out of work, and another 480,000 pensioners facing an adjustment in terms.

We can blame a right wing attempting to undermine labor, or a left wing trying to preserve unions in the face of government and corporate cutbacks. But the real culprit — at least in this case — is e-mail. People are sending 22% fewer pieces of mail than they did four years ago, opting for electronic bill payment and other net-enabled means of communication over envelopes and stamps.

New technologies are wreaking havoc on employment figures — from E-ZPasses ousting toll collectors to Google-controlled self-driving automobiles rendering taxicab drivers obsolete. Every new computer program is basically doing some task that a person used to do. But the computer usually does it faster, more accurately, for less money, and without any health insurance costs.

We like to believe that the appropriate response is to train humans for higher level work. Instead of collecting tolls, the trained worker will fix and program toll-collecting robots. But it never really works out that way, since not as many people are needed to make the robots as the robots replace.

And so the president goes on television telling us that the big issue of our time is jobs, jobs, jobs — as if the reason to build high-speed rails and fix bridges is to put people back to work. But it seems to me there’s something backwards in that logic. I find myself wondering if we may be accepting a premise that deserves to be questioned.

I am afraid to even ask this, but since when is unemployment really a problem? I understand we all want paychecks — or at least money. We want food, shelter, clothing, and all the things that money buys us. But do we all really want jobs?

We’re living in an economy where productivity is no longer the goal, employment is. That’s because, on a very fundamental level, we have pretty much everything we need. America is productive enough that it could probably shelter, feed, educate, and even provide health care for its entire population with just a fraction of us actually working.

According to the U.N. Food and Agriculture Organization, there is enough food produced to provide everyone in the world with 2,720 kilocalories per person per day. And that’s even after America disposes of thousands of tons of crops and dairy just to keep market prices high. Meanwhile, American banks overloaded with foreclosed properties are demolishing vacant dwellings to get the empty houses off their books.

Our problem is not that we don’t have enough stuff — it’s that we don’t have enough ways for people to work and prove that they deserve this stuff.

Jobs, as such, are a relatively new concept. People may have always worked, but until the advent of the corporation in the early Renaissance, most people just worked for themselves. They made shoes, plucked chickens, or created value in some way for other people, who then traded or paid for those goods and services. By the late Middle Ages, most of Europe was thriving under this arrangement.

The only ones losing wealth were the aristocracy, who depended on their titles to extract money from those who worked. And so they invented the chartered monopoly. By law, small businesses in most major industries were shut down and people had to work for officially sanctioned corporations instead. From then on, for most of us, working came to mean getting a “job.”

The Industrial Age was largely about making those jobs as menial and unskilled as possible. Technologies such as the assembly line were less important for making production faster than for making it cheaper, and laborers more replaceable. Now that we’re in the digital age, we’re using technology the same way: to increase efficiency, lay off more people, and increase corporate profits.

While this is certainly bad for workers and unions, I have to wonder just how truly bad it is for people. Isn’t this what all this technology was for in the first place? The question we have to begin to ask ourselves is not how do we employ all the people who are rendered obsolete by technology, but how can we organize a society around something other than employment? Might the spirit of enterprise we currently associate with “career” be shifted to something entirely more collaborative, purposeful, and even meaningful?

Instead, we are attempting to use the logic of a scarce marketplace to negotiate things that are actually in abundance. What we lack is not employment, but a way of fairly distributing the bounty we have generated through our technologies, and a way of creating meaning in a world that has already produced far too much stuff.

The communist answer to this question was just to distribute everything evenly. But that sapped motivation and never quite worked as advertised. The opposite, libertarian answer (and the way we seem to be going right now) would be to let those who can’t capitalize on the bounty simply suffer. Cut social services along with their jobs, and hope they fade into the distance.

But there might still be another possibility — something we couldn’t really imagine for ourselves until the digital era. As Jaron Lanier, a pioneer of virtual reality, recently pointed out, we no longer need to make stuff in order to make money. We can instead exchange information-based products.

We start by accepting that food and shelter are basic human rights. The work we do — the value we create — is for the rest of what we want: the stuff that makes life fun, meaningful, and purposeful.

This sort of work isn’t so much employment as it is creative activity. Unlike Industrial Age employment, digital production can be done from the home, independently, and even in a peer-to-peer fashion without going through big corporations. We can make games for each other, write books, solve problems, educate and inspire one another — all through bits instead of stuff. And we can pay one another using the same money we use to buy real stuff.

For the time being, as we contend with what appears to be a global economic slowdown by destroying food and demolishing homes, we might want to stop thinking about jobs as the main aspect of our lives that we want to save. They may be a means, but they are not the ends.