Thursday, October 31, 2013

Mechanical robots of the past...they were supposed to be beneficial



The perfect bridge partner, Mr. Televox, the mechanical man, makes a fourth at the Huntington Hotel during the convention of the Pacific Coast Electrical Association in 1928. 

  "The Automatons of Yesteryear"

by

Katie Hiler

October 28th, 2013

The New York Times

The metallic monster had slanted, yellow eyes, skin like a suit of armor, and the halting, unsteady movements of one of Frankenstein’s creations. On a bright September day in 1928, a curious crowd of onlookers got their first glimpse of Eric the Robot. The mechanical man rose from his chair, stretched out an arm for silence, and made a speech, of which few people understood more than a few words. In a special cable to The New York Times, one reporter remarked on the event, held at the exhibition of the Model Engineer’s Society of London: “Of all orators, Eric seemed the coldest and most lacking in magnetism.”

While he may have been the first robot many Londoners had ever seen, Eric was just one of many humanoid machines on the scene in the early 20th century. Advances in technology and engineering had finally caught up with a human need to play God and create a man from metal parts – a desire stretching back at least as far as da Vinci’s “mechanical knight.”

In 1927, Westinghouse Electric Corporation introduced the world to Televox, a robot that could obey a human voice. Mr. Televox, as the public knew him, was followed by a number of other simple robots, including a humanoid known as Elektro (and his canine robo-companion, Sparko), which was exhibited at the 1940 World’s Fair. Many of these early robots were designed to perform crowd-pleasing tricks like smoking a cigarette, firing a revolver or whistling a tune. Most were just made of gears and levers, voiced and controlled by their Ozian creators behind the curtain.

But to R.J. Wensley, the engineer behind Televox, the modern robot was capable of more than just trivial entertainment – it could be put to real use. Televox was in many ways the product of a new approach to industry. The Industrial Revolution had nurtured the mind-set that machines could do some jobs better than men — more quickly, more accurately, and with greater safety.

At least three of Mr. Wensley’s robots were employed as substitutes for watchmen at reservoir substations across Washington, D.C. And he wasn’t ready to stop there. “In time to come the only work to be done by men and women will be that which requires faculties of discernment, discretion, and judgment,” Wensley was quoted as saying in 1933. “All other work – anything repetitive, routine, standardized – can better be done by machines.”

Yet even as technology was making robotic stand-ins for our most boring jobs a reality, scientists and engineers were already wondering whether it would be possible to invent a machine mind that could learn the human faculties that Mr. Wensley said set us apart.

Of course, to attach a robot to the computers of the 1960s — mainframes the size of entire rooms — would be laughable. And the human brain itself still resembled the dark, unexplored surface of a distant planet. The robotic brain would have to develop for years and years before the robotic body could merge with it.

Still, the unbounded imagination of early robot makers, and their sometimes deluded faith in the technology of their time, laid the groundwork for today’s research. Many of the pioneers likely followed the credo of Thomas A. Edison, a father figure to so many American inventors, when he said in 1915: “To invent, you need a good imagination and a pile of junk.”



Assorted robots

Happy 2013 Halloween


Use the search engine for many Halloween entries.

A "peculiarly modern preoccupation"...identity...a new book


Martha C. Nussbaum wrote...

Peter Brooks has written a splendid meditation on the search for the self: erudite, illuminating, and eloquent. He shows how this search leads to an obsessive focus on markers of identity and stories of imposture. Rousseau, Balzac, Stendhal, Proust, and Freud are central interlocutors, but Brooks makes reference to a wide range of other texts, and deftly weaves developments in U.S. law into his discussion.



Enigmas of Identity

by

Peter Brooks

ISBN-10: 069115158X
ISBN-13: 978-0691151588

Wednesday, October 30, 2013

Value of theology


"Study Theology, Even If You Don't Believe in God"

This lost liberal art encourages scholars to understand history from the inside out.

by

Tara Isabella Burton

October 30th, 2013

The Atlantic

When I first told my mother—a liberal, secular New Yorker—that I wanted to cross an ocean to study for a bachelor’s degree in theology, she was equal parts aghast and concerned. Was I going to become a nun, she asked in horror, or else one of “those” wingnuts who picketed outside abortion clinics? Was I going to spend hours in the Bodleian Library agonizing over the number of angels that could fit on the head of a pin? Theology, she insisted, was a subject by the devout, for the devout; it had no place in a typical liberal arts education.

Her view of the study of theology is far from uncommon. While elite universities like Harvard and Yale offer vocational courses at their divinity schools, and nearly all universities offer undergraduate majors in the comparative study of religions, few schools (with the exceptions of historically Catholic institutions like Georgetown and Boston College) offer theology as a major, let alone mandate courses in theology alongside other “core” liberal arts subjects like English or history. Indeed, the study of theology has often run afoul of the legal separation of church and state. Thirty-seven U.S. states have laws limiting the spending of public funds on religious training. In 2004, the Supreme Court case Locke v. Davey upheld the decision of a Washington State scholarship program to withhold promised funding from an otherwise qualified student after learning that he had decided to major in theology at a local Bible college.

Even in the United Kingdom, where secular bachelor's programs in theology are more common, prominent New Atheists like Richard Dawkins have questioned their validity in the university sphere. In a 2007 letter to the editor of The Independent, Dawkins argues for the abolishment of theology in academia, insisting that “a positive case now needs to be made that [theology] has any real content at all, or that it has any place whatsoever in today's university culture.”

Such a shift, of course, is relatively recent in the history of university education. Several of the great Medieval universities, among them Oxford, Bologna, and Paris, developed in large part as training grounds for men of the Church. Theology, far from being anathema to the academic life, was indeed its central purpose: It was the “Queen of the Sciences,” the field of inquiry which gave meaning to all others. So, too, several of the great American universities. Harvard, Yale, and Princeton alike were founded with the express purpose of teaching theology—one early anonymous account of Harvard's founding speaks of John Harvard's “dreading to leave an illiterate Ministry to the Churches”, and his dream of creating an institution to train future clergymen to “read the original of the Old and New Testament into the Latin tongue, and resolve them logically.”

Universities like Harvard, Yale, and Princeton no longer exist, in part or in whole, to train future clergymen. Their purpose now is far broader. But the dwindling role of theology among the liberal arts is a paradigmatic example of dispensing with the baby along with the bathwater.

Richard Dawkins would do well to look at the skills imparted by the Theology department of his own alma mater, Oxford (also my own). The BA I did at Oxford was a completely secular program, attracting students from all over the religious spectrum. My classmates included a would-be priest who ended up an atheist, as well as a militant atheist now considering the priesthood. During my time there, I investigated Ancient Near Eastern building patterns to theorize about the age of a settlement; compared passages of the gospels (in the original Greek) to analogous passages in the Jewish wisdom literature of the 1st century BC; examined the structure of a 14th-century Byzantine liturgy; and read The Brothers Karamazov as part of a unit on Christian existentialism. As Oxford's Dr. William Wood, a University Lecturer in Philosophical Theology and my former tutor, puts it: “theology is the closest thing we have at the moment to the kind of general study of all aspects of human culture that was once very common, but is now quite rare.” A good theologian, he says, “has to be a historian, a philosopher, a linguist, a skillful interpreter of texts both ancient and modern, and probably many other things besides.” In many ways, a course in theology is an ideal synthesis of all other liberal arts: no longer, perhaps, “Queen of the Sciences,” but at least, as Wood terms it, “Queen of the Humanities.”

Yet, for me, the value of theology lies not merely in the breadth of skills it taught, but in the opportunity it presented to explore a given historical mindset in greater depth. I learned to read the Bible in both Greek and Hebrew, to analyze the minutiae of language that allow us to distinguish “person” from “nature,” “substance” from “essence.” I read “orthodox” and “heretical” accounts alike of the nature of the Godhead, and learned about the convoluted and often arbitrary historical processes that delineated the two.

Such precision may seem—to the religious person and agnostic alike—no more useful than counting the number of angels on the head of a pin. But for me, it allowed access into the fundamental building blocks of the mentality, say, of a 12th-century French monk, or a mystic from besieged Byzantium. While the study of history taught me the story of humanity on a broader scale, the study of theology allowed me insight into the minds and hearts, fears and concerns, of those whose circumstances were so wildly different from my own. The difference between whether—as was the case in the Arian controversy of the fourth century AD—the Godhead should be thought of as powerful first, and loving second, or loving first and powerful second, might seem utterly pedantic in a world where plenty of people see no need to think about God at all. But when scores of people were willing to kill or die to defend such beliefs—hardly a merely historical phenomenon—it's worth investigating how and why such beliefs infused all aspects of the world of their believers. How does that 12th-century French monk's view of the nature of God affect the way he sees himself, his relationship with others, his relationship with the natural world, his relationship with his own mortality? How does that Byzantine mystic conceive of space and time in a world he envisions as imbued with the sacred? To find such questions integral to any study of the past is not restricted to those who agree with the answers. To study theology well requires not faith, but empathy.

If history and comparative religion alike offer us perspective on world events from the “outside,” the study of theology offers us a chance to study those same events “from within”: an opportunity to get inside the heads of those whose beliefs and choices shaped so much of our history, and who—in the world outside the ivory tower—still shape plenty of the world today. That such avenues of inquiry have virtually vanished from many of the institutions where they were once best explored is hardly a triumph of progress or of secularism. Instead, the absence of theology in our universities is an unfortunate example of blindness—willful or no—to the fact that engagement with the past requires more than mere objective or comparative analysis. It requires a willingness to look outside our own perspectives in order to engage with the great questions—and questioners—of history on their own terms. Even Dawkins might well agree with that.

Martian samples...great idea...better than manned missions


"Mars Sample Return Container"

by

Keith Cowing

October 29th, 2013

SpaceREF

This spherical container has been engineered to house the most scientifically valuable cargo imaginable: samples brought back from the Red Planet.

Still probably many years in the future and most likely international in nature, a Mars sample-return mission is one of the most challenging space ventures possible for robotic exploration. A robust, multifunctional sample container is an essential link in the long technical chain necessary to make such a mission successful.

Weighing less than 5 kg, this 23 cm-diameter sphere is designed to keep martian samples in pristine condition below -10°C throughout their long journey back to Earth.

According to current mission scenarios, the sample container must first be landed on Mars, along with a rover to retrieve a cache of samples carefully selected by a previous mission. The container seen here hosts 11 sealable receptacles, including one set aside for a sample of martian air.

Then, once filled, it will be launched back up to Mars orbit. There it will remain for several days until a rendezvous spacecraft captures it. To ease the process of rendezvous, the sample container carries a radio emitter and retroreflectors for close-up laser ranging.

Before being returned to Earth, the container will be enclosed in another larger bio-sealed vessel to ensure perfect containment of any returned martian material. This container will then be returned to Earth for a high-speed entry.

"Because there is the potential, however remote, that the samples contain alien life, we have to comply with strict planetary protection protocols not to bring them into contact with Earth's biosphere," explained Benoit Laine, Head of ESA's Thermal Analysis and Verification section, who oversaw the sample container project.

"In effect, the parachute technology is not reliable enough - which means the container must be able to withstand a crash landing without parachute.

"The mission design therefore does not include any parachute, and the capsule literally falls from Mars onto Earth, decelerated only by the pressure on the heatshield through Earth's atmosphere, and by the impact at landing."


While the sample container is a proof-of-concept design rather than actual mission hardware, it is fully functional and has undergone environmental testing, including simulated thermal conditions and a 400 g shock test.

"This challenging project drew on the expertise of multiple ESA specialists," added Benoit. "It incorporates mechanical systems covering structural, thermal and mechanisms engineering but also communications, antennas and power - it has of course to incorporate a highly reliable battery."

The prime contractor for the project, which was supported through ESA's Aurora programme, was French company Mecano I&D. Activities to prepare for a Mars sample mission continue, including a refinement of the sample container design, coordinated by the future missions preparatory office of ESA's Directorate of Science and Robotic Exploration.

"Nope" to time travel says Ned Markosiah...sorry Brian Greene


"Visiting philosopher declares time travel to past impossible"

by

Ellen Coogan

October 30th, 2013

The Crimson White [University of Alabama]

Fanciful ideas of time travel to the past were dispelled by modern philosopher Ned Markosian at Tuesday night’s “Philosophy Today” lecture.

“So my argument is that time travel to the past is not possible if a certain theory of time is true. Namely, the dynamic theory of time, which I do think is true,” Markosian, a philosophy professor at Western Washington University, said. “So, I do think time travel to the past is not possible. As for time travel to the future, that could still be possible.”

The talk compared two competing schools of thought about time: static time and dynamic time.

According to static time, all times are equally real. So what happened a few days ago still exists in the same way as what is happening right now.

According to dynamic time, time is like a moving spotlight that shines on certain moments, and only the moments that are under the light exist. The spotlight keeps moving and cannot be stopped, just as time cannot be stopped.

Markosian said time travel to the past is impossible according to dynamic time theory for several reasons.

The first relates to theories of time travel involving scenarios where external time and personal time do not line up. Personal time is the time as experienced by the time traveler, and external time is the time of the rest of the world. His argument states that personal time is not really time at all; it is just a different sequential order without metaphysical implications.

His second reason for why time travel to the past is impossible is that present events cannot be causes for past events. As Markosian said, one cannot push a button today that makes something happen in 1900.

“I’ve always been interested in the philosophy of time and issues like the static theory versus the dynamic theory, and then I think, like literally everyone, I find the idea of time travel fascinating, so that got me wondering, ‘Yeah, but is it really possible?’, and then applying some stuff about the different theories about the nature of time and then came up with the conclusion that time travel to the past is not possible if the dynamic theory is true,” Markosian said.

Reactions to the talk ranged from philosophical to emotional.

“I was kind of depressed to find out that time travel wasn’t real because it sounds super cool,” Damon Stanley, a junior majoring in math and philosophy, said.

Other students were intrigued by the theories of time discussed.

“I think I’d want to be a static time traveler because that seems kind of cool, but I think the intuitionism behind dynamic time travel is ballin’,” Matt O’Brien, a junior majoring in philosophy and economics, said. “I like intuitions as far as like, ‘Hey, go with what you think’s right until reason overruns it,’ and so right now, my intuition doesn’t go with static time, it goes with dynamic time, and I don’t have enough of a reason to want to time travel so bad that I should become a static time theorist.”

Markosian said if time travel were possible, he would like to go back in time to meet the ancient Greek philosophers Socrates, Plato and Aristotle.

“The difficulty would be learning ancient Greek,” he said.

Grover's Mill redux...10-30-38


The Writer's Almanac...

It was on this day in 1938 that a cylindrical Martian spaceship landed in Grover's Mill, New Jersey, and began incinerating onlookers with an alien heat ray, an event that was covered by the Columbia Broadcasting System and its affiliated stations, and that caused widespread alarm and mass hysteria. News of the attack interrupted a program of live dance music, the reports growing more frequent and ominous as the hour wore on, until the New Jersey state militia had been obliterated and three Martian tripod battle machines began ravaging the landscape.

Of course, the broadcast was a hoax, a cleverly crafted Halloween prank composed of simulated on-the-spot news bulletins based on the H.G. Wells novel, The War of the Worlds. The broadcast had been prefaced with the announcement that what would follow was a dramatic presentation by Orson Welles and the Mercury Theatre on the Air, but many listeners missed the introduction and panic ensued. People in New Jersey fled the area convinced they could smell poison gas and see fiery flashes from the tripods in the distance.

It has been estimated that of the 6 million people who heard the original broadcast, more than 1.5 million believed it to be true and more than a million others were genuinely terrified, and contemporary accounts tell of police stations swamped with calls. Within a month there were more than 12,000 newspaper articles on the broadcast and its impact, and as far away as Germany Adolf Hitler is said to have cited it as "evidence of the decadence and corrupt condition of democracy." Many listeners sued the network for mental anguish, claims that were all denied save one for a pair of size nine black shoes, by a man from Massachusetts who complained he'd had to spend what he'd saved for new shoes to escape the invading Martians. Welles insisted that that claim be reimbursed.

Welles and the Mercury Theatre were censured, but the broadcast secured Welles an instant, notorious fame. In 1988, Grover's Mill, New Jersey, celebrated its hour of fame by installing a Martian Landing Site monument near Grover's Mill Pond, not far from the remains of a water tower shot to pieces by its frightened residents 50 years before.


"Terror spread by radio play! Read the Telegraph's 1938 report on Orson Welles's War of the Worlds broadcast"

In New York, 75 years ago, thousands of people listening to Orson Welles' reading of The War of the Worlds were convinced it was reality. Here is the Telegraph's 1938 report

October 30th, 2013

The Telegraph

The Federal authorities to-day began investigation of the most amazing episode in the history of broadcasting, the dramatisation of H. G. Wells’s novel “The War of the Worlds,” which last night flung thousands into a state of panic, and convinced them that the United States was being invaded by a host of supermen from Mars.

The names of American cities were substituted for the original place-names, and Mr. Wells today cabled his American representative, Mr. Jacques Chambrun, stating that “totally unwarranted” liberties had been taken. He also expressed deep concern at the effect of the broadcast.

Mr. Chambrun has placed the matter in the hands of his lawyers.

He said to me: “At no time was it explained that this dramatisation would take liberties that amounted to complete rewriting of the novel, rendering it an entirely different story.

“Mr. Wells and I consider that in doing this the Columbia Broadcasting System and Mr. Orson Welles – the producer and principal actor in the play – far overstepped their rights. I believe that the Columbia Broadcasting System should make a full retractation.”


DEATH RAYS

The play was presented as a series of news bulletins and commentaries. Monsters as tall as skyscrapers and armed with death rays were described by the announcer.

“One of the gigantic creatures,” he said, “is straddling the Pulaski skyway. [The road carried on a viaduct out of Jersey City.] We warn the people to evacuate New York City as the Martians approach.”

Fantastic as it may seem, scores of reports from every part of the country bear witness to the fact that the wildest fear took hold of hundreds of households from Rhode Island to California.

Mr. Welles was stunned by the results of his efforts because, he said this afternoon, he hesitated to put the show on, thinking “it might bore people.”

The Columbia Broadcasting System, no less perturbed, points out that the programme was interrupted four times with reminders to listeners that it was all just a play from the New York studio.

WAVE OF TERROR

The panic was caused by the fact that many persons who tuned in casually became convinced that they were listening to authentic news bulletins describing the wholesale destruction of cities in Eastern America. A wave of terror swept the nation from coast to coast and produced results without precedent.

Thousands of people in New York and New Jersey fled into the streets. Scores were treated in the hospitals for shock, and many suffered heart attacks.

When the truth became known the victims of this mass hysteria were naturally highly indignant and typical expressions of opinion to-day were, “rotten,” “asinine,” “criminal,” “disgraceful,” and “a public outrage.”

Never has an authentic news announcement brought such startling reaction. Some people ran about affirming that they had actually seen the invading Martians and heard terrific explosions.

People who believed these strange events had come to pass had only one thought – to escape. They snatched a few belongings, packed their families into their cars, and started driving wildly towards the open spaces. Policemen on traffic duty were dumbfounded by the spectacle of cars rushing past well in excess of the speed limit.

PRAYERS IN CHURCHES

Terror took hold of certain districts in Harlem, New York’s great negro quarter, where people either fled or crowded into the churches to pray. Residents of many Southern cities gathered in the streets to pray. At Indianapolis a woman ran screaming into a church where evening service was being held and shouted: “New York has been destroyed. It’s the end of the world. Go home and prepare to die.”

One man arrived home to find his wife in the act of taking poison, and another man, who had gone to Reno for divorce proceedings, immediately took a ’plane for New York to give any help he could to his wife.

Police stations and newspaper offices were swamped by telephone calls. Philadelphia police received 3,000 calls in an hour, and the Philadelphia radio station, which relayed the programme, received 4,000. New York police received 2,000 calls in 15 minutes.

Unable to get the Columbia studios on the ’phone, New York police sent a motorcyclist to find out what was happening.

SEARCH FOR “INJURED”

In New Jersey traffic jams were caused by the press of people, who rushed into the streets, and police cars searched for persons reported to be injured by the falling meteor in which the Martians were supposed to have landed.

Some people believed that the destruction had actually been caused by the meteor, and two geology professors of Princeton University hunted for fragments.

They had been told that 1,500 persons had been killed by the meteor at a point a few miles away.

At Newark, New Jersey, all the occupants of blocks of flats left their homes with wet towels round their heads as improvised gas masks.

Men of the National Guard of New Jersey started reporting for mobilisation, and a man at San Francisco telephoned the police, saying: “Where can I volunteer? We’ve got to stop this awful thing.”

Throughout the night all broadcasting stations made announcements intended to allay these astonishing fears, and the panic subsided as quickly as it spread.

“MARTIAL LAW PREVAILS”

Among the more lurid passages of the “War of the Worlds” broadcast, spoken to the accompaniment of stage explosions and other startling sounds, were:

“Ladies and gentlemen, I have a grave announcement to make. Incredible as it may seem, strange beings who landed in New Jersey to-night are the vanguard of an invading army from the planet Mars.

“At this moment martial law prevails throughout New Jersey and Eastern Pennsylvania.

“We take you now to Washington for a special broadcast on the national emergency by the Secretary for the Interior.”


At this point the actor taking the part of the Secretary addressed the nation. Hair-raising descriptions of the futile efforts to repulse the Martians followed, culminating with this passage, “There are Martian cylinders all over the country. There is one outside Buffalo and one in Chicago.

“I am speaking from the roof of the broadcasting building in New York. The bells you hear are warning the people to evacuate the city as the Martians approach.

“This may be the last broadcast.

“We will stay here until the end.

“People are now holding a service below us in St Patrick’s Cathedral. This is the end. Black smoke is drifting over the city.

“People in the streets are running towards the East River. Thousands of them are dropping like rats. Now the smoke is spreading faster.

“It has reached Times-square.

“The people are trying to run away. It is no use. They are dropping like flies.

“The smoke is crossing Fifth-avenue. It is 100 yards away. It is 50ft.”


There the actor’s voice trailed off, in a well-simulated last gasp.


October 30th, 1938

Tuesday, October 29, 2013

Deceased--Arthur C. Danto


Arthur C. Danto
January 1st, 1924 to October 25th, 2013

"Arthur C. Danto, a Philosopher of Art, Is Dead at 89"

by

Ken Johnson

October 27th, 2013

The New York Times

Arthur C. Danto, a philosopher who became one of the most widely read art critics of the Postmodern era, championing avant-garde artists like Andy Warhol and proclaiming the end of art history, died on Friday at his home in Manhattan. He was 89.

The cause was heart failure, his daughter Ginger Danto said.

The author of some 30 books, including “Beyond the Brillo Box” and “After the End of Art,” Mr. Danto was also the art critic for The Nation magazine from 1984 to 2009 and a longtime philosophy professor at Columbia.

“His project, really, was to tell us what art is, and he did that by looking at the art of his time,” said Lydia Goehr, a Columbia University philosophy professor who has written extensively about Mr. Danto. “And he loved the art of his time, for its openness and its freedom to look any way it wanted to.”

Mr. Danto was pursuing a successful career in academic philosophy when he had a life-defining moment. As he recalled in numerous essays, it happened in 1964 when he encountered a sculpture by Andy Warhol in a New York gallery. It was “Brillo Box,” an object that seemed to Mr. Danto to differ in no discernible way from the real cardboard soap-pad container it copied.

If there was nothing visible in Warhol’s sculpture to distinguish it from an ordinary object, Mr. Danto wondered, what made it art? At a time when more and more artists were creating works lacking traditional artistic qualities, this was an urgent question.

Leaving aside that Warhol’s sculpture was made of silk-screened plywood, not cardboard, the defining feature of the sculptural “Brillo Box” was, in Mr. Danto’s view, that it had a meaning; it was about something — consumer culture, for one thing. The real Brillo box only had a functional purpose. But how would you know whether you were looking at a meaningful or a merely functional object? The short answer was, you knew because the Warhol box was presented as art in an art gallery.

This led Mr. Danto to propose a new way of defining art. The term would be bestowed not according to any putatively intrinsic, aesthetic qualities shared by all artworks but by general agreement in the “artworld,” a community that included artists, art historians, critics, curators, dealers and collectors who shared an understanding about the history and theory of modern art.

If that community accepted something as art, whatever its form, then it was art. This required an educated viewer. “To see something as art requires something the eye cannot descry — an atmosphere of artistic theory, a knowledge of the history of art: an artworld,” wrote Mr. Danto in his oft-quoted 1964 essay “The Artworld.”

Mr. Danto’s notion of the art world inspired what came to be known as the Institutional Theory of Art, an idea that was developed most fully by the philosopher George Dickie in the 1970s and that remains widely influential on thinking about contemporary art.

Mr. Danto also came to believe that in the contemporary world, no single style could dominate, as Abstract Expressionist painting had done in the 1950s. Pluralism would be the new order.

This led him to proclaim the end of art history. By this he meant not that people would stop making art, but that the idea of art progressing and evolving over time along one clear path, as it seemed to have done from the Renaissance through the late 19th century and into the first post-World War II decade, could no longer be supported by art of the late 20th century. After the ’60s, art had splintered and gone off in a multitude of directions, from Photorealist painting to the most abstruse forms of Conceptualism.

But if so many different kinds of things could be viewed as art, what, if anything, did they have in common? The common denominator, Mr. Danto concluded, was meaning, and that led him to propose that the art of our time was mainly animated by philosophy. Artworks in the Postmodern era could be viewed as thought experiments about such problems as the relationship between representation and reality; knowledge and belief; photography and truth; and the definition of art itself.

If the new art was philosophy incarnate, then the critic who was also a philosopher might have an advantage over the traditional critic when it came to understanding and explicating art. Mr. Danto got a chance to test himself in that capacity when he became the art critic for The Nation.

But while he won the National Book Critics Circle prize for criticism in 1990 for “Encounters and Reflections: Art in the Historical Present,” he was not universally admired.

The critic Hilton Kramer, writing in The New Criterion in 1987, likened Mr. Danto’s views to one of “those ingenious scenarios that are regularly concocted to relieve the tedium of the seminar room and the philosophical colloquium.”

Arthur Coleman Danto was born in Ann Arbor, Mich., on Jan. 1, 1924. He grew up in Detroit, spent two years in the Army and then studied art and art history at Wayne State University.

He aspired to be an artist, and he specialized in woodcuts, his daughter Ginger said. “He had quite a life as an artist,” she said, “but when he got money from the G.I. bill, he decided to study philosophy.” In 2010, Mr. Danto donated many of his prints and original woodblocks to the Wayne State University Art Collection.

He did graduate work in philosophy at Columbia University, and he studied with Maurice Merleau-Ponty on a Fulbright grant in Paris.

Mr. Danto began teaching at Columbia in 1951, earning his doctorate the following year. He continued to teach at Columbia until his retirement in 1992, after which he was named Johnsonian professor emeritus of philosophy.

Mr. Danto’s first wife, Shirley Rovetch, died in 1978. In addition to his daughter Ginger, who is a writer about art, Mr. Danto is survived by his wife, Barbara Westman Danto, and another daughter, Elizabeth Danto.

As The Nation’s art critic, Mr. Danto wrote extended reviews and essays about prominent artists, past and present, with philosophical insight, professorial erudition and, almost always, sympathy and curiosity. He avoided negative criticism, which he considered cruel.

His interests were catholic. “Unnatural Wonders: Essays From the Gap Between Art and Life” (2005), one of several volumes of collected reviews, includes essays on contemporaries like Damien Hirst, Barbara Kruger, Yoko Ono, Gerhard Richter and Matthew Barney and on past masters like Picasso, Giacometti and Leonardo.

His was the kind of art criticism that could engage even readers with no particular interest in art. “There is a lot of uninspired work in the galleries,” Mr. Danto once wrote. “But there is so much ingenious work, so much intelligence, so much dedication, and really so much high-mindedness in the art world that, were it shared by the rest of the world, we would have entered a golden age.”


"Arthur C. Danto, groundbreaking critic who declared ‘the end of art,’ dies at 89 in NYC"

October 27th, 2013

The Associated Press

Arthur C. Danto, a provocative and influential philosopher and critic who championed Andy Warhol and other avant-garde artists and upended the study of art history by declaring that the history of art was over, has died. He was 89.

Danto, art critic for The Nation from 1984 to 2009 and a professor emeritus at Columbia University, died of heart failure Friday at his Manhattan apartment, daughter Ginger Danto said Sunday.

An academically trained philosopher, Danto became as central to debates about art in the 1960s and after as critic Clement Greenberg had been during the previous generation. Danto was initially troubled, then inspired by the rise of pop art and how artists such as Warhol and Roy Lichtenstein could transform a comic strip or a soup can into something displayed in a museum, a work of “art.” Starting in the ’60s, he wrote hundreds of essays that often returned to the most philosophical question: What exactly is art? Danto liked to begin with a signature event in his lifetime — a 1964 show at New York’s Stable Gallery that featured Warhol’s now-iconic reproductions of Brillo boxes.

“Is this man some kind of Midas, turning whatever he touches into the gold of pure art? And the whole world consisting of latent art works waiting, like the bread and wine of reality, to be transfigured, through some dark mystery, into the indiscernible flesh and blood of the sacrament?” Danto wrote in “The Artworld,” a landmark essay published in 1964.

“Never mind that the Brillo box may not be good, much less great art. The impressive thing is that it is art at all. But if it is, why not indiscernible Brillo boxes that are in the stockroom? Or has the whole distinction between art and reality broken down?”

Danto would refer to the show as the moment when art history ended and “progress could only be enacted on a level of abstract self-consciousness.” In such essays as “The End of Art,” Danto noted the progression of styles in the 19th and 20th centuries — impressionism, modernism, abstract expressionism, pop art. After the Brillo show, art had reached its ultimate expression and became a medium not of trends but of individuals — some brilliant, some ordinary, none advancing the overall narrative.

“When I first wrote about this concept, I was somewhat depressed,” Danto later observed. “But now I have grown reconciled to the unlimited diversity of art. I marvel at the imaginativeness of artists in finding ways to convey meanings by the most untraditional of means. The art world is a model of a pluralistic society in which all disfiguring barriers and boundaries have been thrown down.”

Danto would be praised by The New York Times’ Barry Gewen as “arguably the most consequential art critic” since Greenberg, an “erudite and sophisticated observer” who wrote with “forcefulness and jargon-free clarity.” But his ideas were not universally accepted. Danto frequently had to explain that art wasn’t dead, only art history.

Rival critics such as Hilton Kramer questioned whether the story was over and whether Warhol deserved to be part of it. In an essay published in The New Criterion in 1987, Kramer likened Danto’s views to one of “those ingenious scenarios that are regularly concocted to relieve the tedium of the seminar room and the philosophical colloquium.” He also dismissed Warhol’s work as “a further colonization of the aesthetically arid but nonetheless seductive territory” of avant-garde art.

In “What Art Is,” a book published in 2013, Danto responded that his “effort was to describe art differently from that of the conservative taste of most of the New York critics.”

“From my perspective, aesthetics was mostly not part of the art scene. That is to say, my role as a critic was to say what the work was about — what it meant — and then how it was worth it to explain this to my readers,” he wrote.

Danto’s other books included “Encounters and Reflections,” winner of a National Book Critics Circle prize in 1991, “Beyond the Brillo Box” and “After the End of Art.” He was an editor of The Journal of Philosophy, a contributing editor to Artforum and president of the American Philosophical Association.

Arthur Coleman Danto was born in Ann Arbor, Mich., and raised in Detroit. He served two years in the Army during World War II and was stationed in Italy and in North Africa. He then studied art and history at Wayne State University and received a master’s and doctoral degree from Columbia University, where he taught from 1952 to 1992 and chaired the philosophy department for several years. He was especially influenced by the 19th-century German philosopher Georg Wilhelm Friedrich Hegel and drew extensively upon Hegel in his theory of art history.

After the Warhol show, Danto pursued a definition of art that could be applied to both the Sistine Chapel and a Brillo box. He rejected the ancient Greek idea that art was imitation and the Renaissance ideal that art was defined by aesthetic pleasure. Danto was shaped by the 20th-century rise of “ready-mades,” ordinary objects turned into “art,” whether Warhol’s Brillo boxes or the urinal Marcel Duchamp submitted to galleries during World War I. In “What Art Is,” Danto concluded that art was “the embodiment of an idea,” defined not by how it looked but by what it had to say.

“Much of contemporary art is hardly aesthetic at all, but it has in its stead the power of meaning and possibility of truth,” he wrote in “What Art Is.”

Danto’s stature as a critic overshadowed his early career as an artist. He was an accomplished printmaker whose woodcuts were exhibited in the Art Institute of Chicago, the National Gallery of Art and elsewhere in the 1950s. He later donated his prints to Wayne State.

“When I became a critic, I met everyone under the sun. But I knew very few artists when I was an artist. Some printmakers, some second generation Abstract Expressionists. ... They were the great figures of my world, like Achilles and Agamemnon in ancient times,” he wrote in a 2007 essay about his own work.

“The heroes today are very different, and so the artists for whom they are heroes have to be very different. I could never have been an artist shaped by such heroes, though as a writer, I like their art well enough. I am glad to see that my work holds up despite that. In a way, I feel like an old master.”

Danto was married twice — to Shirley Rovetch, who died in 1978, and since 1980 to Barbara Westman. He had two children, Ginger and Elizabeth.


Arthur C. Danto [Wikipedia]

No evidence, but publicly accepted...Richard Feynman's "cargo cult science"


"More on the crisis in research: Feynman on 'cargo cult science'"

by

Michael Hiltzik

October 28th, 2013

latimes.com

After reading my weekend column about the crisis in life science research, Hajime Hoji of USC's linguistics department reminded me of the late Richard Feynman's brilliant deconstruction of the flaws and pitfalls of science as it's done in the modern age.

"Cargo Cult Science" was adapted from Feynman's 1974 commencement speech at Caltech, where his spirit reigns as one of that institution's two certified saints. (The other is Robert A. Millikan, Caltech's first president.) The text appears in his 1985 book, "Surely You're Joking, Mr. Feynman!" Here are some excerpts, but the talk is worth reading in its entirety, both for Feynman's lucid, engaging style and the depth of his thinking.

In the talk, Feynman discussed how much laypersons and scientists themselves take for granted about research results. "We really ought to look into theories that don't work, and science that isn't science," he said. "Cargo cult science" was his term for research that never seemed to yield provable results, but acquired public acceptance because it possessed the veneer of rigorous methodology.

What cargo cult science lacked was something that, he observed, was never actually taught to Caltech students. "It's a kind of scientific integrity...that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it....Details that could throw doubt on your interpretation must be given, if you know them....If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it."

One suspects that Feynman, who died in 1988, would be appalled by the current standards of research publication, which critics say favor audacious claims instead of the painstaking, judicious marshaling of evidence he advocated. It's even more striking today to ponder his confidence in science's ability to weed out factitious or mistaken findings.

"We've learned from experience that the truth will come out," he told the students. "Other experimenters will repeat your experiment and find out whether you were wrong or right.... And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science."

The truth is that the testing of experimental results by other experimenters is exactly what may be lacking in today's publication-driven science world. And as some scientists recognize, getting a paper published in a prestigious journal can do a great deal for one's reputation, even if it's later shown to be wrong.

Even then, Feynman acknowledged that desperation for research funding was driving a tendency by scientists to hype the applications of their work. Otherwise, a friend told him, "we won't get support for more research of this kind." Feynman's reaction was characteristically blunt. "I think that's kind of dishonest," he said.

Newton's spooky coat of arms


After Isaac Newton was knighted by Queen Anne in 1705 he adopted an unusual coat of arms: a pair of human tibiæ crossed on a black background, like a pirate flag without the skull. After some general reflections on Newton’s monumental scientific achievements and on his rather enigmatic life, we investigate the story behind his coat of arms. We also discuss how that simple heraldic design illustrates the concept of chirality, which would later play an important role in the philosophical arguments about Newton’s conception of space, as well as in the development of modern chemistry and particle physics.
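
The chirality idea is easy to make concrete. In the plane, the handedness of three points (the sign of the cross product of two edge vectors) is preserved by rotations but flipped by reflections, so a chiral figure like the crossed tibiae cannot be rotated onto its mirror image. A minimal sketch, with made-up coordinates standing in for the heraldic design:

    # Handedness (orientation) of three points in the plane: +1 or -1.
    # Rotations preserve the sign; reflections flip it -- the essence of
    # chirality. Coordinates below are illustrative, not the actual arms.
    def handedness(a, b, c):
        (ax, ay), (bx, by), (cx, cy) = a, b, c
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        return 1 if cross > 0 else -1

    design = [(0, 0), (4, 1), (1, 3)]          # three marker points
    mirror = [(-x, y) for x, y in design]      # reflect in the vertical axis

    print(handedness(*design), handedness(*mirror))  # prints: 1 -1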

"Isaac Newton’s sinister heraldry"

by

Alejandro Jenkins

Sunday, October 27, 2013

Linguistics...constantly evolving


"The Word For…"

by

Ian Crouch

October 24th, 2013

The New Yorker

There ought to be a word for “the limbo-like precincts of an airport baggage claim, where groggy travellers gather around the motionless treads of empty conveyor belts.” It is a singularly desolate scene, and there should be a succinct way for a forlorn luggage-seeker to text a quick apology to the friend who is idly circling the airport roads. Now, there is: “baggatory.”

That clever turn is just one of a couple hundred neologisms coined by Liesl Schillinger in her new book, “Wordbirds: An Irreverent Lexicon for the 21st Century.” Other gems include “social crawler” (a party-goer who accidentally mingles with losers); “Facebook-happy” (a miserable person who fakes bliss in carefully managed Facebook posts); “polterguy” (an ex-boyfriend who haunts future relationships); “factose intolerant” (a person who claims a false allergy or irrational antipathy to certain foods); and “rotter” (the bottom drawer in the refrigerator where produce goes to putrefy). Most of the words and phrases in the collection are accompanied, in a bit of whimsy, by an illustration of birds acting out the scenes described: the “social crawler” is a peacock mixing with pigeons.

The word “neologism” dates to the seventeen-seventies, taken from Greek via French, meaning “new speech.” But the practice of coining new words goes back to the beginning of language itself. It accelerated as culture accelerated, and by the nineteenth century conservative types were worried that industry and science were flooding the linguistic marketplace with all kinds of shoddy fad words, and that the language had to be protected from interlopers. Others embraced the dynamism. In a letter to John Adams, in 1820, Thomas Jefferson, a man of business and science as well as politics, wrote, “I am a friend to neology. It is the only way to give to a language copiousness and euphony.”

Of-the-moment words still have their skeptics: every time a major dictionary announces that they’ve added a trendy word, like “twerking,” a chorus of worried head-shakers turns up online to complain that the whole country is going to pot. This is nothing new, but the occasional vehemence of the response may be due to the sense that novel words are coming at us faster than ever. Schillinger, in her introduction, writes that “over the last two decades new technologies, new means of communication, and sweeping social changes have crashed down on society like an avalanche.” Stodgy lexicographers, she writes, just need to keep up.

The Internet, for better and worse, has produced a pile of jargon that’s now been absorbed into the wider language. And its various associated social tools function as a vast and productive word farm—combining the longstanding instinct to match language to the modern condition with the very recent ability to share one’s experiments with a wide audience. The Internet Age has produced more writers per capita than ever before. Has it made us better writers or worse? Smarter or more distracted, callow, and stupid? Clive Thompson, in his new book on the ways that technology has improved cognition, writes that one of the unexpected developments of innovation is that “status-update tools like Twitter have produced a renaissance in witty, aphoristic, haikuesque expression.” With all the bad (“wut r u doooooing?”) comes plenty of good—an explosion of short and smart bits of humor, insight, and commentary. The lines between amateur and professional have eroded; it is as if the world is writing together to try to sort out the large and small questions of the day. Twitter, especially, is an ideal incubator for neologisms. Space is limited, so running words together becomes a stylistic necessity—function drives form, and portmanteaus reign.

One of Schillinger’s better creations is “fratois,” meaning “a hearty slang or patois used by bonhomous men that makes them sound like back-slapping fraternity brothers.” This is a mocking term, evoking locker-room towel snapping or George W. Bush’s Oval Office. It also brings to mind another strain of neologism currently enjoying popularity, what Katherine Connor Martin, of the Oxford University Press, identifies as the “portmanbro.” She cites such examples as “bromance,” which is now in general use, and other terms like “brahphet” (“the guy who thinks he knows everything”) and “brobituary” (“a short description of an ex bro who went off and got married”). Most of these terms make fun of the male subspecies known as bros, but Martin cheekily warns of the dangers of metonymy: What if, by “being the sort of person who says ‘bro,’ a person becomes a bro”? This might go further: Are the people who trade in contemporary neologisms astute critics of the moment, or are they somehow defiled by their use of this moment’s language?

If neologisms seem suddenly ubiquitous, perhaps this proliferation is the result of our current pace of life. If we are expected to multitask, then shouldn’t our language have to, too? Searching for innovation, we might seek out something that predates the Internet by a few years: the German language. German has a fine architecture, in which appositive phrases are joined into compounds, leading to long and goofy-sounding words that express often complicated factual and emotional information. One popular Web convention of late is the phrase “There should be a German word for ….” Take a situation that I often face: the struggle after receiving an e-mail to find a response time that is long enough that you don’t appear creepy or desperate but short enough that you don’t forget to respond altogether. To such linguistic questions come answers, in Ben Schott’s hugely inventive new book, “Schottenfreude: German Words for the Human Condition.” There are a hundred and twenty new and evocative words; here are four:

    Leertretung
    Stepping down heavily on a stair that isn’t there.
    Void-Stepping

    Tageslichtspielschock
    Being startled when exiting a movie theater into broad daylight.
    Day-Light-Show-Shock

    Rollsschleppe
    The exhausting trudge up a stationary escalator.
    Escalator-Schlep

    Gaststättenneueröffnungsuntergangsgewissheit
    Total confidence that a newly opened restaurant is doomed to fail.
    Inn-New-Opening-Downfall-Certitude


That last one may be the longest in the book (thankfully, there are pronunciation guides). Each word has an accompanying anecdote, fact from history, or literary citation provided on the facing page. They are cross-referenced and indexed. Several of Schott’s creations answer needs generated by the hyper-connected present, but it is, in its tone and organization, pleasantly pre-Web—a self-enclosed thing that rewards another, older kind of multitasking: reading, laughing, and learning. 




Wordbirds: An Irreverent Lexicon for the 21st Century

by

Liesl Schillinger

ISBN-10: 1476713480
ISBN-13: 978-1476713489

A mask of anonymity, a cauldron of abuse--Internet commentary


"The Psychology of Online Comments"

by

Maria Konnikova

October 24th, 2013

The New Yorker

Several weeks ago, on September 24th, Popular Science announced that it would banish comments from its Web site. The editors argued that Internet comments, particularly anonymous ones, undermine the integrity of science and lead to a culture of aggression and mockery that hinders substantive discourse. “Even a fractious minority wields enough power to skew a reader’s perception of a story,” wrote the online-content director Suzanne LaBarre, citing a recent study from the University of Wisconsin-Madison as evidence. While it’s tempting to blame the Internet, incendiary rhetoric has long been a mainstay of public discourse. Cicero, for one, openly called Mark Antony a “public prostitute,” concluding, “but let us say no more of your profligacy and debauchery.” What, then, has changed with the advent of online comments?

Anonymity, for one thing. According to a September Pew poll, a quarter of Internet users have posted comments anonymously. As the age of a user decreases, his reluctance to link a real name with an online remark increases; forty per cent of people in the eighteen-to-twenty-nine-year-old demographic have posted anonymously. One of the most common critiques of online comments cites a disconnect between the commenter’s identity and what he is saying, a phenomenon that the psychologist John Suler memorably termed the “online disinhibition effect.” The theory is that the moment you shed your identity the usual constraints on your behavior go, too—or, to rearticulate the 1993 Peter Steiner cartoon, on the Internet, nobody knows you’re not a dog. When Arthur Santana, a communications professor at the University of Houston, analyzed nine hundred randomly chosen user comments on articles about immigration, half from newspapers that allowed anonymous postings, such as the Los Angeles Times and the Houston Chronicle, and half from ones that didn’t, including USA Today and the Wall Street Journal, he discovered that anonymity made a perceptible difference: a full fifty-three per cent of anonymous commenters were uncivil, as opposed to twenty-nine per cent of registered, non-anonymous commenters. Anonymity, Santana concluded, encouraged incivility.
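
A quick significance check makes the size of that gap vivid. The counts below are assumptions reconstructed from the reported figures (roughly 900 comments split evenly, 53 per cent versus 29 per cent uncivil) rather than numbers from Santana's paper; under those assumptions, a standard two-proportion z-test puts the difference far beyond chance:

    # Hedged sketch: counts are inferred from the article's percentages,
    # not taken from Santana's data. Standard two-proportion z-test.
    from math import sqrt, erfc

    n_anon = n_reg = 450                 # ~900 comments, split evenly
    x_anon, x_reg = 239, 131             # ~53% and ~29% uncivil
    p_anon, p_reg = x_anon / n_anon, x_reg / n_reg

    p_pool = (x_anon + x_reg) / (n_anon + n_reg)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_anon + 1 / n_reg))
    z = (p_anon - p_reg) / se
    p_value = erfc(abs(z) / sqrt(2))     # two-sided normal approximation

    print(f"z = {z:.1f}, p = {p_value:.1g}")   # z is about 7.3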

On the other hand, anonymity has also been shown to encourage participation; by promoting a greater sense of community identity, users don’t have to worry about standing out individually. Anonymity can also boost a certain kind of creative thinking and lead to improvements in problem-solving. In a study that examined student learning, the psychologists Ina Blau and Avner Caspi found that, while face-to-face interactions tended to provide greater satisfaction, in anonymous settings participation and risk-taking flourished.

Anonymous forums can also be remarkably self-regulating: we tend to discount anonymous or pseudonymous comments to a much larger degree than commentary from other, more easily identifiable sources. In a 2012 study of anonymity in computer interactions, researchers found that, while anonymous comments were more likely to be contrarian and extreme than non-anonymous ones, they were also far less likely to change a subject’s opinion on an ethical issue, echoing earlier results from the University of Arizona. In fact, as the Stanford computer scientist Michael Bernstein found when he analyzed the /b/ board of 4chan, an online discussion forum that has been referred to as the Internet’s “rude, raunchy underbelly” and where over ninety per cent of posts are wholly anonymous, mechanisms spontaneously emerged to monitor user interactions and establish a commenter’s status as more or less influential—and credible.

Owing to the conflicting effects of anonymity, and in response to the changing nature of online publishing itself, Internet researchers have begun shifting their focus away from anonymity toward other aspects of the online environment, such as tone and content. The University of Wisconsin-Madison study that Popular Science cited, for instance, was focussed on whether comments themselves, anonymous or otherwise, made people less civil. The authors found that the nastier the comments, the more polarized readers became about the contents of the article, a phenomenon they dubbed the “nasty effect.” But the nasty effect isn’t new, or unique to the Internet. Psychologists have long worried about the difference between face-to-face communication and more removed ways of talking—the letter, the telegraph, the phone. Without the traditional trappings of personal communication, like non-verbal cues, context, and tone, comments can become overly impersonal and cold.

But a ban on article comments may simply move them to a different venue, such as Twitter or Facebook—from a community centered around a single publication or idea to one without any discernible common identity. Such large group environments, in turn, often produce less than desirable effects, including a diffusion of responsibility: you feel less accountable for your own actions, and become more likely to engage in amoral behavior. In his classic work on the role of groups and media exposure in violence, the social cognitive psychologist Albert Bandura found that, as personal responsibility becomes more diffused in a group, people tend to dehumanize others and become more aggressive toward them. At the same time, people become more likely to justify their actions in self-absolving ways. Multiple studies have also illustrated that when people don’t think they are going to be held immediately accountable for their words, they are more likely to fall back on mental shortcuts in their thinking and writing, processing information less thoroughly. They become, as a result, more likely to resort to simplistic evaluations of complicated issues, as the psychologist Philip Tetlock has repeatedly found over several decades of research on accountability.

Removing comments also affects the reading experience itself: it may take away the motivation to engage with a topic more deeply, and to share it with a wider group of readers. In a phenomenon known as shared reality, our experience of something is affected by whether or not we will share it socially. Take away comments entirely, and you take away some of that shared reality, which is why we often want to share or comment in the first place. We want to believe that others will read and react to our ideas.

What the University of Wisconsin-Madison study may ultimately show isn’t the negative power of a comment in itself but, rather, the cumulative effect of a lot of positivity or negativity in one place, a conclusion that is far less revolutionary. Among the most important controls on our behavior are the established norms of any given community. For the most part, we act consistently with the space and the situation; a football game is different from a wedding, usually. The same phenomenon may come into play in different online forums, in which the tone of existing comments and the publication itself may set the pace for a majority of subsequent interactions. Anderson, Brossard, and their colleagues’ experiment lacks the crucial element of setting, since the researchers created fake comments on a fake post, where the tone was simply either civil or uncivil (“If you don’t see the benefits … you’re an idiot”).

Would the results have been the same if the uncivil remarks were part of a string of comments on a New York Times article or a Gawker post, where comments can be promoted or demoted by other users? On Gawker, in the process of voting a comment up or down, users can set the tone of the comments, creating a surprisingly civil result. The readership, in other words, spots the dog at the other end of the keyboard, and puts him down.

As the psychologists Marco Yzer and Brian Southwell put it, “new communication technologies do not fundamentally alter the theoretical bounds of human interaction; such interaction continues to be governed by basic human tendencies.” Whether online, on the phone, by telegraph, or in person, we are governed by the same basic principles. The medium may change, but people do not. The question instead is whether the outliers, the trolls and the flamers, will hold outsized influence—and the answer seems to be that, even protected by the shade of anonymity, a dog will often make himself known with a stray, accidental bark. Then, hopefully, he will be treated accordingly.


[Maria Konnikova is the author of the New York Times best-seller “Mastermind: How to Think Like Sherlock Holmes.” She has a Ph.D. in psychology from Columbia University.]

Saturday, October 26, 2013

Deceased--Deborah Turbeville

Deborah Turbeville
July 6th, 1932 to October 24th, 2013

Style is reminiscent of the early pictorial photographers.

"Deborah Turbeville, Fashion Photographer, Dies at 81"

by

Margalit Fox

October 25th, 2013

The New York Times

Deborah Turbeville, who almost single-handedly turned fashion photography from a clean, well-lighted thing into something dark, brooding and suffused with sensual strangeness, died on Thursday in Manhattan. She was 81.

Her death, at St. Luke’s-Roosevelt Hospital Center, was from lung cancer, her agent, Marek Milewicz, said.

Though images like Ms. Turbeville’s — which might include pale, haunted-eyed models in derelict buildings — are practically de rigueur in fashion photography today, they were almost beyond contemplation when she began her work in the early 1970s. She was the only woman, and the only American, in the triumvirate (the others were Helmut Newton and Guy Bourdin) that by wide critical consensus changed fashion photography from sedate to shocking.

Ms. Turbeville, who began her career editing fashion magazines, became famous, Women’s Wear Daily wrote in 2009, “for transforming fashion photography into avant-garde art” — a distinction all the more striking in that she was almost completely self-taught.

Her photographs appeared in magazines like Vogue, Harper’s Bazaar and Mirabella; in newspapers including The New York Times; in advertisements for clients like Ralph Lauren, Bruno Magli, Nike, Macy’s and Bloomingdale’s; in exhibitions worldwide; and in books, including “Unseen Versailles” (1981), a collection of her photos of the hidden, dusty spaces underpinning Louis XIV’s grand palace.

“Fashion takes itself more seriously than I do,” Ms. Turbeville told The New Yorker in 2011. “I’m not really a fashion photographer.”

In mid-20th-century America, fashion photography was about precisely that: fashion. The clothes, vividly lighted, were front and center, with the models chosen for their well-scrubbed, patrician femininity. They looked, as often as not, as if they had just come from tennis at the country club, though reassuringly free of sweat.

Ms. Turbeville’s photos, by contrast, were unsettling, and they were meant to be. In her fashion work, clothes are almost beside the point. In some images the outfits are barely visible; the same is often true of the models, resulting in an elegiac landscape defined more by absence than by presence.

In a de facto commentary on fashion’s manipulation of women, Ms. Turbeville literally manipulated her negatives — scratching them, tearing them, scattering dust on them and otherwise distressing them — to make the finished images redolent of decay. She employed faded color, black-and-white and sepia tones; prints were often deliberately overexposed, rendering her subjects spectral.

The settings were as striking as the subjects. Ms. Turbeville’s photos are awash in ruin: she favored places like grimy, deserted streets, abandoned warehouses and, in the image that nearly 40 years ago horrified the public and cemented her reputation, a decrepit New York bathhouse.

“I can’t deny that I design the background,” Ms. Turbeville told The Times in 1977. “A woman in my pictures doesn’t just sit there. In what kind of mood would a woman be, wearing whatever? I go into a woman’s private world, where you never go.”

By the late 1970s, articles on photography had begun to refer to “the Deborah Turbeville look,” which, as The New York Times described it in 1977, was “not the kind that mother used to admire in Vogue.”

Unlike other world-renowned fashion photographers, Ms. Turbeville rarely shot the famous, though when she did the results could be startling. In a portrait of Julia Roberts published in The Times Magazine in 2004, Ms. Roberts looks so expressively solemn as to appear on the verge of tears.

For the viewer, the net effect of Ms. Turbeville’s work is one of dreamlike, melancholy beauty. Her images exude an almost palpable sense of longing, with questions about the woeful women they depict — Who are they? Why are they so sad? — hanging unanswered in the air.

Her barren settings leave abundant room for loneliness and loss, and the dissolution they contain is a constant reminder of the passage of time. (“I like to hear a clock ticking in my pictures,” Ms. Turbeville once said.) Her photos seem to depict small, once exquisite worlds that, by the time she clicks the shutter, have already evanesced.

Deborah Lou Turbeville was born in Stoneham, Mass., on July 6, 1932. (Many sources give the year erroneously as 1937 or 1938.) As a young woman she moved to New York, where she was an assistant and sample model to the noted fashion designer Claire McCardell. Afterward, she held editorial positions at Harper’s Bazaar and Mademoiselle, though she deplored the work.

“I’d get a note from some senior fashion editor saying, ‘We find your arrival half an hour late for Bill Blass appalling,’ ” Ms. Turbeville told The Independent, the British newspaper, in 1993. “I just stopped going.”

She began taking photographs on her own in the 1960s, and in 1966, having had no previous instruction, enrolled in a six-month workshop taught by the photographer Richard Avedon and the art director Marvin Israel.

“If it hadn’t been for the two of them, I wouldn’t have taken my photography seriously,” Ms. Turbeville told The Times Magazine in 1981. “It was so out of focus and terrible. The first evening in class, they held up pictures. They said, ‘It isn’t important to have technique, but you must have an idea or inspiration, and we feel the only one who has is this person who’s never taken a photograph before.’ I became very unpopular in the class.”

In 1975 Ms. Turbeville produced an image that would become, The Independent wrote in 1993, “one of the most famous fashion photographs of the last 50 years.” Part of a swimsuit shoot for Vogue, it showed five listless women leaning against the walls of a shower room in a condemned New York bathhouse.

Writing in The Times late that year, Hilton Kramer reviewed an exhibition at Hofstra University that included the image, calling it “one of the most beautifully composed pictures in the show,” but going on to say, “Yet its ‘Marat/Sade’ imagery leaves one wondering if we have not moved beyond the boundaries of fashion photography.”

The reaction from others was even more strident.

“People started talking about Auschwitz and lesbians and drugs,” Ms. Turbeville recalled in an interview quoted in the 1991 book “Appearances: Fashion Photography Since 1945,” by Martin Harrison. “And all I was doing was trying to design five figures in space.”

Ms. Turbeville won an American Book Award in 1982 for “Unseen Versailles,” a project for which she had been recruited by Jacqueline Kennedy Onassis, then an editor at Doubleday. The volume, with an introduction by the novelist Louis Auchincloss, documented the echoing, long-disused backstairs rooms that visitors never see. (To heighten their autumnal aspect, Ms. Turbeville smuggled in armfuls of dead leaves.)

She and Mr. Auchincloss also collaborated on “Deborah Turbeville’s Newport Remembered” (1994), which captured the vanished Gilded Age in all its crumbling particulars.

Among her other books are “Wallflower” (1978), a collection of misty, death-imbued fashion photographs; “Studio St. Petersburg” (1997), shot in Russia; “Casa No Name” (2009), shot in Mexico, where she long had a home; and “Deborah Turbeville: The Fashion Pictures” (2011).

Ms. Turbeville, who lived part of each year in San Miguel de Allende, Mexico, is survived by a brother, Thomas.

She also maintained an apartment in the Ansonia, the Beaux-Arts pile on the Upper West Side of Manhattan. There, she left generations of old paint unstripped, speckled mirrors unsilvered, threadbare tapestry unrestored and tattered curtains unmended.


From W magazine...


Friday, October 25, 2013

Spray-on garments?


Maybe it's a fad like the "paper dress." I don't think popular clothiers need to be concerned at the moment.

"Spray-On Clothing Could Deliver a Suit in a Can"

Start-up develops fiber-laden sartorial aerosol that can be styled and worn

by

Pippa Wysong

October 25th, 2013

Scientific American

Someday, packing for a trip might be as simple as stowing a spray can of colloidal polymer mix for making your own spray-on clothes. Whether it’s a T-shirt or evening attire, spray-on fabric is a novel way to make a variety of light-use fabrics.

British fashion designer Manel Torres dreamed up the idea after attending a wedding and watching people spray each other with Silly String, filaments of plastic propelled in liquid form from an aerosol can. “I thought, wouldn’t it be neat if there was a way to create a material that could be sprayed to cover a larger surface area and used for clothing?” he says. To do this, he went back to school at Imperial College London, earned a PhD in chemical engineering, and launched Fabrican, Ltd., with Paul Luckham, a chemical engineering professor, also at Imperial.

Once sprayed onto a surface, the instant fabric forms a nonwoven material, Torres says. The formula consists of short fibers bound together with polymers and a solvent that delivers the fabric in liquid form. The material is sprayed directly onto a person’s bare skin, where it dries almost instantly. It can be easily peeled off because the polymers do not bind to skin. Other variants would adhere to surfaces. “The difference is largely in the formulations, but also in the method of spraying,” he says, adding that the team has experimented with spray guns, aerosol nozzles, portable canisters and jet sprays for both industrial and customized applications. To create a shape—such as a flaring skirt—the solution would be sprayed onto a surface with that desired shape.

The material’s characteristics—such as strength and texture—depend on the type of fibers mixed into the solution. Possibilities include natural fibers such as wool, cotton, silk or cellulose as well as synthetic fibers such as nylon. The fabric itself is like a thin, slightly stretchy suede and can be applied in layers to make it thicker, Torres says. But the texture and feel differ depending on the types of fibers mixed into the solution.

The wearable fabric feels quite cool when it is first sprayed on, but warms up to body temperature within seconds. Some models describe the spray-on clothing as feeling like a “second skin,” or “like being dressed but feeling naked,” Torres says.

Luckham notes that a T-shirt can be sprayed, taken off over the head and put back on again. But one can get more creative in undressing. “A T-shirt can be cut up the front with scissors and removed like a waistcoat, put back on again, and then the front can be resprayed,” he says.

After a person is finished wearing a spray-on garment, it can be recycled. This is done by shredding or cutting the garment into small pieces and dissolving it in a solution. Smaller pieces of fabric dissolve more easily than big ones do, and resemble tissue dissolving in water, Torres says. The propellant used to spray the fabric is the same substance as the solvent, which simplifies the recycling process because you don’t have to have another solution or solvent on hand, Luckham says.

The team is also developing spray-on, lightweight waterproof plaster casts and testing prototypes in partnership with U.K. military personnel who have lost limbs in combat. Commercialization of a final product is expected once testing has been completed. Spray-on bandages and other medical applications are also under development. An advantage of the spray-on fabric is that it's sterile when applied, which makes it attractive for emergency applications such as field dressings, Torres says.

One automotive company is working with Fabrican to develop spray-on fabric for car interiors. The challenge is to create a fabric strong and durable enough to withstand the daily wear and tear of the family car or a commercial vehicle.

Near-term, Luckham envisions giving people the ability to spray small amounts of fabric onto surfaces to create an instant face cloth or towel—or even using spray-on materials as decorations similar to papier mâché.