verbumlogos
A PERSONAL JOURNAL, KEPT LARGELY TO RECORD REFERENCES TO WRITINGS, MUSIC, POLITICS, ECONOMICS, WORLD HAPPENINGS, PLAYS, FILMS, PAINTINGS, OBJECTS, BUILDINGS, SPORTING EVENTS, FOODS, WINES, PLACES AND/OR PEOPLE.
About Me
- Xerxes
- New Orleans, Louisiana, United States
- Admire John McPhee, Bill Bryson, David Remnick, Thomas Merton, Richard Rohr and James Martin (and most open and curious minds)
6.6.23
Plato
Contrary to many people’s perception of him, Plato did not spend his entire life listening to Socrates philosophising in colonnades in Athens or writing dialogues meandering through complex ideas. He was once captured in the middle of the Mediterranean Sea and put up for sale in a slave market. The reason we seldom hear about this is the same reason there hasn’t been a stand-alone biography of Plato in English in nearly 200 years. The sources for his life are untrustworthy and fiendishly difficult to interpret.
The enslavement allegedly occurred while Plato was travelling home from Sicily in 384 BC. A number of ancient biographers claim that the philosopher boarded a ship with a Spartan who enslaved him on the orders of the tyrant of Syracuse, but Plato’s new biographer, Robin Waterfield, suggests it’s more likely that he was on board a merchant ship which caught the eye of pirates. The seas were full of marauders in this period and it is entirely possible that Plato sailed into treacherous waters. His luck changed after he was spotted in the market by an admirer who agreed to pay a ransom to secure his release.
Plato never mentioned any of this in his own writings, but then he rarely wrote about himself at all. The modern biographer must piece together clues from his works of philosophy, snippets of information provided by biographers and historians living centuries after he died and a small collection of letters and epigrams which are for the most part spurious. Waterfield is reluctant to dismiss the episode of Plato’s capture as pure fallacy because the circumstances are credible and the chronology seems to fit with what we know of his movements. If the story is true, it offers just a taste of what we may be missing.
Waterfield is open about the paucity of information on Plato’s life and is highly cautious in his reading of the sources and extraction of details, pirates aside. He remains wary, for example, of romantic tales of Plato’s travels to meet philosophers and seers. Tradition holds that the philosopher went to Libya to stay with a mathematician, to Egypt to study with priests, to Phoenicia to meet the Magi and to southern Italy to live with the Pythagoreans. ‘The fact that no two sources give Plato the same itinerary’, Waterfield argues, ‘shows that they are basically making it all up.’
And yet Plato did travel, including to Pythagorean Italy and, of course, Sicily. His time with Dionysius II, ruler of Syracuse, is not well documented, but Waterfield cleverly draws out what he can to evoke the spirit of philosophical mania that gripped the court during his visits. His account of Plato’s failure to reform the tyrant and establish a new constitution for him is particularly well done.
While the cautious approach is sensible, it does not always make for the most exciting story, since much of the colour of Plato’s life as we have come to know it lies in the strange, outlandish tales woven about him for reasons that can only be guessed at. It was said, for example, that Plato was either the product of a virgin birth (he was actually his mother’s fourth child) or the offspring of Apollo, god of prophecy and poetry. As a baby, his lips supposedly attracted Apollo’s bees and his mouth was filled with their honey. Later, on the eve of meeting the teenage Plato for the first time, Socrates dreamed that he was holding Apollo’s bird, a young swan, which found its wings and flew away singing. The next day, Socrates recognised Plato as the bird of his dream.
Depending on whom you read, Plato was celibate or libidinous, power-hungry or servile, normal or a little bit weird, with a ravenous appetite for olives. Waterfield understandably struggles to conjure a portrait of Plato’s personality out of such contradictory material, though his descriptions, such as that of the philosopher’s sense of humour – ‘understated, that of Cervantes rather than P G Wodehouse’ – help to nudge his narrative into the biographical genre and make this more than a work of intellectual history.
Waterfield probably rightly dismisses as slander the popular anecdote that ‘Plato’ was a sobriquet devised as a jibe at the philosopher’s beefy physique. The word could connote fatness or stockiness, either of which might have applied to Plato, even if he wasn’t enamoured of olives, but it was also a standard name in Greece. It is impossible to tell how rotund he really was from surviving portraits, which end at the shoulders.
Plato’s birth is traditionally given as 428/7 BC, but Waterfield makes a strong case for pushing the date forward to 424 BC or later on the basis that there is no evidence of him fighting in the final battles of the Peloponnesian War, as he would have done had he been old enough. He was probably born on Aegina in the Saronic Gulf but taken to Athens to grow up after the Athenians seized the island during the war.
Our main interest in Plato’s adult life lies in his establishment of what Waterfield calls ‘the ancient world’s most successful institute of higher education and research’. Students at Plato’s Academy studied everything from politics and logic to physics and optics and were encouraged to disagree with each other. Waterfield evokes its atmosphere superbly. Indeed, the passages on Plato’s teachings, his dialogues and his contribution to the field of philosophy are a particular strength of the book and help to compensate for the unavoidable patchiness of the biography itself. Plato ‘created a philosophy that was capacious enough to include contradictions and yet remain intact,’ he writes. The survival of Plato’s thoughts and words is perhaps all that really matters.
For People Who Devour Books
5.6.23
10.5.23
Eating in the Marais - by Meg Zimbeck
Simon Winchester on ‘Knowing What We Know’ from ancient times to AI
washingtonpost.com/books/2023/05/04/knowing-what-we-know-simon-winchester-review
Michael Dirda, May 4, 2023
“Knowing What We Know: From Ancient Wisdom to Modern Magic.” (Harper)
Simon Winchester considers the fate of humankind when machines think for us
‘Knowing What We Know,’ the latest from the author of ‘The Professor and the Madman,’ explores the evolution of knowledge, from oral storytelling and the development of writing to digital media and AI.
Review by Michael Dirda
May 4, 2023 at 12:00 p.m. EDT
About halfway through his new book “Knowing What We Know,” Simon Winchester devotes several pages to a British organization, founded in 1826, called the Society for the Diffusion of Useful Knowledge. By publishing inexpensive booklets on a variety of subjects, the group aimed to enlarge the intellectual horizons of the newly literate working classes of that era. In many ways, Winchester — the genial and much admired author of books about the Oxford English Dictionary, the volcanic explosion of Krakatau, the Yangtze and Mississippi rivers, geological maps, the 1906 San Francisco earthquake, the Atlantic and Pacific oceans, and so much else — might be appropriately dubbed the One-Man Society for the Diffusion of Useful Knowledge of our own era.
Whatever his subject, Winchester leavens deep research and the crisp factual writing of a reporter — he was for many years a foreign correspondent for the Guardian — with an abundance of curious anecdotes, footnotes and digressions. His prose is always clear, but it is also invigorated with pleasingly elegant diction: Fashionable gentlemen might be “grandees” or “swells,” while a country’s dependent provinces are dubbed “satrapies.” Winchester also neatly enriches his sentences with sly literary allusions: The intellects of Chinese censors, he notes in this new book, are “vast and cool and unsympathetic,” which is how H.G. Wells described the minds of Martians.
Above all, Winchester values precision. While many writers would be content to refer to “the Andaman Islands” and stop there, Winchester — trained in geology and the earth sciences at Oxford University — proffers a sharper geographical delineation: “the Andaman Islands, the string of lime-stoned jungle-covered skerries lying in the Bay of Bengal, off the coast of Burma.” In short, he is a pleasure to read, or even to listen to, as devotees of his audiobooks can testify.
In “Knowing What We Know,” Winchester surveys “how knowledge has over the ages been created, classified, organized, stored, dispersed, diffused and disseminated.” The book, nearly 400 pages long, covers oral storytelling, the development of writing, the emergence of libraries in antiquity, the discovery of paper by Cai Lun in China, the Gutenberg printing press, the heyday of the encyclopedia, the rise of newspapers, radio and television, the techniques of propaganda and public relations and, finally, the digital and artificial intelligence revolutions of our own time.
Do these subjects sound familiar? As a brief afterword explains, Winchester worked on the book while hunkered down in his study in western Massachusetts during the coronavirus pandemic, unable to travel for in situ research. Consequently, “Knowing What We Know” is less original than his best-selling “The Professor and the Madman” — about a convicted murderer, confined in an insane asylum, who became a major contributor to the Oxford English Dictionary — or “The Man Who Loved China,” an enthralling biography of the eccentric Joseph Needham, the biologist turned Sinologist who spearheaded the magisterial multivolume “Science and Civilization in China.”
“Knowing What We Know” is, instead, largely a synthesis of Winchester’s extensive but focused reading, amplified with occasional short borrowings from his own magazine articles and earlier writing. Informative and entertaining throughout, it is packed tight with his usual array of striking factoids: The Rosetta Stone is the most visited object in the British Museum, Virginia Woolf reviewed Arthur Waley’s translation of “The Tale of Genji” for Vogue Magazine, announcers in the early days of BBC radio read the news in full evening dress and — those were the days! — one issue of the Sunday New York Times in 1987 clocked in at 1,600 pages and weighed roughly 12 pounds.
In a loose sense, the first half of “Knowing What We Know” supplies the background history to the overriding and more philosophical question that eventually comes to the fore in the second half of the book: What will be the fate of humankind in a world where, increasingly, machines do our remembering, thinking and creating for us? Winchester worries “that today’s all-too-readily available stockpile of information will lead to a lowered need for the retention of knowledge, a lessening of thoughtfulness, and a consequent reduction in the appearance of wisdom in society.”
Himself a second-string polymath, Winchester hero-worships those people who, throughout history, have aspired to know everything or who have contributed useful innovations to multiple fields. In Western culture, Aristotle is the primary exemplar of this tradition, but “Knowing What We Know” touches on others nearly as accomplished, including the scientist Shen Gua in 11th-century China, the multilingual, multitalented Black African James Beale (who later renamed himself Africanus Horton), the saintly mathematicians Srinivasa Ramanujan and Frank Ramsey (both of whom died young), the learned classicist Benjamin Jowett, the 19th-century visionary Charles Babbage, who drew up plans for an “Analytical Engine,” Tim Berners-Lee, inventor of the World Wide Web, and John McCarthy, the founding father of artificial intelligence.
While the above list does not include any women, largely due to the constraints imposed on them in the past, its range does underscore the global perspective of the author. For instance, one section of “Knowing What We Know” surveys the various national exams given to young people, starting with the Scholastic Aptitude Test (SAT), now increasingly disparaged, though seldom for the reason mentioned by Winchester: “In the eyes of almost every educated country in the world, the American SAT is just ridiculously easy.”
Its Chinese equivalent, the dreaded Gaokao, “is to the American SAT as Go is to Go Fish.” A 2018 preliminary exam for 11-year-olds in the city of Shunqing tellingly featured this question: “A ship carries 26 sheep and 10 goats: How old is the captain?” One child not only calculated that the livestock weighed at least 7,700 kilograms but knew that piloting a boat carrying more than 5,000 kilograms of cargo required its captain to have had a license for five years. One could only apply for such a license at the age of 23. Ergo, the boat captain must be at least 28 years old.
Think of all the different kinds of knowledge that child brought to bear on this seemingly insoluble problem. Because computers can now answer our questions at a keystroke, they cannot help but encourage laziness and intellectual atrophy: As gym rats say about putting on muscle, “No pain, no gain.” Instant access to digitized information can be a useful adjunct to our daily lives, but it is still no match for the deeply human pleasure of acquiring competency, in learning how to do a difficult thing well all by oneself. Don’t we most admire those people who can perform intricate tasks, whether physical or mental, with confidence, grace and pizazz? As one Chinese girl proclaimed, learning was worth any amount of hard work “because she now had the knowledge.”
Winchester ends “Knowing What We Know” with the somewhat desperate speculation — earlier enunciated by Sherlock Holmes in “A Study in Scarlet” — that our minds can only retain so much information. By allowing computers to function as our brain attics, we might gain the mental space and leisure “to suppose, ponder, ruminate, consider, assess, wonder, contemplate, imagine, dream” and thus become more “thoughtful, considerate, patient” and “wise.”
Isn’t it pretty to think so? Yet I suspect that people are too gloriously messy, too human, for this sort of austere, Utopian future, whether imagined by Plato, Wells or Winchester. In fact, all that high-minded thinking sounds more like how some bloodless and very smart computer might happily spend the livelong day.
Knowing What We Know
From Ancient Wisdom to Modern Magic
By Simon Winchester. Harper. 432 pp. $35.
Why AI Will Never Rival Human Creativity
persuasion.community/p/why-ai-will-never-rival-human-creativity
William Deresiewicz
Study for Les Demoiselles d’Avignon, Pablo Picasso, 1907. (Photo by Luiz Souza/NurPhoto via Getty Images.)
AI might put artists out of business. It will not, however, replace them. It will not—cannot—make good art, great art: true art. Which is to say, original art. This is, I know, a dangerous prediction (“dangerous prediction”: a redundancy). But unlike the techies and pundits, in their glorious ignorant smugness, I have some sense of what art is and how it is created.
AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.
The African masks in Picasso’s Les Demoiselles d’Avignon, to take one of a million examples, were a low-probability choice. So were the footnotes in David Foster Wallace’s Infinite Jest. So was the 40-second chord at the end of The Beatles’ “A Day in the Life.” So is every new metaphor. Elizabeth Hardwick, who wrote criticism at the pitch of art, was famous for her adjectives: “the clamorous serenity of [Frost’s] old age,” Plath’s “ambitious rage,” the “aggressive simplicity” of the old New York aristocracy. None of these were probable. There are words, in art, for that which is: derivative, stale, clichéd. Boring.
Low-probability choices are leaps: lateral and unpredictable, associative and idiosyncratic. Where do they come from? Inspiration, we say, a word that explains by not explaining. Inspiration is mysterious (not the same as mystical, though some would say it’s that, as well). Its nature is obscure. It is neither conscious nor unconscious but instead involves a delicate and frequently elusive interplay between the two. It is serendipitous—like standing in a thunderstorm, said Randall Jarrell, and hoping to be struck by lightning. That is why successful works cannot be replicated even by the artists who create them. Every new one is a voyage of discovery, its destination unforeseeable—the very opposite of creating, as the AIs do, to a set of specifications. “The main thing in beginning a novel,” wrote Virginia Woolf, “is to feel, not that you can write it, but that it exists on the far side of a gulf, which words can’t cross: that it’s to be pulled through only in a breathless anguish.” Quality in art is an emergent property: it arises in the doing, in a dialogic dance between the artist and the work. As the work takes shape, it shows the artist what it wants to be.
Joan Didion, wrote Joyce Carol Oates, “began Play It as It Lays with no notion of character or plot or even ‘incident.’ She had only two pictures in her mind: one of empty white space; the other of a minor Hollywood actress being paged in the casino at the Riviera Hotel in Las Vegas.” Saul Bellow found the clue to The Adventures of Augie March, his exuberant early masterpiece, in the sight of water flowing down a street. Why not write a novel, he suddenly thought, that would have “as much freedom of movement as the running water”? Chatbots, in creating text, have only other texts to draw on. Artists draw on the totality of their experience. Both Salman Rushdie, a novelist, and Martin Scorsese, a filmmaker, have talked of being influenced by the music of the Rolling Stones. How do you program a chatbot, or a filmbot, for that—program it to draw on information that belongs to an entirely different medium? How do you program it for “influence”—that non-logical process of absorption, digestion, suggestion—at all?
And Rushdie and Scorsese were not only influenced by music. Like every artist, they were influenced—were shaped, were made—by everything they’d seen and heard and smelled and touched, everything they’d thought and felt and done. Experience: not just the source of art but its very substance—what it’s made from, what it refers to, what it is tested against. It might be possible to code for low-probability choices, I guess, but how would the computer know if the results are worth a damn? Art is good insofar as we recognize it as true, as corresponding to our experience of the world, both inner and outer. But for AIs, there is neither experience nor world. No sights or sounds, no joys or pains, and also no awareness—no idea whether what they make is true, and therefore whether it is good.
In art, what’s more, the true and the good are unstable. Every great breakthrough in art is rejected at first. All truly original art—Cubism, bebop, the dance of Merce Cunningham—is reviled by the standards of its day. Often it is judged to not be art at all. It is noise, or nonsense, or something that your five-year-old can do. AIs create, perforce, according to existing standards. If the true and the good are beyond their ken, all the more so are tomorrow’s true and good, the ones that don’t exist yet, the ones great art brings into being.
We argue whether artificial intelligence is truly intelligent, but even if it is, intelligence and creativity are very different things. Part of the confusion in discussions of AI and art undoubtedly arises from the degraded conception of creativity that has taken hold, in recent years, in tech. Nothing is original, techno-pundits like to say; “everything is a remix.” This is a banality that grew up to become a stupidity. That new creations build upon existing ones has long been a cliché, but the techies have stretched it to mean that nothing is ever original: that creativity involves, and only involves, the rearrangement of existing parts. Which makes you wonder how we ever managed to progress from the first painting in the first cave. Assisting these arguments is the concept of the meme, the idea that elements of culture propagate themselves from mind to mind, just as genes do from body to body. But the meme hypothesis (and it is only a hypothesis) fails to recognize that minds are capable of altering their contents. We don’t just passively transmit ideas and images, nor do we simply recombine them. Somehow, we manage to generate new ones: manage to create—through processes we do not understand and, I do not think, will ever replicate outside the human brain—the elements of culture to begin with. Or, at least, some of us do.
None of this, however, is to say that AI art—artificially generated songs, novels, visual images, even films—will not supplant the human kind. In the age of mass production, people have shown an unending willingness to accept cheap crap in place of costlier quality: in food, in consumer goods, and, more recently, thanks to the internet, in culture. Indeed, having turned art into “content”—limitless, interchangeable, disposable—the internet has already eroded taste to such an extent that fewer and fewer people are capable of distinguishing between crap and quality in the first place. Or bother to. As for artists, those rarities who bring the new to birth, good luck.
William Deresiewicz is an essayist and critic. He is the author of five books including Excellent Sheep, The Death of the Artist, and The End of Solitude: Selected Essays on Culture and Society.
Reckoning With Birth
commonwealmagazine.org/natality-mortality-banks-arendt-children-india-feminism
Doctors carry a newborn baby in a hospital (Javier Valenzuela/EyeEm)
Birth is humanity’s greatest under-explored subject. I had that thought over thirteen years ago, when I first gave birth and realized how very little in my upbringing and education had prepared me for the experience. I believe it still, although I have come to see how birth has been explored more extensively than I first imagined. Humans have thought about and written about birth from the beginning of recorded history, from ancient creation stories to medieval theological tracts, from philosophic manuals to obstetrics textbooks, and from nineteenth-century novels to twenty-first-century memoirs.
Look back at the earliest written sources and there birth is. In creation myths from ancient Egypt to ancient Greece, and from ancient India, Africa, and the Arctic to Indigenous communities in the Americas, the mystery of human birth was probed as a sub-narrative in the creation of the cosmos. Where did humans come from? How and why were they born? What is this creation they are a part of? The range and creativity of the answers people have come up with are astounding. The first humans are born from dismembered gods (Greek) or from the earth (Israelite). They emerged out of an ear of corn (Maya) or they were vomited out of a lonely god’s mouth (Congolese). They are born by sex or without sex, with mothers or, more often, without any women at all.
But despite birth’s recurring presence in the written record, and despite rumors of some long-lost matriarchal age and society that privileged a feminine divine and saw birth as the primary axis of imaginative, political, and social power, there is little evidence that birth was ever the foundational experience that any culture organized itself around. Just as women have been seen, in Simone de Beauvoir’s phrasing, as “the second sex,” birth has a sense of secondariness about it; it has long hovered in death’s shadow, quietly performing its under-recognized labor. Death has been humanity’s central defining experience, its deepest existential theme, more authoritative somehow than birth, and certainly more final. It is a given that humans are mortal creatures who must wrestle with their mortality, that death is the horizon no one can avoid, despite constant attempts at evasion and postponement and despite the recurring fantasy of immortality. Birth, meanwhile, is what recedes into a hazy background, slipping back past the limits of memory, existing in that forgotten realm where uteruses, blood, sex, pain, pleasure, and infancy constellate.
Perhaps it’s a survival instinct: from the time one is born, death becomes the most pressing concern. How to avoid death, how to deal with it as an inevitability—these are urgent questions. Different traditions have defined a range of ways of confronting death and integrating that encounter into one’s daily life. Roman Stoic philosopher Seneca spoke of death’s omnipresence in our lives: “From the time you are born, you are being led to death.” Our deaths are a point fixed by Fate; we cannot predict that point and we cannot control it. Accepting death and learning how to die were hailed by Seneca as paths to ultimate freedom. It is our love of life, he believed, our attachment to living, that holds us in bondage. “Study death always,” he instructed. “It takes an entire lifetime to learn how to die.”
Those who philosophize properly, Plato asserted centuries before Seneca, are those who practice death and dying. In the Christianity that matured alongside such Greek and Roman influences, the crucifix would overshadow the manger as the central symbol of liturgical worship, with Christ’s death and resurrection accruing more theological significance in most communities than Mary’s miraculous birthing. Celibacy and an otherworldly asceticism would be recommended for those on the fast track to salvation; the end was imminent, many early Christians believed, and true seekers should seek not to perpetuate the human race, but to be reborn into God’s kingdom. “Remember to keep death daily before your eyes,” St. Benedict advised a faithful flock of celibate monastics in the medieval period.
Or, as Buddhists have insisted for millennia: to be born is to be chained to endless rounds of human suffering. The consequence of birth is death, a Buddhist maxim asserts, and the renunciant’s goal is to escape from this hellish cycle, to gain enough insight into the nature of reality so that at death he or she is freed from birth once and for all. One ancient Buddhist text, the Sūtra on Entry into the Womb, describes the uterus as a place where a body is trapped “amidst a mud of feces and urine…unable to breathe.” The text is unambiguous in its perspective on birth: “I do not extol the production of a new existence even a little bit; nor do I extol the production of a new existence for even a moment. Why? The production of a new existence is suffering.”
By the twentieth century, these philosophic and theological traditions would be reimagined by artists like Russian filmmaker Andrei Tarkovsky, who believed that “the aim of art is to prepare a person for death, to plough and harrow his soul.” And by the twenty-first century, death was “having a moment,” an Atlantic reporter declared, as millennials joined forces with aging baby boomers in the global death-acceptance movement, creating “death cafés” and “death salons” where people could gather to discuss their mortality while sipping craft beers, eating cupcakes decorated with tombstones, and listening to presentations by hipster morticians.
But where are the birth cafés? And what hipster would ever be seen there? Faced with the resounding, final clap of death, what claims can birth have to existential, theological, or moral significance? To artistic or imaginative grandeur? To political importance? Does it really matter that, or how, we were born, that someone carried us in a uterus and then ejected us into the world through a tight canal headed downward toward the earth, or that we emerged from an abdomen, or that we grew in some test tube? What was that process? Where did it begin and where did it end? How did it shape us and how did it transform the people and places we were born into? What is the place of birth in the widest and deepest human story one might tell? And what does it mean that the greatest power humans have had—the power to create another human being—has been relegated in nearly all time periods and all places to a secondary status, a task to be performed by an underclass defined by their gender?
I’ve asked these questions obsessively for over a decade. Birth often felt so huge and untamed, so morally dense and so imaginatively rich, that it continually overwhelmed all human attempts at describing or controlling it. But I’ve wondered what human life would look like if the poets, sages, intellectuals, and political leaders had made statements more like these: “From the time we are born, we are being shaped by birth.” “Study birth always; it takes an entire lifetime to come to terms with our having been born.” “Keep birth daily before your eyes.” “Birth is evidence of our freedom.” “The fundamental purpose of art is to process the strange, painful, and miraculous experience of childbirth.” Imagine what the world would look like if we humans understood ourselves as natal creatures who throughout our lives, whether we like it or not, need to wrestle with our own natality.
I came across the word “natality” shortly after my first child was born. I was in my early thirties working as an editor at a university press about an hour up the coast from where I lived. Each morning I’d drop my daughter off at a small, cramped daycare, passing her into the arms of another woman. She’d wail as I walked down a corridor lined with finger-paint smudges on colorful paper, out through the heavy double doors and into the crowded parking lot. Fresh from the rapture, alive with birth’s dizzying intensities, I’d drive alone up I-95, past factories and smokestacks, supermarkets and fast-food chains, hugging the coast and gripping the wheel with a silent maternal fury. A limb was missing. Who was she, back there with that other woman? And who was I now? What had just happened? I wasn’t the person I had been. I thought the things that many new mothers think after giving birth: Why did no one tell me what this was like? Why did no one prepare me? Where was birth in all those books I’ve read so voraciously since childhood? An hour up the coast I’d go, into the outer world of meetings, conferences, opinions, and ideas. I’d park my car and walk to my office, sit down, and begin reading submissions from the world’s leading experts on various subjects. There were books on just about everything, it seemed. Everything except birth.
And then, there it was: “natality.” One strange word, suddenly appearing in a book proposal I received from a philosopher who was writing on childhood. The term, the philosopher said, had been coined by Hannah Arendt, one of the most celebrated and controversial thinkers of the twentieth century. “Natality” conveys the idea that birth as a beginning represents, in Arendt’s words, “the supreme capacity of man,” a capacity inherent in human life that is the “miracle that saves the world, the realm of human affairs, from its normal, ‘natural’ ruin.” Because we all were born, Arendt believed, we are always all capable of beginning again, of starting something new through each human action—the most prized of capabilities, in Arendt’s estimation. These definitions had an immediate, powerful resonance, the philosopher said, because Arendt articulated them after fleeing Nazi Germany as a childless Jew.
The author casually mentioned natality and then moved on. But the word stuck with me. Natality? Familiar words lurked within it—“natal,” “native,” “nature,” “nativity,” “nation”—and yet “natality” itself had an alien ring. “Natality” is in the dictionary, I discovered, but usually with a definition as brief as “1. birthrate.” But Arendt wasn’t speaking about statistics. Her natality planted itself in my imagination with all its foreignness and stayed with me, flowering in unexpected ways over the next thirteen years. In a world bedeviled by destructive tendencies, Arendt’s creative and democratic approach to birth, her entirely worldly and simultaneously miraculous understanding of natality, had a strong, subversive appeal. In her own life, Arendt chose not to have children; natality was not pro-natalism, not an argument for why women should give birth or become mothers. But she understood that while we may not choose birth, birth has already chosen us.
I clung particularly to this challenging insight of hers: that it is not enlightened wisdom to doubt human natality, or to argue against birth’s crucial role in human life. It’s a sign, rather, that one is ripe for totalitarian control. Today, celebrating birth can seem like an oblivious denial of just how dire our political, social, and ecological reality is. But Arendt saw birth and our engagement with it as a deep, direct encounter with reality in all its materiality, rather than as an evasion of it. Totalitarian leaders, she wrote, know neither birth nor death and “do not care whether they themselves are alive or dead, if they ever lived or never were born.” They take power when their subjects have stopped caring too. Totalitarianism thrives “when the most elementary form of human creativity, which is the capacity to add something of one’s own to the common world, is destroyed.” Each new thing we add to the world is another birth; our having been born is what guarantees us the ability to act, to work as agents in our societies. Once that creativity, as she defined it—birth, politics, action, people coming together to create new lives and new realities—had been completely extinguished, you had a mass society of atomized individuals who could be completely coerced into doing anything their leaders ordered. They had lost touch with reality, a reality that included the fact that they had all once been born and that this birth was evidence of their inherent, miraculous creativity. “Ideologies,” she wrote, “are never interested in the miracle of being.”
Despite Arendt’s fame, “natality” never made it far outside academia. It was virtually ignored by everyone other than specialists, and there is still no single, alternative word to express for birth what “mortality” expresses for death: how birth shapes all human life, defining its limits and its possibilities. Medical advancements have revolutionized birth over the past century, and a simultaneous explosion of writing and research about childbirth has been published in novels, poems, academic studies, how-to books, and memoirs across the globe. But birth remains a niche topic, a singular event relevant only to those experiencing it immediately.
Most people who have spent time with birth admit its seismic power, either positive or negative. But they often lack the language to articulate what it is or how it works. Birth is beyond language, people tell me, too mysterious and contradictory to be captured fully in words. Even as birth is ubiquitous now—splashed on the covers of magazines, dramatized in reality TV shows, and graced with its own product lines—it remains somehow shrouded in silence, exiled at the farthest reaches of what can acceptably be talked about in polite company. And so I witness them, mothers gathered in private, sharing birth stories the way veterans share war stories, like a secret upon which a society depends but which lingers in its shadows.
In the twenty-first century, birth remains unspeakable not only because of its graphic physicality, but also because of its thorough domestication—its reputed role in conserving a mainstream, normative order, one controlled largely by men. Feminism grew up in the twentieth century partially through various women’s radical disavowal of a traditional sexual politics that used birth as the key engine for women’s subordination. A woman who wanted to do anything of significance in this life needed a “room of one’s own,” as Virginia Woolf famously put it, not a house overrun with children. Simone de Beauvoir went further, writing, “Woman has ovaries and a uterus; such are the particular conditions that lock her in her subjectivity.”
Brilliant, radical, second-wave feminist Shulamith Firestone agreed with this point, arguing that women live “at the continual mercy of their biology—menstruation, menopause, and ‘female ills,’ constant painful childbirth, wetnursing and care of infants, all of which made them dependent on males…for physical survival.” It wasn’t just men who were to blame. It was nature itself. The biological division of labor had turned women into birthers and that division marked the beginnings of all class and caste systems. It was the first inequality, and it led to “psychosexual distortions” that humanity is still wrestling with. Firestone imagined a cybernetic future in which technology would take over childbearing and the work of raising children would be distributed across a society’s members. Artificial wombs would release women from the tyranny of nature.
Birth was understood as a problem by many leading voices in the movement, and sometimes their critiques of birth have overshadowed the complex and even unparalleled richness in birth found by many self-described feminists. The feminist critiques came as a needed corrective, and they deserved to be heard. Many women, after all, had died in childbirth since time immemorial. Women were given little agency or credit when it came to birth, but they were forced to deal with the full weight of its consequences. Expectations about birth had essentialized women according to a set of often oppressive ideas about gender, leaving childless women at the margins.
The easiest way around birth’s many conundrums was to avoid it altogether. Other twentieth-century movements made the same recommendation on different grounds, adding fuel to the flames of feminist critiques of birth. A global population-control movement, for instance, sounded the alarm about humanity’s increasing numbers. There are just too many people, Paul R. Ehrlich argued in his bestselling book The Population Bomb (1968). He believed we were birthing our way into extinction. Mass famine was on the near horizon. “Hundreds of millions of people are going to starve to death,” he anxiously predicted. And not only people. As environmental scientists have painfully illustrated, humankind is a destructive species, a threat to biodiversity. One of the major ways individuals can limit their carbon imprint, protecting other species, is by not reproducing.
By the twenty-first century, giving birth was not looking like a great option in many parts of the world. Having a child would limit one’s career opportunities and drain one’s finances. Birth would hurt the environment and might entail one’s participation in gender inequalities. It would be a selfish act, some argued, in a world with millions of orphans. Self-described “BirthStrikers” gathered into a small movement, refusing to have children and expressing their terror at the apocalyptic future any children might face.
Natality rates are now at record lows. About 44 percent of Americans between the ages of eighteen and forty-nine who don’t already have children say they don’t plan on having children at any point in the future; most of them simply don’t want kids, they report, while about a quarter of them cite medical reasons and about 14 percent cite financial concerns. Rates have fallen across classes and age groups, among the native-born and immigrants alike. In the United Kingdom, fertility rates in 2020 dropped to about a child and a half per woman, a record low. Global fertility rates likewise plummeted from the 1950s on, with wealthy G7 nations Canada, France, Germany, Italy, and Japan joining the United States and the United Kingdom at the head of the pack.
The declines may be a natural response to positive developments, including the fact that people in these countries are living longer and exercising more control over their reproductive lives. But they are accompanied by troubling and not unrelated trends: growing inequality and loneliness, rising suicide rates, fewer social services, greater political polarization, the spread of false narratives and propaganda campaigns, political setbacks for women, the stalled campaigns for racial justice, and the erosion of democratic norms. These phenomena all point to a profound isolation at the heart of modern life, a pulling back from a shared, embodied, and committed life with other people. Birth, like democratic politics, challenges us with otherness, with the putting aside of oneself to make room for another person, and with the challenges of difference and plurality.
The critiques of birth are not easily dismissed; without them, it is hard to imagine a different and more just social order. The negativity toward birth has had costs, however. It has historically alienated many ordinary women from the feminist movement and stymied a more systematic reappraisal of gender relations by emphasizing the priorities of individuals against the needs of the collective. Declaring birth barbaric or retrograde means undermining many people’s experiences and diminishing the role that women and caretakers have played in the history of human civilization. The aversion to birth that is articulated as an open rebellion against a patriarchal tradition often directly echoes the shame and disgust expressed about birth in that tradition itself.
A barrenness haunts these visions of life beyond birth, but it also haunts the fetishizations of birth that can seem at first like affirmations of it. In the twentieth and twenty-first centuries, for instance, birth has been used as a powerful moral prop by political movements otherwise deleterious to human life. In terms of political priorities, various pro-natal groups have valued the fetus’s life more highly than that of the struggling mother or the hungry child, the first-grader about to be gunned down in her classroom in a senseless mass shooting, or the species on the brink of extinction. In so exclusively sanctifying the unborn, these groups often approach birth as an unforgivable degradation.
What is missing in the culture war’s heated, polarized debates are the voices that imagine other possibilities, those who intuit a freedom in birth, not from birth. Take American novelist Toni Morrison, a single mom of two boys, who described becoming a mother not as the nail in the coffin of her oppression but as “the most liberating thing that ever happened to me.” She believed that we specifically asked to be born. “That’s why we’re here,” she said. “We have to do something nurturing that we respect before we go. We must. It is more interesting, more complicated, more intellectually demanding and more morally demanding to love somebody. To take care of somebody.”
Minimizing birth means diminishing one of the greatest powers humans have had: the creation and sustenance of life itself, the bringing forth of a next generation that might live better, imagine more, suffer less, and create a more lasting world. This doesn’t mean we need a specified number of people, or that it’s necessary to stay at replacement levels. Maybe we should dial back and hold our own viral spread in check until we’ve found more sustainable ways to live on our planet. But I stop far short of extinction, alarmed by descriptions of our species as a scourge that must be wiped from the earth, formulations all too similar to those used to justify ethnic cleansing.
It remains an open question for me: Are our attempts to rein ourselves in by controlling birth entirely responsible, or are they too tainted by the same destructive and even eliminationist mindset that has made possible genocide and environmental degradation? We, of course, are not separable from nature, hovering above or outside of it, protecting or destroying it. We are nature. Could our tendency to see ourselves as distinct from the rest of creation be part of the problem? These questions are some of the most complex and urgent we can ask in the twenty-first century, and the history of birthing we can draw on in wrestling with them doesn’t provide easy answers.
My husband, for instance, was born in 1972 in a small town in Gujarat, India, in the years when a Western-led campaign to limit the number of children born to poor, untouchable people like his parents reached its apogee. Despite having the youngest and the second-largest population on earth, India also has one of the world’s longest-standing official family-planning programs. In the early 1950s, not long after the nation gained independence, and while Western countries were experiencing their postwar baby booms, India adopted the world’s first national policy aimed at shrinking its domestic population. Contraceptives, sex education, and, eventually, sterilization were aggressively offered to both men and women. Technologies that Western feminists had celebrated for furthering the crucial cause of reproductive choice were taken up by neo-Malthusians and eugenicists who saw in birth control, sterilization, and family planning a way to shrink burgeoning populations in other countries. India was a point of particular focus. The Western population controllers who went there and were welcomed by Indian leaders came home horrified by the country’s crowds and by what they saw as its people’s impoverished, unmitigated misery. Their concern was sometimes an expression of genuine humanitarian impulses, but very often it was also infused with nationalistic, eugenicist, and exploitative ambitions and driven by fears of marauding, nonwhite hordes. Controlling human populations became in the twentieth century an alternative to outright warfare, with other countries kept in check not by the military occupation of their land but by strategic social-engineering schemes targeting their people’s fertility.
In 1975, three years after my husband was born, Indian Prime Minister Indira Gandhi imposed a state of emergency, giving herself the power to rule by decree. Among the human-rights violations that occurred during the Emergency was a campaign directed by Gandhi’s son that resulted in the forced sterilization of more than eight million people in a single year—many more people than were sterilized by the Nazis. The effort, bankrolled by American taxpayers, mandated that men with two or more children have vasectomies, and it also led to the sterilization of many men who were political opponents of the Gandhis, and of men who were poor, uneducated, or disabled. Botched operations killed thousands. The Indian people, still organized in loosely connected states distinguished by different languages, identities, and traditions, generally resisted this centralized government program. Many of the family-planning efforts in subsequent years shifted to the sterilization of women, who seemingly had less power to resist. Still, the campaign has been widely perceived as an abject failure. For a complex set of reasons, not all of them liberatory, many people in India kept giving birth, even when incentivized not to, and even when that birthing was an act of civil disobedience.
My husband’s parents had no more children after he was born. As a Dalit man, was his father subject to forced sterilization? Was his mother targeted? If so, my husband suspects they would have welcomed the sterilizations, burdened as they already were with three children and limited resources. He was glad they limited their family to three children; he grew up knowing how hard it had been for his grandparents to have large families, how difficult it was for his parents even to raise him and his two siblings. But he also grew up seeing the signs that read “Hum Do Hamare Do,” meaning “We Two Our Two.” The message was clear: two parents should have only two children. But there he was, growing up as a third child who violated the generational symmetry; he was the human surplus the posters warned against. This background has fostered my husband’s discomfort with group names like BirthStrikers.
The reality is that pro-natal norms have rarely been promoted evenly across populations. There have always been groups of people—the poor, disabled, religious or racial minorities, women on welfare, the gender-nonconforming, the sick—whom no government or powerful interests want to reproduce. People in these groups can come to birth with different baggage, histories that ironically help them see in birth opportunities denied to them in the broader culture: familial intimacies, self-definition, life affirmations, love, continuity with and respect for their ancestors, creativity, and the creation of a better world.
The pressure to procreate may feel very real to many people, and motherhood can be presented as an idealized state, but most mothers can attest to the fact that while motherhood may be superficially championed, at a deeper level it is often undermined by their culture. Motherhood is venerated in places like the United States except when it comes time to pay the bill from the maternity ward, offer maternity leave, feed a mother’s children, or come up with solutions to the child-care conundrum. Birth goes against widespread cultural values in the West: to accumulate and hoard capital, to seek one’s own individuation and success, to create and maintain one’s own private space, to avoid discomfort, and to eschew risk. Birth breaks down most of the dualisms humans use to structure reality: man/woman, mind/body, thought/experience, destruction/creation, self/other, creator/created, birth/death. In challenging those binaries, birth can be an act of resistance and motherhood an expression of alterity. Therein lies the difficulty of talking about birth today: birth is both the norm and its transgression.
And so maybe the twenty-first century is a time to think more carefully and deeply about birth, about what it has been throughout history, is today, and could be in our future. Maybe it is time for all people, and not just new mothers, to wrestle with human natality—to think anew about how birth has shaped our lives and societies, and how it has altered the course of our planet’s history. Can our reckoning with birth’s ubiquity and magnitude, its private and public significance, re-attune us not only to its difficulties but also to what Hannah Arendt called a “shocked wonder at the miracle of Being”? Can it remind us of our innate capacity to always begin again?
Published in the May 2023 issue.
Jennifer Banks is senior executive editor at Yale University Press, where she has acquired books on literature, religion, and philosophy since 2007. This essay has been adapted from her forthcoming book Natality: Toward a Philosophy of Birth. Copyright © 2023 by Jennifer Banks. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.
8.5.23
The Eros of Shirley Hazzard
hudsonreview.com/
by David Mason
. . . the truth has a life of its own.
—Shirley Hazzard, The Transit of Venus
Among the literary genres, biography appears to be thriving. Perhaps it satisfies some element of life writing we also get from fiction, adding a dose of gossip and the illusion that we can actually know the truth of other people’s lives. There is always more than one way to tell a story. Some good recent biographies have been thematic or experimental: Katherine Rundell on John Donne, Frances Wilson on D. H. Lawrence, Andrew S. Curran on Diderot, Clare Carlisle on Kierkegaard. We have authoritative doorstoppers from Langdon Hammer on James Merrill to Heather Clark’s numbingly detailed book on Sylvia Plath. And we find a happy medium-sized biography in Mark Eisner’s on Neruda or Ann-Marie Priest’s on the great Australian poet Gwen Harwood. Among the best of these, Brigitta Olubas’ Shirley Hazzard: A Writing Life is not overstuffed or particularly arcane in structure, not weighted down with newly discovered scandal, but lucidly and even gracefully organized, guided by a compelling thesis.[1] Olubas believes, and I agree, that Hazzard pursued one erotic object more than all others, poetry, which is inseparable from Eros in its other meanings. “This . . . large belief in romantic and sexual love stands behind all Shirley Hazzard’s writing,” Olubas tells us. “It is aligned with her sense of human connectedness and above all with poetry, which is at heart for her a way of being human.”
Too flinty and realistic to be an aesthete, Hazzard nevertheless pursued a life steeped in aesthetic pleasure, and this shows in her fiction, both in its acute observations and its verbal scrupulousness. Her novels are very nearly poems. One reads the best of them, The Transit of Venus, having to pause on virtually every page to absorb the shock of her sentences. Eros for Hazzard is a power beyond paraphrase, essentially poetic in nature. She believed her life had literally been changed by poetry, just as it was changed by love. Yet she was no dreamer. She had a strong introduction to reality, even political reality, and the aftermath of war. Love and beauty were important precisely because so much of life was lived without them. She was equipped by her life to offer a global vision, international in scope, suspicious of national loyalties, even as it focused at times on the smallest domestic details. She was a wonderful writer, utterly sui generis.
Hazzard was born in Australia in 1931. At the time a country of roughly 6.5 million people, still living under the notorious “White Australia Policy” that imposed strict quotas on immigration by race, it felt to many people an insular and backward society. Hazzard was not alone among Australian artists in wanting to get the hell out. As an Australian herself, Olubas writes with understanding of Hazzard’s antipathy for her native land, which was complicated in her later dealings with it. Hazzard was an internationalist from the get-go, more interested in the wider world than many of her fellow Australians seemed at the time to be. The Transit of Venus contains several of her most acerbic passages on her childhood home: “There was nothing mythic at Sydney: momentous objects, beings, and events all occurred abroad or in the elsewhere of books. Sydney could never take for granted, as did the very meanest town in Europe, that a poet might be born there or a great painter walk beneath its windows.” I felt the same shudder of provincialism growing up in the Western United States. It is a misunderstanding, but in literature a fruitful one. Hazzard and Australia were always wary of each other. As Australia changed, Hazzard was rarely there long enough to witness it. For their part, Australians have sometimes read her with suspicion or held her at arm’s length, as if she were too polite to be one of them.
Her characters are hardly less judgmental of Americans. In her final novel, The Great Fire, several characters remark on the ascendancy of American political interests following World War II and the inevitability of more war. Or, as the narrator, perhaps speaking for her characters, puts it in Transit: “It was a pity one could not have a better class of saviour: Americans could not provide history, of which they were almost as destitute as Australians.” This is less authorial snobbery than a comedy of manners. Her short stories about office life at an organization closely resembling the United Nations make the same comedy out of any nationality you can name. It is a universal human misapprehension.
She grew up in an intelligent family, her mother an unhappy Scot, her father a Welsh Australian with a prominent diplomatic career. She and her sister were verbally adept, Shirley startlingly so. None of them really got along. When her father, Reg, was posted to Hong Kong as Australia’s Trade Commissioner, only Shirley really thrived. Her mother pined for Sydney. Her sister came down with tuberculosis. Her father pursued his career as a philanderer. But Shirley, at age sixteen, got herself a job and fell so profoundly in love that she never recovered from it.
A stopover in Japan on the way to Hong Kong gave her proximity to one of the most devastating events in all of human history, the bombing of Hiroshima. This catastrophe, brought about by idealists confronted with gruesome reality, underlies the worldview in much of her fiction, particularly The Great Fire and Transit of Venus. From the latter:
In the past, the demolition of a city exposed contours of the earth. Modern cities do not allow this. The land has been levelled earlier, to make the city; then the city goes, leaving a blank. In this case, a river amazed with irrelevant naturalness. A single monument, defabricated girders of an abolished dome, presided like a vacant cranium or a hollowing out of the great globe itself: Saint Peter’s, in some eternal city of nightmare.
The allusion to Shakespeare’s Tempest is no accident. Hazzard uses words with a poet’s tragic accuracy, from a river’s “irrelevant naturalness” to that building’s “vacant cranium.”
In Hong Kong she worked in the offices of the British Combined Intelligence Services, where she was surrounded by people, many of them men, who had not only come through the war, but were also devoted internationalists and classically educated linguists. Hazzard was an autodidact, already proficient in French and soon to learn Italian (having fallen in love with Leopardi). Despite her lack of formal education, she must have seemed brainy beyond her sixteen years, but she was a vulnerable girl from an unhappy family. When she met Alexis Vedeniapine, a White Russian immigrant who was also a British war hero, she fell in love with him—and he, over time, with her. She would alter and idealize this relationship in The Great Fire—with deliberate artistic purpose, I will argue, though not all readers agree with me. A more realistic depiction of their “affair,” put in quotes because it was never consummated, can be found in her late short story, “Sir Cecil’s Ride.”
Alec had been seriously wounded and taken prisoner during Operation Market Garden in Holland, then after the war had been posted to China, where he had grown up, his family having escaped the Russian Revolution. He was typical of Hazzard’s loves in many ways: highly intelligent, a lover of poetry and languages, and significantly older than she. Their attraction reverberated through her life partly because she was so young when it started, but also because her family removed her from it, returning first to Australia, then to New Zealand. Alec shared her assumption that they were engaged and knew her family would object because he was nearly twice Shirley’s age. When he returned to England, ultimately to a farmer’s life in Hertfordshire, it became clear not only that they would never marry—Shirley, too, had outgrown the attachment—but that Alec was not the literary man she had dreamed him. He had lived through revolution and war, had studied the effects of revolution in China and produced disillusioned reports about where things were headed there, but what he wanted for his own life was entirely quieter and less aesthetically involved. He was not an artist.
When Reg was posted to New York, Shirley made her final and full escape from the Antipodes. The family stayed in London on the way, so she was able to indulge her literary appetite for the Mother Country. And in New York, which would be one of her primary homes for the rest of her life, she nurtured her love of the arts while working in an entirely political realm, the United Nations. Though the work consisted mainly of typing and filing, she was always surrounded by internationalists, people of intelligence and engagement. While it was clear that, as a woman, she could never advance in her employment as easily as her male friends did, she absorbed the culture of the place, and really the culture of any office life—the compromised ideals, the pettiness and gossip, the small but pervasive power struggles, the turf wars. Some of her funniest short stories and most astringent critical prose would come out of this work experience.
Her affairs with men, frequently in the office, developed a pattern. Usually they were older and married. Occasionally they were gay, and a sort of Platonic affair resulted. She seems always to have been open-minded about sexuality and maintained loyal friendships with gay men throughout her life. As it happens, I know many women who have a hard time finding a man remotely worthy of them, and Shirley might have been stuck in this pattern for a long time if she had not, through her friend Muriel Spark, met the great translator and biographer, Francis Steegmuller. He was older too, possibly bisexual, and widowed from a first long marriage. Shirley was thirty-two when she married him, Francis fifty-seven. They would have more than thirty years together before his death in 1994—by most accounts a happy marriage, and certainly in literary terms a productive one.
*
Shirley would not have met Francis had she not already been an established writer; she began publishing short stories in The New Yorker when she was thirty. The first story to be accepted, though not the first to be published, was “Harold,” a funny, unexpected evocation of poetry’s power.
In her twenties, fleeing a dead-end love affair, Shirley had got herself posted to the United Nations Emergency Force in Naples—an office tasked with supplying peacekeepers in Suez. She had fallen in love with Italian poetry before she fell in love with Italy, but Naples, and nearby Capri, sealed the deal. She also began staying as a guest at the Villa Solaia in Tuscany, home of the famous Vivante family, and it became something of a second home to her. “Harold” is set there on a summer night at the dinner table outside the villa. Hazzard briefly describes those at table, some of them judgmental and grumpy foreigners on holiday. They anticipate the arrival of another foreigner, an Englishwoman, and her son. When these do arrive, the son proves an awkward, clumsy boy who thinks of himself as a poet. How embarrassing! The dinner guests, deciding to be charitable to the boy, ask him to read his poems. What they do not expect is that this misfit boy might be a genuine artist, like Baudelaire’s ungainly albatross:
When he had read aloud for a few minutes, the boy looked up, not for commendation but simply to rest his eyes. Charles said quickly: “Go on.” The inclined young face had grown, in the most literal sense, self-possessed. Their approval, so greatly required in another context, had now no importance for him. He spoke as though for himself, distinctly but without emotion, hesitating in order to decipher corrections, scattering his crumpled papers on the table as he discarded them. It seemed that no one moved . . . ; they had separated into solitary, reflective attitudes that conceded this unlikely triumph.
The boy’s mother calls him inside, and his last stumbling words from within the villa are “I’m sorry,” but Hazzard has, without quoting a word of the poems, given us their effect, the stillness of unexpected beauty and eloquence.
Hazzard’s short stories are, like all her prose, startlingly precise, often comic, with dark shadings as they reveal human struggles between reality and idealism. Brigitta Olubas has edited a Collected Stories that should be read by anyone who loves the form. She has also edited We Need Silence to Find Out What We Think, a selection of Hazzard’s essays that I have not yet read, though I am eager to do so. A literature professor in Sydney, Olubas was ideally situated to write Hazzard’s biography. She has done so with remarkable poise, using Hazzard’s prose to elucidate her life, and vice versa, noting where Hazzard has changed details for artistic purposes. Whether Francis was gay or not hardly concerns her, because the important part of the marriage is the way they supported each other, helped each other with their work and enjoyed the cultural pleasures offered by New York, as well as their extended stays in France and Italy, among other countries.
Hazzard’s writing for The New Yorker had already freed her to quit her UN job, and Francis was rich, which didn’t hurt. His first wife had been the painter and heiress Beatrice Stein. Her death from cancer left him grief-stricken, slow to commit to another marriage, but Shirley stuck with him, and they forged what appears to have been a very good life of travel and writing—they even kept a Rolls-Royce in a Swiss garage for their European sojourns. Francis’ biographies of Flaubert, Apollinaire and Cocteau, among others, all sold well. If Shirley put his career ahead of hers, as many women of her generation might have done, her writing does not seem to have suffered from it. Her last two (and greatest) novels came at long intervals, but there was plenty of other writing in between. They had many friends, and I will confess that a few pages of Olubas’ biography became such a blur of social engagements that I skimmed them. It is the biography’s only flaw, but an unavoidable one, given the lives involved. They seem to have known everyone in Parisian, Italian and New York literary circles. And I mean everyone. The book is a name-dropper’s paradise.
Shirley had wanted children, but a miscarriage and later hysterectomy ended that dream. Perhaps as a result, she maintained many friendships with younger writers, especially poets, all of whom remembered her fondly. Among her elders, Alfred Kazin, who was close to Francis and then also to her, was astonished by her:
. . . the magic of Shirley the Hazzard. When will we learn from a woman like this—with her incredible gentleness, the light that fills where she is, that love is a form of intelligence—a way of listening to the world, of taking it in, of rising above one’s angry heart . . .
Italian friends noted a special vitality in her eyes that endeared her to them. She seems to have been generous but firm in her opinions, “endorsing love,” as one of her characters says, and opposed to cruelty. Graham Greene disliked her—she was vulnerable to dismissive views of her particular enthusiasms—and her memoir of him on Capri concerns mostly her life there with Francis. Having watched her in a few YouTube videos, I think I would have liked her very much. Olubas tries valiantly in several passages to give impressions of Hazzard’s voice, her “tirelessly humane” conversation, but we are best served in that regard by what she has left us in her fiction.
*
The Transit of Venus was published in 1980 and quickly became a bestseller. It is the high-water mark of her writing, and perhaps one of the high points in the history of the modern novel. You have only to read the opening paragraphs to know you are in the hands of a master:
By nightfall the headlines would be reporting devastation.
It was simply that the sky, on a shadeless day, suddenly lowered itself like an awning. Purple silence petrified the limbs of trees and stood crops upright in the fields like hair on end. Whatever there was of fresh white paint sprang out from downs or dunes, or lacerated a roadside with a streak of fencing. This occurred shortly after midday on a summer Monday in the south of England.
As late as the following morning, small paragraphs would even appear in newspapers having space to fill due to a hiatus in elections, fiendish crimes, and the Korean War—unroofed houses and stripped orchards being given in numbers and acreage; with only lastly, briefly, the mention of a body where a bridge was swept away.
That noon a man was walking slowly into a landscape under a branch of lightning.
When I read that final sentence I felt, well, electrified. And the charge of her language hardly lets up for more than three hundred pages. This devastated landscape is the world, a location of such violence and indifference that it can crush individual lives with hardly a whisper. And it is the world in which love and even goodness must matter, if anything is to matter. Shirley was friends with the American poet Anthony Hecht, who had also pitted a kind of beauty against the horrors of war and the Shoah, and who came in his own life to a calm acceptance of love. They are similar writers in some ways—similar poets of real magnitude.
That man walking under a branch of lightning is Ted Tice, an astronomer. He is about to meet and fall in love with a young Australian woman, Caro Bell, whose sister and half-sister bear some resemblance to Hazzard’s sister and mother. But if these characters have roots in autobiography, they are thoroughly transmuted. Hazzard’s titles are resonant metaphors, her characters alive in the galaxy of their author’s fierce intelligence. The novel is darkly funny, ultimately tragic. I’m in the midst of rereading it now, dazzled by its force.
I’ve also just reread The Great Fire, which appeared in 2003. That long delay between novels is partly explained by Francis’ decline and death in 1994. Several people I know and respect have misgivings (or graver doubts) about The Great Fire, so I should try to say why, for me, it works. From her short stories onward, Hazzard was able to write convincingly about men as well as women, and her male characters in The Great Fire, from the hero, Aldred Leith, clearly a version of Alec Vedeniapine, to his idealistic friend Peter Exley, all seem believable characters who could appear in novels by anyone from Joseph Conrad to Graham Greene and Evelyn Waugh. The harder characters for many to believe are Helen, the teenaged Australian girl with whom Leith falls in love, and her fatally ill brother, Benedict. Their literary names signal an authorial desire to give them poetic properties, and this I acknowledge. But that is also why I don’t read The Great Fire as straight realism, but as a sort of Shakespearean Romance in which love might actually succeed in redeeming some lives. But not all lives. Hazzard understands the context in which she is placing this artful, and in many ways unlikely, entanglement. She is pitting the imagination against the pressure of reality, and while the resulting experience may be utterly literary, I still find it coherent and beautiful.
In some ways the characters are schematic. Leith is the aging, wounded veteran, disillusioned but hoping to find some kind of peace and beauty in his life. Helen’s father is the rude Australian, the colonial who acts more like a brutal colonizer than anyone else. The novel is set largely in Japan, close to Hiroshima, so the title metaphor remains near at hand. What could possibly come out of such catastrophe? How are human beings to act in a world where such things happen? Hazzard does not see real efficacy in political solutions, only in personal choices.
Helen, first referred to as “a changeling,” seems waiflike, one of those literary creations who have not yet grown up. She is all literary intelligence and love, perhaps an idealized version of Hazzard herself when she fell in love with Alec. That is why I think the novel is a Romance, one half step removed from reality toward the realm of magic and dream.
I’ll just offer one brief quotation to illustrate Hazzard’s writing here. The scene is in London. Leith has returned to put his affairs in order while Helen is stranded (just as Hazzard herself was) in New Zealand. He is talking with his late father’s lover, who happens to have been his own lover before that.
Leith had also brought her a circlet of carved jade, in the colour called kingfisher. Aurora gave him a small wrapped book. He said, “I walked to St. Paul’s this afternoon.” He got up and, going to the fire, looked into the lovely picture. “What happened to John Bull?”
“We forgot to pack him, and he got blitzed.” Aurora said, “So you’ve seen the town. The clearing away has made it starker. Has put it in the past. When you were here, in ’45, rubble still provided a sort of immediacy.”
“Or I was too sunk in my own rubble to take things in. The churches, every one of them a ruin.”
“Yes. Poor God.”
That last touch, like the ironic short line ending a stanza in a Thomas Hardy poem, resonates in more ways than I can say, historical, religious, aesthetic.
At an earlier point, Leith writes in a notebook, “It is incompleteness that haunts us.” I don’t have the sense of a sentimental ending in which he is somehow “completed” by love. Instead, I have the sense of two people who have not yet died in the great fires, who have the power to choose love over indifference.
[1] SHIRLEY HAZZARD: A Writing Life, by Brigitta Olubas. Farrar, Straus and Giroux. $35.00.
24.4.23
Allen Toussaint at the Keyboard
myneworleans.com/allen-toussaint-at-the-keyboard
The Editor's Room
April 24, 2023 | By Errol Laborde
Allen Toussaint (AP Photo/Patrick Semansky)
Several years ago, at Jazz Fest time, I was working on an article about New Orleans musicianship, especially piano-playing technique. I badly needed to talk to an expert and figured I could do no better than to get Allen Toussaint, the city’s famed songwriter, on the phone.
There were two surprises. One, somehow, I actually got hold of his number. Two, he answered. For a man of his stature in the music industry, I was braced for the call to be blown off as another inquiry from a music geek, but no, and this was surprise three, he actually embraced the topic. Piano playing was his life; it was what he wanted to talk about. He was cordial and tried his best to make sense out of a complex question. The conversation needed embellishment. That’s when he paused to offer instrumental examples. I had not realized it, but the whole time we had been talking Toussaint was sitting at a piano. He was in his natural environment as he placed the receiver to the side and began to play different examples of New Orleans style. This was a motherlode. Forget about the interview: I was having a private concert, enriched by explanations, from Allen Toussaint. I learned something about New Orleans piano music from the experience, and a whole lot about Allen Toussaint, who was as classy as he was prolific and creative.
His profile of song hits – most written for others, some performed by him – included:
•All These Things
•A Certain Girl
•Fortune Teller
•Holy Cow
•It’s Raining
•Lipstick Traces
•Mother in Law
•Southern Nights
•Whipped Cream
•Working in a Coal Mine
All of these songs create sweet memories for somebody – some romantic, some humorous, such as singer Benny Spellman’s bass refrain of “Mother in Law!” dispersed between Ernie K-Doe’s lyrics about domestic frustration.
“Southern Nights,” which Glen Campbell recorded, could have been made into poetry recalling warm evenings beneath stormy skies. While performing that song, Toussaint would drift into a recitation of youthful memories that was theater in itself.
Toussaint died suddenly Nov. 10, 2015, at 77, during a trip to Spain. Like everyone else who knew of him, I wish I had had a chance to hear him more often. I do remember the last time I saw him, and it was certainly an expression of his generosity:
In June 2012, the city was abuzz about the plans of the Newhouse chain to reduce publication of The Times-Picayune newspaper to three times a week. Besides the loss of news coverage, many of the publication’s employees were losing their jobs. A rally was held in the parking lot of Rock ‘n’ Bowl to help support the newly unemployed. Toussaint was a big star who could have commanded huge fees. That day he was a volunteer performing at a keyboard on a makeshift stage.
“Allen Toussaint has rescued the careers of many singers,” I would write, “but I never thought he would be needed to help salvage the Times-Picayune. There he was though, in the Rock ‘n’ Bowl parking lot performing at a rally to save the newspaper from marginalization at the hands of its owners.
“Toussaint’s opening song, ‘Holy Cow,’ one of his many classics, summed up the situation perfectly, as though written for the Newhouse clan and their accomplices:
“I can’t eat
And I can’t sleep
Since you walked out on me, yeah
Holy cow, what you doing, child?
Holy cow, what you doing, child?
What you doing, what you doing, child?
Holy smoke, well, it ain’t no joke
No joke, hey, hey, hey.”
“ ‘What’s going on?’ a truck driver paused at a stop sign asked as I waited to cross South Carrollton to attend the event. ‘Is there free music or something?’ I explained that it was a rally to try to save the newspaper. ‘Oh yeah,’ he responded, ‘that three times a week thing, that ain’t no good.’ Then he drove away.”
“First my boss
The job I lost
Since you walked out on me, yeah
Holy smoke, what you doing to me, me?
Walking the ledge
Nerves on edge
Since you walked out on me, yeah
Holy cow, what you doing to me, child?”
In January 2022, what had been Robert E. Lee Boulevard was renamed Allen Toussaint Boulevard. There was never any real disagreement about Toussaint’s worthiness to have a street named after him, though there was at least one suggested alternative: Gentilly Boulevard. Toussaint lived in that neighborhood, and at one point the street parallels the Fairgrounds, where the Jazz Fest is held.
In terms of New Orleans musical legacy, however, all roads somehow connected through Toussaint.
An Alternative Tour of London for King Charles’s Coronation - The New York Times
In the Footsteps of Charles III
nytimes.com/2023/04/20/travel/london-charles-coronation-tour.html
April 20, 2023
Image: A kiosk selling small flags and other memorabilia for the coronation of King Charles III, with the Ritz hotel in the background.
No royal heir in British history has waited longer than Charles III, the king formerly known as the Prince of Wales, to ascend the throne. When he is officially crowned, on May 6, Charles will be 74 years old — a full 47 years older than his mother, Elizabeth II, was at her own coronation way back in the mid-20th century.
A lot has changed in the monarchy, and in the monarch, since the early days of the queen’s reign. Elizabeth came to the top job through accidents of history and fate. Her uncle, Edward VIII, abdicated in 1936, disrupting the normal order of succession; her father, George VI, succeeded him but died 16 years later at the age of 56, propelling Elizabeth onto the throne. By contrast, Charles — the oldest Prince of Wales in British history to become king — was born a monarch-in-waiting and has had a lifetime to prepare.
The public in turn has had a lifetime to get to know Charles, starting from his rarefied childhood in the public eye. We had a ringside seat at his marriage to Diana, Princess of Wales, who died in 1997; we followed his affair with and eventual marriage to Camilla Parker Bowles; we saw his struggles with his second son, Prince Harry, in an ongoing saga that is bound to spill over into the coronation, which Harry is scheduled to attend without his wife, Meghan, the Duchess of Sussex.
By tradition, heirs to the throne don’t meddle in unroyal matters. But Charles was an unusually outspoken Prince of Wales. He is known as a lover of classical music, a student of philosophy and world religions and a proponent of sometimes controversial ideas. He has often waded into debates on unexpected topics like alternative medicine and organic farming (pro) and modernist architecture (against).
In London alone, there are plenty of royal spots to visit (Kensington Palace and Westminster Abbey, for starters), royal-themed exhibits to explore (“The Royal Palace Experience” at Madame Tussauds) and coronation-themed walking tours through this most inviting of cities.
But for visitors interested in exploring the history and psyche of the new king, here are some stops on an alternative royal tour in and around the city.
Image: Prince Charles (as he was known at the time) at Highgrove House, his private residence in Gloucestershire, England, in 1994. (Credit: Getty)
Highgrove
Charles bought Highgrove House, a Georgian neo-Classical estate in Gloucestershire, in 1980, before he married Diana. He saw it as a refuge, a bolt-hole in which he could pursue country pleasures and contemplate the beauty of nature; she found it boring and preferred the city. More and more, it became the place where he arranged discreet trysts with Camilla Parker Bowles.
The house — reachable by bus from London, or by taking a train to Kemble, and then a taxi — is closed to outsiders. But the grounds are open for tours each April through October. Until the end of May, there’s also an exhibition at the Garrison Chapel in Chelsea, London, “Highgrove in Harmony: Exploring A Royal Vision,” that lets you appreciate the gardens without leaving the city.
The exhibition demonstrates how fully the gardens embody Charles’s philosophical and aesthetic preoccupations: his love of nature, his passion for tradition, his enthusiasm for artisanal crafts. There’s a winsome photo of him on his knees weeding, and several never-before-seen princely watercolors and sketches. Charles is said to have even personally planted much of the thyme in what is known as the Thyme Walk.
The gardens consist of a number of interconnected parts and you can see photographs of them all here, including the flower-dotted Wildflower Meadow, which Charles envisioned as imitating “the foreground in Botticelli’s great painting ‘Primavera.’” It’s farmed using traditional methods — scythed by hand and visited each autumn by Shropshire sheep. As Charles once said: “I never underestimate the value of the ‘golden hoof’ in the great scheme of biodiversity.”
Image: Charles, pictured here at Trinity College in Cambridge, became the first Prince of Wales to receive a university degree. (Credit: Getty)
Trinity College, Cambridge
When Charles was still an unhappy student in a remote Scottish boarding school, a high-level committee decreed that, in a break with tradition, he should continue his education instead of going directly into the military. Thus he became the first Prince of Wales ever to receive a university degree.
The young prince was sent to Trinity, the richest of Cambridge University’s 31 colleges. His life there was hardly normal. Isolated by temperament and position from most of the other students, Charles frequently repaired to the countryside for shooting weekends and to London for cultural and state functions. (He also studied for some months in Wales in preparation for his formal investiture as Prince of Wales at Caernarfon Castle, a medieval fortress in northwest Wales.) According to a contemporary report in The Times, Charles did give student life a go. He sang, acted, contributed to the university magazine, played polo against Oxford, took an evening pottery course and went on an archaeological dig with other students to the island of Jersey.
You can visit the princely academic habitat by taking the hour-or-so-long train ride from London to Cambridge; Trinity is a short cab ride into town. Founded by Henry VIII in 1546, the college is impressively grand, due in part to its reported 1.3 billion pounds in assets. (Its holdings include the O2 Arena in London and great swaths of the busy and lucrative Port of Felixstowe.)
Since the pandemic, Trinity’s awe-inspiring interior — including the Great Court; a famous statue of Henry VIII holding a chair leg that at some point replaced his original sword; and the library designed by Sir Christopher Wren and dating to 1695 — has, sadly, been closed to the public. But visitors can approach from the back, walking along a network of college-owned lawns across the Cam River, and peer in through the formidable gates. There the college porters, resplendent in bowler hats, will be happy to share royal and other tidbits.
Image: The British Library, an expensive, epic project designed by the architect Colin St. John Wilson, was a building that Charles particularly disliked. The library, which opened in 1998, is now a vibrant cultural center. (Credit: Tom Jamieson for The New York Times)
The British Library
Charles, a devotee of traditional building materials and traditional buildings, has spent many years attacking what he sees as the scourge of modernist architecture. In 1984, he managed to offend many members of the London architectural establishment by denouncing their work in a speech to the Royal Institute of British Architects.
His interventions had a knock-on effect, causing the cancellation of some of the buildings he singled out for particular scorn. Some of the casualties: a planned extension to the National Gallery, which Charles compared to a “monstrous carbuncle”; a Mies van der Rohe-designed building that he called “a giant glass stump”; and three projects by Richard Rogers, the modernist architect who died in 2021. “You have to give this much to the Luftwaffe,” Charles said, referring to one of Mr. Rogers’s proposals. “When it knocked down our buildings, it didn’t replace them with anything more offensive than rubble.”
Charles’s biggest bête noire was the British Library, an epic project designed by the architect Colin St. John Wilson that cost more than $700 million and took 36 years from inception to completion, and that he said looked like a “dim collection of sheds groping for some symbolic significance.” (For good measure, he compared its reading room to “the assembly hall of an academy for secret police.”)
The library opened in 1998 and was an instant hit, though most people agree that the building’s pedestrian red brick exterior does a disservice to the majestic interior, with its cunning deployment of multiple levels, soaring spaces and beautiful use of light. Its centerpiece is the King’s Library, a leather- and vellum-bound collection that rises up in a six-story glass bookcase at the building’s core.
The library is now a buzzy and vibrant cultural center thrumming with life. Some of its greatest treasures are displayed in its dedicated exhibition space. (Asked for a recommendation on a recent visit, a library official said: “I quite like the Magna Carta.”)
Mr. St. John Wilson never really recovered from all the criticism, though he was knighted in 1998. He died in 2007, at 85. Eight years later, the library was designated a Grade I-listed building, Britain’s highest heritage honor.
Image: The Ritz, where Charles and Camilla publicly appeared together for the first time, evokes old-school opulence and obsequious hospitality. (Credit: Tom Jamieson for The New York Times)
The Ritz
London is full of Charles-related locations, as you might expect. There is Clarence House, where he and soon-to-be-crowned Queen Camilla lived for many years. There is Hill House in West London, where he went to elementary school, and the Tower of London, where many of his family’s jewels are on display. (It’s currently without the Imperial State Crown and Queen Mary’s Crown, which are being used in the coronation.)
And there is the Ritz hotel, the scene of the historic occasion when Charles and Camilla emerged from the shadows of their extramarital affair and appeared in public as a couple for the first time. All they did was leave a party and briefly stand outside. But this was the moment, as The Independent newspaper put it at the time, when “15 seconds of blinding flashbulbs ended at least 12 years of ducking and diving.”
The year was 1999, four years after Diana threw a grenade into the royal myth by declaring that “there were three of us in this marriage” — meaning she, Charles and Camilla. The couple divorced in 1996; Diana’s fatal car accident took place the following summer. The appearance at the Ritz was the beginning of Camilla’s integration into Charles’s public life, culminating in the couple’s marriage, in 2005.
The Ritz is as stately a venue for a romantic coming-out as it is possible to be. Situated on the corner of Green Park on Piccadilly, it evokes old-school opulence and over-the-top obsequious hospitality.
Nonroyal visitors can eat in the Michelin-starred restaurant (sample dinner entree: Dover sole for 68 pounds, or about $85); sip cocktails at the Rivoli Bar, or take afternoon tea, for 70 pounds apiece, in the grand foyer. Pro tip: Wear something nice, get a good blowout and leave your fanny pack at home. However you look, you will feel frumpy compared to everyone else.
Image: Poundbury, created on 400 acres of Dorchester farmland owned by the Duchy of Cornwall, reflects Charles’s ideals regarding rural and urban planning. (Credit: Tom Jamieson for The New York Times)
Poundbury
Nestled at one end of Dorchester in Dorset, southwest England, Poundbury is the embodiment of the king’s singular Weltanschauung, a community built from scratch on 400 acres of farmland owned by the Duchy of Cornwall, the royal estate. With some 4,600 residents, it’s a royal experiment in contemporary living meant to “break the mold of conventional housing,” Charles once said.
Poundbury’s central hub is named Queen Mother Square, after Charles’s late grandmother; the main pub-cum-hotel is called the Duchess of Cornwall Inn, after his wife. But mostly the town is an exercise in soft-power royalty. The traditional building materials, the human-scale architecture, the master plan by the modernist-eschewing Luxembourgish architect Léon Krier, the harmonious aesthetic in the prettily painted front doors, the artisanal shops — all these reflect Charles’s values and philosophy.
Image: In Poundbury, townhouses, apartment buildings and single-family homes are interspersed with neat squares and small parks, all meant to convey a strong sense of community. (Credit: Tom Jamieson for The New York Times)
Take the train to Dorchester South, hop on a bus to Poundbury and disembark at the square, identifiable by the huge Queen Mother statue. Walk down the street in any direction to get the feel of the place.
Here classic townhouses intermingle with apartment buildings and free-standing one-family homes, interspersed with squares, tiny parks and cunningly constructed courtyards and alleys that convey an air of openness and connection. There is no litter. There are few pedestrians and very little noise.
Poundbury’s residents are zealous converts to this way of living. Outsiders have been less enthusiastic. The town has been compared to a Potemkin village, to Brigadoon, to a “feudal Disneyland” and to the town in the movie “The Truman Show.”
To judge for yourself, shop at the garden center for flowers and horticultural accouterments. Eat at one of the quaint nearby cafes, like the Potting Shed, with its many sorts of olives. Buy some artisanal bread at Finca in a grand building known as the Buttercross, catch up on town gossip at the Buttermarket convenience store, which doubles as the post office, or indulge in a spa treatment at Pure Beauty.
Farther afield
You won’t find Charles featured on the website of Gordonstoun, the remote Scottish boarding school his father, Prince Philip, forced him to attend and which he once called “Colditz in kilts,” after the prisoner-of-war camp run by the Nazis. But it happens to be surrounded by extraordinarily beautiful (if often wet and cold) countryside, and is near, among other things, the lovely 13th-century market town of Elgin.
As befits his former role as Prince of Wales, Charles has a residence in Wales: Llwynywermod, near Myddfai, a tiny village close to the Brecon Beacons National Park. It’s closed to the public, but visitors really yearning for royal experiences can choose from two vacation cottages — North Range and West Range — available for rent on the property.