Ask Mike

A selection of answers to questions posed by readers of AskHistorians. Refreshed most weeks, with the latest postings at the top.


A hotel detective depicted in the old Hal Roach film “Barnum & Ringling, Inc.,” dating to 1928.

Q: Whatever happened to the hotel detective?

I read a lot of old pulp fiction, and hotel detectives are in a lot of the stories. But I also travel a lot and have never noticed one.

A: Hotel detectives were indeed once ubiquitous, and they are now much less commonly encountered. They did a very distinct type of job. They were responsible for ensuring that the hotel that employed them was safe and secure – memoirs describe regular rounds of the building and endless testing of locks – but they were, for the most part, far more concerned with protecting their employer than their guests. Their job certainly did involve preventing crimes from taking place on the premises, and solving, where possible, those that did occur – but mostly this was done to ensure that the business they worked for was not being ripped off. Only occasionally, and at the very best hotels, would their work extend to more customer-focused activities, such as offering protection to distinguished guests.

A large proportion of these “house officers” were former policemen who took hotel posts after retiring from the force. Such men made ideal employees. The skills required of a hotel detective included a good understanding of human nature, a talent for conflict resolution, and a good working knowledge of the local criminal element – all things that were readily picked up in the course of a career in law enforcement. Ample experience of dealing with crooks and crime was important not least because, in taking up a house position, a former policeman forfeited a good deal of the powers he’d had as a cop. “The hotel detective is the world’s most fenced-in man,” the journalist Frederick Laurens observed in 1946. “He has no badge, can carry no weapon, has no authority to push people around, as have the regular police, and must either rely on tact or threats and ugly looks to get his way.”

Hotel bellhops could be enlisted to provide important information to the house detective.

Experience was the most important attribute of a hotel detective, since, for the most part, such men had three main functions to perform. The first was to protect the hotel’s reputation and prevent it from unwittingly breaking any laws, which, especially in earlier periods, often involved preventing an establishment from acquiring a reputation as the sort of place that allowed unmarried couples to have sex on the premises – very often an offence at the time under laws relating to “unlawful cohabitation”. A 1979 article in Texas Monthly notes that in earlier times a house officer would, as part of his routine, challenge male guests with the line “Is there a woman in your room?” Dev Collans, in his pulpy exposé I Was A House Detective (1954), describes enlisting bellboys to report on “couples who wouldn’t open their suitcases while the bellboy was still in the room; married couples didn’t hesitate to. A man in sleek clothes with a woman whose shoes were run down at the heels is another giveaway.”

The detective’s second major task was to screen new employees and know as much as possible about those who made it onto the staff, in order to prevent them from robbing both the hotel (of food, silverware, bedlinen and pretty much everything else) and the guests. In this respect, a New York Times article dating to 1902 recounted how “Detective Sergeant ‘Sam’ Davis, who has for twenty years been responsible to Police Headquarters for all the hotel detective work between Fourteenth and Fifty-ninth Streets” in Manhattan, fingered “one chef, three cooks, two porters, half a dozen chambermaids, and a woman in charge of the linen room” as thieves in a single hotel. As soon as the 13 malefactors had been fired, “the robberies stopped [and] the proprietor found his receipts growing larger.”

The Adolphus Hotel, an old-style establishment in an unfashionable area of Dallas, continued to employ a team of three hotel detectives into the late 1970s.

Thirdly, a house officer would be expected to keep criminal elements from causing trouble in his hotel. This involved recognising known crooks and prostitutes – another reason why retired police officers from the district were highly favoured as hotel detectives. A detective might, for example, agree on a signal with the desk clerk to warn of a known criminal attempting to check in; since the crook would be highly likely to leave without paying his bill, he would simply be told that the establishment was full.

Of course, an especially large proportion of their work involved spotting when guests were taking prostitutes into their rooms, and either stopping them or, more usually – since active intervention embarrassed and angered guests, and tended to cause scenes – logging the girls’ locations, and dealing later with any problems that occurred as a result of their visits to the hotel. These only rarely had anything to do with sex; as Charley Coyle, a house detective at the Adolphus Hotel in Dallas, noted in 1979, “These girls aren’t there just to have sex and get paid. It would be different if they were. Not so much trouble for us. They’re there to steal.” According to Gregory Curtis of Texas Monthly, keeping track of prostitutes was by far the most time-consuming aspect of the house detective’s job: “Every hotel detective I talked with, from those in the plainest hotels to those in the fanciest, said prostitution was still their main problem.” One reason for this was that sex workers favoured working in hotels, not least because of the advantage that a prostitute had over her clients when they were in a public space. Lou Speer of the Adolphus explained that

A clever working girl can get the money she’s been promised, then clean out her client’s wallet and possibly his luggage, and escape from the room with her virtue, at least the sexual part of it, intact.

No, they don’t usually carry guns or nothing. They don’t really have to. A lot of times they’ll get out of the rooms just by saying they’re going down the hall for some ice to put in their drinks…. Usually what they do is make sure the mark takes his clothes off first. Hell, he’s got his own ideas about what she’s there for, so all she has to do is just heat him up a little bit, and he’s not going to think twice about stripping down. Then, with him naked as a jaybird, she can grab his wallet and run out the door and there’s no way he’s going to come running after her.

A prostitute, her victim, and the stereotypical hotel detective – posed photo from Texas Monthly, 1979.

Interestingly – in Speer’s experience, at least – the hotel detective’s main role in cases such as this was not to catch the girl, but to prevent the guest from bringing a claim of theft against the staff. Few men would admit to bringing a prostitute into the establishment, much less to being stupid enough to allow themselves to be robbed by her, but many would attempt to lodge a complaint that their wallet had been stolen while they were in the hotel. In such circumstances, Speer and his men had recourse to their “hooker reports” – a log they kept of single guests who entered the premises with women on their arms.

“If a guest comes down in the morning and says his wallet was stolen, the first thing I do is look up my hooker reports to see if he had a girl up there. The guest is trying to say that the hotel is responsible for the loss. You ought to see the expression on some of their faces when I say, ‘But what about the girl you took up to your room at twelve-eleven last night?'”

As to why hotel detectives are now a dying breed: two key developments have combined to do away with them. One is changing morals; no modern hotel is likely to acquire a dubious reputation simply because it allows clearly unmarried couples to share a room, and it’s no longer against the law for guests of this sort to “unlawfully cohabit” – so house detectives are no longer required to police the guests. The second is the ubiquity of closed-circuit television. When it comes to deterring and detecting theft, it’s cheaper and probably more effective to outfit a hotel with multiple CCTV cameras than it is to pay a roster of former detectives to work often unsociable hours trying to solve such crimes after they have taken place. The problems of staff theft and of prostitutes stealing from guest rooms can both fairly readily be investigated now – and evidence handed over to the local police – without recourse to a house detective.

Sources

Dev Collans’s pulpy 1954 exposé of the house detective’s world.

“Hotel detectives and their experiences,” New York Times, 1 June 1902

Dev Collans, I Was a House Detective (1954)

Gregory Curtis, “Hotel detective,” Texas Monthly, February 1979

Norman Hayner, Hotel Life (1936)

Frederick V. Laurens, “Hotel detective – 1946” in Best: the Popular Digest

Frank O’Sullivan & Walter Wright, Practical Instruction in Police Work and Detective Science (1940)

Horace Herbert Smith, Crooks of the Waldorf: Being the Story of Joe Smith, Master Detective (1930)



Q: At any point between the end of WWI and the end of WWII was there ever a rise of supernatural beliefs in Japan?

A: In fact there was – though the “interwar” period has much less meaning in Japan than it does in the west, and the spike in interest and belief can be more accurately dated to c.1910-35. This rise and fall had more to do with the history of Japan’s engagement with western ideas than it did with the impact of the two World Wars.

I would point you towards four figures in particular who played a key part in this spike, and who influenced the way in which Japan thought about the subjects you are interested in: Asano Wasaburō (1874-1937), a teacher at the Naval War College who imported a version of western spiritualism into Japan; Deguchi Nao (1837-1918), a trance medium who claimed to have visions of the deity Ushitora-no-Konjin; her son-in-law Deguchi Onisaburō (1871-1948), a flamboyant Shintoist-spiritualist who blended existing folk belief with new concepts of divination, exorcism and millenarianism to create a new religion, Oomoto, which eventually took on an unusual anti-state flavour that resulted in its persecution in the 1920s; and Inoue Enryo (1858-1919).

To deal with Enryo first: he was an academic philosopher who attempted to fuse Buddhism with western science. He founded first the Enigma Research Society and then, in 1886, the more successful Fushigi Kenkyukai, or “Society for Research on the Mysterious” – at almost exactly the same time as its rough equivalent, the Society for Psychical Research, was founded in the UK (1882). He became popularly known as “Dr Ghost” and was the creator of what the Japanese call yokaigaku, literally “monsterology” but more usually translated as “mystery studies”.

Enryo was an active Buddhist (in fact, at one point in his life, a Buddhist priest) and as such found it easier than many of his fellow academics to accept the reality of some psychical phenomena. One of his central concepts was the distinction between the “false mystery” and the “true mystery”, which he held to be key to understanding superstitious belief. His society studied “Ghosts, foxes and tanukis [狐狸], strange dreams, reincarnations, coincidences [偶合], prophecies [予言], monsters [怪物], witchcraft [幻術], insanity, and so on.”

There’s an English paper on “Inoue Enryo’s Mystery Studies” in the journal International Inoue Enryo Research, 2 (2014), 119-55, and a full bibliography of western language materials about him can be seen here.

The other three were significantly more influential than Enryo, and they have been quite extensively studied. See Nancy Stalker, Prophet Motive: Deguchi Onisaburō, Oomoto and the Rise of New Religions in Imperial Japan; Emily Groszos Ooms, Women and Millenarian Protest in Meiji Japan: Deguchi Nao and Omotokyo; Birgit Staemmler, “The chinkon kishin: divine help in times of national crisis,” in Kubota et al (eds), Religion and National Identity in the Japanese Context; and Kenta Kasai, “Theosophy and related movements in Japan,” in Prohl & Nelson (eds), Handbook of Contemporary Japanese Religions.

In addition, Helen Hardacre, the great authority on Shintoism, has a useful introductory chapter on their movement in Sharon A. Minichiello (ed.), Japan’s Competing Modernities: Issues in Culture and Democracy, 1900-1930, entitled “Asano Wasaburō and Japanese Spiritualism in Early Twentieth-Century Japan.” This paper is a good English-language introduction that explains how western spiritualist concepts entered Japan and were adapted to relate to pre-existing religious, nationalist and especially shamanic concepts – bringing the new movement inevitably into conflict with state-sponsored Shintoism. Hardacre concedes that throughout this period such beliefs were marginal to the mainstream of Japanese culture. However,

nineteenth-century spiritualism from the West was a subject of great interest in early twentieth-century Japan. Situated on a border between mass culture and the more rarefied pursuits of Westernized, bourgeois salon culture, Japanese spiritualism represented, in part, the importation of Western cultural fads for seances, telekinesis, clairvoyance, and hypnosis. As such, it was [initially] romantic and escapist in a larger cultural context of empire, industrialization, and the expansion of state powers.

Trouble really started when Asano’s ideas became fused with the apocalyptic preaching of Deguchi Onisaburō, who caused considerable official alarm by attempting to spread his ideas in military and university circles. Hardacre, Ooms and Stalker all discuss the ways in which western-influenced spiritualism, in the form of the Omotokyo movement founded by Deguchi Nao but later run by Asano, was heavily suppressed in Japan in two campaigns dating to 1921 and 1935.

Finally, a quick summary of other English-language resources:

  • Michael Dylan Foster’s Pandemonium and Parade: Japanese Monsters and the Culture of Yokai looks at the evolution of the Japanese concept of ‘monster’ – specifically the variety known as yôkai – from 1700 to 2000. It covers the way in which ghost and monster stories evolved over this period, but it is not a chronological study.
  • Noriko T. Reider’s Japanese Demon Lore: Oni from Ancient Times to the Present includes a couple of chapters on the changing conceptions of demons in Tokyo in the late 19th and early 20th centuries, with a focus on the increasing commercialisation of demons in the media.
  • Suzuki Kentaro’s paper “Divination in contemporary Japan” is an analysis of a detailed survey of contemporary divination practices. Although focused on the present, it is very useful in presenting a breakdown of the types of belief in divination that exist in Japan, and as such it would be a useful jumping-off point for more detailed research in the period that interests you. Incidentally, Suzuki comments that this is “a subject upon which there is at present almost no academic research.” In Japanese Journal of Religious Studies 22 (1995), 249-66.
  • A popular account of palmistry in Japan just before the period you are interested in is S. Culin, “Palmistry in China and Japan,” Overland and Out West 23 (1894). This one is available online from the University of Michigan library.
  • Curran et al, in Multiple Translation Communities in Contemporary Japan, mention that the vampire story became popular in Japan in the period 1915-30, as a result of the influx of translated western works of all sorts that peaked in the early 1900s. The Japanese term for vampire, kyuketsuki, was coined in 1915. It seems this new interest was literary and academic, however, rather than producing supposed real-life cases of vampiric activity.





Sphinx of Hatshepsut (ca. 1479–1458 B.C.), from the Metropolitan Museum, New York. Contrary to popular belief, some images of the female pharaoh survive – this seven-tonne example was destroyed on the orders of Hatshepsut’s successor, Tuthmosis III, and the remains hurled into a quarry, from whence they were recovered and painstakingly reassembled by archaeologists.

Q: Did Ramses II try to erase Queen Hatshepsut from the record books because she was a successful ruler or because she was a woman (who depicted herself as male)?

A: At its simplest, the answer to your question is that the destruction wrought on Hatshepsut’s monuments and memory seems to have occurred precisely because she was a woman – for reasons that I will try to set out for you below.

We do need to be honest about the problem here: we have no histories, no chronicles from Hatshepsut’s time (c.1507-1457 B.C.). The evidence we have is – bar a few late king lists – archaeological, and while it can tell us something of what happened during her reign as pharaoh, and after her death, it tells us little – directly at least – about why things happened as they did: why so many examples of her cartouche were shaved down and recut in order to ascribe them to some other pharaoh, and why elsewhere “her entire figure and accompanying inscription were effaced and replaced with the image of some innocuous ritual object such as an offering table” [Dorman]. All answers are speculation; the distinction that we need to draw is that between informed and ill-informed guesswork.


Re-imaginings of Hatshepsut – some more realistic and respectful than others – are commonplace nowadays, testament to the hold the ancient Egyptian ruler still exercises over our imaginations.

But, with that said, the following is broadly agreed, by most Egyptologists, to be true: that Hatshepsut was a powerful member of the Egyptian royal family of the 18th dynasty, being the eldest daughter of one of Egypt’s greatest warrior-kings, the pharaoh Tuthmosis I (her very name means ‘Foremost of Noble Women’); that she was fortunate, in that her parents had no surviving male child, which eventually brought her close to a position of power – as was commonly the case in ancient Egypt, she was married to a close relative, her half-brother, also Tuthmosis, the son of a high-ranking woman in the royal harem, who eventually succeeded to the throne; and that she was also, very probably, ambitious, for when her husband died, leaving her to rule as regent for his infant heir by another woman from the royal harem – her step-son and nephew, the future Tuthmosis III – she was able to manoeuvre herself (in ways that have, unfortunately, left no clear traces in the archaeological record) into a position of absolute power. Hatshepsut the king’s-woman (the literal translation of the ancient Egyptian word for ‘queen’ – rank in this period, even for a woman of Hatshepsut’s lineage, was entirely the product of a husband’s or a father’s status) became Hatshepsut the pharaoh, ruling alone and portraying herself in masculine terms, most famously by overseeing the production of statues that showed her sporting a full beard.

It’s worth pausing briefly to look at the reign before we consider what happened to Hatshepsut’s monuments and to her reputation after she died. One key point to make is that, while she was not actually the first woman to take absolute power in Egypt, she was the first one to do so in a time of peace; the only previous female pharaoh, Sobekneferu of the 12th dynasty (r. c.1800 B.C., at the tail end of the Middle Kingdom period), had taken power at a time of national crisis, and apparently out of necessity, there being no other senior royal males available to rule. Another is that Hatshepsut was apparently not, as she is sometimes portrayed, a ruler with a distinctively “feminine” agenda, preferring peace to war. It is true that one of the more notable achievements of her reign was a trading voyage to the land of Punt (far to the south, sometimes identified with modern Somalia), but Egypt did wage war – successfully – in Hatshepsut’s time. This, together with her use of standard Egyptian iconography, and her entirely conventional determination to divert vast state resources to the construction of funerary monuments for herself (her magnificent mortuary temple, which survives, is one of the most iconic tourist attractions on the Nile), tends to suggest that, whatever the reason for the post-mortem destruction that partially obliterated her name, it was not that she forced through policies or ordered actions that were outrageous or reviled. She was no Akhenaten – the 18th dynasty pharaoh notorious for neglecting the old gods in favour of a quasi-monotheistic new cult focused on the sun god, Aten, and whose name was also wiped from Egyptian records after his death.

It is also very helpful to look at what we know of Hatshepsut’s relationship with her stepson, Tuthmosis III, since it was in his reign that much of the destruction wrought on her monuments took place. Two points emerge most clearly here. The first is that there is no direct evidence that Hatshepsut ever did anything to suggest that Tuthmosis was not the rightful heir to the throne. Dating the monuments that survive, it would appear that she ruled in her stepson’s stead, as regent, for at least two years before claiming power for herself; thereafter, Tuthmosis was not only allowed to live, but was actually given a solid training for taking power, being not only highly educated by the standards of the time, but also allowed to rise within the ranks of the Egyptian army until he became its commander-in-chief.


Hatshepsut was the daughter of Tuthmosis I (ca. 1506-1493 B.C.), a powerful New Kingdom pharaoh of the 18th dynasty. His remains, disturbed by tomb robbers years after his death, were concealed in a cache of royal mummies at Deir el-Bahri, above his daughter’s mortuary temple, and were recovered in 1881.

It seems inconceivable that the stepson would have been permitted a distinguished military career, and command over a powerful army, had Hatshepsut viewed him as a direct threat to her rule. Tuthmosis must have accepted – at least on some level – Hatshepsut’s right to rule, and we have no evidence that he made any attempt to seize power or prepare any sort of coup while she was still alive. Similarly, it is almost impossible to believe that a woman whom long custom and Egyptian political philosophy alike conceived as having no divine right to rule could have held onto power for 22 years without the active support of a large portion of the country’s elite. There are other examples in Egyptian history of inconvenient heirs meeting suspicious ends, and of elites rising up against unpopular rulers; it has to be significant that neither of these things occurred during Hatshepsut’s reign.

Several alternatives have been advanced to explain how power may have been wielded during this period. We know that Hatshepsut made an effort to stress the legitimacy she had acquired via her royal parentage, emphasising not only that she was the rightful heir to a powerful king, but also divine, as the product of her father’s union with a royal mother from the same family. In this sense, importantly, she was actually more “royal” than her half-brother and husband, who was the son of a much lower-status woman. We also know that Hatshepsut was depicted far more commonly than her stepson and nominal co-ruler on monuments constructed during her regency and then her reign; surveying her mortuary complex, Vanessa Davies counts 87 occurrences of Hatshepsut’s name and figure, compared to 37 of Tuthmosis III. All this suggests that her efforts to portray herself as a worthy ruler, and as a divine monarch, were successful, and perhaps this best explains why she did not feel threatened by her stepson, and why she not only allowed him his army career, but also permitted him to be represented, during her reign, as a figure of considerable power and potency. Davies concludes that he “was represented as a multi-faceted and powerful figure; thus one might infer that he actually behaved and functioned in this manner, or, at the very least, that Hatshepsut intended for him to be viewed in this light.”

So while the Egyptian state may have expected and preferred to be ruled over by a male pharaoh, it seems there was no absolute proscription on female rule; it was highly unusual, but neither blasphemous nor “impossible.” Joyce Tyldesley concludes that

Legally, there was no prohibition on a woman ruling Egypt. Although the ideal pharaoh was male – a handsome, athletic, brave, pious and wise male – it was recognised that occasionally a woman might need to act to preserve the dynastic line. When Sobeknofru ruled as king at the end of the troubled 12th Dynasty she was applauded as a national heroine. Mothers who deputised for their infant sons, and queens who substituted for husbands absent on the battlefield, were totally acceptable. What was never anticipated was that a regent would promote herself to a permanent position of power.

Yet this is not to say that Hatshepsut was not aware of the underlying weakness of her position. There are two points to make in this respect. First, let’s hear again from Tyldesley:

Morally Hatshepsut must have known that Tuthmosis was the rightful king. She had, after all, accepted him as such for the first two years of his reign. We must therefore deduce that something happened in year three to upset the status quo and to encourage her to take power. Unfortunately, Hatshepsut never apologises and never explains… Indeed, seen from her own point of view, her actions were entirely acceptable. She had not deposed her stepson, merely created an old fashioned co-regency, possibly in response to some national emergency. The co-regency, or joint reign, had been a feature of Middle Kingdom royal life, when an older king would associate himself with the more junior partner who would share the state rituals and learn his trade. As her intended successor, Tuthmosis had only to wait for his throne; no one could have foreseen that she would reign for over two decades.


Hatshepsut’s mortuary temple survives close to the Valley of the Kings, and is today one of the best-preserved New Kingdom monuments in Egypt.

It is interesting, in this context, to consider how Hatshepsut portrayed herself over the course of her reign. Early statuary from her regency period clearly depicts a woman wearing male regalia, breasts visible on a naked upper body. Later, after her coronation as pharaoh, depictions change; Hatshepsut is now portrayed as a man, with wider shoulders and no breasts. And she was buried as a man, as well, in a king’s sarcophagus. As Kara Cooney points out, this can be seen as a matter of convention, not deception; Hatshepsut never changed her – very clearly feminine – name, so it seems unlikely she was trying to pretend to be something she was not. Yet it is difficult to imagine that female rule was simply accepted without any question in the Egypt of her day; its consequences were too stark a departure from religiously-rooted norms. The problem here was one of political philosophy, not simply politics. But, as Cooney notes:

Given that the king on earth was nothing less than the human embodiment of the creator god’s potentiality, Hatshepsut must have been all too aware that her rule posed a serious existential problem: she could not populate a harem, spread her seed, and fill the royal nurseries with potential heirs; she could not claim to be the strong bull of Egypt.

Perhaps, then, it is better to see “male” images of Hatshepsut as nods to a conventional iconography that applied equally to any Egyptian ruler, of whatever sex, than it is to imagine them as admissions of serious political weakness.

So, with all this said, we can turn at last to answering the question posed: why were Hatshepsut’s images destroyed after her death, and why was her name removed from so many of the monuments she made?

It’s important, first, to recognise that the new regime was not a complete break with the past. Tuthmosis continued to employ a large proportion of the royal servants who had served his stepmother. And the desecration he ordered – which Cooney estimates accounted for “hundreds, if not thousands” of images and inscriptions – was not a campaign of attempted absolute obliteration, as the campaign against Akhenaten seems to have been. Not all of Hatshepsut’s statues were destroyed, and not all of her cartouches were hacked away; a significant number survived, not least those representing her as queen, including some that were quite prominently displayed on her tomb, which would surely have been a prime target for any Roman-style campaign of damnatio memoriae. The same is true of the desecration that seems to have occurred to the monuments of her prime supporter, her steward Senenmut – whose name was removed from only 9 of his surviving 25 statues. Cooney summarises by saying that the statues that were removed or desecrated were those in public places – the aim, therefore, may have been to “prevent people from seeing and interacting with her as king.” The archaeological record, moreover, strongly suggests that the campaign did not begin immediately on Tuthmosis’s accession – Hatshepsut was buried with all honour, for one thing, and works underway at the time of her death were completed, which can only imply that her heir ordered work on them to continue. Something happened later to change this – something that Cooney concludes was probably a shift in the mind of an ageing ruler considering his legacy.


Surviving statue of Hatshepsut in a devotional pose. The image comes from the early part of the reign and the pharaoh is still depicted as a woman.

Modern consensus is that the desecration of Hatshepsut’s monuments cannot have begun earlier than the 42nd year of Tuthmosis’s reign – 20 years after his aunt’s death. We also know that it continued into the reign of his son, Amenhotep II – into a period when few of those responsible would have had any memory of the female pharaoh. Finally, where Hatshepsut’s name was obliterated, it was rarely replaced with her stepson-nephew’s; more usually, the new name carved was that of her father, his grandfather, Tuthmosis I.

All of this suggests that the campaign was neither wildly aggressive, nor “personal”. It seems unlikely to have been carried out on the orders of a man who had spent the 20 years of Hatshepsut’s reign boiling with anger at being usurped.

Most modern archaeological interpretations of Hatshepsut’s reign, including those of Dorman, Tyldesley and Cooney, prefer instead to see the destruction of her name as a reassertion of what would have been seen as the natural political and theological order – “an impersonal attempt at retrospective political correctness” (Tyldesley) aimed at stressing the male prerogative to rule. This would explain why Tuthmosis seems to have ordered the adoption of a distinctive artistic style in sculptures and paintings showing him – one that was very much a break from the style that had prevailed in Hatshepsut’s reign, and which harked back, more importantly, to the styles adopted by his grandfather. Dorman argues that the key intention was to stress Tuthmosis III’s royal lineage (and hence legitimacy) while removing signs of female disruption to the approved order, most probably because “the recently invented phenomenon of a female king had created such conceptual and practical complications that the evidence of it was best erased.”

For Cooney, meanwhile,

the Egyptian system of political-religious power simply continued to work for the benefit of male dynasty. Hatshepsut’s kingship was a fantastic and unbelievable aberration. Ancient civilization didn’t suffer a woman to rule, no matter how much she conformed to religious and political systems; no matter how much she ascribed her rule to the will of the gods themselves; no matter how much she changed her womanly form into masculine ideals. Her rule was perceived as a complication by later rulers—praiseworthy yet blameworthy, conservatively pious and yet audaciously innovative—nuances that the two kings who ruled after her reconciled only through the destruction of her public monuments.

Sources

Kara Cooney, The Woman Who Would Be King: Hatshepsut’s Rise to Power in Ancient Egypt (Broadway Books, 2015); Vanessa Davies, “Hatshepsut’s use of Tuthmosis III in her program of legitimation,” Journal of the American Research Center in Egypt 41 (2004); Peter F. Dorman, “The proscription of Hatshepsut,” in Roehrig, Dreyfus & Keller (eds), Hatshepsut: From Queen to Pharaoh (Yale, 2005); Joyce Tyldesley, “Hatshepsut and Tuthmosis: a royal feud?” BBC History, 2011.





A Viking-era burial pit uncovered on the Dorset Ridgeway in 2009. The mass grave contains the remains of about 50 mostly young men. All had been decapitated. Carbon-dating and isotope analysis show that the skeletons date to c.975-1024, and it is possible they were victims of the St Brice’s Day Massacre of November 1002, an attempt by Æthelred the Unready to rid his kingdom of every Danish inhabitant in a single day.

Q: This article in The Atlantic mentions that the murder rate in the Medieval period was 12%. That seems absurdly high. Is there any truth to it?

It just seems absurd. Like, just estimating, half the world’s population or more was in India and China. China and south India both had long periods of political stability – does that mean a European had something like a 25% chance of dying due to violence? Are they counting people who die due to war-caused famines as being murdered?


A: Tracking this claim to its source is a good example of heading down an Alice in Wonderland-style rabbit hole.

Checking back to the article you cite, it’s clear that the claim is based on a new and exceptionally broad survey of violence among all mammal populations, published in Nature this week as Gómez et al., “The phylogenetic roots of human lethal violence.” Superficially this means the source is an impressive one, since Nature is certainly one of the most prestigious scientific journals in the world. However, it’s worth noting that the paper appears as a “Letter” rather than as a full-fledged article, and that Nature has a surprising history of publishing high-profile but what can politely be termed “controversial” contributions – such as one offering a scientific name for the Loch Ness Monster, based on underwater photographs of what later turned out to be almost certainly a tree stump.

In this case The Atlantic itself sounds some cautionary notes about the evidential basis of the violence survey. It is a meta-analysis – that is, it includes no original research, but collates the results of earlier surveys – and by its nature not a very comprehensive one: it attempts to compare the levels of violence among a huge variety of different mammal populations across the whole of the archaeological record. A total of 1,024 species are surveyed, humans being just one of them. So it’s reasonable to wonder exactly how much effort was put into making sure the human sample was comprehensive and representative, and whether the problems associated with the data had been completely thought through.

The Atlantic has a few pertinent comments about the team’s methodology:

  1. “First, he and his team compiled everything they could find on causes of death for various mammals, accumulating some 3,000 studies over two years.”
  2. As for the sources of information for the human sample: they did this “by poring through statistical yearbooks, archaeological sites, and more, to work out causes of death in 600 human populations between 50,000 BC and the present day.”

Polly Wiessner: unimpressed.

As for the way in which the data has been handled: “Polly Wiessner, an anthropologist from the University of Utah … is unimpressed with the study’s human half. ‘They have created a real soup of figures, throwing in individual conflicts with socially organized aggression, ritualized cannibalism, and more. The sources of data used for prehistoric violence are highly variable in reliability. When taken out of context, they are even more so.’”

“Richard Wrangham from Harvard University has similar concerns about the mammalian data, noting that Gómez [and his colleagues] have folded a lot of different kinds of killing—infanticide, adult deaths, and more—into a single analysis. And from an evolutionary standpoint, it matters less whether two related species kill their own kind at a similar rate than whether they do so in a similar way.”

To go further into this requires a close reading of the original article, which is available online here. It’s worth noting that the article itself gives very inadequate sources for most of the information it contains, which is not surprising when it is based on such a vast meta-analysis. To find out what the actual sources are, we have to go to the “Supplementary Information” document, which is separately available here.

Let’s summarise what we can discover about the sources used, and their comprehensiveness and reliability, by reading through these two documents.

First, the Letter itself.

This points out that the human violence figures available were divided into four categories by socio-political organisation: bands, tribes, chiefdoms and states. This is a categorisation widely used in the social sciences (e.g. anthropology), but one that I’d say a lot of historians find unhelpful. After all, there’s a vast historical literature devoted entirely to trying to define what a “state” actually is.


Above: a statistical yearbook yesterday.

There are also some acknowledgements of potential bias that give some clues as to the sorts of sources being used: “The level of violence inferred from skeletal remains could be under-estimated because many deadly injuries do not damage the bones…” And there’s also a reference to “statistical yearbooks” being a prime source of information. So it would appear that the data for the medieval period is going to be based on the archaeological record, rather than the written record. There’s not much clue yet as to how broad the sample will be, geographically or temporally, but if a lot of reliance is being placed on “statistical yearbooks”, that rings some pretty loud warning bells for me when it comes to making accurate assessments of the medieval period – since, of course, no such contemporary sources exist for it.

That’s about it for the Letter itself, and the bibliography offers no further clues as to the exact sources of information. To go deeper we have to look at the “Supplementary Information” document. This contains a couple of useful additional bits of information. First, by “medieval period” the authors mean the period 1300-500 BP – that is, 716-1516 A.D., counting “before present” back from the paper’s publication in 2016. Second, their data is based on a sample of 17,372 human remains. Again this rings warning bells, since such a sample size is not going to be enough to provide proper coverage of the whole human population across the whole globe for that whole period. It’s a real drop-in-the-ocean sort of figure – a sampling. Actually, we’re told that figures from 600 different populations were compiled for the survey as a whole (covering the period from 50,000 BP to now), which in one sense is quite impressive, but which also implies that very likely no one population is followed in a consistent and systematic way across the whole period.

Third, there is a better definition of “lethal violence” offered: for the purposes of the paper, this is defined as “the percentage of the people that died owing to interpersonal violence.” If we think about that for a moment in the context of earlier cautions, this sets off more alarm bells. If we’re looking at archaeological data, it’s going to be very difficult to distinguish, for example, between interpersonal violence and accidents and suicides in many of these records. Was a broken leg inflicted in a battle or a fall? If the researchers have been scrupulous, I would expect this factor to result in some understating of the figures for the medieval period, since they should only be counting wounds that were clearly inflicted by weapons. Again, though, we have to recognise that this whole debate goes on in the wider context of the difficulty of identifying some marks of violence from purely skeletal remains. Many arrow wounds, not to mention cut throats, deaths by poison and so on, are not going to show up readily in the archaeological record.

Finally, we can use the data supplied here to be much more explicit about the precise sources consulted. I’ve copied the portion of the survey data that refers to the medieval period here.


Some of the fractures identified in medieval remains from Serbian cemeteries by Djuric et al. But what caused them?

This shows the site of the remains surveyed, rough date and number of remains, and (final column) a source, which you can chase up in the bibliography if you’re so inclined. They are anthropological and archaeological, not historical. Just to give one example, one of the sources for Serbian violence is Djuric MP, Roberts CA, Rakocevic ZB, Djonic DD, Lesic AR (2006). “Fractures in late Medieval skeletal populations from Serbia.” American Journal of Physical Anthropology 130: 167-178.

From all this we can see that the survey is very limited – it covers only the UK, Ireland, Portugal and Spain, Scandinavia, Germany, Poland and Croatia. Even allowing for my earlier comments about the lack of comprehensiveness here, in my opinion this is a ridiculously limited pool of data from which to extrapolate a worldwide, pan-medieval figure. There are many reasons for supposing that even if the European figures are representative (which we can’t know, but look at some of the specifics – two Viking cemeteries, a burial pit associated with the Battle of Towton (the bloodiest battle, probably, in British history), a royal graveyard in Croatia, and some monks’ graveyards … it would be so easy to argue that these are very unrepresentative samples), these figures are just not very useful.


Easily identifiable battle damage from a skull excavated from a burial pit dating to the Battle of Towton (1461), during the Wars of the Roses. But the battle is hardly representative of medieval experience – perhaps 28,000 died in what was the bloodiest engagement ever fought on British soil.

All we can really conclude from this is that a survey of remains from 40 different, broadly medieval, European sites, containing very varied numbers of bodies from very varied periods – some of them periods of war – estimates deaths by violence at an average of 12%. Even in this limited context, I immediately have hundreds of questions about how typical these sites are, what sorts of violence were involved, how we know who inflicted what wounds in what circumstances, and even whether the victims survived them to die a natural death much later. None of these questions are answered by the Letter, and to focus on something as specific as deaths in the medieval period is to use the paper for purposes for which it was not really intended.




A London cityscape in the Victorian era, as re-imagined for “Assassin’s Creed.” The reality would have been significantly smellier and dirtier.

Q: I am a hot-blooded young British woman in the Victorian era, hitting the streets of Manchester for a night out with my fellow ladies, and I’ve got a shilling burning a hole in my purse. What kind of vice and wanton pleasures are available to me?

A: To begin with, I need to caution that Manchester – which looked like this in 1870 – has not been as widely written about as other cities, so I have drawn on some studies of other major cities as well; in addition, there would have been huge gulfs in experience depending on social class, and the “Victorian era” is in itself an extremely broad term, covering 60 years and some substantial shifts in lived experience and in the types of entertainment on offer. For all these reasons, consider this answer a rather broad one that attempts to cover young women’s experiences in the big city generally, and mostly in the latter half of the Victorian period.

Let’s start, though, by considering what elements may have been unique to Victorian Manchester, which in the course of this period passed Liverpool and Dublin to contend, with Birmingham and Glasgow, for consideration as the “second city of the empire.” It was, to put it bluntly, an industrial hell-hole, albeit one that offered exciting opportunities – the main centre of cotton manufacturing in the UK at a time when Britain was a gigantic net exporter of finished textile products. This had several important impacts that we need to be aware of, of which the most important was that the city became a magnet for workers from rural or small-town backgrounds, who could easily find work in the myriad factories that sprang up there, and lodgings in the vast swathes of slum housing that inevitably grew up as a result. All this meant that Manchester was home to a large number of young workers of both sexes who were to a considerable degree free of the sort of restraints that they would have experienced at home. Adolescent girls and young women might live without parents, and sometimes siblings; the social bonds and restraints created by the church were also significantly weakened, and the Religious Census of 1851 revealed church attendance among working-class people in major industrial centres to be scandalously low.


Female workers in a Victorian-era sweat-shop.

By the 1840s, then, Manchester was already the greatest and most terrible of all the products of the industrial revolution: a large-scale experiment in unfettered capitalism in a decade that witnessed a spring tide of economic liberalism. Government and business alike swore by free trade and laissez faire, with all the attendant profiteering and poor treatment of workers that their doctrines implied. It was common for factory hands to labour for 14 hours a day, six days a week, and the conditions in domestic service – which was the other main source of employment for young women – were only a little better. Chimneys choked the sky; Manchester’s population soared more than sevenfold. Thanks in part to staggering infant mortality, the life expectancy of those born in Manchester fell to a mere 28 years, half that of the inhabitants of the surrounding countryside.

One keen observer of all this was an already-radical Friedrich Engels, sent to Manchester in 1842 to help manage a family-owned thread business (and keep him out of the hands of the Prussian police). The sights that Engels saw in Manchester (and wrote about in his first book, The Condition of the Working Class in England) helped to turn him into a communist. “I had never seen so ill-built a city,” he observed. Disease, poverty, inequality of wealth, an absence of education and hope all combined to render life in the city all but insupportable for many. As for the factory owners, Engels wrote, “I have never seen a class so demoralised, so incurably debased by selfishness, so corroded within, so incapable of progress.” Once, Engels wrote, he went into the city with such a man “and spoke to him of the bad, unwholesome method of building, the frightful condition of the working people’s quarters.” The man heard him out quietly “and said at the corner where we parted: ‘And yet there is a great deal of money to be made here: good morning, sir.’”

For all these reasons, it is hardly surprising that Manchester was also a noted centre of radicalism and an early hotbed of the labour movement in this period. The infamous Peterloo Massacre, in which cavalry charged a vast crowd demonstrating for parliamentary reform, killing or injuring as many as 500 of them, took place in the city before Victoria’s day (1819), but it cast a very long shadow over the decades to come. Manchester became one of the biggest supporters of the Chartist movement, a (for the time) radical mid-century organisation calling for a large-scale expansion of the franchise.

So, to summarise: to be working class in Victorian Manchester was to do work that was long, hard and dangerous; to be an interchangeable and expendable part in an industrial machine built by factory owners who laboured to resist unionisation; and to work in an environment in which “health and safety” was largely non-existent. Terrible accidents involving unguarded, whirring machinery and human limbs were hideously common.

There was every reason to seek escape in the city’s entertainments.


Molly Hughes, education pioneer and author of the invaluable – and still highly readable – trilogy A London Family 1870-1900.

Let’s begin by considering the degree to which male and female entertainments were, or were not, one and the same in the Victorian era. To a great extent, it seems, women – or at least the right sort of women – might go almost anywhere, if appropriately accompanied; at one of the main dog pits in London, where crowds assembled to watch dogs take on a dozen wild rats at a time, Henry Mayhew (author of the utterly invaluable London Labour and the London Poor) was told: “I’ve had noble lords and titled ladies come here to see the sport – on the quiet.” But class was a vital determining factor when it came to entertainment. The experiences of Molly Hughes – who was a girl in London in the 1870s and an adolescent in the city in the 1880s, and who grew up in a family that seems to have been both relatively liberal and relatively fun-loving – give an interesting insight into just how constrained middle-class life could be for a girl. Molly had to press hard to get herself a decent education, and her experiences of life outside the family home – which she considered to be unusually broad, by the standards of her contemporaries – strike us today as almost comically limited. When Molly was a girl, her mother

was for encouraging any scrap of originality in anybody at any time, and allowed me to ‘run free’ physically and mentally. She had no idea of keeping her only girl tied to her apron-strings, and from childhood I used to go out alone in our London suburb of Canonbury, for a run with my hoop or to do a little private shopping.

As an adolescent, however, her experience of big-city fun was limited to just one or two vividly-recalled and heavily-chaperoned experiences. Here is by far the grandest and most important of them – and it is quite evident that the men in the family had serious concerns about the idea of taking her out at all, and that she herself was permitted no part in bringing it about. She was 16:

During the Christmas holidays of ’82, it occurred to the boys that I ought to have a little relaxation, in view of the rigorous time I was likely to have at my new school. How would I like to go to a theatre and see a real play? … I had never even been to a pantomime. Mother was consulted, and thought it wouldn’t do me any harm, especially as Dym [a brother and Cambridge undergraduate] said he would choose a small theatre and a funny farce – Betsy at the Criterion… The play itself has faded from my memory, but the accompaniments are still vivid. An anxious farewell from mother, as Dym and I stepped into a hansom, set us off.

Mother had put me into my nearest approach to an evening dress, which Dym approved, so that I was not too shy when I sat in the dress-circle, and walked into the grill-room after the play. This was full of cheery people and a pleasant hum of enjoyment and hurrying waiters. I felt it to be like something in the Arabian Nights. Tom and Charles [two older brothers] walked in and joined us. A low-toned chat with the waiter followed, while I looked with amazement at the wide array of knives and forks by our places.

’What can all these be for?’ I asked Charles.

’You’ll see. I’ll tell you which to use as we go on; and remember you needn’t finish everything up; it’s the thing to leave something on your plate.’

Such a meal as I had never dreamt of was then brought along in easy stages. Never had I been treated so obsequiously as by that waiter. When wine was served I began to wonder what mother would think. It gave that touch of diablerie to the whole evening that was the main charm. To this day I never pass the ‘Cri’ without recalling my one and only visit to it, with those adored brothers.

One reason for the paucity of Molly’s experience was that theatre, in this period, was not really considered suitable for the well-bred; the quality went to the opera, to stroll in the pleasure gardens (by now gas-lit and open in the evenings) or perhaps to musical recitals such as the popular programmes of choral singing offered by the children at London’s Foundling Hospital. Nevertheless, more or less elaborate theatricals were widely available and popular with the working classes. They ranged from the “penny gaff” – the cheapest sort of neighbourhood theatre, most popular in the first half of the Victorian period, and often found in the back room of a pub – up to large theatres that operated, by the end of the era, as music halls and were the most popular form of mass entertainment before the advent of the cinema.


A Victorian-era public house. Women were welcome – in some parts of the premises – and in some circumstances would visit unaccompanied.

Given Henry Mayhew’s broad experience of (and considerable sympathy for) many of the aspects of working-class life in the mid-Victorian period, it is interesting that his view of the penny gaff was negative; he thought it “the foulest, dingiest place of public entertainment I can conceive,” with an unspeakably vile odour – a place “where juvenile poverty meets juvenile crime.” The entertainment on offer consisted of six performances a day of gory retellings of violent crimes, laced with “filthy songs, clumsy dancing and filthy dancing,” which – reading between the lines – we can suppose were shocking more for their crude, sexually or violently charged lyrics and actions than for anything else. Also shocking: most of the audience at a penny gaff, Mayhew found, were women.

Street performances of various sorts were also popular and affordable. Puppetry, usually centred around Punch and Judy, was a perennial favourite in all its various forms (“the Fantoccini… the Chinese Shades…”), but there were also hundreds of performers scraping a living as clowns, fire-eaters, sword swallowers and so on. “When we perform in the streets, we generally go through this programme,” one Fantoccini man explained to Mayhew, as he set out a highly elaborate set of entertainments:

We begins with a female hornpipe dancer; then there is a set of quadrilles by some marionette figures, four females and no gentlemen… for four is as much as I can handle at once. After this we include a representation of Mr. Grimaldi the clown, and a comic dance and so forth, such as trying to catch a butterfly. Then comes the enchanted Turk. He comes on in the costume of a Turk, and he throws off his right and left arm, and then his legs, and they each change into different figures, the arms and legs into two boys and girls, a clergyman the head, and an old lady the body…. Then there’s the tightrope dancer, and next the Indian juggler… They are all carved figures, and all my own make.

Just down the road on a holiday evening, one might encounter stilt-walkers, strong-men or groups of “street posturers,” as contortionists and acrobats were sometimes known.

“There’s five in our gang now,” the leader of one such troupe of tumblers said, around 1850:

There’s three high for ‘pyramids’ and ‘the Arabs hanging down’ … there’s ‘the spread,’ that’s one on the shoulders and one hanging from each hand, and ‘the Hercules,’ that is, one on the ground while one stands on his knees, another on his shoulders, and one a-top of them two, on their shoulders… The dances are mostly comic dances, or, as we call them, comic hops. He throws his legs around and makes faces, and he dresses as a clown.

Such performers had to be acutely aware of exactly how and when they might be paid:

Our gang generally prefers performing in the West End, because there’s more ‘calls’ there. Gentlemen looking out of the window see us, and call us to stop and perform; but we don’t trust them, even, but make a collection when the performance is half over… And yet we like poor people better than the rich, for it’s the halfpence that tells [adds] up the best.


The Oxford Music Hall in 1875 – a relatively high-class example of such an establishment, and one in which the sexes mingled. In lower-class London music halls, prostitution and sexual encounters were commonplace.

By the 1850s, though, tastes in entertainment were already changing. Theatre, music hall and pantomime (the latter already becoming a Christmas-time entertainment by then, but one that was available for months at a time, rather than solely during the festive season) emerged as the dominant forms in the 1860s. Performances were long and varied, often lasting from 7.00 or 7.30 till 11 – so, rather as in 1930s cinemas, with their cartoons, newsreels, second and main features, you got an entire evening’s entertainment for somewhere around 2d or 4d. Most early Victorian programmes centred on melodrama, or sometimes circus-style performance, but by the end of the period music hall had triumphed as the most popular form of entertainment for lower-class audiences. A typical programme might involve a dozen different touring artists, who worked circuits up and down the country, from popular singers such as Marie Lloyd to comedians like Dan Leno. Broad, often rather “blue” humour was increasingly permitted and appreciated as the century progressed, and it was often possible to smoke and drink in the auditorium, which helped to make for an especially raucous atmosphere.

The most famous British music hall was the Alhambra, in London, which had a capacity of about 5,000 and had started out as a sort of permanent circus venue in the 1850s. It was well known for its elaborate scenery and mixed everything from ballet to what was advertised as “a dance forbidden in Paris” into its programme. Such venues did attract female audiences, but could often be centres of prostitution. Entry to the main part of the building cost a shilling just for standing room – a large sum for the time – and, visiting in 1869, James Greenwood found that the entertainment on offer was aimed more squarely at men than at women:

in the boxes and balconies sat brazen-faced women, blazoned in tawdry finery, curled and painted … there is no mistaking these women.

Behind the numerous bars, meanwhile,

superbly-attired barmaids vend strong liquor… besides these, there are small private apartments to which a gentleman desirous of sharing a bottle of wine with a recent acquaintance may retire.

This brings us on to pub-going, which was undoubtedly a central part of working-class nights out. Female drinkers were a normal sight in such places, making up between a quarter and a third of the clientele (but middle- and upper-class women most certainly were not). Young women tended not to be regular pub-goers, however; the typical Victorian-era female clientele was middle-aged or even elderly. There was a reason for this; Gutzke explains that

age, marital status, and income imposed insuperable barriers to acceptability. Young, unmarried women seldom ventured into the pub alone, lest they be mistaken for prostitutes. Middle-aged or older wives, the preponderant women in pubs, displayed two types of drinking behaviour: during the week the poverty-stricken – the largest group – drank with each other, while on the weekend wives from the lower-middle classes downwards might accompany their husbands.

While most pubs had bars that were male-only, therefore, they also had spaces where women were allowed. Charles Booth – the social investigator behind the great Life and Labour of the People in London survey, and a commentator with a keen eye for moral failings – spoke to one publican in the late 1890s who ran five public houses, in one of which there were “seven bars, two of which are reserved for men only” – and also noted that while “children do sip the beer they are sent to fetch… this is not the origin of their liking for beer. This dates back to early infancy while they were yet in their mother’s arms. Mothers drink stout in order to increase the supply of milk in the breast but often help the baby straight from the pintpot from which they help themselves.” Another publican, “Mr Clews of Clerkenwell,” observed that this was “a great area for women’s drinking… Women take rum in cold weather and gin in hot. ‘Dog’s nose’ they also drink, which is a compound of beer and gin.”


Marie Lloyd, the great female star of the English music halls, had an act that combined renditions of popular songs with “blue” humour.

Of course, not all entertainment was so raucous. Young mothers – and many young women were also young mothers in the Victorian period – might not often get the chance to visit any sort of theatre or penny gaff, and their entertainments were often of a gentler kind. One girl born in 1855 looked back fondly on the gatherings that her mother and her mother’s female friends had taken their children to in summer in Victoria Park, by the 1860s the only significant area of greenery in London’s crowded East End. A special attraction of the park was that it was possible to hire prams there – “very few people had prams of their own then, but it was possible to hire them at 1d an hour… We would picnic on bread and treacle under the trees and return home in the evening a troop of tired but happy children.”

For the rather better off, there were zoos in Regent’s Park and Surrey Gardens, the British Museum (open three days a week from 10 till 4 – and till 7pm in summer), the titillating medical exhibits of the museum of the Royal College of Surgeons, and freak shows and one-off performances of all sorts, at which one main attraction was the chance of witnessing death and disaster. The pioneering parachutist Robert Cocking died attempting a descent from 5,000 feet at Vauxhall Gardens in 1837; in 1871 a huge crowd gathered on London Bridge to watch the celebrated swimmer ‘Natator, the Man-Frog’ dive from the balustrade into the river, only to be disappointed when the performer appeared but was promptly arrested for attempted suicide.

So a wide variety of entertainment was on offer in the Victorian city, much of it relatively innocent, some of it considered, by the moral authorities of the day, liable to corrupt, and a little of it actually dangerous. But we cannot close without considering the moral dimension of popular entertainment, especially as it applied to single women. Judith Walkowitz’s influential City of Dreadful Delight, for example, maps the social panics prompted by the “narratives of sexual danger” that warned young, independent Victorian-era women that enjoyment of the city’s pleasures might easily usher them down the path that led to prostitution, pre-marital sex, venereal disease, or alcoholism. All this raises important questions, not least about agency; the women Walkowitz writes about were all too often “figures in an imaginary urban landscape of male spectators” – and male predators. These fears, Walkowitz shows, typically coalesced into strident anti-vice campaigns, condemnation of most expressions of sexuality, melodramatic newspaper coverage, and fresh mutations in the Foucauldian power relationships of the period. They are also powerful reminders that the history of popular entertainment demonstrates, as well as anything can, the inequality of opportunity, treatment and agency that coloured female experience in the Victorian period, as in others.

Ruth Alexander, who writes of New York in a slightly later period, gives numerous excellent examples of how harshly the “rebellious working girls” who fought against constraints of this sort could be treated. Even slightly sexually awakened, or emancipated, behaviour on the part of young women was regarded as a serious threat – with often catastrophic consequences for the girls. For example, 16-year-old Nellie Roberts was sent to the New York State Reformatory for Women in 1917 as a “menace to the community” for the crime of standing on the roadside and “hailing men on motorcycles and asking them for rides.” This was seen as tantamount to prostitution. Alexander’s detailed and more empathetic investigation of Nellie’s circumstances uncovered a desire to escape fuelled by a desperately unhappy family background – a dead mother; a drunk father who raped his eldest daughter and “got fresh” with Nellie, too, on several occasions; poverty; boyfriends who might sometimes be “good to her” but were equally capable of sexual assault. When we read contemporary accounts of female “social delinquency,” we would do well to remember that many such cases were underpinned by circumstances as bad as those that Nellie Roberts endured, or worse.

Sources

Ruth Alexander, The ‘Girl Problem’: Female Sexual Delinquency in New York, 1900-1930 (1995); Carl Chinn, They Worked All Their Lives: Women of the Urban Poor in England, 1880-1939 (1988); Mike Dash, “Friedrich Engels’ Irish Muse,” mikedashhistory.com, 2013; David W. Gutzke, “Gender, Class, and Public Drinking in Britain During the First World War,” Social History (1994); M.V. [Molly] Hughes, A London Family, 1870-1900 (1946); Henry Mayhew, London Labour and the London Poor (1851); Liza Picard, Victorian London: The Life of a City, 1840-1870 (2005); Judith Walkowitz, City of Dreadful Delight: Narratives of Sexual Danger in Late-Victorian London (1992).

Q:

Judith Walkowitz’s influential City of Dreadful Delight, for example, maps the social panics prompted by the “narratives of sexual danger” that warned young, independent Victorian-era women that enjoyment of the city’s pleasures might easily usher them down the path that led to prostitution, pre-marital sex, venereal disease, or alcoholism.

I recently read a paper by Ruth H. Bloch, “Changing Conceptions of Sexuality and Romance in Eighteenth-Century America,” in which she sets out to examine normative rather than transgressive sex and sexuality and how that changes across the century.

To quote her:

Many scholars have focused on the prohibitions or abuse; few have examined the aspirations.

The titles of two of the books you cited hint to me that this is probably an almost universal issue. Sexual delinquency and sexual danger, especially, of course, with regard to women. Are there many sources out there that help to coax out the aspirations of female sexuality in Victorian England? Aside from what might be shouted down from the pulpit, of course.

A: I think that’s a very fair question and Bloch seems to me to be clearly right – though perhaps Walkowitz and Alexander might contend that the cases they are writing about were actually the products of increasing aspirations.

Part of the problem, certainly, is the nature of the sources available. “Female delinquency” resulted in court cases, concerned reports by learned bodies and official enquiries, and of course copious newspaper coverage as well. Very few women wrote about their sexual feelings in this period. Other forms of aspiration (such as Molly Hughes’s – she eventually became one of the most prominent figures in education in London in the early 1900s) leave little trace, and might be cut short as well – in Molly’s case, she gave everything up to be a wife when she married (entirely willingly, it should be said, though of course her willingness was in itself a product of her upbringing), and went back to work only after the early death of her husband.

We’re reliant on diaries, letters and memoirs, like Molly’s, for much of our information about women’s aspirations when these did not cause them to run foul of the moral and the actual police of the period – and these are conspicuously devoid of information about explicitly sexual aspirations. On top of that, our sources are very heavily biased towards upper- and middle-class women, which in turn makes them unrepresentative, because such women had access to wider (though still very limited) opportunities. One of the very few examples of a working-class woman of the Victorian era making a huge success of her own life, and agitating to improve the lives of others, is that of Victoria Woodhull, who in the 1870s became the first woman to run for US President – and it’s very notable that Woodhull had to take at least the first steps along that path by exploiting her great beauty, rather than her impressive brains. The most important reason why Woodhull aroused the widespread condemnation and revulsion that she did was that she was a “sex radical” – meaning a supporter of women’s right to enjoy the same sexual pleasure and sexual experience as a contemporary man. This was a profoundly shocking position to take in the 1870s. I think you might find Joanne Ellen Passet’s Sex Radicals and the Quest for Women’s Equality (2003) especially interesting as a result.

One interesting sidelight on all this is the way in which popular religion and popular protest formed a legitimate outlet for female aspiration. You would probably be interested in studies of the roles that women played in the new spiritualist movement, and it is very noticeable, also, how prominent women from less well-off social backgrounds, such as the merely middle-class Annie Besant, were in theosophy. Then there was nursing – where the all-too-recently eminent Mary Seacole made her name. A few working-class women were also prominent in the women’s suffrage movement, even though the vast majority of suffragists did not think it was feasible to agitate for the vote for women who failed to meet the usual property qualifications (which excluded pretty much the entire working class). But the prominent ones were so rare that they were practically exhibits, used by their better-off colleagues to demonstrate that such aspirations actually existed. The suffragettes of the WSPU, for instance, made a great deal of Annie Kenney, a former mill worker who was the only working-class person to feature among their most senior hierarchy.

Finally, one of my favourites of all the things I’ve written is this essay on the experiences of Philippa Fawcett (the daughter of the suffragist leader Millicent Fawcett and of Henry Fawcett, a government minister – so hardly poorly off) in demonstrating conclusively that women were not in fact “fragile, dependent, prone to nerves and—not least—possessed of a mind that was several degrees inferior to a man’s” – which she did when, to international consternation, she became the only woman ever to top the results of Cambridge’s mathematics tripos. It’s possibly the only thing I’ve ever written that is capable of raising goose-bumps.

So you may also find some interesting reading in the following:

Lynn McDonald, Mary Seacole: The Making of the Myth (2014)

Amanda Frisken, Victoria Woodhull’s Sexual Revolution: Political Theater and the Popular Press in Nineteenth-Century America (2004)

Ann Braude, Radical Spirits: Spiritualism and Women’s Rights in Nineteenth-Century America (1989)

Annie Kenney, Memories of a Militant (1924)

 


Q: Did British criminals in the 1700s and 1800s really worship a deity called the Tawny Prince? If so, what were the origins of this deity?

Criminals worshiping the Tawny Prince is mentioned briefly in this book on Australian history I’m reading, Commonwealth of Thieves by Thomas Keneally… Googling the Tawny Prince gets me nothing at all.

A: Thomas Keneally’s Commonwealth of Thieves is a popular history of the first years of the British colony in Australia, published in 2006.

Keneally (an Australian who is, of course, best known as a novelist, and as the author of Schindler’s List) uses the term “Tawny Prince” – always with capitals – five times in the course of his book. The most significant mentions are:

“… In Spitalfields to the east, in squalor unimaginable, lived all classes of criminals, speaking a special criminal argot and bonded together by devotion and oath to the criminal deity, the Tawny Prince. The Tawny Prince was honoured by theft, chicanery and a brave death on the gallows…” [p.20]

[Of convicts on their arrival in Australia:] “Not that they were reborn entirely, since they brought their habits of mind and the Tawny Prince, the deity of the London canting crews, with them” [p.81]

[Of a wild celebration in the rain:] “The great Sydney bacchanalia went on despite the thunderstorm. Fists were raised to God’s lightning; in the name of the Tawny Prince and in defiance of British justice, the downpour was cursed and challenged…” [p.89]

All this is referenced, so Keneally did not invent the Tawny Prince, but a little further research does suggest he took a fairly basic reference, elaborated it, embroidered it, and used it to produce a much more solid and distinct figure than the evidence actually warrants. All in the name of good colour, I am sure.


The crowded streets of Georgian London were a haven for thousands of criminals of all varieties. But did the rookeries and canting crews actually spawn a perverted religion?

Let’s start with Keneally’s own notes. He cites, as his references for a collection of material about the “Tawny Prince” and cant (thieves’ slang), Watkin Tench’s Sydney’s First Four Years and Captain Grose’s Dictionary of the Vulgar Tongue of 1811.

Tench was an officer in the marines who was part of the First Fleet. The book Keneally cites was a 1961 reprint of one originally titled A Narrative of the Expedition to Botany Bay, first published by Debrett in London in 1789. This contains no reference to the Tawny Prince, so in fact Keneally’s only source is Grose (1731-91), an antiquary, whose work (correctly titled A Classical Dictionary of the Vulgar Tongue) was first published in 1785.

This work does contain a passing reference to the Tawny Prince, not in the form of a separate entry, but rather inserted as a phrase that forms part of a much longer oath supposedly taken by “Gypsies” (a term which Grose uses not to mean “Romani,” but as a synonym for vagrants of all sorts) when “a fresh recruit is admitted into the fraternity.” The relevant extract is the first of several clauses, and is:

“I, Crank Cuffin, do swear to be a true brother, and that I will in all things obey the commands of the great tawney prince, and keep his counsel and not divulge the secrets of my brethren.”

Now, The Routledge Dictionary of Historical Slang confirms that a “crank-cuffin” is an 18th century term for a vagrant feigning sickness, which at least implies that the claimed oath is in the language of the period, but Grose incorporates no commentary at all, so we are left to our own devices in attempting to make sense of the terms and of the passage as a whole.

We can start with the biography of Grose that appears in the Dictionary of National Biography, which notes:

Captain Francis Grose, antiquary and author of the Classical Dictionary of the Vulgar Tongue

From 1783 he published in a torrent to make a living. The Supplement to the Antiquities was resumed, with a greater proportion of views from other artists, particularly S. H. Grimm, and was completed with 309 plates in 1787. This and the main series were reissued in a cheaper edition in 1783–7. A Classical Dictionary of the Vulgar Tongue (1785) and A Provincial Glossary, with a Collection of Local Proverbs, and Popular Superstitions (1787) were at the time the largest assemblage of ‘non-standard’ words or meanings, about 9000, omitted from Samuel Johnson’s Dictionary; they drew on his fieldwork as far back as the 1750s. The first parts of two other pioneering works appeared in 1786: Military Antiquities and A Treatise on Ancient Armour. Both relied mainly on his specialist library and the armouries at the Tower of London, but also included observations on military music from the 1740s. Of more popular appeal was Rules for Drawing Caricaturas: with an Essay on Comic Painting (1788).

How much further does this get us? The reference to “fieldwork” is intriguing, but it’s balanced by the discussion of a “torrent” of works churned out to make a living, and in fact a careful search shows that Grose’s source was not some vagrant informer, but rather the grammarian James Buchanan’s New Universal Dictionary of 1776, which has an entry for “Gypsies” that contains a fuller version of the same oath that Grose gives, referenced more precisely to what appears to be a description of the Romani people, in which is embedded a significantly more detailed account of gypsy oath-making. Buchanan, sadly, gives no source for his information or for the reference to the “tawney prince”, but his own gloss on the oath as a whole is as follows:

“The Canters have, it seems a Tradition, that from the three first Articles of this Oath, the first Founders of a certain boastful, worshipful Fraternity, who pretend to derive their Origin from the earliest Times, borrowed of them, both the Hint and Form of their Establishment. And that their pretended first derivation from Adam, is a forgery…”

That is as far back as I have been able to trace the term,* but I’m afraid that a more sober consideration of Grose and especially of Buchanan and his gloss indicates that the “great tawny prince” was not some sort of special deity of thieves, in the way that Keneally uses the term, but simply a synonym for the prince of darkness – that is, the devil. (Note, in support of this argument, the lack of capitals in the term “tawney prince” as given by both Grose and Buchanan, in contradistinction to Keneally’s usage.) I think the implication of the word “tawny” is essentially “animal-like” – possessed of a hide. Although it’s much less common now, 17th and 18th century portrayals of the devil very frequently saw him described as a shape-shifter who could and did assume animal forms, appearing as an ox or a bull, among other disguises. These carried with them implications of physical vigour, of lack of restraint, and of being placed beyond the order of human society.

We can also check the impressive online collection of trial reports known as The Proceedings of the Old Bailey. This is “a fully searchable edition of the largest body of texts detailing the lives of non-elite people ever published, containing 197,745 criminal trials held at London’s central criminal court” between 1674 and 1913. Although the reports are not literally trial transcripts of everything that was said in every case, but rather court reporters’ summaries of salient points, the most celebrated and interesting trials did receive extensive coverage that included verbatim reporting of some segments. Nowhere in this gigantic criminal word-mine do the terms “Tawny Prince” or “Tawney Prince” appear – so I think we can be certain that the figure imagined by Keneally was not a commonly evoked deity, or even a figure commonly sworn to, in the whole of this period.

As such, it seems likely that the oath is presented not as one sworn to a real “god” of any sort, but rather as an inversion of the sort of decent oath an honest Christian might swear by his or her God, one in which the insertion of a mention of the devil actually serves to underline the dastardly and perverted nature of the oath for the dictionary’s intended audience – not thieves, but gentlefolk who, it is intended, will be horrified by it. The idea that thieves and criminals of every stripe were organised into an ordered fraternity that placed itself in distinct opposition to decent society was not only an outrage in itself, but also helped to justify their persecution – which, in this period, before the repeal of the ‘Bloody Code’, was notoriously severe.

* Further research shows the “gypsy” oath does date to a slightly earlier period. A colleague informs me: “The whole inverted-oath and attached gloss goes back at least to Richard Head’s The Canting Academy of 1673. He has it as “great tawny Prince.” Head is most famous as a satirist and fiction writer, so it’s a toss-up that he pastiched the oath together himself.”

 


 

Q: How bad would it have smelled in a medieval city?

One of the almost perfectly preserved medieval alleyways in Albarracín, Spain – a village noted for its surviving 10th-15th century architecture.

A: Smell is a problem for historians. The vocabulary that we have to describe smells is much less nuanced than it is for other senses (Gordon, 120); Isidore of Seville divided them very crudely into either “sweet” or “stinking”. Moreover, unlike physical objects, smell leaves no trace of itself to be studied, so we are entirely dependent on written descriptions. And we’re all familiar with the ways in which we quickly become inured to bad smells – smelly rooms cease to stink so badly when we spend some time in them – so it’s very probable that things that would smell very strongly to us, were we to be suddenly exposed to them now, passed largely unnoticed in their time. A good example is garum, the Roman condiment used as freely by them as ketchup is by us. Garum’s main ingredient is putrid fish guts, but the smell, highly offensive to us, was not considered foul by them. Adds Piers Mitchell:

“Some of the nuisances and smells that annoy many modern urban populations were an accepted part of everyday life in ancient cities. People simply had a higher tolerance to the unsanitary conditions of their city, and therefore the rigorous standards of proper waste disposal would seem irrelevant and impossible to reach for those in the past.” (Mitchell, 70).

We can certainly say that medieval people did notice smells and that they described them in terms that ascribed moral dimensions to them. They believed there was such a thing as an “odour of sanctity”, generally described as sweet, like honey; paradise was thought to smell “sweet, like a multitude of flowers”; and the martyrdom of Thomas Becket was “likened to the breaking of a perfume box, suddenly filling Christ Church, Canterbury, with the fragrance of ointment” (Woolgar, 118). But good smells were also temptations (monks were urged to avoid the smells of spices, which would tempt them to demand better food) and when they entered the body, they could be channels for disease (Cockayne, 17). Conversely, bad smells were associated with hypocritical, evil or irreligious behaviour, and those who sinned were assumed to have acquired a stench: Shakespeare’s Gloucester “smells a fault” and later in Lear is thrown out to “smell his way to Dover,” where an enemy army is waiting.

The public latrines on the Thames at London Bridge, from a medieval manuscript (British Library, Yates Thompson MS 47, f. 94v).

In other words, “defamation had a strong moral odour” (Woolgar, 123); a case brought before the courts at Wisbech in c.1468 involved the insulting of John Sweyn by William Freng, who had called him a “stynkyng horysson”. Allen has some revealing things to say about the medieval attitude to farting: “To smell the intestinal by-product of others brings one into extimate relation with them; more profound than psychoanalysis, it entails a knowledge more intimate than sight or hearing, more detached than touching or licking…. The stink of a fart belongs to a different mode of being.” (Allen, 52-3)

Smell in this period was also closely associated with the concept of miasma – the idea that disease was borne on waves of foul air that were betrayed by their smell. This means we do have evidence that medieval people noticed changes in the levels of smells that they might not otherwise have commented on; there was a case in London in 1421, involving the surreptitious dumping of refuse by one William atte Wode, which tells us a lot about what were then considered the main sources of bad smells in the city – and also that the people of London differentiated between stench and a “wholesome aire” which was

“faire and cleare without vapours and mists… lightsome and open, not dark, troublous and close … not infected with carrian lying long above ground… [nor] stinking and corrupted with ill vapours, as being neere to draughts, sinckes, dunghills, gutters, chanels, kitchings, church-yardes and standing waters.” (Rawcliffe, 124)

Tanneries – which cured leather with the help of large pits filled with urine, night soil and ash – were a major contributor to the stench of medieval cities throughout Europe. The remains of this tannery were discovered beneath modern Nottingham.

With all this said, we can also highlight some of the smells that would have struck us most strongly, had we visited a medieval or early modern city (I include the latter because we have more evidence for them, and changes in the way cities were run were not extensive between the medieval and early modern periods). In terms of overall sensation, these would include the sulphurous smell of burning coal (Brimblecombe, 9); Green, rather imaginatively but probably fairly, goes further and invokes a “richly layered and intricately woven tapestry of putrid, aching stenches: rotting offal, human excrement, stagnant water,… foul fish, the burning of tallow candles, and an icing of animal dung on the streets.”

In terms of locales, we would notice the smells generated by small-scale industry, which was mixed up indiscriminately with living spaces (no industrial parks in those days) – perhaps most especially those created by the slaughterman and butcher (whose work produced a rich stench of blood and excrement), the fuller, the skinner and the tanner. Not a lot of care went into disposing of the by-products of these industries. The dredging of one Cambridge well yielded 79 cat carcasses, dumped there by a local skinner; his preparation of their pelts would have involved treating them with a high-smelling solution of quicklime (Rawcliffe, 206). Tanning – which required the copious use of bodily wastes and the immersion of skins “for long periods in timber lined pits of increasingly noisome liquids… a malodorous combination of oak bark, alum, ashes, lime, saltpetre, faeces and urine” (Rawcliffe, 207) – was widely considered the worst-smelling work of the period. Glue-, soap- and candle-making all involved rendering animal fats, and their smells would also have been prominent; soap-makers boiled lime, ash and fat together to make their products (Cockayne, 199). Then there were the smells of cooking and of animals (Ackroyd notes that in the fifteenth century the dog house at London’s Moorgate sent forth “great noyious and infectyve aiers”). The area along the Thames would have added the smell of pitch, used to caulk timbers in the shipbuilding trade (Cockayne, 9).

Fleet Ditch – London’s most infamous open sewer – was actually the highly polluted River Fleet. Seen here in the Victorian period, the river still flows under London today – less polluted now, and completely built over.

We would certainly notice the open sewers, such as London’s infamous Fleet Ditch – actually a small river into which nightsoil and industrial byproducts were dumped – which ran directly down the centre of major roads towards the Thames, even though contemporary accounts rarely refer to them unless something happened to make the smells worse than usual. This happened to the Fleet in the 13th century, when the river became so choked with tannery filth that it was no longer navigable above Holborn Bridge (Chalfant, 81). In 1749, a body dragged from the Ditch was initially supposed to be that of a murder victim; it turned out to belong to a man who made his living dragging the sewers for the carcasses of dogs that he could sell to skinners, and who had fallen in by accident (Cockayne, 199).

Different towns would have had their own characteristic smells, based in large part on the nature of local industry. In my own book Tulipomania, I discussed the smells of the Dutch town of Haarlem (a great centre of brewing and linen-dyeing) in the early 17th century:

The city stank of buttermilk and malt, the aromas of its two principal industries: bleaching and beer. Haarlem breweries produced a fifth of all the beer made in Holland, and the town’s celebrated linen bleacheries, just outside the walls, used hundreds of gallons of buttermilk a day to dye cloth shipped to the city from all over Europe a dazzling white. The milk filled a series of huge bleaching pits along the west walls, and each evening it was drained off into Haarlem’s moat, and thence into the River Spaarne, dyeing the waters white.

Last, but not least, of course, there were the smells of the human population itself, with its unwashed, decaying or diseased bodies. The lack of dental treatment available in the period meant that most people would have suffered badly from bad breath. At least until the advent of sugar in the diet in the early modern period, decay was not as common as it would become – the grain-based diet of the period tended to wear down teeth to flat but regular planes, without leaving crevices in which food could fester. But archaeology reveals extensive evidence of plaque build-ups that would have been very noticeable to anyone in close proximity. Dante likens the stench of the hellmouth to the stink of human breath, and Jones notes that in medieval Wales, “a peasant woman could divorce her husband on the grounds of his halitosis.” (Jones and Ereira, 29)

Sources

Peter Ackroyd, London: The Biography; Valerie Allen, On Farting: Language and Laughter in the Middle Ages; Peter Brimblecombe, The Big Smoke: A History of Air Pollution in London; Fran C. Chalfant, Ben Jonson’s London: A Jacobean Placename Dictionary; Emily Cockayne, Hubbub: Filth, Noise and Stench in England; Mike Dash, Tulipomania; Sarah Gordon, Culinary Comedy in French Medieval Literature; Matthew Green, London: A Travel Guide Through Time; Terry Jones and Alan Ereira, Terry Jones’ Medieval Lives; P.M. Mitchell, Sanitation, Latrines and Intestinal Parasites in Past Populations; Carol Rawcliffe, Urban Bodies: Communal Health in Late Medieval English Towns and Cities; C.M. Woolgar, The Senses in Late Medieval England

Q: Your points about smelly industries make me wonder about something: I vaguely recall reading that specifying where tanneries and such could be located was a very early form of city building code. IOW, the people of the time recognized that these tasks were unpleasant and wanted them to be located at a slight remove. Is there any truth to this recollection?

A: There was some regulation – there is evidence from early 14th century Norwich that polluting industries were forced to locate downriver of the main population, and Stanford’s Ordinances of Bristol notes that Bristol soap-boilers so polluted the Avon that they were ordered to halt the practice of throwing waste ash into the waters for fear that it would lead to “the utter decaie and destruction of the same river.” But this was rare, and the product of severe and repeated problems. The reality is that small-scale change was more often achieved by bringing cases to court than by pre-emptive law-making.

So Kermode, in Medieval Merchants: York, Beverley and Hull in the Later Middle Ages (p.19), and Schofield and Vince, in their Medieval Towns (p.144), both point out that clear-cut zoning of occupations was not a feature of medieval towns; certain industries did cluster together, as is commonly demonstrated by surviving street names – a study of Ghent shows distinct quarters for carpenters, drapers, mercers, fishmongers and leatherworkers – but this was more a matter of convenience than law-making. The same effects help explain clusters of related industries: tanners, for example, used the bark discarded by carpenters.

London Bridge, with Southwark, on the south bank of the Thames, in the foreground, from an engraving by Claes Jansz Visscher, 1616.

That said, the London tanning industry was based largely in Bermondsey, on the far side of the river from most of the city, in part because it was also, notoriously, in a part of town that was much more lightly regulated and policed than the City of London itself. That is why Southwark was also the centre of London’s disreputable (and closely connected) theatre and prostitution industries. Cockayne notes that “location was the cause of some nuisances” – meaning indictments brought because of inconvenience – and that while “all citizens drank beer, used candles, and wore shoes, few wanted to live near a brewer, a chandler, or a tanner.” (Cockayne, 21). In Manchester, “leet juries” had the power to hear cases involving “noysome” – meaning smelly – inconveniences.

Incidentally, the word noisome itself is a contraction of “annoysome”.

Q: Was it likely that garum smelt notably worse than modern Thai/Vietnamese fish sauce, or even Worcestershire Sauce? Both are made from fermented fish.

A: The best garum was made from the guts of rotten mackerel. The bones were taken out and the flesh and fish blood was mashed up together and poured into a large amphora. A layer of strong-tasting herbs like dill, mint and oregano was tipped on top, then lots of salt – ‘two finger lengths’, one recipe said, which is about 12cm. The Romans added more layers of fish, herbs and salt until the jar was full. Then they left it lying in the hot sun for a week until the fish had gone off and was pretty rank.

Making garum – a modern reconstruction of an ancient method.

After that, the mixture was stirred every day for three weeks, before it was sieved and the fish sauce was sent to Rome, where it fetched high prices.

On this basis one might suppose garum was better smelling than South East Asian fish sauces, which don’t usually contain herbs. It would depend on how you feel about the smell of mackerel compared to the smell of anchovy, the typical ingredient in a Thai fish sauce. Both are oily fishes, so perhaps it’s not too different. The Roman method, which left the fermenting fish lying around in an amphora, likely created stronger concentrations of smell than the Asian method, in which fermentation takes place inside a sealed barrel.

The best garum came from Barcelona in Spain. The factories that made it smelled so bad that they had to be built miles away from the nearest houses. Ordinary Romans were banned from making garum in their own homes because the stench was so awful the neighbours tended to complain.

 


 

Q: What exactly were the relics on which Harold Godwinson swore his oath to William of Normandy?

Harold swears his fatal oath of fealty to William – from that masterpiece of Norman-era propaganda, the Bayeux tapestry.

A: The exact nature of the relics is not specified in the most reliable contemporary sources. According to Orderic Vitalis – an Anglo-Norman monk who was writing more than half a century later, but who had been brought up in a Norman monastery – Harold super sanctissimas reliquias juraverat: he “had sworn upon the most sacred relics”. Anglo-Saxon sources from the 1060s are completely silent on the subject, however, and it’s as possible to argue that the entire story of Harold’s oath was an invention of the Norman propaganda machine, used to justify William’s invasion, as it is to suppose that the Saxon chronicles deliberately suppressed an incident discreditable to one of their own. Elizabeth van Houts, in her 1992 edition of William of Jumièges’s Gesta Normannorum Ducum, argues that the entire story is based on a single, official, Norman account, drawn up to be presented to the Pope in order to justify and seek approval for the invasion. Even Norman sources suggest there was an element of deception involved – that Harold realised only too late the holiness of the oath he’d sworn, because the presence of the relics was concealed from him.

That said, some later sources are more specific. The Brevis Relatio, written by a monk of Battle Abbey around 1130, the Warenne Chronicle (about 1157) and Wace (a canon at Bayeux), writing after 1155, say one of the reliquaries was the “ox eye” or “bull’s eye”. Wace’s passage says that William “ordered that all the holy relics be assembled in one place, having an entire tub filled with them; then he ordered them covered with a silk cloth so that Harold neither knew about them nor saw them, nor was it pointed out to him. On top of it he placed a reliquary, the finest he could choose and the most precious he could find; I have heard it called ‘ox-eye’.” Apparently this was because the centre of the reliquary was an elaborate mounting for a magnificent gemstone.

William the Conqueror (c.1028-1086): a decidedly non-contemporary image.

If this identification is accepted then the relics may have been those of St Pancras, which the Warenne Chronicle explicitly identifies with the ox-eye reliquary. Pancras, a Roman citizen of Diocletian’s reign, was especially venerated in England because St Augustine had been despatched from Rome bearing some of his relics when he was sent to convert the Saxons. However, the cult of St Pancras was unknown in Normandy, which perhaps suggests that Warenne was wrong, and implies that it was more likely that any oath was sworn on locally venerated relics.

If so, then much depends on where exactly an oath may have been sworn. There is no agreement on this point. Orderic says Rouen, William of Poitiers says Bonneville-sur-Toques, and the Bayeux Tapestry puts it in Bayeux. Stephen D. White, in his “Locating Harold’s oath and tracing his itinerary” in Pastan and White [eds] The Bayeux Tapestry and its Contexts, takes this latter identification and uses it to suggest that the most likely candidate is the bones of Saints Rasyphus and Ravennus. He argues that Duke William’s brother, Bishop Odo of Bayeux, is known to have commissioned a new reliquary to hold these c.1050 which can be identified with the image of the reliquary shown on the Bayeux Tapestry.

However, Odo’s name is not associated with this stage of the tapestry narrative, meaning it’s difficult to establish who exactly selected the reliquaries for any oath that was sworn. There’s no way to be certain, but if the incident occurred, and if it occurred at Bayeux, then most likely Odo would have been involved. As such, White’s identification, while it must be considered tentative, is certainly not implausible.

Q:  I remember reading David C. Douglas’s “William the Conqueror” and one line stood out. I’m paraphrasing since it’s been about eight years since I read it and a quick look through the book was unsuccessful in finding the exact quote, but it was something like, “There can be no doubt [Harold Godwinson’s oath to William] was genuine.” My immediate thought was “wait, what, it sounded like total bullshit made up by the Normans.”

What is the current historical consensus (or majority view) as to whether or not Harold ever swore an oath of loyalty to William?

A: There’s no question that Harold’s visit to Normandy, and the oath-swearing ceremony in which he vowed to support William’s claim to the English throne, are accepted by pretty much 100% of the authorities on this period – in fact the paper I linked to above is the only one I can recall reading that absolutely dismisses the idea. Other authorities question the veracity of Norman accounts of events, to varying degrees, but not that the events took place.

While it’s true that no contemporary Anglo-Saxon source mentions a visit by Harold to Normandy, and while I personally share your instinctive scepticism about the traditional account of events, it was therefore a little glib of me to imply that the two possibilities (that there was a visit and an oath-swearing; that there wasn’t) are equally probable. Let’s review the evidence as it’s generally set out.

Edward the Confessor, whose childless death precipitated a succession crisis in late Anglo-Saxon England.

[1] Could Harold have visited Normandy?

Saxon sources are silent about Harold’s whereabouts and activities between the conclusion of his campaign in Wales in 1063 and July 1065, when he is recorded as giving orders for the construction of a new hunting lodge at Portskewet to replace one lost in the course of a Welsh raid. Norman sources give no exact date for the supposed visit, but William of Poitiers places it at about the time of William’s acquisition of the county of Maine, which was complete by 1064. So there’s a sufficiently large hole in Harold’s known itinerary for the visit to have taken place at the time that Norman sources suggest it did. We have to conclude it’s possible he did travel to Normandy in 1064.

[2] How plausible is it that Harold could have gone to Normandy without leaving any trace in the contemporary record?

Perfectly plausible. We know this because Harold also made a visit to Flanders in 1056 which likewise left no trace in English sources. We only know about it because he witnessed a diploma drawn up in Flanders that year that has, fortuitously, survived. In addition, assuming that some ceremony in which Harold promised to support William’s claim to the English throne did take place, albeit under duress, Harold would have had no motive for broadcasting his actions when he returned home. There is one tiny fragment of evidence that suggests the Anglo-Saxon polity may have been aware of an oath-taking ceremony prior to the Conquest; this is a passage in the Vita Eadwardii Regis (a life/hagiography of Edward the Confessor) that observes that Harold was “rather too generous with his oaths (alas!)”. But even if this gnomic comment refers to Harold’s Norman visit – and Frank Barlow prefers to interpret it as suggesting that Harold, unlike his brother Tostig, “had the ‘smoothness’ of their father,” Earl Godwin – the VER was probably not completed until 1067 and the sole manuscript of it that we have appears to date to c.1100. We can’t rule out the insertion of a comment intended to win favour with the new king.

[3] What motive would Harold have had for visiting Normandy?

Four have been suggested.

The first is that he didn’t intend to go to France at all, but was caught in a storm while at sea in the English Channel. (This is first suggested by a late but very highly regarded source, William of Malmesbury. As for what reason he might have had for being at sea … Malmesbury says a fishing expedition, and the Bayeux Tapestry’s rendition of this part of the story features a line of thread that has been interpreted as a fishing rod. If true, this would make this the sole reference to a high English noble engaging in fishing as a sport, rather than the more conventional hunting or falconry.) A Scandinavian source, King Harald’s Saga, also says bad weather, though it suggests Harold had been caught while sailing for Wales. Whether or not any of this is true, it’s at least highly plausible to suppose that Harold had not intended to go to Normandy, as opposed to elsewhere in what’s now France, for reasons that I will discuss below.

Old Bayeux – the Norman town is possibly the likeliest setting for the controversial oath-taking ceremony Harold is alleged to have submitted to.

The second relates to another odd reference in the VER: that Harold was engaged in a study of the “princes of Gaul” and “noted down most carefully what he could get from them if he ever needed their services in any of his projects.” This has been interpreted as meaning he possibly undertook an expedition in search of a marriage alliance for one of his daughters, or perhaps a similar journey on behalf of his king.

The third is that he went to secure the freedom of two relatives – one of them his nephew, Hacon – who had been held hostage by Duke William since 1052. This is the view of the Canterbury chronicler Eadmer, writing in c.1100, but since it means that Harold would have willingly placed himself at the mercy of Duke William to secure the freedom of people whom he had apparently made no effort to get freed in the years 1052-64, I find this implausible.

The last is the official Norman view: that Harold was sent by Edward to confirm his promise of his throne to William. This last is also, I believe, highly unlikely, for three reasons. First, the throne was not entirely within the gift of the childless Edward; confirmation of an “outsider” candidate such as William would, at minimum, have required the approval of the Saxon witan, and Harold himself, it can very plausibly be supposed, would have vehemently opposed it – since it would very likely have resulted in a sharp fall in his personal power and prosperity. Second, when Edward wanted to name Edward (and later Edgar) Ætheling as his heir, he had him brought back to England, where he could be presented to the entire Saxon nobility; had he really wanted William to be accepted as his heir, it would have made more sense for William to be brought to England to meet the whole Saxon leadership than it would for one Saxon earl to travel to Normandy. Third, while the idea that there was a set line of succession is anachronistic in this period, it’s clear that Edward’s great-nephew, the ætheling Edgar – grandson of Edmund Ironside – was generally accepted as his heir at this point, and in fact had been brought back with his (by now deceased) father from exile in Hungary specifically to fill the role of heir apparent. It was only the timing of the Confessor’s death, which occurred when Edgar was still aged only about 14, too young to lead an army in battle, that made it possible for Harold to seize the throne in the extraordinary circumstances of 1066.

[4] What motive would Duke William have had for requiring Harold to swear an oath?

In my view, it’s here that the main conventional accounts of the oath-swearing ceremony break down. The purpose of the ceremony is quite clearly stated in the Norman sources to have been for William to secure Harold’s backing for his claim to be heir apparent to Edward’s throne. For William to have wanted to secure Harold’s support in this way makes perfect sense in light of what happened in 1066 – that is, it makes sense if we assume it went ahead on the basis that the Confessor would die in circumstances that made it possible for Harold to seize the throne for himself, and hence in circumstances that [i] made William’s claim much easier to press and [ii] made it necessary for him to dispose of Harold as a rival claimant. It makes very little sense in the circumstances that actually existed in 1064, when Edgar was very clearly the most obvious candidate for the throne – and, moreover, one whose recall from Hungary post-dated the supposed promise from Edward that William based his claim to the throne on.

Cnut the Great (right) and his wife, Emma. The Danish warlord took the Saxon throne by conquest in 1016, but his marriage to a Norman woman would have consequences for the kingdom three decades after his death.

To clarify this last point: Edward the Confessor certainly had pro-Norman sympathies. His mother was Norman and, during the period of Danish supremacy in England (the reigns of Cnut, Harold Harefoot and Harthacnut, 1016-1042), he had been sheltered at the Norman court. William’s claim was based on a promise Edward had supposedly made in c.1051-2, at the time of the fall of Harold’s father, Godwin, from royal favour. It is generally supposed that any such promise would have been engineered by, and made in the presence of, the Norman archbishop of Canterbury, Robert of Jumièges, prior to Robert’s deposition and exile at the behest of the resurgent Godwin family in 1052. Again, the very existence of such a promise is disputed, but if it had been made we can certainly say [i] that Edward had no absolute authority to make it and [ii] that, however much he did so under compulsion, it must have been effectively superseded or withdrawn by events post-1052, when Godwin was restored to his position as the chief power behind the throne. Given that the invasion of England from Normandy was an entirely unprecedented event, one that required the construction of a huge navy from scratch and that William’s barons apparently thought unfeasible, it would have made very little sense for William to assume that getting Harold’s support, via an oath sworn under compulsion – since, if Harold was in Normandy in 1064, he was effectively William’s prisoner there – would have helped much to cement a claim that would have set William against a legitimate member of the Saxon royal house, a close relative of the king, whose claim was acknowledged by the whole country (and de facto by the pope) at this time.

For me, such an action only makes sense in the context of William’s need to secure papal support for a claim made in the face of Harold’s kingship, not Edgar’s. That’s my main reason for suspecting that the oath-swearing was an invention of the Norman propaganda machine in 1066, not something that would plausibly have taken place in 1064. It is not impossible that a man as ambitious as William would have been willing to attempt an invasion of England in support of a claimed promise dating back to the early 1050s, and in the face of the accession of an older Edgar Ætheling. But it seems highly doubtful that he could easily have carried the papacy and his barons with him in order to assert such a claim. If he knew that, then forcing an oath-taking ceremony on Harold makes comparatively little sense.

[5] Why are historians of the late Saxon and early Norman period so willing to support the oath-taking accounts given by Norman chroniclers?

Essentially, as Harold’s recent biographer Ian W. Walker puts it in his Harold: The Last Anglo-Saxon King (p.105), because they feel that Norman accounts “must have a basis in truth, otherwise their authors would lose [he means would have lost, at the time] credibility completely.” To believe this, you need to believe that [i] the true events of 1064 and the oath-swearing ceremony were widely known in the period 1066-1100, and [ii] that public credibility (in the very limited sense of credibility among the audience of likely readers of these manuscript chronicles) mattered more to chroniclers such as William of Jumièges and William of Poitiers than the favour of King William. I would respond that there is no evidence of general awareness of an oath-swearing ceremony in this period – much less of any willingness on the part of those who were familiar with it to challenge the version mandated by the man who had become one of the most powerful rulers of his age – and that both chroniclers were intimately associated with William and his court, and therefore highly unlikely to care about anything as much as they cared about retaining William’s favour. This undoubtedly made them potential, if not actual, mouthpieces for Norman propaganda, including the oath-taking story.

Tl;dr Historical consensus strongly favours the reality of the oath-taking story, but there are, nonetheless, reasons to doubt it is correct.

 


 

Q: What prompted the first emperor of Qin to have hundreds of scholars buried alive and their works burned? If history was the primary concern, what interpretations and narratives was he trying to suppress? Was live burial a “normal” punishment or an exceptional one, to make an example?

Qin Shihuangdi, the First Emperor of China (259-210 BC) – a later, romanticised portrayal.

A: There are several points to consider here, and I will try to cover them one by one.

First, the general idea of live burial was not an invention of the First Emperor. More than 1,200 sacrificial burials dating to the Shang dynasty have been excavated at Xibeigang alone, and these include “a few children who seem to have been trussed up and buried alive.” A second set of live burials – both men and women – dating to the Warring States period has been uncovered at Langjiazhuang. This type of burial is termed “human offerings” by Chinese archaeologists, as distinct from the “companions in death”, typically young women, who were dignified with their own burials. Both types of burial seem to have involved the execution of wives, slaves or attendants who were to accompany some eminent dynast or court official into the afterlife.

In addition, Chinese chronicles record that during the latter years of the Warring States period, not long before the First Emperor’s birth, the survivors of a Zhao army that had gone to war with the state of Qin, but been surrounded and starved into submission, were supposedly “buried alive” en masse by Bo Qi (also rendered Bai Qi), the military genius who played a major part in setting up the eventual victory of Qin and the First Emperor over its rival states [Cambridge History of Ancient China I, 193, 640, 734].

This is the closest we get to an example of burial alive apparently being used as an exemplary punishment before Qin Shihuangdi’s time, but there is a vitally important caveat that applies both to Bo Qi’s atrocity and to the deaths of 460 scholars in 212 BC that Sima Qian records, in his Records of the Grand Historian, were ordered by the First Emperor. This relates to the correct way to translate “k’eng”, the word used to describe the deaths of both the scholars and the men of the defeated Zhao army. In its noun form, k’eng means “pit”, and it is for this reason that it has been understood since at least the 16th century to mean “buried” or even “buried alive.” However, both Emmanuel-Edouard Chavannes and Timoteus Pokora have convincingly argued that it should be translated to mean only “to destroy” or “to put to death”; hence there has to be considerable doubt as to whether any scholars were buried alive in the First Emperor’s reign at all. This is not an insignificant point, since the extreme nature of the punishment is integral to the way in which the First Emperor has typically been viewed both by later Chinese chroniclers (who of course could readily imagine themselves suffering similar fates) and by modern historians. [Chavannes, Les Mémoires historiques de Se-ma Ts’ien traduits et annotés, II, 119; for Pokora’s views see Archiv Orientální 31 (1963), 170-1.]

The Daoist alchemists consulted by the First Emperor sought recipes for immortality and wealth in nature.

Second, if we go back to the works of Sima Qian, it becomes clear that the idea of burning books, if not that of executing scholars, was not the First Emperor’s, but rather a policy that was urged on him by his chancellor and chief advisor Li Si (Li Ssu in the old Wade-Giles system of transliteration).

The sequence of events as set out by Sima Qian is that a number of Confucian “scholars of wide learning” attended an imperial banquet held at Qin Shihuangdi’s palace in 213 BC. One of these dared to criticise the Emperor for not following the example of the rulers of the Shang and Zhou in giving fiefs to his sons and to “meritorious ministers,” a policy that, it was intimated, had been crucial to the survival of these earlier dynasties. Li Si responded angrily that the “stupid literati” did not understand that things had changed and that “now the world has been pacified, laws and ordinances issue from one source alone.” Criticism of “the present age” would only “confuse and excite the ordinary people,” leading to a decline in imperial power. “It is expedient that these [criticisms] be prohibited,” he concluded.

The practical upshot of Li Si’s recommendation was an order that all the relevant records in the imperial record bureau be burned, and that any public discussion of the two most important works – the poems of the Book of Songs and the chronicles and collections of speeches contained in the Book of Documents – be made punishable by death. Furthermore, Li Si urged that orders be given for all copies of the prohibited works that existed outside the immediate control of the imperial government to be burned within 30 days.

This order applied specifically to works of history, custom and law. Works relating to divination, agriculture, medicine and forestry were excluded from the edict. Moreover, even copies of the forbidden works were authorised to be preserved within the archives of the Bureau of Academicians. The purpose of the order that interests you, therefore, was explicitly to prevent unrest, not to destroy knowledge utterly, nor, as is sometimes supposed, to establish a Pol Pot-style “Year Zero” for a new period of Chinese history, beyond which future historians would not be able to penetrate. It was possession and discussion of the forbidden works by scholars who were beyond the immediate control of the state (unlike those manning the Bureau of Academicians) that Li Si really objected to.

Most scholars suppose that the edict remained in place for no more than about five years (though it was not formally rescinded until 191 BC) and hence that the loss and destruction of old texts was less than total. Nonetheless, if only by drastically limiting the number of copies of ancient works that actually survived, the impact of the decree was considerable. Many of the archived works would have been destroyed when Han armies burned the Qin palaces at Xianyang in 206 BC. It’s worth pointing out, however, that this sort of attrition has been a normal feature of Chinese history. We have a catalogue of the Han imperial library as it existed in the first years of the first century AD (more than two hundred years after the First Emperor’s time), for example; of the 677 works listed, three-quarters are now lost to us.

The members of the Terracotta Army constructed to guard Qin Shihuangdi in death are now a major symbol of the emperor’s wealth and power.

As for the “execution of the literati”: that took place one year after the infamous burning of their books and, according to Sima Qian, for a different reason. Its proximate cause was the First Emperor’s determination to keep his movements hidden, which the Grand Historian attributed to the advice of the magician Master Lu, who was brought in to assist the Emperor in his search for an elixir of immortality. One consequence of this was that the emperor would have anyone known to have revealed his whereabouts put to death.

On one visit to the east coast, Qin Shihuangdi was angered to note the large numbers of carriages and attendants surrounding Li Si – these, he felt, clearly represented a risk that his whereabouts would be disclosed and the magical forces needed to secure the desired elixir dissipated. This news reached the ears of the chancellor, who, fearing the Emperor’s wrath, took immediate steps to reduce their numbers. When Qin Shihuangdi realised that there had to be an informer among his attendants, he had the entire group who had been with him at the time executed.

This drastic action, in turn, spread alarm among Master Lu and the scholars associated with him. Lu and several other magicians clearly feared they might be next, and fled – taking with them the Emperor’s chief hope of attaining eternal life. Qin Shihuangdi ordered an immediate inquiry into how and why Lu and his helpers had been able to flee, and when the other scholars in his entourage blamed one another, he had 460 of them selected for execution. In this case, therefore, the “scholars” we hear so much about were likely Daoists, alchemists and magicians rather than court historians or academics.

[Main sources for the discussion above: Derk Bodde, “The state and empire of Ch’in,” in The Cambridge History of China I, 69-72; Frances Wood, The First Emperor of China pp.40-5, 78-88.]

With regard to the follow-up question on sources: Sima Qian’s Records are indeed the most significant resource for the reign of the First Emperor. They are important not only because they are the only detailed source we have for much of what happened, but because they were compiled free of much of the inbuilt bias that bedevils later Chinese historiography. As the Cambridge History of China puts it (I, 972):

As yet they were not bound by the inhibitions under which their successors labored. They were not required to display their masters as paragons of fine behavior, whose predecessors had rightly deserved destruction. As yet they were not obliged to portray the past in terms of the steady influence exerted on mankind by the force of Confucian teaching.

[Such characteristics first emerged with full force in histories of the remarkable usurper Wang Mang, whose highly controversial reign at around the time of Christ separates the Former from the Later Han; I discuss these problems in more detail here.]

Two other important sources do exist, however. The first is the Han shu or Book of Han, a chronicle of the Former Han Dynasty, modelled on the style established by Sima Qian and covering the period 206 BC to AD 5 in 12 volumes. This work is late – completed c. AD 111 – but its early coverage deals with the overthrow of the Qin dynasty, and some of the people who figure in it were important in the First Emperor’s reign.

The second source that needs to be considered is archaeological and numismatic evidence – most obviously the imperial tomb from which the famous Terracotta Warriors have been unearthed, but also numerous lesser archaeological sites (most notably record steles) and coin finds.

Good sources of detailed information on the historiography of the period include Van der Loon’s “The ancient Chinese chronicles and the growth of historical ideals,” in Beasley & Pulleyblank’s Historians of China and Japan, and Michael Loewe [ed], Early Chinese Texts: A Bibliographical Guide.

Sima Qian, the “Grand Historian” and author of the only major surviving account of China during the Qin dynasty, was caught up in court intrigues and found guilty of serious offences. He chose castration over execution in order to finish his work – an example to all who have come after him.

Q: Could you please also comment on several recent discoveries of Qin period bamboo slips? To what extent did these newly discovered texts change our perspective on Shihuangdi and his empire?

A: There are two such discoveries, I believe: one a set of material from a Qin-era tomb in Hubei, totalling 11,000 strips and comprising 10 distinct texts, including some Qin statutes; the other a cache of 36,000 strips found in a well in Hunan.

The Hunan strips are routine official documents produced by low-level officials in the district. They can tell us something about the local organisation of the Qin state – for example, the details of its postal service, its document styles and its bureaucracy – and their dating also fills in some things we did not know about the Qin calendar. The Hubei strips tell us more about the operations of the Qin state at a slightly higher level. They reveal many hitherto unknown details about Qin administration, its requirements and its accounting practices. Also included in the latter collection are some biographical annotations concerning the magistrate in whose tomb the slips were laid, and a complete divination almanac showing the best days for making sacrifices, digging wells and so forth.

Broadly, we can say that the two collections are helpful in understanding everyday life in the Qin period, but they do not reveal much about the top-level workings of the Qin government or the doings of the First Emperor. As such, they fit into the broader continuum of Chinese social and economic history more than they illuminate the distinct political upheaval caused by Qin Shihuangdi.

 


 

Q: Were the pyramids still kept in repair at the time of Cleopatra?

The pyramids of the Giza group today.

A: We have only a little information about the state of Egyptian structures in the late Pharaonic/Roman period, so it’s difficult to be precise as to the state of repair of the pyramids – or any other Egyptian monuments – at this time. However, the short answer to your question is that, at least while Egypt retained some independence, occasional restoration work was done on some monuments, usually for religious/magical reasons to do with aiding souls that had already passed into the afterlife. For this reason, Pharaonic restoration work tended to involve erecting new inscriptions rather than making extensive repairs to old monuments.

Even this work seems to have largely ended by the time Egypt passed under Roman control (at least we have no evidence of its continued practice), and the Graeco-Roman period is often considered to mark the start of “tourism” to Egypt. Certainly it was in this period that many of the monuments famous today first seem to have been visited on a regular basis simply because they were remarkable sights.

That’s the summary; here are a few salient details:

  • We do know that Egyptians completed some repairs to the sphinx shortly before the reign of Thutmosis IV began in about 1420 B.C. The monument was by then almost buried in sand (as it later would be again), and Thutmosis, who was one of the then pharaoh’s sons but not actually in line to succeed him, had it excavated and built a retaining wall to prevent it from sanding up again too easily. His workmen also re-secured some blocks from its back in their proper places. This was not, however, a typical thing for an Egyptian ruler to do; we know from the so-called “Dream Stele” left at the site that Thutmosis’s motive for the restoration was a dream in which the sphinx promised him that he would become pharaoh if he restored it.
  • Later, in the reign of Ramesses II (c.1280 B.C.), the two main pyramids at Giza appear to have undergone some restoration. This work is attributed to Ramesses’ son Khaemwaset, who added hieroglyphic inscriptions to monuments at Giza, Saqqara and Dahshur. Although Khaemwaset is sometimes called “the first Egyptologist,” these additions had explicitly religious functions: although a contemporary inscription records that the prince “loved antiquity and his noble ancestors,” and could not bear to see old monuments fall into ruin, his texts were created because they “literally renewed the memory of those buried within, benefitting their spirits in the afterlife,” as Manassa notes.
  • Possibly associated with this same period is evidence from within the Great Pyramid of limited repair and replastering work that hardly fits the MO of the typical tomb robber. It’s not possible to date this work, but it’s usually attributed to the Pharaonic period.

    A deep scar marks the north face of the third pyramid at Giza, tomb of the pharaoh Menkaure. From an 1842 sketch by E.J. Andrews.

  • About a century later, during the 12th Dynasty, a ruler named Khnumhotep set up an inscription (first transcribed by Percy Newberry in 1890-1) which implies that some Pharaonic-style conservation work took place in this period. His inscription boasted: “I caused the names of my fathers which I had found destroyed upon the doors to live again…”
  • At some point during the Middle Kingdom, at the height of the Cult of Osiris, the royal tombs at Abydos were excavated in search of Osiris’s tomb. When the diggers uncovered the First Dynasty tomb of Djet, they took it to be the deity’s resting place and so restored it, building a new roof and an access stairway.
  • In the Third Intermediate and Late Periods, older monuments were studied so that their styles could be replicated in new buildings. Some dilapidated temples were restored at this time. The work was not extensive, however, and with the decline of the state, funds for restoration probably weren’t available in any case. Thompson states that “by the Roman period, Egypt was little more than a mass of ruins.” What survived was generally that which had been built most solidly – not least, of course, the pyramids.
  • Both Strabo (writing within six years of Cleopatra’s death, in 24 B.C.) and Diodorus Siculus give accounts of the Great Pyramid that imply they personally visited the site and were taken around it by local guides, who told them stories about its construction. Diodorus, who visited in around 50 B.C., writes of the Great Pyramid in chapter 64 of his Universal History that he saw “the entire structure undecayed” – though it would be unwise to assume this was a careful description.
  • That’s not least because Roman-era graffiti, written in soot on the roof of the subterranean chamber, was found inside the Great Pyramid early in the 19th century – which again strongly suggests that the pyramid was open to at least some visitors at this time. That the pyramid’s Descending Passage was left open, not sealed, argues against the idea that the local people were keeping the monuments “in repair” in Cleopatra’s time, and might suggest they no longer considered them sacred in this period, several centuries after the arrival of dynasties of Greek rulers.
  • We also know that Romans often visited other Egyptian sites to see their wonders – popular destinations included Amarna, Abydos, Hatshepsut’s mortuary temple, Karnak and the Valley of the Kings. Unfortunately, all we have in these cases are inscriptions, not accounts of what exactly these sites looked like at the time. But again this argues against Pharaonic monuments being considered sacred and inviolate in this period.
  • There are numerous other examples of Graeco-Roman graffiti on various Egyptian monuments, perhaps most famously on the plinths and legs of the pair of sandstone colossi commemorating Amenhotep III (reigned c.1350 B.C.) near Luxor that are popularly known as the Colossi of Memnon. One of these statues was felled by an earthquake in 27 B.C., only three years after Cleopatra’s death, and it was after that occurred that the statue famously began to emit an unusual sound, said to have been like the string of a broken lyre, soon after sun-up on some mornings. Largely thanks to this phenomenon, the Colossi acquired a reputation as an oracle. Because of the fame thus acquired, and the graffiti left by visitors, we know something of their history around this time, and it’s clear that while the damaged statue was not immediately repaired, the fallen portions were replaced about 200 years later – a restoration popularly ascribed to Septimius Severus, who visited the statues, but failed to hear the sound, shortly before 200 A.D.

Sources

Thomas W. Africa, “Herodotus and Diodorus in Egypt,” Journal of Near Eastern Studies 22 (1963); Colleen Manassa, Imagining the Past: Historical Fiction in New Kingdom Egypt; Maria Swetnam-Burland, Egypt in the Roman Imagination; Jason Thompson, Wonderful Things: A History of Egyptology 1: From Antiquity to 1881