Ask Mike

A selection of answers to questions posed by readers of AskHistorians. Refreshed most weeks, with the latest postings at the top.

Or go here for more answers from Mike.

Short index to questions (the lower the number, the further down the column the answer will be found)

[16] Did Coca-Cola produce a clear version of Coke for General Zhukov that could be disguised as vodka? If so, for how long was this going on?

[15] Why did Poland have lower rates of Black Death than other European countries during the 1300s?

[14] What is the truth about “getting shanghaied”? Was there such a thing as a bar in 19th-century San Francisco with a chair that dumped drugged people down a trapdoor to kidnap them and force them into the sailing life?

[13] During the New York Draft Riots (1863), supposedly the New York Times defended their office from the mob with 2 Gatling guns. Where did they obtain these guns and ammunition and how did they turn away the mob?

[12] Is it true that Henry VIII feared being attacked so much he had himself bricked into his bedroom every night?

[11] On the giants of Patagonia

[10] Whatever happened to the hotel detective?

[9] At any point between the end of WWI and the end of WWII was there ever a rise of supernatural beliefs in Japan?

[8] Why did Hatshepsut’s successors attempt to proscribe her memory?

[7] What was the murder rate during the medieval period?

[6] I am a hot-blooded young British woman in the Victorian era, hitting the streets of Manchester for a night out with my fellow ladies and I’ve got a shilling burning a hole in my purse. What kind of vice and wanton pleasures are available to me?

[5] Did British criminals in the 1700s and 1800s really worship a deity called the Tawny Prince? If so, what were the origins of this deity?

[4] How bad would it have smelled in a medieval city?

[3] What exactly were the relics on which Harold Godwinson swore his oath to William of Normandy?

[2] What prompted the first emperor of Qin to have hundreds of scholars buried alive and their works burned?

[1] Were the pyramids still kept in repair at the time of Cleopatra?

Q: Did Coca-Cola produce a clear version of Coke for General Zhukov that could be disguised as vodka? If so, for how long was this going on?

A: Zhukov’s “white Coke” was a product of the Coca-Cola Company’s Technical Observer programme, established with the help of the US Army during World War II as part of a plan to raise morale among American troops by ensuring they had a constant supply of their preferred non-alcoholic beverage.

The story seems to have originated in interviews conducted by Mark Pendergrast in the late 1980s and early 1990s for his unauthorised history of Coca-Cola. He reports that, in the course of the war, Coke sent 248 TOs into the main combat zones, each of them exempt from conscription thanks to “a remarkably cozy arrangement” with US draft boards. The TOs were charged with installing Coke manufacturing and bottling plants behind the lines and ensuring supplies of the proprietary syrup from which the drink was made came through safely from the States. Between them, they were responsible for serving the US armed forces a total of 10 billion Cokes in the course of the war. The scheme was a propaganda triumph for the Americans (and Coke), with MacArthur personally autographing the first bottle of Coke to come off the production line in the Philippines after the liberation. Eisenhower was also a supporter of the programme, and according to the Times-Herald (19 June 1945), on his return to Washington after the war,

After feasting copiously at the Statler luncheon yesterday, Gen. Eisenhower was asked if he wished anything else.

‘Could somebody get me a Coke?’ he asked.

After polishing off the soft drink, the General said he had one more request. Asked what he wanted, he answered:

‘Another Coke.’

Coke had withdrawn from central Europe during the war, though thanks to the efforts of some remarkably loyal local employees, their operation in Germany had managed to stay in existence during the conflict under another name and without its lead product – Fanta being the most famous replacement drink it introduced during this period. But the Company took aggressive steps to recover its position after the war ended, opening 38 new plants in southern Europe in the years 1946-47 alone in an effort to prevent Pepsi from establishing itself in what had once been Coke territories.

One of Coke’s TOs in Europe was Mladin Zarubica, a wartime PT boat captain who was sent to Austria in 1946 to supervise the installation of a massive new bottling plant there, four city blocks long and capable of producing 24,000 cases of the beverage per day. Zarubica’s relations with the US Army were typically close – the first consignment of the sugar required by the new plant was guarded by 500 GIs to prevent it being plundered by black marketeers. He also had sufficient influence to get a huge villa in Berchtesgaden refurbished as a base for corporate entertainment. “We had waiting lists to come there,” he recalled. “Senators, potentates, you name it.”

It was Zarubica who arranged for the engineering of Zhukov’s “white Coke.” According to his account, Eisenhower had introduced Zhukov to Coke during their time together in the Occupied Zones, when Zhukov was in charge of the Russian zone and Eisenhower of the American one. Zhukov liked the drink enough to ask Eisenhower’s subordinate, Mark Clark, for a supply of it, but with one proviso:

It couldn’t look anything like Coke. As the central Russian war hero, Zhukov knew he couldn’t be seen drinking an American imperialist symbol. Clark passed the request up the line to President Truman, who summoned Jim Farley [Chairman of the Board of the Coca-Cola Export Corporation], and soon word filtered back to Zarubica, who found a chemist to take out the caramel coloring. Then the Coca-Cola man had the Crown Cork and Seal Company in Brussels make a special straight, clear bottle with a white cap and a red star in the middle.

Zarubica’s first shipment of White Coke for Zhukov amounted to 50 cases of the drink. In addition to the benefit to Soviet/US relations, there was also a plum for Coke: “The regular Coke supply from [the manufacturing plant at] Lambach had to pass through the Russian zone to reach its Vienna warehouse. While other supplies often waited weeks for the Russian bureaucracy to allow them through, the Coke shipment was never stopped.”

So Zarubica’s account explains how and why Zhukov’s white Coke came to be manufactured, but unfortunately he gives us no clue as to how long it was made for. We also need to be aware that the whole incident has been filtered through the lens of the – very often self-glorifying – Coke company tradition; in fact, outside the Coke tradition, I’ve seen no evidence the incident took place. There’s good evidence in Zarubica’s book The Year of the Rat that he could be a highly unreliable narrator; the book suggests that the guide he hired to hunt chamois during a holiday in the Tyrol around this time turned out to be Martin Bormann.

But there is one interesting bit of corroborating evidence. Pendergrast’s book contains an appendix discussing the famous “secret formula” for the Coke syrup that lies at the heart of the Company’s IP and contributes substantially to its mystique. This formula, notoriously, is available only to a tiny number of Company high-ups, there are restrictions in place to prevent all those who know it travelling together on the same plane, and in 1977 the Company withdrew from India rather than share the formula with the Indian government – which had demanded sight of it as a condition for Coke continuing to trade in India.

In the course of his research, Pendergrast uncovered what he believed to be the original formula in the Coke archives – unlabelled and apparently missed by the Company archivists. When he interviewed Zarubica, and mentioned he had a Coke formula, the former TO responded:

“Oh, really? So do I. The Company gave me one when I had to take the colour out for Zhukov. Want to see it?”

I did indeed. When the photocopy of his January 4, 1947, correspondence arrived, it contained exactly the same formula that I had found in the archives – same amounts, same format, even the same misspelling of “F.E. Coco.” The only difference was that Zarubica’s formula was incomplete, leaving off the final two ingredients in 7X [the flavouring at the heart of the Company’s formula]. It appeared they hadn’t wanted to release the complete formula and had taken the precaution of altering it in this fashion… [leaving] enough for his chemist to figure out how to turn brown Coke to white.

Pendergrast pp.422-3

Of course Zarubica could still have been lying about the Zhukov incident – but, if so, for what reason would Coke have sent a mere TO a copy of its sacred formula? He had no need of it to run a manufacturing or bottling plant, since Coke always made up the syrup in the US and shipped it out to overseas partners. That was the way it ensured the secret was kept. And Zarubica never claimed to have done anything else that would have involved him in tampering with the formula to produce a variant Coke.

So since Pendergrast was not a Coke man (and Coke wasn’t happy he published the original formula in his book), I’d say it’s not a bad bit of supporting evidence.


Mark Pendergrast, For God, Country and Coca-Cola: The Unauthorized History of the Great American Soft Drink and the Company That Makes It (1993); Mladin Zarubica, The Year of the Rat (1964).

Skeletons from a 14th-century plague pit excavated in Lincolnshire. The pit, uncovered beneath the remains of a hospital building at the site of Thornton Abbey monastery, contained 48 skeletons, 27 of which were children.

Q: Why did Poland have lower rates of Black Death than other European countries during the 1300s?

I’m in the middle of creating an essay/presentation on this topic, and I’m having trouble finding concrete, sourced info. I’ve read anecdotal information on Reddit, Quora, etc. about the leadership of King Casimir the Great and the lack of cat-killing playing a role, among other things. What I really need is some quality, sourced primary or secondary information about the topic.

A: I agree with you that this question deserves proper investigation, and a better sourced answer to what is a pretty frequently posted query. My contribution is this short critical review of sources. I need to start by stressing that I do not read Polish and that the real answer, insofar as one does exist or can exist, is almost certainly going to be found in specialist economic and demographic studies done by Polish-speaking scholars. I’d strongly encourage anyone able to supplement our knowledge with information drawn from such sources to do so.

That said, I think we can make a couple of basic points that take us further than we’ve gone so far. The first is that Poland has not always been associated with low incidences of plague deaths. Something happened at some point in the 20th century to change our ideas as to how the relative incidence of plague in Poland should be viewed, and it’s interesting and revealing to investigate what exactly that “something” was, and to what extent it was based on heavy-duty primary source research in Poland.

Philip Ziegler, author of the most widely-sold history of the Black Death: his book provided clues that solved the mystery.

My second point is that there is some reason to suppose that the association of Poland with low incidence of plague is a product not of any actual variance in the death rate, but of a lack of sources or a lack of research. Ziegler, in his influential popular history of the Black Death (1969), makes the point that “until quite recently it was accepted tradition that the plague scarcely penetrated to Castile, Galicia and Portugal”; but he accepts that more detailed investigation of these regions dispelled the idea. This seems important to me, especially as discussions of Poland as mysteriously almost “plague free”, and forming a dramatic contrast to the rest of Europe, are pretty much entirely an internet phenomenon. Historians of the period, and even historians of plague, concede that the region experienced the Black Death somewhat differently to the lands to the west, but they have evidently not found that difference so striking as to feel compelled to launch in-depth investigations into why; the explanation most commonly suggested in scholarly texts is that the population density of the area in this period was simply too low to allow the disease to be transmitted readily. Hence it seems to me at the very least possible that the idea that the Polish experience of the Black Death was unique, or almost so, is actually wrong, and that the explanations you cite for the supposed lesser incidence of plague deaths in the region (no cat-killing, the leadership of Casimir the Great) are post-hoc rationalisations – attempts to find explanations that fit the supposed evidence – rather than being sourced in contemporary materials.

Danse Macabre: Ole J. Benedictow calculates the Black Death killed 50 million people across Europe between 1347 and 1351.

Finally, I have to wonder to what extent a lack of contemporary source materials adds to the difficulty of discussing what happened in Poland during the plague years. Assessing the number of deaths caused by the Black Death is notoriously difficult even in countries with significant quantities of extant records, and estimates of total mortality vary dramatically from author to author; even today, after decades of intensive research, one can read estimates of mortality across Europe ranging from a third to a half to 60 percent of the total population. It would therefore be astonishing if there were anything like a consensus for Poland – and indeed the materials I’ve read suggest there are dramatic disagreements on crucial matters such as the approximate population of Poland (and even what constituted the “borders of Poland”) in the period 1000-1500. Until such matters are resolved (if that is even possible), we will not be in a position to make statements about the demographic impact of the Black Death in Poland with any confidence.

Part I: A survey of histories of the plague in Poland, 1832-2015

OK, let’s start with a survey of what has been said about the impact of the Black Death in Poland over time.

Jan Długosz, 15th century chronicler of Poland’s history.

• Hecker, a once highly renowned German medical historian, says in his On the Black Death (1832) that the plague arrived in Poland from Germany, and cites the chronicle of Jan Długosz (better known to English speakers as Johannes Longinus Dlugloss), Annales seu cronici incliti regni Poloniae (1515), as his authority for the statement that “in Poland the infected were attacked with spitting of blood and died in a few days in such vast numbers that… scarcely a fourth of the inhabitants were left” – that is, he thought mortality in Poland was practically 75%.

• Most other works published around the middle of the nineteenth century mention Poland only in the context of what was then seen as the most interesting and remarkable thing about the plague’s passage through eastern Europe: “it had thus made the circuit of the Black Sea, by way of Constantinople, southern and central Europe, England, the Northern Kingdoms, and Poland, before it reached Russian territories, a phenomenon which has not again occurred with respect to more recent pestilences originating in Asia.” (Buckle’s Common Place Book, 556 (1872))

Monumental: Sticker’s three-volume Abhandlungen aus der Seuchengeschichte und Seuchenlehre (1908-10)

• Sticker, in his monumental Abhandlungen aus der Seuchengeschichte und Seuchenlehre (“Treatises on the History and Science of Epidemics”, 1908-10), basing himself on two contemporary chronicles, suggests that the Black Death entered Poland from Hungary and caused incredible mortality. He writes that the plague killed half the population and depopulated entire towns and villages. This estimate is in line with the figures normally given for plague deaths elsewhere in Europe in this period.

• Similarly, the Polish Encyclopædia (1921) states that “long, for instance, did the land bear the traces of the ‘Black Death’… which swept down with unabated fury upon Poland after decimating the populations of Italy, France, England and other countries of western and central Europe.”

• This seems to have been the consensus for some time. As late as 1969, George Deaux, in his The Black Death, 1347, could write that “the Black Death attacked Hungary and Poland at the same time as it appeared in Austria and with the same results. Towns were left totally depopulated…” In this context, it is interesting to note that Deaux (a popular writer) had access to no population studies for the region more recent than Hecker’s, and made the same claim of 75% mortality in Poland.

Then we come to the mid-20th century shift, after which it is generally accepted that something unusual did happen in Poland. Thus for example…

Perry Anderson: historian of absolutism.

• Perry Anderson’s Lineages of the Absolutist State, a very broad survey first published in 1974, observes that “Poland suffered less from the late feudal crisis than any other country in Eastern Europe; the Black Death (if not ancillary plague) largely passed it by, while its neighbours were ravaged.”

• Joseph Strayer’s Dictionary of the Middle Ages (1982) says that the plague was devastating “in all of eastern Europe save Poland.”

• Norman Davies’s God’s Playground: A History of Poland (1982) states not only that “Poland escaped the scourge of the Black Death”, but also makes the extraordinary claim that “economic life was not disrupted.”

• And by 1983, a Rutgers professor, Robert S. Gottfried, could confidently state that “Poland lost about a quarter of its population to the plague.”

To the extent that this “something” is defined, it’s generally placed, as I mentioned above, in the context of relative population density – Poland was too sparsely populated to allow the plague to spread as rapidly as it did elsewhere. Sometimes the relative absence of trade – and hence of both travellers and the sort of transport of goods in which infected rats might hitch rides – is also suggested. It has also been argued that the relatively slow and late enserfment of peasants in eastern Europe suggests there was no demographic catastrophe in the region in the mid-14th century.

And the change in position had certainly been well cemented by the 2000s:

• Jerzy Lukowski & Hubert Zawadzki, A Concise History of Poland (2005) p.30 suggest that “the Black Death left a sparsely populated Poland largely unscathed”.

• Adam Zamoyski’s popular Poland: A History (2009) goes so far as to state that “most of Poland remained unaffected. The populations of England and France, of Italy and Scandinavia, of Hungary, Switzerland, Germany and Spain were more than halved. Poland’s grew, perhaps as a consequence of conditions elsewhere.”

Mass burials during the plague years.

• But these are survey works, mostly interested in later periods of Polish history, in which rather basic research and sweeping statements may be expected. Robert Frost, in The Oxford History of Poland-Lithuania, provides a more nuanced discussion. He similarly suggests that the “relatively sparsely peopled lands of east central Europe did not suffer as badly as western and southern Europe from the Black Death,” but adds that “Poland was not untouched. Population growth slowed, but its upward trajectory was not reversed, and by the mid fifteenth century the disparity between the density of settlement in Poland and western Europe had ceased.”

Frost goes on to state that the stream of German migrants heading east was “abruptly halted” by the plague, and that this caused the Polish economy to collapse after 1350, in part because of shortages of specie flowing in from the west. This may well be true, but, if so, it would partially or completely mask any impact to Polish population and economic activity caused by the Black Death within the borders of the kingdom, and make it harder to study the population problem in Poland. I think this is an important point, since once we start to consider Poland as an interconnected part of the broader socio-economic history of eastern Europe, we necessarily also have to start wondering about any claims to either its “isolation” or its “uniqueness”, both of which are central to the popular view of Poland as being mysteriously unaffected by the plague.

Part II: Was there anything special about Poland?

So, what can we do to investigate the idea of “Polish exceptionalism” in the spread of the Black Death?

• To begin with, I think that the literature which is available to me suggests it is actually difficult, verging on impossible, to extrapolate any accurate demographic info for Poland in the period before, during and immediately after the Black Death. The sort of detailed, manor-by-manor records that still survive for some parts of England, for example, and inform works such as Hatcher’s The Black Death: An Intimate History, simply don’t exist. All we have left are chroniclers’ stories (which are notoriously likely to over-estimate the amount of destruction caused, and certainly seem to have informed the 19th-century estimates we’ve seen of 75% death rates in Poland), one detailed – but late – economic study, and extremely broad surveys of Poland’s demography, which attempt to extrapolate large figures from isolated bits of information and tiny samples. Thus…

• The chapters by Christopher Dyer (on rural Europe) and Aleksander Gieysztor (on the post-1370 Kingdom of Poland and the Grand Duchy of Lithuania) in The New Cambridge Medieval History, vol.7 help add to our understanding of all this. Specifically, Dyer takes issue with the idea that it is possible to use the enserfment of Poles as an indicator of demographic crisis in the region:

parts of north-eastern Europe – now Poland and the Baltic states – are often cited as following a course opposite to that found in the west. Weak states, an undeveloped urban sector and a powerful nobility meant that peasant conditions deteriorated, beginning in the period of ‘second serfdom,’ as tenants were restricted in their movement and forced to perform heavy labour services. In fact, the peasants of eastern Europe were being brought under serfdom for the first time (they had been encouraged to settle the new lands in the east with privileges and easy terms in earlier centuries). Enserfment took a long time, beginning in the later years of the fifteenth century, and was not completed until well after 1500. This cannot therefore be seen as an immediate response to any fall in population.

• Gieysztor notes that the population density of Poland in 1370, after the ravages of the plague, was about 8.6 people per square km, based on a total population of about 2m, but this figure is challenged by Frost, who notes it includes “Mazovia, whose princes recognized Casimir’s overlordship, but not that of the Polish kingdom, or of Louis of Anjou.” A second estimate by Kuklo (Demografia Rzeczypospolitej przedrozbiorowej, 2009, cited by Frost) excludes Silesia but includes Prussia and Mazovia, and suggests an estimated population of 1.25m in 1000 (with a density of 5 per square km), rising to 2m in 1370 (8 per square km) and 3.4m in 1500 (13 per square km).

Norwegian plague historian Ole J. Benedictow.

• I have found only one detailed study which appears to show the impact of the plague on Poland. Two economic historians, Pelc (a Pole, writing in the 1930s) and Abel (a German, writing in the 30s and again in the 50s, and reorganising Pelc’s apparently opaque series of data), organised wage and price data from Krakow for the period beginning in 1369. This is a little late to be ideal, but Abel’s broad conclusion was that the Polish data matched equivalent figures from France and England for the same period; that is, there was a major fall in grain prices, and a major rise in wages, both of which are best explained by a significant decline in population. Benedictow observes that while these figures only relate to one Polish town, they must imply that there was a shortage of labour across at least much of Poland, otherwise the availability of higher wages would have encouraged immigration to Krakow.

These patterns therefore suggest the impact of the plague on Poland was similar to that in western Europe – that is, very significant.

A nation of cat lovers?

• We can also attempt to trace the idea that Poland’s escape from contagion was a product of special factors – the ideas, frequently encountered online, that Casimir the Great “wisely quarantined the borders” or that Polish love of cats was a determining factor. The earliest reference to the former I can find appears in Christine Zuchora-Walske’s Poland (2013), and though I would certainly love to have a contemporary source, I have to point out that even if something of the sort actually was ordered, that’s not proof that the order was effectively implemented, or had a measurable impact.

As for cats – the idea that they helped retard the spread of plague can be traced back online at least to 2010 (though not in the context of Poland), but not to any academic study I have found. I have not seen any evidence that suggests either that cats were commonly massacred in most of Europe in this period because they were associated with the devil, as Hollee Abbee argues, or indeed that the Poles were less likely to kill cats than any other people. And it seems well established that cats can act as carriers of both bubonic and pneumonic plague in any case, so the idea that Polish cats were efficiently disposing of diseased rats, without picking up fleas and contracting plague themselves, seems highly dubious (see Kauffman et al., Doll et al. and Weiniger et al., all in the sources at the foot of this post).

[We’re actually passing another historical rabbit hole here, one I just don’t have the time right now to explore in any depth. But briefly: it’s possible to trace the idea that there was an extensive slaughter of cats in Europe in the period before the Black Death to various discussions of a Papal decretal known as Vox in Rama, issued by Gregory IX in c.1233. Thus Wikipedia features an entry for this document suggesting it was issued to condemn a sect of German heretics uncovered in Mainz who “worshiped devils in the forms of a demonic man and of a diabolical black cat”. The same entry goes on to claim that

Some historians have claimed that Vox in Rama is the first official church document that condemns the black cat as an incarnation of Satan. In the bull the cat is addressed as “master” and the incarnate devil is half-man half-feline in nature. Engels claims that Vox in Rama was “a death warrant for the animal, which would be continued to be slaughtered without mercy until the early 19th century.” It is said that very few all-black cats survive in western Europe as a result.

The sources given for these statements are Donald Engels’s book Classical Cats: The Rise and Fall of the Sacred Cat (1999) and Malcolm Lambert’s The Cathars (1998). I have not found any sources dating to earlier than 1995 that make this claim, or any more solidly scholarly resources, of any date, that suggest the decretal resulted in any persecution of cats whatsoever, but there are plenty of internet resources out there making precisely this claim in extravagant fashion, for example “That One Time the Pope Banned Cats and It Caused the Black Plague”. Kors and Peters stress that Vox in Rama was not a bull (as it is often stated to be), and never entered canon law. The decretal also suggests devils take the form of frogs and toads, so any focused persecution of cats would seem odd. And anyway, even Engels suggests only black cats were killed, presumably leaving Europe’s population of other-coloured cats untouched.

Gregory IX (r.1227-41): did he really set in motion a centuries-long massacre of Europe’s cats?

So what we seem to be seeing here is another process of post-hoc rationalisation, where the line of argument – flawed throughout – goes something like this:

  1. Gregory IX’s decretal suggested that cats were the tools of the devil.
  2. This prompted a great cat massacre, lasting for centuries, which killed most of the cats of Europe.
  3. Without cats, the rats that carried the Black Death were able to flourish, significantly increasing the impact and spread of disease.
  4. Poland escaped the ravages of the plague.
  5. Therefore the Poles cannot have massacred their cats.

… but the argument itself is obscured by the fact that the articles, essays and blog posts that result from it start with the definite – but unsourced and unproven – statement that the Poles had always had a special relationship with their cats, one so unique and so strong that it allowed them, and only them, to ignore the Papal “orders” contained in Vox in Rama.]

Norman F. Cantor, Emeritus Professor of History, Sociology, and Comparative Literature at New York University.

• Finally, it’s worth adding that other explanations have also been hazarded, again apparently only very recently. For example Norman Cantor’s In the Wake of the Plague: The Black Death and the World It Made observes that “the absence of plague in … Poland is commonly explained by the rats’ avoidance of these areas due to the unavailability of food the rodents found palatable.” This seems an extraordinary idea, and it took only a minute’s searching to uncover (for instance in Lardner) Polish chroniclers’ tales featuring “countless multitudes of rats, of an enormous size,” which seems to argue pretty strongly against the idea that such vermin were scarce in the area.

Part III: The smoking gun

So, with all that said, let’s look at the reason for the seismic shift in attitudes to the impact of the plague in Poland, which, as we’ve seen, takes place (at least in the English language sources) in about 1969-74. I think it is possible to identify exactly where the idea that Poland was somehow less badly affected by the Black Death comes from. The clue comes from Philip Ziegler’s best-selling and influential popular history, The Black Death – published in 1969, remember – which states (p.118):

Dr Carpentier has prepared a map of Europe at the time of the Black Death showing the movements and incidence of the plague. Virtually nowhere was left inviolate. Certain areas escaped lightly: Bohemia; large areas of Poland; a mysterious pocket between France, Germany and the Low Countries; tracts of the Pyrenees.

It seems to be Élisabeth Carpentier’s map, then – first published in the French journal Annales in 1962, but given a substantial push in its cross-over into English by Ziegler – which helped to introduce the idea that Poland escaped much of the impact of the Black Death. Now, admittedly, if this information is merely written down, it’s hard to understand how Carpentier’s work makes it possible for us to privilege Poland’s exceptionalism. After all, she and Ziegler go on to list other areas of Europe that also appear to have almost escaped its ravages. (In this context, it’s well worth adding, at this point, that the legend Carpentier adds below the map specifically draws attention to the apparent escape of Milan from the worst effects of the plague. That has also become an item of popular belief, and is very frequently the subject of questions posted online.)

Elisabeth Carpentier’s provisional – but unexpectedly influential – 1962 Annales map showed the spread of the Black Death, with areas of lesser impact (“totally or partially spared”) shown shaded.

We need to look at the map itself [above] to understand how it could have had such a dramatic impact. It illustrates the regions lightly touched by the Plague using shading – and simple geography dictates that the area left “untouched” in Poland is well over ten times the size of the next largest bit of shading, covering Béarn, in the northern reaches of the Pyrenees. Don’t believe me? How about this version of the same map, coloured this time, but taken originally from Angus Mackay’s Atlas of Medieval Europe? Or the spectacular gif offered by Wikipedia to show the spread of the outbreak? Looking at these, it would be hard not to conclude that something amazing happened in Poland in 1347-60, something requiring some remarkable explanation.

I think a glance at Carpentier’s map, by itself, is sufficient to let us see how the idea that Poland was somehow very special came about – and when we couple that with mention of the effect (and publication of the map) in Ziegler’s book, which was and is by far the best-selling popular study of the Black Death in English, we can make a pretty educated guess as to how it wormed its way into our collective consciousness, and from there crossed over onto the internet. It must have helped that, as early as 1964, the same map was also republished in Scientific American.

The oriental rat flea (Xenopsylla cheopis) is generally supposed to have been the initial carrier of the Black Death.

While it was crossing over into the Anglophone world, however, the map was also attracting some criticism, by far the most interesting example of which is an article by David Mengel of Xavier University which appeared in Past & Present (pretty much as prestigious a history journal as there is, it seems hardly necessary to add) in 2011. Mengel’s focus is on the widely-accepted escape of the Kingdom of Bohemia (a region very roughly equivalent to the modern Czech Republic) from the worst ravages of the plague, but what he says applies equally to the very similar situation with regard to Poland. His paper acknowledges the “influential role of popular history in shaping the questions and assumptions of scholars” and also discusses “the power of cartography to convey historical arguments.” He calls the influence of Carpentier’s map “astounding”.

Mengel’s criticisms are backed by those of a Czech historian, Frantisek Graus, who was the author of several highly detailed studies of Bohemia, which drew on sources – chronicles, sermons and letters – that had never previously been used in plague studies. Graus’s work showed that there was far less uniformity in the impact of the plague in Bohemia than Carpentier’s – very broad brush – map implied. More importantly, it concluded that while Bohemia had probably not suffered exceptionally severely from the initial outbreak of plague in 1349-50, the disease had returned in catastrophic style in 1380, visiting an exceptionally severe outbreak of plague on Prague and other major cities in the kingdom. In other words, Bohemia was not in some way uniquely resistant to the plague. It got (relatively) lucky once, but overall suffered at least as badly from the Black Death as did most other parts of Europe. In fact, Graus’s work is largely a call for the Black Death to be placed in a much broader context, as part of a wider pattern of epidemics. This, he stresses, is certainly how contemporaries saw it, though we can also read his work as a plea for the social crisis of the fourteenth century to be viewed in classical Marxist terms, and not as the product of the chance intervention of mere pathogens. For Graus, it was Bohemia’s proximity to (and not, as Ziegler mistakenly argued, distance from) Europe’s major trade routes that explained the impact of the epidemic in 1349-50 and 1380.

Skeletons excavated during building work beneath the ticket hall at Liverpool Street underground station in London are rare examples of confirmed victims of the plague – this time from the famous outbreak of 1665.

There are plenty of examples of the sort of problems caused by uncritical acceptance of the idea that various areas of Europe “escaped” the plague – it’s invidious to pick out examples, really, but this is an important issue, so see for instance the Rutgers undergraduate paper referenced in the notes. It’s an example of what happens when you assume something to be true and then try grimly to explain it, without ever challenging your initial assumption.

There’s only one thing left to do at this point, and that’s to ask exactly what encouraged Carpentier to conclude that Poland experienced the Black Death differently to practically every other part of Europe. After all, however inadvertently, her paper, and her map, have caused a thousand lazy writers on the net – and some first-rate historians, as well – to think of Poland as some sort of shining beacon in the grim history of mid-fourteenth century Europe. What on earth prompted her to identify the kingdom as an area “partiellement ou totalement épargnées par la peste” – partially or totally spared by the plague? What evidence did she cite, and how much detail did she go into? Could it be we’re overlooking some crucial bit of evidence that she dug up half a century ago?

Well, the answer to that final question is a resounding “no”. Carpentier’s paper mentions Poland only once, and pretty much in passing. She asserts that the country was “only affected – weakly – in its northern part”, and urges further study, noting that the one Polish chronicle account familiar to her deals solely with Torun, a small town on the Vistula, and is in any case quite useless, being copied word for word from a French source. Worse still, her brief passage is not footnoted or sourced, making it impossible to tell – without engaging in a major act of historiographical archaeology – how she was able to conclude that only northern Poland was visited by the Black Death. The citations for the rest of the article are all to secondary sources, though – this is a survey article, not a paper based on any primary source research.

All in all, it’s staggering that this short, casually-composed passage – and the map that she drew based on it – has had so massive an impact on plague studies. And it is, to put it as politely as I can, more than a little bit unfortunate that it unleashed a myth that’s only getting ever more entrenched with every general survey of Polish history that’s published, and every bit of internet clickbait written about the Black Death.


This has necessarily been a long post, so a tl;dr seems sensible. I conclude:

• There is currently no detailed, accurate demographic data for Poland in the period up to and after the Black Death that would allow us to extrapolate the number of deaths caused by the plague in this region, even very approximately, with confidence. Data for prices and wages from one major Polish city suggest an impact similar to that experienced in western Europe.

• Poland was not sufficiently isolated from the rest of Europe for isolation to explain its apparent “escape” from plague. And it seems to be the case that it experienced a significant number of deaths, though – perhaps as a consequence of its lower population densities – possibly fewer, in proportion, than more populous states did.

• However, the difficulty of assessing the impact of the Black Death in the kingdom is increased by its integration into the eastern European trading economy of the 14th century. The flow of people and cash eastwards from Germany was sufficiently significant to obscure and confuse any attempt to measure the impact of the plague in Poland, both economically and demographically.

• English-language studies draw on only a couple of contemporary chronicle accounts from the region, not enough to base any proper sort of study on – and these suggest that the impact of the Black Death in Poland was, if anything, at least as catastrophic as it was elsewhere in Europe. More detailed local records may be largely or wholly lacking; if they exist, they have not been the subject of studies that have had an impact in the English-speaking world.

• Nonetheless, no peer-reviewed books or papers written by academic historians, at least in western European languages, suppose there was anything extraordinary about the passage of the plague through Poland.

More modern studies generally suppose that the country escaped relatively lightly in 1347-51, though they acknowledge that it was not unaffected by the plague, nor was it necessarily invulnerable to later outbreaks of epidemic. Since I have not yet come across any such work that cites more detailed surveys of the impact of the Black Death in the region, it may well be that even the writers of these studies are basing their ideas about Poland and the plague on Carpentier and her work.

The Encyclopedia Britannica’s current map of the spread of the Black Death still follows Carpentier’s. Click to view in larger resolution.

• Mention of the actions of Casimir the Great in blocking the borders of Poland, or of the idea that the reluctance of Poles to kill cats helped to retard the spread of disease there, exists only in popular books and on the internet; such suggestions only began to appear after 1995. They appear not to have been made by specialists in Polish history, nor does there seem to be any evidence at all that they are true. It seems more likely they are post-hoc rationalisations, brainwaves dreamed up to explain Poland’s supposed escape from the ravages of the Black Death.

• The idea that areas of Europe (including Poland and Milan) “escaped” or almost escaped the impact of the epidemic can be traced to incautious reading and recopying of a map showing preliminary conclusions only regarding the spread of the disease first published by Carpentier in 1962, and widely republished in English-language popular works from 1969.


[General note: there is a large void in studies of medieval Poland. The literature is relatively abundant up to c.1250 and after the Union of Krewo in 1385, but I have struggled to find a single volume written in the past 50 years, in any language, devoted to the history of Poland after 1250 and before its union with Lithuania, much less a study of the impact of the Black Death there. Benedictow – who devotes a chapter to the question of “Did Some Countries or Regions Escape?” – acknowledges that the dearth of studies of Poland makes it difficult to draw conclusions about the impact of the epidemic there.]

Hollee Abbee, “Cats and the Black Plague,” Owlcation 4 February 2010, accessed 29 October 2017; Wilhelm Abel, Agrarkrisen und Agrarkonjunktur: Eine Geschichte der Land- und Ernährungswirtschaft Mitteleuropas seit dem hohen Mittelalter (1966); Perry Anderson, Lineages of the Absolutist State (1974); Ole J Benedictow, The Black Death, 1346–1353: The Complete History (2004); Henry Buckle, Miscellaneous and Posthumous Works (1872); Norman Cantor, The Black Death and the World It Made (2001); Élisabeth Carpentier, “Autour de la Peste Noire: Famines et épidémies dans l’histoire du XIVe siècle,” Annales, XVII (1962); Alice Creviston, “Economic, social and geographical explanations of how Poland avoided the Black Death,” Rutgers undergraduate paper 2015; Norman Davies, God’s Playground: A History of Poland (1982); George Deaux, The Black Death, 1347 (1969); JM Doll et al, “Cat transmitted fatal pneumonic plague in a person who travelled from Colorado to Arizona.” American Journal of Tropical Medicine & Hygiene 51 (1994); Christopher Dyer, ‘Rural Europe.’ In The New Cambridge Medieval History: Volume 7, c.1415-c.1500 (1998); Robert Frost, The Oxford History of Poland-Lithuania I: The Making of the Polish-Lithuanian Union, 1385-1569 (2015); Aleksander Gieysztor, ‘The Kingdom of Poland and the Grand Duchy of Lithuania, 1370-1506.’ In The New Cambridge Medieval History: Volume 7, c.1415-c.1500 (1998); Robert S. Gottfried, The Black Death: Natural and Human Disaster in Medieval Europe (1983); Frantisek Graus, “Autour de la peste noire au XIVe siècle en Bohème,” Annales ESC (1963); J. F. C. 
Hecker, Der Schwarze Tod im Vierzehnten Jahrhundert (1832); Fred Hoyle and Chandra Wickramasinghe, Diseases from Space (1979); AF Kaufmann et al, “Public health implications of plague in domestic cats.” Journal of the American Veterinary Medical Association 179 (1981); Alan Kors and Edward Peters, Witchcraft in Europe, 400-1700: A Documentary History (2001); Dionysius Lardner, The Cabinet Cyclopædia… History, Poland (1831); Jerzy Lukowski & Hubert Zawadzki, A Concise History of Poland (2005); David C. Mengel, “A plague on Bohemia?” Past & Present (2011); Julian Pelc, Ceny w Krakowie w latach 1369-1600 (1935); National Polish Committee of America, The Polish Encyclopædia (1921); Georg Sticker, Abhandlungen aus der Seuchengeschichte und Seuchenlehre I: Die Pest (1908-10); Joseph Strayer, Dictionary of the Middle Ages (1982); Helen Taylor [ed.], The Miscellaneous and Posthumous Works of Henry Thomas Buckle (1872); Bruce Weiniger et al, “Human bubonic plague transmitted by a domestic cat scratch,” Journal of the American Medical Association (1984); Adam Zamoyski, Poland: A History (2009); Philip Ziegler, The Black Death (1969)

 Q: What is the truth about “getting shanghaied”?

Getting shanghaied – a popular view.

A: There’s no doubt that San Francisco earned its reputation as a place where unwary or unlucky sailors risked being pressed into service on board ships departing for long voyages across the Pacific. Forced labour of this sort was relatively commonplace in the years roughly from the 1840s until about 1915; certainly there must have been many thousands of cases in this period, and an entire group of procurers, known as “crimps,” sprang up on the Barbary Coast to supply this need.

There were a couple of reasons why shanghaiing was a relatively common practice and why it was specifically associated with San Francisco. Merchant seamen were itinerants in this period. Rather than serving as part of one ship’s crew for relatively lengthy periods that covered several different voyages and deployments, as was the case in the navy, they signed up for only single voyages and had the option of leaving a ship at the end of that voyage if they were not happy with the master, the conditions, the pay, or indeed the next planned port of call. They might also choose to leave a ship simply because they needed paid employment, since they were paid only while they were actually at sea; a ship that stayed in port for a few weeks or months, awaiting her next cargo, was costing these men money. Thus, while a competent ship’s captain would do everything in his power to retain a core of trusted, skilled men around him, he would also typically lose some members of his crew, and hence need to recruit ordinary seamen every time he was in port. In the 19th century, these men were supplied via a largely informal system focused on boarding houses along the waterfront. Sailors looking for work could find it by applying to the well-connected men and women who ran the major seamen’s flophouses; masters looking to top up their crews passed their lists of requirements to the boarding house keepers known to supply men, who not infrequently had a reputation for providing these bodies by less than above-board means. A bad master, or a poor or unsafe ship, would always struggle to attract sufficient men, and it was in cases such as these that the crimps turned to illegal methods of forcing unwilling recruits on board.

San Francisco in the gold rush era, at a time when sailors would regularly jump ship to join the thousands heading for the goldfields.

The problem was particularly associated with San Francisco both because it was the main US Pacific port, and so a starting point for many very long trans-oceanic voyages of the sort no man would want to undertake on a “hell-ship,” and because at the very start of the period, in 1849, the California gold rush created such hysteria that seamen commonly deserted their ships in large numbers to join the rush to the goldfields, creating a major manning problem for the ships arriving at the port that could only be solved by kidnapping unwilling crewmen from the bars and flophouses along what became known, for its lawlessness, as the “Barbary Coast.”

‘The Shanghai Chicken.’ John Devine was one of the most infamous crimps ever to operate out of San Francisco (at least according to the not-always-reliable Herbert Asbury.)

Getting the hapless men who became victims of “shanghaiing” (a phrase that dates to 1872) onto ships was one of the riskiest parts of the job, and a considerable folklore has sprung up around it. This lore incorporates not only trick chairs positioned over trap doors (an idea that can be traced back at least as far as the legend of Sweeney Todd, the “demon barber of Fleet Street”, which in turn has its origins in the Paris of the early 17th century), but also secret tunnels running down to the shore. And certainly rendering these men insensible, through drink or drugs, was a technique occasionally used by crimps such as the infamous ‘Scar-Face’ Johnson, Michael Connor – a terror of the 1880s – Paddy West and ‘The Shanghai Chicken,’ John Devine, the last of whom went to the gallows for murder in 1873.

Hiram P. Bailey – who quite literally wrote the book (albeit a semi-fictional one) on being shanghaied out of ‘Frisco in the ’90s – spoke of being slipped Mickey Finns in a seafront bar and waking up on board a hell-ship that he calls the *Washington*, notorious for her vicious master, which was “putting to sea with two clergymen, three bar-tenders, four agricultural labourers – all shanghaied – among a more or less nautical crew of thirty men.” Davidson describes the “standard concoction” served by crimps to men they planned to shanghai as a mix of “whiskey, brandy, gin, and opium,” capable of knocking a man out for days; this was the “Miss Piggott Special,” so named after a Barbary Coast landlady rumoured to serve the drink to unwary customers. A notice in the *California Police Gazette,* similarly, warned of “strychnine whiskey,” served in San Francisco’s South of Market bars, which had an almost identical effect.

Shanghaiing occurred worldwide, of course – not just in San Francisco. A typical case from Rio de Janeiro involved a Swedish sailor, Gustav Johnson, who had gone ashore from his ship to purchase medicine:

As he was leaving the chandler’s shop he was set upon by four negroes, directed by a short, thick-set white man. They “knocked him down and bound him, carried him to a wharf, threw him into a small boat [and] rowed him” back to their barque, the Canvas-Down. On the Filipino island of Cebu, Johnson tried to go ashore, but his captain prevented him from doing so. He protested. The captain took him to the British consulate and had him locked up until departure, then audaciously deducted this cost from Johnson’s pay. The Swede secured his release in New York and used that city’s courts to successfully sue the captain for $150 in damages.

Aerial view of the Chicago River district in 1898. Click to view in higher resolution.

We get some clues here as to the interconnectedness of the elements of the shanghai trade, which required collaboration between masters, crimps, and often the authorities. An especially blatant example of the latter comes from the Great Lakes, where in the 1880s much of the Chicago trade was in the hands of a shipping agent called “Big Jack” McQuade, whose position was immeasurably strengthened by his connections to the city’s infamously corrupt municipal government:

>A few days before Christmas 1888… when the 1,000 ton schooner C.C. Barnes broke free of her icy imprisonment in the Chicago River with the help of a tug, she needed extra hands before getting underway to Buffalo with a cargo of 38,000 bushels of grain. McQuade headed to the Sans Souci Bar and told its owner, Olaf the Swede, to give him six men. Olaf protested that he could not find mariners right before Christmas. McQuade threatened to arrange for the rescinding of the saloon’s license. That night, Olaf rolled up with a wagon carrying six unconscious men.

This said, there is absolutely no credible evidence for the use of trapdoors to secure victims, either in San Francisco or anywhere else; ditto for secret tunnels, in San Francisco at least (there is some evidence they were used in Portland). Bill Pickelhaupt, author of one of the key studies of crimping and shanghaiing, studied oral and written accounts for years without coming upon a verifiable case, and there are a couple of good reasons why this should be so: first, the practice would have risked injury to a man who needed to be able-bodied to be of much use to his new employer, and, second, the law required sailors to sign ship’s articles, something there was no guarantee a man who had been kidnapped or forced aboard would do. It was far easier to target men who, while they might not want to undertake the voyage, grudgingly accepted they had little alternative but to do so.

Boarding houses in San Francisco’s Steuart Street, c.1912

Mark Strecker comments that while crimps

>used drink, drugs, coercion, and trickery to shanghai men, most preferred to employ financial blackmail. It worked like this: seamen chronically lacked money, so a crimp extended them far more credit than they could possibly repay. The crimp then offered those indebted to him the choices of debtors’ prison – something many American states and territories had – or taking a berth on a ship of the crimp’s choosing. Most did the latter. The crimp took what they owed him from their advance… The loss of this portion of earnings caused the affected to have short wages at the voyage’s end. To make up the difference, they had to get credit from another boardinghouse master, starting the process over.

It would be remiss of me not to mention that there is some evidence that crimps *did* make use of trapdoors at one key point in the shanghaiing process – Davidson mentions “dead-falls” in wharf-front warehouses, “through which shanghaied sailors were shoved into rowboats waiting below.” But Strecker’s book also gives us an interesting example of the consequences of employing the sort of coercive methods you are interested in:

>One Sunday night a seaman named Patrick Grant went carousing along [the Shanghai] waterfront. The next morning he awoke on a strange ship as the newest member of her crew. Feeling ill beyond the effects of a hangover, he informed the second mate he could not work because he had a nasty fever. During the four days [the voyage] lasted he received not a morsel of food. Somehow, his plight caught the attention of Mr Brown, a British vice-consul, who charged Grant’s master, Captain Murphy, with illegally bringing a man on board without first having him sign the ship’s articles, a violation of the Merchant Shipping Act of 1854. For his defense, Murphy produced articles with Grant’s signature, saying he had put it there himself as Grant could not write. Grant proved his literacy and went free. The judge fined the captain a mere £10 for his violation as he felt Murphy had meant no harm.

Shanghaiing in San Francisco: a 2007 collage.

Shanghaiing continued to flourish until the very early 20th century, in part because, as Davidson points out, “as members of a migratory class unable to vote for protective legislation, sailors formed a classic lost community in a democracy.” It took organisation and unionisation to force the legislative changes that outlawed crimping and made shanghaiing a thing of the past. The Dingley Act (1884), a Federal law that prohibited payment of advance wages by masters to crimps, was one significant blow to the shanghaiing system, as was the formation of the Seamen’s Union in 1885 and the agitation that led to the Maguire Act (1895) and the outlawing of the crimps’ other major stand-by, the attachment of sailors’ clothing. This was followed by the White Act (1898), which limited the amount a sailor could be required to pay to any “original creditor”.

Between them, these three acts – and the end of the sailing ship era, which had required a large number of semi-skilled crewmen used largely as muscle – pretty much finished off the crimps of San Francisco – but that does not mean that sailors did not continue to risk shanghaiing elsewhere in the world for many years to come.


Hiram P. Bailey, Shanghaied Out of ‘Frisco in the Nineties (1925)

Lance S. Davidson, “Shanghaied: the systematic kidnapping of sailors in early San Francisco.” California History 64 (1985)

Bill Pickelhaupt, Shanghaied in San Francisco: Politics and Personalities (1996)

Mark Strecker, Shanghaiing Sailors: A Maritime History of Forced Labor, 1849 to 1915 (2014)

The New York draft riots of July 1863 protested the introduction of conscription during the American Civil War – though the main targets were Manhattan’s blacks, who, it was feared, would take the jobs of working-class white men sent to war. The disturbances lasted for four days, killed well over 100 people, and constituted the single biggest civil disturbance in the 19th-century US. But did the staff of the New York Times protect themselves and their paper from the mob with the help of new-fangled machine guns?

Q: During the New York Draft Riots (1863), supposedly the New York Times defended their office from the mob with 2 Gatling guns. Where did they obtain these guns and ammunition and how did they turn away the mob?

Did the Times staff kill members of the mob? I’ve attempted to find more information about this incident but have been unable to. The NYT makes the claim itself on this site.

A: The most authoritative version of this story – which itself is based on no more than late recollection and “tradition” – suggests that none of the guns were fired. The Gatling was a complicated bit of machinery even though it was designed to be used with minimal training, and it would have been hard for ordinary members of the paper’s staff to use it effectively in any case.

From the History of the New York Times by Elmer Davis (1921):

Warned by the misfortune of The Tribune, which had actually been attacked by rioters and saved only by the opportune arrival of a detachment of the overworked police, The Times fortified itself.

The Gatling gun had lately been invented and offered to the War Department, though it was not used either widely or successfully in the war. Two specimens of the gun had been obtained by The Times, according to tradition through the President’s friendship for [Henry J.] Raymond [one of the partners who owned the paper], and were mounted just inside the business office under the command of Leonard W. Jerome. If the mob had not been more interested in attacking those who were unable to defend themselves, it would have found some trouble waiting for it at the Times office, for the entire staff had been armed with rifles; and there was a third Gatling gun on the roof mounted so that it could sweep the streets in any direction. It is only a malicious invention of jealous rivals that this gun was kept trained on the window of Horace Greeley’s office in the near-by Tribune Building.

Numerous accounts state that Leonard Jerome, a partner in the Times and grandfather of Winston Churchill, personally manned one of the Times’s machine guns during the riots.

The question of where these weapons may have come from, if they ever existed, is an interesting one. The Gatling gun had not been accepted or purchased by the US Army at this time. So – while I have read numerous histories of New York and of the riots which state unequivocally that the guns were obtained from the army, one writer even specifying that they were “appropriated from one of New York’s armouries” – it would seem that the only way the Times could have obtained them, with or without the assistance of Abraham Lincoln, is if it got them from the Gatling company itself.

This is more than a little problematic, since Gatling was based in Indianapolis, with manufacturing located in Cincinnati. At the time the Draft Riots took place, in the summer of 1863, it would appear there may have been as few as 12 actual Gatling guns in existence, and there had only been one formal trial of the weapon; another took place at the Washington Navy Yard that summer. Certainly they were not available, and had not been sold, in any quantity.

Of course it’s possible that the manufacturer had a few examples in a sales room somewhere in New York, but I’ve never read any such claim, and it’s not clear why it would have done so. Although individual army officers did purchase samples of the gun for their own use later in the war, the sole customer for the gun at this point in time was the Army; by the summer of 1863, Major General Horatio G. Wright, commander of the Department of the Ohio, was the only officer to have taken an active interest in the weapon, and he had gone to Cincinnati to witness trials. The first sales to foreign powers did not occur till 1867, when Russia bought the weapon.

The New York Times’s five storey building some time after the war. The paper’s history states that one Gatling gun was mounted on the roof to offer clear fields of fire over approach roads.

The rioting lasted for four days, but the current New York Times site you link to suggests the mob threatened its building on the first day, 13 July. That would rule out any attempt to source guns from any distance. Even if that account is wrong, is it really plausible that the NYT not only anticipated the unprecedented duration of the rioting, but also telegraphed to Ohio for help and received a rail shipment while the disorder was actually in progress? And then there’s the multiplying number of guns involved; Davis’s account raises the number from two to three, without explanation, in the space of a few lines.

As for the idea that the guns were provided courtesy of Lincoln’s personal intervention, this seems highly implausible; we know that Lincoln had tried to press another early machine gun, the Ager “coffee mill”, onto his generals in 1861, but when Gatling approached the President direct in an attempt to sell his invention in 1864, he was ignored – which does not suggest that Lincoln was either familiar with Gatling’s gun, or much of an enthusiast for it, one year after the Draft Riots occurred.

For these reasons – and since, despite the numerous anecdotal accounts of the story that have appeared in print since 1921, I have not been able to locate any contemporary or even near-contemporary source for the story – I have significant doubts as to whether the incident ever actually occurred.

The Model 1862 Gatling gun – the inventor’s original design – fired 150-200 bullets per minute, but used paper cartridges and was prone to jamming even in expert hands.


For exactly how far Gatling had got in selling his gun by July 1863:

David Armstrong, Bullets and Bureaucrats: The Machine Gun and the United States Army, 1861-1916 (1982)

For accounts of the NYT’s Gatling guns:

Ric Burns et al, New York: An Illustrated History (2001); Elmer Davis, History of the New York Times (1921); James M. McPherson, The Illustrated Battle Cry of Freedom: The Civil War Era (2003); John Strasbaugh, City of Sedition: The History of New York City During the Civil War (2016)

For a – very slightly – more sceptical account:

Paul Wahl & Donald Topol, The Gatling Gun (1965)

For the book that explicitly claims the guns were “appropriated from one of New York’s armouries”:

Clint Johnson, A Vast and Fiendish Plot: The Confederate Attack on New York City (2010)


Q: Is it true that Henry VIII feared being attacked so much he had himself bricked into his bedroom every night?

I just heard on Horrible Histories that towards the end of Henry VIII’s life, he would be bricked into his bedroom every night then broken out the following morning. I’ve never heard this before and it sounds really implausible…

A: There is certainly no truth to the story, which is not mentioned by any of Henry’s major biographers. Furthermore, even had such an action been suggested or required, for some unfathomable reason, the actual process would have been impracticable in the sixteenth century, well before the introduction of quick-drying mortar and cement.

The story became “live” again in September 2017, presumably owing to the transmission of an episode from series 7 of the show entitled “Ruthless Rulers”, the programme notes for which read:

Get ready for some serious bad behaviour, as Horrible Histories brings you the most ruthless rulers of all time. Henry VIII is so demanding he has a brick wall built at his bedroom door every night, those vicious Vikings find that sorry seems to be the hardest word, hold your nose in Versailles because Louis XIV hasn’t got any loos, and rock out with the Warlords from Hell – take it away Genghis and Vlad.


Greg Jenner: “We are well aware many facts are possibly myths.”

This in turn has prompted others to question the rumour. As a result, there was an interesting exchange on Twitter between a couple of sceptics and Greg Jenner, who teaches an MA seminar in public history at York and is an historical consultant to the “Horrible Histories” TV show – and who, somewhat incredibly, claims to be “both a passionate defender and careful critic of the way in which the past is exploited by our society for entertainment.”

While Twitter is not normally a good source, it’s worth giving the exchange here as it does illuminate the sort of standards a children’s TV history show produced for the BBC feels it needs to stick to these days – the short answer being that the standards are quite unbelievably low.

The question was first posed on History Stack Exchange:

I was just watching some TV with my kids, and we were enjoying the (normally reliable) Horrible Histories TV show.

It claimed that Henry VIII had a long series of bedtime preparations to ensure his nightly slumber was safe. Fair enough. The final step, though, was to brick up his doorway each and every night, taking the wall down in the morning.

This seems pretty crazy. If Henry could get out in the morning, intruders would surely have been able to get in fairly easily. If the wall was mortared, it would take too long to dry. Not to mention the level of skill involved by artisans to do the brickwork.

So I searched on it and found nothing except a bunch of other amateur historians also ridiculing the idea.

I posted about this on social media and, to my surprise, the historical adviser to the series replied to say he’d heard the story from the owners of Allington Castle. That seems a bit of a flimsy basis to me. And even with the extra information I couldn’t track down any evidence.

Is there any truth to this story? Is it as unlikely as it sounds?

This query was then forwarded to Jenner by an archaeologist called Iain McCulloch:

One for Greg Jenner I think.

And Jenner responded:

Haha thanks, it’s one of those half dubious stories which circulate and we thought it would be fun to run with it.

A Twitter user called Matt Thrower then asked:

Sorry it’s me again, wondering if I can find any evidence 🙂

To which Jenner replied:

Feel free to ask around, we are well aware many facts are possibly myths but until they are disproved they remain usable on a comedy show.


these stories come from somewhere, but where… and why


I suppose a TV show is free to set its own rules in this regard, and though I find it regrettable that such a popular series plays so fast and loose with the facts, I can at least see some spin-off benefit in the form of more kids finding history fascinating.


Henry VIII: what does the brick wall story suggest about his reign?

But I find it unfortunate, in fact unforgivable, that anyone who calls himself an historian could take such a cavalier approach to evidence and sources. After all, if HH requires that someone “disprove the myth” in order that some check be placed on its content, that’s something its “historical consultant” ought to be responsible for, and could quite easily have done. Clearly he didn’t feel it necessary to try.

In this context, it’s worth mentioning that a response to the initial query was posted on History Stack Exchange, where a user called Patricia Shanahan observed:

In this case, I think the absence of evidence does very, very strongly suggest it didn’t happen. For example, the Eltham Ordinance lists many types of royal household workers and their duties, but never mentions the Privy Chamber Bricklayer, who would have had to be close to the King twice a day. It discusses the handling of left over torches and wax, but not the handling of the bricks for the King’s chamber. It specifies when the pages and squires have to get up in order to be ready to attend the King at eight in the morning, without saying what time the bricklayer should dismantle the wall.

It strikes me that this is a useful way of approaching the problem, although hardly a definitive one – the Eltham Ordinance dates to 1526, which is more than two decades before Henry’s death.


Allington Castle: source of the brick wall rumour?

This still leaves unresolved the question of where the rumour originated. We have Jenner’s claim that he heard it from “the owners of Allington Castle” – a privately owned castle in Kent that does have associations with Henry VIII. This is quite plausible, as the castle’s website reveals that several series of Horrible Histories have been shot at the location. The current owners are Sir Robert Worcester, the founder of the MORI polling organisation, and his wife. Other than that, the earliest account I have been able to trace dates to November 2013, when the question was debated on the forum of a website called The Anne Boleyn Files. The original questioner sourced it as follows – adding an alternative supposed reason for the practice:

It cropped up during a conversation I had with a friend some time ago. She said to me…

“Did you know that Henry (towards the end of his life) used to be bricked up inside his lodgings every night, and the bricks taken down every morning, due to his fear of getting sick”

I know he had a fear of getting poorly… but did he take it this far!?

That’s pretty vague. But perhaps further investigation will reveal whether the rumour can be traced back further than that.

One closing comment: I think there’s really no excuse for HH not to make an effort to look into stories it knows are dubious.

I can only imagine they don’t because they don’t want to rule out stuff they think would make good TV. But however entertaining, this is also the stuff that sticks, and it has an impact on perceptions more generally.

In this case, if kids are being told Henry was frightened of being killed in his bed, that affects how they will think about him as a ruler, and think about how dangerous it was to be a king in Tudor England – all in ways that won’t be helpful if they come to study the period at A level or at university. So it’s not just a harmless bit of fun.



Patagonian giants (right) illustrated in a late 18th century German atlas

Q: When Ferdinand Magellan circumnavigated the Earth, he came across supposed “Giants of Patagonia”. Who were these giants and why were they so tall?



A hotel detective depicted in the old Hal Roach film “Barnum & Ringling, Inc.,” dating to 1928.

Q: Whatever happened to the hotel detective?

I read a lot of old pulp fiction, and hotel detectives are in a lot of the stories. But I also travel a lot and have never noticed one.

A: Hotel detectives were indeed once ubiquitous, and they are now much less commonly encountered. They were men who did a very distinct type of job. They were responsible for ensuring that the hotel that employed them was safe and secure – memoirs describe regular rounds of the building and endless testing of locks – but they were, for the most part, far more concerned with protecting their employer than their guests. Their job certainly did involve preventing crimes from taking place on the premises, and solving, where possible, those that did occur – but mostly this was done to ensure that the business they worked for was not being ripped off. Only occasionally, and at the very best hotels, would their work extend to more customer-focused activities, such as offering protection to distinguished guests.

A large proportion of these “house officers” were former policemen who took hotel posts after retiring from the force. Such men made ideal employees. The skills required of a hotel detective included a good understanding of human nature, a talent for conflict resolution, and a good working knowledge of the local criminal element – all things that were readily picked up in the course of a career in law enforcement. Ample experience of dealing with crooks and crime was important not least because, in taking up a house position, a former policeman forfeited a good deal of the powers he’d had as a cop. “The hotel detective is the world’s most fenced-in man,” the journalist Frederick Laurens observed in 1946. “He has no badge, can carry no weapon, has no authority to push people around, as have the regular police, and must either rely on tact or threats and ugly looks to get his way.”

Hotel bellhops could be enlisted to provide important information to the house detective.

Experience was the most important attribute of a hotel detective, since, for the most part, such men had three main functions to perform. The first was to protect the hotel’s reputation and prevent it from unwittingly breaking any laws, which, especially in earlier periods, often involved preventing an establishment from acquiring a reputation as the sort of place that allowed unmarried couples to have sex on the premises – very often an offence at the time under laws relating to “unlawful cohabitation”. A 1979 article in Texas Monthly notes that in earlier times a house officer would, as part of his routine, challenge male guests with the line “Is there a woman in your room?” Dev Collans, in his pulpy exposé I Was A House Detective (1954), describes enlisting bellboys to report on “couples who wouldn’t open their suitcases while the bellboy was still in the room; married couples didn’t hesitate to. A man in sleek clothes with a woman whose shoes were run down at the heels is another giveaway.”

The detective’s second major task was to screen new employees and know as much as possible about those who made it onto the staff, in order to prevent them from robbing both the hotel (of food, silverware, bedlinen and pretty much everything else) and the guests. In this respect, a New York Times article dating to 1902 recounted how “Detective Sergeant ‘Sam’ Davis, who has for twenty years been responsible to Police Headquarters for all the hotel detective work between Fourteenth and Fifty-ninth Streets” in Manhattan, fingered “one chef, three cooks, two porters, half a dozen chambermaids, and a woman in charge of the linen room” as thieves in a single hotel. As soon as the 13 malefactors had been fired, “the robberies stopped [and] the proprietor found his receipts growing larger.”

The Adolphus Hotel, an old-style establishment in an unfashionable area of Dallas, continued to employ a team of three hotel detectives into the late 1970s.

Thirdly, a house officer would be expected to keep criminal elements from causing trouble in his hotel. This involved recognising known crooks and prostitutes – another reason why retired police officers from the district were highly favoured as hotel detectives. A detective might, for example, agree on a signal with the desk clerk to warn of a known criminal attempting to check in; since the crook would be highly likely to leave without paying his bill, he would be told the establishment was full and there were no vacancies.

Of course, an especially large proportion of their work involved spotting when guests were taking prostitutes into their rooms, and either stopping them or, more usually – since active intervention embarrassed and angered guests, and tended to cause scenes – logging the girls’ locations, and dealing later with any problems that occurred as a result of their visits to the hotel. These only rarely had anything to do with sex; as Charley Coyle, a house detective at the Adolphus Hotel in Dallas, noted in 1979, “These girls aren’t there just to have sex and get paid. It would be different if they were. Not so much trouble for us. They’re there to steal.” According to Gregory Curtis of Texas Monthly, keeping track of prostitutes was by far the most time-consuming aspect of the house detective’s job: “Every hotel detective I talked with, from those in the plainest hotels to those in the fanciest, said prostitution was still their main problem.” One reason for this was that sex workers favoured working in hotels, not least because of the advantage that a prostitute had over her clients when they were in a public space. Lou Speer of the Adolphus explained that

A clever working girl can get the money she’s been promised, then clean out her client’s wallet and possibly his luggage, and escape from the room with her virtue, at least the sexual part of it, intact.

No, they don’t usually carry guns or nothing. They don’t really have to. A lot of times they’ll get out of the rooms just by saying they’re going down the hall for some ice to put in their drinks…. Usually what they do is make sure the mark takes his clothes off first. Hell, he’s got his own ideas about what she’s there for, so all he has to do is just heat him up a little bit, and he’s not going to think twice about stripping down. Then, with him naked as a jaybird, she can grab his wallet and run out the door and there’s no way he’s going to come running after her.

A prostitute, her victim, and the stereotypical hotel detective – posed photo from Texas Monthly, 1979.

Interestingly – in Speer’s experience, at least – the hotel detective’s main role in cases such as this was not to catch the girl, but to prevent the guest from attempting to bring a claim of theft against the staff. Few men would admit to bringing a prostitute into the establishment, much less to being stupid enough to allow themselves to be robbed by her, but many would attempt to lodge a complaint that their wallet had been stolen while they were in the hotel. In such circumstances, Speer and his men had recourse to their “hooker reports” – a log they kept of single guests who entered the premises with women on their arms.

“If a guest comes down in the morning and says his wallet was stolen, the first thing I do is look up my hooker reports to see if he had a girl up there. The guest is trying to say that the hotel is responsible for the loss. You ought to see the expression on some of their faces when I say, ‘But what about the girl you took up to your room at twelve-eleven last night?'”

As to why hotel detectives are now a dying breed: two key developments have combined to do away with them. One is changes in morals; no modern hotel is likely to acquire a dubious reputation simply because it allows clearly unmarried couples to share a room, and it’s no longer against the law for guests of this sort to “unlawfully cohabit” – so house detectives are no longer required to police the guests. The second is the ubiquity of closed-circuit television. When it comes to deterring and detecting theft, it’s cheaper and probably more effective to outfit a hotel with multiple CCTV cameras than it is to pay a roster of former detectives to work often unsociable hours to try to solve such crimes after they have taken place. The problems of staff theft and of prostitutes stealing from guest rooms can both fairly readily be investigated now – and evidence handed over to the local police – without recourse to a house detective.


Dev Collans’s pulpy 1954 exposé of the house detective’s world.

“Hotel detectives and their experiences”, New York Times, 1 June 1902

Dev Collans, I Was a House Detective (1954)

Gregory Curtis, “Hotel detective,” Texas Monthly, February 1979

Norman Hayner, Hotel Life (1936)

Frederick V. Laurens, “Hotel detective – 1946” in Best: the Popular Digest

Frank O’Sullivan & Walter Wright, Practical Instruction in Police Work and Detective Science (1940)

Horace Herbert Smith, Crooks of the Waldorf: Being the Story of Joe Smith, Master Detective (1930)


Q: At any point between the end of WWI and the end of WWII was there ever a rise of supernatural beliefs in Japan?

A: In fact there was – though the “interwar” period has much less meaning in Japan than it does in the west, and the spike in interest and belief can be more accurately dated to c.1910-35. This rise and fall had more to do with the history of Japan’s engagement with western ideas than it did with the impact of the two World Wars.

I would point you towards four figures in particular who played a key part in this spike, and who influenced the way in which Japan thought about the subjects you are interested in: Asano Wasaburō (1874-1937), a teacher at the Naval War College who imported a version of western spiritualism into Japan; Deguchi Nao (1837-1918), a trance medium who claimed to have visions of the deity Ushitora-no-Konjin; her son-in-law Deguchi Onisaburō (1871-1946), a flamboyant Shintoist-spiritualist who blended existing folk belief with new concepts of divination, exorcism and millenarianism to create a new religion, Oomoto, which eventually took on an unusual anti-state flavour that resulted in its persecution in the 1920s; and Inoue Enryo (1858-1919).

To deal with Enryo first: he was an academic philosopher who attempted to fuse Buddhism with western science and founded first the Enigma Research Society and then, in 1886, the more successful Fushigi Kenkyukai, or “Society for Research on the Mysterious” – at almost the same time as its rough equivalent, the Society for Psychical Research, was founded in the UK (1882). He became popularly known as “Dr Ghost” and was the creator of what the Japanese call yokaigaku, literally “monsterology” but more usually translated as “mystery studies”.

Enryo was an active Buddhist (in fact, at one point in his life, a Buddhist priest) and as such found it easier than many of his fellow academics to accept the reality of some psychical phenomena. One of his key concepts was the distinction between the “false mystery” and the “true mystery”, which he held to be central to understanding superstitious belief. His society studied “Ghosts, foxes and tanukis [狐狸], strange dreams, reincarnations, coincidences [偶合], prophecies [予言], monsters [怪物], witchcraft [幻術], insanity, and so on.”

There’s an English paper on “Inoue Enryo’s Mystery Studies” in the journal International Inoue Enryo Research, 2 (2014), 119-55, and a full bibliography of western language materials about him can be seen here.

The other three were significantly more influential than Enryo, and they have been quite extensively studied. See Nancy Stalker, Prophet Motive: Deguchi Onisaburo, Oomoto and the Rise of New Religion in Imperial Japan; Emily Groszos Ooms, Women and Millenarian Protest in Meiji Japan: Deguchi Nao and Omotokyo; Birgit Staemmler, “The chinkon kishin: divine help in times of national crisis,” in Kubota et al (eds), Religion and National Identity in the Japanese Context; and Kenta Kasai, “Theosophy and related movements in Japan,” in Prohl & Nelson (eds), Handbook of Contemporary Japanese Religions.

In addition, Helen Hardacre, the great authority on Shintoism, has a useful introductory chapter on their movement in Sharon A. Minichello (ed.), Japan’s Competing Modernities: Issues in Culture and Democracy, 1900-1930, entitled “Asano Wasaburō and Japanese Spiritualism in Early Twentieth-Century Japan.” The paper is a good English-language introduction that explains how western spiritualist concepts entered Japan and were adapted to relate to pre-existing religious, nationalist and especially shamanic concepts – bringing the new movement inevitably into conflict with state-sponsored Shintoism. Hardacre concedes that throughout this period such beliefs were marginal to the mainstream of Japanese culture. However,

nineteenth-century spiritualism from the West was a subject of great interest in early twentieth-century Japan. Situated on a border between mass culture and the more rarefied pursuits of Westernized, bourgeois salon culture, Japanese spiritualism represented, in part, the importation of Western cultural fads for seances, telekinesis, clairvoyance, and hypnosis. As such, it was [initially] romantic and escapist in a larger cultural context of empire, industrialization, and the expansion of state powers.

Trouble really started when Asano’s ideas became fused with the apocalyptic preaching of Deguchi Onisaburō, who caused considerable official alarm by attempting to spread his ideas in military and university circles. Hardacre, Ooms and Stalker all discuss the ways in which western-influenced spiritualism, in the form of the Omotokyo movement founded by Deguchi Nao but later run by Asano, was heavily suppressed in Japan in two campaigns dating to 1921 and 1935.

Finally, a quick summary of other English-language resources:

  • Michael Dylan Foster’s Pandemonium and Parade: Japanese Monsters and the Culture of Yokai looks at the evolution of the Japanese concept of ‘monster’ – specifically the variety known as yôkai – from 1700 to 2000. It covers the way in which ghost and monster stories evolved over this period, but it’s not a chronological study.
  • Noriko T. Reider’s Japanese Demon Lore: Oni from Ancient Times to the Present includes a couple of chapters on the changing conceptions of demons in Tokyo in the late 19th and early 20th centuries, with a focus on the increasing commercialisation of demons in the media.
  • Suzuki Kentaro’s paper “Divination in contemporary Japan,” Japanese Journal of Religious Studies 22 (1995), 249-66, is an analysis of a detailed survey of contemporary divination practices. Although focused on the present, it is very useful in presenting a breakdown of the types of belief in divination that exist in Japan, and as such it would be a useful jumping-off point for more detailed research in the period that interests you. Incidentally, Suzuki comments that this is “a subject upon which there is at present almost no academic research.”
  • A popular account of palmistry in Japan just before the period you are interested in is S. Culin, “Palmistry in China and Japan,” Overland and Out West 23 (1894). This one is available online from the University of Michigan library.
  • Curran et al, in Multiple Translation Communities in Contemporary Japan, mention that the vampire story became popular in Japan in the period 1915-30, as a result of the influx of translated western works of all sorts that peaked in the early 1900s. The Japanese term for vampire, kyuketsuki, was coined in 1915. It seems this new interest was literary and academic, however, rather than resulting in the appearance of actual supposed cases of vampiric activity.



Sphinx of Hatshepsut

Sphinx of Hatshepsut (ca. 1479–1458 B.C.), from the Metropolitan Museum, New York. Contrary to popular belief, some images of the female pharaoh survive – this seven-tonne example was destroyed on the orders of Hatshepsut’s successor, Tuthmosis III, and the remains hurled into a quarry, whence they were recovered and painstakingly reassembled by archaeologists.

Q: Did Ramses II try to erase Queen Hatshepsut from the record books because she was a successful ruler or because she was a woman (who depicted herself as male)?

A: At its simplest, the answer to your question is that the destruction wrought on Hatshepsut’s monuments and memory seems to have occurred precisely because she was a woman – for reasons that I will try to set out for you below.

We do need to be honest about the problem here: we have no histories, no chronicles from Hatshepsut’s time (c.1507-1457 B.C.). The evidence we have is – bar a few late king lists – archaeological, and while it can tell us something of what happened during her reign as pharaoh, and after her death, it tells us little – directly at least – about why things happened as they did: why so many examples of her cartouche were shaved down and recut in order to ascribe them to some other pharaoh, and why elsewhere “her entire figure and accompanying inscription were effaced and replaced with the image of some innocuous ritual object such as an offering table.” [Dorman] All answers are speculation; the distinction that we need to draw is that between informed and ill-informed guesswork.


Re-imaginings of Hatshepsut – some more realistic and respectful than others – are commonplace nowadays, testament to the hold the ancient Egyptian ruler still exercises over our imaginations.

But, with that said, the following is broadly agreed, by most Egyptologists, to be true: that Hatshepsut was a powerful member of the Egyptian royal family of the 18th dynasty, being the eldest daughter of one of Egypt’s greatest warrior-kings, the pharaoh Tuthmosis I (her very name means ‘Foremost of Noble Women’); that she was fortunate, in that her parents had no surviving male child, which eventually moved her close to a position of power – as was commonly the case in ancient Egypt, she was married to a close relative, her half-brother, also named Tuthmosis, the son of a high-ranking woman in the royal harem, who eventually succeeded to the throne; and that she was also, very probably, ambitious, for when her husband died, leaving her to rule as regent for his infant heir by another woman from the royal harem – her step-son and nephew, the future Tuthmosis III – she was able to manoeuvre herself (in ways that have, unfortunately, left no clear traces in the archaeological record) into a position of absolute power. Hatshepsut the king’s-woman (which is the literal translation of the ancient Egyptian word for ‘queen’ – rank in this period, even for a woman of Hatshepsut’s lineage, was entirely the product of a husband’s or a father’s status) became Hatshepsut the pharaoh, ruling alone and portraying herself in masculine terms, most famously by overseeing the production of statues that showed her sporting a full beard.

It’s worth pausing briefly to look at the reign before we consider what happened to Hatshepsut’s monuments and to her reputation after she died. One key point to make is that, while she was not actually the first woman to take absolute power in Egypt, she was the first one to do so in a time of peace; the only previous female pharaoh, Sobekneferu of the 12th dynasty (r. c.1800 B.C., at the tail end of the Middle Kingdom period), had taken power at a time of national crisis, and apparently out of necessity, there being no other senior royal males available to rule. Another is that Hatshepsut was apparently not, as she is sometimes portrayed, a ruler with a distinctively “feminine” agenda, preferring peace to war. It is true that one of the more notable achievements of her reign was a trading voyage to the land of Punt (far to the south, sometimes identified with modern Somalia), but Egypt did wage war – successfully – in Hatshepsut’s time. This, together with her use of standard Egyptian iconography, and her entirely conventional determination to divert vast state resources to the construction of funerary monuments for herself (her magnificent mortuary temple, which survives, is one of the most iconic tourist attractions on the Nile) tend to argue that, whatever the reason for the post-mortem destruction that partially obliterated her name, it was not because she forced through policies or ordered actions that were outrageous or reviled. She was no Akhenaten – the 18th dynasty pharaoh notorious for neglecting the old gods in favour of a quasi-monotheistic new cult focused on the sun god, Aten, whose name was also wiped from Egyptian records after his death.

It is also very helpful to look at what we know of Hatshepsut’s relationship with her stepson, Tuthmosis III, since it was in his reign that much of the destruction wrought on her monuments took place. Two points emerge most clearly here.
The first is that there is no direct evidence that Hatshepsut ever did anything to suggest that Tuthmosis was not the rightful heir to the throne. To judge from the dating of the monuments that survive, she ruled in her stepson’s stead, as regent, for at least two years before claiming power for herself; thereafter, Tuthmosis was not only allowed to live, but was actually given a solid training for taking power, being not only highly educated by the standards of the time, but also allowed to rise within the ranks of the Egyptian army until he became its commander-in-chief.

Tuthmosis I

Hatshepsut was the daughter of Tuthmosis I (ca. 1506-1493 B.C), a powerful New Kingdom pharaoh of the 18th dynasty. His remains, disturbed by tomb robbers years after his death, were concealed in a cache of royal mummies at Deir el-Bahri, above his daughter’s mortuary temple, and were recovered in 1881.

It seems inconceivable that the stepson would have been permitted a distinguished military career, and command over a powerful army, had Hatshepsut viewed him as a direct threat to her rule. Tuthmosis must have accepted – at least on some level – Hatshepsut’s right to rule, and we have no evidence that he made any attempt to seize power or prepare any sort of coup while she was still alive. Similarly, it is almost impossible to believe that a woman – whom long custom and Egyptian political philosophy alike conceived as having no divine right to rule – could have held onto power for 22 years without the active support of a large portion of the country’s elite. There are other examples in Egyptian history of inconvenient heirs meeting suspicious ends, and of elites rising up against unpopular rulers; it has to be significant that neither of these things occurred during Hatshepsut’s reign.

Several alternatives have been advanced to explain how power may have been wielded during this period. We know that Hatshepsut made an effort to stress the legitimacy she had acquired via her royal parentage, emphasising not only that she was the rightful heir to a powerful king, but also divine, as the product of her father’s union with a royal mother from the same family. In this sense, importantly, she was actually more “royal” than her half-brother and husband, who was the son of a much lower-status woman. We also know that Hatshepsut was depicted far more commonly than was her stepson, and nominal co-ruler, on monuments constructed during her regency and then reign; surveying her mortuary complex, Vanessa Davies counts 87 occurrences of Hatshepsut’s name and figure, compared to 37 of Tuthmosis III. All this suggests that her efforts to portray herself as a worthy ruler, and as a divine monarch, were successful, and perhaps this best explains why she did not feel threatened by her stepson, and why she not only allowed him his army career, but also permitted him to be represented, during her reign, as a figure of considerable power and potency. Davies concludes that he “was represented as a multi-faceted and powerful figure; thus one might infer that he actually behaved and functioned in this manner, or, at the very least, that Hatshepsut intended for him to be viewed in this light.”

So while the Egyptian state may have expected and preferred to be ruled over by a male pharaoh, it seems there was no absolute proscription on female rule; it was highly unusual, but neither blasphemous nor “impossible.” Joyce Tyldesley concludes that

Legally, there was no prohibition on a woman ruling Egypt. Although the ideal pharaoh was male – a handsome, athletic, brave, pious and wise male – it was recognised that occasionally a woman might need to act to preserve the dynastic line. When Sobeknofru ruled as king at the end of the troubled 12th Dynasty she was applauded as a national heroine. Mothers who deputised for their infant sons, and queens who substituted for husbands absent on the battlefield, were totally acceptable. What was never anticipated was that a regent would promote herself to a permanent position of power.

Yet this is not to say that Hatshepsut was not aware of the underlying weakness of her position. There are two points to make in this respect. First, let’s hear again from Tyldesley:

Morally Hatshepsut must have known that Tuthmosis was the rightful king. She had, after all, accepted him as such for the first two years of his reign. We must therefore deduce that something happened in year three to upset the status quo and to encourage her to take power. Unfortunately, Hatshepsut never apologises and never explains… Indeed, seen from her own point of view, her actions were entirely acceptable. She had not deposed her stepson, merely created an old fashioned co-regency, possibly in response to some national emergency. The co-regency, or joint reign, had been a feature of Middle Kingdom royal life, when an older king would associate himself with the more junior partner who would share the state rituals and learn his trade. As her intended successor, Tuthmosis had only to wait for his throne; no one could have foreseen that she would reign for over two decades.


Hatshepsut’s mortuary temple survives close to the Valley of the Kings, and is today one of the best-preserved New Kingdom monuments in Egypt.

It is interesting, in this context, to consider how Hatshepsut portrayed herself over the course of her reign. Early statuary from her regency period clearly depicts a woman wearing male regalia, breasts visible on a naked upper body. Later, after her coronation as pharaoh, depictions change; Hatshepsut is now portrayed as a man, with wider shoulders and no breasts. And she was buried as a man, as well, in a king’s sarcophagus. As Kara Cooney points out, this can be seen as a matter of convention, not deception; Hatshepsut never changed her – very clearly feminine – name, so it seems unlikely she was trying to pretend to be something she was not. Yet it is difficult to imagine that female rule was simply accepted without any question in the Egypt of her day; its consequences were too stark a departure from religiously-rooted norms. The problem here was one of political philosophy, not simply politics. But, as Cooney notes:

Given that the king on earth was nothing less than the human embodiment of the creator god’s potentiality, Hatshepsut must have been all too aware that her rule posed a serious existential problem: she could not populate a harem, spread her seed, and fill the royal nurseries with potential heirs; she could not claim to be the strong bull of Egypt.

Perhaps, then, it is better to see “male” images of Hatshepsut as nods to a conventional iconography that applied equally to any Egyptian ruler, of whatever sex, than it is to imagine them as admissions of serious political weakness.

So, with all this said, we can turn at last to answering the question posed: why were Hatshepsut’s images destroyed after her death, and why was her name removed from so many of the monuments she made?

It’s important, first, to recognise that the new regime was not a complete break with the past. Thuthmosis continued to employ a large proportion of the royal servants who had served his stepmother. And the desecration he ordered – which Cooney estimates accounted for “hundreds, if not thousands” of images and inscriptions – was not a campaign of attempted absolute obliteration, as the campaign against Akhenaten seems to have been. Not all of Hatshepsut’s statues were destroyed, and not all of her cartouches were hacked away; a significant number survived, not least those representing her as queen, including some that were quite prominently displayed on her tomb, which would surely have been a prime target for any Roman-style campaign of damnatio memoriae. The same is true of the desecration that seems to have occurred to the monuments of her prime supporter, her steward Senenmut – whose name was removed from only 9 of his surviving 25 statues. Cooney summarises by saying that the statues that were removed or desecrated were those in public places – the aim, therefore, may have been to “prevent people from seeing and interacting with her as king.” The archaeological record, moreover, strongly suggests that the campaign did not begin immediately on Thuthmosis’s accession – Hatshepsut was buried with all honour, for one thing, and works underway at the time of her death were completed, which can only imply that her heir ordered work on them to continue. Something happened later to change this, something that Cooney concludes was probably a shift that took place in the mind of an ageing ruler considering his legacy.


Surviving statue of Hatshepsut in a devotional pose. The image comes from the early part of the reign and the pharaoh is still depicted as a woman.

Modern consensus is that the desecration of Hatshepsut’s monuments cannot have begun earlier than the 42nd year of Thuthmosis’s reign, which is 20 years after his aunt’s death. We also know that it continued into the reign of his son, Amenhotep II – to a period when few of those responsible would have had any memory of the female pharaoh. Finally, where Hatshepsut’s name was obliterated, it was rarely replaced with her stepson-nephew’s; more usually, the new name carved was that of her father, his grandfather, Thuthmosis I.

All of this suggests that the campaign was neither wildly aggressive, nor “personal”. It seems unlikely to have been carried out on the orders of a man who had spent the 20 years of Hatshepsut’s reign boiling with anger at being usurped.

Most modern archaeological interpretations of Hatshepsut’s reign, including those of Dorman, Tyldesley and Cooney, prefer instead to see the destruction of her name as a form of reassertion of what would have been seen as the natural political and theological order – “an impersonal attempt at retrospective political correctness” (Tyldesley) aimed at stressing the male prerogative to rule. This would explain why Thuthmosis seems to have ordered the adoption of a distinctive artistic style in sculptures and paintings showing him – one that was very much a break from the style that had prevailed during Hatshepsut’s reign, and which harked back, more importantly, to the styles adopted by his grandfather. Dorman argues that the key intention was to stress Thuthmosis III’s royal lineage (and hence legitimacy) while removing signs of female disruption to the approved order, most probably because “the recently invented phenomenon of a female king had created such conceptual and practical complications that the evidence of it was best erased.”

For Cooney, meanwhile,

the Egyptian system of political-religious power simply continued to work for the benefit of male dynasty. Hatshepsut’s kingship was a fantastic and unbelievable aberration. Ancient civilization didn’t suffer a woman to rule, no matter how much she conformed to religious and political systems; no matter how much she ascribed her rule to the will of the gods themselves; no matter how much she changed her womanly form into masculine ideals. Her rule was perceived as a complication by later rulers—praiseworthy yet blameworthy, conservatively pious and yet audaciously innovative—nuances that the two kings who ruled after her reconciled only through the destruction of her public monuments.


Kara Cooney, The Woman Who Would Be King: Hatshepsut’s Rise to Power in Ancient Egypt (Broadway Books, 2015); Vanessa Davies, ‘Hatshepsut’s use of Tuthmosis III in Her Program of Legitimation,’ Journal of the American Research Center in Egypt 41 (2004); Peter F. Dorman, ‘The proscription of Hatshepsut,’ in Roehrig, Dreyfus & Keller, Hatshepsut from Queen to Pharaoh (Yale, 2005); Joyce Tyldesley, ‘Hatshepsut and Tuthmosis: a royal feud?’ BBC History, 2011



St Brice’s Day

A Viking-era burial pit uncovered on the Dorset Ridgeway in 2009. The mass grave contains the remains of about 50 mostly young men. All had been decapitated. Carbon-dating and isotope analysis show that the skeletons date to c.975-1024, and it is possible they were victims of the St Brice’s Day Massacre of November 1002, an attempt by Æthelred the Unready to rid his kingdom of every Danish inhabitant in a single day.

Q: This article in The Atlantic mentions that the murder rate in the Medieval period was 12%. That seems absurdly high. Is there any truth to it?

It just seems absurd. Like, just estimating, half the world’s population or more was in India and China. China and south India both had long periods of political stability – does that mean a European had something like a 25% chance of dying due to violence? Are they counting people who die due to war-caused famines as being murdered?


A: Tracking this claim to its source is a good example of heading down an Alice in Wonderland-style rabbit hole.

Checking back to the article you cite, it’s clear that the claim is based on a new and really quite exceptionally broad survey of violence among all mammal populations. This was published in Nature this week as Gómez et al., “The phylogenetic roots of human lethal violence.” Superficially this means the source is an impressive one, since Nature is certainly one of the most prestigious scientific journals in the world. However, it’s worth noting that the paper appears as a “Letter” rather than as a full-fledged article, and that Nature has a surprising history of publishing high-profile but, to put it politely, “controversial” pieces – such as one offering a scientific name for the Loch Ness Monster on the basis of underwater photographs of what later turned out to be almost certainly a tree stump.

In this case The Atlantic itself sounds some cautionary notes about the evidential basis of the violence survey. It is a meta-analysis – that is, it includes no original research, but collates the results of earlier surveys – that attempts to compare the levels of violence among a huge variety of different mammal populations across the whole of the archaeological record. A total of 1,024 species are surveyed, humans being just one of them. So it’s reasonable to wonder exactly how much effort was put into making sure that the human sample was comprehensive and representative, and that the problems associated with the data had been completely thought through.

The Atlantic has a few pertinent comments about the team’s methodology:

  1. “First, he and his team compiled everything they could find on causes of death for various mammals, accumulating some 3,000 studies over two years.”
  2. As for the sources of information for the human sample: they did this “by poring through statistical yearbooks, archaeological sites, and more, to work out causes of death in 600 human populations between 50,000 BC to the present day.”
Polly Wiessner

Polly Wiessner: unimpressed.

As for the way in which the data has been handled: Polly Wiessner, an anthropologist from the University of Utah, “is unimpressed with the study’s human half. ‘They have created a real soup of figures, throwing in individual conflicts with socially organized aggression, ritualized cannibalism, and more. The sources of data used for prehistoric violence are highly variable in reliability. When taken out of context, they are even more so.’”

“Richard Wrangham from Harvard University has similar concerns about the mammalian data, noting that Gómez have folded a lot of different kinds of killing—infanticide, adult deaths, and more—into a single analysis. And from an evolutionary standpoint, it matters less whether two related species kill their own kind at a similar rate, but whether they do so in a similar way.”

To go further into this requires a close reading of the original article, which is available online here. It’s worth noting that the article itself gives very inadequate sources for most of the information it contains, which is not surprising when it is based on such a vast meta-analysis. To find out what the actual sources are we have to go to the “Supplementary material” section which is separately available here.

Let’s summarise what we can discover about the sources used and their comprehensiveness and reliability by reading through these two sources.

First, the Letter itself.

This points out that the human violence figures available were divided into four categories by socio-political organisation: bands, tribes, chiefdoms and states. This is a categorisation widely used in the social sciences (eg anthropology) but one that I’d say a lot of historians find unhelpful. After all, there’s a vast historical literature devoted entirely to trying to define what a “state” actually is.

Statistical yearbook.png

Above: a statistical yearbook yesterday.

There are also some acknowledgements of potential bias that give some clues as to the sorts of sources being used: “The level of violence inferred from skeletal remains could be under-estimated because many deadly injuries do not damage the bones…” And there’s also a reference to “statistical yearbooks” being a prime source of information. So it would appear that the data for the medieval period is going to be based on the archaeological record, rather than the written record. There’s not much clue yet as to how broad the sample will be, geographically or temporally, but if a lot of reliance is being placed on “statistical yearbooks” then that sets off some pretty loud warning bells for me when it comes to making accurate assessments of the medieval period, since, of course, no such contemporary sources exist for that era.

That’s about it for the Letter itself, and the bibliography offers no further clues as to the exact sources of information. To go deeper we have to look at the “Supplementary information” document. This contains a couple of useful additional bits of information. First, by “medieval period” the authors mean the period 1300-500 BP, which is 716-1516 A.D. Second, their data is based on a sample of 17,372 human remains. Again this sounds warning bells, since such a sample size is not going to be enough to provide proper coverage of the whole human population across the whole globe for that whole period. It’s a real drop-in-the-ocean sort of figure – a sampling. Actually, we’re told that figures from 600 different populations were compiled for the survey as a whole (covering the period from 50,000 BP to now), which in one sense is quite impressive, but which also implies that very likely no one population is followed in a consistent and systematic way across the whole period.

Third, there is a better definition of “lethal violence” offered: for the purposes of the paper, this is defined as “the percentage of the people that died owing to interpersonal violence.” If we think about that for a moment in the context of earlier cautions, this sets off more alarm bells. If we’re looking at archaeological data, it’s going to be very difficult to distinguish, for example, between interpersonal violence, accidents and suicides in many of these records. Was a broken leg inflicted in a battle or a fall? If the researchers have been scrupulous, I would expect this factor to result in an understatement of the figures for the medieval period, since they should only be counting wounds that were clearly inflicted by weapons. Again, though, we have to recognise that this whole debate goes on in the wider context of the difficulty of identifying some marks of violence from purely skeletal remains. Many arrow wounds, not to mention cut throats, deaths by poison and so on, are not going to show up readily in the archaeological record.

Finally, we can use the data supplied here to be much more explicit about the precise sources consulted. I’ve copied the portion of the survey data that refers to the medieval period here.

Serbian fractures.png

Some of the fractures identified in medieval remains from Serbian cemeteries by Djuric et al. But what caused them?

This shows the site of the remains surveyed, rough date and number of remains, and (final column) a source, which you can chase up in the bibliography if you’re so inclined. They are anthropological and archaeological, not historical. Just to give one example, one of the sources for Serbian violence is Djuric MP, Roberts CA, Rakocevic ZB, Djonic DD, Lesic AR (2006). “Fractures in late Medieval skeletal populations from Serbia.” American Journal of Physical Anthropology 130: 167-178.

From all this we can see that the survey is very limited – it covers only the UK, Ireland, Portugal and Spain, Scandinavia, Germany, Poland and Croatia. Even allowing for my earlier comments about the lack of comprehensiveness here, in my opinion this is a ridiculously limited pool of data from which to extrapolate a worldwide, pan-medieval figure. There are many reasons for supposing that even if the European figures are representative (which we can’t know, but look at some of the specifics – two Viking cemeteries, a burial pit associated with the Battle of Towton (the bloodiest battle, probably, in British history), a royal graveyard in Croatia, and some monks’ graveyards … it would be so easy to argue that these are very unrepresentative samples), these figures are just not very useful.

Towton battle damage.png

Easily identifiable battle damage from a skull excavated from a burial pit dating to the Battle of Towton (1461), during the Wars of the Roses. But the battle is hardly representative of medieval experience – perhaps 28,000 died in what was the bloodiest engagement ever fought on British soil.

All we can really conclude from this is that a survey of remains from 40 different, broadly medieval, European sites, containing a very varied number of bodies, from very varied periods that include some periods of war, estimates deaths by violence at an average of 12%. Even in this limited context, I immediately have hundreds of questions about how typical these sites are, what sorts of violence were involved, how we know who inflicted what wounds in what circumstances, and even whether the victims survived them to die a natural death much later. None of these questions is answered by the Letter, and to focus on something as specific as deaths in the medieval period is to use the paper for purposes it was not really intended to serve.


Assassins Creed London skyline.png

A London cityscape in the Victorian era, as re-imagined for “Assassin’s Creed.” The reality would have been significantly smellier and dirtier.

Q: I am a hot-blooded young British woman in the Victorian era hitting the streets of Manchester for a night out with my fellow ladies and I’ve got a shilling burning a hole in my purse. What kind of vice and wanton pleasures are available to me?

A: To begin with, I need to caution that Manchester – which looked like this in 1870 – has not been as widely written about as other cities, so I have drawn on some studies of other major cities as well; in addition, there would have been huge gulfs in experience depending on social class, and the “Victorian era” is in itself an extremely broad term, covering 60 years and some substantial shifts in lived experience and in the types of entertainment on offer. For all these reasons, consider this answer a rather broad one that attempts to cover young women’s experiences in the big city generally, and mostly in the latter half of the Victorian period.

Let’s start, though, by considering what elements may have been unique to Victorian Manchester, which in the course of this period passed Liverpool and Dublin to contend, with Birmingham and Glasgow, for consideration as the “second city of the empire.” It was, to put it bluntly, an industrial hell-hole, albeit one that offered exciting opportunities – the main centre of cotton manufacturing in the UK at a time when Britain was a gigantic net exporter of finished textile products. This had several important impacts that we need to be aware of, of which the most important was that the city became a magnet for workers from rural or small-town backgrounds, who could easily find work in the myriad of factories that sprang up there, and lodgings in the vast swathes of slum housing that inevitably grew up as a result. All this meant that Manchester was home to a large number of young workers of both sexes who were to a considerable degree free of the sorts of restraints that they would have experienced at home. Adolescent and young women might live without parents, and sometimes siblings; the social bonds and restraints created by the church were also significantly weakened, and the Religious Census of 1851 revealed church attendance among working class people in major industrial centres to be scandalously low.

Sewing factory in late Victorian England.png

Female workers in a Victorian-era sweat-shop.

By the 1840s, then, Manchester was already the greatest and most terrible of all the products of the industrial revolution: a large-scale experiment in unfettered capitalism in a decade that witnessed a spring tide of economic liberalism. Government and business alike swore by free trade and laissez faire, with all the attendant profiteering and poor treatment of workers that their doctrines implied. It was common for factory hands to labour for 14 hours a day, six days a week, and the conditions in domestic service – which was the other main source of employment for young women – were only a little better. Chimneys choked the sky; Manchester’s population soared more than sevenfold. Thanks in part to staggering infant mortality, the life expectancy of those born in Manchester fell to a mere 28 years, half that of the inhabitants of the surrounding countryside. One keen observer of all this was an already-radical Friedrich Engels, sent to Manchester in 1842 to help manage a family-owned thread business (and keep him out of the hands of the Prussian police). The sights that Engels saw in Manchester (and wrote about in his first book, The Condition of the Working Class in England) helped to turn him into a communist. “I had never seen so ill-built a city,” he observed. Disease, poverty, inequality of wealth, an absence of education and hope all combined to render life in the city all but insupportable for many. As for the factory owners, Engels wrote, “I have never seen a class so demoralised, so incurably debased by selfishness, so corroded within, so incapable of progress.” Once, Engels wrote, he went into the city with such a man “and spoke to him of the bad, unwholesome method of building, the frightful condition of the working people’s quarters.” The man heard him out quietly “and said at the corner where we parted: ‘And yet there is a great deal of money to be made here: good morning, sir.’”

For all these reasons, it is hardly surprising that Manchester was also a noted centre of radicalism and an early hotbed of the labour movement in this period. The infamous Peterloo Massacre, in which cavalry had charged a vast crowd demonstrating for parliamentary reform, killing or injuring as many as 500 of them, took place in the city before Victoria’s day (1819), but it cast a very long shadow over the decades to come. Manchester became one of the biggest centres of support for the Chartist movement, a (for its time) radical mid-century organisation calling for a large-scale expansion of the franchise.

So, to summarise: to be working class in Victorian Manchester was to do work that was long, hard and dangerous; to be an interchangeable and expendable part in an industrial machine built by factory owners who laboured to resist unionisation; and to work in an environment in which “health and safety” was largely non-existent. Terrible accidents involving unguarded, whirring machinery and human limbs were hideously common.

There was every reason to seek escape in the city’s entertainments.

Molly Hughes.jpg

Molly Hughes, education pioneer and author of the invaluable – and still highly readable – trilogy A Victorian Family 1870-1900.

Let’s begin by considering the degree to which male and female entertainments were, or were not, one and the same in the Victorian era. To a great extent, it seems, women – or at least the right sort of women – might go almost anywhere, if appropriately accompanied; at one of the main dog pits in London, where crowds assembled to watch dogs take on a dozen wild rats at a time, Henry Mayhew (author of the utterly invaluable London Labour and the London Poor) was told: “I’ve had noble lords and titled ladies come here to see the sport – on the quiet.” But class was a vital determining factor when it came to entertainment. The experiences of Molly Hughes – who was a girl in London in the 1870s and an adolescent in the city in the 1880s, and who grew up in a family that seems to have been both relatively liberal and relatively fun-loving – give an interesting insight into just how constrained middle-class life could be for a girl. Molly had to press hard to get herself a decent education, and her experiences of life outside the family home – which she considered to be unusually broad, by the standards of her contemporaries – strike us today as almost comically limited. When Molly was a girl, her mother

was for encouraging any scrap of originality in anybody at any time, and allowed me to ‘run free’ physically and mentally. She had no idea of keeping her only girl tied to her apron-strings, and from childhood I used to go out alone in our London suburb of Canonbury, for a run with my hoop or to do a little private shopping.

As an adolescent, however, her experience of big-city fun was limited to just one or two vividly-recalled and heavily-chaperoned experiences. Here is by far the grandest and the most important that teenaged Molly ever enjoyed – and it is quite evident that the men in the family had serious concerns about the idea of taking her out at all, and that she herself was permitted no part in bringing it about. She was 16:

During the Christmas holidays of ’82, it occurred to the boys that I ought to have a little relaxation, in view of the rigorous time I was likely to have at my new school. How would I like to go to a theatre and see a real play? … I had never even been to a pantomime. Mother was consulted, and thought it wouldn’t do me any harm, especially as Dym [a brother and Cambridge undergraduate] said he would choose a small theatre and a funny farce – Betsy at the Criterion… The play itself has faded from my memory, but the accompaniments are still vivid. An anxious farewell from mother, as Dym and I stepped into a hansom, set us off.

Mother had put me into my nearest approach to an evening dress, which Dym approved, so that I was not too shy when I sat in the dress-circle, and walked into the grill-room after the play. This was full of cheery people and a pleasant hum of enjoyment and hurrying waiters. I felt it to be like something in the Arabian Nights. Tom and Charles [two older brothers] walked in and joined us. A low-toned chat with the waiter followed, while I looked with amazement at the wide array of knives and forks by our places.

’What can all these be for?’ I asked Charles.

’You’ll see. I’ll tell you which to use as we go on; and remember you needn’t finish everything up; it’s the thing to leave something on your plate.’

Such a meal as I had never dreamt of was then brought along in easy stages. Never had I been treated so obsequiously as by that waiter. When wine was served I began to wonder what mother would think. It gave that touch of diablerie to the whole evening that was the main charm. To this day I never pass the ‘Cri’ without recalling my one and only visit to it, with those adored brothers.

One reason for the paucity of Molly’s experience was that theatre, in this period, was not really considered suitable for the well-bred; the quality went to the opera, to stroll in the pleasure gardens (by now gas-lit and open in the evenings) or perhaps to musical recitals such as the popular programmes of choral singing offered by the children at London’s Foundling hospital. Nevertheless, more or less elaborate theatricals were widely available and popular with the working classes. They ranged from the “penny gaff” – the cheapest sort of neighbourhood theatre, most popular in the first half of the Victorian period, and often found in the back room of a pub – up to large theatres that operated, by the end of the era, as music halls and were the most popular form of mass entertainment before the advent of the cinema.

Victorian era pub.png

A Victorian-era public house. Women were welcome – in some parts of the premises – and in some circumstances would visit unaccompanied.

Given Henry Mayhew’s broad experience of (and considerable sympathy for) many of the aspects of working class life in the mid-Victorian period, it is interesting that his view of the penny gaff was negative; he thought it “the foulest, dingiest place of public entertainment I can conceive,” with an unspeakably vile odour – a place “where juvenile poverty meets juvenile crime.” The entertainment on offer consisted of six performances a day of gory retellings of violent crimes, laced with “filthy songs, clumsy dancing and filthy dancing,” which – reading between the lines – we can suppose were shocking more for their crude, sexually or violence-charged lyrics and actions than anything else. Also shocking: most of the audience at a penny gaff, Mayhew found, were women.

Street performances of various sorts were also popular and affordable. Puppetry, usually centred around Punch and Judy, was an enduring perennial in all its various forms (“the Fantoccini… the Chinese Shades…”), but there were also hundreds of performers scraping a living as clowns, fire-eaters, sword swallowers and so on. “When we perform in the streets, we generally go through this programme,” one Fantoccini man explained to Mayhew, as he set out a highly elaborate set of entertainments:

We begins with a female hornpipe dancer; then there is a set of quadrilles by some marionette figures, four females and no gentlemen… for four is as much as I can handle at once. After this we include a representation of Mr. Grimaldi the clown, and a comic dance and so forth, such as trying to catch a butterfly. Then comes the enchanted Turk. He comes on in the costume of a Turk, and he throws off his right and left arm, and then his legs, and they each change into different figures, the arms and legs into two boys and girls, a clergyman the head, and an old lady the body…. Then there’s the tightrope dancer, and next the Indian juggler… They are all carved figures, and all my own make.

Just down the road on a holiday evening, one might encounter stilt-walkers, strong-men or groups of “street posturers,” as contortionists and acrobats were sometimes known.

“There’s five in our gang now,” the leader of one such troupe of tumblers said, around 1850:

There’s three high for ‘pyramids’ and ‘the Arabs hanging down’ … there’s ‘the spread,’ that’s one on the shoulders and one hanging from each hand, and ‘the Hercules,’ that is, one on the ground while one stands on his knees, another on his shoulders, and the one one a-top of them two, on their shoulders… The dances are mostly comic dances, or, as we call them, comic hops. He throws his legs around and makes faces, and he dresses as a clown.

Such performers had to be acutely aware of exactly how and when they might be paid:

Our gang generally prefers performing in the West End, because there’s more ‘calls’ there. Gentlemen looking out of the window see us, and call us to stop and perform; but we don’t trust them, even, but make a collection when the performance is half over… And yet we like poor people better than the rich, for it’s the halfpence that tells [adds] up the best.

Oxford music hall 1875.jpg

The Oxford Music Hall in 1875 – a relatively high-class example of such an establishment, and one in which the sexes mingled. In lower-class London music halls, prostitution and sexual encounters were commonplace.

By the 1850s, though, tastes in entertainment were already changing. Theatre, music hall and pantomime (already becoming a Christmas-time entertainment by then, but one that was available for months at a time, rather than solely during the festive season) began to emerge as mass entertainments in the 1860s. Performances were long and varied, often lasting from 7.00 or 7.30 till 11 – so, rather as in 1930s cinemas, with their cartoons, newsreels, second and main features, you got an entire evening’s entertainment for somewhere around 2d or 4d. Most early Victorian programmes centred on melodrama, or sometimes circus-style performance, but by the end of the period music hall had triumphed as the most popular form of entertainment for lower-class audiences. A typical programme might involve a dozen different touring artists, who worked circuits up and down the country, from popular singers such as Marie Lloyd to comedians like Dan Leno. Broad, often rather “blue” humour was increasingly permitted and appreciated as the century progressed, and it was often possible to smoke and drink in the auditorium, which helped to make for an especially raucous atmosphere.

The most famous British music hall was the Alhambra, in London, which had a capacity of about 5,000 and had started out as a sort of permanent circus venue in the 1850s. It was well known for its elaborate scenery and mixed everything from ballet to what was advertised as “a dance forbidden in Paris” into its programme. Such venues did attract female audiences, but could often be centres of prostitution. Entry to the main part of the building cost a shilling just for standing room – a large sum for the time – and, visiting in 1869, James Greenwood found that the entertainment on offer was aimed more squarely at men than at women:

in the boxes and balconies sat brazen-faced women, blazoned in tawdry finery, curled and painted … there is no mistaking these women.

Behind the numerous bars, meanwhile,

superbly-attired barmaids vend strong liquor… besides these, there are small private apartments to which a gentleman desirous of sharing a bottle of wine with a recent acquaintance may retire.

This brings us on to pub-going, which was undoubtedly a central part of working-class nights out. Female drinkers were a normal sight in such places, making up between a quarter and a third of the clientele – though middle- and upper-class women most certainly were not. Young women tended not to be regular pub-goers, however; the typical Victorian-era female clientele was middle-aged or even elderly. There was a reason for this; Gutzke explains that

age, marital status, and income imposed insuperable barriers to acceptability. Young, unmarried women seldom ventured into the pub alone, lest they be mistaken for prostitutes. Middle-aged or older wives, the preponderant women in pubs, displayed two types of drinking behaviour: during the week the poverty-stricken – the largest group – drank with each other, while on the weekend wives from the lower-middle classes downwards might accompany their husbands.

While most pubs had bars that were male-only, therefore, they also had spaces where women were allowed. Charles Booth, the pioneering social investigator – and frequently a morally disapproving commentator – spoke to one publican in the late 1890s who ran five public houses, in one of which there were “seven bars, two of which are reserved for men only” – and also noted that while “children do sip the beer they are sent to fetch… this is not the origin of their liking for beer. This dates back to early infancy while they were yet in their mother’s arms. Mothers drink stout in order to increase the supply of milk in the breast but often help the baby straight from the pintpot from which they help themselves.” Another publican, “Mr Clews of Clerkenwell,” observed that this was “a great area for women’s drinking… Women take rum in cold weather and gin in hot. ‘Dog’s nose’ they also drink which is a compound of beer and gin.”


Marie Lloyd, the great female star of the English music halls, had an act that combined renditions of popular songs with “blue” humour.

Of course, not all entertainment was so raucous. Young mothers – and many young women were also young mothers in the Victorian period – might not often get the chance to visit any sort of theatre or penny gaff, and their entertainments were often of a gentler kind. One girl born in 1855 looked back fondly to the gatherings that her mother and her mother’s female friends had taken their children to in summer in Victoria Park, by the 1860s the only significant area of greenery in London’s crowded East End. A special attraction of the park was that it was possible to hire prams there – “very few people had prams of their own then, but it was possible to hire them at 1d an hour… We would picnic on bread and treacle under the trees and return home in the evening a troop of tired but happy children.”

For the rather better off, there were zoos in Regent’s Park and Surrey Gardens, the British Museum (open three days a week from 10 till 4 – and till 7pm in summer), the titillating medical exhibits of the museum of the Royal College of Surgeons, and indeed freak shows, and one-off performances of all sorts, at which one main attraction was the chance of witnessing death and disaster. A pioneering parachutist, Cocking, died attempting a descent from 5,000 feet at Vauxhall Gardens in 1837; in 1871 a huge crowd gathered on London Bridge to watch the celebrated swimmer ‘Natator, the Man-Frog’ dive from the balustrade into the river, only to be disappointed when the performer appeared but was promptly arrested for attempted suicide.

So a wide variety of entertainment was on offer in the Victorian city, much of it relatively innocent, some of it considered, by the moral authorities of the day, liable to corrupt, and a little of it actually dangerous. But we cannot close without considering the moral dimension of popular entertainment, especially as it applied to single women. Judith Walkowitz’s influential City of Dreadful Delight, for example, maps the social panics prompted by the “narratives of sexual danger” that warned young, independent Victorian-era women that enjoyment of the city’s pleasures might easily usher them down the path that led to prostitution, pre-marital sex, venereal disease, or alcoholism. All this raises important questions, not least about agency; the women Walkowitz writes about were all too often “figures in an imaginary urban landscape of male spectators” – and male predators. These fears, Walkowitz shows, typically coalesced into strident anti-vice campaigns, condemnation of most expressions of sexuality, melodramatic newspaper coverage, and fresh mutations in the Foucauldian power relationships of the period. They are also powerful reminders that the history of popular entertainment in the Victorian period demonstrates, as well as anything, the inequality of opportunity, treatment and potential agency that coloured female experience in this, as in other, periods.

Ruth Alexander, who writes of New York in a slightly later period, gives numerous excellent examples of the harsh way in which any “rebellious working girls” who fought against the sorts of constraints imposed on them could be treated. Even slightly sexually awakened, or emancipated, behaviour on the part of young women was regarded as a serious threat – with often catastrophic consequences for the girls. For example, 16-year-old Nellie Roberts was sent to the New York State Reformatory for Women in 1917 as a “menace to the community” for the crime of standing on the roadside and “hailing men on motorcycles and asking them for rides.” This was seen as tantamount to prostitution. Alexander’s detailed and more empathetic investigation of Nellie’s circumstances uncovered a desire to escape fuelled by a desperately unhappy family background – a dead mother; a drunk father who raped his eldest daughter and “got fresh” with Nellie, too, on several occasions; poverty; boyfriends who might sometimes be “good to her” but were equally capable of sexual assault. When we read contemporary accounts of female “social delinquency,” we would do well to remember that many such cases were underpinned by circumstances as bad as those that Nellie Roberts endured, or worse.


Ruth Alexander, The ‘Girl Problem’: Female Sexual Delinquency in New York, 1900-1930 (1995); Carl Chinn, They Worked All Their Lives: Women of the Urban Poor in England, 1880-1939 (1988); Mike Dash, “Friedrich Engels’ Irish Muse,” 2013; David W. Gutzke, “Gender, Class, and Public Drinking in Britain During the First World War,” Social History (1994); M.V. [Molly] Hughes, A London Family, 1870-1900 (1946); Henry Mayhew, London Labour and the London Poor (1851); Liza Picard, Victorian London: The Life of A City, 1840-1870 (2005); Judith Walkowitz, City of Dreadful Delight: Narratives of Sexual Danger in Late-Victorian London (1992).

Q: You wrote:

Judith Walkowitz’s influential City of Dreadful Delight, for example, maps the social panics prompted by the “narratives of sexual danger” that warned young, independent Victorian-era women that enjoyment of the city’s pleasures might easily usher them down the path that led to prostitution, pre-marital sex, venereal disease, or alcoholism.

I recently read a paper by Ruth H. Bloch, “Changing Conceptions of Sexuality and Romance in Eighteenth-Century America,” in which she sets out to examine normative rather than transgressive sex and sexuality and how that changes across the century.

To quote her:

Many scholars have focused on the prohibitions or abuse; few have examined the aspirations.

The titles of two of the books you cited hint to me that this is probably an almost universal issue. Sexual delinquency and sexual danger, especially, of course, with regard to women. Are there many sources out there that help to coax out the aspirations of female sexuality in Victorian England? Aside from what might be shouted down from the pulpit, of course.

A: I think that’s a very fair question and Bloch seems to me to be clearly right – though perhaps Walkowitz and Alexander might contend that the cases they are writing about were actually the products of increasing aspirations.

Part of the problem, certainly, is the nature of the sources available. “Female delinquency” resulted in court cases, concerned reports by learned bodies and official enquiries, and of course copious newspaper coverage as well. Very few women wrote about their sexual feelings in this period. Other forms of aspiration (such as Molly Hughes’s – she eventually became one of the most prominent figures in education in London in the early 1900s) leave little trace, and might in any case be cut short – in Molly’s case she gave everything up to be a wife when she married (entirely willingly, it should be said, though of course her willingness was in itself a product of her upbringing), and went back to work only after the early death of her husband.

We’re reliant on diaries, letters and memoirs, like Molly’s, for much of our information about women’s aspirations when these did not cause them to fall foul of the moral and the actual police of the period – and these are conspicuously devoid of information about explicitly sexual aspirations. On top of that, our sources are very heavily biased towards upper and middle class women, which in turn makes them unrepresentative because such women had access to wider (though still very limited) opportunities. One of the very few examples of a working class Victorian woman making a huge success of her own life, and agitating to improve the lives of others, is that of Victoria Woodhull, who in the 1870s became the first woman to run for US President – and it’s very notable that Woodhull had to take at least the first steps along that path by exploiting her great beauty, rather than her impressive brains. The most important reason why Woodhull aroused the widespread condemnation and revulsion that she did was that she was a “sex radical” – meaning a supporter of women’s right to enjoy the same sexual pleasure and sexual experience as a contemporary man. This was a profoundly shocking position to take in the 1870s. I think you might find Joanne Ellen Passet’s Sex Radicals and the Quest for Women’s Equality (2003) especially interesting as a result.

One interesting sidelight on all this is the way in which popular religion and popular protest formed a legitimate outlet for female aspiration. You would probably be interested in studies of the roles that women played in the new spiritualist movement and it is very noticeable, also, how prominent women from less well-off social backgrounds, such as the merely middle class Annie Besant, were in theosophy. Then there was nursing – where the belatedly celebrated Mary Seacole made her name. A few working class women were also prominent in the women’s suffrage movement, even though the vast majority of suffragists did not think it was feasible to agitate for the vote for women who failed to meet the usual property qualifications (which excluded pretty much the entire working class). But the prominent ones were so rare that they were practically exhibits, used by their better-off colleagues to demonstrate that such aspirations actually existed. The suffragettes of the WSPU, for instance, made a great deal of Annie Kenney, a former mill worker who was the only working class person to feature among their most senior hierarchy.

Finally, one of my favourites of all the things I’ve written is this essay on the experiences of Philippa Fawcett (the daughter of the suffragist leader Millicent Fawcett and of a government minister, so hardly poorly off) in demonstrating conclusively that women were not in fact “fragile, dependent, prone to nerves and—not least—possessed of a mind that was several degrees inferior to a man’s” – which she did by causing international consternation in becoming the only woman ever to top the results in Cambridge’s mathematics tripos. It’s possibly the only thing I’ve ever written that is capable of raising goose-bumps.

So you may also find some interesting reading in the following:

Lynn McDonald, Mary Seacole: The Making of the Myth (2014)

Amanda Frisken, Victoria Woodhull’s Sexual Revolution: Political Theater and the Popular Press in Nineteenth-Century America (2004)

Ann Braude, Radical Spirits: Spiritualism and Women’s Rights in Nineteenth-Century America (1989)

Annie Kenney, Memoirs of a Militant (1921)


Q: Did British criminals in the 1700s and 1800s really worship a deity called the Tawny Prince? If so, what were the origins of this deity?

Criminals worshiping the Tawny Prince is mentioned briefly in this book on Australian history I’m reading, Commonwealth of Thieves by Thomas Keneally… Googling the Tawny Prince gets me nothing at all.

A: Thomas Keneally’s Commonwealth of Thieves is a popular history of the first years of the British colony in Australia, published in 2006.

Keneally (an Australian who is, of course, best known as a novelist, and as the author of Schindler’s List) uses the term “Tawny Prince” – always with capitals – five times in the course of his book. The most significant mentions are:

“… In Spitalfields to the east, in squalor unimaginable, lived all classes of criminals, speaking a special criminal argot and bonded together by devotion and oath to the criminal deity, the Tawny Prince. The Tawny Prince was honoured by theft, chicanery and a brave death on the gallows…” [p.20]

[Of convicts on their arrival in Australia:] “Not that they were reborn entirely, since they brought their habits of mind and the Tawny Prince, the deity of the London canting crews, with them” [p.81]

[Of a wild celebration in the rain:] “The great Sydney bacchanalia went on despite the thunderstorm. Fists were raised to God’s lightning; in the name of the Tawny Prince and in defiance of British justice, the downpour was cursed and challenged…” [p.89]

All this is referenced, so Keneally did not invent the Tawny Prince, but a little further research does suggest he took a fairly basic reference, elaborated it, embroidered it, and used it to produce a much more solid and distinct figure than the evidence actually warrants. All in the name of good colour, I am sure.


The crowded streets of Georgian London were a haven for thousands of criminals of all varieties. But did the rookeries and canting crews actually spawn a perverted religion?

Let’s start with Keneally’s own notes. He cites as his references, for a collection of material about the “Tawny Prince” and cant (thieves’ slang), Watkin Tench’s Sydney’s First Four Years and Captain Grose’s Dictionary of the Vulgar Tongue of 1811.

Tench was an officer in the marines who was part of the First Fleet. The book Keneally cites was a 1961 reprint of one originally titled A Narrative of the Expedition to Botany Bay, first published by Debrett in London in 1789. This contains no reference to the Tawny Prince, so in fact Keneally’s only source is Grose (1731-91), an antiquary, whose work (correctly titled Classical Dictionary of the Vulgar Tongue) was first published in 1785.

This work does contain a passing reference to the Tawny Prince, not in the form of a separate entry, but rather inserted as a phrase that forms part of a much longer oath supposedly taken by “Gypsies” (a term which Grose uses not to mean “Romani,” but as a synonym for vagrants of all sorts) when “a fresh recruit is admitted into the fraternity.” The relevant extract is the first of several clauses, and is:

“I, Crank Cuffin, do swear to be a true brother, and that I will in all things obey the commands of the great tawney prince, and keep his counsel and not divulge the secrets of my brethren.”

Now, The Routledge Dictionary of Historical Slang confirms that a “crank-cuffin” is an 18th century term for a vagrant feigning sickness, which at least implies that the claimed oath is in the language of the period, but Grose incorporates no commentary at all, so we are left to our own devices in attempting to make sense of the terms and of the passage as a whole.

We can start with the bio of Grose that appears in the Dictionary of National Biography, which notes:

Captain Francis Grose, antiquary and author of the Classical Dictionary of the Vulgar Tongue

From 1783 he published in a torrent to make a living. The Supplement to the Antiquities was resumed, with a greater proportion of views from other artists, particularly S. H. Grimm, and was completed with 309 plates in 1787. This and the main series were reissued in a cheaper edition in 1783–7. A Classical Dictionary of the Vulgar Tongue (1785) and A Provincial Glossary, with a Collection of Local Proverbs, and Popular Superstitions (1787) were at the time the largest assemblage of ‘non-standard’ words or meanings, about 9000, omitted from Samuel Johnson’s Dictionary; they drew on his fieldwork as far back as the 1750s. The first parts of two other pioneering works appeared in 1786: Military Antiquities and A Treatise on Ancient Armour. Both relied mainly on his specialist library and the armouries at the Tower of London, but also included observations on military music from the 1740s. Of more popular appeal was Rules for Drawing Caricaturas: with an Essay on Comic Painting (1788).

How much further does this get us? The reference to “fieldwork” is intriguing, but it’s balanced by the discussion of a “torrent” of works churned out to make a living, and in fact a careful search shows that Grose’s source was not some vagrant informer, but rather the grammarian James Buchanan’s New Universal Dictionary of 1776, which has an entry for “Gypsies” that contains a fuller version of the same oath that Grose gives, referenced more precisely to what appears to be a description of the Romani people, in which is embedded a significantly more detailed account of gypsy oath-making. Buchanan, sadly, gives no source for his information or the reference to the “tawney prince”, but his own gloss on the oath as a whole is as follows:

“The Canters have, it seems a Tradition, that from the three first Articles of this Oath, the first Founders of a certain boastful, worshipful Fraternity, who pretend to derive their Origin from the earliest Times, borrowed of them, both the Hint and Form of their Establishment. And that their pretended first derivation from Adam, is a forgery…”

That is as far back as I have been able to trace the term,* but I’m afraid that a more sober consideration of Grose and especially of Buchanan and his gloss indicates that the “great tawny prince” was not some sort of special deity of thieves, in the way that Keneally uses the term, but simply a synonym for the prince of darkness – that is, the devil. (Note, in support of this argument, the lack of capitals in the term “tawney prince” as given by both Grose and Buchanan, and in contradistinction to Keneally’s usage.) I think the implication of the word “tawny” is essentially “animal-like” – with a hide. Although it’s much less common now, 17th and 18th century portrayals of the devil very frequently saw him described as a shape-shifter who could and did assume animal forms, appearing as an ox or a bull, among other disguises. These carried with them implications of physical vigour, lack of restraint, and being placed beyond the order of human society.

We can also check the impressive online collection of trial reports known as The Proceedings of the Old Bailey. This is “a fully searchable edition of the largest body of texts detailing the lives of non-elite people ever published, containing 197,745 criminal trials held at London’s central criminal court” between 1674 and 1913. Although the reports are not literally trial transcripts of everything that was said in every case, but rather court reporters’ summaries of salient points, the most celebrated and interesting trials did receive extensive coverage that included verbatim reporting of some segments. Nowhere in this gigantic criminal word-mine do the terms “Tawny Prince” or “Tawney Prince” appear – so I think we can be certain that the figure imagined by Keneally was not a commonly-evoked deity, or even a figure commonly sworn to, in the whole of this period.

As such, it seems likely that the oath is presented not as one sworn to a real “god” of any sort, but rather as an inversion of the sort of decent oath an honest Christian might swear by his or her God, in which the insertion of a mention of the devil actually serves to underline the dastardly and perverted nature of the oath for the dictionary’s intended audience – not thieves, but gentlefolk who, it is intended, will be horrified by it. The idea that thieves and criminals of every stripe were organised into an ordered fraternity that placed itself in distinct opposition to decent society was not only an outrage in itself, but also helped to justify their persecution – which, in this period, before the repeal of the ‘Bloody Code’, was notoriously severe.

* Further research shows the “gypsy” oath does date to a slightly earlier period. A colleague informs me: “The whole inverted-oath and attached gloss goes back at least to Richard Head’s The Canting Academy of 1673. He has it as “great tawny Prince.” Head is most famous as a satirist and fiction writer, so it’s a toss-up that he pastiched the oath together himself.”



Q: How bad would it have smelled in a medieval city?

One of the almost perfectly preserved medieval alleyways in Albarracín, Spain – a village noted for its surviving 10th-15th century architecture.

A: Smell is a problem for historians. The vocabulary that we have to describe smells is much less nuanced than it is for other senses (Gordon, 120); Isidore of Seville divided them very crudely into either “sweet” or “stinking”. Moreover, unlike physical objects, smell leaves no trace of itself to be studied, so we are entirely dependent on written descriptions. And we’re all familiar with the ways in which we quickly become inured to bad smells – smelly rooms cease to stink so badly when we spend some time in them – so it’s very probable that things that would smell very strongly to us, were we to be suddenly exposed to them now, passed largely unnoticed in their time. A good example is garum, the Roman condiment used as freely by them as ketchup is by us. Garum’s main ingredient is putrid fish guts, but the smell, highly offensive to us, was not considered foul by them. Adds Piers Mitchell:

“Some of the nuisances and smells that annoy many modern urban populations were an accepted part of everyday life in ancient cities. People simply had a higher tolerance to the unsanitary conditions of their city, and therefore the rigorous standards of proper waste disposal would seem irrelevant and impossible to reach for those in the past.” (Mitchell, 70).

We can certainly say that medieval people did notice smells and that they described them in terms that ascribed moral dimensions to them. They believed there was such a thing as an “odour of sanctity”, generally described as sweet, like honey; paradise was thought to smell “sweet, like a multitude of flowers”; and the martyrdom of Thomas Becket was “likened to the breaking of a perfume box, suddenly filling Christ Church, Canterbury, with the fragrance of ointment” (Woolgar, 118). But good smells were also temptations (monks were urged to avoid the smells of spices, which would tempt them to demand better food) and when they entered the body, they could be channels for disease (Cockayne, 17). Conversely, bad smells were associated with hypocritical, evil or irreligious behaviour, and those who sinned were assumed to have acquired a stench: Shakespeare’s Gloucester “smells a fault” and later in Lear is thrown out to “smell his way to Dover,” where an enemy army is waiting.

The public latrines on the Thames at London Bridge, from a medieval manuscript: British Library Yates Thompson, 47 f. 94v.

In other words, “defamation had a strong moral odour” (Woolgar, 123); a case brought before the courts at Wisbech in c.1468 involved the insulting of John Sweyn by William Freng, who had called him a “stynkyng horysson”. Allen has some revealing things to say about the medieval attitude to farting: “To smell the intestinal by-product of others brings one into extimate relation with them; more profound than psychoanalysis, it entails a knowledge more intimate than sight or hearing, more detached than touching or licking…. The stink of a fart belongs to a different mode of being.” (Allen, 52-3)

Smell in this period was also closely associated with the concept of miasma – the idea that disease was borne on waves of foul air that were betrayed by their smell. This means we do have evidence that medieval people noticed changes in the levels of smells that they might not otherwise have commented on; there was a case in London in 1421, involving the surreptitious dumping of refuse by one William atte Wode, which tells us a lot about what were then considered the main sources of bad smells in the city – and also that the people of London differentiated between stench and a “wholesome aire” which was

“faire and cleare without vapours and mists… lightsome and open, not dark, troublous and close … not infected with carrian lying long above ground… [nor] stinking and corrupted with ill vapours, as being neere to draughts, sinckes, dunghills, gutters, chanels, kitchings, church-yardes and standing waters.” (Rawcliffe, 124)

Tanneries – which cured leather with the help of large pits filled with urine, night soil and ash – were a major contributor to the stench of medieval cities throughout Europe. The remains of this tannery were discovered beneath modern Nottingham.

With all this said, we can also highlight some of the smells that would have struck us most strongly, had we visited a medieval or early modern city (I include the latter because we have more evidence for them, and changes in the way cities were run were not extensive between the medieval and early modern periods). In terms of overall sensation, these would include the sulphurous smell of burning coal (Brimblecombe, 9); Green rather imaginatively, but probably fairly, goes further and invokes a “richly layered and intricately woven tapestry of putrid, aching stenches: rotting offal, human excrement, stagnant water,… foul fish, the burning of tallow candles, and an icing of animal dung on the streets.”

In terms of locales, we would notice the smells generated by small scale industry, which was mixed up indiscriminately with living spaces (no industrial parks in those days) – perhaps most especially those created by the slaughterman and butcher (whose work produced a rich stench of blood and excrement), the fuller, the skinner and the tanner. Not a lot of care went into disposing of the by-products of these industries. The dredging of one Cambridge well yielded 79 cat carcasses, dumped there by a local skinner; his preparation of their pelts would have involved treating them with a high-smelling solution of quicklime (Rawcliffe, 206). Tanning – which required the copious use of bodily wastes and the immersion of skins “for long periods in timber lined pits of increasingly noisome liquids… a malodorous combination of oak bark, alum, ashes, lime, saltpetre, faeces and urine” (Rawcliffe, 207) – was widely considered the worst-smelling work of the period. Glue-, soap- and candle-making all involved rendering animal fats, and their smells would also have been prominent; soap-makers boiled lime, ash and fat together to make their products (Cockayne, 199). Then there were the smells of cooking and of animals (Ackroyd notes that in the fifteenth century the dog house at London’s Moorgate sent forth “great noyious and infectyve aiers”). The area along the Thames would have added the smell of pitch, used to caulk timbers in the shipbuilding trade (Cockayne, 9).

Fleet Ditch – London’s most infamous open sewer – was actually the highly polluted River Fleet. It is seen here in the Victorian period and the river still flows under London today – less polluted now, and completely built over.

We would certainly notice the open sewers, such as London’s infamous Fleet Ditch – actually a small river into which nightsoil and industrial byproducts were dumped – which ran directly down the centre of major roads towards the Thames, even though contemporary accounts rarely refer to them unless something happened to make the smells worse than usual. This happened to the Fleet in the 13th century, when the river became so choked with tannery filth that it was no longer navigable above Holborn Bridge (Chalfant, 81). In 1749, a body dragged from the Ditch was initially supposed to be that of a murder victim; it turned out to belong to a man who made his living dragging the sewers for the carcasses of dogs that he could sell to skinners, and who had fallen in by accident (Cockayne, 199).

Different towns would have had their own characteristic smells, based in large part on the nature of local industry. In my own book Tulipomania, I discussed the smells of the Dutch town of Haarlem (a great centre of brewing and linen-dyeing) in the early 17th century:

The city stank of buttermilk and malt, the aromas of its two principal industries: bleaching and beer. Haarlem breweries produced a fifth of all the beer made in Holland, and the town’s celebrated linen bleacheries, just outside the walls, used hundreds of gallons of buttermilk a day to dye cloth shipped to the city from all over Europe a dazzling white. The milk filled a series of huge bleaching pits along the west walls, and each evening it was drained off into Haarlem’s moat, and thence into the River Spaarne, dyeing the waters white.

Last, but not least, of course, there were the smells of the human population itself, with its unwashed, decaying or diseased bodies. The lack of dental treatment available in the period meant that most people would have suffered badly from bad breath. At least until the advent of sugar in the diet in the early modern period, decay was not as common as it would become – the grain-based diet of the period tended to wear down teeth to flat but regular planes, without leaving crevices in which food could fester. But archaeology reveals extensive evidence of plaque build-ups that would have been very noticeable to anyone in close proximity. Dante likens the stench of the hellmouth to the stink of human breath, and Jones notes that in medieval Wales, “a peasant woman could divorce her husband on the grounds of his halitosis.” (Jones and Ereira, 29)


Peter Ackroyd, London: The Biography; Valerie Allen, On Farting: Language and Laughter in the Middle Ages; Peter Brimblecombe, The Big Smoke: A History of Air Pollution in London; Fran C. Chalfant, Ben Jonson’s London: A Jacobean Placename Dictionary; Emily Cockayne, Hubbub: Filth, Noise and Stench in England; Mike Dash, Tulipomania; Sarah Gordon, Culinary Comedy in French Medieval Literature; Matthew Green, London: A Travel Guide Through Time; Terry Jones and Alan Ereira, Terry Jones’ Medieval Lives; PM Mitchell, Sanitation, Latrines and Intestinal Parasites in Past Populations; Carol Rawcliffe, Urban Bodies: Communal Health in Late Medieval English Towns and Cities; CM Woolgar, The Senses in Late Medieval England

Q: Your points about smelly industries make me wonder about something: I vaguely recall reading that specifying where tanneries and such could be located was a very early form of city building code. IOW, the people of the time recognized that these tasks were unpleasant and wanted them to be located at a slight remove. Is there any truth to this recollection?

A: There was some regulation – there is evidence from early 14th century Norwich that polluting industries were forced to locate downriver of the main population, and Stanford’s Ordinances of Bristol notes that Bristol soap-boilers so polluted the Avon that they were ordered to halt the practice of throwing waste ash into the waters for fear that it would lead to “the utter decaie and destruction of the same river.” But this was rare and the product of severe and repeated problems. The reality is that small-scale change was more often achieved by bringing cases to a court than by pre-emptive law-making.

So Kermode, in Medieval Merchants: York, Beverley and Hull in the Later Middle Ages (p.19) and Schofield and Vince, in their Medieval Towns (p.144), both point out that clear-cut zoning of occupations was not a feature of medieval towns; certain industries did cluster together, as is commonly demonstrated by surviving street names – a study of Ghent shows distinct quarters for carpenters, drapers, mercers, fishmongers and leatherworkers – but this was more a matter of convenience than law-making. The same effects help explain clusters of related industries: tanners, for example, used the bark discarded by carpenters.

London Bridge, with Southwark, on the south bank of the Thames, in the foreground, from an engraving by Claes Jansz. Visscher, 1616.

That said, the London tanning industry was based largely in Bermondsey, on the far side of the river from most of the city, in part because it was also, notoriously, in a part of town that was much more lightly regulated and policed than the City of London itself. That is why Southwark was also the centre of London’s disreputable (and closely connected) theatre and prostitution industries. Cockayne notes that “location was the cause of some nuisances” – meaning indictments brought because of inconvenience – and that while “all citizens drank beer, used candles, and wore shoes, few wanted to live near a brewer, a chandler, or a tanner.” (Cockayne, 21). In Manchester, “leet juries” had the power to hear cases involving “noysome” – meaning smelly – inconveniences.

Incidentally, the word noisome itself is a contraction of “annoysome”.

Q: Was it likely that garum smelt significantly worse than modern Thai/Vietnamese fish sauce, or even Worcestershire Sauce? Both are made from fermented fish.

A: The best garum was made from the guts of rotten mackerel. The bones were taken out and the flesh and fish blood was mashed up together and poured into a large amphora. A layer of strong-tasting herbs like dill, mint and oregano was tipped on top, then lots of salt – ‘two finger lengths’, one recipe said, which is about 12cm. The Romans added more layers of fish, herbs and salt until the jar was full. Then they left it lying in the hot sun for a week until the fish had gone off and was pretty rank.

Making garum – a modern reconstruction of an ancient method.

After that, the mixture was stirred every day for three weeks, before it was sieved and the fish sauce was sent to Rome, where it fetched high prices.

On this basis one might suppose garum was better smelling than South East Asian fish sauces, which don’t usually contain herbs. It would depend on how you feel about the smell of mackerel compared to the smell of anchovy, the typical ingredient in a Thai fish sauce. Both are oily fish, so perhaps it’s not too different. The Roman method, which left the fermenting fish lying around in an open amphora, likely created stronger concentrations of smell than the Asian method, in which fermentation takes place inside a sealed barrel.

The best garum came from Barcelona in Spain. The factories that made it smelled so bad that they had to be built miles away from the nearest houses. Ordinary Romans were banned from making garum in their own homes because the stench was so awful the neighbours tended to complain.



Q: What exactly were the relics on which Harold Godwinson swore his oath to William of Normandy?

Harold swears his fatal oath of fealty to William – from that masterpiece of Norman-era propaganda, the Bayeux tapestry.

A: The exact nature of the relics is not specified in the most reliable contemporary sources. According to Orderic Vitalis – an Anglo-French monk who was writing more than half a century later, but who had been brought up in a Norman monastery – Harold super sanctissimas reliquias juraverat: he “swore on very sacred relics”. Anglo-Saxon sources from the 1060s are completely silent on the subject, however, and it’s as possible to argue that the entire story of Harold’s oath was an invention of the Norman propaganda machine, used to justify William’s invasion, as it is to suppose that the Saxon chronicles deliberately suppressed an incident discreditable to one of their own. Elizabeth van Houts, in her 1992 edition of William of Jumièges’s Gesta Normannorum Ducum, argues that the entire story is based on a single, official, Norman account, drawn up to be presented to the Pope in order to justify and seek approval for the invasion. Even Norman sources suggest there was an element of deception involved – that Harold only realised too late the holiness of the oath he’d sworn, because the presence of the relics was concealed from him.

That said, some later sources are more specific. The Brevis Relatio, written by a monk of Battle Abbey around 1130, the Warenne Chronicle (about 1157) and Wace (a canon at Bayeux), writing after 1155, say one of the reliquaries was the “ox eye” or “bull’s eye”. Wace’s passage says that William “ordered that all the holy relics be assembled in one place, having an entire tub filled with them; then he ordered them covered with a silk cloth so that Harold neither knew about them nor saw them, nor was it pointed out to him. On top of it he placed a reliquary, the finest he could choose and the most precious he could find; I have heard it called ‘ox-eye’.” Apparently this was because the centre of the reliquary was an elaborate mounting for a magnificent gemstone.

William the Conqueror (c.1028-1086): a decidedly non-contemporary image.

If this identification is accepted then the relics may have been those of St Pancras, which the Warenne Chronicle explicitly identifies with the ox-eye reliquary. Pancras, a Roman citizen of Diocletian’s reign, was especially venerated in England because St Augustine had been despatched from Rome bearing some of his relics when he was sent to convert the Saxons. However, the cult of St Pancras was unknown in Normandy, which perhaps suggests that Warenne was wrong, and implies that it was more likely that any oath was sworn on locally venerated relics.

If so, then much depends on where exactly an oath may have been sworn. There is no agreement on this point. Orderic says Rouen, William of Poitiers says Bonneville-sur-Touques, and the Bayeux Tapestry puts it in Bayeux. Stephen D. White, in his “Locating Harold’s oath and tracing his itinerary” in Pastan and White [eds] The Bayeux Tapestry and its Contexts, takes this latter identification and uses it to suggest that the most likely candidates are the bones of Saints Rasyphus and Ravennus. He argues that Duke William’s brother, Bishop Odo of Bayeux, is known to have commissioned a new reliquary to hold these c.1050, which can be identified with the image of the reliquary shown on the Bayeux Tapestry.

However, Odo’s name is not associated with this stage of the tapestry narrative, meaning it’s difficult to establish who exactly selected the reliquaries for any oath that was sworn. There’s no way to be certain, but if the incident occurred, and if it occurred at Bayeux, then most likely Odo would have been involved. As such, White’s identification, while it must be considered tentative, is certainly not implausible.

Q:  I remember reading David C. Douglas’s “William the Conqueror” and one line stood out. I’m paraphrasing since it’s been about eight years since I read it and a quick look through the book was unsuccessful in finding the exact quote, but it was something like, “There can be no doubt [Harold Godwinson’s oath to William] was genuine.” My immediate thought was “wait, what, it sounded like total bullshit made up by the Normans.”

What is the current historical consensus (or majority view) as to whether or not Harold ever swore an oath of loyalty to William?

A: There’s no question that Harold’s visit to Normandy, and the oath-swearing ceremony in which he vowed to support William’s claim to the English throne, are accepted by pretty much 100% of the authorities on this period – in fact the paper I linked to above is the only one I can recall reading that absolutely dismisses the idea. Other authorities question the veracity of Norman accounts of events, to varying degrees, but not that the events took place.

While it’s true that no contemporary Anglo-Saxon source mentions a visit by Harold to Normandy, and while I personally share your instinctive scepticism about the traditional account of events, it was nonetheless a little glib of me to imply that the two possibilities (there was a visit and an oath-swearing; there wasn’t) are equally probable. Let’s review the evidence as it’s generally set out.

Edward the Confessor, whose childless death precipitated a succession crisis in late Anglo-Saxon England.

[1] Could Harold have visited Normandy?

Saxon sources are silent about Harold’s whereabouts and activities between the conclusion of his campaign in Wales in 1063 and July 1065, when he is recorded as giving orders for the construction of a new hunting lodge at Portskewet to replace one lost in the course of a Welsh raid. Norman sources give no exact date for the supposed visit, but William of Poitiers places it at about the time of William’s acquisition of the county of Maine, which was complete by 1064. So there’s a sufficiently large hole in Harold’s known itinerary for the visit to have taken place at the time that Norman sources suggest it did. We have to conclude it’s possible he did travel to Normandy in 1064.

[2] How plausible is it that Harold could have gone to Normandy without leaving any trace in the contemporary record?

Perfectly plausible. We know this because Harold also made a visit to Flanders in 1056 which likewise left no trace in English sources. We only know about it because he witnessed a diploma drawn up in Flanders that year that has, fortuitously, survived. In addition, assuming that some ceremony in which Harold promised to support William’s claim to the English throne did take place, albeit under duress, Harold would have had no motive for broadcasting his actions when he returned home. There is one tiny fragment of evidence that suggests the Anglo-Saxon polity may have been aware of an oath-taking ceremony prior to the Conquest; this is a passage in the Vita Eadwardi Regis (a life/hagiography of Edward the Confessor) that observes that Harold was “rather too generous with his oaths (alas!)” But even if this gnomic comment refers to Harold’s Norman visit – and Frank Barlow prefers to interpret it as suggesting that Harold, unlike his brother Tostig, “had the ‘smoothness’ of their father,” Earl Godwin – the VER was probably not completed until 1067 and the sole manuscript of it that we have appears to date to c.1100. We can’t rule out the insertion of a comment intended to win favour with the new king.

[3] What motive would Harold have had for visiting Normandy?

Four have been suggested.

The first is that he didn’t intend to go to France at all, but was caught in a storm while at sea in the English Channel. (This is first suggested by a late but very highly-regarded source, William of Malmesbury. As for what reason he might have had for being at sea … Malmesbury says a fishing expedition, and the Bayeux Tapestry’s rendition of this part of the story features a line of thread that has been interpreted as a fishing rod. If true, this would make this the sole reference to a high English noble engaging in fishing as a sport, rather than the more conventional hunting or falconry.) A Scandinavian source, King Harald’s Saga, also says bad weather, though it suggests Harold had been caught while sailing for Wales. Whatever the truth of any of this, it’s at least highly plausible to suppose that Harold had not intended to go to Normandy, as opposed to elsewhere in what’s now France, for reasons that I will discuss below.

Old Bayeux – the Norman town is possibly the likeliest setting for the controversial oath-taking ceremony Harold is alleged to have submitted to.

The second relates to another odd reference in the VER: that Harold was engaged in a study of the “princes of Gaul” and “noted down most carefully what he could get from them if he ever needed their services in any of his projects.” This has been interpreted as meaning he possibly undertook an expedition in search of a marriage alliance for one of his daughters, or perhaps a similar journey on behalf of his king.

The third is that he went to secure the freedom of two relatives – one of them his nephew, Hacon – who had been held hostage by Duke William since 1052. This is the view of the Canterbury chronicler Eadmer, writing in c.1100, but since it means that Harold would have willingly placed himself at the mercy of Duke William to secure the freedom of people whom he had apparently made no effort to free in the years 1052-64, I find this implausible.

The last is the official Norman view: that Harold was sent by Edward to confirm his promise of his throne to William. This last is also, I believe, highly unlikely, for three reasons. First, the throne was not entirely within the gift of the childless Edward; confirmation of an “outsider” candidate such as William would, at minimum, have required the approval of the Saxon witan, and Harold himself, it can very plausibly be supposed, would have vehemently opposed it – since it would very likely have resulted in a sharp fall in his personal power and prosperity. Second, when Edward wanted to name Edward (and later Edgar) Ætheling as his heir, he had him brought back to England, where he could be presented to the entire Saxon nobility; had he really wanted William to be accepted as his heir, it would have made more sense for William to be brought to England to meet the whole Saxon leadership than it would for one Saxon earl to travel to Normandy. Third, while the idea that there was a set line of succession is anachronistic in this period, it’s clear that Edward’s nephew, the ætheling Edgar – grandson of Edmund Ironside – was generally accepted as his heir at this point, and in fact had been brought back with his (by now deceased) father from exile in Hungary specifically to fill the role of heir apparent. It was only the timing of the Confessor’s death, which occurred when Edgar was still aged only about 14, too young to lead an army in battle, that made it possible for Harold to seize the throne in the extraordinary circumstances of 1066.

[4] What motive would Duke William have had for requiring Harold to swear an oath?

In my view, it’s here that the main conventional accounts of the oath-swearing ceremony break down. The purpose of the ceremony is quite clearly stated in the Norman sources to have been for William to secure Harold’s backing for his claim to be heir apparent to Edward’s throne. For William to have wanted to secure Harold’s support in this way makes perfect sense in light of what happened in 1066 – that is, it makes sense if we assume it went ahead on the basis that the Confessor would die in circumstances that made it possible for Harold to seize the throne for himself, and hence in circumstances that [i] made William’s claim much easier to press and [ii] made it necessary for him to dispose of Harold as a rival claimant. It makes very little sense in the circumstances that actually existed in 1064, when Edgar was very clearly the most obvious candidate for the throne – and, moreover, one whose recall from Hungary post-dated the supposed promise from Edward that William based his claim to the throne on.

Cnut the Great (right) and his wife, Emma. The Danish warlord took the Saxon throne by conquest in 1016, but his marriage to a Norman woman would have consequences for the kingdom three decades after his death.

To clarify this last point: Edward the Confessor certainly had pro-Norman sympathies. His mother was Norman and, during the period of Danish supremacy in England (the reigns of Cnut, Harold Harefoot and Harthacnut, 1016-1042), he had been sheltered at the Norman court. William’s claim was based on a promise Edward had supposedly made in c.1051-2, at the time of the fall of Harold’s father, Godwin, from royal favour. It is generally supposed that any such promise would have been engineered by, and made in the presence of, the Norman archbishop of Canterbury, Robert of Jumièges, prior to Robert’s deposition and exile at the behest of the resurgent Godwin family in 1052. Again, the very existence of such a promise is disputed, but if it had been made we can certainly say [i] that Edward had no absolute authority to make it and [ii] that, however much he did so under compulsion, it must have been effectively superseded or withdrawn by events post-1052, when Godwin was restored to his position as the chief power behind the throne. Given that the invasion of England from Normandy was an entirely unprecedented event, one that required the construction of a huge navy from scratch and that William’s barons apparently thought unfeasible, it would have made very little sense for William to assume that getting Harold’s support, via an oath sworn under compulsion – since, if Harold was in Normandy in 1064, he was effectively William’s prisoner there – would have helped much to cement a claim that would have set William against a legitimate member of the Saxon royal house, a close relative of the king, whose claim was acknowledged by the whole country (and de facto by the pope) at this time.

For me, such an action only makes sense in the context of William’s need to secure papal support for a claim made in the face of Harold’s kingship, not Edgar’s. That’s my main reason for suspecting that the oath-swearing was an invention of the Norman propaganda machine in 1066, not something that would plausibly have taken place in 1064. It is not impossible that a man as ambitious as William would have been willing to attempt an invasion of England in support of a claimed promise dating back to the early 1050s, and in the face of the accession of an older Edgar Ætheling. But it seems highly doubtful he could have carried the papacy and his barons with him easily in order to assert such a claim. If he knew that, then forcing an oath-taking ceremony on Harold makes comparatively little sense.

[5] Why are historians of the late Saxon and early Norman period so willing to support the oath-taking accounts given by Norman chroniclers?

Essentially, as Harold’s recent biographer Ian W. Walker puts it in his Harold: The Last Anglo-Saxon King (p.105), because they feel that Norman accounts “must have a basis in truth, otherwise their authors would lose [he means would have lost, at the time] credibility completely.” To believe this, you need to believe that [i] the true events of 1064 and the oath-swearing ceremony were widely known in the period 1066-1100, and [ii] that public credibility (in the very limited sense of credibility among the audience of likely readers of these manuscript chronicles) mattered more to chroniclers such as William of Jumièges and William of Poitiers than the favour of King William. I would respond that there is no evidence of general awareness of an oath-swearing ceremony in this period – much less of any willingness on the part of those who were familiar with it to challenge the version mandated by the man who had become one of the most powerful rulers of his age – and that both chroniclers were intimately associated with William and his court, and therefore highly unlikely to care about anything as much as they cared about retaining William’s favour. This undoubtedly made them potential, if not actual, mouthpieces for Norman propaganda, including the oath-taking story.

Tl;dr Historical consensus strongly favours the reality of the oath-taking story, but there are, nonetheless, reasons to doubt it is correct.



Q: What prompted the first emperor of Qin to have hundreds of scholars buried alive and their works burned? If history was the primary concern, what interpretations and narratives was he trying to suppress? Was live burial a “normal” punishment or an exceptional one, to make an example?

Qin Shihuangdi, the First Emperor of China (259-210 BC) – a later, romanticised portrayal.

A: There are several points to consider here, and I will try to cover them one by one.

First, the general idea of live burial was not an invention of the First Emperor. More than 1,200 sacrificial burials dating to the Shang dynasty have been excavated at Xibeigang alone, and these include “a few children who seem to have been trussed up and buried alive.” A second set of live burials – both men and women – dating to the Warring States period has been uncovered at Langjiazhuang. This type of burial is termed “human offerings” by Chinese archaeologists, as distinct from the “companions in death”, typically young women, who were dignified with their own burials. Both types of burial seem to have involved the executions of wives, slaves or attendants who were to accompany some eminent dynast or court official into the afterlife.

In addition, Chinese chronicles record that during the latter years of the Warring States period, not long before the First Emperor’s birth, the survivors of a Zhao army that had invaded the state of Qin, but been surrounded and starved into submission, were supposedly “buried alive” en masse by Bo Qi (AKA Bao Qi), the military genius who played a major part in setting up the eventual victory of Qin and the First Emperor over its rival states [Cambridge History of Ancient China I, 193, 640, 734].

This is the closest we get to an example of burial alive apparently being used as an exemplary punishment before Qin Shihuangdi’s time, but there is a vitally important caveat that applies both to Bo Qi’s atrocity and the deaths of 460 scholars in 212 BC that Sima Qian records, in his Records of the Grand Historian, were ordered by the First Emperor. This relates to the correct way to translate “k’eng”, the word used to describe the deaths of both the scholars and the men of the defeated Zhao army. In its noun form, k’eng means “pit”, and it is for this reason that it has been understood since at least the 16th century to mean “buried” or even “buried alive.” However, both Emmanuel-Edouard Chavannes and Timoteus Pokora have convincingly argued that it should be translated to mean only “to destroy” or “to put to death”; hence there has to be considerable doubt as to whether any scholars were buried alive in the First Emperor’s reign at all. This is not an insignificant point, since the extreme nature of the punishment is integral to the way in which the First Emperor has typically been viewed both by later Chinese chroniclers (who of course could readily imagine themselves suffering similar fates) and by modern historians. [Chavannes, Les Mémoires historiques de Se-ma Ts’ien traduits et annotés, II, 119; for Pokora’s views see Archiv Orientální 31 (1963), 170-1.]

The Daoist alchemists consulted by the First Emperor sought recipes for immortality and wealth in nature.

Second, if we go back to the works of Sima Qian, it becomes clear that the idea of burning books, if not that of executing scholars, was not the First Emperor’s, but rather a policy that was urged on him by his chancellor and chief advisor Li Si (Li Ssu in the old Wade-Giles system of transliteration).

The sequence of events as set out by Sima Qian is that a number of Confucian “scholars of wide learning” attended an imperial banquet held at Qin Shihuangdi’s palace in 213. One of these dared to criticise the Emperor for not following the example of Shang and Zhou emperors in giving fiefs to his sons and to “meritorious ministers,” a policy that, it was intimated, was crucial to the maintenance of these earlier dynasties. Li Si responded angrily that the “stupid literati” did not understand that things had changed and that “now the world has been pacified, laws and ordinances issue from one source alone.” Criticism of “the present age” would only “confuse and excite the ordinary people,” leading to a decline in imperial power. “It is expedient that these [criticisms] be prohibited,” he concluded.

The practical upshot of Li Si’s recommendation was an order that all the relevant records in the imperial record bureau be burned, and that any public discussion of the two most important, the poems of the Book of Songs and the chronicles and collections of speeches contained in the Book of Documents, be punishable by death. Furthermore, Li Si urged that orders be given that all copies of the prohibited works that existed outside the immediate control of the imperial government be burned within 30 days.

This order applied specifically to works of history, custom and law. Works relating to divination, agriculture, medicine and forestry were excluded from the edict. Moreover, even copies of the forbidden works were authorised to be preserved within the archives of the Bureau of Academicians. The purpose of the order that interests you, therefore, was explicitly to prevent unrest, and not to utterly destroy knowledge, nor, as is sometimes supposed, to establish a Pol Pot-style “Year Zero” for a new period of Chinese history, beyond which future historians would not be able to penetrate. It was possession and discussion of the forbidden works by scholars who were beyond the immediate control of the state (unlike those manning the Bureau of Academicians) that Li Si really objected to.

Most scholars suppose that the edict remained in place for no more than about five years (though it was not formally rescinded until 191 BC) and hence that the loss and destruction of old texts was less than total. Nonetheless, if only by drastically limiting the number of copies of ancient works that actually survived, the impact of the decree was considerable. Many of the archived works would have been destroyed when Han armies burned the Qin palaces at Xianyang in 206 BC. It’s worth pointing out, however, that this sort of attrition has been a normal feature of Chinese history. We have a catalogue of the Han imperial library as it existed in the first years of the first century (more than two hundred years after the First Emperor’s time), for example; of the 677 works listed, three-quarters are now lost to us.

The members of the Terracotta Army constructed to guard Qin Shihuangdi in death are now a major symbol of the emperor’s wealth and power.

As for the “execution of the literati”: that took place one year after the infamous burning of their books and, according to Sima Qian, for a different reason. Its proximate cause was the First Emperor’s infamous determination to keep his movements hidden, which the Grand Historian attributed to the advice of the magician Master Lu, who was brought in to assist the Emperor in his search for an elixir of immortality. One consequence of this was that the emperor would have anyone known to have revealed his whereabouts put to death.

On one visit to the east coast, Qin Shihuangdi was angered to note the large numbers of carriages and attendants surrounding Li Si – these, he felt, clearly represented a risk that his whereabouts would be disclosed and the magical forces needed to secure the desired elixir dissipated. This news reached the ears of the chancellor, who, fearing the Emperor’s wrath, took immediate steps to reduce their numbers. When Qin Shihuangdi realised that there had to be an informer among his attendants, he had the entire group who had been with him at the time executed.

This drastic action, in turn, prompted unrest among Master Lu and the scholars who associated with him. Lu and several other magicians clearly feared they might be next and fled – taking with them the Emperor’s chief hope of attaining eternal life. Qin Shihuangdi ordered an immediate inquiry into how and why Lu and his helpers had been able to flee, and when the other scholars in his entourage blamed one another, he had 460 of them selected for execution. In this case, therefore, the “scholars” we hear so much about were likely Daoists, alchemists and magicians rather than court historians or academics.

[Main sources for the discussion above: Derk Bodde, “The state and empire of Ch’in,” in The Cambridge History of China I, 69-72; Frances Wood, The First Emperor of China pp.40-5, 78-88.]

With regard to the follow-up question on sources: Sima Qian’s Records are indeed the most significant resource for the reign of the First Emperor. They are important not only because they are the only detailed source we have for much of what happened, but because they were compiled free of much of the inbuilt bias that bedevils later Chinese historiography. As the Cambridge History of China puts it (I, 972):

As yet they were not bound by the inhibitions under which their successors labored. They were not required to display their masters as paragons of fine behavior, whose predecessors had rightly deserved destruction. As yet they were not obliged to portray the past in terms of the steady influence exerted on mankind by the force of Confucian teaching.

[Such characteristics first emerged with full force in histories of the remarkable usurper Wang Mang, whose highly controversial reign at around the time of Christ separates the Former from the Late Han; I discuss these problems in more detail here.]

Two other important sources do exist, however. The first is the Han shu or Book of Han, a chronicle of the Former Han Dynasty, modelled on the style established by Sima Qian and covering the period 206 BC to AD 5 in 12 volumes. This work is late – completed c. AD 111 – but its early coverage deals with the overthrow of the Qin dynasty, and some of the people who figure in it were important in the First Emperor’s reign.

The second source that needs to be considered is archaeological and numismatic evidence – most obviously the imperial tomb from which the famous Terracotta Warriors have been unearthed, but also numerous lesser archaeological sites – most notably record steles – and coin finds.

Good sources of detailed information on the historiography of the period include Van der Loon’s “The ancient Chinese chronicles and the growth of historical ideals,” in Beasley & Pulleyblank’s Historians of China and Japan, and Michael Loewe [ed], Early Chinese Texts: A Bibliographical Guide.

Sima Qian, the “Grand Historian” and author of the only major surviving account of China during the Qin dynasty, was caught up in court intrigues and found guilty of serious offences. He chose castration over execution in order to finish his work – an example to all who have come after him.

Q: Could you please also comment on several recent discoveries of Qin period bamboo slips? To what extent did these newly discovered texts change our perspective on Shihuangdi and his empire?

A: There are two such discoveries, I believe: one a set of material from a Qin era tomb in Hubei, which total 11,000 strips, comprising 10 distinct texts, including some Qin statutes; the other a cache of 36,000 strips found in a well in Hunan.

The Hunan strips are official documents produced by low-level officials in the district. They can tell us something about the local organisation of the Qin state – for example, the details of its postal service, its document styles and its bureaucracy – and their dating also fills in some things we did not know about the Qin calendar. The Hubei strips tell us more about the operations of the Qin state at a slightly higher level. They reveal many hitherto unknown details about Qin administration, its requirements, and its accounting practices. Also included in the latter collection are some biographical annotations concerning the magistrate in whose tomb the slips were laid, and a complete divination almanac showing the best days for making sacrifices, digging wells and so forth.

Broadly, we can say that the two collections are helpful in understanding everyday life in the Qin period, but they do not reveal much about the top level workings of the Qin government or the doings of the First Emperor. As such, they fit more into the continuum of Chinese social and economic history than they illuminate the distinct political upheaval caused by Qin Shihuangdi.



Q: Were the pyramids still kept in repair at the time of Cleopatra?

The pyramids of the Giza group today.

A: We have only a little information about the state of Egyptian structures in the late Pharaonic/Roman period, so it’s difficult to be precise as to the state of repair of the pyramids – or any other Egyptian monuments – at this time. However, the short answer to your question is that, at least while Egypt retained some independence, occasional restoration work was done on some monuments, usually for religious/magical reasons to do with aiding souls that had already passed into the afterlife. For this reason, Pharaonic restoration work tended to involve erecting new inscriptions rather than making extensive repairs to old monuments.

Even this work seems to have largely ended by the time Egypt passed under Roman control (at least we have no evidence of its continued practice), and the Graeco-Roman period is often considered to mark the start of “tourism” to Egypt. Certainly it was in this period that many of the monuments famous today first seem to have been visited on a regular basis simply because they were remarkable sights.

That’s the summary; here are a few salient details:

  • We do know that Egyptians completed some repairs to the sphinx soon before the reign of Thutmosis IV began in about 1420 B.C. The monument was then almost buried in sand (as it later would be again), and Thutmosis, who was one of the then pharaoh’s sons but not actually in line to succeed him, had it excavated and built a retaining wall to prevent it sanding up again too easily. His workmen also re-secured some blocks from its back in their proper places. This was not, however, a typical thing for an Egyptian ruler to do; we know from the so-called “Dream Stele” left at the site that Thutmosis’s motive for the restoration was that he had had a dream in which the sphinx promised him he would become pharaoh if he would restore it.
  • Later, in the reign of Ramesses II (c.1280 B.C.) the two main pyramids at Giza appear to have undergone some restoration. This work is attributed to Ramesses’ son Khaemwaset, who added hieroglyphic inscriptions to monuments at Giza, Saqqara and Dahshur. Although Khaemwaset is sometimes called “the first Egyptologist,” these additions had explicitly religious functions; while a contemporary inscription records that the prince “loved antiquity and his noble ancestors,” and could not bear to see old monuments fall to ruin, his texts were created because they “literally renewed the memory of those buried within, benefitting their spirits in the afterlife,” Manassa notes.
  • A deep scar marks the north face of the third pyramid at Giza, tomb to the pharaoh Menkaure. From an 1842 sketch by EJ Andrews.

    Possibly associated with this same period is evidence from within the Great Pyramid of limited repair and replastering work that hardly fits the MO of the typical tomb robber. It’s not possible to date this but it’s usually attributed to the Pharaonic period.

  • Considerably earlier, during the 12th Dynasty, a ruler named Khnumhotep set up an inscription (first transcribed by Percy Newberry in 1890-1) which implies that some Pharaonic-style conservation work took place in this period. His inscription boasted: “I caused the names of my fathers which I had found destroyed upon the doors to live again…”
  • At some point during the Middle Kingdom, at the height of the Cult of Osiris, the royal tombs at Abydos were excavated in search of Osiris’s tomb. When the diggers uncovered the First Dynasty tomb of Djet, they took it to be the deity’s resting place and so restored it, building a new roof and an access stairway.
  • In the Third Intermediate and Late Periods, older monuments were studied so that their styles could be replicated in new buildings. Some dilapidated temples were restored at this time. The work was not extensive, however, and with the decline of the state, funds for restoration probably weren’t available in any case. Thompson states that “by the Roman period, Egypt was little more than a mass of ruins.” What survived was generally that which had been built most solidly – not least, of course, the pyramids.
  • Both Strabo (writing within 6 years of Cleopatra’s death, in 24 B.C.) and Diodorus Siculus give accounts of the Great Pyramid that imply they personally visited the site and were taken around it by local guides, who told them stories about its construction. Diodorus, who visited in around 50 B.C., writes of the Great Pyramid, in chapter 64 of his Universal History, that he saw “the entire structure undecayed” – though it would be unwise to assume this was a careful description.
  • That’s not least because Roman-era graffiti was found inside the Great Pyramid early in the 19th century, written in soot on the roof of the subterranean chamber – which again strongly suggests that the pyramid was open to at least some visitors at this time. The fact that the pyramid’s Descending Passage was left open, not sealed, argues against the idea that the local people were keeping the monuments “in repair” in Cleopatra’s time, and might suggest they no longer considered them sacred in this period, several centuries after the arrival of dynasties of Greek rulers.
  • We also know that Romans often visited other Egyptian sites to see their wonders – popular destinations included Amarna, Abydos, Hatshepsut’s mortuary temple, Karnak and the Valley of the Kings. Unfortunately, all we have in these cases are inscriptions, not accounts of what exactly these sites looked like at the time. But again this argues against Pharaonic monuments being considered sacred and inviolate in this period.
  • There are numerous other Graeco-Roman graffiti on various Egyptian monuments, perhaps most famously on the plinths and legs of the pair of sandstone colossi commemorating Amenhotep III (reigned c.1350 B.C.) near Luxor that are popularly known as the Colossi of Memnon. One of these statues was felled by an earthquake in 27 B.C., only three years after Cleopatra’s death, and it was after that occurred that the statue famously began to emit an unusual sound, said to have been like the string of a broken lyre, soon after sun-up on some mornings. Largely thanks to this phenomenon, the Colossi acquired a reputation as an oracle. Because of the fame thus acquired, and the graffiti left by visitors, we know something of their history around this time, and it’s clear that while the damaged statue was not immediately repaired, the fallen portions were replaced about 200 years later – a restoration popularly ascribed to Septimius Severus, who visited the statues but failed to hear the sound shortly before 200 A.D.


Thomas W. Africa, “Herodotus and Diodorus in Egypt,” Journal of Near Eastern Studies 22 (1963); Colleen Manassa, Imagining the Past: Historical Fiction in New Kingdom Egypt; Maria Swetnam-Burland, Egypt in the Roman Imagination; Jason Thompson, Wonderful Things: A History of Egyptology 1: From Antiquity to 1881