A Complex Roman System
The spread of religions in the Roman world offers great examples of how complexity science can help us better understand important aspects of the Roman past. Mithraism, early Christianity, Jupiter Dolichenus: the processes behind the rise of these religions were extremely complicated, and the success of their dissemination was the result of many factors, from the individual decisions made by believers and non-believers up to state policies and attitudes towards religions. But what were the key factors, how important were the choices of individuals, and why were some religions extremely successful while others were not? Was it the inherent nature of a particular set of beliefs? Or maybe the coincidence between the decisions made by emperors such as Galerius and Constantine about early Christianity and the state of the society at that moment? Perhaps success was due to a genuine improvement the religions brought to people’s lives? Or their integration with the organisational structure of the Roman state? Whatever the combination of reasons, archaeological and historical empirical evidence can only take us so far in establishing and testing theories about the main causes of these complex processes. By complementing and testing hypotheses based on empirical sources, complexity science can help us understand aspects of the successful spread of religions. It offers a research perspective and tools to treat such complex phenomena as emerging through the interactions of large numbers of people in specific historical and institutional contexts: phenomena whose properties and functioning are very different from those of the individuals that gave rise to them.
Roman Studies abounds with examples of such phenomena: institutions and the Roman state, communities and social networks, demography and urbanisation, cultural transmission and technological innovation, trade, agriculture and the impacts of climate change. In each example, the behaviours of multiple individuals and their context collectively gave rise to properties that cannot be understood as merely the sum of individual practices. As such, these phenomena represent complex systems that are most appropriately studied via the perspective and tools of complexity science.
The study of complex systems has primarily been undertaken in contemporary settings, in disciplines such as physics, ecology, medicine, and economics. Yet, in recent years archaeologists have increasingly employed such approaches for the study of the past, while the concepts of ‘emergence’ and ‘complexity’ have been considered central to archaeology’s grand challenges (Kintigh et al. 2014). Formal modelling approaches such as scaling studies (e.g. settlement scaling theory; Ortman et al. 2014; 2015; Hanson and Ortman 2017; Hanson et al. 2017; Hanson et al. in press), agent-based modelling (Lake 2014; Cegielski and Rogers 2016; Davies and Romanowska 2018) and network science (Brughmans 2013; Knappett 2013) allow archaeologists to represent their theories, explore the implications of aggregating individuals’ behaviours, and identify and interpret emergent phenomena in a new way. However, for the majority of Roman archaeologists and historians, these approaches remain elusive, as they have only been applied sporadically in Roman research contexts or using Roman archaeological data. Instead, the field has relied heavily on the empirical record, aiming to infer causal mechanisms from patterns in the data using conceptual modelling. This is a crucial aspect of any research endeavour as it generates new hypotheses; however, inductive reasoning (trying to infer causality from correlation) cannot fully verify the correctness of such hypotheses. The rarity of the necessary final step — testing the plausibility of hypotheses — is at least partly due to a lack of accessible, comprehensive, and critical reviews of how formal modelling approaches and complexity science concepts can make useful contributions in Roman Studies. This paper aims to fill that lacuna.
Complexity science has great potential as a research perspective for Roman Studies, and we argue that the formal computational tools associated with it should complement our existing toolbox. Before presenting a manifesto that sets out our arguments, we provide a brief introduction to complexity science and the advantages of working with formal methods to explore theories about past complex systems in a scientific framework. The manifesto is followed by an overview of the research themes in Roman Studies for which a complexity science perspective offers particular potential. In this reference overview, we provide brief introductions to key concepts and methods and discuss how their application can provide important contributions in a range of research contexts within the broader field of Roman Studies.
What is Complexity Science?
Complexity science is a branch of science concerned with studying complex systems. Contrary to the conventional reading of the word ‘complex’ in archaeology (as in, ‘complex societies’), a ‘complex system’ does not imply ‘complicated’, ‘hierarchical’ or even ‘large’. A complex system is a system in which multiple entities (e.g., traders, brain cells, or birds) may interact with each other and with their environment following often simple rules (e.g., ‘change religion if the majority of your social contacts do so’, ‘buy low, sell high’, ‘align your flight with nearby birds’). These simple interactions give rise to unexpected global, population-level patterns that have vastly different properties than the entities that produced them (e.g., Christianity as a world religion, stock exchange fluctuations, birds flocking). Thus, the meaning of ‘complex’ is best expressed in the saying ‘the whole is greater than the sum of its parts’. This property of complex systems – the disparity between the characteristics of the entities that make up the system and the global outcomes of their behaviour (believers vs. Christianity as a world religion, traders vs. stock markets, birds vs. flocking) – is called ‘emergence’. Emergence is a key concept in complexity science because it dictates the development and selection of appropriate methods to study complex systems (good introductions to the field of complexity science include Epstein 2006; Mitchell 2009; Downey 2012).
Why use Complexity Science and Formal Modelling Tools?
The research perspective of complexity science cannot be separated from its formal computational tools and scientific process: formal representation of theories, computational simulation modelling, falsification of hypotheses, and ability to replicate results. We will explain this strong link between tools and perspective by highlighting the four advantages of applying them:
- Dealing with emergent properties,
- Specifying formal theories,
- Hypothesis testing,
- Transparent, reproducible and cumulative research.
Dealing with emergent properties
The human brain is particularly bad at forecasting the behaviour of complex systems with emergent properties: it might be easy enough for us to imagine the outcome of an interaction between a believer in a religion and a non-believer, but to predict the successful spread and establishment of a world religion emerging from such interactions of millions of individuals is beyond the abilities of any human. Computers, on the other hand, are very adept at keeping track of simple calculations repeated over and over again. These simple operations can be used as formal representations of the behaviours and interactions of millions of individuals. Similarly, in other kinds of studies, repeated calculations done by computers are required to identify patterning in empirical datasets of thousands or millions of data points and to determine their correlations with modelled distributions: the human brain is far less reliable at millions of such repetitive tasks and would take vastly longer to process these large volumes of data. Thus, complexity science perspectives cannot be meaningfully applied to research problems without the use of formal mathematical and computational modelling methods. This notion applies to both data modelling and theory modelling.
Specifying formal theories
The application of formal, computational tools in research has benefits beyond the ability to understand the emergent properties of complex systems. Crucially, it enforces formalism in the definition of hypotheses and the analysis of data. This formalism ensures that there is only one possible reading of the propositions put forward by the researchers, therefore minimising the risk of ideas being misinterpreted or misused. For example, when a scholar theorises that the spread of Christianity was structured by the social networks connecting people throughout the Roman Empire, there are numerous ways in which these notions (social network structures, spread mechanism, interaction) could be interpreted. Thus, for this theory to be representable in computational tools using mathematics and computer code, all its components need to be unambiguously defined: what precise structure of social networks is theorised? What is the probability that a pair of Roman citizens connected in a social network influence each other’s beliefs? Formalism also enforces so-called ‘hygiene of thinking’, that is, limiting the scope for under-determination of scientific models. It is possible in verbal arguments and natural language to gloss over troublesome details, such as specifying the ability of two people living two thousand years ago to influence each other’s beliefs (do they live near each other? are they part of social groups that regularly interact?). Formalism prevents this from happening – without specifying all elements of one’s theory with concrete values or their ranges, the theory cannot be formally represented, and thus the ability of the theory to explain the data patterns as argued by the theory’s proponent cannot be demonstrated. One may worry that such unambiguously defined models (hypotheses) cannot be meaningfully formulated given the high level of uncertainty that is inherent to any study of past societies.
One may also worry that the precise social processes that took place two thousand years ago could not possibly be made concrete and formal. However, this is exactly what a model does – it is an abstract proposition of how the world might have worked, and whether it did in fact work this way, and is therefore correct, can only be verified if it is formally represented and tested against the available evidence (data). Only by testing multiple models of social processes two thousand years ago can we see which ones are consistent with the only direct remaining evidence of those processes – the data. Providing only under-defined theories prevents us from ever getting closer to explaining past phenomena. Avoiding unambiguous formalism does not remove the uncertainty; it compounds it.
Hypothesis testing
Formal representations enable testing alternative hypotheses against available evidence and determining their plausibility, at least in relation to other models. The probability that a given formal model correctly represents a past phenomenon can be established and compared to the probability of any other formal model developed to explain that phenomenon. Low-probability models and the hypotheses they represent can be rejected if they do not agree with the available evidence (archaeological and historical data), thus limiting the number of possible explanations for a studied phenomenon. This is a probability-driven process, meaning that with each new iteration of comparing an implementation of a model to data (including different datasets) we gain more certainty regarding the plausibility of the hypothesis. Note that the outcome of this process is establishing that some models are less probable explanations of a complex past phenomenon than other models, not necessarily discarding models completely. If the models representing one theory fail to match multiple independent datasets, but those of another theory do, we have established beyond reasonable doubt that the former theory does not portray the past as well as the latter. For example, the plausibility of each competing theory for the successful and rapid spread of Christianity, and its agreement with data, can be established. If a formal model that emphasises the structuring role of social networks on the spread of Christianity better reproduces the rate and scale of adoption of the religion than a formal model emphasising the role of government intervention through edicts, then future research efforts seeking credible explanations of this past phenomenon can focus more on the role of social networks than on that of government intervention.
Similarly, more complex formal models may one day be able to demonstrate how different degrees of social integration in different provinces combined with the specific timing of an imperial edict created the perfect conditions for early Christianity to flourish in some regions but not in others. Such complex models can be realised in a cumulative fashion by first trying and testing simpler ones.
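To make this comparative logic concrete, the following minimal Python sketch pits two deliberately caricatured formal models of the spread of Christianity against a synthetic ‘observed’ adoption curve. Every number and both model forms are invented for illustration only, not calibrated to any real dataset; the point is the procedure: formalise each hypothesis, generate its predictions, and score them against the same evidence.

```python
import math

# Two caricatured formal models of the cumulative share of Christians over
# time. ALL numbers are hypothetical, chosen only to illustrate the
# procedure of ranking competing models against the same evidence.

def logistic_model(t, rate=0.08, midpoint=150):
    """Adoption via social contagion: slow start, rapid growth, saturation."""
    return 1 / (1 + math.exp(-rate * (t - midpoint)))

def edict_model(t, edict_year=212, before=0.05, after=0.6):
    """Adoption via a single top-down intervention in edict_year."""
    return after if t >= edict_year else before

def sum_squared_error(model, observed):
    """Score a model: lower error means a closer fit to the observations."""
    return sum((model(t) - share) ** 2 for t, share in observed)

# Synthetic 'observed' adoption curve: (year, share of population). Invented.
observed = [(50, 0.01), (100, 0.02), (150, 0.5), (200, 0.95), (250, 0.99)]

errors = {
    "social networks": sum_squared_error(logistic_model, observed),
    "imperial edict": sum_squared_error(edict_model, observed),
}
best = min(errors, key=errors.get)
print(f"More plausible model given this synthetic data: {best}")
```

In real applications the ‘observed’ series would be an empirical proxy (e.g. dated inscriptions) and the scoring would typically be probabilistic rather than a simple squared-error measure, but the workflow of ranking competing formal models against the same data is the same.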
Transparent, reproducible and cumulative research
Formal models enable a research process of proposing, testing, rejecting or improving theories of past phenomena. The results of formal modelling in a complexity science framework are cumulative, in that each new model is built on the basis of previously tested models, incrementally bringing us closer to a more detailed and more robust theory of a past phenomenon. For example, early Christianity was no doubt an extremely complicated phenomenon, and its successful spread can never have been the result of one single factor such as the structure of social networks. However, by first establishing the plausibility of one formal model representing the structuring of social networks, more complex models can be built by adding other factors to this first model. Such a research process is cumulative but also necessarily more transparent and reproducible than non-formal approaches. Every step is clearly spelt out so that any researcher can repeat it and check whether the claims stand up to scrutiny. Although it is not possible to completely remove the social and personal biases in any research, the results of formal models (even if not the models themselves) are independent of personal beliefs, weight of authority or prestige. Biases in formal models can be transparently exposed to stimulate discussion. Formal modelling approaches can, therefore, contribute new, independent arguments to debates driven mainly by more qualitative and authority-based arguments.
A Manifesto for Complexity Science in Roman Studies
Complexity science has proven a highly constructive addition to virtually every other discipline (Mitchell 2009; Downey 2012; Chattoe-Brown 2013; Castellani 2018). The authors of this paper are convinced there is no reason why complexity science and formal modelling methods could not make equally constructive contributions to Roman Studies. We present our arguments as a 10-point manifesto for the use of complexity science in Roman Studies and for making its associated computational tools part of our ‘tools of the trade’. The statements in our manifesto are purposefully short and to-the-point to ensure their clarity, but they should be understood as strongly rooted in and supported by the subsequent sections of this paper where we showcase their applicability to particular research topics.
- The study of complex systems is integral to Roman Studies.
- It is appropriate to conceptualise and study the Roman state, its territory and inhabitants, and their interactions with states and peoples within and across their borders at any time during its history as a complex system.
- It is also appropriate to conceptualise and study phenomena that are aspects of the Roman complex system as complex systems in their own right: society, politics, economy, religion, institutions, communities, military, micro-regions and others.
- Complexity science is a constructive and necessary contribution to existing research perspectives in Roman Studies, providing theoretical approaches and methods for studying key concepts in complex phenomena, such as emergence, self-organisation or self-organised criticality.
- Constructively applying complexity science requires breaking through disciplinary silos to look for similar patterns, processes and models across different scientific domains to gain a more holistic view of the system in question and to avoid reinventing the wheel.
- To understand the behaviour of complex systems and to propose falsifiable theories of Roman complex systems one needs to use the formal tools developed to represent and study such systems.
- A multiscalar approach is integral to studying complex Roman systems, to understand how local interactions of Roman individuals resulted in regional patterns and the dynamics of the whole system.
- The plausibility and internal coherence of any hypothesis explaining a data pattern or emergent phenomenon should be formally demonstrated.
- Formalism and transparency should be employed in hypothesis formation, testing and reporting as well as in data analysis and management. All research output should be reproducible.
- Traditional archaeological and historical methods, fieldwork, geochemical analysis, close reading, epigraphy, numismatics etc. are not in any way less crucial or informative than complexity science approaches. It is only by taking full advantage of all scientific techniques available to us – especially the confrontation of empirical data and modelling approaches – that we can make progress in understanding the Roman past.
Techniques and Applications in Roman Studies
So far this paper has focused strongly on introducing complexity science and formal modelling – approaches rarely applied in Roman Studies to date and therefore requiring a more in-depth introduction. How exactly can they help further our understanding of the Roman world? Which Roman phenomena can be usefully studied using complexity science and formal modelling? And what specific formal techniques and models are particularly appropriate for addressing research questions in Roman Studies?
In this final part of the paper, we provide an overview of the main research themes in Roman Studies to which formal modelling approaches within a complexity science framework can be usefully applied. Because complexity science and formal modelling are umbrella terms covering very different concepts and techniques, this part of the paper presents a series of short sections each focused on a particular concept or technique. Each of these has the same structure: a brief definition of the topic is followed by a specific applied example from Roman Studies and a discussion of the potential for future application to Roman Studies research themes. This part of the paper is meant as a reference point for Romanists to explore how complexity science and formal modelling can be usefully and critically applied in their own research.
The concepts and techniques covered in this overview are strongly interrelated (Figure 1) and here we present them under four broad headings:
- Key concepts: emergence, self-organised criticality, bounded rationality.
- Key methods: formal modelling, simulation, agent-based modelling, network science, stochastic models, Monte Carlo methods, Bayesian inference.
- Spatial methods: spatial modelling, predictive modelling, pedestrian modelling, Wilson’s spatial interaction model.
- Urban phenomena: the science of cities, rank-size distribution, settlement scaling theory, the social reactor model.
Key Complexity Science Concepts
Emergence
Lead authors: Iza Romanowska and Tom Brughmans
Definition: Emergence is a common feature of complex systems. The term refers to a disparity between the characteristics of the system’s elements (micro level) and the global patterns that arise as a result of their behaviour (macro level). Understanding the relationship between simple micro-behaviours and complex macro-outcomes requires the use of formal modelling tools.
Roman example: If we assume Roman traders were economically rational and aimed to optimise their profits, we could describe their behaviour on the market as a reasonably simple algorithm: ‘buy low, sell high’. We know from studies of the present-day stock market that the stock exchange fluctuations resulting from such simple behavioural rules are nevertheless notoriously difficult to predict due to the non-linear character of trading interactions. This implies that even if we hypothesise Roman traders were simple profit maximisers (‘buy low, sell high’), their behaviour could have resulted in highly complex emergent global patterns of the Roman economy.
Potential for Roman Studies: Emergence is a key concept in existing simulation studies of the Roman economy, and it will be central to all applications of complexity science in Roman Studies (Graham 2006a; Romanowska 2018). Frequency distributions of archaeological data, such as amphorae or stamps, can be compared to data patterns emerging from macro-economic models such as free market trade (e.g. Rubio-Campillo et al. 2017). Robust patterns in the geographical spread of such archaeological data can be explored as emerging from theorised Roman economic systems with varying degrees of integration (e.g. Graham and Weingart 2015; Brughmans and Poblome 2016a; 2016b).
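As a toy illustration of how such emergence can be explored computationally, the following sketch (with hypothetical parameters throughout; it is not a model of any documented Roman market) lets traders apply the ‘buy low, sell high’ rule against their own price expectations, and shows that the aggregate price series fluctuates in ways that cannot be read off from any single trader’s rule.

```python
import random

# A toy market of profit-maximising traders (NOT a model of any documented
# Roman market; all parameters are illustrative). Each trader buys when the
# price is below their own expectation and sells when it is above; the
# market price moves with excess demand.

random.seed(42)
n_traders = 100
expectations = [random.uniform(80, 120) for _ in range(n_traders)]
price = 100.0
prices = []

for step in range(200):
    buyers = sum(1 for e in expectations if price < e)    # 'buy low'
    sellers = n_traders - buyers                          # 'sell high'
    price += 0.05 * (buyers - sellers)                    # excess demand moves the price
    # traders nudge their expectations towards the observed price, with noise
    expectations = [e + 0.1 * (price - e) + random.gauss(0, 1) for e in expectations]
    prices.append(price)

# Individually trivial rules; collectively, a fluctuating price series that
# no description of a single trader's behaviour would predict.
print(f"final price {prices[-1]:.2f}; range {min(prices):.2f}-{max(prices):.2f}")
```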
Self-organised criticality
Lead author: Dries Daems
Definition: Self-organised criticality (SOC) describes the tendency of non-equilibrium complex systems to self-organise towards a critical state, operating at the interface between order and randomness: a state sometimes called ‘the edge of chaos’ (Bak 1996). A system in SOC-state is poised for change, where even small perturbations can trigger cascading sequences of interconnected events at all scales. When plotting size and frequency of these events for observed real-world systems on a chart with logarithmic axes, a power law with a slope of -1 is obtained, indicating that this distribution is scale-free and has no characteristic average value (Barabási and Albert 1999; Bentley and Maschner 2003).
Roman example: As a system reaches a critical state, even small disturbances (including disturbances that occurred many times before without causing any substantial changes) can push it over the edge, resulting in societal collapse or transformation (Brunk 2002). One of the quintessential examples is the fall of the Roman Empire. SOC can provide a framework to study a variety of disturbance events, whether military, environmental, political or cultural in origin. If a system can be shown to be critically organised at certain points, this could explain why certain disturbance events had a larger impact than others, thus moving beyond the common ‘extraordinary events require extraordinary causes’ reasoning.
Potential for Roman Studies: The first introduction of SOC to archaeology came in a volume by Bentley and Maschner (2003), with a focus on SOC as a key property of scale-free networks. This approach has since been applied to the diffusion of religious innovation in the Roman Empire (Collar 2013) and to the brick industry of the Tiber Valley (Graham 2006a). SOC models can furthermore be used to describe models of technological evolution and innovation, cultural change, networks of social interaction, polity cycles, and the societal change and institutional development of the Roman state (Griffin 2011).
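The canonical toy model of SOC, the Bak–Tang–Wiesenfeld sandpile (Bak 1996), can be sketched in a few lines of Python; the grid size and number of grains below are arbitrary choices for illustration only.

```python
import random

# Bak-Tang-Wiesenfeld sandpile sketch: grains are dropped on a grid; any
# site holding 4+ grains topples, shedding one grain to each neighbour and
# possibly triggering cascades ('avalanches') of all sizes.

SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def topple():
    """Relax the grid fully; return the number of topplings (avalanche size)."""
    avalanche = 0
    unstable = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:          # may have been relaxed already
            continue
        grid[r][c] -= 4
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:   # edge grains fall off
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

random.seed(1)
sizes = []
for _ in range(20000):              # drop grains one at a time
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    sizes.append(topple())

events = [s for s in sizes if s > 0]
print(f"avalanches of size 1: {sum(1 for s in events if s == 1)}")
print(f"avalanches of size > 50: {sum(1 for s in events if s > 50)}")
```

Plotting the avalanche sizes on logarithmic axes would show the heavy-tailed, scale-free distribution described above: most avalanches are tiny, but system-spanning cascades occur without any proportionally large trigger.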
Bounded rationality
Lead author: Dries Daems
Definition: Humans rarely make perfectly rational decisions. To do so would require access to complete and reliable information and the ability to process all of this information. Instead, the term ‘bounded rationality’ was coined to describe people’s intention to act rationally in pursuit of their goals, while being subject to constraints on their ability and willingness to acquire and process information (Simon 1955; 1997). Bounded rationality in human decision-making is often combined with ‘satisficing’ (i.e., deeming outcomes ‘good enough’) as the main heuristic in pursuing individual and collective goals, rather than behaviour optimisation.
Roman example: Classical economics long assumed rational economic behaviour as its main heuristic tool. Following this perspective, Roman elites were criticised for not recognising entrepreneurial opportunities for capital investment and commercial activities. However, the bounded rationality concept can help to elucidate how the ability of the Roman elite to respond to such opportunities was constrained by the biases of the cognitive framework within which they approached economic undertakings, including non-economic factors such as social prestige attached to land ownership (Kehoe 2007).
Potential for Roman Studies: Bounded rationality is a key element in agent-based modelling (ABM). Unequal availability of commercial information and the associated transaction costs have been used as a characteristic property of an ABM of market integration in the Roman economy (Brughmans and Poblome 2016a). Bounded rationality, in combination with pathways of development, can elucidate how initial investments winnow the possibilities for societal development, leading to specific avenues of change (van der Leeuw and de Vries 2002; van der Leeuw 2016). For example, the emperors Diocletian and Constantine responded to institutional and military crises by designing a government and military that were increasingly larger, more complex, and more highly organised. The associated costs and system rigidity were not recognised in time, and it is hypothesised that this failure in long-term planning might eventually have developed into a decisive factor in the fall of the Roman Empire (Tainter 1988).
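The contrast between optimising and satisficing decision rules can be sketched as follows; the offer values and the aspiration level are hypothetical.

```python
import random

# A sketch of 'satisficing' versus optimising (all numbers hypothetical):
# an agent inspects trade offers one by one. The optimiser examines every
# offer; the satisficer stops at the first offer that is 'good enough',
# trading a possibly worse outcome for a much shorter (costly) search.

random.seed(7)
offers = [random.uniform(0, 100) for _ in range(50)]   # value of each offer

def optimise(offers):
    """Inspect every offer and pick the best (full information, maximal effort)."""
    return max(offers), len(offers)

def satisfice(offers, aspiration=75):
    """Stop at the first offer meeting the aspiration level (Simon 1955)."""
    for inspected, value in enumerate(offers, start=1):
        if value >= aspiration:
            return value, inspected
    return offers[-1], len(offers)   # fall back to the last offer seen

best_value, full_search = optimise(offers)
ok_value, short_search = satisfice(offers)
print(f"optimiser:  value {best_value:.1f} after {full_search} inspections")
print(f"satisficer: value {ok_value:.1f} after {short_search} inspections")
```

Once information is costly to acquire and process, the satisficer’s ‘good enough’ outcome at a fraction of the search effort can be the more rational strategy overall, which is precisely the point of the concept.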
Key Complexity Science Methods
Formal modelling
Lead author: Xavier Rubio-Campillo
Definition: Formal models are representations of reality expressed through mathematical techniques. These models are neither complete nor fully-detailed descriptions of a system, but tools designed to improve our understanding of the system and answer specific research questions.
While lacking the flexibility of natural-language models, formal models provide advantages such as non-ambiguity (i.e. a clear definition of every concept) and completeness (i.e. a full description of those interactions within the system deemed most relevant) that promote the insightful exploration of research questions. These benefits also facilitate the testing of working hypotheses against existing evidence through quantitative analysis, as formal models can generate predictions about the behaviour of a system under a given set of initial conditions. The comparison between predicted values and evidence allows the researcher to assess the plausibility of an idea and discard weak explanations.
Roman example: The degree of connectivity between Roman provinces is a vital topic linked to a diversity of questions about trade, economy and politics. A number of authors have suggested different natural-language models to explain these large-scale dynamics, but the lack of formal models made it impossible to test to what extent these provinces interacted with each other for specific case studies. Rubio-Campillo et al. (2018) developed a model of large-scale province connectivity based on the similarity of amphora stamps, using a dataset of over 24,000 codes. The model was used to 1) reject the null hypothesis that provincial trade was randomly connected, and 2) identify the strongest relations between some provinces but not others. The results revealed strong connectivity between the Atlantic provinces and the German limes, which was interpreted as the predominance of an Atlantic supply route to the Roman military garrisons in Germania and Britannia.
Potential for Roman Studies: Formal modelling is a generic term that covers the application of computational techniques to test working hypotheses, and hence it can be applied to all formalisable hypotheses in Roman Studies. The generalisation of this approach within Roman Studies would facilitate the comparison of different answers to specific research questions and the assessment of the explanatory power of each idea. This would allow the field to improve its understanding of the past by discarding implausible ideas while promoting explanations that closely fit the existing evidence.
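The general logic of testing a null hypothesis of random connectivity, though not the actual method of Rubio-Campillo et al. (2018), can be sketched with a simple permutation test on toy data. All stamp codes, assemblage sizes and the stamp pool below are invented for illustration.

```python
import random

# Permutation test sketch: do two provinces share more stamp types than
# expected if assemblages were drawn at random? All stamp codes, assemblage
# sizes and the pool of types below are hypothetical.

random.seed(3)
province_a = {"EXOF", "LFC", "SNR", "QSP", "MAR", "CELS"}
province_b = {"EXOF", "LFC", "SNR", "QSP", "MAR", "BROC"}
other_stamps = ["PRIM", "FEST", "SEVER", "ATT", "VIT", "RUF", "CRES",
                "SAT", "IAN", "PAT", "VER", "NIG", "FRO"]
pool = list(province_a | province_b) + other_stamps   # 20 stamp types in total

observed = len(province_a & province_b)               # shared types in the 'data'

# Null model: assemblages of the same sizes drawn at random from the pool.
null_counts = []
for _ in range(10000):
    ra = set(random.sample(pool, len(province_a)))
    rb = set(random.sample(pool, len(province_b)))
    null_counts.append(len(ra & rb))

# Proportion of random draws at least as similar as the observed assemblages.
p_value = sum(1 for n in null_counts if n >= observed) / len(null_counts)
print(f"observed shared stamps: {observed}, p-value under the null: {p_value:.4f}")
```

A small p-value indicates that the observed similarity is very unlikely under the null model of random connectivity, so the hypothesis of a structured trade relationship between the two provinces is the more plausible explanation.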
Simulation
Lead author: Iza Romanowska
Definition: Simulation refers to a family of techniques used to explore and experiment on models, i.e., artificial, usually computational versions of real-world systems (see ‘Formal modelling’ above). The functioning of a model over time is what we call simulation. Simulations are built to better understand the operation and change over time in systems that are otherwise difficult to directly observe and experiment on due to their size (e.g. too big or too small), cost, accessibility (such as the past) or ethical considerations (e.g. an experiment could potentially cause harm).
Roman example: Graham (2006b) explored the rate of the spread of information through a Roman road network using an agent-based simulation. By releasing autonomous agents on the road network created on the basis of Antonine Itineraries he was able to quantify the level of connectedness and cohesiveness within Roman provinces and between them. He showed how the dynamics of information transmission differed depending on the location and the history of each province and provided predictions regarding the speed at which cultural phenomena (e.g., the Imperial Cult) were able to spread throughout different parts of the empire.
Potential for Roman Studies: Simulation provides the only method of investigating causality in systems that are not otherwise accessible for direct observation – a criterion that any past phenomenon necessarily fulfils. Simulation is the primary tool for studying social dynamics, the repercussions of catastrophic events, the impact of external forcing factors (such as climate) on the functioning of any socio-natural system (including the Roman world), and the evolution of political forms, social tendencies and economic trends.
Agent-based modelling
Lead author: Iza Romanowska
Definition: Agent-based modelling (ABM) is a type of simulation technique in which individual agents governed by pre-defined behavioural rules interact with each other and with their environment. ABM is a bottom-up simulation type because the dynamics of the system are represented (modelled) from the perspective of the system’s entities (elements that make up the system), in contrast to top-down approaches (e.g. equation-based modelling) that treat the system as a whole. Agents follow a set of deterministic rules but are autonomous, adaptive, and have the capacity to sense, learn, communicate and change their behaviour according to their circumstances.
Roman example: In the MERCURY model, Brughmans and Poblome (2016a; 2016b) constructed an ABM consisting of traders trading goods within and across markets. By changing the properties of the commercial network (from well-connected within markets to well-connected between markets) they showed that only in scenarios where the market was well integrated (i.e., traders had access to contacts from outside their immediate surroundings) did the resulting pattern of distribution of goods match the archaeologically attested spatio-temporal trends in the distribution of Terra Sigillata in the Roman East. This test of two competing verbal hypotheses, the ‘Roman bazaar’ (Bang 2008) and the ‘market economy’ (Temin 2013), showed that the predictions of the former do not agree with the archaeological record; the latter was therefore considered more plausible given the available evidence.
Potential for Roman Studies: In recent years ABM has become archaeology’s favourite simulation technique. It offers a unique advantage over other simulation methods in that it enables researchers to design an artificial world from a familiar set of entities – individuals or groups, and their behaviours and capacities. At the same time, ABM allows researchers to obtain results at the population level (representing the aggregation of individual agent trajectories), which are congruent with the archaeological record. It is therefore particularly well suited to models that focus on the role of individual behaviours, beliefs and abilities in shaping large-scale trends. Models of social interaction, cultural transmission and economic interaction hold particular potential, as base models have already been developed in other disciplines and can easily be translated into a Roman archaeology context (Lake 2014; Bianchi and Squazzoni 2015; Hamill and Gilbert 2016).
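A minimal ABM in this spirit, though far simpler than MERCURY, can be written in plain Python. Here agents on a random social network adopt a new religion through contact with believing neighbours; the population size, contact rate and transmission probability are all illustrative assumptions, not calibrated to any Roman dataset.

```python
import random

# Minimal ABM sketch: agents on a random social network adopt a new
# religion through contact with believers. All parameters (population,
# mean degree, seed believers, persuasion probability) are illustrative.

random.seed(5)
N = 500
contacts = {i: set() for i in range(N)}
while sum(len(c) for c in contacts.values()) < N * 6:   # ~6 contacts per agent
    a, b = random.sample(range(N), 2)
    contacts[a].add(b)
    contacts[b].add(a)

believer = [False] * N
for i in random.sample(range(N), 50):                   # 10% initial believers
    believer[i] = True

history = [sum(believer)]
for step in range(30):
    new = believer[:]
    for i in range(N):
        if not believer[i]:
            believing = sum(believer[j] for j in contacts[i])
            # each believing contact independently persuades with probability 0.05
            if believing and random.random() < 1 - 0.95 ** believing:
                new[i] = True
    believer = new
    history.append(sum(believer))

print(f"believers over 30 steps: {history[0]} -> {history[-1]} of {N}")
```

Varying the network structure or the per-contact persuasion probability and observing the resulting population-level adoption curves is exactly the kind of experiment a bottom-up ABM makes possible.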
Network science
Lead author: Tom Brughmans
Definition: Network science is the body of concepts and techniques that allow for the study of relational phenomena through the formal visualisation, exploration and analysis of network data (i.e. points connected by lines). It is one of the most commonly applied approaches in complexity science used to study a wide range of relational phenomena: the internet, neurons in the brain, food webs, academic citation, friendship and power grids. The archaeological application of network science methods in a complexity science framework is now well established, with the most commonly studied past phenomena being transport systems, cultural transmission and visual signalling.
Roman example: The approach has been used to study local transport networks in the Dutch part of the Roman Limes (Groenhuijzen and Verhagen 2016; 2017). It allows us to explore how provisioning of the Roman army was organised on a regional scale. Least-cost path methods in GIS were used to create a regional transport network, which was subsequently analysed using network science methods to identify the role played by individual settlements on this network. This GIS-based network construction technique was subsequently scrutinised by using network models that represented different methods for provisioning Roman settlements.
Potential for Roman Studies: Network science has been used to explore large datasets of Roman tableware distributions (Brughmans 2010; 2018), to study and model Roman transport systems (Graham 2006b; Isaksen 2007; Scheidel 2014; Fulminante et al. 2017), to study the social networks revealed by Cicero’s letters (Alexander and Danowski 1990), the spread of cults (Collar 2013), and to explore the degree of Roman imperial economic integration (Graham and Weingart 2015; Brughmans and Poblome 2016b). In these applications, the representation of Roman datasets as networks is a particularly common approach. Future applications should additionally focus on the formal representation of theories about Roman phenomena using network science, and their testing using Roman datasets.
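To illustrate the kind of formal measure network science provides, the sketch below computes degree centrality for a small invented road network; the towns and links are illustrative, not a real dataset:

```python
# A small invented road network: towns as nodes, roads as undirected edges.
roads = {
    ("Roma", "Ostia"), ("Roma", "Capua"), ("Capua", "Beneventum"),
    ("Beneventum", "Brundisium"), ("Roma", "Ariminum"),
}
towns = {town for edge in roads for town in edge}

# Degree centrality: the share of all other towns each town is directly
# linked to by a road.
degree = {t: sum(t in edge for edge in roads) for t in towns}
centrality = {t: d / (len(towns) - 1) for t, d in degree.items()}

hub = max(centrality, key=centrality.get)
print(hub, round(centrality[hub], 2))
```

In real applications dedicated libraries (e.g. NetworkX) provide many more measures, such as betweenness centrality, which identifies settlements lying on many shortest paths through a transport network.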
Stochastic modelling
Lead author: Paul Kelly
Definition: Stochastic modelling is an approach to modelling Roman phenomena using stochastic (randomisation) techniques in which at least one of the input variables to the model is a probability distribution, reflecting epistemic uncertainty or aleatory variability (randomness). Epistemic uncertainty may reflect our lack of knowledge of the past, while aleatory variability may reflect naturally variable inputs such as climate. The model is run multiple times to provide a range of possible outputs, which are themselves then combined into probability distributions, enabling nuanced conclusions to be drawn.
Roman example: Current research modelling the risks inherent in farming for tenants, smallholders and small-scale landlords in Roman Egypt incorporated stochastic elements (Kelly 2018). The terms of land leases are well known, but inputs such as the yield from harvests, which are dependent on climate and human action, need to be defined as probability distributions if valid conclusions are to be reached. Conclusions, presented as the likely distribution of financial outcomes for families after a generation, illustrate the degree to which people in different financial positions were exposed to the chance of financial ruin or falling into debt.
Potential for Roman Studies: Most theories about Roman phenomena involve aspects subject to a degree of uncertainty, the effects of which need to be tested using stochastic models. All agent-based models in Roman studies involve stochastic elements, resulting from the interactions of autonomous agents and the differences in their rules of behaviour (e.g. Graham 2006b; Brughmans and Poblome 2016a; Romanowska 2018). Stochastic modelling has also been used to address social questions such as the spread of Roman citizenship (Lavan 2016).
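The core procedure, feeding a probability distribution into a model and running it many times, can be sketched as follows. The rent, consumption and yield figures are invented for illustration and are not taken from Kelly (2018):

```python
import random
import statistics

random.seed(1)

RENT = 40          # fixed annual rent, in notional units of grain (invented)
SUBSISTENCE = 50   # annual household consumption (invented)
RUNS = 10_000

def one_generation(years=25):
    """Simulate a tenant household's cumulative balance over one generation."""
    balance = 0.0
    for _ in range(years):
        # Aleatory variability: the harvest is drawn from a distribution
        # rather than fixed at a single value.
        harvest = random.gauss(mu=95, sigma=25)
        balance += harvest - RENT - SUBSISTENCE
    return balance

# Run the model many times and summarise the outputs as a distribution.
outcomes = [one_generation() for _ in range(RUNS)]
ruin = sum(b < 0 for b in outcomes) / RUNS
print(f"median balance: {statistics.median(outcomes):.0f}; "
      f"share of households ending in debt: {ruin:.1%}")
```

The single deterministic answer ("the household breaks even") is replaced by a probability of ruin, which is the kind of nuanced conclusion stochastic models make possible.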
Monte Carlo methods
Lead author: Stephen A. Collins-Elliott
Definition: Monte Carlo methods consist of a suite of approaches to problem-solving that rely on repeated random sampling, or on a technique that replicates repeated random sampling (hence their use in 'stochastic models', see above). They are often used in simulations where analytical methods are either impossible or impracticable, as in Bayesian inference to estimate a posterior probability (see 'Bayesian inference' below).
Roman example: If we wanted to estimate the frequency of a Roman artefact class in use per year, such as ceramics or coinage, we might apply a rate of loss to an assemblage. Instead of supposing just one value, we could draw repeated random values from an interval (say, between 0.01 and 0.10). This interval does not have to be uniform (in which every value has an equal chance of being drawn) but can be expressed as a probability distribution, to indicate that some values are more likely than others. Repeating the process of random sampling according to that distribution would produce a set of simulated estimates of the abundance of the material in question.
Potential for Roman Studies: Orton (2009: 69) noted the potential of Monte Carlo simulation concerning ceramic quantification. Bayesian methods will in practice require the use of Markov Chain Monte Carlo, like Gibbs sampling, to calculate posterior probabilities, and are thus already inherent in many probabilistic analyses (Rubio-Campillo et al. 2017). Collins-Elliott (2017) used randomisation as a means to establish credible intervals around estimates of the degree of difference between vessel assemblages. Risk analysis offers a good analogue for the use of simulation in Roman studies, as Lavan (2016) has discussed in a paper estimating the total population of Roman citizens under reasonable conditions. The development of more complex and realistic models of economic interaction in the ancient Mediterranean will need to accommodate several variables which will have a measure of uncertainty in their quantification, dating, and location (cf. Crema et al. 2010), necessarily involving a Monte Carlo approach.
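The sampling procedure described in the Roman example above can be sketched in a few lines; the deposition figure and the choice of a triangular distribution are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

# Suppose a dated assemblage implies roughly 50 vessels deposited per year
# (an invented figure). Under a steady state, vessels in use can be
# approximated as annual deposition / annual loss rate.
ANNUAL_DEPOSITION = 50
N = 100_000

estimates = []
for _ in range(N):
    # Draw the loss rate from a triangular distribution on [0.01, 0.10],
    # treating 0.03 as the most likely value (an assumption, not a fact).
    loss_rate = random.triangular(0.01, 0.10, 0.03)
    estimates.append(ANNUAL_DEPOSITION / loss_rate)

estimates.sort()
print(f"mean estimate of vessels in use: {statistics.mean(estimates):.0f} "
      f"(90% interval: {estimates[int(0.05 * N)]:.0f}"
      f"-{estimates[int(0.95 * N)]:.0f})")
```

Instead of one point estimate, the output is itself a distribution, whose spread directly reflects the uncertainty in the loss rate.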
Bayesian inference
Lead author: Simon Carrignon
Definition: Bayesian inference (BI) makes it possible to estimate the probability that a hypothesis is true, by assuming a prior distribution of the variables described by the hypothesis, informed by subject knowledge. By testing a range of different hypotheses against the same known information, it becomes possible to identify which of the competing hypotheses are better suited to explaining the studied phenomenon. One major limitation is that it is often impossible to determine analytically all the elements needed to compute these probabilities. Nonetheless, increases in computational power and the generalisation of randomised simulations (e.g. 'Monte Carlo methods', see above) have led to the development of Approximate Bayesian Computation (ABC), which overcomes this issue.
Roman example: The approach has been used to evaluate competing hypotheses about the sharing of commercial information between traders in different markets throughout the Roman Empire (Carrignon et al. forthcoming). The hypotheses of success-biased and non-biased social learning were formulated to explain the strong differences in the distribution widths of four types of tableware in the Roman Eastern Mediterranean between 200 BC and AD 300. By applying ABC, it was found that a process of traders copying the buying strategies of the most successful traders at other markets (success-biased social learning) could offer a better explanation of the changes in tableware distribution.
Potential for Roman Studies: This manifesto advocates that formal modelling methods (such as agent-based modelling) should become more commonly used in Roman archaeology for expressing our hypotheses about Roman phenomena. However, this comes with the challenge that every hypothesis can be implemented in numerous ways, making implementations hard to compare with each other or against archaeological data. BI and ABC offer a promising way of testing diverse hypotheses about Roman phenomena against Roman data. The method can therefore be applied in future research to any set of formally represented hypotheses in Roman studies, including economic phenomena (Rubio-Campillo et al. 2017), population dynamics (Gowland and Chamberlain 2002; Tocheri et al. 2005) and the differentiation of artefact assemblages (Collins-Elliott 2017).
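A toy rejection-ABC run illustrates the logic: sample a parameter from a prior, simulate data, and keep only the parameter values whose simulated summary statistic falls close to the observed one. The generative model and all numbers below are invented, not those of Carrignon et al.:

```python
import random
import statistics

random.seed(3)

# 'Observed' summary statistic, standing in for a pattern measured in the
# archaeological record (e.g. the mean distribution width of a ware).
observed = 12.0

def simulate(param):
    """A toy generative model whose output depends on an unknown parameter."""
    return statistics.mean(random.expovariate(1 / param) for _ in range(50))

# Rejection ABC: draw the parameter from its prior, simulate, and accept
# only draws whose simulated summary lies within a tolerance of the data.
accepted = [
    param
    for param in (random.uniform(1, 30) for _ in range(20_000))
    if abs(simulate(param) - observed) < 1.0
]

posterior_mean = statistics.mean(accepted)
print(f"accepted {len(accepted)} draws; posterior mean = {posterior_mean:.1f}")
```

The accepted draws approximate the posterior distribution of the parameter; comparing the acceptance rates of two competing generative models is the basic mechanism of ABC model choice.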
Key spatial methods
Spatial modelling
Lead author: Maria del Carmen Moreno Escobar
Definition: Spatial Modelling is an analytical process focussed on the study and simulation of spatial phenomena. Although its application in archaeology predates the introduction of geographic information systems (GIS), its use has increased exponentially ever since. Spatial visualisation and analysis in GIS help researchers to understand spatial aspects of datasets quickly. Many formal modelling approaches include spatial data and theories about the spatial location of past features or the interaction of past individuals with the landscape (e.g. predictive modelling, ABM, the science of cities).
Roman example: In the context of navigation on the river Tiber in Imperial times, Malmberg (2015) proposed the existence of three lanes of river traffic connecting Ostia/Portus and Rome, though no argument was provided to establish the validity of this hypothesis. Developing a simulation within a GIS environment that replicates the conditions described by Malmberg, while also considering the factors influencing navigation on the river (e.g. its historical course, current geoarchaeological and archaeological evidence, and practical concerns of river navigation), is a most helpful approach to exploring this and alternative hypotheses (Moreno Escobar 2017; 2018).
Potential for Roman Studies: The creation and exploration of models of spatially explicit theories in Roman archaeology has been relatively scarce until recently. However, spatial modelling can usefully be applied to model settlement patterns and other territorial processes, such as the dynamics between rural and urban settlements (Keay et al. 2001; Brughmans et al. 2015), to assess agricultural models of rural exploitation and occupation as described in literary sources (Ascani et al. 2008; Martín-Arroyo Sánchez and Remesal Rodríguez 2018), and to trace the development of systems of connectivity between settlements within a given region (Moreno Escobar 2015).
Predictive modelling
Lead author: Manuela Ritondale
Definition: A common use of formal modelling is prediction (but this is by no means its only use; Epstein 2008). The predictive modelling technique as commonly applied in archaeology enables the prediction of unobservable outcomes (e.g. past or future phenomena, movements or distribution of sites and materials in a region), on the basis of a set of input variables and/or a set of rules (Kamermans et al. 2009; Kohler and Parker 1986; Verhagen 2007; 2018; Verhagen and Whitley 2011; Verhagen et al. 2019). Two different approaches can be distinguished: in inductive or data-driven models one starts from observed empirical data (e.g. the distribution of a sample of sites in a region) to statistically derive predictions about non-observed features; in deductive or theory-driven models one starts from prior theoretical assumptions (e.g. on human behaviour) and tests them with the empirical data.
Roman example: Maritime and riverine connections made up much of the Roman transport system, since navigation enabled the optimisation of travel time and cost. In order to derive likely past sea-borne routes, one should not consider only the fastest or shortest path connecting likely attractive locations (e.g. harbours, emporia, places of worship), but rather the path enabling adaptation of the available technology to the environmental conditions and avoidance of both natural and human hazards, real and perceived. A great number of aspects should therefore be considered, including environmental and technological constraints, cognitive processes, and economic and socio-cultural dynamics (Leidwanger and Knappett 2018; Verhagen et al. 2019; Ritondale et al. forthcoming). Formal predictive modelling approaches are well suited to deriving probable transport routes resulting from these interacting parameters (Scheidel 2014; Warnking 2016).
Potential for Roman Studies: Archaeological predictive models have mainly been used for heritage management and for assessing the probability that archaeological features exist in a given location, as well as the risk of archaeological sites being damaged (Verhagen 2007; Kamermans et al. 2009; Verhagen et al. 2010; Danese et al. 2014; Perissiou 2014). They can also be used to explore the interplay between environmental and Roman socio-cultural-economic factors (e.g. preferences for settlement locations, effects of climate change, agricultural capacity, resource availability, transport routes; Warnking 2016; Verhagen et al. 2018). The combination of more traditional spatial analysis approaches to predictive modelling with dynamic agent-based modelling has particular potential for addressing new research themes, especially those focusing on cognitive aspects of agency.
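A deductive (theory-driven) predictive model can be sketched very simply: landscape cells receive suitability scores derived from assumed locational preferences. All variables and weights below are invented for illustration:

```python
# A toy deductive predictive model: landscape cells are scored for
# settlement suitability from assumed (theory-driven) preferences.
cells = [
    # (cell id, distance to river in km, slope in %)
    ("A", 0.5, 2),
    ("B", 3.0, 1),
    ("C", 0.8, 12),
    ("D", 6.0, 4),
]

def suitability(dist_river, slope):
    # Assumed rules: settlement favours proximity to water (score drops to
    # zero beyond 5 km) and flat terrain (zero beyond 15% slope).
    water = max(0.0, 1 - dist_river / 5)
    terrain = max(0.0, 1 - slope / 15)
    return water * terrain

ranked = sorted(cells, key=lambda c: suitability(c[1], c[2]), reverse=True)
for cid, d, s in ranked:
    print(cid, round(suitability(d, s), 2))
```

Testing such theory-driven scores against an observed site distribution (the inductive step) is what turns this sketch into a proper predictive model.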
Pedestrian modelling
Lead author: Katherine A. Crawford
Definition: Pedestrian modelling uses formal modelling techniques to study how people navigate urban spaces, capturing the dynamics that result from the interactions between individual pedestrians, their movement and urban space. It can provide new insights into how people and groups negotiated urban space, moving beyond space syntax methodologies thanks to its ability to represent the interactions between different individual agents' movements (e.g. Stöger 2011).
Roman example: Pompeii is an urban environment particularly well suited for pedestrian modelling. At a domestic analytical scale, we can study how people moved within individual rooms or buildings, which allows us to understand potential differences between the past uses of Pompeian spaces. At a larger analytical scale, we can study different pedestrian movement patterns through Pompeii’s streets and address questions about the accessibility and use of different resources such as shops or temples.
Potential for Roman Studies: Pedestrian modelling is a crucial approach in the study of Roman urbanism and can be applied to both excavated and surveyed cities (Gutierrez et al. 2007; van Ness 2014). Spatial distribution patterns of structures within the built environment can be studied in relation to pedestrian accessibility. Analysis of pedestrian traffic characteristics such as walking speed, crowd density, and direction of travel can be studied in relation to the development of both street systems and street infrastructure (e.g. Maïm et al. 2007).
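A toy pedestrian model shows the basic mechanics: individual walkers follow a simple movement rule across a street grid, and per-cell footfall counts aggregate into a traffic pattern. The grid, destination and numbers are invented:

```python
import random

random.seed(7)

# A 10x10 street grid: pedestrians walk from a gate at (0, 0) to the forum
# at (9, 9), and we record footfall per cell (the layout is invented).
SIZE, FORUM = 10, (9, 9)
footfall = [[0] * SIZE for _ in range(SIZE)]

def step_towards(x, y):
    # Behavioural rule: head for the forum along the grid, choosing the
    # axis with the larger remaining distance more often.
    dx, dy = FORUM[0] - x, FORUM[1] - y
    if dx and (not dy or random.random() < dx / (dx + dy)):
        return x + 1, y
    return x, y + 1

for _ in range(500):                    # 500 pedestrians leave the gate
    x, y = 0, 0
    footfall[y][x] += 1
    while (x, y) != FORUM:
        x, y = step_towards(x, y)
        footfall[y][x] += 1

# The aggregate traffic pattern emerging from individual walks:
print("busiest cell saw", max(max(row) for row in footfall), "pedestrians")
```

Replacing the toy grid with a digitised street plan, and the single destination with shops, fountains and temples, yields the kind of footfall maps used to study street-level activity in cities such as Pompeii.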
Wilson’s spatial interaction model (Rihll and Wilson model)
Lead authors: Eleftheria Paliou and Tymon de Haas
Definition: Alan Wilson’s spatial interaction model was developed in the 1960s and 1970s to examine the growth of modern retail centres. It has been used in archaeology since the late 1980s to estimate flows of people, goods and ideas between settlements, mainly to explore the evolution of settlement systems and the emergence of regional centres within a given settlement distribution. To date, this model has been applied by archaeologists and historians with a relative degree of success in different cultural contexts, including Geometric Greece (Rihll and Wilson 1987; Evans and Rivers 2017), Bronze Age Crete (Bevan and Wilson 2013; Paliou and Bevan 2016), Middle Bronze Age and Iron Age Syria (Davies et al. 2014; Palmisano and Altaweel 2015) and Latenian Europe (Filet 2017).
Roman example: Ongoing research by Paliou and de Haas (forthcoming) applies Wilson's model to the study of long-term settlement evolution in the Pontine region in three successive periods (Archaic 600–350 BC, Republican 350–50 BC and Imperial 50 BC–AD 250). This work suggests that non-urban settlements (minor centres) would have gained an advantage in local interaction networks due to their location within the settlement structure. The model proved more successful in highlighting important settlements in periods of intensive economic integration, when interaction- and trade-maximising strategies were adopted, and less so in phases when settlement location was determined by other socio-political considerations (for instance, military strategy).
Potential for Roman Studies: The application of Wilson’s model may offer new insights into the evolution and functioning of regional settlement systems in the Roman World in several ways: it can help identify potentially important settlements in areas that are not well documented; it can add a new perspective to the study of urban systems and the complementary role of non-urban sites in such systems; it can offer insights into the geographic or socio-political factors that mostly affected settlement and population growth; and finally, when combined with material culture analysis, it can contribute to a better understanding of the transmission of cultural traits and patterns of economic exchange.
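The dynamics at the heart of Wilson's model can be sketched as follows: the flow from settlement i to centre j is proportional to O_i * W_j^alpha * exp(-beta * c_ij), and each centre's attractiveness W_j grows or shrinks depending on whether the inflow it attracts exceeds its maintenance cost. The settlements, distances and parameter values below are invented for illustration:

```python
import math

# Three invented settlements with symmetric distances in km.
names = ["A", "B", "C"]
origins = [100.0, 80.0, 60.0]              # outflows O_i (e.g. population)
dist = [[0, 10, 30], [10, 0, 15], [30, 15, 0]]
W = [1.0, 1.0, 1.0]                        # attractiveness of each centre
ALPHA, BETA, EPS, K = 1.05, 0.15, 0.01, 1.0

for _ in range(2000):
    # Flow from i to j rises with attractiveness W_j and decays with distance.
    inflow = [0.0, 0.0, 0.0]
    for i in range(3):
        weights = [W[j] ** ALPHA * math.exp(-BETA * dist[i][j])
                   for j in range(3)]
        total = sum(weights)
        for j in range(3):
            inflow[j] += origins[i] * weights[j] / total
    # Centres grow when inflow exceeds their maintenance cost K * W_j.
    W = [max(1e-6, w + EPS * (inflow[j] - K * w)) for j, w in enumerate(W)]

print({n: round(w, 1) for n, w in zip(names, W)})
```

At equilibrium the total attractiveness absorbs the total outflow, and with alpha above 1 the model tends to concentrate growth in a few well-placed centres, which is how it is used to identify potential regional centres within a settlement distribution.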
Key Urban Phenomena
The science of cities
Lead author: Eleftheria Paliou
Definition: The science of cities (Batty 2013; Townsend 2015) is an emerging interdisciplinary field of research that seeks to understand the complex mechanisms underlying urban growth, the structure and internal order of cities, and the interactions that tie urban systems together. It builds strongly on complexity science, social physics, urban economics, transportation theory, regional science, urban geography and network science. The Science of Cities uses computational and quantitative methods to explore urban phenomena, such as network analysis, spatial interaction modelling, settlement scaling, space syntax, agent-based modelling and urban simulation.
Roman example: All research applying the methods mentioned above to the study of urban phenomena falls under the umbrella term of the Science of Cities. For specific examples see the entries 'Rank-size distribution', 'Pedestrian modelling', 'Wilson's spatial interaction model' and 'Settlement scaling theory'.
Potential for Roman Studies: By adopting a quantitative approach, the Science of Cities allows diachronic and cross-cultural comparisons of settlement systems and can lead to new insights into Roman urbanism from a cross-disciplinary and comparative perspective. Conversely, the Science of Cities holds the potential of enhancing the scientific impact of Roman studies on other disciplines, by highlighting the value of Roman-era material and textual evidence for evaluating scientific theories and models about the effects of urbanisation on human societies through time.
Rank-size distribution
Lead authors: Francesca Fulminante and John W. Hanson
Definition: A correlation between a settlement's size and its rank within a given regional unit, which can be described as a mathematical rule. While there is still debate on the theoretical underpinnings and the mathematical explanation of the rule, it appears to be generally predictive, at least at the empirical level.
Roman example: The rule has been applied to both settlement and funerary data within the Roman world, to investigate the degree of integration and the nature of interactions within and between regions. Its application to Early Iron Age Italy has made it possible to assess the degree of centralisation of pre-Roman communities and to compare their different trajectories towards urbanisation (Judson and Hemphill 1981; Guidi 1985; Vanzetti 2006; Fulminante and Stoddart 2012; Fulminante 2014; Stoddart forthcoming), while its application to Roman provinces such as Britain, the Iberian Peninsula and Asia Minor has shown the interdependence and reliance of these provinces on the rest of the Roman Empire (Marzano 2011; Hanson 2011; 2016; Kron forthcoming).
Potential for Roman Studies: The future application of rank-size analysis to Roman Studies can be pursued in several directions: (1) more applications to different regions and/or chronological frameworks, to allow cross-cultural comparisons and the analysis of long-term trajectories that include not only the growth but also the collapse of emerging systems (Kron forthcoming); (2) building on our recent ability to access more refined demographic estimates (Hanson 2016; Hanson and Ortman 2017) to calculate and compare urbanisation rates (Wilson 2011; Hanson 2016; Hanson and Ortman 2017); and (3) integration, at both an empirical and a theoretical level, with the study of networks and complex systems within the framework of new work on settlement scaling theory (see 'Settlement scaling theory' below).
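In its simplest form the rank-size rule states that settlement size falls off as a power of rank, size = C * rank^(-q), with q close to 1 corresponding to the classic Zipfian pattern. The sketch below fits q to an invented list of settlement sizes by least squares on the log-log values:

```python
import math

# Hypothetical settlement sizes (in hectares) for one region, largest first.
sizes = [120, 58, 41, 28, 25, 19, 16, 14, 12, 11]

# Fit size ~ C * rank**(-q) by ordinary least squares on the log-log values.
xs = [math.log(rank) for rank in range(1, len(sizes) + 1)]
ys = [math.log(size) for size in sizes]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
q = -slope
print(f"estimated rank-size exponent q = {q:.2f}")
```

Exponents well above 1 (a 'primate' distribution dominated by one centre) or well below 1 (a 'convex' distribution of similarly sized settlements) are the diagnostic deviations used to compare regional integration.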
Settlement scaling theory
Lead authors: John W. Hanson and Matthew J. Mandich
Definition: Settlement scaling theory describes how the attributes of settlements change with population. It is based on the empirical observation that various aggregate properties of settlements scale systematically with population: socio-economic outputs increase faster than population, while the division of labour, resource use and infrastructure increase more slowly than population (Bettencourt 2013). The model based on this observation applies to both ancient and modern contexts, providing statistical averages across settlements (it does not necessarily make specific predictions about individual sites).
Roman example: Initial work has shown that there is a consistent relationship between the inhabited areas and numbers of residential units in a selection of settlements throughout the Greek and Roman world (Hanson and Ortman 2017). This approach gives us independent estimates for the size of settlements, allowing us to explore the relationships between their inhabited area, design, and amenities for the first time.
Potential for Roman Studies: Scaling relationships have also been found between the sizes of settlements and the attested numbers of associations (such as collegia), shedding new light on the ability of settlements to foster division of labour (Hanson et al. 2017). More recent research suggests similar correlations existed between the sizes of ancient settlements and their infrastructure, including fora, agorae, and street networks (Hanson et al. in press). While there is still no direct evidence for relationships between the scale of settlements and their outputs, further research in this direction could prove pivotal to our understanding of ancient economies and urbanisation (Mandich 2016).
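The arithmetic of the theory is straightforward: aggregate properties follow Y = Y0 * N^beta, with Bettencourt (2013) proposing beta near 7/6 for socio-economic outputs and near 5/6 for infrastructure. Treating these as indicative values, the sketch below shows what they imply when a settlement's population doubles:

```python
def scaled_output(pop, y0=1.0, beta=7/6):
    """Settlement scaling: aggregate quantity Y = y0 * pop**beta."""
    return y0 * pop ** beta

# Socio-economic outputs (superlinear, beta = 7/6): doubling population
# more than doubles output.
small, large = scaled_output(1000), scaled_output(2000)
print(f"doubling population multiplies output by {large / small:.2f}")

# Infrastructure (sublinear, beta = 5/6) grows less than proportionally.
infra_ratio = scaled_output(2000, beta=5/6) / scaled_output(1000, beta=5/6)
print(f"...but infrastructure by only {infra_ratio:.2f}")
```

Fitting beta to Roman data (e.g. inhabited areas against counts of residential units or collegia) is how the theory is tested against the archaeological record.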
The social reactor model
Lead authors: John W. Hanson and Matthew J. Mandich
Definition: An important tenet of settlement scaling theory is that settlements act as ‘social reactors’, which increase the number of opportunities for individuals to interact, therefore increasing their chances to share resources, divide tasks, and exchange knowledge, skills, and ideas (Bettencourt 2013). This is associated with both increasing economies of scale and increasing returns to scale, leading to economic growth. The social reactor model, therefore, has important implications for our understanding of the links between agglomeration and economic development, since it suggests that there are systematic, predictable relationships between them, which might even be open-ended.
Roman example: Recent research has used this model to investigate the relationship between the sizes of ancient settlements and the overall numbers of activities that occurred within them via associations (Hanson et al. 2017). In theory, as the densities of settlements increase, individuals will, on average, have larger numbers of social contacts, allowing them to concentrate on a narrower range of tasks while relying on others to fulfil those remaining. This seems to be borne out by the available data, suggesting a link between urbanisation and both division of labour and specialisation/diversification, as well as with economic development more generally.
Potential for Roman Studies: The social reactor model should have broad applicability in Roman studies given that it is based on the simple conception of settlements as containers for human interaction that increase their efficiency and output with size. While certain scaling relationships between the size, number of residential units, and communal infrastructure in ancient settlements have been found to exist (Hanson and Ortman 2017; Hanson et al. in press), it should also be possible to investigate the dependencies between individuals, resources, and wealth, assuming workable data from proxies for socio-economic outputs can be used appropriately (Mandich 2016).
Towards Complexity Science in Roman Studies
A complexity science theoretical framework implemented through formal modelling tools has great potential to lead to new insights on a wide range of topics in Roman Studies. In this paper, we have introduced complexity science and formal modelling and provided arguments as to why they should be adopted as tools of the trade in Roman Studies. An encyclopaedia-style overview of the different concepts and computational techniques included in the approach shows how it can contribute to our understanding of many Roman phenomena, and offers inspiration and bibliographical pointers to any Romanist interested in using these approaches in their own research.
The constructive integration of complexity science in Roman Studies requires the adoption of an open scientific process. It should be clear that this manifesto does not claim all past phenomena can or should be studied through formal scientific approaches. However, we do argue that for those aspects of past phenomena that can be studied using these formal approaches, Romanists should enable their use where possible (either for themselves or for colleagues) by specifying theories in detail, unambiguously describing all concepts within the theories, formally describing empirical data patterns, and making the datasets used openly accessible. Enabling unambiguous formal description of theories, their formal testing with open archaeological data, and the reproduction of results obtained through computational experimentation is a prerequisite to allowing formal approaches to complexity to contribute constructively to Roman Studies. This manifesto argues that this is now possible thanks to the recent increase in the number of open access models of Roman Studies theories and openly available digitized Roman datasets (for an aggregation of these resources, see https://projectmercury.eu), as well as the existence of a community of scholars, including the authors and signatories of this paper, who aim to implement this manifesto in future research.
We believe complexity science has great potential for enhancing Roman Studies, but achieving this potential will require close collaboration between scholars with different expertise. If the final aim of this manifesto is to make important and substantial gains in our understanding of the Roman world, then particularly computer-literate advocates of the approach, such as the authors of this paper, simply cannot and should not pursue it in isolation. Instead, formal modelling and complexity science should be considered a field of expertise in Roman Studies in its own right (alongside archaeobotany, epigraphy and ceramology, to give but a few examples). Scholars with this expertise should collaborate with those with other specialisms to identify which research questions and phenomena can be appropriately studied through critical data analysis and formal modelling. Moreover, they should provide the resources and guidelines that make it possible for other Romanists, not themselves aiming to work within this framework, to enable future formal modelling of their theories and the use of their datasets. The authors of this manifesto are committed to this cause and firmly believe in the need for constant constructive collaboration to usefully embed complexity science and formal modelling in Roman Studies.