Cellular Automata for Physical Modelling
Current game environments are too static. The sorts of things that move in games are restricted to either small, discrete objects such as vehicles and people, or occasionally larger mechanical or pre-scripted objects. In some cases, the water level in a container can move in scripted ways, but it is only a single horizontal plane that moves up or down, and there is no way for the player to directly interact with it.
Many less concrete real-life phenomena – water, fire, smoke, moving air – are missing, even though they would play a large part in the sorts of environments that games are set in.
Some of these features are in some games, but only in heavily scripted and constrained ways – frequently they play little part in actual gameplay, and so can look rather artificial – which of course is exactly what they are. Using Cellular Automata (CA) to simulate these ideas can lead to far more dynamic and realistic behaviour, and allow new types of gameplay and new tactics within games. At the very least, they allow more realism, better graphical rendering and so increase player immersion.
Cellular Automata (CA), and their close relatives Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) [CFD ref] are already used in plenty of applications modelling air and water flow, heat distribution, building stresses and strains, and many other aspects of the real world. However, the main emphasis of the academic and commercial modellers is accuracy. As games programmers, our only real concern is whether something looks good enough and runs fast, and in almost every case, the simulation can be enormously simplified while still looking perfectly correct to most people.
The basics of a CA are simple. The world is divided into a grid of fixed-size cells. Each cell has various numbers associated with it to represent its state. Usual values held in cells are the air pressure, temperature, amount of water, which direction the water and/or air is flowing in, and so on.
Each game turn, every cell is processed, and it compares itself with its neighbouring cells. Differences between them result in changes to the state of the cell and/or its neighbours according to various laws. In this article, these will be based very loosely on real physical laws. One of the best-known CAs is “Conway’s Game of Life” [Conway ref]. This is an extremely simple CA – it has a single bit of state – whether the cell is full or not – and some extremely simple rules for changing state according to the state of neighbouring cells. Nevertheless, even this incredibly simple model is Turing-complete [Conway Turing ref].
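To illustrate just how little machinery a CA needs, here is a minimal sketch of one update step of Conway’s Life (not code from the article – the grid size and names are arbitrary). Every cell counts its eight neighbours and applies the standard birth/survival rules:

const int W = 32, H = 32;

// One update step of Conway's Life on a W x H grid of bools.
// 'grid' is the current state, 'next' receives the new state.
void LifeStep ( const bool grid[H][W], bool next[H][W] )
{
    for ( int y = 0; y < H; y++ )
    {
        for ( int x = 0; x < W; x++ )
        {
            int liveNeighbours = 0;
            for ( int dy = -1; dy <= 1; dy++ )
            {
                for ( int dx = -1; dx <= 1; dx++ )
                {
                    if ( dx == 0 && dy == 0 ) continue;
                    int nx = x + dx, ny = y + dy;
                    if ( nx < 0 || nx >= W || ny < 0 || ny >= H ) continue;
                    if ( grid[ny][nx] ) liveNeighbours++;
                }
            }
            // Standard rules: a live cell survives with 2 or 3 live
            // neighbours; a dead cell becomes live with exactly 3.
            if ( grid[y][x] )
                next[y][x] = ( liveNeighbours == 2 || liveNeighbours == 3 );
            else
                next[y][x] = ( liveNeighbours == 3 );
        }
    }
}

The physical CAs in the rest of this article follow exactly the same pattern – visit each cell, look at its neighbours, update some state – the state and the rules are just richer.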
The CAs used in games will have rules based on various physical models, and determine the amount of heat, air, water or smoke that is transferred between neighbouring cells. Run the rules quickly enough on enough cells, and water will flow downhill and find a level, gently heated air will form convection currents, and strongly heated air will burn objects and in turn be heated by the burning objects.
In a 3D array of cubic cells, there are three possible definitions of “neighbour” cells: the 6 cells that share a face with the central cell; the 18 cells that share a face or an edge with it; or the 26 cells that share a face, an edge or just a corner with it.
Surprisingly, the rules that are used for physics simulations give almost the same sort of results, whichever of the three definitions we use. Of course the first version is far simpler, and there is only one type of neighbour cell, rather than three. For this reason, it is far easier to only consider the 6 cells that share a face with the central cell to be neighbours.
The first thing to do is pick a physical size of a CA cell. For human-sized games, we decided to use cubes that are half a metre across. Any bigger, and a CA cell of air will not fit inside a narrow passageway. Smaller cubes give higher resolution and allow smaller pipes, narrower gaps, and so on – but the extra space and processing is expensive. Different scales of games will naturally require a different size of CA cell, but because most games are set at human scales, for convenience this article will assume a scale of half-metre cube cells in its examples.
Another important consideration in a human-sized game is how to model thin walls. Most internal house walls and doors are only a few centimetres thick. They will stop water flow, and slow fire down, and stop smoke & air spreading, so they must be modelled in some way. Modelling them conventionally by using many small cells, and marking those occupied by the wall as solid, would require using cells of no more than about ten centimetres across, which requires 125 times as many cells – an extremely expensive option.
Two possible solutions present themselves. One method, used by the first in the X-Com series of games in their impressive and innovative use of CAs [X-Com ref], is to model the faces between the cells as entities, as well as the cells themselves. So walls, floors and ceilings always lie between two cells, along cell faces. This works quite well, but it does mean that there are now two distinct classes of object – things that fill a whole cube (rock, dirt, furniture, tall grass); and things that sit between two cells (walls, floorboards, short grass, doors). This creates annoying special cases in the code used to model substances and their interactions, and causes code replication between the two types, and debugging spaghetti. However, if this model fits, then it is a viable one, and it is fairly intuitive – the internal representation of objects matches their rendered shape fairly closely.
The other solution is one that retains its generality without resorting to many tiny cells. This is far more flexible about where the visual “edges” of the cells are in the world. Rather than the concept of a fixed solid cubic cell, the edges between cells can move about a bit according to the contents. This allows a thin wall to be chopped into half-metre squares, and each square lives in a cell. Because the walls are only a few centimetres thick, neighbouring cells are thought of as expanding to make up the extra space. This “expansion” is simply a way of thinking about it – the CA code itself does not know or care what shape the objects it represents are. As far as the CA physics are concerned, everything is still half a metre thick. Most of the work in making things look otherwise is in the rendering, rather than in the CA routines. It is the job of the rendering to ensure that water goes all the way to the wall’s mesh, and not just the edge of the CA cube, which would leave a large gap.
In this scheme, a one-metre-wide corridor with thin wooden walls is represented by a plane of “wood” cells, a plane of “air” cells, and then a plane of “wood” cells. Since the centres of the cells are each half a metre away from each other, the total width from wall to wall is still one metre. Of course, the graphical representation of the world still shows that the “cubes” of wood are not cubes at all but flat planes a few centimetres thick, and this is the representation that will be used for any collision-detection, but the distinction makes very little difference to the things that are modelled with the CA – water, air, fire. Because these entities are fairly amorphous, the difference between what is rendered and what is actually being modelled is very hard for the player to see. Again, accuracy is sacrificed for speed wherever the game can get away with it.
The next factor to consider is a gameplay decision – the difference between using passive scenery and active scenery.
With passive scenery, as far as the CA is concerned, the scenery is inert. Water will flow around scenery; fire, air and heat will be stopped by scenery. But the scenery is not affected by the actions of the CA in any way – it does not burn, it does not get damaged, it does not get wet, or move in currents. This is the simpler of the two representations, but still allows discrete objects such as the ubiquitous oil drum and crate to float away on rivers of water, or to explode or burn when heated by fire.
Because the CA only knows about cells, not polygons, the scenery must be converted into a cell representation – usually as a pre-processing step. These cells are simply marked as inert rock or similar, and their only function is to ensure that water cannot flow into them, and that heat is not exchanged with them. Of course, the scenery is usually a collection of arbitrary polygons, and not aligned to cell boundaries at all, but the things being modelled with CAs are so amorphous that this difference does not matter in practice. As long as solid walls are represented by a continuous wall of cells with no holes in it, water will not flow through and break the illusion.
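As a very rough sketch of such a pre-processing step (not from the original article – the Cell structure, the InertRock material and the IsPointInsideScenery() query are all placeholders for whatever the engine actually provides), each cell centre can simply be tested against the scenery and marked solid if it falls inside:

const float CellSize = 0.5f;    // half-metre cells, as discussed above

// Hypothetical query supplied by the engine's collision system.
bool IsPointInsideScenery ( float x, float y, float z );

void VoxeliseScenery ( Cell *cells, int nx, int ny, int nz )
{
    for ( int z = 0; z < nz; z++ )
    for ( int y = 0; y < ny; y++ )
    for ( int x = 0; x < nx; x++ )
    {
        // Test the centre of each cell against the polygonal scenery.
        float px = ( x + 0.5f ) * CellSize;
        float py = ( y + 0.5f ) * CellSize;
        float pz = ( z + 0.5f ) * CellSize;
        if ( IsPointInsideScenery ( px, py, pz ) )
        {
            cells[ ( z * ny + y ) * nx + x ].Material = &InertRock;
        }
    }
}

A single point test like this will miss thin walls, so a real tool needs more careful overlap testing to guarantee the continuous, hole-free wall of cells described above.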
Even in this system, special cases should be made for doors that can be opened, and other animating or moving objects. When doors are open, they should remove (sliding doors) or move (swinging doors) the solid cells that represent them, so that water and fire can move through them.
The far more versatile, though also more adventurous option, is to have the scenery modelled by the CA as well, and for fire, water and so on to affect bits of the scenery. This also opens up the “totally destructible world” concept that many are looking to as the next big thing in games, though as with everything, there is nothing truly new in computer games [XCom ref again].
In this system, rather than simply being cells of inert material, scenery is modelled by its actual properties – current temperature, how easily and fiercely it burns, how strong it is, and so on. As the cells of the CA change their state according to the physical rules of the CA, so the graphics engine changes how it renders the associated polygonal objects – they become sooty, damaged, glow red hot, or (if the graphics engine can handle it) they vanish altogether.
In the latter case, the graphics engine can either be of the “Geo-Mod” type [Red Faction ref], or the object itself can simply have been specially marked as destructible, such as a thin wooden wall, and have an alternative “broken” representation, as done by current engines when dealing with damage by weapon fire.
Those considering implementing these CA methods will have quickly noticed that storing half-metre cells for even a modestly-sized level consumes a huge amount of memory, and the processing and memory-bandwidth requirements become severe. The trick is to not store or process cells that are not participating in any interesting activities – notably inert walls, and air at ambient or “standard” temperature and pressure (STP).
An octree is ideally suited to storing this arrangement, specifically a dynamically-allocated octree. In any implementation of the octree, remember that by far the most common operation in a CA is “find the cell next to this one”, so optimising for this type of operation when implementing the octree will pay off in terms of speed. If this request is made, and there is no neighbouring cell in the octree, it is assumed that the neighbouring cell is air at standard temperature and pressure. The physical simulations are carried out accordingly, and if they result in the “missing” cell becoming significantly different from standard temperature and pressure, a cell with the new properties is created and inserted in the octree. When an air cell returns to within a certain tolerance of standard temperature and pressure, it is deleted from the octree and is no longer processed.
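As an illustrative sketch of the “find the neighbour, defaulting to air at STP” operation (not taken from the article – the Octree interface, the FindCell/InsertCell names and the IsApproximatelySTP() test are assumptions about how such a structure might look):

// Air at standard temperature and pressure - the implicit default for
// any cell that is not stored in the octree.
static const Cell DefaultAirCell = { /* air at STP values */ };

Cell *GetNeighbour ( Octree &tree, int x, int y, int z, Cell &scratch )
{
    Cell *found = tree.FindCell ( x, y, z );       // assumed octree query
    if ( found != NULL )
    {
        return found;
    }
    // No cell stored here: treat it as air at STP. Hand back a scratch
    // copy that the simulation is free to modify.
    scratch = DefaultAirCell;
    return &scratch;
}

// After the physics step, only keep the scratch cell if it has become
// significantly different from STP.
void CommitNeighbour ( Octree &tree, int x, int y, int z, const Cell &scratch )
{
    if ( !IsApproximatelySTP ( scratch ) )         // assumed tolerance test
    {
        tree.InsertCell ( x, y, z, scratch );      // assumed octree insert
    }
}

The same tolerance test, run on cells already in the tree, is what allows them to be deleted again once they settle back to STP.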
The octree holding CA cells can also be useful as a general-purpose octree. Many games use octrees to optimise collision detection and culling when rendering (e.g. frustum culling), and there is no reason this one cannot fulfil both roles, and hold objects not directly related to the CA. A fairly easy adaptation to the search algorithms allows the octree to become a “loose octree,” [ref Thatcher Ulrich] which has several other advantages over a conventional octree. This does not change its behaviour when dealing with the CA aspect of its behaviour, since all CA cells are aligned to regular intervals and have a fixed size.
The main thing to remember when writing CA physics routines is to keep things simple. It is surprisingly easy to write very simple routines that look perfectly good to the eye, despite taking major physical shortcuts. As long as the basics of conservation of mass and energy are retained (and the latter is frequently optional), most of the other code deals with keeping the simulations stable, ensuring that numbers do not spiral out of control and become very small or (usually worse) very large.
The major problem encountered during implementation was finding good simple models of the various physical features. Most of the standard references deal with the application of Navier-Stokes equations for various materials, and implementing them with as little error as possible. This enormously complicates the code, and most of the academic and commercial literature is concerned with these error reductions. For games, what is required is simplicity, not accuracy. Most of the time, finding implementations involved getting the general feel of the behaviour from the literature, then writing something only vaguely approximating it.
Most of the properties simulated by the CA work in similar ways. To illustrate these common methods, here is a very simple fluid simulation that simply tries to achieve an even distribution of pressure throughout the available space. Even this simple model is very useful for air and fluid modelling, even though it misses out some major features of real gases and fluids.
for ( neigh = each neighbour cell )
{
    if ( neigh->Material->IsInert() )
    {
        continue;
    }
    // Flow from this cell to the neighbour, proportional to the
    // pressure difference and the material's flow rate.
    float DPress = cell->Pressure - neigh->Pressure;
    float Flow = cell->Material->Flow * DPress;
    // Clamp so that neither cell's pressure can be driven negative.
    Flow = clamp ( Flow, -neigh->Pressure / 6.0f, cell->Pressure / 6.0f );
    cell->NewPressure -= Flow;
    neigh->NewPressure += Flow;
}
The clamp() operation is performed to prevent NewPressure going negative. The division by six is because there are six neighbour cells. In practice, even more damping may be needed to retain stability, and prevent numbers going negative, and small oscillations (e.g. waves on the surface of water) becoming large, unrealistic oscillations.
Conventionally, once all the cells have been processed in this way, for each cell, all the NewPressure values are copied to the Pressure values. This double-buffering is necessary, rather than simply writing directly to Pressure at the end of the routine, otherwise pressure will be transmitted extremely fast (sometimes instantly) in the direction that the cells are visited during an update cycle, and much slower in the reverse directions. This produces asymmetry in heat distribution, water flow and other processes, and the effect is very obvious to players.
The double visit to each cell can hurt performance considerably, especially as the second visit is simply a copy, and will be limited by memory bandwidth on most modern CPUs. A better method is to store the last turn that a cell was processed. When subsequently processing that cell, the turn number is checked, and if it is earlier than the current turn, the copy is done. Although slightly odd-looking, this is in fact much quicker than scanning the whole array of cells twice. The code becomes:
if ( cell->Turn != CurrentTurn )
{
    cell->Turn = CurrentTurn;
    cell->Pressure = cell->NewPressure;
}
for ( neigh = each neighbour cell )
{
    if ( neigh->Material->IsInert() )
    {
        continue;
    }
    if ( neigh->Turn != CurrentTurn )
    {
        neigh->Turn = CurrentTurn;
        neigh->Pressure = neigh->NewPressure;
    }
    // ... same physics code as before ...
}
The above simple model works well for uniform redistribution of air pressure. At first glance, this is not something that is frequently modelled in games, but in fact it is one of the commonest effects – explosions, and their effects on things. An explosive is simply a lump of material that produces a huge amount of air in a very short time. They can be modelled by finding the nearest CA cell to the centre of an exploding grenade, and adding a large number to the cell’s pressure, then letting the CA propagate the pressure through the world. Damage is done to the surroundings by either high absolute pressures or high pressure differences – in reality both do different sorts of damage to different objects, but that is usually unnecessary complication for the purposes of a game.
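A minimal sketch of triggering an explosion in this scheme (again not code from the article – the FindCellContaining() lookup, the RequestHighUpdateRate() helper and the idea of a single per-weapon yield value are assumptions):

// Detonate a grenade: dump a large amount of pressure into the cell
// nearest the explosion centre and let the CA spread it from there.
void Detonate ( Octree &tree, const Vector3 &centre, float explosiveYield )
{
    Cell *cell = FindCellContaining ( tree, centre );    // assumed lookup
    // explosiveYield is in the same (arbitrary) pressure units as the CA,
    // tuned per weapon.
    cell->Pressure += explosiveYield;
    // Explosions need very high update rates - see the section on
    // variable update rates below.
    RequestHighUpdateRate ( cell );                       // assumed helper
}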
The advantage of this method of modelling over conventional ones is that line-of-sight modelling is handled automatically. Explosions in confined spaces are far more deadly at a given range than explosions in open spaces, because there is less space for the pressure to dissipate. In addition, it shows that pure line-of-sight is not protection enough from explosions – they do go around corners and obstructions to a certain degree.
Because the simulation of the flow of air is qualitatively correct to the human eye, debris and small objects can be carried along with the explosion, without having to worry about the illusion being shattered by debris going the wrong way, or through solid walls.
Water is only slightly more complex than air. The obvious distinction is that air expands to fill the available space with cells changing pressure to do so, while water stays at the bottom of its container, and is incompressible.
In fact, the easiest way to simulate the transmission of pressure through water is to make it slightly compressible. This means pressure can be stored as a slight excess mass of water in the cell, above what the cell’s volume should be able to hold. In practice, the amount of compression needed is tiny – allowing just 1% more water per cell per cube height is easily enough. In a static body of water whose cells can normally contain one litre of water each, the cells at the top will contain one litre, the ones under them will contain 1.01 litres, the cells under those will contain 1.02 litres, and so on to the bottom. This tiny amount of compression will be completely unnoticeable to the player, but has enough dynamic range to allow all the usual properties of liquids. For example, the levels of water in two containers joined by a submerged pipe will be the same, even if water is poured into one of them – it will flow through the pipe to the other container.
if ( neighbour cell is above this one )
{
    if ( ( cell->Mass < material->MaxMass ) ||
         ( neigh->Mass < material->MaxMass ) )
    {
        // At least one cell is not full - the surface of the water, or
        // splashing/falling water. Water simply flows downwards until
        // the lower cell holds MaxMass.
        Flow = cell->Mass - material->MaxMass;
    }
    else
    {
        // Both cells are full - pressurised water in the middle of a
        // body. Move towards the lower cell (this one) holding
        // MaxCompress more water than the upper one.
        Flow = cell->Mass - neigh->Mass - material->MaxCompress;
        Flow *= 0.5f;
    }
}
else if ( neighbour cell is below this one )
{
    if ( ( cell->Mass < material->MaxMass ) ||
         ( neigh->Mass < material->MaxMass ) )
    {
        Flow = material->MaxMass - neigh->Mass;
    }
    else
    {
        Flow = cell->Mass - neigh->Mass + material->MaxCompress;
        Flow *= 0.5f;
    }
}
else    // neighbour is on the same level
{
    Flow = ( cell->Mass - neigh->Mass ) * 0.5f;
}
This Flow value is then scaled and clamped according to some measure of the maximum speed that the fluid can flow, allowing some fluids to appear more viscous than others, and to prevent any resulting masses going negative.
The two branches in each of the up and down cases deal with different situations. The first branch handles the case where one of the two cells is not full of water – the surface of a body of water, or water that is splashing or falling (for example, in a waterfall). Here, the behaviour is simple – water flows downwards to fill the lower of the two cells up to the value MaxMass, the mass of water that can be contained by a single cell’s volume. In the example above, the mass of 1 litre of water.
The second branch handles the case where both cells are full of water, or perhaps a little over-full. This is water under pressure – cells in the middle of a body of water. Here, the flow acts to make the lower cell hold exactly MaxCompress more water than the upper cell. MaxCompress is the amount of “extra” water that can be fitted in because of compression – in the example above, it would be the mass of 0.01 litres of water.
So far the air and water models have ignored a fairly important property of any liquid or gas – its speed of flow. They have simply taken the difference in pressures between two cells, and used that to move mass around. This is fine for relatively static environments that we wish to bring to a stable state (uniform air pressure, or water finding its level). Many games will only use these simple properties for all the gameplay and realism they need.
However, what happens in real life is that water and air have momentum (which equals flow times mass), and the difference in pressure only influences the flow between cells, it does not rigidly set it. Storing momentum or flow is important when modelling waves, flowing rivers and air currents. Although rivers can flow in models without momentum, they usually have a very visible slope, which looks very bizarre.
To model momentum (or, alternatively, speed of flow), each processing step, the difference in masses determines the pressure gradient, as before. However, instead of changing the masses of the cells directly, the pressure gradient only alters the flow between the cells. The flow then changes the masses in the cells. The code is slightly more complex, as flow is a three-dimensional vector, and not a scalar like mass.
There are two possible ways to think about flow. The first is to think of a flow vector as being the flow through the centre of the cell. This is possibly the most intuitive model – the flow and the mass of the cell are both measured at its centre. However, in this case, the flow is affected by the pressure differential between the two neighbouring cells, and in turn determines how mass flows from one neighbouring cell to the other. Note the slightly odd result that for a particular cell, the flow stored in it is not affected by the mass in the cell itself (only its neighbours), nor does it change the mass of the cell (only its neighbours). This is a slightly surprising result, and in some cases can lead to some odd behaviour.
The more useful model is to think of each component of the flow vector as being the flow between two adjacent nodes – from the “current” node to the node in the positive relevant direction. Thus the flow vector F stored at cell (x,y,z) has the meaning that F.x is the flow from cell (x,y,z) to cell (x+1,y,z); F.y is the flow from cell (x,y,z) to cell (x,y+1,z); and similarly for F.z. The “meaning” of the vector F is now not as intuitive, but the physical model does seem more sensible. In practice, this is the most common model, but either model can be used for simulation with appropriate adjustment of the various constants.
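As a rough sketch of this second model (not from the original article – the function and constant names are illustrative, and the constants would be tuned by experiment), each component of a cell’s flow vector is accelerated by the pressure gradient towards the cell in the positive direction, damped, and then used to move mass between the two cells:

// Update the x-component of flow between cell (x,y,z) and cell (x+1,y,z).
// The same pattern is repeated for the y and z components.
void UpdateFlowX ( Cell *cell, Cell *neighPosX )
{
    // The difference in masses gives the pressure gradient, which
    // accelerates the flow between the two cells...
    float DPress = cell->Mass - neighPosX->Mass;
    cell->Flow.x += DPress * FlowAcceleration;      // tuning constant

    // ...and friction damps the flow, otherwise waves build up rather
    // than dying down.
    cell->Flow.x *= FlowDamping;                    // e.g. 0.95f, tuned

    // It is the flow, not the pressure gradient, that actually moves
    // mass. A full version would also clamp this so that neither cell's
    // mass can go negative.
    float Moved = cell->Flow.x;
    cell->NewMass      -= Moved;
    neighPosX->NewMass += Moved;
}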
The most important thing in this model is to keep a very firm grip on oscillations. Not only does this model allow waves, it tends to encourage them to build up, and sufficient damping must be applied to the flow (i.e. friction), otherwise waves can build up higher and higher instead of dying down, and the liquid or gas starts to do very odd things indeed.
It is worth mentioning that although one of the most common applications of flow is in rivers, in most “human-sized” games, large bodies of water such as lakes and rivers are frequently far too large to participate in gameplay. Their behaviour will stay fairly constant whatever the player does, and if they do change, they will do so in highly constrained ways. They do not usually require the flexibility of a CA, and are often far better modelled and rendered in more conventional ways – pre-animated meshes and collision models, and scripted events. However, there are many other genres that operate on larger scales and will want to properly simulate rivers with a CA.
Transmitting heat through the environment, whether from burning objects or from other sources, happens through three separate mechanisms – conduction, convection and radiation.
Conduction is the simplest to simulate. Neighbouring cells pass heat energy between each other so that eventually they reach the same temperature as each other. This is complicated because different materials are heated by different amounts by the same amount of energy – called the Specific Heat Capacity (SHC – usually measured in J/kg°C). If a hot cell made of water (high SHC, hard to heat up) is next to a colder cell made of the same mass of iron (low SHC), equilibrium will be reached at somewhere very close to the original temperature of the water, not at the average of the two temperatures. This is because when a given amount of energy is transferred from the water to the iron, the water’s temperature drops far less than the iron’s temperature rises.
Note that the above example is true for the same mass of each substance. However, iron has a far greater density than water, and therefore for the same volume, they have very similar heat capacities.
// Find current Heat Capacities. float HCCell = cell->material->SHC * cell->Mass; float HCNeigh = neigh->material->SHC * neigh->Mass; float EnergyFlow = neigh->Temp - cell->Temp; // Convert from heat to energy if ( EnergyFlow > 0.0f ) { EnergyFlow *= HCNeigh; } else { EnergyFlow *= HCCell; } // A constant according to cell update speed. // Usually found by trial and error. EnergyFlow *= ConstantEnergyFlowFactor; neigh->Temp -= EnergyFlow / HCNeigh; cell->Temp += EnergyFlow / HCCell; // Detect and kill oscillations. if (((EnergyFlow>0.0f)&&(neigh->Temp<cell->Temp))|| ((EnergyFlow<=0.0f)&&(neigh->Temp>cell->Temp))) { float TotalEnergy = HCCell * cell->Temp + HCNeigh * neigh->Temp; float AverageTemp = TotalEnergy / ( HCCell + HCNeigh ); cell->Temp = AverageTemp; neigh->Temp = AverageTemp; }
The code at the end is necessary if two materials with very different SHCs are side by side – the temperatures of the two can oscillate violently, and can grow out of control. The physically correct solution is to integrate the transfer of heat over time. However, this approach simply finds the weighted average temperature (i.e. the temperature that the system would reach eventually). It is less accurate, but looks perfectly good to the eye, and is quite a bit quicker to execute. Importantly, it obeys the conservation of energy, so any artefacts are purely temporary – the longer-term state is the same as a more realistic simulation.
Convection is the phenomenon of heat rising. Hot areas of fluid (air, water or otherwise) are less dense than cold areas, and thus try to rise. This can be simulated by incorporating temperature into the model of water or air. If using a flow model, the flow will be influenced by the relative temperatures of cells as well as their relative pressures. If not using a flow model, convection does not work very well, though its effects can be faked as described below in the section on fire.
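In a flow model, a very crude way to add convection (an illustrative sketch – it assumes y is “up”, and AmbientTemp and Buoyancy are tuning constants found by experiment) is to add a buoyancy term to the vertical flow component alongside the pressure term:

// Convection: cells hotter than ambient are pushed upwards, colder
// cells sink.
cell->Flow.y += ( cell->Temp - AmbientTemp ) * Buoyancy;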
Hot things glow – they emit light at various wavelengths which travels in straight lines, hits other surfaces, and in turn heats them up. This effect is very important physically, but unfortunately is also extremely expensive to model. Each source of heat must effectively shoot many rays out from itself and heat up whatever they hit.
Radiative heat modelling is very similar to the radiosity modelling that is used when creating lightmaps for many current games. Both are extremely expensive to model in runtime even crudely, though there are some cunning methods that use a heavy amount of approximation to improve the speed of radiative heat modelling. Even with these algorithms, modelling even a fraction of the radiative heat seems like a prohibitive amount of work for a game. These algorithms are also extremely complex, and do not involve the standard cell-to-cell interactions that model all the other physical properties mentioned – for both these reasons, they are well beyond the scope of this Gem. The application of radiation to fire is discussed further below.
The physics of burning materials is frequently extremely complex. There are multiple parts that burn at different rates and heats, and there are also different phases of material involved in the process.
To perform the calculations in real-time, the material models used need to be trimmed down to their minimum, and for each material, an appropriate model chosen that emphasises the main characteristic.
Of many models considered, the one that finally seemed to give best results for the least effort was a quadratic approximation of an exponential graph. This graph shows how much heat energy is released per unit time when a substance burns at a certain temperature. There is a maximum amount of energy that can be released, no matter how hot the fire gets, but even at relatively cool temperatures, a lot of heat is released. This explains why open fires tend to start small, rapidly grow to a certain size, and then not grow any bigger, but sit burning for a long time, despite there being plenty of fuel available. They are simply not generating enough heat energy to compensate for that lost to the environment (which is directly proportional to the temperature).
float Temp = cell->Temp - material->Flashpoint;
if ( Temp < 0.0f )
{
    // Not burning.
    continue;
}
// Damage the cell.
CellDamage = Temp * material->BurnRate;
float Burn;
// Convert to actual burning value.
if ( Temp > material->MaxBurn * 2 )
{
    Burn = material->MaxBurn;
}
else
{
    Burn = ( 1.0f - ( 0.25f * Temp / material->MaxBurn ) ) * Temp;
}
ASSERT ( Burn <= material->MaxBurn );
ASSERT ( Burn >= 0.0f );
// And heat the cell up from the burning.
cell->Temp += Burn * material->BurnTemp;
Note that the damage done to a cell is proportional to its actual temperature, not to how much heat is generated by its own burning. This allows materials that burn at low temperatures nevertheless to be far more severely damaged if exposed to high temperatures. By varying the factors MaxBurn and BurnTemp, anything from paper, wood and oil to gunpowder and high explosives can be simulated.
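For illustration only (these numbers are invented, not from the article – like the other constants, they would be found by experiment until each material “feels” right, and the struct layout is assumed to hold just these four burning-related fields):

//                     Flashpoint  BurnRate  MaxBurn  BurnTemp
Material Paper     = {     230.0f,    0.20f,    5.0f,     2.0f };
Material Wood      = {     300.0f,    0.05f,   10.0f,     4.0f };
Material Oil       = {     250.0f,    0.02f,   30.0f,     8.0f };
Material Gunpowder = {     180.0f,    0.50f,  200.0f,    50.0f };

The interesting differences come less from the exact numbers and more from their relative sizes – a large MaxBurn with a high BurnTemp behaves explosively, while a small MaxBurn with a modest BurnTemp smoulders slowly.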
Of course, one of the major aspects of fire is that it is hot, and thus it relies heavily on the modelling of heat flow by the three methods discussed above. In real-life fires, convection and radiation are incredibly important to their behaviour. Convection makes fire spread vertically far more easily than across floors, and leads to the distinctive “walls of fire” in burning buildings. Radiation concentrates heat in corners, causing fire to spread up the corners of a room first.
Sadly, radiative heat, as mentioned above, is extremely hard to model, and convection, although slightly more straightforward, requires large numbers of air cells around the source of the fire to be modelled and updated, which is expensive. Far better would be to find some hacks that simulate some of the features of these, without incurring the considerable expense involved.
A hack for convection effects is simply to make conduction of heat far easier in an upwards direction. In real life, a section of burning wall heats the air beside it, which rises, heats the section of wall higher up, which makes it far easier for the flames to spread upwards. In this hack, conduction of heat is made artificially asymmetrical. In the model presented above, a single factor – ConstantEnergyFlowFactor – was used for heat conduction for all six neighbors of a cell. Instead of this, a higher figure is used when conducting heat upwards, and a lower figure when conducting heat downwards.
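Concretely, this can be as simple as the following sketch (the multipliers are invented and would need tuning), slotted into the conduction code above in place of the single constant:

// Asymmetric conduction to fake convection: heat conducts upwards far
// more readily than downwards. The multipliers are illustrative only.
float FlowFactor;
if ( neighbour cell is above this one )
{
    FlowFactor = ConstantEnergyFlowFactor * 4.0f;   // much easier upwards
}
else if ( neighbour cell is below this one )
{
    FlowFactor = ConstantEnergyFlowFactor * 0.5f;   // harder downwards
}
else
{
    FlowFactor = ConstantEnergyFlowFactor;          // sideways unchanged
}
EnergyFlow *= FlowFactor;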
A hack for radiation is harder, but it is possible that some pre-computation could be done using the same techniques as radiosity to decide which parts of a room would be more susceptible to fire because of the feedback effects of radiative heat. One possibility is computing the ambient occlusion term [Ambient ref] and using that to boost the heat generated by fire – generally around the edges and corners.
A factor that may not be immediately obvious is that these hacks are far more controllable than any realistic solution. Convection in real life is a notoriously chaotic system, and small changes in conditions can cause it to adopt very different patterns of flow. This can make designing gameplay around the effect very tricky indeed. What game and level designers usually require is a high degree of control and predictability to carefully create exciting set-pieces for the player to experience. The hacks presented above are far more predictable and linear in their behaviour, which is usually a far more desirable quality in a game than absolute realism.
The nature of some of the physical properties being simulated here require high update rates to maintain realism. The flow of any property from one cell to another can only proceed at a maximum speed of one cell per update cycle. Fire may spread quickly – metres per second or faster. Water spilling from a container may move faster – tens of metres per second. Explosions require extremely high update rates – real-life explosion shock waves spread at the speed of sound – roughly 340m/s.
Simulating all the above implies that update rates of 680 cycles per second may be required. This is an awesome speed, and it seems unlikely any current platform can sustain these sorts of update rates for a decently-sized game world – the number of cells to update per second is enormous.
As with the optimisation of not storing or processing cells of standard temperature and pressure (STP), it is possible to use the octree to reduce the update rates for cells that do not require fast updates to maintain realism.
When a cell is processed, it decides how fast it needs to be updated to maintain a good simulation, based on its current state. Cells involved in explosions require high update rates; cells holding flowing water, burning objects or high heat need medium update rates; cells with fairly static water require lower update rates; and cells that hold scenery at ambient temperature require no processing at all until something disturbs them.
This speed of processing is then stored in the cell, and also passed up the octree hierarchy, each level being marked so that it is processed at the highest update rate of any of its children. This then allows the update routine to start at the top node of the octree and recurse down the tree. At each level, it decides if the current node would require processing. If not, it can be sure that none of the children do either, and proceed to the next node without even touching the memory of the child nodes.
One point to note is that this system only works if the update rates are quantised to powers-of-two. For example, if a child node needs to be updated every third turn, but the parent node is marked as being updated every second turn, every sixth turn the child node needs updating, but the parent does not. Because of the traversal algorithm’s early-out path, the child does not get updated this time, and in the end only gets updated every sixth turn – half the required frequency. Quantising update rates to powers of two solves this problem, and also allows some slight extra efficiency by storing the update rates as the power, rather than the actual number.
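A sketch of the resulting traversal (the names and exact encoding are assumptions – here the rate is stored as a power of two, so a node with an UpdateShift of 3 is processed every 8th turn):

// Recursively update the octree. A node's UpdateShift is the minimum of
// its children's shifts (i.e. the fastest rate of any child), so skipping
// a node safely skips all of its children too.
void UpdateNode ( OctreeNode *node, unsigned int CurrentTurn )
{
    // Rates are quantised to powers of two: process this node only on
    // turns that are a multiple of (1 << UpdateShift).
    if ( ( CurrentTurn & ( ( 1u << node->UpdateShift ) - 1u ) ) != 0 )
    {
        return;     // early-out: no child can need updating either
    }
    if ( node->IsLeaf() )
    {
        ProcessCell ( node->GetCell(), CurrentTurn );   // the CA physics
    }
    else
    {
        for ( int i = 0; i < 8; i++ )
        {
            if ( node->Child[i] != NULL )
            {
                UpdateNode ( node->Child[i], CurrentTurn );
            }
        }
    }
}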
An obvious consequence of this variable update rate is that the physics routines need to be able to handle variable update rates as well. So far all the code has assumed that it will be run at a set speed, and that the physical constants will be adjusted to give good results for that speed. With variable update rates, the physically correct behaviour is to integrate the various equations over the given period. However, one of the purposes of the variable update rate is to choose an update rate to ensure that cells have fairly constant behaviour over the update interval.
This assumption makes integration rather simple – simply multiply the given behaviour (water flow, heat flow, burn rate, etc) by the time period since the last update. This slight extra complication is more than offset by the savings in processing time and memory bandwidth from the huge reduction in the number of cells updated per second.
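In terms of the earlier pressure example, this just means scaling the flow by the elapsed interval (a sketch – TurnLength and the exact turn bookkeeping are assumptions):

// Number of base turns since this cell was last processed (a power of
// two, thanks to the quantised update rates).
unsigned int TurnsElapsed = CurrentTurn - cell->Turn;
float DeltaTime = TurnsElapsed * TurnLength;    // seconds per base turn

float DPress = cell->Pressure - neigh->Pressure;
float Flow = cell->Material->Flow * DPress * DeltaTime;
// ...then clamped and applied exactly as before.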
In many cases, where the maths is simple, it may be more efficient to actually perform the integration. The extra accuracy of the simulation will then allow the use of an even lower update rate, further improving speed overall.
An unexpected artefact of using a variable update rate can occur when neighbouring cells have very different update rates. Since one cell is being updated much more frequently than its neighbour, it can change rapidly before the other cell has time to react. The solution is to limit the maximum difference in update rates of adjacent cells. Every time a cell is processed, as well as exchanging temperature, heat, and suchlike with neighbouring cells, it also ensures that those cells are being updated at least a quarter as fast as it. This value of a quarter is simply a factor that was found by experimentation. Using a factor of two causes too many cells to have their update rates raised pointlessly by nearby events, when in fact nothing exciting is happening to them. Using a factor of eight or more allows more possible artefacts, but does not reduce the processing load appreciably. As with the many arbitrary factors that are found by pure experimentation, other implementations should experiment to find what works best for them.
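With the power-of-two encoding used above, that quarter-speed limit is just a difference of two in the stored shift values (another sketch using the assumed UpdateShift field):

// Ensure the neighbour is updated at least a quarter as often as this
// cell, i.e. its power-of-two shift is at most 2 greater than ours.
if ( neigh->UpdateShift > cell->UpdateShift + 2 )
{
    neigh->UpdateShift = cell->UpdateShift + 2;
    PropagateShiftUpOctree ( neigh );    // assumed: re-mark parent nodes
}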
The use of CA methods allows the simulation of a wide range of real-world effects and situations rarely seen in games so far. They allow the player to interact with them fully, flexibly and logically, without the limitations of pre-scripting. This opens the way for more inventive puzzles, more lateral thinking by the player, more freedom to experiment, more realistic rendering, and overall better immersion in the game world as a real place, rather than as a collection of polygonal entities.
[Conway ref] http://dmoz.org/Computers/Artificial_Life/Cellular_Automata/Conway's_Game_of_Life/, Wikipedia
[Conway Turing ref] An "implementation" of a Turing Machine within Conway's Game of Life, written by Paul Rendell. The original site has vanished off the web, but Archive.org still has a copy here: http://web.archive.org/web/20030210114324/http://www.rendell.uk.co/gol/tm.htm
[XCom ref] “X-Com: UFO Defense”(US)/”UFO: Enemy Unknown”(UK) Microprose, 1994 (Codo games, Mobygames and Wikipedia)
[Red Faction ref] “Red Faction”, developed by Volition Inc, published by THQ Inc, 2001 (Mobygames). It should be noted that as far as I know, Red Faction did not use the sort of voxelised representation discussed here. My belief (from experimentation) is that it created a BSP representation of the scenery, and as the various "geomod" weapons and abilities were used, they inserted planes and nodes into the BSP. However, it is a good example of a game that takes the idea of "everything is destructible" and makes it a core gameplay feature.
[Thatcher Ulrich ref] “Loose Octrees”, Thatcher Ulrich, Game Programming Gems, Charles River Media, 2000 (http://tulrich.com/geekstuff/)
[Ambient ref] Ambient occlusion turns out to be the DC component of the elegantly general Spherical Harmonic representation. A good introduction to SH for the games programmer is a presentation I did in 2003 called Spherical Harmonics in Actual Games.
The above article was originally published in Game Programming Gems 3 in 2001. I've tweaked some of the references, as they are inevitably out of date by now.
Some people don't like the term "Cellular Automata" when used in the above way. Strictly, CAs have a finite number of states (in Conway's Life, there are two - alive and dead), whereas the above uses physical values that have a continuous range. A better term would probably be Finite Element Analysis, but that makes it sound far more robust and formal than it really is!
You will also note that I did not start with the Navier-Stokes equations and simplify. This is deliberate. While they are necessary for a physically correct simulation, I wasn't looking for correctness, I was looking for plausibility and speed. In practice of course, many of the terms correspond closely to parts of Navier-Stokes.
This article was based on my experimentations with all of these mechanisms inside a test-bed that I was playing with at the time. The plan was to put the results into a game at some point, but the right game never came along. While the test-bed was sufficient to show the mechanisms worked, worked fast enough, and were physically plausible, the graphical display was terrible (a bunch of coloured cubes), and there was no "game" as such, just hard-coded scenarios, which is why it was not supplied on the book's CD. One day I may find the right framework to put it in, work out how to render some of the effects in a pretty way, and actually have something usable.
The main problem with inserting the techniques into an existing game is having a CA-compatible representation of the world. It is surprisingly difficult to take a polygon soup and generate a water-tight voxelisation of it. And of course if it's not water-tight, a water CA is going to find the holes, and suddenly you have water appearing in places you really didn't want. To warrant the effort of making a physically-robust world, the game needs to be based around those physical processes. As mentioned in the article, X-Com 1-3 are the only mainstream ones I know of. Obviously, if you represent your entire game as a voxelisation rather than a polygon soup, as they did, life is much simpler.
Honorable mention should also be made of the game Silent Storm, which had some similar mechanisms, notably a fully-destructible environment. Remarkably, they also perform stress and strain simulations in an FEA style on buildings to determine when they should fall down due to damage (unlike X-Com, where you could entirely remove supports from structures and leave them floating!). This is an impressive achievement - the only regret is that the constraints of the game scenario (World War II) meant that the opportunities for observing this were limited - not enough highly-explosive ammunition was easily available for the player to be able to fully experiment with the gameplay implications of blowing the supports off a building to kill the people inside.
In this respect, X-Com did much better, as right from the start of the game, using high-explosive and incendiary rounds was easy, cheap, and extremely effective. Can't see the alien hiding in the corn field at night? Fire an incendiary round into the field. Now the corn is alight, providing illumination. If you were close enough in your aim, the alien is also alight. And if it's still hiding, the corn will burn down, removing the cover.
If you know of any other notable games using these principles, let me know.
TomF, 29th December 2006