Probability has always been a field of mathematics with many applications in the real world: gambling strategies, the growth of animal populations, the spread of disease, and the performance of financial markets. More recently, with increases in knowledge and in computing power, new areas of interest have appeared, many of which rely on structures that resemble trees or large networks. To name three specific examples, it is now feasible to study the evolution of the DNA of species; to devise efficient methods of organising the large amounts of data on our computers; and to understand the large clusters of computers that make up the internet.

Evolution of DNA and branching Brownian motion

Brownian motion is named after the botanist Robert Brown, who watched particles of pollen moving in water. The grains jiggled at random; we now know that countless tiny kicks from water molecules hitting the particles add up to this slow, macroscopic, random movement.

DNA strings are extremely complex: the DNA of even the simplest organisms can run to millions of base pairs. Each time a cell divides it creates two copies of all of this data. Inevitably mistakes occur, but most of them have only a tiny effect, thanks to the redundancy built into the cell. Nonetheless these small fluctuations slowly accumulate into large-scale changes which contribute to the evolution of the species. This gradual drift, produced by many tiny random errors, makes Brownian motion a good model for the process.

So the evolution of the DNA of one organism can be modelled using Brownian motion; but each organism also breeds, creating copies of its DNA that then independently mutate and evolve. This description leads us to a model called branching Brownian motion: a tree-like structure in which each branch moves in space according to a Brownian motion. Probabilists have extensively studied the overall spread of this process: in biological terms, how fast a species will evolve if left to its own devices.
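To make the model concrete, here is a minimal simulation sketch (our own illustration, not code from the research described): each particle takes small Gaussian steps, approximating a Brownian motion, and occasionally splits into two copies that then wander independently. The function name, parameter names and values are all our own choices.

```python
import random

def branching_brownian_motion(steps, dt=0.01, branch_rate=1.0, seed=1):
    """Simulate branching Brownian motion on the line.

    Each particle follows an independent Brownian motion (approximated
    by Gaussian increments of variance dt) and splits into two with
    probability branch_rate * dt at each time step.
    Returns the list of particle positions after `steps` steps.
    """
    rng = random.Random(seed)
    positions = [0.0]  # start with a single particle at the origin
    for _ in range(steps):
        next_positions = []
        for x in positions:
            # Brownian increment: mean 0, standard deviation sqrt(dt)
            x += rng.gauss(0.0, dt ** 0.5)
            next_positions.append(x)
            # With small probability, the particle branches in two
            if rng.random() < branch_rate * dt:
                next_positions.append(x)
        positions = next_positions
    return positions

pop = branching_brownian_motion(steps=800)
print(len(pop), max(pop))  # population size and rightmost particle
```

Tracking the rightmost particle over time gives a numerical picture of the "overall spread" mentioned above: how far the most extreme branch of the tree has travelled.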
If a species does not evolve as fast as its environment is changing then it will quickly become extinct. We can then ask how long the species will survive, and how fast its population will grow.

Data structures and sorting algorithms

As larger and larger files are required to store the enormous amounts of data on our computers, it is important for that data to be organised so that it can be accessed easily. One such method, known to computer scientists as quicksort, has been extensively studied. It works by repeatedly splitting the data, organising it into a tree-like structure that can then be searched at speed by making a relatively small number of checks at the branch points of the tree. Very fine detail is now known about the height of this tree, which corresponds to how many checks must be made to find the hardest-to-reach bits of data. However, almost nothing is known about how much of the data must be stored at the highest levels of the tree, which would tell us how often we have to access the furthest (and slowest) corners of our drives.

The internet and large random networks

The internet is made up of huge numbers of computers (and web pages) linked together, creating a complicated structure that is permanently changing. The connectivity properties of the network are very important for the speed of the internet: on the local scale this boils down to whether one computer can reach another, and how many links it takes to make that connection. The same ideas can be used to examine related structures such as social networks: a large number of points connected to each other by links, where each link may appear or disappear as time progresses. Very small alterations can cause the large-scale behaviour of the system to change suddenly, affecting the speed at which data can be shared.

From bacteria to blue whales, the BBC Micro to broadband internet, probability theory provides tools for studying all of these structures.
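The tree discussed in the data structures section can be pictured as a binary search tree: each comparison at a branch point sends a query left or right, and the hardest-to-reach item sits at the bottom of the longest root-to-leaf path. The following toy sketch (our own code, with made-up names) builds such a tree from randomly ordered data and measures its height, i.e. the largest number of checks any lookup requires.

```python
import random

class Node:
    """A node of a binary search tree built from data in arrival order."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Each comparison at a branch point sends the key left or right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def depth_of(root, key):
    """Number of checks needed to find `key` (its depth in the tree)."""
    checks = 0
    while root is not None and root.key != key:
        checks += 1
        root = root.left if key < root.key else root.right
    return checks

random.seed(0)
data = random.sample(range(1000), 200)  # 200 distinct items, random order
root = None
for k in data:
    root = insert(root, k)
# The height corresponds to the hardest-to-reach item.
height = max(depth_of(root, k) for k in data)
print(height)
```

For randomly ordered data the height of such a tree is known to grow only logarithmically in the number of items, which is why lookups need so few checks even in enormous collections.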
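The connectivity questions in the networks section can be explored with the simplest standard model of a random network, the Erdős–Rényi graph (our choice of model for illustration, not one named in the text): every pair of nodes is linked independently with some probability p, and a breadth-first search counts how many links it takes for one node to reach another.

```python
import random
from collections import deque

def random_network(n, p, seed=0):
    """Erdős–Rényi random graph: each of the n*(n-1)/2 possible links
    is present independently with probability p."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def hops(edges, a, b):
    """Fewest links needed to get from a to b (None if unreachable),
    found by breadth-first search."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return dist[u]
        for v in edges[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None  # no path: the two nodes are in different components

net = random_network(100, 0.05)
print(hops(net, 0, 99))
```

Varying p in this sketch shows the sudden changes mentioned above: below a critical value the network is fragmented into small pieces, while just above it almost every node can reach almost every other in a handful of hops.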