Swarm robotics has been put forward as an approach to scenarios where scalability and robustness are desired. In order to deploy robotic swarms in safety-critical situations, it is necessary to verify their behaviour. Model checking offers one way of doing this; however, traditional model checking techniques can only consider systems of a fixed, finite size. This presents an issue for swarm systems, where the number of participants is not known at design time and may be arbitrarily large. To overcome this, parameterised model checking (PMC) techniques have been developed that enable the verification of systems where the number of participants is not known until run time. However, the protocols followed by robotic swarms are often stochastic in nature, and this cannot be modelled with current PMC techniques. This is the gap that this thesis aims to close. In particular, two parameterised semantics for reasoning about multi-agent systems are extended to incorporate probabilities; one of these semantics is synchronous, whilst the other is interleaved. Abstract models that overapproximate the systems under consideration are constructed using counter abstraction techniques. These abstract models are used to develop parameterised verification procedures for a number of specification logics on both bounded and unbounded traces. The decision procedures presented are shown to be sound, and in some cases also complete. Further, the techniques are extended to allow the modelling of situations where agents may exhibit faulty behaviour, as well as scenarios where the strategic capabilities of the participants need to be verified. The procedures are all implemented in a novel verification toolkit called PSV (Probabilistic Swarm Verifier), built on top of the probabilistic model checker PRISM. This toolkit is used to verify three case studies drawn from swarm robotics and other application domains.