Show simple item record

Distributed analysis of Markov chains

dc.contributor.advisor: Kritzinger, Pieter S (en_ZA)
dc.contributor.author: Mestern, Mark Andrew (en_ZA)
dc.date.accessioned: 2014-11-16T20:07:57Z
dc.date.accessioned: 2018-11-26T13:53:20Z
dc.date.available: 2014-11-16T20:07:57Z
dc.date.available: 2018-11-26T13:53:20Z
dc.date.issued: 1998 (en_ZA)
dc.identifier.uri: http://hdl.handle.net/11427/9693
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/11427/9693
dc.description: Bibliography: leaves 88-91. (en_ZA)
dc.description.abstract: This thesis examines how parallel and distributed algorithms can increase the power of techniques for the correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: generating the state space from a state transition model to form the Markov chain, and finding performance information by solving the steady-state equations of the Markov chain. The state transition models are specified in a general interface language that can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady-state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time. (en_ZA)
dc.language.iso: eng (en_ZA)
dc.subject.other: Computer Science (en_ZA)
dc.title: Distributed analysis of Markov chains (en_ZA)
dc.type: Thesis (en_ZA)
dc.type.qualificationlevel: Masters (en_ZA)
dc.type.qualificationname: MSc (en_ZA)
dc.publisher.institution: University of Cape Town
dc.publisher.faculty: Faculty of Science (en_ZA)
dc.publisher.department: Department of Computer Science (en_ZA)
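The abstract's probabilistic hash compaction idea can be illustrated with a minimal sketch (not the thesis's actual implementation): during state space generation, the visited-set stores only a fixed-width hash of each state instead of the full state descriptor, trading a small chance of wrongly discarding an unseen state for a large memory saving. The function names and the breadth-first structure here are assumptions for illustration only.

```python
# Illustrative sketch of probabilistic hash compaction for state space
# generation. Only a b-bit hash of each state is stored in the visited-set,
# so memory per state is fixed regardless of the state descriptor's size.
# With n states and b-bit hashes, the probability of a collision (and hence
# of missing a state) is roughly n^2 / 2^(b+1).
import hashlib
from collections import deque

def compact_hash(state, bits=32):
    """Map a state to a fixed-width integer hash (hypothetical helper)."""
    digest = hashlib.sha256(repr(state).encode()).digest()
    return int.from_bytes(digest[: bits // 8], "big")

def generate_state_space(initial_state, successors, bits=32):
    """Breadth-first state space generation storing only compacted hashes.

    `successors` maps a state to an iterable of its successor states.
    Returns the number of distinct states reached.
    """
    seen = {compact_hash(initial_state, bits)}   # hashes only, not states
    frontier = deque([initial_state])
    count = 1
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            h = compact_hash(nxt, bits)
            if h not in seen:
                seen.add(h)
                frontier.append(nxt)
                count += 1
    return count
```

For example, a birth-death chain on states 0..9 (successors are the clipped neighbours) yields a state space of 10 states while storing only ten 32-bit hashes. The thesis distributes this search over a network of workstations; the sketch shows only the single-node memory-saving idea.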


Files in this item

File | Size | Format
thesis_sci_1998_mestern_m.pdf | 968.3Kb | application/pdf

This item appears in the following Collection(s)
