A. Omondi; I. Lukandu; G. Wanyembi
Abstract
Variable environmental conditions and runtime phenomena require developers of complex business information systems to expose configuration parameters to system administrators. This allows system administrators to intervene by tuning bottleneck configuration parameters in response to current changes, or in anticipation of future changes, in order to keep the system's performance at an optimal level. However, these manual performance-tuning interventions are prone to error and inconsistency owing to fatigue, varying levels of expertise, and over-reliance on inaccurate predictions of future states of a business information system. The purpose of this research is therefore to investigate how the capacity of probabilistic reasoning to handle uncertainty can be combined with the capacity of Markov chains to map stochastic environmental phenomena to ideal self-optimization actions. This was done using a comparative experimental research design involving quantitative data collection through simulations of different algorithm variants. The results indicate that applying the algorithm in a distributed database system improves the quality of tuning decisions made under uncertainty. The improvement was quantified by a response-time latency 27% lower than average and a transaction throughput 17% higher than average.
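The abstract does not disclose the algorithm's internals, so the Python sketch below only illustrates the general idea under stated assumptions: workload states evolve as a Markov chain, and a tuning action is chosen by maximizing expected utility over the predicted next-state distribution. All state names, transition probabilities, tuning actions, and utility values here are hypothetical, not taken from the paper.

import random

# Hypothetical workload states with one-step transition probabilities;
# the paper's actual state space is not given in the abstract.
TRANSITIONS = {
    "low_load":  {"low_load": 0.7, "high_load": 0.2, "bursty": 0.1},
    "high_load": {"low_load": 0.3, "high_load": 0.5, "bursty": 0.2},
    "bursty":    {"low_load": 0.4, "high_load": 0.3, "bursty": 0.3},
}

# Hypothetical expected performance gain of each tuning action per state.
UTILITY = {
    "increase_buffer_pool": {"low_load": 0.1, "high_load": 0.6, "bursty": 0.4},
    "raise_connection_cap": {"low_load": 0.0, "high_load": 0.3, "bursty": 0.5},
    "no_op":                {"low_load": 0.2, "high_load": 0.0, "bursty": 0.0},
}

def best_action(state):
    """Choose the tuning action that maximizes expected utility over the
    Markov-predicted distribution of the next workload state."""
    next_dist = TRANSITIONS[state]
    def expected_utility(action):
        return sum(p * UTILITY[action][s] for s, p in next_dist.items())
    return max(UTILITY, key=expected_utility)

def simulate(state, steps, seed=42):
    """Walk the Markov chain, printing the recommended action at each step."""
    rng = random.Random(seed)
    for _ in range(steps):
        print(state, "->", best_action(state))
        states, probs = zip(*TRANSITIONS[state].items())
        state = rng.choices(states, weights=probs)[0]

simulate("high_load", 5)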
A. Omondi; I.A. Lukandu; G.W. Wanyembi
Abstract
Redundant and irrelevant features in high-dimensional data increase the complexity of the underlying mathematical models. It is necessary to conduct pre-processing steps that search for the most relevant features in order to reduce the dimensionality of the data. This study made use of a meta-heuristic search approach that uses lightweight random simulations to balance the exploitation of relevant features against the exploration of features that have the potential to be relevant. In doing so, the study evaluated how effective manipulating the search component of feature selection is in achieving high accuracy with reduced dimensions. A control-group experimental design was used to gather factual evidence. The context of the experiment was the high-dimensional data encountered in performance tuning of complex database systems. The Wilcoxon signed-rank test at the .05 level of significance was used to compare repeated classification-accuracy measurements on the experimental and control group samples. The results, with a p-value < 0.05, provided evidence to reject the null hypothesis in favour of the alternative hypothesis, which states that meta-heuristic search approaches are effective in achieving high accuracy with reduced dimensions, depending on the outcome variable under investigation.
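As a rough illustration of the two techniques named above, the Python sketch below pairs a simulation-driven feature-subset search with a Wilcoxon signed-rank comparison of repeated accuracy measurements. The search strategy shown (a greedy bit-flip climber with an epsilon chance of exploratory moves), the synthetic data, and all numbers are assumptions for illustration only; the abstract does not disclose the study's actual search algorithm, dataset, or measurements.

import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional tuning data: only a few of the
# 50 features carry signal, the rest are redundant or irrelevant.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           n_redundant=10, random_state=0)

def accuracy(mask, seed=0):
    """Cross-validated accuracy using only the features selected by mask."""
    cols = np.flatnonzero(mask)
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, cols], y, cv=cv).mean()

# Lightweight random-simulation search over feature subsets: keep flips
# that improve accuracy (exploitation), and occasionally accept a random
# flip anyway (exploration of potentially relevant features).
mask = rng.random(X.shape[1]) < 0.3
best_mask, best_acc = mask.copy(), accuracy(mask)
for _ in range(100):
    trial = mask.copy()
    j = rng.integers(X.shape[1])
    trial[j] = not trial[j]
    if not trial.any():
        continue
    acc = accuracy(trial)
    if acc > best_acc or rng.random() < 0.05:  # exploit or explore
        mask = trial
    if acc > best_acc:
        best_mask, best_acc = trial.copy(), acc

# Paired repeated measurements: accuracy with the selected subset versus
# all features, varying only the cross-validation split per repetition.
all_features = np.ones(X.shape[1], dtype=bool)
experiment = [accuracy(best_mask, seed=i) for i in range(10)]
control = [accuracy(all_features, seed=i) for i in range(10)]

# Wilcoxon signed-rank test on the paired differences at the .05 level.
stat, p_value = wilcoxon(experiment, control)
print(f"selected {best_mask.sum()} of {X.shape[1]} features, p = {p_value:.4f}")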