Classification approaches, e.g., decision trees or Naive Bayesian classifiers, are often tightly coupled to learning strategies, special data structures, the type of information captured, and to how common problems, e.g., overfitting, are addressed. This prevents a simple combination of classifiers of different classification approaches learned over different data sets. Many methods of combining classification models have been proposed. However, most of them combine the actual classification results rather than producing a new, possibly more accurate, classifier capturing the combined classification information. In this paper we propose a new general approach to combining different classification models based on the concept of a Decision Algebra, which provides a unified formalization of classification approaches as higher-order decision functions. It defines a general combining operation, referred to as the merge operation, abstracting from the implementation details of different classifiers. We show that the combination of a series of probably accurate decision functions (regardless of the actual implementation) is even more accurate. This can be exploited, e.g., for distributed learning and for efficient general online learning. We support our results by combining a series of decision graphs and Naive Bayesian classifiers learned from random samples of the data sets. The results show that on each step the accuracy of the combined classifier increases, with a total accuracy growth of up to 17%.
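
To make the idea of classifiers as higher-order decision functions concrete, the following is a minimal Python sketch under stated assumptions; it is not the paper's formal Decision Algebra. A decision function is modeled here as a callable mapping a feature vector to a probability distribution over class labels, and the merge shown (a weighted average of the two distributions) is an illustrative placeholder for the paper's general merge operation. The names DecisionFunction, merge, and classify are introduced for illustration only.

    from typing import Callable, Dict, Sequence

    Features = Sequence[float]
    Distribution = Dict[str, float]          # class label -> probability
    DecisionFunction = Callable[[Features], Distribution]

    def merge(f: DecisionFunction, g: DecisionFunction,
              w_f: float = 0.5) -> DecisionFunction:
        """Combine two decision functions into a new decision function,
        independent of how f and g were learned (decision graph,
        Naive Bayes, ...). The weighted-average rule is an assumption
        made for this sketch, not the paper's definition."""
        def combined(x: Features) -> Distribution:
            pf, pg = f(x), g(x)
            labels = set(pf) | set(pg)
            return {c: w_f * pf.get(c, 0.0) + (1.0 - w_f) * pg.get(c, 0.0)
                    for c in labels}
        return combined

    def classify(f: DecisionFunction, x: Features) -> str:
        """Return the most probable class under decision function f."""
        dist = f(x)
        return max(dist, key=dist.get)

    # Toy decision functions standing in for classifiers learned on
    # different random samples of a data set.
    f1: DecisionFunction = lambda x: {"pos": 0.8, "neg": 0.2}
    f2: DecisionFunction = lambda x: {"pos": 0.4, "neg": 0.6}
    combined = merge(f1, f2)
    print(classify(combined, [1.0, 2.0]))    # -> "pos" (0.6 vs 0.4)

The point of merging at the level of decision functions, rather than at the level of individual votes, is that the result is itself a decision function: it can be classified with, stored, and merged again, which is what enables the iterated combination over a series of classifiers described above.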