We consider the problem of optimal control of a mean-field stochastic differential equation (SDE) under model uncertainty. The model uncertainty is represented by ambiguity about the law L(X(t)) of the state X(t) at time t. For example, it could be the law L_P(X(t)) of X(t) with respect to the given, underlying probability measure P. This is the classical case, in which there is no model uncertainty. But it could also be the law L_Q(X(t)) with respect to some other probability measure Q or, more generally, any random measure μ(t) on R with total mass 1. We represent this model uncertainty control problem as a two-player stochastic differential game for an SDE of mean-field related type. The control of one of the players, representing the uncertainty of the law of the state, is a measure-valued stochastic process μ(t), while the control of the other player is a classical real-valued stochastic process u(t). This optimal control problem with respect to random probability processes μ(t) in a non-Markovian setting is a new type of stochastic control problem that has not been studied before. By constructing a new Hilbert space M of measures, we obtain sufficient and necessary maximum principles for Nash equilibria of such games in the general nonzero-sum case, and for saddle points in zero-sum games. As an application, we find an explicit solution of the problem of optimal consumption under model uncertainty of a cash flow described by an SDE of mean-field related type.
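The zero-sum version of the game described above can be sketched schematically as follows. This is an illustrative formulation only, not the paper's exact model: the coefficients b and σ, the profit rate f, the terminal payoff g, and the horizon T are placeholder symbols.

```latex
% State dynamics: a mean-field SDE whose coefficients may depend on the
% (possibly ambiguous) measure process \mu(t) as well as the control u(t):
dX(t) = b\bigl(t, X(t), \mu(t), u(t)\bigr)\,dt
      + \sigma\bigl(t, X(t), \mu(t), u(t)\bigr)\,dB(t), \qquad X(0) = x_0 .

% Common performance functional for the two players (placeholder f, g):
J(u,\mu) = \mathbb{E}\Bigl[\int_0^T f\bigl(t, X(t), \mu(t), u(t)\bigr)\,dt
      + g\bigl(X(T), \mu(T)\bigr)\Bigr].

% Zero-sum game: the player choosing u maximizes, while the player choosing
% \mu (representing model uncertainty) minimizes; a saddle point
% (\hat{u},\hat{\mu}) satisfies
\sup_{u}\,\inf_{\mu}\, J(u,\mu) \;=\; J(\hat{u},\hat{\mu})
      \;=\; \inf_{\mu}\,\sup_{u}\, J(u,\mu).
```

In the general nonzero-sum case each player instead has their own performance functional, and the saddle-point condition is replaced by the Nash equilibrium condition that neither player can improve their own payoff by deviating unilaterally.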