In this thesis we study robust support vector machines (SVMs) and their equivalence with the regularized SVM, a well-known machine learning model for classification and regression tasks. Robust optimization and regularization are closely related and, in some cases, equivalent: the regularized SVM coincides with a particular robust optimization formulation, so the standard SVM can be re-derived from a robust optimization perspective. This correspondence gives a physical interpretation of the regularization process and helps explain why SVMs are statistically consistent, since robustness is an essential condition for the consistency of learning algorithms. We show that a specific choice of the perturbation set exactly recovers the solution obtained by penalizing model complexity through regularization, so that SVM classifiers enjoy built-in protection against noise and control of overfitting. Finally, we demonstrate through experiments that regularized SVMs generalize better than other classical classifiers, and we attribute this behavior to the robustness property.
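To make this claim concrete, the equivalence can be sketched as follows; the notation below (perturbation budget $\rho$, uncertainty set $\mathcal{N}_0$, dual norm $\|\cdot\|_*$) is introduced here only for illustration, in the spirit of the standard robust-classification argument of Xu, Caramanis and Mannor, and is not a formula taken from this section.

% Illustrative sketch of the robustness--regularization equivalence.
% Assumed setup: training pairs (x_i, y_i), i = 1,...,n, with y_i in {-1,+1},
% and perturbations whose total size is bounded by \rho in some norm \|\cdot\|
% whose dual norm is \|\cdot\|_*.
\[
  \mathcal{N}_0 \;=\; \Bigl\{ (\delta_1,\dots,\delta_n) \;:\; \sum_{i=1}^{n} \|\delta_i\| \le \rho \Bigr\}
\]
% Robust (min--max) hinge-loss formulation: protect against the worst-case
% perturbation of the inputs within \mathcal{N}_0.
\[
  \min_{w,\,b}\;\max_{(\delta_1,\dots,\delta_n)\in\mathcal{N}_0}\;
  \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\,(\langle w,\, x_i - \delta_i\rangle + b)\bigr)
\]
% For this choice of perturbation set (and under the usual non-separability
% assumption) the inner maximization can be resolved in closed form, and the
% robust problem coincides with the regularized SVM:
\[
  \min_{w,\,b}\;\; \rho\,\|w\|_{*} \;+\;
  \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\,(\langle w,\, x_i\rangle + b)\bigr)
\]

In this sense the regularization term $\rho\,\|w\|_{*}$ is exactly the price of immunizing the classifier against the assumed input perturbations, which is the physical interpretation of regularization referred to above.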