This paper describes our general framework for investigating how human gestures can facilitate interaction and communication between humans and robots. More specifically, a study was carried out to identify which "naturally occurring" gestures can be observed in a scenario where users explain to a robot how to perform a specific home task. The study followed a within-subjects design in which ten participants demonstrated how to lay a table for two people using two different explanation methods: gestures only, or gestures combined with speech. The experiments also served to validate a new coding scheme for human gestures in human-robot interaction, which showed good inter-rater reliability. Moreover, an annotated video corpus was produced, and characteristics such as the frequency, duration, and co-occurrence of the different gestural classes were gathered in order to capture requirements for designers of HRI systems. The results regarding the frequencies of the different gestural types suggest an interaction between the order of presentation of the two methods and the actual type of gestures produced. The results also suggest that there might be an interaction between the type of task and the type of gestures produced.