The main motivation for this work is to find an appropriate way to include multi-word units in a latent semantic vector model. This would be of great use, since these models are normally defined in terms of single words, which makes it impossible to search for many types of multi-word units when the model is used in information retrieval tasks. The paper presents a Swedish evaluation set based on synonym tests, along with an evaluation of vector models trained with different corpora and parameter settings, including a rather naive way of adding bi- and trigrams to the models. The best results in the evaluation are in fact obtained when both bi- and trigrams are added. Our hope is that in a forthcoming evaluation in the document retrieval context, which is an important application for these models, the results with bi- and trigrams added will still be at least as good as those without them.