The technology is language-agnostic because it operates only on tokens*. As long as a token carries semantic information that can be extrapolated into a context, it doesn't matter whether the language is English, Spanish, French, Arabic, Russian, or even Japanese. The algorithm does not detect the language; it simply works with those tokens to find semantic contexts.

*For English and Spanish we currently use a base lemmatizer.
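To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of pipeline. All names and the tiny lemma tables are hypothetical stand-ins, not the actual implementation: tokens are processed the same way regardless of language, and a base lemmatizer is applied only where one exists (per the footnote, English and Spanish).

```python
# Illustrative sketch only -- hypothetical names, not the real implementation.
# The downstream algorithm never detects the language; it just consumes tokens.

# Toy lemma tables standing in for the base lemmatizers mentioned above
# (English and Spanish only; tokens in other languages pass through unchanged).
LEMMAS = {
    "en": {"running": "run", "cats": "cat"},
    "es": {"corriendo": "correr", "gatos": "gato"},
}

def tokenize(text):
    """Naive whitespace tokenizer: the same rule for any space-delimited text."""
    return text.lower().split()

def normalize(tokens, lang=None):
    """Apply a base lemmatizer when one exists; otherwise keep tokens as-is."""
    table = LEMMAS.get(lang, {})
    return [table.get(tok, tok) for tok in tokens]

print(normalize(tokenize("Running cats"), lang="en"))     # ['run', 'cat']
print(normalize(tokenize("Gatos corriendo"), lang="es"))  # ['gato', 'correr']
print(normalize(tokenize("走る 猫")))                      # ['走る', '猫'] (unchanged)
```

Note that only the optional lemmatization step knows anything about the language; the semantic-context step works on the normalized token stream alone, which is what makes the approach language-agnostic.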
