LightFM

From RecSysWiki

LightFM provides efficient implementations of matrix-factorization collaborative filtering and hybrid recommender models, trained on either explicit or implicit feedback. It is Apache-licensed and written in Python and Cython.

The LightFM model<ref>Maciej Kula (2015). Metadata Embeddings for User and Item Cold-start Recommendations. In Proceedings of the 2nd Workshop on New Trends on Content-Based Recommender Systems co-located with 9th ACM Conference on Recommender Systems (RecSys 2015), Vienna, Austria, September 16-20, 2015.</ref> incorporates both item and user metadata into the traditional matrix factorization algorithm. It represents each user and item as the sum of the latent representations of their features, thus allowing recommendations to generalise to new items (via item features) and to new users (via user features). The traditional matrix factorization model is a special case where users and items are represented by indicator features.
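Concretely (following the notation of the paper cited above), each feature has a latent embedding, user and item representations are the sums of their features' embeddings, and the predicted score is a biased dot product:

<math>
\mathbf{q}_u = \sum_{j \in f_u} \mathbf{e}_j^U, \qquad \mathbf{p}_i = \sum_{j \in f_i} \mathbf{e}_j^I, \qquad \hat{r}_{ui} = f\!\left(\mathbf{q}_u \cdot \mathbf{p}_i + b_u + b_i\right)
</math>

where <math>f_u</math> and <math>f_i</math> are the feature sets describing user <math>u</math> and item <math>i</math>, <math>b_u</math> and <math>b_i</math> are sums of per-feature bias terms, and <math>f(\cdot)</math> is the identity function for the ranking losses and the sigmoid for the logistic loss. With indicator features only (one distinct feature per user and per item), each sum collapses to a single embedding and the model reduces to standard matrix factorization.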

Highlights (see the usage sketch after this list):

  • learning-to-rank training on implicit feedback data using BPR, WARP<ref>Weston, Jason, Samy Bengio, and Nicolas Usunier. "Wsabie: Scaling up to large vocabulary image annotation." IJCAI. Vol. 11. 2011.</ref>, or WARP-kOS<ref>Weston, Jason, Hector Yee, and Ron J. Weiss. "Learning to rank recommendations with the k-order statistic loss." Proceedings of the 7th ACM Conference on Recommender Systems. ACM, 2013.</ref> losses
  • training on binary explicit feedback using logistic loss
  • training using stochastic gradient descent using either AdaGrad<ref>Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." The Journal of Machine Learning Research 12 (2011): 2121-2159.</ref> or AdaDelta<ref>Zeiler, Matthew D. "ADADELTA: An adaptive learning rate method." arXiv preprint arXiv:1212.5701 (2012).</ref> optimizers
  • an efficient, parallel<ref>Recht, Benjamin, et al. "Hogwild: A lock-free approach to parallelizing stochastic gradient descent." Advances in Neural Information Processing Systems. 2011.</ref> implementation using Cython and OpenMP.
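A minimal usage sketch, modeled on the library's quickstart (it assumes the bundled MovieLens loader and ranking-metric helpers), training a WARP-loss model with the AdaGrad schedule on implicit feedback:

<syntaxhighlight lang="python">
from lightfm import LightFM
from lightfm.datasets import fetch_movielens
from lightfm.evaluation import precision_at_k

# Load the MovieLens 100k dataset, keeping only 5-star ratings
# as positive implicit feedback.
data = fetch_movielens(min_rating=5.0)

# A 30-dimensional latent factor model trained with the WARP
# ranking loss and the AdaGrad per-parameter learning rates.
model = LightFM(no_components=30,
                loss='warp',
                learning_schedule='adagrad')

# Hogwild-style parallel SGD across 4 OpenMP threads.
model.fit(data['train'], epochs=10, num_threads=4)

# Mean precision@5 on the held-out interactions.
print(precision_at_k(model, data['test'], k=5).mean())
</syntaxhighlight>

Swapping in <code>loss='bpr'</code> or <code>loss='logistic'</code> and <code>learning_schedule='adadelta'</code> selects the other losses and optimizer listed above.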

The documentation and source code are available at https://github.com/lyst/lightfm


References

<references/>