LexVec

This is an implementation of the LexVec word embedding model (similar to word2vec and GloVe) that achieves state-of-the-art results in multiple NLP tasks, as described in the two papers listed under References below.
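
The commands below pass -matrix ppmi because, as the paper titles under References indicate, the model factorizes the positive PMI (PPMI) matrix of the corpus using window sampling and negative sampling. As a quick reminder (the notation here is mine, not the papers'), the matrix entry for a word w and context c is:

    \mathrm{PPMI}(w, c) = \max\left(0,\; \log \frac{P(w, c)}{P(w)\, P(c)}\right)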

Pre-trained Vectors

Evaluation

In-memory, large corpus
Model                   GSem   GSyn   MSR    RW    SimLex  SCWS  WS-Sim  WS-Rel  MEN   MTurk
LexVec, Word            81.1%  68.7%  63.7%  .489  .384    .652  .727    .619    .759  .655
LexVec, Word + Context  79.3%  62.6%  56.4%  .476  .362    .629  .734    .663    .772  .649
word2vec Skip-gram      78.5%  66.1%  56.0%  .471  .347    .649  .774    .647    .759  .687
  • All three models were trained using the same English Wikipedia 2015 + NewsCrawl corpus.

  • GSem, GSyn, and MSR analogies were solved using 3CosMul (a Go sketch of the scoring rule follows this list).

  • LexVec was trained using the default parameters, expanded here for comparison:

    $ ./lexvec -corpus enwiki+newscrawl.txt -output lexvecvectors -dim 300 -window 2 \
    -subsample 1e-5 -negative 5 -iterations 5 -minfreq 100 -matrix ppmi -model 0
    
  • word2vec Skip-gram was trained using:

    $ ./word2vec -train enwiki+newscrawl.txt -output sgnsvectors -size 300 -window 10 \
    -sample 1e-5 -negative 5 -hs 0 -binary 0 -cbow 0 -iter 5 -min-count 100
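
For reference, here is a minimal Go sketch of the 3CosMul analogy-scoring rule (Levy & Goldberg, 2014) mentioned above. It is not taken from the LexVec source: the epsilon value, the (cos + 1)/2 shift that keeps similarities non-negative, and the toy vectors are illustrative assumptions.

    package main

    import (
        "fmt"
        "math"
    )

    // cosine returns cosine similarity shifted from [-1,1] to [0,1],
    // so the products and quotient in 3CosMul stay non-negative.
    func cosine(x, y []float64) float64 {
        var dot, nx, ny float64
        for i := range x {
            dot += x[i] * y[i]
            nx += x[i] * x[i]
            ny += y[i] * y[i]
        }
        return (dot/(math.Sqrt(nx)*math.Sqrt(ny)) + 1) / 2
    }

    // threeCosMul scores candidate d for the analogy a:b :: c:d.
    // The answer to an analogy query is the vocabulary word whose
    // vector maximizes this score (with a, b, and c excluded).
    func threeCosMul(a, b, c, d []float64) float64 {
        const eps = 1e-3 // guards against division by zero; value is an assumption
        return cosine(d, b) * cosine(d, c) / (cosine(d, a) + eps)
    }

    func main() {
        // Toy 2-d vectors purely for illustration.
        a := []float64{1, 0}
        b := []float64{0.9, 0.1}
        c := []float64{0, 1}
        d := []float64{0.1, 0.9}
        fmt.Printf("3CosMul score: %.3f\n", threeCosMul(a, b, c, d))
    }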
    
External memory, huge corpus
Model                   GSem   GSyn   MSR    RW    SimLex  SCWS  WS-Sim  WS-Rel  MEN   MTurk
LexVec, Word            72.6%  72.7%  70.5%  .499  .459    .668  .761    .669    .800  .707
LexVec, Word + Context  81.1%  67.8%  65.0%  .489  .418    .644  .771    .688    .813  .712
word2vec                73.3%  75.1%  75.1%  .515  .436    .655  .741    .610    .699  .680
GloVe                   81.8%  72.4%  74.3%  .384  .374    .540  .698    .571    .743  .645
  • All models use vectors with 300 dimensions.

  • GSem, GSyn, and MSR analogies were solved using 3CosMul.

  • LexVec was trained on a Common Crawl release containing 58B tokens, restricting the vocabulary to the 2 million most frequent words, using the following command:

    $ OUTPUTDIR=output ./external_memory_lexvec.sh -corpus common_crawl.txt -negative 1 \
    -model 0 -maxvocab 2000000 -minfreq 0 -window 2                                             
    

    Note: we restricted negative sampling to 1 because of computational constraints in our lab. Based on our experiments with smaller corpora, we believe the default value of 5 would improve model performance. If you have spare computing capacity and would like to help, please contact me.

  • The pre-trained word2vec vectors were trained using the unreleased Google News corpus containing 100B tokens, restricting the vocabulary to the 3 million most frequent words.

  • The pre-trained GloVe vectors were trained using Common Crawl (release unknown) containing 42B tokens, restricting the vocabulary to the 1.9 million most frequent words.

Installation

Binary

The easiest way to get started with LexVec is to download the binary release. We only distribute amd64 binaries for Linux.

Download binary

If you are using Windows, OS X, 32-bit Linux, or any other OS, follow the instructions below to build from source.

Building from source
  1. Install the Go compiler

  2. Make sure your $GOPATH is set

  3. Execute the following commands in your terminal:

    $ go get github.com/alexandres/lexvec
    $ cd $GOPATH/src/github.com/alexandres/lexvec
    $ go build
    

Usage

In-memory (default, faster, more accurate)

To get started, run $ ./demo.sh, which trains a model using the small text8 corpus (100MB of Wikipedia text).

Basic usage of LexVec is:

$ ./lexvec -corpus somecorpus -output someoutputdirectory/vectors

Run $ ./lexvec -h for a full list of options.

Additionally, we provide a word2vec script that implements the exact same command-line interface as the word2vec package, should you want to test LexVec using existing scripts.
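
Once training finishes, the vectors written by -output are plain text. The following sketch of loading them from Go is not part of this package; it assumes the word2vec text format (an optional "vocabsize dim" header line, then one word per line followed by its vector components) and reuses the output path from the example above.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        f, err := os.Open("someoutputdirectory/vectors") // path from the example above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        vectors := make(map[string][]float64)
        scanner := bufio.NewScanner(f)
        scanner.Buffer(make([]byte, 1<<20), 1<<20) // vector lines can be long
        lineno := 0
        for scanner.Scan() {
            lineno++
            fields := strings.Fields(scanner.Text())
            if lineno == 1 && len(fields) == 2 {
                continue // skip the "vocabsize dim" header if present
            }
            if len(fields) < 2 {
                continue
            }
            vec := make([]float64, len(fields)-1)
            for i, s := range fields[1:] {
                vec[i], err = strconv.ParseFloat(s, 64)
                if err != nil {
                    log.Fatalf("line %d: %v", lineno, err)
                }
            }
            vectors[fields[0]] = vec
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("loaded %d vectors\n", len(vectors))
    }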

External Memory

By default, LexVec stores the sparse matrix being factorized in memory. This can be a problem if your training corpus is large and your system memory limited. We suggest you first try the in-memory implementation, which achieves higher scores in evaluations. If you run into out-of-memory issues, try the external memory approximation below (not as accurate as in-memory; see the second paper under References for details).

$ env OUTPUTDIR=output ./external_memory_lexvec.sh -corpus somecorpus -dim 300 ... (exact same options as in-memory)

Pre-processing can be accelerated by installing nsort and pypy and editing pairs_to_counts.sh.

References

Salle, A., Idiart, M., & Villavicencio, A. (2016). Matrix Factorization using Window Sampling and Negative Sampling for Improved Word Representations. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.

Salle, A., Idiart, M., & Villavicencio, A. (2016). Enhancing the LexVec Distributed Word Representation Model Using Positional Contexts and External Memory. arXiv preprint arXiv:1606.01283.

License

Copyright (c) 2016 Salle, Alexandre atsalle@inf.ufrgs.br. All work in this package is distributed under the MIT License.
