| Repository | Stars | | | | Last Commit | | | | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| roboreport/doc2vec-api | 92 | | 0 | 0 | over 3 years ago | 0 | | 1 | lgpl-2.1 | Python | document embedding and machine learning script for beginners |
| Hironsan/ja.text8 | 74 | | 0 | 0 | over 8 years ago | 0 | | 0 | | Python | Japanese text8 corpus for word embedding. |
| koomri/text-segmentation | 73 | | 0 | 0 | over 6 years ago | 0 | | 3 | | Python | Implementation of the paper: Text Segmentation as a Supervised Learning Task |
| google-research-datasets/wiki-split | 72 | | 0 | 0 | almost 7 years ago | 0 | | 2 | | | One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia edits. |
| blei-lab/deep-exponential-families | 53 | | 0 | 0 | about 8 years ago | 0 | | 0 | | C++ | Deep exponential families (DEFs) |
| google-research-datasets/wiki-atomic-edits | 47 | | 0 | 0 | almost 7 years ago | 0 | | 1 | | | A dataset of atomic Wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages. |
| EagleW/Describing_a_Knowledge_Base | 42 | | 0 | 0 | almost 5 years ago | 0 | | 0 | mit | Python | Code for Describing a Knowledge Base |
| thoppe/today-AI-learned | 35 | | 0 | 0 | over 10 years ago | 0 | | 0 | | Python | Training a classifier on reddit's TIL to find new things on Wikipedia |
| rodrigosetti/dbn-cuda | 34 | | 0 | 0 | almost 11 years ago | 0 | | 0 | | Python | GPU accelerated Deep Belief Network |
| todd-cook/ML-You-Can-Use | 24 | | 0 | 0 | about 4 years ago | 0 | | 3 | other | Jupyter Notebook | Practical ML and NLP with examples. |
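Several of the repositories above (Hironsan/ja.text8, roboreport/doc2vec-api) center on embedding training over text8-style corpora, which are distributed as a single long line of whitespace-separated tokens. A minimal sketch of preprocessing such a file into fixed-size pseudo-documents and a frequency-filtered vocabulary, as an embedding trainer would typically expect; the function names, chunk size, and tiny inline corpus are illustrative assumptions, not code from any of the listed projects:

```python
from collections import Counter


def load_text8_chunks(raw_text, chunk_words=1000):
    """Split a text8-style token stream (one long line of
    space-separated words) into fixed-size pseudo-documents."""
    tokens = raw_text.split()
    return [tokens[i:i + chunk_words] for i in range(0, len(tokens), chunk_words)]


def build_vocab(chunks, min_count=2):
    """Count token frequencies across all chunks and drop rare words,
    a standard step before training word or document embeddings."""
    counts = Counter(tok for chunk in chunks for tok in chunk)
    return {word: n for word, n in counts.items() if n >= min_count}


# Tiny illustrative corpus; real text8 files are ~100 MB on one line.
corpus = "the quick brown fox jumps over the lazy dog " * 3
chunks = load_text8_chunks(corpus, chunk_words=9)
vocab = build_vocab(chunks, min_count=2)
```

The resulting chunks can be handed to an embedding library (e.g. feeding each chunk as a tagged document to a doc2vec-style trainer), while the vocabulary filter keeps the model from spending parameters on words seen only once.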