| Repository | Stars | Last commit | Last release | License | Language | Description |
|---|---|---|---|---|---|---|
| IndicoDataSolutions/Passage | 527 | over 7 years ago | February 24, 2015 | MIT | Python | A little library for text analysis with RNNs. |
| nishitpatel01/Fake_News_Detection | 251 | about 4 years ago | | MIT | Jupyter Notebook | Fake news detection in Python. |
| zwc12/Summarization | 70 | over 8 years ago | | | Python | A sequence-to-sequence model for abstractive text summarization. |
| sillasgonzaga/lexiconPT | 40 | over 8 years ago | | GPL-2.0 | R | R package: lexicons for Portuguese text analysis. |
| ekagra-ranjan/fake-news-detection-LIAR-pytorch | 19 | over 4 years ago | | | Python | Fake news detection by learning convolution filters through contextualized attention. |
| arvindshmicrosoft/YelpDatasetSQL | 15 | about 5 years ago | | | TSQL | Working with the Yelp dataset in Azure SQL and SQL Server. |
| aghasemi/ChronologicalPersianPoetryDataset | 11 | about 5 years ago | | CC-BY-SA-4.0 | | A chronological collection (up to the century in which the poet lived) of Persian poetry, extracted from the brilliant Ganjoor database. |
| Desklop/Russian_subtitles_dataset | 9 | almost 7 years ago | | Apache-2.0 | Python | Preprocessing of a dataset of subtitles for 347 TV series (thanks to the Taiga corpus), suitable for building a word2vec model, training a JamSpell model, training a neural network or a chatbot, or any other NLP task. |