| Repository | Stars | Open PRs | Open Issues | Last Commit | Releases | Latest Release | Contributors | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| EnterpriseDB/hdfs_fdw | 131 | 0 | 0 | almost 3 years ago | 0 | | 9 | other | C | PostgreSQL foreign data wrapper for HDFS |
| LuckyZXL2016/Cloud-Note | 59 | 0 | 0 | about 8 years ago | 0 | | 0 | | Java | Distributed cloud-note application (modeled on a well-known cloud-note service), with data stored in Redis and HBase |
| livingsocial/ganapati | 53 | 1 | 0 | over 12 years ago | 7 | February 10, 2011 | 0 | gpl-3.0 | Ruby | Ruby interface to Hadoop's HDFS via Thrift |
| adobe-research/spark-parquet-thrift-example | 44 | 0 | 0 | over 11 years ago | 0 | | 1 | apache-2.0 | Scala | Example Spark project using Parquet as a columnar store with Thrift objects |
| intenthq/pucket | 29 | 0 | 5 | almost 8 years ago | 7 | September 05, 2016 | 1 | mit | Scala | Bucketing and partitioning system for Parquet |
| NetEase/hive-tools | 27 | 0 | 0 | over 6 years ago | 0 | | 0 | | Java | |
| looker/spark_log_data | 21 | 0 | 0 | almost 10 years ago | 0 | | 0 | mit | Scala | Flume-to-Spark-Streaming log parser |
| pierre/collector | 19 | 0 | 0 | over 12 years ago | 0 | | 1 | apache-2.0 | Java | HDFS endpoint collecting and aggregating data flows |
| yankay/libchdfs | 16 | 0 | 0 | over 13 years ago | 0 | | 7 | | C++ | libhadoop is a pure C/C++ library for Hadoop HDFS, similar to libhdfs |
| huyphan/Scribe-with-HDFS-support | 10 | 0 | 0 | over 16 years ago | 0 | | 0 | apache-2.0 | C++ | A customized version of Scribe that supports writing log files to HDFS. Scribe is a server for aggregating log data streamed in real time from a large number of servers; it is designed to be scalable, extensible without client-side modification, and robust to failure of the network or any specific machine |