
database - Search and aggregate in 6000 row csv/json document

Question:

I'm looking for a good solution for full text search in a document with approx. 6000 rows, 6 columns in each.

I've currently tried Meteor and MongoDB, but I'm struggling a bit with high CPU when doing the aggregations, and the pub/sub response time is quite slow.

I need to search for multiple words, and sum/aggregate a number field.

What technologies are worth looking into for a fast and easy setup?

Answer:

Meteor pub/sub is not suited to sending large datasets at once; it is designed for reactive updates, i.e. automatically pushing changes to the client when the data changes.

Ideally, the data is sent to the client in small chunks via lazy loading: publish with a limit and fetch more on demand.
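The limit-based lazy loading described above can be sketched in a few lines. This is a minimal, language-neutral Python sketch over hypothetical in-memory rows (`ROWS` and `PAGE_SIZE` are assumptions, not from the original); in Meteor you would publish a cursor with the same growing limit instead:

```python
# Hypothetical dataset standing in for the 6000-row collection.
ROWS = [{"_id": i, "name": f"item {i}", "amount": i % 10} for i in range(6000)]

PAGE_SIZE = 100

def fetch_page(page):
    """Return a growing window of rows up to the requested page.

    MongoDB/Meteor equivalent: find({}, sort=..., limit=PAGE_SIZE * page).
    """
    limit = PAGE_SIZE * page
    return ROWS[:limit]

first = fetch_page(1)  # 100 rows on first load
more = fetch_page(2)   # 200 rows after the client asks for more
```

The client only ever holds `PAGE_SIZE * page` documents, which keeps both the publication and the DOM small.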

However, MongoDB itself is well suited to searching large datasets, and there is plenty of material on the topic.

The first results for a Google search on "mongodb search in large data set" include these articles:

https://www.mongodb.com/big-data-explained

http://johnpwood.net/2011/05/31/fast-queries-on-large-datasets-using-mongodb-and-summary-documents/

This might be a starting point.

You could then keep the search/aggregation on the Meteor server side and return only the results to the client (again via a lazy-loading mechanism).
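For the original question (match rows containing several words, then sum a number field), the server-side step could look like the following. This is a plain-Python sketch over hypothetical rows; the rough MongoDB pipeline is noted in the docstring, with the caveat that `$text` treats space-separated words as OR, so requiring all words needs quoted terms or per-word conditions:

```python
# Hypothetical rows: "text" holds the searchable words,
# "amount" is the numeric field to aggregate.
ROWS = [
    {"text": "red apple fresh", "amount": 3},
    {"text": "green apple", "amount": 5},
    {"text": "red pear", "amount": 2},
]

def search_and_sum(rows, words):
    """Keep rows containing ALL search words, then sum 'amount'.

    Rough MongoDB aggregation equivalent (assuming a text index;
    note $text with space-separated words matches ANY of them):
      [{"$match": {"$text": {"$search": " ".join(words)}}},
       {"$group": {"_id": None, "total": {"$sum": "$amount"}}}]
    """
    matched = [r for r in rows
               if all(w in r["text"].split() for w in words)]
    return sum(r["amount"] for r in matched)

total = search_and_sum(ROWS, ["red", "apple"])  # only the first row matches
```

Running the whole aggregation server-side means the client receives a single number instead of 6000 documents.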

Regarding your CPU load, see also MongoDB's query-optimization documentation and try to avoid "greedy" queries, i.e. unindexed queries that scan the whole collection:

https://docs.mongodb.com/manual/core/query-optimization/
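The core idea behind avoiding full-collection scans is an index. As a toy illustration of the principle (not MongoDB's internals), a word-to-row-ids inverted index turns each word lookup into a dictionary hit and a set intersection instead of a scan over every row; the sample rows here are hypothetical:

```python
from collections import defaultdict

ROWS = [
    {"_id": 0, "text": "red apple fresh", "amount": 3},
    {"_id": 1, "text": "green apple", "amount": 5},
    {"_id": 2, "text": "red pear", "amount": 2},
]

# Build the word -> set-of-row-ids index once, up front.
index = defaultdict(set)
for row in ROWS:
    for word in row["text"].split():
        index[word].add(row["_id"])

def ids_matching_all(words):
    """Intersect posting sets: ids of rows containing every word."""
    sets = [index[w] for w in words]
    return set.intersection(*sets) if sets else set()

matches = ids_matching_all(["red", "apple"])  # {0}
```

This is roughly what a MongoDB text index buys you: per-query work proportional to the matching rows, not the collection size.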
