Is mapreduce suitable for analyzing a large dataset with an iterative function?
2 views (last 30 days)
Currently, I have a large timetable of more than 1 billion rows x 3 columns.
Some of the key functions I use include:
unstack: turns my timetable into 1 billion rows x 1000 columns.
fillmissing(data, 'previous'): fills each NaN value from the previous value.
retime: in some cases, can increase my number of rows tenfold.
cumsum: accumulates all the previous data together.
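For reference, the pipeline described above might look like the following on a toy timetable (all variable names here are hypothetical, not from the question):

```matlab
% Toy version of the unstack -> fillmissing -> retime -> cumsum pipeline.
t  = datetime(2021,1,1) + minutes([0; 1; 1; 3]);
id = categorical(["A"; "B"; "A"; "B"]);
tt = timetable(t, id, [1; NaN; 2; 4], 'VariableNames', {'id','value'});

wide = unstack(tt, 'value', 'id');            % one column per id
wide = fillmissing(wide, 'previous');         % forward-fill NaNs
wide = retime(wide, 'minutely', 'previous');  % regular 1-minute grid (adds rows)
wide = varfun(@cumsum, wide);                 % running totals per column
```

On a billion-row timetable each of these steps materializes a full copy (and unstack/retime grow the data), which is where the memory pressure comes from.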
I am able to process small datasets using standard MATLAB functions, but for some of the larger datasets (> 1 billion rows) I run into memory issues.
I am planning to break my timetable into smaller pieces, record all the "states" at the end of each section, and repopulate them at the beginning of the next batch.
Can mapreduce help me in this situation?
Any pseudocode is appreciated. Thank you.
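The chunk-and-carry-state plan described above could be sketched like this (readBatch and nBatches are hypothetical placeholders for however the data is loaded):

```matlab
% Process the timetable in batches, carrying the last row of each batch
% forward as the "state" so fillmissing(...,'previous') can see across
% batch boundaries.
state = [];
out = cell(nBatches, 1);
for k = 1:nBatches
    chunk = readBatch(k);            % hypothetical batch loader
    seeded = ~isempty(state);
    if seeded
        chunk = [state; chunk];      % prepend state from previous batch
    end
    chunk = fillmissing(chunk, 'previous');
    state = chunk(end, :);           % record state for the next batch
    if seeded
        chunk(1, :) = [];            % drop the seeded row again
    end
    out{k} = chunk;
end
result = vertcat(out{:});
```

The same idea works for cumsum: carry the running total from the end of each batch and add it as an offset to the next batch's cumulative sums.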
4 comments
Ive J
2021-9-10
Regardless, in general you can use mapreduce, something like:
ds = datastore(...);
raw = mapreduce(ds, @myMapper, @myReducer);
raw = readall(raw); % may not fit into memory (maybe use tall instead?)

function myMapper(data, info, intermKVStore)
    data = fillmissing(data, 'previous');
    % other filters go here
    % do whatever
    add(intermKVStore, info.Offset, data); % key each chunk by its offset
end

function myReducer(intermKey, intermValsIter, outKVStore)
    data = [];
    while hasnext(intermValsIter)
        data = [data; getnext(intermValsIter)]; %#ok<AGROW>
    end
    add(outKVStore, intermKey, data);
end
However, you should be careful with fillmissing (and maybe unstack too) in cases where the first rows of a chunk are missing, because the previous rows they would be filled from live in another chunk. So this approach works well only if the chunks can be treated more or less independently of each other.
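One way around the chunk-boundary problem is tall arrays, which evaluate lazily and handle order-dependent operations like cumsum across chunk boundaries for you. A minimal sketch, assuming the data lives in CSV files matching a hypothetical pattern and has a numeric variable named value:

```matlab
% Tall-array alternative: deferred evaluation over a datastore.
ds = datastore('data*.csv');            % hypothetical file pattern
t  = tall(ds);
t.value = fillmissing(t.value, 'previous'); % forward-fill across chunks
t.total = cumsum(t.value);                  % running total over all rows
preview = gather(head(t, 10));              % gather triggers evaluation
```

Note the caveat that not every function is supported for tall arrays (unstack, for instance, is not), so this only helps for the parts of the pipeline that are.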
Answers (0)