Text Extraction and Retrieval
<P ID=1>
A LITTLE BLACK BIRD.
</P>
<P ID=2>
Story about a bird,
(1811)
</P>
<P ID=3>
Part 1.
</P>
As I am new to text extraction, I need help with:
0 Comments
Accepted Answer
Akira Agata
2017-10-25
I just tried writing a script to do that. Here is the result (assuming the maximum ID is 10).
% Read your text file
fid = fopen('yourText.txt');
C = textscan(fid,'%s','TextType','string','Delimiter','\n','EndOfLine','\r\n');
C = C{1};
fclose(fid);
% 1. Count the delimiters '</P>'
idx = strfind(C,'</P>');
n = nnz(cellfun(@(x) ~isempty(x), idx));
% 2. Remove all punctuation
C2 = regexprep(C,'[.,!?:;]','');
% 3. Break the text into individual documents at each delimiter
idx2 = find(strcmp(C,'</P>'));
for kk = 1:10
    str = ['<P ID=',num2str(kk),'>'];
    idx_s = find(strcmp(C,str));
    if ~isempty(idx_s)
        idx_e = idx2(find(idx2>idx_s,1));
        fileName = ['document',num2str(kk),'.txt'];
        fid = fopen(fileName,'w');
        fprintf(fid,'%s\r\n',C(idx_s:idx_e));
        fclose(fid);
    end
end
6 Comments
Akira Agata
2017-10-30
Edited: Akira Agata
2017-10-30
Thanks for your reply. I've just made a script to do items 1~3, as follows. I hope this helps.
Regarding your last question ("count the number of documents each word appears in"), I think you can do that by combining the following script with my previous one; a rough sketch of that is given after the script below.
% Read your text file
fid = fopen('yourText.txt');
C = textscan(fid,'%s','TextType','string','Delimiter','\n','EndOfLine','\r\n');
C = C{1};
fclose(fid);
C = regexprep(C,'<[\w \=\/]+>',''); % Remove tags
C = regexprep(C,'[.,!?:;()]',''); % Remove punctuation and brackets
C = regexprep(C,'[0-9]+',''); % Remove numbers
C = lower(C); % Convert to lower case
% Extract all words
words = regexp(C,'[a-z\-]+','match');
words = [words{:}];
% (1) Count total number of words
numOfWords = numel(words); % --> 9
% (2) Count the total number of distinct words
numOfDistWords = numel(unique(words)); % --> 7
% (3) Find the number of times each word is used in the original text
wordList = unique(words);
wordCount = arrayfun(@(x) nnz(strcmp(x,words)), wordList);
% Show the result
figure
bar(wordCount)
xticklabels(wordList)
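As a rough sketch of that combination (not tested against your full data): assuming the documentN.txt files written by my previous script are in the current folder, and reusing the words variable from the script above, the document frequency of each word could be counted like this (docFiles and docCount are just illustrative names):
% Count, for each distinct word, how many documents it appears in
wordList = unique(words);
docCount = zeros(size(wordList));
docFiles = dir('document*.txt'); % files written by the previous script
for kk = 1:numel(docFiles)
    txt = lower(fileread(docFiles(kk).name));
    txt = regexprep(txt,'<[\w \=\/]+>',''); % remove the <P ID=n> and </P> tags
    docWords = regexp(txt,'[a-z\-]+','match'); % words in this document
    docCount = docCount + ismember(wordList,docWords); % +1 where the word occurs
end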
More Answers (2)
Cedric
2017-10-26
Here is another approach based on pattern matching:
>> data = regexp(fileread('data.txt'), '(?<=<P[^>]+>\s*)[\w ]+', 'match' )
data =
1×3 cell array
{'A LITTLE BLACK BIRD'} {'Story about a bird'} {'Part 1'}
If you don't need the IDs (e.g., if they will in any case simply run from 1 to the number of P tags), you are done.
If you do need the IDs, you can get both the IDs and the content as follows:
>> data = regexp(fileread('data.txt'), '<P ID=(\d+)>\s*([\w ]+)', 'tokens' ) ;
data = vertcat( data{:} ) ;
ids = str2double( data(:,1) )
data = data(:,2)
ids =
1
2
3
data =
3×1 cell array
{'A LITTLE BLACK BIRD'}
{'Story about a bird' }
{'Part 1' }
Christopher Creutzig
2017-11-2
Edited: Christopher Creutzig
2017-11-2
It's probably easiest to split the text with string functions and then count the number of pieces created:
str = extractFileText('file.txt');
paras = split(str,"</P>");
paras(end) = []; % the split left an empty last entry
paras = extractAfter(paras,">") % Drop the "<P ID=n>" from the beginning
Then, numel(paras) will give you the number of </P>.
If you do not have extractFileText, calling string(fileread('file.txt')) should work just fine, too.
In one of the comments, you indicated you also need to count the frequency of words in documents. That is what bagOfWords is for:
tdoc = tokenizedDocument(lower(paras));
bag = bagOfWords(tdoc)
bag =
bagOfWords with 13 words and 3 documents:
a    little    black    bird    .    …
1    1         1        1       1
1    0         0        1       0
…
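If you also need the number of documents each word appears in, that should be available directly from the bag's Counts and Vocabulary properties; a small follow-up sketch (counts, vocab, and docFreq are just illustrative names):
counts = full(bag.Counts); % documents-by-words count matrix (3-by-13 in this example)
vocab = bag.Vocabulary; % the words, in the same column order
docFreq = sum(counts > 0,1); % number of documents each word appears in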
2 Comments
shilpa patil
2019-9-23
Edited: shilpa patil
2019-9-23
How can the above code be rewritten to work on a document image instead of a text file?