<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<p>API and code to convert text into indexable/searchable tokens. Covers {@link Lucene.Net.Analysis.Analyzer} and related classes.</p>
<h2>Parsing? Tokenization? Analysis!</h2>
<p>
Lucene, an indexing and search library, accepts only plain text input.
</p>
<p>
Applications that build their search capabilities upon Lucene may support documents in various formats – HTML, XML, PDF, Word – just to name a few.
Lucene does not care about the <i>Parsing</i> of these and other document formats, and it is the responsibility of the
application using Lucene to use an appropriate <i>Parser</i> to convert the original format into plain text before passing that plain text to Lucene.
</p>
<h2>Tokenization</h2>
<p>
Plain text passed to Lucene for indexing goes through a process generally called tokenization. Tokenization is the process
of breaking input text into small indexing elements – tokens.
The way input text is broken into tokens heavily influences how people will then be able to search for that text.
For instance, sentence beginnings and endings can be identified to provide for more accurate phrase
and proximity searches (though sentence identification is not provided by Lucene).
</p>
<p>
In some cases simply breaking the input text into tokens is not enough – a deeper <i>Analysis</i> may be needed.
There are many post-tokenization steps that can be done, including (but not limited to):
</p>
<ul>
  <li><a href="http://en.wikipedia.org/wiki/Stemming">Stemming</a> –
      Replacing words with their stems.
      For instance with English stemming "bikes" is replaced by "bike";
      now a query for "bike" can find both documents containing "bike" and those containing "bikes".
  </li>
  <li><a href="http://en.wikipedia.org/wiki/Stop_words">Stop Words Filtering</a> –
      Common words like "the", "and" and "a" rarely add any value to a search.
      Removing them shrinks the index size and increases performance.
      It may also reduce some "noise" and actually improve search quality.
  </li>
  <li><a href="http://en.wikipedia.org/wiki/Text_normalization">Text Normalization</a> –
      Stripping accents and other character markings can make for better searching.
  </li>
  <li><a href="http://en.wikipedia.org/wiki/Synonym">Synonym Expansion</a> –
      Adding in synonyms at the same token position as the current word can mean better
      matching when users search with words in the synonym set.
  </li>
</ul>
<h2>Core Analysis</h2>
<p>
The analysis package provides the mechanism to convert Strings and Readers into tokens that can be indexed by Lucene. There
are three main classes in the package from which all analysis processes are derived. These are:
</p>
<ul>
  <li>{@link Lucene.Net.Analysis.Analyzer} – An Analyzer is responsible for building a {@link Lucene.Net.Analysis.TokenStream} which can be consumed
      by the indexing and searching processes. See below for more information on implementing your own Analyzer.</li>
  <li>{@link Lucene.Net.Analysis.Tokenizer} – A Tokenizer is a {@link Lucene.Net.Analysis.TokenStream} and is responsible for breaking
      up incoming text into tokens. In most cases, an Analyzer will use a Tokenizer as the first step in
      the analysis process.</li>
  <li>{@link Lucene.Net.Analysis.TokenFilter} – A TokenFilter is also a {@link Lucene.Net.Analysis.TokenStream} and is responsible
      for modifying tokens that have been created by the Tokenizer. Common modifications performed by a
      TokenFilter are: deletion, stemming, synonym injection, and down casing. Not all Analyzers require TokenFilters.
      A small chain that combines a Tokenizer with several TokenFilters is sketched after this list.</li>
</ul>
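<p>
For instance, several of the post-tokenization steps described above can be combined by chaining TokenFilters onto a Tokenizer.
The following is a minimal sketch, assuming the Lucene 2.9-era Java classes used elsewhere in this document; the input text is
illustrative and constructor signatures may differ slightly across versions:
</p>
<pre>
    // Sketch: a Tokenizer followed by lower-casing, stop-word removal and stemming.
    Reader reader = new StringReader("The quick brown foxes");
    TokenStream ts = new WhitespaceTokenizer(reader);
    ts = new LowerCaseFilter(ts);
    ts = new StopFilter(true, ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);  // removes "the"
    ts = new PorterStemFilter(ts);                                       // "foxes" becomes "fox"
</pre>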
<p><b>Lucene 2.9 introduces a new TokenStream API. Please see the section "New TokenStream API" below for more details.</b></p>
<h2>Hints, Tips and Traps</h2>
<p>
The synergy between {@link Lucene.Net.Analysis.Analyzer} and {@link Lucene.Net.Analysis.Tokenizer}
is sometimes confusing. To ease this confusion, some clarifications:
</p>
<ul>
  <li>The {@link Lucene.Net.Analysis.Analyzer} is responsible for the entire task of
      <u>creating</u> tokens out of the input text, while the {@link Lucene.Net.Analysis.Tokenizer}
      is only responsible for <u>breaking</u> the input text into tokens. Very likely, tokens created
      by the {@link Lucene.Net.Analysis.Tokenizer} would be modified or even omitted
      by the {@link Lucene.Net.Analysis.Analyzer} (via one or more
      {@link Lucene.Net.Analysis.TokenFilter}s) before being returned.
  </li>
  <li>{@link Lucene.Net.Analysis.Tokenizer} is a {@link Lucene.Net.Analysis.TokenStream},
      but {@link Lucene.Net.Analysis.Analyzer} is not.
  </li>
  <li>{@link Lucene.Net.Analysis.Analyzer} is "field aware", but
      {@link Lucene.Net.Analysis.Tokenizer} is not.
  </li>
</ul>
<p>
Lucene Java provides a number of analysis capabilities, the most commonly used one being the {@link
Lucene.Net.Analysis.Standard.StandardAnalyzer}. Many applications will have a long and industrious life with nothing more
than the StandardAnalyzer. However, there are a few other classes/packages that are worth mentioning:
</p>
<ul>
  <li>{@link Lucene.Net.Analysis.PerFieldAnalyzerWrapper} – Most Analyzers perform the same operation on all
      {@link Lucene.Net.Documents.Field}s. The PerFieldAnalyzerWrapper can be used to associate a different Analyzer with different
      {@link Lucene.Net.Documents.Field}s (see the sketch after this list).</li>
  <li>The contrib/analyzers library located at the root of the Lucene distribution has a number of different Analyzer implementations to solve a variety
      of different problems related to searching. Many of the Analyzers are designed to analyze non-English languages.</li>
  <li>The contrib/snowball library
      located at the root of the Lucene distribution has Analyzer and TokenFilter
      implementations for a variety of Snowball stemmers.
      See <a href="http://snowball.tartarus.org">http://snowball.tartarus.org</a>
      for more information on Snowball stemmers.</li>
  <li>There are a variety of Tokenizer and TokenFilter implementations in this package. Take a look around, chances are someone has implemented what you need.</li>
</ul>
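<p>
As a minimal sketch of the PerFieldAnalyzerWrapper (assuming the Lucene 2.9-era Java API; the field name "id" is illustrative),
a keyword analyzer can be assigned to an identifier field while every other field falls back to the default analyzer:
</p>
<pre>
    // "id" values are indexed as single tokens; all other fields use the StandardAnalyzer.
    PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new StandardAnalyzer());
    wrapper.addAnalyzer("id", new KeywordAnalyzer());
</pre>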
<p>
Analysis is one of the main causes of performance degradation during indexing. Simply put, the more you analyze the slower the indexing (in most cases).
Perhaps your application would be just fine using the simple {@link Lucene.Net.Analysis.WhitespaceTokenizer} combined with a
{@link Lucene.Net.Analysis.StopFilter}. The contrib/benchmark library can be useful for testing out the speed of the analysis process.
</p>
<h2>Invoking the Analyzer</h2>
<p>
Applications usually do not invoke analysis – Lucene does it for them (a short sketch of both cases follows the list):
</p>
<ul>
  <li>At indexing, as a consequence of
      {@link Lucene.Net.Index.IndexWriter#addDocument(Lucene.Net.Documents.Document) addDocument(doc)},
      the Analyzer in effect for indexing is invoked for each indexed field of the added document.
  </li>
  <li>At search, as a consequence of
      {@link Lucene.Net.QueryParsers.QueryParser#parse(java.lang.String) QueryParser.parse(queryText)},
      the QueryParser may invoke the Analyzer in effect.
      Note that for some queries analysis does not take place, e.g. wildcard queries.
  </li>
</ul>
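<p>
In both cases the analyzer is simply handed to Lucene. A hedged sketch, assuming the Lucene 2.9-era Java API and an
already-opened <code>directory</code>:
</p>
<pre>
    Analyzer analyzer = new StandardAnalyzer();
    // index time: the analyzer is applied to every analyzed field of each added document
    IndexWriter writer = new IndexWriter(directory, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
    // search time: the same analyzer is applied to the query text
    QueryParser parser = new QueryParser("content", analyzer);
    Query query = parser.parse("some query text");
</pre>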
<p>
However, an application might invoke analysis on any text for testing or for any other purpose, something like:
</p>
<pre>
    Analyzer analyzer = new StandardAnalyzer(); // or any other analyzer
    TokenStream ts = analyzer.tokenStream("myfield", new StringReader("some text goes here"));
    while (ts.incrementToken()) {
      System.out.println("token: " + ts);
    }
</pre>
<h2>Indexing Analysis vs. Search Analysis</h2>
<p>
Selecting the "correct" analyzer is crucial
for search quality, and can also affect indexing and search performance.
The "correct" analyzer differs between applications.
Lucene Java's wiki page
<a href="http://wiki.apache.org/lucene-java/AnalysisParalysis">AnalysisParalysis</a>
provides some data on "analyzing your analyzer".
Here are some rules of thumb:
</p>
<ol>
  <li>Test test test... (did we say test?)</li>
  <li>Beware of over-analysis – it might hurt indexing performance.</li>
  <li>Start with the same analyzer for indexing and search, otherwise searches would not find what they are supposed to...</li>
  <li>In some cases a different analyzer is required for indexing and search, for instance:
    <ul>
      <li>Certain searches require more stop words to be filtered. (I.e. more than those that were filtered at indexing.)</li>
      <li>Query expansion by synonyms, acronyms, auto spell correction, etc.</li>
    </ul>
    This might sometimes require a modified analyzer – see the next section on how to do that.
  </li>
</ol>
<h2>Implementing your own Analyzer</h2>
<p>Creating your own Analyzer is straightforward. It usually involves either wrapping an existing Tokenizer and set of TokenFilters to create a new Analyzer
or creating both the Analyzer and a Tokenizer or TokenFilter. Before pursuing this approach, you may find it worthwhile
to explore the contrib/analyzers library and/or ask on the java-user@lucene.apache.org mailing list first to see if what you need already exists.
If you are still committed to creating your own Analyzer or TokenStream derivation (Tokenizer or TokenFilter), have a look at
the source code of any one of the many samples located in this package.
</p>
<p>
The following sections discuss some aspects of implementing your own analyzer.
</p>
<h3>Field Section Boundaries</h3>
<p>
When {@link Lucene.Net.Documents.Document#add(Lucene.Net.Documents.Fieldable) document.add(field)}
is called multiple times for the same field name, we could say that each such call creates a new
section for that field in that document.
In fact, a separate call to
{@link Lucene.Net.Analysis.Analyzer#tokenStream(java.lang.String, java.io.Reader) tokenStream(field,reader)}
would take place for each of these so-called "sections".
However, the default Analyzer behavior is to treat all these sections as one large section.
This allows phrase search and proximity search to seamlessly cross
boundaries between these "sections".
In other words, if a certain field "f" is added like this:
</p>
<pre>
    document.add(new Field("f","first ends",...));
    document.add(new Field("f","starts two",...));
    indexWriter.addDocument(document);
</pre>
<p>
Then, a phrase search for "ends starts" would find that document.
Where desired, this behavior can be modified by introducing a "position gap" between consecutive field "sections",
simply by overriding
{@link Lucene.Net.Analysis.Analyzer#getPositionIncrementGap(java.lang.String) Analyzer.getPositionIncrementGap(fieldName)}:
</p>
<pre>
    Analyzer myAnalyzer = new StandardAnalyzer() {
      public int getPositionIncrementGap(String fieldName) {
        return 10; // e.g. a gap of 10 positions between field "sections"
      }
    };
</pre>
<h3>Token Position Increments</h3>
<p>
By default, all tokens created by Analyzers and Tokenizers have a
{@link Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute#getPositionIncrement() position increment} of one.
This means that the position stored for that token in the index would be one more than
that of the previous token.
Recall that phrase and proximity searches rely on position info.
</p>
<p>
If the selected analyzer filters the stop words "is" and "the", then for a document
containing the string "blue is the sky", only the tokens "blue" and "sky" are indexed,
with position("sky") = 1 + position("blue"). Now, a phrase query "blue is the sky"
would find that document, because the same analyzer filters the same stop words from
that query. But the phrase query "blue sky" would also find that document.
</p>
<p>
If this behavior does not fit the application's needs,
a modified analyzer can be used that would further increment the positions of
tokens following a removed stop word, using
{@link Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute#setPositionIncrement(int)}.
This can be done with something like:
</p>
<pre>
  public TokenStream tokenStream(final String fieldName, Reader reader) {
    final TokenStream ts = someAnalyzer.tokenStream(fieldName, reader);
    TokenStream res = new TokenStream() {
      TermAttribute termAtt = (TermAttribute) addAttribute(TermAttribute.class);
      PositionIncrementAttribute posIncrAtt = (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);

      public boolean incrementToken() throws IOException {
        int extraIncrement = 0;
        while (true) {
          boolean hasNext = ts.incrementToken();
          if (hasNext) {
            if (stopWords.contains(termAtt.term())) {
              extraIncrement++; continue; // filter this word
            }
            if (extraIncrement > 0) {
              posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + extraIncrement);
            }
          }
          return hasNext;
        }
      }
    };
    return res;
  }
</pre>
<p>
Now, with this modified analyzer, the phrase query "blue sky" would find that document.
But note that this is not yet a perfect solution, because any phrase query "blue w1 w2 sky"
where both w1 and w2 are stop words would also match that document.
</p>
<p>
A few more use cases for modifying position increments are:
</p>
<ul>
  <li>Inhibiting phrase and proximity matches in sentence boundaries – for this, a tokenizer that
      identifies a new sentence can add 1 to the position increment of the first token of the new sentence.</li>
  <li>Injecting synonyms – here, synonyms of a token should be added after that token,
      and their position increment should be set to 0.
      As a result, all synonyms of a token would be considered to appear in exactly the
      same position as that token, and so they would be seen by phrase and proximity searches.
      A minimal synonym-injecting filter is sketched after this list.</li>
</ul>
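<p>
The following is a minimal, illustrative sketch of such a synonym-injecting filter. It is not a library class; the synonym
table and its lookup are hypothetical placeholders, and it uses the attribute-based API described in the next section:
</p>
<pre>
  public static class SimpleSynonymFilter extends TokenFilter {
    Map synonyms;                        // hypothetical synonym table: term -> synonym
    TermAttribute termAtt;
    PositionIncrementAttribute posIncrAtt;
    String pendingSynonym;               // synonym waiting to be emitted

    protected SimpleSynonymFilter(TokenStream input, Map synonyms) {
      super(input);
      this.synonyms = synonyms;
      termAtt = (TermAttribute) addAttribute(TermAttribute.class);
      posIncrAtt = (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);
    }

    public boolean incrementToken() throws IOException {
      if (pendingSynonym != null) {
        termAtt.setTermBuffer(pendingSynonym);   // emit the stored synonym...
        posIncrAtt.setPositionIncrement(0);      // ...at the same position as the original token
        pendingSynonym = null;
        return true;
      }
      if (!input.incrementToken()) {
        return false;
      }
      pendingSynonym = (String) synonyms.get(termAtt.term());
      return true;
    }
  }
</pre>
<p>
Note that a more careful filter would capture and restore the full token state (e.g. with captureState()/restoreState())
so that the injected token does not inherit stale attribute values such as offsets.
</p>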
<h2>New TokenStream API</h2>
<p>
With Lucene 2.9 we introduce a new TokenStream API. The old API used to produce Tokens. A Token
has getter and setter methods for different properties like positionIncrement and termText.
While this approach was sufficient for the default indexing format, it is not versatile enough for
Flexible Indexing, a term which summarizes the effort of making the Lucene indexer pluggable and extensible for custom
index formats.
</p>
<p>
A fully customizable indexer means that users will be able to store custom data structures on disk. Therefore an API
is necessary that can transport custom types of data from the documents to the indexer.
</p>
<h3>Attribute and AttributeSource</h3>
<p>
Lucene 2.9 therefore introduces a new pair of classes called {@link Lucene.Net.Util.Attribute} and
{@link Lucene.Net.Util.AttributeSource}. An Attribute serves as a
particular piece of information about a text token. For example, {@link Lucene.Net.Analysis.Tokenattributes.TermAttribute}
contains the term text of a token, and {@link Lucene.Net.Analysis.Tokenattributes.OffsetAttribute} contains the start and end character offsets of a token.
An AttributeSource is a collection of Attributes with a restriction: there may be only one instance of each attribute type. TokenStream now extends AttributeSource, which
means that one can add Attributes to a TokenStream. Since TokenFilter extends TokenStream, all filters are also
AttributeSources.
</p>
<p>
Lucene now provides six Attributes out of the box, which replace the variables the Token class has (a consumer that reads several of them is sketched after this list):
</p>
<ul>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.TermAttribute}<p>The term text of a token.</p></li>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.OffsetAttribute}<p>The start and end offset of a token in characters.</p></li>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute}<p>See above for detailed information about position increments.</p></li>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.PayloadAttribute}<p>The payload that a Token can optionally have.</p></li>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.TypeAttribute}<p>The type of the token. Default is 'word'.</p></li>
  <li>{@link Lucene.Net.Analysis.Tokenattributes.FlagsAttribute}<p>Optional flags a token can have.</p></li>
</ul>
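<p>
As a minimal sketch (assuming the Lucene 2.9-era Java API; the analyzer, field name and text are illustrative), a consumer
can read several of these attributes side by side:
</p>
<pre>
    TokenStream stream = analyzer.tokenStream("field", new StringReader("some text goes here"));
    TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);
    OffsetAttribute offsetAtt = (OffsetAttribute) stream.addAttribute(OffsetAttribute.class);
    TypeAttribute typeAtt = (TypeAttribute) stream.addAttribute(TypeAttribute.class);
    while (stream.incrementToken()) {
      // term text, character offsets and token type of the current token
      System.out.println(termAtt.term() + " [" + offsetAtt.startOffset() + "," + offsetAtt.endOffset() + "] " + typeAtt.type());
    }
</pre>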
<h3>Using the new TokenStream API</h3>
<p>
There are a few important things to know in order to use the new API efficiently, which are summarized here. You may want
to walk through the example below first and come back to this section afterwards.
</p>
<p>
Please keep in mind that an AttributeSource can only have one instance of a particular Attribute. Furthermore, if
a chain of a TokenStream and multiple TokenFilters is used, then all TokenFilters in that chain share the Attributes
with the TokenStream.
</p>
<p>
Attribute instances are reused for all tokens of a document. Thus, a TokenStream/-Filter needs to update
the appropriate Attribute(s) in incrementToken(). The consumer, commonly the Lucene indexer, consumes the data in the
Attributes and then calls incrementToken() again until it returns false, which indicates that the end of the stream
was reached. This means that in each call of incrementToken() a TokenStream/-Filter can safely overwrite the data in
the Attribute instances.
</p>
<p>
For performance reasons a TokenStream/-Filter should add/get Attributes during instantiation, i.e., create an attribute in the
constructor and store references to it in an instance variable. Using an instance variable instead of calling addAttribute()/getAttribute()
in incrementToken() will avoid expensive casting and attribute lookups for every token in the document.
</p>
<p>
All methods in AttributeSource are idempotent, which means calling them multiple times always yields the same
result. This is especially important to know for addAttribute(). The method takes the <b>type</b> (<code>Class</code>)
of an Attribute as an argument and returns an <b>instance</b>. If an Attribute of the same type was previously added, then
the already existing instance is returned, otherwise a new instance is created and returned. Therefore TokenStreams/-Filters
can safely call addAttribute() with the same Attribute type multiple times. Even consumers of TokenStreams should
normally call addAttribute() instead of getAttribute(), because it would not fail if the TokenStream does not have this
Attribute (getAttribute() would throw an IllegalArgumentException, if the Attribute is missing). More advanced code
could simply check with hasAttribute() whether a TokenStream has the Attribute it is interested in, and may conditionally leave out processing for
extra performance.
</p>
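<p>
For example (a minimal sketch; the stream and attribute types are the ones used elsewhere in this document):
</p>
<pre>
    // addAttribute() is idempotent: the second call returns the same instance
    TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);
    TermAttribute sameAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);   // termAtt == sameAtt

    // more advanced consumers can probe for optional attributes instead of adding them
    if (stream.hasAttribute(PayloadAttribute.class)) {
      PayloadAttribute payloadAtt = (PayloadAttribute) stream.getAttribute(PayloadAttribute.class);
      // ... optional payload handling ...
    }
</pre>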
<p>
In this example we will create a WhitespaceTokenizer and use a LengthFilter to suppress all words that only
have two or fewer characters. The LengthFilter is part of the Lucene core and its implementation will be explained
here to illustrate the usage of the new TokenStream API.<br>
Then we will develop a custom Attribute, a PartOfSpeechAttribute, and add another filter to the chain which
utilizes the new custom attribute, and call it PartOfSpeechTaggingFilter.
</p>
<h4>Whitespace tokenization</h4>
<pre>
  public class MyAnalyzer extends Analyzer {

    public TokenStream tokenStream(String fieldName, Reader reader) {
      TokenStream stream = new WhitespaceTokenizer(reader);
      return stream;
    }

    public static void main(String[] args) throws IOException {
      // text to tokenize
      final String text = "This is a demo of the new TokenStream API";

      MyAnalyzer analyzer = new MyAnalyzer();
      TokenStream stream = analyzer.tokenStream("field", new StringReader(text));

      // get the TermAttribute from the TokenStream
      TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);

      // print all tokens until stream is exhausted
      while (stream.incrementToken()) {
        System.out.println(termAtt.term());
      }
    }
  }
</pre>
<p>
In this simple example, a plain whitespace tokenization is performed. In main() a loop consumes the stream and
prints the term text of the tokens by accessing the TermAttribute that the WhitespaceTokenizer provides.
Here is the output:
</p>
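<pre>
This
is
a
demo
of
the
new
TokenStream
API
</pre>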
<h4>Adding a LengthFilter</h4>
<p>
We want to suppress all tokens that have two or fewer characters. We can do that easily by adding a LengthFilter
to the chain. Only the tokenStream() method in our analyzer needs to be changed:
</p>
<pre>
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream stream = new WhitespaceTokenizer(reader);
    stream = new LengthFilter(stream, 3, Integer.MAX_VALUE);
    return stream;
  }
</pre>
<p>
Note how only words with three or more characters are now contained in the output:
</p>
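<pre>
This
demo
the
new
TokenStream
API
</pre>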
<p>
Now let's take a look at how the LengthFilter is implemented (it is part of Lucene's core):
</p>
<pre>
  public final class LengthFilter extends TokenFilter {

    final int min, max;
    private TermAttribute termAtt;

    /**
     * Build a filter that removes words that are too long or too
     * short from the text.
     */
    public LengthFilter(TokenStream in, int min, int max) {
      super(in);
      this.min = min;
      this.max = max;
      termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    }

    /**
     * Returns the next input Token whose term() is the right len
     */
    public final boolean incrementToken() throws IOException {
      assert termAtt != null;
      // return the first token whose length is within [min, max]
      while (input.incrementToken()) {
        int len = termAtt.termLength();
        if (len >= min && len <= max) {
          return true;
        }
        // note: else we ignore it but should we index each part of it?
      }
      // reached EOS -- return false
      return false;
    }
  }
</pre>
<p>
The TermAttribute is added in the constructor and stored in the instance variable <code>termAtt</code>.
Remember that there can only be a single instance of TermAttribute in the chain, so in our example the
<code>addAttribute()</code> call in LengthFilter returns the TermAttribute that the WhitespaceTokenizer already added. The tokens
are retrieved from the input stream in the <code>incrementToken()</code> method. By looking at the term text
in the TermAttribute, the length of the term can be determined and too short or too long tokens are skipped.
Note how <code>incrementToken()</code> can efficiently access the instance variable; no attribute lookup or downcasting
is necessary. The same is true for the consumer, which can simply use local references to the Attributes.
</p>
<h4>Adding a custom Attribute</h4>
<p>
Now we're going to implement our own custom Attribute for part-of-speech tagging and, accordingly, call it
<code>PartOfSpeechAttribute</code>. First we need to define the interface of the new Attribute:
</p>
<pre>
  public interface PartOfSpeechAttribute extends Attribute {
    public static enum PartOfSpeech {
      Noun, Verb, Adjective, Adverb, Pronoun, Preposition, Conjunction, Article, Unknown
    }

    public void setPartOfSpeech(PartOfSpeech pos);

    public PartOfSpeech getPartOfSpeech();
  }
</pre>
<p>
Now we also need to write the implementing class. The name of that class is important here: by default, Lucene
checks if there is a class with the name of the Attribute with the postfix 'Impl'. In this example, we would
consequently call the implementing class <code>PartOfSpeechAttributeImpl</code>. <br/>
This should be the usual behavior. However, there is also an expert API that allows changing these naming conventions:
{@link Lucene.Net.Util.AttributeSource.AttributeFactory}. The factory accepts an Attribute interface as argument
and returns an actual instance. You can implement your own factory if you need to change the default behavior. <br/><br/>
</p>
<p>
Now here is the actual class that implements our new Attribute. Notice that the class has to extend
{@link Lucene.Net.Util.AttributeImpl}:
</p>
<pre>
  public final class PartOfSpeechAttributeImpl extends AttributeImpl
                              implements PartOfSpeechAttribute {

    private PartOfSpeech pos = PartOfSpeech.Unknown;

    public void setPartOfSpeech(PartOfSpeech pos) {
      this.pos = pos;
    }

    public PartOfSpeech getPartOfSpeech() {
      return pos;
    }

    public void clear() {
      pos = PartOfSpeech.Unknown;
    }

    public void copyTo(AttributeImpl target) {
      ((PartOfSpeechAttributeImpl) target).pos = pos;
    }

    public boolean equals(Object other) {
      if (other == this) return true;
      if (other instanceof PartOfSpeechAttributeImpl) {
        return pos == ((PartOfSpeechAttributeImpl) other).pos;
      }
      return false;
    }

    public int hashCode() {
      return pos.ordinal();
    }
  }
</pre>
<p>
This is a simple Attribute implementation that has only a single variable which stores the part-of-speech of a token. It extends the
new <code>AttributeImpl</code> class and therefore implements its abstract methods <code>clear(), copyTo(), equals(), hashCode()</code>.
Now we need a TokenFilter that can set this new PartOfSpeechAttribute for each token. In this example we show a very naive filter
that tags every word with a leading upper-case letter as a 'Noun' and all other words as 'Unknown'.
</p>
<pre>
  public static class PartOfSpeechTaggingFilter extends TokenFilter {
    PartOfSpeechAttribute posAtt;
    TermAttribute termAtt;

    protected PartOfSpeechTaggingFilter(TokenStream input) {
      super(input);
      posAtt = (PartOfSpeechAttribute) addAttribute(PartOfSpeechAttribute.class);
      termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    }

    public boolean incrementToken() throws IOException {
      if (!input.incrementToken()) { return false; }
      posAtt.setPartOfSpeech(determinePOS(termAtt.termBuffer(), 0, termAtt.termLength()));
      return true;
    }

    // determine the part of speech for the given term
    protected PartOfSpeech determinePOS(char[] term, int offset, int length) {
      // naive implementation that tags every uppercased word as noun
      if (length > 0 && Character.isUpperCase(term[0])) {
        return PartOfSpeech.Noun;
      }
      return PartOfSpeech.Unknown;
    }
  }
</pre>
<p>
Just like the LengthFilter, this new filter accesses the attributes it needs in the constructor and
stores references in instance variables. Notice how you only need to pass in the interface of the new
Attribute; instantiating the correct implementing class is automatically taken care of.
Now we need to add the filter to the chain:
</p>
<pre>
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream stream = new WhitespaceTokenizer(reader);
    stream = new LengthFilter(stream, 3, Integer.MAX_VALUE);
    stream = new PartOfSpeechTaggingFilter(stream);
    return stream;
  }
</pre>
<p>
Now let's look at the output:
</p>
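<pre>
This
demo
the
new
TokenStream
API
</pre>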
<p>
Apparently it hasn't changed, which shows that adding a custom attribute to a TokenStream/Filter chain does not
affect any existing consumers, simply because they don't know about the new Attribute. Now let's change the consumer
to make use of the new PartOfSpeechAttribute and print it out:
</p>
<pre>
  public static void main(String[] args) throws IOException {
    // text to tokenize
    final String text = "This is a demo of the new TokenStream API";

    MyAnalyzer analyzer = new MyAnalyzer();
    TokenStream stream = analyzer.tokenStream("field", new StringReader(text));

    // get the TermAttribute from the TokenStream
    TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);

    // get the PartOfSpeechAttribute from the TokenStream
    PartOfSpeechAttribute posAtt = (PartOfSpeechAttribute) stream.addAttribute(PartOfSpeechAttribute.class);

    // print all tokens until stream is exhausted
    while (stream.incrementToken()) {
      System.out.println(termAtt.term() + ": " + posAtt.getPartOfSpeech());
    }
  }
</pre>
<p>
The change that was made is to get the PartOfSpeechAttribute from the TokenStream and print out its contents in
the while loop that consumes the stream. Here is the new output:
</p>
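<pre>
This: Noun
demo: Unknown
the: Unknown
new: Unknown
TokenStream: Noun
API: Noun
</pre>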
<p>
Each word is now followed by its assigned PartOfSpeech tag. Of course this is naive
part-of-speech tagging. The word 'This' should not even be tagged as a noun; it is only capitalized because it
is the first word of a sentence. Actually this is a good opportunity for an exercise. To practice the usage of the new
API, the reader could now write an Attribute and TokenFilter that can specify for each word whether it was the first token
of a sentence or not. Then the PartOfSpeechTaggingFilter could make use of this knowledge and only tag capitalized words
as nouns if they are not the first word of a sentence (we know, this is still not correct behavior, but hey, it's a good exercise).
As a small hint, this is how the new Attribute class could begin:
</p>
<pre>
  public class FirstTokenOfSentenceAttributeImpl extends AttributeImpl
                   implements FirstTokenOfSentenceAttribute {

    private boolean firstToken;

    public void setFirstToken(boolean firstToken) {
      this.firstToken = firstToken;
    }

    public boolean getFirstToken() {
      return firstToken;
    }

    public void clear() {
      firstToken = false;
    }
    // ...
  }
</pre>