stanford nlp - How to use the standard pipeline (tokenize,ssplit,pos,lemma) with the new parsers?


I have been using an older version of Stanford NLP and want to switch to the newest, coolest algorithms. I looked at the demo of the neural network dependency parser, but I don't know how to integrate it into the CoreNLP pipeline.

I am using this Jython code:

    from java.util import Properties
    from edu.stanford.nlp.pipeline import StanfordCoreNLP

    props = Properties()
    props.put("annotators", "tokenize,ssplit,pos,lemma,parse")
    props.put("ssplit.isOneSentence", "true")
    pipeline = StanfordCoreNLP(props)

But I'd like to use the newer algorithms. Is this possible with the current pipeline? If not, is there an easy way to rewrite this so that it produces the same results without the annotation pipeline?

Thanks in advance! Pavel

The annotator you're looking for is "depparse", not "parse". So, your code would look like:

    props = Properties()
    props.put("annotators", "tokenize,ssplit,pos,lemma,depparse")
    props.put("ssplit.isOneSentence", "true")
    pipeline = StanfordCoreNLP(props)

Note that you will no longer have constituency trees (Tree) after this; you will instead get a dependency tree (SemanticGraph).
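For reference, here is a minimal sketch (untested, and assuming the Jython pipeline built above) of how you might pull the SemanticGraph out of an annotated document using the standard CoreNLP annotation keys:

    from edu.stanford.nlp.pipeline import Annotation
    from edu.stanford.nlp.ling import CoreAnnotations
    from edu.stanford.nlp.semgraph import SemanticGraphCoreAnnotations

    # Annotate a document with the depparse pipeline built above
    document = Annotation("The quick brown fox jumps over the lazy dog.")
    pipeline.annotate(document)

    for sentence in document.get(CoreAnnotations.SentencesAnnotation):
        # The dependency parse is stored as a SemanticGraph, not a constituency Tree
        graph = sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation)
        print graph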

