Natural Language Processing

Natural language processing is fundamental to how search engine algorithms work.

According to Wikipedia, Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages — in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
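To make the definition concrete, the short Python sketch below shows the kind of analysis such systems perform, using the NLTK library from Bird, Klein, and Loper (2009), which appears in the publication list below. The sample sentence is illustrative only, and the exact NLTK resource names can vary between library versions.

```python
# A minimal NLP sketch with NLTK: tokenization, part-of-speech tagging,
# and named-entity chunking over a single illustrative sentence.
import nltk

# Download the bundled models on first use (resource names may differ
# slightly across NLTK versions).
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("maxent_ne_chunker")
nltk.download("words")

text = "Alan Turing published Computing Machinery and Intelligence in 1950."

tokens = nltk.word_tokenize(text)   # split the raw text into word tokens
tagged = nltk.pos_tag(tokens)       # label each token with a part of speech
entities = nltk.ne_chunk(tagged)    # group tagged tokens into named entities

print(tagged)
print(entities)
```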

The concept of natural language processing dates back to 1950, when Alan Turing published an article titled "Computing Machinery and Intelligence," which proposed what is now called the Turing test as a criterion of intelligence.

Turing had also earlier produced one of the first designs for a stored-program computer, the Automatic Computing Engine (ACE), in 1945.

See also the Stanford CoreNLP toolkit paper (Manning et al., 2014): https://www.aclweb.org/anthology/P14-5010.pdf

Scholarly publications on the subject:

Steven Bethard, Philip Ogren, and Lee Becker. 2014. ClearTK 2.0: Design patterns for machine learning in UIMA. In LREC 2014.

Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media.

Joshua Bloch. 2008. Effective Java. Addison Wesley, Upper Saddle River, NJ, 2nd edition.

Angel X. Chang and Christopher D. Manning. 2012. SUTIME: A library for recognizing and normalizing time expressions. In LREC 2012.

James Clarke, Vivek Srikumar, Mark Sammons, and Dan Roth. 2012. An NLP Curator (or: How I learned to stop worrying and love NLP pipelines). In LREC 2012.

Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: an architecture for the development of robust HLT applications. In ACL 2002.

Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006, pages 449–454.

David Ferrucci and Adam Lally. 2004. UIMA: an architectural approach to unstructured information processing in the corporate research environment. Natural Language Engineering, 10:327–348.

Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In ACL 43, pages 363–370.

I. Gurevych, M. Mühlhäuser, C. Müller, J. Steimle, M. Weimer, and T. Zesch. 2007. Darmstadt knowledge processing repository based on UIMA. In First Workshop on Unstructured Information Management Architecture at GLDV 2007, Tübingen.

U. Hahn, E. Buyko, R. Landefeld, M. Mühlhausen, M. Poprat, K. Tomanek, and J. Wermter. 2008. An overview of JCoRe, the Julie lab UIMA component registry. In LREC 2008.

Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, pages 3–10. MIT Press.

Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).

Anthony Patricio. 2009. Why this project is successful? https://community.jboss.org/wiki/WhyThisProjectIsSuccessful

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013, pages 1631–1642.

Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL 3, pages 252–259.
