WebCrow

WebCrow is a research project carried out at the Information Engineering Department of the University of Siena with the purpose of automatically solving crosswords.

The Project

The scientific relevance of the project can be understood by considering that cracking crosswords requires human-level knowledge. Unlike chess and related games, there is no closed-world configuration space. A first nucleus of technologies, such as search engines, information retrieval, and machine learning, enables computers to deal with the semantics of real-life concepts. The project is based on a software system whose key assumption is to attack crosswords by using the Web as its primary source of knowledge.

WebCrow is very fast and often thrashes human challengers in competitions,[1] especially on multi-language crossword schemes. A distinctive feature of the WebCrow software system is that it properly combines natural language processing (NLP) techniques, the Google web search engine, and constraint satisfaction algorithms from artificial intelligence to acquire knowledge and to fill the grid. The most important component of WebCrow is the Web Search Module (WSM), which implements a domain-specific, web-based question answering algorithm.
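
As an illustration of this interface, the following minimal Python sketch shows the WSM's role as a question answerer: a clue and an answer length go in, a ranked list of scored candidates comes out. The function name and the canned lookup table are hypothetical stand-ins, since WebCrow's actual implementation is not public.

    def web_search_module(clue, length):
        """Hypothetical WSM interface: given a clue and the required answer
        length, return candidate words with confidence scores. A real
        implementation would issue web queries and mine the retrieved
        documents; a canned lookup stands in for the web here."""
        canned = {
            ("Capital of Italy", 4): [("ROME", 0.90), ("ASTI", 0.05)],
        }
        return canned.get((clue, length), [])

    print(web_search_module("Capital of Italy", 4))  # [('ROME', 0.9), ('ASTI', 0.05)]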

The way WebCrow approaches crossword solving is quite different from that of humans:[2] whereas we tend to answer first the clues we are sure of and then proceed to fill the grid by exploiting the already answered clues as hints, WebCrow uses two clearly distinct stages. In the first stage, it processes all the clues and tries to answer them all: for each clue it finds many possible candidates and sorts them according to complex ranking models, mainly based on a probabilistic criterion. In the second stage, WebCrow uses constraint satisfaction algorithms to fill the grid with the overall most likely combination of candidate answers.
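
A minimal Python sketch of this two-stage scheme follows. The candidate lists, scores, and crossing constraints are invented for illustration, and naive enumeration stands in for WebCrow's real constraint satisfaction algorithms, which are considerably more sophisticated.

    from itertools import product

    # Stage 1 (assumed output): every clue gets a ranked candidate list,
    # each candidate carrying a probability-like confidence score.
    candidates = {
        "1A": [("ROME", 0.6), ("OSLO", 0.3)],
        "1D": [("RED", 0.7), ("RAW", 0.2)],
    }

    # Crossing constraints: (clue_a, i, clue_b, j) means letter i of
    # clue_a's answer must equal letter j of clue_b's answer.
    crossings = [("1A", 0, "1D", 0)]

    def consistent(assignment):
        """Check that all crossing letters agree."""
        return all(assignment[a][i] == assignment[b][j]
                   for a, i, b, j in crossings)

    # Stage 2: pick the globally consistent assignment with the highest
    # joint probability (naive enumeration keeps the sketch short).
    best, best_p = None, 0.0
    for combo in product(*candidates.values()):
        assignment = dict(zip(candidates, (word for word, _ in combo)))
        p = 1.0
        for _, prob in combo:
            p *= prob
        if p > best_p and consistent(assignment):
            best, best_p = assignment, p

    print(best)  # {'1A': 'ROME', '1D': 'RED'}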

In order to interact with Google, WebCrow first needs to compose queries on the basis of the given clues. This is done by query expansion, whose purpose is to convert the clue into a query expressed in a simplified language that is more appropriate for Google. The retrieved documents are parsed so as to extract a list of candidate words that are consistent with the crossword length constraints. Crosswords can hardly be solved by using encyclopedic knowledge only, since many clues are wordplays or are otherwise purposefully very ambiguous. This enigmatic component of crosswords is handled by massive use of a database of solved crosswords and by automatic reasoning on a properly organized knowledge base of hard-wired rules. Last but not least, the final constraint satisfaction step is very effective at filling in the correct candidates, even though, unlike humans, the system cannot rely on high confidence in the correctness of any single answer.
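
The first two steps of this pipeline can be sketched as follows. The stopword-dropping query expansion and the frequency-based candidate ranking below are deliberately naive, hypothetical stand-ins for WebCrow's actual techniques.

    import re

    STOPWORDS = {"the", "a", "an", "of", "in", "for", "to"}

    def expand_query(clue):
        """Hypothetical query expansion: keep only the content words of
        the clue, in a simplified form suitable for a search engine."""
        words = re.findall(r"[a-z]+", clue.lower())
        return " ".join(w for w in words if w not in STOPWORDS)

    def extract_candidates(documents, length):
        """Parse retrieved documents and keep words that satisfy the
        crossword length constraint, ranked by raw frequency."""
        counts = {}
        for doc in documents:
            for word in re.findall(r"[A-Za-z]+", doc):
                if len(word) == length:
                    w = word.upper()
                    counts[w] = counts.get(w, 0) + 1
        return sorted(counts, key=counts.get, reverse=True)

    query = expand_query("Capital of Italy")   # -> "capital italy"
    docs = ["Rome is the capital of Italy.", "Italy's capital is Rome."]
    print(extract_candidates(docs, 4))         # ['ROME']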

Competitions

WebCrow's speed and effectiveness[3] have been tested many times in man-machine competitions[1] on Italian, English, and multi-language crosswords. The outcome of the tests is that WebCrow can successfully compete with average human players on single-language schemes and reaches expert-level performance on multi-language crosswords. However, WebCrow has not yet reached expert level on single-language crosswords.

ECAI-06 Competition

On August 30, 2006, at the European Conference on Artificial Intelligence (ECAI 2006), 25 conference attendees and 53 internet-connected crossword lovers competed with WebCrow in an official challenge organized within the conference program. The challenge consisted of five different crosswords (two in Italian, two in English, and one multi-language in Italian and English), and 15 minutes were assigned for each crossword. WebCrow ranked 21st out of 74 participants in the Italian competition and won both the bilingual and English competitions.

Other Competitions

Several competitions were held in Florence, Italy, within the Creativity Festival in December 2006, and another official conference competition took place in Hyderabad, India, in January 2007, within the International Joint Conference on Artificial Intelligence (IJCAI), where WebCrow ranked second out of 25 participants.

References

  1. ^ a b G. Angelini, M. Ernandes, E. Di Iorio, "WebCrow: Previous Competitions".
  2. ^ John S. Quarterman, "Google as AI".
  3. ^ Tom Simonite, "Crossword Software Thrashes Human Challengers", New Scientist.