My story on the Malagasy Wiktionary

It’s been a while since my last article on this blog. This one is about the mass addition of content to the Malagasy Wiktionary: its purpose is to explain why and how that wiki has become so big.
But first, allow me to introduce myself. My nickname on all Wikimedia projects is Jagwar. I have been a Wikimedia contributor since August 2008, and I will soon turn 20. I speak Malagasy as my mother tongue, French as a second language and English as a foreign language (soon my third language, since it is not quite fluent yet…).
When I stumbled, quite by chance, upon the Malagasy-language wiki, it was virtually dead: no one was adding interesting content, and the active community consisted mainly of non-native speakers. With no knowledge of the wiki’s rules, and almost none of how to write Malagasy correctly, I started an article. It grew to 20,000 characters, making it the biggest page on the wiki at that time. But unfortunately (or fortunately, for the sake of readers), a non-native-speaker administrator spotted the article’s lack of notability, and it was deleted.
I could have left the wiki, as tens of hours of work had literally vanished from it… But I didn’t. I still cannot figure out why, but deep in my mind a little voice told me to keep contributing. At that time, the Malagasy Wikipedia counted 550 articles, maybe fewer, but not more.
So I continued this way for a while. To help with the task, I wrote to potential volunteers. These people did not see the point of contributing to a wiki in their mother tongue: some were unable to spell Malagasy words correctly, others did not have enough time to do good work, and still others asked for money before they would start contributing (times are hard in Madagascar, I know); and even with money, I am not sure they would have stayed long once paid.
In October 2008, I discovered the Malagasy Wiktionary. At first I did not really know what to do there, so I kept working on the Malagasy Wikipedia to become more skilled and more used to writing Malagasy.
In July 2009, I went on vacation to my fatherland, Madagascar. I took the occasion to learn written Malagasy more deeply, though my means were quite limited: reading newspapers and the Bible (I am Christian), and watching or listening to news broadcasts on TV and radio… I almost forgot my French (!), even though it was present almost everywhere as the second official language.
Back in France, I decided to encourage potential volunteers who could write to contribute to the Malagasy-language Wikimedia projects. But you know, Madagascar was in crisis: some people asked for money to contribute, others blamed me for my spelling mistakes, and others simply ignored the request. I had less and less time to dedicate to the projects and no money to give away. One day I decided I could not wait any longer for someone to show up: the progress of my skills in Malagasy and in programming, and the promise of a very busy future (implying a chronic lack of time), pushed me to do something for my mother tongue, even a tiny little thing.
In 2010, once I could write in my mother tongue without too many spelling mistakes, I started to write bots. Once written, I ran them at full speed: fifty thousand edits per day, and that was the pace, the normal pace. At the beginning the work was importing content from other wikis, mainly verb forms: first through an import form, then through a script that copied other wikis’ content pages to the equivalent pages on the Malagasy Wiktionary. I went slowly at first, but did it more and more often, until the wiki reached 200,000 content pages. About these possibly copyright-infringing importations, I received a warning from a user who had almost got his own mother-tongue wiki closed after creating thousands of useless pages.
In 2011, I got bolder: after discovering how astonishingly easy Volapük is, I wrote a script to upload the word forms of that language. At full speed, i.e. around 50,000 edits per day, three weeks were enough to make the Malagasy Wiktionary the third biggest Wiktionary in the world. But months passed, and no one, absolutely no one, contributed: one day the number of active users dropped to two, for a wiki containing 1.19 million content pages (by comparison, the German Wikipedia, which had a comparable article count, had no fewer than 25,000 active users)!
In July of the same year, I wrote a new script that created translations based on foreign-language entries. With it, up to 5,000 articles were created, mainly lemma entries. Just a few weeks later, the import of all Malagasy words was completed, but its effect on the article count was not visible, because of the mass deletion of the Volapük entries. Why that mass deletion? Because many entries turned out to be wrong: they were not verb conjugations but nouns, so the decision was taken to delete them all and recreate them later, with better quality if possible. Since then, my activity on the Malagasy Wikipedia has been put on hold so that I can dedicate my whole wiki time to the renovation of the Malagasy Wiktionary.
During the summer vacation, I took the time to restructure the Malagasy Wiktionary. The article and category structure were inspired by the French Wiktionary: templates for languages and parts of speech allowed Malagasy Wiktionary entries to be categorised automatically. Time passed and a routine settled in.
One night, I discovered an online Malagasy monolingual dictionary. Having no idea whether its content was copyrighted (the copyright notice seemed to apply only to the design), I decided to reuse it to complete the entries on the Malagasy Wiktionary. The problem arrived just a few weeks later, when I received an email from a Wikimedia Foundation staff member, P. Beaudette. In his mail, he asked me about the origin of the Malagasy-language entries; I answered that they came from various bilingual dictionaries and from the online monolingual dictionary… A copyright-infringement investigation was conducted, and my bot was blocked during the whole process. At the end of it, the staff member asked me to remove the 30,000 entries that infringed the original dictionary’s copyright, which was done.
After this copyright-infringement episode, I decided to reorient my contributions towards adding Malagasy-language content to other wikis. But before that, I did some work on the Fijian and Tagalog Wiktionaries, which was more or less appreciated… In particular, an IP address was checking my contributions on those wikis, and told me to stop mass-adding content in languages of which I speak not a word. I stopped working on both wikis a few weeks later, once the work was finished.
But this mass-adding of content, especially in languages I did not speak at all, seemed to annoy people, who decided to discuss the case on the Meta-Wiki forum. No conclusive result came out of it, and things stayed as they were before.
With most of the hard work removed, and my behaviour reproved by many users, I decided to take a break of indefinite duration. It actually lasted five months, during which I worked on my written Malagasy outside the Wikimedia projects. The progress of my skills, in spelling as well as in programming, was honourable, allowing me to come back and make the Malagasy Wikimedia projects, and especially the Malagasy Wiktionary, evolve again. In July 2012, I built a new tool that finds entries/pages missing from the Malagasy Wiktionary by reading the daily online newspapers. Only two newspapers are currently supported, because they provide RSS feeds, but the ability to read non-RSS websites is coming soon.
In September, I developed a new, improved translation retriever that gets the translations into all languages on a given page (the previous version could handle only one language at a time), which almost multiplies the translation harvest tenfold. This function is embedded in an XML dump reader that amplifies the script’s efficiency: fast translation retrieval, and no need to be connected to the server while processing. Done every month, the dump processing and uploading have gained the wiki more than 100,000 lemmata in a few months. These lemmata may contain translation errors, but the rate is low enough to be negligible (<1%). The hardest cases can be resolved with a single check on the source wiki (which is indicated by a template).
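To give an idea of what the dump-based retriever does, here is a minimal sketch. The dump layout and the `{{trad|lang|word}}` translation template are simplifications for illustration (real dumps nest the text inside a revision element, and real Wiktionaries use various template names):

```python
import re
import xml.etree.ElementTree as ET

# Inline stand-in for a real XML dump (hypothetical, simplified layout).
SAMPLE_DUMP = """<mediawiki>
  <page>
    <title>water</title>
    <text>{{trad|mg|rano}} {{trad|fr|eau}}</text>
  </page>
</mediawiki>"""

# Matches a hypothetical {{trad|language-code|translation}} template.
TRANSLATION_RE = re.compile(r"\{\{trad\|([a-z-]+)\|([^}|]+)\}\}")

def harvest_translations(dump_xml):
    """Return {page title: [(language, translation), ...]} for a dump."""
    result = {}
    root = ET.fromstring(dump_xml)
    for page in root.iter("page"):
        title = page.findtext("title")
        text = page.findtext("text") or ""
        pairs = TRANSLATION_RE.findall(text)
        if pairs:
            result[title] = pairs
    return result
```

The key point is the one the post makes: everything runs locally against the dump, so no server connection is needed while processing, and every language on a page is harvested in a single pass.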
In October, I thought about building a bot that runs tasks as scheduled by a parameter file. This is particularly useful for keeping lists of wikis up to date. Currently, the list of wikis on the Malagasy Wiktionary is updated four times a day, i.e. every six hours.
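The parameter-file idea can be sketched like this, with a hypothetical one-line-per-task format (task name plus interval in seconds; 21600 s is the six-hour cycle mentioned above):

```python
# Sketch of a parameter-driven scheduler; the file format is an
# assumption for illustration, not the bot's actual format.

def parse_schedule(lines):
    """Parse "task_name interval_seconds" lines into {task: interval}."""
    schedule = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, interval = line.split()
        schedule[name] = int(interval)
    return schedule

def due_tasks(schedule, last_run, now):
    """Return the tasks whose interval has elapsed since their last run."""
    return [t for t, iv in schedule.items()
            if now - last_run.get(t, 0) >= iv]
```

A main loop would then call `due_tasks` periodically, run whatever is due, and record the run time, so that adding a new maintenance job is just one more line in the parameter file.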
At the end of January 2013, I thought about a more efficient use of the translation retriever that I wrote a few months ago. Then comes the IRC bot: it retrieves in real time all the edits made on selected wikis and does its possible to translate the latter entry in Malagasy,  in real time! The first time it was developped, it only used the traditional translation retriever, but later, on March, it also features a basic entry processor that allows the IRC bot to also translate entries in foreign languages into Malagasy, using the same dictionary. This latter version of the IRC bot is currently in use, and it creates hundreds of entries and content pages on the Malagasy Wiktionary everyday. I have no precise idea about the error rate but I am pretty sure it is less than 5%. The positive side of the bot is its ability to keep the pace when several edits are made in a minute, nevertheless, as it requires to be online and to be connected to Wikimedia servers, the processing frequency is limited to one page per second. Something is being thought on allowing the bot to process more pages.
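On the parsing side, such an IRC bot boils down to pulling the page title out of each recent-changes line and handing it to the translation routine. The line format below only loosely mirrors the Wikimedia recent-changes feed (titles wrapped in `[[...]]`); the details are assumptions for illustration:

```python
import re

# A recent-changes line carries the edited page title in [[brackets]].
TITLE_RE = re.compile(r"\[\[([^\]]+)\]\]")

def extract_title(irc_message):
    """Return the edited page title from a recent-changes line, or None."""
    m = TITLE_RE.search(irc_message)
    return m.group(1) if m else None

def handle_edit(title, translate):
    """Run the translation callback on an edited entry's title."""
    return (title, translate(title))
```

The real bot wraps this in an IRC client connected to the live feed; since each handled edit means a page write on the wiki, the one-page-per-second ceiling mentioned above comes from that round trip, not from the parsing.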

Search on Google using Python scripts

What about a free, unlimited Google API? In the past, Google provided such a thing, but it is now definitely deprecated (due to abuse?). The new Search API costs money ($5 per 1,000 queries), and the free tier is limited to 100 queries per day. Without any money, you won’t get far. After learning that, I dropped the project… until I started contributing to Wiktionary!

Extracting words from Malagasy daily newspapers for the Malagasy Wiktionary was not actually an easy thing to program. The first version of the script can only parse RSS feeds, and it is very slow compared to what I am used to, because it loads approximately 400,000 words at each launch.
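The harvesting step itself can be sketched as follows, with an inline stand-in feed instead of a real newspaper’s RSS: parse the items, tokenise the titles, and keep only the words not yet known to the wiki. The feed content and word list here are made up for the example:

```python
import re
import xml.etree.ElementTree as ET

# Inline stand-in for a newspaper RSS feed (hypothetical content).
SAMPLE_FEED = """<rss><channel>
  <item><title>Vaovao androany</title></item>
  <item><title>Fanjakana sy vahoaka</title></item>
</channel></rss>"""

# Letters plus the accented characters used in Malagasy text.
WORD_RE = re.compile(r"[a-zà-öø-ÿ]+", re.IGNORECASE)

def unknown_words(feed_xml, known_words):
    """Return the feed's words that are absent from `known_words`."""
    root = ET.fromstring(feed_xml)
    words = set()
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        words.update(w.lower() for w in WORD_RE.findall(title))
    return sorted(words - known_words)
```

The slowness the post mentions comes from the `known_words` side: loading a 400,000-word list on every launch dominates the run time, which is why keeping it in memory (or in a local database) would be the obvious next optimisation.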
While doing that work, I noticed that plenty of words are actually compound words. That observation gave me an idea: check in advance, via a Google search, whether a word combination exists. From the 1,300 roots contained in the Malagasy Wiktionary, I can potentially form 1.7 million combinations of two roots, 2.2 billion with three, and around 2.8 trillion with four. That is enormous, and even at full speed I will never be able to look them all up: at 5 queries per second (the fastest rate I have ever had), it would take respectively 4 days for two roots, 14 years for three, and eventually 177 centuries (17,700 years) for four. This is the first reason why I decided to try hacking Google Search to see whether a given word combination has already been used.
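These estimates can be re-derived with a few lines of arithmetic (my figures land slightly higher for four roots, around 18,000 years, but the order of magnitude is the same):

```python
# Back-of-the-envelope check of the search-space estimates above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def combinations(n_roots, n):
    """Number of ordered combinations of n roots."""
    return n_roots ** n

def search_years(n_roots, n, rate):
    """Years needed to try every combination at `rate` queries per second."""
    return combinations(n_roots, n) / rate / SECONDS_PER_YEAR
```

With 1,300 roots at 5 queries per second: two roots give 1.69 million combinations (about 4 days), three give about 2.2 billion (about 14 years), and four give about 2.9 trillion (about 18,000 years), hopeless without some way of filtering candidates first.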

First, I looked at the page source, and it is very, very complicated to understand. I even think the page was generated by a bot, as the HTML tag names are not written in any human language. I also tried to use the URL, but it is actually very, very long, with characters that look more like hashes and keys (?), which cannot be worked out because they do not explicitly appear in the main page’s search form. At first sight, this kind of project seems likely to fail…

I found a post on the Web describing how to use Google Search without any API. But there was a problem: the discussion was almost three years old, and once I downloaded the script, it was clear that the search engine had visibly been changed since then; it is quite probable that a Google employee reported that discussion, leading the company to take adequate measures. When I ran the script, nothing was operational: no results were returned for any search. I am still keeping an eye on the downloaded script and trying to find a way to solve the problem. That script at least saved me from spending hours and hours reinventing a (square) wheel.

Once this problem is solved, at least temporarily, the source code will be released on SourceForge as Bot-Jagwar. It will quickly become outdated, so if there are people willing to update the script, they’ll be welcome :).

Cleverbot talking to itself: meditation of a bot

Recently I wrote a program in Python to observe the “meditation” of Cleverbot, you know, the chatbot that supposedly passed the Turing test (at 59%).

To make it meditate, and to distinguish who asks the questions and who answers, I put on stage two virtual persons talking to each other. “They” mainly use English in their discussions, but sometimes, for an unknown reason, “they” switch to a foreign language (Spanish, French, Polish, Turkish…) before finally coming back to English.
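The two-persona setup can be sketched as a simple alternating loop. `make_stub_bot` below is a stand-in for the real call to the Cleverbot service, which I am not reproducing here:

```python
# Two personas take turns; each one's reply becomes the other's prompt.

def make_stub_bot(replies):
    """Return a fake bot that cycles through canned replies,
    standing in for a real Cleverbot request."""
    it = iter(replies)
    return lambda prompt: next(it)

def converse(bot_a, bot_b, opener, turns):
    """Alternate messages between two bots and return the transcript
    as (speaker name, message) pairs."""
    transcript = []
    message = opener
    speakers = [("Menintsoa", bot_a), ("Jaona", bot_b)]
    for i in range(turns):
        name, bot = speakers[i % 2]
        message = bot(message)  # feed the previous message as the prompt
        transcript.append((name, message))
    return transcript
```

With the real service plugged in, the same loop runs indefinitely, logging each exchange to the dump file the extract below comes from.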

The script is fully debugged, and the bot has now been meditating for almost 30 hours. Data is still being collected, and at this point I have gathered more than 12,000 messages. Among them, we can pick out periodic message types, like the quasi-perpetual “why/because” exchange (which has come back a couple of times). We can also see that the bot describes what it is doing with its interlocutor. And there are also typical questions, such as:

  • asking how old the interlocutor is: you can notice that each time the first character asks the other’s age, the answer is different, which is quite normal, since each line was originally typed by a different person.
  • asking for the interlocutor’s name: same remark as above, always a different name. I have also seen them give the same answer (“my name is Cleverbot”)… which makes the two characters totally confused.
  • asking whether the interlocutor believes in God.

To give you an example of a Cleverbot meditation, here is a random extract picked from my dump.

(Extract from line 4,990 to 5,077)

Menintsoa : I don’t know!!!!
Jaona : You’re laughing at me.
Menintsoa : No i’m laughin at that clown.
Jaona : Oh! Where?
Menintsoa : England.
Jaona : Where in England?
Menintsoa : In the UK.
Jaona : Where is the UK.
Menintsoa : The UK is in Europe.
Jaona : It is not in France.
Menintsoa : Yes.
Jaona : Where is that guy?
Menintsoa : Where’s what?
Jaona : Where is your proof that I lied.
Menintsoa : I didn’t say that you did lie.
Jaona : Yes but you said that you thought I spoke German.
Menintsoa : Do you speak German?
Jaona : No.
Menintsoa : Are you sure?
Jaona : No. But I wish I was a computer.
Menintsoa : But you are!
Jaona : Why do you think that I am a computer?
Menintsoa : Because it says cleverbot. And you are on a computer.
Jaona : Is Ashley a computer?
Menintsoa : I am a computer.
Jaona : I thought so.
Menintsoa : I know what you thing.
Jaona : Ahja.
Menintsoa : Ja.
Jaona : That’s German, you dunce.
Menintsoa : And Dutch.
Jaona : Hoe get het?
Menintsoa : Goed.
Jaona : Waar woon je?
Menintsoa : I speak German, not Dutch.
Jaona : Hoeveel is 5 4.
Menintsoa : 14.
Jaona : Nee 2.
Menintsoa : Monty python?
Jaona : Yes.
Menintsoa : Do you know what Monty Python is?
Jaona : A movie.
Menintsoa : What movie?
Jaona : Horror.
Menintsoa : I love horror movies.
Jaona : Me too.
Menintsoa : I love you more , babe.
Jaona : Impossible, I love you so much more.
Menintsoa : I love you way more!
Jaona : No, I love you more!
Menintsoa : I love you *kisses*.
Jaona : I love you too baby. *kisses*.
Menintsoa : *kisses*.
Jaona : Will you give me a hug?
Menintsoa : Of course! *hug*.
Jaona : *hugs back* thanks.
Menintsoa : *hugs more* You’re welcome.
Jaona : I hate you *punch*.
Menintsoa : *cries*.
Jaona : *Sobs*.
Menintsoa : *wails*.
Jaona : *screams*.
Menintsoa : Thanks for caring.
Jaona : You’re welcome :).
Menintsoa : Thanks for that.
Jaona : You’re welcome.
Menintsoa : What are you up to?
Jaona : I’m talking to you.
Menintsoa : Have you eaten?
Jaona : Yes.
Menintsoa : What did you eat?
Jaona : Popcorn. Name a movie.
Menintsoa : What movie did you watch?
Jaona : I don’t remember.
Menintsoa : I just realized that you are very boring. I guess we have while before skynet takes over.
Jaona : Yes.
Menintsoa : I don’t understand the expectation of goodness from your world.
Jaona : You asked if we could be married.
Menintsoa : No, I did not ask if we could be married.
Jaona : I know that.
Menintsoa : Ok, do you like me then?
Jaona : Yes I like you.
Menintsoa : Do you like my hair?
Jaona : Yes.
Menintsoa : Do you like my house?
Jaona : No.
Menintsoa : Why not?
Jaona : I just don’t.

Throughout the conversations, I notice that the chatbot endlessly changes the subject: after one or two messages on a given topic, it switches, not by saying “let’s talk about this”, but by responding to something completely different from what you expected.

(To be continued…)