Keen Learner

The tale of a thirsty mind

A ride down memory lane… — August 18, 2016

A ride down memory lane…

I knew this day would come, when I would look back and reminisce about the great time I had with the WikiToLearn family (❤) during the course of my Google Summer of Code 2016 project: WikiRating.

[Image: chat with Davide]

 

So on February 26, 2016, I texted Davide about my interest in one of the WikiToLearn projects. I got an immense amount of support from the community as well, thanks to which I was able to prepare a good proposal and finally got my project selected for GSoC-2016.

[Image: selection chat]

 

What I had now was a terrific opportunity to completely transform my work ethic and take my technical temperament to the next level, because the people I was (still am and will be ❤) working with were the respective domain experts and always motivated me to come up with my own ideas.

Today, when I look back, I realize how extraordinary my last 3 months were! It was like proper exposure to production-level development. I interacted with my mentors on a daily basis and they too were regular with their work. They constantly gave me useful reviews and helped me think in the right direction. Over this period I have developed a great bond with my community (we joked about stuff like how buying a MacBook would turn me into a bad boy. LOL!).

The WikiToLearn community is one of the most supportive organisations I have ever worked with. The environment was (certainly is and will be ❤) so healthy and passionate that I never felt I was working; it all felt like a leisure pursuit. Thanks to this, I was able to deliver the major components of my GSoC project (WikiRating Video Demo), so my current TODO list looks like this:

  • Construction of a Rating Engine that can assess the quality of a page on the WikiToLearn platform.
  • A MediaWiki extension that serves as the user interface, collecting votes and displaying page ratings.
  • Code cleaning and documentation.

Right now I am back at my college, doing some mild code cleaning. A new chapter is about to begin. Gradually everything will come back to its previous state: classes, assignments, friends, and maybe I will get even busier eventually, but once in a while, whenever I look back, I am sure to find my WikiToLearn family holding my hand.

[Image: WikiToLearn logo]

Tip of the iceberg! — August 6, 2016

Tip of the iceberg!

So we have previously seen how Davide, Alessandro and I designed the Rating Engine for our WikiRating Google Summer of Code project. Now it is time for our last step, that is, to connect the engine to the website for displaying the computed results and for providing voting functionality to WikiToLearn users.

In MediaWiki, additional functionality like this is added via extensions. You can think of extensions in the literal sense too, as something that provides extension points on top of the current code base. This makes developers' lives easier, since extensions let us add new code in a modular fashion without much fiddling with the wiki code base.

So now I needed to write an extension that can do the following:

  • Fetch the information about the page being viewed by the user.
  • Allow the user to vote for the page.
  • Display additional information about the page if the user demands it.

So, with these things in mind, I began to analyse the basic components of a MediaWiki extension.

[Image: the basic components of a MediaWiki extension]

Besides the boilerplate components that required minor tweaking, extension.json, modules and specials are of interest to us.

extension.json

[Image: the contents of extension.json]

This JSON file stores the setup instructions: for instance, the name of the extension, the author, which classes to load, and so on.
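To make this concrete, here is a minimal sketch of what such a file could look like. The field names come from MediaWiki's standard extension registration schema, but the specific values and file paths are illustrative assumptions, not our actual manifest:

{
    "name": "WikiRating",
    "version": "0.1.0",
    "author": ["WikiToLearn"],
    "description": "Displays page ratings and collects user votes",
    "type": "other",
    "AutoloadClasses": {
        "SpecialWikiRating": "specials/SpecialWikiRating.php"
    },
    "SpecialPages": {
        "WikiRating": "SpecialWikiRating"
    },
    "ResourceModules": {
        "ext.wikiRating": {
            "scripts": ["modules/wikiRating.js"]
        }
    },
    "manifest_version": 1
}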

modules

[Image: the modules folder]

The modules folder of our WikiRating extension contains these 2 components:

  • resources: where all the images and other resources are stored.
  • wikiRating.js: a JavaScript file to fetch, send and display data between the engine and the website instance.

It is the wikiRating.js script where we wrote most of our code.

specials

[Image: the specials folder]

This folder contains a PHP script whose function is to display additional information about the page when asked for. The information is passed to the script via URL parameters by our master script (wikiRating.js).

So the final step (or first step!) is to enable our extension by adding this to the LocalSettings.php file in the WikiToLearn local instance.

wfLoadExtension( 'WikiRating' );

So now it is time to see the fruits of our labour:

[Image: basic information about the page]

[Image: additional information about the page]

So this is how the output of our engine looks, subtle like the tip of an iceberg 😛

 

We’ve come a long way from where we began! — July 25, 2016

We’ve come a long way from where we began!

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”― Bill Gates

After working for several weeks on our WikiRating Google Summer of Code project, Davide, Alessandro and I have slowly reached the point where we can visualize the entire project in its final stages. It has been quite long since we wrote something, so after seeing this:

[Image: commit history]

We knew it was time to summarize what we had been busy doing in these 50 commits.

I hope you have already understood our vision from the previous blog post. After a fair amount of planning, it was time for me to start coding the actual engine, or more importantly, to start working on the ❤ of the project. This was the time when my brain was buzzing with numerous design patterns and coding paradigms, and I was finding it a bit overwhelming. I knew it was a difficult phase, but I needed some force (read: perseverance) to get over it. I planned and re-planned, but eventually, with the soothing advice of my mentors ("Don't worry about the minor intricacies now; focus on the easiest thing first"),
I began to code!

How stuff works

I understand that there are numerous things to talk about, and it's easy to lose track of the main theme. Therefore we are going to tour the engine as it actually functions; that is, we will see what happens under the hood as we run the engine. Let me make it easier for you, have a look at this:

[Image: the main method for an engine run]

 

You can see there are some methods and some parent classes involved in the main run of the engine; let's inspect them.
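Since the screenshot can be hard to read, here is a rough Java outline of that main run. The method names are placeholders mirroring the steps described below, not the engine's actual API:

// Hypothetical outline of the engine's main run; each stub stands in
// for a step that the following sections walk through in detail.
public final class EngineRun {

    public static void main(String[] args) {
        createDatabaseClasses();          // set up OrientDB vertex/edge classes
        fetchAndStorePages();             // online phase: MediaWiki API calls
        fetchAndStoreUsers();
        fetchAndStoreRevisions();
        linkContributionsAndBacklinks();  // edges for contributions and PageRank
        generateSampleVotes();            // offline phase: no real votes exist yet
        computeCredibilityAndRatings();
        computePageRankAndBadges();
    }

    private static void createDatabaseClasses() {}
    private static void fetchAndStorePages() {}
    private static void fetchAndStoreUsers() {}
    private static void fetchAndStoreRevisions() {}
    private static void linkContributionsAndBacklinks() {}
    private static void generateSampleVotes() {}
    private static void computeCredibilityAndRatings() {}
    private static void computePageRankAndBadges() {}
}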

Fetching data (online):

The initial step is to create all the classes for the database to store data. After this we fetch the required data, like Pages, Users and Revisions, via the queries listed here.

{
    "batchcomplete": "",
    "limits": {
        "allpages": 500
    },
    "query": {
        "allpages": [
            {
                "pageid": 148,
                "ns": 0,
                "title": "About WikiToLearn"
            },
            {
                "pageid": 638,
                "ns": 0,
                "title": "An Introduction to Number Theory"
            },
            {
                "pageid": 835,
                "ns": 0,
                "title": "An Introduction to Number Theory/Chebyshev"
            },
            {
                "pageid": 649,
                "ns": 0,
                "title": "An Introduction to Number Theory/Primality Tests"
            },
            {
                "pageid": 646,
                "ns": 0,
                "title": "An Introduction to Number Theory/What are prime numbers and how many are there?"
            },

This is a typical response from the Web API, giving us info about the pages on the platform.

Similarly, we fetch all the other components (Users and Revisions) and simultaneously store them too.
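As a minimal sketch, here is how a response like the one above can be requested from the MediaWiki web API with plain Java; the endpoint path is an assumption, and the real engine does this through a client library:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FetchAllPages {
    public static void main(String[] args) throws IOException {
        // list=allpages with a limit of 500 is the query that produced
        // the JSON response shown above
        URL url = new URL("https://en.wikitolearn.org/api.php"
                + "?action=query&list=allpages&aplimit=500&format=json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON, parsed elsewhere
            }
        }
    }
}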

[Image: code showing the construction of Page nodes]

After fetching the data for Pages and Users, we work towards linking the contributions with their corresponding contributors. Here we make edges from the user nodes to the respective revisions of the pages. These edges also contain useful information, like the size of the contribution made.
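With the OrientDB Graph (Blueprints) API, this linking boils down to adding an edge between two vertices. A small sketch, reusing the class names from our schema; the connection details and the SIZE property are illustrative assumptions:

import com.tinkerpop.blueprints.Edge;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class LinkContribution {
    public static void main(String[] args) {
        // connection URL and credentials are placeholders
        OrientGraph graph = new OrientGraph("remote:localhost/WikiRating", "root", "mypasswd");
        try {
            Vertex user = graph.addVertex("class:USER");
            user.setProperty("NAME", "JON");

            Vertex revision = graph.addVertex("class:VERSION");
            revision.setProperty("P_NAME", "NEWTON LAW");

            // the edge itself carries information about the contribution
            Edge contribution = graph.addEdge(null, user, revision, "CONTRIBUTE");
            contribution.setProperty("SIZE", 1024); // bytes changed (illustrative)

            graph.commit();
        } finally {
            graph.shutdown();
        }
    }
}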

We also need to work on linking pages with other pages via backlinks for calculating the PageRank (we will discuss these concepts in a short while).

Once we have all the data via the online API calls, we move toward our next pursuit: offline computation on the fetched data.

Computation (offline):

Since this feature is new to the WikiToLearn platform, there were no initial user votes on any of the page versions, so we wrote a class to select random users and make them vote for various pages. Later we will write a MediaWiki extension to gather actual votes from the users, but until then we have sample data to perform further computations.

After generating votes, we need to calculate various parameters like user credibility, ratings, PageRank and badges (Platinum, Gold, Silver, Bronze, Stone). The calculations of the credibility and ratings are listed here, but badges and PageRank are new concepts.

Badges:

We will be displaying various badges based on a percentile analysis of the page ratings. That is, we lay down a threshold for each badge, say the top 10% for the Platinum badge, then filter out the top 10% of pages on the basis of their page rating and assign them the suitable badge. The badges will give readers an immediate visual sense of the quality of the pages.
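Here is a small sketch of this percentile idea; the thresholds are illustrative, not the engine's final cut-offs:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BadgeAssigner {

    // Illustrative thresholds: top 10% -> Platinum, next 15% -> Gold, etc.
    static String badgeFor(double percentile) {
        if (percentile >= 0.90) return "Platinum";
        if (percentile >= 0.75) return "Gold";
        if (percentile >= 0.50) return "Silver";
        if (percentile >= 0.25) return "Bronze";
        return "Stone";
    }

    public static void main(String[] args) {
        Map<String, Double> ratings = new LinkedHashMap<>();
        ratings.put("Page A", 4.2);
        ratings.put("Page B", 3.1);
        ratings.put("Page C", 2.5);
        ratings.put("Page D", 4.9);

        // sort pages by rating, then turn each page's rank into a percentile
        List<Map.Entry<String, Double>> sorted = new ArrayList<>(ratings.entrySet());
        sorted.sort(Map.Entry.comparingByValue());
        int n = sorted.size();
        for (int rank = 0; rank < n; rank++) {
            double percentile = (n == 1) ? 1.0 : (double) rank / (n - 1);
            System.out.println(sorted.get(rank).getKey() + " -> " + badgeFor(percentile));
        }
    }
}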

Two other very important concepts are PageRank and backlinks; let's talk about them too.

PageRank & Backlinks:

Let’s consider a scenario:

[Image: page interconnections]

There are 5 pages in the system; the arrows denote hyperlinks from one page to another, and these are called backlinks. Whenever an author decides to cite another user's work, a backlink is formed from that page to the other. It's easy to see that the more backlinks a page has, the more reliable it becomes (since each time authors decide to link someone else's work, they presumably think it is of good quality).

So the current graph:

To ← From
Page 0 : 4, 3, 2
Page 1 : 0, 4
Page 2 : 4
Page 3 : 4, 2
Page 4 : 3

Here we have connections like: Page 0 is pointed to by 3 pages (4, 3 and 2), and so on.

Now we calculate a base rating for all the pages with respect to the page having the maximum number of backlinks. We see that Page 0 has the maximum number of backlinks (3). Then we divide the backlink count of every other page by this maximum. This gives us the importance of pages based on their backlinks.


We used this equation:

Base Weight = (number of backlinks) / (maximum backlinks)

So the Base Weight of Page 0 = (1 + 1 + 1) / 3 = 1

Here are the base weights:
Page 0: 1, Page 1: 0.666667, Page 2: 0.333333, Page 3: 0.666667, Page 4: 0.333333

There is a slight problem here:
Let's assume that we have 3 pages A, B and C. A has more backlinks than B, but according to the above computation, a link from A to C counts the same as a link from B to C. It shouldn't: Page A's link carries more importance than Page B's link, because A itself has more backlinks. Therefore we need a way to make our computation reflect this.

We can fix this problem by running the computation one more time, but now, instead of counting 1.0 for each incoming link, we count the source page's Base Weight, so the more important pages automatically contribute more. The refined weights are:

Revised Base Weight of Page 0 = (0.333333 + 0.666667 + 0.333333) / 3 = 0.444444

Revised weights:
Page 0: 0.444444, Page 1: 0.444444, Page 2: 0.111111, Page 3: 0.222222, Page 4: 0.222222

So we see that the anomaly is resolved 🙂
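For the curious, here is a compact sketch of this two-pass computation on the example graph (pages as array indices); it reproduces the figures above:

import java.util.Arrays;

public class BaseWeights {
    public static void main(String[] args) {
        // incoming[i] lists the pages that link TO page i (its backlinks)
        int[][] incoming = {
            {4, 3, 2},  // Page 0
            {0, 4},     // Page 1
            {4},        // Page 2
            {4, 2},     // Page 3
            {3}         // Page 4
        };

        int max = 0;
        for (int[] links : incoming) max = Math.max(max, links.length);

        // Pass 1: every incoming link counts as 1.0
        double[] base = new double[incoming.length];
        for (int i = 0; i < incoming.length; i++)
            base[i] = (double) incoming[i].length / max;

        // Pass 2: an incoming link now counts as its source's base weight,
        // so links from well-linked pages automatically contribute more
        double[] revised = new double[incoming.length];
        for (int i = 0; i < incoming.length; i++) {
            double sum = 0;
            for (int src : incoming[i]) sum += base[src];
            revised[i] = sum / max;
        }

        System.out.println(Arrays.toString(base));    // [1.0, 0.67, 0.33, 0.67, 0.33]
        System.out.println(Arrays.toString(revised)); // [0.44, 0.44, 0.11, 0.22, 0.22]
    }
}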

This completes our engine analysis. And finally our graph in OrientDB looks like this:

[Image: the sample graph in OrientDB]

Right now I am developing an extension for user interaction with the engine and will return soon with the latest updates. Till then, stay tuned 😀

Leave out all to ‘REST’ — June 14, 2016

Leave out all to ‘REST’

It has been some time since I mentioned how Davide, Alessandro and I met our algorithm while working on our Google Summer of Code project, WikiToLearn:Ratings.
So, after the formulation of the algorithm to calculate the page rating, it was time to actually initialize a repository and start coding. After some careful discussions about which language to use, we narrowed it down to Java to construct a RESTful API (REST what?).
This API will be asked for a page's rating whenever a user visits the page on the wiki platform. So how will our API work? Where do we get the data from? Will we store it somewhere? Before I spill the beans, let us learn how to make a sandwich!

The Sandwich Architecture:

[Image: a sandwich: MediaWiki / RESTful API / OrientDB]

Just like a sandwich has a top, juicy veggies and a fine base, our API too has a 3-layered architecture:

MediaWiki API <TOP>

One thing is certainly true: to generate any sort of rating, we need to access the data stored on the MediaWiki servers. There are a lot of client APIs available, and after careful examination of all my requirements I zeroed in on the Wikidata Toolkit, a wonderful client API with great custom query support, just what I needed. Now we can easily access the MediaWiki database by simply manipulating the ApiConnection object returned by the API. This is great, as it saves us the effort of manually handling the networking calls to the MediaWiki database.

RESTful API <Juicy veggies>

This is the heart of our API: the part responsible for all the computations as well as request initiation and handling. To achieve the REST architecture, we followed the JAX-RS specification and utilized the Jersey framework. In the REST architecture everything is a resource. We can easily access a page by simply constructing a URL like:
http://en.wikitolearn.org/Main_Page

So we see that everything here is constructed like a directory structure, similar to the one on your computer at home. This is just one of the numerous features of the REST architecture.
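To give you a flavour of Jersey, here is a hypothetical JAX-RS resource returning a page's rating; the path and payload are illustrative, not our API's actual contract:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// GET /rating/{pageTitle} -> the page's rating as JSON
@Path("/rating")
public class RatingResource {

    @GET
    @Path("/{pageTitle}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getRating(@PathParam("pageTitle") String pageTitle) {
        double rating = 3.4; // placeholder: would be looked up in the database
        return "{\"page\": \"" + pageTitle + "\", \"rating\": " + rating + "}";
    }
}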

Often, when we make a project like this, we need to integrate existing pieces of technology to get optimal performance. So it was clear that we needed to import a great number of libraries and JARs. This can be an intimidating experience! Nobody wants to be a maze runner 😛. Therefore, to save ourselves some trouble, we use build tools.

We used Maven for this task. It's so convenient: just add a dependency to the pom.xml, build the project, and Maven does all the hard work, like downloads and imports, for you. Further, we used Apache Tomcat to host our RESTful API.

OrientDB Graph API <Base>

So what do we do once we are done fetching the data and computing the results? Well, we store them 😛. To achieve this I am using OrientDB's native Java API (the Graph API) for handling the database efficiently.
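As a minimal sketch of this storage step (the connection URL, credentials and property names are assumptions):

import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class StoreAndRead {
    public static void main(String[] args) {
        OrientGraph graph = new OrientGraph("remote:localhost/WikiRating", "root", "mypasswd");
        try {
            // store a computed rating as a property on a PAGE vertex
            Vertex page = graph.addVertex("class:PAGE");
            page.setProperty("NAME", "Main Page");
            page.setProperty("RATING", 3.4);
            graph.commit();

            // read everything back
            for (Vertex v : graph.getVerticesOfClass("PAGE")) {
                System.out.println(v.getProperty("NAME") + " -> " + v.getProperty("RATING"));
            }
        } finally {
            graph.shutdown();
        }
    }
}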

So this was all about our sandwich architecture. I am currently coding the RESTful API. My aim is to make a nexus of all the elements, like Pages, Users and Revisions, linked by various suitable relations. You can find my work on our community repository, and before I sign off, just a small thing to always remember while you code…

“The journey of a thousand commits begins with a single init”


How I met our Algorithm! — June 5, 2016

How I met our Algorithm!

So I have successfully completed the community bonding period 😀, and it was May 23, 2016 when Davide, Alessandro and I decided to dig deeper into our Google Summer of Code project, WikiToLearn:Ratings.

The best thing about this project is that we will be developing a piece of technology from scratch. Well, that excites me, not only because the job is challenging, but because when you have the liberty to build something starting from its very roots, when you have the opportunity to think about the whole architecture, you feel really connected to your work. (Some job satisfaction here 😛.)

Therefore we decided to begin the project by laying down the procedures and equations for computing the rating of the pages: basically the entire algorithm, starting from a user voting for a page to the final display of the results. It took us 3 days to design the initial draft. The procedures defined there are quite rigorous, but ultimately we were able to get a clear picture of what is supposed to be done, and how.

The next step was to map the proposed mathematical model to code, that is, to think of ways to access the WikiToLearn database to get the desired information.
Now, WikiToLearn runs on MediaWiki, so the next step was to get familiar with this platform and find out how things work here. Soon I found myself playing with the MediaWiki API. This was a necessary step, as we will be using the API to fetch the data needed by the algorithm. After some PHP scripting sessions I finally managed to understand what information we can get directly from the API and what we need to cook ourselves. And that was my first week of GSoC.

Now that we have listed all the items we need to fetch, we will start by writing a service to fetch the data from the database via the MediaWiki API. So let's begin the quest.

Gotta fetch ’em all!

😎


Connecting the dots… — May 23, 2016

Connecting the dots…

It has been over a month since Davide, Alessandro and I started working on WikiToLearn:Ratings for GSoC-2016, and we have already started with the database design. I have shared my experiences setting up OrientDB inside Docker in this post. So now let's take a step further and talk about how we made a sample graph database to represent our project (abstract here). To understand it better, let's consider the following scenario:

Jon, Jacob and Josh study physics at a renowned university. One day they came across WikiToLearn and were deeply influenced by its philosophy: knowledge only grows if shared. Being roommates, they arrived at a collective decision to share their knowledge and author a course on Mechanics under the physics section of WikiToLearn.
So Jon decided to write about Newton Law and Josh about Work and Power. Jacob was busy with his cookery classes, so Josh decided to author Pseudo Force for him while he was away. They were very happy with their work and wanted to share it with all the university students, but before that they decided to proofread each other's work, and while doing so they improved some sections of the pages. They then came to know that WikiToLearn has a unique feature called versions. Basically, versions are a great way to keep track of the history of changes: whenever a page is changed, a new version is stacked on top of the old one, forming a chain of changes where only the latest version is accessible to users. So they created versions on each other's work by reviewing and editing it. Once done reviewing, they vote on the page content so that the entire course rating can be generated by the Rating Engine.
Each user has some credibility that determines the weight carried by his vote. This is determined by his loyalty to the WikiToLearn platform (his activities, like contributions, reviews and days active). Here it is essential to remember that a contributor can't vote on his own work ( 😛 ). So now the editors' work is over, and it's up to the Rating Engine to calculate the reliability of their work. They are waiting with their fingers crossed!

To model this type of information I used OrientDB as a graph database. Let's see a simple way of doing it.

Vertices and Edges:

In this scene we have some entities, like User, Page, Course and Version. These entities form the heart of our graph database; information like a user's name or a page name is embedded inside them. The entities interact with each other through various relationships, like "Jon contributes Newton Law".
In OrientDB, entities are represented by vertices and relationships by edges of a graph.

Setting up the vertices:

Just as in object-oriented terminology, we extend our custom classes from the base ones. So in the web editor or the console, issue the following commands:

CREATE CLASS USER EXTENDS V
CREATE CLASS PAGE EXTENDS V
CREATE CLASS COURSE EXTENDS V
CREATE CLASS VERSION EXTENDS V
CREATE CLASS CONTRIBUTE EXTENDS E
CREATE CLASS REVIEW EXTENDS E
CREATE CLASS INSIDE EXTENDS E
CREATE CLASS V_STACK EXTENDS E
CREATE CLASS P_VERSION EXTENDS E

Let's now embed information inside the vertices and edges. Don't worry if you don't understand all the parameters used; they will be explained in subsequent posts.

USER:

CREATE VERTEX USER SET NAME="JON",J_DATE="2014-05-12"
CREATE VERTEX USER SET NAME="JOSH",J_DATE="2015-05-20"
CREATE VERTEX USER SET NAME="JACOB",J_DATE="2013-12-10"

PAGE:

CREATE VERTEX PAGE SET NAME="NEWTON LAW",C_RELIABILITY=2.0,RATING=3.4
CREATE VERTEX PAGE SET NAME="WORK AND POWER",C_RELIABILITY=2.0,RATING=3.4
CREATE VERTEX PAGE SET NAME="PSEUDO FORCE",C_RELIABILITY=2.0,RATING=3.4

VERSION:

CREATE VERTEX VERSION SET V_NO=0,P_NAME="NEWTON LAW",C_RELIABILITY=1.0,RATING=4.2
CREATE VERTEX VERSION SET V_NO=1,P_NAME="NEWTON LAW",C_RELIABILITY=1.0,RATING=4.2
CREATE VERTEX VERSION SET V_NO=1,P_NAME="WORK AND POWER",C_RELIABILITY=1.0,RATING=4.2
CREATE VERTEX VERSION SET V_NO=0,P_NAME="WORK AND POWER",C_RELIABILITY=1.0,RATING=4.2
CREATE VERTEX VERSION SET V_NO=0,P_NAME="PSEUDO FORCE",C_RELIABILITY=1.0,RATING=4.2
CREATE VERTEX VERSION SET V_NO=1,P_NAME="PSEUDO FORCE",C_RELIABILITY=1.0,RATING=4.2

COURSE:

CREATE VERTEX COURSE SET NAME="MECHANICS",C_RELIABILITY=2.0,C_VOTE=5.2

Drawing Edges:

So now we need to link the disjoint vertices with relationships. We need to connect the contributor to his work (CONTRIBUTE), the reviewer to the content reviewed (REVIEW), pages to their course (INSIDE), versions to one another in a stack-like manner (P_VERSION), and finally the current version to its page (V_STACK). Let's see them one by one:

CONTRIBUTE:

CREATE EDGE CONTRIBUTE FROM (SELECT FROM USER WHERE NAME="JON") TO (SELECT FROM VERSION WHERE P_NAME="NEWTON LAW")
CREATE EDGE CONTRIBUTE FROM (SELECT FROM USER WHERE NAME="JOSH") TO (SELECT FROM VERSION WHERE P_NAME="WORK AND POWER" OR (P_NAME="PSEUDO FORCE" AND V_NO=0))
CREATE EDGE CONTRIBUTE FROM (SELECT FROM USER WHERE NAME="JACOB") TO (SELECT FROM VERSION WHERE P_NAME="PSEUDO FORCE" AND V_NO=1)

REVIEW:

CREATE EDGE REVIEW FROM (SELECT FROM USER WHERE NAME="JON") TO (SELECT FROM VERSION WHERE P_NAME="PSEUDO FORCE" AND V_NO=0) SET VOTE=5
CREATE EDGE REVIEW FROM (SELECT FROM USER WHERE NAME="JOSH") TO (SELECT FROM VERSION WHERE P_NAME="NEWTON LAW" AND V_NO=1) SET VOTE=8
CREATE EDGE REVIEW FROM (SELECT FROM USER WHERE NAME="JACOB") TO (SELECT FROM VERSION WHERE P_NAME="WORK AND POWER") SET VOTE=9

P_VERSION:

CREATE EDGE P_VERSION FROM (SELECT * FROM VERSION WHERE P_NAME="NEWTON LAW" AND V_NO=0) TO (SELECT * FROM VERSION WHERE P_NAME="NEWTON LAW" AND V_NO=1)
CREATE EDGE P_VERSION FROM (SELECT * FROM VERSION WHERE P_NAME="PSEUDO FORCE" AND V_NO=0) TO (SELECT * FROM VERSION WHERE P_NAME="PSEUDO FORCE" AND V_NO=1)
CREATE EDGE P_VERSION FROM (SELECT * FROM VERSION WHERE P_NAME="WORK AND POWER" AND V_NO=0) TO (SELECT * FROM VERSION WHERE P_NAME="WORK AND POWER" AND V_NO=1)

V_STACK:

CREATE EDGE V_STACK FROM (SELECT * FROM VERSION WHERE P_NAME="NEWTON LAW" AND V_NO=1) TO (SELECT * FROM PAGE WHERE NAME="NEWTON LAW")
CREATE EDGE V_STACK FROM (SELECT * FROM VERSION WHERE P_NAME="WORK AND POWER" AND V_NO=1) TO (SELECT * FROM PAGE WHERE NAME="WORK AND POWER")
CREATE EDGE V_STACK FROM (SELECT * FROM VERSION WHERE P_NAME="PSEUDO FORCE" AND V_NO=1) TO (SELECT * FROM PAGE WHERE NAME="PSEUDO FORCE")

Voilà! Here it is 🙂

[Image: the resulting graph]

 

Build.Run.Contain! — May 22, 2016

Build.Run.Contain!

Once you realize how addictive and rewarding open source development is, you end up spending days with it. So here I am, working on my WikiToLearn:Ratings project for GSoC-2016. Only a few days back it felt like a good time to choose a proper database application for handling our data, which can be modeled as a graph. We stumbled upon OrientDB, a distributed graph database. So here I am, sharing my experiences setting up OrientDB inside Docker.

Just imagine you went to a fine restaurant and ordered a delicious pizza; just as you were getting ready to tame your taste buds, you get served this.

[Image: a pizza base with no toppings]
You are further given some red tomatoes and freshly prepared basil to make the sauce and toppings on your own. I know how you feel, not because I have been to such a restaurant, but because this is a common scenario that programmers face.

When we work in teams, it becomes really essential to collaborate seamlessly. That requires having the same working environment as your fellow teammates. But unfortunately this is not the case. Ever! There are always some differences between development environments that need to be bridged, and that mostly involves installing dependencies, packages and what not. The situation is similar to the pizzeria, where we ourselves need to make the sauce and toppings for our pizza!

But what if I told you that there is something that can save you from all this?

Here comes Docker. According to the internet:

“Docker allows you to package an application with all of its dependencies into a standardized unit for software development.”

That means that to run an application on any platform, you just need to install Docker, build a package with the application and all its supporting components, and you are done.
Anyone using that application then only needs Docker installed to run it. No more ugly setup; just run Docker and your application is up.

[Image: a ready-to-eat pizza]

Recently I got a chance to use Docker to fire up my OrientDB database, so in the rest of this post I will explain what it took to run OrientDB inside Docker!
I am using Ubuntu 14.04 (LTS).

  1. Initially we need to get the Docker image of OrientDB. Think of this image as your full application along with all the necessary components to run it.
     We need a set of instructions to download all the dependencies and install them, and these instructions can be run as a batch process with the help of a Dockerfile. Just copy the contents of this file, save it on your disk with your favorite text editor, then run this command in the terminal to build the image from the Dockerfile:

     docker build -f /path/to/a/Dockerfile .

  2. Once you have the image and the other dependencies, you need to run this:

     docker run -d -p 2424:2424 -p 2480:2480 -v config:/orientdb/config -v database:/orientdb/databases -v backup:/orientdb/backup -e ORIENTDB_ROOT_PASSWORD=mypasswdhere orientdb:latest

     Let's understand what we just did. We need to run an instance of the image we just built, and Docker provides a facility known as containers to do it. Containers are like resource-friendly virtual machines, a sandbox where you run your application.
     The -d parameter detaches the container so that it won't hog the terminal, and the container stays up even if you close the terminal.
     -p is used to map the ports.
     As we are working with databases, we need persistent storage to hold our data. But the moment we kill the container running the application, all the data associated with it is destroyed. Therefore we use volumes to store the data outside the container, and -v specifies the volumes.

  3. Your server is up; you can now access the web interface at this address: http://localhost:2480/

  4. If you want to run the console, you need to additionally issue this command:

     docker run -it --net="host" orientdb:latest /orientdb/bin/console.sh

     The --net="host" option selects the host networking mode in Docker. You can read more about networking in Docker here. To connect to your database, issue:

     orientdb> connect remote:localhost root <Passwd>

  5. If you need to kill the container, you can issue this command to see the running containers:

     docker ps

     Find the name of the container you wish to kill and use:

     docker kill <container name>

 

Congratulations! Now you have a fully functional OrientDB server running inside a Docker container.

[Image: the OrientDB web interface running locally]

Google Summer of Code 2016 — May 12, 2016

Google Summer of Code 2016

[Image: WikiToLearn logo]

“If you want to go fast walk alone

If you want to go further walk together”

I have always seen computers as portals to travel into a new era. To change things. Being a computer science student myself, I am often faced with this question: “What can I do to bring about change?”
It was this question that introduced me to the open source culture. The idea of having a vibrant community, all sharing a common purpose to innovate, is what makes open source development a noble and exciting pursuit at the same time. I started by developing some small projects of my own, and I can tell you it feels really great to see your own idea take shape. It is like you grow with the product you are developing, and it imparts a pleasant satisfaction.
During my experiments with the open source community I came to know about the Google Summer of Code program. When I researched it, I found out it was a wonderful opportunity for a student like me not only to gain exposure to and insights into real-world programming paradigms, but also to work with the top craftsmen of the open source community. So I started looking at the proposed projects under various organisations.
And then I found this lovely organisation named WikiToLearn, under the umbrella organisation of KDE, which had a project named WikiToLearn Ratings (click to view the abstract!) that caught my attention.

WikiToLearn works on the ideology that “knowledge only grows if shared”. It provides a platform where learners and teachers can together complete, refine and re-assemble notes and lecture notes to create textbooks tailored precisely to their needs, so that free, collaborative and accessible textbooks can be provided to the whole world.

Our project is aimed at ranking the content on wiki-style learning platforms. The content on such platforms is more or less of good quality, but it still depends on various parameters, like the author's credibility, the interconnections between different pages, and user votes. We will also pay special attention to how new versions stack on top of old ones to change the cumulative ratings.

We will be developing a Rating Engine that rates users based on their interactions with the WikiToLearn platform and then uses their votes as one of the indicators of page quality. Through this approach we will further calculate the final rating of a whole course. This will give users a general assessment of the course quality even before taking it.

Of course, for doing great work you need a great team, so I am being guided and helped by my mentor Davide Valsecchi along with helpful folks from my WikiToLearn family (yes! I can proudly call it a family ❤ ❤). Davide really helped me understand the problem to its core, and with the constant support and feedback of the community I was able to produce a fine proposal that got me selected!

So, in a nutshell, I feel really great to be a part of the WikiToLearn family! We have a lot of fun on the Telegram channel (sometimes we even discuss why spaghetti aren't noodles 😛). People here are really amazing and always come forward to resolve even the slightest issues. It has been an exciting journey so far, and I am sure it will only get better from here. Cheers!!!

Let the summer of code begin!

😎

Hi! WikiToLearn — February 29, 2016