16 September 2014


#4 CMU Sphinx

Hello guys!

We are getting close to the end of the summer, and I am almost done with my work for CMU Sphinx.


Since I last wrote here, my main activities have been:

- memory optimizations;

- testing the code;

- preparing the code for integration.


My code has already been integrated and now I am working on the finishing touches.


Thank you for a great summer!

Bogdan Constantin Petcu

by bogdanpetcu on 16 September 2014 11:41 AM

29 August 2014


Wyliodrin #3

Hello, everyone! :)

My RSoC experience at Wyliodrin is getting better every day. We have just entered the last month of the program, and I am glad to share my latest accomplishments with you.

In my last post I told you how I took care of the Analog I/O part, what technologies I used and the difficulties that popped up.

Since then, I first had to do some code refactoring, and I redesigned the table that keeps information about the pins on the UDOO. I also retested all the features available up to that point.

Secondly, I focused on the Servo part. Servo allows users to control their servomotors. Because of the UDOO's special two-processor architecture, with an Arduino-compatible processor and an i.MX6, the existing Servo library in libwyliodrin is not compatible, so I used the Firmata protocol again to make Servo work. I implemented two functions, servo_attach() and servo_write().

I also took care of the I2C serial bus and coded all the functions that will allow data to be sent and received over I2C.

You can follow my entire work on Github.

Stay tuned for my next post!


by dinuand on 29 August 2014 04:44 PM

27 August 2014


#3 CMU Sphinx

Hello all!

Time is passing and we are closer and closer to the end of the “Summer of code”.

Here is my progress since I previously posted:

- I implemented a module that clusters the Gaussians from a means file into a given number of classes. The clustering is based on the Euclidean distance between them. The purpose of clustering is to optimize the adaptation process.

- I also implemented a module that adapts the acoustic model using the clustering tool I have just described. The main idea of this type of adaptation is the following: for each class (cluster) we collect counts separately and generate a separate transform. This way, a more specific transform is estimated for each Gaussian.
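The clustering step can be sketched in a few lines of Python (the real implementation is Java inside Sphinx4; the toy 2-D means and the class count below are made up for illustration):

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(means, k, iterations=20, seed=0):
    """Cluster Gaussian mean vectors into k classes by Euclidean distance."""
    rng = random.Random(seed)
    centroids = list(rng.sample(means, k))
    for _ in range(iterations):
        # assignment step: each mean goes to its nearest centroid
        clusters = [[] for _ in range(k)]
        for m in means:
            nearest = min(range(k), key=lambda i: euclidean(m, centroids[i]))
            clusters[nearest].append(m)
        # update step: move each centroid to the average of its cluster
        for i, cluster in enumerate(clusters):
            if cluster:
                dim = len(cluster[0])
                centroids[i] = tuple(
                    sum(v[d] for v in cluster) / len(cluster) for d in range(dim)
                )
    return clusters

# toy example: four 2-D "means" forming two well-separated groups
means = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
clusters = kmeans(means, 2)
```

Each resulting class then gets its own transform, estimated only from the counts collected for the Gaussians in that class.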

You can see my work at:

- https://github.com/bogdanpetcu/sphinx4/tree/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/adaptation – the adaptation package, implemented from scratch;

- https://github.com/bogdanpetcu/sphinx4/tree/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/adaptation/clustered – the clustered adaptation package, implemented from scratch;

- https://github.com/bogdanpetcu/sphinx4/commits/master – all of my commits.


Enjoy the rest of your summer!

Bogdan Constantin Petcu


by bogdanpetcu on 27 August 2014 07:53 PM

19 August 2014


Wyliodrin #2

Hello again!

Since my last post I took a short holiday and detached a little bit from work. So now I am buckling down to go the extra mile.

I have completed the PWM and Analog I/O functions. Now I am working on the I2C and SPI communication. The next step after the serial communication is to extend the same functions for other programming languages. I am very satisfied with my progress so far :D

Th-th-th-that’s all folks!


by Razvan Madalin MATEI on 19 August 2014 10:50 AM

11 August 2014


Wyliodrin #2

Greetings from IP Workshop and hello again!

This is my second post on the blog since RSoC 2014 started and I am very excited about how things are going. Matei and I did some very interesting work during this time and learned a lot about our project, and I think the way we approach new challenges shows that.

Over the past weeks I managed to deal with the most challenging task so far – coding the Analog I/O functions. There are two processors on the UDOO board: a Freescale i.MX6 and an Atmel SAM3X. The problem was that the user has access only to the i.MX6 processor, and the Analog I/O cannot be controlled from there. Therefore, I spent a long time reading up on this topic and figured out how the processors can communicate with each other. I used Firmata over the serial port to make them work. [1]

I have also implemented the Time part and some of the Advanced I/O functions that allow you to work with a shift register. I did a few tests and some code refactoring, too.

I am looking forward to successfully completing the wiring library coding part. You can find the wiring library reference here: [2].

Right now I am participating in the IP Workshop [3] summer school in Tirgu Mures, organized by my mentors, and I am making the most of it, trying to learn as much as I can. I am attending the Internet of Things course. It is a good opportunity to test all the features I have implemented on the UDOO board so far and to ask about anything that is unclear.

Stay tuned for my next post!

[1]: http://www.firmata.org/wiki/Main_Page

[2]: http://arduino.cc/en/Reference/HomePage

[3]: http://www.ipworkshop.ro/ 



by dinuand on 11 August 2014 03:13 PM

07 August 2014


#2 CMU Sphinx

Hello again!

It’s been another two weeks of “coding for decoding” for me.

What did I manage to do since my last post?

Things look really good so far; I'm making constant progress. A very important thing I managed to implement is collecting adaptation data from a Result object. I also implemented the part that uses this data to create the adaptation file.

In the first weeks I saw that the algorithm which estimates the transform was working well, but it was still reading counts from a file generated with sphinxtrain. The next big step was to collect the counts from the result of the first decoding pass, and this is the task that has taken me the longest so far.

I also implemented a component that creates a new means file based on the adaptation data. This is equivalent to having a new, adapted acoustic model that will decode better when used with audio files containing speech from the speakers the adaptation was made for.


You can see my work at:

- https://github.com/bogdanpetcu/sphinx4/tree/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/adaptation – the adaptation package, implemented from scratch;

- https://github.com/bogdanpetcu/sphinx4/commits/master – all of my commits.

Have a great week!

Bogdan Petcu

by bogdanpetcu on 07 August 2014 01:24 PM

30 July 2014


#1 OpenSIPS


My name is Victor Ciurel and this summer I will be working on OpenSIPS. More precisely, I will implement a module that will allow OpenSIPS to communicate with SMPP (Short Message Peer-to-Peer) servers.

OpenSIPS [0] is an open source SIP proxy/server for voice, video, IM, presence and any other SIP extensions. The module I will implement will allow a SIP device and an SMPP device to communicate through messages.

With the help of my mentor, Razvan Crainea, I established the flows for the communication from SIP to SMPP and vice versa. We also chose and tested a SIP client (linphone [1]) and an SMPP library (C Open SMPP v3.4 [2]), which I will use to represent the SMPP messages.

Razvan suggested that I get familiar with module implementation and the structures used in OpenSIPS by watching a webinar [3] and implementing my very own module, which printed parameters given in the OpenSIPS configuration file. Having finished this print module, I started implementing the actual module that will be used for SIP/SMPP communication. I am now working on the SMPP -> SIP translation. So far, I have implemented a bogus SMPP server and connected my OpenSIPS module to it.

In the following week I will finish reading up on the SIP and SMPP message structures and continue working on the SMPP -> SIP translation.

See you next time.

[0] http://www.opensips.org/
[1] http://www.linphone.org/
[2] http://sourceforge.net/projects/c-open-smpp-34/
[3] https://www.youtube.com/watch?v=oVPdqMgN7l0

by victor-ciurel on 30 July 2014 11:47 AM

27 July 2014


CMU Sphinx #1


My name is Georgiana Chelu and this summer I am working on CMU Sphinx.

CMU Sphinx is a great open source toolkit for speech recognition. The idea of controlling a device with your own voice is pretty amazing! I was very excited to find out that I would work on this project.

The process behind voice recognition is quite complex and you need time to get familiar with lots of new concepts. In the first two weeks we had to read the documentation and understand the code, little by little.

An important step before starting to code is to create a setup where you can test all your modifications. It makes writing code easier and prevents most of the bugs. I've created a setup that gets us accuracy numbers, the adaptation matrix and other important information. We work with a lot of data, especially sound recordings, so I wrote some bash scripts that make the setup easier to use.

Now, I am ready to move to the next step: writing the actual code of the new feature!

by gchelu on 27 July 2014 08:23 PM

20 July 2014


Wyliodrin #1


This is Razvan Madalin MATEI. It has been five weeks now since I started coding for Wyliodrin, so it seems like it's time for me to write down a short review.

The most important thing that has happened since I got involved in this project is the nice collaboration between us, also known as the interns (Andrei and me), and the mentors (Ioana and Alex). Andrei and I always consult each other before starting something, and Ioana and Alex are always there for us when we are in trouble.

The second most important thing is that I really learned a _lot_ of things this summer. I am responsible for adapting the wiring library for the Beaglebone Black. As an outsider to the embedded world, I had a pretty harsh time configuring this board, and I am pretty proud that I have not burnt even a single LED. Yet.

I also did some coding. So far I have implemented the Digital I/O and Time functions. Now I am working on the PWM stuff and the Analog I/O. I tried to adapt libmraa to work on the Beaglebone, but the pins are configured and multiplexed differently, the pin header tables are different, and the kernel offers different facilities. So I took courage and started the implementation on top of sysfs.

Since I started working on the Wyliodrin project I have kept a task-oriented journal and a work-stats spreadsheet. Analysing these documents, I found out that I am most productive on Thursdays. I advise every intern to do the same, as it lets both interns and mentors keep track of the work.

I also took the initiative and started the Wyliodrin Coding Style Convention. Andrei and I are constantly updating this document with guidelines for a homogeneous library. This is our legacy for future interns and coders on libwyliodrin.

Th-th-th-that’s all folks!

by Razvan Madalin MATEI on 20 July 2014 08:08 PM

#1 CMU Sphinx


My name is Bogdan Petcu and I am working within this year’s RSoC program at CMU Sphinx.

CMU Sphinx is a toolkit used for building applications that use speech recognition. Our aim for this summer is to implement a module in Sphinx4 that adapts the data used for decoding, so that the recognition process has better results.

For decoding with Sphinx4 you could use a general acoustic model (e.g. for English), but if you want to decode audio files that contain speech from non-native speakers, or if the recording environment has background noise, you would want to adapt this general acoustic model for those particular speakers so that the decoding process is more precise. Currently, adapting an acoustic model requires the sphinxtrain tool provided by CMU Sphinx, and using sphinxtrain requires manually preparing some files (recordings, their transcriptions, etc.).

Our aim is to make Sphinx4 adapt the acoustic model by itself, using information from the first decoding pass, and then re-decode with the adapted model, improving the recognition results.

So far I have implemented a component that collects adaptation data and another component that, based on the collected adaptation data, builds a specific file containing the transformation that will be applied to the acoustic model in order to adapt it.


Github repository: https://github.com/bogdanpetcu/sphinx4

by bogdanpetcu on 20 July 2014 07:54 PM

Wyliodrin #1


My name is Andrei Dinu and this summer I am working at Wyliodrin, a service that allows passionate people to program their embedded devices remotely, using a browser or visual programming. [1]

So far, Wyliodrin supports only the RaspberryPi and Arduino Galileo boards. My main goal these months is to extend libwyliodrin so that it is functional on the UDOO board. I would also like to develop some new features for the RaspberryPi board.

I spent the first two weeks heavily documenting myself on the RaspberryPi, the UDOO and a professional tool designed to build, test and package software. I knew almost nothing about embedded systems, but I was enthusiastic. I installed all the required libraries, set up both boards, tried to understand the code and found a few bugs that I managed to solve.

At the beginning of the last two weeks, I made a script that indicates which version of the RPi you have and tried to implement some new functions. I have left the RPi aside for the moment and am now taking care of the UDOO. I designed the pin table associated with the board and implemented almost all of the GPIO configuration functions. I tested them, too.

I am currently working on the wiring library [2]. I coded the Digital I/O part. The next step is the Analog I/O part, which is different from other boards and a little bit tricky. You can follow my work on GitHub, on the udoo branch. [3]

Stay tuned for the next post!

[1]: www.wyliodrin.com

[2]: ArduinoReference

[3]: https://github.com/Wyliodrin/libwyliodrin/tree/udoo


by dinuand on 20 July 2014 01:14 PM

15 July 2014


#1 Vmchecker

VMChecker will have a new look based on Meteor.js

So far I have reimplemented the vmchecker interface up to parity with the previous one and added the ability to download your last submission from the server (it still needs polishing).

Change Log:

  • Reimplemented the site using Meteor.js and Node.js
  • Elements of the site are now rearranged
  • The last submission can now be downloaded
  • Fixed some bugs

by crushack on 15 July 2014 11:37 AM

18 October 2013

ROSEdu Tech Blog

Facebook Hackathon Live Blogging


Ladies and gentlemen, fast hackers and coder perfectionists, web developers and mobile app creators, we present to you the first edition of the Facebook hackathon in Romania. It is organized by your favorite open-source community, ROSEdu, and the volunteers have been busy all morning preparing the workspace for the 15 participating teams. We have pizza, beer and a mountain of bean bags for people who move fast and break things.


People have started their IDEs (or text editors, for the more hardcore) and begun installing their gems (Ruby guy here, sorry). After a quick intro from the organizers about the rules, the Facebook engineers presented their skills and their expectations: it's fun to code, but it's awesome to ship. So happy shipping, hackers!


A brief pause and all the keyboard presses have stopped. The Facebook representatives have given out a random prize! One Facebook T-shirt. Congratulations to Andrei Duma! People are now back to coding and making their ideas come to life: done is better than perfect.

First team

Only 4 hours into the event! We have interviewed some of the participants and they're coding, designing and implementing the foundations of their applications! The first team we interviewed is 3_awesome_guys_and_a_llama. These students from the University "Politehnica" of Bucharest are writing an Event Planner. From what they told us, it's an application which tries to help people organize events for themselves and their friends for their night out. It's more focused on location than on time, so it can be a planned drink-up or dance-off. They are integrating it with the Facebook Places API and would like bars, clubs and restaurants to use their app so people can make reservations. As their technology stack, they have Python on top of Google App Engine. One of the devs said he learned about it from a Udacity course, which I recommend to you. They also plan to use Twitter's Bootstrap library because they do not have much frontend experience.

Be green, recycle

You are a human, walking down the street, and you see a big pile of garbage. It's a common scenario here in Romania. But what if you had an app for cleaning it up? That's what sudoRecycle is trying to do with their Android idea. You see the junk, take a photo, tag it with the GPS location and send it to their servers. Using their backend written in PHP, they will send teams of robots to clean the area. Because we human beings are really lazy, they plan to use the Facebook API for gamification, so you could level up in cleaning the world.

Explore the underground

Anyone who hasn't lived in Bucharest has endured not knowing how to move around it. But dark_side_of_the_moon is going to remedy this with their offline mobile subway connection app. You want to get from X to Y using the shortest route. It also wants to tell you what ground-level public transportation is available and what you can visit. Furthermore, they want it to tell your friends where you've been after you use its check-in functionality at your destination. Under the hood, it's using the Android 4.0+ API, and they want to integrate with the Facebook API to see the places your friends have visited. The coolest feature they want to code will tell you when the next tube will arrive.


Did you know that in the year 2013, if you apply to MIT, you must send the papers by fax or postal mail? And after you send them, a person will manually go through them and tell you that the papers have arrived? Or if you get into a university you must write 6 papers with about 60% redundant information? That’s what GRails, the only team made entirely of girls, is trying to solve, fighting bureaucracy with Rails 4. Now with 100% less paper involved!


Everybody knows that Romania has some of the best hiking routes, beautiful views and mysterious mountains. And who doesn’t want to know what trips you can make in the wild nature? Well, you can now check out a map and see what is available for adventurers! The map also shows you elevation, so you know if it’s a long road and also an abrupt road. A Django platform by saltaretii should be enough to support this paradise for nature’s explorers!

I want to ride my bicycle, I want to ride my bike

2 wheels, foot power and long-distance travelling made easy! These two guys are building the awesome tool that makes bikers' dream app come true! Using complex algorithms, they want to give bikers many possible routes from one place to another. You can choose your own type of road, either steep and short or longer and less steep. The point? You can choose the kind of road you want and the one that fits you! If that is not enough, these two guys are doing this client-side with ClojureScript… yeah, it's the new functional kid in town which tries to solve the event-driven callback hell. FlatRide on, people!

Jackson Gabbard

From an English major in Tennessee, to the 300th Facebook employee, to the 4th one to move into the new London office. He works on developer tools for the engineers and oversees some of the most important components, like Tasks, which devs open daily to get their job done. He is a self-taught hacker and he had an enlightenment moment about the power of programming the first time he used the array structure.

He was really communicative and willing to share his opinions about the event, mentioning that he's amazed by the students' focus. 'Transportation', 'Finding things' and 'Group organization' are recurrent themes. He said some of his coworkers are Romanian and he thinks Romania is a land that produces lots of engineers. Proud to be full-time hackers around here!

We also asked him about the Bootcamp in London, which is about learning to code. And guess what? Even executives go through this preparation to get into Facebook. The engineering team has lots of fun hacking during that period of education. It teaches you how to love the company, and you get to learn the ropes while communicating and interacting with other like-minded people.

Finally, he has participated in lockdowns every year. These are periods when teams gather in a room, stay there for several days (usually 30) and ship a big feature. Pretty hardcore, but that's life at Facebook.

18 October 2013 09:00 PM

25 September 2013


#6 World of USO – Code Refactoring and Social Login


I’ve been working on some improvements and a couple of bugs since the last blog post.

I didn't like the fact that Google was not working as a social login provider. It was raising a weird 'Permission Denied' exception after the user clicked the 'Accept' button to grant permission. Andrei Preda, my colleague from the WHC project, pointed out to me that I should look over the app settings in the Google API Console. Indeed, the problem was that none of the available Google APIs were enabled. All I had to do was mark the Google+ API as active.

After fixing that issue, I came across an unpleasant bug. The player did not receive the initial points and gold if he used the social login feature. This was caused by the fact that the 'user_login' view, which handles the usual login mechanism, sent an 'addActivity' signal, and the receiver connected to that signal was responsible for granting the points and gold. However, the 'user_login' view wasn't called when using social login, and therefore no signal was sent. I decided to remove the 'addActivity' signal and use Django's built-in 'user_logged_in' signal, since both mechanisms send it after a successful login.
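Stripped of the Django specifics, the fix follows the signal/receiver pattern: hang the bonus-granting receiver on the one signal that both login paths already emit. A toy dispatcher makes the idea concrete (the `Signal` class and `grant_initial_points` are illustrative stand-ins, not the actual World of USO code):

```python
class Signal:
    """Minimal stand-in for Django's signal dispatcher."""
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def send(self, sender, **kwargs):
        return [r(sender, **kwargs) for r in self.receivers]

# the one signal every login path fires (Django's user_logged_in
# plays this role in the real fix)
user_logged_in = Signal()

def grant_initial_points(sender, user=None, **kwargs):
    # illustrative receiver: grant a one-time first-login bonus
    if user is not None and not user.get("got_bonus"):
        user["points"] = user.get("points", 0) + 100
        user["got_bonus"] = True
    return user

user_logged_in.connect(grant_initial_points)

# both the classic view and the social-login backend just send the signal
player = {"name": "ion"}
user_logged_in.send(sender="social_login", user=player)
```

Because both login mechanisms send the same signal, the receiver runs regardless of which path the player took, and the one-time guard keeps the bonus from being granted twice.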

Another issue I came across was that the 'magic disable' button was not working as intended. It was merely removing the 'cast' button from the player's profile page, but one could still cast spells by going directly to the URL responsible for spell casting.

Finally, I used signals and receivers to refactor two methods from god.py (post_cast and post_expire).

ROSEdu Summer of Code has come to an end. It was a great experience for me and if I were to choose I’d do it all over again. I’ve learned a lot of new useful things, including the required soft skills for working in a team project. I am highly indebted to my mentor Alex and the entire RSoC community for supporting me. Thank you!

by badescunicu on 25 September 2013 09:18 AM

16 September 2013


#10 DexOnline – Romanian Literature Crawler


Last week I finally finished my diacritics learning application. I went through a lot of bugs and code changes, since I discovered that utf8_general_ci uses 1 byte for characters in [A-Za-z] and 2 bytes for the ones in [ăâîșț]. After I came up with a first version of the application using single-byte string functions (I was testing at each char whether it was a 1-byte or a 2-byte one), Cătălin showed me that there are multibyte string functions which could greatly simplify the code, so I used them.
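The pitfall is easy to reproduce. In UTF-8, ASCII letters occupy one byte while the Romanian diacritics occupy two, so "1 byte per char" substring functions can cut a character in half; sketched here in Python, where byte strings and character strings make the difference explicit:

```python
word = "mașină"

# ASCII letters are 1 byte in UTF-8, the Romanian diacritics are 2
assert len("a".encode("utf-8")) == 1
assert all(len(c.encode("utf-8")) == 2 for c in "ăâîșț")

raw = word.encode("utf-8")
assert (len(word), len(raw)) == (6, 8)   # 6 characters, 8 bytes

# a byte-oriented substring cuts the ș in half...
broken = raw[:3]
# ...while character-aware slicing (what multibyte string
# functions provide) keeps every character whole
assert word[:3] == "maș"
```

This is exactly why the per-character byte test could be dropped once the code switched to multibyte-aware functions.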

My next steps are to build the diacritics inserter application and to do a lot of testing. I will also have to see whether my diacritics learning application scales with MySQL, since we will have millions of records in our database. One idea is to use MongoDB; another is to store the records in multiple tables, using a reference table as the base pointer (some sort of hashtable with huge buckets).
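The multiple-tables idea can be sketched as hash partitioning: a stable hash of the key selects one of N tables, keeping each table's row count manageable (the `ngrams_` name pattern and the shard count below are invented for illustration):

```python
import hashlib

N_TABLES = 64  # illustrative shard count

def table_for(word):
    """Pick a stable table for a word – a hashtable with huge buckets."""
    digest = hashlib.md5(word.encode("utf-8")).hexdigest()
    return "ngrams_%02d" % (int(digest, 16) % N_TABLES)

# the same word always lands in the same table, so a lookup only
# ever touches one of the N smaller tables
tables = {table_for(w) for w in ("casa", "masa", "copac", "deal", "vale")}
```

The reference table mentioned above would then only need to map each shard number to its physical table, instead of holding the millions of records itself.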

See you all at the grand finale.

by alinu on 16 September 2013 08:07 PM

15 September 2013


Mozilla Firefox #6

The last two weeks were pretty awesome!

Since the last post I've been working on some tests for about:networking; there is still some more work to do, but they look promising.

In the first week I hit a big wall of documentation, because I had never developed tests on Mozilla platforms. Don't worry, I didn't get hurt; I managed to understand how those platforms work, and by the end of the week I finished the implementation of 3 tests for the http, dns and sockets features. It wasn't that difficult. I had to be a little careful because we have a lot of asynchronous calls in those features, but the XPCShell harness has some nice ways to deal with these kinds of situations.

After that I tried to implement a test for the websockets feature, but in the XPCShell documentation I read that it wouldn't give me a window to use the WebSocket API, and therefore I resorted to another harness, Mochitest. This kind of test has a big overhead, but it was the only way I could test this feature. There was still a little problem: I had to write a small Python websocket server, because our tests shouldn't rely on external services.

These first four tests landed about two days ago and are ready to protect the dashboard against any harmful code.

Currently I'm trying to test the ping diagnostic tool. Things got a little more complicated with this test: there are a lot of callbacks within async calls, and my mind is spinning like the event loop because I don't understand why a local http server blocks my test, refusing to close itself.

I asked the module owner for advice about separating this test from the others, because I found out there is an nsIServerSocket interface which implements a server socket that can accept incoming connections, and it really works; but running this test beside the others, under these circumstances, causes interference between them.
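The appeal of an in-process server socket, independent of the XPCShell specifics, is that the test talks to a listener it fully controls instead of an external service. A Python sketch of that concept (not the nsIServerSocket API itself):

```python
import socket
import threading

def start_echo_server():
    """Listen on an ephemeral localhost port and echo one message per client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(data)              # echo the payload back
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

port = start_echo_server()
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping\n")
reply = client.recv(1024)
client.close()
```

Because the server binds an ephemeral port and dies with the test, parallel test runs cannot collide on a fixed address, which is one way around the interference described above.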

I hope to get an answer soon and solve this problem. I will update you in my next post!

by robertbindar on 15 September 2013 09:09 AM

03 September 2013


Mozilla Firefox – The Networking Dashboard. Week 8 and 9


Over the past two weeks I've finally been able to finish my Proxy Settings Test Diagnostic Tool patch. It took a while because of the response lag on my request for feedback and review from Patrick McManus (the owner of the networking module). I found out that he was a little busy, so I don't blame him. Anyway, during these two weeks he was very responsive and we managed to create a good patch.

First of all, there was a function (AsyncResolve()) for which I hadn't asked myself what would happen if it failed. I fixed that with a simple IF statement. After this, he brought to my attention another problem: there was a cancel object (nsICancelable) which wasn't cancelled in the destructor, and this created a leak in Firefox, because sometimes an outstanding request remained. To cancel that object in the destructor, I first had to check that it wasn't null, and if it wasn't, simply call its cancel function.

The next problem that was pointed out created some difficulties for me. Firstly, I should say that Mozilla code is not about quantity but about quality. That being said, for every Dashboard functionality we want to implement, we create a new structure. These all have a callback object because of the async functions, threads and the interaction between JS and C++ code. Previously, at the beginning of each function we initialised the callback object with the callback of the request, and if a function failed, we simply set that object to null and returned a failure result. Patrick thought it would be better if I first set the object to null and only initialised it at the end of the function, just before returning a positive response. It looked simple, and so it was, but after I did that, every attempt at a proxy test made Firefox crash with a SIGSEGV (segmentation fault). It took me a while, and Patrick was surprised when I pointed out the problem to him: it turns out that the OnProxyAvailable function (the function which creates the dictionary for JS) was being called from the AsyncResolve() stack, where the callback was being dereferenced. He said he didn't think that was possible for our API, but there it was. To get past the segmentation fault, I initialised the callback object before AsyncResolve() was called.

For me it was a surprise, because another async resolve function, which I had used in the DNS Lookup tool, was working perfectly – but that was because its implementation was different. There were a couple of smaller problems, and also the fact that I had to use an assert function at some point – which for me was a first. I didn't know what an assert function would do, but it turns out that it terminates the program, usually with a message quoting the assert statement, if its argument is false – which is quite useful.

Because of these important changes, I decided to file another bug for my DNS Lookup tool (which is already in the Mozilla Core code base), in which I modified it so that it is now a lot safer and better looking :) .

However, there is another catch. In order for my proxy tool to be accepted into the Mozilla Core code base, it also had to have a frontend. I thought this would be one of the last things to do for our project, but because of some requirements that Patrick presented to me, I've started working on the UI not only for the proxy tool but also for the DNS tool. I've managed to create some basic interfaces, for which I am still waiting for feedback from Tim Taubert.

Another thing I worked on was a bug filed by Valentin (our mentor). It seems that in its current state the Networking Dashboard is not thread safe, and it can't even be called from the same thread multiple times (if the previous call hasn't finished). He managed to implement a new function which creates a runnable event with a given argument – after it is accepted, it will help other projects as well. I had to make use of this new function, modify a lot of the implemented functions, instantiate structures in the .cpp files instead of the headers, and other things too. So far I've covered the socket, http and websocket data. I've decided to pause work on it because it is an important and also a big patch, and I want to apply the changes over all the code – so I'm waiting for my other two implementations to be accepted first.
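Dispatching every call as a runnable event onto one thread is the classic single-worker-queue pattern; a Python sketch under that interpretation (the `Dispatcher` class is illustrative, not Valentin's actual function):

```python
import queue
import threading

class Dispatcher:
    """Run submitted callables one at a time on a single worker thread,
    so shared state is only ever touched from that thread."""
    def __init__(self):
        self.tasks = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            func, arg, done = self.tasks.get()
            func(arg)
            done.set()

    def dispatch(self, func, arg):
        """Queue a 'runnable event with a given argument' and wait for it."""
        done = threading.Event()
        self.tasks.put((func, arg, done))
        done.wait()

# shared stats that must not be mutated concurrently
stats = {"http": 0}
d = Dispatcher()
for _ in range(5):
    d.dispatch(lambda key: stats.__setitem__(key, stats[key] + 1), "http")
```

Callers can live on any thread; because the queue serialises the events, overlapping calls can no longer corrupt the dashboard's shared data.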

This is what I have been working on for the past two weeks. In the upcoming weeks I want to start implementing some tests (xpcshell files) for our dashboard and also add the functionality which will test the reachability of a proxy.

See you next time!

by catalinn.iordache on 03 September 2013 06:15 PM

#9 DexOnline – Romanian Literature Crawler


This week I did some testing and decided that we will get better scraped text if we just write custom HTML parsing for each domain. I saw that romlit.ro places valuable text between paragraph tags, and Wikipedia uses <div id="mainContent"></div> together with paragraphs.
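The paragraph-only rule for a domain like romlit.ro can be sketched with a small stdlib parser (this Python version and the sample markup are just illustrative, not the crawler's actual code):

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect only the text that appears inside <p>...</p> tags."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.chunks.append(data)

def scrape_paragraphs(html):
    parser = ParagraphExtractor()
    parser.feed(html)
    return " ".join(c.strip() for c in parser.chunks if c.strip())

page = "<html><div class='menu'>Home</div><p>Text valoros.</p><p>Alt paragraf.</p></html>"
text = scrape_paragraphs(page)
```

Navigation text outside the paragraphs is dropped, which is exactly the point of per-domain rules: each site gets its own notion of where the valuable text lives.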

I also password-protected my crawler status page (in the browser) in an easy manner, with .htaccess and htpasswd, to restrict regular access.

At the end of the week I started implementing the diacritics mechanism. This is a long shot because of MySQL's poor speed when working with millions of records, so stay tuned to find out whether we decide to use MongoDB instead.

by alinu on 03 September 2013 09:30 AM

01 September 2013


WHC::IDE #4 – Editor

Hello readers! This time I’ve been working on improving the editor. My goal is to add some basic code editing features and fix the broken ones.

I am trying to integrate Kate, the KDE editor, into WHC::IDE, but there are some problems that (I think) are caused by my system having both Qt 4 and Qt 5 installed. There appears to be a conflict: for some reason the compiler chooses Qt 5, but the CMake files specify that Qt 4 is to be used.

While struggling with Kate, I spent some time improving the current editor; this way I have two options in case one of them fails. I've added bracket matching, fixed the highlighting and made the options relevant. One of the biggest problems was that the options would not load when opening the editor, which made them useless. I am happy with the results, and very soon we will also have autoindent.

Apart from the editor, I also fixed a bug caused by connecting two data diagrams. Data diagrams contain, as their name suggests, only data files that are waiting to be processed by a task or are the output of a task. The IDE didn’t know what to do when two data diagrams were connected, and this caused problems with the execution.

de Andrei Preda la 01 September 2013 07:52 PM

30 August 2013


#5 World of USO – Code Refactoring

Hi again,

Over the past two weeks I focused on refactoring views that were using a workaround for passing success and error messages to the next view. They were rendering the template with two additional context variables (‘message’ and ‘error’), leaving the template responsible for displaying those messages.

However, Django provides an easy way of achieving such functionality through its django.contrib.messages module. After a quick scan of the code base I found a function called ‘do_result’ in the challenge module, which was responsible for creating and passing those two extra variables to a certain template. Alex encouraged me to delete it and use the Django messages framework, followed by a redirect to the challenges’ homepage, wherever ‘do_result’ was called.

While I was working on refactoring a view from the magic module which did not use the messages framework, I stumbled upon a weird issue which needs further investigation. I tried to turn some points into gold using the exchange feature. Unfortunately, after hitting the ‘exchange’ button, I ended up with a negative amount of gold.

I have also improved the social login feature by making it pluggable. It is as easy as setting SOCIAL_AUTH_ENABLED to ‘True’ or ‘False’ in settings.py to activate or deactivate social login. The tricky part was that I didn’t know how to access a variable from settings.py in the templates. The solution was configuring an existing context processor to pass the needed value to the templates.

Don’t forget to check out this blog for more posts about this project!

de badescunicu la 30 August 2013 11:38 AM

29 August 2013


Mozilla Firefox #5


For the past weeks I’ve been taking advantage of the Networking Dashboard integration in Firefox and I’ve fixed some bugs in the graphical user interface.

In the first week after the last evaluation we received a mail from the module owner with some suggestions about the GUI. He wanted us to add some JavaScript to make our table’s data sortable by the clicked column header. I stepped in, took this bug, and came up with a simple solution – not the most efficient, but I think the most suitable for our situation:

A listener on the table headers gives me the index of the clicked column; I take the table rows, put them into an array and, using the JavaScript Array.sort() method with a particular comparison callback, sort the table by the clicked column.

This method is not that efficient because it takes the already rendered table, sorts it and renders it again (it is the best solution when sorting an already rendered table, but what about when we want to keep the sorting order between table refreshes?). Rendering a table is pretty expensive, so my reviewer advised me to sort the data before the first render; thus only one sort and one render take place when refreshing.

This was a little trickier because I had to sort several arrays stored in a JS object in parallel. I figured a solution would be to sort the array corresponding to the sorting column and, with a special comparison callback for the sort function, cache the results of the comparisons. The other arrays in the object are then sorted with a comparison callback which only returns the cached results. It works great, but there are some problems which make me wonder if it’s worth it; now I’m waiting for feedback.
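A simpler alternative to caching comparison results, sketched here in Python for brevity (the dashboard itself is JavaScript, and all names below are illustrative): compute the sorting permutation from the key column once, then reorder every parallel array by it.

```python
def sort_parallel(columns, key, reverse=False):
    """Sort every array in `columns` (a dict of equal-length lists) by the
    values of columns[key], keeping rows aligned across all arrays."""
    # Sort the *indices* of the key column once...
    order = sorted(range(len(columns[key])),
                   key=columns[key].__getitem__, reverse=reverse)
    # ...then apply that permutation to every array.
    return {name: [col[i] for i in order] for name, col in columns.items()}

data = {"host": ["b.org", "a.com", "c.net"], "sent": [30, 10, 20]}
print(sort_parallel(data, "sent"))
# {'host': ['a.com', 'c.net', 'b.org'], 'sent': [10, 20, 30]}
```

The key column is compared O(n log n) times, but the other arrays are never compared at all; they are just reordered.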

Another bug I filed focuses on the refreshing feature. Initially, the refresh button and the auto-refresh checkbox requested new data for all the existing tabs. This hurt performance, especially with the auto-refresh feature, so I fixed it. Valentin came up with a very good idea: let the refresh button request data for all the tabs, in case one wants a snapshot of all the data at a specific moment in time, while the auto-refresh checkbox requests data only for the active tab. It’s done and landed in trunk.

Between these bugs I discovered a crash in the dashboard’s menu – it was introduced by me when I helped Valentin with the integration :D – and it’s now fixed.

Our next goal is to land some tests for the dashboard. Those were some fun weeks, see you next post!

de robertbindar la 29 August 2013 09:53 AM

27 August 2013


Fortnightly Post #4.7: Long time, no post

Hi, there! It has been a while since I last posted. Time has swiftly passed and there were notable events galore. I have enjoyed my spare time that I planned from the start and now it’s time to get back to work.

Last time I talked about the “blueprints” for the tag page. Now it is almost complete, but unfortunately we might give it up. Why, you may ask? Well, it’s because we haven’t yet decided which format the images will be in. It depends on those who “draw” (better said, create) them. They might be SVG or a section of a 3D model, in which case the drawer will also be the one to tag them. I am quite happy with my tag page, as I learnt a gamut of technologies: JavaScript, JSON, AJAX, (better) PHP, and plugins like jCrop and Select2.

Because the updates for the tag page have stalled, I now have to focus on the presentation part. I have to create a gallery for the forthcoming images and I have one plugin in mind, but it first needs approval. Till then…

Happy birthday DEX Online!

de Marian Alexandru Grigoroiu la 27 August 2013 04:58 PM

#8 DexOnline – Romanian Literature Crawler


This week I fixed a bug which inserted the same link into the database twice, rearranged the code for better readability, and built a TODO list to get a clearer picture of what should be done next.
I expect that next week Radu, Cătălin and I will agree on the Diacritics Tool design document so I can do some serious coding.

de alinu la 27 August 2013 06:05 AM

26 August 2013


Mozilla Firefox – The Networking Dashboard. Week 6 and 7


The Networking Dashboard has finally been included in the Mozilla Core code base. In order to see it you will have to get Firefox Nightly, but I would recommend patience: the product is far from final and we still have a lot of work to do. We are pleased that this has finally happened, and also by the support we already see from people reporting bugs (not many, though :) ).

So in these two weeks I haven’t been able to continue my work on the Proxy Settings Test diagnostic tool, because apparently Patrick (owner of the networking module) had a lot of work to do and we were waiting for his review in order to know what I should modify, or whether my work is good so far.

I’ve started to work on the logging bug, but after a few days Valentin and I realised that it is more complicated than we had expected. We also found out that a few developers are already working on something similar. I will get in touch with them and see if I can help with something (I’d love to).

I have continued working on some UI features and I’ve also prepared for the mid-term.

About our meeting at ROSEdu – well, what can I say? It was a lot of fun. We were pleased with our presentation and the game of bowling afterwards.

Not a lot happened in these two weeks, but I’m glad that I’ve been able to get a little break.

See you next post!

de catalinn.iordache la 26 August 2013 12:06 PM

19 August 2013


#4 Teamshare – Peer-to-Peer Streaming Peer Protocol


In the seventh week I continued writing unit tests for my team configuration generator. The unit tests now cover a large part of the functionality of the two generators.

At my mentor’s suggestion I started learning about the protocol that Teamshare is going to use for data transfers, Peer-to-Peer Streaming Peer Protocol (PPSPP). I will briefly introduce the protocol in the remainder of the post.

PPSPP is a protocol for disseminating the same content to a group of interested parties in a streaming fashion. The protocol supports both pre-recorded and live data transfer. In contrast to other peer-to-peer protocols, it has been designed to provide shorter time-till-playback, and to prevent disruption of the streams by malicious peers. In my opinion, the most interesting parts of PPSPP are the chunk addressing schemes and the content integrity protection.

Regarding chunk addressing schemes, PPSPP uses start-end ranges and bin numbers. As the name suggests, a start-end range identifies chunks by specifying the beginning and ending chunk. Bin numbers are a novel method of addressing chunks in which a binary interval of data is addressed by a single integer. This reduces the amount of data every peer has to record.
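As I understand the draft, an aligned interval of 2^k chunks starting at chunk m·2^k gets bin number m·2^(k+1) + 2^k − 1, so single chunks sit at even numbers and every aligned interval is addressed by exactly one integer. A quick Python sketch of that encoding (my reading of the draft, so double-check against it):

```python
def bin_number(start_chunk, width):
    """Bin number of the aligned interval [start_chunk, start_chunk + width),
    where width is a power of two and start_chunk a multiple of width."""
    assert width & (width - 1) == 0 and start_chunk % width == 0
    return 2 * start_chunk + (width - 1)

print(bin_number(0, 1))  # 0  (single chunk 0; leaves are the even bins)
print(bin_number(1, 1))  # 2  (single chunk 1)
print(bin_number(0, 2))  # 1  (chunks 0-1, parent of bins 0 and 2)
print(bin_number(4, 4))  # 11 (chunks 4-7 addressed by one integer)
```

Instead of recording a range per interval, a peer records one integer per aligned interval, which is the space saving the paragraph above refers to.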

For content integrity protection, PPSPP uses the Merkle Hash Tree scheme for static transfers, and a Unified Merkle Hash Tree scheme which adds a public key for verification. The content is identified by a single cryptographic hash: the root hash of a Merkle hash tree, calculated recursively from the content. In contrast with BitTorrent, which needs all the chunk hashes before it can start the download, PPSPP needs only a part of them, which leads to limited overhead, especially for small chunk sizes.
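To illustrate the recursive root-hash computation, here is a toy Merkle tree in Python. Note this only shows the general idea: the actual PPSPP scheme fixes the hash function and pads the tree differently, so this sketch is not wire-compatible with the draft.

```python
import hashlib

def merkle_root(chunks):
    """Toy Merkle root: hash each chunk, then repeatedly hash pairs of
    children until a single hash remains, which identifies the content."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"chunk0", b"chunk1", b"chunk2", b"chunk3"])
print(root.hex())
```

A downloader holding only the root can verify any chunk given the chunk itself plus the sibling hashes along its path, which is why not all chunk hashes are needed up front.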


For more details, feel free to read the IETF draft at http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-07.

de victor-ciurel la 19 August 2013 09:35 PM

18 August 2013


#7 DexOnline – Romanian Literature Crawler


Sorry I forgot to provide you with a link to my work:


Last week I forgot to post, so I’ll state my progress here: I learned how to use the Smarty library, with which I built a functional crawlerLog page that lets you follow the crawler’s progress from your computer or smartphone.

This week I used AJAX on the crawlerLog web page to refresh its information every 5 seconds, and I fixed the www.romlit.ro broken-HTML problem at a general level (I repair the broken HTML using simple_html_dom, removing styles and scripts and adding body tags where there are none), so I don’t have to use a different HTML parser for romlit. I also improved the crawler by adding features like crawling only a certain area of a site and abstracting the database query layer for a faster technology change (e.g. MySQL is not very scalable with the amount of data we continue to gather, so we may turn to PL/SQL).

de alinu la 18 August 2013 08:55 PM

15 August 2013


#4 World of USO – Social Login


I’m back to work, after a seaside vacation.

My current task is to make it possible for users to log in through various social networks, such as Facebook, Twitter and Google. This is quite important because we might run World of USO in another context, and users would be more likely to try our game if they could log in with an existing social account.

I started reading about the OAuth protocol and how the login mechanism works. I learned that you have to follow a series of steps before you are granted permission to access the user’s data. First, you register your app with the desired social network to get a unique ID. After that, you make a GET request to their servers with some parameters (app_id, redirect_uri). They give you back a code (if the user authorizes your app), which you then exchange for an access token. Eventually, you use that access token to fetch the data you need through their API.
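The redirect-based part of that flow can be sketched with just the standard library. The endpoint and app names below are invented placeholders, not any real provider's API, and the token-exchange POST is omitted:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def authorize_url(auth_endpoint, app_id, redirect_uri):
    """Build the URL the user is sent to, carrying our app's ID and the
    URI the provider should redirect back to."""
    return auth_endpoint + "?" + urlencode({
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",   # we want an authorization code back
    })

def code_from_redirect(redirect_url):
    """Extract the ?code=... the provider appends when redirecting back;
    this code is then exchanged (via a POST) for an access token."""
    return parse_qs(urlparse(redirect_url).query)["code"][0]

url = authorize_url("https://social.example/oauth/authorize",
                    "my-app-id", "https://wouso.example/complete/")
print(url)
print(code_from_redirect("https://wouso.example/complete/?code=abc123"))
# abc123
```

Libraries like django-social-auth wrap exactly these steps (plus the token exchange and user creation) per provider.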

I was able to implement that routine myself for Facebook after reading their documentation, but there are some pitfalls regarding user creation. Therefore, Alex and I decided to use a tested and well-known mechanism among Django users. It is called django-social-auth and it does exactly what we need.

I managed to integrate django-social-auth with World of USO. Now users are able to log in with Facebook and Twitter. It raised a weird exception when trying to authenticate with Google, but I think it can be fixed. I am now waiting for Alex’s review and further instructions.

The thing I enjoyed most about working on the social login was that I got to talk with the man who wrote django-social-auth. I was confused about how the mechanism was authenticating its users, so I decided to send a mail to its creator. He responded very fast and was patient with me. That’s why I love the open source community!

Below is a screenshot with the newly added feature.

Stay tuned!


de badescunicu la 15 August 2013 09:00 PM

12 August 2013


FinTP Application GUI #3


In this third post for the RSoC program, I will present the user interface I am working on for the FinTP project. If you read my last post, you know by now that in order to configure FinTP you have to write XML configuration files for all the connector parts of the application.

Here is an example of a possible XML file for a particular connector.

There are some mappings for the interface: every sectionGroup in the XML goes to a separate tab in the UI, and all its child tags go on the tab’s page as elements, which can be labels, fields, drop-down menus, etc.
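To make the mapping concrete, here is a Python sketch of the same idea (the real application uses Qt's QDomDocument; the tag and attribute names in this sample are invented for illustration):

```python
import xml.etree.ElementTree as ET

SAMPLE = """
<configuration>
  <sectionGroup name="Connection">
    <field name="host" value="localhost"/>
    <field name="port" value="5432"/>
  </sectionGroup>
  <sectionGroup name="Logging">
    <field name="level" value="info"/>
  </sectionGroup>
</configuration>
"""

def tabs_from_xml(text):
    """Each sectionGroup becomes a UI tab; its child tags become the
    elements (label -> value) shown on that tab's page."""
    root = ET.fromstring(text)
    return {
        group.get("name"): {f.get("name"): f.get("value") for f in group}
        for group in root.findall("sectionGroup")
    }

print(tabs_from_xml(SAMPLE))
# {'Connection': {'host': 'localhost', 'port': '5432'}, 'Logging': {'level': 'info'}}
```

The reverse direction (writing edited widget values back into the XML) walks the same structure and updates the attributes before serializing.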

The purpose of this application is to read the XML file and create the interface for it. You can then modify fields and update the current XML or write a new one.

This is how it looks so far. I’m using QDomDocument, a DOM parser; while it parses the XML file it populates the UI with Qt widgets. Depending on a tag’s name, its attributes or inner text can become combo boxes, line edits or labels.

I have also added a menu to the interface from which the user can open another XML file, save the interface into a new one or update the existing file. These functions are still a work in progress; I have to learn more about Qt’s signals and slots mechanism.
Until next time I will try to finish them and add some new functionality my mentor suggested; one example is using XSLT files to transform our XML files into something else.

de Macavei Andrei Gabriel la 12 August 2013 09:06 PM

11 August 2013


#4 Mozilla Firefox


For the last two weeks I worked on some Telemetry bugs. For the first one, I had to report whether or not the DoNotTrack privacy feature was used and, if it was, which option the user selected.

I started with a patch which reported the specified data even while the user was toggling between the options. I sent it and asked for feedback, but I was very sceptical about its behaviour – the way it approximated the user’s choices – so I decided to look for a way to report that data once per session. It wasn’t that difficult: I called the telemetry code in the nsHttpHandler destructor. We were not sure the data would actually be reported, because HTTP was shutting down at the same time, so I set some breakpoints and saw that the destructor was called at the right time. The patch landed a few days ago and I hope the numbers will help the DNT developers.

After that, I started working on another bug, which was supposed to report HTTP connection utilization. I began with the first two tasks – how often a backup connection is created, and how often this backup is never used – but looking through the code I realized there was a lot of work and a lot of new concepts, so I got stuck right in the middle of it. I lost a lot of time understanding the code; the algorithms used there were not documented anywhere except in some comments. I am glad I did it because, with the help of my mentor and the community, I learned a lot of new things and some great strategies, one of which I will present next: it’s called “happy eyeballs”.

“Happy eyeballs” is an algorithm that supports dual-stack applications: both IPv4 and IPv6. Firefox does not implement the classic strategy; it has some small changes which I managed to understand from Bugzilla discussions and code comments. In a simplified version: a primary connection is created with the preferred IP version and a 250 ms timer starts. If the connection is established before the timer expires, no backup is created; otherwise a backup connection is created with an IPv4 address, and input/output streams are attached to each connection. After the backup is created, Firefox listens for an output stream “ready” event, and the connection whose stream is ready first is used.
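Here is my own toy reconstruction of that simplified version, using Python's asyncio instead of Firefox's C++ socket code; the 250 ms delay is the constant from the description above, and the connect functions are stand-ins for real connection attempts:

```python
import asyncio

async def happy_eyeballs(connect_primary, connect_backup, backup_delay=0.25):
    """Try the preferred (e.g. IPv6) connection; if it hasn't completed
    after `backup_delay` seconds, race it against an IPv4 backup and
    return whichever finishes first."""
    primary = asyncio.ensure_future(connect_primary())
    try:
        # shield() keeps the timeout from cancelling the primary attempt
        return await asyncio.wait_for(asyncio.shield(primary), backup_delay)
    except asyncio.TimeoutError:
        pass
    backup = asyncio.ensure_future(connect_backup())
    done, pending = await asyncio.wait({primary, backup},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

async def demo():
    async def slow_v6():            # primary takes too long...
        await asyncio.sleep(1.0)
        return "ipv6"
    async def fast_v4():            # ...so the backup wins the race
        await asyncio.sleep(0.01)
        return "ipv4"
    return await happy_eyeballs(slow_v6, fast_v4)

print(asyncio.run(demo()))  # ipv4
```

Swapping in a fast primary would make the function return before the backup is ever created, matching the "no backup if the timer doesn't expire" rule.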

That’s how my last two weeks went. Tomorrow I will try to finish this last bug, and then we will continue working on the Networking Dashboard – maybe we will write some unit tests.

de robertbindar la 11 August 2013 07:44 PM

WHC::IDE #3 – Logging and execution improvement

Sorry for taking such a long break from the blog. For the last three weeks I’ve been working on the logging system, and I’ve also taken a small vacation.

Last time I talked about my problems with Nagios. Those problems are now gone: I tested it on my machine and it worked well, but in the end I decided not to use it, for two main reasons. Firstly, Nagios is a bit of an overkill for what we need – it is too complex to use just for logging our processes. Secondly, it doesn’t run on Windows. (Speaking of Windows, I have problems linking the OpenCL library: sometimes it works, other times it doesn’t.)

The logging system I created uses INI files that store data about the project run (one file for each run). It works in a similar way to the execution restore system, using the signals emitted by the QProcess and Executie classes. To create a nice interface with statistics and graphs I used QCustomPlot, a free plotting library.

Another improvement to the project is the new execution model. Before, the execution order was created by sorting the workflow graph using DFS. All devices would run one task at a time, each device with a different input from the inputs folder(s). The new execution model can run multiple different tasks if they are independent. It doesn’t use DFS for the topological sort; instead, at each step it removes the tasks that have 0 dependencies from the unsorted graph and adds them to the sorted execution order.
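That "remove the 0-dependency tasks, step by step" procedure is essentially Kahn's algorithm; a small Python sketch with invented task names:

```python
def execution_steps(deps):
    """deps maps each task to the set of tasks it depends on.
    Returns a list of steps; tasks within one step have no dependencies
    left, so they are independent and can run at the same time."""
    deps = {t: set(d) for t, d in deps.items()}  # defensive copy
    steps = []
    while deps:
        ready = {t for t, d in deps.items() if not d}
        if not ready:
            raise ValueError("cycle in workflow graph")
        steps.append(sorted(ready))
        # Drop the scheduled tasks and erase them from remaining deps.
        deps = {t: d - ready for t, d in deps.items() if t not in ready}
    return steps

workflow = {"split": set(), "blur": {"split"}, "sharpen": {"split"},
            "merge": {"blur", "sharpen"}}
print(execution_steps(workflow))
# [['split'], ['blur', 'sharpen'], ['merge']]
```

Grouping ready tasks per step is what lets independent tasks ("blur" and "sharpen" here) be dispatched to different devices simultaneously.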

My next goal is the editor. I will talk about it next time.

de Andrei Preda la 11 August 2013 02:52 PM

07 August 2013


#6 DexOnline – Romanian Literature Crawler


This week I had my first code review which went better than I expected.
I tried crawling a local site of mine and it turned out that while building the crawler I had accidentally hardcoded the link-building mechanism for wiki.dexonline.ro (my directory-depth mechanism for composing relative links wasn’t working as I expected, e.g. I got localhost/example.com/index.php/aboutus.php instead of localhost/example.com/aboutus.php).

I also fixed the following URL equivalences:
1) http://www.example.com/ and example.com are the same
2) http://www.example.com/ and http://www.example.com/index.html (or .php, .aspx, .asp, .jsp, .pl, .py, etc.) are the same (or the same with high probability – this depends on the directory index definition)
3) http://www.example.com/index.php and http://www.example.com/////index.php are the same; the extra slashes are a server fault when building links dynamically
4) http://www.example.com/, http://www.example.com and http://www.example.com/index.php/? are the same, since no GET parameters are defined
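A Python sketch of those four rules (the crawler itself is PHP; the directory-index list here is just the extensions mentioned above):

```python
import re
from urllib.parse import urlsplit

DIRECTORY_INDEXES = ("index.html", "index.php", "index.aspx", "index.asp",
                     "index.jsp", "index.pl", "index.py")

def normalize(url):
    """Reduce the equivalent URL spellings from rules 1-4 to one form."""
    if "://" not in url:
        url = "http://" + url                    # rule 1: bare example.com
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]                # rule 1: drop leading www.
    path = re.sub(r"/{2,}", "/", parts.path)     # rule 3: collapse ///// to /
    path = path.rstrip("/")                      # rule 4: drop trailing slash
    for index in DIRECTORY_INDEXES:              # rule 2: strip directory index
        if path.endswith("/" + index):
            path = path[: -len(index) - 1]
            break
    return host + path + ("?" + parts.query if parts.query else "")

for u in ("http://www.example.com/", "example.com",
          "http://www.example.com/////index.php",
          "http://www.example.com/index.php/?"):
    print(normalize(u))  # all four print: example.com
```

Storing only the normalized form keeps the crawler from re-downloading the same page under several spellings.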

At the end of the week I wrote a first design sketch for the indexer & diacritics mechanism. I have already got some feedback and I expect this specification to be finished by the end of next week.

de alinu la 07 August 2013 11:10 AM

#3 User-Interface Improvements


In the last two weeks I have continued working on the user interface for the FinTP project. I finished the jQuery part for populating the table with entries from the database, the pagination option, and the add, edit and delete options for each entry in the table, and now I am making small improvements to the way the interface looks.

I am making these little improvements using jQuery and CSS. As an example, I had to make sure that the table header doesn’t disappear when the user scrolls down, and that the table columns are resizable.

de edymanoloiu la 07 August 2013 09:39 AM

05 August 2013


#3 Teamshare – Further bug fixing and unit testing


In my fifth week I had to deal with some bugs in the filesystem event simulator I have been working on. I had a tough time with one bug in particular because it was hard to detect: it was rather rare, occurring only after two copy operations followed by two deletions of the same copied file/directory from the destination directory. The second delete operation could pick a nonexistent file/directory because a faulty check during the copy process led to the system overwriting the file, while the simulator’s list of the destination directory’s contents still held two different files/directories.

I mentioned in my previous post that I used the cp, mv and rm commands for the operations. I believed they worked on Windows because I had tested them on my Windows machine, but after testing on another Windows machine I noticed they don’t. I solved this bug by using the corresponding Windows commands.

After solving all the bugs, I started working with Java, writing small programs to better understand the language. I have also continued reading the existing Teamshare code, and I performed unit testing with the help of the Python module pyunit. So far I have tested the user configuration file generator that I worked on in the first weeks.

I will continue unit testing for the team configuration file generator and the filesystem event simulator.

de victor-ciurel la 05 August 2013 07:54 PM

31 July 2013


Mozilla Firefox – The Networking Dashboard. Week 4 and 5


Over the past two weeks a lot of things have happened. We had the first evaluation, and a big part of the Networking Dashboard is almost finished.

Let’s start with the 4th week. Robert, Valentin and I met at the university, where we continued working on the dashboard and talked about what we were supposed to do over the coming weeks. We also prepared for the evaluation presentation by applying all the patches to the code, testing all the functionality and establishing what we were going to talk about.

We were pleased with our presentation, and it was also pleasing to see all the other students talking about their projects – I really didn’t know that much about them until the actual evaluation was held. The beer and the discussions afterwards were good and interesting :)

In the remaining days of the week, Valentin and I decided what I should do next. Of course I had to implement the last diagnostic tool, the Proxy Settings Test, but we also wanted to move on to the next part of the project – tracing/debugging. The main idea here was to implement Honza Bambas’s about:timeline add-on ( http://www.janbambas.cz/firefox-detailed-event-tracer-about-timeline/ ). Unfortunately, Honza had other plans for his work, but we weren’t discouraged by this. So we established that I should get working on the Proxy Settings Test diagnostic tool, help Valentin with an older patch he had worked on – about logging (Bug 801209) – and also work some more on the JavaScript code for the UI.

Last week I only managed to implement the Proxy Settings Test diagnostic tool – filed as Bug 898237. As with the DNS Lookup tool, the implementation isn’t ready for review because I have to wait for Robert’s error bug to be accepted first, in order to use its functionality to complete both of my diagnostic tool patches. The functionality of this tool has already been tested and it works as it is supposed to.

The implementation was by far the most interesting part because I learnt so much. First of all, the only complaint my mentor had at the first evaluation was that I wasn’t talking enough with people from Mozilla, so for this patch I decided to ask them for help first and talk to Valentin afterwards. The implementation itself consists of modifying two .webidl files, one .idl file and some C++ code. That was the easy part; the hard part was searching for the correct services, functions, headers and IDL files.

In the end, as I already said, the tool works beautifully and the code looks good. I can’t wait to show it at the mid-term.

Now I have to start working on the logging bug, ask people from the networking module about more functionality the Dashboard should have, and also get working on the UI.

See you next post!

de catalinn.iordache la 31 July 2013 06:55 PM

29 July 2013


Fortnightly Post #2.6: Tag me well!

The code review went well; there were some minor fixes that Cătălin Frâncu, my mentor, had to make. But now it’s stable: there are no more errors or warnings and everything seems right. I still have to add some comments to my code, as I reckon that a good implementation needs good documentation.

I’ve started writing the tag page and there are ideas galore. Some of them still need to be discussed, because in my mind they’re somewhat equivocal. Basically, this page will show one of the images that need tagging. There will be some fields which the volunteer has to complete (e.g. the lexeme in the tag, the coordinates of the tag’s centre, the coordinates of the centre of the pointed area), and this information will be stored in the database. Of course, the most difficult part is getting the coordinates from the image. Fortunately, all of this work is done by a plugin called jCrop. It lets users make a selection on an image and then returns its coordinates (x, y) and size (width, height). With some JavaScript (which I have just started learning), I calculate the coordinates of the selection’s centre, which populate the input fields. Based on these coordinates we plan to draw the tags and pointing arrows on the image using the HTML5 <canvas> tag. I really enjoy this part of the project, as it seamlessly combines server-side and client-side scripting.

That’s all for now and for a few days henceforth. I will indulge myself with a short vacation.

de Grigoroiu Marian Alexandru la 29 July 2013 04:58 PM

#3 Mozilla Firefox


These weeks I focused on finishing the patches I had sent before, so the Ping diagnostic tool is now review+ and waiting to land. Along with it is a bug that was at the top of our stack because it affected both of the diagnostic tools that Catalin and I had implemented.

This bug consists of a method that maps an NS_ERROR code to a string; more specifically, the NS_ERROR_OUT_OF_MEMORY error code is mapped to “NS_ERROR_OUT_OF_MEMORY”. We started with two ideas: a dictionary holding the (key, #key) pairs and an array of {key, #key} structures. We kept the second one because the hashtable was filled at runtime and this didn’t give us the expected level of performance. So I began working on this method and faced a really big problem: some of the error codes (some pseudo ones) were not unique within the Necko module, and our method was returning unexpected results. I filed a bug about this and sent a patch with unique error codes, but we gave up on the idea because the module owner decided it is not a thing to mess with – it has a lot of problems even without localization and, to be honest, some tests crashed.

My work was not useless, because our mentor had a great idea: by constructing a second array with the duplicated codes and iterating through this one first, we get the desired results.
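The lookup order can be modelled in a few lines of Python (the codes and names below are invented for illustration; the real tables hold Necko's NS_ERROR values in C++): the duplicated-codes table is scanned first, so the preferred name wins for any ambiguous code.

```python
# Hypothetical error tables: code 17 is shared by two errors, so the
# preferred name for it lives in the duplicates table, which is checked first.
DUPLICATED_ERRORS = [(17, "NS_ERROR_PREFERRED_NAME")]
ALL_ERRORS = [(17, "NS_ERROR_OTHER_NAME"), (42, "NS_ERROR_UNIQUE")]

def error_name(code):
    """Map a numeric error code to its symbolic name, resolving
    duplicated codes in favour of the duplicates table."""
    for table in (DUPLICATED_ERRORS, ALL_ERRORS):
        for key, name in table:
            if key == code:
                return name
    return "NS_ERROR_UNKNOWN"

print(error_name(17))  # NS_ERROR_PREFERRED_NAME
print(error_name(42))  # NS_ERROR_UNIQUE
```

In the C++ version the name strings come for free from the # stringizing operator, so the tables never drift out of sync with the constants.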

After this fight, I worked on tracing some data. We wanted to know how many bytes Firefox sent and how many it received. At that moment, the Networking Dashboard kept track of the data transferred through each active socket, but after a socket closed we knew nothing about that amount of data. So I created two byte counters as members of the nsSocketTransportService class, and on every socket detach the counters are updated. In the Dashboard class, I combined these counters with the existing tracing variables, and now about:networking has an improved data tracing feature.

By the time last week finished, I had taken a Telemetry bug and warmed up for this week. It was very educative: I refreshed my knowledge about creating a new histogram, and I found out that Firefox has a feature called DoNotTrack (shame on me, it shipped a long time ago) which lets you express a preference not to be tracked by websites. The guys wanted to know whether the usage measurements from telemetry line up with their expectations and whether anyone actually opts for “Yes, I want to be tracked”. I’ve sent a patch and am now waiting for feedback, because I’m not so sure I understood the behaviour they expect.

For each one of these features I’ve implemented some basic UIs for debugging purposes and some demos.

It was a great month. I’ve learned a lot of stuff and tricks (the # and ## macro operators, w0W!) and I’m eager for the next evaluation and beer meeting.

de robertbindar la 29 July 2013 11:00 AM

Week III

Hello there,

I hope you’re doing well. These weeks I’ve been busy working on the user interface. I finished the logic in JavaScript, so now I am arranging it (a lot of CSS). Our interface should contain all the facilities that we implemented at the beginning. I had to pay close attention to details and keep in mind that our interface should run in every browser, from Chrome to IE. One command may be supported by one browser and not by another, so you have to keep this in mind when working with web pages.

I have learned about gradients, invisible pixels and other things to make my interface more stylish. I have also worked on some tests for our Java application, for one of our resources, to make it more general.



de anda.nenu la 29 July 2013 07:34 AM

28 July 2013


#5 DexOnline – Romanian Literature Crawler


This week I built a stable crawling mechanism which can crawl only one location of a site, with a better link-following mechanism that follows only links under the start URL. The crawler also has a mechanism which transforms relative links into absolute ones.

A big problem which I still have is changing the way I query the database, from Idiorm to the Paris library. I could not fix this yet because Paris asks for classes from which to build the tables,
e.g. if I have the table ‘CrawledPage’, I need a class called ‘CrawledPage’ which extends ‘Model’.

Another problem when crawling was that my application parsed everything, even files which are not in HTML format (like PNG images). To fix this, I added a mechanism which tells me what type of page I’m downloading: text, HTML, PNG, etc.

I left my crawler running for a while and when I came back I found out that it was the 3rd biggest consumer of system resources. After some googling I found out that lost variable references are marked for cleanup and freed only when the application finishes or when system memory is insufficient. After some more searching I found out that newer versions of PHP let you call garbage collection explicitly, an option which was not present in older PHP versions.

I have to give credit to the logging mechanism built last week, because it has helped me a lot so far.

de alinu la 28 July 2013 09:32 PM

27 July 2013


#3 World of USO – Code Refactoring


This week I managed to refactor the views in the interface module. This module contains a lot of form-processing views and I thought it would be a burden to refactor them. Thankfully, I found out that Django already has the perfect tools for dealing with forms: the generic class-based views FormView, CreateView and UpdateView.

Those classes rely on Python’s object orientation and multiple inheritance. Therefore, the documentation for them is spread across multiple pages and takes a long time to decipher: you have to visit several other pages to find which attributes and methods each class inherits. Fortunately, I read on Stack Overflow about a very useful site which does exactly what I needed – it lists all the methods and attributes of each generic class, along with their source code.

The only thing that I’m not sure about in this refactor is whether two views (edit_spell and add_spell) handle image upload correctly. I couldn’t make them work because of a glitch in my WoUSO development environment; I think I have an issue with the Python Imaging Library.

I had a problem with moving a form from the view to the forms.py file. The form was defined inside the view, so it generated its fields dynamically when the view got called. When I moved it to another file, it no longer generated the correct number of fields. Eventually the solution was to override that form’s default constructor.
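The constructor-override fix can be sketched like this (plain Python with hypothetical field names; the real code is a Django form):

```python
class Form:
    def __init__(self):
        self.fields = {}

class AnswerForm(Form):
    """Fields depend on runtime data, so they are built in the constructor
    instead of being declared statically on the class."""
    def __init__(self, question_count):
        super().__init__()  # let the base class set up `fields`
        for i in range(question_count):
            self.fields[f"answer_{i}"] = ""

form = AnswerForm(question_count=3)
print(sorted(form.fields))  # ['answer_0', 'answer_1', 'answer_2']
```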

Another interesting thing that I learned during this week was that Python’s super() is very powerful. It delegates method calls not only to a parent class, but also to sibling classes.
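A minimal example of that sibling delegation (the order comes from Python’s method resolution order):

```python
class Base:
    def greet(self):
        return ["Base"]

class Left(Base):
    def greet(self):
        return ["Left"] + super().greet()

class Right(Base):
    def greet(self):
        return ["Right"] + super().greet()

class Child(Left, Right):
    def greet(self):
        return ["Child"] + super().greet()

# Left's super() call reaches Right (a sibling), not Base directly,
# because Python follows the MRO: Child -> Left -> Right -> Base.
print(Child().greet())  # ['Child', 'Left', 'Right', 'Base']
```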

Now it’s time for a short vacation, I will be in Costinești the following week. I’ll keep you posted as soon as I get back to work.


de badescunicu la 27 July 2013 09:26 AM

23 July 2013


#4 DexOnline – Romanian Literature Crawler


This week I built a logging mechanism, mainly for exceptions. I had a problem with initialising static variables from non-static functions (return values), and the error PHP gave me was no help at all (it wasn’t expecting ‘(’ after the function name). Finally, after looking at the variable declaration, it hit me that it shouldn’t be declared as static :P
I also made my crawler nicer :) because one of the sites figured out that I’m not a browser, so I had to find a way to fool it (I changed the user_agent).
Another problem I encountered was that when I wanted to print a newline to the terminal, it kept writing on the same line and ‘\n’ was no help at all. After googling for a while I found out that PHP has a predefined constant named PHP_EOL which did the job.
I also found out how to extract the HTTP code (e.g. 200, 301, 404, 500). Until now I was using a function made by someone on Stack Overflow which was very limited in detail (it returned true or false). After looking deeper into curl_getinfo($curl_handler) I found out it returns an associative array which contains the HTTP code at index ['http_code']. This works only for the 200 and 300 series; for HTTP codes 400 and above, I use curl_errno($curl_handler).

I hope this week’s work solves the part where the crawler doesn’t know what HTTP code the page returned (which made a fool out of me at the RSoC presentation), and I hope I’ll have better control over my crawler with all the logging going on.
I also hope to make my first commit on SVN soon.

Good Luck!

de alinu la 23 July 2013 07:23 PM

22 July 2013


#2 Teamshare – Bug fixing and the event simulator


During the third week I have fixed a series of bugs from the user and team configuration file generators and I have started understanding the code behind Teamshare. This process is slow due to the fact that it is mainly written in Java, which is a new language to me, but basic knowledge of XML and Maven is also necessary. In spite of this, I am starting to understand how the code works and I feel I have learned quite a bit.

In the fourth week, besides continuing to study the code, I started work on a script that simulates filesystem events. These events consist of creating, copying, deleting, moving, removing or renaming files and directories within a given directory. After solving a few problems with the copy event, all the other events were easy to implement and fix.

While using the Python library shutil in the event simulator to perform the copy, move and remove actions, I encountered a rather annoying problem. After implementing the copy functions, I noticed that the program crashed every time it should have copied a file or a folder. After inspecting the problem, I found out that shutil’s copytree function requires that the destination folder not already exist. After seeking a solution to the problem, I resorted to using os.system to run external commands (cp, mv, rm).
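For reference, a pure-shutil workaround for the copytree restriction looks like this (a sketch; the post’s actual solution used os.system with cp/mv/rm, and Python 3.8+ can instead pass dirs_exist_ok=True):

```python
import os
import shutil
import tempfile

# shutil.copytree() fails if the destination already exists,
# so remove the destination first.
def copy_tree_over(src, dst):
    if os.path.isdir(dst):
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

base = tempfile.mkdtemp()
src = os.path.join(base, "src")
dst = os.path.join(base, "dst")
os.makedirs(src)
open(os.path.join(src, "a.txt"), "w").close()
os.makedirs(dst)           # destination already exists...
copy_tree_over(src, dst)   # ...but the copy still succeeds
print(os.listdir(dst))     # ['a.txt']
```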

In the following weeks I will perform unit testing on the work I have done and review my event simulator for bugs and improvements.

de victor-ciurel la 22 July 2013 10:15 PM

Fortnightly Post #2: Manage files with style!

Almost done with the elFinder file manager!

I have successfully bound the results of user actions to queries in the database. After completing an action (move, delete, copy, rename), elFinder (elf, henceforth) adds the name of the command to one array and the results to another. I created a function that makes queries based on the data stored in those arrays. For example, if a new file is uploaded, the script creates a new entry in the table with the path of the added file and the user that completed the action; if a file is moved, it changes its path, and so on. This is possible because elf has a bind option that calls an external function whenever a specific user action is completed.

I have sent the code to be reviewed by my mentors and I’m looking forward to my second commit.

My next task is to create a new page where text in images can be tagged. This is helpful in search engine indexing and will be implemented using jCrop.

de Grigoroiu Marian Alexandru la 22 July 2013 07:34 AM

21 July 2013


FinTP – Configuration Wizard


This is my second post for RSoC. Last time I was telling you how vast this project is and how many things go into creating a secure and reliable product which can be used by financial institutions.

Until now I have been working on the core part of it and I wrote C++ unit tests for some of the methods that the Transport library contains. This library has all the functions necessary to manage message queueing. As a side note, C++ is incredibly fast, but sometimes it’s a bit difficult to read because of all the boilerplate code.

As I was telling you at the end of my first post, I was going to work on a GUI for FinTP. A very important part of FinTP are the connectors, which put messages in a local queue to be sent to a remote queue manager.
Here is an easy-to-understand schema I received from my mentor.

This application will be dynamically configured through an XML file. The main usage scenario will be something like this:

  a. Application -> Load FinTPGui.config: contains user interface components description (tabs, labels, fields…)
  b. User -> Choose configuration file type to be generated: read from FinTPGui.config (tags)
  c. User -> Fill in all generated fields
  d. Application -> Validate fields (using an XML schema)
  e. Application -> Save field values to FinTPGui.config
  f. User -> Confirm to generate FinTP.config file
  g. Application -> Validate section constraints (e.g. if the last filter for Fetcher has type WMQ)
  h. Application -> Save final config file to disk
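Step a (loading the component descriptions from the XML file) can be sketched in Python with a hypothetical, simplified config; the real wizard uses Qt’s XML classes in C++, and the tag and attribute names below are made up:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified FinTPGui.config: each <field> describes one
# widget the wizard should generate.
CONFIG = """
<config>
  <tab name="Fetcher">
    <field name="queueName" label="Queue name" type="text"/>
    <field name="filterType" label="Filter type" type="combo"/>
  </tab>
</config>
"""

def generate_fields(xml_text):
    """Walk the XML and emit a (tab, name, label, type) tuple per field."""
    root = ET.fromstring(xml_text)
    fields = []
    for tab in root.iter("tab"):
        for field in tab.iter("field"):
            fields.append((tab.get("name"), field.get("name"),
                           field.get("label"), field.get("type")))
    return fields

for f in generate_fields(CONFIG):
    print(f)
# ('Fetcher', 'queueName', 'Queue name', 'text')
# ('Fetcher', 'filterType', 'Filter type', 'combo')
```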

The first problem I encountered was how to parse an XML file. Luckily, the Qt library already has a class for this, called QXmlReader. With it I could parse an XML file and generate separate GUI fields for all XML sections/tags.
I use Qt 5.x and I have to say I’m impressed by the capabilities of its widgets.

Until next time I will develop some more features for this app, and in the future it will also be available on mobile devices. See you soon.

de Macavei Andrei Gabriel la 21 July 2013 04:03 PM

20 July 2013


WHC::IDE #2 - execution restore and logging

Things have been working quite well with the project, with a few exceptions (some bugs). I managed to implement the execution restore. To do this, I started by logging the finished processes and adding them to a file. Restoring the execution consists of continuing from where the system crashed. While working on this I discovered a bug caused by old processes that were not properly removed from memory. This caused a segmentation fault in some rare cases because of signals emitted after the deletion of the process. The fix was simple: use deleteLater() to remove the instances, but it took me a long time to figure this out.

Moving on, I started working on the logging system. After a talk with Veaceslav Munteanu (my mentor) and Grigore Lupescu (Veaceslav’s mentor, from the previous RSoC) we decided that we should use an already existing logging system. Grigore suggested Nagios. The first step was preparing the IDE for a logging system. WHC::IDE had no way of telling how a process ended or, if it crashed, what was the cause of the crash.

I added this functionality, and while I was at it, I saw a way to improve the execution restore. This is closely related to how tasks run in WHC::IDE, so I am going to briefly explain it.

When you click on the “Run” button, the IDE performs a topological sort that establishes the order in which to run the tasks. A task may have several processes associated with it: for every input file combination, there is a corresponding process. The output from that process could be used by several tasks, but it is a waste of resources to run it for each task or folder that needs the output. The IDE instead runs the process only once and puts the output in a temporary folder. After that, it copies that folder or hands it to the other tasks that require it. There are a lot of IO operations involved, so this could go wrong in many cases. The improvement I saw was the following: the IDE will still add the process to the list of run processes, only this time it will mark it as an “IOError process”. When the user wants to restore the execution, WHC::IDE will go back to the temp folder and retry the IO operations, but it will not run the process again.
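The ordering step can be sketched with Kahn’s algorithm (a Python sketch of the idea, not WHC::IDE’s code; task names are made up):

```python
from collections import deque

def topo_order(tasks, deps):
    """Kahn's algorithm: deps maps task -> set of tasks whose output it needs."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    users = {t: [] for t in tasks}
    for t, needed in deps.items():
        for d in needed:
            users[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u in users[t]:
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    return order

# "filter" and "render" both consume "generate"'s output, so "generate"
# runs first (and only once; its temp folder is then shared).
tasks = ["generate", "filter", "render"]
deps = {"filter": {"generate"}, "render": {"generate"}}
print(topo_order(tasks, deps))  # ['generate', 'filter', 'render']
```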

After completing this, and working with the execution class, I saw yet another way I could improve the project. I am talking about the execution speed on machines with multiple devices. WHC::IDE can run the same task in parallel, each device running a process with different inputs. But what happens when you have many tasks with one input that can run in parallel? Well, they could all run on different devices with a small adjustment in the running algorithm. The problem with this “small adjustment” is that it requires A LOT of code refactoring. I started trying different approaches, but they are not ready to be committed to the project because they break some things.

Going back to the logging system (sorry for getting so distracted), I currently have some problems getting Nagios to run on my machine. Also, it doesn’t run on Windows, and one of the project’s goals is to be cross-platform. I am starting to believe that the way to go is to write something new and lightweight for our project.

Next time I will tell you more about the logging system (I will get it working by then) and also about the editor :)

de Andrei Preda la 20 July 2013 04:52 PM

19 July 2013


Mozilla Firefox – The Networking Dashboard. Week 2 and 3

Our project at Mozilla Firefox consists of four parts: Information to Display, Diagnostic Tools, Tracing/Debugging, and Progress and Ideas, which are presented in more detail here: https://wiki.mozilla.org/Necko/Dashboard. As you already know from my previous post, in the first week Robert and I worked on Information to Display, where we covered all the tools except the cache status. This is because other developers are working on another project which involves modifying the cache almost entirely, so for now we are waiting for the final product so that our work won’t be in vain.

Over the past two weeks Robert Bindar and I worked on Diagnostic Tools. After talking with my mentor, Valentin Gosu, I decided to work on the DNS Lookup tool and the Failed URL test, but after a quick chat with Patrick McManus, the owner of the networking module, we decided not to implement the Failed URL test anymore, because it was basically an HTTP-level test and the developer tools have grown to the point where they support that really well.

The DNS Lookup tool works similarly to a DNS resolver, a tool I learned about during my communication protocols classes. You could say the notion wasn’t new to me, but I think working on this diagnostic tool helped me a lot. First of all, I had to understand that whatever I was supposed to do, it had to be an interaction between JavaScript and C++. So I declared a function (requestDNSLookup, in an .idl file) that would be called from JS. In C++ I implemented this function, which in the end calls an async function that works like a resolver and which, at the right time and on the right thread, fills in the parameters of another function, OnLookUpComplete. For this I had to create another dictionary in a .webidl file and a structure in C++, so the result of the async resolver could be stored. After that it was just a case of taking the information you need and doing whatever you need to do with it.

It may not sound like much, but believe me, it takes a lot of time to understand how it all works. I learned about async functions and also how JavaScript and C++ interact with one another. Also, after all the compilation errors were resolved, we had to see whether the result was the expected one, so I had to learn more about JavaScript (because until now I hadn’t coded in it) to be able to create objects that finally show us what we want.

It’s been almost four weeks since this programme started, and after the first three weeks I can say that Robert and I are ahead of schedule and, most importantly, with the help of our mentor and hard work we’ve been able to learn a lot of new things, which I know will be helpful in the coming weeks.

See you next post!

de catalinn.iordache la 19 July 2013 05:11 PM

#2 User-Interface

A month has nearly passed since the start of the FinTP project, in which I managed to learn new and interesting things and complete all of my tasks. In the last two weeks I started to work with HTML5 and jQuery, a multi-browser JavaScript library designed to simplify the client-side scripting of HTML.

First came the documentation process, in which I learned how to work with these tools and saw the power of jQuery and HTML5. I used them to create a user interface which queries the server, retrieves the data and displays it in a table with different options, such as adding a new entry, and deleting or editing an existing entry.

The server that I used returned the data in JSON (JavaScript Object Notation) format, and because I used jQuery it was very simple to parse. For the pagination of the table I used a plugin named JPages, which offered a lot of interesting and useful features and made my life easier.

As an environment I used Eclipse because it offers support for HTML5 and JQuery.

de edymanoloiu la 19 July 2013 03:30 PM

18 July 2013


#2 World of USO – Code Refactoring

Hello again,

Over the past two weeks I’ve been working on the first objective of the project (refactoring views from games module). Last night I was able to accomplish it and I also started working on the second milestone, which consists of refactoring views from the interface module.

I gained some experience with class-based views and I can proudly say that I’m pretty confident using ListView, DetailView and View. The thing I like most about the Django project is that it’s very well documented. Therefore, I was able to find the method flowchart for each of those classes. It’s very important to know which method gets called and when.

Yesterday I stumbled upon a tricky view. After refactoring it, some features stopped working, such as displaying a message after a spell is bought. It crossed my mind to ask for help on the #django freenode channel. I was surprised that a bunch of people were willing to share their knowledge with me. They gave me some valuable suggestions. Eventually, I managed to fix the message displaying issue with the Django messages framework. It did exactly what I needed: store a message and pass it to the next view.

Another problem was paginating the view’s results. It had a back-end implementation, but it lacked the front-end part. I created a template for paginating those results and I rewrote the back end using a much simpler class-based view. It was as easy as assigning a number to the class’s paginate_by attribute.

See you next post!

de badescunicu la 18 July 2013 09:24 PM

14 July 2013


#3 DexOnline – Romanian Literature Crawler


This week I learned how to use Idiorm, a PHP library for MySQL databases, and I implemented the crawler’s DB side. I found Idiorm’s INSERT usage quite obscure because I couldn’t find an example on the web, so I started reading the library’s implementation. Finally I found out that you have to use $obj = ORM::for_table(‘table_name’)->create(); to make an object with the table fields as PHP variables, then set the corresponding variables’ values ($obj->field_1 = $val_1; $obj->field_n = $val_n) and finally call $obj->save();. I wrote this code because the other DexOnline intern will need it.

I also wrote a mechanism to manipulate URLs (transforming relative URLs into canonical ones) and a mechanism to find out whether a URL has already been used (hash + special cases).

I got stuck saving the rawPage and parsedText to the filesystem because of directory rights. I didn’t want to change the directory owner, so I moved the files to /tmp/DexContent/, but it still doesn’t want to save the files. I’m using file_put_contents($filename, $string) and $filename contains only alphanumeric characters and the ‘_’ char.

de alinu la 14 July 2013 10:37 PM

13 July 2013


Fortnightly Post #1.5: Please wait while updating…

I had the task of updating elFinder, the file manager plugin that is used to store and manage The Word of The Day (wotd, henceforth) images. The new version has a lot of new features, like search and multiple roots. Multiple roots was the main reason for the update, but it wasn’t what we had really expected: it meant that two directories could be managed from the same window, and we didn’t want that, as different people will manage wotd and definition images respectively. But there were major differences in syntax and compatibility that convinced us the update was still needed.

After the code and file update, when testing, the plugin wouldn’t work. That was because the new version had some bugs when used with a jQuery library newer than 1.7.x (DEX Online had 1.8.3). Fortunately, elFinder had a branch that solved these bugs. Unfortunately, it needed newer versions of jQuery and jQuery UI than the site had, so I found myself once again updating. When everything seemed right, one admin table (which was implemented using jqGrid) disappeared completely. You guessed it: it also needed a small update, because jQuery had deprecated some of the methods that jqGrid used.

After that I tried most of the site’s pages looking for errors and all looked promising, until I checked The Word Mill game. Yep, you didn’t guess: this time it was neither my fault nor any update’s. It was simply because, on local clones, the definitions that the game needs to work weren’t copied.

Although it might have seemed tedious and annoying, it was on the contrary utterly constructive, as I had to wander through the source code, which made me understand how it works and will ease further development.

My next task is to create a separate page to manage the definition images, and write a method that creates or modifies entries in a data table, depending on users’ actions in elFinder.

de Grigoroiu Marian Alexandru la 13 July 2013 01:09 PM

11 July 2013


Mozilla Firefox – #2 -

It’s my second post since the programme started and I have to say that I’m very excited about how things are going. We’ve learned a lot about our project and I think the way we approach new problems shows that.

Since my first post I’ve been working on a diagnostic tool, a transport-layer ping meant to reach a server over the TCP/SSL protocol and give the user relevant information about the connection status (reached, timed out, etc). I had a slow start; I didn’t even know how to begin, where to start from or how this feature should work, but I kept learning and documenting, and with the help of our mentor and the community I’ve almost finished it.

I’m going to present a basic use case, which I received from Patrick, the module owner, to better understand how this feature works:

Firefox says “cannot reach server https://foo.com“; I wonder why that is?


  1. Can I change foo.com to an IP address? (DNS lookup, Catalin is working on it)
  2. Can I connect to the IP address on port 443? (TCP ping) If not, why not? (refused, timed out)
  3. Can I handshake with SSL on port 443? If not, why not? (bad SSL version, invalid certificate, timed out, etc)
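Steps 2 and 3 can be sketched in Python (the real tool is implemented inside Firefox’s C++/JS codebase; this only shows the idea of classifying connection outcomes):

```python
import socket
import ssl

def tcp_ping(host, port, timeout=5.0):
    """Step 2: can we open a TCP connection? Returns a status string."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reached"
    except socket.timeout:
        return "timed out"
    except ConnectionRefusedError:
        return "refused"
    except OSError as e:
        return f"error: {e}"

def ssl_ping(host, port=443, timeout=5.0):
    """Step 3: can we complete a TLS handshake on top of that connection?"""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return f"handshake ok ({tls.version()})"
    except ssl.SSLError as e:
        return f"ssl error: {e.reason}"
    except OSError:
        return "unreachable"
```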

I started with actually doing the connect and reporting whether it worked or not, and I recently finished implementing timeouts for those connections whose status is never “reached”, so now the user can set a timespan before the connection is declared “timed out”. I’m currently facing a problem with mapping some error codes to localized strings. A method behaves in a way I hadn’t expected, so I’m now waiting for an answer from someone who has already worked with that interface.

That’s all about how my work is going; I hope to find a solution for this problem by tomorrow.

See you next post!

de robertbindar la 11 July 2013 04:52 PM

09 July 2013


# Week II

Another interesting week has passed in which I learned a lot and managed to
complete my tasks. The first two days, I documented all that I had done in the
previous week. So remember: any project requires proper documentation.
After that, the exciting part: jQuery, which is a multi-browser JavaScript library
designed to simplify the client-side scripting of HTML. I had to use jQuery and
HTML5 to build a user interface which would query the server for resources
(see post #WeekI). I spent a day learning JavaScript and then focused on my tasks.

I used JSON (JavaScript Object Notation), which is something like a
collection of name/value pairs: {"obj1": [{ "key1":"value1" , "key2":"value2" },
{ "key3":"value3" , "key4":"value4" }]}. I used jQuery to extract and process the
HTTP response and then I put all the data in a table.
There I had to pay attention to pagination; some of the resources may return a
lot of records and we wouldn’t want to display them all on the same page. That’s
how I learned to get parameters from the URL in JavaScript and then query
the web service with two parameters: /itemsPerPage;pageNumber. I used window.location.href
to get the URL in JavaScript and then I split it to get my parameters. I also used
some CSS options for the design.
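The parameter extraction and page slicing can be sketched like this (Python instead of JavaScript, keeping the same /itemsPerPage;pageNumber convention; the URL is made up):

```python
from urllib.parse import urlsplit

def page_params(url):
    """Extract itemsPerPage and pageNumber from a '/itemsPerPage;pageNumber'
    style path, as described in the post."""
    last = urlsplit(url).path.rstrip("/").split("/")[-1]
    per_page, page = (int(x) for x in last.split(";"))
    return per_page, page

def paginate(records, per_page, page):
    """Return the slice of records for a 1-indexed page."""
    start = (page - 1) * per_page
    return records[start:start + per_page]

per_page, page = page_params("http://example.com/service/items/10;2")
rows = paginate(list(range(35)), per_page, page)
print(per_page, page)     # 10 2
print(rows[0], rows[-1])  # 10 19
```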

As an environment, at first I used VisualWeb, which was very cosy, but there
were some problems with the server: it didn’t allow the use of the same port for
multiple services. So I switched to Eclipse and started working.

Now I’m prepared for a new week with all my forces renewed.
So keep calm and code! Ciao!

de anda.nenu la 09 July 2013 10:51 AM

07 July 2013


#2 DexOnline – Romanian Literature Crawler


This week we managed to agree on 60% of the design document (which includes the crawler part), so I can start coding. By the end of the week I implemented a crawling mechanism which takes a URL, fetches the raw page, parses the content and returns the plain text.

The crawler uses a cURL mechanism which fools the server into thinking my application is a browser (a fake Firefox running on Windows NT aka Windows XP); it even has a cookie jar :) to store the site’s cookies. The crawler doesn’t have a login mechanism, but if we need authentication to read a page, I will send the login parameters through POST and enable CURLOPT_POST. For HTTPS pages I can enable CURLOPT_SSL_VERIFYPEER.

Since we might have different parsing algorithms for different sites, I created an AbstractCrawler which has two abstract methods (startCrawling and parseText) that need to be implemented in each derived crawler class. In startCrawling you can choose to log in or use an SSL connection (both to be implemented) and in parseText you can choose the best way to get plain text for that site.
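The design maps naturally onto abstract base classes; here is a Python sketch of the same structure (the real code is PHP, and the fetch below is stubbed rather than doing real HTTP):

```python
import re
from abc import ABC, abstractmethod

class AbstractCrawler(ABC):
    """Each site gets its own subclass that decides how to fetch a page
    and how to extract plain text from it."""

    @abstractmethod
    def start_crawling(self, url):
        """Fetch the page (optionally logging in / using SSL first)."""

    @abstractmethod
    def parse_text(self, raw_html):
        """Extract plain text in whatever way suits this site."""

class SimpleSiteCrawler(AbstractCrawler):
    def start_crawling(self, url):
        return f"<html><body>content of {url}</body></html>"  # stubbed fetch

    def parse_text(self, raw_html):
        # naive tag stripping, just for the sketch
        return re.sub(r"<[^>]+>", "", raw_html)

c = SimpleSiteCrawler()
print(c.parse_text(c.start_crawling("http://example.com")))
# content of http://example.com
```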

I had to decide between a couple of libraries such as PHP’s DOMDocument, Simple HTML DOM, Tidy, Ganon and phpQuery. Since we could be working with broken HTML (unclosed tags), I also found HTMLPurifier, a PHP library which fixes broken HTML.

As I tested the libraries, all of them managed to parse broken HTML (so no need for HTMLPurifier), but Simple HTML DOM caught my attention through its simplicity in use and through its reviews.

Well, that’s all folks!

de alinu la 07 July 2013 08:46 PM

Teamshare – Testing Infrastructure


My name is Victor Ciurel and I am working this summer on the Teamshare project, under the supervision and guidance of my mentor, Adriana Draghici. Teamshare is responsible for distributed file management in Teamwork, which is an easy-to-use, portable system for team management.

My goal in this project is to implement a testing and benchmarking service for the decentralized file sharing system.

In my first week, I learned about the history and development of Teamshare and Teamwork. I began working on the project in a surprising way, by solving incompatibilities between the tools and technologies used and the operating system on my laptop. I read the documentation to better understand the design and conventions used by Teamshare. After reading up on JSON and its implementation in Python, I started working on my first Python script for generating random user data and writing it to a JSON file.
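A script of that shape might look like this (a hedged Python sketch; the post does not show Teamshare’s actual data format, so the field names here are made up):

```python
import json
import random
import string

def generate_users(count, seed=None):
    """Build a list of random user records ready to be dumped as JSON."""
    rng = random.Random(seed)  # seedable, so test runs are reproducible
    users = []
    for i in range(count):
        users.append({
            "id": i,
            "username": "".join(rng.choice(string.ascii_lowercase)
                                for _ in range(8)),
            "teams": rng.sample(["alpha", "beta", "gamma"],
                                k=rng.randint(1, 3)),
        })
    return users

# Write the generated records to a JSON-formatted string (or a file).
data = json.dumps(generate_users(3, seed=42), indent=2)
print(len(json.loads(data)))  # 3
```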

In my second week, I finished my script for generating random user data after many modifications. I have continued reading up on the technologies needed for this project and started working on a script for generating random team data; I am still working out some small problems.

Adriana and I have decided on a short-term workflow for me. After finishing the random team data generator, I will skim through the existing Teamshare code and work on a Python script that will modify the user and team data files to simulate real modifications.

Although I have some catching up to do on my work, I feel very motivated and eager to learn more and to work on my tasks.


de victor-ciurel la 07 July 2013 05:12 PM

World of USO – Code Refactoring (Post #1)


My name is Nicu Badescu and I am working on refactoring WoUSO’s code base, under the supervision of my mentor, Alex Eftimie. World of USO is an educational game, well known among students, which aims to improve their general knowledge about computers.

My goal is to replace some of the old function-based views with the newer, more versatile class-based views. I also have to move the model-related logic out of the views.

Two weeks have passed since I began working on the project and I enjoy it so far. During the first week I went to Bucharest to celebrate the beginning of RSoC and talk to Alex, in order to get some important tips on how I should get started. I read about testing Django applications and managed to add unit tests to a feature I had previously implemented. After that, I skimmed through the code base and marked the chunks of code which are in need of refactoring.

I have set a couple of milestones with Alex and I am currently working on completing the first one, which consists of refactoring views from the games module. The basic work flow is as follows: write a test for a specific view, refactor that view, check if the test passes.

I am confident that I am going to learn a lot of useful things this summer. I’ll keep you posted!

de badescunicu la 07 July 2013 01:17 PM

FinTP – open source alternative for payments transactions

Hello, I am Andrei Gabriel Macavei and I’m working on the FinTP project, which is an open-source version of the qPayIntegrator software from Allevo.

This is a vast project, but to understand it I first have to tell you what Message Queuing (MQ) is.

Most business companies and financial institutions have to use software to deal with transactions like bill payments. This category of software is called Message-Oriented Middleware and it uses a messaging protocol (MQ) that allows applications running on separate servers to communicate in an asynchronous and failsafe manner, without being restricted by each system’s implementation.

Messages are sent in queues, which act like a temporary storage location that holds the messages to be validated first and afterwards sends them through the network. There are many complex layers a message has to pass through to be correctly formatted and to comply with international banking regulations and SWIFT standards.
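The decoupling that queues provide can be illustrated with a toy Python producer/consumer (this shows only the general MQ idea, not FinTP or ActiveMQ code; the "validation" step is a stand-in):

```python
import queue
import threading

# The producer and consumer never talk directly; the queue decouples them,
# so either side can be slow or busy without blocking the other's logic.
q = queue.Queue()

def producer():
    for i in range(3):
        q.put(f"payment-{i}")  # enqueue a message
    q.put(None)                # sentinel: no more messages

def consumer(out):
    while True:
        msg = q.get()
        if msg is None:
            break
        out.append(f"validated {msg}")  # 'validate' before forwarding

received = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
producer()
t.join()
print(received)
# ['validated payment-0', 'validated payment-1', 'validated payment-2']
```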

If you don’t have anything better to do and want to know more about it, here is a link to the documentation for Apache’s ActiveMQ software, which we’re using.

Now that this is understood, I can tell you what I’m working on. Because the purpose of this project is to re-engineer the closed-source product qPI, we have to make it work without some of the current proprietary prerequisites (i.e. using ActiveMQ, which is open source, instead of its proprietary brother WebSphere MQ). The best way of doing this is to write unit tests, so we can know what went wrong when changing something in the code.

The technologies I’m working with are:

  • C++ – for the core engine, because it needs to be fast and with C++ you can optimize for that
  • CppUnit – a testing framework which is a C++ port of the JUnit framework
  • Qt – for building an application interface

What we hope to achieve in the end is a modular, open-source product whose clients don’t have to know all the details about how a Message-Oriented Middleware is implemented and can just use it to create their own version of the software, adapted to their needs.

I have also discussed with my mentor, Gabriel Stanciu, and we both agreed on building a GUI tool in Qt which will be configurable through an XML file. This will give users an easier way of writing configuration files.

This was my first blog post for ROSEdu Summer of Code; I hope it wasn’t that hard to digest :)

de Macavei Andrei Gabriel la 07 July 2013 12:12 PM

06 July 2013


WHC::IDE #1 – porting to Qt5 and memory leaks

WHC::IDE is an IDE for parallel and distributed projects using OpenCL. My goals in the last two weeks have been porting from Qt 4.8 to Qt 5 and fixing memory leaks.

I do not understand the reason for the changes that made old projects incompatible, but it wasn’t mine to decide. All I could do was find a way to make the project work again. To be honest, it wasn’t that difficult, considering there are many helpful blog posts and articles floating around the internet. I don’t have much to say about this, but if you are interested in porting old projects, the most useful resource I found was this.

After porting, the real work began. I started this project knowing that it has a huge, black-hole-inducing amount of memory leaks, so I was prepared for the worst. And what I saw seemed bad. After every run, valgrind would leave a 5 MB log (I redirected standard error to a file). The thing that concerned me the most was that I didn’t understand anything from that 5 MB file. The stack trace was too short and I couldn’t see which methods from my code caused the mess. After a quick search, I found the --num-callers parameter, which sets the size of the stack trace. Once more, I ran the program with valgrind, this time with a much bigger stack trace limit, and started examining the log file. It showed that almost every error was caused by Qt methods. I went to Google with those errors and learned that Qt causes a lot of false positives in valgrind and that there is a way to create a suppression file.

A suppression file is used to suppress certain errors that valgrind encounters. The good news is that the QtCreator IDE (the program I use to develop the project) already has valgrind set to suppress the false positives. Running from QtCreator showed that WHC::IDE is much better than I thought. I spent the rest of the time finding real memory leaks and segmentation faults.

Next time I will talk about restoring the running state of a project in case of a system crash. I strongly believe I’m going to get this working by next week.

de mirror3000 la 06 July 2013 08:29 PM

Fortnightly Post #1: DEX Online — The Illustrated Dictionary


My name is Marian Alexandru Grigoroiu and I was assigned ‘The Illustrated Dictionary’ project. My main objective is to create a platform for adding images to words’ definitions.

I have set aside a fortnight to study the source code and do some research. On Monday I will start the real coding, and I can hardly wait. We have yet to decide all the details regarding the looks and the implementation, but our guideline is: ‘Keep it simple!’.

Good luck to you all!

de Grigoroiu Marian Alexandru la 06 July 2013 09:15 AM

04 July 2013


First week – FinTp


My name is Edi Manoloiu and here are some of my impressions after my first week working at FinTP Project.

In my first week I learned a lot of new and interesting things, some of them hard, but having someone who explained them very well made it so much easier. The experience here at Allevo is challenging but rewarding: after a hard day of work, it feels good to see that everything is fine and all of the tests pass.

As for technologies, we use Java EE, web services and JPA.

I used JPA to create entities from the tables of a SQL database. Based on each entity, I created resources and then wrote tests to check that they work properly.

de edymanoloiu la 04 July 2013 07:43 AM

03 July 2013


First Week – Mozilla Firefox

Hi, my name is Robert Bindar and I am glad to take part in RSoC 2013.

Our project consists of two parts: the first is a Firefox feature meant to provide a dashboard that monitors network activity, and the second is a system of histograms whose goal is to expose relevant information about Firefox’s network performance. Both are very useful for developers.

After this summer I am confident the Networking Dashboard will be better, it will expose more information like the protocol version, TCP half-open connections and it will be more useful for developers with its new diagnostic and debugging tools.

The first week was fun! For the first three days I met with my colleague Catalin Iordache and our mentor Valentin Gosu at the university where we talked about the project, we studied the code and we also implemented some features like exposing the protocol version and half open connections.

We began the second week with some documentation. We had a lot of questions for Valentin and the other module owners, but now we have a clear direction for implementing some diagnostic tools like ping and DNS lookup.

It’s a good start and I’m confident we will stick to the plan and fulfil our mission.

de robertbindar la 03 July 2013 07:06 AM

01 July 2013


Mozilla Firefox – The Networking Dashboard. Week 1

Hello, my name is Catalin Iordache and I am working on a Mozilla Firefox project, the Networking Dashboard, which is meant to offer the same functionality as chrome://net-internals. Some of the functionality is already implemented, but there is more to be done. Also, because Mozilla has two projects in RSoC, there are two of us currently working on it: Robert Bindar and me. Valentin Gosu, our mentor, had the excellent idea that it would be more productive, and better for us, if we helped each other.

This isn’t my first time working on the Mozilla core code base: I already contributed some patches a few months back, in the Upstream Challenge competition at University Politehnica of Bucharest. I have to say that resolving a simple bug is one thing, but working on a big project like this is another.

Therefore, this is how the first week went:

For three days, Robert and I met at the university and studied the code. We first looked at some functionalities that were already implemented and needed to be exposed later: RTT (round-trip time, or ping time) and TTL (time to live). We also read some documentation about the SPDY protocol and half-open connections, two of the functionalities that needed to be implemented. While studying the code, I noticed a small error, so I fixed it (Bug 887566).

After two days and two meetings with our mentor, Robert and I implemented the functionality that will allow us to display whether a connection is using the SPDY protocol, and which version it is using (spdy/2 or spdy/3). For this we made changes in nsHttpConnection.h, nsHttpConnectionMgr.cpp, NetDashboard.webidl and Dashboard.cpp. These changes were submitted by Robert on Bugzilla (Bug 888267).

After this, we established what I have to do in the coming weeks, and we left Bucharest for more peaceful places to work (our homes). During the weekend I did more research about half-open connections, and then I set about implementing this functionality. Now the Networking Dashboard will be able to display, for every host, which sockets are half-open and how many of them there are. I filed the implementation on Bugzilla (Bug 888628).

Working on Firefox is turning out to be very addictive, which I think is a good thing.

Have a nice day!


de catalinn.iordache la 01 July 2013 06:10 PM