Wednesday, October 12, 2011

Is Schema.org the right way to go?

[ Originally written for Kuliza Technologies on June 14th, 2011 ]
Did the three big companies make the right decision in introducing Schema.org?

The Semantic Web is a web of information that is marked up with machine-understandable metadata in addition to the human-readable web content. Recently, Google, Yahoo! and Microsoft collaborated and came up with Schema.org, a privately hosted semantic markup vocabulary.
This introduction has been a hot topic of discussion in the Semantic Web community, mainly because of the syntax the three companies chose for the vocabulary. The major issue with the release is that the Schema.org terms are documented in microdata syntax, as opposed to the currently popular RDFa serialization of RDF. I am currently contributing open-source code to the Semantic Web community through my project, which involves creating an RDF vocabulary publishing platform, so I may appear a bit biased towards RDFa over microdata here.

A bit of history -
RDF is a knowledge representation framework that encodes data as subject-predicate-object triples. When you combine triples, they form graphs. Initially, the RDF/XML serialization format was used for semantic markup, and it kept the semantic annotations separate from the HTML content. Over time, Microformats emerged, in which the semantic metadata was integrated into the HTML itself. RDFa is another serialization of RDF that follows the same approach as Microformats, i.e., integrating the metadata into the HTML content. Microdata is a set of HTML attributes, introduced with HTML5, which claims to improve upon RDFa.
An important thing to note here is that RDFa and Microdata are both syntaxes. Both are Entity-Attribute-Value models that support using URIs as universal identifiers, and there is an algorithm for converting microdata to RDF. Schema.org, on the other hand, is a vocabulary. A vocabulary has terms, which can be expressed in any syntax; Schema.org's terms just happen to have been originally specified using microdata.
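To make the difference concrete, here is a small illustration of my own (not taken from the Schema.org documentation) showing the same statement marked up in both syntaxes, using the schema.org Person type:

<!-- Microdata -->
<div itemscope itemtype="http://schema.org/Person">
  My name is <span itemprop="name">Alice</span> and I work as a
  <span itemprop="jobTitle">Web Developer</span>.
</div>

<!-- RDFa (1.1 style) -->
<div vocab="http://schema.org/" typeof="Person">
  My name is <span property="name">Alice</span> and I work as a
  <span property="jobTitle">Web Developer</span>.
</div>

Both snippets describe the same Person with the same name and jobTitle; only the attribute names differ.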

Can’t we just specify all the terms in RDFa syntax and continue using them?
The answer is yes, and as a matter of fact, the work is already in progress as I write this post. People in the RDFa community, Richard Cyganiak (my Google Summer of Code 2011 mentor) and Michael Hausenblas, have developed an RDFS definition of the Schema.org terms and hosted it at http://schema.rdfs.org/.

So what is the issue here?
Google has asked the web community to use either microdata or RDFa, but not both on the same page, since mixing the syntaxes confuses its parsers.

“While it’s OK to use the new schema.org mark-up or continue to use existing Microformat or RDFa mark-up, you should avoid mixing the formats together on the same web page, as this can confuse our parsers.” … “If you have already done mark-up and it is already being used by Google, Microsoft, or Yahoo!, the mark-up format will continue to be supported. Changing to the new mark-up format could be helpful over time because you will be switching to a standard that is accepted across all three companies, but you don’t have to do it.”

And then it adds:
“We will also be monitoring the web for RDFa and Microformat adoption and if they pick up, we will look into supporting these syntaxes.”
This sounds as if Google is pushing developers who care about SEO to start using microdata, a syntax that is not in much use yet, since it gets a sort of priority in Google's parsing algorithms. This takes away developers' freedom to choose whatever syntax works best for them. Although RDFa is a bit more complex than microdata, it covers more use cases, and some developers might be more comfortable using it.

A few years ago, the web developer community was reluctant to mark up its content semantically. The Semantic Web community worked hard to make web developers understand the future benefits of having linked data all over the web. Many developers slowly started using RDFa and Microformats, and a recent survey showed that around 4% of websites used RDFa, more than any other syntax. See http://tripletalk.files.wordpress.com/2011/01/rdfa-deployment.png for the comparison.

RDFa is already used by Drupal 7, Facebook's Open Graph Protocol, Best Buy, all the e-commerce sites that use the GoodRelations vocabulary, and many more major deployments globally.
And now Schema.org asks them to learn a new syntax yet again. Let's face it: if Google, Microsoft and Yahoo! declare that they will support only microdata for parsing content on the web, most web developers who are mainly looking for SEO will follow. This would adversely affect the growth of RDFa deployments.

Thus, a large portion of the Semantic Web community is not happy with the decision. Some believe that the vocabulary provided by Schema.org won't suffice for complex domains, since it is not extensible.

Another matter of concern is that the W3C does not seem to have been consulted at all while Schema.org was being developed. Commercialization of standards is never a good thing, and that is what Schema.org amounts to. In fact, Manu Sporny, chair of the RDFa Working Group at the W3C, has been very vocal in opposing Schema.org, going so far as to say that he will soon start a revolution against “the false choice” of using microdata in Schema.org. I have been following him on Twitter, where he has been gathering support to put pressure on the three big companies. He also believes that “Microdata doesn’t scale as easily as RDFa – early successes will be followed by stagnation and vocabulary lock-in.”

The solutions-
The most obvious solution to this problem is for Google, Bing and Yahoo! to announce that they will treat RDFa and microdata with equal priority in their parsing algorithms.
Bing has already stated that it can parse a page that includes multiple syntaxes. Google's parsers can't do this yet, and Google needs to incorporate this capability as soon as possible.

However…
Schema.org does seem to have created a lot of negative buzz, but let's not forget that some kind of vocabulary standardization like this was long overdue. Without a definitive standard, it is difficult for developers to decide which vocabulary to use for their markup. Schema.org does solve this problem and makes life easier for developers as well as for search engines. As Google states:

“Creating a schema supported by all the major search engines makes it easier for webmasters to add mark-up, which makes it easier for search engines to create rich search features for users.”

Friday, September 16, 2011

Kuliza@Mysore Day 1

We want to implement a working prototype of a sharing/analytics widget like easyshare within 7 days. The widget is targeted at e-commerce platforms.


Problem :

Person 1 shares a link through our widget on facebook.
Person 2 shares the same link through our widget on facebook.
Person 3 reshares the same link on facebook by seeing Person 1's shared link.
Person 4,5 and 6 reshare the same link on facebook by seeing Person 2's shared link.
Person 7,8,9 and 10 see Person 3's link and reshare it.

We need to find the most influential person among all those who shared.


Tree models :
1 --- 3 --- (7,8,9,10)
2 --- (4,5,6)
As we can see, Person 3 is the most influential here.


Solution :

Consider Person 1 and 2 as the root users. When a root user shares our link on facebook, we store the following in the backend :
1. Our system generated userid
2. Facebook id of the person
3. URL shared
4. parent Id : null

Then we append a query parameter to the URL, namely the Facebook user id of the root user, shorten the result using bit.ly, and share that shortened link.

e.g. Person 1 shares http://www.hostname/productpage
We store Person 1's Facebook user id and the URL http://www.hostname/productpage in the backend.

Now we append the query parameter to the URL and shorten it using bit.ly:

http://www.hostname/productpage?q=userid
bit.ly/Li32df34

Then we share this link on facebook.
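A minimal PHP sketch of this flow (illustrative only: the shares table, its columns, the $pdo connection and the shorten_url() helper wrapping the bit.ly shorten call are my own assumptions, not the actual widget code):

// $pdo: an already configured PDO database connection (assumption).
// Record the root user's share; parent_id is NULL for root users.
$pdo->prepare("INSERT INTO shares (user_id, fb_id, url, parent_id) VALUES (?, ?, ?, NULL)")
    ->execute(array($userId, $fbId, $productUrl));

// Append the sharer's id as a query parameter and shorten the result before sharing.
$longUrl  = $productUrl . '?q=' . urlencode($fbId);
$shortUrl = shorten_url($longUrl);  // hypothetical wrapper around the bit.ly shorten API
// $shortUrl (e.g. bit.ly/Li32df34) is what actually gets posted to facebook.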

When a new person Re-shares the same link on facebook using the link provided by person 1:

Store the following in the database :
1. Our system generated userid
2. Facebook id of the new person
3. URL shared (got through the parent)
4. parent Id : id retrieved from the query parameter.

In this way, we can find the most influential user by querying for the id that appears the maximum number of times in the parent id column.
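As a sketch (again assuming the hypothetical shares table and the PDO connection from above), the lookup is a single aggregation query:

// Find the sharer whose link was re-shared the most, i.e. the id that occurs
// most often in the parent_id column.
$stmt = $pdo->query(
    "SELECT parent_id, COUNT(*) AS reshares
       FROM shares
      WHERE parent_id IS NOT NULL
      GROUP BY parent_id
      ORDER BY reshares DESC
      LIMIT 1");
$top = $stmt->fetch(PDO::FETCH_ASSOC);
// $top['parent_id'] is the most influential user; $top['reshares'] is how many re-shares they drove.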

Using the bit.ly APIs, we can also track the number of hits on a particular shortened URL and hence find the most influential person.

Thursday, August 4, 2011

#Note PHP get contents of a remotely hosted file without cross-domain ajax

// Fetch the remote XML on the server side, so the browser never has to make a cross-domain AJAX call.
$file = file_get_contents('http://qa.agrinova.intuit.com/webmetrics/farmerCount.groovy');
echo $file;

Output :

<?xml version="1.0"?>
<webmetrics xmlns='http://agrinova.intuit.com'>
  <farmer_count>290158</farmer_count>
  <statewise_count state='GJ' count='135070' />
  <statewise_count state='AP' count='155088' />
</webmetrics>

 


Saturday, June 25, 2011

Updates : Porting Neologism to Drupal 7 [3]

A lot has happened in the project since the last post. In fact, I am ready with my mid-term submission.

The three content types (vocabulary, class and property) have been added to the port. The fields that have been added are :

Vocabulary :
  • Title
  • Namespace URI
  • Authors
  • Abstract
  • Body
  • Additional Custom RDF

Class :
  • Related vocabulary
  • Class URI
  • Label
  • Comment
  • Superclass
  • Disjoint with 
  • Details

Property :
  • Related Vocabulary
  • Property URI
  • Label
  • Comment
  • Details
  • Functional Property
  • Inverse Functional Property
  • Domain
  • Range
  • Superproperty
  • Inverse
Vocabulary, class and property nodes are now being registered correctly with Evoc.
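For reference, this is roughly how one of the fields above could be attached in code; a minimal sketch using the Drupal 7 Field API with hypothetical machine names (the actual port may well define its fields differently):

// Define a plain text field for the vocabulary's namespace URI (hypothetical machine name).
field_create_field(array(
  'field_name'  => 'field_namespace_uri',
  'type'        => 'text',
  'cardinality' => 1,
));

// Attach it to the 'vocabulary' content type as the "Namespace URI" field listed above.
field_create_instance(array(
  'field_name'  => 'field_namespace_uri',
  'entity_type' => 'node',
  'bundle'      => 'vocabulary',
  'label'       => 'Namespace URI',
  'required'    => TRUE,
  'widget'      => array('type' => 'text_textfield'),
));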

Next steps are mentioned by Richard here-
http://drupal.org/node/1196510




Thursday, May 26, 2011

Updates : Porting Neologism to Drupal 7 [2]

The coding period has started.

We decided to start by creating bundles for :
  • Vocabulary
  • Class
  • Property
The References module will be used to create fields of type Node Reference.
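As a rough sketch of what this could look like in code (hypothetical machine names; the node_reference field type comes from the References module, and the exact settings keys may differ from what we end up using):

// Define the 'vocabulary' bundle (content type) programmatically.
$type = node_type_set_defaults(array(
  'type'   => 'vocabulary',
  'name'   => 'Vocabulary',
  'base'   => 'node_content',
  'custom' => TRUE,
));
node_type_save($type);

// A node reference field, e.g. to point a class at its parent vocabulary.
field_create_field(array(
  'field_name' => 'field_vocabulary',
  'type'       => 'node_reference',
  'settings'   => array('referenceable_types' => array('vocabulary' => 'vocabulary')),
));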


Monday, May 16, 2011

Updates : Porting Neologism to Drupal 7

Wiki page of the Project : http://groups.drupal.org/node/145269

Project page on drupal.org : http://drupal.org/project/neologism

We will be using the issue queue on drupal.org for all discussions related to the project, so that the entire Drupal community can provide suggestions. This is the link to the issue queue of Neologism :
http://drupal.org/project/issues/neologism?categories=All

Lin Clark gave a short introductory session over Skype on how to use the issue queue on drupal.org and Git, and pointed out various portals that could be useful during the project, such as http://drupal.stackexchange.com/.

Richard created a branch for the project on the drupal.org Neologism repository. We had two options, either we could clone the entire D6 code into the branch and then transform it to a D7 Module step by step, or start with an empty branch and then add feature after feature. We chose the second option since it was cleaner and always gave us a working drupal 7 module.

Also, I will be working on my new MacBook Pro for GSoC, and since I had previously developed on Drupal only on Windows, I contacted Guido, who is currently the main developer of Neologism, so he could help me set up the development environment for the project.

I have a basic Drupal 7 website running on MAMP and will be using Komodo for development and the command line for Git.

GSoC 2011 Proposal for Drupal : Porting Neologism to Drupal 7

I support Drupal's vision of becoming the best CMS for projects related to the Semantic Web.

In view of the above initiative, Drupal 7 comes with a core RDF module. There is a contributed RDF module as well, which extends the functionality of the core module through :
1. RDFx - provides additional serialization formats.
2. RDF UI - a UI to specify RDF mappings.
3. Evoc - a UI to import vocabularies and store them in the database.



For different sites to have perfectly interoperable RDF, they should use the same RDF vocabulary. At present, no comprehensive vocabulary exists that provides predicates to suit each and every Semantic Web development project. For instance, FOAF may be a very good choice of RDF vocabulary when it comes to building social networking websites. However, for a project that involves learning resource objects, FOAF cannot provide predicates for all the learning object metadata elements. That is why it is important to be able to create new vocabularies according to the specific requirements of a project.


As of now, there is no standard User interface for creating an entire RDF Vocabulary in Drupal 7. One has to write an RDF Schema in XML format and then register it with Drupal using Evoc.

My initial idea :

I initially wanted to extend the functionality of the contributed RDF module by adding a user interface to create and register customised RDF vocabularies. The module would generate the corresponding RDFS in the backend, allow the user to register the vocabulary with Drupal, and provide an easy UI for creating vocabularies in the frontend.

Why I felt the Need for this extension :
Since Drupal has a relatively steep learning curve, we must try to make things as easy as possible for newcomers, so that more and more people can enthusiastically join Drupal's Semantic Web initiative and start using Drupal for their Semantic Web projects.

Using the user interface that I planned to develop, someone with even a little knowledge of writing an RDF Schema would be able to create and register their own vocabulary.

Change of Plans :

I thus went ahead and posted my proposal on GSoC-11 Drupal Group.

As you can see here (http://groups.drupal.org/node/136969), the discussions turned out to be very fruitful indeed. Lin Clark advised me to have a look at the ongoing Neologism project (http://neologism.deri.ie/), which provides a free and open-source vocabulary publishing platform. It turned out to be functionally very similar to what I had planned for my project. However, Neologism was not yet available as a module for Drupal 7, and Lin also advised me to get in touch with Richard Cyganiak (http://richard.cyganiak.de/), who was actively working on Neologism development.

Neologism is a powerful codebase for publishing customised vocabularies that is already in quite some use in the RDF community, but using it in existing Drupal sites is difficult since there is no dedicated D7 module. Moreover, the code is hosted on Google Code. To confuse matters further, there is a very old version of the Neologism module on drupal.org, which was not updated as the project progressed on Google Code. It also has several dependencies, a few of which are not even easily available on the internet since the previously existing links are now broken. So there was scope for collaboration.

I contacted Richard and we discussed my project proposal over several emails. Richard informed me that he would soon be working on porting Neologism to D7, and I offered to do it as part of my GSoC project. I felt it was better to contribute to the Neologism module rather than create another module from scratch that overlaps in functionality with the upcoming Neologism module. Moreover, Neologism has many good features, like a vocabulary overview diagram and a time-tested user interface, which makes porting it to D7 even more worthwhile. Richard liked the idea of pooling our resources to work for a common cause, and he also agreed to mentor the project.


Finally, we came up with the following abstract for the project :

  1. Porting Neologism to D7
  2. Migrating the Neologism code-base and documentation from Google Code to drupal.org
  3. Updating the documentation and informing existing users about the change.
  4. Testing that the Neologism module works well in existing D7 sites
I intend to carry forward the work that has already been put into creating the Neologism vocabulary publishing platform by porting it to D7 and making it available to the huge Drupal community and any existing Drupal sites that want to use RDF with custom vocabularies.

At this moment, I believe that the Evoc module in D7 provides all the features that we need to successfully create the Neologism module.



Timeline for the Proposed Project:


April 25 - May 23 (Before official coding period starts) [Information Learning Curve and Background readings]
  • Familiarise myself with the current Neologism codebase and the Drupal RDF modules.
  • Go through the current documentation of the Neologism project.
  • Discuss the implementation plans and risks with the mentors.
  • Familiarise myself with the coding standards and development practices followed while creating Drupal modules.
  • Get used to working on the Drupal Repositories since code migration from Google code to Drupal repositories would also be a part of the SoC project.
   
May 23 - 29 (First week) [Familiarizing]

  • Fix some bugs/implement simple features for the current Neologism platform to familiarize further with the codebase.
  • Create a reference document describing how the module should appear to the end user at the end of the Summer of Code. Documentation at this stage would not go into technical details.

May 30 - June 5 (1 week) [DB Migration]
Neologism is currently running on D6. There are a lot of differences between the Evoc module in D6 and D7. Thus, we need to change the DB Schema of Neologism to match the D7 Version of Evoc.

This marks the End of Phase-1.
At this moment, we are ready to start porting Neologism to D7.


June 6 - July 24 (7 weeks)[Porting Neologism to D7]
This is the major task of the project. This task has been further divided into sub-tasks as follows :
Week 1 : Port the menu system and vocabulary list to D7

Week 2 : Port the vocabulary overview page to D7
Week 3 : Port the RDF output to D7
Week 4 : Provide the feature of importing and loading vocabulary by using the evoc module
Week 5 : Port the vocabulary creation/edit form to D7
Week 6 : Port the class/property creation/edit forms to D7
Week 7 : Port content negotiation and caching to D7

Also, during this period, I would need to carry out integration testing for the module.

This marks the end of Phase 2.
At this stage, we have a functional D7 port of Neologism module.

July 25 - July 31 (1 week) [Documentation Migration/Upgrading and Migrating the code to Drupal Repository]
The tasks planned for this phase are as follows :
  • Set up Drupal.org infrastructure for neologism module
  • Coordinate with documentation team to move existing documentation to drupal.org   
  • Update documentation wherever needed
  • Notify existing users of the changes
August 1 - August 7 (1 week) [Test the module on existing Drupal sites]
We would need to evaluate how the Neologism module works if installed into existing D7 sites and identify any issues. Currently Neologism is built as an installation profile which installs an entire site that provides just a vocabulary editor. There might be some initialization which was previously done during the installation procedure which would now need to be done when the Neologism module is installed into existing sites. We need to make sure there are no issues faced when the module is installed or reinstalled into existing D7 sites.


August 8 - August 14 (1 week) [Buffer period]

Buffer for general Neologism bugfixing/improvements as identified throughout the project

August 15 - GSoC Ends.
End of Phase 3.


Deliverables :

  • A functional D7 port of Neologism, which installs on existing sites without any major issues.
  • Updated Documentation of Neologism.
  • Documentation of the status of the module at the end of GSoC, and the plan of action for the future.
  • List of known issues in the module.


Link to the discussion created on Drupal Groups


I had already planned my idea well before GSoC. Thus, I was quick to draft my proposal initially on the Drupal GSoC-11 Group. You may find the discussion here : http://groups.drupal.org/node/136969

I also asked the members of the Semantic Web Group in Drupal-Groups to provide me feedback on my proposal. http://groups.drupal.org/node/137274


On the Drupal IRC channels (drupalcommerce and drupal-contribute), I got the opportunity to discuss my idea with a few people, who provided me with useful bits of information and guidance.
Mentors:

I tried to contact the people who have been actively involved with the development of RDF and related Modules in Drupal 7. Lin Clark(linclark) (http://lin-clark.com/), suggested I get in touch with Richard Cyganiak (cygri) (http://richard.cyganiak.de/), for mentoring me on my project since it relates to the Neologism project(http://neologism.deri.ie/) he had started and has been working on. I contacted Richard and he generously agreed to mentor me on my GSoC project.

Lin Clark (linclark) has offered to help me during the first few weeks to learn the customs of using Drupal.org issue queue and creating clean patches.

Guido Cecilio (guidocecilio), who is the current main developer of Neologism, will also be available to answer questions regarding the Neologism code and to coordinate his work with mine.

Stephane Corlosquet (scor) has agreed to help by answering questions regarding D6 to D7 migration.

Thus, I have the overwhelming support of the Drupal community to assist me during the course of my project.

Sunday, April 17, 2011

Project Plan for BIOMOD-2011

I am a part of the team DA-NanoTrons, representing my college DA-IICT in BIOMOD-2011.

Our team is -
  • Faculty mentors
    • Manish K. Gupta [DA-IICT, Gandhinagar]
    • Taslimarif Saiyed [NCBS, Bangalore]

  • Team members 
    • Avinash Parida
    • Denny George
    • Mayank Kandpal 

Project plan :

Part 1 : (BIOMOD-2011)
Providing an interface for users to input equations corresponding to 2D shapes, which will generate a caDNAno-friendly .json file as output. This file can be opened directly in caDNAno and the structure can be edited further there. (So it's like a basic caDNAno template creator, whose output can then be built into more complex structures in caDNAno.)

The application will be standalone for now and might be integrated into caDNAno later on.
We can even provide some default templates for some very basic equations.

Plan of Action :
1.1 Understand the format of caDNAno-generated json files and try to create simple files that are displayed correctly in caDNAno. This would be done by creating some simple files in caDNAno and studying the structure of the files after saving them. Initially, don't worry about 3D; just create 2D .json file structures and open them in caDNAno.

1.2 Hack through the cadnano ActionScript code-base to understand their auto-stapling algorithm.

1.3 What would the program do :
    1.3.1 Take an equation as input
    1.3.2 Generate the outline of the corresponding 2D shape
    1.3.3 Generate a single loop which fills the entire structure
    1.3.4 Divide the loop into 7000 parts (there is a reason behind 7000)
    1.3.5 Select a point to break the loop and thus create a single long scaffold.
    1.3.6 Assign each division a base (A, C, G or T), ordered according to the standard M13mp18 viral DNA sequence (a short illustrative sketch of steps 1.3.4 - 1.3.6 follows this list). I have a rough visualization of the expected output after this stage, which I will share soon.
    1.3.7 Use the auto-stapling algorithm to generate staples in the structure. (optional)
    1.3.8 Automatic staple-error correction feature (optional; we will most probably skip this in Phase 1)
    1.3.9 Create the caDNAno-friendly json file corresponding to the structure and staples we generated.
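The sketch promised above, for steps 1.3.4 - 1.3.6 (purely illustrative, written in PHP only because that is what the other snippets on this blog use; the real implementation will be in Java or Python as discussed below, the data structures are my own assumptions, and the sequence is a short placeholder, not the actual M13mp18 sequence):

// Break the closed loop at a chosen index and assign scaffold bases cyclically.
// $loopPoints would hold the 7000 division points produced in step 1.3.4.
$sequence = 'AATGCTACTACTATTAGTAG';   // placeholder; the real M13mp18 sequence goes here

function build_scaffold(array $loopPoints, $breakIndex, $sequence) {
    // Rotate the closed loop so it starts at the break point, giving one long open scaffold (1.3.5).
    $scaffold = array_merge(array_slice($loopPoints, $breakIndex),
                            array_slice($loopPoints, 0, $breakIndex));
    // Assign each division a base from the sequence, cycling if the scaffold is longer (1.3.6).
    foreach ($scaffold as $i => &$point) {
        $point['base'] = $sequence[$i % strlen($sequence)];
    }
    unset($point);
    return $scaffold;
}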

How we could divide the work :
To work on different things in parallel, we all need to be clear on how the structure will be stored in the backend at each stage. In other words, what the output format/structure of each stage would look like.
For example, if we are clear from the start what the backend will look like after step 1.3.7, then one person can start working directly on a manually created output of step 1.3.7 and work on how to create a caDNAno-friendly json file from the structure we finally come up with.
Thus, before we begin any coding, we need to be clear about what output we expect at each stage. For this, we first of all need to decide on a platform to work with. Considering the requirements, I am assuming Java (or Python) would be the best choice.
In case we find other platforms with better library support, we will use those. So the first step is to hunt down the available libraries for each task.
Parallel task 1 : 1.3.1 - 1.3.6
Parallel task 2 : 1.3.7
Parallel task 3 : 1.3.8

FallBack Plan for part 1:
If we are too technically handicapped to understand the auto-stapling algorithm, we can simply skip steps 1.3.7 and 1.3.8 and jump straight to 1.3.9, i.e., just generate the single long scaffold corresponding to the 2D structure represented by the equation and convert it to a .json format that can be opened in caDNAno. The user can then use the auto-stapling feature within caDNAno.


Part 2 : (to be done in next year's BIOMOD, or, if time permits (unlikely), within the current BIOMOD timeline)

2.1 Provide support for equations of 3D structures.

2.2 Either create a views interface so that the user doesn't need to switch to caDNAno just to check the output, or port the entire application as a caDNAno plugin.

Monday, March 7, 2011

Developing @cric : An SMS based app on TXTWEB

My first project at Kuliza was to create an SMS-based application for mobile phone users, through which they could receive live cricket score updates and the cricket schedule via SMS. As you might have guessed, the application was built keeping in mind the potentially huge user base during the ICC Cricket World Cup.

The main features of the Application are :
  1. View summary of all the live matches in progress
  2. View detailed score of a match
  3. Set a match as Favorite
  4. View schedule of upcoming ODIs, Tests and T20s.
  5. Predictor feature which enables users to vote who will win the match.

The application was immediately pushed into live production and within 3 matches, the app had already got a total of 16,000+ hits!

The usage statistics have been scaling new heights with each World-Cup match. Recently it crossed a total of 100,000 hits. We expect to cross 200,000 SMSes by the end of the World-Cup.

You may wish to try out the App: SMS @cric to 9243342000 to know live cricket scores and schedule of upcoming matches.


The Development :
The @cric Application runs on three core technologies :

1) TXTWEB SMS Engine
TXTWEB is Intuit's SMS Platform for Mobile App Developers. http://www.txtweb.com/ is an online network for developers of SMS based apps to showcase and promote apps and connect with each other.

2) Google App Engine
From http://code.google.com/appengine/ :
 "Google App Engine lets you run your web applications on Google's infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow."
I used the Datastore Java API of Google App Engine for storing data; it is a schemaless object datastore with a query engine and atomic transactions.

3) Google Web Toolkit
From http://code.google.com/webtoolkit/ :
 "GWT is a development toolkit for building and optimizing complex browser-based applications."
I used the Eclipse-GWT plugin for development.

The live scores are scraped from http://www.espncricinfo.com; the jsoup HTML parser library is used to extract the data. In case there is a problem with Cricinfo, I have implemented a backup scraper which uses http://scores.sify.com/index.shtml to get live-scores data. The schedule of upcoming matches is obtained from http://www.cricschedule.com.
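The actual scraping is done in Java with jsoup, but the idea is the same as in this rough PHP sketch (the page URL usage and the XPath selector are placeholders of my own; the real selectors depend on Cricinfo's markup):

// Fetch the live-scores page and pull out match summaries (illustrative only).
$html = file_get_contents('http://www.espncricinfo.com/');
$doc  = new DOMDocument();
@$doc->loadHTML($html);                 // suppress warnings caused by real-world HTML
$xpath = new DOMXPath($doc);
// Placeholder selector: pick the elements that hold the match summaries.
foreach ($xpath->query("//div[contains(@class, 'match')]") as $match) {
    echo trim($match->textContent), "\n";
}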

Sunday, March 6, 2011

New kid on the corporate block

It has been a couple of months since I joined Kuliza. The journey so far has been a  wonderful experience and I could not have wished for a better or happier start to my corporate career.

Like most freshers, the one big question that troubled me before I joined Kuliza was whether the work assigned to me would be to my liking. I love work that involves a lot of logical thinking, coding and using the latest technologies. And I was happily surprised when I was assigned my first live project within two weeks. I was asked to create an application for mobile phone users to receive live cricket scores and cricket schedule updates through SMS. The application was easy to code, but I learnt a lot of new things since I worked with the latest technologies like the Google Web Toolkit (GWT), Google App Engine and the TxtWeb SMS engine. Moreover, I also learnt how projects are managed in an organisation using versioning and other project management tools. The best part about this project was that it was immediately pushed into live production and people actually started using it to keep track of live scores.

It is always a dream come true for a Developer to see their app being used by the masses. And I was overwhelmed when within 3 matches, the App had already got a total of 16000+ hits. :) With the Cricket World-Cup round the corner, hopefully the app will gain wide popularity.

You may wish to try out the App : SMS @cric to 9243342000 to know live cricket scores and schedule of upcoming matches.
[Normal SMS charges apply. No extra charges.]

Currently I am working on a project related to the Semantic Web, which involves intelligently fetching data from a set of resources according to the user's needs, using LOM and RDF to add intelligence to the system. I will soon blog more about this project.

In spite of having an array of hard-core technologists who are ready to work round the clock when the situation demands, we never lose out when it comes to having fun. My short stint at Kuliza has been a delightful experience, not only because I love the work I am doing, but also because of the many enjoyable moments I have spent here. Be it my first 'official' outing with the gDev team (Uday, Rohit, Nikhil and Gaurav) or the 'Resort-cum-Paint-Ball' interns' day out with Deepak 'Sir', Achal and all the other interns. We are always looking for a reason to celebrate at Kuliza, be it Christmas or kite-flying. For the gamers, Friday nights are reserved exclusively for LAN gaming.


Apart from all this, the weekly bizKul sessions for the interns, which are meant to help us transition from college culture to corporate culture, provide a refreshing change with their innovative activities.

My internship will continue for another couple of months. I am happy to have learnt so much and to have met so many wonderful people in such a short span of time at Kuliza.

ZA-Life ftW!!!


P.S. Looking forward to the next Hackathon :D