Tuesday, November 23, 2010

Demystifying Google Chrome Threading Model

This post is about technical material that might interest developers looking to enhance their multithreaded programming skills.


Maybe the best way to learn is to study proven designs that others have already built. And what better example than how Google did it in its Chrome browser?


Google's threading model consists of three components:
- Thread class, which encapsulates the operating system threads. It abstracts the Chrome browser from operating system details: whether it is Windows, Linux, or another OS, Chrome's code deals only with the Thread class.
- MessageLoop class, which contains queues for receiving and handling tasks. This is the way to communicate with the thread. Each Thread owns one message loop. MessageLoop is subclassed to handle tasks in specialized manners.
- MessagePump class, which could be better described as an orchestrator, drives the MessageLoop. MessagePump comes in different flavors depending on the mission it is designed for.

So a Thread object owns a MessageLoop object, which in turn owns a MessagePump object. When this system runs, it goes like the following:
after the thread is initialized and started, it calls its Run() method, which subsequently calls the Run() method of the MessageLoop object, which in turn calls the Run() method of the MessagePump.
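To make the ownership chain concrete, here is a minimal sketch of the three classes and the Run() delegation chain. The class and method names mirror this post's description, not Chrome's actual source; the call_order vector is only there to illustrate who calls whom.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records which Run() methods execute, and in what order (illustration only).
std::vector<std::string> call_order;

class MessagePump {
public:
    void Run() { call_order.push_back("MessagePump::Run"); }
};

class MessageLoop {
public:
    void Run() {
        call_order.push_back("MessageLoop::Run");
        pump_.Run();  // the loop hands control to its pump
    }
private:
    MessagePump pump_;  // each MessageLoop owns one MessagePump
};

class Thread {
public:
    void Run() {
        call_order.push_back("Thread::Run");
        loop_.Run();  // the thread's entry point runs its message loop
    }
private:
    MessageLoop loop_;  // each Thread owns one MessageLoop
};
```

Calling Thread::Run() produces the chain Thread::Run → MessageLoop::Run → MessagePump::Run, exactly the sequence described above.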






MessagePump

The default behavior of a MessagePump is an endless loop that goes through the following steps:
- tell the message loop to execute the pending tasks
- check whether a quit order has been issued; if so, quit, otherwise continue
- tell the message loop to execute delayed tasks
- check again whether a quit order has been issued; if so, quit, otherwise continue
- tell the message loop to execute any idle work
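The five steps above can be sketched as a pump loop driving a delegate interface. This is a simplified illustration under the assumption that the MessageLoop implements such a Delegate; the names are hypothetical, not Chrome's real API. The FakeDelegate is only there to show the loop terminating.

```cpp
#include <cassert>

// Interface the pump drives; in this sketch it would be implemented
// by the MessageLoop.
class Delegate {
public:
    virtual ~Delegate() {}
    virtual void DoWork() = 0;         // run pending tasks
    virtual void DoDelayedWork() = 0;  // run delayed tasks that are due
    virtual void DoIdleWork() = 0;     // run idle work
};

class MessagePump {
public:
    void Run(Delegate* delegate) {
        for (;;) {
            delegate->DoWork();         // step 1: pending tasks
            if (should_quit_) break;    // step 2: quit check
            delegate->DoDelayedWork();  // step 3: delayed tasks
            if (should_quit_) break;    // step 4: quit check
            delegate->DoIdleWork();     // step 5: idle work
        }
    }
    void Quit() { should_quit_ = true; }
private:
    bool should_quit_ = false;
};

// Tiny fake delegate that asks the pump to quit on its second DoWork() call.
class FakeDelegate : public Delegate {
public:
    explicit FakeDelegate(MessagePump* pump) : pump_(pump) {}
    void DoWork() override { if (++work_calls_ == 2) pump_->Quit(); }
    void DoDelayedWork() override { ++delayed_calls_; }
    void DoIdleWork() override { ++idle_calls_; }
    int work_calls_ = 0, delayed_calls_ = 0, idle_calls_ = 0;
private:
    MessagePump* pump_;
};
```

Note how the quit flag is checked between steps, so a Quit() issued while running tasks takes effect before the next phase of the loop.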


MessageLoop

The MessageLoop has four queues to manage tasks: the incoming-queue, which stores any new task submitted to the thread; the work-queue, which contains the tasks to be executed; and the delayed-work-queue, which contains tasks that should be executed sometime in the future.
The fourth queue, the deferred-non-nestable-work-queue, has to do with reentrancy and will be discussed later.
All queues work in FIFO order.

When a task is submitted to the Thread, it is inserted into the incoming-queue. When the MessagePump tells the message loop to execute pending tasks, the MessageLoop checks whether the work-queue contains tasks. If it is empty, all the tasks in the incoming-queue are transferred to the work-queue; if it already contains tasks, execution continues normally.
The MessageLoop then pulls a task from the work-queue and checks whether it should be executed at once. If the task is delayed, meaning it should run sometime in the future, it is inserted into the delayed-work-queue; otherwise it is executed right away.
Observers are components that want to know what tasks are being executed in the MessageLoop. For this reason they register their interest with the message loop, which notifies all observers before and after the execution of each task.
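The two-queue scheme and the observer notifications can be sketched as follows. This is a hypothetical, simplified model of what the post describes: PostTask, DoWork, ReloadWorkQueue, and the Observer interface are illustrative names, and real Chrome code is considerably more involved.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

using Task = std::function<void()>;

// Components register an Observer to be told before/after each task runs.
class Observer {
public:
    virtual ~Observer() {}
    virtual void WillProcessTask() = 0;
    virtual void DidProcessTask() = 0;
};

class MessageLoop {
public:
    // May be called from any thread, hence the lock on the incoming-queue.
    void PostTask(Task t) {
        std::lock_guard<std::mutex> lock(incoming_lock_);
        incoming_queue_.push(std::move(t));
    }

    void AddObserver(Observer* o) { observers_.push_back(o); }

    // Called by the pump on the loop's own thread: run pending tasks.
    void DoWork() {
        if (work_queue_.empty())
            ReloadWorkQueue();  // one lock acquisition for a whole batch
        while (!work_queue_.empty()) {
            Task t = std::move(work_queue_.front());
            work_queue_.pop();
            for (Observer* o : observers_) o->WillProcessTask();
            t();
            for (Observer* o : observers_) o->DidProcessTask();
        }
    }

private:
    void ReloadWorkQueue() {
        std::lock_guard<std::mutex> lock(incoming_lock_);
        std::swap(work_queue_, incoming_queue_);  // drain incoming in one shot
    }

    std::mutex incoming_lock_;
    std::queue<Task> incoming_queue_;  // shared; guarded by incoming_lock_
    std::queue<Task> work_queue_;      // private to the loop's thread
    std::vector<Observer*> observers_;
};
```

Only the incoming-queue is locked; the work-queue is touched exclusively by the loop's own thread, which is precisely the performance argument developed below.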

The good question to ask here is: why have both the incoming-queue and the work-queue? Why not the incoming-queue alone? It is all about performance. Each time the incoming-queue is accessed for input or output, a lock must be acquired to avoid data corruption. So, to reduce the number of lock acquisitions by the MessageLoop, every once in a while (when the work-queue becomes empty) the lock is acquired once and the entire content of the incoming-queue is transferred to the work-queue. This way the incoming-queue is free to receive more tasks while the message loop is busy executing those on the work-queue.
It is worth noting that this strategy pays off when the number of incoming tasks is high. If the rate of incoming tasks is low, there is little performance gain, since the MessageLoop still spends time locking and transferring tasks from the incoming-queue to the work-queue.

Reentrancy

Generally speaking, a function is reentrant when it can be called again even though its first call has not finished yet. A recursive function is an example of reentrancy.

MessageLoop is reentrant, meaning that while it is executing a task, its Run() method can be called to execute subsequent tasks even though the first one is not over yet. In this case we call the first call the outer MessageLoop and the second call to Run() the inner MessageLoop. (Important: we are talking about the same MessageLoop instance; "outer" and "inner" qualify the first and second calls to its Run() method.)

Consider an example where a task opens a modal dialog box. This dialog box must be able to respond to the user's input, so the message loop has to keep executing tasks! However, since it has not returned from the task that created the modal dialog box, it can't proceed with the other tasks (this is the outer MessageLoop). To make this possible, the modal dialog box calls the Run() method of the MessageLoop as follows (this is the inner MessageLoop):


bool old_state = MessageLoop::current()->NestableTasksAllowed();
MessageLoop::current()->SetNestableTasksAllowed(true);
MessageLoop::current()->Run();  // inner loop: keeps processing tasks until told to quit
// Restore the previous nesting state.
MessageLoop::current()->SetNestableTasksAllowed(old_state);

This ensures that the modal dialog box is responsive and the tasks are being handled by the same MessageLoop.

Tasks that may run in an inner MessageLoop are called nestable tasks and have their nestable property set to true.
If a task is not nestable, it must be executed only after the outer task completes. To make this possible, the inner MessageLoop does not execute the task but adds it to the deferred-non-nestable-work-queue. When all inner message loops finish their job and exit, the outer MessageLoop executes the tasks that were pushed to the deferred-non-nestable-work-queue.
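The deferral of non-nestable tasks can be sketched with a run-depth counter: an inner Run() bumps the depth, and any non-nestable task encountered at depth greater than one is parked until the outermost loop regains control. This is an illustrative model only; the names and the single work queue are simplifications.

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct PendingTask {
    std::function<void()> work;
    bool nestable;  // may this task run inside an inner (nested) loop?
};

class MessageLoop {
public:
    void Post(std::function<void()> work, bool nestable) {
        work_queue_.push_back({std::move(work), nestable});
    }

    void Run() {
        ++run_depth_;  // depth 1 = outer loop, >1 = inner loop
        while (!work_queue_.empty()) {
            PendingTask task = std::move(work_queue_.front());
            work_queue_.pop_front();
            if (run_depth_ > 1 && !task.nestable)
                deferred_.push_back(std::move(task));  // park for the outer loop
            else
                task.work();
        }
        --run_depth_;
        if (run_depth_ == 0) {
            // Back at the outermost level: run what the inner loops deferred.
            while (!deferred_.empty()) {
                PendingTask task = std::move(deferred_.front());
                deferred_.pop_front();
                task.work();
            }
        }
    }

private:
    std::deque<PendingTask> work_queue_;
    std::deque<PendingTask> deferred_;  // deferred-non-nestable-work-queue
    int run_depth_ = 0;
};
```

A task "A" that posts a non-nestable "B" and a nestable "C" and then re-enters Run() sees "C" execute inside the inner loop, while "B" waits until the outer loop is the only one left.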

This was a brief, simplified overview of how threading works in the Google Chrome browser. It is a good starting point for those scratching their heads looking for a solid, proven design for implementing threading.

Saturday, November 20, 2010

How to become world class developer

Every developer dreams of reaching world class rank. But this rank is not given to everyone; it usually requires working in a big company on a big project. Working on small projects limits the horizon of one's thinking and hides much of the complexity of big projects and the approaches used to solve it.


While books are necessary to learn, they stop short of giving real experience. Books are good for getting started: learning the methodology, the object model, and the APIs. No doubt this package is necessary, but it is not sufficient. No book can explain the details of a project and the pros and cons of every technical decision made in the course of development.


The picture looks somewhat dim for those who are not lucky enough to work in giant companies and produce software used by millions of people.


But this is not completely true. There is a way to become a top notch developer even while working on small projects. The solution exists, but it is not free: it costs time and effort. It is only for people who are passionate about development.


The large majority of commercial software has its code hidden, restricted to the teams working on it. But there are also very important open source projects that are accessible to everyone and used all around the world. This software is written by highly skilled engineers, either as part of their daytime job or in their free time. Either way, these projects are "gold mines" for developers aiming to acquire new skills.


So the solution is to find the right open source project, one that interests the person wanting to learn. Then try to build the project and start studying the code. At first it looks like a daunting task; it might fail a few times, but success comes with persistence and perseverance.


Every open source project has forums and discussion groups, but almost all of them lack complete, clean documentation. So a good approach is to grasp the overall architecture of the project and whatever documentation is available around it, then start looking in the code for the following points:
- Coding style: it is important to understand how the lines of code are written, because it helps improve readability.
- Tips and tricks: every module or sub-module of the project brings a solution to a problem. Some are straightforward; others are trickier and take an indirect approach. It is crucial to learn these tricks in order to use them when facing similar problems.
- Global architecture: while tips and tricks operate at a granular level, the global architecture gives the big picture and explains the decisions made to solve key problems. It is worth mentioning that there is no single ideal solution, only a solution that fits the priorities established by the project authors.


After understanding the code, or at least part of it, it is important to get involved with the community working directly or indirectly on the project. For this reason it is imperative to visit the project forums on a regular basis and actively contribute to the discussions. This will deepen the understanding of the project and of the reasons why some decisions are taken.


Another major point is to discover bugs and propose solutions for them. Many open source projects have contribution policies with a mandatory path that goes through discovering and fixing bugs. Once a certain number of bugs and fixes is reached, the person is allowed to become a committer, which means contributing to the development of the project. When such a stage is reached, congratulations! You are now a world class developer.

Sunday, October 10, 2010

VC++ Tip: Show Includes

The following is a useful Visual C++ tip that solves a frequent problem any VC++ developer might face.
There are times when you get errors from files you never created or included in your code.
This can be very frustrating and hard to debug, especially if that file is buried deep in the #include tree.

Consider the following example where you get an error such as
c:\program files\microsoft visual studio 8\vc\include\math.h(486): error C2084: function 'long abs(long)' already has a body

You might scratch your head trying to figure out what this means, knowing that you never included "math.h" in your code.
One way to proceed is to open each nested #include file to determine where this file is included. However, you will soon discover
that you have entered a complex labyrinth from which the way out is hard to find.

An easier approach is to tell the compiler to list all the files included during compilation (this corresponds to the /showIncludes compiler option):
1. Open the project's Property Pages dialog box.
2. Click the C/C++ folder.
3. Click the Advanced property page.
4. Set the Show Includes property to 'Yes'.






On compilation you get a list like the following:
:c:\globalProjects\localProjects\srcCtrl\srv\src\myproj\util\net\ServerBase.h
: c:\globalProjects\localProjects\srcCtrl\srv\src\myprojTools\myprojUtils.h
: c:\globalProjects\localProjects\srcCtrl\srv\src\myproj\util\net\NetworkConnection.h
: :c:\globalProjects\localProjects\srcCtrl\srv\src\myproj\util\Obj.h
: : C:\Dev\Dependencies\thrdParty\extension\thrdParty\stlport\stl\_threads.h
: : :C:\Dev\Dependencies\thrdParty\extension\thrdParty\stlport\stl\_cstddef.h
: : : C:\Program Files\Microsoft Visual Studio 8\VC\include\stddef.h
: : : :C:\Program Files\Microsoft Visual Studio 8\VC\include\crtdefs.h
: : : : C:\Program Files\Microsoft Visual Studio 8\VC\include\sal.h
: : : : C:\Program Files\Microsoft Visual Studio 8\VC\include\crtassem.h
: : : : C:\Program Files\Microsoft Visual Studio 8\VC\include\vadefs.h
: : :C:\globalProjects\thrdParty\extension\stlport\stl/_cstdlib.h
: : : C:\Program Files\Microsoft Visual Studio 8\VC\include\stdlib.h
: : : :C:\Program Files\Microsoft Visual Studio 8\VC\include\crtdefs.h
: : : :C:\Program Files\Microsoft Visual Studio 8\VC\include\limits.h
: : : : C:\Program Files\Microsoft Visual Studio 8\VC\include\crtdefs.h
: : : C:\globalProjects\thrdParty\extension\stlport\stl\_cmath.h
: : : :C:\Program Files\Microsoft Visual Studio 8\VC\include\math.h
: : : : C:\Program Files\Microsoft Visual Studio 8\VC\include\crtdefs.h
>c:\program files\microsoft visual studio 8\vc\include\math.h(486) : error C2084: function 'long abs(long)' already has a body


Here you can see the whole #include tree leading up to the error. Notice how deep math.h is included, far below ServerBase.h.


It would be extremely difficult to detect its presence without this option.

Sunday, October 3, 2010

Quality Content Comparative Analysis

There are plenty of ways to create content on the internet: blogs, articles, videos, forums, Q&A, etc.
There is also one largely unexploited way to create quality content: chatting.
Certainly there are pros and cons to each method. Below is a comparative analysis.

Articles and blogs are maybe the best-known methods. Their advantage is that they can be complete and detailed, with full exposure of the domain or point of view. On the other hand, they are the least interactive. Even though most of them have commenting features, it is rare to see true interaction between authors and readers.

Video content is now famous thanks to Youtube, especially as plenty of people are trying to produce quality videos. Consuming this type of content is far easier than reading, especially when it is presented in an appropriate manner. The problem with video content is mostly the availability of topics; it is still far behind written text. Interactivity is no better than blogs; it consists only of viewer comments.

Forums and Q&A, on the other hand, offer better interactivity but less complete content. They are usually centered on a post or a question that people discuss and exchange opinions about. The content is not really detailed information but rather the exposure of a problem the community is asked to help solve. And although interactivity is higher, it is not real-time, so the interested party needs patience before the community engages with the solution.

Chatting has always been the fun activity, a way to meet other people, often of the opposite sex. However, it does not have to stay this way. People are interested in topics, as clearly seen in the comments on blogs and videos described earlier. So chatting can be part of quality content creation if it is presented accordingly. Any event, news item, or article can trigger a discussion between supporters and opponents. Services like www.SimpleConnexion.com give people the option to find others interested in a particular subject, exchange opinions with them in real time, and at the end publish the content of the discussion so that other people can benefit from the exchange.
The advantage of this method is that it is real-time: people get replies in a matter of seconds. It also creates better understanding, because a person can easily ask his peer to clarify an answer or opinion. And, as just explained, the content of the discussion can be published.
The difficulty is finding someone to chat with on a specific topic at a particular time. This difficulty will be overcome as more and more people adopt this kind of chatting.

In conclusion diversity of methods leads to richness and creativity as well as provides suitable ways for different people according to their preferences and needs.

Sunday, September 26, 2010

Added Value Chat

There is chat for fun, and there is chat for the sake of sharing knowledge and information. So far, sharing knowledge has been restricted to the realm of forums and blogs. Informational chat, on the other hand, exists within enterprises via instant messaging (IM) systems.

In companies, colleagues and project members refer to shared documents for reference and how-to information. But efficient knowledge transfer generally occurs after talking to other people. The key to this efficiency is the informal way the subject is tackled and the direct approach: a direct question and a direct answer. This is not available in formal documentation, which contains lots of literature explaining the context and scope of the document before delving into the real material. Certainly such documents are most needed when starting a project, when project members need to know all the factors involved.

However, as the project goes on, questions start to arise here and there. Sparse questions are met with sparse answers, with lots of email exchanges or direct discussion. Due to workload and deadlines, the documentation is rarely, if ever, updated to reflect the topics discussed. The result is the loss of crucial information used on a day-to-day basis.

One possible solution to this problem is to encourage discussion between team members via a chat system, save those discussions, and make them available to others so they can benefit from the information they contain.

www.Simpleconnexion.com is a service that pushes forward this concept.

Friday, September 24, 2010

Interview Your Contacts

Whether for professional purposes or for fun, any discussion between two people can be viewed as an informal interview in which opinions and/or information are exchanged. Interviews are not only formal discussions between a journalist and a celebrity; an interview can also be a chat between two friends or two colleagues. What matters is the information such interviews contain and how valuable it is to the public.

Obviously interviewing takes many forms: it can happen on TV, in a newspaper or magazine, and it can also happen online! With the unprecedented success of the internet and the adoption of the web, there is no reason interviews can't be done online.

Actually, informal interviews have been practiced since the early days of the internet through chat services. The only difference from formal interviews, other than style of course, is persistence: while formal interviews are published and made available for virtually anyone to read, informal chats have mostly been confined to the private domain.

There's no doubt that chatting is a form of interview. Even if it does not involve a well-known celebrity or an experienced journalist, it can still contain valuable information that could benefit others.

Consider an example where one is asking a colleague about a technical matter related to a certain project. Sometimes this exchange happens over email, but at other times it happens over a chat service like MSN. The best thing to do at the end is to publish this discussion, which can also be thought of as an interview. Even though no nationally or internationally known figures are involved, the contributors are valuable to their colleagues and other project members, and the issue they discussed is certainly important to them.

The www.SimpleConnexion.com concept goes in this direction and tries to promote the publishing of added-value discussions.

Thursday, September 23, 2010

Chatting or Blogging

It might seem odd to compare two distinct activities; one might think it is like comparing apples to oranges! Although at first this appears to be the case, when looking closer some similarities emerge.

Blogs (web logs) are usually maintained by an individual with regular entries of commentary, descriptions of events, or other material such as graphics or video. Most blogs are interactive, allowing visitors to leave comments and even message each other via widgets on the blogs and it is this interactivity that distinguishes them from other static websites.

On the other hand, online chat can refer to any kind of communication over the Internet, but primarily means direct one-on-one chat or text-based group chat, using tools such as instant messengers, Internet Relay Chat, talkers, and possibly multi-user dungeons (MUDs). The expression online chat comes from the word chat, which means "informal conversation".

So the differences between blogs and chat are persistence and formality. Blogs are persistent: they exist on a webpage and can be accessed at any time. They are also formal (at least a good part of them), addressing a whole audience rather than a particular person. Chat, on the contrary, is transient and ephemeral; it does not live beyond the discussion, and of course the majority of chat is informal.

Although the differences seem to weigh considerably, similarities exist nonetheless. Blogging and chatting can both carry information that is valuable to others: a person seeking a piece of information, or looking to solve a problem, can find it either on a blog or in a chat room. The only concern with chat is that the information might have already passed or not yet arrived, so timing is critical.

However, this can be solved by simply saving a discussion and converting it into an informal blog. This method benefits the reader, who sees an interaction between two people, which means specific questions and clarifications may also come up in the course of the discussion. And the person giving the information is freed from the task of writing a formal blog.

In this context www.SimpleConnexion.com is looking to promote this concept.

Monday, August 30, 2010

Educational Youtube (part 2)

Shortly after I wrote the blog post about Educational Youtube and the effective role it can play in the future of education, I was introduced to an amazing website that offers free educational courses using videos hosted on Youtube. Did I mention it's free? Well, it is FREE.

The site is run by just one man, who has made some 1,600 videos separated into three main categories: Math, Science, and Others. His videos are short and easy to understand; in other words, you get the essentials of each course in just ten minutes, which is efficient and straightforward. You don't have to sit in a classroom and listen to a lengthy, boring lecture. All you have to do is pick a subject from the long list and pay attention to the explanation for a few minutes, and that's it. Later you can come back for another ten minutes and continue the course.

But that's not all! The story does not end here. Sal Khan, the creator of this video library, www.KhanAcademy.com, got a really unexpected and unusual fan: Bill Gates. According to Fortune, when Bill Gates learned about Khan he said, "this guy is amazing".

He actually is. He might become a leader in innovative education with his stock of videos; he is certainly able to reach a large base of potential students and change the way they learn.
Of course, this is also made possible by the infrastructure laid down by the Youtube service.

However, what is missing in this system as a whole is the interaction between student and teacher, which is also a building block of comprehension. But we are only seeing the beginning of the evolution, the tip of the iceberg. The technology will soon catch up and try to fill the gap.

Check the following link for more details about the story of Sal Khan: http://money.cnn.com/2010/08/23/technology/sal_khan_academy.fortune/index.htm

Sunday, August 29, 2010

Educational Youtube

The conventional ways to learn new material are to enroll in a course or read a book. However, these are no longer the only options we have nowadays.

Actually, those two methods certainly have their advantages, such as getting some sort of certification after completing a course, which gives you credibility that you really know what you claim to know. As for reading a book, it is still a handy and convenient option, since you can take it with you while travelling by train or plane, or even to the beach. Note that even with these features, books are being challenged by technology with the advent of the iPad.

However, those methods have disadvantages too. A course involves traveling to a certain location on a certain date at a certain hour. For working people this is not always easy, and even when they manage it, they might not be in shape to fully absorb the explanation, especially after a bad day. Books, for their part, also require energy to read and assimilate. In addition, if the content needs to be practiced, there is constant switching between reading and practicing.

Luckily, this seems to be changing thanks to Youtube! Until recently I used Youtube to watch funny movies. But the other day I had the idea to search for a technology-related subject, and I was surprised to find tens of videos on that topic.
Running a few of them, I realized how efficient this method is. The explanations are directly backed by a demonstration, so the viewer doesn't lose focus switching from book to computer. When in doubt, or if you miss something, you can replay the clip as many times as you need. You can pause to rest and resume later. In short, you have all the advantages you don't have in a real-life course. Plus, it is totally FREE.

This reminds me of the open-source model, where people spend considerable amounts of time and effort building software only to give it away for free. The same thing seems to be happening here: people make series of learning videos and put them on YouTube for free. Surely they expect to get something in return by becoming well known in their domain, but the public nevertheless profits from this model by getting free learning and educational material.

Could we see students logging in to their Youtube class in the future?! Nobody knows for sure, but one thing is certain: technology is reaching every aspect of human life.

Twitter Source of Discussion

Beyond any doubt, Twitter is the number one microblogging service. With its 50 million tweets a day, it sits on top of the "traffic chain" (a term borrowed from "food chain") and constitutes an enormous source of information.

Actually, the Twitter stream is a mixture of information and opinions, where measuring a piece of information's popularity is possible thanks to retweeting. However, there is a gap in this system: tweets are not long-lived. In fact, their life expectancy does not exceed a few seconds, so catching a tweet among 50 million is like searching for a needle in a haystack.

Because tweets carry information, and information can be useful to many different people, those people may use it in different ways according to their needs and objectives.

One of these objectives is to discuss the information contained in tweets and exchange opinions about it. One method is to reply to the original tweet with a comment. The drawback of this method is the lack of interactivity: the source of the tweet might not be available to reply, or might simply be uninterested. Other people might be happy to discuss it, but the problem is how to find them.
Besides information, a tweet carries an opinion. People interested in certain news or information are naturally tempted to tweet it; in other words, they embed their own preferences in their tweets. These people are more likely to be ready to talk about their tweets and discuss them.

Many products and services are trying to benefit from these two features. One of them is www.SimpleConnexion.com, which tries to create discussions between people around a given tweet.

Thursday, August 26, 2010

Q&A websites review

Social networking ideas are hatching everywhere. There are basically endless ways to bring people together; the aim is to let people meet, discuss, and interact.

The new trend nowadays is Questions & Answers websites. People with questions go there and ask them, and people who have answers, whether right or wrong, reply.

Forums have been doing this for ages, but with a major difference. Forums are most of the time specialized: you find IT forums, medical forums, parenting forums, etc. If you have a question, first you need to find the right forum. Then you have to subscribe, create a thread, and post your question. This can seem tedious for those who need a fast answer.

Questions & Answers websites take another approach to the problem. You find one of these websites; you still have to subscribe, or log in with your Facebook account on sites that support Facebook Connect. Then you ask your question, and the website takes charge of finding people who might answer it. This alleviates the task of finding the right place and the right people.

Forums are still very efficient when you have a very specific question. For example, if you are a developer looking to solve a problem, it is far better to search for it in a specialized forum, because all its members are IT professionals.

Conversely, if you have a general question related to an event, a celebrity, or business at large, Q&A sites will be more useful, because they gather people from different domains and backgrounds.

Another major benefit of Q&A is socializing. Because people may ask general questions, there is a better chance of creating a discussion, whereas a specific question means you are mostly interested in a good answer that solves your problem, without time to chitchat.
There are many Q&A websites out there, but the best known are Quora.com and Vark.com.

However, the remaining drawback is the absence of direct one-to-one discussion with interesting people. This is what http://www.simpleconnexion.com/ is trying to solve.

Saturday, July 17, 2010

Make Your Own Market Study For Free


If you are one of those people seeking to create a new startup, you might have ideas you believe in and want to take on a new adventure. But before you start, you need to test the ground and discover the readiness of the market. Every project needs a market study to assess its viability. If you have big bucks in your pocket, you can delegate this task to a company that will do it for you and hand you the results while you play golf. However, if you are a startup founder, the last thing you should do is spend money uselessly. Every penny counts for entrepreneurs and their startups.

A market study is used to determine consumer interest in your product and to weigh the competition. Based on the results, you decide whether or not to enter the domain.
Although there are professional companies specialized in market studies, you can do one on your own, for free, using simple tools:
- Google Trends
- Google Search
- Alexa.com

To start, you might be interested in data about consumer needs. People usually use Google to search for what they are looking for, and luckily Google gives you a way to find out what that is. By going to Google.com/trends you will discover the keywords people used in their searches. Enter the keywords related to your new business and check the results, trying several combinations to get the best ones. For example, if you want to give online photography courses, you might search for "online photography courses" or "photography courses online" (the order of the keywords matters). If you are interested in one region of the world, you can specify that too. If the index given by Google Trends is not sufficiently high, you might not have a real market, you might be using the wrong keywords to describe your business, or you may need to find other ways to get data.

Once you are satisfied with the keyword results and you think there might be enough demand, the next step is to size up the competition. Use Google Search to find sites in your domain using the same keywords you used in Google Trends. Start by checking the sponsored links displayed by Google, because these companies are marketing aggressively and you surely need to know what they are offering. Then take most of the results that appear on the first two pages of Google; check them one by one and see how they differ from your ideas and what features they offer users.

After you have assessed the quality of these sites, you need to know how popular they are and whether you will be able to compete against them. Go to Alexa.com, search for each of these websites, and check their rank and traffic rank history. The current rank tells you how popular they are; if it is below 10,000 they are very strong and you should expect fierce competition. The traffic rank history gives you an idea of how long they took to reach their current position, and thus an overview of how hard or easy the road ahead is.

In some cases you can get additional information about their financial situation, such as sales and revenues, either directly from their website or from sites such as Tracked.com, although it is not guaranteed that you will find useful data.

Now you have all the information in hand. Based on these figures you can take a moment to evaluate your chances of success and the stiffness of the competition.

Friday, July 16, 2010

The Uptime Curse

There are times when you feel low, depressed and frustrated, and other times when you have high morale, enthusiasm and optimism. While the former feeling is counterproductive and should be treated quickly, the latter is not necessarily a good thing either. If not handled smartly, it won’t boost productivity as expected.

As a matter of fact, high morale raises self-confidence and may lead to overconfidence, which can have severe consequences on your ability to make the right decisions. When experiencing an uptime it is crucial to keep your feet on the ground and avoid thinking that everything is possible regardless of the required resources and effort.

Ignoring this fact might lead you into an endless labyrinth, resulting in failure to achieve any of your goals and shortening the way to another state of depression, pessimism and frustration.

To avoid this pitfall, you should understand that this sort of ecstasy is certainly transient and does not make you a superman. You can’t go beyond your capabilities; if you try, you will burn through this energy in no time and return to square one.

Once you accept that a psychological burst won’t let you achieve what you can’t do normally, you should use this energy wisely and try to get the most benefit from it.
To do that, concentrate your efforts on one or a few tasks or goals. Do not overestimate your capacity; just prioritize the tasks or projects you have and focus on the most important ones.

Avoid spreading your efforts over multiple subjects, because you will soon discover that you are not progressing on any of them, and that’s bad for your morale and your body.

Remember that the uptime has its own curse too. Neglecting this fact will condemn you to failure and frustration.

Friday, July 9, 2010

How to Write Original Content

Back in the early school days, it was very common for students to be asked to construct a “meaningful” sentence using a verb such as “eat”. Usually most of the boys and girls would answer “The boy eats an apple”. The sentence is grammatically correct; however, the teacher was not happy seeing tens of similar copies. The same goes for search engines!

Search engines, just like teachers, do not like to see the same content recurring over and over, so they degrade its rank. However, you don’t need to be Shakespeare or Einstein to create original content that pleases search engines.

Actually, original content is about the writing, not the subject. You really don’t have to come up with something totally new, creative and innovative. If you can, that is wonderful; if you cannot, it is not a catastrophe.

Here are few steps to accomplish something original for your own blog:

  1. Determine the area of interest on which you want to write
  2. Search on the internet for articles, documentation, or even other blogs related to this subject
  3. Read 3 or more of these articles and try to fully understand what they are saying
  4. Take a 15-minute pause
  5. Rewrite in your own way what you have understood from those readings
  6. Congratulations, you have now an original content that search engines will like
If you are thinking this might be somewhat fraudulent, rest assured it is not! Quite the contrary: you are contributing to the general knowledge and culture of the people who are looking for the subject you wrote about.

In fact, people have different ways of reading and assimilating material. They might not be able to understand one article because it is written in a way that is counterintuitive for them, so they drop it very fast and search for something similar. Because different people have different ways, there will be a group that appreciates your article over others.

Therefore, your content has become original and useful.

Monday, July 5, 2010

What Is Selective Chat

Since the old days of IRC until now, the expression “Hi, asl plz?” has been the must-have opening for every chat session between two strangers. Furthermore, if you are a male you won’t accept less than something like “25 F …”

Due to this lack of information, discussions are very slow to start and rarely scoped to a certain topic or subject. It is true that most chatters are males seeking females; this is what the statistics reveal and can’t be denied. However, any discussion would be far more interesting and fruitful if the two parties had a minimum of information about each other and kept it centered on a topic of common interest.

It is not unusual to meet someone who wants to talk with a person having a certain intellectual background and education level, or experience in a very specific domain. There are thousands of people who are seeking information and are not sure where to find it or how to use it. Obviously it helps if they can meet someone online who can assist them in real time. No doubt this type of information exchange and collaboration enriches the discussion and avoids wasting time on lengthy introductions that, most of the time, end in frustration.

The importance of selective chat, as opposed to random chat, lies in the fact that the latter is full of bad surprises and annoyances, while the former is based on a careful selection of the peer with whom to open a discussion. This benefits both parties, because they can build on their discussion to start new ones and maybe evolve their cyber relation into a real-life one.

In this perspective, www.simpleconnexion.com is trying to offer something of value to its users by making it intuitive and friendly to use, avoiding the hassle of a long and boring registration process and profile management. Check the About Us page for more info.

Wednesday, June 30, 2010

Breeding Ideas

According to the Merriam-Webster online dictionary, one possible definition of the word ‘idea’ is a “formulated thought or opinion”. So if you are looking for new ideas you need to think, because they do not just come by themselves. Even if you sometimes believe you got an idea spontaneously, that is only an illusion: in reality your subconscious was busy working on it and was waiting for some sort of trigger to send it to your conscious mind.

So how are ideas created? Usually ideas come from unsolved problems or unsatisfying solutions. People who struggle with certain matters on a daily basis are more likely to think about new ways to perform their day-to-day tasks.

If they succeed in finding a solution, the idea is born; it might appear genius to some people and irrelevant to others. Not every idea will be a business success; it all depends on how the market reacts to it. However, if you have solved a problem you were facing, there is a big chance that others out there face the same issue and might be looking for the solution you invented.

The above is one approach for breeding ideas. But that’s not all! There is at least one other way to create new ones and innovate. Needless to say, every approach requires intellectual effort and brainstorming.

Another way to breed ideas is to combine several existing ones into a new hybrid. At the start there might not be an immediate need for this new idea, but as time goes on and the idea is put to use, the public might discover that it serves them in a way they did not expect.

Combining several ideas into one is straightforward. All that is needed is to draw a table in which you put current inventions in rows and their features or uses in columns.

Consider the following example: a mobile phone is used to make calls and send SMS over a long range; a remote control is used to control a device such as a TV or stereo at a small distance. One possible combination is to merge them into one instrument that lets people make calls and control their home machinery over a long distance. That way, when they get home, everything is in place as desired: the music is on, the plate is heated in the oven, the AC is working, etc.

Is it useful, and will it gain acceptance? That is a different story. The important thing is that you created the idea, and now you can work on making it appeal to people.

You can also read an interesting blog post, “How to unlock your creative genius”, written by Don Dodge, in which he exposes his opinion about how genius ideas are born.

Monday, June 28, 2010

iPad, Just Another Gadget

Recently I have been tinkering with the iPad for some time, in an attempt to convince myself to use it on a day-to-day basis. However, I felt it was not built for working people who use computers to produce something of value for their companies or customers.

This might sound harsh, but that was my reaction. For me the iPad is a gadget, not a tool for work. Before going further, I would like to note that this does not diminish its value as an amazing technological device.

On the other hand, for people who use computers as a productivity tool, there are some big issues. The minute you lay your hands on one you will instantly notice the following:

  • Your fingers are not as precise as a mouse pointer, so if you are doing something like painting or graphic design you will have a hard time.
  • It is not suitable for real work on spreadsheets such as Excel, due to the endless manipulations you need to do with your fingers instead of a mouse.
  • It is certainly not suitable for programming, but I guess this is the least of concerns for many people.
  • Another annoyance is the continuous switching of keyboards back and forth, and the jumping between alphabetic and numeric modes. This makes any writing take at least 1.5 times as long as on a real keyboard.
  • The real big issue is in the design concept itself. To really use the iPad for writing you have to bend over it to use its keyboard, causing pain in the neck and shoulders in the long run and reducing your ability to view the screen. If you place it vertically to read or view content, you won’t be able to write without twisting your fingers and wrists.


So with all those drawbacks, what is the iPad for?

Well, it suffices to look at some of the advertisements to see that the iPad is only for sharing photos, social networking, reading books or articles, viewing videos, and web browsing in general. If you are doing serious work it won’t help you.

When I asked a friend if she was going to buy an iPad she replied: “What for? For music I have my iPod, for work I have my laptop and for roaming I have my mobile.”

So it is only fair to think that the iPad is more cumbersome than a mobile and less effective than a laptop.

Based on this assertion, I don’t think all of us agree with Mr. Steve Jobs that the era of the PC is over.

Friday, June 25, 2010

The People, The Place and The Subject

There are plenty of chatting and communication services out there. Each one has its own concept and tries to solve a specific problem. They adopt different approaches towards serving their users:

- Instant Messengers are centered on detecting when a friend or a contact goes online in order to chat with him/her.

- Chat rooms are about bringing people together in one place to discuss a certain subject. Rooms are usually labeled by topic; however, there is rarely specific material to discuss, and people engage in free-form discussion.

- Forums are usually more focused, and members are required to stick to the subject of the thread, otherwise their posts are deleted. The problem with forums is the lack of real-time interaction, and of course people have to search for the forums that fit their needs. Believe me, that’s not an easy task.

However, a brand new approach is emerging. It tackles the problem differently, offering a solution that brings together the people, the place and the subject.

The place is simply any web page on any website.
The people are the users who are viewing that web page.
The subject is the content of that web page.

So if a number of people are viewing the same web page, they are most probably interested in the subject it covers. If they are interested, chances are they agree or disagree and would like to say something about it and discuss it with others viewing the same page.

Wednesday, June 23, 2010

Social Networking Anywhere: Make Every Website a Chat Room

When you are browsing the web you are most likely searching for something: a product, a service, some info, or maybe just fun.

Suppose you have found what you are looking for, or probably something similar. What would you do next?

Most probably you will not take what is given to you for granted, or at least you shouldn’t; instead you will look to confirm it with a different source, such as an article, a review, a blog, etc.

So common sense tells you to do a little research, otherwise you would be taking a risk that could affect your wallet, your credibility and maybe your position!
Chances are you google the product/service/article again to get feedback about it, especially if your credit card is at stake. You check the forums to see what people are saying, and maybe you join in to discuss the subject and weigh the advantages and disadvantages.

In short, once you have found your item, you enter another circle of indecision (unless you are 100% sure of what you are doing, of course).

However, there is something very obvious that has not been well implemented so far: what about meeting people on the same website and discussing with them, on the spot, the content of the page being visited? It would be great if there were some people around to ask about this item or that; a discussion always enlightens both parties.

This is a real issue! It is wonderful to feel that you are not alone!
It is just amazing to know that the web is alive and there is someone out there with whom you can instantly share your opinion and experience.

SimpleConnexion.com is trying to do just that.

Monday, June 21, 2010

Selective Chat vs Random Chat

Random chat is a new trend in chatting. It started in mid-2009 with Omegle and continued in 2010 with Chatroulette. Although popular, these services have some severe problems.

These issues were detailed in a previous post, “Random Chat Review”. In short: a person can’t easily chat because users keep hitting the next button; there are lots of perversions and people abusing the service by showing sexual acts; and most of the time guys are looking for girls, who are rarely found.

Although the idea is appealing, these services are not delivering real value. By real value I mean being able to have a nice chat with interesting people with whom you share a minimum of common ground.

Selective chat might be the solution! This is a fairly new term that does not appear much on Google Search or Google Trends. It consists of finding the right person to chat with: a person who displays a minimal profile that allows others to find him or her.

As opposed to social networking services like Facebook, the aim of “selective chat” is to bring together people who want to chat and who express a preference for a specific topic. On social networks people share videos, images and activities with contacts and friends, but they are reluctant to accept people they don’t know. Besides, they are not necessarily online at the same time and might not be interested in chatting at all.

Conversely, in “selective chat” people go there in order to find someone to chat with. To avoid lengthy introductions and discovery of the other person’s interests, “selective chat” gives people the option to build a mini profile in which they tell about themselves, their interests, their personal website or YouTube channel.

Because each person has his or her own set of preferences and interests, it is easier to find others who share those interests, resulting in a nice constructive chat that delivers value to the chatters and might pave the way for more evolved types of relations.

This concept is being implemented by http://www.simpleconnexion.com/, a service trying to offer chatters an appropriate context to meet each other and discuss common topics in real time.

Monday, June 14, 2010

When File Beats Database

It might seem odd or unusual to compare system files to databases. You might think the comparison is by far in favor of databases, which present a huge advantage in manipulating data in every way one could imagine. While this is completely true, there are cases where system files are needed to assist databases, for the sake of performance.

In the financial field there are market data providers to whom you connect to get information about financial instruments, their prices and values. The data arrive as a stream of tickers: you get the ticker ID, the date-time, and the values, which can be ask, bid and mean.

You might receive thousands and thousands of records in a very short time. Certainly you will have to store them in a database in order to use them later in your computations. However, with a standard database you will experience very poor performance trying to insert the data as they arrive.

Consider this MySQL database example, in which there is a tickers table like the following:


CREATE TABLE `tickers` (
`TIK_ID` varchar(50) NOT NULL,
`TIK_DATE` datetime NOT NULL,
`TIK_BID` double NOT NULL,
`TIK_ASK` double NOT NULL,
`TIK_MEAN` double NOT NULL,
PRIMARY KEY (`TIK_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Inserting market data into this table means executing an SQL statement like “INSERT INTO tickers (TIK_ID, TIK_DATE, TIK_ASK, TIK_BID, TIK_MEAN) VALUES (@id, @date, @ask, @bid, @mean);”.

Suppose you have 1000 tickers to store in the table. You have to execute the previous SQL statement 1000 times. Whether you use a stored procedure or not, the execution time varies between 30 and 40 seconds.

Certainly you don’t want your database to be locked down by a continuous, resource-consuming process like that.
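For comparison, the row-by-row baseline being timed might look roughly like the following sketch. This is not the exact project code; the connection string name, parameter setup and random test data are assumptions for illustration.

```csharp
// Hypothetical sketch of the naive approach: one INSERT round trip per ticker.
private void SaveRowByRow() {
    using (MySqlConnection con = new MySqlConnection(
        Properties.Settings.Default["dbvsfileConnectionString"] as string)) {
        con.Open();
        MySqlCommand cmd = con.CreateCommand();
        cmd.CommandText = "INSERT INTO tickers (TIK_ID, TIK_DATE, TIK_ASK, TIK_BID, TIK_MEAN) " +
                          "VALUES (@id, @date, @ask, @bid, @mean)";
        cmd.Parameters.Add("@id", MySqlDbType.VarChar);
        cmd.Parameters.Add("@date", MySqlDbType.DateTime);
        cmd.Parameters.Add("@ask", MySqlDbType.Double);
        cmd.Parameters.Add("@bid", MySqlDbType.Double);
        cmd.Parameters.Add("@mean", MySqlDbType.Double);
        Random rand = new Random();
        for (int i = 0; i < 1000; i++) {
            double ask = rand.NextDouble() * 100;
            double bid = rand.NextDouble() * 100;
            cmd.Parameters["@id"].Value = "ticker" + i;
            cmd.Parameters["@date"].Value = DateTime.Now;
            cmd.Parameters["@ask"].Value = ask;
            cmd.Parameters["@bid"].Value = bid;
            cmd.Parameters["@mean"].Value = (ask + bid) / 2;
            cmd.ExecuteNonQuery(); // one server round trip per ticker: this is what takes 30-40 seconds
        }
    }
}
```

Each ExecuteNonQuery call pays the full cost of crossing the driver and network layers, which is why the loop is so slow.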

Luckily there is a workaround for this problem. The solution consists of storing this stream of data in a text file and periodically importing it into the database. When tickers arrive, instead of executing an SQL statement for each one, you append the data to the end of a text file. When the file grows big enough, you import it into the database table using the bulk import tool that ships with most databases.

I have rewritten the same example using a text file, and the performance was clearly far better. The time needed to write 1000 tickers to the text file and import them into the database was only 500 ms, roughly 60 times faster than the row-by-row inserts.

Here is the code snippet:

private void SaveToFile() {
    DateTime t0 = DateTime.Now;
    MySqlConnection con = null;
    try {
        Random rand = new Random();
        // Append the incoming tickers to a text file instead of inserting them row by row.
        using (StreamWriter wrt = File.AppendText("dbvsfile.txt")) {
            for (int i = 0; i < 1000; i++) {
                String id = "ticker" + i;
                DateTime dt = DateTime.Now;
                double ask = rand.NextDouble() * 100;
                double bid = rand.NextDouble() * 100;
                double avg = (ask + bid) / 2;
                // Fields are written in the table's column order: ID, DATE, BID, ASK, MEAN.
                wrt.WriteLine(String.Format("{0};{1};{2};{3};{4}", id, dt, bid, ask, avg));
            }
        }

        // Bulk-load the whole file into the tickers table in a single command.
        con = new MySqlConnection(Properties.Settings.Default["dbvsfileConnectionString"] as string);
        con.Open();
        MySqlCommand cmd = con.CreateCommand();
        cmd.CommandText = @"LOAD DATA INFILE 'dbvsfile.txt' INTO TABLE tickers FIELDS TERMINATED BY ';' LINES TERMINATED BY '\r'";
        cmd.ExecuteNonQuery();
    } catch (Exception x) {
        MessageBox.Show(x.Message);
    } finally {
        if (con != null) {
            con.Close();
        }
        TimeSpan elapsed = DateTime.Now - t0; // ~500 ms for 1000 tickers
    }
}


There is no doubt that the second method is by far more advantageous than the first; however, everything comes at a price. The file solution has the following issues:

  • Data are not stored in the database in real time: there is a lapse of time (measured in minutes) during which the data reside in a text file before being imported into the table. During this time those data are not exploitable.
  • Management overhead: extra code must be developed to verify proper import and to handle errors that might occur during the process.



In case you are asking yourself why this is so, the answer is crystal clear, and similar to the previous post “Why Excel COM API is a Bad Choice”. Every time you execute an SQL command, the order travels through layers of software frameworks to reach the database server and be executed there. The import tool avoids this entire complex trip and writes directly into the database.

Once again, it is better to avoid (when possible) the use of APIs when there is another clean solution.

Wednesday, June 9, 2010

ASP.Net HttpApplicationState and IIS

If you are an ASP.NET developer you certainly know about the HttpApplicationState object, in which you store application-wide data. As opposed to HttpSessionState, the HttpApplicationState object stores data as long as your application is running, while data stored in HttpSessionState is destroyed when the session ends, i.e. the user signs out or has been inactive for some time.

However, this is the theory; the reality is not as simple. If you plan to use the HttpApplicationState object to store information, be careful: what is successfully tested on your PC might not work as expected when deployed on the server.
Let’s consider a simple example in which you want to count the number of hits on your website. You will do something like this:

Application.Lock();
int cnt = Application["hits"] == null ? 0 : (int)Application["hits"];
Application["hits"] = cnt + 1;
Application.UnLock();

You test the code in Visual Studio and everything works perfectly. But when you deploy it to your server you begin to notice weird behavior: sometimes you get the right number of hits; the next time you find zero, or a number far less than expected.

To understand what is going on, you should know how IIS deals with your web application.
As a matter of fact, you do not control the life cycle of your application; IIS does. At the start your application is not running; when the first request for one of its pages reaches the server, IIS loads the application into memory and runs it inside a worker process. At this point Application["hits"] is zero; after the first request it becomes 1, and as long as requests reach this same process it keeps increasing. However, at a certain point IIS may decide to create another worker process for your application. Now you have two processes running, each with its own copy of Application["hits"]. So if the value in the first process is 100, the value in the second process will be zero!

Furthermore, if a process appears idle for a certain amount of time, IIS will kill it and destroy all the information it contains, which means you lose the count in Application["hits"].

The lesson is that the HttpApplicationState object is unique per process, and lives only as long as that process does. You can be sure your process will not stay alive if it is idle for a long time.


To solve this issue you have two options:
  • Force IIS to maintain only one worker process for your application, so the same process handles all HTTP requests and IIS does not create extra processes. This works only for transient data that does not have to outlive your application, and performance will suffer if the number of hits increases significantly, because IIS is not permitted to spawn another process to absorb the load. To configure IIS, open the IIS console, expand Application Pools, open your application pool’s property page and select the Performance tab. There you can set the maximum number of worker processes to 1.





  • Use a database to store all important information. This solution is better when the information must outlive the application life cycle, and it remains correct even when multiple processes of your application run at the same time.
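As a sketch of the database option, a single atomic statement keeps the counter correct no matter how many worker processes IIS spawns. The hits table below is a hypothetical example in MySQL syntax, not from an actual project:

```sql
-- Hypothetical hits table: one row per page counter.
CREATE TABLE hits (
  page VARCHAR(100) NOT NULL PRIMARY KEY,
  cnt  INT NOT NULL DEFAULT 0
);

-- Each request runs one atomic statement; concurrent worker
-- processes cannot lose increments the way Application["hits"] can.
INSERT INTO hits (page, cnt) VALUES ('home', 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;
```

Because the increment happens inside the database, every process sees the same value, and the count survives process recycling.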

As you can see, the behavior of HttpApplicationState can be misleading if considered without the IIS configuration. It is better not to rely on it for application-wide data; use a database instead to store such data effectively and permanently.

Saturday, June 5, 2010

Best Ways to Learn from Books

Since the start of my career I have been reading books; I have read hundreds of technical books of all kinds. My problem was always how to keep this knowledge and prevent it from fading over time. This was hard to accomplish.


Over the years I have devised a technique that helps me keep a minimum of the information I gather from books, and in case I forget some parts, I can recover them relatively quickly.

This technique is no big secret; however, it needs devotion, commitment and seriousness. All you need to do is summarize the book!

If you think you can go that far, here are the details of how to do it:

Open your book, whether a hardcopy or an eBook, and grab a pen and a notebook. Read one chapter comprehensively, then do another pass, stopping at the end of each paragraph; write down in your notebook what you have understood from that paragraph and move to the next one. I know the task is daunting and time-consuming compared to reading just once. But remember: if you are reading a technical book, you are determined to learn from it, not to read it for fun as if it were a novel.

The summary must not exceed 15% of the original book size, so a 400-page book can be reduced to just 60 pages. Those 60 pages are stripped of all the filler that usually comes with a book; they constitute what you understood from your reading, written in your own style. For this reason you can return to them and refresh your memory at any time, very quickly. It is as if you reread the whole original book in just one hour.

Here is a list of Do’s and Don’ts:
  • Don’t copy/paste from an eBook into a Word document, because you won’t be writing in your own style.
  • Read while sitting at your desk, because it will be easier to take notes. Don’t read in bed or on the train.
  • Don’t use markers to highlight paragraphs in the book itself, because you won’t be able to locate them rapidly in the future.
  • Write in a notebook and not on separate sheets, because loose sheets are easily lost.
  • Don’t take notes directly on your PC, because you will not be able to resist copy/paste for long.
  • Read one chapter a day, or two if they are not big.
This technique has helped me a lot in keeping important information from fading away over time. Note, though, that the parts that stay instantly available in your head are those you apply in practice.

Tuesday, June 1, 2010

Why Excel COM API is a Bad Choice

In some projects you might be asked to generate reports as Excel worksheets. This is especially true when your customer is in the financial domain: financial people are keen on Excel; it is omnipresent in every aspect of their job.
Regardless of whether their addiction to Excel is justified, there is something you as a software engineer should know. To generate an Excel report, your first reflex is to use the COM API. However, this might not always be the right choice.

I am going to share my experience with Microsoft.Office.Interop.Excel, the official .NET assembly that uses the Excel COM API underneath.
Two years ago one of my customers asked me to add Excel report generation to my project. Naturally my reflex was to use the Excel APIs. I spent over a week creating all the details needed in the report. At the end I launched a test on real data and it took almost 10 minutes to complete! This was utterly unacceptable performance, and one of the most stressful moments of that project.

To solve the problem I had two choices:




  1. Search the internet for ready-made packages that generate Excel files directly, without passing through the COM APIs
  2. Or use a VBA script to generate the reports.

The first option was too risky for me, because it involves asking the customer to buy an unknown library, which needs to be tested and approved, and whose programming model must be learned. If it turned out to be unsuitable, we would need to claim our money back and start searching for another one. I kept this option as a last painful resort.

The second option could be tested in just one day. I replaced the API calls with code that generates a VBA script, then loaded that VBA into Excel and executed it. Everything worked fine and the generation time dropped to 3 minutes, which was far more acceptable than 10.
The project ended successfully.

Still, it is not the ultimate solution. So if one day you are confronted with a similar situation, the first thing to do is include in your plan a search for, and test of, libraries that generate Excel files. Discard and resist any attempt to use the COM APIs.
The following code compares generating and saving 100k strings into an Excel file, first using the Excel COM API and then using ExcelLibrary, found on Google Code.

Excel COM API
// Requires a reference to Microsoft.Office.Interop.Excel and:
//   using System.Runtime.InteropServices;
//   using Excel = Microsoft.Office.Interop.Excel;
Excel._Application app = new Excel.ApplicationClass();
Excel.Workbook wb = app.Workbooks.Add(Type.Missing);
Excel.Worksheet wrk = (Excel.Worksheet)wb.Sheets.Add(
    Type.Missing, Type.Missing, 1, Excel.XlSheetType.xlWorksheet);

// 1000 rows x 100 columns = 100k cells, one interop call per cell.
for (int r = 1; r <= 1000; r++) {
    for (int c = 1; c <= 100; c++) {
        wrk.Cells[r, c] = String.Format("row #{0}, col#{1}", r, c);
    }
}

wb.SaveAs("c:\\testExcelApi.xls", Type.Missing, Type.Missing, Type.Missing,
    Type.Missing, Type.Missing, Excel.XlSaveAsAccessMode.xlExclusive,
    Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
wb.Close(Type.Missing, Type.Missing, Type.Missing);
app.Quit();

// Release the COM objects so the Excel process can shut down.
Marshal.ReleaseComObject(wrk);
Marshal.ReleaseComObject(wb);
Marshal.ReleaseComObject(app);
app = null;
wb = null;
wrk = null;


ExcelLibrary
// Requires a reference to ExcelLibrary and:
//   using ExcelLibrary.SpreadSheet;
string file = "C:\\testExcelLib.xls";
Workbook workbook = new Workbook();
Worksheet wrk = new Worksheet("First Sheet");

// Same 100k cells, but everything stays in memory until Save().
for (int r = 1; r <= 1000; r++) {
    for (int c = 1; c <= 100; c++) {
        wrk.Cells[r, c] = new Cell(String.Format("row #{0}, col#{1}", r, c));
    }
}

workbook.Worksheets.Add(wrk);
workbook.Save(file);


You may be shocked to learn that the first snippet takes 3 minutes to execute, while the second takes only 2 seconds. That is roughly 90 times faster.
The explanation is pretty obvious: the poor performance of the Excel COM API comes from the fact that each API call travels through several layers of code and frameworks, which makes it time consuming. Conversely, ExcelLibrary writes directly into the file without any intermediary layers.
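As a rough illustration of that explanation, here is a toy model (sketched in Python, purely conceptual; the classes stand in for the interop proxy and a direct-write library, they are not real APIs) that counts boundary crossings instead of measuring real time:

```python
# Toy model of the performance gap: with the COM API every cell
# assignment is a marshaled round-trip across the interop boundary,
# while a direct-write library keeps data in process and touches
# the file once at save time.
class ComSheet:
    """Stand-in for an interop Worksheet; counts boundary crossings."""
    def __init__(self):
        self.round_trips = 0

    def set_cell(self, r, c, value):
        self.round_trips += 1  # one cross-process call per cell

class DirectSheet:
    """Stand-in for a library such as ExcelLibrary."""
    def __init__(self):
        self.cells = {}
        self.file_writes = 0

    def set_cell(self, r, c, value):
        self.cells[(r, c)] = value  # stays in memory

    def save(self):
        self.file_writes += 1  # one bulk write for the whole workbook

com, direct = ComSheet(), DirectSheet()
for r in range(1, 1001):
    for c in range(1, 101):
        com.set_cell(r, c, "x")
        direct.set_cell(r, c, "x")
direct.save()
print(com.round_trips, direct.file_writes)  # 100000 1
```

The 100,000-to-1 ratio in expensive operations is the whole story; no amount of fast hardware compensates for crossing a process boundary per cell.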


In conclusion, it is better to keep your options wide open and check and test available libraries before venturing into the unknown.

PS. Some of the available open source libraries are ExcelLibrary, NPOI, ExML… but of course you might find others.

Sunday, May 30, 2010

Random Chat Review


Starting in 2010 there has been a new trend in chatting behavior, with the advent of ChatRoulette and the “mediatization” of the service. However, this service was not the first to market; another chat service called Omegle had already been there since 2009.
What distinguishes ChatRoulette is the use of webcams to let users randomly chat with each other without any registration or login; all that is required is to go to the website, agree to turn on the webcam, and there you go.

The idea grew in popularity fast; it suffices to look at Google Trends for the keyword “random chat” and note how the growth was exponential in a very short time. Notice the scale (between 10 and 20) in the graph below.




On the other hand, the keyword “no registration chat” started to gain momentum in Q4 2008, with an index ranging from 5 to 8.





What does this mean?
It tells us that internet users prefer easy-to-use websites, especially those users who do not intend to form long-lasting relationships. The mysterious aspect of the randomness and the visual presence of the other person have attracted people. The mere thought of “who might be the next person” gives the user a strong impulse to find out what is hidden behind each click of the button.

However, this comes at a cost. While in theory chatting with random people from all over the world is appealing, the constant reflex of pressing the “next” button renders the experience nearly useless. As a matter of fact you can rarely chat, because you are always driven by the urge to know who is next. Besides, the anonymous aspect of the service leaves it open to all sorts of “perversions”.


A small study
I decided to do a small study. One early evening I opened ChatRoulette and tried for over 30 minutes to talk to someone. Then I repeated the same experiment with Omegle.
Here is what I found. I will present the ChatRoulette statistics, noting that those of Omegle are similar. But first let me give some definitions that are useful for understanding the figures.


Appearance: anything that displays on my screen showing the peer, whether a man, a woman, an advertisement or anything else.
Repetition: the same appearance showing more than once.
Pervert: a person performing an explicitly sexual act.
Video: a suspected pre-recorded video of a person or something else (not a live broadcast).
Advertisement: a commercial announcement for a product or website.



So here are the results. During the 30 minutes I saw around 200 appearances: 150 were men, among them 8 older people (looking 50 or above), 2 young boys (under 15) and 62 perverts. There were also 20 appearances of girls, 10 videos and 20 ads. The number of repetitions was 55. The figures and their percentages are shown in the table below.














            Total  Men  Women  Perverts  Videos  Ads  Repetitions
Count        200   150     20        62      10   20           55
Percentage  100%   75%    10%       31%      5%  10%        27.5%



In addition to these statistics, it is important to note that the longest discussion I had lasted less than 30 seconds before the peer left. Such discussions occurred only twice; the rest of the time it was the peer who pressed “Next” on the spot.

As you can see, although I spent 30 minutes, I talked for less than one minute.
This clearly shows that these sites are not result oriented; people go there either out of curiosity, or as guys searching for girls, who are a truly rare species there.

Moreover, most of the videos and ads were for adult websites, which means these sites are starting to position themselves in a particular manner. As a result, this will drive away most of the people who are looking for a real experience.


For this reason I believe it won’t be long before people lose interest in these services, unless others come to market with a different strategy that favors real communications which are ‘decent’ and carry more ‘value’.


You might want to check http://www.simpleconnexion.com/ for another approach to meeting new and interesting people, based on a concept that delivers value.

Tuesday, May 25, 2010

Shared database review

Recently I came across a startup called FluidInfo that is doing an interesting job: creating a shared database!
At first one might think this is just another fancy engineering artifact; however, once you delve into the large horizons the concept opens and the big opportunities behind it, you will give it a second thought.
The concept is straightforward. The database is open for everyone to push information into and pull it out. It is somehow similar to Wikipedia, with differences in a few areas.

So what?
The question that follows is: what will this concept, no matter how technically challenging, bring to users or to the industry as a whole?
To answer this question, we should first examine the problem for both users and enterprises. Consider a user searching for reviews of a certain product, such as an electronic device, a car or anything else. His only tool for the moment is Google. He just enters his keywords and clicks search. Then he has the daunting task of going through the search results, dissecting them for meaningful reviews, whether from experts or from the public.

Another alternative is to give Wikipedia a try. However, although Wikipedia is a respectable source of information, it lacks real “quantifiers”. It can provide you with text and stories, but it does not give you a concise assessment in the form of a single significant value. It is up to the user to draw the conclusion himself.

FluidInfo thinks it has the solution. What it does is create a database and open it to the public. The database is structured as objects containing tags. You can think of an object as a subject, and of the tags as attributes that contain values and qualify these objects.
For example, consider an object called iPad. It might contain a tag called “satisfaction” to which each user can add his own level of satisfaction. This way, when someone searches for iPad satisfaction he won’t have to go through different websites to get an idea of the general public opinion; all he has to do is query the iPad object for the “satisfaction” tag.
This is enormous! Think of the time that can be spared by getting the results immediately.
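The object/tag model can be sketched in a few lines (Python here for brevity; `SharedDatabase` and its methods are my own illustrative names, not FluidInfo's actual API):

```python
from collections import defaultdict

# Minimal sketch of the object/tag model described above: objects are
# subjects ("iPad"), tags are attributes ("satisfaction"), and every
# user can contribute a value to a tag.
class SharedDatabase:
    def __init__(self):
        # object name -> tag name -> list of values contributed by users
        self.objects = defaultdict(lambda: defaultdict(list))

    def tag(self, obj, tag, value):
        self.objects[obj][tag].append(value)

    def query(self, obj, tag):
        return self.objects[obj][tag]

db = SharedDatabase()
db.tag("iPad", "satisfaction", 8)
db.tag("iPad", "satisfaction", 6)
db.tag("iPad", "satisfaction", 9)

# One query yields a concise figure instead of dozens of review pages.
scores = db.query("iPad", "satisfaction")
print(round(sum(scores) / len(scores), 2))  # 7.67
```

The point of the sketch is the last line: the "quantifier" Wikipedia lacks falls out of a single aggregate query.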

But that’s not all; it is much more promising than that. It might serve endless types of analysis and, most importantly, it allows the development of numerous kinds of applications, especially decision-helper ones. Imagine that a company needs to take a decision based on some market data; this shared database could provide their application with enough figures to compute a valid assessment.

What’s the hurdle?
Having said that, it does not mean this will be implemented the next day. To be successful, this type of shared database must appeal to the public, just like Twitter did. Adoption of the idea by the public is essential and crucial; without it, everything will go down the drain.
Maybe a good strategy for FluidInfo is to aggressively seek strategic alliances and partnerships with websites and application builders, to leverage this concept and make it accessible to all internet users.

Monday, May 24, 2010

Retargeting Explained

According to statistics, only 2% of website visitors end up using their credit cards to buy something, while the remaining 98% just have a look and leave. Of course this is bad news for website owners. In most cases these websites pay big money to drive traffic, and those checks mostly go to Google for its famous AdWords service.

So if you think about it for a second, you will see how much money is being dumped with no return on investment. Suppose you paid US$1,000 for Google AdWords; you may be shocked to realize that only US$20 of it will generate income for you, while the remaining US$980 simply evaporates.
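The arithmetic is worth seeing once (a minimal sketch in Python; the 2% conversion rate is the figure quoted above, applied uniformly to paid traffic):

```python
# Split an ad budget into the part that converts and the part that
# evaporates, given a visitor-to-buyer conversion rate.
def ad_spend_split(budget, conversion_rate=0.02):
    converting = budget * conversion_rate
    wasted = budget - converting
    return converting, wasted

converting, wasted = ad_spend_split(1000)
print(converting, wasted)  # 20.0 980.0
```

US$980 of every US$1,000 reaching visitors who never buy is exactly the pool of money retargeting tries to recover.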

Retargeting is a solution devised for this type of problem. It relies on an agency that builds and indirectly connects a network of publishers and advertisers. The retargeting process goes like this:
  • You go to the advertiser’s website, let’s say Amazon.com.

  • Amazon inserts a special script from the retargeting agency into your page; this script in turn inserts a cookie, known to the browser as a third-party cookie, carrying a unique id. This way the retargeting agency can uniquely identify you. This is completely harmless because it does not gather any personal information; it simply assigns you a unique number.

  • As you browse the Amazon website and look at different products, the script transmits this data to the retargeting agency.

  • Suppose you added a product to your basket and proceeded to checkout, but for some reason did not complete the purchase. For Amazon you are a lost customer. However, the retargeting agency knows what product you were about to purchase. Using the cookie it inserted into your browser, it will be able to identify you again whenever you land on one of its publishers’ websites.

  • To continue with our example, suppose you left Amazon.com and surfed the web until you passed through Yahoo.com. It happens that Yahoo.com and Amazon.com deal with the same retargeting agency. Because of that famous cookie (inserted while you were at Amazon.com), the retargeting agency knows you are the person who did not complete the purchase at Amazon.com, and it displays an advertisement for the product you were about to buy.

  • This technique has proven effective at converting potential customers into buyers for advertisers.
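The steps above can be sketched as a toy simulation (Python, purely illustrative; `RetargetingAgency` and its methods are made-up names, not any real agency's API, and a browser is reduced to its cookie jar):

```python
import uuid

# Toy simulation of the retargeting flow described above.
class RetargetingAgency:
    def __init__(self):
        self.profiles = {}  # cookie id -> products viewed on advertiser sites

    def tag_browser(self, browser):
        # The advertiser's page embeds the agency script, which drops a
        # third-party cookie holding a unique, anonymous id.
        if "agency_uid" not in browser:
            browser["agency_uid"] = str(uuid.uuid4())
            self.profiles[browser["agency_uid"]] = []

    def record_view(self, browser, product):
        # The script reports what the visitor looked at.
        self.profiles[browser["agency_uid"]].append(product)

    def pick_ad(self, browser):
        # Later, on a publisher page, the same cookie identifies the
        # visitor, so an ad for the abandoned product can be served.
        viewed = self.profiles.get(browser.get("agency_uid"), [])
        return viewed[-1] if viewed else None

browser = {}                              # a fresh browser: empty cookie jar
agency = RetargetingAgency()

agency.tag_browser(browser)               # step 1-2: visit the advertiser
agency.record_view(browser, "camera")     # step 3-4: browse, abandon checkout
print(agency.pick_ad(browser))            # step 5: on a publisher site -> camera
```

Note that the agency never learns who you are, only that cookie id `agency_uid` looked at a camera; that is the sense in which the scheme is anonymous.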


If you want a concrete sense of what is happening behind the scenes, you can tell your browser to clear all cookies (caution: clearing all cookies will cancel automatic login on some sites, so you will need to log in again manually). In Firefox, go to the menu “Tools > Options”, choose the “Privacy” tab, click “Show Cookies” and then click “Remove All Cookies” in the Cookies dialog box.





Once you do that, go to Amazon.com, then reopen the Cookies dialog box, take note of the available cookies and remove them all. Go to Yahoo.com, open the Cookies dialog again and verify that there is at least one cookie in common with Amazon.com.

If you are frightened by the above and fear being tracked, don’t be. This is a safe procedure and usually there is no harm in it. Quite the contrary: it can help websites learn the behavior of their visitors in order to serve them better.

Facebook also uses this technique with partner sites. To see what it looks like, log in to your Facebook account and then, without logging off, go to CNN.com; chances are you will be told what article(s) your friends liked over there.

If all of that still makes you fear for your privacy, it is useful to know that you can stop this by simply telling your browser to reject third-party cookies. In Firefox, go again to “Tools > Options > Privacy” and uncheck “Accept third-party cookies”.

Internet Explorer 8 has a feature called "InPrivate Browsing" that keeps cookies and history from being stored, and a companion "InPrivate Filtering" that can block third-party tracking content.

Saturday, May 22, 2010

Google Wave Productivity

Are you receiving tens of emails a day?
Most of the time these emails are a discussion of one precise subject; however, you get this discussion in scattered, rarely organized chunks.

Imagine you are planning an outdoor picnic with a group of friends for the weekend. I am quite sure you will be emailing each other all week. You will start by suggesting the day, the hour, the place, the type of food, the people you want to join in and so forth… Naturally, every friend of yours has his own ideas and suggestions. Just take the four or five topics mentioned earlier and multiply them by the number of friends in the group; you will immediately see the amount of email you will get just trying to organize this picnic.

Now consider that you have at least two other subjects to take care of via email, and look at what happens to your inbox! Given the mess you experience, it is surprising it took so long for someone to try to bring a solution. What is needed is to put some order into this counter-productive technology called email.

Since most, if not all, email exchanges are centered on discussions, why not organize these discussions into threads? The idea is not new: list servers existed for years and were later replaced by web forums. In those tools each discussion has its own thread, and every participant posts his opinion in the corresponding thread. The only problem with these services is that they are public, not private. You can’t organize or discuss personal matters there. What is needed is something like your own personal discussion service that lets you create and conduct private discussions.

Until the creation of Google Wave this was difficult to find. So you guessed it: Google Wave is primarily about communication and collaboration, but instead of doing that through tens of scattered emails that pollute your inbox and make your life miserable, Google Wave lets you sort all this mess into a reduced number of discussions that are easily manageable and searchable.

Now let’s go back to our picnic project. Instead of having tens of emails on the subject, all you have is one discussion in which every participant posts his opinion, and in the end you can easily find your way to carry on with the project.

Maybe Google Wave won’t replace email in the near future, simply because bad habits do not vanish easily. However, I believe people will start using it as time goes on, especially when they realize how effective it is at handling their communication. It is also up to Google to make efforts towards providing consistent offers for businesses.

Friday, May 21, 2010

Google celebrating PAC-MAN 30th Birthday


As usual, Google celebrates known and unknown occasions in its own way. This time it is remembering the old PAC-MAN game that was famous in the eighties; I used to play it at that time on my Intel 8086 PC!
Although it can be qualified as an ancestor of today’s games, it revives nostalgia for some people. To know more about it you can check this Wikipedia article.

The concept of celebrating such occasions was discussed in an interesting article that appeared on CNN a few months ago. "For Google, doodles are oodles of surprise" tells the story behind the idea.

Update: "Chomp! Pac-Man, the arcade classic, turns 30", a new article on CNN, talks about the 30-year-old game.

Wednesday, May 19, 2010

Facebook Privacy Issue


Privacy is a hot issue nowadays. Everyone is yelling about Facebook’s privacy policy, and new projects are raising tens of thousands of US$ on the pledge to develop a more privacy-compliant service.
The main criticism of Facebook’s privacy policy is the frequency with which the social network updates it. People are losing faith and trust, and many are closing their accounts. To get an idea of how many people are really looking to close their accounts, just go to Google Trends and enter the phrase ‘delete facebook account’. What I got is the following chart.


Notice the peak in the curve after the first quarter of 2010.
So what’s wrong with that privacy policy? If you have time you can go to the Facebook Privacy Policy and read over 200 lines of policy rules, or you can just read a summary of it below.


In short Facebook privacy policy says the following:

  • Children below age 13 are not accepted, and no information about children below that age is stored.

  • Facebook will not store the password you provide in order to import your contact list from your email account, such as Hotmail, Gmail or Yahoo Mail.

  • Facebook will log your activities on its website, which means whatever action you take on your profile will be logged. Actually, every site does that. Once I called Microsoft Business Partner support and the guy told me what I had done three years earlier!

  • Facebook might (in my opinion this means WILL) share information with advertisers about your behavior on THEIR websites! So if you click on an ad on Facebook, go to the advertiser’s site and navigate N pages or select this item or that item, Facebook is likely to know! However, Facebook says that after 180 days (6 months) this information will be ‘anonymized’, which means it will no longer be associated with your account.

  • Now the biggest issue is with third-party applications:

    • Facebook does not guarantee that any third-party application will comply with its privacy policy.

    • If a friend of yours added a third-party application and gave it enough permissions, this application will be able to access any data on your profile that your friend can access himself. So you are at the mercy of your friend’s judgment.


After reading this, if you feel insecure and want to close your account, Facebook has given you a way to do that; just follow this link: delete account.
Another alternative is to avoid putting any sensitive data online. This is a general rule, not one related only to Facebook.