Friday, September 27, 2013

Floyd-Warshall algorithm: an easy way to compute

You can download my presentation at: SlideShare
This is an external slide set that helped me to understand Floyd-Warshall: SlideShare
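For quick reference, here is a minimal Python sketch of the algorithm the slides walk through. The adjacency-matrix representation (with math.inf for missing edges) is my own choice for illustration, not taken from the slides.

import math

def floyd_warshall(dist):
    # All-pairs shortest paths. dist is an n x n matrix where dist[i][j]
    # is the edge weight from i to j, math.inf if there is no edge,
    # and 0 on the diagonal. Returns a new matrix of shortest distances.
    n = len(dist)
    d = [row[:] for row in dist]      # copy so the input is not modified
    for k in range(n):                # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Example: 4-vertex directed graph (weights invented for illustration)
INF = math.inf
graph = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
print(floyd_warshall(graph))  # shortest 0 -> 3 becomes 9 via 0 -> 1 -> 2 -> 3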

Sunday, March 17, 2013

The importance of geographical position in information services, and Location Determination Technologies

We are living in a time where most forms come half filled. In the early days of the web, forms had to be filled in by the user from top to bottom; now developers try to pre-fill as much of the form as possible from what they already know about the user. That is just one example: services are being customized to the customer using whatever details are available about them, and the customer's geolocation is one of the most important of those details. We have seen the Google homepage customized to the country, and ads that change according to the visitor's country. Finding the country is no longer enough; now we need the exact position of the user in real time. A Location Based Service (LBS) provides information, or makes information available, based on the geographical location of the mobile user. In addition, LBS is one of the hottest research areas these days.
"Where am I?", "Where is the nearest fuel station?", "Where can I have my breakfast?": in the past these questions were asked of other humans, but not anymore. People now expect their handheld mobile device to answer them. This is where knowing the geolocation of the user becomes most important to developers.
Location Determination Technologies (LDTs) are the heart of Location Based Services (LBS). LDTs can be grouped into four main categories: Network-based, Mobile-based, Mobile-assisted, and Network-assisted.
 

Network-based

Network-based techniques are normally low-cost. They do not require any change to the mobile handset, and the possibility of using them with almost any handset (even low-end ones) is a clear advantage.
 

Mobile-based

In Mobile-based LDTs, locating the device and doing the required calculations is done by the mobile device itself, although it may still need small pieces of information from the network. Mobile-based implementations do not support legacy handsets.
Mobile-assisted and Network-assisted techniques are used to shift some amount of the load from the Mobile Station (MS) to the Base Station (BS), or vice versa. They exist to overcome the limitations and disadvantages of each base technique.
LDTs can also be separated into two main categories: Satellite and Cellular LDTs. Satellite LDTs are based on the principle of measuring the time a set of signals spends travelling from a set of orbiting satellites to a receiver on or near the surface of the Earth. The main Satellite LDTs are GPS, AGPS, DGPS, GLONASS and Galileo. Cellular LDTs use the signals of the cellular system to find the location of an MS. The main Cellular LDTs are:
  • Cell-ID (or Cell of Origin (COO))
  • Received Signal Level (RSL)
  • Angle Of Arrival (AOA)
  • Uplink Time Difference of Arrival (TDOA)
  • Downlink Observed Time Differences (DOTD).
Each of these methods has its own advantages and disadvantages. To overcome the disadvantages, there are hybrid methods that combine the strengths of two or more methods to achieve greater quality.
Cell-ID
This is the simplest and most straightforward way to find the location. Here we use the area of the BS that serves the user as the user's location. It has very low accuracy, and it tends to get worse in rural areas.
With this method, the whole area covered by the serving BS is taken as the user location. It does not involve any calculation and is very good for applications that need quantity over quality. With errors on the order of 500 m, it is questionable whether this is of any use; nevertheless, there is a set of significant services, the so-called Resource Discovery Services (RDS), for which Cell-ID's accuracy might be sufficient.
Cell-ID techniques are best used together with other methods or other location information. As an example, in voice Location-Based Services users provide their location by voice, and Cell-ID identification helps the Automatic Speech Recognizer (ASR) pin that location down: the ASR has a large set of locations to compare the user's voice against, and knowing the Cell-ID reduces that set to a workably small one.
Even though Cell-ID works best in urban areas, there is a tendency for the phone not to connect to the closest BS there. Experimental studies conducted in Italy show that the percentage of samples not connected to the closest BTS reaches 43% in urban areas (1).
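In code, Cell-ID positioning is little more than a table lookup. The Python sketch below shows the idea; the cell IDs, coordinates and radii in the table are invented purely for illustration.

# Minimal Cell-ID positioning sketch: map the serving cell to the
# coordinates of its BS and report the cell radius as the uncertainty.
CELL_TABLE = {
    "413-02-1021-7711": {"lat": 6.9271, "lon": 79.8612, "radius_m": 800},
    "413-02-1021-7712": {"lat": 6.9350, "lon": 79.8500, "radius_m": 1500},
}

def locate_by_cell_id(cell_id):
    cell = CELL_TABLE.get(cell_id)
    if cell is None:
        return None  # unknown cell, no position estimate
    # The user is somewhere inside the serving cell; the BS position
    # plus the cell radius is the best we can say.
    return (cell["lat"], cell["lon"], cell["radius_m"])

print(locate_by_cell_id("413-02-1021-7711"))  # (6.9271, 79.8612, 800)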
Received Signal Level (RSL)
This method uses the signal strength received from the MS to calculate the distance to the MS. In urban areas, the received signal level decreases more rapidly with distance than in open areas, and multipath fading and shadowing pose a problem for distance estimation based on signal level. We need a suitable propagation model that accounts for that fading and shadowing when estimating the distance. Factors such as travelling in a vehicle or sitting in a seminar room change the received signal strength in ways that cannot easily be captured in the calculations.
This is an easy, low-cost way to enhance the accuracy of pure Cell-ID based location, and it can be used with any GSM-enabled handset (2).
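As a rough illustration, the log-distance path loss model is one common way to turn a received signal level into a distance estimate. The Python sketch below is my own simplification; the reference power and path loss exponent are assumed values that would have to be calibrated for the real environment.

import math

def estimate_distance_m(rssi_dbm, ref_rssi_dbm=-40.0, ref_dist_m=1.0, path_loss_exp=3.0):
    # Log-distance path loss model: rssi = ref_rssi - 10 * n * log10(d / d0),
    # solved here for d. ref_rssi_dbm is the level measured at ref_dist_m,
    # and path_loss_exp (n) is roughly 2 in free space and 3-4 in urban areas.
    # All three defaults are illustrative assumptions, not calibrated values.
    exponent = (ref_rssi_dbm - rssi_dbm) / (10.0 * path_loss_exp)
    return ref_dist_m * (10.0 ** exponent)

# With these assumed parameters, a -100 dBm reading maps to roughly 100 m.
print(round(estimate_distance_m(-100.0)))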
Angle of Arrival
Here we take the angle of arrival of the signal at two base stations and use the intersection point of those bearings as the MS location. A minimum of two BSs is required to determine the position of the mobile phone.
Each BS antenna is actually an array of small omnidirectional antennas separated by a short distance; the measurable differences in arrival time and electrical phase at each element are used to estimate the direction from which the transmission originates. A small change in arrival time can produce a large error in the angle. AOA also needs line of sight (LOS) to the BS, and it is very rare to have LOS to two BSs. These issues make the method close to unusable on its own.
There are proposed systems that give better AOA accuracy even in NLOS scenarios, and accuracy increases with the number of base stations available for the calculation. The root mean square distance between the true position and the estimate can be kept to 500 m or below with three base stations and an angular error of less than 5 degrees (3).
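To make the geometry concrete, here is a small Python sketch of the basic two-bearing intersection on a flat local x/y plane. It ignores Earth curvature, NLOS effects and measurement noise, and the BS positions and bearings in the example are made up for illustration.

import math

def aoa_intersection(bs1, theta1_deg, bs2, theta2_deg):
    # Intersect two bearings on a local x/y plane (metres).
    # bs1 and bs2 are (x, y) positions of the base stations; the angles
    # are measured counter-clockwise from the x axis. Returns the
    # intersection point, or None if the bearings are parallel.
    x1, y1 = bs1
    x2, y2 = bs2
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    d1 = (math.cos(t1), math.sin(t1))
    d2 = (math.cos(t2), math.sin(t2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel bearings never meet
    # Solve bs1 + s*d1 = bs2 + t*d2 for s, then step along the first bearing.
    s = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + s * d1[0], y1 + s * d1[1])

# Two BSs 1 km apart, both bearings pointing at the same spot.
print(aoa_intersection((0, 0), 45, (1000, 0), 135))  # -> (500.0, 500.0)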
TDOA and DOTD
These techniques use the uplink and downlink signal timing to find the location of the MS: uplink TDOA is measured at the network end, while DOTD is observed by the MS. The main concern with these methods is that a 1 microsecond timing error translates into roughly 300 metres of measurement error. They also require additional timing equipment, and the required infrastructure has a significant cost impact.
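The 300-metre figure follows directly from the speed of light; here is a quick back-of-the-envelope check in Python (my own illustration, not from any referenced source).

# Timing error converted to distance error: light travels ~300 m in 1 microsecond.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def timing_error_to_metres(error_seconds):
    return SPEED_OF_LIGHT_M_PER_S * error_seconds

print(timing_error_to_metres(1e-6))  # ~299.8 m of error per microsecond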
RSL is my choice for location identification. None of these techniques is perfect, and you might need to try a hybrid method to get closer to that perfection.
Comparison

Cell-ID
  • Advantages: easy and not complex; no calculations; low cost
  • Disadvantages: lower accuracy, especially when the cell serves a large area; error (relative to the GPS location) of around 500 m-1000 m (1)

Received Signal Level (RSL)
  • Advantages: low cost; good accuracy in urban areas
  • Disadvantages: lower accuracy, or impossible to use, in rural areas with few towers; needs support from the network

Angle of Arrival (AOA)
  • Advantages: low cost; no extra modifications to the existing handset
  • Disadvantages: a small change in arrival time causes a large error in the angle; needs at least two BSs; complex computations

TDOA and DOTD
  • Advantages: easy; little computation
  • Disadvantages: high cost due to the equipment needed; accuracy depends heavily on timing error (a 1 microsecond error causes about 300 m of error)

Sunday, December 9, 2012

You can't survive without 'grep' (most important grep commands)

I started my internship at WSO2 6-7 months ago. Back then I had no clue about the Linux command line. Even though I had worked with Ubuntu when I was doing my A/Ls, I didn't use bash for anything at all. But with the start of my internship I began using the command line for almost everything. It is a one-stop shop for everything, and you feel really powerful and more in control of things. As I moved towards using the command line more and more, 'grep' played a big part in my life. Simply put, you can't survive in a command-line environment without 'grep'. Below are some nice little grep commands that I think everyone should know.
The basic usage of the grep command is to search for a specific string within a specified set of files. In the commands below, replace the <> and what is inside it with what you need.

grep "<string you need to search>" filename

e.g. grep "submitFilterForm" index.jsp
Here we are searching for "submitFilterForm" in index.jsp. This will print every line in which that string is found (if there are any). You can also give a file pattern instead of the file name:

e.g. grep "submitFilterForm" *.jsp
If you need to, you can replace the search string with a regex. You can also use some options to make the search more advanced:
  • -i - Ignore case ("the", "THE" and "The" are all treated the same)
  • -w - Match whole words only (without this, all substring matches will also appear in the results)
  • -v - Invert the search (display the lines which do NOT match the given string/pattern)
  • -n - Show line numbers (prints the 1-based line number within each file next to every matched line)
e.g. grep -iw "submitfilter" *.java
The above searches for "submitfilter" as a whole word, case-insensitively, within all Java files; try it and you will understand more.
e.g. grep -i "submitfilter" *.java
The above searches for "submitfilter" case-insensitively within all Java files.
e.g. grep -v "submitfilter" *.java
This prints the lines which do NOT contain "submitfilter", within all Java files.


Saturday, September 29, 2012

Free Learning is Everywhere (for Programmers at least): 50 of the best places to learn for free


  1. UC Berkeley Webcasts: UC Berkeley's Computer Science department offers a huge collection of courses in programming and computing.
  2. MIT OpenCourseWare: Find more than a hundred online course materials for electrical engineering and computer science in MIT's OpenCourseWare collection.
  3. Stanford University: Through iTunesU and Coursera, Stanford University offers plenty of programming courses, including Coding Together: Apps for iPhone and iPad, Programming Methodology, and Human-Computer Interaction.
  4. The Open University: U.K.-based Open University has a variety of learning units in computing and ICT.
  5. University of Southern Queensland: From the University of Southern Queensland, you'll find courses in Object Oriented Programming in C++ and Creating Interactive Multimedia.
  6. Princeton: Through Princeton University's Coursera site, you can find courses on algorithms, computer architecture, and networks.
  7. University of Michigan: From the University of Michigan, you'll get access to great programming courses including Computer Vision and Internet History, Technology, and Security.

General



If you're just dipping your toes into programming, or you want to find a variety of resources, these sites offer several different ways to learn how to code.
  1. School of Webcraft: Mozilla Foundation's School of Webcraft is a peer-powered school that offers free web development education.
  2. Google Code University: Google Code University is full of excellent resources for code learning, including tutorials, introductions, courses, and discussion forums.
  3. Google Code: Search Google's repository of code through this awesome resource.


Friday, September 28, 2012

What do software architects really do? - Review

Original Paper:
Name:  What do software architects really do?
Download: http://goo.gl/906Yk

By: Philippe Kruchten
 
This paper was published in 2008 and written by Philippe Kruchten, who is a professor of software engineering in the Department of Electrical and Computer Engineering at the University of British Columbia. His qualifications show that he is well placed to answer the question "What do software architects really do?"
This is one of the most interesting papers I have read, for two reasons: first, the language style the writer has used, and second, the graphical representation used in the paper. At the start, the writer makes a nice opening with a dialogue captured from the OOPSLA workshop in Vancouver in the fall of 1992. If I ask the question "Who is an architect?", the answer can be given in terms of the question "What do software architects really do?"; in other words, this paper actually answers both of those questions.

Deviating from the common definitions that can be found on the internet, this paper gives a more practical and measurable definition of the architect and what he does. The writer talks about some common mistakes, which he calls "antipatterns", that make a software architect or an architecture team fail. For example, "creating a perfect architecture, for the wrong system" and "creating a perfect architecture, but too hard to implement" are two of the antipatterns listed in the paper.
In the next section the writer starts talking about the roles and responsibilities of an architect, mentioning eight main roles and responsibilities; we can see that not all of them are only about the architecture of the system. After that the writer gives a nice graphic that clarifies what an architect should do, drawn from how an architect's time is allocated. Here the writer identifies three main areas where time is spent:
1.    Architecting: architectural design, prototyping, evaluating, documenting, etc.
2.    Getting input: listening to customers, users, product manager, and other stakeholders (developers, distributors, customer support, etc.). Learning about technologies, other systems’ architecture, and architectural practices.
3.    Providing Information: providing information or help to other stakeholders or organizations, communicating the architecture, project management, product definition.
The writer recommends that an architect should spend his time on these three areas in a 50:25:25 ratio.
In the next section the writer shows how each antipattern affects the above ratio, and each of these cases is represented in a graph too. These graphs make it really easy to see at a glance how each antipattern distorts the balance, and comparing them gives a clear idea of how damaging each one is. If we take a team, the ratios will vary from one individual to another, and they will also depend on the phase of the project: at the beginning there will be more internal focus, whereas in the development and transition phases there will be more outward focus. According to the writer, the main consideration is the ratio of the whole team taken together, which should hover somewhere near 50:25:25.
In the last section, the writer gives us a nice method for identifying and measuring the ratios, and elaborates on how we can implement this system in a practical way. Furthermore, he gives us a way of categorizing each piece of work an architect does as inward, outward or architecting. It is a simple method that will cost you nothing to try, so try it in your workplace and let the writer know your experience with it.
Reviewed By: Romesh Malinga Perera, Undergraduate, University of Moratuwa.
 


Performance Optimization of SOA based AJAX Application - Review

Original Paper:
Name:  Performance Optimization of SOA based AJAX Application
http://dl.acm.org/citation.cfm?id=1506233&dl=ACM&coll=DL&CFID=120989012&CFTOKEN=50544860
By: Kanakalata N, Udayan Banerjee, Shantha Kumar. NIIT Technologies Ltd.

Review:
Nowadays we can see JavaScript-based libraries being used in almost every website or web application, and AJAX is one of the most popular JavaScript techniques around. This research paper shows how AJAX can be used to optimize SOA-based applications. The paper has a nice organization and a flow that takes the reader through several fine points proving that there is room for optimization in SOA-based applications using AJAX. It talks about several AJAX-based solutions for optimizing the response time and the usability of the application, and it does not talk about how to optimize server performance; that shows the writers have identified the boundaries of the paper and managed to stick to them.
The paper starts explaining and elaborating on the topic through the example of an insurance underwriting application. It walks the reader through the main design considerations, the elimination of coding inefficiencies, choosing among design alternatives, and finally optimizing for usability, which makes a fine order for anyone who reads the paper. The writers have not made it hard, which makes it readable by anyone with a basic knowledge of AJAX and SOA.
The other main factor that makes me believe in their idea of using AJAX for SOA optimization is that all of their arguments are backed up by good facts and theories. As an example, the paper describes why we should shift the load entirely to the client: the writers use the well-known Moore's law to show how the power of servers and client computers increases over the years, yet if we look at "power per user", servers do not show a large increase compared to PCs, because the number of users is growing rapidly. With this argument the writers make the case for the importance of AJAX.
Another point that fascinated me is "Micro Granular Repainting". This is about keeping XML for both the data and the HTML and repainting at the node level; using this approach, it is possible to repaint a part of the page at the lowest level of granularity. In the next section they talk about client-side caching, which is a relatively new idea with AJAX. They say it improves the response time and the overall performance of the system: caching reduces the number of requests to the server, which lets the server handle a higher number of users.
In the "Improve Perceived Response" section they talk about lazy loading, which most developers know as a concept and a design pattern. However, I think it would have been better if they had given a brief description of what lazy loading is, at least for completeness' sake. Lazy loading, as they suggest, can reduce the initial page loading time. In my experience, though, there are users who let a page load while doing something else and come back after some time; this type of user might get irritated to find only a partially loaded page.
The paper ends with a nice set of test results that show the success of this framework and its design considerations. These results are taken from the insurance underwriting application, however, and could change as the domain changes; nevertheless, the facts underlying the results suggest that such a change is unlikely. From my knowledge of AJAX and web applications, it is clear to me that this framework will improve responsiveness in any SOA-based application regardless of the domain. I see a better future for SOA-based applications through AJAX optimization, which gives you total control over how, when, and where your application is loaded.
Reviewed By: Romesh Malinga Perera. Undergraduate, University of Moratuwa


