
Saturday, September 12, 2015


When describing TDD to developers, development managers, and project managers who have never
experienced it, I am usually met with skepticism. On paper, creating code does seem like a long and
convoluted process. The benefits cannot be ignored, however:

➤ TDD ensures quality code from the start. Developers are encouraged to write only the code needed to make the test pass and thus fulfill the requirement. If a method has less code, it's only logical that the code has fewer opportunities for error.

➤ Whether by design or by coincidence, most TDD practitioners write code that follows the SOLID principles. These are a set of practices that help developers ensure they are writing quality software. While the tests generated by the practice of TDD are extremely valuable, the quality that results as a side effect is an incredibly important benefit of TDD.

➤ TDD ensures a high degree of fidelity between the code and the business requirements. If your requirements are written as tests, and your tests all pass, you can say with a high degree of confidence that your code meets the needs of the business.

➤ TDD encourages the creation of simpler, more focused libraries and APIs. TDD turns development a bit on its head, because the developer writing the interface to the library or API is also its first consumer. This gives you a new perspective on how the interface should be written, and you know instantly if the interface makes sense.

➤ TDD encourages communication with the business. To create these tests, you are encouraged
to interact with the business users. This way, you can make sure that the input and output combinations make sense, and you can help the users understand what they are building.

➤ TDD helps keep unused code out of the system. Most developers have written applications in which they designed interfaces and wrote methods based on what might happen. This leads to systems with large parts of code or functionality that are never used. This code is expensive. You expend effort writing it, and even though that code does nothing, it still has to be maintained. It also makes things cluttered, distracting you from the important working code. TDD helps keep this parasite code out of your system.

➤ TDD provides built-in regression testing. As changes are made to the system and your code, you always have the suite of tests you created to ensure that tomorrow's changes do not damage today's functionality.

➤ TDD puts a stop to recurring bugs. You've probably been in situations where you are developing a system and the same bug seems to come back from QA repeatedly. You think you've finally tracked it down and put a stop to it, only to see it return two weeks later. With TDD, as soon as a defect is reported, a new test is written to expose it. When this test passes, and continues to pass, you know the defect is gone for good.

➤ When you develop applications with testability in mind, the result is an architecture that is open, extensible, and flexible. Dependency injection is a key component of both TDD and a loosely coupled architecture, which results in a system that, by virtue of its architecture, is robust, easy to change, and resistant to defects.

Friday, September 11, 2015


The history of TDD starts in 1999 with a group of developers who championed a set of concepts known as Extreme Programming (XP). XP is an agile methodology based on recognizing which practices in software development are beneficial and dedicating the bulk of the developers' time and effort to those practices, under the philosophy "if some is good, more is better." A key component of XP is test-first programming. TDD grew out of XP as some developers found they were not ready to embrace some of the (at the time) more radical concepts, yet found the promise of improved quality delivered by the practice of TDD compelling.

As mentioned, agile methodologies do not incorporate a big upfront design. Business requirements
are distilled into features for the system. The detailed design for each feature is done when the
feature is scheduled. Features, and their resulting libraries and code, are kept short and simple.

TDD as a Design Methodology
When used as an application design methodology, TDD works best when the business user is engaged in the process to help the developer define the logic being created, sometimes going so far as to define a set of inputs and their expected outputs. This is necessary to ensure that the developers understand the business requirements behind the feature they are developing. TDD ensures that the final product is in line with the needs of the business. It also helps ensure that the scope of the feature is adhered to and helps the developer understand what "done" really means with respect to the current feature in development.

TDD as a Development Practice
As a development practice, TDD is deceptively simple. Unlike development you’ve done in the past,
where you may sit down and start by creating a window, a web page, or even a class, in TDD you
start by writing a test. This is known as test-first development, and initially it might seem a bit
awkward. However, by writing your test first, what you really are doing is creating the requirement
you are designing for in code. As you work with the business user to define what these tests should
be, you create an executable version of the requirement that is composed of your test. Until these
tests pass, your code does not satisfy the business requirement.

When you write your first test, the first indication that it fails is the fact that the application does
not compile. This is because your test is attempting to instantiate a class that has not been defined,
or it wants to use a method on an object that does not exist. The first step is simply to create the
class you are testing and define whatever method on that class you are attempting to test. At this
point your test will still fail, because the class and method you just created don't do anything.
The next step is to write just enough code to make your test pass. This should be the simplest code
you can create that causes the test to pass. The goal is not to write code based on what might be
coming in the requirement. Until that requirement changes, or a test is added to expose that lack
of functionality, it doesn’t get written. This prevents you from writing overly complicated code where a simple algorithm would suffice. Remember, one of the goals of TDD is to create code that is
easy to understand and maintain.
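The steps just described can be sketched in a few lines; the ShippingCalculator requirement here is a hypothetical example, not one from the text, and the comments mark each step of the cycle.

```python
# Step 1: write the test first. At this point it cannot even run --
# ShippingCalculator does not exist yet, and that is the first "failure".
def test_orders_over_fifty_dollars_ship_free():
    calculator = ShippingCalculator()
    assert calculator.cost_for(order_total=60.0) == 0.0


# Step 2: define the class and method so the test executes (it would
# still fail if cost_for did nothing useful).
# Step 3: write just enough code to make the test pass -- no more.
class ShippingCalculator:
    def cost_for(self, order_total):
        if order_total > 50.0:
            return 0.0
        return 5.0  # flat rate; it exists only because a test demanded it


test_orders_over_fifty_dollars_ship_free()
```

Notice that nothing speculative was added: there is no handling of discounts, currencies, or shipping zones, because no test has demanded them yet.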

As soon as your first test is passing, add more tests. You should try to have enough tests to ensure
that all the requirements of the feature being tested are being met. As part of this process, you want
to ensure that you are testing your methods for multiple input combinations. This includes values
that fall outside the approved range. These are called negative tests. If your requirement says that
your interest calculation method should handle only percentage rates up to 20%, see what happens
if you try to call it with 21%. Usually this should cause an exception of some sort to be thrown. If
your method takes string arguments, what happens if you pass in an empty string? What happens
if you pass in nulls? Although it's important to keep your tests inside the realm of reality, triangulating tests to ensure the durability of your code is important too. When the entire requirement has been expressed in tests, and all the tests pass, you're done.
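A negative test for the interest-rate example above might look like the following sketch; the function name and its formula are hypothetical, mirroring only the stated 20% cap.

```python
def monthly_interest(principal, annual_rate_percent):
    """Hypothetical calculation method; the requirement caps rates at 20%."""
    if not 0 <= annual_rate_percent <= 20:
        raise ValueError("rate must be between 0% and 20%")
    return principal * (annual_rate_percent / 100) / 12


# Positive test: a value inside the approved range.
assert round(monthly_interest(1200, 12), 2) == 12.0

# Negative test: a value just outside the approved range should raise.
try:
    monthly_interest(1200, 21)
    assert False, "expected a ValueError for a rate above 20%"
except ValueError:
    pass
```

The negative test pins down the boundary behavior: if someone later relaxes or removes the range check by accident, this test fails immediately.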

Thursday, September 10, 2015

The Principles and Practices of Test-Driven Development

These methodologies are all different in how they are implemented, but they share some common characteristics:

➤ They all make communication across the team a high priority. Developers, business users,
and testers are all encouraged to communicate frequently.

➤ They focus on transparency in the project. The development team does not live in a black
box that obscures their actions from the rest of the team. They use very public artifacts
(a Kanban board, a big visible chart, and so on) to keep the team informed.

➤ The members of the team are all accountable to each other. The team does not succeed or
fail because of one person; they either succeed or fail as a team.

➤ Individual developers do not own sections of the code base. The whole team owns the entire
code base, and everyone is responsible for its quality.

➤ Work is done in short, iterative development cycles, ideally with a release at the end of
each cycle.

➤ The ability to handle change is a cornerstone of the methodology.

➤ Broad strokes of a system are defined up front, but detailed design is deferred until the feature is actually scheduled to be developed.

Agile methodologies are not a silver bullet. They are also not about chaos or "cowboy coding." In fact, agile methodologies require a high degree of discipline to administer correctly. Furthermore, no one true agile methodology exists. Ultimately, each team needs to do what works best for them. This may mean starting with a branded agile methodology and changing it, or combining aspects of several. You should constantly evaluate your methodology and do more of what works and less of what doesn't.

Thursday, September 4, 2014

Cybercrime, Encryption, and Government Surveillance

The crime sounded alarming: an audacious theft of 1 million credit card numbers from numerous e-commerce sites stretched across twenty states. This disturbing incident, announced by law-enforcement authorities in March 2001, was described by the FBI as the largest organized criminal attack on the Internet to date. The FBI devoted considerable resources to the case, but its quarry proved elusive. At first, some thought this had to be the work of ingenious hackers, but as the FBI untangled the details of the crime it discovered that these hackers were not so ingenious after all. They merely exploited security flaws: unpatched vulnerabilities in the Windows NT operating system. Microsoft had provided patches (or fixes) for these problems in 1998, but the victims carelessly failed to install them. Had these e-businesses been more assiduous about security, it is quite likely that this costly theft could have been prevented (Levitt 2001).

High-profile cybercrimes that underscore the Net's vulnerability are frequently the subject of headlines in major publications. The Wall Street Journal proclaimed the Internet "Under Siege" (Hamilton and Cloud 2000) as it described how cyberterrorists had temporarily paralyzed some of the country's biggest Web sites through a denial-of-service attack. The technique is relatively simple, but the results can be catastrophic. Denial of service now joins a long list of other weapons that "black hat" hackers or crackers use to disrupt Web sites. These include packet sniffers, trojan horses, and malicious applets. Many companies fall prey to these damaging technologies despite their renewed vigilance and their heavy investment in security systems.

Privacy and intellectual property rights will be meaningless unless we can adequately secure the Net and thwart the efforts of those who engage in criminal activity. Also, as observed in Chapter 4, Internet commerce is unlikely to flourish in an environment rife with crime and theft. There must be a level of trust, but how can we achieve this trust with the opaqueness of so many Internet relationships and transactions?

In this final chapter we will cover some of the legal and technical background central to developing a lucid analysis of security and related policy issues. After a cursory overview of the Net's vulnerabilities and cybercrime, we turn to the new frontiers for law enforcement in cyberspace. Special focus will be on the encryption controversy in the United States, the uneasy issues raised by government surveillance, and the use of technologies such as the FBI's Carnivore. These issues have obviously assumed greater import thanks to the events of September 11. The problem is that some of the architectures used to secure the Net and protect privacy give succor to criminals and terrorists. Society must make difficult trade-offs between privacy and anonymity and the need for an Internet infrastructure that permits electronic surveillance by law-enforcement authorities. We will carefully look at how these tradeoffs have been managed so far and how the balance between security and liberty may need to be recalibrated to help in the struggle against terrorism.

We then shift focus to the topic of digital identity as a way to promote trust and security. Mandating digital identity as a means of assuring authentication appears to have the force of inevitability, but is it a sound and responsible idea? We will argue that code has a role to play in resolving this problem, since there are architectures that can authenticate without creating a privacy hazard. Finally, we conclude with a laconic discussion on whether security achieved through architectures is the best path to a more trustworthy Internet.


Sunday, March 2, 2014


In the business and software engineering literature, the word agile is more commonly an adjective, and the word agility more commonly a noun, than the name of an accurately defined set of principles, let alone the name of a methodology with a well-defined set of methods and techniques. Consequently, it is easy to find a substantial number of definitions of these terms in different contexts. Kettunen (2009) lists seventeen different definitions of agility in the business literature from the years 1995-2008. Further, he points out definitions for strategic, business, enterprise, organization, workforce, IT, manufacturing, and supply chain agility, and indicates that there are several definitions of agility in specific business areas as well.

In summary, those 17 definitions of agility all contain change, or the response to change, as a key characteristic. Nearly all of them indicate customer value or customer involvement as a key characteristic. High quality is expressed in 5 definitions. Innovation, effectiveness, high performance, nimbleness, competitive capability, profitability, simplicity of practices, dexterity of performance, cost efficiency, quickness, resiliency, robustness, adaptiveness, and lightness are also mentioned in some definitions.

Agility in software development is no exception to this diversity. Kettunen (2009) lists these definitions:

- Quick delivery, quick adaptations to changes in requirements and surrounding environments (Aoyama, 1998);

- Being effective and maneuverable; Use of light-but-sufficient rules of project behavior and the use of human and communication-oriented rules (Cockburn, 2002);

- Ability to both create and respond to change in order to profit in a turbulent business environment (Highsmith, 2002);

- Ability to expedite (Anderson, 2004);

- Rapid and flexible response to change (Larman, 2004);

- Building software by empowering and trusting people, acknowledging change as a norm, and promoting constant feedback; producing more valuable functionality faster (Schuh, 2005);

- Discovery and adoption of multiple types of ISD innovations through garnering and utilizing agile sense and respond capabilities (Lyytinen & Rose, 2006);

- Uses feedback to make constant adjustments in a highly collaborative environment (Subramaniam & Hunt, 2006);

- Iterative and incremental (evolutionary) approach to software development which is performed in a highly collaborative manner by self-organizing teams with "just enough" ceremony that produces high quality software in a cost effective and timely manner which meets the changing needs of its stakeholders (Ambler, 2007);

- Capability to accommodate uncertain or changing needs up to a late stage of the development (until the start of the last iterative development cycle of the release) (IEEE, 2007);

- Conceptual framework for software engineering that promotes development iterations throughout the life-cycle of the project (Wikipedia, 2007).

When the basic terminology offers such variability in definition, it is no wonder that similar diversity can be found in the understanding of the agile principles as well. Similar variation is also visible in the large number of agile methodologies. In this text, agile is a common adjective characterizing whatever word it is connected with. Agile approach is the common name for the agile way of developing software, complying with the agile principles. Agile methodology means some defined and named way of doing agile development; for example, XP (Beck, 2000) is called a methodology.

Originally, the agile approach to software development was considered a team practice, which is discussed first in this chapter. Then the two dominant ways of expanding the agile approach to cover a wider scope of the software development life cycle in larger organizations are presented.

When the agile manifesto was published in 2001, it marked the start of an avalanche of adoption of the agile approach in software development.

Sunday, November 4, 2012

Building High-Performing Teams

Well-integrated, high-performing teams - those that 'click' - never lose sight of their goals and are largely self-sustaining. In fact, they seem to take on a life of their own. And it's all down to leadership.

In every case studied at the Europe-based Centre for Organizational Research, teams that 'click' always have a leader who creates the environment and establishes the operating principles and values that are conducive to high performance. The evidence for this is clearly seen in organizations where a manager who creates high performance moves to another part of the organization, or to a different organization, and within 18 months has once again established a high-performing team.

We believe these leaders operate in an organized, systematic way to build successful teams, and that the formula not only involves what leaders should say and do, but also what they should not say and do. It also involves working backwards - leaders should envisage the future before dealing with the present.

The four most significant behaviors consistently demonstrated by high-impact leaders are:

» defining clear goals or a vision of the future in accordance with overall organizational aims (the 'big picture')

» creating blueprints for action to achieve those goals

» using language to build trust, encourage forward thinking and create energy within the team ('powerful conversations')

» getting the right people involved ('passionate champions').

Imparting a clear vision of where the team should be headed, and inspiring its members to make it a reality, is fundamental to team success. The great American tennis player Arthur Ashe had a wonderful phrase: "I never worried about winning or losing. I just went for it every time." Leaders who get teams to click consistently have their members tied together and "going for it".

This takes considerable effort on the part of a leader, so it's useful to reflect on why it's worthwhile. As the English manager in a large aerospace company explained to me, "It's a lot of work to get a team to click. It's a lot more work to live with a team that isn't clicking." It's as if successful team leaders calculate the up-front investment and then adopt a process to get the team to pull together to maximize the return on that investment.

Here is what high-impact leaders do. They create a clear vision and describe it in simple language. They take the time to get people to subscribe, or buy in, to that vision. Next, they assess the current situation, then work through the courses of action which are likely to yield results. It is the up-front work in getting to a clear end state that makes the process work.

This foundation-laying aspect of leadership is a determining factor in why some teams seem to grasp and then do their utmost to achieve organizational goals. It's all about how the leader continually visualizes a positive end result. So, when things get tough for the team (as they always do), these extraordinary leaders reintroduce the big picture with phrases like: "Remember our objectives," and "Let's keep our eye on the ball". This consistent single strategy of starting with the future and then moving back to the present allows leaders to make the tough decisions which enable the team to recognize and articulate problems ("What's really up?" or, "What's really so?"), sort through possible solutions, and then take action.

Teams that consistently don't 'get it together' over a long period of time can put up tough opposition for leaders who want to move forward. We like to say that such teams get 'caught in the swamp'. Unfortunately, what they also do is pull others into the swamp with them.

From extensive research, we conclude that extraordinary leaders employ distinctive forms of verbal communication. It is what these leaders say and what they don't say that gives them an advantage in getting teams to high-performance levels. These leaders truly mean what they say. They don't mix their messages, fudge meanings or use ambiguous words. Their conversations are always candid, clear, and followed by committed action.

We call them 'powerful conversations' because they make blueprints come alive and create positive attitudes and energy on the part of team members. They also encourage mutual understanding between team members and the leader, and use language to make a vision seem real and worth attaining. A 'powerful conversation' typically progresses in four stages.

Stage 1: Before getting into the specific details of goals and objectives, high-impact leaders spend all the time that's needed on forming a clear vision (e.g., the general shape of a desired outcome or future state) which makes possible complete, undisputed acceptance of its attainability.

Stage 2: This entails a very candid and clear discussion of what people are thinking and feeling. The high-impact leader makes sure that everyone's agenda is heard and explored. He or she carefully asks questions to make sure there is a genuine expression of beliefs, expectations and even fears, while also patiently ensuring that the conversation remains relevant to the big picture. This keeps all those involved out of the swamp, and enables them to set up a useful and realistic agenda. Once this is done, the high-impact leader assesses the agenda.

Stage 3: The high-impact leader now skillfully discusses with team members the issues enmeshed in their proposed agenda. In this way, the leader can deepen his or her understanding of the team's goals and bring to the surface any hidden agendas. The high-impact leader describes scenarios linking future outcomes with the current situation, then proceeds to refine them. He or she continues to keep the process focused on the target future state, and helps the team to see how far it has moved and what progress it has made.

Stage 4: The leader makes sure participants know exactly what steps need to be taken next, and that they are open about what they will do to turn their commitments into reality - making the team 'alive'. The closing of a powerful conversation is also the time when a leader makes sure there is absolute buy-in, or belief in what the team is setting out to do, that team members' commitments are clear and accepted, that all action steps are well-defined and understood. In this way, the high-impact leader ensures that the powerful conversation will produce results.

These are the four most significant behaviors consistently demonstrated by high-impact leaders. But they are not the only such behaviors. What follows is a less detailed but fuller list of what leaders should do to get people to work together to attain organizational goals.


Source of Information: Phil Harkins, 10 Leadership Techniques for Building High-Performing Teams.

Friday, August 3, 2012

Tracking Domain Rank and Page Rank

Domain rank and page rank capture the value, authority, and trustworthiness of a site or page. These metrics play a key role in determining how you will rank in the search results. Domain rank looks at the entire site and how authoritative it is, while page rank looks at a specific page. Google only provides PageRank data, and this data is usually updated only a few times a year. To get at this data, we need to use third-party tools that approximate how Google and Bing perceive the weight and importance of a page. The more authoritative a page is perceived to be, the more authority it has to redistribute.

Think of page rank as a cup that can be filled up. Once it's filled, it can then fill up several other cups (in this case, other pages). It does this through linking to other pages. Every link on a page can fill up another page a little bit, but no cup is ever emptied completely, so each page retains a bit of its rank as well. The result is that the further a page is, in links, from an authoritative site, the less of that site's value will trickle down to that page.

When page rank is passed, it is split evenly across all links, including nofollow links; a page with 10 links, two of which are nofollow, would still pass only one tenth of its authority through each of the other eight links. This means that adding nofollow to links to third-party sites does not help you retain page rank in your site. The only advantage a nofollow may have is to deter spammers from posting many links in a public forum, since those links pass nothing off-site.
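The even-split arithmetic just described can be sketched as follows; the function name and the rank value of 1.0 are illustrative, not any search engine's actual formula.

```python
def passed_rank_per_link(page_rank, total_links, nofollow_links):
    """Even-split model: rank divides across ALL links on the page,
    and the share assigned to nofollow links simply evaporates."""
    share = page_rank / total_links          # each link's slice, follow or not
    followed = total_links - nofollow_links  # only these links pass rank on
    return share, followed * share


# A page with rank 1.0 and 10 links, 2 of them nofollow:
share, total_passed = passed_rank_per_link(1.0, 10, 2)
assert share == 0.1                     # each link is assigned one tenth
assert abs(total_passed - 0.8) < 1e-12  # only 8/10 of the rank flows out
```

Note that marking two links nofollow does not raise the remaining links' share to one eighth; the nofollow slices are simply lost, which is why nofollow is not a rank-hoarding tool.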

To track page rank data, SEOmoz provides two tools: Domain mozRank and mozRank. The values these tools return are not numbers provided by any search engine; instead, they are numbers SEOmoz has developed to estimate the value of pages. This data is found in the Open Site Explorer report, which provides details about inbound links, number of linking root domains, and more.

This data becomes very useful when trying to understand how and why certain pages rank in the positions they do. These are yet more analytics you can put into a keyword research diagram to figure out exactly what your chances are of ranking a page on a specific term. Page rank can be a very important factor, but domain rank is also important. For example, even if your site has the exact same content as Wikipedia, it is likely that Wikipedia will outrank you, simply because of the authority of its parent domain. To move past this top-ranking site, you would need to grow your page rank significantly beyond that of the Wikipedia page, to compensate for your less authoritative domain.

Tracking both your own and your competitors' page and domain ranks over time will also help you anticipate whether a competitor may overtake you on specific key terms that are important to your overall search strategy. Anticipating issues before they arise allows you to be proactive and ensure that you retain the ground you have gained, as well as understand what ground you are close to gaining.


Tuesday, July 31, 2012

Tracking Diversity of Links

Improving link diversity is one of the best ways to improve your rankings organically. A page with high link diversity receives a variety of links from many domains, through a variety of keywords. If you’re targeting a specific term, you may want to see some synonyms for the term you hope to rank on. Tracking link diversity can be beneficial to understanding your true potential to rank on a term. Understanding the link diversity of those ranking ahead of you also helps to paint this picture.

Sites that have a high level of link diversity may be difficult to overtake, or you may find the gap is just too big. At this point, though, you should realize that ranking on a term organically is not about just one factor, but a combination of factors. Link diversity is yet one more data point you can use to analyze the realistic opportunities you have for a specific keyword. Further, knowing how the overall domain performs based on bulk back links is another factor to consider in determining site authority.

Running Majestic SEO's Bulk Backlink Checker on a specific URL will show you how many links you have compared to a competitor on a specific page. Looking at a page-to-page comparison can provide greater insight at the micro level than looking at overall back links to a site. While the domain authority is important, so too is the page-specific data. Establishing both the volume of links and the anchor text will help you evaluate whether you have a chance of beating a competitor out of a search spot. Filter out all the nofollow links to get a better idea of how the search engines will interpret the data. If you find that you are close in links on a page, expand to the domain authority and look at how many inbound links you have from all sites by running a Bulk Backlinks report. Then look at the domain and page ranks to get an idea of the value of these links.