Saturday, July 11, 2009

Uncovering Myths about Globalization testing- English version on Localized setup

Myth 15: There is no use testing the English version of a product on Localized Operating systems

This myth has a little bit of background in Globalization testing Myth #3, i.e. that Globalization testing should actually start much before the International product is translated- that is, before the User Interface, documents etc. start showing up localized. The question here is how Internationalization testing is done when the translated product is not available. I will attempt to explain this in the points below-

Do the English version and the International version of a product get released for testing at the same time?
In more mature Software development life cycles, the code complete specific to the English version and to the International version of the product gets submitted at the same time, i.e. one single English build has the code changes specific to the International version as well.
In some Software development life cycles, the Internationalization specific changes get introduced only after the English build is released.

Can translation of the product happen before the English product's User Interface is frozen?
It is a well known fact that in an ideal scenario, the translation of the product's User Interface into the supported Localized languages happens only after the English product's User Interface is frozen.
* The English product's User Interface is frozen only after some cycles of testing in which the entire User Interface is tested in different setups (OS, browsers etc.) to ensure that all the User Interface specific issues are found and fixed before the different texts are translated. And even after the User Interface freeze, the translation activity actually takes quite a bit of time because it is usually a manual process and has its own cycles of reviews before the translation gets finalized.
* It is quite evident that a lot of time elapses between the first build with Internationalization specific code changes, the User Interface Freeze milestone and the actual translation of the product. If Internationalization testing does not start when the first build (usually an English build) is received, then many of the Internationalization specific issues will not get found until the Test team receives the translated build.

How can Internationalization testing happen when there is no translated software available?
* The answer to this question is testing the English version of the product on a Localized setup. Say a product supports the German and Japanese languages and runs on Windows XP, Windows Vista and Mac OS X 10.5- Internationalization testing in this case would involve testing the English product on German Windows XP with German Internet Explorer 7.0, testing the English product on Japanese Windows Vista with Japanese Firefox, and so on.

What kind of bugs does this type of testing (testing the English product on Localized Operating systems) help to find?
One may always argue that testing the English product on a Localized version of the Operating system will only surface English specific issues, because what we are essentially testing is the English product. This may be true to some extent but not entirely. Consider the situations below (a small sketch of the first one follows this list)-
* The product installation works fine when the English product is installed on an English Operating system, but fails when the English product is installed on a Spanish Operating system. The reason- the install path is hard coded in the product. The product usually gets installed in the "Program Files" folder on Windows, and the "Program Files" folder is called "Archivos de programa" on Spanish language systems.
* Data input using English characters, e.g. writing the name as "Anuj", works fine on an English Operating system. Using the same build on a Japanese Operating system and entering the name as "廃れる" fails. The reason- the product does not recognize Japanese language data.
These are just a few basic examples, and there can be many more instances of unique bugs to be found (I will cover this aspect in more detail in upcoming blogs).
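Here is a minimal sketch of the first situation (the application name "MyApp" and the helper function are hypothetical); it contrasts a hard coded install path with one resolved from the environment, which is what an English build needs in order to install cleanly on a Spanish Windows-

import os

def get_install_dir(app_name):
    # Resolve the install directory without hard coding "Program Files":
    # the ProgramFiles environment variable points to the correct folder on
    # any language version of Windows (e.g. "Archivos de programa" on older
    # Spanish systems), whereas a literal path may simply not exist there.
    program_files = os.environ.get("ProgramFiles", r"C:\Program Files")
    return os.path.join(program_files, app_name)

# The pattern the myth describes going wrong: a literal, English-only path.
hard_coded_path = r"C:\Program Files\MyApp"

print(get_install_dir("MyApp"))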

Keep testing passionately and do provide your feedback to me!

Friday, July 3, 2009

Uncovering Myths...."Security Testing is from Mars and Globalization Testing is from Venus"

This post is a continuation of my previous post and is based on the real-life myths about Globalization testing as I have experienced them.

Myth 14- Security Testing is from Mars and Globalization testing is from Venus

Introduction:
One of the intriguing areas in the sphere of Software Globalization testing is planning and performing testing of an international application from the security perspective. One of the popular myths, or rather assumptions, about Globalization testing is that internationalized software applications have no particular exposure as far as Software Security is concerned, and thus Security testing is not required on an International application. While this may be true in certain contexts, there is also a large possibility that system security gets compromised because of incorrect assumptions about the topic. Security related bugs usually have a high business impact and are (in most cases) costlier to fix than related functional bugs.
Without doubt this is a much broader topic than can reasonably be covered in one article alone; this article is primarily an attempt to put together a "perspective" on how different security related aspects may impact International Software applications.

A background in possible Security considerations for International applications:
Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language. In many ways the emergence of the Unicode standard has changed the way Software Internationalization is perceived and carried out in product design, and it has been one of the significant advancements over past encoding systems. Unicode 5.1.0 contains over 100,000 characters and encompasses a large number of the world's writing systems; faulty usage of those characters may open the door to security attacks. The Unicode consortium groups the major security threats related to International applications into 2 major categories-
1) Visual security issues
2) Non Visual security issues

Visual security threats:
The threats under this category are essentially visual spoofs. Since Unicode consists of a myriad of characters, there is a good probability of a layman user coming across visually confusing strings. There are no hard-and-fast rules for visual confusability- many characters look like others when rendered at sufficiently small sizes, in different fonts, or in particular sequences of characters. For example, the sequence "rn" ("r" followed by "n") is visually confusable with "m" in many sans-serif fonts.
As security expert Eric Johanson mentions in an advisory, a security weakness in a standard for handling special character sets in domain names could let an attacker spoof Web sites. There are now many ways to display any domain name in a browser, as there are a huge number of characters which look very similar to Latin characters. The advisory demonstrates the attack using the domain for PayPal, but with an alternate Unicode character for the first "a". That gives an address that looks like "http://www.pàypal.com", with an "à" in place of the "a". This can enable an attacker to create a fake Web site for a phishing scam.
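A tiny illustration (a sketch, not the advisory's exact payload) of why this matters for testing: the two strings below render almost identically in many fonts, yet compare as different, which is exactly what such a spoof relies on-

import unicodedata

genuine = "paypal.com"
spoof = "p\u00e0ypal.com"   # the first "a" replaced by U+00E0

print(genuine == spoof)     # False: visually similar, programmatically different strings
for ch in set(spoof) - set(genuine):
    print(hex(ord(ch)), unicodedata.name(ch))   # 0xe0 LATIN SMALL LETTER A WITH GRAVE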

Non Visual security threats:
Non Visual security threats primarily deal with how Unicode data is interpreted by the system. There are different security flaws that can be exposed by careless handling of Unicode data by the system. Some such attacks include-
UTF-8 Exploits, Text Comparison, Buffer Overflows, SQL Injection Vulnerabilities, Cross-Site Scripting, Format String vulnerabilities etc.

Potential Security tests on International applications:
To keep their usage simple, the Security tests on International applications can be divided into-

1) Security tests based on Functional requirements
2) Security tests based on Non-Functional requirements

Security tests based on Functional requirements:
These tests validate the security sensitive functional portions of the system. The tests that primarily fall under this category pertain to the application's-
a) Authentication
b) Authorization

Authentication and Authorization tests usually go hand in hand.

If the application is going to be used in International markets, then the relevant Security tests here would be (a small data-generation sketch follows this list)-
- To use the International characters of the supported languages. From the Security perspective, depending upon the reach of the product, the test characters can also come from languages not supported by the product but supported by Unicode.
- If the product authorization depends upon the presence or absence of a dependent application, for an international application it makes sense to use the Localized versions of those applications.
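As a rough sketch of the first point (the sample strings and the supported/unsupported split are assumptions for illustration only), the authentication test data could be generated along these lines-

# Mix characters from the product's supported languages with characters from
# scripts it does not claim to support but which Unicode does.
auth_test_inputs = [
    "anuj",        # ASCII baseline
    "müller",      # German - assumed to be a supported language
    "山田太郎",     # Japanese - assumed to be a supported language
    "Ελένη",       # Greek - assumed NOT to be a supported language
    "עברית",       # Hebrew (right-to-left) - assumed NOT to be supported
]

for value in auth_test_inputs:
    # Each value would be fed to the login/authorization checks under test;
    # the application should accept or reject it gracefully, never crash or
    # silently truncate it.
    print(repr(value), len(value), "characters,",
          len(value.encode("utf-8")), "UTF-8 bytes")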

Security tests based on Non-Functional requirements
A non-functional security requirement can be something like "The system should not be compromised." As is clear from the wording, this requirement is not associated with a specific feature; it is very generic, yet important. In order to look at the security aspects of an application accurately, it is necessary to have a holistic view of the situation. The predominant challenge is that if the requirement is as vague as the one mentioned above, there is no single simple test that can be performed to make sure the security requirement is met.
One possible approach to finding such vulnerabilities is to generate flaw specific test ideas. The obvious question is which flaws to consider. The listing below mentions a few flaws that can potentially have an impact on Internationalized applications, along with some ideas on how tests can unveil them. This is by no means the complete list of Security tests for International applications.

a) Buffer Overflow vulnerability:
o About the Buffer overflow vulnerability-
Buffer overflows occur in software written in programming languages that do not strictly enforce bounds checking on arrays. The basic concept of a buffer overflow is that the application is provided with more data to be stored in a particular variable than the programmer set aside space for. When this happens, it is likely that the application writes past the bounds of the variable's buffer, allowing an attacker to change the value of other data stored in memory and even execute malicious commands. It is easiest for an attacker to perform buffer overflow attacks on the stack. These attacks can also happen on the heap but, given the dynamic nature of the heap, they are usually harder to simulate.

o How can this vulnerability impact Localized applications-
One of the possible ways this vulnerability can impact Localized applications is that the application may have the typical checks built in for ASCII text, but entering Unicode data may expose a buffer overflow vulnerability.

o Possible tests on Localized applications to unveil Buffer overflow vulnerability-
1. Identify the areas of the application that are potentially vulnerable. Mostly, these would be all the areas where the application accepts user input, and particularly the areas that are exposed to a wider audience. Potential candidates for this type of vulnerability would be any text fields within the application that do not have an input validation check.

2. The Localized application might do the input validation on text fields by the number of characters supported, e.g. say an input field is programmed to accept a maximum of 45 characters for the name field. If the character input is in English, the 45 characters will amount to 45 bytes with UTF-8 encoding. But if the character input is in Japanese, then depending upon the characters entered, one character might take 3 bytes- which amounts to 45*3= 135 bytes of "acceptable" data input (see the sketch after this list). Such an application is a potential candidate for buffer overflow attacks, as this gives the attacker an opportunity to input malicious code along with the input text.

3. Depending upon the underlying encoding system in use, the number of bytes a character occupies varies, e.g. the character "は" occupies 2 bytes in UTF-16, 3 bytes in UTF-8 and 5 bytes in UTF-7. Thus, using character driven Fuzz techniques, test data can be generated to simulate a situation where the character data exceeds the available buffer space.

4. Converting strings between different character encodings (such as SBCS, MBCS, Unicode, UTF-8, and UTF-16) may produce a buffer size mismatch. Being aware of the areas of the application where such conversions happen may help the test team focus on finding this vulnerability.

5. If the application reads certain text or data embedded in a communications protocol, this source can be populated with localized text to simulate buffer overflow attacks, e.g. through some internal operation of the program the string may expand- which requires enlargement of the buffer. Strings may expand in casing: Fluß → FLUSS.
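Picking up the character-count versus byte-count gap from point 2, here is a minimal sketch of a check a tester can script (the 45 character limit and the fixed 45 byte back end buffer are assumptions for illustration)-

MAX_CHARS = 45       # what the UI level validation enforces
BUFFER_BYTES = 45    # what a (hypothetical) fixed size back end buffer can hold

name_english = "A" * 45          # 45 characters -> 45 bytes in UTF-8
name_japanese = "\u5ec3" * 45    # 45 characters -> 135 bytes in UTF-8

for name in (name_english, name_japanese):
    encoded = name.encode("utf-8")
    print(len(name), "characters,", len(encoded), "bytes,",
          "passes character check:", len(name) <= MAX_CHARS,
          "fits the byte buffer:", len(encoded) <= BUFFER_BYTES)
# The Japanese input passes the character check but overflows the byte budget-
# exactly the gap an attacker can try to exploit.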

b) SQL Injection vulnerability:
o About the SQL Injection vulnerability-
SQL injection is an attack in which malicious code is inserted into strings that are later passed to an instance of a database server for parsing and execution. The primary form of SQL injection consists of direct insertion of code into user-input variables that are concatenated with SQL commands and executed.

o How can this vulnerability impact Localized applications-
This vulnerability can be tested in applications that have an interface to a database. If a localized application uses a database with a localized schema, then this vulnerability (if it exists) might be exposed by entering alternate encodings of the potentially problematic characters such as the apostrophe, quotation mark, comma, brackets etc.

o Possible tests on Localized applications to unveil SQL Injection vulnerability-
1. Make a note of all the user input fields that commit the data to the database.
2. Generate test data that includes data changing or even schema changing commands. There are a lot of publicly available SQL vulnerability cheat sheets that can help generate the relevant test data depending upon the database being used.
3. One of the key changes that could be made to the test data is to use the equivalent characters in the different supported languages.
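A hedged sketch of tests 1-3, using Python's built-in sqlite3 purely for illustration (the table, the data and the payloads are made up): the concatenated query leaks every row, the parameterized query does not, and the same probing can then be repeated with non-ASCII "equivalents" of the apostrophe-

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("anuj",), ("müller",), ("山田",)])

payload = "x' OR '1'='1"   # classic injection string built on the ASCII apostrophe

# Vulnerable pattern: user input concatenated straight into the SQL text.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'").fetchall()
# Safer pattern: parameter binding, so the payload is treated purely as data.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()

print(vulnerable)     # all three rows leak out
print(parameterized)  # [] - nothing literally matches the payload string

# Localized twist: repeat the probe with "equivalent" characters, e.g. U+FF07
# FULLWIDTH APOSTROPHE; a filter that only strips the ASCII ' may let these
# through, and a later normalization step may turn them back into the
# dangerous form.
alt_payload = "x\uff07 OR \uff071\uff07=\uff071"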

c) Other Security vulnerabilities:
One of the more methodical approaches to Security testing of International applications is to commence with the creation of an appropriate threat model. A threat model is a description of a set of security aspects; that is, when looking at a piece of software (or any computer system), one can define a threat model by defining a set of possible attacks to consider.
There are other notable Security vulnerabilities that can potentially impact the security of International applications (and can be considered in threat modeling), such as the Format String vulnerability and canonicalization exploits. With more software vulnerabilities being found with each passing day, the list of vulnerabilities affecting International applications can never be fixed; it will always keep growing.
Local governments may have their own specific security requirements. For example, any product that either uses or implements cryptography for confidentiality must obtain necessary approvals from the French government prior to shipping to France.

Epilogue:
It is quite evident that International applications do bring unique challenges from the Security perspective. There is a certain intersection between Security testing and Globalization testing, something that cannot be ignored. The adage "Security testing is from Mars and Globalization testing is from Venus" is possibly not quite right, and this is certainly one area that is waiting to be explored and researched further.

References:
http://www.unicode.org/
http://www.isecpartners.com/files/iSEC_Scott_Stender_Blind_Security_Testing.bh07.pdf

Friday, June 5, 2009

Uncovering Myths about Globalization testing- Input validation testing 2

This post is a continuation of my previous post on the same topic and is based on the real-life myths about Globalization testing as I have experienced them.
In my previous post, I talked about how the byte count varies across different Unicode representations. Any tester reading this may have a few further questions here-
- What is the right approach to come up with the localized test data?
- Once I have the test data, how do I know how many bytes a given localized character occupies, assuming I know the type of Unicode representation (UTF-8, UTF-16, UTF-32 etc.)?
The answer to the first question is in itself a very broad topic and I do plan to cover it in my future posts.
The second question, i.e. working out how many bytes a character occupies for a given encoding, is equally interesting. I have found a general testing tool created by Bj Rollison, known as String Decoder, quite useful.
Bj Rollison's website has a great amount of detail, user guides etc. about this great utility for Internationalization testing.
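If you just want a quick answer on a machine with Python installed, a few lines of code (not the String Decoder tool itself, only an illustration of the kind of information such a utility reports) can print the per-character byte counts-

def byte_counts(text, encodings=("utf-8", "utf-16-le", "utf-32-le", "utf-7")):
    # Print how many bytes each character of `text` occupies in each encoding.
    for ch in text:
        counts = {enc: len(ch.encode(enc)) for enc in encodings}
        print(f"U+{ord(ch):04X} {ch!r}: {counts}")

byte_counts("Aä廃")
# U+0041 'A': {'utf-8': 1, 'utf-16-le': 2, 'utf-32-le': 4, 'utf-7': 1}
# U+00E4 'ä': {'utf-8': 2, 'utf-16-le': 2, 'utf-32-le': 4, 'utf-7': 5}
# U+5EC3 '廃': {'utf-8': 3, 'utf-16-le': 2, 'utf-32-le': 4, 'utf-7': 5}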

Tuesday, May 26, 2009

Uncovering Myths about Globalization testing- Input validation testing

This post is a continuation of my previous post on the same topic and is based on the real-life myths about Globalization testing as I have experienced them.
Myth 13- A tester can perform tests specific to text inputs for Localized applications using the same approaches as for English language testing

Introduction:
Testing specific to input field validation is an important form of testing in any Software application. As an example of such testing, suppose an application has a text field for entering Credit card details; a tester can test it with various possible inputs to ensure that valid data is accepted by the application and that the user is presented with a clear message when the input is incorrect.

There are several techniques that can be used to test this aspect of the application properly. Some available resources are listed below-
http://www.testingeducation.com/BBST/Domain.html
Book: Lessons Learned in Software Testing Chapter-3 Testing Techniques Section- "How to create a Test Matrix for an Input field"

What is an encoding system ?:
The known techniques do talk about the usage of various types of inputs, including language reserved characters, i.e. the characters specific to any language that a Software application may support, such as German, Japanese etc., since these languages have their own writing systems and character sets. It is of utmost importance to test a Localized application with the language specific characters, as any user in the product's supported countries would expect the application to support data processing in their own native language. For example, a Japanese user of an email client would expect the application to support writing emails in Japanese; otherwise the customer may not find the application worthwhile at all.
However, one important aspect specific to Localized data processing that the known techniques do not specifically talk about is the dependency the Localized data has on the underlying encoding system of the application. If you are new to the term "encoding system", please read the mini description below from www.unicode.org-
Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language.
Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers.

Does Unicode have different representations ?:
Unicode is actually an encoding system that encompasses virtually all the known character sets from different languages. There are several possible representations of Unicode data, including UTF-7, UTF-8, UTF-16, UTF-32 etc. Each of these representations has its own advantages and disadvantages depending upon the context, e.g.
UTF-8 is most common on the web. UTF-16 is used by Java and Windows. UTF-32 is used by various Unix systems.

Does encoding system representation affect the test data size ?:
One important fact to consider when testing the input character set of a localized application is to know what type of encoding system is being used beneath. The reason it is so important to know the underlying encoding system is that the number of bytes occupied by a given character varies depending upon the encoding system used. Let's take a closer look at this statement by means of an example-
Consider the following character from the German language: "ä". The byte count of this character, depending upon the encoding system used, is as follows-
UTF-16 Byte count for "ä"= 2
UTF-8 Byte count for "ä"= 2
UTF-7 Byte count for "ä"= 5

The above example shows that the encoding system does have a bearing on the number of bytes occupied by a particular test character.

Different ways of Input text validation- No. of Bytes vs. No. of characters ?:
The next important factor to establish before performing input validation testing on Localized applications is whether the validation logic works on the number of bytes or on the number of characters. Let's take a closer look at this statement by means of an example-
Suppose there is an application with a text field, say Username. The usual assumption is that the validation will be done by number of characters, say the "Username" field will support a maximum of 10 characters and a minimum of 3 characters.
Suppose a tester uses "ääääääääää" as the test data for "Username" and the application is using UTF-7 as its encoding system. If the validation is done by number of characters, then this is valid test data as it represents 10 characters. If the validation is done by number of bytes, then it may not be valid data (depending upon the byte limit set), as the test data in this example may amount to as many as 50 bytes.

Thus, it is important to ascertain the validation rules before you start testing.
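A minimal sketch of the two competing rules (the field limits are assumptions, and UTF-8 is used for the byte rule simply to keep the arithmetic easy to follow)-

def valid_by_characters(value, min_chars=3, max_chars=10):
    return min_chars <= len(value) <= max_chars

def valid_by_bytes(value, max_bytes=10, encoding="utf-8"):
    return len(value.encode(encoding)) <= max_bytes

username = "ä" * 10
print(len(username), "characters /", len(username.encode("utf-8")), "UTF-8 bytes")
print("character rule:", valid_by_characters(username))   # True  - exactly 10 characters
print("byte rule:     ", valid_by_bytes(username))         # False - 20 bytes exceed the limit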

Summary:
So, before you perform input validation testing, or even generate test data, for a Localized application, ensure that you know the following-
- The encoding system used by the application
- The validation rule- does the application validate the data by bytes or by number of characters?

Sunday, May 24, 2009

Uncovering Myths about Globalization testing- Demystifying MUI Packs

This post is a continuation of my previous post on the same topic and is based on the real-life myths about Globalization testing as I have experienced them.

Myth 12: Testing International applications using "Microsoft's MUI Pack" or "Localized OS installation" means one and the same thing

Before we get into the fact underlying this myth, let's understand what an MUI Pack is and its utility in International software design.

As Wikipedia defines MUI-
" Multilingual User Interface (MUI) is the name of a Microsoft technology for Microsoft Windows, Microsoft Office and other applications that allows for the installation of multiple interface languages on a single system. On a system with MUI, each user would be able to select his or her own preferred display language. MUI technology was introduced with Windows 2000."

For example, consider a user who is running the Windows XP Pro English version and who, for some reason, wants to change the XP Pro User Interface to the German language- this can actually be achieved by installing the MUI pack on the English XP Pro, which gives the user the flexibility to change the User Interface language.
One of the practical scenarios where MUI packs can prove to be of great utility is in support organizations. With the unique business model that Software products offer, it takes no time for a successful software product to be made available in different countries (of course after proper Internationalization engineering). In such a scenario, suppose the product has a lot of penetration in the German market while the support organization is located in China. If a Chinese support engineer is troubleshooting an issue online with a German customer, he may need to see the application in English, in German, or in a more familiar Chinese OS environment. This is where MUI packs can help! If the MUI packs are installed, then the support engineer can change the language quite easily at run time.
Just a note that there is a notable difference in the way MUI was handled in the pre-Vista and post-Vista eras. More information here.

With this background about MUI in mind, let's take a crack at the Myth- "Testing International applications using 'Microsoft's MUI Pack' or 'Localized OS installation' means one and the same thing".

In order to test International software, the Microsoft Operating System setup can largely be created in these 2 broad ways-
1. Microsoft offers different ISOs for different languages, e.g. to create a Japanese Win XP machine from scratch, one can install the Win XP Japanese ISO and prepare what is referred to as a "Localized OS installation" in the statement above.
2. Another possible way is to install the English Windows XP and then install the Japanese MUI pack on top of it, which results in the User Interface elements being changed to Japanese.

Though the test setups created using both these methods provide a broadly similar user experience, there are some fundamental technical differences between the two types of setup.

Considering these differences, there may be a certain consistency in the UI display when an application is installed on these 2 different types of setup, but I18N testing may still produce different results.
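One practical way to tell the two setups apart from a test machine is to compare the display (UI) language with the underlying system locale: on an MUI setup they can legitimately differ, while on a fully Localized installation they normally match. A Windows-only sketch using documented Win32 calls via ctypes-

import ctypes

kernel32 = ctypes.windll.kernel32
ui_language_id = kernel32.GetUserDefaultUILanguage()   # language of the UI being displayed
system_locale_id = kernel32.GetSystemDefaultLCID()     # locale the OS itself was set up with

print(f"UI language id:   0x{ui_language_id:04X}")     # e.g. 0x0411 for Japanese
print(f"System locale id: 0x{system_locale_id:04X}")   # may still be 0x0409 (English - US) on an English OS with a Japanese MUI pack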

Tuesday, April 7, 2009

Unveiling the Mysterious World of an Ethical hacker

One of my recent articles got published in TheSmartTechie Magazine. Here's the unedited article for your reading pleasure-

The Ethical Hacker Snapshot
What is the first thing that comes to your mind when you think of the word 'hacker'? Let me attempt to draw a snapshot here; a kid in his late teens or early 20s with modern mannerisms - wearing a turned-around cap, ruffled hair and spectacles, and dressed casually in jeans and a t-shirt. Someone who looks a bit immature in his mannerisms but at the same time sounds like a deep-thinking individual, possibly knowing everything about computers and with a malicious intention to break into computer systems and networks and cause harm to individuals and organizations. If you are like many others who are baffled by the mystery surrounding hackers, your image of a hacker may not be too different from the one described above. In short, there is always a notorious vagueness surrounding the word 'hacker'. So, who is a hacker anyway, and what's so 'ethical' about a hacker?

The Ethical Hacker Defined:
The Oxford dictionary’s definition of the word ‘hacker’ is ‘someone who uses a high degree of computer skill to carry out unauthorized acts within a network.’ And the definition of the word ‘ethical’ is ’being morally correct’. So in plain terms, an ‘ethical hacker’ is someone who uses a computer to gain unauthorized access to data in a computer or network and at the same time is morally correct and does not have a malicious intent. In industry jargon, an ethical hacker is a computer and network expert who attacks a security system on behalf of its owners, seeking to detect vulnerabilities that a malicious cracker could exploit. Some experts even argue that hackers, by definition, are supposed to have ethical intent and so there is no need for the phrase ‘ethical hacker’. In this article, I have used the term ‘ethical’ in an attempt to counter the negative impression that exists around hackers. On the contrary, a cracker is someone who is also a computer and network expert and attacks a computer system or network and has a malicious intent, unlike a hacker. An ethical hacker is also sometimes called a ‘white hat’; a term that comes from old Western movies where the ‘good guy’ always wore a white hat and the ‘bad guy’ wore a black hat. So, a hacker does not have a criminal intent but a cracker does.
A Few Cracker Stories
In 2007, nearly 3,000 customer records were accessed by crackers who hacked into the system of a small bank in the central U.S. Though there is no official record of how this happened, reports say that it was possibly done using SQL Injection attacks. Whoever thinks that online banking is convenient may get a perspective on the considerable risk at which this convenience comes. In February 2007, more than 10,000 online game servers hosting games such as Return to Castle Wolfenstein, Halo, Counter-Strike, and many others were attacked by the 'RUS' hacker group. The Distributed DoS attack was made from more than a thousand computer units located across the former republics of the erstwhile Soviet Union. A lot of research is carried out on Wi-Fi networks by means of Wardriving. In Wardriving, the researcher looks for Wi-Fi networks using a PDA or portable computer while in a moving vehicle. The prime idea behind Wardriving is to find the vulnerable Wi-Fi networks, and if the Wardriver has a malicious intent he can use this information to break into vulnerable wireless networks and use the computer or network resources. In the recent terrorist attack incidents, vulnerable wireless networks were used to send emails to media houses, which certainly left ignorant Wi-Fi users in a lot of trouble. The Internet is full of news stories about organizations' and even home users' computer security being compromised by crackers. Hackers are the people who help prepare organizations against such attacks and even more drastic consequences.

Exploring the Mind of an Ethical Hacker
As Ankit Fadia, one of India's renowned computer security experts, puts it, an ethical hacker, or simply a hacker, is someone who
* Likes to think out of the box.
* Likes to try out and experiment with things not mentioned in a computer book.
* Has unlimited curiosity.
* Is highly creative and innovative.
* Believes in testing and stretching the limits of his own technological abilities.
* Has an ability to think and stand on his own feet and achieve things that are beyond the capacity of a normal person.
* Is trustworthy and honest.
Hackers in reality are actually good, pleasant, and extremely intelligent people who, by virtue of their knowledge, help organizations in a constructive manner to secure documents of strategic importance.

Similarity Between Ethical Hacking and Software Testing
The prime purpose of software testing is to detect the bugs in a software application before the customer does. On similar lines, the purpose of hacking is to find the vulnerabilities before a cracker with malicious intention does. A hacker needs a kind of brazen mindset for breaking things in order to carry out hacking. The same kind of mindset is found among people performing security testing on software applications. Like a typical software security tester, a hacker also needs loads of perseverance, as the success ratio of finding a vulnerability is not always very high, and it usually requires trying out different things persistently and creatively to find something wrong with a particular computer system or network. One of the important things that a hacker usually relies on to carry out a simulated attack is called 'penetration testing'. A penetration test is a method of evaluating the security of a computer system or network by simulating an attack from a malicious source, known as a Black Hat Hacker, or Cracker. The process involves an active analysis of the system for any potential vulnerabilities that may result from poor or improper system configuration, known or unknown hardware or software flaws, or operational weaknesses in process or technical countermeasures. There are lots of freely available tools as well as commercial ones that can help one perform penetration tests on websites, computer networks, and so on.

Epilogue
To catch a thief, you must think like a thief. That's the basis of ethical hacking. One of the first examples of ethical hackers at work was in the 1970s, when the United States government used groups of experts called Red Teams to hack its own computer systems. One of the key roles of Red Team activity was that it challenged preconceived notions by demonstration and served to elucidate the true problem state that attackers might be attempting to exploit. In a similar way, organizations and government agencies hire ethical hacking services to gain insights into the vulnerability assessment of their own computer systems and networks, and to learn how sensitive information is exposed and can be exploited by crackers. Having gained information about the loopholes in a system, the ethical hacking services work to plug the holes and make the systems more secure and less exploitable. Are you aware of the 'cyber thieves' stalking your organization's computers and networks? If not, ethical hacking will surely give you an answer.

Tuesday, March 31, 2009

The best way to make mistakes- "Fail faster"

I think one of the questions to which you will always get an answer in the affirmative, when asked, is- "Have you ever made a mistake?". I believe that as long as the human race exists, the answer to this question will always be "Yes". On the contrary, some of the more significant questions are- "What do you do when a mistake is made?", "How do you react to mistakes?" or "What are the thoughts that run through your mind after you make a mistake?". Answers to these questions largely depend upon a myriad of factors, such as one's social orientation, the education system, which always teaches or rather prompts us to be "correct" or "perfect", or sometimes the value system- which sees you in a bad light for making mistakes.
The rule in learning something new is quite simple- You cannot learn to walk without falling down. You cannot learn to swim without accidentally dipping your head under water. You cannot learn to ride a bicycle without falling down and hurting yourself.
Our present life is largely a result of the choices we make. So, after making a mistake one can either choose to criticize oneself and become overly cautious and defensive for the rest of one's life, or one can safely ignore the mistake and live in a world of illusion as if nothing happened, or one can move on, take the positives out of the mistake and learn from it.
I have been reading through some material over the past few months and have observed some striking similarities in the thinking of successful people in how they dealt with their failures. Here are a few instances-

Source# 1- http://www.rediff.com/getahead/2009/mar/12starting-a-business-on-your-own.htm

This article is about Anand Chhatpar, the CEO of BrainReactions, which is in the business of identifying new opportunities for entrepreneurs and companies by generating creative new ideas. Anand says- "Let me assure you that everyone makes mistakes when starting a new business. What is needed to succeed is the will to recognise your mistakes and to fix them quickly. As I learned from my mentors during my internship, 'Fail fast to succeed sooner!'"

Source# 2- Book: The little book of coaching (Authors: Ken Blanchard and Don Shula)
Don Shula, one of the most successful football coaches, wrote in the book- "I had a twenty-four hour rule. I allowed myself, my coaches, and our players a maximum of twenty-four hours after a football game to celebrate a victory or bemoan a defeat.
During that time, everyone was encouraged to experience the thrill of victory or the agony of defeat as deeply as possible, while learning as much as we could from that same experience. Once the twenty-four hour deadline had passed, we put it behind us and focused our energies on preparing for our next opponent."

Source# 3- Book: Michael Phelps- The Untold Story of a Champion (Author: Bob Schaller)
After his amazing feats in the 2004 and 2008 Olympics, Michael Phelps needs no introduction. This book primarily talks about his journey from childhood and covers his run in the 2008 Olympics in detail. There is a mention of one of his fellow champion swimmers in the book, as follows- "Not making the Olympic team at the 2004 Olympic trials really gave Garrett Weber-Gale a focus he needed in 2008 to avoid the mistakes he made 4 years earlier." "I have this quote from [UT Assistant] Kris Kubik," Weber-Gale said. "I was just totally broken up at the time, bawling. Kris came up and said, 'The way to get through this is to take a minute, remember how this feels, and don't ever let it happen again.' I promised myself that day, I wouldn't feel that again- that much disappointment. It's important, to me, to keep promises to myself- it's a big deal."

Source# 4- Book: The Greatness Guide 2 (Author: Robin Sharma)

"The CEO of Coca-Cola at the annual meeting informed shareholders that the company was now going on an innovation tear and that his organization's reinvention plan was contained in a document entitled "The Manifesto for Growth." He noted that spending on marketing and innovation would increase by US$400 million and then- and here's the big line- observed, "You will see some failures. As we take more risks, this is something we must accept as a part of the regeneration process." Which brings me to the imperative of Failing Fast. There can be no success without failure. It's just part of success... You need to fail to win."

I think one thing that is quite clear from these instances is that smart people know how to "fail fast". To me, failing faster consists of several factors-
- The first is to accept that failures are a part of day-to-day life. No matter how perfect one may claim to be, mistakes are inevitable.
- Do not kill yourself with negative thoughts whenever the mistakes happen.
- Let your failures have a limited shelf life. Remember Don Shula's (Source# 2) twenty-four hour rule. Don't let your mistakes cloud your thinking after the shelf life expires, but do carry the learnings beyond the twenty-four hours.
- As with the case of swimmer Garrett Weber-Gale (Source# 3), always remember how bad it feels whenever a mistake is made, and use that feeling to strengthen your resolve not to repeat it.
- Don't give up on something you believe in just because you have failed at a particular step.
- Learn not only from your own mistakes but from others' too; all the above rules apply there as well.