What is Six Sigma?
Six Sigma is a set of techniques and tools for process improvement. It was introduced by engineer Bill Smith while working at Motorola in 1986. Jack Welch made it central to his business strategy at General Electric in 1995. Today, it is used in many industrial sectors.

Six Sigma seeks to improve the quality of the output of a process by identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. It uses a set of quality management methods, mainly empirical, statistical methods, and creates a special infrastructure of people within the organization, who are experts in these methods. Each Six Sigma project carried out within an organization follows a defined sequence of steps and has specific value targets, for example: reduce process cycle time, reduce pollution, reduce costs, increase customer satisfaction, and increase profits.

The term Six Sigma (capitalized because it was written that way when registered as a Motorola trademark on December 28, 1993) originated from terminology associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield or the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects (3.4 defective features per million opportunities). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this goal became a by-word for the management and engineering practices used to achieve it.


Six Sigma doctrine asserts:
  • Continuous efforts to achieve stable and predictable process results (e.g. by reducing process variation) are of vital importance to business success.
  • Manufacturing and business processes have characteristics that can be defined, measured, analyzed, improved, and controlled.
  • Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality-improvement initiatives include:
  • A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
  • An increased emphasis on strong and passionate management leadership and support.
  • A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather than assumptions and guesswork.
The term "six sigma" comes from statistics and is used in statistical quality control, which evaluates process capability. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO). Six Sigma's implicit goal is to improve all processes, but not to the 3.4 DPMO level necessarily. Organizations need to determine an appropriate sigma level for each of their most important processes and strive to achieve these. As a result of this goal, it is incumbent on management of the organization to prioritize areas of improvement.
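As an illustration, the 3.4 DPMO figure can be reproduced from the normal distribution together with the conventional 1.5-sigma long-term shift. The sketch below (plain Python, no external libraries) is illustrative, not an official Six Sigma tool:

```python
from math import sqrt, erfc

def dpmo(sigma_level, shift=1.5):
    """Long-term defects per million opportunities for a given short-term
    sigma level, applying the conventional 1.5-sigma long-term shift."""
    # One-sided tail probability of a standard normal beyond (sigma_level - shift)
    tail = 0.5 * erfc((sigma_level - shift) / sqrt(2))
    return tail * 1_000_000

print(round(dpmo(6), 1))  # 3.4 -> the classic "six sigma" figure
print(round(dpmo(3)))     # 66807 -> a three-sigma process
```

Running it shows why the quoted defect levels fall so steeply as the sigma rating rises.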

"Six Sigma" was registered June 11, 1991 as U.S. Service Mark 1,647,704. In 2005 Motorola attributed over US$17 billion in savings to Six Sigma.

Other early adopters of Six Sigma include Honeywell (today's Honeywell is the result of a "merger of equals" of Honeywell and Allied Signal in 1999) and General Electric, where Jack Welch introduced the method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.

In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to create a methodology named Lean Six Sigma. The Lean Six Sigma methodology views lean manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on variation and design, as complementary disciplines aimed at promoting "business and operational excellence". Companies such as GE, Verizon, GENPACT, and IBM use Lean Six Sigma to focus transformation efforts not just on efficiency but also on growth. It serves as a foundation for innovation throughout the organization, from manufacturing and software development to sales and service delivery functions.

In 2011 the International Organization for Standardization (ISO) published the first standard, ISO 13053:2011, defining a Six Sigma process. Other "standards" are created mostly by universities or companies that have so-called first-party certification programs for Six Sigma.


Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.

DMAIC ("duh-may-ick", /dʌ.ˈmeɪ.ɪk/) is used for projects aimed at improving an existing business process.
DMADV ("duh-mad-vee", /dʌ.ˈmæd.vi/) is used for projects aimed at creating new product or process designs.


The five steps of DMAIC
  • Define the system, the voice of the customer and their requirements, and the project goals, specifically.
  • Measure key aspects of the current process and collect relevant data; calculate the 'as-is' Process Capability.
  • Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out the root cause of the defect under investigation.
  • Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
  • Control the future state process to ensure that any deviations from the target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, visual workplaces, and continuously monitor the process. This process is repeated until the desired quality level is obtained.
Some organizations add a Recognize step at the beginning, which is to recognize the right problem to work on, thus yielding an RDMAIC methodology.
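As a sketch of the Measure step above, the "as-is" process capability is often summarized by the Cp and Cpk indices. The snippet below is a minimal illustration; the cycle-time data and spec limits are hypothetical:

```python
from statistics import mean, stdev

def process_capability(samples, lsl, usl):
    """Cp and Cpk for measured samples against lower/upper spec limits.
    Cp compares spec width to process spread; Cpk also penalizes off-center
    processes by using the distance to the nearer spec limit."""
    mu, s = mean(samples), stdev(samples)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk

# Hypothetical cycle-time measurements against spec limits of 8 and 12
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
cp, cpk = process_capability(data, lsl=8.0, usl=12.0)
```

When the process is perfectly centered, as in this data set, Cp and Cpk coincide; a shifted mean would pull Cpk below Cp.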


The five steps of DMADV
  • Define design goals that are consistent with customer demands and the enterprise strategy.
  • Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
  • Analyze to develop and design alternatives.
  • Design an improved alternative, best suited according to the analysis in the previous step.
  • Verify the design, set up pilot runs, implement the production process, and hand it over to the process owner(s).
Personal Software Process (PSP), Capability Maturity Model (CMM) & Team Software Process (TSP)

The personal software process (PSP) is a structured software development process that is intended to help software engineers better understand and improve their performance by tracking their predicted and actual development of code. The PSP was created by Watts Humphrey to apply the underlying principles of the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) to the software development practices of a single developer. It claims to give software engineers the process skills necessary to work on a team software process (TSP) team.

The PSP aims to provide software engineers with disciplined methods for improving personal software development processes. The PSP helps software engineers to:
  • Improve their estimating and planning skills.
  • Make commitments they can keep.
  • Manage the quality of their projects.
  • Reduce the number of defects in their work.
PSP structure

PSP training follows an evolutionary improvement approach: an engineer learning to integrate the PSP into his or her process begins at the first level – PSP0 – and progresses in process maturity to the final level – PSP2.1. Each level has detailed scripts, checklists, and templates to guide the engineer through the required steps and to help the engineer improve his or her own personal software process. Humphrey encourages proficient engineers to customise these scripts and templates as they gain an understanding of their own strengths and weaknesses.


The input to the PSP is the requirements: a requirements document is completed and delivered to the engineer.
  • PSP0, PSP0.1 (Introduces process discipline and measurement)
    • PSP0 has 3 phases: planning, development (design, coding, test), and a post mortem. A baseline of the current process is established by measuring time spent on programming, faults injected/removed, and program size. In the post mortem, the engineer ensures all data for the project has been properly recorded and analysed. PSP0.1 advances the process by adding a coding standard, size measurement, and the development of a personal process improvement plan (PIP). In the PIP, the engineer records ideas for improving his or her own process.
  • PSP1, PSP1.1 (Introduces estimating and planning)
    • Based upon the baseline data collected in PSP0 and PSP0.1, the engineer estimates how large a new program will be and prepares a test report (PSP1). Accumulated data from previous projects is used to estimate the total time. Each new project will record the actual time spent. This information is used for task and schedule planning and estimation (PSP1.1).
  • PSP2, PSP2.1 (Introduces quality management and design)
    • PSP2 adds two new phases: design review and code review. Defect prevention and removal are the focus of PSP2. Engineers learn to evaluate and improve their process by measuring how long tasks take and the number of defects they inject and remove in each phase of development. Engineers construct and use checklists for design and code reviews. PSP2.1 introduces design specification and analysis techniques.
  • (PSP3 is a legacy level that has been superseded by TSP.)
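PSP1-style estimation from baseline data can be sketched as a simple least-squares fit of effort against program size. This is only an illustration of the idea of estimating from one's own historical data, not the official PROBE procedure, and all figures are hypothetical:

```python
def fit_line(xs, ys):
    """Least-squares line through historical (size, effort) points.
    Returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical personal baseline: program size (LOC) vs. actual effort (hours)
sizes = [120, 250, 400, 610]
hours = [6, 11, 18, 27]
a, b = fit_line(sizes, hours)

# Projected effort for a planned 500-LOC program, from one's own history
estimate = a + b * 500
```

As the engineer records more projects, the fit (and hence the plans made from it) should become steadily more accurate.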

Using the PSP

In practice, PSP skills are used in a TSP team environment. TSP teams consist of PSP-trained developers who volunteer for areas of project responsibility, so the project is managed by the team itself. Using personal data gathered with their PSP skills, the team makes the plans and the estimates and controls the quality.

Using PSP process methods can help TSP teams to meet their schedule commitments and produce high quality software. For example, according to research by Watts Humphrey, a third of all software projects fail, but an SEI study on 20 TSP projects in 13 different organizations found that TSP teams missed their target schedules by an average of only six percent.

Successfully meeting schedule commitments can be attributed to using historical data to make more accurate estimates, so projects are based on realistic plans – and by using PSP quality methods, they produce low-defect software, which reduces time spent on removing defects in later phases, such as integration and acceptance testing.
Waterfall Methodology

At the highest level, the most commonly used software development lifecycle methodology, called the Waterfall Model, can be summarized as the following sequential series of steps:

  1. Requirements and Specification
  2. Design
  3. Construction (Coding)
  4. Integration
  5. Testing
  6. Deployment
  7. Maintenance
From this high-level view, the Waterfall Model looks like an ideal process where the output of one step is the input into the next, finally resulting in the deployed software. Because of this process view, it would seem that Six Sigma, more specifically DMAIC, could help solve the issues outlined above and help improve the Waterfall process to deliver higher quality software on time.

V-model stands for the Verification and Validation model. Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes: each phase must be completed before the next phase begins. In the V-model, testing of the product is planned in parallel with the corresponding phase of development.

[Image: V_model.jpg]

Requirements documents such as the BRS and SRS begin the life cycle model, just as in the waterfall model. But in this model, before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified during requirements gathering.

The high-level design (HLD) phase focuses on system architecture and design. It provides an overview of the solution, platform, system, product, and service/process. An integration test plan is also created in this phase, in order to test the ability of the pieces of the software system to work together.

The low-level design (LLD) phase is where the actual software components are designed. It defines the actual logic for each and every component of the system. Class diagrams with all the methods and the relations between classes come under the LLD. Component tests are created in this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.

Coding: this is at the bottom of the V-shaped model. Module designs are converted into code by developers.
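The pairing between the left (development) side and the right (testing) side of the V can be sketched as a simple table. The exact pairings vary by organization, so treat these as illustrative:

```python
# A sketch of the V-model pairing: each left-side development phase has a
# corresponding right-side verification activity, planned at the same time.
V_MODEL = [
    ("Requirements (BRS/SRS)",  "Acceptance / system testing"),
    ("High-level design (HLD)", "Integration testing"),
    ("Low-level design (LLD)",  "Component / unit testing"),
]

for dev_phase, test_activity in V_MODEL:
    print(f"{dev_phase:24} <-> {test_activity}")
```

Writing each test plan while its partner phase is still in progress is what gives the model its "proactive defect tracking" advantage.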

Advantages of V-model:
  • Simple and easy to use.
  • Testing activities such as planning and test design happen well before coding. This saves a lot of time and gives a higher chance of success than the waterfall model.
  • Proactive defect tracking: defects are found at an early stage.
  • Avoids the downward flow of the defects.
  • Works well for small projects where requirements are easily understood.
Disadvantages of V-model:
  • Very rigid and least flexible.
  • Software is developed during the implementation phase, so no early prototypes of the software are produced.
  • If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.
When to use the V-model:
  • The V-shaped model should be used for small to medium sized projects where requirements are clearly defined and fixed.
  • The V-Shaped model should be chosen when ample technical resources are available with needed technical expertise.
High customer confidence is required for choosing the V-shaped model approach: since no prototypes are produced, there is a very high risk involved in meeting customer expectations.
Spiral Model

The general idea behind Spiral Model is that you don’t define everything in detail at the very beginning. You start small, define your important features, try them out, get feedback from your customers, and then move on to the next level. You repeat this until you have the final product. Each time around the Spiral involves 6 steps:

[Image: image.gif]

  1. Determine objectives, alternatives and constraints.
  2. Identify and resolve risks.
  3. Evaluate alternatives.
  4. Develop and test the current level.
  5. Plan the next level.
  6. Decide on the approach for the next level.
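The repeating six-step cycle can be sketched as a loop that continues until the remaining risk is acceptable. The risk figures, the threshold, and the halving assumption are purely illustrative:

```python
# A toy sketch of spiral iterations: each trip around the spiral revisits
# the same six steps, stopping when remaining risk falls below a threshold.
risk = 1.0        # assumed starting level of unresolved project risk
iteration = 0

while risk > 0.1:
    iteration += 1
    # Steps 1-2: determine objectives, identify and resolve the biggest risks
    risk *= 0.5   # assume each cycle resolves half of the known risk
    # Steps 3-4: evaluate alternatives, develop and test the current level
    # Steps 5-6: plan the next level and decide the approach for it
```

The point of the sketch is the control flow, not the numbers: risk assessment, not a fixed phase list, decides how many times the team goes around.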
In this model, the software tester gets a chance to influence the product early by being involved in the preliminary design phases. The tester can see where the project has come from and where it is going. Also, the cost of finding problems is low, since they are found early.

The design and prototyping-in-stages are combined in an effort to obtain the advantages of both top-down and bottom-up concepts. It is an SDM (System Development Method) that combines the features of the prototyping model and the waterfall model. The intention of the spiral model is to deal with large, expensive, and complicated projects.

The spiral model follows the creation of a series of prototypes for refining the understanding of the requirements. This kind of approach is best suited to projects that are not clearly defined and for which a clear solution is yet to be arrived at. The model provides an opportunity to build various prototypes to understand the problem better and slowly arrive at a solution using the prototypes iteratively. The project starts with prototypes and ends with the prototypes being developed into fully functional systems. It allows for a lot of flexibility from a customer and changing-requirements perspective.
Chaos model

In computing, the chaos model is a structure of software development. Its creator, who used the pseudonym L.B.S. Raccoon, noted that project management models such as the spiral model and waterfall model, while good at managing schedules and staff, didn't provide methods to fix bugs or solve other technical problems. At the same time, programming methodologies, while effective at fixing bugs and solving technical problems, do not help in managing deadlines or responding to customer requests. The structure attempts to bridge this gap. Chaos theory was used as a tool to help understand these issues.

Software development life cycle
The chaos model notes that the phases of the life cycle apply to all levels of projects, from the whole project to individual lines of code.
  • The whole project must be defined, implemented, and integrated.
  • Systems must be defined, implemented, and integrated.
  • Modules must be defined, implemented, and integrated.
  • Functions must be defined, implemented, and integrated.
  • Lines of code are defined, implemented and integrated.
One important change in perspective is whether projects can be thought of as whole units or must be thought of in pieces. Nobody writes tens of thousands of lines of code in one sitting. They write small pieces, one line at a time, verifying that the small pieces work. Then they build up from there. The behavior of a complex system emerges from the combined behavior of the smaller building blocks.

Chaos strategy
The chaos strategy is a strategy of software development based on the chaos model. The main rule is to always resolve the most important issue first.
  • An issue is an incomplete programming task.
  • The most important issue is a combination of big, urgent, and robust.
    • Big issues provide value to users as working functionality.
    • Urgent issues are timely in that they would otherwise hold up other work.
    • Robust issues are trusted and tested when resolved. Developers can then safely focus their attention elsewhere.
  • To resolve an issue means to bring it to a point of stability.

The chaos strategy resembles the way that programmers work toward the end of a project, when they have a list of bugs to fix and features to create. Usually someone prioritizes the remaining tasks, and the programmers fix them one at a time. The chaos strategy states that this is the only valid way to do the work.
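The "most important issue first" rule maps naturally onto a priority queue. The sketch below scores each issue on big/urgent/robust; the field names, weights, and issue names are assumptions made for illustration:

```python
import heapq

# Hypothetical issue list, each scored 1-3 on the chaos-strategy criteria.
issues = [
    {"name": "login crash",    "big": 3, "urgent": 3, "robust": 2},
    {"name": "typo in footer", "big": 1, "urgent": 1, "robust": 3},
    {"name": "slow search",    "big": 3, "urgent": 2, "robust": 1},
]

# heapq is a min-heap, so negate the combined score to pop the most
# important issue first.
heap = [(-(i["big"] + i["urgent"] + i["robust"]), i["name"]) for i in issues]
heapq.heapify(heap)

order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Popping the heap yields the work order the strategy prescribes: the biggest, most urgent issue is always resolved before anything else.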

The chaos strategy was inspired by Go strategy.

Connections with chaos theory

There are several tie-ins with chaos theory.
  • The chaos model may help explain why software tends to be so unpredictable.
  • It explains why high-level concepts like architecture cannot be treated independently of low-level lines of code.
  • It provides a hook for explaining what to do next, in terms of the chaos strategy.
The top-down approach and the bottom-up approach are two popular approaches used to measure operational risk. Operational risk is the type of risk that arises out of operational failures such as mismanagement or technical failures. Operational risk can be classified into fraud risk and model risk: fraud risk arises due to a lack of controls, and model risk arises due to incorrect model application. Now, let's look at the top-down and bottom-up approaches used to measure these types of risk.

Top-down Approach

In simple terms, the top-down approach is an investment strategy that selects various sectors or industries and tries to achieve a balance in an investment portfolio. The top-down approach analyzes risk by aggregating the impact of internal operational failures. It measures the variance in economic variables that is not explained by external macro-economic factors. As such, this approach is simple and not data-intensive. The top-down approach relies mainly on historical data. This approach is the opposite of the bottom-up approach.

Bottom-up Approach

A bottom-up approach on the other hand is an investment strategy that depends on the selection of individual stocks. It observes the performance and management of companies and not general economic trends. The bottom-up approach analyzes individual risk in the process by using mathematical models, and is thus data-intensive. This method does not rely on historical data. It is a forward-looking approach unlike the top-down model, which is backward-looking.

Differences between Top-down Approach and Bottom-up Approach
  • The top-down approach analyzes risk by aggregating the impact of internal operational failures, while the bottom-up approach analyzes the risks in individual processes using models.
  • The top-down approach doesn't differentiate between high-frequency low-severity and low-frequency high-severity events, while the bottom-up approach does.
  • The top-down approach is simple and not data-intensive, whereas the bottom-up approach is complex as well as very data-intensive.
  • Top-down approaches are backward-looking, while bottom-up approaches are forward-looking.
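The contrast can be sketched numerically: top-down treats operational risk as the residual variance not explained by macro factors, while bottom-up sums the outputs of per-process models. All figures below are hypothetical:

```python
# Top-down: operational risk as the residual of total earnings variance
# not explained by external macro-economic factors (hypothetical figures).
total_earnings_variance = 120.0
explained_by_macro_factors = 85.0
top_down_operational_risk = total_earnings_variance - explained_by_macro_factors

# Bottom-up: sum the loss estimates of models built for each individual
# process (process names and figures are hypothetical).
process_models = {"settlement": 12.0, "trade entry": 15.0, "IT outages": 9.0}
bottom_up_operational_risk = sum(process_models.values())
```

The two numbers need not agree: the top-down residual is a single aggregate and cannot say which process is at fault, while the bottom-up sum attributes risk process by process.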
Model Driven Engineering

Model-Driven Engineering (or MDE) refers to the systematic use of models as primary engineering artifacts throughout the engineering lifecycle.

MDE can be applied to software, system, and data engineering. Models are considered as first class entities.

The best-known MDE initiative is the Object Management Group (OMG) initiative called Model-Driven Architecture (MDA). Another related acronym is Model-Driven Development (MDD). Model Integrated Computing is yet another branch of MDE.

According to Douglas Schmidt, model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.
Iterative Development Process

An iterative life cycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model.

For example:

[Image: Iterative_model_example.jpg]

In the diagram above, when we work iteratively we create a rough product or product piece in one iteration, then review it and improve it in the next iteration, and so on until it is finished. As shown in the image above, in the first iteration the whole painting is sketched roughly, then in the second iteration colors are filled in, and in the third iteration the finishing is done. Hence, in the iterative model the whole product is developed step by step.
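The grow-and-refine cycle can be sketched as a loop in which each iteration adds an increment to the same evolving product, mirroring the sketch/colors/finishing painting example (the increment names are just placeholders):

```python
# A sketch of iterative development: each cycle produces a more complete
# version of the same product, which is then reviewed before the next cycle.
versions = []
product = set()
backlog = [{"rough sketch"}, {"colors"}, {"finishing touches"}]

for iteration, increment in enumerate(backlog, start=1):
    product |= increment                           # build on the previous version
    versions.append((iteration, sorted(product)))  # this version goes to review
```

Every entry in `versions` is a complete (if unfinished) product, which is what makes early review and feedback possible.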

Diagram of Iterative model:

[Image: Iterative_model.jpg]

Advantages of Iterative model:
  • In the iterative model we can create only a high-level design of the application before we actually begin to build the product and define the design solution for the entire product. Later on we can design and build a skeleton version of it, and then evolve the design based on what has been built.
  • In iterative model we are building and improving the product step by step. Hence we can track the defects at early stages. This avoids the downward flow of the defects.
  • In the iterative model we can get reliable user feedback. When presenting sketches and blueprints of the product to users, we are effectively asking them to imagine how the product will work; presenting a working increment instead lets them react to the real thing.
  • In iterative model less time is spent on documenting and more time is given for designing.
Disadvantages of Iterative model:
  • Each phase of an iteration is rigid, with no overlaps.
  • Costly system architecture or design issues may arise because not all requirements are gathered up front for the entire lifecycle.
When to use iterative model:
  • When the requirements of the complete system are clearly defined and understood.
  • When the project is big.
  • Major requirements must be defined; however, some details can evolve with time.
