Software Project Evaluation

Software Project evaluation and continuous improvement processes

Choose the questions most relevant to your software project and environment from the list below (some of the questions apply only to very specific environments). Answers to some of these questions may help you improve the quality of future projects.

  1. Were the right people involved for these software project activities?
  2. Was the right amount of time allotted for these software project activities?
  3. Do the project results justify the project inputs?
  4. Was there an effective strategy to deal with any defects found in testing?
  5. Was there a formal communication strategy developed and adopted?
  6. Were status meetings regularly scheduled?
  7. Were the minutes and actions of the meetings captured and distributed among the team members?
  8. Was information shared in a timely manner to the right people?
  9. Were there any information sessions/presentations to other groups in regards to the project and project status?
  10. What were the project deliverables? e.g. charter, project plan, business requirements, functional designs, detailed designs, etc.
  11. Were templates utilized for project artefacts? If so, were they useful?
  12. Was the appropriate amount of time given to complete the deliverables?
  13. How was the consistency and quality of the deliverables? Were they clearly understood by all parties?
  14. Was there a Q and A process in place to deal with any concerns?
  15. Were the instructions and any relevant processes clearly defined and understood?
  16. Was project documentation reviewed by the appropriate parties and any changes communicated?
  17. Was sign off required on any project artefacts? Was this achieved? If not, should there have been a sign off?
  18. Was there a detailed project plan in place? If so, was it adequately communicated and shared?
  19. Were the activity, task and milestone timelines met? If not, was there a mitigation strategy in place?
  20. Was there a problem identification, escalation and resolution process in place?
  21. Was there an appropriate amount of time allotted for the tasks identified on the plan?
  22. Was there a software project kick off? (If so, was it well attended? Was there an appropriate amount of software project information available?)
  23. Were the project objectives clearly defined and understood by project participants?
  24. Was there a high level plan available?
  25. Were all key project members, stakeholders and relevant executives in attendance?
  26. Were the roles and responsibilities defined and understood for each of the project members?
  27. Was the Project Charter presented?
  28. Did we have the right people on the software project? e.g. an appropriate level of expertise, enough representation from each of the stakeholder areas?
  29. Were the people on the project able to commit the appropriate and necessary amount of time in order to achieve the end goal?
  30. User involvement - was there an appropriate degree of user involvement?
  31. Did everyone involved understand their roles and responsibilities within the software project?
  32. Was there an appropriate amount of knowledge transfer for any new people joining the project team?
  33. Did everyone have a good sense of what was expected of them?
  34. Was sufficient time allocated on the schedule for specific activities? e.g. deliverables, scorecards, reporting, decisions, etc.
  35. Was sufficient time allowed for acquiring the necessary resources?
  36. Were the right people involved in the steering committee?
  37. Were there clearly communicated and understood terms of reference for the steering committee?
  38. Was the software project status reporting structure adequate?
  39. Were the software project reporting timelines adequate?
  40. Were the risks and issues presented in a timely fashion?
  41. Were there clear reporting and decision making protocols?
  42. Did the scorecards presented accurately reflect the current status of the software project?
  43. Was the status reporting process sufficient in regards to content and frequency?
  44. Did the advisory council consist of the ‘right’ people in order to meet the mandate?
  45. Was the implementation smooth and on time?
  46. Was a dress rehearsal required, and if so, completed? If so, were there any lessons learned in this exercise?
  47. Were the right people on board for implementation?
  48. Was a back out plan created and communicated?
  49. Were customer expectations managed in terms of what the application would and wouldn’t do?
  50. Was there any kind of training required for field users of the application?
  51. Was there sufficient technical support during the implementation of the software project? Post-implementation of the software project?
Additional aspects that are important for measuring software quality:
  1. Is there an installation and learner’s guide / manual available online?
  2. Are references to additional materials available online, and are they sufficient?
  3. Are extra materials and training required in order to make effective use of the system?
  4. Is terminology consistent, and are all new terms defined in the help file?
  5. Can content be easily modified or customized in the future?
  6. Can the software be easily installed and updated, and is a manual available?
  7. Is online help readily available and kept up to date?
  8. Are the screens easy to understand and navigate?
  9. Is the navigation flow obvious and functional?
  10. Does the program operate smoothly, without crashing or freezing?
  11. Is exiting the program easy and obvious?
  12. Is saving work or results easy, and is saved data protected from loss?
Some possible metrics for the testing project:
  1. Current defects count by severity
  2. Current defects count by priority
  3. Current defects count by the same root cause.
  4. What is the average time to fix a defect in the current release?
  5. What is the number of defects that were reopened, for any reason?
  6. Defects count by defect type.
  7. Total number of test cases created for the current project.
  8. Keep track of the total time spent on test case creation and the number of test cases created.
  9. Number of test cases by priority grouped by functionality or by type of testing.
  10. Test cases completed by priority
  11. Number of test cases per defect found.
  12. Testing effort compared to the overall project effort.
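Several of the metrics above are simple counts and ratios over defect and test-case records. The sketch below shows one way to compute a few of them in Python; the record fields (`severity`, `fix_hours`) and all the numbers are hypothetical, not a real defect-tracker schema.

```python
from collections import Counter
from statistics import mean

# Hypothetical defect records -- field names and values are illustrative only.
defects = [
    {"id": 1, "severity": "high",   "fix_hours": 8},
    {"id": 2, "severity": "medium", "fix_hours": 3},
    {"id": 3, "severity": "high",   "fix_hours": 5},
    {"id": 4, "severity": "low",    "fix_hours": 1},
]

total_test_cases = 120  # hypothetical count for the current release

# Metric 1: current defect count by severity
by_severity = Counter(d["severity"] for d in defects)

# Metric 4: average time to fix a defect
avg_fix_time = mean(d["fix_hours"] for d in defects)

# Metric 11: number of test cases per defect found
cases_per_defect = total_test_cases / len(defects)

print(by_severity)       # Counter({'high': 2, 'medium': 1, 'low': 1})
print(avg_fix_time)      # 4.25
print(cases_per_defect)  # 30.0
```

The same pattern (group, count, average) extends to counts by priority, by root cause, or by defect type.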
Questions to ask yourself about metrics:
  • Can you explain how the number of defects is measured?
  • Can you explain how the number of production defects is measured?
  • How can you explain defect seeding?
  • How can you explain DRE (Defect Removal Efficiency)?
  • How can you explain unit and system test DRE?
  • How do you measure test effectiveness?
  • How can you explain defect age and defect spoilage?
Aspects of the project to review in detail:
  • Usability
  • Maintainability
  • Functionality
  • Delivery
  • Resources
  • People
  • Capabilities
  • Finance
  • Legal
  • Reputation
  • Quality
  • Relationship
  • Process
  • Management
  • Client Management
  • Technology
  • Business Environment
  • Asset Management
Required actions and post-review follow-up:
  • Identify/define problems and solutions
  • Conclude the review
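For the DRE and spoilage questions, a small worked calculation helps. DRE is usually defined as the share of defects removed before release out of all defects eventually found; spoilage weights each defect by its age (phases between injection and detection). The counts below are hypothetical, for illustration only.

```python
def defect_removal_efficiency(pre_release, post_release):
    """DRE = defects removed before release / total defects found, as a percentage."""
    total = pre_release + post_release
    return 100.0 * pre_release / total if total else 0.0

def defect_spoilage(defects):
    """Spoilage = sum(defect age * defect count) / total defects.

    'Age' here counts the phases between where a defect was injected
    and where it was detected; defects is a list of (age, count) pairs.
    """
    total = sum(count for _, count in defects)
    weighted = sum(age * count for age, count in defects)
    return weighted / total if total else 0.0

# Hypothetical release: 90 defects caught in testing, 10 escaped to production.
dre = defect_removal_efficiency(90, 10)
print(f"DRE: {dre:.1f}%")           # DRE: 90.0%

# Hypothetical age distribution: (age_in_phases, number_of_defects)
spoilage = defect_spoilage([(1, 50), (2, 30), (3, 20)])
print(f"Spoilage: {spoilage:.2f}")  # Spoilage: 1.70
```

A lower spoilage value means defects are, on average, caught closer to where they were introduced, which is the behaviour these metrics are meant to reward.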

  • On this page I put some project evaluation questions for QA and testers. These project evaluation questions are very simple and were mainly used for interviewing software testers involved in any type of testing. The project evaluation questions above are listed in order of complexity; however, all new project evaluation questions (regardless of their difficulty) will be added to the bottom of the list. You can find more project evaluation questions by searching the Web.

    © January 2006 Alex Samurin © 2009