
BDSA.2013.TEST - ITU old blogs


Contents

1. Steps in Integration Testing
- Based on the integration strategy, select a component to be tested; unit test all the classes in the component.
- Put the selected component together; do any preliminary fix-up necessary to make the integration test operational (drivers, stubs).
- Test functional requirements: define test cases that exercise all use cases with the selected component.
- Test subsystem decomposition: define test cases that exercise all dependencies.
- Test non-functional requirements: execute performance tests.
- Keep records of the test cases and testing activities.
- Repeat steps 1 to 7 until the full system is tested.
The primary goal of integration testing is to identify failures with the current component configuration.

System Testing
[Diagram: Testing Activities and Models; unit, integration, system, and acceptance testing mapped against object design, system design, analysis, and the client's requirements and expectations. Unit through system testing is performed by the developer, acceptance testing by the client.]
- Functional testing: validates the functional requirements.
- Performance testing: validates the non-functional requirements.
- Acceptance testing: validates the client's expectations.
2. [Famous problems, continued]
- F-16: crossing the equator using the autopilot. Result: the plane flipped over. Reason: reuse of autopilot software from a rocket.
- NASA Mars Climate Orbiter: destroyed due to incorrect orbit insertion, September 23, 1999. Reason: a unit conversion problem.
- The Therac-25 accidents (1985-1987): quite possibly the most serious non-military computer-related failure ever in terms of human life (at least five died). Reason: bad event handling in the GUI.

The Therac-25
The Therac-25 was a medical linear accelerator. Linear accelerators create energy beams to destroy tumors. For shallow tissue penetration, electron beams are used; to reach deeper tissue, the beam is converted into X-rays. The Therac-25 had two main modes of operation: a low-energy mode and a high-energy mode. In low-energy mode, an electron beam of low radiation (200 rads) is generated. In high-energy mode, the machine generates 25,000 rads with 25 million electron volts. The Therac-25 was developed by two companies, AECL from Canada and CGR from France. The newest version reused code from the Therac-6 and the Therac-20.

A Therac-25 Accident
In 1986 a patient went into the clinic to receive his usual low-radiation treatment for his shoulder. The technician typed "X" (X-ray beam); realizing the error, she quickly changed "X" into "E" (electron beam) and hit enter ("X", delete char, "E", enter). (continued in item 4)
3. [Deployment diagram, continued from item 24:]
- device SCMNode: executionEnvironment SVNServer (SCMServer).
- device IntegrationBuildNode: executionEnvironment Ant (Builder), executionEnvironment SVNClient (SCMClient), executionEnvironment CruiseControl (CIServer), and the SoftwareRepository.
- device DevelopmentNode: executionEnvironment Ant (Builder), executionEnvironment SVNClient (SCMClient), executionEnvironment Eclipse (Integrated Development Environment), and the ProgrammersDirectory.

Examples of Continuous Integration Systems
- CruiseControl and CruiseControl.NET
- AntHill
- Continuum
- Cruise
- Hudson
- and many more
Feature comparison of continuous integration tools and frameworks: http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix

[Screenshot: the CruiseControl dashboard. The latest builds are listed (build 1, 8 days ago, through build 8, 44 minutes ago); the selected build "build 8" passed, built 27 Nov 2007 09:51 GMT-08:00, duration 7 minutes 40 seconds, with tabs for artifacts, build log, tests, errors and warnings, and modifications (e.g. "Fixed issue with queued inactive status", rev. 3847).]
4. [A Therac-25 accident, continued]
This unusual sequence in a short time frame (about 8 seconds) was never tested. The machine stopped with the cryptic message "Malfunction 54". The Therac-25 then signaled "beam ready", and it also showed the technician that it was in low-energy mode. The technician typed "B" to deliver the beam to the patient. The beam that actually came from the machine was a blast of 25,000 rads with 25 million electron volts, more than 125 times the regular dose. The operator hit "P" to continue for more treatment; again, the same error message appeared. The patient felt sharp pains in his back, much different from his usual treatment. He died 3 months later.

Reasons for the Therac-25 Failure
- Failure to properly reuse the old software from the Therac-6 and Therac-20 when using it in the new machine.
- Cryptic warning messages.
- End users did not understand the recurring problem (5 patients died).
- Lack of communication between hospital and manufacturer.
- The manufacturer did not believe that the machine could fail.
- No proper hardware to catch safety glitches.

Testing Terminology
- Failure: any deviation of the observed behavior from the specified behavior.
- Erroneous state ("error"): the system is in a state such that further processing by the system can lead to a failure.
- Fault: the mechanical or algorithmic cause of an error ("bug").
- Validation: the activity of checking for deviations between the observed behavior of a system and its specification.
5. [Key points II, continued]
- Wherever possible, you should write automated tests: the tests are embedded in a program that can be run every time a change is made to the system.
- You should establish a continuous integration testing setup.
- Test-first development is an approach to development where tests are written before the code to be tested.
- Scenario testing involves inventing a typical usage scenario and using this to derive test cases.
- Acceptance testing is a user testing process where the aim is to decide if the software is good enough to be deployed and used in its operational environment.

This Lecture
Literature: OOSE ch. 11; SE9 ch. 8 + 24
- Introduction to Software Testing
- Testing Terminology
- Testing Activities: Unit (Component) Testing, Integration Testing, System Testing, Client Acceptance Testing
- Managing Testing: Test Cases, Test Teams, Test-Driven Development, Documenting Testing
6. [Diagram: bottom-up test order for subsystems A through G.]

Top-down Testing Strategy
- Test the subsystems in the top layer first.
- Then combine all the subsystems that are called by the tested subsystems, and test the resulting collection of subsystems.
- Do this until all subsystems are incorporated into the tests.
[Diagram: Layer I, then Layers I+II, then all layers.]

Sandwich Testing Strategy
- Combines the top-down strategy with the bottom-up strategy.
- The system is viewed as having three layers: a target layer in the middle, a layer above the target, and a layer below the target.
- Testing converges at the target layer.
[Diagram: sandwich test order for subsystems A through G.]

Pros and Cons of Top-down Integration Testing
Pros:
- Test cases can be defined in terms of the functionality of the system (functional requirements).
- No drivers needed.
Cons:
- Stubs are needed, and writing stubs is difficult: stubs must allow all possible conditions to be tested.
- A large number of stubs may be required, especially if the lowest level of the system contains many methods.
- Some interfaces are not tested separately.

Pros and Cons of Bottom-up Integration Testing
Pros:
- No stubs needed.
- Useful for integration testing of the following systems: object-oriented systems, real-time systems, and systems with strict performance requirements. (continued in item 26)
7. Analysis, Design & Software Architecture (BDSA)
Jakob E. Bardram
QUALITY ASSURANCE & TESTING

This Lecture
Literature: OOSE ch. 11; SE9 ch. 8 + 24
- Introduction to Software Testing
- Testing Terminology
- Testing Activities: Unit (Component) Testing, Integration Testing, System Testing, Client Acceptance Testing
- Managing Testing: Test Cases, Test Teams, Test-Driven Development, Documenting Testing

Program Testing
Testing is intended to show that a program does what it is intended to do, and to discover program defects before it is put into use:
- the process of finding differences between the expected behavior (specified by the system models) and the observed behavior of the implemented system;
- the attempt to show that the implementation of the system is inconsistent with the system models.
The goal of testing is to design tests that exercise defects in the system, to reveal problems. Testing stands in contrast to all other system activities: testing is aimed at breaking the system. Hence, testing can reveal the presence of errors, NOT their absence. Testing is part of a more general verification and validation process, which also includes static validation techniques.

Famous Problems
- F-16: crossing the equator using the autopilot. (continued in item 2)
8. [Test doubles, continued] Example of a fake object: a database stored in memory instead of a real database.
- Stub: provides canned answers to calls made during the test, but is not able to respond to anything outside what it is programmed for.
- Mock object: mocks are able to mimic the behavior of the real object; they know how to deal with the sequence of calls they are expected to receive.

Motivation for the Mock Object Pattern
Let us assume we have a system model for an auction system with 2 types of policies, and we want to unit test Auction, which is our SUT (system under test). [Class diagram: Auction is associated with Person (the bidders) and with the interfaces BiddingPolicy and TimingPolicy, implemented by the BiddingPolicy and TimingPolicy classes.]
The mock object test pattern is based on the idea of replacing the interaction with the collaborators in the system model, that is Person, the BiddingPolicy, and the TimingPolicy, by mock objects:
- These mock objects can be created at startup time (factory pattern).
[Class diagram: via simple inheritance and the bridge pattern, MockBiddingPolicy, MockTimingPolicy, and MockPerson stand in for the real classes.]
9. Functional Testing
- Goal: test the functionality of the system.
- Test cases are designed from the requirements analysis document (better: the user manual) and centered around requirements and key functions (use cases).
- The system is treated as a black box.
- Unit test cases can be reused, but new test cases have to be developed as well.

Performance Testing
- Goal: try to violate the non-functional requirements.
- Test how the system behaves when overloaded. Can bottlenecks be identified? (First candidates for redesign in the next iteration.)
- Try unusual orders of execution: call a receive() before a send().
- Check the system's response to large volumes of data: if the system is supposed to handle 1000 items, try it with 1001 items (a sketch follows after this item).
- What is the amount of time spent in different use cases? Are typical cases executed in a timely fashion?

Types of Performance Testing
- Stress testing: stress the limits of the system.
- Volume testing: test what happens when large amounts of data are handled.
- Configuration testing: test the various software and hardware configurations.
- Compatibility testing: test backward compatibility with existing systems.
- Timing testing: evaluate response times and the time to perform a function.
- Security testing: try to violate security requirements.
- Environmental testing: test tolerances for heat, humidity, and motion. (list continues in item 23)
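A minimal sketch of the volume and timing ideas above as JUnit 4 tests. The ItemStore class and its methods are invented for illustration; only the "1000 items, try 1001" heuristic and the use of a timeout come from the slides.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class ItemStorePerformanceTest {

        // Volume test: the spec says 1000 items; probe just past the limit.
        @Test
        public void handlesOneItemMoreThanSpecifiedMaximum() {
            ItemStore store = new ItemStore();   // hypothetical SUT
            for (int i = 0; i < 1001; i++) {
                store.add("item-" + i);
            }
            assertTrue(store.size() >= 1001);
        }

        // Timing test: JUnit 4 fails the test if it exceeds the timeout.
        @Test(timeout = 2000)  // milliseconds
        public void typicalLookupFinishesInATimelyFashion() {
            ItemStore store = new ItemStore();
            store.add("needle");
            assertTrue(store.contains("needle"));
        }
    }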
10. Mock Object Pattern
In the mock object pattern, a mock object replaces the behavior of a real object, called the collaborator, and returns hard-coded values. These mock objects can be created at startup time with the factory pattern. Mock objects can be used for testing the state of individual objects as well as the interaction between objects, that is, to validate that the interactions of the SUT with its collaborators behave as expected. (A sketch follows after this item.)
[Class diagram: the SUT depends on a Collaborator interface; a PolicyFactory instantiates either the real Collaborator or a MockCollaborator.]

Testing Activities
[Diagram: Testing Activities and Models; unit, integration, system, and acceptance testing mapped against object design, system design, analysis, and the client's requirements and expectations; developer vs. client.]

Types of Testing
- Unit Testing: individual components (class or subsystem) are tested. Carried out by developers. Goal: confirm that the component or subsystem is correctly coded and carries out the intended functionality.
- Integration Testing: groups of subsystems (collections of subsystems, and eventually the entire system) are tested. Carried out by developers. Goal: test the interfaces among the subsystems.
- System Testing: the entire system is tested. (continued in item 25)
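A sketch of the pattern on the auction example from the slides: a hand-rolled mock for the BiddingPolicy collaborator that both returns canned values and lets the test verify the interaction. The slides only name the types; the method names and the minimal Auction stand-in are assumptions added so the example runs.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AuctionMockTest {

        // Collaborator interface from the slides; the method is an assumed example.
        interface BiddingPolicy {
            boolean isValidBid(int amount);
        }

        // Minimal stand-in for the SUT, just enough to run the test.
        static class Auction {
            private final BiddingPolicy policy;
            private int highestBid = 0;

            Auction(BiddingPolicy policy) { this.policy = policy; }

            void placeBid(int amount) {
                if (policy.isValidBid(amount) && amount > highestBid) {
                    highestBid = amount;
                }
            }

            int highestBid() { return highestBid; }
        }

        // Hand-rolled mock: returns canned answers and records the interaction.
        static class MockBiddingPolicy implements BiddingPolicy {
            int calls = 0;
            public boolean isValidBid(int amount) {
                calls++;
                return true;  // hard-coded value, as the pattern prescribes
            }
        }

        @Test
        public void auctionConsultsItsBiddingPolicy() {
            MockBiddingPolicy policy = new MockBiddingPolicy();
            Auction auction = new Auction(policy);

            auction.placeBid(100);

            assertEquals(100, auction.highestBid());  // state of the SUT
            assertEquals(1, policy.calls);            // interaction with the mock
        }
    }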
11. Table 11-1: Attributes of the class TestCase
- name: name of the test case
- location: full path name of the executable
- input: input data or commands
- oracle: expected test results against which the output of the test is compared
- log: output produced by the test
(A sketch of these attributes as a data class follows after this item.)

Managing Testing
- Establish the test objectives.
- Design the test cases.
- Write the test cases.
- Test the test cases.
- Execute the tests.
- Evaluate the test results.
- Change the system.
- Do regression testing.

The Test Team
[Diagram: the test team draws on a professional tester, the programmer (too familiar with the code), the system designer, and a configuration management specialist.]

The 4 Testing Steps
1. Select what has to be tested: analysis (completeness of requirements), design (cohesion), implementation (source code).
2. Decide how the testing is done: review or code inspection; proofs (design by contract); black box / white box; select an integration testing strategy (big bang, bottom-up, top-down, sandwich).
3. Develop test cases: a test case is a set of test data or situations that will be used to exercise the unit (class, subsystem, system) being tested, or about the attribute being measured.
4. Create the test oracle: an oracle contains the predicted results for a set of test cases. The test oracle has to be written down before the actual testing takes place.
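The attributes of Table 11-1 map naturally onto a small data class; a sketch for illustration only (the class is not from OOSE's code, and the pass criterion is a simplifying assumption):

    import java.nio.file.Path;
    import java.util.List;

    // Illustrative data class mirroring Table 11-1; not part of any framework.
    public class TestCase {
        String name;          // name of the test case
        Path location;        // full path name of the executable
        List<String> input;   // input data or commands
        String oracle;        // expected result to compare the output against
        StringBuilder log = new StringBuilder();  // output produced by the test

        boolean passed() {
            // A test passes when the logged output matches the oracle.
            return log.toString().equals(oracle);
        }
    }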
12. White-box Testing (continued)
- Statement testing (algebraic testing): tests each statement (choice of operators in polynomials, etc.).
- Loop testing: cause the loop to be executed exactly once, to be executed more than once, and to be skipped completely.
- Path testing: makes sure all paths in the program are executed.
- Branch testing (conditional testing): ensure that each outcome in a condition is tested at least once. Example:

    if (i = TRUE) printf("Yes"); else printf("No");

How many test cases do we need to unit test this statement?

Example of Branch Testing
We need two test cases, with the following input data: (1) i = TRUE, (2) i = FALSE. What is the expected output for the two cases? In both cases "Yes": the condition uses the assignment operator = instead of the comparison operator ==, so i is assigned TRUE and the condition always holds. This is a typical beginner's mistake in languages where the assignment operator also returns the value assigned (C, Java). So tests can be faulty as well! Some of these faults can be identified with static analysis. (A Java version of this trap is sketched after this item.)

Static Analysis Tools in Eclipse
- Compiler warnings and errors: possibly uninitialized variable, undocumented empty block, assignment with no effect, missing semicolon.
- Checkstyle: checks for code guideline violations. http://checkstyle.sourceforge.net
(list continues in item 18)
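The same trap exists in Java when the variable is a boolean, since flag = true is itself an expression of type boolean. A sketch (class and variable names are illustrative); Eclipse can report this as a possible accidental boolean assignment, and FindBugs has a corresponding bug pattern:

    public class BranchTestingExample {
        public static void main(String[] args) {
            boolean flag = false;

            // Faulty: '=' assigns true to flag, so the condition is always
            // true and both "test cases" would print "Yes".
            if (flag = true) {
                System.out.println("Yes");
            } else {
                System.out.println("No");
            }

            // Correct: '==' compares (or better, just write 'if (flag)').
            flag = false;
            if (flag == true) {
                System.out.println("Yes");
            } else {
                System.out.println("No");   // this branch is now reachable
            }
        }
    }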
13. Comparison of White-box & Black-box Testing
- White-box testing: a potentially infinite number of paths has to be tested; white-box testing often tests what is done instead of what should be done; it cannot detect missing use cases.
- Black-box testing: potential combinatorial explosion of test cases (valid & invalid data); often it is not clear whether the selected test cases uncover a particular error; it does not discover extraneous use cases ("features").
- Both types of testing are needed. White-box testing and black-box testing are the extreme ends of a testing continuum; any choice of test case lies in between and depends on the following: the number of possible logical paths, the nature of the input data, the amount of computation, and the complexity of algorithms and data structures.

Unit Testing Heuristics
1. Create unit tests when the object design is completed: black-box tests test the functional model, white-box tests test the dynamic model.
2. Develop the test cases. Goal: find an effective number of test cases.
3. Cross-check the test cases to eliminate duplicates: don't waste your time!
4. Desk-check your source code: sometimes reduces testing time.
5. Create a test harness: test drivers and test stubs are needed for integration testing.
6. Describe the test oracle: often the result of the first successfully executed test. (heuristics 7 and 8 continue in item 19)
14. [How do we deal with errors, failures and faults? A slide series, one strategy per slide:]
- Modular Redundancy
- Declaring the Bug a Feature
- Patching
- Testing

Another View on How to Deal with Faults
- Fault avoidance: use methodology to reduce complexity; use configuration management to prevent inconsistency; apply verification to prevent algorithmic faults; use reviews to identify faults already in the design.
- Fault detection: testing (an activity to provoke failures in a planned way), debugging (find and remove the cause, i.e. the fault, of an observed failure), monitoring (deliver information about state and behavior; used during debugging).
- Fault tolerance: exception handling, modular redundancy (sketched after this item).

Fault Handling
[Taxonomy diagram: fault handling splits into fault avoidance (methodology, configuration management, verification), fault detection (reviews, debugging, testing: component, integration, and system testing), and fault tolerance (atomic transactions, modular redundancy).]

Observations
- It is impossible to completely test any nontrivial module or system. Practical limitations: complete testing is prohibitive in time and cost. Theoretical limitations: e.g. the halting problem.
- "Testing can only show the presence of bugs, not their absence." (Edsger W. Dijkstra, 1930-2002)
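A sketch of the modular-redundancy idea in Java: three independently implemented modules compute the same result and a voter takes the majority, masking a fault in one version. The slides only name the technique; everything concrete here is illustrative.

    import java.util.function.IntUnaryOperator;

    // Triple modular redundancy: run three versions, vote on the result.
    public class TmrVoter {

        static int vote(int a, int b, int c) {
            if (a == b || a == c) return a;   // a agrees with at least one other
            if (b == c) return b;
            throw new IllegalStateException("no majority: " + a + ", " + b + ", " + c);
        }

        public static void main(String[] args) {
            // Three redundant implementations of the same function (x squared).
            IntUnaryOperator v1 = x -> x * x;
            IntUnaryOperator v2 = x -> x * x;
            IntUnaryOperator v3 = x -> x * x + (x == 3 ? 1 : 0);  // faulty version

            int x = 3;
            // The fault in v3 is masked by the majority vote; prints 9.
            System.out.println(vote(v1.applyAsInt(x), v2.applyAsInt(x), v3.applyAsInt(x)));
        }
    }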
15. [Types of user testing, continued]
- Beta testing: a release of the software is made available to users, to allow them to experiment and to raise problems that they discover with the system developers.
- Acceptance testing: customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems.

The Acceptance Testing Process
[Process diagram: define acceptance criteria -> plan acceptance testing (test criteria) -> derive acceptance tests -> run acceptance tests (test results) -> negotiate test results -> accept or reject system (testing report).]

Agile Methods and Acceptance Testing
- In agile methods, the user/customer is part of the development team and is responsible for making decisions on the acceptability of the system.
- Tests are defined by the user/customer and are integrated with the other tests, in that they are run automatically when changes are made. There is no separate acceptance testing process.
- The main problem here is whether or not the embedded user is typical and can represent the interests of all system stakeholders.

Managing Testing

Test Cases
A test case is a set of input data and expected results that exercise a component.
16. [Risks in integration testing strategies, continued] Bottom-up, top-down, and sandwich testing are horizontal integration strategies, and horizontal integration strategies don't do well with risk 2.
- Continuous integration addresses these risks by building as early and as frequently as possible.
- Additional advantages: there is always an executable version of the system, and team members have a good overview of the project status.

Continuous Integration Testing

Continuous Testing Strategy: Vertical Integration
[Diagram: the spreadsheet example integrated vertically through Layer I (Cells, SheetView, Addition), Layer II (CurrencyConverter, FileStorage), and Layer III (CurrencyDataBase, BinaryFileStorage, XMLFileStorage).]

Definition: Continuous Integration
"Continuous Integration: a software development technique where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (which includes the execution of regression tests) to detect integration errors as quickly as possible."
Source: http://martinfowler.com/articles/continuousIntegration.html

Modeling a Continuous Integration System
- Functional requirements: set up the scheduling strategy (poll / event-based); detect change; execute the build script when a change has been detected (a toy sketch of the poll-based loop follows after this item). (list continues in item 24)
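A toy Java sketch of the poll-based scheduling strategy: poll the repository's latest revision and run the build script when a change is detected. The revision probe and the shell command are placeholders; a real server such as CruiseControl does this with SCM-specific plugins and a proper schedule.

    import java.util.concurrent.TimeUnit;

    // Toy poll-based CI loop: detect change, then execute the build script.
    public class PollingCiServer {

        public static void main(String[] args) throws Exception {
            long lastBuiltRevision = -1;

            while (true) {
                long head = latestRevision();          // placeholder SCM probe
                if (head != lastBuiltRevision) {       // change detected
                    // Execute the build script (compile + unit tests).
                    Process build = new ProcessBuilder("ant", "test")
                            .inheritIO().start();
                    int exit = build.waitFor();
                    System.out.println("build of r" + head
                            + (exit == 0 ? " passed" : " FAILED"));
                    lastBuiltRevision = head;
                }
                TimeUnit.MINUTES.sleep(5);             // polling interval
            }
        }

        // Placeholder: a real server would ask the SVN client for HEAD.
        private static long latestRevision() {
            return System.currentTimeMillis() / 300_000;  // fake revision counter
        }
    }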
17. Automated Testing
There are two ways to generate the test model:
- Manually: the developers set up the test data, run the test, and examine the results themselves. Success and/or failure of the test is determined through observation by the developers.
- Automatically: automated generation of test data and test cases; running the test is also done automatically; and finally, the comparison of the result with the oracle is done automatically as well.
Definition (automated testing): all the test cases are automatically executed with a test harness.
Advantages of automated testing: less boring for the developer, better test thoroughness, reduced cost of test execution, and indispensable for regression testing.

Test Doubles
- A test double is like a double in the movies (a stunt double), replacing the movie actor whenever it becomes dangerous.
- A test double is used if the collaborator in the system model is awkward to work with.
- There are 4 types of test doubles; all doubles try to make the SUT believe it is talking with its real collaborators (the four kinds are sketched after this item):
  - Dummy object: passed around but never actually used; dummy objects are usually used to fill parameter lists.
  - Fake object: a working implementation, but usually contains some type of shortcut which makes it not suitable for production code. (continued in item 8)
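A compact sketch of the four kinds of test double against one collaborator interface. The MailService interface and all class names are invented for illustration; only the four definitions come from the slides.

    import java.util.ArrayList;
    import java.util.List;

    public class TestDoubles {

        // The collaborator the SUT talks to (illustrative).
        interface MailService {
            void send(String to, String body);
        }

        // Dummy: fills a parameter list, must never actually be used.
        static class DummyMail implements MailService {
            public void send(String to, String body) {
                throw new AssertionError("dummy should never be used");
            }
        }

        // Fake: working implementation with a shortcut (in-memory, not SMTP).
        static class FakeMail implements MailService {
            final List<String> outbox = new ArrayList<>();
            public void send(String to, String body) { outbox.add(to + ": " + body); }
        }

        // Stub: canned answers only; cannot respond outside its programming.
        static class StubMail implements MailService {
            public void send(String to, String body) { /* canned no-op */ }
        }

        // Mock: records the interaction so the test can verify it afterwards.
        static class MockMail implements MailService {
            int sends = 0;
            public void send(String to, String body) { sends++; }
            void verifySentExactly(int expected) {
                if (sends != expected)
                    throw new AssertionError("expected " + expected + " sends, got " + sends);
            }
        }
    }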
18. [Static analysis tools in Eclipse, continued]
- Metrics: checks for structural anomalies. http://metrics.sourceforge.net
- FindBugs: uses static analysis to look for bugs in Java code. http://findbugs.sourceforge.net

FindBugs
- FindBugs is an open-source static analysis tool developed at the University of Maryland. It looks for bug patterns inspired by real problems in real code.
- Example: FindBugs is used by Google at so-called engineering "fixit" meetings. From an engineering fixit on May 13-14, 2007: the scope was all the Google software written in Java; 700 engineers participated by running FindBugs; 250 provided 8,000 reviews of 4,000 issues; more than 75% of the reviews contained issues that were marked "should fix", "must fix", or "I will fix"; engineers filed more than 1,700 bug reports.
Source: http://findbugs.sourceforge.net

Observations about Static Analysis
- Static analysis typically finds mistakes, but some mistakes don't matter; it is important to find the intersection of stupid and important mistakes.
- It is not a magic bullet, but if used effectively, static analysis is cheaper than other techniques for catching the same bugs.
- Static analysis at best catches 5-10% of software quality problems.
Source: William Pugh, "Mistakes that Matter", JavaOne Conference. http://www.cs.umd.edu/~pugh/MistakesThatMatter.pdf
19. [Unit testing heuristics, continued]
7. Execute the test cases. Re-execute the tests whenever a change is made (regression testing).
8. Compare the results of the test with the test oracle. Automate this if possible.

When should you write a unit test? Traditionally, after the source code is written; in XP (TDD), before the source code is written.

Test-Driven Development Cycle
- Add a new test to the test model.
- Run the automated tests -> the new test will fail.
- Write code to deal with the failure.
- Run the automated tests -> see them succeed.
- Refactor the code.

Integration Testing
[Diagram: Testing Activities and Models; integration testing sits between unit testing and system testing, on the developer side.]
- The entire system is viewed as a collection of subsystems (sets of classes) determined during the system and object design.
- Goal: test all interfaces between the subsystems, and the interaction of the subsystems.
- The integration testing strategy determines the order in which the subsystems are selected for testing and integration.

Why do we do integration testing?
- Unit tests only test the unit in isolation.
- Many failures result from faults in the interaction of subsystems.
- Often, off-the-shelf components are used that cannot be unit tested. (continued in item 20)
20. [Why do we do integration testing? continued]
- Without integration testing, the system test will be very time-consuming.
- Failures that are not discovered in integration testing will be discovered after the system is deployed, and can then be very expensive.

Test Stubs and Drivers
- Test driver: simulates the part of the system that calls the component under test; a component that calls the TestedUnit; controls the test cases.
- Test stub: simulates a component that is being called by the tested component; implements the same API as the component the TestedUnit depends on; a partial implementation; returns fake values.
[Diagram: Driver -> Tested Unit -> Stub]

Example: 3-layered architecture
[Diagram: Layer II: CurrencyConverter; Layer III: CurrencyDataBase, BinaryFileStorage, XMLFileStorage. A driver/stub sketch follows after this item.]

Big Bang Approach
[Diagram: Test A through Test G; all units are tested individually, then everything is integrated at once.]

Bottom-up Testing Strategy
- The subsystems in the lowest layer of the call hierarchy are tested individually.
- Then the subsystems above this layer are tested that call the previously tested subsystems.
- This is repeated until all subsystems are included. (diagram in item 6)
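For the 3-layer example, a driver/stub sketch in JUnit 4: the test class acts as the driver for CurrencyConverter, and a stub stands in for the Layer III database. The interfaces, methods, and the exchange rate are invented for illustration; the slides name only the components.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CurrencyConverterTest {

        // Layer III interface the converter depends on (illustrative).
        interface CurrencyDataBase {
            double rate(String from, String to);
        }

        // Layer II component under test (minimal stand-in).
        static class CurrencyConverter {
            private final CurrencyDataBase db;
            CurrencyConverter(CurrencyDataBase db) { this.db = db; }
            double convert(double amount, String from, String to) {
                return amount * db.rate(from, to);
            }
        }

        // Stub: same API as the database, returns fake values.
        static class CurrencyDataBaseStub implements CurrencyDataBase {
            public double rate(String from, String to) {
                return 7.5;  // canned rate, good enough for the test
            }
        }

        // The test method is the driver: it calls the tested unit
        // and controls the test case.
        @Test
        public void convertsUsingTheRateFromLayerIII() {
            CurrencyConverter converter =
                    new CurrencyConverter(new CurrencyDataBaseStub());
            assertEquals(75.0, converter.convert(10, "EUR", "DKK"), 1e-9);
        }
    }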
21. What is this: a failure, an error, a fault? We need to describe the specified behavior first. Specification: "a track shall support a moving train."
[Photo: a derailed train at a broken track, illustrating the failure.]

Erroneous State ("Error")
[Photo: the train approaching the broken track; the system is in a state from which further processing leads to the failure.]

Fault
Possible algorithmic fault: the compass shows a wrong reading, or there is wrong usage of the compass. Another possible fault: communication problems between the teams.

Mechanical Fault
[Photo: the broken track itself.]

Where is the failure? Where is the error? What is the fault? Bad use of implementation inheritance: a plane is not a rocket.

Examples of Faults and Errors
- Faults in the interface specification (very hard to find): mismatch between what the client needs and what the server offers; mismatch between requirements and implementation.
- Algorithmic faults: missing initialization, incorrect branching condition, missing test for null.
- Mechanical faults: operating temperature outside of the equipment specification.
- Errors: wrong user input, null reference errors, concurrency errors, exception handling errors.

How do we deal with errors, failures and faults? (continued in item 14)
22. Test Documentation
- Test Plan
- Test Case Specification
- Test Incident Report
- Test Report Summary

Test Plan
1. Introduction
2. Relationship to other documents
3. System overview
4. Features to be tested / not to be tested
5. Pass/fail criteria
6. Approach
7. Suspension and resumption
8. Testing materials (hardware/software requirements)
9. Test cases
10. Testing schedule

Key Points in Software Testing

Key Points I
- Testing can only show the presence of errors in a program; it cannot demonstrate that there are no remaining faults.
- Development testing is the responsibility of the software development team. A separate team should be responsible for testing a system before it is released to customers.
- Development testing includes unit testing, in which you test individual objects and methods; component testing, in which you test related groups of objects; and system testing, in which you test partial or complete systems.

Key Points II
- When testing software, you should try to break the software, using experience and guidelines to choose types of test case that have been effective in discovering defects in other systems.
- Wherever possible, you should write automated tests. (continued in item 5)
23. [Types of performance testing, continued]
- Quality testing: test reliability, maintainability & availability.
- Recovery testing: test the system's response to the presence of errors or to loss of data.
- Human factors testing: test with end users.

Acceptance (Client) Testing
[Diagram: Testing Activities and Models; acceptance testing validates the client's expectations and is performed by the client.]

Client Testing
- Goal: demonstrate that the system is ready for operational use.
- The choice of tests is made by the client; many tests can be taken from integration testing.
- The acceptance test is performed by the client, not by the developer.
- User or customer testing is a stage in the testing process in which users or customers provide input and advice on system testing.
- User testing is essential, even when comprehensive system and release testing have been carried out. The reason is that influences from the user's working environment have a major effect on the reliability, performance, usability and robustness of a system; these cannot be replicated in a testing environment.

Types of User Testing
- Alpha testing: users of the software work with the development team to test the software at the developer's site.
- Beta testing: a release of the software is made available to users. (continued in item 15)
24. [Modeling a continuous integration system, continued]
- Functional requirements (continued): run the unit test cases; generate project status metrics; visualize the status of the projects; move successful builds into the software repository.
- Components (subsystems): Master Directory (provides version control); Builder Subsystem (executes the build script when a change has been detected); Continuous Integration Server; Management Subsystem (visualizes project status via web browser); Notification Subsystem (publishes the results of the build via different channels: e-mail client, RSS feed).

Analysis: Functional Model for Continuous Integration
[Use case diagram. Developer: set up SCM server, set up CI server, create software repository, set up CI project, start CI server, start SCM server, create programmer's directory, manage programmer's directory, write code and buildfile, run build locally, choose project. Manager: notify build status and metrics, track progress, visualize build results, visualize project metrics.]

Design of a Continuous Integration System
- Continuous build server
- Automated tests with high coverage
- Tool-supported refactoring
- Software configuration management
- Issue tracking

Design: Deployment Diagram of a Continuous Integration System
- device ManagementNode: executionEnvironment Safari (web browser), executionEnvironment Mail (e-mail client). (continued in item 3)
25. [Types of testing, continued]
- System Testing: the entire system is tested. Carried out by developers. Goal: determine whether the system meets the requirements (functional and non-functional).
- Acceptance Testing: evaluates the system delivered by the developers. Carried out by the client. May involve executing typical transactions on site on a trial basis. Goal: demonstrate that the system meets the requirements and is ready to use.

Unit (Component) Testing
[Diagram: Testing Activities and Models; unit testing validates the object design and is performed by the developer.]

Static Analysis vs. Dynamic Analysis
- Static analysis: hand execution (reading the source code); walk-through (informal presentation to others); code inspection (formal presentation to others); automated tools checking for syntactic and semantic errors, and for departures from coding standards.
- Dynamic analysis: black-box testing (test the input/output behavior); white-box testing (test the internal logic of the subsystem or class); data-structure-based testing (data types determine the test cases).

Black-box Testing
- Focus: I/O behavior. If, for any given input, we can predict the output, then the unit passes the test.
- It is almost always impossible to generate all possible inputs ("test cases").
- Goal: reduce the number of test cases by equivalence partitioning. (continued in item 27)
26. [Pros and cons of bottom-up integration testing, continued]
- Con: tests an important subsystem (the user interface) last, and drivers are needed.

Pros and Cons of Sandwich Testing
- Pro: top and bottom layer tests can be done in parallel.
- Con: does not test the individual subsystems and their interfaces thoroughly before integration.
- Solution: the modified sandwich testing strategy.

Typical Integration Questions
- Do all the software components work together?
- How much code is covered by automated tests?
- Were all tests successful after the latest change?
- What is my code complexity?
- Is the team adhering to coding standards?
- Were there any problems with the last deployment?
- What is the latest version I can demo to the client?

Regression Testing
- Regression testing is testing the system to check that changes have not broken previously working code.
- In a manual testing process, regression testing is expensive; with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program, and the tests must run successfully before the change is committed. (A JUnit suite for this is sketched after this item.)

Risks in Integration Testing Strategies
- Risk 1: the higher the complexity of the software system, the more difficult the integration of its components.
- Risk 2: the later integration occurs in a project, the bigger the risk that unexpected faults occur. (continued in item 16)
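One common way to rerun all tests on every change with JUnit 4 is a suite class that the build script or IDE invokes; a sketch, where the member classes are the illustrative tests from the other sketches on this page:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Regression suite: running this class reruns every listed test,
    // so a change is only committed when the whole suite is green.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        CurrencyConverterTest.class,
        AuctionMockTest.class,
        ItemStorePerformanceTest.class
    })
    public class RegressionSuite {
        // No body needed; the annotations drive the runner.
    }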
27. [Black-box testing, continued] Equivalence partitioning:
- Divide the inputs into equivalence classes.
- Choose test cases for each equivalence class. Example: if an object is supposed to accept a negative number, testing one negative number is enough.

Black-box Testing: an Example

    public class MyCalendar {
        public int getNumDaysInMonth(int month, int year)
                throws InvalidMonthException { ... }
    }

Assume the following representations: month = 1, 2, ..., 12, where 1 = Jan, 2 = Feb, ..., 12 = Dec; year = 1904-2010. How many test cases do we need to do a full black-box unit test of getNumDaysInMonth()?

This depends on the calendar; we assume the Gregorian calendar. Equivalence classes for the month parameter: months with 30 days, months with 31 days, February, and illegal months (0, 13, -1). Equivalence classes for the year parameter: a normal year; leap years (divisible by 4, by 100, by 400); and illegal years (before 1904, after 2010). So we need 12 test cases for a full black-box unit test of getNumDaysInMonth(). (A JUnit sketch follows after this item.)

White-box Testing
- Focus: thoroughness (coverage); every statement in the component is executed at least once.
- Four types of white-box testing: statement testing, loop testing, path testing, and branch testing. (continued in item 12)
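A sketch of test cases drawn one per equivalence class, assuming a MyCalendar implementation as in the slide; the exception type is taken from the signature above, while the exact representative picked for each partition is an illustrative choice.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class MyCalendarTest {

        private final MyCalendar cal = new MyCalendar();  // SUT from the slide

        // One representative per equivalence class is enough.
        @Test
        public void monthWith31Days() throws InvalidMonthException {
            assertEquals(31, cal.getNumDaysInMonth(1, 1993));   // January
        }

        @Test
        public void monthWith30Days() throws InvalidMonthException {
            assertEquals(30, cal.getNumDaysInMonth(4, 1993));   // April
        }

        @Test
        public void februaryInNormalYear() throws InvalidMonthException {
            assertEquals(28, cal.getNumDaysInMonth(2, 1993));
        }

        @Test
        public void februaryInLeapYearDivisibleBy4() throws InvalidMonthException {
            assertEquals(29, cal.getNumDaysInMonth(2, 1996));
        }

        @Test
        public void februaryInYearDivisibleBy400() throws InvalidMonthException {
            assertEquals(29, cal.getNumDaysInMonth(2, 2000));
        }

        @Test(expected = InvalidMonthException.class)
        public void illegalMonthIsRejected() throws InvalidMonthException {
            cal.getNumDaysInMonth(0, 1993);
        }
    }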
28. [Observations, continued]
[Photo: Edsger W. Dijkstra, author of "Go To Statement Considered Harmful" (CACM); since 1970 focused on verification and the foundations of computer science; 1972 ACM Turing Award.]
Testing is not for free: define your goals and priorities.

Testing Takes Creativity
- To develop an effective test, one must have: a detailed understanding of the system; application and solution domain knowledge; knowledge of the testing techniques; and the skill to apply these techniques.
- Testing is done best by independent testers: we often develop a certain mental attitude that the program should behave in a certain way, when in fact it does not; programmers often stick to the data set that makes the program work; and a program often does not work when tried by somebody else.

Test Model
- The test model consolidates all test-related decisions and components into one package (sometimes also called the test package or the test requirements).
- The test model contains the tests, the test driver, the input data, the oracle, and the test harness:
  - The test driver: the program executing the test.
  - The input data needed for the tests.
  - The oracle: compares the expected output with the actual test output obtained from the test.
  - The test harness: a framework or software components that allow running the tests under varying conditions and monitoring the behavior and outputs of the system under test (SUT). Test harnesses are necessary for automated testing. (continued in item 17)
29. Test-Driven Development
Test-driven development (TDD) is an approach to program development in which you interleave testing and code development:
- Tests are written before the code, and passing the tests is the critical driver of development.
- You develop code incrementally, along with a test for that increment; you don't move on to the next increment until the code that you have developed passes its test.
- TDD was introduced as part of agile methods such as Extreme Programming; however, it can also be used in plan-driven development processes.
[Cycle diagram: identify new functionality -> write test -> run test -> implement functionality and refactor -> repeat.]

Benefits of Test-Driven Development
- Code coverage: every code segment that you write has at least one associated test, so all code written has at least one test.
- Regression testing: a regression test suite is developed incrementally as the program is developed.
- Simplified debugging: when a test fails, it should be obvious where the problem lies; the newly written code needs to be checked and modified.
- System documentation: the tests themselves are a form of documentation that describes what the code should be doing. (A small TDD sketch follows below.)
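A sketch of one TDD increment in JUnit 4 for a hypothetical Stack class, following the cycle above: the test is written first and fails (red), then just enough code is written to make it pass (green), and the code is then refactored. All names are illustrative.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class StackTest {

        // Step 1: add a new test for the next increment; run it, watch it fail.
        @Test
        public void pushThenPopReturnsTheSameElement() {
            Stack<String> stack = new Stack<>();
            stack.push("x");
            assertEquals("x", stack.pop());
            assertTrue(stack.isEmpty());
        }

        // Step 2: write just enough code to deal with the failure.
        static class Stack<E> {
            private final java.util.ArrayList<E> items = new java.util.ArrayList<>();
            void push(E e) { items.add(e); }
            E pop() { return items.remove(items.size() - 1); }
            boolean isEmpty() { return items.isEmpty(); }
        }
        // Step 3: rerun the automated tests, see them succeed, then refactor.
    }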
