Code Quality Rank: L4
Programming language: Swift
License: MIT License
Tags: Testing, Other Testing
Latest version: v0.2.3

README


AcceptanceMark is a tool for generating Acceptance Tests in Xcode, inspired by Fitnesse.

Read this blog post for a full introduction to AcceptanceMark.

Fitnesse advantages

  • Easy to write business rules in tabular form in Markdown files.
  • All stakeholders can write Fitnesse tests.
  • Convenient Test Report.

Fitnesse disadvantages

  • Does not integrate well with XCTest.
  • Requires running a separate server.
  • Difficult to configure and run locally / on CI.

The solution: AcceptanceMark

AcceptanceMark is the ideal tool to write Fitnesse-style acceptance tests that integrate seamlessly with XCTest:

  • Write your test inputs and expected values in markdown tables.
  • AcceptanceMark generates XCTest test classes with strongly-typed inputs and outputs.
  • Write test runners to evaluate the system under test with the given inputs.
  • Run the chosen test target (unit tests are supported; UI test support is planned) and get a test report.

How does this work?

Write your own test sets, like so:

image-tests.md

## Image Loading

| name:String   || loaded:Bool  |
| ------------- || ------------ |
| available.png || true         |
| missing.png   || false        |

Run amtool manually or as an Xcode pre-compilation phase:

amtool -i image-tests.md

This generates an XCTestCase test class:

/*
 * File Auto-Generated by AcceptanceMark - DO NOT EDIT
 * input file: ImageTests.md
 * generated file: ImageTests_ImageLoadingTests.swift
 *
 * -- Test Specification -- 
 *
 * ## Image Loading
 * | name:String   || loaded:Bool  |
 * | ------------- || ------------ |
 * | available.png || true         |
 * | missing.png   || false        |
 */

//// Don't forget to create a test runner: 
//
//class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {
//
//  func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
//      return ImageTests_ImageLoadingOutput(<#parameters#>)
//  }
//}

import XCTest

struct ImageTests_ImageLoadingInput {
    let name: String
}

struct ImageTests_ImageLoadingOutput: Equatable {
    let loaded: Bool
}

protocol ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput
}
class ImageTests_ImageLoadingTests: XCTestCase {

    var testRunner: ImageTests_ImageLoadingRunnable!

    override func setUp() {
        // MARK: Implement the ImageTests_ImageLoadingRunner() class!
        testRunner = ImageTests_ImageLoadingRunner()
    }

    func testImageLoading_row1() {
        let input = ImageTests_ImageLoadingInput(name: "available.png")
        let expected = ImageTests_ImageLoadingOutput(loaded: true)
        let result = try! testRunner.run(input: input)
        XCTAssertEqual(expected, result)
    }

    func testImageLoading_row2() {
        let input = ImageTests_ImageLoadingInput(name: "missing.png")
        let expected = ImageTests_ImageLoadingOutput(loaded: false)
        let result = try! testRunner.run(input: input)
        XCTAssertEqual(expected, result)
    }

}

func == (lhs: ImageTests_ImageLoadingOutput, rhs: ImageTests_ImageLoadingOutput) -> Bool {
    return
        lhs.loaded == rhs.loaded
}

Write your test runner:

// User generated file. Put your test runner implementation here.
class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {

    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        // Your business logic here
        return ImageTests_ImageLoadingOutput(loaded: true)
    }
}

Add your generated test classes and test runners to your Xcode test target and run the tests.

Notes

  • Note the functional style of the test runner: it is simply a method that takes a strongly-typed input value and returns a strongly-typed output value. No state, no side effects.

  • XCTestCase subclasses can specify a setUp() method to configure initial state that is shared across all unit tests. This is deliberately not supported with AcceptanceMark test runners; stateless tests are preferred and encouraged instead.

Installation

AcceptanceMark includes amtool, a command line tool used to generate unit tests or UI tests.

Pre-compiled binary

The quickest way to install amtool is to download the pre-compiled executable from the project releases page.

Once downloaded, don't forget to add execute permission to the binary:

chmod +x amtool

Compile manually

Xcode 8 is required as amtool is written in Swift 3. To compile, clone this repo and run the script:

git clone https://github.com/bizz84/AcceptanceMark
cd AcceptanceMark
./scripts/build-amtool.sh

Once the build finishes, amtool can be found at this location:

./build/Release/amtool

For convenience, add the folder containing amtool to your $PATH:

export PATH=$PATH:/your/path/to/amtool

amtool command line options

amtool -i <input-file.md> [-l swift2|swift3]
  • Use -i to specify the input file
  • Use -l to specify the output language. Currently Swift 2 and Swift 3 are supported. Objective-C and other languages may be added in the future.
  • Use --version to print the version number

FAQ

  • Q: I want to have more than one table for each .md file. Is this possible?
  • A: Yes. As long as the file is structured as [ Heading, Table, Heading, Table, ... ], AcceptanceMark will generate multiple Swift test files named <filename>_<heading>Tests.swift, so each test set gets its own Swift classes in a single file. Note that heading names should be unique per file; whitespace and punctuation are stripped from headings (see the example below).
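
For instance, a hypothetical user-tests.md containing two test sets could look like this (the file name, headings, and columns are illustrative, not taken from the project):

user-tests.md

## Sign In

| username:String || success:Bool |
| --------------- || ------------ |
| jane            || true         |
| guest           || false        |

## Sign Out

| confirmed:Bool || signedOut:Bool |
| -------------- || -------------- |
| true           || true           |
| false          || false          |

Following the naming rule above, this should produce UserTests_SignInTests.swift and UserTests_SignOutTests.swift.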

  • Q: I want to preload application data/state for each test in a table (this is done with builders in Fitnesse). Can I do that?

  • A: This is on the roadmap. While the specification may change, one possible approach is to allow more than one table per heading, with the convention that the last table represents the input/output set, while all previous tables represent data to be preloaded. Until this is implemented, all preloading must be done directly in the test runner's run() method (a sketch of that workaround follows the example below). Preloading example:

## Image Loading

// Preloaded data
| Country:String | Code:String |
| -------------- | ----------- |
| United Kingdom | GB          |
| Italy          | IT          |
| Germany        | DE          |

// Test runner data
| name:String   || loaded:Bool  |
| ------------- || ------------ |
| available.png || true         |
| missing.png   || false        |
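
Until table-driven preloading is available, a minimal sketch of the workaround, reusing the ImageTests_ImageLoading* types generated earlier and using hard-coded stand-in data, could look like this:

class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {

    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        // 1. Preload: seed the fixture data that a future "preloaded data" table
        //    would describe. Here it is just a local set of bundled image names;
        //    in a real suite this step might populate a database or a stubbed service.
        let preloadedImages: Set<String> = ["available.png"]

        // 2. Evaluate the system under test with the row's input, as usual.
        //    The contains() check stands in for real image-loading logic.
        return ImageTests_ImageLoadingOutput(loaded: preloadedImages.contains(input.name))
    }
}
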
  • Q: I want to preload a JSON file for all tests running on a given table. Can I do that?
  • A: Yes: add the JSON loading code directly in the test runner's run() method, as sketched below. For extra configurability, you could specify the JSON file name as an input parameter of your test set and have your test runner load that file from the bundle.
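
For illustration, the runner from the earlier example could read a JSON fixture from the test bundle as in the sketch below; the fixture name (image-fixtures.json) and its format are assumptions, not part of AcceptanceMark:

import Foundation

class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {

    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        // Locate the hypothetical fixture bundled with the test target.
        let bundle = Bundle(for: type(of: self))
        guard let url = bundle.url(forResource: "image-fixtures", withExtension: "json") else {
            return ImageTests_ImageLoadingOutput(loaded: false)
        }

        // Assumed fixture format: { "availableImages": ["available.png"] }
        let data = try Data(contentsOf: url)
        let json = try JSONSerialization.jsonObject(with: data, options: [])
        let available = (json as? [String: Any])?["availableImages"] as? [String] ?? []

        return ImageTests_ImageLoadingOutput(loaded: available.contains(input.name))
    }
}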

LICENSE

MIT License. See the [license file](LICENSE.md) for details.

