AcceptanceMark alternatives and similar libraries
Based on the "Other Testing" category.
- PonyDebugger: Remote network and data debugging for your native iOS app using Chrome Developer Tools
- iOS Snapshot Test Case: Snapshot view unit tests for iOS
- ios-snapshot-test-case: Snapshot view unit tests for iOS
- Mockingjay: An elegant library for stubbing HTTP requests with ease in Swift
- OCMockito: Mockito for Objective-C: creation, verification and stubbing of mock objects
- Buildasaur: Automatic testing of your Pull Requests on GitHub and BitBucket using Xcode Server. Keep your team productive and safe. Get up and running in minutes. @buildasaur
- Kakapo: 🐤 Dynamically mock server behaviors and responses in Swift
- NaughtyKeyboard: The Big List of Naughty Strings is a list of strings which have a high probability of causing issues when used as user-input data. This is a keyboard to help you test your app from your iOS device.
- trainer: Convert xcodebuild plist and xcresult files to JUnit reports
- Cribble: Swifty tool for visual testing iPhone and iPad apps. Every pixel counts.
- Mockingbird: Simplify software testing by easily mocking any system using HTTP/HTTPS, allowing a team to test and develop against a service that is not complete or is unstable, or just to reproduce planned/edge cases.
- Mockit: A simple mocking framework for Swift, inspired by the famous http://mockito.org/
- MirrorDiffKit: Graduation from messy XCTAssertEqual messages.
- second_curtain: Upload failing iOS snapshot test cases to S3
- MetovaTestKit: A collection of useful test helpers designed to ease the burden of writing tests for iOS applications.
- SnappyTestCase: iOS Simulator type agnostic snapshot testing, built on top of FBSnapshotTestCase.
- XCTestExtensions: A Swift extension that provides convenient assertions for writing unit tests.
- TestKit: The easiest way to implement full BDD in your Swift iOS projects! Use plain English specs (Gherkin) to drive tests that include both UI automation and interacting with application data & state.
- Bugfender Live: Stream the screen of an iOS device for live debugging.
- Parallel iOS Tests: Run iOS tests on multiple simulators in parallel
README
AcceptanceMark is a tool for generating Acceptance Tests in Xcode, inspired by Fitnesse.
Read this blog post for a full introduction to AcceptanceMark.
Fitnesse advantages
- Easy to write business rules in tabular form in Markdown files.
- All stakeholders can write Fitnesse tests.
- Convenient Test Report.
Fitnesse disadvantages
- Does not integrate well with XCTest.
- Requires running a separate server.
- Difficult to configure and run locally / on CI.
The solution: AcceptanceMark
AcceptanceMark is the ideal tool to write Fitnesse-style acceptance tests that integrate seamlessly with XCTest:
- Write your tests inputs and expected values in markdown tables.
- AcceptanceMark generates XCTest test classes with strong-typed input/outputs.
- Write test runners to evaluate the system under test with the given inputs.
- Run the chosen test target (Unit Tests are supported; UI Tests support is planned) and get a test report.
How does this work?
Write your own test sets, like so:
image-tests.md
```
## Image Loading

| name:String   || loaded:Bool |
| ------------- || ----------- |
| available.png || true        |
| missing.png   || false       |
```
Run amtool manually or as an Xcode pre-compilation phase:
```
amtool -i image-tests.md
```
This generates an `XCTestCase` test class:
```swift
/*
 * File Auto-Generated by AcceptanceMark - DO NOT EDIT
 * input file: ImageTests.md
 * generated file: ImageTests_ImageLoadingTests.swift
 *
 * -- Test Specification --
 *
 * ## Image Loading
 * | name:String   || loaded:Bool |
 * | ------------- || ----------- |
 * | available.png || true        |
 * | missing.png   || false       |
 */

//// Don't forget to create a test runner:
//
//class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {
//
//    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
//        return ImageTests_ImageLoadingOutput(<#parameters#>)
//    }
//}

import XCTest

struct ImageTests_ImageLoadingInput {
    let name: String
}

struct ImageTests_ImageLoadingOutput: Equatable {
    let loaded: Bool
}

protocol ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput
}

class ImageTests_ImageLoadingTests: XCTestCase {

    var testRunner: ImageTests_ImageLoadingRunnable!

    override func setUp() {
        // MARK: Implement the ImageTests_ImageLoadingRunner() class!
        testRunner = ImageTests_ImageLoadingRunner()
    }

    func testImageLoading_row1() {
        let input = ImageTests_ImageLoadingInput(name: "available.png")
        let expected = ImageTests_ImageLoadingOutput(loaded: true)
        let result = try! testRunner.run(input: input)
        XCTAssertEqual(expected, result)
    }

    func testImageLoading_row2() {
        let input = ImageTests_ImageLoadingInput(name: "missing.png")
        let expected = ImageTests_ImageLoadingOutput(loaded: false)
        let result = try! testRunner.run(input: input)
        XCTAssertEqual(expected, result)
    }
}

func == (lhs: ImageTests_ImageLoadingOutput, rhs: ImageTests_ImageLoadingOutput) -> Bool {
    return lhs.loaded == rhs.loaded
}
```
Write your test runner, conforming to the generated `ImageTests_ImageLoadingRunnable` protocol:

```swift
// User generated file. Put your test runner implementation here.
class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        // Your business logic here
        return ImageTests_ImageLoadingOutput(loaded: true)
    }
}
```
Add your generated test classes and test runners to your Xcode test target and run the tests.
Notes
Note the functional style of the test runner: it is simply a method that takes a strongly-typed input value and returns a strongly-typed output value. No state, no side effects.
`XCTestCase` subclasses can specify a `setUp()` method to configure an initial state that is shared across all unit tests. This is deliberately not supported for AcceptanceMark test runners; stateless tests are preferred and encouraged instead.
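As a sketch of this stateless style (type names reproduced from the generated example above; the bundled-images set is a made-up stand-in for real business logic), any per-test setup can live locally inside `run()` so that nothing is shared between test rows:

```swift
import Foundation

// Input/output types and runner protocol as generated by amtool
// (reproduced from the example above).
struct ImageTests_ImageLoadingInput {
    let name: String
}

struct ImageTests_ImageLoadingOutput: Equatable {
    let loaded: Bool
}

protocol ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput
}

// A stateless runner: all setup happens inside run(), so no state
// leaks between test rows. The image set is illustrative only.
struct ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        let bundledImages: Set<String> = ["available.png"]  // per-test setup, local only
        return ImageTests_ImageLoadingOutput(loaded: bundledImages.contains(input.name))
    }
}
```

Because each call builds what it needs and returns a value, every row in the table is evaluated in isolation, exactly as the generated tests expect.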
Installation
AcceptanceMark includes amtool, a command line tool used to generate unit tests or UI tests.
Pre-compiled binary
The quickest way to install amtool is to download the pre-compiled executable from the project releases page.
Once downloaded, don't forget to add execute permission to the binary:

```
chmod +x amtool
```
Compile manually
Xcode 8 is required, as amtool is written in Swift 3. To compile, clone this repo and run the build script:

```
git clone https://github.com/bizz84/AcceptanceMark
cd AcceptanceMark
./scripts/build-amtool.sh
```

Once the build finishes, amtool can be found at this location:

```
./build/Release/amtool
```
For convenience, amtool can be copied to a folder in your `$PATH`:

```
export PATH=$PATH:/your/path/to/amtool
```
amtool command line options
```
amtool -i <input-file.md> [-l swift2|swift3]
```

- Use `-i` to specify the input file
- Use `-l` to specify the output language. Currently Swift 2 and Swift 3 are supported; Objective-C and other languages may be added in the future.
- Use `--version` to print the version number
FAQ
- Q: I want to have more than one table for each `.md` file. Is this possible?
- A: Yes. As long as the file is structured as [ Heading, Table, Heading, Table, ... ], AcceptanceMark will generate multiple Swift test files named `<filename>_<heading>Tests.swift`. This way, each test set gets its own Swift classes, all in one file. Note that heading names should be unique per file; whitespace and punctuation are stripped from headings.
- Q: I want to preload application data/state for each test in a table (this is done with builders in Fitnesse). Can I do that?
- A: This is on the future roadmap. While the specification may change, one possible approach is to allow more than one table per heading, with the convention that the last table represents the input/output set, while all preceding tables represent data to be preloaded. Until this is implemented, all preloading must be done directly in the test runner's `run()` method. Preloading example:
```
## Image Loading

// Preloaded data
| Country:String | Code:String |
| -------------- | ----------- |
| United Kingdom | GB          |
| Italy          | IT          |
| Germany        | DE          |

// Test runner data
| name:String   || loaded:Bool |
| ------------- || ----------- |
| available.png || true        |
| missing.png   || false       |
```
- Q: I want to preload a JSON file for all tests running on a given table. Can I do that?
- A: You could do that by adding the JSON loading code directly in the test runner's `run()` method. For extra configurability, you could specify the JSON file name as an input parameter of your test set and have your test runner load that file from the bundle.
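For instance, a runner could decode a JSON fixture before evaluating the input. This is a sketch only: the fixture is inlined as a string, and its shape (`availableImages`) is a hypothetical example; the generated type names are taken from the example above. In a real test target you would read the `Data` from a bundled file instead, e.g. via `Bundle(for:).url(forResource:withExtension:)`.

```swift
import Foundation

// Generated types, reproduced from the example above.
struct ImageTests_ImageLoadingInput {
    let name: String
}

struct ImageTests_ImageLoadingOutput: Equatable {
    let loaded: Bool
}

protocol ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput
}

// Hypothetical fixture shape: { "availableImages": ["available.png"] }
struct ImageFixture: Decodable {
    let availableImages: [String]
}

class ImageTests_ImageLoadingRunner: ImageTests_ImageLoadingRunnable {
    func run(input: ImageTests_ImageLoadingInput) throws -> ImageTests_ImageLoadingOutput {
        // Inlined for the sketch; a test target would load this Data
        // from a fixture file in the test bundle instead.
        let json = Data(#"{ "availableImages": ["available.png"] }"#.utf8)
        let fixture = try JSONDecoder().decode(ImageFixture.self, from: json)
        return ImageTests_ImageLoadingOutput(loaded: fixture.availableImages.contains(input.name))
    }
}
```

Since `run()` is `throws`, any decoding error simply propagates and fails the generated test, so no extra error handling is needed in the runner.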
LICENSE
MIT License. See the [license file](LICENSE.md) for details.