Mailspring/spec_integration/jasmine/bootstrap.js

// argv[0] = node
// argv[1] = jasmine
// argv[2] = JASMINE_CONFIG_PATH=./jasmine/config.json
// argv[3] = NYLAS_ROOT_PATH=/path/to/nylas/root
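
// Load the project's shared Babel options and hook babel-register into
// require(), so ES6/JSX spec files are transpiled on the fly.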
var babelOptions = require('../../static/babelrc.json');
require('babel-register')(babelOptions);

var chalk = require('chalk');
var util = require('util');
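
// Print errors in red so they stand out in spec output; non-string values
// are expanded with util.inspect before coloring.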
console.errorColor = function(err) {
  if (typeof err === "string") {
    console.error(chalk.red(err));
  } else {
    console.error(chalk.red(util.inspect(err)));
  }
};
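
// Parse the Nylas root path out of argv and attach it, along with the spec
// timeouts, to the jasmine global.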
jasmine.NYLAS_ROOT_PATH = process.argv[3].split("NYLAS_ROOT_PATH=")[1];
jasmine.UNIT_TEST_TIMEOUT = 120 * 1000;
jasmine.BOOT_TIMEOUT = 30 * 1000;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 30 * 1000;
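
// Swap the global Promise for bluebird with warnings, long stack traces, and
// cancellation enabled, to make async spec failures easier to trace.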
Promise = require('bluebird');
Promise.config({
  warnings: true,
  longStackTraces: true,
  cancellation: true
});
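
// Log unhandled promise rejections in red, including the rejection reason's
// stack when one is available.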
process.on("unhandledRejection", function(reason, promise) {
if (reason.stack) { console.errorColor(reason.stack); }
console.errorColor(promise);
});
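
// Log uncaught exceptions the same way.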
process.on("uncaughtException", function(error) {
if (error.stack) { console.errorColor(error.stack); }
console.errorColor(error);
});