Mailspring/spec_integration
Evan Morikawa 918090a4e1 feat(error): improve error reporting. Now NylasEnv.reportError
Summary:
The goal is to let us see what plugins are throwing errors on Sentry.

We are using a Sentry `tag` to identify and group plugins and their
errors.

Along the way, I cleaned up the error catching and reporting system. There
was a lot of duplicate error logic (that wasn't always right) and some
legacy Atom error handling.

Now, if you catch an error that we should report (like when handling
extensions), call `NylasEnv.reportError`. This used to be called
`emitError` but I changed it to `reportError` to be consistent with the
ErrorReporter and be a bit more indicative of what it does.
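
For example, extension-handling code can now follow this pattern (a minimal sketch; the onComposeBody hook shown is hypothetical):

try {
  extension.onComposeBody(draft);
} catch (error) {
  NylasEnv.reportError(error);
}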

In the production version, the `ErrorLogger` will forward the request to
the `nylas-private-error-reporter` which will report to Sentry.

The `reportError` function also now inspects the stack to determine which
plugin(s) it came from. These are passed along to Sentry.

I also cleaned up the `console.log` and `console.error` code. We were
logging errors multiple times, making the console confusing to read.
Worse, we were logging the `error` object, which would print not the
stack of the actual error but rather the stack of where the
`console.error` was called. Printing `error.stack` instead shows much
more accurate stack traces.
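
Concretely (a minimal sketch):

// Logging the object prints a stack pointing at the console.error call site
console.error(error);

// Logging the stack string shows where the error was actually thrown
console.error(error.stack);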

See changes in the Edgehill repo here: 8c4a86eb7e

Test Plan: Manual

Reviewers: juan, bengotow

Reviewed By: bengotow

Differential Revision: https://phab.nylas.com/D2509
2016-02-03 18:06:52 -05:00
fixtures | add(integration-test): Adds test for onboarding flow with Exchange | 2015-12-14 15:14:58 -08:00
helpers | refactor(contenteditable): new ContenteditableExtension API | 2015-12-21 19:58:01 -08:00
jasmine | feat(error): improve error reporting. Now NylasEnv.reportError | 2016-02-03 18:06:52 -05:00
clean-app-boot-spec.es6 | add(integration-test): Adds test for onboarding flow with Exchange | 2015-12-14 15:14:58 -08:00
contenteditable-integration-spec.es6 | fix(composer): list creation edge case fixes and tests | 2016-01-22 10:36:15 -08:00
logged-in-app-boot-spec.es6 | feat(spec): add config dir to integration specs | 2015-12-10 10:52:20 -05:00
package.json | feat(spec): add config dir to integration specs | 2015-12-10 10:52:20 -05:00
README.md | docs(tests): add docs about integration testing | 2015-12-11 16:25:25 -05:00

Integration Testing

In addition to unit tests, we run integration tests using ChromeDriver and WebdriverIO through the Spectron library.

Running Tests

script/grunt run-integration-tests

This command will, in order:

  1. Run npm test in the /spec_integration directory, passing in the NYLAS_ROOT_PATH
  2. Boot Jasmine and load all files ending in -spec
  3. Most tests boot N1 in a beforeAll block via the N1Launcher. See spec_integration/helpers/n1-launcher.es6
  4. This instantiates a new Spectron Application, which spawns a ChromeDriver process with the appropriate N1 launch args.
  5. ChromeDriver then boots a Selenium server at http://localhost:9515
  6. The ChromeDriver / Selenium server boots N1 with testing hooks and exposes a controlling API.
  7. That API is made easily available through the Spectron API.
  8. The N1Launcher's mainWindowReady, popoutComposerWindowReady, and onboardingWindowReady methods poll the app until the designated window is available and loaded, then resolve a Promise once everything has booted (a rough sketch of the launcher follows).
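
Here is that rough sketch. It is not the actual spec_integration/helpers/n1-launcher.es6 code; the Application options, electronPath, and nylasRootPath below are assumptions illustrating the steps above:

import {Application} from 'spectron';

class N1Launcher extends Application {
  constructor(launchArgs = []) {
    super({
      path: electronPath,                       // assumption: the Electron binary to spawn
      args: [nylasRootPath].concat(launchArgs), // assumption: derived from NYLAS_ROOT_PATH
    });
  }

  mainWindowReady() {
    // Start ChromeDriver / Selenium, then poll until the main window has
    // fully loaded its packages and rendered (windowReady, described
    // below, does the actual polling)
    return this.start().then(() => this.windowReady());
  }
}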

Writing Tests

The Spectron API is a pure extension over the Webdriver API. Reference both to write tests.

Most of the methods on the app.client object apply to the "currently focused" window only. Since N1 has several windows (many of which are hidden), the N1Launcher extension will cycle through windows automatically until it finds the one you want, and then select it (a sketch of that cycling follows).
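
That cycling could look something like the sketch below, built on Spectron's getWindowCount and windowByIndex commands (selectWindowWhere and the title predicate are hypothetical names, not part of N1Launcher):

function selectWindowWhere(app, predicate) {
  return app.client.getWindowCount().then((count) => {
    const tryIndex = (i) => {
      if (i >= count) {
        return Promise.reject(new Error("No matching window found"));
      }
      // Focus window i, inspect it, and keep cycling if it doesn't match
      return app.client.windowByIndex(i)
        .then(() => app.client.getTitle())
        .then((title) => (predicate(title) ? true : tryIndex(i + 1)));
    };
    return tryIndex(0);
  });
}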

Furthermore, "loaded" in the pure Spectron sense means only that the window has booted. N1 windows take much longer to fully finish loading packages and rendering the UI. The N1Launcher::windowReady method and its derivatives take this into account.

You will almost always need the minimal boilerplate for each integration test:

describe('My test', () => {
  beforeAll((done) => {
    // Boot in dev mode with no other arguments
    this.app = new N1Launcher(["--dev"]);
    this.app.mainWindowReady().finally(done);
  });

  afterAll((done) => {
    // Shut N1 down if it's still running
    if (this.app && this.app.isRunning()) {
      this.app.stop().finally(done);
    } else {
      done();
    }
  });

  it("is a test you'll write", () => {
  });

  it("is an async test you'll write", (done) => {
    // doSomething() stands in for any function that returns a Promise
    doSomething().finally(done);
  });
});

Executing Code in N1's Environment

The app.client.execute and app.client.executeAsync methods are extremely helpful for running code inside N1. They are documented in more detail in the WebdriverIO API docs.

it("is a test you'll write", () => {
  this.app.client.execute((arg1)=>{
    // NOTE: `arg1` just got passed in over a JSON api. It can only be a
    // primitive data type

    // I'M RUNNING IN N1
    return someValue

  }, arg1).then(({value})=>{
    // NOTE: the return is stuffed in an attribute called `value`. Also it
    // passed back of a JSON API and can only be a primitive value.
  })
});
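
app.client.executeAsync works the same way, except WebdriverIO injects an extra trailing callback into your function; call it with the (JSON-serializable) result. A sketch, where someAsyncThing is a placeholder:

it("is an async test you'll write in N1", (done) => {
  this.app.client.executeAsync((arg1, finish) => {
    // I'M RUNNING IN N1; `finish` is the injected callback
    someAsyncThing(arg1).then((result) => finish(result));
  }, arg1).then(({value}) => {
    // `value` is whatever was passed to `finish`
  }).finally(done);
});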

Debugging Tests

Debugging is done through lots of console.log-ing.

There is code in spec_integration/jasmine/bootstrap.js that attempts to catch unhandled Promise rejections and color them accordingly.
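
That hook presumably looks something like this (a sketch only; the actual bootstrap.js may register it differently):

process.on('unhandledRejection', (reason) => {
  // Print the rejection in red so it stands out in the test output
  const stack = (reason && reason.stack) || reason;
  console.error('\x1b[31m' + stack + '\x1b[0m');
});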

If you want to access logs from within N1 via the app.client.execute blocks, you'll have to either package them up yourself and return them, or use the new app.client.getMainProcessLogs() method recently added to Spectron (sketched below).
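
Using getMainProcessLogs is straightforward (a minimal sketch):

this.app.client.getMainProcessLogs().then((logs) => {
  logs.forEach((line) => console.log(line));
});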