More spec fixes
Replace process.nextTick with setTimeout(fn, 0) for specs
Also added an unspy in the afterEach
Temporarily disable specs
fix(spec): start fixing specs
Summary:
This is the WIP fix to our spec runner.
Several tests have been completely commented out that will require
substantially more work to fix. These have been added to our sprint
backlog.
Other tests have been fixed to use new APIs or to deal with genuine bugs
that were introduced without our knowledge!
The most common non-trivial change relates to spying on `NylasAPI` and
`NylasAPIRequest`. We used to observe the arguments to `makeRequest`.
Unfortunately `NylasAPIRequest.run` takes no arguments. Instead you can use
`NylasAPIRequest.prototype.run.mostRecentCall.object.options` to get the
`options` passed into the request object. The `.object` property grabs the
context (`this`) of the spy when it was last called.
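For example, a spec that previously asserted on the arguments to
`makeRequest` now looks roughly like this (a sketch using the Jasmine 1.x
spy API; the endpoint values and the `triggerTheRequest` helper are
illustrative):

```
beforeEach(() => {
  spyOn(NylasAPIRequest.prototype, "run").andReturn(Promise.resolve());
});

it("builds the request we expect", () => {
  triggerTheRequest(); // hypothetical helper that causes a request to run

  // `run` takes no arguments, so instead of inspecting call args we grab
  // the request instance the spy was called on (`.object` is its `this`)
  // and read the `options` it was constructed with.
  const request = NylasAPIRequest.prototype.run.mostRecentCall.object;
  expect(request.options.path).toBe("/drafts");
  expect(request.options.method).toBe("POST");
});
```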
Fixing these tests uncovered several concerning issues with our test
runner. I spent a while tracking down why our participant-text-field-spec
was failing every so often. I chose that spec because it was the first
spec likely to fail, thereby requiring looking at the fewest preceding
files. I tried binary searching by turning preceding files on and off,
only to realize that the failure rate was not determined by any particular
preceding test, but rather by the existence and quantity of preceding
tests, AND the number of console.log statements I had. There is some
processor-dependent race condition going on that needs further
investigation.
I also discovered an issue with the file-download-spec. We were getting
errors about it accessing a file, which was very suspicious given the code
stubs out all fs access. This was caused by a spec that called an async
function outside of a `waitsForPromise` or `waitsFor` block. The test
completed, the spies were cleaned up, but the downstream async chain was
still running. By the time the async chain finished, the runner was
already working on the next spec and the spies had been restored (causing
the real fs access to run).
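The fix is to keep the async chain inside the spec's lifetime, for example
by wrapping it in `waitsForPromise` (a sketch; `startDownload` stands in
for the real call and the fs assertion is illustrative):

```
// Bad: nothing waits on the chain, so it outlives the spec and runs
// against the real fs module once the spies have been restored.
it("saves the file", () => {
  startDownload();
});

// Good: the runner waits for the promise to settle before tearing down
// the spies, so all fs access stays stubbed.
it("saves the file", () => {
  waitsForPromise(() => {
    return startDownload().then(() => {
      expect(fs.writeFileSync).toHaveBeenCalled();
    });
  });
});
```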
Juan had an idea to kill the specs once one fails to prevent cascading
failures. I'll implement this in the next diff update.
Test Plan: npm test
Reviewers: juan, halla, jackie
Differential Revision: https://phab.nylas.com/D3501
Disable other specs
Disable more broken specs
All specs turned off until we reach a passing state
Use async-safe versions of spec functions
Add async test spec
Remove unused package code
Remove canary spec
Summary:
The TaskQueue does its own throttling and has its own processQueue retry timeout, so there's no need for longPollConnected
Remove dead code (OfflineError)
Rename long connection state to status so we don't ask for `state.state`
Remove long poll actions related to online/offline in favor of exposing connection state through NylasSyncStatusStore
Consolidate notifications and account-error-header into a single package and organize files into sidebar vs. header.
Update the DeveloperBarStore to query the sync status store for long poll statuses
Test Plan: All existing tests pass
Reviewers: juan, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D2835
Summary:
The goal is to let us see what plugins are throwing errors on Sentry.
We are using a Sentry `tag` to identify and group plugins and their
errors.
Along the way, I cleaned up the error catching and reporting system. There
was a lot of duplicate error logic (that wasn't always right) and some
legacy Atom error handling.
Now, if you catch an error that we should report (like when handling
extensions), call `NylasEnv.reportError`. This used to be called
`emitError` but I changed it to `reportError` to be consistent with the
ErrorReporter and be a bit more indicative of what it does.
In the production version, the `ErrorLogger` will forward the request to
the `nylas-private-error-reporter` which will report to Sentry.
The `reportError` function also now inspects the stack to determine which
plugin(s) it came from. These are passed along to Sentry.
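As a sketch, extension-handling code now reports errors roughly like this
(the extension hook name is illustrative):

```
// Run each extension, but never let a broken plugin break the host app;
// report the error so Sentry can tag it with the offending plugin.
for (const extension of extensions) {
  try {
    extension.applyTransformsToDraft({draft}); // illustrative hook
  } catch (error) {
    NylasEnv.reportError(error);
  }
}
```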
I also cleaned up the `console.log` and `console.error` code. We were
logging errors multiple times, making the console confusing to read.
Worse, we were logging the `error` object, which would print not the stack
of the actual error, but rather the stack of where `console.error` was
called from. Printing `error.stack` instead shows much more accurate stack
traces.
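Concretely, the logging pattern changed roughly like this:

```
// Before: prints a stack rooted at this console.error call site.
console.error(error);

// After: prints the stack captured where the error was actually thrown.
console.error(error.stack);
```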
See changes in the Edgehill repo here: 8c4a86eb7e
Test Plan: Manual
Reviewers: juan, bengotow
Reviewed By: bengotow
Differential Revision: https://phab.nylas.com/D2509
Summary:
Fixes T4291
If I made a final edit to a pre-existing draft and sent, we'd queue a
`SyncbackDraftTask` before a `SendDraftTask`. This is important because,
since we have a valid draft `server_id`, the `SendDraftTask` will send by
server_id rather than POSTing the whole body.
If the `SyncbackDraftTask` fails, then we had a very serious issue whereby
the `SendDraftTask` would keep on sending. Unfortunately the server never
got the latest changes and sent the wrong version of the draft. This
incorrect version would show up later when the `/send` endpoint returned
the message that got actually sent.
The solution was to make any queued `SendDraftTask` fail if a dependent
`SyncbackDraftTask` failed.
This meant we needed to make the requirements for `shouldWaitForTask`
stricter, and block if tasks failed.
Unfortunately, there was no infrastructure in place to do this.
The first change was to change `shouldWaitForTask` to `isDependentTask`.
If we're going to fail when a dependent task fails, I wanted the method
name to reflect this.
Now, if a dependent task fails, we recursively check the dependency tree
(watching for cycles) and `dequeue` anything that depended on it succeeding.
I chose `dequeue` as the default action because it seemed as though all
current uses of `shouldWaitForTask` really should bail if their
dependencies fail. It's possible you don't want your task dequeued in this
dependency case. You can return the special `Task.DO_NOT_DEQUEUE_ME`
constant from the `onDependentTaskError` method.
When a task gets dequeued because of the reason above, the
`onDependentTaskError` callback gets fired. This gives tasks like the
`SendDraftTask` a chance to notify the user that it bailed. Not all tasks
need to notify.
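A sketch of what this looks like on a task (the class shape and the
notification call are illustrative; only `isDependentTask`,
`onDependentTaskError`, and `Task.DO_NOT_DEQUEUE_ME` come from the
description above):

```
class SendDraftTask extends Task {
  // Declare the dependency: this send should wait on (and fail with) the
  // syncback of the same draft.
  isDependentTask(other) {
    return (other instanceof SyncbackDraftTask) &&
           (other.draftClientId === this.draftClientId);
  }

  // Fired when a dependency fails and this task is about to be dequeued.
  onDependentTaskError(task, error) {
    // Let the user know the send was aborted (illustrative notification).
    Actions.postNotification({message: "Your draft failed to send.", type: "error"});
    // Returning Task.DO_NOT_DEQUEUE_ME here would keep the task queued;
    // the default is to dequeue it along with the failed dependency.
  }
}
```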
The next big issue was finding a better way to determine whether a task
truly errored to the point that we need to dequeue its dependencies. In
the Developer Status area we were showing tasks that had errored as
"Green" because we caught the error and resolved with
`Task.Status.Finished`. This used to be fine since nothing life-or-death
cared whether a task errored or not. Now that an error might cause tasks
to be aborted down the line, we needed a more robust method than this.
For one I changed `Task.Status.Finished` to a variety of finish types
including `Task.Status.Success`. The way you "error" out is to `throw` or
`Promise.reject` an `Error` object from the `performRemote` method. This
allows us to propagate API errors up, and acts as a safety net that can
catch any malformed code or unexpected responses.
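A sketch of the new `performRemote` contract (the request helper is
illustrative):

```
performRemote() {
  return NylasAPI.makeRequest(this._requestOptions()) // illustrative helper
    .then(() => {
      // Resolve with an explicit status instead of a generic "Finished".
      return Task.Status.Success;
    })
    .catch((apiError) => {
      // Rejecting with (or throwing) an Error marks the task as failed,
      // which is what propagates to dependent tasks and can dequeue them.
      return Promise.reject(apiError);
    });
}
```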
The developer bar now shows a much richer set of statuses instead of a
binary one, which was REALLY helpful in debugging this. We also record
when a Task got dequeued because of the conditions introduced here.
Once all this was working, we still had an issue of sending stale drafts.
Now, after a `SyncbackDraftTask` failed, we'd block the send and notify
the user as such. However, if we tried to send again, there was a separate
issue whereby we wouldn't queue another `SyncbackDraftTask` to update the
server with the latest information. Since our changes were already
persisted to the DB, we thought we had no changes, and therefore didn't
need to queue a `SyncbackDraftTask`.
The fix to this is to always force the creation of a `SyncbackDraftTask`
before send regardless of the state of the `DraftStoreProxy`.
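Roughly, the send path now always enqueues both tasks (a sketch; the exact
constructors and queueing action are assumed):

```
// Always push the draft's latest state to the server before sending, even
// if the DraftStoreProxy believes there are no unsaved changes.
Actions.queueTask(new SyncbackDraftTask(draftClientId));
Actions.queueTask(new SendDraftTask(draftClientId));
// If the syncback fails, the SendDraftTask is dequeued as its dependent
// instead of sending a stale version of the draft.
```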
Test Plan: new tests. Lots of manual testing
Reviewers: bengotow
Reviewed By: bengotow
Subscribers: mg
Maniphest Tasks: T4291
Differential Revision: https://phab.nylas.com/D2156