Summary:
This commit bumps the version of sequelize to point to the latest
version in our fork. This version bump also includes the latest updates
from upstream sequelize since the last time we bumped the version.
Notably, it includes a critical fix that will prevent the sync
loop from getting stuck on rare occasions.
Specifically, sequelize's `Query.prototype.run` could in some cases return a
Promise that would /never/ resolve or reject, effectively halting the
execution of any code that was waiting for that promise. This was due to
a lack of error handling inside the query's `afterExecute` function; if an
error was thrown there, the enclosing Promise would never reject, and
the error would just remain uncaught. An example of such an error that
could cause this scenario is: T7742 - https://sentry.io/nylas/nylas-mail/issues/230016155/
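To illustrate the failure mode, here is a minimal sketch (not sequelize's
actual code; `executeQuery` and `afterExecute` below are hypothetical
stand-ins):

```js
// Hypothetical stand-ins for the internals involved, for illustration only.
function executeQuery(sql, callback) {
  setImmediate(() => callback(null, [{id: 1}]));
}
function afterExecute(rows) {
  throw new Error('boom'); // e.g. an error like T7742
}

// Failure mode: the throw happens inside an async callback, outside the Promise
// executor's synchronous scope, so neither resolve nor reject is ever called and
// the promise stays pending forever; the error just surfaces as "uncaught".
function run(sql) {
  return new Promise((resolve, reject) => {
    executeQuery(sql, (err, rows) => {
      if (err) return reject(err);
      afterExecute(rows); // throws here
      resolve(rows);
    });
  });
}

// Shape of the fix: route errors from the post-processing step into reject.
function runFixed(sql) {
  return new Promise((resolve, reject) => {
    executeQuery(sql, (err, rows) => {
      if (err) return reject(err);
      try {
        resolve(afterExecute(rows));
      } catch (postError) {
        reject(postError);
      }
    });
  });
}
```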
Sync getting halted in this way can produce a variety of user-facing
bugs: not being able to send (or sends taking an absurd amount of time,
e.g. hours), tasks never finishing or taking an absurd amount of time to
complete, new mail not arriving, among others.
The commit that fixes this is: 0deeda9e1a
The full set of changes introduced by this version bump are: https://github.com/nylas/sequelize/compare/nylas-3.30.0...nylas-3.40.0
**Note:**
It is important to note that sequelize might still halt the sync loop if
there are other places in the code that can return Promises that never
resolve or reject under certain circumstances. I can consistently reproduce this
scenario when an error is thrown inside `afterExecute`, and I've seen it
happen in the wild, but I've also run into this type of sync loop halting (db
promises never resolve) without any sequelize errors being thrown, which
suggests that there are other places in the sequelize code that might end
up returning a Promise that will halt sync.
Unfortunately, we don't have a good way to detect and report when this
happens yet, but we are adding one in upcoming diffs in order to have data
on how many people are running into this, and/or if this patch completely
fixes the issue. Otherwise, we'd need to audit sequelize's code.
Should resolve T7837 and T7767
Test Plan: manual
Reviewers: spang, mark, halla, khamidou, evan
Reviewed By: evan
Subscribers: mg, tomasz
Maniphest Tasks: T7767, T7837
Differential Revision: https://phab.nylas.com/D4152
Summary:
We want to enable code to control whether fetching is throttled or not.
We basically only want to throttle syncing the historical message
archive. New messages, initial inbox sync, and unknown search results
should not be throttled. Also some drive-by code refactoring.
Test Plan: Run locally, verify that things still work
Reviewers: spang, evan, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D4126
Summary:
This commit splits apart the `AccountDeltaConnection` class, which was
in charge of listening to both cloud /and/ local deltas by way of an
artificial interface, `DeltaStreamingInMemoryConnection`.
Splitting this into 2 modules with separate responsibilities will hopefully
make this code easier to reason about and reduce some cruft and unnecessary
indirection.
Specifically, this commit makes it so:
- `DeltaConnectionManager` is only in charge of starting and ending `DeltaStreamingConnection`s, which are solely in charge of listening to deltas from the cloud api
- `LocalSyncDeltaEmitter` no longer unnecessarily emits events for the `deltas` package to listen to but rather directly processes and saves those deltas from the K2 db to edgehill.db
- `LocalSyncDeltaEmitter` is also in charge of keeping track of the latest received cursor, under its own JSONBlob key in edgehill.db. This migrates localSync cursors saved under the old key.
- `LocalSyncDeltaEmitter` is now instantiated and managed from within the `SyncProcessManager` as opposed to the `SyncWorker`. Apart from removing extra state from the `SyncWorker`, this removes dependencies on the client-app environment from the sync-worker.
- `DeltaStreamingInMemoryConnection` and `AccountDeltaConnection` are now gone
(Sorry for the big diff! This one was a little hard to split up without landing something broken)
Depends on D4121
Test Plan: manual + unit tests planned in upcoming diff
Reviewers: halla, mark, evan, spang
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4122
Summary:
This is to make it available to all plugins in the app
Depends on D4120
Test Plan: manual
Reviewers: halla, mark, spang, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4121
Summary:
This commit completely refactors `DeltaStreamingConnection`, notably
introducing the following changes:
- Right now, `AccountDeltaConnection` establishes both delta connections to the cloud api and to the `client-sync` database (K2). This class is meant to disappear in favor of two different classes: one for syncing with the n1Cloud api and one for the local database. Specifically, `DeltaStreamingConnection`'s only responsibility is now to connect to the n1Cloud API and establish an http streaming connection for metadata deltas, etc. This class no longer unnecessarily inherits from `NylasLongConnection`, which removes a lot of unnecessary callback indirection.
- The statuses of the n1Cloud delta streaming connections are now stored as JSONBlobs in edgehill.db under new keys. This commit ensures that the data is correctly migrated from the old key (`NylasSyncWorker:<account_id>`).
- The `DeltaStreamingConnection` now correctly retries when it is closed or errors out. This logic previously existed, but was removed during the K2 transition: https://github.com/nylas/nylas-mail/blob/n1-pro/internal_packages/worker-sync/lib/nylas-sync-worker.coffee#L67
- Delta connection retries now back off using the `ExponentialBackoffScheduler`
- The delta connection is now restored whenever the app comes back online (rough sketch of the retry behavior below)
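A rough sketch of the retry behavior (the `ExponentialBackoffScheduler` name
comes from this diff; its exact API and the online hook are assumptions):

```js
// Assumed scheduler shape: exponentially growing delays, capped at 5 minutes.
class ExponentialBackoffScheduler {
  constructor({baseDelay = 1000, maxDelay = 5 * 60 * 1000} = {}) {
    this.baseDelay = baseDelay;
    this.maxDelay = maxDelay;
    this.reset();
  }
  reset() { this._attempts = 0; }
  nextDelay() {
    const delay = Math.min(this.baseDelay * Math.pow(2, this._attempts), this.maxDelay);
    this._attempts += 1;
    return delay;
  }
}

// Usage shape inside the streaming connection: retry on close or error, reset
// the backoff once data flows again, and reconnect right away when the app
// comes back online.
function keepConnected({connect, onOnline}) {
  const backoff = new ExponentialBackoffScheduler();
  const attempt = () => connect({
    onData: () => backoff.reset(),
    onClosedOrError: () => setTimeout(attempt, backoff.nextDelay()),
  });
  onOnline(() => { backoff.reset(); attempt(); });
  attempt();
}
```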
Depends on D4119
Test Plan: manual + planned unit tests in upcoming diff
Reviewers: halla, mark, evan, spang
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4120
Summary:
For legacy reasons, NylasLongConnection did not take the `onResults`
callback as an option; instead, `onResults` was a method you had to call
on the connection after instantiating it, passing in the callback. This
was an annoying and clunky interface.
This commit makes it so NylasLongConnection takes an `onResults` option
as one would expect, and updates any references to the old
interface.
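Roughly, the interface change looks like this (a sketch; the require path,
the constructor arguments other than `onResults`, and `handleResults` are
placeholders, not the exact signature):

```js
const NylasLongConnection = require('./nylas-long-connection'); // path is an assumption
const handleResults = (results) => console.log(results);        // placeholder callback

// Before: the callback had to be registered in a separate step after construction.
const oldConn = new NylasLongConnection(api, accountId, {path: '/delta/streaming'});
oldConn.onResults((results) => handleResults(results));
oldConn.start();

// After: the callback is just another option, passed up front.
const newConn = new NylasLongConnection(api, accountId, {
  path: '/delta/streaming',
  onResults: (results) => handleResults(results),
});
newConn.start();
```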
Depends on D4118
Test Plan: manual
Reviewers: halla, mark, spang, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4119
Summary:
Turns out, NylasSyncStatusStore doesn't need to concern itself with the
status of delta connections.
This is legacy cruft from the days when this store used to provide the
online status of the app based on the delta connections. Now we have an
OnlineStatusStore, so we don't need this at all.
Instead of stuffing that state into that store, this commit adds a small
DeltaConnectionStatusStore to provide an easy way to listen to changes
in the status of delta connections. The only part of the app concerned
with that state is the DeveloperBarStore, which uses it to render the
green circles for delta connections in the worker window.
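A minimal sketch of what such a store could look like, assuming the usual
NylasStore pattern (method names here are assumptions, not the actual
implementation):

```js
const NylasStore = require('nylas-store');

class DeltaConnectionStatusStore extends NylasStore {
  constructor() {
    super();
    this._statusesByAccountId = {};
  }

  getDeltaConnectionStatus(accountId) {
    return this._statusesByAccountId[accountId];
  }

  // Called when a delta connection's status changes; listeners (e.g. the
  // DeveloperBarStore rendering the green circles) update on trigger().
  _onStatusChanged(accountId, status) {
    this._statusesByAccountId[accountId] = status;
    this.trigger();
  }
}

module.exports = new DeltaConnectionStatusStore();
```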
Depends on D4117
Test Plan: manual
Reviewers: mark, halla, spang, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4118
Summary:
Generating the daily build always takes a bit of effort, so I wrote this
script to automate the process and allow anyone else to do it as well.
This script will:
- Check that your current working directory is in a good state
- Bump the version number in package.json
- Edit CHANGELOG.md with the git log diff between current master and the last version tag (you can also manually edit it). This is so writing the CHANGELOG before a release takes less effort; you will just need to open the CHANGELOG file and edit the commit messages.
- Commit the changes with a commit message of `bump(version) <next_version>`
- Add new version tags to nylas-mail and k2
- Push the changes to Github
- If provided with the flag, it will build the app. This is intended to be used on our mac mini machine (rough sketch of the whole flow below)
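In rough form, the flow looks something like this (a sketch only; the tag
format, CHANGELOG layout, and build step are assumptions):

```js
const {execSync} = require('child_process');
const fs = require('fs');

function run(cmd) {
  return execSync(cmd, {encoding: 'utf8'}).trim();
}

function release({build = false} = {}) {
  // 1. The working directory must be clean
  if (run('git status --porcelain') !== '') {
    throw new Error('Working directory is not in a good state');
  }

  // 2. Bump the version in package.json
  const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
  const [major, minor, patch] = pkg.version.split('.').map(Number);
  const nextVersion = `${major}.${minor}.${patch + 1}`;
  pkg.version = nextVersion;
  fs.writeFileSync('package.json', JSON.stringify(pkg, null, 2));

  // 3. Prepend the git log since the last version tag to CHANGELOG.md so it
  //    only needs manual editing before release
  const lastTag = run('git describe --tags --abbrev=0');
  const log = run(`git log ${lastTag}..master --oneline`);
  const changelog = fs.readFileSync('CHANGELOG.md', 'utf8');
  fs.writeFileSync('CHANGELOG.md', `# ${nextVersion}\n\n${log}\n\n${changelog}`);

  // 4. Commit, tag, and push (the real script also tags the k2 repo)
  run('git add package.json CHANGELOG.md');
  run(`git commit -m "bump(version) ${nextVersion}"`);
  run(`git tag ${nextVersion}`);
  run('git push origin master --tags');

  // 5. Optionally build the app (intended for the mac mini)
  if (build) run('npm run build');
}
```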
There are a few TODOs left in here, like updating the daily channel,
and emailing a notification with the build URLs when the channel is updated.
When this lands, we will configure our mac mini to run this script daily/nightly.
Also, any engineer should be able to trigger a daily build easily by just
running this script.
For sample output, see this commit which was generated by this script:
000fa88ebb
Test Plan: manually running it
Reviewers: halla, spang, mark, khamidou, evan
Reviewed By: evan
Subscribers: tomasz
Differential Revision: https://phab.nylas.com/D4123
Summary:
If an error is thrown while opening an IMAP box, we need to make sure to
set the `_isOpening` flag back to false.
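In other words, the fix is shaped roughly like this (a sketch; everything
except the `_isOpening` flag is an assumed name):

```js
class IMAPBox {
  constructor(conn) {
    this._conn = conn;       // assumed: underlying imap connection wrapper
    this._isOpening = false;
  }

  async open(folderName) {
    this._isOpening = true;
    try {
      return await this._conn.openBoxAsync(folderName);
    } finally {
      // Previously, an error thrown while opening left _isOpening stuck at true.
      this._isOpening = false;
    }
  }
}
```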
Test Plan: manual
Reviewers: mark
Reviewed By: mark
Differential Revision: https://phab.nylas.com/D4134
Summary:
In many cases when send fails, the server settings may be incorrect.
Send this data to Sentry with the original error in order to help us
diagnose.
I think the grouping will only contain settings for a single user
getting the same error, but it should still be good enough to do further
investigation, e.g. https://sentry.io/nylas/nylas-mail/issues/230529222/
We can always go look up the requests in the n1Cloud logs if more info is
needed.
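The shape of the change is roughly this (a sketch; the settings fields and
the `reportError` signature are assumptions, and credentials are stripped
before reporting):

```js
// `reportError` here stands in for the app's error reporter; the settings
// fields are illustrative.
function reportSendError(reportError, error, account) {
  // Attach the server settings to the original error so the Sentry event
  // carries enough context to diagnose bad server configurations.
  const settings = Object.assign({}, account.connectionSettings);
  delete settings.password; // never report credentials
  error.extra = Object.assign({}, error.extra, {serverSettings: settings});
  reportError(error);
}
```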
Test Plan: yolo
Reviewers: mark, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D4131
Summary:
We have a bunch of different endpoints and it'd be helpful to know which
endpoints are failing, not just which status codes are being thrown. e.g. while
trying to reproduce send failures, I've been able to cause /auth to fail with a
504, but I couldn't tell which endpoint was failing with the 504 until I added
this extra differentiator.
Test Plan: manual
Reviewers: mark, juan
Reviewed By: mark, juan
Differential Revision: https://phab.nylas.com/D4128
Arc land messed up and landed a branch that wasn't fully merged.
(Seriously: I had merged a copy of my branch with master to see how easy
it would be. Because I didn't want to merge the whole thing, I blindly
committed my changes and switched back to my real branch.) To my great
surprise, arc decided to use the wrong branch when landing it.
Original commit message:
Summary:
Finally, here it is! Send later, with support for open tracking but
without support for attachments yet. It took me some time to find the
right way to do things.
**The send later dilemma**
There's two ways we could handle send later:
1. do everything on the client
2. process the message in the cloud
Option 1 is very tempting because it would make the cloud server very
simple. Unfortunately, it has some serious limitations, for example
around setting the "Date" message header. That's why I chose to go with
option 2: when a user presses the "Send Later" button, we save the
open/link tracking metadata and fill in all the required fields. I added
a custom endpoint to the K2 API to do this, `/drafts/build`. After that,
we save the JSON contents of the message as metadata.
When we process metadata, we simply create a MIME message from the
JSON and send it.
**Limitations**
Right now, send later doesn't support attachments. There's also
some minor code duplication which needs to be refactored away.
Test Plan: Tested manually. Checked that regular send still worked, too.
Reviewers: mark, spang, halla, juan, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4054
Summary:
Finally, here it is! Send later, with support for open tracking but without support for attachments yet. It took me some time to find the right way to do things.
**The send later dilemma**
There's two ways we could handle send later:
1. do everything on the client
2. process the message in the cloud
Option 1 is very tempting because it would make the cloud server very simple. Unfortunately, it has some serious limitations, for example around setting the "Date" message header. That's why I chose to go with option 2: when a user presses the "Send Later" button, we save the open/link tracking metadata and fill in all the required fields. I added a custom endpoint to the K2 API to do this, `/drafts/build`. After that, we save the JSON contents of the message as metadata.
When we process metadata, we simply create a MIME message from the JSON and send it.
**Limitations**
Right now, send later doesn't support attachments. There's also some minor code duplication which needs to be refactored away.
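Putting the pieces together, the flow is roughly the following (a sketch
only; `/drafts/build` comes from this commit, while `n1CloudRequest`,
`setMetadata`, `buildMimeFromJSON`, and the plugin key are hypothetical
names):

```js
// Client side: ask the cloud to build the complete message JSON (Date header,
// tracking links, etc.), then store it as send-later metadata on the draft.
async function scheduleSendLater(draft, sendAtDate) {
  const messageJSON = await n1CloudRequest({ // hypothetical request helper
    method: 'POST',
    path: '/drafts/build',
    body: draft.toJSON(),
  });
  await setMetadata(draft, 'send-later', {   // hypothetical metadata helper/key
    expiration: sendAtDate,
    messageToSend: messageJSON,
  });
}

// Cloud side: when the expiration passes, turn the stored JSON into a MIME
// message and send it through the account's SMTP connection.
async function processSendLaterMetadatum(metadatum, smtpConnection) {
  const mime = buildMimeFromJSON(metadatum.value.messageToSend); // hypothetical
  await smtpConnection.sendMail(mime);
}
```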
Test Plan: Tested manually. Checked that regular send still worked, too.
Reviewers: mark, spang, halla, juan, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D4054
Summary:
Keeping a sourceMapCache uses a great deal of memory. Don't do it if we
don't have to.
Test Plan: Run locally, verify that we no longer generate a sourceMapCache
Reviewers: evan, juan, spang
Reviewed By: evan, juan, spang
Differential Revision: https://phab.nylas.com/D3964
Summary:
We never want to ship the last release we'll ever ship, which can happen
if the new version's autoupdater is broken.
To test the autoupdater, we upload the build under a version number one
higher (n+1) than the current one.
Test Plan: manual
Reviewers: juan, spang, halla
Reviewed By: halla
Differential Revision: https://phab.nylas.com/D4125
Summary:
This diff adds a timeout parameter to queueMessageForProcessing that can
be used to override the CPU-throttled-by-default timeout used by the
message processing queue. The FetchMessagesInFolder sync task now
overrides with a timeout of 0 when it is queuing specific UIDs that it
has downloaded. The thinking here is that if we've requested a specific
set of UIDs, it's very important to sync those quickly as this indicates
a higher priority (e.g. initial inbox sync, search result syncing, etc).
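A sketch of the resulting interface (the `timeout` option is from this
diff; `computeThrottledTimeout` and `scheduleProcessing` are hypothetical
stand-ins for the queue internals):

```js
// The queue normally spaces message processing out by a CPU-throttled delay;
// callers that know their messages are high priority can override it.
function queueMessageForProcessing(messageAttributes, {timeout} = {}) {
  const delay = (timeout != null) ? timeout : computeThrottledTimeout(); // hypothetical default
  return scheduleProcessing(messageAttributes, delay);                   // hypothetical internals
}

// FetchMessagesInFolder, when queueing UIDs it explicitly requested (initial
// inbox sync, search results), skips the throttling entirely:
// queueMessageForProcessing(attributes, {timeout: 0});
```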
Test Plan: Run locally, verify that initial sync and search syncing run faster
Reviewers: spang, evan, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D4114
Summary:
When searching using IMAP/Gmail commands we sometimes get back UIDs for
messages that we have yet to sync. Previously we would just ignore these
results, which would decrease the quality of search results for quite some
time during initial sync. This diff enables us to eagerly sync the unknown
messages we get back from the provider by creating a syncback task which
interrupts the sync loop and runs a sync task for the unknown UIDs.
Test Plan: Run locally, verify that we sync unknown messages
Reviewers: spang, evan, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D4101
Summary:
This commit adds a new option to the `reportError` interface that allows
rate limiting specific kinds of errors that are known to be reported a
lot, for example when they occur inside a loop.
For now, it only allows imposing a fixed reporting rate per hour for a
specific error; any reports past that rate are dropped until the hour
has passed.
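A minimal sketch of the mechanism, assuming an option shape along these
lines (the option and field names are assumptions):

```js
const reportCountsByKey = new Map();

// Stand-in for the actual Sentry call.
const sendToErrorReporter = (error, extra) => console.error(error, extra);

function reportError(error, extra = {}, {rateLimit} = {}) {
  if (rateLimit) {
    const now = Date.now();
    const entry = reportCountsByKey.get(rateLimit.key) || {count: 0, windowStart: now};
    if (now - entry.windowStart > 60 * 60 * 1000) {
      // A new hour-long window: start counting again.
      entry.count = 0;
      entry.windowStart = now;
    }
    entry.count += 1;
    reportCountsByKey.set(rateLimit.key, entry);
    if (entry.count > rateLimit.ratePerHour) {
      return; // past the hourly cap: drop the report until the window rolls over
    }
  }
  sendToErrorReporter(error, extra);
}
```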
Depends on D4099
Test Plan:
manually check that it reports and does not report errors based on the rate
limit data passed
Reviewers: khamidou, mark, evan, halla, spang
Reviewed By: evan, halla, spang
Differential Revision: https://phab.nylas.com/D4100
Summary:
Add better messaging and logging to the console.
Depends on D4098
Test Plan: manual
Reviewers: evan, khamidou, spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D4099
Summary:
Given that NylasAPIRequest reports all APIErrors for requests that
return 5xx, we were double reporting access token errors, which created
2 groups in Sentry and made them difficult to track.
This commit removes the extra reporting for that error in the sync loop.
Test Plan: manual
Reviewers: evan, khamidou, spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D4098
Summary:
Given that we back off exponentially in the sync loop when we encounter
RetryableErrors (e.g. network errors), if the app goes offline, reaches
the maximum retry backoff of 5 minutes, and then comes back online, in
the worst case the sync loop will idle for 5 minutes before trying to
sync again, which is undesirable.
To mitigate this, every time we come online we can wake all of the sync
workers. This shouldn't have any adverse effects.
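Conceptually, the wiring looks something like this (a sketch; the store's
listener API and the `wakeAllWorkers` method name are assumptions):

```js
// When the app transitions back online, don't wait out the remaining backoff
// (up to 5 minutes): wake every account's sync worker right away.
OnlineStatusStore.listen(() => {
  if (OnlineStatusStore.isOnline()) {
    SyncProcessManager.wakeAllWorkers(); // hypothetical method name
  }
});
```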
Test Plan: manual
Reviewers: evan, mark, spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D4097
Summary:
Given that `lerna bootstrap` does not install optional dependencies, we
need to manually run `npm install` inside `client-app` so
`node-mac-notifier` gets correctly installed and included in the build.
See https://github.com/lerna/lerna/issues/121
Fixes T7887
Test Plan: manual
Reviewers: halla, evan
Reviewed By: evan
Maniphest Tasks: T7887
Differential Revision: https://phab.nylas.com/D4076
Summary:
This is mostly to clean up the logic here a little bit.
Specifically:
- The backoff when encountering retryable errors is now truly exponential
- When encountering a permanent error, we back off for a minute
- If we don't encounter consecutive RetryableErrors, we clear the exponential backoff. Previously, we would only clear it when sync completed without errors. (Rough sketch of the resulting rules below.)
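A sketch of the resulting rules (the constants and names here are
assumptions):

```js
const BASE_DELAY_MS = 15 * 1000;
const MAX_DELAY_MS = 5 * 60 * 1000;

class RetryableError extends Error {} // stand-in for the app's error class

let consecutiveRetryableErrors = 0;

function delayBeforeNextSync(error) {
  if (error instanceof RetryableError) {
    consecutiveRetryableErrors += 1;
    // Truly exponential: 15s, 30s, 60s, ... capped at the maximum delay.
    return Math.min(BASE_DELAY_MS * Math.pow(2, consecutiveRetryableErrors - 1), MAX_DELAY_MS);
  }
  // Any sync attempt that doesn't hit a RetryableError clears the backoff.
  consecutiveRetryableErrors = 0;
  return error ? 60 * 1000 : 0; // permanent error: back off for a minute
}
```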
Test Plan: manual
Reviewers: khamidou, spang, evan, mark
Reviewed By: mark
Differential Revision: https://phab.nylas.com/D4086
Summary:
This commit makes it so we /always/ catch errors thrown in the request
established inside `NylasLongConnection`. Specifically, it catches an
uncaught `socket hang up` error which is flooding Sentry unnecessarily
(https://sentry.io/nylas/nylas-mail-old-1030/issues/213178758/).
This error is thrown when the long connection is manually ended before
any data has been received from the request (which happens most of the
time during search, when you clear the search query).
The problem was that when we end the long connection, we remove all event listeners
from the request, and /then/ the error is thrown, at which point there is no error
handler to catch it. The solution is simply to always have an error handler attached to
the request object.
This commit also adds extra error handling that seems to have been missing.
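A stripped-down sketch of the problem and the fix (this is not the actual
NylasLongConnection code; the request setup is simplified):

```js
const https = require('https');

function openLongConnection(options, {onData, onError}) {
  const req = https.request(options);
  // The fix: an error handler stays attached for the lifetime of the request,
  // so a late 'socket hang up' emitted after teardown is still caught.
  req.on('error', onError);
  req.on('response', (res) => {
    res.on('data', onData);
  });
  req.end();

  return {
    end() {
      // Previously, teardown removed *all* listeners (including 'error') and
      // then aborted; aborting before any data arrived then emitted an
      // uncaught 'socket hang up' error.
      req.removeAllListeners('response');
      req.abort();
    },
  };
}
```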
Test Plan:
manually check that delta connection is still working, and that error
is no longer thrown when you search and then immediately clear the search
Reviewers: mark, evan, spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D4096