Summary:
This solves T7579 when saving messages to the sent folder. I
attempted to clean up the references code but decided it was better left for a
new diff, so I added a number of TODOs in this diff.
Test Plan: manual
Reviewers: halla, spang, evan
Reviewed By: spang, evan
Differential Revision: https://phab.nylas.com/D3766
Summary:
We've been syncing draft messages but not the drafts flag in K2, making
them appear in Edgehill as regular old messages.
This commit makes K2 sync the drafts flag, and also correctly labels
folders called "Drafts" with the 'drafts' role.
Two-way syncing of drafts is complex and error-prone, since you need to
add new drafts and delete the old ones on every update, and we really
don't want to do things like create multiplying draft copies or
accidentally lose a draft someone started composing elsewhere. For now,
we simply exclude messages marked as drafts from being serialized to
Edgehill through the delta stream. This removes the confusing
behaviour and also sets a better stage for completing drafts sync later.
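A minimal sketch of the exclusion, assuming a hypothetical serializer and simplified message objects (the real delta-stream code in K2 looks different):

```js
// Hypothetical names; the real delta-stream serializer differs.
function serializeForDeltaStream(objects) {
  return objects
    // Drop anything that is a message flagged as a draft.
    .filter((obj) => !(obj.object === 'message' && obj.isDraft))
    .map((obj) => JSON.stringify(obj))
}

// Example: the draft is excluded, the regular message is serialized.
console.log(serializeForDeltaStream([
  { object: 'message', id: 'm1', isDraft: true },
  { object: 'message', id: 'm2', isDraft: false },
]))
```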
Eventually we will also want to add functionality to allow users to
select their drafts folder, but for now this code does the right thing
in many more cases.
While investigating this behaviour, I also discovered a bug we've never
seen before: Gmail isn't applying the \Draft flag to draft
messages, no matter which folder we fetch them from. :-/ This is very
unfortunate, and there's no way for us to work around it other than to
fetch messages in the Drafts folder and manually apply the flag locally,
since "drafts" is not a label in Gmail, only another IMAP folder. Brandon
Long from the Gmail team says this is because they've had
problems with clients which sync drafts, so the Gmail web client and
mobile apps do not set the \Draft flag on drafts. (I don't get how this
solves their problem, but okay.) Let's solve the issue on Gmail if users
demand it; it should be relatively straightforward to implement, but it
adds sync work and complexity.
Fixes T7593
Test Plan: manual
Reviewers: halla, juan
Reviewed By: juan
Maniphest Tasks: T7593
Differential Revision: https://phab.nylas.com/D3749
Summary:
We don't want to run message processing at full tilt when a user isn't plugged in.
This diff adds detection logic that throttles/unthrottles message processing
when the user unplugs/plugs in their computer.
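A rough sketch of the idea using Electron's powerMonitor, whose 'on-battery' / 'on-ac' events fire on unplug/plug; the MessageProcessor class and setThrottled method here are illustrative stand-ins, not the actual K2 API:

```js
const { app } = require('electron')

// Illustrative stand-in for the real message processor.
class MessageProcessor {
  constructor() { this.throttled = false }
  setThrottled(throttled) {
    // When throttled, the processing loop would sleep between batches.
    this.throttled = throttled
    console.log(`Message processing ${throttled ? 'throttled' : 'at full speed'}`)
  }
}

const processor = new MessageProcessor()

app.on('ready', () => {
  // powerMonitor can only be used once the app is ready.
  const { powerMonitor } = require('electron')
  powerMonitor.on('on-battery', () => processor.setThrottled(true))
  powerMonitor.on('on-ac', () => processor.setThrottled(false))
})
```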
Test Plan: Run locally unplugged and plugged in, verify that CPU use goes up/down
Reviewers: evan, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3759
Summary:
See title
Depends on D3744
Test Plan: tested locally
Reviewers: spang, evan, juan
Reviewed By: spang, evan, juan
Differential Revision: https://phab.nylas.com/D3745
Summary:
I thought it was gonna be OK that we kept all HTML parts in a multipart/alternative
MIME structure because the world is a sane place and nobody would ever put more
than one HTML part in a multipart/alternative structure.
I was wrong.
We have found extraterrestrial life^W^WI mean emails which contain duplicate,
exactly the same MIME parts within a multipart/alternative MIME structure: two
text/plain parts and two text/html parts. This is likely due to a broken MIME
implementation, or perhaps a bug in someone's email script. So, we should
only keep one text/html MIME part if there are multiple.
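A rough sketch of the fix; the MIME-tree shape here is an assumption, not the actual parser output we work with:

```js
// Given the children of a multipart/alternative node, keep only one text/html part.
// multipart/alternative lists parts in increasing order of preference, so when
// duplicates exist we keep the last one.
function pickHtmlPart(alternativeChildren) {
  const htmlParts = alternativeChildren.filter((p) => p.contentType === 'text/html')
  return htmlParts.length > 0 ? htmlParts[htmlParts.length - 1] : null
}

// Example with the duplicated parts described above:
console.log(pickHtmlPart([
  { contentType: 'text/plain', body: 'hi' },
  { contentType: 'text/plain', body: 'hi' },
  { contentType: 'text/html', body: '<p>hi</p>' },
  { contentType: 'text/html', body: '<p>hi</p>' },
]))
```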
Test Plan:
manual for now; added this to my mail parsing regression test list
for implementation once we unify the DBs and have a roughly stable code
structure
Reviewers: halla, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3750
Summary:
Rather than having a strict model where we don't decode the message if we
don't specifically recognize the CTE, treat any CTEs we don't recognize as
having no encoding. There are several CTE strings that could mean this (e.g.
7bit, 7BITS, 8-bit, binary, NONE, utf8), and we don't want to check for them
all. Additionally, if there is a CTE we don't support, the user will likely
see rendering issues and contact support. This will allow us to obtain more
concrete information about these messages.
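A simplified sketch of the fallback (not the exact decoder in K2; quoted-printable handling is elided):

```js
// Decode a body buffer according to its Content-Transfer-Encoding.
// Unrecognized CTEs (7bit, 7BITS, 8-bit, binary, NONE, utf8, ...) fall through
// to the default case and are treated as having no encoding.
function decodeBody(bodyBuffer, contentTransferEncoding = '') {
  const cte = contentTransferEncoding.trim().toLowerCase()
  switch (cte) {
    case 'base64':
      return Buffer.from(bodyBuffer.toString('ascii'), 'base64')
    // case 'quoted-printable': handled similarly, elided here for brevity
    default:
      return bodyBuffer
  }
}

console.log(decodeBody(Buffer.from('aGVsbG8='), 'base64').toString()) // "hello"
console.log(decodeBody(Buffer.from('hello'), '7BITS').toString())     // "hello"
```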
Test Plan: manual
Reviewers: spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D3748
Summary:
We were marking the account as errored after encountering enough
RetryableErrors, so we would show the red error box to the user when in
fact the problem was simply that the user was offline, causing confusion.
If the user is offline, the sync loop will constantly hit RetryableErrors,
and we can't mark the account as errored in that case.
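A hedged sketch of the intent; checkOnlineStatus and markAccountErrored are illustrative stand-ins here, not the real K2 functions:

```js
const dns = require('dns')

// Cheap connectivity probe: resolves to false if the DNS lookup fails.
function checkOnlineStatus() {
  return new Promise((resolve) => {
    dns.lookup('nylas.com', (err) => resolve(!err))
  })
}

// Stand-in for the code that flips the account into the errored state.
function markAccountErrored(account) {
  console.log(`Account ${account.id} marked as errored (red box shown to user)`)
}

async function onTooManyRetryableErrors(account) {
  if (!(await checkOnlineStatus())) {
    return // RetryableErrors are expected while offline; keep retrying quietly
  }
  markAccountErrored(account)
}
```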
Test Plan: manual
Reviewers: evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D3752
Summary:
This commit ensures that we handle transient errors correctly when refreshing
tokens.
Test Plan: manual
Reviewers: khamidou, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D3740
Summary:
In the sync worker:
- Move the backoff logic inside `scheduleNextSync`, where all logic to schedule the next sync loop now lives
- If we've retried a RetryableError many times, show the error to the user; otherwise the app can appear to stop working for no apparent reason
- Clean up logging
In the message processor:
- Report message processing errors to Sentry!
In the sync process manager:
- Listen to new `Actions.debugSync` to show the Activity Window and open dev tools
Test Plan: manual
Reviewers: khamidou, evan
Reviewed By: khamidou, evan
Differential Revision: https://phab.nylas.com/D3736
Summary:
This commit makes our syncback tasks send as few IMAP commands as possible by passing a set of UIDs whenever possible. Previously, we would send one command per message, each with a single UID, which was very wasteful given that we can pass a set of UIDs. This is especially helpful when operating on threads with a large number of messages.
Syncback actions will now group all messages in a thread by the folder they belong to, and issue a single operation on the folder box. When removing all labels from a thread (setting labels to []), we need to issue a command of the form `box.delLabels(uids, labels)`, so we also group messages by their set of labels to issue as few commands as possible.
This commit only batches IMAP commands; batching the syncback actions themselves can be implemented in a separate patch.
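A sketch of the grouping step; the message shape is simplified and the names are illustrative, not the actual syncback task code:

```js
// Group a thread's messages by the folder they live in so that each folder
// needs only one IMAP command carrying the whole UID set.
function groupUidsByFolder(messages) {
  const uidsByFolder = new Map()
  for (const msg of messages) {
    const uids = uidsByFolder.get(msg.folderName) || []
    uids.push(msg.folderImapUID)
    uidsByFolder.set(msg.folderName, uids)
  }
  return uidsByFolder
}

// With node-imap, each folder then takes a single call instead of one per message:
//   imap.openBox(folderName, false, () => {
//     imap.delLabels(uids, labelsToRemove, callback)
//   })
console.log(groupUidsByFolder([
  { folderName: 'INBOX', folderImapUID: 101 },
  { folderName: 'INBOX', folderImapUID: 102 },
  { folderName: '[Gmail]/All Mail', folderImapUID: 7 },
]))
```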
Test Plan: manual
Reviewers: evan, mark, spang
Reviewed By: spang
Subscribers: halla, mg
Differential Revision: https://phab.nylas.com/D3719
Summary: Double-firing protection since the DatabaseStore can now fire this
Test Plan: manual
Reviewers: juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3730
Summary:
We weren't, which meant that sending with multi-send or generic IMAP
broke threading. :(
Test Plan: manual
Reviewers: juan, evan
Reviewed By: juan, evan
Differential Revision: https://phab.nylas.com/D3718
Summary:
Before this commit, if folder sync was complete and the account didn't support CONDSTORE (e.g. Office365, Yahoo), we would only check for attribute updates every 10 minutes.
This commit makes it so we always check for attribute updates if the server doesn't support CONDSTORE.
For example, when marking a thread as read, we would perform the optimistic update in N1 and queue the syncback task, which would succeed, but the thread in K2's db would never get updated and would become stale, with an unreadCount > 0. If we emitted a delta for that thread during the window of time where we ignored attribute updates, it would be set as unread again in N1, even though all of its messages were read.
This still doesn't guarantee that it won't happen (we could still get a delta for the thread before we actually fetch the attribute updates from IMAP), but before this commit it was sure to happen. This should be properly fixed with the sync scheduler refactor.
Test Plan: manual
Reviewers: evan, mark, spang
Reviewed By: mark, spang
Differential Revision: https://phab.nylas.com/D3714
Summary:
Previously, we were not prioritizing archive sync when getting folders to sync, causing it to be synced almost last. I believe this was causing the issues with archived items coming back: we would optimistically archive in N1, but the changes wouldn't be reflected in K2's database until we synced the archive, so the data would become out of sync. If for whatever reason we got a delta for any of those messages before the archive was synced, they would pop back into the inbox because, as far as K2 was concerned, they were still in the inbox. This was exacerbated by the fact that all syncback tasks would interrupt the loop, so we wouldn't reach the archive until very late, making this scenario much more likely.
This still won't guarantee that it won't happen, because we don't do /any/ optimistic updates in K2, so we could still get deltas before we actually sync the folder, but it makes the scenario much less likely. This should be properly fixed with the sync scheduler refactor.
Test Plan: manual
Reviewers: spang, evan, mark
Reviewed By: mark
Differential Revision: https://phab.nylas.com/D3716
Summary:
All we do is use the SEARCH X-GM-RAW IMAP extension to find the UIDs
to prioritize at the beginning of initial sync, and download these UIDs
until there are none left. Then we continue downloading All Mail as
usual.
Because of the way we batch via ranges, the most expedient way to
implement this means that all prioritized emails will end up being
downloaded twice (the second time we'll detect that the message exists
and do nothing).
This seems like a worthwhile tradeoff for quick appearance of the
messages in a user's inbox.
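A sketch of the prioritization query using node-imap; it assumes an authenticated `imap` connection to a Gmail account:

```js
// Find the UIDs of inbox messages inside All Mail via the X-GM-RAW search
// extension, so initial sync can download them before the rest of the mailbox.
function fetchInboxUidsToPrioritize(imap) {
  return new Promise((resolve, reject) => {
    imap.openBox('[Gmail]/All Mail', true, (openErr) => {
      if (openErr) return reject(openErr)
      imap.search([['X-GM-RAW', 'in:inbox']], (searchErr, uids) => {
        if (searchErr) return reject(searchErr)
        // The range-based scan will later revisit these UIDs, notice the
        // messages already exist, and do nothing (the duplicate download
        // mentioned above).
        resolve(uids)
      })
    })
  })
}
```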
Test Plan: manual
Reviewers: evan, juan
Reviewed By: evan, juan
Differential Revision: https://phab.nylas.com/D3706
Summary:
Don't show the attachment icon on threads that only have inline
images. We do this by assuming that inline images have a contentID,
and regular attachments do not. Also updates the way we send
attachments in order to adhere to this standard.
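A simplified sketch of the check; the file shape is an assumption:

```js
// A thread only gets the attachment icon if it has at least one file without a
// contentId: inline images are referenced by cid: and carry a contentId, while
// regular attachments do not.
function threadHasRealAttachments(files) {
  return files.some((file) => !file.contentId)
}

console.log(threadHasRealAttachments([{ filename: 'logo.png', contentId: 'ii_abc123' }])) // false
console.log(threadHasRealAttachments([{ filename: 'report.pdf', contentId: null }]))      // true
```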
Test Plan: tested manually
Reviewers: spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D3696
Summary:
When syncing folders, we check if a folder needs syncing by checking whether it has any new messages via the STATUS command (STATUS returns uidnext and highestmodseq, among others, and is cheaper than SELECT).
However, we can't issue a STATUS on a box that is already selected. Previously, if the box was already selected, we would just return it, but this was incorrect because we wouldn't get the latest box values (e.g. uidnext), causing us to think that there were no updates available and to skip syncing folders that actually needed to be synced.
Now, if the box is already selected when getting the status, we re-select it to refresh the latest values.
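A hedged sketch with node-imap; tracking the currently open box name is shown as an explicit parameter here, which is a simplification of the real sync-worker state:

```js
// STATUS can't be issued against the currently selected mailbox, so if the
// folder is already open we re-SELECT it to refresh uidnext and friends rather
// than returning the stale box object.
function getFreshBoxStatus(imap, folderName, currentlyOpenBoxName) {
  return new Promise((resolve, reject) => {
    const done = (err, box) => (err ? reject(err) : resolve(box))
    if (currentlyOpenBoxName === folderName) {
      imap.openBox(folderName, true, done) // re-select, read-only
    } else {
      imap.status(folderName, done) // cheaper than SELECT
    }
  })
}
```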
Test Plan: manual
Reviewers: evan, khamidou, spang
Reviewed By: spang
Differential Revision: https://phab.nylas.com/D3697
Summary:
This commit also lowers the batch size of messages to fetch on folder sync down to 30. This is in order to prevent sync from getting stuck if we queue too many syncback tasks: given that we only update the range of fetched UIDs after we've actually fetched and processed messages, if the batch size is too big and we interrupt too often, we might end up never advancing the range and re-fetching the same messages over and over.
This also makes the sync loop run through all folders faster in general.
Depends on D3689 to make sure that the batch size actually reflects a message count, i.e. to ensure that we are making /visible/ progress.
Test Plan: manual
Reviewers: spang, khamidou, evan
Reviewed By: evan
Maniphest Tasks: T7477
Differential Revision: https://phab.nylas.com/D3692
Summary:
Because we optimistically fetch UIDs by expanding a range without looking
at the actual UIDs in the inbox and the actual space of UIDs with messages
attached may be sparse due to message moves, we need to track how many
messages we actually download during a range expansion and continue
expanding the range if we haven't downloaded enough messages.
If we reach a large gap where we download no messages at all during a batch, we
pause and check the actual UID list for the folder for the next UID to
download, as otherwise we may spin indefinitely fetching UIDs that don't exist.
(Example: my "Deleted Items" folder had about 300k worth of empty UIDs between
a very small UID and a very large UID. With the new system, this registers as a
completed sync within a single iteration as soon as sync hits the gap.)
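A condensed sketch of the loop; fetchAndCountMessages and nextExistingUidBelow are illustrative stand-ins for the real fetch code:

```js
// Expand the UID range downward until we've downloaded `batchSize` real
// messages, jumping over gaps in the UID space instead of spinning through
// ranges that contain nothing.
async function downloadNextBatch({ fetchMin, batchSize, fetchAndCountMessages, nextExistingUidBelow }) {
  let downloaded = 0
  let max = fetchMin - 1
  while (downloaded < batchSize && max >= 1) {
    const min = Math.max(1, max - batchSize + 1)
    const count = await fetchAndCountMessages({ min, max })
    if (count > 0) {
      downloaded += count
      max = min - 1
    } else {
      // Nothing in this range: ask the server for the next UID that actually
      // exists below it, or finish if there are none left.
      const nextUid = await nextExistingUidBelow(min)
      max = nextUid || 0
    }
  }
  // A fetchMin of 1 marks the folder sync as complete.
  return { newFetchMin: Math.max(1, max + 1), downloaded }
}
```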
Test Plan: manual
Reviewers: juan, evan
Reviewed By: juan, evan
Differential Revision: https://phab.nylas.com/D3689
Summary:
This patch changes the sync worker to back off exponentially when there is an issue syncing an account. This has two goals:
- first, it's a bit dangerous to retry immediately. We don't want hundreds of thousands of machines trying to refresh tokens unsuccessfully because our service is struggling.
- second, it's nicer on the CPU to wait a bit between retries.
Currently, we sleep for at most 2 minutes, with some random jitter added.
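A minimal sketch of the schedule; the base delay and jitter constants here are illustrative, since the commit only fixes the cap at about two minutes:

```js
const BASE_DELAY_MS = 1000
const MAX_DELAY_MS = 2 * 60 * 1000

// Exponential backoff capped at two minutes, plus up to a second of jitter so
// a fleet of clients doesn't retry in lockstep.
function nextRetryDelay(failedAttempts) {
  const exponential = BASE_DELAY_MS * Math.pow(2, failedAttempts)
  return Math.min(exponential, MAX_DELAY_MS) + Math.random() * 1000
}

// Roughly 1s, 2s, 4s, ... capped around 2 minutes.
for (let attempt = 0; attempt < 10; attempt++) {
  console.log(Math.round(nextRetryDelay(attempt)))
}
```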
Test Plan: Tested manually, stared at the code a long time.
Reviewers: evan, juan
Reviewed By: evan, juan
Differential Revision: https://phab.nylas.com/D3684
Summary:
Various errors are thrown when the sync worker tries accessing
a database that we've already deleted, so make sure the sync
worker has been stopped before we remove the database. This diff
involves modifying `Interruptible` so that `interrupt()` returns
a promise that resolves once the interrupt has been completed.
Addresses T7472
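A hedged sketch of the `Interruptible` change; the real class in K2 is more involved, but the shape of the API is the point:

```js
// Runs generator functions whose yields act as interruption points.
// interrupt() now returns a promise that resolves once the running operation
// has actually stopped, so callers (like account deletion) can await it before
// tearing down the database.
class Interruptible {
  constructor() {
    this._interrupted = false
    this._resolveInterrupt = null
  }

  interrupt() {
    this._interrupted = true
    return new Promise((resolve) => { this._resolveInterrupt = resolve })
  }

  async run(generatorFn, ...args) {
    const gen = generatorFn(...args)
    let step = gen.next()
    while (!step.done && !this._interrupted) {
      const value = await step.value // each yielded promise is a checkpoint
      step = gen.next(value)
    }
    if (this._resolveInterrupt) {
      this._resolveInterrupt() // the interrupt has now completed
      this._resolveInterrupt = null
    }
    this._interrupted = false
  }
}
```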
Test Plan: manual
Reviewers: evan, juan
Reviewed By: evan, juan
Differential Revision: https://phab.nylas.com/D3679
Summary:
On MG's machine this function is EXTREMELY non-performant and, for some
reason, causes things like archive to lock up while the console is running
here. I'm not entirely sure what's causing it, but there were some
simple DB cleanups that will make it faster for large queries.
There are likely other things involved, since the sequelize DB being locked
up shouldn't affect the performLocal of the Edgehill DB for things like
archive. Still looking into that.
Test Plan: manual
Reviewers: juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3683
Summary:
Before trying to sync a folder, check if we actually need to do so. This will prevent us from doing unnecessary work that slows down the sync loop (like performing SELECT commands).
We will perform a folder sync if any of the following are true (see the sketch after this list):
- The folder hasn't been completely synced
- There are new messages (using the IMAP STATUS command)
- There are attribute changes indicated via highestmodseq (using the IMAP STATUS command)
- If the server doesn't support highestmodseq, enough time has passed since we last ran an attribute scan on the folder
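A rough sketch of the combined check; the property names are assumptions based on the list above, not the exact K2 schema:

```js
const ATTRIBUTE_SCAN_INTERVAL_MS = 10 * 60 * 1000 // illustrative fallback interval

function shouldSyncFolder({ folder, boxStatus, serverSupportsHighestmodseq }) {
  const { isSyncComplete, lastUidnext, lastHighestmodseq, lastAttributeScanTime } = folder.syncState
  if (!isSyncComplete) return true
  if (boxStatus.uidnext > lastUidnext) return true // new messages
  if (serverSupportsHighestmodseq) {
    return boxStatus.highestmodseq !== lastHighestmodseq // attribute changes
  }
  // No highestmodseq support: fall back to a periodic attribute scan.
  return Date.now() - lastAttributeScanTime > ATTRIBUTE_SCAN_INTERVAL_MS
}
```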
Addresses T7513
Test Plan: manual
Reviewers: evan, halla, spang
Reviewed By: halla, spang
Differential Revision: https://phab.nylas.com/D3675
Summary:
Currently, our mail sync strategy of expanding UID ranges from UIDNEXT
backwards until a UID of 1 implicitly assumes that every UID corresponds to an
actual message. This assumption is incorrect, and results in several
significant bugs regarding sync status.
This patch fixes issue 1:
Since UIDs are persistent and, so long as the UIDVALIDITY is valid, ascend
monotonically upward, every time you move a message to a new folder you "lose"
UIDs lower down in the range. In my work Inbox, where I get a lot of mail,
archive all the time, and generally have only a small number of threads in the
mailbox, the smallest UID is over 100k. This means that, after all my inbox
messages are synced, the sync loop will continue attempting to download
nonexistent old messages in this mailbox for hundreds of sync iterations, and
will not mark the mailbox as fully synced until fetchmin reaches 1, even though
no messages are actually being pulled down.
This patch needs a small associated patch to N1 to update how sync status is
calculated (coming soon).
The next patch in this series will deal with gaps in the UIDspace that slow
down syncing of a folder.
Test Plan: manual
Reviewers: halla, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3677
Summary:
We want to do this in order to prevent send tasks from blocking the sync loop given that they can take a very long time to run. This is especially true when sending emails with large attachments to multiple recipients.
There is no real way to make sending in these cases faster, but we can prevent it from blocking the sync loop at least, especially because sending is mostly I/O bound.
This is a bit messy, admittedly, but it should be cleaned up when we properly implement a sync scheduler.
Also added a limit to the total size of attachments you can upload, to try to prevent weird EPIPE errors when sending.
See: D3670.
Also moved and renamed a few things.
Test Plan: manual
Reviewers: halla, evan
Reviewed By: evan
Differential Revision: https://phab.nylas.com/D3669
Summary: Allows us to reset accounts in local-sync too
Test Plan: manual
Reviewers: mark, juan
Reviewed By: juan
Differential Revision: https://phab.nylas.com/D3672