_ = require 'underscore'
{NylasAPI, NylasAPIHelpers, NylasAPIRequest, Actions, DatabaseStore, DatabaseWriter, Account, Thread} = require 'nylas-exports'
DeltaStreamingConnection = require('../../src/services/delta-streaming-connection').default

# TODO these are badly out of date, we need to rewrite them
xdescribe "DeltaStreamingConnection", ->
  beforeEach ->
    @apiRequests = []
    # Inside the fake, `this` is the NylasAPIRequest instance (so `this.options`
    # is the request's options); capture the spec context to record requests.
    spec = this
    spyOn(NylasAPIRequest.prototype, "run").andCallFake ->
      spec.apiRequests.push({requestOptions: this.options})
    @localSyncCursorStub = undefined
    @n1CloudCursorStub = undefined
    # spyOn(DeltaStreamingConnection.prototype, '_fetchMetadata').andReturn(Promise.resolve())
    spyOn(DatabaseWriter.prototype, 'persistJSONBlob').andReturn(Promise.resolve())
    spyOn(DatabaseStore, 'findJSONBlob').andCallFake (key) =>
      if key is "NylasSyncWorker:#{TEST_ACCOUNT_ID}"
        return Promise.resolve _.extend {}, {
          "deltaCursors": {
            "localSync": @localSyncCursorStub,
            "n1Cloud": @n1CloudCursorStub,
          }
          "initialized": true,
          "contacts":
            busy: true
            complete: false
          "calendars":
            busy: false
            complete: true
        }
      else if key.indexOf('ContactRankings') is 0
        return Promise.resolve([])
      else
        throw new Error("Not stubbed! #{key}")

    spyOn(DeltaStreamingConnection.prototype, 'start')
    @account = new Account(id: TEST_ACCOUNT_CLIENT_ID, organizationUnit: 'label')
    @worker = new DeltaStreamingConnection(@account)
    @worker.loadStateFromDatabase()
    advanceClock()
    @worker.start()
    @worker._metadata = {"a": [{"id": "b"}]}
    @deltaStreams = @worker._deltaStreams
    advanceClock()

  it "should reset `busy` to false when reading state from disk", ->
    @worker = new DeltaStreamingConnection(@account)
    spyOn(@worker, '_resume')
    @worker.loadStateFromDatabase()
    advanceClock()
    expect(@worker._state.contacts.busy).toEqual(false)

  describe "start", ->
    it "should open the delta connection", ->
      @worker.start()
      advanceClock()
      expect(@deltaStreams.localSync.start).toHaveBeenCalled()
      expect(@deltaStreams.n1Cloud.start).toHaveBeenCalled()

    it "should start querying for model collections that haven't been fully cached", ->
      waitsForPromise => @worker.start().then =>
        expect(@apiRequests.length).toBe(7)
        modelsRequested = _.compact _.map @apiRequests, ({model}) -> model
        expect(modelsRequested).toEqual(['threads', 'messages', 'folders', 'labels', 'drafts', 'contacts', 'events'])

    it "should fetch 1000 labels and folders, to prevent issues where Inbox is not in the first page", ->
      labelsRequest = _.find @apiRequests, (r) -> r.model is 'labels'
      expect(labelsRequest.params.limit).toBe(1000)

    it "should mark incomplete collections as `busy`", ->
      @worker.start()
      advanceClock()
      nextState = @worker._state
      for collection in ['contacts', 'threads', 'drafts', 'labels']
        expect(nextState[collection].busy).toEqual(true)

    it "should initialize count and fetched to 0", ->
      @worker.start()
      advanceClock()
      nextState = @worker._state
      for collection in ['contacts', 'threads', 'drafts', 'labels']
        expect(nextState[collection].fetched).toEqual(0)
        expect(nextState[collection].count).toEqual(0)

    it "after failures, it should attempt to resume periodically but back off as failures continue", ->
      simulateNetworkFailure = =>
        @apiRequests[0].requestOptions.error({statusCode: 400})
        @apiRequests = []

      spyOn(@worker, '_resume').andCallThrough()
      spyOn(Math, 'random').andReturn(1.0)
      @worker.start()

      expectThings = (resumeCallCount, randomCallCount) =>
        expect(@worker._resume.callCount).toBe(resumeCallCount)
        expect(Math.random.callCount).toBe(randomCallCount)

      expect(@worker._resume.callCount).toBe(1)
      simulateNetworkFailure(); expectThings(1, 1)
      advanceClock(4000); advanceClock(); expectThings(2, 1)
      simulateNetworkFailure(); expectThings(2, 2)
      advanceClock(4000); advanceClock(); expectThings(2, 2)
      advanceClock(4000); advanceClock(); expectThings(3, 2)
      simulateNetworkFailure(); expectThings(3, 3)
      advanceClock(4000); advanceClock(); expectThings(3, 3)
      advanceClock(4000); advanceClock(); expectThings(3, 3)
      advanceClock(4000); advanceClock(); expectThings(4, 3)
      simulateNetworkFailure(); expectThings(4, 4)
      advanceClock(4000); advanceClock(); expectThings(4, 4)
      advanceClock(4000); advanceClock(); expectThings(4, 4)
      advanceClock(4000); advanceClock(); expectThings(4, 4)
      advanceClock(4000); advanceClock(); expectThings(4, 4)
      advanceClock(4000); advanceClock(); expectThings(5, 4)

    it "handles the request as a failure if we try and grab labels or folders without an 'inbox'", ->
      spyOn(@worker, '_resume').andCallThrough()
      @worker.start()
      expect(@worker._resume.callCount).toBe(1)
      request = _.findWhere(@apiRequests, model: 'labels')
      request.requestOptions.success([])
      expect(@worker._resume.callCount).toBe(1)
      advanceClock(30000); advanceClock()
      expect(@worker._resume.callCount).toBe(2)

    it "handles the request as a success if we try and grab labels or folders and it includes the 'inbox'", ->
      spyOn(@worker, '_resume').andCallThrough()
      @worker.start()
      expect(@worker._resume.callCount).toBe(1)
      request = _.findWhere(@apiRequests, model: 'labels')
      request.requestOptions.success([{name: "inbox"}, {name: "archive"}])
      expect(@worker._resume.callCount).toBe(1)
      advanceClock(30000); advanceClock()
      expect(@worker._resume.callCount).toBe(1)

  describe "delta streaming cursor", ->
    it "should read the cursor from the database", ->
      spyOn(DeltaStreamingConnection.prototype, 'latestCursor').andReturn Promise.resolve()

      @localSyncCursorStub = undefined
      @n1CloudCursorStub = undefined

      # no cursor present
      worker = new DeltaStreamingConnection(@account)
      deltaStreams = worker._deltaStreams
      expect(deltaStreams.localSync.hasCursor()).toBe(false)
      expect(deltaStreams.n1Cloud.hasCursor()).toBe(false)
      worker.loadStateFromDatabase()
      advanceClock()
      expect(deltaStreams.localSync.hasCursor()).toBe(false)
      expect(deltaStreams.n1Cloud.hasCursor()).toBe(false)

      # cursor present in database
      @localSyncCursorStub = "new-school"
      @n1CloudCursorStub = 123

      worker = new DeltaStreamingConnection(@account)
      deltaStreams = worker._deltaStreams
      expect(deltaStreams.localSync.hasCursor()).toBe(false)
      expect(deltaStreams.n1Cloud.hasCursor()).toBe(false)
      worker.loadStateFromDatabase()
      advanceClock()
      expect(deltaStreams.localSync.hasCursor()).toBe(true)
      expect(deltaStreams.n1Cloud.hasCursor()).toBe(true)
      expect(deltaStreams.localSync._getCursor()).toEqual('new-school')
      expect(deltaStreams.n1Cloud._getCursor()).toEqual(123)

    it "should set the cursor to the last cursor after receiving deltas", ->
      spyOn(DeltaStreamingConnection.prototype, 'latestCursor').andReturn Promise.resolve()
      worker = new DeltaStreamingConnection(@account)
      advanceClock()
      deltaStreams = worker._deltaStreams
      deltas = [{cursor: '1'}, {cursor: '2'}]
      deltaStreams.localSync._emitter.emit('results-stopped-arriving', deltas)
      deltaStreams.n1Cloud._emitter.emit('results-stopped-arriving', deltas)
      advanceClock()
      expect(deltaStreams.localSync._getCursor()).toEqual('2')
      expect(deltaStreams.n1Cloud._getCursor()).toEqual('2')

  describe "_resume", ->
    it "should fetch metadata first and fetch other collections when metadata is ready", ->
      fetchAllMetadataCallback = null
      spyOn(@worker, '_fetchCollectionPage')
      @worker._state = {}
      @worker._resume()
      expect(@worker._fetchMetadata).toHaveBeenCalled()
      expect(@worker._fetchCollectionPage.calls.length).toBe(0)
      advanceClock()
      expect(@worker._fetchCollectionPage.calls.length).not.toBe(0)

    it "should fetch collections for which `_shouldFetchCollection` returns true", ->
      spyOn(@worker, '_fetchCollectionPage')
      spyOn(@worker, '_shouldFetchCollection').andCallFake (collection) =>
        return collection.model in ['threads', 'labels', 'drafts']
      @worker._resume()
      advanceClock()
      advanceClock()
      expect(@worker._fetchCollectionPage.calls.map (call) -> call.args[0]).toEqual(['threads', 'labels', 'drafts'])

    it "should be called when Actions.retryDeltaConnection is received", ->
      spyOn(DeltaStreamingConnection.prototype, 'latestCursor').andReturn Promise.resolve()

      # TODO why do we need to call through?
      spyOn(@worker, '_resume').andCallThrough()
      Actions.retryDeltaConnection()
      expect(@worker._resume).toHaveBeenCalled()

  describe "_shouldFetchCollection", ->
    it "should return false if the collection sync is already in progress", ->
      @worker._state.threads = {
        'busy': true
        'complete': false
      }
      expect(@worker._shouldFetchCollection({model: 'threads'})).toBe(false)

    it "should return false if the collection sync is already complete", ->
      @worker._state.threads = {
        'busy': false
        'complete': true
      }
      expect(@worker._shouldFetchCollection({model: 'threads'})).toBe(false)

    it "should return true otherwise", ->
      @worker._state.threads = {
        'busy': false
        'complete': false
      }
      expect(@worker._shouldFetchCollection({model: 'threads'})).toBe(true)
      @worker._state.threads = undefined
      expect(@worker._shouldFetchCollection({model: 'threads'})).toBe(true)

  describe "_fetchCollection", ->
    beforeEach ->
      @apiRequests = []

    it "should pass any metadata it preloaded", ->
      @worker._state.threads = {
        'busy': false
        'complete': false
      }
      @worker._fetchCollection({model: 'threads'})
      expect(@apiRequests[0].model).toBe('threads')
      expect(@apiRequests[0].requestOptions.metadataToAttach).toBe(@worker._metadata)

    describe "when there is no request history (`lastRequestRange`)", ->
      it "should start the first request for models", ->
        @worker._state.threads = {
          'busy': false
          'complete': false
        }
        @worker._fetchCollection({model: 'threads'})
        expect(@apiRequests[0].model).toBe('threads')
        expect(@apiRequests[0].params.offset).toBe(0)

    describe "when it was previously trying to fetch a page (`lastRequestRange`)", ->
      beforeEach ->
        @worker._state.threads =
          'count': 1200
          'fetched': 100
          'busy': false
          'complete': false
          'error': new Error("Something bad")
          'lastRequestRange':
            offset: 100
            limit: 50

it "should start paginating from the request that was interrupted", ->
|
2016-12-02 03:00:20 +08:00
|
|
|
@worker._fetchCollection({model: 'threads'})
|
2015-10-09 10:02:54 +08:00
|
|
|
expect(@apiRequests[0].model).toBe('threads')
|
|
|
|
expect(@apiRequests[0].params.offset).toBe(100)
|
|
|
|
expect(@apiRequests[0].params.limit).toBe(50)
|
|
|
|
|
|
|
|
it "should not reset the `count`, `fetched` or start fetching the count", ->
|
2016-12-02 03:00:20 +08:00
|
|
|
@worker._fetchCollection({model: 'threads'})
|
2015-10-09 10:02:54 +08:00
|
|
|
expect(@worker._state.threads.fetched).toBe(100)
|
|
|
|
expect(@worker._state.threads.count).toBe(1200)
|
|
|
|
expect(@apiRequests.length).toBe(1)
|
2015-05-20 06:59:37 +08:00
|
|
|
|
    describe 'when maxFetchCount option is specified', ->
      it "should only fetch maxFetchCount on the first request if it is less than initialPageSize", ->
        @worker._state.messages =
          count: 1000
          fetched: 0
        @worker._fetchCollection({model: 'messages', initialPageSize: 30, maxFetchCount: 25})
        expect(@apiRequests[0].params.offset).toBe 0
        expect(@apiRequests[0].params.limit).toBe 25

      it "should only fetch the maxFetchCount when restoring from saved state", ->
        @worker._state.messages =
          count: 1000
          fetched: 470
          lastRequestRange: {
            limit: 50
            offset: 470
          }
        @worker._fetchCollection({model: 'messages', maxFetchCount: 500})
        expect(@apiRequests[0].params.offset).toBe 470
        expect(@apiRequests[0].params.limit).toBe 30
  describe "_fetchCollectionPage", ->
    beforeEach ->
      @apiRequests = []

    describe 'when maxFetchCount option is specified', ->
      it 'should not fetch next page if maxFetchCount has been reached', ->
        @worker._state.messages =
          count: 1000
          fetched: 470
        @worker._fetchCollectionPage('messages', {limit: 30, offset: 470}, {maxFetchCount: 500})
        {success} = @apiRequests[0].requestOptions
        success({length: 30})
        expect(@worker._state.messages.fetched).toBe 500
        advanceClock(2000); advanceClock()
        expect(@apiRequests.length).toBe 1
      it 'should limit by maxFetchCount when requesting the next page', ->
        @worker._state.messages =
          count: 1000
          fetched: 450
        @worker._fetchCollectionPage('messages', {limit: 30, offset: 450}, {maxFetchCount: 500})
        {success} = @apiRequests[0].requestOptions
        success({length: 30})
        expect(@worker._state.messages.fetched).toBe 480
        advanceClock(2000); advanceClock()
        expect(@apiRequests[1].params.offset).toBe 480
        expect(@apiRequests[1].params.limit).toBe 20
  describe "when an API request completes", ->
    beforeEach ->
      @worker.start()
      advanceClock()
      @request = @apiRequests[0]
      @apiRequests = []
    describe "successfully, with models", ->
      it "should start out by requesting a small number of items", ->
        expect(@request.params.limit).toBe DeltaStreamingConnection.INITIAL_PAGE_SIZE
      it "should request the next page", ->
        pageSize = @request.params.limit
        models = []
        models.push(new Thread) for i in [0..(pageSize-1)]
        @request.requestOptions.success(models)
        advanceClock(2000); advanceClock()
        expect(@apiRequests.length).toBe(1)
        expect(@apiRequests[0].params.offset).toEqual @request.params.offset + pageSize
      it "should increase the limit on the next page load by 50%", ->
        pageSize = @request.params.limit
        models = []
        models.push(new Thread) for i in [0..(pageSize-1)]
        @request.requestOptions.success(models)
        advanceClock(2000); advanceClock()
        expect(@apiRequests.length).toBe(1)
        expect(@apiRequests[0].params.limit).toEqual pageSize * 1.5
      it "never requests more than MAX_PAGE_SIZE", ->
        pageSize = @request.params.limit = DeltaStreamingConnection.MAX_PAGE_SIZE
        models = []
        models.push(new Thread) for i in [0..(pageSize-1)]
        @request.requestOptions.success(models)
        advanceClock(2000); advanceClock()
        expect(@apiRequests.length).toBe(1)
        expect(@apiRequests[0].params.limit).toEqual DeltaStreamingConnection.MAX_PAGE_SIZE
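      # A rough sketch of the paging backoff the specs above exercise: each
      # full page grows the next request's limit by 50%, capped at
      # MAX_PAGE_SIZE. (`nextPageSize` is a hypothetical helper for
      # illustration only, not part of DeltaStreamingConnection's API.)
      #
      #   nextPageSize = (current) ->
      #     Math.min(Math.round(current * 1.5), DeltaStreamingConnection.MAX_PAGE_SIZE)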
      it "should update the fetched count on the collection", ->
        expect(@worker._state.threads.fetched).toEqual(0)
        pageSize = @request.params.limit
        models = []
        models.push(new Thread) for i in [0..(pageSize-1)]
        @request.requestOptions.success(models)
        expect(@worker._state.threads.fetched).toEqual(pageSize)
    describe "successfully, with fewer models than requested", ->
      beforeEach ->
        models = []
        models.push(new Thread) for i in [0..100]
        @request.requestOptions.success(models)

      it "should not request another page", ->
        expect(@apiRequests.length).toBe(0)

      it "should update the state to complete", ->
        expect(@worker._state.threads.busy).toEqual(false)
        expect(@worker._state.threads.complete).toEqual(true)

      it "should update the fetched count on the collection", ->
        expect(@worker._state.threads.fetched).toEqual(101)
    describe "successfully, with no models", ->
      it "should not request another page", ->
        @request.requestOptions.success([])
        expect(@apiRequests.length).toBe(0)

      it "should update the state to complete", ->
        @request.requestOptions.success([])
        expect(@worker._state.threads.busy).toEqual(false)
        expect(@worker._state.threads.complete).toEqual(true)
    describe "with an error", ->
      it "should log the error to the state, along with the range that failed", ->
        err = new Error("Oh no a network error")
        @request.requestOptions.error(err)
        expect(@worker._state.threads.busy).toEqual(false)
        expect(@worker._state.threads.complete).toEqual(false)
        expect(@worker._state.threads.error).toEqual(err.toString())
        expect(@worker._state.threads.lastRequestRange).toEqual({offset: 0, limit: 30})

      it "should not request another page", ->
        @request.requestOptions.error(new Error("Oh no a network error"))
        expect(@apiRequests.length).toBe(0)
    describe "succeeds after a previous error", ->
      beforeEach ->
        @worker._state.threads.error = new Error("Something bad happened")
        @worker._state.threads.lastRequestRange = {limit: 10, offset: 10}
        @request.requestOptions.success([])
        advanceClock(1)

      it "should clear any previous error and update lastRequestRange", ->
        expect(@worker._state.threads.error).toEqual(null)
        expect(@worker._state.threads.lastRequestRange).toEqual({offset: 0, limit: 30})
  describe "cleanup", ->
    it "should terminate the delta connection", ->
      spyOn(@deltaStreams.localSync, 'end')
      spyOn(@deltaStreams.n1Cloud, 'end')
      @worker.cleanup()
      expect(@deltaStreams.localSync.end).toHaveBeenCalled()
      expect(@deltaStreams.n1Cloud.end).toHaveBeenCalled()

    it "should stop trying to restart failed collection syncs", ->
      spyOn(console, 'log')
      spyOn(@worker, '_resume').andCallThrough()
      @worker.cleanup()
      advanceClock(50000); advanceClock()
      expect(@worker._resume.callCount).toBe(0)