_ = require 'underscore'
# Public: To make specs easier to test, we make all asynchronous behavior
# actually synchronous. We do this by overriding all global timeout and
# Promise functions.
#
# You must now manually call `advanceClock()` in order to move the "clock"
# forward.
class TimeOverride

  @advanceClock = (delta=1) =>
    @now += delta
    callbacks = []

    @timeouts ?= []
    @timeouts = @timeouts.filter ([id, strikeTime, callback]) =>
      if strikeTime <= @now
        callbacks.push(callback)
        false
      else
        true

    callback() for callback in callbacks

  @resetTime = =>
    @now = 0
    @timeoutCount = 0
    @intervalCount = 0
    @timeouts = []
    @intervalTimeouts = {}
    @originalPromiseScheduler = null

  @enableSpies = =>
    window.advanceClock = @advanceClock

    window.originalSetInterval = window.setInterval
    spyOn(window, "setTimeout").andCallFake @_fakeSetTimeout
    spyOn(window, "clearTimeout").andCallFake @_fakeClearTimeout
    spyOn(window, "setInterval").andCallFake @_fakeSetInterval
    spyOn(window, "clearInterval").andCallFake @_fakeClearInterval
    spyOn(_._, "now").andCallFake => @now

    # spyOn(Date, "now").andCallFake => @now
    # spyOn(Date.prototype, "getTime").andCallFake => @now

    @_setPromiseScheduler()

  @_setPromiseScheduler: =>
    @originalPromiseScheduler ?= Promise.setScheduler (fn) =>
fix(spec): fix asynchronous specs
Summary:
I believe I discovered why our tests intermittently fail, why those
failures cause a cascade of failures, and how to fix it.
The bug is subtle and it's helpful to get a quick refresher on how various
parts of our testing system work:
First a deeper dive into how Promises work:
1. Upon creating a new Promise, the "executor" block (the one that gets
passed `resolve` and `reject`) is run synchronously.
2. You eventually call `resolve()`.
3. The minute you call `resolve()`, Bluebird's `async.js` queues up
whatever's downstream (the next `then` block).
4. The queue gets processed on every "tick".
5. Once the "tick" happens, the queue is processed and downstream `then`
blocks run.
6. If any more Promises come in before the "tick", they get added to the
queue. Nothing gets processed until the "tick" happens.
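These steps can be seen with plain JavaScript promises, which use the microtask queue as their "tick" (Bluebird's tick is pluggable, and defaults to `setImmediate` in Node-like environments):

```javascript
// Step 1: the executor runs synchronously during construction.
// Steps 3-5: the then-block only runs once the "tick" (here, the
// microtask queue) is processed, after the current synchronous run.
const order = [];
const p = new Promise((resolve) => {
  order.push('executor'); // runs immediately
  resolve();              // queues downstream then-blocks for the next tick
});
p.then(() => order.push('then'));
order.push('after-construction');

queueMicrotask(() => {
  // By now the tick has happened and the then-block has run.
  console.log(order); // ['executor', 'after-construction', 'then']
});
```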
The important takeaway here is this "tick" in step 4. This "tick" is the
core of what makes Promises asynchronous. By default, Bluebird in our
Node-like environment uses `setImmediate` as the underlying
implementation.
Our test environment is different: we do NOT use `setImmediate` there.
We use Bluebird's `Promise.setScheduler` API to implement our own "tick". This
gives us much greater control over when Promises advance. Node's `setImmediate`
puts you at the whim of the underlying event loop.
Before today, our test "tick" implementation used `setTimeout` and
`advanceClock` triggered by `process.nextTick`.
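As a rough sketch of what a pluggable scheduler means (toy code, not Bluebird's actual internals; all names are illustrative):

```javascript
// A minimal promise-callback queue with a swappable "tick" implementation.
let queue = [];
let scheduler = (drain) => setTimeout(drain, 0); // default async "tick"

function enqueue(callback) {
  queue.push(callback);
  if (queue.length === 1) scheduler(drainQueue); // one tick per batch
}

function drainQueue() {
  const jobs = queue;
  queue = [];
  for (const job of jobs) job();
}

// Swapping in a synchronous scheduler removes the asynchrony entirely:
scheduler = (drain) => drain();
let ran = false;
enqueue(() => { ran = true; });
console.log(ran); // true -- the callback ran during enqueue()
```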
Let me also quickly explain `setTimeout` in our test environment.
We globally override `setTimeout` in our test environment to not be based on
actual time at all. There are places in our code where we wait for several
hundred milliseconds, or have timeouts in place. Instead of "sleeping" some amount
of time and hoping for the best, we gain absolute control over "time". "Time"
is just an integer value that only moves forward when you manually call
`advanceClock()` and pass it a certain number of "milliseconds".
At the beginning of each test, we reset time back to zero, we record
setTimeouts that come in, and as advanceClock gets called, we know if we need
to run any of the setTimeout callbacks.
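A minimal JavaScript sketch of that stubbed-`setTimeout` technique (the class and method names here are illustrative, not the real spec-helper API):

```javascript
// "Time" is just an integer; callbacks fire only when advanceClock moves
// `now` past their strike time -- no real sleeping involved.
class FakeClock {
  constructor() { this.now = 0; this.count = 0; this.timeouts = []; }

  setTimeout(callback, ms) {
    const id = ++this.count;
    this.timeouts.push({ id, strikeTime: this.now + ms, callback });
    return id;
  }

  advanceClock(delta = 1) {
    this.now += delta;
    const due = this.timeouts.filter((t) => t.strikeTime <= this.now);
    this.timeouts = this.timeouts.filter((t) => t.strikeTime > this.now);
    due.forEach((t) => t.callback());
  }
}

const clock = new FakeClock();
let fired = false;
clock.setTimeout(() => { fired = true; }, 100);
clock.advanceClock(99);  // fired is still false: now === 99 < 100
clock.advanceClock(1);   // strike time reached: the callback runs
console.log(fired); // true
```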
Back to the Promise "tick" implementation. Before today, our testing "tick"
implementation relied on our stubbed `setTimeout` and our control of time.
This almost always works fine. Unfortunately tests would sometimes
intermittently fail, and furthermore cause a cascade of failures down the road.
We've been plagued with this for as long as I can remember. I think I finally
found how all of this comes together to cause these intermittent failures and
how to fix it.
The issue arises from a test like the one in query-subscription-pool-spec. We
have tests here (and subtly in other yet unknown places) that go
something like this:
```
it("TEST A", () => {
  Foo.add(new Thing())
  expect(Foo._things.length).toBe(1)
})

it("TEST B", () => {
  expect(true).toBe(true)
})
```
At the surface this test looks straightforward. The problem is that
`Foo.add` may down the line call something like `trigger()`, which may
have listeners setup, which might try and asynchronously do all kinds of
things (like read the fs or read/write to the database).
The biggest issue with this is that the test 'finishes' immediately after
that `expect` block and immediately moves onto the next test. If `Foo.add`
is asynchronous, by the time whatever downstream effects of `Foo.add` take
place we may be in a completely different test. Furthermore, if those
downstream functions error, those errors will be raised, but Jasmine will
catch them in the wrong test, sending you down a rabbit hole of despair.
It gets worse.
At the start of each test, we reset our `setTimeout` stub time back to
zero. This is problematic when combined with the last issue.
Suppose `Foo.add` ends up queuing a downstream Promise. Before today, that
downstream Promise used `setTimeout(0)` to trigger the `then` block.
Suppose TEST A finishes before `process.nextTick` in our custom scheduler
can call `advanceClock` and run the downstream Promise.
Once Test B starts, it will reset our `setTimeout` stub time back to zero.
`process.nextTick` comes back after Test B has started and calls
`advanceClock` like it's supposed to.
Unfortunately, because our stub time has been reset, advanceClock will NOT
find the original callback function that would have resolved `Foo.add`'s
downstream Promise!!!
This means that Bluebird is now stuck waiting for a "tick" that will never
come.
Since Bluebird thinks it's waiting for a "tick", all future Promises will
get queued, but never called (see Step 6 of the Promise description
above).
This is why once one test fails, downstream ones never complete and
Jasmine times out.
The failure is intermittent because `process.nextTick` is racing against a
test finishing, the next one starting, and how many and how far downstream
promises are set up.
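The lost-callback scenario, condensed into a self-contained sketch (variable and function names are illustrative):

```javascript
// A pared-down fake clock: enough to show how resetting time between
// tests throws away a pending promise-resolving callback.
let now = 0;
let timeouts = [];

const fakeSetTimeout = (callback, ms) =>
  timeouts.push({ strikeTime: now + ms, callback });
const resetTime = () => { now = 0; timeouts = []; }; // runs before each test
const advanceClock = (delta) => {
  now += delta;
  const due = timeouts.filter((t) => t.strikeTime <= now);
  timeouts = timeouts.filter((t) => t.strikeTime > now);
  due.forEach((t) => t.callback());
};

let resolved = false;
// TEST A: a downstream promise schedules its "tick" via the stubbed setTimeout
fakeSetTimeout(() => { resolved = true; }, 0);
// TEST B starts before process.nextTick could call advanceClock...
resetTime();
// ...and when advanceClock finally runs, the callback is gone for good:
advanceClock(1);
console.log(resolved); // false -- the tick that would resolve the promise never fires
```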
Okay. So how do we fix this? First I tried to simply not reset the time back to
zero again in our stubbed time-override. This doesn't work because it simply
exposes the disastrous consequences of downstream Promises resolving after a
test has completed. When a test completes we cleanup objects, unmount React
components. Those downstream promises and timeouts come back and throw all
kinds of errors like: "can't read property x of undefined" and "can't find a
match for component X".
The fix that works the best is to simply MAKE PROMISES FULLY SYNCHRONOUS.
Now if you look at our custom Promise Test Scheduler in time-override,
you'll see that it immediately and synchronously calls the function. This
means that all downstream promises will run BEFORE the test ends.
Note that this is designed as a safeguard. The best way to make a more
robust test is to declare that your function body is asynchronous. If you
call a method that has downstream effects, it's your responsibility to
wait for them to finish. I would consider the test example above a very,
very subtle bug. Unfortunately it's so subtle that it's unreasonable to
expect that we'll always catch them. Making everything in our testing
environment synchronous ensures that test setup and cleanup happen when we
intuitively expect them to.
Addendum:
The full Promise call chain looks something like this:
-> `Promise::_resolveCallback`
-> `Promise::_fulfill`
-> `Promise::_async.settlePromises`
-> `AsyncSettlePromises`
-> `Async::_queueTick`
-> CHECK `Async::_isTickUsed`
-> `Async::_schedule`
-> `TimeOverride.scheduler`
-> `setTimeout`
-> `process.nextTick`
-> `advanceClock`
-> `Async::_drainQueues`
-> `Async::_drainQueue`
-> `THEN BLOCK RUNS`
-> `Maybe more things get added to queue`
-> `Async::_reset`
-> `Async::_queueTick` works again.
Test Plan: They now work
Reviewers: halla, mark, spang, juan, jackie
Reviewed By: juan, jackie
Differential Revision: https://phab.nylas.com/D3538
2016-12-21 08:47:26 +08:00
      fn()

  @disableSpies = =>
    window.advanceClock = null

    jasmine.unspy(window, 'setTimeout')
    jasmine.unspy(window, 'clearTimeout')
    jasmine.unspy(window, 'setInterval')
    jasmine.unspy(window, 'clearInterval')

    jasmine.unspy(_._, "now")

    Promise.setScheduler(@originalPromiseScheduler) if @originalPromiseScheduler
    @originalPromiseScheduler = null

  @resetSpyData = ->
    window.setTimeout.reset?()
    window.clearTimeout.reset?()
    window.setInterval.reset?()
    window.clearInterval.reset?()
    Date.now.reset?()
    Date.prototype.getTime.reset?()

  @_fakeSetTimeout = (callback, ms) =>
    id = ++@timeoutCount
    @timeouts.push([id, @now + ms, callback])
    id

  @_fakeClearTimeout = (idToClear) =>
    @timeouts ?= []
    @timeouts = @timeouts.filter ([id]) -> id != idToClear

  @_fakeSetInterval = (callback, ms) =>
    id = ++@intervalCount
    action = =>
      callback()
      # Re-arm: intervals are modeled as self-rescheduling timeouts
      @intervalTimeouts[id] = @_fakeSetTimeout(action, ms)
    @intervalTimeouts[id] = @_fakeSetTimeout(action, ms)
    id

  @_fakeClearInterval = (idToClear) =>
    @_fakeClearTimeout(@intervalTimeouts[idToClear])

module.exports = TimeOverride