fix(queue): emit retrying Queue Local Event instead of failed #186
hughsw wants to merge 5 commits into bee-queue:master
Conversation
Apologies for the commit churn -- it's taking a little while to get my build/test configuration running in a Docker container...
skeggse left a comment:
This makes sense to me, in a vacuum. I think it may break existing code that has come to expect jobs that fail and then retry to emit the failed event. What do you think about emitting both retrying and failed when a job fails and retries?
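For illustration, here is a rough sketch of what a worker-side consumer would see under the "emit both" proposal; the queue name and log messages are placeholders, not code from this PR.

```js
const Queue = require('bee-queue');

// Worker-side queue: Queue Local events fire in the process that calls process().
const queue = new Queue('example');

queue.on('retrying', (job, err) => {
  // Fires when a failed job still has retries left.
  console.log(`job ${job.id} failed with "${err.message}" but will be retried`);
});

queue.on('failed', (job, err) => {
  // Under the "emit both" proposal this would also fire for every failure,
  // including failures that are about to be retried.
  console.log(`job ${job.id} failed with "${err.message}"`);
});
```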
I have undone the destructuring syntax and fixed the test (grrr, so I thought... tests are failing due to a timeout; I still have a lot to learn about bee-queue and bee-queue testing...).

The question about emitting both 'failed' and 'retrying' is interesting. My gut reaction is that emitting two events is more surprising behavior than what the docs and the names of the events imply will happen. But two events wouldn't contradict what the docs say, and the 'retrying' docs even mention that the job being retried has just failed. That's just being explicit, though -- 'retrying' means the job failed. Emitting only one of 'retrying' or 'failed' for the queue when a job fails seems less surprising. It's what the PubSub events do. It's what the Job events do. The 'retrying' events let you know the job is still viable; 'succeeded' or 'failed' are the final word on the job. Otherwise you have to actively dig around in implementation-specific details to determine the job's final status...
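For comparison, a minimal sketch of the Job events and Queue PubSub events referred to above, assuming the standard bee-queue API; the queue name and log messages are illustrative only.

```js
const Queue = require('bee-queue');
const queue = new Queue('example');

// Job events: at most one of 'retrying' or 'failed' is the job's last word.
queue.createJob({work: 'something'}).retries(2).save().then((job) => {
  job.on('retrying', (err) => console.log(`job ${job.id} will be retried: ${err.message}`));
  job.on('failed', (err) => console.log(`job ${job.id} is out of retries: ${err.message}`));
  job.on('succeeded', (result) => console.log(`job ${job.id} succeeded:`, result));
});

// Queue PubSub events follow the same pattern, keyed by job id.
queue.on('job retrying', (jobId, err) => console.log(`${jobId} retrying: ${err.message}`));
queue.on('job failed', (jobId, err) => console.log(`${jobId} failed for good: ${err.message}`));
queue.on('job succeeded', (jobId, result) => console.log(`${jobId} succeeded:`, result));
```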
Indeed, the reason I filed issue #184 was that Queue Local events are a simple way to manage the job lifecycle without the complexity or overhead of the PubSub events or the Job callbacks. It would be very surprising for Queue Local callbacks to have different semantics than the other two sets, leaving no simple way to know whether a job has really failed yet...
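As a hypothetical example of that kind of lifecycle bookkeeping (not code from this PR; the queue name, handler, and outcome map are placeholders):

```js
const Queue = require('bee-queue');
const queue = new Queue('example');
const outcomes = new Map(); // jobId -> final outcome

queue.process(async (job) => job.data); // trivial handler for illustration

// If Queue Local events carry the same semantics as the Job and PubSub events,
// 'succeeded' and 'failed' can be treated as each job's final word.
queue.on('succeeded', (job, result) => outcomes.set(job.id, { ok: true, result }));
queue.on('failed', (job, err) => outcomes.set(job.id, { ok: false, err }));
queue.on('retrying', (job, err) => console.log(`job ${job.id} retrying after: ${err.message}`));
```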
I agree with everything you're saying, and I don't think we're in a position to ship a new major revision of the library. Shipping the change that makes... Thanks also for pointing out the documentation/confusion problems here -- I think we can help with that somewhat with more documentation, but I don't see an optimal/foolproof solution here.
I agree. Here's the simplest path forward that I can see:
- This fixes the problem reported in #184.
- This fix could conceivably break code that has worked around the problem, e.g. by looking at `job.options.retries` or `job.status` in their queue `failed` handler (the kind of workaround sketched below)...
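For context, the workaround being referred to might look roughly like the sketch below; whether `retries > 0` or `status === 'retrying'` is the right check depends on bee-queue internals, so treat the conditions as assumptions rather than a recipe.

```js
const Queue = require('bee-queue');
const queue = new Queue('example');

// Before this fix, the Queue Local 'failed' event fires even when the job
// will be retried, so some handlers inspect the job to tell the cases apart.
queue.on('failed', (job, err) => {
  const willRetry = job.options.retries > 0 || job.status === 'retrying'; // assumed checks
  if (willRetry) {
    console.log(`job ${job.id} failed but will be retried: ${err.message}`);
    return;
  }
  console.log(`job ${job.id} permanently failed: ${err.message}`);
});
```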