Tuesday, 15 April 2014

ruby - Rails consumption of external API requiring staggered consumption




I am using an external service to perform a search in my application.

The results of the search need to be collected from multiple partners and take between 10 and 90 seconds to complete. While the results are being collected, I repeatedly poll the search session to collect the results that have been prepared so far.

As soon as I have new results, I push them to the client via SSE.

I am polling every 5 seconds or so.

How should I run this process without absolutely nuking one of my threads for 90 seconds (running Puma + Nginx)? I need to maintain the controller's state to push SSEs to the requesting client, and I am unsure of the best way of dealing with the delays between polls.
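Roughly, the current controller looks like this simplified sketch (SearchSession is just a stand-in for my wrapper around the external API, not real code):

    class SearchStreamController < ApplicationController
      include ActionController::Live

      def show
        response.headers["Content-Type"] = "text/event-stream"
        sse = SSE.new(response.stream, event: "results")
        search = SearchSession.new(params[:id]) # hypothetical external API client

        # This loop holds one Puma thread for the whole 10-90 seconds.
        until search.complete?
          new_results = search.fetch_new_results
          sse.write(new_results) if new_results.any?
          sleep 5 # poll interval
        end
      ensure
        sse.close if sse
      end
    end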

Much appreciated.

You will have to give up on SSEs if you want to release threads. In order to receive SSEs, the browser maintains a long-living connection to the web server, and in the case of Puma each client connection is handled by a separate thread.

However, if you want polling of partial results, you can use the following strategy:

1. Start the search in a background job, e.g. with Sidekiq.
2. Cache the partial results for each search request in an in-memory store such as Redis.
3. Poll the results from Redis.
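A minimal sketch of that strategy, assuming Sidekiq and the redis gem; PartnerSearch, the key names and the JSON shape are illustrative assumptions, not something from the question:

    require "sidekiq"
    require "redis"
    require "json"

    # Background job that runs the slow partner search and appends partial
    # results to Redis as they arrive.
    class SearchJob
      include Sidekiq::Worker

      def perform(search_id)
        redis = Redis.new
        PartnerSearch.new(search_id).each_batch do |results| # hypothetical client
          redis.rpush("search:#{search_id}:results", results.map(&:to_json))
        end
        redis.set("search:#{search_id}:done", "1")
      end
    end

    # Lightweight endpoint the client polls every few seconds; it only reads
    # from Redis, so no request thread is held for the duration of the search.
    class SearchResultsController < ApplicationController
      def show
        redis = Redis.new
        render json: {
          results: redis.lrange("search:#{params[:id]}:results", 0, -1).map { |r| JSON.parse(r) },
          done:    redis.get("search:#{params[:id]}:done") == "1"
        }
      end
    end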

Another alternative might be moving the messaging problem to an evented server. Evented servers do not spawn a separate thread per connection, no matter how long-lived the connection is.

One such evented server that integrates with Rails is Faye. The procedure would be:

1. The client subscribes to a Faye message channel.
2. The client initiates the search.
3. The search is performed within a background job (Sidekiq).
4. The background job periodically publishes partial results on the same Faye channel.
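A sketch of the publishing side, following the common pattern of POSTing messages to the Faye server's HTTP endpoint; the channel name, the Faye URL and the PartnerSearch client are assumptions:

    require "sidekiq"
    require "net/http"
    require "json"

    class FayeSearchJob
      include Sidekiq::Worker

      FAYE_URL = "http://localhost:9292/faye" # assumed location of the Faye server

      def perform(search_id)
        PartnerSearch.new(search_id).each_batch do |results| # hypothetical client
          publish("/searches/#{search_id}", results: results)
        end
        publish("/searches/#{search_id}", done: true)
      end

      private

      # Faye accepts publications as a plain HTTP POST of a JSON message.
      def publish(channel, data)
        message = { channel: channel, data: data }
        Net::HTTP.post_form(URI(FAYE_URL), message: message.to_json)
      end
    end

The subscribed browser client then renders each batch as it is published, with no polling and no long-lived Rails thread.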

Actually, Puma's multithreaded setup is intended to keep you from having to go through any of this. Increase the number of threads and processes as far as your system allows and see how it performs. Adding more RAM or servers is always cheaper and allows you to focus on features.
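For example, a config/puma.rb along these lines (the numbers are purely illustrative and need tuning against the machine's cores and memory):

    # config/puma.rb
    workers Integer(ENV.fetch("WEB_CONCURRENCY", 4))       # roughly one process per core
    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 16))
    threads max_threads, max_threads

    preload_app!

    port        ENV.fetch("PORT", 3000)
    environment ENV.fetch("RACK_ENV", "production")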

Messaging with Faye

Edit 1: Rethinking the benefit of moving the search into a background job: Sidekiq has its own thread pool, and a Sidekiq thread does not differ from a Puma thread. The search task has to be done anyway, and the threads are suspended most of the time, waiting on IO. So the benefit of the two solutions above is proper resource balancing: they let you define how many threads are used for the search job and how many for the app server. So, how about the following strategy:

1. Deploy the app twice, on the same or on different machines.
2. Configure Nginx to route/load-balance the search queries and SSEs to one app instance.
3. Configure it to serve the rest of the app from the second instance.
4. Not a single thing in the app logic has to change.
5. Profit.

You can then abandon polling completely and stick with SSEs.

ruby-on-rails ruby multithreading performance concurrency
