Tag Archives: ARIMA

How to parallelize with R and Hadoop tonight! Complete ARIMA source-code strategy walkthrough at the online Meetup Oct 23!

Hi there

Join Ram Venkat tonight at 7 PM Eastern Daylight Time to learn how he uses Hadoop and R for parallel processing. This happens tonight via my GoToMeeting online virtual meeting. Login details:

1.  Please join my meeting, Monday, October 15, 2012 at 7:00 PM Eastern Daylight Time.

2.  Use your microphone and speakers (VoIP) – a headset is recommended.  Or, call in using your telephone.

Dial +1 (647) 497-9373
Access Code: 275-963-877
Audio PIN: Shown after joining the meeting

Meeting ID: 275-963-877

Also, another Meetup is slated for North York, Ont., on Monday 10/22 at 7 PM EDT.


Lastly, another Premium Membership Meetup is slated for Tues 10/23 on a complete walkthrough of my ARIMA modelling R script. It includes fast data capture as well as a function for automatic best fit.
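The full script is reserved for premium members, but as a rough idea of what "fast data capture" plus "automatic best fit" looks like in R, quantmod can pull price data and forecast::auto.arima can search ARIMA orders automatically. This is a hypothetical sketch, not the Meetup script itself; the ticker is illustrative:

```r
# Hypothetical sketch only -- not the Meetup script.
library(quantmod)   # fast data capture via getSymbols()
library(forecast)   # auto.arima() for automatic best fit

getSymbols("SPY", from = "2011-01-01")   # pull daily OHLC data
prices <- ts(as.numeric(Cl(SPY)))        # closing prices as a time series

best.fit <- auto.arima(prices)           # searches (p,d,q) orders automatically
summary(best.fit)
forecast(best.fit, h = 5)                # 5-step-ahead forecast
```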

–> Join now to get access to this Oct/23 event! <–

Got a question? Let me know.
Thanks, Bryan


Is this the smartest way to parallelize this ARIMA function within R? Is it Windows-only? Should you use the quantstrat and backtest R packages?

This came from https://stat.ethz.ch/pipermail/r-sig-finance/2011q2/008143.html

I don’t think this is the most intelligent way to parallelize it. Comment and let me know what you think!


The easiest probably would be to use the multicore package (Linux) on
one machine, but if you’re feeling ambitious, there’s also the
possibility of using doSNOW, though there are some small idiosyncrasies
that will leave you (or at least they did me) pulling your hair out
trying to figure out why certain things aren’t working.
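On Linux, the multicore route can be as simple as swapping lapply for mclapply. A hedged sketch (the series here are synthetic stand-ins for real price data; the parallel package now ships the old multicore functionality):

```r
# Linux/Mac only: mclapply() forks the R process, so it won't work on Windows
library(parallel)   # successor to the multicore package; mclapply() lives here
library(forecast)

# synthetic example series; in practice these would be your price data
series.list <- list(a = ts(cumsum(rnorm(200))),
                    b = ts(cumsum(rnorm(200))),
                    c = ts(cumsum(rnorm(200))))

# one auto.arima fit per series, spread across up to 4 cores
fits <- mclapply(series.list, auto.arima, mc.cores = 4)
```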

If you’re on Windows only, another single-box solution would be the
“doSMP” and “foreach” packages that were released by Revolution to CRAN.

Here’s a short example of how I use it on Windows (I have a more
complicated multiple-computer script buried somewhere using doSNOW on …)

library(doSMP)     # Revolution's Windows SMP backend
library(foreach)
library(forecast)  # auto.arima()
library(quantmod)  # Cl()

clust <- startWorkers(4)   # spin up 4 local worker processes
registerDoSMP(clust)       # register them as the %dopar% backend

# assumes SPX, DIA and QQQQ price objects already exist in the workspace
symbols <- c("SPX", "DIA", "QQQQ")

# the function that you want to parallelize, gets exported to each
# "node" -- could insert your backtest code here
parallel.arima <- function(data) {
  tmp <- get(data)
  auto.arima(ts(Cl(tmp)), approximation = TRUE, allowdrift = TRUE)
}

res <- foreach(dat = symbols, .export = symbols) %dopar% parallel.arima(dat)

There’s more info on the r-sig-hpc list regarding some of the finer
details of the packages mentioned above. Standard disclaimer, this
probably isn’t the “best” way to do it but it should give you some idea
of where to start.
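For the multi-machine doSNOW variant alluded to above, the general shape looks something like the following. This is a sketch under assumptions, not the buried script: host names are placeholders, and the series are synthetic.

```r
library(doSNOW)    # attaches snow and foreach
library(forecast)

# a SOCK cluster spanning two boxes; host names are placeholders
cl <- makeCluster(c("localhost", "localhost", "otherbox", "otherbox"),
                  type = "SOCK")
registerDoSNOW(cl)

series.list <- list(ts(cumsum(rnorm(200))), ts(cumsum(rnorm(200))))

# each worker loads forecast itself via .packages
fits <- foreach(x = series.list, .packages = "forecast") %dopar%
  auto.arima(x)

stopCluster(cl)
```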


On 06/24/2011 07:00 AM, benjamin sigel wrote:
> Hi,
>
> I would like to run multiple backtests with R on intraday data, using
> “quantstrat” and “backtest package” and I was wondering what would be the
> most time efficient hardware solution between these two:
>
> – 1 PC: *1 Quad-Core* (Intel® Core™ i5-2300, 2.8 GHz (up to 3.1 GHz with
> Turbo Boost) / 6GB installed DDR3 RAM (1066 MHz) + *16GB maximum RAM capacity*
>
> OR
>
> – *2 PC’s Hooked-up:* 2 Dual-core (Intel® Core™ i3-550 Processor, 3.20 GHz,
> 4 MB Smart Cache, 4GB DDR3 + *maximum expandable memory 16GB* *each*
>
> Many Thanks for your help,
>
> Ben