Tag Archives: Quant Analytics

Quantitative analysis is about to move ahead in Singapore and Asia as a whole

Unlike in the West, Asian banks haven’t hired a whole lot of quants (yet). That seems to be about to change.

Industry watchers are claiming that Asia needs to develop its quantitative finance field or risk being left behind. Left behind in what, exactly?

Well, Asian banks need quant help to roll out new products and move them up the value chain. More sophisticated derivatives, for starters.

And it seems the local schools are stepping up to the challenge:

Professor Lim Kian Guan, professor of finance at Singapore Management University, said: “SMU is starting a master's programme (in quantitative finance). We try to include a curriculum that addresses issues like trading, algorithmic trading, low latency, high frequency trading …”

In other words, North American and European style quant practices look to be taking root in Asia. It looks like Asia is the next great quant frontier, but with such a multitude of countries that have different regulatory authorities, it will be quite a challenge to get everything running smoothly.

To read the original article, go here now:

Asian banks need to hire more quantitative analysts: experts

 

NOTE: I now post my TRADING ALERTS on my personal FACEBOOK ACCOUNT and TWITTER. Don't worry, I don't post stupid cat videos or what I eat!

Quant analytics basic: Decent explanation of lag with this GARCH(1,1) volatility estimation


[Embedded YouTube video: GARCH(1,1) volatility estimation]

Just watch and learn I guess
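If you want to see where the lag shows up in code, here is a minimal base R sketch of the GARCH(1,1) variance recursion. The returns are simulated and the omega/alpha/beta values are assumptions I picked for illustration, not fitted estimates:

# GARCH(1,1): sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
set.seed(42)
r <- rnorm(500, mean = 0, sd = 0.01)          # simulated daily returns
omega <- 1e-6; alpha <- 0.08; beta <- 0.90    # assumed coefficients

sigma2 <- numeric(length(r))
sigma2[1] <- var(r)                           # initialise with the sample variance
for (t in 2:length(r)) {
  # today's variance depends on the lagged squared return and the lagged variance
  sigma2[t] <- omega + alpha * r[t - 1]^2 + beta * sigma2[t - 1]
}

plot(sqrt(sigma2), type = "l", xlab = "t", ylab = "sigma",
     main = "GARCH(1,1) conditional volatility (sketch)")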


Quant analytics basics: Detailed and easy to understand Youtube video on what is linear regression?


As I start breaking down a few R scripts, this linear regression concept can get confusing for those of us who are weaker in math. Here is a very good video:

http://www.youtube.com/watch?v=ocGEhiLwDVc

I find Bionic Turtle really good at breaking down what appears to be a difficult concept; it turns out not to be once you watch this.
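If you prefer to poke at it in R rather than just watch, here is a minimal sketch on simulated data (the intercept of 2 and slope of 3 below are values I made up for illustration):

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100, sd = 0.5)   # true intercept 2, true slope 3

fit <- lm(y ~ x)                        # ordinary least squares fit
summary(fit)                            # coefficients, standard errors, R-squared

plot(x, y)
abline(fit, col = "red")                # overlay the fitted regression line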


Is the best way to parallelize ARIMA in R to use the quantstrat and backtest R packages?

I was wondering about this post:

http://quantlabs.net/r-blog/2012/08/is-the-smartest-way-to-parallelize-this-arima-function-within-r-only-for-windows-use-quantstart-and-backtest-r-packages/

I really don’t want to hack the R packages to parallelize from within those.
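Rather than patch the packages, one rough alternative I have been considering is to fit each symbol's ARIMA in its own worker with the base parallel package (PSOCK clusters also work on Windows). This is only a sketch under assumptions: the symbol names and simulated series below are placeholders for whatever your data capture produces.

library(parallel)

# placeholder per-symbol series; in practice these would come from your data store
series_list <- list(
  SYM1 = arima.sim(list(ar = 0.5), n = 250),
  SYM2 = arima.sim(list(ar = -0.3), n = 250)
)

cl <- makeCluster(detectCores() - 1)          # leave one core free
fits <- parLapply(cl, series_list, function(x) {
  arima(x, order = c(1, 0, 0))                # one ARIMA fit per worker
})
stopCluster(cl)

lapply(fits, coef)                            # inspect the fitted coefficients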

If you have a better suggestion, please let me know by commenting.


Quant analytics market data capture: so what are the general differences between the IQFeed and NxCore products?


Dave Forss: The base rate of our NxCore is $500/month; IQFeed is $65/month. NxCore is an institutional feed used mostly by brokers and large hedge funds that need to update thousands of symbols at a time. IQFeed's base limit is 500 simultaneous symbols. NxCore has years of tick history, while IQFeed has 120 days of tick history.


I use quant analytics: See how over 3,400 readers saw my pick for John Deere thanks to Yahoo Finance and Seeking Alpha


I just did my first posting on SeekingAlpha.com, and it got over 3,400 views in less than a day. Pretty killer, I think. Hopefully, you can start seeing more stock picks coming out of the systems here at QuantLabs.net.

http://seekingalpha.com/article/711101-profitable-waves-on-john-deere-throwing-in-my-lot-with-jim-rogers

Join my membership to see how I do it.


Quant analytics interaction terms: Do we need lower order terms, despite insignificance, when the higher order term is significant? What are the tradeoffs?

For some of the respondent level models I am working on, we are trying to test for different interaction effects between media. I have been working on the premise of testing for significant first order effects, and testing for higher order interaction effects. In some models, I have tested for interactions even if the first order effects were insignificant. My question is: if the x1*x2 term (interaction) is significant, do we need to have both the x1 and x2 terms in the model irrespective of their significance? In the case when both x1 and x2 are insignificant but x1*x2 is, what is the best way to specify the model? My feeling is that leaving out the first order effects (if they are insignificant) will give us biased coefficients for the interaction terms. Any thoughts?

==

 

You are exactly right that if there is ANY first order effect, it will bias your estimates of interaction. (My mantra: Even if an effect is not stat. significant in a particular test and data set, it may still be large enough to be important.) So keep first-order effects.
The only exception I’m aware of is when prior theory tells you that a particular first order effect is zero. For example, surface area is equal to height x width x a constant. In that situation, height alone, and width alone, should be left out. But it’s rare to have such a clear model.

 

==

I think that bias is what we have seen when we left out the first order terms for being insignificant while keeping the interaction terms in. So, with interaction terms, it is a tradeoff between bias in the estimates (due to insignificant variables in the model) and variance of the estimates (due to correlated terms coming into the model), right?

 

==

As more of a data miner than a statistician, I would ask which model performs better on a held-back sample of the data; that is the model with the least bias. Interpretability is another story: it is harder to say what models with both first and second order terms are telling you. But I also like to augment with a decision tree for insights into interactions between variables.

 

==

Significance is not the important thing in this case (maybe not in any case). It is very rare that models with interactions but without the constituent main effects make sense. Here is an example of such a model, where b12 is the coefficient on the IV1*IV2 interaction:

Case 1: IV1 = 10, IV2 = 10 -> predicted = b0 + 100*b12
Case 2: IV1 = 10, IV2 = 0 -> predicted = b0
Case 3: IV1 = 0, IV2 = 10 -> predicted = b0
Case 4: IV1 = 0, IV2 = 0 -> predicted = b0

So you can see that if either IV is 0, then with only the interaction term the predicted value is the same (by force, regardless of the data). This rarely makes sense.

You can plug in other numbers and see what the model is forcing to happen.
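To make the tradeoff concrete, here is a small R sketch on simulated data (the variable names x1/x2 and the coefficient values are made up for illustration) comparing the full model with an interaction-only model. Dropping the main effects tends to distort the interaction estimate, which is the bias discussed above:

set.seed(7)
n  <- 200
x1 <- rnorm(n, mean = 2)
x2 <- rnorm(n, mean = 2)
y  <- 1 + 0.5 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + rnorm(n)   # true interaction = 1.5

full_fit <- lm(y ~ x1 * x2)     # expands to x1 + x2 + x1:x2
int_only <- lm(y ~ x1:x2)       # interaction term only, main effects dropped

summary(full_fit)$coefficients  # interaction estimate close to 1.5
summary(int_only)$coefficients  # interaction estimate biased away from 1.5
anova(int_only, full_fit)       # nested-model comparison of the two fits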

 


STDFIN announcement for an open source quant analytics library in C++

* Group: Financial Engineering Group
* Subject: Announcement from Financial Engineering Group

Dear fellow Financial Engineer,

We are proud to present STDFIN, a new open-source initiative for building modern, clean, fast C++ libraries for the financial industry.

We just started last week, and we hope you are interested in helping us shape this project and contribute some of your great ideas.

How can you help?
* Join the LinkedIn group "stdfin: Standard Financial C++ Libraries": http://www.linkedin.com/groups?gid=4490084
* Brainstorm on the forums of the project website at http://stdfin.org/


Need new end of day feeds for quant analytics and my high frequency trading platform


Hi,
I am having some problems with my current end of day data feeds for NYSE, NASDAQ, AMEX and OTCBB. My options are fine. I may have to replace them shortly unless they can repair a few problems, specifically new issue handling and too many missing quotes.
Are there any recommendations for vendors that could be shared?
Regards

 

==

Eoddata.com is what I use.

 
