Ranking views using entropy pooling in quant analytics



Hi everybody,

I wanted to ask you about ranking views on expectations (as in http://symmys.com/node/158), because I encountered an unexpected problem. I apply entropy pooling to a portfolio of N stocks, using only the simple returns as risk factors:

– View specified as a full ranking ( E(r1) > E(r2) > … > E(rN) ); a sketch of how I encode this as constraints follows below
– Entropy pooling to get the probabilities used to compute the objective (E(ri) for all i in (1, 2, …, N)) and the constraints (CVaR-controlled allocation)
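
For concreteness, here is a minimal sketch of how I encode the ranking view as linear inequality constraints on the scenario probabilities p (the panel R, the sizes and the dummy numbers are purely illustrative, not taken from the reference code):

# R is a T x N panel of return scenarios; under probabilities p,
# E(r_k) = R[:, k] @ p, so the ranking E(r_k) >= E(r_{k+1}) becomes
# (R[:, k+1] - R[:, k]) @ p <= 0 for every k.
import numpy as np

T, N = 1000, 5                                  # illustrative sizes
rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.01, size=(T, N))          # dummy return scenarios

A_ineq = np.column_stack([R[:, k + 1] - R[:, k] for k in range(N - 1)]).T
b_ineq = np.zeros(N - 1)                        # view: A_ineq @ p <= b_ineq

# Together with sum(p) = 1 and p >= 0, these constraints are fed to the
# relative-entropy minimization that produces the posterior probabilities.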

Unfortunately, the numerical minimization always gives expectations like E(r1) = E(r2) = … = E(rN) (the constraints always “hit the barrier”). When I set a view like E(r1) > E(r2) and leave the other assets free, I also get E(r1) = E(r2).

Thus, I went back to the simplest case of two assets, two data points, and two views (probabilities sum to one, plus the ranking view between the assets), and I found E(r1) = E(r2) analytically.

I was then able to derive an analytical formula for the dual formulation of a symmetrized version of the KL divergence, but the results (both numerical and analytical) are exactly the same as with entropy pooling.

Did you also notice this problem? Does anyone have a solution?

 

==

The issue you encountered depends on how the ranking view relates to the prior distribution of the returns.
Let us consider the bivariate case. If your prior is already such that E(r1) > E(r2), and your view is in the same direction, i.e. E(r1) >= E(r2), then the posterior will equal the prior, and the view will be satisfied as a strict inequality E(r1) > E(r2) in the posterior.
If, on the other hand, with the above prior your view is E(r1) <= E(r2), and thus it contradicts the prior, then the posterior will feature E(r1) = E(r2), which is the closest solution to the prior that satisfies the view.
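
A quick numerical illustration of this behaviour (a toy scipy-based sketch with made-up numbers, not the reference entropy-pooling implementation): the prior sample has E(r1) > E(r2), the view asks for E(r1) <= E(r2), and the posterior means end up equal.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 200
# prior sample in which E(r1) > E(r2)
R = np.column_stack([rng.normal(0.02, 0.05, T),     # asset 1
                     rng.normal(0.00, 0.05, T)])    # asset 2
q = np.full(T, 1.0 / T)                             # uniform prior probabilities

def rel_entropy(p):
    # relative entropy of p with respect to the uniform prior q
    return np.sum(p * (np.log(p) - np.log(q)))

constraints = [
    {"type": "eq",   "fun": lambda p: np.sum(p) - 1.0},
    # view E(r1) <= E(r2), i.e. (R[:, 0] - R[:, 1]) @ p <= 0
    {"type": "ineq", "fun": lambda p: -(R[:, 0] - R[:, 1]) @ p},
]
p = minimize(rel_entropy, q, method="SLSQP",
             bounds=[(1e-12, 1.0)] * T, constraints=constraints).x

print(R[:, 0] @ p, R[:, 1] @ p)    # the two posterior means coincide

If the view is instead in the same direction as the prior (E(r1) >= E(r2)), the minimizer returns p = q and the prior is left untouched, as described above.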

 

==

 

I have another conceptual question about entropy pooling.

Suppose you have T observations of two risk factors X and Y (that is, (X_i, Y_i) for i in [1, 2, …, T]) which are completely independent (take the dummy case of two independent Gaussians, for example). Their prior (standard) expectation estimators are E(X) = 1/T * sum(X_i) = x and E(Y) = 1/T * sum(Y_i) = y.

Let’s now assume we want to implement a view on X such that E*(X) = x*, that is, we get a new set of probabilities p_i (i in [1, 2, …, T]) which differ (even if only very slightly) from 1/T. If we then use those new probabilities for Y too, as seems to be the case in your case studies, we obtain E*(Y) = y*, different from E(Y) = y.

How is that possible, given that the two risk factors are assumed to be independent? Is there a way to take the dependence structure between the risk factors into account when applying the pooled probabilities within this framework?

==

 

In the Gaussian case, you could use the analytic formula. If you apply the analytic formula just to x (i.e. you apply it univariately), then it reduces to just whatever your view is, which means it will not impact y.

For the full algorithm, the reason you don’t get the right answer is that by treating them separately you’re not letting the optimizer know that there’s no correlation between the two assets. The EP algorithm minimizes the difference between two distributions; if it doesn’t know what one of the distributions is, then how can it be expected to minimize anything?

All you have to do is gather x and y (and z, etc.) into a single matrix before applying the algorithm.
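
For example (again a toy scipy-based sketch with made-up numbers rather than the reference code): stack the X and Y scenarios column-wise, impose the view on the first column only, and look at the reweighted mean of the second column.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 200
panel = np.column_stack([rng.normal(0.0, 1.0, T),   # X scenarios
                         rng.normal(0.0, 1.0, T)])  # Y scenarios, independent of X
q = np.full(T, 1.0 / T)                             # uniform prior probabilities
x_star = 0.3                                        # view: E*(X) = 0.3

def rel_entropy(p):
    # relative entropy of p with respect to the uniform prior q
    return np.sum(p * (np.log(p) - np.log(q)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: panel[:, 0] @ p - x_star},  # view on X only
]
p = minimize(rel_entropy, q, method="SLSQP",
             bounds=[(1e-12, 1.0)] * T, constraints=constraints).x

print("E*(X):", panel[:, 0] @ p)   # matches the view
print("E*(Y):", panel[:, 1] @ p)   # stays close to the prior mean of Y

The same vector p is reused for both columns, but since the Y scenarios are part of the input panel, the reweighting can only move E*(Y) through whatever sample correlation with X happens to be present.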

==

 

OK, the Gaussian case is obvious, but:

“If it doesn’t know what one of the distributions is, then how can it be expected to minimize anything?

All you have to do is gather x and y (and z, etc.) into a single matrix before applying the algorithm.”

I understand you, and this is basically what I did, but if you don’t express views on y (and z, etc.), it won’t change anything. My problem lies in the fact that this algorithm modifies only one set of probabilities shared by all risk factors. If you take two independent risk factors X and Y with different dynamics (e.g. X ~ ARCH(10) and Y ~ GARCH(1,1) with independent innovations) and express a view only on X, how can you let the optimizer know about the distribution of Y? The idea of numerical entropy minimization is to express views as linear constraints of a convex optimization problem, but the risk factors on which no views are expressed are not an input of the algorithm, or am I missing something?

 
