Random Finds in Heterodox Economics, #2

by Daniel on February 6, 2004

Apologies in advance because this edition of RFHE is not really going to be all that good. It’s a grab bag of things I’ve picked up relevant to personal hobby horses of mine. Lots of people sent me some really good stuff in response to the last one, for which thank you very much. Unfortunately, my chaotic email management habits came through a minor MyDoom infestation about as well as I thought they were going to. I should be able to find all the stuff I had pretty soon; otoh, if any of you were to resend it, that would be just lovely. So, apologies, promises of something better next time, and please regard this inconsistency in quality as charming rather than annoying.

Situations Vacant: The ISMA Centre for Financial Economics at Reading, home of Carol Alexander who wrote “Market Models”, one of the few finance textbooks I keep next to my computer, has a vacancy for a postgrad researcher on an ESRC project, if your medieval Latin is up to it. They’re looking into the history of contracts for the forward sale of wool written between Cistercian monasteries in England and Italian bankers in the thirteenth century. Sounds absolutely fascinating, so if any CT reader gets the job, drop me an email (just to make it clear, I would be no help at all, I’m just interested).

Chap of the Week is Peter Temin. That makes two MIT citations, which surprises me as I’d never thought of MIT as a hotbed of heterodoxy. To be honest, the papers on his MIT site aren’t particularly heterodox; they’re high quality but pretty mainstream economic history. But he gets the award for a) seeming like a nice old boy and b) coauthoring this number with Hans-Joachim Voth. Basically, Temin and Voth have got access to the historical records of Hoare’s Bank, a very old goldsmith’s bank in London. They’ve got a few papers out of it, most of which are on SSRN, but this is the interesting one because it deals with the ledger recording Hoare’s dealings in the stock of the South Sea Company. Not only is the narrative incredibly interesting (though to be honest, I could have done without having the laboured parallels to the dot com episode spelled out to me), but the point they make is that Hoare’s very definitely did not engage in the kind of stabilising speculation that you would have expected from a well-informed investor. Their trading massively outperformed the South Sea Company stock price, and this did not appear to be due to insider information or front-running their clients. They “rode the bubble”, buying on the way up and selling on the way down, and made their money that way. It’s a great paper, although the typesetting in the version currently on SSRN is a little bit screwed.

This is of interest to me, because some while ago I wrote a blog post called “DeLong and the shorts”, which I unapologetically plug here, which argued that a rational, perfectly informed and well-capitalised speculator would not optimise his returns by popping bubbles. The record of Hoare’s trading seems to suggest that, empirically, they found it worthwhile not to do this, and a sweet little paper by Suleyman Basak of LBS and Benjamin Croitoru of McGill University provides the necessary theoretical underpinnings. To be honest, the paper frightened the shit out of me – it looks like a train crashed into an algebraic symbol factory, and I swear the underlying intuition is simple enough that it doesn’t need the heavy guns of dynamic programming brought out in this way – but if you struggle through the maths it seems sound. Basically, as they say: “When the arbitrageur has market power in the securities markets, he will take account of the price impact of his trades on the level of mispricing across the risky assets. This consideration will be shown to induce much richer arbitrageur trades than those in the competitive case, in which under mispricing the arbitrageur simply took on the maximum trade allowed by the position limit.” And thus, he will allow mispricings, potentially significant mispricings, to develop. My own contention, expressed in the blog post linked above, is that the “non-competitive case” is the only case worth considering, because an arbitrageur with no market power is close to being a contradiction in terms. All good stuff.
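
To get a feel for the intuition without the heavy guns (a toy of my own with made-up numbers, emphatically not the Basak and Croitoru model), consider an arbitrageur whose trades move the price: he maximises profit by deliberately stopping short of closing the gap, where a competitive arbitrageur would just trade up to the position limit.

    # Toy illustration (not the Basak-Croitoru model): an arbitrageur faces a
    # mispricing m and moves the price by lam per unit traded, so a trade of
    # size x earns x * (m - lam * x).  The competitive trader just trades up
    # to the position limit; the trader with market power maximises profit
    # and deliberately leaves some mispricing standing.

    def residual_mispricing(m, lam, position_limit):
        competitive_trade = position_limit                      # max trade allowed
        monopolist_trade = min(m / (2 * lam), position_limit)   # maximises x*(m - lam*x)
        return {
            "competitive": m - lam * competitive_trade,
            "market power": m - lam * monopolist_trade,
        }

    print(residual_mispricing(m=10.0, lam=0.5, position_limit=20.0))
    # the competitive trade closes the gap completely; the monopolist trades
    # only 10 units and leaves half of the mispricing in place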

In comments to the last RFHE, we got a few points on the Cambridge Capital Controversy from Robert Vienneau, the Sisyphus of online Sraffaism and maintainer of the Frequently (sic) Asked Questions about the Labour Theory of Value page. Here’s a directory of his writings, and if you read them all you’ll have spent a pleasant afternoon and come out of it with the toolkit to win pretty much any argument you care to have on the subject of capital theory. The great thing about the capital debate is that one side is provably right, and the other side is provably wrong, and the side that’s wrong is the one that runs the economics profession. The real nuggets are here and here. Also of interest is this piece, which demonstrates an important point to remember next time you’re sharpening the pen for another critique of economic theory: homo economicus is a much less important assumption than is commonly believed. Most of the important things an economist might want to assert can be proved without specific assumptions about human nature; here Vienneau derives a labour demand schedule from a linear programming model. However, the conclusions derived in this way can be pretty interesting; the labour demand function so derived has only a few points which could make sense as equilibria, making the whole business of supply/demand analysis problematic.
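
To see what a derivation of this sort looks like, here is a minimal sketch with made-up coefficients (not Vienneau’s actual model): a firm picks whichever fixed-coefficient technique is cheapest at the going wage, and the labour demand that falls out is a step function, with intermediate quantities making sense only at the switch-point wage.

    # Minimal sketch (made-up numbers, not Vienneau's model): labour demand
    # from choosing the cheapest of a few fixed-coefficient techniques.

    techniques = [
        # (labour per unit of output, other costs per unit of output)
        (2.0, 3.0),   # labour-intensive technique
        (1.0, 6.0),   # capital-intensive technique
    ]

    def labour_demand(wage, output=100.0):
        # pick whichever technique is cheapest at this wage
        labour_coeff, other_cost = min(techniques, key=lambda t: wage * t[0] + t[1])
        return labour_coeff * output

    for w in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(w, labour_demand(w))
    # demand sits at 200 for low wages and drops to 100 once the wage passes 3,
    # the switch point where the two techniques cost the same; only at that one
    # wage does any quantity in between make sense as an equilibrium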

And penultimately, a funny little thing from Peter Bossaerts (author of “The Paradox of Asset Pricing”, which I liked) on “Neo-Austrian Theory Applied to Financial Markets”. Quite a misleading title, as it’s got f-all to do with Austrian theory as I understand it; it’s a Santa-Fe type simulation exercise in which artificially generated traders demonstrate (as artificially generated traders always do) significant regularities in their behaviour, but not in a way that’s easy to predict.

Finally, two from J. Barkley Rosser Jr, who, as I’ve mentioned before, is the only person on earth I’d trust to write a sentence combining “chaos theory” and “economics” and get both right. Epistemological Implications of Economic Complexity strikes me as an important one in Post-Keynesian economics; if I’ve scanned it right, Barkley Rosser is arguing that there are some economic questions where it is completely impossible (epistemologically and ontologically) to assign probabilities to future events, or more generally, to make decisions based on any rational process or set of rules. It’s the first step on the way toward providing rigorous foundations for some important points about probability which Keynes just asserted, and I think this has big implications for all sorts of questions, not just in economics. This carries on from the argument in his Metroeconomica piece called All That I Have To Say Has Already Crossed Your Mind, a paper which I like so much I’m going to link to it again.

Anyway, there you go. More soon.

{ 32 comments }

1

phil 02.06.04 at 7:52 pm

If you don’t like dd’s latest post, wait a minute.

2

Chirag Kasbekar 02.06.04 at 8:26 pm

Count me as a fan of Barkley Rosser. One interesting thing about that ‘All That I Have To Say Has Already Crossed Your Mind’ paper is that Rosser (who I believe is post-Keynesian) has a co-author, Roger Koppl, who is an Austrian of sorts.

3

Bill Carone 02.06.04 at 8:52 pm

I checked out the FAQ on labor theory of value, and it didn’t answer my question. It seems that you think it is the correct theory, so maybe you can help me.

Three workers, A, B, and C. They can work for three years and produce something that is worth $315. Here is how it works.

– For 1 year, A works on the product,
– then for 1 year, B works on the product,
– then for 1 year, C works on the product and finishes it.

The workers sell the product for $315, then try to figure out how to split the money.

First idea: split equally, $105 each. Each person spent one year, so each deserves equal shares.

This idea is silly; anyone who doesn’t think so, give me $1 million and I’ll pay you $1 million plus one extra dollar thirty years from now. You should be happy! Of course you wouldn’t be, since the first idea ignores the time value of money, and time preference generally.

Second idea: A gets $110, B gets $105, C gets $100. This is because A, while he did work for a year just like the others, had to wait two years for his payoff. B only had to wait one year, and C didn’t have to wait at all. Therefore, A should get more (and assuming a simple 5% discount rate, we get the above).

The second idea makes sense to me at least. However, it seems to make a mockery of the labor theory.

A capitalist could make a profit by simply paying A and B $100 each at the end of their year of work, so they didn’t have to wait until the end of production for their money. Then A gets $100 after one year, B gets $100 after two years, and C gets $100 after three years, and the capitalist gets $15 after three years. This is equivalent (to the workers) to the second idea above.
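
A quick check of that equivalence, using the same simple (non-compounded) 5% rate as above:

    # Quick check of the equivalence claimed above, using the same simple
    # (non-compounded) 5% per year that gives the $110/$105/$100 split.

    def value_at_year_3(amount, year_received, rate=0.05):
        return amount * (1 + rate * (3 - year_received))

    # second idea: the workers split the $315 at the end of year 3
    second_idea = {"A": 110, "B": 105, "C": 100}

    # capitalist's plan: each worker gets $100 at the end of their own year
    paid_early = {w: value_at_year_3(100, y) for w, y in (("A", 1), ("B", 2), ("C", 3))}

    print(second_idea, paid_early)   # both come to A=110, B=105, C=100 in year-3 terms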

But wait! The capitalist did no labor to get his $15, and therefore is exploiting the workers, right?

Where have I gone wrong? Is the labor theory of value contradicted by the time value of money? If so, I’m pretty sure I know which is “provably wrong.” :-)

4

bill carone 02.06.04 at 9:02 pm

“there are some economic questions where it is completely impossible (epistemologically andontologically) to assign probabilities to future events, or more generally, to make decisions based on any rational process or set of rules.”

What is an example of an event that I cannot assign a probability to, and that has some reasonable amount of importance to a decision-maker?

5

bill carone 02.06.04 at 9:12 pm

I think I fast-forwarded through the ending of the labor theory comment: the capitalist actually has the following cash flow:

-$100 in year 1
-$100 in year 2
+$215 in year 3

Not just +$15 in year 3.

Does this still count as exploitation, since he has earned something without laboring?

6

dsquared 02.06.04 at 10:52 pm

Bill; the LTV isn’t a theory of distribution. The question of who gets what is separate from the LTV; the LTV at base just says that the value of something is the socially necessary labour time used in producing it. I don’t personally believe that LTV is the be-all and end-all, just that economics needs a value theory of some sort and at the moment it’s the only game in town.

Re your second question, have a look at the second Barkley Rosser paper linked above, on the Holmes/Moriarty problem. The idea here is that it is not always obviously legitimate to use the Principle of Insufficient Reason in cases of ignorance.

7

Chirag Kasbekar 02.06.04 at 10:57 pm

“just that economics needs a value theory of some sort and at the moment it’s the only game in town”

You mean the ‘objective’ type, don’t you? And why?

8

John Quiggin 02.07.04 at 12:23 am

I must say I’m underwhelmed by the counterexamples that form the basis of the Cambridge (UK) side of the controversy. Do they show anything more than the fact that results of any kind derived for the case N=2 rarely hold exactly for general N?

And for the specific point that prices aren’t indicators of relative scarcity we don’t even need to go past N=2. Giffen goods and backward-bending labor supply are both good examples for this (though it’s doubtful that any real Giffen goods exist).

I agree on Rosser, though, and will post more on a related topic soon, I hope.

9

dsquared 02.07.04 at 12:51 am

I must say I’m underwhelmed by the counterexamples that form the basis of the Cambridge (UK) side of the controversy. Do they show anything more than the fact that results of any kind derived for the case N=2 rarely hold exactly for general N?

Well yes John mate. They show that aggregative measures of capital can’t be found independently of an assumption about the rate of profit, and thus can’t be used to support marginal product theories of profit. That is pretty huge, innit?

I mean seriously. If there is no non-question-begging way to define inputs of capital, then you can’t use a Cobb-Douglas production function, all of Solow is useless, and you are pushed in the direction of a theory of the wage rate in which exploitation figures much more highly than marginal product.

This is one point on which I disagree with Vienneau; I think that the nonaggregative nature of capital is the whole point of the CCC, although this is because I’m not a Sraffian.

10

Chris Bertram 02.07.04 at 9:24 am

Having puzzled over the transformation problem about 20 years ago, I finally gave up on the LToV after reading Jerry Cohen’s essay “The Labour Theory of Value and the Concept of Exploitation”. Any reason why I should reconsider Daniel?

11

Chirag Kasbekar 02.07.04 at 2:42 pm

Speaking of transformation and while we wait for Daniel to give us an explanation, here’s a joke (not a great one, but one):

http://eh.net/lists/archives/hes/jul-2001/0019.php

12

Abiola Lapite 02.07.04 at 3:16 pm

“an arbitrageur with no market power is close to being a contradiction in terms.”

That does not gibe with my experience in the financial sector. Most arbitrageurs don’t have the deep pockets of a George Soros or an LTCM.

13

Dave 02.07.04 at 6:38 pm

Y1 -100 = Y3 -110
Y2 -100 = Y3 -105
Y3 +215 = Y3 +215

I see no exploitation, but then, I don’t see any profit, either.

14

mike d 02.07.04 at 7:13 pm

as one of the guys who asked about the CCC the first time around, I appreciate the links. Squeaky wheel gets the oil, I suppose.

as to John’s comment above:
(though it’s doubtful that any real Giffen goods exist)
I’m going back to Intro Micro Theory here, but I thought that potatoes during the Irish famine were considered the best bet for being a Giffen good; is that no longer the case?

15

bill carone 02.07.04 at 8:26 pm

“I see no exploitation, but then, I don’t see any profit, either.”

I clearly don’t understand LTV, and need to do more research if I want to understand it.

I thought that it (and its followers) would object to a capitalist making 5% return on investment without exerting any labor. The fact that that plan would be equivalent to a fair plan for the workers to split the payout confused me.

16

humeidayer 02.08.04 at 5:35 pm

I clearly don’t understand LTV, and need to do more research if I want to understand it.

I am confused as well. I thought the labor theory of value was dead and economic subjectivism was mainstream.

Seems to me the monetary value of something is determined by what someone is willing to pay for it. You can spend your whole life creating something no one wants. If you do, if there’s no demand for what you’ve created, the value of your product and, consequently, the value of your labor will be nada.

It also seems to me that the labor theory of value and the Marxist conception of exploitation lead to some rather absurd conclusions. I’ve yet to see a Marxist claim workers exploit capitalists when businesses lose money, as businesses often do.

Additionally, following this perspective, it seems that market demand determines the degree of exploitation. If there’s no profit, there’s no exploitation. If there’s a sudden surge in demand and profits rise, then there’s exploitation. The only way to avoid exploitation is to perpetually adjust the wages of workers to match changes in consumer demand. But I think this quickly leads right back to the subjectivist position that says demand determines value.

17

dsquared 02.08.04 at 9:07 pm

Any reason why I should reconsider Daniel?

Not if you care about exploitation, yes if you care about theories of value. Basically, without something like a value theory, you can’t come up with something as basic as a noncircular definition of the capital stock, which I (and few others) regard as a bit of a theoretical problem.

18

Anarch 02.08.04 at 9:33 pm

The great thing about the capital debate is that one side is provably right, and the other side is provably wrong…

Provable in what sense?

19

dsquared 02.08.04 at 10:23 pm

The mathematical sense. It’s now settled ground that the approach taken by the Cambridge (US) side – that one could measure the capital stock taken as an aggregate by using the dollar prices of capital equipment – is not satisfactory given that it means that anything you then go on to say about the rate of profit is a circularity (because the price of capital equipment depends on the rate of profit). There is some room for dispute about how important this is (see above) but the central point is purely mathematical.

20

dsquared 02.08.04 at 11:01 pm

That does not gibe with my experience in the financial sector. Most arbitrageurs don’t have the deep pockets of a George Soros or an LTCM.

Yeh, but come on … you don’t need anything like the size of bat those guys have to be able to move prices.

21

bill carone 02.10.04 at 4:40 pm

Comments on “All I have to say has already crossed your mind,” especially Argument II

I haven’t been able to get the Morgenstern, Clayton, or Diaconis papers, so perhaps they explain some of the following.

“The essence of the result we derive below has been expressed by Clayton 1986 (p. 38). Imagine

“A and B are witness to some coin tossing. Bayesian A is firmly committed to the belief that all coins are fair, and so uses δ1/2 [a point mass at 1/2] for q, the probability of heads. B is firmly committed to the belief that coins are never fair, and uses a uniform prior on [0,1/3] ∪ [2/3,1]. Both A and B will use Bayes theorem to coherently update their priors as they see data, but they will never agree, nor should they.”

Probabilities should represent information, not mere belief. Before I start calculating, I would want to know what information A has that committed A to A’s belief, and the same for B. Note that all it takes to destroy the result is the minimal claim “I might be wrong,” putting a small density on all points between 0 and 1.
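
To make that concrete, here is a toy discretised version with made-up numbers: give B even a sliver of prior mass on the “fair” region, and enough even-handed data pulls B’s posterior back to 1/2 after all.

    # Toy discretised version (my numbers): B keeps a sliver of prior mass on
    # the 'fair' region, sees roughly even data, and ends up near 1/2 anyway.

    grid = [i / 100 for i in range(1, 100)]                    # candidate values of q
    prior = [0.001 if 1/3 < q < 2/3 else 1.0 for q in grid]    # 'coins are never fair', almost

    heads, tails = 500, 500
    posterior = [p * q**heads * (1 - q)**tails for p, q in zip(prior, grid)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]

    mass_near_half = sum(p for q, p in zip(grid, posterior) if 0.45 < q < 0.55)
    print(mass_near_half)   # close to 1: the small "I might be wrong" mass wins out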

It doesn’t seem that this is the essence of Rosser’s argument II. Let me try to summarize it.

You are a single decision-maker, trying to decide what to do. The problem is, there is another person who is trying to outthink you; your opponent will anticipate your action, or anticipate your anticipation of his anticipation of your action, etc. Let n be the number of your opponent’s anticipations (i.e. the two examples in the previous sentence are n=1 and n=2).

Now, if you knew n for sure, then you could choose the optimal decision (which in this case is a particular mixed strategy). However, as n gets large, the optimal decision might not converge, but instead oscillate between two choices.

Rosser then (erroneously, I think) concludes that there is no solution to the problem, so all such decisions must be made, not by careful rational analysis, but by animal spirits.

Two problems I see with this argument.

First, the solution might converge; in his example it doesn’t, but in others it does. Then, some decisions could be made using rational analysis. The example given involved the words “Cauchy” and “posterior mean”, which set off alarm bells in my head (Cauchy distributions are famous for not having a mean, so using means as summary measures can cause hidden problems). It also uses continuous, infinite-ranged probability distributions, which often cause problems when people forget that they are just limits of discrete, finite-ranged distributions (this is quite common in the probability literature, and causes all sorts of nonsense to be published; people forget that infinity isn’t a real number, for engineers or for mathematicians. I’m not sure whether Diaconis or Rosser is particularly susceptible to this sort of thing).
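
To make the Cauchy worry concrete (my own quick simulation, nothing from the papers): the running mean of Cauchy draws never settles down, however big the sample.

    # Quick simulation of the Cauchy point (mine, not from the papers): the
    # running mean of Cauchy draws never settles down the way a normal
    # sample's would, because a single huge draw can drag it anywhere.

    import math
    import random

    random.seed(0)
    running_sum = 0.0
    for i in range(1, 1_000_001):
        running_sum += math.tan(math.pi * (random.random() - 0.5))   # a standard Cauchy draw
        if i in (100, 10_000, 1_000_000):
            print(i, running_sum / i)
    # the printed means wander around rather than converging to anything,
    # however long you keep going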

Second, if the mathematical limit does not converge, you can’t draw any conclusion other than “my model (or the problem) is ill-posed.”

The model above says “We don’t know much about n, other than it could get large. Let’s not think about it too much, and just see what happens when n gets really large. Maybe it will give us a sensible answer.” When the limit doesn’t converge, you should then refine your model; the divergence tells you, “Look, this particular “n gets large” shortcut, although it works a lot of the time, isn’t open to you in this case. Go back and do some more work.”

In this case, the “more work” that is needed is a distribution on n. After all, no one actually anticipates infinitely often. So figure out what you know about your opponent’s anticipation strategy. My friends often win games this way (“Bill will most likely think either three or four moves ahead”).

Perhaps we could put a uniform distribution on all possible n from 1 to N, then see what happens when N goes to infinity. In the example in the paper, where for all even n the posterior mean was y and for all odd n the posterior mean was -y, such a limit would give a posterior mean of zero (which, it seems, is the solution Rosser wants it to give).
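
As a toy check of that suggestion (my own numbers, with y standing in for the oscillating posterior mean):

    # Toy version of the "distribution on n" fix, with y standing in for the
    # oscillating posterior mean (+y for even n, -y for odd n, as in the paper).

    y = 2.0

    def averaged_posterior_mean(N):
        means = [y if n % 2 == 0 else -y for n in range(1, N + 1)]
        return sum(means) / N            # uniform prior over n = 1 .. N

    for N in (2, 3, 10, 11, 1000, 1001):
        print(N, averaged_posterior_mean(N))
    # even N gives exactly 0 and odd N gives -y/N, so the answer tends to 0
    # as N grows, which is the value the oscillation was hovering around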

So, instead of non-convergence = no answer, non-convergence = bad model that needs to be fixed. Here, instead of assuming n is large, actually model what you know about it.

Now there may be a little problem with my idea. This strategy I’ve suggested might just be a subset of the strategies already considered. In other words, adding this “put a distribution on n” step in the argument might lead to a whole other anticipation game, as he anticipates what n-distribution I am using. I don’t think this is the case; here, the n-distribution describes my information, whereas before, the mixed distribution described my actions. However, the information about n depends on what he thinks about my distribution on n, so there still may be a problem.

22

bill carone 02.10.04 at 7:45 pm

Comment on Rosser “Epistemology …,” especially about knowledge of chaotic systems.

“One simply cannot guarantee exact prediction, or even very close prediction with any certainty, as long as one is expending a finite effort to obtain information regarding the system, its internal relations, its initial conditions, its parameter values, and so forth.”

Is he confusing complexity with uncertainty here?

For example, if I assign a distribution on initial conditions, parameter values, and so forth, then for each possibility, I can compute exactly what will happen in the system, even though it is chaotic, no? Then, I use standard Bayesian calculations to figure out a distribution on what will happen.

The end result may be (in fact, in a chaotic system, most likely will be) that I know very little about what will happen, but there is no problem assigning probabilities that describe that ignorance precisely, is there?

In other words, I don’t see an argument for any epistemological problem with assigning probabilities over the results of chaotic systems. The result I see is that, with chaotic systems, the calculated probabilities will indicate ignorance.
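
As a sketch of the sort of calculation I mean, with an invented example (not one of Rosser’s): take a chaotic map, put a distribution on its initial condition, and push the draws through the dynamics.

    # Sketch of the calculation described above (my example, not Rosser's): the
    # logistic map at r = 4 is chaotic, but a distribution over its initial
    # condition still pushes forward to a perfectly well-defined distribution
    # over outcomes; one that just happens to express near-total ignorance.

    import random

    def logistic(x, r=4.0, steps=50):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    random.seed(0)
    draws = [logistic(random.uniform(0.30, 0.31)) for _ in range(50_000)]
    print(sum(1 for x in draws if x > 0.5) / len(draws))
    # roughly half the mass ends up above 0.5: tight knowledge of the starting
    # point has decayed into near-ignorance about where the system finishes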

Can you point out a specific example that I can look at, to understand what you meant when you said “it is not always obviously legitimate to use the Principle of Insufficient Reason in cases of ignorance”?

23

Dave 02.11.04 at 4:51 am

It’d be good to be clear on simple things, like time value of money, before worrying about things like labor theory of value, or Cauchy distributions*.

In the example given, as far as I can tell, the capitalist is simply breaking even; we’ll chain the other direction this time:

Y1 -100 = Y1 -100
Y2 -100 = Y1 -95
Y3 +215 = Y1 +195

* a) it doesn’t take a Cauchy distribution for means to not be very useful. The mean is simply the first coefficient in an approximation, and trying to treat highly skewed distributions with only the mean is somewhat like approximating addresses with only latitude.
b) the Cauchy distribution is well known to those of us who have ever worked someplace where “Dilbert” is popular: theoretically, projects are supposed to behave like they have expected completion dates, and naively one might expect that the longer one has gone without observing completion, the more likely completion will be in the near future; practically, as Schroeder points out, the longer many projects have gone on without success, the farther in the future their expected success will be. CS people know about Real Soon Now, and folk wisdom knows about Mañana.

24

dsquared 02.11.04 at 4:03 pm

Bill: When you assign a uniform distribution over outcomes because you have no information (as you explicitly suggest above), that’s a version of the principle of insufficient reason. Most of the time it will work. Sometimes (particularly when dealing with complex systems), it could be very bad indeed.

25

bill carone 02.11.04 at 5:01 pm

Dave, thanks for responding.

“It’d be good to be clear on simple things, like time value of money, before worrying about things like labor theory of value, or cauchy distributions”

That’s fair. Let’s see now.

“In the example given, as far as I can tell, the capitalist is simply breaking even; we’ll chain the other direction this time:”

Yes, if the capitalist has a 5% time preference as well (like the workers), then he breaks even. If he has a smaller time preference, he makes a profit (is it common for capitalists to have smaller or larger discount rates than workers? Workers need the money, but capitalists have more investment opportunities?) However, that was not my point.

My point was that he made _any monetary return at all_ on his original investment without putting in any labor.

I was confused by the fact that the capitalist could make a 5% return on investment without putting in any socially useful labor. I thought that adherents of the labor theory of value (LTV) would object to that. I then showed that this was equivalent (from the workers’ point of view) to another distribution that an LTV adherent would, I thought, accept.

D^2 then, I believe, put me right when he told me that the LTV doesn’t take a position on distribution, but I wanted to hear what other people had to say as well.

So, would you object to the capitalist making 5% on his investment without putting any labor into it?

“it doesn’t take a cauchy distribution for means to not be very useful”

True; I only meant that Cauchy distributions _frequently_ give strange results when dealing with means and variances, rather than some other summary measures. It is usually the catch-all counterexample to anything involving means and variances (e.g. law of large numbers, minimum variance unbiased estimators, etc.).

“theoretically, projects are supposed to behave like they have expected completion dates, and naively one might expect that the longer one has gone without observing completion, the more likely completion will be in the near future”

I’m not sure this is true, even theoretically. For example, if a project will be completed in an exponential amount of time, then the expected time (and the distribution of the time) until completion never changes (Dilbert would be proud :-)

This is a standard fallacy when means are involved.
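
A quick simulation of the exponential claim (my own check):

    # Quick check of the exponential claim above (my simulation): condition on
    # the project having already run for a while, and the expected *remaining*
    # time is unchanged (the memoryless property).

    import random

    random.seed(0)
    mean_duration = 3.0
    durations = [random.expovariate(1 / mean_duration) for _ in range(500_000)]

    for already_elapsed in (0.0, 2.0, 5.0):
        remaining = [d - already_elapsed for d in durations if d > already_elapsed]
        print(already_elapsed, sum(remaining) / len(remaining))
    # all three conditional means come out near 3.0 years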

“the longer many projects have gone on without success, the farther in the future their expected success will be”

Again, I don’t think this poses any theoretical problem; I teach this in first-quarter probability. Many lifetime distributions have similar qualities. For example, when I buy a computer game, the expected length of time until it breaks down starts out at X, say. Once I actually get it installed and start playing, the expected time until it breaks down increases dramatically; it “wears in.” Isn’t that what you experience?

Another example: my wife will have a child soon; when it is born, I will assign X as its expected life. A year later, the expected life will have increased, no?

27

bill carone 02.11.04 at 5:24 pm

“When you assign a uniform distribution over outcomes because you have no information (as you explicitly suggest above), that’s a version of the principle of insufficient reason.”

Well, there are lots of rationales for it: transformation groups and maximum entropy are others that lead to the same sorts of conclusions. See Jaynes; examples of transformation groups leading to uniform distributions are in the section beginning on page 17.

“Most of the time it will work. Sometimes (particularly when dealing with complex systems), it could be very bad indeed.”

I would be eternally in your debt if you would answer two questions about this (ever since your post, they have been driving me crazy):

1) Can you give an example where the principle of insufficient reason doesn’t work when assigning probabilities to a finite set of possibilities? I’m pretty sure none exist, but I’m willing to learn.

This usually clinches it for me; probability theory (and, I believe with far less confidence, mathematics) is about finite sets and well-behaved limits of finite sets.

2) Can you give an example of one of the complex systems where it is “very bad indeed” to assign equal probabilities if you have no information? I thought you said I would find an example in one of the Rosser papers, but I couldn’t; if you could point me to one in there, or another you know about, I would really appreciate it.

28

dsquared 02.18.04 at 1:52 pm

Bill: (sorry for delay).

For the first, the PIR is going to give you wrong answers when you’re trying to make guesses about a set of outcomes which doesn’t have a probability measure. (For example, you can’t make sense of the Argument From Design or Pascal’s Wager based on probability theory for this reason.)

2: It’s implicit in the Rosser papers that probabilistic reasoning in the sense of assuming well-defined and objective probability measures is often going to come to grief in situations where there is a lot of game-theoretic structure.

For example, think about the difference between the following two bets:

1. I throw a dice in plain view and pay you 10x your stake if it lands ‘6’.

2. I throw a dice behind a screen, which you’re never allowed to see. About 1 time in 6, so far in plays of this game to date, I say “Hey, you rolled a 6” and pay you 20x your stake, and about 5 times in 6 so far I have said “Sorry, better luck next time” and kept your stake.

You’ve just inherited $1,000,000 with the condition that you have to risk it all on one spin of a game of chance. Which of the two options above would you risk it on?

The risk here is your assessment of whether I’d rip you off or not. I don’t see how there’s any rational principle at all on which you could make the decision between these gambles, but that doesn’t mean they’re equally attractive.

29

bill carone 02.18.04 at 9:27 pm

Dsquared,

First, a sincere thanks for the response.

“For the first, the PIR is going to give you wrong answers when you’re trying to make guesses about a set of outcomes which doesn’t have a probability measure.”

Can you give me an example of a finite set without a probability measure?

In the Pruss paper you linked, he starts with infinite sets:

“Suppose I know with complete certainty that X could have any value in the interval [0,1], and there is no reason to favor any value over any other.”

“Suppose, however, that (a) I believe the universe is infinite in extent,”

When you start with a finite set and then take a well-behaved limit to an infinite set, probability calculations always work; they give the correct (usually intuitive) answer when such a limit exists, and refuse to answer when it doesn’t. For example, in Pruss:

“For instance, a quick calculation shows that the likelihood that the blast will occur within a thousand miles of Earth is less than 10^-44 of the likelihood that the blast will occur somewhere in the Andromeda Galaxy. Any likelihood that is that much smaller than some other likelihood is negligible.”

“Admittedly, all the reasoning here cannot be reconstructed within a standard Bayesian epistemology based on the classical probability calculus.”

It can; all you do is start with a finite universe and take a limit to an infinite universe; in the limit the first _probability_ is less than 10^-44 times the second _probability_. You get the same intuitive answer as Pruss, using standard probability theory.

The limit method always works, and it satisfies our intuition about “infinite” cases when the limit converges.

When the limit isn’t well-behaved, the mathematics is telling you “You can’t use this shortcut on this problem; you can’t just say N is very, very large, you actually have to figure out how big you think N is.”

The St. Petersburg case is an example; the limit diverges (increases without bound). This means you can’t conclude anything about the answer; you can’t just say “Imagine we could play forever,” you need to be more specific. Once you put a limit on the amount of money or the number of plays, intuitive answers pop out.
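
A small worked version of that, with my own numbers: cap the number of flips and the St. Petersburg expectation is perfectly finite; it only grows without bound as the cap does.

    # Small worked version of the truncation point (my numbers): cap the number
    # of coin flips and the St. Petersburg expectation is finite; it only grows
    # without bound as the cap is allowed to grow.

    def expected_payout(max_flips):
        # payoff 2**k with probability 2**-k if the first head lands on flip k
        return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

    for cap in (10, 20, 40, 80):
        print(cap, expected_payout(cap))
    # each extra allowed flip adds exactly 1 to the expectation, so the value is
    # always finite for a finite cap and diverges only in the unrestricted limit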

You can solve most (all?) “infinite” paradoxes this way: Do all the analysis with finite quantities, then see what happens when those quantities increase indefinitely. This is the method Gauss used:

“I protest against the use of infinite magnitude as something accomplished, which is never permissible in mathematics. Infinity is merely a figure of speech, the true meaning being a limit.”

2. So it isn’t chaos theory I should be looking into, it is just game theory, right? Game theory often upsets me for similar reasons to the stuff above: it assumes an infinity already exists (an infinity of anticipations). But I understand it much more than I do chaos theory.

“probabilistic reasoning in the sense of assuming well-defined and objective probability measures”

I don’t know what you mean by “objective” here. See below.

“2. I throw a dice behind a screen, which you’re never allowed to see. About 1 time in 6, so far in plays of this game to date, I say “Hey, you rolled a 6” and pay you 20x your stake, and about 5 times in 6 so far I have said “Sorry, better luck next time” and kept your stake.”

So I don’t know if you are honest or not here?

The “five times out of six” is relevant, but not a complete description of what I know. Probability is about _all_ of my information.

My judgment of your honesty would enter into the probability calculation. For example, if you were my friend and the money would go to Bill Gates if I lose, I would assign a different probability than if you were a stranger and the money went to you if I lose.

It would also be different if you didn’t know the bet was for $1 million this time (or if all the bets in the past were for $1 million).

I’m missing the point you are making here:
– I don’t see a problem with my assigning a probability representing my information, then using standard decision theory.
– I also am missing any game-theoretic issues (“he thinks that I think that he thinks …”) as in Rosser’s paper.

30

dsquared 02.19.04 at 3:20 pm

Can you give me an example of a finite set without a probability measure?

Pascal’s wager, if I understand it correctly, poses a question where there’s a binary choice without a probability measure.

In general, I think we’re talking at cross purposes. Neither myself nor Keynes would disagree that you can assign a probability based on your perception of my honesty. What we’d question is whether probability in this sense is something that should be put on the same logical footing as frequentist probability: the probability that a dice turns up the right number.

For example, game 3, which is the same as game 2 but it’s my brother, about whom you know as little as you know about me, doing the rolling. What I think Keynes and I would say would be that there is no principled way in which you can say either that gamble 2 is preferred to gamble 3, or that you are indifferent between them. This is Keynesian probability; it’s a theory of Bayesian probability-as-degree-of-belief, but it doesn’t require a complete ordering.

31

bill carone 02.19.04 at 5:12 pm

“Pascal’s wager, if I understand it correctly, poses a question where there’s a binary choice without a probability measure.”

Do you have a source on that? (I can’t find a source on Google) It doesn’t match my understanding of a probability measure. I’m no expert, though.

This is the wager where either an afterlife exists or not, and you choose to be religious or not?

I thought the problem was an infinite utility (or multiple infinite utilities), not about probabilities.

“In general, I think we’re talking at cross purposes.”

Sorry, I am clearly misunderstanding; first I thought it was about chaos theory, then game theory, and now it is probability theory.

“For example, game 3, which is the same as game 2 but it’s my brother, about whom you know as little as you know about me, doing the rolling. What I think Keynes and I would say would be that there is no principled way in which you can say either that gamble 2 is preferred to gamble 3, or that you are indifferent between them.”

I don’t see why either of you would think that.

Here is the principle: If you have the same information about two possibilities, then you should assign the same probabilities to them. In other words, if every piece of information you have points equally to A as to B, then p(A)=p(B).

This is what I mean by the “principle of insufficient reason”; why would you not use it in this case?

My reasoning goes as follows:
– my probability of winning game 2 equals my probability of winning game 3 (whatever it is). Why? You have specifically told me that I have no information that distinguishes your action and your brother’s action.
– Same probabilities, same outcomes implies indifference.

Intuitively, if I had to play one, I’d be indifferent between the two; I have no information one way or the other about which is more likely to win.

This also is the intuitively correct result, no? How could it be otherwise?

“What we’d question is whether probability in this sense is something that should be put on the same logical footing as frequentist probability”

These probabilities work exactly the same way as frequentist probabilities, as shown by Cox; see Jaynes. Briefly, if you represent your information as real numbers, and you want to reason consistently, then he shows that those real numbers must obey two rules:

p(A|C) + p(not-A|C) = 1
p(A and B|C) = p(A|C)p(B|AC)

These are the same rules that frequentists use for finite, discrete sets of possibilities.

So probabilities representing your information combine in the same way as frequencies do.

32

bill carone 02.22.04 at 8:48 pm

“Pascal’s wager, if I understand it correctly, poses a question where there’s a binary choice without a probability measure.”

I’ve checked with a few sources, and it seems that any assignment of probabilities (p) and (1-p) to possibilities (A) and (not-A) is a probability measure.

Let S be a set containing A and not-A.

Look at this collection containing these subsets of S:
– empty
– A
– not-A
– A, not-A (i.e. the set S)

These form a sigma-algebra, since

– S is in it,
– If X is in it, so is its complement, and
– The collection is closed under countable unions.

The measure that assigns the following measures to the subsets is a probability measure.

– p(empty) = 0
– p(A) = p
– p(not-A) = 1-p
– p(S) = 1

Do you see something wrong with the above?

Comments on this entry are closed.