Two Envelopes paradox

Problem definition
Generalized version of Two Envelopes problem
Variants vG, vD, vS, vA, vN …
Resolution of paradox
Expected values and paradox
Infinite Expected values
Expected gain and paradox
Optimal strategy if we can look into envelope
Solution of Generalized version ( answer to Q1 )
Optimal solution of Generalized version (answer to Q2)

vD: default variant where the player selects the envelope
vS: standard variant where the second envelope is always doubled
vX: standard variant where we directly choose value X
vB: binary variant where V(R)=2^R
vU: simple uniform variant
vUc: continuous uniform variant
vA: ‘always better’ 2^n/3^{n+1} variant
vAc: continuous ‘always better’ variant
vN: Nalebuff asymmetric variant
Summary


Problem definition

The standard version of this paradox, to quote Wikipedia, is:

Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?

Further description of the steps that lead to the apparent paradox:

  1. Denote by A the amount in the player’s selected envelope.
  2. The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2.
  3. The other envelope may contain either 2A or A/2.
  4. If A is the smaller amount, then the other envelope contains 2A.
  5. If A is the larger amount, then the other envelope contains A/2.
  6. Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2.
  7. So the expected value of the money in the other envelope is : \frac{1}{2}(2A)+\frac{1}{2}(\frac{A}{2})=\frac{5}{4}A
  8. This is greater than A so, on average, the person reasons that they stand to gain by swapping.
  9. After the switch, denote that content by B and reason in exactly the same manner as above.
  10. The person concludes that the most rational thing to do is to swap back again.
  11. The person will thus end up swapping envelopes indefinitely.
  12. As it is more rational to just open an envelope than to swap indefinitely, the player arrives at a contradiction.

That is the basic and fairly generic definition of the paradox. While it does not define how the money for the envelopes is selected, it does specify that the initial envelope is picked at random – which excludes certain variants of the Two Envelopes problem, like the Nalebuff variant (where a specific envelope is given to the player).
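Before dissecting the reasoning, the symmetric setup itself can be checked empirically. A minimal Monte Carlo sketch (the amount 100 and function name are my own assumptions, not part of the problem statement):

```python
import random

def play_once(switch, smaller=100):
    """One round: one envelope holds `smaller`, the other double that;
    the player picks an envelope at random, then switches or not."""
    envelopes = [smaller, 2 * smaller]
    random.shuffle(envelopes)
    picked, other = envelopes
    return other if switch else picked

random.seed(0)
n = 100_000
avg_keep = sum(play_once(False) for _ in range(n)) / n
avg_swap = sum(play_once(True) for _ in range(n)) / n
# Both averages converge to (100 + 200) / 2 = 150: switching gains nothing,
# contradicting the 5/4 argument above.
print(avg_keep, avg_swap)
```

This already hints at where the 5/4 reasoning breaks: over many rounds the switcher and the non-switcher earn the same on average.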

Generalized version of Two Envelopes problem

It is possible to generalize the problem so that it covers practically all variants that can be called a “Two Envelopes paradox”. Assume we have a host who chooses the values in the envelopes, and a player who gets an envelope and chooses whether to switch:

  • host randomly selects a value R with probability distribution p1(r)
  • host puts the amount X=V(R) into the first envelope
  • host puts double the amount (2X) into the second envelope with probability D, otherwise puts half (X/2)
  • player is given, or selects, the first envelope with probability F; otherwise the player gets the second envelope

Q1) Without looking into the envelope, should the player switch to the other envelope?
Q2) Looking into the envelope, when should the player switch to maximize gain?

Each variant of the Two Envelopes problem can be defined by these four parameters: p1(r), V(r), D, F. Question Q1 is the one that supposedly leads to the paradox in the original variant, while Q2 is an optional question about the optimal strategy to maximize gains.

The first parameter is the probability distribution p1(r), which determines how probable the intermediate value R is; R is then passed to the ‘value’ function V(R) to get the amount X that goes into the first envelope. For continuous values of X, R has a continuous probability distribution and p1(r) is a proper probability density function such that \int p1(r) dr = 1. If R has a discrete probability distribution, then p1(r) is a proper probability mass function such that \sum\limits_{for\;all\;r} p1(r)=1.

The generic problem uses an intermediate random variable R and its associated probability distribution p1(R), instead of directly generating the value X with a probability distribution p1_x(X), for several reasons:

  • to cover variants like vA that state “randomly select R and then put 2^R into the first envelope”
  • to cover variants that have a proper probability distribution for R even if they have an improper distribution for X
  • to cover variants that use continuous distributions to generate discrete values, like p1(r)=e^{-r} and X= \lceil 100r \rceil

Both parameters F and D are probability values, and so in the range 0..1, assumed constant for a given variant (even when unspecified, as in the original variant). The default Two Envelopes problem states that F=\frac{1}{2} and does not specify D (the paradox claims remain the same regardless of whether the host doubles or halves the value in the second envelope). But some modified variants (like Nalebuff) use a different F (F=1, the host always gives the first envelope to the player) and specify D (D=\frac{1}{2}, to ensure the \frac{5}{4}x outcome when switching).

It is possible to generalize the Two Envelopes problem even further, by making the probability that the host doubles the value in the second envelope and the probability that the player gets the first envelope into functions of R as well – meaning not only V(r) but also D(r) and F(r). While this would make it harder to find a general solution (one is presented below), it would make it even harder to create a meaningful problem variant with a visible paradox – since the paradox needs a believable claim of the type “so for every value of X it is better to switch”, and it is hard to make that claim obvious if the benefit of switching changes with each value.
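The four parameters above translate directly into a simulator. A sketch (the function names and the example distribution are my own choices, for illustration only):

```python
import random

def play_generalized(sample_r, V, D, F):
    """One round of the generalized game: draw R via sample_r() (i.e. p1),
    put X = V(R) into the first envelope, 2X into the second with
    probability D (otherwise X/2), and give the player the first
    envelope with probability F. Returns (player's value, other value)."""
    x = V(sample_r())
    second = 2 * x if random.random() < D else x / 2
    if random.random() < F:
        return x, second
    return second, x

# Example (assumed parameters): default variant with X = V(R) = R
# uniform on {1, 2, 3}, D = 1 (always double), F = 1/2 (random pick).
random.seed(1)
n = 200_000
keep = swap = 0.0
for _ in range(n):
    mine, other = play_generalized(lambda: random.choice([1, 2, 3]),
                                   lambda r: r, 1.0, 0.5)
    keep += mine
    swap += other
# Both averages approach (3/2) * E[X] = 3: switching does not help here.
print(keep / n, swap / n)
```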

Variants vG, vD, vS, vA, vN …

Variants of the “Two Envelopes” problem differ in what and how they define one of the four parameters in the generalized definition of the problem. Variants are named ‘vX’ to make them easier to refer to in the text, but it should be noted that some of these variants are actually sub-variants of other, more generic ones (e.g. vA is a subvariant of vS, which is a subvariant of vD), with all of them being sub-variants of the most generic vG:

  • vG (Generic) : all four parameters F, D, V(R) and p1(R) are unspecified ( and thus any valid)
    • vD (Default, F=\frac{1}{2} ): player select envelope so F=\frac{1}{2} while D, V(R) and p1(R) are unspecified
      • vS (Standard, D=1): F=\frac{1}{2} , D=1 while V(R) and p1(R) are any
        • vX (Direct standard, X=V(R)=R ): F=\frac{1}{2} , D=1, X=V(R)=R , while p1(X) is any
          • vU (uniform discrete): F=\frac{1}{2} , D=1 , V(R)=R, p1(R)= \frac{1}{N} for R \in 1,2,..N
          • vUc (uniform continuous): F=\frac{1}{2} , D=1 , V(r)=r, p1(r)= \frac{1}{N} for r \in [0,N]
        • vB (Binary, V(R)=2^R): F=\frac{1}{2} , D=1 , V(R)=2^R, while p1(R) is any
          • vA (discrete ‘always better’) F=\frac{1}{2} , D=1 , V(R)=2^R , p1(R)=\frac{2^R}{3^{R+1}}
          • vAc (continuous ‘always better’) F=\frac{1}{2} , D=1 , V(R)=2^R , p1(r)= ln(\frac{3}{2})(\frac{2}{3})^r
    • vN (Nalebuff): F=1 , D=\frac{1}{2} , while p1(R) and V(R) are unspecified/any

For the Default variant vD and all of its subvariants it is proven below that the formulas for the expected value when always switching are the same as when never switching (E_{as}=E_{ns}), and thus the paradox does not exist. Formulas are also derived for the expected value when switching whenever the value inside the envelope is less than some value H, E_h(H), and for finding the optimal value H_{opt} that maximizes E_h \geq E_{ns}.

For the Nalebuff asymmetric variant vN it is shown that it indeed has a better expected value when switching, but that the same does not hold when switching back again – as its name suggests, it is ‘asymmetric’, and the same reasoning cannot be applied to the other envelope after the switch.

Resolution of paradox

There are many proposed resolutions of this paradox. Most of them focus on step #6 of the 12 steps – stating that it is impossible to put money into the envelopes in such a way that, for any possible value in the selected envelope, the other envelope has an equal chance of containing double or half.

A simple example: suppose we randomly, with equal chance, select the value for the first envelope from 2, 3, 4 and put double in the second envelope. It is easy to see that the claims of the paradox above are valid only if you have 4 in your envelope – only then does the other envelope have an equal 1/2 chance of being double (8) or half (2), and only then would your expected value upon switching be 5/4x. But consider the case when you have 2 or 3 in your envelope – there is a 100% chance that it is the smaller value, not the claimed 1/2, and switching gives you an even larger 2x. On the other hand, when you have 6 or 8 in your envelope, there is a 0% chance that the other envelope holds double that value, and switching would clearly result in a loss, with expected x/2. It can be shown that all possible gains from switching while holding the smaller value of a pair (2, 3 or 4) are exactly offset by the losses from switching while holding the larger (4, 6 or 8).
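The offsetting claim for this example can be verified by exact enumeration; a small sketch using exact fractions:

```python
from fractions import Fraction

half = Fraction(1, 2)
e_keep = e_switch = Fraction(0)
for small in (2, 3, 4):        # smaller envelope holds 2, 3 or 4, each 1/3
    p = Fraction(1, 3)
    large = 2 * small
    # The player holds each side of the pair with probability 1/2;
    # switching just exchanges the two payoffs.
    e_keep   += p * (half * small + half * large)
    e_switch += p * (half * large + half * small)
print(e_keep, e_switch)  # both 9/2: gains and losses offset exactly
```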

In fact, in almost all variants the paradox does not exist because one of these two claims is false:

#2) equal chance other is smaller/larger: the probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2 ( for any possible value in envelope )

#9) indistinguishable envelopes: after the switch, denote that content by B and reason in exactly the same manner as above

But for almost every proposed resolution some counterexample was proposed – slightly modifying the problem setup or how the money for the envelopes is selected – to counter the suggested resolution. Examples of such modified problems are the Nalebuff variant, which guarantees #2 for the first envelope, or the \frac{2^n}{3^{n+1}} variant, where it appears that it is always best to switch.

Expected values and paradox

The paradox can be resolved by pointing to each separate false claim, but it can also be resolved by calculating the expected value if we never switch the envelope (E_{ns}) and showing that it is the same as the expected value if we always switch (E_{as}). The expected value can be described as the average amount that would be earned if this game were played many times. Thus it is the sum, over all possible amounts, of the amount of money in the selected envelope multiplied by the probability of that amount appearing.

To really prove that there is no paradox, it is not enough to calculate expected values for one specific case – we need to find formulas for the expected values and show that they are identical whether switching or not.

We can use several different approaches to find formulas for the expected value; they differ notably in the choice of random variable over which we sum or integrate: over values of the first envelope (‘first’ being the one where the host first puts money, as defined in the generalized problem), over values in the envelope the player gets/selects, over possible values of the random number generated before we determine the value of the first envelope, etc. Another difference is whether the probability distribution is continuous or discrete, which determines whether we integrate or sum over all possibilities. To demonstrate these different approaches, we will simplify and consider here only the subset of the generalized problem where V(R)=R (the solution of the fully generalized problem is given in a later section, using one of the approaches demonstrated here). This is the subvariant where we directly generate the random value X for the first envelope (so p_x(x) instead of p1(r)) and always double the second envelope (so the first is always the smaller).

The first approach to finding the expected value when we never switch (E_{ns}) goes over all possible values in the smaller envelope; p_x is the probability of putting X as the value in the smaller envelope, and p_{chose\,smaller} is the probability that the player gets the envelope with the smaller value (p_{chose\,smaller}=\frac{1}{2} when the player selects one envelope at random):

\sum\limits_{for\,all\,possible\,smaller\,X}p_x(X)=1 ( condition for valid selection of values)

p_{chose\,smaller}+p_{chose\,larger} =1

E_{ns}= \sum\limits_{all\,smaller\,X}p_x(X) \cdot (p_{chose\,smaller} \cdot X+p_{chose\,larger} \cdot 2X)

The second approach goes instead over all possible values in the selected envelope, where p_{sx} is the probability that the player selects/gets the envelope with value X:

\sum\limits_{for\,all\,possible\,selected\,X}p_{sx}(X)=1 ( condition for valid selection of values)

E_{ns}= \sum\limits_{all\,selected\,X}p_{sx}(X) \cdot X

Both approaches are also applicable when the probability of putting values into the envelopes is not discrete but continuous (which is arguably the more generic approach, since continuous probability selection can be extended to cover discrete cases). In this case p_x(x) is the continuous probability density function for the value placed in the smaller envelope:

\int\limits_{x=0}^{\infty} p_x(x) dx = 1

E_{ns}= \int\limits_{x=0}^{\infty} p_x(x) \cdot (p_{chose\,smaller} \cdot x + p_{chose\,larger} \cdot 2x) dx

And accordingly, if we use p_{sx}(x) as the continuous probability density function for having value x in the envelope we select/get:

\int\limits_{x=0}^{\infty} p_{sx}(x) dx = 1

E_{ns}= \int\limits_{x=0}^{\infty} p_{sx}(x) \cdot x \;dx

Finding the expected value if we always switch (E_{as}) is done in the same way, just swapping the smaller and larger payoffs, where p_{x\,is\,smaller}(X) is the probability that the value X in the player’s envelope is the smaller one (which in this case depends on X, and differs from p_{chose\,smaller}). So:

p_{x\,is\,smaller}(X)+p_{x\,is\,larger}(X) =1

E_{as}= \sum\limits_{all\,smaller\,X}p_x(X) \cdot (p_{chose\,smaller} \cdot 2X+p_{chose\,larger} \cdot X) , or

E_{as}= \sum\limits_{all\,selected\,X}p_{sx}(X) \cdot (p_{x\,is\,smaller}(X) \cdot 2X+p_{x\,is\,larger}(X) \cdot \frac{X}{2}) , or

E_{as}= \int\limits_{x=0}^{\infty} p_x(x) \cdot (p_{chose\,smaller} \cdot 2x+p_{chose\,larger} \cdot x) dx , or

E_{as}= \int\limits_{x=0}^{\infty} p_{sx}(x) \cdot (p_{x\,is\,smaller}(x) \cdot 2x+p_{x\,is\,larger}(x) \cdot \frac{x}{2}) dx

Some variants of the Two Envelopes problem separate the probability distribution and the value in the envelope, by first selecting a random R with some proper probability distribution p_r(R), then putting V(R) in the envelope (a function of R, instead of just putting R in the envelope), and then doubling that value for the second envelope. The previous approach of directly selecting X is a subset of this one, with X=V(R)=R. We can modify the formulas for the expected values for those variants:

E_{ns}= \sum\limits_{all\,R}p_r(R) \cdot (p_{chose\,smaller} \cdot V(R)+p_{chose\,larger} \cdot 2 \cdot V(R))

E_{as}= \sum\limits_{all\,R}p_r(R) \cdot (p_{chose\,smaller}\cdot 2 \cdot V(R)+p_{chose\,larger}\cdot V(R))

For default Two Envelopes variants, where the player is allowed to select the envelope, we always have p_{chose\,smaller} = p_{chose\,larger} = \frac{1}{2}, so:

E_{ns}= \sum\limits_{all\,R}p_r(R) \cdot (\frac{1}{2} V(R)+\frac{1}{2}  2 \cdot V(R)) = \frac{3}{2} \sum\limits_{all\,R}p_r(R) V(R)

E_{as}= \sum\limits_{all\,R}p_r(R) \cdot (\frac{1}{2} 2 \cdot V(R)+\frac{1}{2} V(R)) = \frac{3}{2} \sum\limits_{all\,R}p_r(R) V(R)

Therefore, for all standard variants, the expected value when switching is equivalent to the expected value when not switching (E_{as} \equiv E_{ns}), regardless of the way in which money is put into the envelopes. That means it is the same whether you switch envelopes or not, which resolves the paradox.
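The two formulas above can be evaluated exactly for a concrete discrete variant; a sketch (the helper name and the choice of vU with N=4 are mine):

```python
from fractions import Fraction

def expected_values(p_r, V):
    """E_ns and E_as from the two formulas above, for a discrete
    variant where the player picks an envelope at random (1/2 each)."""
    e_ns = sum(p * (Fraction(1, 2) * V(r) + Fraction(1, 2) * 2 * V(r))
               for r, p in p_r.items())
    e_as = sum(p * (Fraction(1, 2) * 2 * V(r) + Fraction(1, 2) * V(r))
               for r, p in p_r.items())
    return e_ns, e_as

# Example: variant vU with N = 4, i.e. p1(R) = 1/4 for R in {1..4}, V(R) = R.
p_r = {r: Fraction(1, 4) for r in range(1, 5)}
e_ns, e_as = expected_values(p_r, lambda r: r)
print(e_ns, e_as)  # both 15/4, i.e. 3/2 times E[X] with E[X] = 5/2
```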

Infinite Expected values

In some cases expected values may be infinite, even with valid probability distributions. One example is the vA variant discussed later, where p_r(R)=\frac{2^R}{3^{R+1}} and X=2^R. When those variants have the player selecting the envelope, their expected values when switching and when not switching will still be the same as demonstrated above – same as in ‘same formula’ – but they will be infinite.

Some people would say that we cannot compare two infinite expected values and state that they are ‘the same’, and thus cannot use them to resolve the paradox. Others would say that variants with infinite expected values are invalid (there is not enough money in the world), so there is no need to resolve them.

Both of those positions are actually wrong. The second position is easier to disprove – since variants like vA are doable in reality (just write amounts on a cheque instead of putting in actual money; it is irrelevant whether you can draw on that cheque), we cannot ignore them. The first position is harder to disprove, but it relies on the fact that we do not need to directly ‘compare’ the expected values – we just need to prove that they are indistinguishable, in which case it would not matter whether we switch or not, again resolving the paradox.

In this case, we want to prove that when the expected values for switching and not switching have the same formulas, they are indistinguishable even when infinite, and thus it is the same whether we switch or not.

One way to prove it is to imagine two players playing the game many times in parallel: A always chooses not to switch in his games, and B always chooses to switch in his. Can we distinguish the two players just by their total earned amounts when they exit after the same (possibly very large) number of games?

When the game has identical formulas for the expected values when switching and when not, we would not be able to tell which was player A and which was player B, even if the expected value was infinite. While with an infinite expected value their total earned amounts could differ significantly even after millions of games, we would not be able to say which one was lucky enough this time to get that rare very high value in one of their envelopes. In other words, players A and B have an equal chance to exit with the higher amount, and an equal chance of any given lead over the other player.

This makes them indistinguishable, thus proving our point that it is the same whether they switch or not. This principle holds regardless of whether the expected values are infinite, the only difference being that non-infinite expected values tend to result in very similar total earnings for both players. Note that we even have a higher order of ‘indistinguishability’ than we really need: we only needed the total expected values to be indistinguishable, which could hold even when the formulas for the expected values differ but sum to the same total. But when we have the same formulas for the expected values, we also have the same probability for every individual value in the envelopes. In other words, we could not distinguish between players A and B even if we could see the amount they earn after each individual game. For infinite expected values, that means both will show almost exactly the same frequency of the lower values (which appear more often), and will potentially differ only in those much rarer very high values.

The above claims would be harder to prove if the formulas were infinite sums involving both positive and negative terms – as, for example, in expected expected gains. In those cases the sums would not only converge to some finite value or diverge to infinity (with the same ‘indistinguishable’ behavior described here); infinite sums with alternating signs can also result in an undefined value – one that changes periodically as each new element is added to the sum. Such sums can apparently take different values depending on how we rearrange their elements, but those values are only ‘apparent’ – the actual result of such a sum is ‘undefined’. If two values had the same formula with such undefined sums, we could still argue that they are indistinguishable and can be compared; but if such a sum stands alone, as in the expected expected gain case, we could not prove its validity in the infinite case. For the expected values used here, however, the sums and integrals always involve positive values, so there is no ambiguity.

Another way to prove that infinite expected values are comparable is to imagine three players, each playing a single game: A chooses not to switch, while B and C choose to switch. But before they open their envelopes, player C may swap his envelope with either player A or player B, or keep his own. Can he expect a better value if he swaps with B than if he keeps his own envelope? What about A?

Depending on your position regarding “we can compare infinite expected values”, your answer to “Can he expect a better value if he swaps with B than if he keeps his own envelope?” can be:

  • ‘no’: he cannot expect a better value if he swaps with B (for example, because they both played in exactly the same way – both switched). But regardless of why you chose to say ‘no’, it means you are able to ‘compare’ those two cases even when their expected values are infinite – and thus the same logic can be applied to A, with the conclusion that they are indistinguishable
  • ‘undefined’: if you think we are unable to compare infinite expected values, we cannot say whether C or B can expect better. But that makes the choice of B indistinguishable from the choice of A, again making switching and not switching indistinguishable
  • ‘yes’: probably no one will choose this answer. It can be proven invalid but, regardless of the reasoning behind it, it follows the same logic as ‘no’ – it means you accept the ability to compare infinite expected values. So we can claim that two identical expected-value formulas are ‘the same’ even with infinite values, again making switching and not switching indistinguishable

Yet another approach is to establish an even higher order of ‘comparability’ between two expected-value formulas that both have infinite values – not just whether they are indistinguishable, but actually comparing ‘how many times one is better than the other’. Assume we select a random value n (n=0,1,…) with probability \frac{2^n}{3^{n+1}} and put 2^n in the first envelope. Then we select another value n_2 with the same probability distribution and this time put 3 \cdot 2^{n_2} in the second envelope. Which envelope should you select?

In this case the values in the two envelopes do not depend on each other, and the envelopes will even contain different possible numbers (e.g. the second envelope can contain 3, but cannot contain 4). Also, both envelopes have infinite expected values, but it is clear that if the first envelope has expected value E_1, the second envelope has triple that value, E_2 = 3 E_1.
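The claim E_2 = 3 E_1 can be made precise via partial sums: both series diverge, yet their ratio is exactly 3 at every truncation. A sketch using exact fractions (helper name is mine):

```python
from fractions import Fraction

def partial_sum(k, value):
    """Partial sum of sum_n p(n) * value(n) with p(n) = 2^n / 3^(n+1)."""
    return sum(Fraction(2**n, 3**(n + 1)) * value(n) for n in range(k))

for k in (10, 30, 50):
    e1 = partial_sum(k, lambda n: 2**n)       # first envelope holds 2^n
    e2 = partial_sum(k, lambda n: 3 * 2**n)   # second envelope holds 3 * 2^n
    assert e2 == 3 * e1   # ratio is exactly 3 at every truncation
# Each term of E_1 is (4/3)^n / 3, so the partial sums grow without
# bound -- both expected values are infinite, but the 1:3 ratio persists.
print(float(partial_sum(10, lambda n: 2**n)), float(partial_sum(50, lambda n: 2**n)))
```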

Even those subscribing to ‘we cannot compare infinite expected values’ would be hard pressed here not to state that it is always better to select the second envelope. In fact, we could say that selecting the second envelope should, on average, result in triple the gain. In other words, if the relative ratio of expected values is a valid number, it should be valid even when the expected values are infinite. In this case Switch\;ratio= \frac{E_2}{E_1}= 3, and we should switch only if Switch\;ratio > 1, which here is true. Applied to our general formula for any Two Envelopes problem where the player can select the envelope:

Switch\;ratio= \frac{E_{as}}{E_{ns}}= \frac{\frac{3}{2} \sum\limits_{all\,R}p_r(R) V(R)}{\frac{3}{2} \sum\limits_{all\,R}p_r(R) V(R)}=1 \Rightarrow it does not matter if we switch

Thus we proved that when the expected values for switching and not switching have the same formula, even if those result in infinite values, it is indistinguishable whether we switch or not – therefore avoiding the paradox. And since we previously proved that the expected values for switching and not switching always have the same formulas and values when the player can choose the envelope, this resolves the paradox for all standard Two Envelopes variants.

We could drive this logic even further and say that, regardless of whether the expected values are the same for switching and not switching, and regardless of infinite expected values or even invalid probability distributions, either we can distinguish between the player who always switches and the one who never switches, based just on their total earnings, or we cannot. If we can distinguish them, we should choose to do whatever the one with the better total earnings is doing – and there is no paradox, since we should not ‘choose back’ or we would get the lower value. If we cannot distinguish them, then it is the same whether we switch or not – again resolving the paradox.

Expected gain and paradox

An alternative way to resolve the paradox is to show that there is no gain from switching – in other words, that the difference between the expected values when switching and when not switching is zero. A simple formula for the expected gain would be:

If E_{gain} = E_{as} - E_{ns} = 0 \Rightarrow there is no paradox

While this simple formula is easy to understand, it has certain disadvantages: it still requires finding separate formulas for E_{as} and E_{ns}, and it is not very informative (unlike the formulas for E_{as} and E_{ns}, which tell players what to expect from the game). But, most importantly, it is hard to prove the validity of this approach if those expected values are infinite. We can claim resolution of the paradox even with infinite expected values when comparing two values with identical formulas – we only need to accept that values derived from identical formulas are identical and indistinguishable even if infinite. But claiming that subtracting those infinite values results in zero could be considered an additional “extension” of infinite arithmetic, especially since we cannot use ‘indistinguishability’ to prove it.

There is a better approach: the expected expected gain (EEG). Instead of the gain as the difference between total expected values, for each possible value in the selected envelope determine the expected gain as the difference between the expected values when switching and when not switching, for that value only. That is the ‘expected gain’ for that specific value in the envelope; but since the values in the envelope do not all have the same chance of appearing, the total gain is the sum of those individual expected gains multiplied by the probability of each value. In other words, the expected value of the expected gains – hence the double ‘expected’ in the name ‘expected expected gain’, or EEG for short.

While the value of such an expected expected gain would still be less informative to players than the expected values, it avoids the other two disadvantages: it does not require calculating E_{as} and E_{ns}, and in some cases it can avoid infinite results. Just like expected values, formulas for the EEG can be found by summing/integrating over different probability distributions: p1(r) for the generated value in the ‘first’ envelope, the probability of values selected by the player, etc. In any case, proving the resolution of the paradox would in this case amount to:

If EEG = 0 \Rightarrow there is no paradox

But proving the above is still more problematic than proving E_{as} \equiv E_{ns}, especially in the only case where the ‘paradox’ can still remain: infinite expected values. It is theoretically possible that for some specific variants we could have a finite EEG even with infinite expected values. But since such an EEG would involve sums (or integrals) of individual expected gains, that means infinite sums with alternating positive and negative signs – and it is known that such infinite sums can result in ‘undefined’ values: apparently different finite results, depending on how we rearrange the elements of the sum.

When a proof depends on comparing two entities with the same formulas but possibly infinite values (like comparing E_{ns} to E_{as}), we can claim that since those values are indistinguishable they must be considered ‘the same’ – and this line of reasoning might even hold for undefined sums with alternating signs (however you decide to rearrange the sum, applying the same rearrangement to both entities yields the same values). But it cannot be done for a single entity – the proof of the EEG claim does not compare the EEG formula to some other EEG formula, but to zero. And we cannot claim that such an undefined EEG is ‘indistinguishable’ from zero – depending on how we decide to calculate/rearrange such an EEG sum, it will result in different values, while zero will always be zero.

The bottom line is that an EEG with an infinite sum (due to alternating signs) is not a valid method to either prove or disprove the existence of the paradox. Because of that, expected gains (or expected expected gains) are not used in this document to prove the resolution of the paradox – all proofs are based on formulas for expected values, which even in the infinite cases are sums/integrals of always-positive values that either converge to the same finite value or diverge to infinity in the ‘same’, indistinguishable way.

Optimal strategy if we can look into envelope

Finding E_{ns} and E_{as} and showing that they are the same is enough to resolve the paradox, as it shows that it is the same whether we always swap or never swap – which are the only options we have if we cannot look into the envelope we got before deciding whether to switch.

But if we are allowed to look into the envelope, it is possible to make a more informed decision about switching, which should yield an optimal expected value E_{opt} better than E_{ns} and E_{as}. The general approach to the optimal strategy is to switch if we see a smaller value in the envelope and not to switch if we see a larger value. Obviously, “larger” or “smaller” will depend on the probability distribution used when selecting values for the envelopes, but in any case the optimal strategy requires finding an optimal H such that:

If the value in the envelope is X, switch when X \leq H

The above is not only a better but also the optimal approach when the probability and value functions are monotonic. We can define the expected value if we switch below H, E_h(H), which is calculated similarly to E_{ns} and E_{as}, by switching below H and keeping at or above H. Assuming p_{sx}(x) is the probability of getting x in your selected envelope, and p_{smaller}(x) is the probability that x is the smaller of the two values, then the formula for E_h(H), integrating over possible selected values in the envelope, is:

E_h(H)= \int\limits_{x=0}^{H} p_{sx}(x) \cdot [p_{smaller}(x) \cdot 2x+p_{larger}(x) \cdot \frac{x}{2}] dx +  \int\limits_{x=H}^{\infty} p_{sx}(x) \cdot x \;dx

The extreme values for H are zero and infinity, and they correspond to never switching and always switching:

E_h(0) = E_{ns}

E_h(\infty) = E_{as}

In default Two Envelopes variants, where the player can select the envelope, E_h(H) rises from E_{ns} with increasing H up to some H_{opt}, where the expected value is E_{opt} \geq E_{ns}, after which it falls back toward E_{as}=E_{ns} as H goes to infinity.

The actual optimal value H depends on the specific parameters of the problem, like the probability distribution. But since any H between zero and infinity can only increase the expected value compared to always switching or never switching, a rule-of-thumb approach (when the specific distribution parameters are unknown but the player participates in multiple games) would be to switch if X is smaller than half of the largest value seen so far. If participating in only a single game, an approximate optimal strategy without knowing the parameters would be:

Switch if value in opened envelope is smaller than half of whatever you guess could be maximal value in envelopes.
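The threshold strategy can be illustrated on the continuous uniform variant vUc; a Monte Carlo sketch (N = 100 and the function name are my own assumptions):

```python
import random

def play_vUc(H, N=100):
    """One round of vUc: the smaller amount is uniform on [0, N], the
    other envelope holds double; the player looks at his value and
    switches only when it is at most H."""
    x = random.uniform(0, N)
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    mine, other = envelopes
    return other if mine <= H else mine

random.seed(2)
n = 200_000
never  = sum(play_vUc(0)     for _ in range(n)) / n  # H = 0: never switch
always = sum(play_vUc(10**9) for _ in range(n)) / n  # H -> inf: always switch
rule   = sum(play_vUc(100)   for _ in range(n)) / n  # H = half of the max (200)
# never and always both approach 3N/4 = 75, while the threshold rule
# approaches 93.75 -- a clear gain from looking into the envelope.
print(never, always, rule)
```

The threshold H = 100 here is exactly "half of the maximal possible value" (2N = 200), matching the rule of thumb above.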

Solution of Generalized version ( answer to Q1 )

Most variants of the Two Envelopes problem ‘fail’ by not satisfying either claim #2 (for any possible value X in the selected envelope, the probability that X is the smaller amount is 1/2, and that it is the larger amount is also 1/2) or claim #9 (after the switch, reason in exactly the same manner as before, and conclude to switch again).

But instead of specifically pointing out the different failings of different variants, the generic approach is to disprove one of the two common claims that they all share:

c1) Even without looking in first envelope, it is better to switch to other envelope

c2) After you switch to second envelope, same reasoning can be applied and you should switch again

Different variants arrive at these claims by different routes, but they all end with them, leading to the apparent ‘paradox’.

The first claim would be valid if the expected value when the player switches (E_{as}) were larger than the expected value when the player does not switch (E_{ns}). Therefore it is enough either to find formulas for the two expected values and show that E_{as} \equiv E_{ns}, or to calculate the actual expected values, where possible, and show that E_{as} \leq E_{ns}.

The expected value when the player does not switch can be calculated in several ways, but probably the easiest is to sum over every possible value R ( and corresponding value X ) in the first envelope. For discrete probabilities that is:

E_{ns} = \sum\limits_{all\,R}p1(R) \cdot (p_{given\,first\,env}\cdot X_{in\;first\;envelope}+p_{given\,second\,env}\cdot X_{in\;second\;envelope})

Where X_{in\;first\;envelope}= V(R) is the value in the first envelope (the one where the host first put money, not necessarily the one the player took) and X_{in\;second\;envelope} is the value in the second envelope ( 2× the first if doubled, or half the first if halved ):

X_{in\;second\;envelope} = p_{doubled\,first}\cdot 2\cdot V(R) + p_{halved\,first}\cdot \frac{V(R)}{2}

Denoting F=p_{given\,first\,env} and D=p_{doubled\,first}, we get:

X_{in\;second\;envelope} = D\cdot 2\cdot V(R) + (1-D)\cdot \frac{V(R)}{2} = \frac{3D+1}{2} V(R)

E_{ns} = \sum\limits_{all\,R}p1(R) \cdot V(R)\cdot (F+(1-F)\cdot \frac{3D+1}{2})

E_{ns} = \sum\limits_{all\,R}p1(R) \cdot V(R)\frac{1}{2} (1+3D+F-3FD)

It is exactly the same approach for the expected value when not switching with continuous probabilities, only using integration instead of summation:

E_{ns} = \int\limits_{all\,r}p1(r) \cdot (p_{given\,first\,env}\cdot X_{in\;first\;envelope}+p_{given\,second\,env}\cdot X_{in\;second\;envelope}) \cdot dr

E_{ns} = \int\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2} (1+3D+F-3FD) \cdot dr


The expected value when switching is calculated the same way, only the value of the second envelope is used when the first envelope was given ( and vice versa ):

E_{as}= \sum\limits_{all\,R}p1(R) (p_{given\,first\,env}\cdot X_{in\;second\;envelope}+p_{given\,second\,env}\cdot X_{in\;first\;envelope})

E_{as}= \sum\limits_{all\,R}p1(R) \cdot V(R) (F\frac{3D+1}{2}+(1-F))

E_{as} = \sum\limits_{all\,R}p1(R) \cdot V(R)\frac{1}{2} (2-F+3FD)

And again same approach for expected value when switching with continuous probabilities:

E_{as} = \int\limits_{all\,r}p1(r) \cdot (p_{given\,first\,env}\cdot X_{in\;second\;envelope}+p_{given\,second\,env}\cdot X_{in\;first\;envelope}) \cdot dr

E_{as} = \int\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2}(2-F+3FD) \cdot dr

As can be seen, the expected values when switching or not switching have almost the same formula: sums/integrals over the same range, differing only in a constant coefficient determined by the F and D parameters. We denote the common sum/integral part by Z:

Z = \sum\limits_{all\,R}p1(R) \cdot V(R) , for discrete distributions or

Z = \int\limits_{all\,r}p1(r) \cdot V(r)\cdot dr , for continuous distributions

E_{ns} = \frac{1+3D+F-3FD}{2} Z

E_{as} = \frac{2-F+3FD}{2} Z
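Both closed forms can be sanity-checked with a short Monte Carlo simulation of the generalized game; the distribution over R, the identity value function and the D, F values below are arbitrary choices for illustration:

```python
import random

random.seed(1)

# Arbitrary example parameters (not from the text): discrete p1 over R,
# identity value function V(R)=R, constant D and F.
R_vals, R_probs = [1, 2, 3], [0.5, 0.3, 0.2]
D, F = 0.7, 0.4   # P(second envelope is doubled), P(player gets first envelope)

def play(switch):
    first = random.choices(R_vals, weights=R_probs)[0]        # V(R) = R
    second = 2 * first if random.random() < D else first / 2  # doubled or halved
    got_first = random.random() < F
    mine, other = (first, second) if got_first else (second, first)
    return other if switch else mine

n = 400_000
E_ns_sim = sum(play(False) for _ in range(n)) / n
E_as_sim = sum(play(True) for _ in range(n)) / n

Z = sum(p * r for r, p in zip(R_vals, R_probs))   # common factor Z
E_ns = (1 + 3*D + F - 3*F*D) / 2 * Z              # formula above
E_as = (2 - F + 3*F*D) / 2 * Z

print(E_ns_sim, E_ns)   # simulation should land close to the formula
print(E_as_sim, E_as)
```

With these parameters the formulas give E_ns = 1.33·Z and E_as = 1.22·Z, and the simulated averages agree to within sampling noise.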

Comparing those two expected values, we can see that they are not necessarily the same, which leaves some room for paradox. The condition for them to be equal ( and thus eliminate the paradox, since switching and not switching would be the same ) is:

1+3D+F-3FD = 2-F+3FD \Rightarrow E_{ns} \equiv E_{as}

3D+2F = 1+6FD \Rightarrow E_{ns} \equiv E_{as}

We use the \equiv symbol to denote ‘equivalence’ of two formulas, meaning they are ‘exactly’ the same: not only are their values equal, but they are calculated in the same way, over the same integral/sum ranges, and thus remain comparably ‘same’ even when the calculated value is infinite. Notably, all variants of the Two Envelopes problem that follow the original setup and have the player pick one envelope have F=\frac{1}{2} and thus:

E_{ns} \equiv E_{as}  = \frac{3}{4}(1+D) \cdot Z

That means the expected values when switching or not switching are always the same if we let the player choose the envelope, or in any other variant where the host gives the first envelope with probability \frac{1}{2}. Therefore in all of those variants the paradox is resolved – it makes no difference whether you switch or keep the envelope.

There are some variants that do not have F=\frac{1}{2}, like the Nalebuff vN variant. In that particular case F=1, D=\frac{1}{2}, and thus:

E_{ns} = Z but E_{as} = \frac{5}{4}Z

Therefore if you switch you can expect E_{as}/E_{ns}=\frac{5}{4} times the value, even without looking into the envelope. But the Nalebuff variant fails to be a paradox at step #9, since the same reasoning cannot be applied to the second envelope – the envelopes are not indistinguishable. In fact, if you switch back, you can now expect just Z, less than what you could expect if you stayed with the second envelope.

In general, variants that state “one envelope is given to the player with a different chance than the other” have a hard time satisfying indistinguishability of the envelopes – and any variant with the usual \frac{1}{2} chance for the player to get either envelope is proven above to have the same expected value regardless of whether the player switches.

This is true even for variants with infinite expected value – in those cases Z goes to infinity, but it remains identical between switching and not switching. In other words, both switching and not switching have the ‘same’ infinite expected value. While we cannot compare infinite values, we can compare identical formulas that happen to produce infinite values.

There is another interesting condition that guarantees the same expected values when switching or not switching: D=\frac{1}{3}, meaning the value in the second envelope is doubled in \frac{1}{3} of the cases and halved in \frac{2}{3} of the cases. That yields equal expected values that do not even depend on F ( the probability of being given the first envelope ):

E_{ns} \equiv E_{as}  =  Z

This means even Nalebuff-style variants would have the same expected value when switching if the second envelope were doubled in \frac{1}{3} of the cases instead of \frac{1}{2}. But since that does not produce a paradox, no actual Two Envelopes variant uses D=\frac{1}{3}.

As mentioned before, it is possible to generalize the Two Envelopes problem even further (“super-generalization“) by making both the probability for the host to double the value in the second envelope and the probability for the player to get the first envelope functions of R. The solution formulas stay the same, only the constant coefficient (based on D and F) can no longer be extracted in front of the sum/integral, and those elements must remain inside:

E_{ns} = \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2} (1+3D(r)+F(r)-3F(r)D(r))

E_{as} = \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2} (2-F(r)+3F(r)D(r))

An example: we randomly select x \in (2,4,6) for the first envelope with equal chance (p1(2)=p1(4)=p1(6)=\frac{1}{3}); for 2 and 6 we randomly double/halve the second envelope (D(2)=D(6)=\frac{1}{2}) and let the player choose (F(2)=F(6)=\frac{1}{2}), but for 4 we always double the second envelope (D(4)=1) and always give that one to the player (F(4)=0). So we have:

E_{ns}= \frac{1}{2 \cdot 3} [(2+6)\cdot(1+\frac{3}{2}+\frac{1}{2}-\frac{3}{2\cdot 2})+4\cdot(1+3)] = \frac{1}{6} [8\cdot\frac{9}{4}+4\cdot 4] = \frac{17}{3}

E_{as} = \frac{1}{2 \cdot 3} [(2+6)* (2-\frac{1}{2}+\frac{3}{2\cdot 2})+4*(2-0+0)]=\frac{1}{6} [8*\frac{9}{4}+4*2]= \frac{13}{3}
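The two numbers can be verified by plugging each r, with its D(r) and F(r), into the super-generalized formulas; a small exact-arithmetic script:

```python
from fractions import Fraction as Fr

# Super-generalized example: p1 uniform over {2, 4, 6}; D(r), F(r) per value
cases = {2: (Fr(1, 2), Fr(1, 2)),   # r: (D(r), F(r))
         4: (Fr(1),    Fr(0)),      # always double, player always gets second
         6: (Fr(1, 2), Fr(1, 2))}
p1 = Fr(1, 3)

E_ns = sum(p1 * r * Fr(1, 2) * (1 + 3*D + F - 3*F*D) for r, (D, F) in cases.items())
E_as = sum(p1 * r * Fr(1, 2) * (2 - F + 3*F*D)       for r, (D, F) in cases.items())

print(E_ns, E_as)   # 17/3 and 13/3 — not switching is better in this exotic case
```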

In this exotic case it is better not to switch. Such super-generalized variants make it hard to state general conditions for E_{ns} \equiv E_{as} as functions of D(r) and F(r) – which we could do when we only had constant D and F. But they also make it very hard to state any meaningful paradox variant, so further analysis of the super-generalized version offers little to no benefit.

There is one subvariant of the super-generalized version worth considering, and that is the default Two Envelopes problem where the player selects the envelope. We can show that in that case E_{ns} \equiv E_{as} even when the decision to double or halve differs between values. Replacing F(r)=\frac{1}{2} in the previous formulas gives:

E_{ns} = \scriptstyle \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2} (1+3D(r)+\frac{1}{2}-\frac{3}{2}D(r)) = \displaystyle \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{3}{4} (D(r)+1)

E_{as} = \scriptstyle \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{1}{2} (2-\frac{1}{2}+\frac{3}{2}D(r)) = \displaystyle \sum\limits_{all\,r}p1(r) \cdot V(r)\frac{3}{4} (D(r)+1)

Which proves that the expected values when switching or not switching are always the same for the default variant, regardless of any possible way to generate the values in the envelopes:

E_{ns} \equiv E_{as} = \frac{3}{4} \sum\limits_{all\,r} (D(r)+1) \; p1(r) V(r) , for discrete distributions

E_{ns} \equiv E_{as} = \frac{3}{4}\int\limits_{all\,r} (D(r)+1) \; p1(r) V(r)\; dr , for continuous distributions

Optimal solution of Generalized version (answer to Q2)

As discussed in the chapter about the optimal solution, when we can open the envelope we can make a more informed decision whether to switch or not, allowing strategies that yield a better expected value than always switching or never switching. While the actual optimal strategy depends on the probability distributions, the general optimal approach is:

If value in envelope is X, switch when X \leq H_{opt}

The above is not only better but also optimal when the probability and value functions are monotonic. To find that H_{opt}, we need a formula for E_h(H), the expected value when switching whenever our picked envelope contains less than H. The letter ‘H’ was chosen because the optimal threshold is most often close to half of the maximal value in the envelopes, though that obviously depends on the variant parameters.

Finding a general solution formula is possible as long as the inverse function R=\bar{Vi}(x) is available, and as long as both V(r) and \bar{Vi}(x) are monotonic functions. Assumptions and conditions:

  • R_{min} \leq r \leq R_{max}, where R_{min} can be -\infty and R_{max} can be +\infty : possible range of R
  • \forall r\in [R_{min},R_{max}] ,\;\exists \bar{Vi}(x) \text{ such that }  \bar{Vi}(V(r)) = r : existence of the inverse of V(r)
  • both V(r) and \bar{Vi}(x) are monotonic, and X_{min}=V(R_{min}) , X_{max}=V(R_{max}) ( can be ±∞)

Solving for the expected value when we switch for X \leq H follows the same approach as the generalized E_{ns} and E_{as}, only switching between those two at the point X=H. It is easiest to always integrate over the possible values ‘r‘ in the first envelope, and split the formula into four parts, one for each combination of the player getting the first or second envelope and of the second envelope being double or half of the first.

p_{1d} = F \cdot D , probability to get first envelope, with second envelope double of first

p_{2d} = (1-F) \cdot D , probability to get second envelope, which is double of first

p_{1h} = F \cdot (1-D) , probability to get first envelope, with second envelope half of first

p_{2h} = (1-F) \cdot (1-D) , probability to get second envelope, which is half of first

E_h(H)= p_{1d}\cdot E_{1d}(H)+p_{2d}\cdot E_{2d}(H)+p_{1h}\cdot E_{1h}(H)+p_{2h}\cdot E_{2h}(H)

For example, suppose the player selects the second envelope when it holds half the value of the first, and switches if the envelope he got is under H. He will then switch whenever the first envelope contains less than 2H, which means iterating over r= R_{min} . . \bar{Vi}(2H) for the first envelope (and, since he is switching, taking the value of the first envelope, V(r)). On the other hand, he keeps his (second) envelope when its value is over H, meaning the value in the first envelope was over 2H, which means iterating over r= \bar{Vi}(2H) .. R_{max} for the first envelope (and, since he is not switching, taking the value of the second envelope, which is half of the first: \frac{V(r)}{2}). So E_{2h} (the expected value for the case when the player selects the second envelope holding half the value of the first) would be:

E_{2h}(H) = \int\limits_{r=R_{min}}^{\bar{Vi}(2H)} p1(r) \cdot V(r) dr  + \int\limits_{r=\bar{Vi}(2H)}^{R_{max}} p1(r) \cdot \frac{V(r)}{2} dr

We can introduce function for expected value in range, to simplify formula:

Z(a,b)= \int\limits_{r=\bar{Vi}(a)}^{\bar{Vi}(b)} p1(r) \cdot V(r) dr

E_{2h}(H)= Z(X_{min},2H)+\frac{1}{2} Z(2H,X_{max})

Following same logic, formulas for each part of E_h(H) are:

E_{1d}(H)=  2*Z(X_{min},H)+Z(H,X_{max})

E_{2d}(H)=  Z(X_{min},\frac{H}{2})+2*Z(\frac{H}{2},X_{max})

E_{1h}(H)=  \frac{1}{2}Z(X_{min},H)+Z(H,X_{max})

E_{2h}(H)= Z(X_{min},2H)+\frac{1}{2} Z(2H,X_{max})

To simplify, we can use single boundary integral W instead of double boundary integral Z:

\bf W(x)= \int\limits_{r=R_{min}}^{\bar{Vi}(x)} p1(r) \cdot V(r) dr , partial expected value

Z(a,b) =  W(b)-W(a) , true since \bar{Vi}(x) is monotonic

W(x \leq X_{min})=0

W(x \geq X_{max})= W(X_{max}) , meaning W(\infty)= W(X_{max})

Exactly the same calculation works for a discrete probability distribution; the partial expected value W simply becomes a sum instead of an integral:

\bf W(x)= \sum\limits_{R \leq \bar{Vi}(x)} p1(R) \cdot V(R)

Thus above parts of expected value, given different take/double options, correspond to:

E_{1d}(H)=  2*W(H)+W(X_{max})-W(H)= W(H)+W(X_{max})

E_{2d}(H)=  W(\frac{H}{2})+2*W(X_{max})-2*W(\frac{H}{2}) = 2*W(X_{max})-W(\frac{H}{2})

E_{1h}(H)=  \frac{1}{2}W(H)+W(X_{max})-W(H)=W(X_{max})-\frac{1}{2}W(H)

E_{2h}(H)= W(2H)+\frac{1}{2} W(X_{max})-\frac{1}{2}W(2H) = \frac{1}{2} (W(X_{max})+W(2H))

If we group them now in total E_h(H), we get:

    \begin{flalign*}E_h(H)&= p_{1d}\cdot E_{1d}(H)+p_{2d}\cdot E_{2d}(H)+p_{1h}\cdot E_{1h}(H)+p_{2h}\cdot E_{2h}(H) \\&=p_{1d}[W(H)+W(X_{max})]+p_{2d}[2*W(X_{max})-W(\frac{H}{2})] \\&\hphantom{=} + p_{1h}[W(X_{max})-\frac{1}{2}W(H)]+p_{2h}\frac{1}{2} [W(X_{max})+W(2H)] \\&=  W(X_{max})[ p_{1d}+ 2\cdot p_{2d}+p_{1h}+ \frac{p_{2h}}{2}] + W(2H)\frac{p_{2h}}{2} \\&\hphantom{=}  + W(H)[  p_{1d}-\frac{p_{1h}}{2}] - W(\frac{H}{2})p_{2d} \\&= W(X_{max})\frac{1}{2}[2FD+4(1-F)D+2F(1-D)+(1-F)(1-D)] \\&\hphantom{=}+ W(2H)\frac{1}{2} (1-F)(1-D) + W(H)\frac{1}{2}[ 2FD-F(1-D)] - W(\frac{H}{2}) (1-F)D \\&= \frac{1-3FD+F+3D}{2}W(X_{max}) + \frac{(1-F)(1-D)}{2}W(2H) \\&\hphantom{=} + \frac{F(3D-1)}{2}W(H)-D(1-F)W(\frac{H}{2}) \end{flalign*}

The final formula for the expected value when the player switches whenever the envelope contains less than H is:

\scriptstyle\bf E_h(H)= \frac{1-3FD+F+3D}{2}W(X_{max}) + \frac{(1-F)(1-D)}{2}W(2H) + \frac{F(3D-1)}{2}W(H)-D(1-F)W(\frac{H}{2})

We can check that H values of zero and infinity correspond to the ‘never switch’ E_{ns} and ‘always switch’ E_{as} cases, and match exactly the previous formulas from the solution of question Q1:

E_h(0)=E_{ns}= \frac{1-3FD+F+3D}{2}W(X_{max})

E_h(\infty) = E_{as} = \scriptstyle\frac{1}{2}[  1-3FD+F+3D + (1-F)(1-D) + F(3D-1) - 2D(1-F)  ] W(\infty) \displaystyle= \frac{2+3FD-F}{2} W(X_{max})
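As a sketch, the final E_h(H) formula is straightforward to implement once W is known. Here we borrow the continuous uniform distribution of the vUc variant below as an example (V(r)=r, X_{max}=N, so W(x)=\frac{x^2}{2N} capped at X_{max}) and check the two endpoint identities numerically:

```python
N = 100.0
F, D = 0.5, 1.0   # player picks either envelope; second envelope always doubled

def W(x):
    # partial expected value for the continuous uniform example:
    # W(x) = x^2 / 2N, constant (= W(X_max)) beyond X_max = N
    x = min(max(x, 0.0), N)
    return x * x / (2 * N)

def E_h(H):
    # final formula: expected value of the strategy 'switch if X <= H'
    return ((1 - 3*F*D + F + 3*D) / 2 * W(N)
            + (1 - F) * (1 - D) / 2 * W(2 * H)
            + F * (3*D - 1) / 2 * W(H)
            - D * (1 - F) * W(H / 2))

E_ns = (1 - 3*F*D + F + 3*D) / 2 * W(N)   # should equal E_h(0)
E_as = (2 + 3*F*D - F) / 2 * W(N)         # should equal E_h(infinity)

assert abs(E_h(0.0) - E_ns) < 1e-9 and abs(E_h(4 * N) - E_as) < 1e-9
print(E_ns, E_as, E_h(N))   # both endpoints give 3N/4; the threshold H=N does better
```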

Since most variants use either 0, \frac{1}{2} or 1 for the parameters F ( how often the player gets the first envelope ) and D ( how often the second envelope is doubled instead of halved ), we can make a table of the solution coefficients for each combination, with an honorable mention of D=\frac{1}{3} ( which always results in E_{ns}=E_{as}, regardless of F ):

E_ns and E_as are given as multiples of W(∞); the last four columns are the E_h(H) coefficients of W(∞), W(2H), W(H) and W(H/2):

| F | D | variants | E_ns | E_as | W(∞) | W(2H) | W(H) | W(H/2) |
|-----|-----|----------|------|------|------|-------|-------|--------|
| 1/2 | 1   | vS, std  | 3/2  | 3/2  | 3/2  | –     | 1/2   | −1/2   |
| 1/2 | 1/2 | std      | 9/8  | 9/8  | 9/8  | 1/8   | 1/8   | −1/4   |
| 1/2 | 0   | std      | 3/4  | 3/4  | 3/4  | 1/4   | −1/4  | –      |
| 1   | 1   |          | 1    | 2    | 1    | –     | 1     | –      |
| 1   | 1/2 | vN       | 1    | 5/4  | 1    | –     | 1/4   | –      |
| 1   | 1/3 |          | 1    | 1    | 1    | –     | –     | –      |
| 1   | 0   |          | 1    | 1/2  | 1    | –     | −1/2  | –      |
| 0   | 1   |          | 2    | 1    | 2    | –     | –     | −1     |
| 0   | 1/2 |          | 5/4  | 1    | 5/4  | 1/4   | –     | −1/2   |
| 0   | 0   |          | 1/2  | 1    | 1/2  | 1/2   | –     | –      |
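The table rows follow mechanically from the solution formulas, so any row can be regenerated with a few lines of code (a sketch using exact fractions):

```python
from fractions import Fraction as Fr

def coefficients(F, D):
    # Ens and Eas (as multiples of W(inf)) plus the four E_h(H) coefficients
    ens = (1 + 3*D + F - 3*F*D) / 2
    eas = (2 - F + 3*F*D) / 2
    cW  = (1 - 3*F*D + F + 3*D) / 2    # W(X_max) term, always equal to ens
    c2H = (1 - F) * (1 - D) / 2        # W(2H) term
    cH  = F * (3*D - 1) / 2            # W(H) term
    cH2 = -D * (1 - F)                 # W(H/2) term
    return ens, eas, cW, c2H, cH, cH2

for F in (Fr(1, 2), Fr(1), Fr(0)):
    for D in (Fr(1), Fr(1, 2), Fr(1, 3), Fr(0)):
        print(F, D, coefficients(F, D))
```

For example, the Nalebuff row (F=1, D=1/2) comes out as Ens=1, Eas=5/4, with E_h(H) coefficients (1, 0, 1/4, 0).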

Function E_h(H) can be used to find optimal H_{opt} which result in maximal value for E_h(H), by comparing E_h(H) for H values that are either:

  • solutions to \frac{dE_h(H)}{dH}=0 , potential maximums of expected value
  • values of H where E_h(H) is discontinuous ( eg integral boundaries, distribution boundaries …)
  • solutions to \frac{d\frac{E_h(H)}{E_{ns}}}{dH}=0, potential maximums of ratio to expected value without switching
  • solutions to \frac{d(E_h(H)-E_{ns})}{dH}=0, potential maximums of difference to expected value without switching

Since the first part of E_h(H) always equals E_{ns}, when finding maximums it is usually easiest to use the last approach, with a difference function that keeps only the last three coefficients from the table above:

\Delta E(H)= E_h(H)-E_{ns} =  \scriptstyle \frac{(1-F)(1-D)}{2}W(2H) + \frac{F(3D-1)}{2}W(H)-D(1-F)W(\frac{H}{2})

\frac{d\Delta E(H)}{dH} =0  \Rightarrow yields H that may be optimal/maximal

Finding those maximums requires the derivative of the partial expected value function W(H):

\bf Wd'(x)= \frac{dW}{dx} = \frac{p1(\bar{Vi}(x)) \cdot x}{\frac{dV}{dr}(\bar{Vi}(x))}

Wd'(x > X_{max}) = 0 , since W(x) is constant beyond X_{max}

Applying the chain rule to the W(2H) and W(\frac{H}{2}) terms gives:

\frac{d\Delta E(H)}{dH} = \scriptstyle (1-F)(1-D)\,Wd'(2H) + \frac{F(3D-1)}{2}Wd'(H)-\frac{D(1-F)}{2}Wd'(\frac{H}{2}) \displaystyle = 0

The range of values in the first envelope is [X_{min}, X_{max}], but since the second envelope can contain half or double of those, the potential range of values in the second envelope is [\frac{X_{min}}{2}, 2 \cdot X_{max}], which is also the possible range for H. Since W(x) is constant for x \geq X_{max} ( so Wd' vanishes there ), the entire possible range H \in [\frac{X_{min}}{2}, 2 \cdot X_{max}] splits into three cases whose solutions may be the optimal H:

\scriptstyle H \in [\frac{X_{min}}{2},\frac{X_{max}}{2}] \Rightarrow  \bf (1-F)(1-D)\,Wd'(2H) + \frac{F(3D-1)}{2}Wd'(H)-\frac{D(1-F)}{2}Wd'(\frac{H}{2}) = 0

\scriptstyle H \in [\frac{X_{max}}{2}, X_{max}] \Rightarrow  \bf \frac{F(3D-1)}{2}Wd'(H)-\frac{D(1-F)}{2}Wd'(\frac{H}{2}) = 0 , since the W(2H) term is already constant here

\scriptstyle H \in [X_{max}, 2 X_{max}] \Rightarrow  \bf \frac{d\Delta E(H)}{dH} = -\frac{D(1-F)}{2}Wd'(\frac{H}{2}) \leq 0 , so E_h(H) cannot increase in this range and its only candidate is the boundary H=X_{max}

Candidates for the optimal H_{opt} are the solutions to the above equations (when those solutions exist), plus the four boundary values \frac{X_{min}}{2}, \frac{X_{max}}{2}, X_{max}, 2 X_{max} (which may be optimal when the equations have no solutions). The optimal H_{opt} is the candidate with the highest E_h(H) value.

The same approach can be used for discrete probability distributions, only the second formula Wd'(x)= \frac{p1(\bar{Vi}(x)) \cdot x}{\frac{dV}{dr}(\bar{Vi}(x))} may yield only an approximate optimum, so it is advisable to use the first one, \bf Wd'(x)= \frac{dW}{dx}, differentiating the actual sum. The boundary candidates \frac{X_{min}}{2}, \frac{X_{max}}{2}, X_{max}, 2 X_{max} remain the same for the discrete case.

vD: Default variant where player select envelope

The default variant of the Two Envelopes problem is the one described in the initial definition, where the only defined parameter is that the player selects randomly between the two offered envelopes with equal chance, so F=\frac{1}{2}.

The fact that the player has an equal chance to select either envelope is enough to give the same expected value for switching and not switching. From the general solution, the formulas for the expected value when the player never switches, always switches, or switches when the value in the envelope is under H are, respectively:

E_{ns} = \frac{1+3D+F-3FD}{2} W(X_{max})

E_{as} = \frac{2+3FD-F}{2} W(X_{max})

\scriptstyle E_h(H)= \frac{1-3FD+F+3D}{2}W(X_{max}) + \frac{(1-F)(1-D)}{2}W(2H) + \frac{F(3D-1)}{2}W(H)-D(1-F)W(\frac{H}{2})

When we replace F=\frac{1}{2}, we get:

\bf E_{ns}= \frac{3}{4}(1+D) W(X_{max})

\bf E_{as} = \frac{3}{4}(1+D) W(X_{max}) \Rightarrow E_{as} \equiv E_{ns}

\scriptstyle\bf E_h(H)= \frac{3}{4}(1+D) W(X_{max}) + \frac{1-D}{4}W(2H) + \frac{3D-1}{4}W(H)-\frac{D}{2}W(\frac{H}{2})

Where function W retains same definition as in general solution:

W(x)= \int\limits_{r=R_{min}}^{\bar{Vi}(x)} p1(r) \cdot V(r) dr or W(x)= \sum\limits_{R \leq \bar{Vi}(x)} p1(R) \cdot V(R)

Wd'(x)= \frac{dW}{dx} = \frac{p1(\bar{Vi}(x)) \cdot x}{\frac{dV}{dr}(\bar{Vi}(x))} , derivative when \bar{Vi}(V(r))=r

That proves E_{as} \equiv E_{ns}: the expected values are the same whether the player switches or not, as long as the player selects randomly between the two offered envelopes. In other words, regardless of the probability distribution and how the host selects the money in the envelopes, switching or not switching makes no difference, which resolves the paradox for every subvariant of this type – almost all of them, except the Nalebuff variant.

As shown before, this equality of expected values when switching or not switching holds even if D is not constant but varies with r:

E_{ns} \equiv E_{as} = \frac{3}{4} \sum\limits_{all\,r} (D(r)+1) \; p1(r) V(r) , for discrete super-generalization

E_{ns} \equiv E_{as} = \frac{3}{4}\int\limits_{all\,r} (D(r)+1) \; p1(r) V(r)\; dr , for continuous super-generalization

This conclusion is valid even for infinite expected values, since the formulas for those expected values are identical. But variants with infinite expected value will not have an ‘optimal strategy’ in general: the expected value is the same not only when the player always switches or never switches, but also when he switches below any potential optimal value H. Looking at the ratio of the expected value with a threshold strategy to the expected value when always/never switching:

\scriptstyle E_h(H)= E_{ns} + \frac{1-D}{4}W(2H) + \frac{3D-1}{4}W(H)-\frac{D}{2}W(\frac{H}{2})

Optimal\;Ratio = \frac{E_h(H)}{E_{ns}} = 1 + \frac{(1-D)W(2H) + (3D-1)W(H)-2D\,W(\frac{H}{2})}{3(1+D)\,W(X_{max})}

When the expected values for switching and not switching are the same but infinite, the ‘Optimal Ratio’ equals one, meaning we cannot get a better result with any possible ‘switch if x \leq H‘ strategy:

E_{ns} \rightarrow \infty \Rightarrow Optimal Ratio \rightarrow 1 + 0 = 1

Therefore the answer to question Q2, ‘when should the player switch?’, is: ‘when his envelope contains X \leq H_{opt}, unless the expected value is infinite, in which case any strategy is the same and thus optimal’. Note that the answer to question Q1, ‘does it matter whether the player switches if he is not allowed to look inside the envelope?’, remains the same, ‘no, it does not matter’, even for infinite expected values.

vS: Standard default variant where second envelope is doubled

This is a subvariant of the default variant vD: the player still selects an envelope, but we additionally specify that the value in the second envelope is double the value in the first ( where ‘first’ and ‘second’ denote the order in which the host put money in them, and the player can select either with equal probability ). This is probably the most standard version of the Two Envelopes paradox, and it covers all subvariants like vU, vA and vAc, where the player can select either envelope (F=\frac{1}{2}) and the value in the second envelope is doubled (D=1).

When we apply those parameters F=\frac{1}{2} and D=1 to either general solution or just apply parameter D=1 to solution of vD variant, we get:

\scriptstyle\bf E_{ns} \equiv E_{as} = \frac{3}{2}W(X_{max}) \displaystyle \Rightarrow same expected value whether switching or not, which resolves the paradox

\scriptstyle\bf E_h(H)= \frac{3}{2}W(X_{max}) + \frac{1}{2}W(H)-\frac{1}{2}W(\frac{H}{2}) , expected value if switching when the opened envelope has value \leq H

When we apply the parameters to the solution for optimal H: since D=1, the possible values in the envelopes are H \in [X_{min}, 2 X_{max}], so the number of ranges is reduced ( there is no [\frac{X_{min}}{2}, X_{min}] range ). The partial expected value formula and its derivative are unchanged for this subvariant:

W(x)= \int\limits_{r=R_{min}}^{\bar{Vi}(x)} p1(r) \cdot V(r) dr for continuous distributions

W(x)= \sum\limits_{R \leq \bar{Vi}(x)} p1(R) \cdot V(R) for discrete distributions

Wd'(x)= \frac{dW}{dx} \cong \frac{p1(\bar{Vi}(x)) \cdot x}{\frac{dV}{dr}(\bar{Vi}(x))} , where second one is only approximate for discrete

Therefore candidates for the optimal value are solutions ( if they exist ) to any of the following, where X ranges over the possible values in the smaller envelope:

\scriptstyle H \in [X_{min}, 2 X_{max}] \Rightarrow  \displaystyle \bf p1(\bar{Vi}(\frac{H}{2})) = 2\, p1(\bar{Vi}(H)) from ‘always better’ condition

\scriptstyle H \in [X_{min},X_{max}] \Rightarrow E_h(H)= \frac{3}{2}W(X_{max}) + \frac{1}{2}W(H)-\frac{1}{2}W(\frac{H}{2}) \Rightarrow  \bf 2\,Wd'(H_{opt}) = Wd'(\frac{H_{opt}}{2})

\scriptstyle H \in [X_{max}, 2 X_{max}] \Rightarrow E_h(H)= 2 W(X_{max}) -\frac{1}{2}W(\frac{H}{2}) , non-increasing in H, so no interior candidate

or H_{opt}= X_{max} , middle boundary

The actual optimal value \bf H_{opt} is the candidate from above with the largest expected value \bf E_h(H). It is usually the solution of one of the equations when it exists within the boundaries, or the middle boundary H_{opt}= X_{max} otherwise ( since the other two boundaries correspond to E_h(X_{min})=E_{ns} and E_h(2 X_{max})=E_{as}, which were proven not to exceed the optimum: E_{ns} \equiv E_{as} \leq E_{opt} ).

So for any subvariant of the vS standard variant ( and those are almost all presented here, except the Nalebuff variant ) it holds that the expected value when switching equals the expected value when not switching: E_{ns} \equiv E_{as} = \frac{3}{2}W(X_{max}).

Another formula that can be derived for the vS variant is the expected value when switching on a specific value \chi seen in the envelope, useful when describing ‘paradox’ situations. Note that this ‘\chi‘ denotes a possible value in the selected envelope, not a possible value in the first envelope:

p1_x(\chi)= p(\chi is in first envelope) = p1(\bar{Vi}(\chi))

p2_x(\chi)= p(\chi is in second envelope) = p(\frac{\chi}{2} is in first envelope) = p1(\bar{Vi}(\frac{\chi}{2}))

p_{\chi s}= p(\chi is smaller) = \frac{p1_x(\chi)}{p1_x(\chi)+p2_x(\chi)} = \frac{1}{1+\frac{p1(\bar{Vi}(\frac{\chi}{2}))}{p1(\bar{Vi}(\chi))}}

\bf E_{sw}(\chi)= expected value if switch = p(\chi is smaller)*2\chi + p(\chi is larger)*\frac{\chi}{2} = \bf \frac{\chi}{2}(3 p_{\chi s} + 1)

\bf Ratio_{sw} =\frac{E_{sw}(\chi)}{\chi} = 2*p_{\chi s}+\frac{1-p_{\chi s}}{2} = \bf \frac{3}{2} p_{\chi s} + \frac{1}{2}

‘Always better’ condition: \forall \chi \Rightarrow E_{sw}(\chi) \geq \chi \Rightarrow \bf p1(\bar{Vi}(\frac{\chi}{2})) \leq 2 p1(\bar{Vi}(\chi)) for \forall \chi

When the above ‘always better’ condition is satisfied, we can state that it is always better to switch, because the expected value when switching exceeds the value kept, for any possible value in the envelope. It is put in quotes because it is often used as an argument even when it does not actually hold for all possible values in the envelope. But it is a useful equation even then, since the solution of \scriptstyle p1(\bar{Vi}(\frac{H}{2})) = 2\, p1(\bar{Vi}(H)) will often yield the optimal H_{opt}.

Note that the ‘always better’ condition and Ratio_{sw} depend on the value function X=V(R) and on the distribution p1(r) of probable values in the first envelope. This ratio is useful to demonstrate the ‘paradox’, but it has exactly the same fail points as any specific paradox variant: p(\chi is smaller) is not the same for all possible values of \chi. For example, all variants that claim equal probability of having \chi and \frac{\chi}{2} in the envelopes get an apparently satisfied ‘always better’ condition, with p_{\chi s}=\frac{1}{2} and Ratio_{sw} = \frac{5}{4}.
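As an illustration of how p_{\chi s} and Ratio_{sw} vary with \chi, here is a small sketch using a discrete uniform distribution over 1..N (the vU variant described below), where V(R)=R so \bar{Vi} is the identity:

```python
from fractions import Fraction as Fr

N = 100

def p1(x):
    # vU distribution: uniform over the integers 1..N, zero elsewhere
    return Fr(1, N) if x == int(x) and 1 <= x <= N else Fr(0)

def ratio_sw(chi):
    # Ratio_sw = (3/2) * p_chi_smaller + 1/2, with Vi(x) = x
    p_first, p_second = p1(chi), p1(chi / 2)
    p_smaller = p_first / (p_first + p_second)
    return Fr(3, 2) * p_smaller + Fr(1, 2)

print(ratio_sw(40))    # 5/4 : even value <= N has equal chance smaller/larger
print(ratio_sw(41))    # 2   : odd value must be the smaller one
print(ratio_sw(120))   # 1/2 : value > N must be the larger one
```

The ratio is not a uniform \frac{5}{4}: the losing values \chi > N are exactly what cancels the gains, as argued for the vU variant below.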

vX: standard variant where we directly choose value X

This is a subvariant of the standard variant where we do not use the intermediary random variable R but instead define the random distribution directly over the value X in the first envelope. In other words, V(R)=R ( or X=R ), while keeping all other parameters of the parent variants ( D=1 from the standard variant vS, and F=\frac{1}{2} from the default variant vD ).

This keeps all the conclusions of the standard variant vS, and the formulas for expected values are unchanged since they depend on W:

\scriptstyle\bf E_{ns}= E_{as} = \frac{3}{2}W(X_{max})

\scriptstyle\bf E_h(H)= \frac{3}{2}W(X_{max}) + \frac{1}{2}W(H)-\frac{1}{2}W(\frac{H}{2})

But it simplifies the formulas for optimal values, since W itself does not use V(R):

  • continuous: W(x)= \int\limits_{r=X_{min}}^{x} p1(r) \cdot r \; dr , Wd'(x)= \frac{dW}{dx} = p1(x) \cdot x
  • discrete: W(x)= \sum\limits_{r \leq x} p1(r) \cdot r , Wd'(x)= \frac{dW}{dx} \approx p1(x) \cdot x

Optimal values are in the range H \in [X_{min}, 2 X_{max}], either as solutions to the first two conditions or as the boundary listed in the third ( where X_{min} and X_{max} are the minimal and maximal values for the first/smaller envelope ):

\scriptstyle H \in [X_{min}, 2 X_{max}] \Rightarrow  \displaystyle \bf p1(\frac{H}{2}) = 2 p1(H) from ‘always better’ condition

\scriptstyle H \in [X_{min},X_{max}] \Rightarrow  E_h(H)= \frac{3}{2}W(X_{max}) + \frac{1}{2}W(H)-\frac{1}{2}W(\frac{H}{2}) \Rightarrow \bf 2\,Wd'(H_{opt}) = Wd'(\frac{H_{opt}}{2}) ,

\scriptstyle H \in [X_{max}, 2 X_{max}] \Rightarrow  E_h(H)= 2 W(X_{max}) -\frac{1}{2}W(\frac{H}{2}) , non-increasing in H, so no interior candidate,

or H_{opt}=X_{max} at the middle boundary

vU: simple uniform variant

Probably the first variant people think of when hearing about the Two Envelopes paradox, it assumes that every value in the smaller envelope has an equal chance to appear:

First envelope can contain any value of 1,2,3,..N dollars with equal probability, and double of that amount in second envelope. You may pick one envelope. Without inspecting it, should you switch to other envelope ?

When we calculate the ratio of expected values when switching versus not switching, we get the claimed \frac{5}{4} from the default paradox – it ‘appears’ that it is always better to switch.

This variant fails on claim #2 of the paradox ( that any value in the selected envelope has an equal chance to be the larger or the smaller amount ). Possible values in the selected envelope range from 1 to 2N: if the first envelope has 1..N, the second has 2..2N, and you choose between them randomly. For some selected values claim #2 may hold ( e.g. even numbers X \leq N in the selected envelope, where the expected value when switching is \frac{5}{4}X, a gain of +\frac{X}{4} ), but it is easy to see that it does not hold for all possible selected values. Any odd number must be the smaller value and thus does not have an “equal chance to be larger” – although for a smaller value X this only increases the expected value when switching ( to 2X, a gain of +X ). On the opposite side are selected values X>N, which must be the larger values, so any switching results in a loss ( expected \frac{X}{2}, a gain of -\frac{X}{2} ). But since the selected values X where switching loses (-\frac{X}{2}) are larger than the values where switching gains ( +\frac{X}{4} or +X ), they exactly cancel each other: there is no gain from switching without looking into the envelope, and thus there is no paradox.

Since this vU variant is a subvariant of the standard vX variant, we can use the vX solution formulas to find the expected values for switching and not switching. The parameters of this variant are p1(R)=\frac{1}{N} for R \in \{1,2,\dots,N\} ( discrete uniform probability distribution for the first envelope ), X=V(R)=R ( the probability applies directly to the value X in the envelope ), F=\frac{1}{2} ( the player selects an envelope at random with equal chance ) and D=1 ( the second envelope is always double the first ). Thus:

W(x)= \sum\limits_{R \leq \bar{Vi}(x)} p1(R) \cdot V(R) = \sum\limits_{r=1}^{x} \frac{r}{N} = \frac{x (x+1)}{2N}

E_{ns}= E_{as} = \frac{3}{2}W(X_{max}) = \frac{3}{2} \frac{N (N+1)}{2N}= \frac{3}{4}(N+1)
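The closed form E_{ns} = \frac{3}{4}(N+1) itself can be checked by enumerating all equally likely outcomes (a quick brute-force sketch; N is an arbitrary choice):

```python
from fractions import Fraction as Fr

N = 40
total = Fr(0)
for r in range(1, N + 1):        # value in first envelope, probability 1/N
    for mine in (r, 2 * r):      # player keeps either envelope, probability 1/2
        total += Fr(1, N) * Fr(1, 2) * mine

assert total == Fr(3 * (N + 1), 4)   # matches E_ns = 3(N+1)/4
print(total)
```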

As mentioned before, the expected value when switching is the same as when not switching, making it irrelevant whether the player switches and resolving the paradox. But we can go further and ask question Q2: “If the player looks inside the envelope, when should he switch?”. Using the optimal approach “switch if the envelope contains X \leq H“, we can plug the variant parameters into the formulas for the vX subvariant:

\scriptstyle E_h(H)= \frac{3}{2}W(X_{max}) + \frac{1}{2}W(H)-\frac{1}{2}W(\frac{H}{2})

\scriptstyle E_h(H)= \frac{3}{4}(N+1) + \frac{H (H+1)}{4N}-\frac{H (H+2)}{16N}

Finding the optimal H with maximal E_h(H) requires the derivative of the partial expected value W, which can be found either from the first, exact formula or from the second formula, which is exact for continuous distributions but only approximate for discrete ones:

Wd'(x)= \frac{dW}{dx} = \frac{x}{N}+\frac{1}{2N}

Wd'(x) \approx p1(x) \cdot x \approx \frac{x}{N}

Since in this case it is easy to find the exact Wd'(x)=\frac{x}{N}+\frac{1}{2N}, we can use that one instead of the approximation ( both lead to the same conclusion ). Candidates for the optimal value per range:

\scriptstyle H \in [1,N] \Rightarrow   2\,Wd'(H_{opt}) = Wd'(\frac{H_{opt}}{2}) \Rightarrow no solution in range ( 4H+2 = H+1 gives H=-\frac{1}{3} )

\scriptstyle H \in [N, 2N] \Rightarrow E_h(H) non-increasing \Rightarrow no solution in range

or H_{opt}=N

The first two derivative equations do not yield actual maxima, so comparing E_h(H) for all three candidates for the optimal value shows that \bf H_{opt}= N, with optimal expected value:

E_{opt}= E_h(H_{opt}) = E_h(N) = \frac{3}{4}(N+1) + \frac{N (N+1)}{4N}-\frac{N (N+2)}{16N}= \bf  \frac{15N+14}{16}

Optimal ratio = \frac{E_{opt}}{E_{ns}} = \frac{15N+14}{12N+12}

The 'optimal ratio' compares the optimal expected value ( switching whenever the envelope holds no more than N ) to the expected value without looking into the envelope ( which is the same whether switching or not ). For large N the optimal ratio approaches \frac{5}{4}, which means the optimal strategy is around +25% better than always switching or never switching.
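These closed-form results are easy to check by brute force. The sketch below ( plain Python with exact fractions; an illustration, not part of the original derivation ) enumerates every equally likely outcome for a small N:

```python
from fractions import Fraction

def expected_value(N, switch):
    """Exact expectation over r uniform in 1..N and the 50/50 envelope pick;
    switch(x) -> True means the player swaps after seeing x."""
    total = Fraction(0)
    for r in range(1, N + 1):                      # first envelope: r, second: 2r
        for mine, other in ((r, 2 * r), (2 * r, r)):
            total += Fraction(1, 2 * N) * (other if switch(mine) else mine)
    return total

N = 10
E_ns  = expected_value(N, lambda x: False)         # never switch
E_as  = expected_value(N, lambda x: True)          # always switch
E_opt = expected_value(N, lambda x: x <= N)        # switch iff X <= H_opt = N

assert E_ns == E_as == Fraction(3 * (N + 1), 4)    # matches 3/4 (N+1)
assert E_opt == Fraction(15 * N + 14, 16)          # matches (15N+14)/16
# scanning every threshold confirms that H = N is optimal
assert max(expected_value(N, lambda x, h=h: x <= h)
           for h in range(1, 2 * N + 1)) == E_opt
print(E_ns, E_opt)                                 # 33/4 41/4
```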

vUc: continuous uniform variant

This is another subvariant of the vX variant, almost the same as the vU uniform variant but with a continuous probability distribution ( envelopes contain cheques with potentially fractional values ).

The first envelope can contain any value between 0 and N dollars with equal probability, and double that amount is in the second envelope. You may pick one envelope. Without inspecting it, should you switch to the other envelope ?

The probability density function is p1(x)= 1/N with x \in [0,N], and all other parameters are the same as in the discrete uniform variant vU ( F=\frac{1}{2}, D=1, X=R ).

Using continuous formulas from vX, we get :

W(x)= \int\limits_{r=0}^{x} \frac{r}{N} dr= \bf \frac{x^2}{2N}

\scriptstyle\ E_{ns}= E_{as} = \frac{3}{2}W(N) =  \bf \frac{3N}{4}

And optimal switching values as :

Wd'(x)= \frac{dW}{dx} = \frac{x}{N}

\scriptstyle H \in [0,N] \Rightarrow  H_a=0 , \bf E_h(H)= \frac{3N}{4} + \frac{3 H^2}{16N}

\scriptstyle H \in [N, 2N] \Rightarrow  H_b=0 , \bf E_h(H)= N - \frac{H^2}{16N}

or H_{opt}=N

Since there are no valid solutions to the equations, the optimal solution is H_{opt}=N, and we can see that the optimal ratio is always +25%, similar to the discrete version vU:

\bf H_{opt}=N

\bf E_{opt}= \frac{15N}{16}

Optimal ratio = \frac{E_{opt}}{E_{ns}} = \bf \frac{5}{4} = +25% \forall N
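The continuous results can be spot-checked with a quick Monte Carlo simulation ( a sketch; N=100 and the sample size are arbitrary choices, not from the article ):

```python
import random

random.seed(1)
N, TRIALS = 100.0, 200_000
e_ns = e_as = e_opt = 0.0
for _ in range(TRIALS):
    r = random.uniform(0, N)                       # first envelope value
    mine, other = random.choice([(r, 2 * r), (2 * r, r)])
    e_ns  += mine                                  # never switch
    e_as  += other                                 # always switch
    e_opt += other if mine <= N else mine          # switch iff X <= H_opt = N
e_ns, e_as, e_opt = e_ns / TRIALS, e_as / TRIALS, e_opt / TRIALS
print(e_ns, e_as, e_opt)   # expect values close to 3N/4 = 75, 75 and 15N/16 = 93.75
```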

vA: ‘always better’ variant with probability \frac{2^n}{3^{n+1}}

This is a variant where the selection process for the money in the envelopes is chosen to "always" result in a better outcome if you switch ( hoping to keep claims #2/#6 always true ), while still letting you select the envelope and thus keeping the envelopes indistinguishable ( making claim #9 always true ).

A discrete random value n ( n=0,1, … ) is selected with probability \frac{2^n}{3^{n+1}}, and 2^n dollars are put in the first envelope while double that amount is put in the second envelope. You may pick one envelope. Without inspecting it, should you switch to the other envelope ?

It can be shown that, except for n=0 ( where swapping gives you double the expected value, 2 instead of 1 ), for every other value a=2^n in your envelope the expected value if you swap is better (\frac{11}{10}a), so apparently it is always better to switch. And since the envelopes are indistinguishable, the same logic can be applied to the other envelope, suggesting to switch back – leading to infinite switches and the paradox.

One feature of this variant is that it has an infinite expected value, and a "resolution" of the paradox is often suggested as 'since it is impossible to have an infinite amount of money, any problem with infinite expected value is invalid'. But that reasoning is not exactly valid, since the problem could be restated as writing any value on cheques and putting them in the envelopes – regardless of whether it is actually possible to cash those cheques.

Probably the easiest way to resolve this paradox is to recast the problem as a variant with a limited maximal value 2^M ( thus n=0..M-1 ), calculate the expected value if you always switch compared to the expected value if you never switch – and then see how those compare as M goes to infinity. To keep a proper probability distribution, we use p(n)=\frac{3^M}{3^M-2^M}\frac{2^n}{3^{n+1}} ( which properly sums to 1 over the range n=0..M-1 )

It will be shown that, when we limit the maximal amount in an envelope to 2^M, the expected value for switching is exactly the same as for not switching ( \frac{3}{2}\frac{4^M-3^M}{3^M-2^M}). While for every value in the envelope up to 2^{M-1} it is better to switch ( with a small gain of \frac{x}{10}, or +x for the smallest value 1 ), for the 'last' possible value 2^M it is worse to switch, since the other envelope can only be smaller – resulting in a much larger loss of -\frac{x}{2}=-2^{M-1} that exactly cancels the previous smaller gains. This holds for any M, even as M goes to infinity, proving that switching and not switching are always the same – resolving the paradox.

Calculating the expected value if we never switch (E_{ns}) can be done by :

E_{ns}= \sum\limits_{for\,all\,possible\,smaller\,X}p_x(X)*(p_{chose\,smaller}*X+p_{chose\,larger}*2X)

E_{ns}= \sum\limits_{n=0}^{M-1}p(n)*(\frac{1}{2}2^n+\frac{1}{2}*2^{n+1}) = \frac{3^M}{3^M-2^M}\sum\limits_{n=0}^{M-1}\frac{2^n}{3^{n+1}}2^{n-1}(1+2)

E_{ns}= \frac{3^M}{3^M-2^M}\sum\limits_{n=0}^{M-1}\frac{2^{2n-1}}{3^n}=\frac{3^M}{3^M-2^M}\frac{3}{2}\frac{4^M-3^M}{3^M}= {\bf \frac{3}{2}\frac{4^M-3^M}{3^M-2^M}}

Calculating the expected value if we always switch (E_{as}) can be done the same way, only swapping 2X and X:

E_{as}= \sum\limits_{for\,all\,smaller\,X}p_x(X)*(p_{chose\,smaller}*2X+p_{chose\,larger}*X)

E_{as}= {\bf \frac{3}{2}\frac{4^M-3^M}{3^M-2^M}}

Unsurprisingly, it yields the same result as E_{ns}. We can also calculate E_{as} in a different way, by summing over all possible selected values instead of, as above, over all possible values in the smaller envelope:

E_{as}= \sum\limits_{for\,all\,possible\,selected\,X}p_{sx}(X)*(p_{X\,is\,smaller}*2X+p_{X\,is\,larger}*\frac{X}{2})

E_{as}=\sum\limits_{n=0}^{M}p_{sn}(n)*(p_{smaller}(n)*2*2^n+p_{larger}(n)*\frac{2^n}{2})

This is a more complicated way; denote the probability of selecting the envelope with 2^n as p_{sn}(n), and the probability that 2^n is the smaller value as p_{smaller}(n) :

p_{sn}(n= 1..{M\!-\!1})=\frac{p(n-1)+p(n)}{2} ,\;  p_{sn}(0)=\frac{p(0)}{2} ,\; p_{sn}(M)=\frac{p(M-1)}{2}

p_{smaller}(n=1..M-1)=\frac{2}{5} ,\; p_{smaller}(0)=1 ,\; p_{smaller}(M)=0

When summed over the range of possible selected values in the envelope, including the special cases for n=0 and n=M, it results in exactly the same expected value:

E_{as}(M)=\;E_{ns}(M)=\;{\bf \frac{3}{2}\frac{4^M-3^M}{3^M-2^M}}

A third way to find the expected values is to use the previously proven general solution formula, with possible R \in 0..M-1 for values in the first/smaller envelope, ie. X_{max}=2^{M-1} :

W(x)= \sum\limits_{R \leq \bar{Vi}(x)} p1(R) \cdot V(R)

E_{ns}= E_{as} = \frac{3}{2}W(X_{max})= \frac{3}{2} \sum\limits_{R=0}^{M-1} \frac{3^M}{3^M-2^M}\frac{2^R}{3^{R+1}} 2^R = \frac{3}{2} \frac{3^M}{3^M-2^M} \sum\limits_{R=0}^{M-1} \frac{4^R}{3^{R+1}}

E_{ns}= E_{as} =  \frac{3}{2} \frac{3^M}{3^M-2^M} ((\frac{4}{3})^M-1) = \;{\bf \frac{3}{2}\frac{4^M-3^M}{3^M-2^M}}

We can see that the expected value if we always switch is the same as the expected value if we never switch, regardless of how large M is – which resolves the paradox, since it makes no difference whether you switch or not.

This holds even as M goes to infinity, when this modified variant becomes vA : the ratio E_{as}/E_{ns} remains 1, and it remains the same whether switching or not. Namely:

\lim_{M \to \infty} p(n)= \lim_{M \to \infty} \frac{3^M}{3^M-2^M}\frac{2^n}{3^{n+1}}= \frac{2^n}{3^{n+1}} (same probability distribution as vA)

\lim_{M \to \infty}\frac{E_{as}(M)}{E_{ns}(M)} =1 ( expected values if switching or not switching remain same )

Thus the paradox does not exist even if we allow infinite expected values – when we compare switching to not switching, not only are the expected values comparably the same, but even the probability of earning any specific value is the same – for example, a player who always switches will get 4 dollars just as often as a player who never switches. This equality of switching vs not switching holds even if we do not invoke any 'economic' limits like the maximal amount of money in the world.
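The equality E_{as}(M)=E_{ns}(M) for the truncated variant can be confirmed exactly ( a Python sketch with exact fractions, added here as an illustration ):

```python
from fractions import Fraction

def truncated_expectations(M):
    """n = 0..M-1 with p(n) = (3^M/(3^M-2^M)) * 2^n/3^(n+1);
    envelope pair is (2^n, 2^(n+1)), each picked with chance 1/2."""
    norm = Fraction(3**M, 3**M - 2**M)
    e_ns = e_as = Fraction(0)
    for n in range(M):
        p = norm * Fraction(2**n, 3**(n + 1))
        small, large = 2**n, 2**(n + 1)
        e_ns += p * Fraction(small + large, 2)     # keep the randomly picked envelope
        e_as += p * Fraction(large + small, 2)     # take the other envelope instead
    return e_ns, e_as

for M in (1, 5, 20):
    e_ns, e_as = truncated_expectations(M)
    assert e_ns == e_as == Fraction(3 * (4**M - 3**M), 2 * (3**M - 2**M))
print("E_as(M) = E_ns(M) = 3/2 (4^M-3^M)/(3^M-2^M) verified for M = 1, 5, 20")
```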

vAc: continuous ‘always better’ variant

The previous discrete 'always better' variant can also be presented with a continuous probability distribution:

A continuous random value r ( 0 \leq r < \infty ) is selected with probability density function p(r)=3 \ln(\frac{3}{2}) \frac{2^r}{3^{r+1}}, and 2^r dollars are put in the first envelope while double that amount is put in the second envelope. You may pick one envelope. Without inspecting it, should you switch to the other envelope ?

Just like in the previous example, it can be shown that whatever value X we get in the envelope, it has a \frac{2}{5} chance of being the smaller value. The probability that the selected X is the smaller value is the same as the probability that its related R is the smaller one ( where R=log_2(X) ), and R can appear in the selected envelope only if it was the smaller value put in the first envelope ( with probability density p(r) ) or the larger value put in the second envelope ( in which case the smaller envelope had density p(r-1) ). Therefore the probability that R is smaller is:

p_{smaller\;r}= \frac{p(r)}{p(r)+p(r-1)}= \frac{1}{1+\frac{p(r-1)}{p(r)}} = \frac{1}{1+\frac{3}{2}}= \frac{2}{5}

Therefore the expected value if we switch would be \frac{2}{5} \cdot 2x+ \frac{3}{5}\cdot \frac{x}{2}= \frac{11}{10}\cdot x , so it appears always beneficial to switch. And since the two envelopes are indistinguishable, the same logic should apply to the other one, leading to the paradox.

This paradox can be resolved in the same way as the discrete variant above, by making a subvariant problem where r is limited to some maximal value M, and then comparing the expected values when always or never switching as M goes to infinity. That approach would again show that E_{as}(M)= E_{ns}(M) for any M.

Another approach is to find formulas for E_{as} and E_{ns} using the solution for the standard default variant (vS) of the Two Envelopes problem, where the second envelope is doubled, and show that they are equal:

E_{ns}= E_{as} = \frac{3}{2}W(X_{max})

W(x)= \int\limits_{r=R_{min}}^{\bar{Vi}(x)} p1(r) \cdot V(r) dr

E_{ns}= E_{as} = \frac{3}{2} \int\limits_{r=0}^{\infty} 3 ln(\frac{3}{2}) \frac{2^r}{3^{r+1}} 2^r dr = \frac{9}{2} ln(\frac{3}{2}) \int\limits_{r=0}^{\infty}  \frac{4^r}{3^{r+1}} dr

Since \int\limits_{r=0}^{\infty}  \frac{4^r}{3^{r+1}} dr diverges, the expected values are actually infinite – but they are still comparable and indicate that it makes no difference whether we switch or not.

vN: Nalebuff asymmetric variant

Money is put in the first envelope, and a coin is tossed to decide whether double or half that amount will be put in the second envelope. You are given the first envelope. Without inspecting it, should you switch to the other envelope ?

It is a variant of the original problem: instead of the player being allowed to select an envelope ( with probability 1/2 ), he is always given the first envelope. Unlike the original variant, for the Nalebuff V2 variant claims #2 and #6 are always correct. But it fails on step #9, since the envelopes are not indistinguishable as they were in the original problem where you picked one with a 1/2 chance. Here you know that you were given the first envelope, for which the second envelope has an equal chance of double or half regardless of its current value, so it is indeed always better for you to switch. But the second envelope does not have the same property, so the two envelopes are not indistinguishable and we cannot apply the same logic to the other envelope – step #9 fails and there is no paradox: you should switch once and keep the other envelope.

A simple example: let the first envelope be chosen from 2,4,6. The second envelope can hold double or half, so the equally likely outcomes are (2,1)(2,4)(4,2)(4,8)(6,3)(6,12). For every possible value in the first envelope there is indeed an equal chance that the second envelope is double or half. But that is clearly not true for the second envelope: if it holds 1 or 3, the other envelope can only be double; if it holds 8 or 12, the other envelope can only be half. In short, swapping back from the second envelope will not give you the expected \frac{5}{4}x for any x.
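This asymmetry is easy to tabulate ( a short Python sketch over the 2,4,6 example above ):

```python
from fractions import Fraction

firsts = (2, 4, 6)
# six equally likely (first, second) outcomes: a coin decides double or half
outcomes = [(a, b) for a in firsts for b in (2 * a, Fraction(a, 2))]

e_keep   = sum(Fraction(a) for a, b in outcomes) / len(outcomes)
e_switch = sum(b for a, b in outcomes) / len(outcomes)
assert e_switch == Fraction(5, 4) * e_keep   # switching from the FIRST envelope gains 5/4
print(e_keep, e_switch)                      # 4 5

# but every possible second-envelope value b pins the first envelope down
# uniquely, so the 5/4 reasoning cannot be reused for swapping back
for a, b in outcomes:
    assert [a2 for a2, b2 in outcomes if b2 == b] == [a]
```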

Summary

This article shows that any version of the Two Envelopes 'paradox' in which the player selects one of two envelopes with equal chance is not a paradox, since the expected value if the player keeps the selected envelope is the same as the expected value if the player switches to the other envelope. It also shows that any version where the envelopes are not selected with equal chance is not a paradox either, since the envelopes are then not indistinguishable and the same calculation does not apply to both of them.

The article gives a generalized definition of the Two Envelopes problem which includes not only the default versions but also the asymmetric Nalebuff version ( cases where the player does not choose the envelope ), and derives formulas for the expected values when switching or not switching.

In addition to resolving the paradox using the formulas for the 'always switching' and 'never switching' expected values, the article also derives a formula for 'optimal switching' when the player is allowed to look inside the selected envelope before deciding whether to switch, using the "switch if the value is less than H" approach ( with a derived formula for H_{opt} ).

Besides general solutions for the expected values and the optimal strategy for the fully generalized problem, the article also works through several specific variants – including those with infinite expected values, which are shown to be comparable using the 'indistinguishability' approach.

Analysis of living population density per country

By definition, population density is a measurement of population per unit area. The classical way to calculate the population density of a country is to divide its entire population count by its entire area.

But that simple approach has several shortcomings, and a number of alternative measurement methods for population density exist. The measurement analyzed in this article is one closely related to the level of urbanization :

Living population density – density metric which measures the density at which the average person lives.

The final result of this analysis is presented in the table and interactive map below, based on NASA SEDAC/CIESIN 30 arc-second gridded world population data for 2020, adjusted to the UN WPP 2015 population count. The rest of this article explains the methodology in detail and includes the resources needed to recalculate the data.

(*) A star next to a country name in the above table marks a group of countries; see below for their definition



On Wikipedia there are a few approaches mentioned for calculating a similar type of density:

  1. Median density – a density metric which measures the density at which the average person lives. It is determined by ranking the census tracts by population density, and taking the density at which fifty percent of the population lives at a higher density and fifty percent lives at a lower density.
  2. Population-weighted density – a density metric which measures the density at which the average person lives. It is determined by calculating the standard density of each census tract, assigning each a weight equal to its share of the total population, and then adding the segments.

While both methods have an advantage over the simple population density definition, they also have a few shortcomings when we are talking about 'living population density per world country':

  • they are focused on densities of urban areas and are thus not suitable for calculating the density of an entire country
  • they rely on 'census tract' population data, which is available in the US but is not easily available for most countries in the world

For the purposes of this analysis I implemented a method similar to 'population-weighted density', but instead of US census tract data it uses population and area data for each 30 arc-second grid cell in the world ( from NASA SEDAC ). That allows it to avoid the above issues, and it can be described as :

  • Living density of a country – a density metric which measures the density at which the average citizen of a country lives. It is determined by calculating the population density of each 30 arc-second cell of the country, assigning each a weight equal to its share of the total country population.

“Classical” average population density per world country

To quickly check how well classical population density matches the urbanization levels of world countries, we can use the interactive map below ( which looks similar to the map in the Wikipedia entry about population density ) :

That map immediately demonstrates that "classic" population density does not relate well to expected "urbanization" levels. For example, Canada has a classic population density 10 times lower than the US, but even a cursory internet search will show that Canada has 70%-80% of people living in urban/metropolitan areas, about the same as the US ! So while urbanization levels are similar between the US and Canada, the classic population density of the US is an order of magnitude higher than Canada's. Similar results would show for Russia, Australia, Brazil and many other countries – their urbanization levels do not match their classical population density.

"Classic" population density is obviously not a good indicator of urbanization level. The main reason is that it shows a density that would be realistic only if the population were evenly spread across the entire country. And that is not true in any country, but especially not in countries like Canada ( or Australia, or … ) where most people are concentrated in several big cities and the rest of the land is mostly uninhabited.

To demonstrate, let's imagine a country that consists of a single city – for example, Singapore. Imagine that it has a 100km2 area, with 1 million people evenly spread across the city – the classic population density would be 10,000 ppl/km2, and it would accurately reflect the situation in which every citizen of that city-state lives.

Now imagine that Singapore buys an additional 9,900km2 of empty land adjacent to it, expanding its area to a total of 10,000km2, while still having the same 1M people living in the original 100km2 city. Classic population density would now show that Singapore has just 100 ppl/km2 … 100 times lower than before ! And yet, every citizen of Singapore still lives in the same city as before, under the same conditions – meaning every citizen still lives surrounded on average by 10,000 people per square kilometer. This demonstrates that "classic" population density is a bad indicator of the average population density as seen by the average citizen.

That last sentence is the key to the problem – we need a "living" population density that shows the average situation in which citizens of a country live, instead of the "classical" density which simply shows total population over total country area. In the imaginary 'Singapore' example above, we need a number that shows livingDensity= 10,000ppl/km2 even when Singapore has 1M people over 10,000km2 ( because all of them live on just the 100km2 city area ), instead of a number that shows classicDensity=100 ppl/km2.

Mathematically, the formula for such density would be:

(1)   \begin{equation*}  living\;density= \frac{\sum\limits_{for\;each\,citizen}density\;where\;that\,citizen\;lives}{Total\;country\;population}   \end{equation*}

It is identical to formula (2) below, where population%(A) represents the fraction of the total country population living in area A ( in that square kilometer ), and density(A) represents the population density in that area A :

(2)   \begin{equation*}  living\;density= \sum\limits_{A=for\;each\,km^2}population\%(A) * density(A)  \end{equation*}

In our Singapore example, that equals a sum over 100km2 of city area, each km2 with population%= 10,000ppl/1Mppl= 1% and density=10,000ppl/km2, plus a sum over the remaining 9900km2 of uninhabited area, each km2 with population%=0 and density=0. In total, imaginary Singapore living density = 100*(1%*10,000ppl/km2)+9900*(0%*0) = 100*100ppl/km2= 10,000 ppl/km2 … exactly what we wanted ! It shows that the average imaginary Singapore citizen lives at 10,000 ppl/km2 density, both before and after expanding into the empty area.

Living population density per world country

Living population density ( henceforth just "living density"; the other one will be called "classic density" ) is a value that should much better represent the urbanization of countries, as mentioned previously.

To calculate living density values for each country in the world, I used NASA statistical data for the world grid from :

NASA SEDAC (Socioeconomic Data and Applications Center)

They provide earth population data in different formats, but most suited for the above calculation are the GIS data in GEOtiff format with 30 arc-second resolution, because 30 arc-seconds corresponds closely to 1km2 : there are 2 such cells per minute, and 60*2 per degree, so 360*60*2=43,200 around the entire 40,000km of Earth's circumference – which averages to square-ish areas with around 40,000km/43,200 = 0.93km sides, or around 0.86 km2 on average ( less than 1km2 each ). Also, 30 arc-seconds is the highest resolution ( most detailed data ) available.
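The cell-size arithmetic above can be restated in a few lines ( the 40,000 km circumference is the same rough figure used in the text ):

```python
EARTH_CIRCUMFERENCE_KM = 40_000
cells_around_earth = 360 * 60 * 2   # degrees * minutes * two 30-arc-second cells per minute
side_km = EARTH_CIRCUMFERENCE_KM / cells_around_earth
print(cells_around_earth)           # 43200
print(round(side_km, 2))            # 0.93 km
print(round(side_km ** 2, 2))       # 0.86 km2 per cell (at the equator; cells shrink toward the poles)
```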

Population distribution based on the latest SEDAC data ( for 2020, but adjusted to UN WPP 2015 counts ) is visualized in the image below:

Image resolution is 10800 x 4500. Each pixel represents an area smaller than 4x4km. Red color marks cities, brown urban and green rural areas.

That data presents a certain problem for the previous 'living density' formula (2), because that formula is fixed to exact 1km2 units, while the NASA GEO data is given for "almost" each 1km2, but not exactly – as explained above, it is closer to 0.9km2 and, more importantly, it is not always the same area for each cell.

So, to generalize previous formula and make it more suitable, we observe that:

(3)   \begin{equation*}  population\%(A) =  population(A)/Total\;Population  \end{equation*}

(4)   \begin{equation*}  density(A) =  population(A)/area(A)  \end{equation*}

Therefore, if we substitute (3) and (4) in (2), we get:

(5)   \begin{equation*}   population\%(A) * density(A) = population(A)^2/area(A)/Total\;Population \end{equation*}

And since Total Population is a constant that does not depend on the selected area, we can rewrite the previous formula (2) for living density as:

(6)   \begin{equation*}  \textbf{living\;density =  } \frac{\sum\limits_{A=for\;each\,area}\frac{population(A)^2}{area(A)} }{Total\;Population}  \end{equation*}

For the above formula to be correct, the areas can be of different sizes but each area must be evenly populated ( homogeneous ), regardless of its size. Also, to really reflect 'living' density, it should include only land area, excluding bodies of water.

Applying this formula to our previous 'Singapore' example shows that it simplifies the calculation – we only consider two 'areas' : the 100km2 city area and the 9900km2 outside area: living density = (population_city^2/city_area + population_outside^2/outside_area)/TotalPopulation = (1M^2/100km2+0^2/9900km2)/1M = 1M^2/100km2/1M = 1M/100km2= 10,000 ppl/km2 … the same correct result as before.
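Formula (6) translates directly into code. The sketch below ( a hypothetical Python helper, not the article's C# implementation ) reproduces the imaginary Singapore result:

```python
def living_density(cells):
    """cells: (population, land_area_km2) pairs, each assumed homogeneously
    populated -- formula (6): sum(pop^2/area) / total population."""
    total_pop = sum(p for p, _ in cells)
    return sum(p * p / a for p, a in cells if a > 0) / total_pop

# imaginary 'Singapore': 100 city cells of 1 km2 with 10,000 people each,
# plus 9,900 km2 of empty land
cells = [(10_000, 1.0)] * 100 + [(0, 1.0)] * 9_900
print(living_density(cells))    # 10000.0 -> unaffected by the empty land
print(1_000_000 / 10_000)       # classic density drops to 100.0 ppl/km2
```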

The new formula is especially suitable for the already mentioned NASA GEO data, since it allows summing over areas of different sizes. And because each cell there is under 1km2 in area, we can safely assume that within such a small area the population is evenly/homogeneously distributed. The calculation needs only three data sources, all at 30 arc-second resolution :

  1. population count – how many people for each cell
  2. land area – actual land area for each cell
  3. national grid – to which country each cell belongs
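A minimal sketch of the whole pipeline, on a toy grid, could look like this ( pure Python for illustration; the real application is C# and the real grids are 43,200 x 21,600 cells, so they are processed in bands of rows as described in the next section ):

```python
from collections import defaultdict

def living_density_by_country(pop_grid, area_grid, nation_grid, band=2):
    """Process the three grids 'band' rows at a time, accumulating
    sum(pop^2/area) and total population per country code."""
    num = defaultdict(float)                  # sum of pop^2/area per country
    pop = defaultdict(float)                  # total population per country
    for start in range(0, len(pop_grid), band):
        for row_p, row_a, row_n in zip(pop_grid[start:start + band],
                                       area_grid[start:start + band],
                                       nation_grid[start:start + band]):
            for p, a, c in zip(row_p, row_a, row_n):
                if p > 0 and a > 0:           # skip water / uninhabited / bad cells
                    num[c] += p * p / a
                    pop[c] += p
    return {c: num[c] / pop[c] for c in pop}

# toy 4x2 grid: country 1 has dense city cells, country 2 sparse rural cells
pop_grid    = [[10_000, 10_000], [10_000, 0], [100, 100], [100, 100]]
area_grid   = [[1.0, 1.0]] * 4
nation_grid = [[1, 1], [1, 1], [2, 2], [2, 2]]
densities = living_density_by_country(pop_grid, area_grid, nation_grid)
print(densities)   # {1: 10000.0, 2: 100.0}
```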

Technical difficulties in calculating and showing “living density”

Processing of NASA geo data sets had several technical issues that needed to be overcome:

  • GIS data was too large for normal arrays : 43200 x 21600 float numbers, resulting in almost 1 billion array elements with almost 4GB size. Solution was to use gcAllowVeryLargeObjects enabled="true" in C#
  • Data needed too much RAM : even if the above allows C# to handle such arrays, they took too much RAM ( especially since 3 or 4 GEO data arrays needed to be processed at the same time, as listed above : population, area, nations… ). Solution was to process data in configurable bands – for example, 6000 lines per band, so around 4 bands for the total data.
  • GEO tiff data needs decoding: to avoid tangential effort of decoding geotiff format, I used OSGeo.GDAL nuget from https://gdal.org
  • there were errors in GEO data: negative populations, land areas etc… solution was to detect cases where they occurred for 'uninhabited land' ( like deserts and ice ) or 'no land' ( like lakes and seas )
  • there were statistical errors in GEO data: some countries summed up to twice their real population in the 2020 data ( like Romania ). Solution was to use the "UN adjusted" datasets
  • final result visualization: while I made my own visualization maps and sortable tables ( in same app used to process geo data), for embedding in HTML posts I used datawrapper.de

Analysis of “Living” population density per world country

Applying the above method to process the geo data and get the living population density for each country in the world resulted in an exported CSV file that ( in addition to country codes, population and area ) includes three calculated values for each world country:

  • living density – average population density where citizens live
  • classic density – simple total country population divided by total country area
  • concentration index – ratio of living over classic densities

That resulting CSV file can be downloaded from ‘get the data’ link under each map, or in download section at the end of this article ( bundled with my application for processing original NASA data ).

The map below demonstrates the resulting living densities for world countries:

The difference from classical densities is immediately visible – especially if we look at countries like Canada or Australia. Now they have similar ( even slightly higher ) living population density than the US – indicating that fewer people live in small rural areas, and more people are concentrated in cities.

Most countries around the world have a living population density in the range of 1500-4000 ppl/km2 .

Exceptions are some countries with higher living density like China ( 5900 ), Brazil ( 6100 ), Egypt ( 12,500 ) and especially Mexico ( over 14,000 ppl/km2 ). While Egypt was expected to have a high living density ( most people are forced to live close to the river Nile ), Mexico was not so expected – but presumably countries with large inhospitable areas will tend to have more of their population concentrated in cities and fewer people in those (inhospitable) rural areas. Examples are countries with deserts ( Morocco, Egypt ), jungle ( Brazil ), or in general a lot of barren/infertile land ( China, Mexico ).

There are also countries with lower living density, like Germany (1030 ppl/km2) or Poland or number of other European countries.

For some countries the reason for a low living density could be the lower quality of the NASA geo data. Some of those countries ( like Bulgaria, North Macedonia, Moldova ) appear to miss city areas in the NASA data set – instead they have city population spread evenly over larger 'regional' areas, so they appear as lower density while still keeping the same population. That could be the result of census data for those countries being available only at the regional level, as opposed to smaller areas. It must be noted that the density numbers presented here depend on the accuracy of the underlying NASA geo data, more specifically on the data resolution. If the resolution of the data is worse than 30 arc-seconds (~1km2) for some countries, they can still have an accurate total population, but their cities may be shown as larger low-density areas instead of smaller high-density areas, and their living density will show lower than actual. But those countries are in the minority and can be visually detected on the 10800 x 4500 map above, or in the application from the download section – those countries will miss red urban areas at the positions of their cities and will instead have evenly spread brown or green population areas, often within inner region/county borders. For most countries, the NASA geo data appears to be valid for population and density distribution at each km2.

We can see that the US has a similar living population density ( around 2250 ppl/km2 ) to many European countries, but not all – because there is quite a difference among European countries, as mentioned before, even comparing countries with similar population, economic development and quality of geo data, like the UK, France and Germany – which have 4180, 2800 and 1000 ppl/km2 living population densities respectively. But it is almost certain that individual US states would also differ in living density, so the best way to compare the US to Europe is to aggregate all European countries, as presented below – where EU is the 27 countries of the European Union ( without the UK ), Europe consists of countries entirely on the continent ( 44 countries and 7 smaller territories ), Europe+ is the wiki definition of Europe ( with Russia, Turkey, Azerbaijan, Armenia, Kazakhstan and Georgia ), and NA refers to Northern America, which contains the US, Canada, Greenland and a few small countries :

Country  | Population  | Area [km2] | Classic density [ppl/km2] | Living density [ppl/km2]
US       | 333,421,581 | 9,090,390  |  37 | 2,244
EU       | 441,176,165 | 4,039,020  | 109 | 2,161
Europe   | 597,383,727 | 5,742,315  | 104 | 2,296
Europe+  | 859,073,201 | 25,627,288 |  34 | 2,564
NA       | 371,143,896 | 20,467,094 |  18 | 2,345

It demonstrates that while the US has a significantly lower 'classic' density than EU/Europe ( three times lower, due to a smaller population over a larger area ), they have practically the same living population density, around 2200 ppl/km2. The extended European definition, which adds large countries like Russia and Kazakhstan, results in a huge area ( 2-3 times larger than the US ), but with the larger population it amounts to about the same classic density as the US – while still having a living density at similar levels ( around 2500 ppl/km2 ). A similar case is Northern America, which adds two large and mostly empty countries ( Canada and Greenland ) to the US, resulting in two times lower classical density – but even there, the living density remains similar ( 2,345 ppl/km2 ).

This indicates that on average the US and Europe have similarly high levels of urbanization ( while differences certainly exist between individual US states or European countries ). It also demonstrates that, wherever most of the population is concentrated in cities, it does not matter how empty or large the rest of the country is – the living density ( density seen by the average citizen ) will usually be close to the average city density.

Uneven concentration of population

When looking at both the "classic" population density and the "living" one, some countries show a much larger difference than others.

In fact, the ratio of living density to classic density is a direct indicator of how "uneven" the population concentration in a country is. In a hypothetical country where the population is ideally evenly distributed across the entire area, the two densities would be the same ( as in our hypothetical Singapore example while the entire area was just the 100km2 of the city ). But when a country has most of its population crammed into several cities with large uninhabited areas, the living density becomes much higher than the classic density. Examples are Canada or Australia – both have a less evenly spread population across the country than, for example, the US.

Therefore, we could state formula:

(7)   \begin{equation*}  uneven\;index =  \frac{living\;density}{classic\;density}   \end{equation*}

So I made a third map, showing the above mentioned 'uneven index' as a "ratio of population densities", which measures how homogeneous ( even spread of population, low ratio index ) or non-homogeneous ( uneven spread of population, high ratio index ) the population of each country is:

Countries that are especially uneven are some large countries with small populations concentrated in a few cities ( like Canada, Australia, Mongolia ), barely populated countries like Greenland, or largely desert countries like Mauritania and Namibia.

But more interesting, and surprising, are the "most even" countries: quite a different mix, like several central European countries ( Germany, Poland, the Low Countries ), south-east European countries ( Bulgaria, Croatia, Bosnia,… ), India, some African countries ( South Sudan, Uganda ) etc. Very different countries in development level, size and, most interestingly, in population densities ( living and classical ). Yet all of them share the same trait: their population is more evenly distributed across the country than in most other countries.

Some fun/interesting questions related to concentration levels :

Q1: What do Germany, India and North Korea have in common ?

A1: They have more evenly spread populations than most other countries.


Q2: If we know that an uncontrolled reentry of large space junk will hit a certain country, but we do not know where, what is the probability it will endanger some citizens of that country ?

A2: Inversely proportional to the ‘concentration index’ of that country. So the US would have 1 in 60 ( under 2% ), Canada 1 in 800 ( around 0.1% ) and San Marino 1 in 1 ( 100% ). Basically, darker colored countries on the “concentration” map would have a lower chance of some citizen being hit by space debris ( under the assumption that we somehow know which country will be hit, but not where ).

Downloadable resources

In order to process the NASA geo data and export a summary country CSV file, I made an application that can be downloaded in ZIP form from :

Since it is a C# application, it requires .NET Framework 4.7.2 ( which is included in Windows 10 April 2018 Update, Version 1803 and later, or can be installed independently ).

Once it is unzipped to its folder, the notable files are :

  • GeoTiff.exe – main executable
  • saved_*.* files : cached pre-calculated files from the latest NASA data, used for this article and for the linked maps
  • predef_*.* files : used in case of a ‘Recalc’ with new NASA data ( contain names/codes for countries and cities )
  • exportedCountries.csv : summary file with country data, used for importing into maps

While the main purpose of this application was to process the GEO data ( calculating population density ) and make export files, it also has limited visualization capabilities. Both countries and cities can be explored on the geo map shown within the application, sorted by population/area/density, searched by name, and visualized on the map ( by double-clicking a city or country row in the tables, or right-clicking on the map ). The main map is made from NASA geo data directly, linked to a smaller embedded Google map.

In addition to standard UN countries, the application also calculates aggregated data for Northern America and for Europe ( in three variants, since “Europe” is not an exactly well defined term ) :

As mentioned before, I made the application to also detect the largest connected cities in the world. Cities are listed in a separate tab, with their “connected metropolitan” area and population. Those numbers depend on configurable parameters : ‘city density’ ( default 2000 ppl/km2 ) and ‘range’ ( default max 6 km of non-city ‘jump’ allowed ). Any change of those parameters requires a new recalculation ( using the NASA geo files ). The largest “connected city” in the world under the default parameters is :

Note that this is not a production level application – it does not have a polished UI and performance is not optimized for visualization ( only for data processing ). The only reason it has visualization at all is the lack of 3rd party visualization tools for cities or arbitrary areas. For countries, 3rd party tools like Datawrapper are good for visualization and I have used them for the maps in this article. But for cities I was forced to make my own solution in this application.
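The “connected city” grouping described above can be sketched as a flood fill over the density grid ( a simplified version with made-up data and cell-based jumps — the real application uses km-based ‘city density’ and ‘range’ parameters, and this is not its actual algorithm ):

```python
# Simplified sketch of 'connected city' detection: cells with density above
# a threshold belong to the same city if they are within `max_jump` cells
# of each other (Chebyshev distance), found via BFS flood fill.
from collections import deque

def connected_cities(density, threshold=2000, max_jump=1):
    rows, cols = len(density), len(density[0])
    city = [[density[r][c] >= threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    groups = []
    for r in range(rows):
        for c in range(cols):
            if not city[r][c] or seen[r][c]:
                continue
            group, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                group.append((cr, cc))
                # bridge gaps of up to max_jump non-city cells
                for nr in range(cr - max_jump, cr + max_jump + 1):
                    for nc in range(cc - max_jump, cc + max_jump + 1):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and city[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
            groups.append(group)
    return groups

grid = [[3000, 0, 2500],
        [0, 0, 0],
        [0, 0, 4000]]
# with max_jump=1 the three city cells stay separate; max_jump=2 merges them
print(len(connected_cities(grid, max_jump=1)))
print(len(connected_cities(grid, max_jump=2)))
```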

Optional data files are needed for a new recalculation, and a warning with download instructions will be displayed if ‘Recalculate’ is attempted without them. Those files can be downloaded from the NASA SEDAC site :

  1. population count – how many people for each cell
  2. land area – actual land area for each cell
  3. national grid – to which country each cell belongs
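The core of the processing — combining those three grids into per-country totals — can be sketched roughly like this ( in Python rather than the application’s C#, with illustrative names and made-up cell data ):

```python
# Sketch of aggregating NASA SEDAC-style grid cells into country totals.
# Each cell carries (country_code, population, land_area_km2); the names
# and structure are illustrative, not the application's actual code.
from collections import defaultdict

def aggregate_countries(cells):
    pop = defaultdict(float)
    area = defaultdict(float)
    for country, p, a in cells:
        pop[country] += p
        area[country] += a
    # return (population, area, classic density) per country
    return {c: (pop[c], area[c], pop[c] / area[c] if area[c] else 0.0)
            for c in pop}

cells = [("CA", 1000, 2.0), ("CA", 0, 98.0), ("US", 5000, 50.0)]
print(aggregate_countries(cells))
```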

Truel problem – solved with Jupyter / Python

About a year ago I decided to evaluate the usability of Jupyter notebook documents with Python code. Since both Python and Jupyter were new to me at that time, I selected a real world problem to solve using them, specifically the “Truel” problem :

Several people are fighting a duel. Given their probabilities to hit, what is the probability of each of them winning, and whom should each choose as the optimal initial target ?
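Before looking at the notebook itself, the problem can be illustrated with a crude Monte Carlo sketch ( this is not the notebook’s analytical approach, and “shoot the strongest opponent” is just one hard-coded candidate strategy ):

```python
# Crude Monte Carlo sketch of a truel: players shoot in fixed order, each
# aiming at the strongest remaining opponent. Illustrative only - the
# actual notebook computes win probabilities analytically.
import random

def simulate(hit_probs, rounds=20000, seed=1):
    random.seed(seed)
    n = len(hit_probs)
    wins = [0] * n
    for _ in range(rounds):
        alive = set(range(n))
        while len(alive) > 1:
            for shooter in range(n):          # fixed shooting order
                if shooter not in alive or len(alive) == 1:
                    continue
                # strategy: target the best remaining shooter
                target = max((i for i in alive if i != shooter),
                             key=lambda i: hit_probs[i])
                if random.random() < hit_probs[shooter]:
                    alive.discard(target)
        wins[alive.pop()] += 1
    return [w / rounds for w in wins]

# classic truel: the weakest shooter often has surprisingly good chances
print(simulate([0.3, 0.5, 1.0]))
```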

The resulting solution, a static HTML page showing the results of the Truel analysis using Python functions, can be seen at the link below :

Truel solver as Python based Jupyter notebook

Of course, the main point in using Jupyter was to have an interactive document. That document ( including both the Python source and the Truel problem analysis ) is available in a GitHub repository, and also in ZIP form on this site ( the zip also contains an already precalculated cache file, to save some 45 min of initial calculation time ). The document is best used with JupyterLab.

While the above link demonstrates how that solution was used to analyse Truel cases, the point of this blog is to give my summary of the usability of Python and Jupyter notebooks – which was the initial reason why I decided to solve the “Truel” problem.

The shortest possible summary would be:

The Jupyter/Python/numba combination was an excellent match for this problem

Especially suitable was the Jupyter document, because it allows interactivity and easy analysis of different cases, while still resulting in a visually good looking document. A great thing about Jupyter notebooks is that they do not recalculate the entire document when one cell is calculated – they remember already calculated variables and compiled parts. This is in contrast to running the same code in Visual Studio – where each small change required execution of the entire Python code.

Python itself was not such an excellent match out of the box for this problem, because the problem is very computationally intensive – especially for the 2D analysis, where Python functions need to be solved millions of times for 1000×1000 images. And Python, by default, was much slower than, for example, a solution in C#. In a situation where interactivity is important, it was not acceptable to wait 10+ min for every analysis image. But, apart from speed, Python was a good match due to the simplicity of coding and especially due to great modules like numpy ( for array/matrix operations ) and matplotlib ( for 2D visualization ).

The Python performance issues gave me a reason to explore numba – a Python module that allows ‘just in time’ compilation of Python code. Eventually that proved to be the right combination – numba accelerated Python functions were fast enough to produce 2D solutions in seconds on average, which was acceptable from an interactivity point of view.

Problems and shortcomings of Jupyter/Python/numba ( and workarounds )

While eventually this proved to be a good match, each of those technologies had some problems or limitations – some of them were overcome in this solution, while some remain:

  • python is slow – standard Python is slow when millions of complex calculations are needed. But this can be overcome by using numba.
  • numba often requires rewriting python code – mostly due to type ambiguities, but also because some Python features are not supported in numba. This can not exactly be overcome, but is easy to comply with when writing numba code from the start. Modifying old Python code for numba is usually also not hard – but can be tricky in some cases.
  • Jupyter notebook does not have a debug option – some bugs are hard to detect without that. This can be overcome by running the same code in Visual Studio, and debugging there. Not an ideal option, since it may require slight code rearrangement – and it also does not support numba debugging ( solvable by temporarily marking functions as non-numba, since numba code is also valid Python code ).
  • Jupyter notebook often requires ‘run all cells’ – and that can result in a 30 min computation for the entire Truel document, which has many complex 2D comparisons ( most needing just a few seconds, but some needing a few minutes each ). I solved this problem by introducing a cache for large results ( eg 2D analysis data ), so running the code again without forcing recalculation will simply retrieve the last result from the cache – resulting in a 30x faster ‘run all cells’ ( with the majority of the remaining time spent recompiling all numba functions ).
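The result cache mentioned in the last point can be sketched as a simple pickle-based decorator ( illustrative only — names like `disk_cached` and `heavy_analysis` are made up, and this is not the notebook’s actual implementation ):

```python
# Sketch of a disk cache for expensive results: the first call computes and
# pickles the result; later calls (e.g. during 'run all cells') load it,
# unless force=True requests a recalculation.
import os, pickle, hashlib

def disk_cached(func, cache_dir=".cache"):
    os.makedirs(cache_dir, exist_ok=True)
    def wrapper(*args, force=False):
        key = hashlib.md5(repr((func.__name__, args)).encode()).hexdigest()
        path = os.path.join(cache_dir, key + ".pkl")
        if not force and os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)
        result = func(*args)
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result
    return wrapper

@disk_cached
def heavy_analysis(n):
    return [i * i for i in range(n)]

heavy_analysis(5)               # computed and cached to .cache/
heavy_analysis(5)               # loaded from cache, no recomputation
heavy_analysis(5, force=True)   # forced recalculation
```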

Conclusion:

A Jupyter notebook document, based on Python with numba accelerated functions and matplotlib visualizations, was a great match for this problem – and is likely to be a good match for any similar problem that requires interactivity and visualization.

Publishing Visual Studio dotnet app to Linux


Initially I published my Blazor proof-of-concept projects to Azure, and that is fairly straightforward to set up in Visual Studio ( after an initially fairly complicated Azure site setup ). But I decided to use Linux as the publish target for two reasons:

  • to reuse the same DigitalOcean droplet used for this WordPress site
  • to have the sites under my gmnenad.com domain, at a reasonable price

Azure allows using your own domain instead of appName.azurewebsites.net, but if you also want SSL on those custom domains ( which you must have in order to install PWA apps, like https://orao.gmnenad.com ), then Azure requires moving to at least the B1 tier – which is both more expensive ( over $50/month, compared to $0 for F1 or under $10/month for D1 ) and has worse performance ( just 1 CPU, compared to multiple CPUs on the shared F1/D1 plans ).

But while the reasons for publishing dotnet apps on Linux hosts instead of Azure will be different and subjective for most people, the problem remains the same :

How to publish a dotnet core app from Visual Studio to your Linux host in the most efficient way ?


Visual Studio Publish options

While manually publishing files to any host is always possible, what I needed was a “one-click” publish integrated into the standard Visual Studio publish process ( right-click the VS project, ‘Publish’ ), and the currently supported options are:

  • Azure
  • Docker Container Registry
  • Folder
  • FTP/FTPS Server
  • Web Server (IIS)

Obviously, for publishing directly to a Linux host in order to be served by the same web server ( Apache or Nginx ) as the WordPress site, I had to ignore the Azure and IIS options, and even Docker. The logical choice was therefore FTP/FTPS Server. But …

FTP/FTPS Server was a bad option

FTP was not installed by default on the Linux droplets that I used, and furthermore I consider plain FTP too insecure, so I installed FTPS, which ( in the short version ) included :

  • installing vsftpd
  • creating ftpuser and linking his /home/ftpuser/ftp/www to /var/www
  • using mount ( added to /etc/fstab ), since FTPS does not work with symlinks
  • creating an openssl certificate for vsftpd.pem
  • significantly changing the default vsftpd.conf ( ssl options, chroot, userlist, passive … )
  • allowing the FTP direct and passive ports ( 20, 21, 11000-12000 ) at the firewall ( ufw )
  • in Visual Studio, adding an FTP Publish Profile ( to /ftp/www/appName )

This “almost” works and allows a standard “one-click” publish from Visual Studio, but has significant drawbacks:

  • complicated to setup
  • security implications ( another user, more open ports …)
  • Visual Studio reports a 426 error for every copied file, and reports ‘Failed Publish’ at the end
  • slow transfer ( maybe partially due to all those reported errors )
  • no automatic restart of dotnet service on Linux

The reason why I said it “almost” works even with VS reporting failure is that all files do end up transferred to Linux – the reported error comes from a difference in how Windows and vsftpd think FTPS should be done : vsftpd expects the other side to confirm ( with code 3 ) when it ends the SSL session for one uploaded file, and when Windows does not send that code, vsftpd sends a 426 error back. Note that it is not a Windows vs Linux issue, since I tested curl on Linux, and it has the same problem with vsftpd.

But while I could ignore the error(s) reported by VS, the main issue was that after an FTPS publish I still had to manually SSH to the Linux box and restart the service for that dotnet app before the change became visible in the browser.

The end result is that FTPS was not a good option, which is also the reason why I didn’t give details here about the specific steps listed above. Instead, I moved to the right option:


Folder publish is the right option

Of course, folder publish on its own will only publish locally, so it had to go in tandem with some app that supports file transfer. Initially I tried FileZilla, but it does not have scripting support, so a much better option was WinSCP – it does support scripts, and is a very good choice even for other file operations between Windows and Linux ( unrelated to publishing ).

The short version of what is needed with this approach:

  1. install WinSCP and make it work with your Linux box, using SSH keys as wwwuser
  2. create a Visual Studio Folder Publish Profile
  3. create a WinSCP script and modify the Folder Publish profile to call that script


The first step is a standard one and not related specifically to Visual Studio. While it is mostly straightforward, here is a detailed description of the Linux steps to create wwwuser and a pair of SSH keys, and to allow that user to SSH using those keys. It was done on Ubuntu 18.04 (Bionic).

# assume those commands are run as root in terminal

# add new wwwuser ( in www-data Apache group, optional ) and set his password
adduser --ingroup www-data wwwuser
# chpasswd reads user:password pairs from stdin
echo 'wwwuser:somePassword' | chpasswd

# switch to wwwuser, so keygen generate folder in his home
su - wwwuser
# create new SSH pair of keys in /home/wwwuser/.ssh folder
ssh-keygen
# insert public key to allow wwwuser to SSH with its private key
cat /home/wwwuser/.ssh/id_rsa.pub >> /home/wwwuser/.ssh/authorized_keys
# make authorized_keys writable only by its owner
chmod 644 /home/wwwuser/.ssh/authorized_keys

# MOVE KEYS FROM /home/wwwuser/.ssh , leave only authorized_keys

The above should allow connecting from WinSCP to the Linux box as wwwuser. Those keys should then be moved off the Linux box – they are not needed there anymore, and the private key (id_rsa) will be needed on the Windows box for WinSCP. To test it: install WinSCP, open “New Session”, enter your server IP or domain, then press the “Advanced” button and enter the path to the private key in the SSH/Authentication section, as shown below:

Allow WinSCP to automatically convert that private key from the Linux format to its own format and save it as a PuTTY *.ppk file in the same folder. That should be enough for “Login” to work – after which you can save the session for further use.

The second step is also a standard one, not related to Linux – creating a Visual Studio “Folder” publish profile.

Right click on the project in Visual Studio, select “Publish” and then “New” if this is not the first publish profile. Select the “Folder” option and, after Next, leave the options at their defaults ( it offers “bin\Release\netcoreapp3.1\publish\” as the location ) and just Finish the creation. You can select “Edit” to change a few options that were not available at the creation step, but I tend to leave those as defaults too. This creates a new “FolderProfile.pubxml” in the VS project under Properties/PublishProfiles.

To test, just right-click on the VS project, select Publish and click the “Publish” button – it should build your dotnet core app and store it in the “publish” folder from above.

The third step is where we integrate the publish process with WinSCP, to automatically transfer the published files to Linux.

It had two challenges:

  • making a WinSCP script that non-interactively copies the files AND restarts the app service
  • finding the correct place in the VS publish process ( AfterTargets=”???” )

While the previous two steps are agnostic toward the type and location of the actual dotnet app on the Linux box, in this step the script needs to know that. In my case, I had the following assumptions :

  • app type: dotnet app hosted by Kestrel ( set up as a Linux service: systemctl start appName )
  • app location: /var/www/appName
  • the name of the systemctl service is the same as the name of the folder under /var/www : “appName”
  • the Linux script “restart_app.sh” was copied to /var/linuxVM

The location is the usual one for web apps, and hosting a dotnet app as a systemctl service which runs the Kestrel local web server with an Apache/Nginx proxy in front is the standard “type” of hosting for both Nginx and Apache web servers.
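For reference, such a Kestrel-hosted service is typically defined with a systemd unit file roughly like the one below ( an assumed example — the unit file, paths and port are illustrative, not taken from the article ):

```ini
# /etc/systemd/system/appName.service  - illustrative example only
[Unit]
Description=appName dotnet Kestrel app
After=network.target

[Service]
WorkingDirectory=/var/www/appName
ExecStart=/usr/bin/dotnet /var/www/appName/appName.dll --urls=http://localhost:5000
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
```

After copying such a file, `sudo systemctl enable appName` and `sudo systemctl start appName` would register and start the service, and the web server proxy would forward requests to the chosen localhost port.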

For the WinSCP script, I created the file “publishLinux.sh” in the VS project root:

# open sftp session with wwwuser SSH key
open sftp://wwwuser@yourDomain.com/ -hostkey="ssh-ed25519..yourHostKey=" -privatekey="C:\path\to\private\key\user.ppk"

# create /var/www/appName folder ( but ignore error if it already exists, with batch continue )
cd "/var/www"
option batch continue
mkdir "%2%"
option batch off

# go to appName folder and delete all old files. CD should throw error if folder does not exist
cd "%2%"
rm *.*

# on Windows, go to publish folder, and copy all from it to www/appName on Linux
lcd "%1%"
put * 

# restart app service on Linux
call sudo /var/linuxVM/restart_app.sh "%2%"

# finish script with OK 
exit

This script uses two parameters:

  • %1% : the first parameter is the location of the published files on the Visual Studio machine
  • %2% : the second parameter is the name of the app ( one word )

For the script to work, you need to copy the previously created private SSH key to the Visual Studio machine. If you manually opened a WinSCP session as mentioned at the end of the first step, the easiest way to get the correct values is to select any file in the left side panel in WinSCP ( the Windows side ) and click the “Upload” button. That opens the Upload dialog, where you should expand the “Transfer Settings” combo box/button and select “Generate Code…”. That shows the commands in “Script File” format, and you only need to copy the first ‘open sftp:…’ line ( which has the correct host key and path to the private key ) over to the above script.

For restarting the Linux app after the publish is done, the WinSCP script relies on the “restart_app.sh” bash script previously copied to the /var/linuxVM folder :

#!/usr/bin/env bash
if [ ! -z "$1" ]; then
	# restart app service
	# stop+start is used instead of systemctl restart, since this also starts the app if it was stopped before
	sudo systemctl stop "$1" 2>/dev/null
	sudo systemctl start "$1" 2>/dev/null
	# reload proxy server too, may be needed by some indexed apps
	sudo systemctl reload apache2 2>/dev/null
fi

This restart_app.sh script was made with a few assumptions:

  • we use Apache as the proxy server. An alternative last line for an Nginx reload would be: sudo nginx -s reload
  • we can have both “proxied” apps ( with an Apache/Nginx “ProxyPass” to http://localhost:500x hosted by dotnet Kestrel ) and “indexed” apps ( where the app index.html is served directly by Apache or Nginx, an option for Blazor WASM apps )
  • we added to /etc/sudoers : wwwuser ALL=NOPASSWD: /var/linuxVM/restart_app.sh *

If we only use “proxied” apps ( since Blazor WASM apps can also be used that way ), we may not need to reload the web server, so the “reload apache2” line would not be needed in “restart_app.sh”. Also, in order to allow ‘call sudo /var/linuxVM/restart_app.sh “%2%”’ from the WinSCP script without being asked for a sudo password, we need to add our “restart_app.sh” script to the “/etc/sudoers” file ( the wildcard * at the end allows us to supply any parameter ). In theory it would be possible to call the above commands directly from the “publishLinux.sh” WinSCP script using call, and skip “restart_app.sh” – but that would require changes on all our Visual Studio installations if we change from Apache to Nginx and, more importantly, would require giving wwwuser sudoers rights for an unrestricted “systemctl *”, which is not good security practice. Using our “restart_app.sh” script also allows us to further check that the supplied appName is one of ours ( if we want more security ).

The last part is calling “publishLinux.sh” from the Visual Studio publish profile. To do that, open Properties / PublishProfiles / “FolderProfile.pubxml” in Visual Studio ( that is the Properties folder under the project root, not the project options ) and add a new <Target> section after the last </PropertyGroup>, so that the modified profile looks like this:

<?xml version="1.0" encoding="utf-8"?>
<!--
https://go.microsoft.com/fwlink/?LinkID=208121. 
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <DeleteExistingFiles>True</DeleteExistingFiles>
    <ExcludeApp_Data>False</ExcludeApp_Data>
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <PublishProvider>FileSystem</PublishProvider>
    <PublishUrl>bin\Release\netcoreapp3.1\publish\</PublishUrl>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <SiteUrlToLaunchAfterPublish />
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <ProjectGuid>20245b63-e767-4b2a-8261-312f840e8213</ProjectGuid>
    <SelfContained>false</SelfContained>
  </PropertyGroup>

  <Target Name="LinuxPublish" AfterTargets="FileSystemPublish">
    <Message Importance="high" Text="*** Linux Publish             ... copying to LinuxVM ... " />
    <Exec Command="call "C:\Program Files (x86)\WinSCP\WinSCP.exe" /ini=nul /script=publishLinux.sh /parameter // "$(PublishUrl)" appName " />
  </Target>

</Project>

As mentioned above, finding the correct place in the VS publish process to insert our call is important. Here I had to do a few trials and errors until I found AfterTargets=”FileSystemPublish” to be suitable ( it is called after the publish folder is complete, and called regardless of whether a rebuild was done or not ). Since this may change in the future, if Microsoft changes the publish process order, one way to find the best AfterTargets value is to set the VS option [ Tools -> Options -> Projects and Solutions -> Build and Run -> MSBuild project build output verbosity ] from the default “Minimal” to “Diagnostic”, then run publish and find in the output the last ‘Done building target “XYZ”’ or similar message mentioning completion of some target, and use that target name.

The only thing that needs changing in the above FolderProfile.pubxml is ‘appName’ at the end of the Exec command, which defines both the folder on Linux to copy to and the Linux service to restart. The other parameter ( the publish folder location ) is automatically set by $(PublishUrl). In case the publish is failing, you can add “/log=WinSCP.log” before “/script=” in that same Exec command, as a debug option.

That means each dotnet project will have its own FolderProfile.pubxml ( with its own publish folder and appName ), but they can all call the same publishLinux.sh.

A good thing about this approach is that the publish process will wait until the file transfer is done, and correctly report success ( or failure if something was not copied ), with output similar to:

...
*** Linux Publish             ... copying to LinuxVM ... 
call "C:\Program Files (x86)\WinSCP\WinSCP.exe" /ini=nul /script=publishLinux.sh /parameter // "bin\Release\netcoreapp3.1\publish\" appName 
Web App was published successfully file:///E:/sourcePath/bin/Release/netcoreapp3.1/publish/

========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Publish: 1 succeeded, 0 failed, 0 skipped ==========

Benefits of this Folder option over the FTPS option :

  • easier to setup
  • no additional services and open ports on Linux
  • Visual Studio correctly reports success or error
  • faster transfer
  • automatic restart of dotnet service on Linux

The end result is a real “one-click” publish of a dotnet app from Visual Studio to a Linux host.

Starting Blog

As mentioned previously, blogging is not the primary goal of this site, but it will still be used for interesting issues related to the projects mentioned on this site, the technologies used, or just some general topics.

A few blog posts that are “soon to come” will be related to issues of hosting Blazor apps on a Linux machine like this one.