Russell’s paradox

Let A be any set. We define B as:
B = \{x \in A : x \notin x\}

Can it be that B \in A? No.

Proof by contradiction.

Assume B \in A.

  • If B \in B then by the definition of B:
    B \notin B. Contradiction.
  • If B \notin B then by the definition of B:
    B \in A \text{ and } B \notin B \implies B \in B. Contradiction.

Therefore B \notin A.

Credits for the succinct proof.


Some notes on Decision Theory

Once upon a time, I took a course in Decision Theory at ETH. Here are some notes:

normative theory vs descriptive theory (maths vs psychology)

prospect theory

options vs choices

money pump

bond vs shares

Main notions:

  1. Slide
    1. Normative: Perfect rationality, optimality, payoff maximality
    2. Descriptive: How real people think and make decisions.
      1. bias: persistent and systematic deviation from rationality
    3. Prescriptive: How can we help people make better decisions? How can we structure the choice environment to minimize decision errors and biases?
    4. People use limited heuristics in making decisions
    5. Decision maker
      1. individual, couple, family, firm, country
      2. any monolithic cognitive agent
    6. Goals, preferences, wants, desires
    7. options, alternatives, courses of action
    8. beliefs
    9. Decision under
      1. certainty
        1. all options and outcomes are completely known to the DM (military diet, UPS example)
      2. risk
        1. When all the choices and options are known but there are well defined stochastic nodes (gambling, insurance, investing)
      3. uncertainty
        1. options are not well defined, or probabilities are not defined (or both) (examples most of life)
    10. Preferences
      1. Revealed, Complete, Transitive, Independent, Stable
    11. People often satisfice instead of optimize
    12. People guess, estimate and try to do “well enough” rather than optimally.
    13. Rationality is bounded in two ways.
      1. Information from the environment
      2. cognitive capacity
    14. Sometimes close enough is fine, sometimes not
    15. Cognitive illusions: Monty Hall problem
    16. Limitations in attention and memory. There are tools which compensate for that.
    17. Multi Attribute Evaluation (MAUT)
      1. identify options
      2. define goals
      3. quantify subjective evaluations
      4. weight and scale different evaluations so they can be aggregated meaningfully.
      5. make reasonable and justifiable decisions
    18. Problem, Objectives, Alternatives, Consequences, Tradeoffs (PrOACT)
    19. Comparative, multiple stakeholders (probably not equally important), multiple goals (maybe not equally important), multiple choices with multiple attributes, judgements are necessary, quantify!
      1. Identify the decision maker
      2. Identify the alternatives
      3. the relevant attributes
      4. assign values to the attributes of each alternatives
      5. determine a weight for each attribute
      6. for each alternative, compute a weighted mean
      7. make a provisional decision
      8. get feedback and run a sensitivity analysis
    20. compensatory vs noncompensatory
      1. sometimes some alternatives are not acceptable (minimum requirements)
    21. minor changes may cause a change in the resulting decision
    22. a “fair” method makes it easier to develop political support
    23. analytic hierarchy process: binary comparison of pairs of attributes in terms of their importance, then assign a weight (how much better one alternative is compared to the other)
    24. built in check for transitivity and consistency
    25. a well defined process to turn subjective judgements into more objective ratings
  2. Secretary problem
    1. What makes a good decision (good outcome or good reasons?)
    2. expected value
    3. probability (classical, frequency, subjective)
      4. it can be hard for people to reason about probabilities correctly (Monty Hall, other pitfalls)
    5. birthday problem
    6. bayes rule
    7. st petersburg gamble
    8. Moral worth: Bernoulli: the worth of something is not the same as its value → Definition of utility (utility = value^a)
    9. Expected utility does not work always
    10. Allais gambles: the results contradict EUT.
    11. Feelings about risk: people will accept 1000 times greater risks taken of their own volition than risks imposed involuntarily
    12. O.J. Simpson example
    13. People are generally risk averse, but to varying degrees
    14. EVT defines rationality if the goal of the DM is to maximize value in the long run
    15. EUT defines rationality if the goal of the DM is to maximize utility (moral worth) in the long run
    16. Risk seeking in the domain of losses, risk averse in the domain of gains
    17. Decision framing
    18. Money pump
  3. Sequential investment problem
    1. Persistent cognitive noise
    2. systematic departures from EUT – biases
    3. Friedman-Savage utility function
    4. Prospect theory
      1. descriptive account of decision theory under risk
      2. aims to account for systematic deviations from eut and evt
      3. Coding — outcomes relative to the current state in terms of gains and losses
      4. combination — joining similar prospects
      5. segregation — isolation of sure things
      6. cancellation — discarding shared components
      7. simplification — rounding
      8. elimination of dominated options
    5. probability weighting function — the subjective weighting people apply to probabilities (overweighting small probabilities at the expense of larger ones)
    6. 4 fold pattern of risk preferences
      1. risk seeking over low p gains — lotteries
      2. risk averse over low p losses — insurance
      3. risk averse over high p gains — certainty effect
      4. risk seeking over high p losses — end of the day effect
    7. certainty effect
      1. insurance which does not pay with a very small probability
    8. end of the day effect
      1. people make large bets on the last day to “break even”
    9. Shortcomings
      1. average choices on average
      2. not for every one
      3. not even for someone over the domain of time
      4. no insight — just better predictions
      5. does not scale (portfolio example, multistage risky decisions)
    10. Dynamic decision contexts are defined by:
      1. sequential choices are made and they are interspersed with updated information about the environment
      2. the decision environment changes over time (depending on the choices of DM)
    11. normative approaches
      1. maximize expected payoff
      2. dynamic programming
    12. descriptive approaches
      1. case studies
      2. experimental methods (behavioral laboratories)
      3. human behaviour is compared to the normative approach
      4. identification of systematic departures from optimality — existent cognitive mechanisms can be accounted for
      5. ⇒ people predictably irrational (underlying assumption)
    13. optimal stopping
      1. example: secretary problem
      2. target: select a maximum value
      3. full information games
        1. perfectly known distribution
        2. draw a number from [0, 20]; you are given the relative order of the numbers encountered so far but not their values
      4. partial information
        1. known distribution but not its parameters
        2. german tank problem
      5. no information
        1. iid random variables with unknown distribution
        2. secretary problem
        3. Sometimes called the Sultan’s Dowry Problem, the Marriage Problem, or the Fussy Suitor Problem
        4. optimal decision policy — skip the first n/e candidates, then take the first one better than all seen so far (success probability e^{-1})
        5. generalized secretary problem
        6. persistent bias for stopping too soon
        7. probability distortion and not risk aversion is the (assumed) answer
      6. Other dynamic decision problem
        1. sell a good over a time interval where offers in a specific range arrive with a probability
    14. Kelly criterion
      1. always bet the same proportion of wealth (2p − 1 for an even-odds bet won with probability p)
      2. maximizes the expected geometric mean
      3. a poor criterion for predicting decision makers’ behavior
      4. people have strong urge to adjust their investments
      5. evidence of learning to adjust less over rounds
      6. if we charge people when they adjust, we help them
    15. sequential investment problem
      1. optimal policy: bet max or not according to whether we are “above or below the expectation”
  4. Heuristics as rules of thumb
    1. gambler’s fallacy (law of small numbers)
    2. hot hand fallacy (law of small numbers)
    3. stars
    4. availability heuristic (how easily a piece of information can be recalled)
    5. anchoring and adjustment
      1. estimating a factorial when the larger factors are presented first vs last
      2. use of early values to predict
    6. framing effect (the influence of how the question is posed)
    7. confirmation bias
    8. are people really irrational? heuristics evolved for some reason; carefully designed corner cases are examined
  5. Public goods game
    1. selfish players take no interest
    2. the social optimum is far off
    3. many people are pro-social
    4. design the vector according to the answers
  6. Overconfidence quiz
    1. attentional
      1. selective memory/information search: confirmation bias
      2. selective encoding: excuses/rationalizations
      3. selective encoding: rewarded events are remembered better
    2. motivational
      1. the need to appear competent and confident to others and to oneself; confidence and optimism help to get things done
  7. Bordeaux wines and regression
  8. wisdom of crowds
  9. risky decision theory and gambling
  10. the rush of winning
  11. highly reinforcing
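Since the notes keep returning to the secretary problem, a quick simulation of the e^{-1} threshold rule may help. This is my own sketch (function names and parameters are mine, not from the course):

```python
import random

def secretary_trial(n, rng):
    """One run of the no-information secretary problem with the 1/e rule:
    reject the first n/e candidates, then accept the first candidate
    better than everything seen so far. Returns True if the overall
    best candidate was chosen."""
    candidates = [rng.random() for _ in range(n)]
    best = max(candidates)
    cutoff = round(n / 2.718281828459045)  # n / e
    best_seen = max(candidates[:cutoff])
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value == best       # accept the first record after the cutoff
    return candidates[-1] == best      # forced to take the last candidate

rng = random.Random(42)
trials = 20000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
print(wins / trials)  # close to the theoretical 1/e ≈ 0.368
```

The simulated success rate hovers around 37%, and the same harness makes it easy to reproduce the "stopping too soon" bias by lowering the cutoff.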

Hofstadter sequence

In the book Gödel, Escher, Bach: an Eternal Golden Braid, in Chapter III, page 73, there is a puzzle about a sequence of numbers:

1  3  7  12  18  26  35  45  56  69  …

The puzzle is about characterising the sequence of numbers, that is, coming up with a way to find the next number of the sequence. You may want to try it yourself before reading the rest of the post.

The section of the chapter where this puzzle belongs talks about Recursively Enumerable Sets vs Recursive Sets. The main point of the chapter is that:

There exist formal systems whose set of non-theorems is not the set of theorems of any formal system.

When the author presents the puzzle, he calls for some reflection on the chapter in order to find a formula that describes the sequence.

Trying to solve that, I started observing the differences between subsequent numbers, which in our case are

2  4  5  6  8  9  10  11  13

And then again I looked at the differences between those numbers

2  1  1  2  1  1  1  2

Now, this sequence rings a bell. The notes in music have differences in tones and semitones:

si (1/2) do (1/1) re (1/1) mi (1/2) fa (1/1) sol (1/1) la (1/1) si

where 1/2 represents a semitone and 1/1 represents a tone.

If we look carefully, we notice that the last sequence of differences that we found is the same as the tone-and-semitone pattern with each fraction inverted (1/2 → 2, 1/1 → 1).

I thought this could be a solution, since one of the main topics of the book is the interplay of music and maths. On the other hand, it made no use of the discussion in the chapter. Looking up the sequence on the web, I found the Hofstadter sequence, and I doubted that I had found an equivalent definition.

Finally, I wrote a simple Python script to check whether the two sequences are the same.
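A sketch of such a check, using generators (this is my reconstruction, and both generator names are mine): the figure-figure rule defines the sequence so that its terms and their first differences together contain every positive integer exactly once, while my musical guess fixes the second differences to the inverted scale pattern.

```python
from itertools import islice

def figure_figure():
    """Hofstadter's figure-figure sequence: the terms and their first
    differences together contain every positive integer exactly once."""
    r, s = 1, 2
    used = {1}            # values already claimed by the sequence itself
    while True:
        yield r
        r += s            # next term
        used.add(r)
        s += 1            # next difference: smallest integer not in the sequence
        while s in used:
            s += 1

def music_guess():
    """My guess: second differences repeat the inverted tone/semitone
    pattern of the scale, 2 1 1 2 1 1 1."""
    pattern = [2, 1, 1, 2, 1, 1, 1]
    value, diff, i = 1, 2, 0
    while True:
        yield value
        value += diff
        diff += pattern[i % len(pattern)]
        i += 1

a = list(islice(figure_figure(), 13))
b = list(islice(music_guess(), 13))
print(a)  # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69, 83, 98, 114]
print(b)  # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69, 83, 98, 115]
```

The two agree on the first twelve terms and diverge at the thirteenth (114 vs 115), so the musical pattern is only a coincidence at the start.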

Apparently, they are not equivalent. Moreover, we don’t even know whether the Hofstadter sequence is defined for all n.

Nevertheless, the beginning of the sequence presents a funny coincidence and writing the Python code to check the non-equivalence of the two sequences made me appreciate even more the power of generators in Python.

On diversity

“The people of the Settlements select themselves. They select people like themselves. There’s a shared culture, even, to some extent, a shared physical appearance on each Settlement. Earth, on the other hand, is, and through all of history has been, a wild mixture of cultures, all enriching each other, competing with each other, suspicious of each other. Tanayama and many other Earthmen– myself, for instance–consider such a mixture to be a source of strength, and feel that cultural homogeneity on the Settlements weakens them and, in the long run, shortens their potential life span.”

From the book Nemesis by Asimov, page 123.

Floating number operations

Because of how floating-point numbers work, we can get some surprising results. Can you guess what the next snippet is printing?

from __future__ import print_function

print("The answer is:", int((0.1 * 3 - 0.3) ** -0.1))

Exercise for the reader.

Can you use the fact that:

import numpy as np

v = np.spacing(np.single(1.0)) / 2  # half a ulp at 1.0 in single precision
s = np.single(1.0)
s + v   # prints 1.0
s = np.single(0.0)
s + v   # prints a value > 0.0

to show that 1.0 + 1.0 = 2.0 sometimes, while 1.0 + 1.0 = 1.0 some other times?

Hint: Use a loop.

If someone requests it, I will post the solution. Credits to Prof. Gallopoulos for this second exercise.
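As a sketch of the kind of loop the hint points at (my own construction, not necessarily the intended solution; I use half precision so the loop stays short, and 2**-11 is half the spacing at 1.0 in that format):

```python
import numpy as np

# Half a ulp at 1.0 in half precision (2**-11): small enough that
# 1.0 + v rounds back to 1.0, yet 0.0 + v is strictly positive.
v = np.float16(2.0 ** -11)
n = 2048                      # n * v == 1.0 exactly

s = np.float16(0.0)
for _ in range(2 * n):        # mathematically this sums to 2.0
    s = s + v                 # but the running sum stalls once it reaches 1.0
print(s)                      # 1.0

print(np.float16(1.0) + np.float16(1.0))  # 2.0
```

Adding the second "1.0" as 2048 tiny installments leaves the sum at 1.0, because each installment falls on a round-to-even tie; adding it in one piece gives 2.0. The same argument works in single precision with 2**24 additions.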

A child can teach an adult three things…

“A child can teach an adult three things: to be happy for no reason, to always be busy with something, and to know how to demand with all his might that which he desires.” – Paulo Coelho

Two volumes of Pushkin [math problem]

Two volumes of Pushkin, the first and the second, are side-by-side on
a bookshelf. The pages of each volume are 2 cm thick, and the cover – front
and back each – is 2 mm. A bookworm has gnawed through (perpendicular to
the pages) from the first page of volume 1 to the last page of volume 2. How
long is the bookworm’s track?

[This topological problem with an incredible answer – 4 mm – is absolutely
impossible for academicians, but some preschoolers handle it with ease.]

Then the author of problem 13 writes at the end:

I tried to illustrate with this problem the difference in approaches to tasks by mathematicians and physicists, in my invited paper in the journal “Physics – Uspekhi” for the 2000 Christmas jubilee. My success surpassed by far what I intended: the editors, unlike preschoolers, on whom I had been basing my experience, failed to solve the problem, and therefore altered it to match my 4 mm answer as follows: instead of “from the first page of volume 1 to the last page of volume 2” they typeset “from the last page of volume 1 to the first page of volume 2”. This true story is so implausible that I am including it here: the proof is the editors’ version published by the journal.

If you still can’t figure out why that holds and why the “obvious” answer is incorrect, here is a hint: the first volume is on the left and the second volume is on the right.