Reddit is a great place. The social media network, founded in June of 2005, has grown to incredible proportions. People use the site for everything from discussing controversial topics like vaccines to sharing hilarious topical memes. And somewhere in between the downright serious and the downright ridiculous exists the subreddit r/AskReddit. People ask all sorts of questions. Sometimes the replies lead to humor, and sometimes they lead to stunning revelations.

Recently, one Redditor asked the community, "which paradox just mind-fucks a person?" Whatever we expected, we're not entirely sure... but these answers have us shook.

First, a definition: a paradox is "a seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true."


The unexpected hanging paradox.

A prisoner is sentenced to death and told that he will be hanged on some day next week, and that on that day, he won’t be expecting it.

The prisoner thinks to himself, well if they hang me on Saturday, then I will expect it because there is no other possible day, so it won’t be on Saturday.

Then he realizes, well, it won’t be Friday either, because if I make it to Friday, I’ll know they can’t hang me on Saturday, so it would have to be Friday, and I’d expect it. Which means they won’t hang him on Friday either.

He realizes this logic would continue for each day of the week, and so he concludes that there is no possible day for them to hang him unexpectedly, so he thinks they must not plan to hang him.

On Wednesday they hang him, and he is completely surprised.

The more you think about this paradox, the less sense it makes.
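The prisoner's argument is a backward induction, which is easy to mimic in a few lines. A minimal sketch; the weekday names are an assumption, since the comment only says "a day next week":

```python
# The prisoner rules out the last remaining candidate day, over and over.
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
candidates = list(days)
eliminated = []

while candidates:
    # A hanging on the last day still possible couldn't be a surprise,
    # so the prisoner strikes it off. Then the new last day falls too.
    eliminated.append(candidates.pop())

print(eliminated)  # Saturday goes first, then Friday, and so on down to Monday
# Every day gets eliminated, yet the Wednesday hanging surprises him anyway.
```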

The Ship of Theseus always kind of fucked me up.

So, there’s this Greek dude called Theseus, and he’s on a very long boat trip home. His ship needs repair, they stop, replace a few rotten boards, and continue. Due to the particularly strenuous nature of this very long trip, several more of these stops for repairs are made, until, by the very end, not a single board from the original vessel remains.

Is this still the same vessel? If not, when did it cease to be?


Pinocchio says “My nose will grow after I finish this sentence”

Does it?

Braess’ paradox

From Wikipedia: “the observation that adding one or more roads to a road network can end up impeding overall traffic flow through it. The paradox was postulated in 1968 by German mathematician Dietrich Braess, who noticed that adding a road to a particular congested road traffic network would increase overall journey time.”
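The numbers below work through the standard textbook instance of Braess's paradox; they are illustrative and not taken from the quoted comment:

```python
# Classic Braess network: N drivers travel from Start to End via two routes.
# Route 1: road A (time = x/100 minutes, x = cars on it), then road B (fixed 45 min).
# Route 2: road C (fixed 45 min), then road D (time = x/100 minutes).
N = 4000

# Without the shortcut, the equilibrium splits traffic evenly between routes:
x = N / 2
time_before = x / 100 + 45  # 20 + 45 = 65 minutes per driver

# Add a near-instant shortcut from the end of A to the start of D.
# Taking A, the shortcut, then D is now individually best for every driver,
# so all N cars pile onto both variable-time roads:
time_after = N / 100 + N / 100  # 40 + 40 = 80 minutes per driver

print(time_before, time_after)  # adding a road made everyone's trip slower
```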

That “this page is intentionally left blank” page. The page isn’t even blank anymore!


The UK ‘triple lock’ that people moving to the UK experience:

Need proof of address and photographic ID to open a bank account

Need a bank account and photographic ID to rent a place

Need a bank account and an address to get sent your photographic ID

The Halting Problem.

You cannot create an algorithm that looks at a different algorithm and its input and decides whether or not that algorithm will reach the end (halt).

This is too complicated to prove in a single Reddit comment, so watch this video if you are interested.

EDIT: Oh, bugger, I’ll prove it myself:

Consider this scenario:

Algorithm P is a copier. Given an input, it will output that same thing as two separate outputs.

Algorithm H is the algorithm that predicts whether a different algorithm will reach the end (it will halt). It accepts two inputs (the algorithm and the input for the algorithm) and outputs “YES” if the algorithm halts and “NO” if the algorithm doesn’t halt.

Algorithm F is an algorithm that says “Hello” if it’s given the input “NO”. It gets stuck in an infinite loop (doesn’t halt) if it’s given the input “YES”.

Now combine all three of these algorithms to make algorithm X. Feed algorithm X as the input to algorithm X. The first thing that happens is that algorithm P spits out two copies of algorithm X and gives them to algorithm H.

Algorithm H now has to decide whether algorithm X will halt if given algorithm X. If algorithm H says “YES” (X will halt), it will cause algorithm F to get stuck, and therefore X will not halt. If algorithm H says “NO” (X won’t halt), it will cause algorithm F to just say “Hello”, and therefore X will halt.

Either way, algorithm H is wrong. It’s impossible to design an algorithm that can correctly predict whether any arbitrary algorithm will halt given a given input.
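The comment's P/H/F construction can be sketched in Python. This is a minimal sketch, and `make_counterexample`, `always_no`, and `X` are my own names, not standard ones:

```python
def make_counterexample(halts):
    """Combine P (copy the input), H (the claimed decider, `halts`), and
    F (do the opposite of H's prediction) into one algorithm X."""
    def X(prog):
        # P: duplicate the input; both copies go to H.
        if halts(prog, prog):   # H predicts "prog halts when run on prog"...
            while True:         # ...so F loops forever: X does NOT halt.
                pass
        else:                   # H predicts "prog never halts on prog"...
            return "Hello"      # ...so F halts at once: X DOES halt.
    return X

# No matter which decider you plug in, X defeats it. Try one that
# always answers "does not halt":
always_no = lambda prog, inp: False
X = make_counterexample(always_no)
print(X(X))  # "Hello": X halted on X, so always_no was wrong about X
```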


Jim is my enemy. But it turns out that Jim is also his own worst enemy.

And the enemy of my enemy is my friend. So, Jim is actually my friend.

But…because he is his own worst enemy, the enemy of my friend is my enemy.

So, actually Jim is my enemy.

But…

So I know this is just a silly thing but…

At my old work, my department was food service. In our prep room, you had to always wear an apron. Always, no exceptions.

When leaving the prep room, you had to take your apron off to prevent cross contamination.

The bosses were trying to figure out where to put the hooks: inside on the back of the door, or outside on the wall. Hang them inside, and you take your apron off while you’re still in the prep room; hang them outside, and you walk out of the prep room still wearing it. Either way, a rule gets broken.

The Banach Tarski paradox is one hell of a mind fuck.

It’s basically taking something and rearranging it to form another exact copy of itself while still having the complete original. Like taking a sphere, which has infinitely many points on it, and drawing a line from every “point” on its surface to the center, or the core, of the sphere.

Then you separate the lines from the sphere, but because there are infinitely many points, you now have an exact copy of the original sphere.


The coastline paradox.

The more accurately you measure a coastline, the longer it gets… to infinity.
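The effect is easy to reproduce with a self-similar curve standing in for a coastline. The Koch curve below is my choice of example, not the comment's: each time the ruler shrinks by a factor of 3, the measured length grows by a factor of 4/3, without bound.

```python
# Measure a Koch-curve "coastline" with ever-smaller rulers.
# Each refinement replaces every segment with 4 segments at 1/3 the scale,
# so the measured length is multiplied by 4/3 at every step.
ruler = 1.0
length = 1.0
for step in range(20):
    ruler /= 3
    length *= 4 / 3

print(length)  # roughly 315 units after 20 refinements, and still growing
```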

One of my favorites is Xeno’s Paradox.

In order to leave my apartment, just for example, I have to walk halfway to my front door. Then I have to walk half the remaining distance. Then half that distance, ad infinitum. In theory, I should never be able to reach the door.

Now I love this paradox, because we’ve actually solved it. It was a lively, well-discussed debate for millennia. At least a few early thinkers were convinced that motion was an illusion because of it!

It was so persuasive an argument that people doubted their senses!

Then Leibniz (and/or Newton) developed calculus and we realized that infinite sums can have finite solutions.

Paradox resolved.
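The calculus fix can be seen numerically: Zeno's halves form a geometric series, and the infinite sum has a finite value. A quick sketch:

```python
# Zeno's halves: 1/2 + 1/4 + 1/8 + ... sums to a finite distance.
distance = 0.0
step = 0.5
for _ in range(60):  # 60 terms is far beyond double precision
    distance += step
    step /= 2

print(distance)  # converges to 1.0: you do reach the door
```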

It makes me wonder what “calculus” we are missing to resolve some of these others.

EDIT: A lot more people have strong opinions about Zeno’s Paradox than I thought. To address common comments:

1.) Yes, it’s Zeno, not ‘Xeno’. Blame autocorrect and my own fraught relationship with homophones.

2.) Yes there are three of them.

3.) If you’re getting hung up on the walking example, think of an arrow being shot at a fleeing target. First the arrow has to get to where the target was. But at that point, the target has moved. So the arrow has to cover that new distance. But by then, the target has moved again, etc. So the arrow gets infinitesimally closer to the target, but doesn’t ever reach it.

4.) Okay, you think you could have solved it if you were living in ancient Greece. I profoundly regret that you weren’t born back then to catapult our understanding two millennia into the future.

5.) Yes, I agree Diogenes was a badass.

I hope this covers everything.


Newcomb’s Paradox:

There are two boxes, A and B. A contains either $1,000 or $0, and B contains $100. Box A is opaque, so you can’t see inside; Box B is clear, so you can see for sure that there is $100 in it.

Your options are to choose both boxes, or to choose only Box A.

There is an entity called “The Predictor”, which determines whether or not the $1,000 will be in Box A. How he chooses this is by predicting whether or not you will choose both boxes, or just Box A. If the Predictor predicts that you will “two box”, he will leave Box A empty. If he predicts that you will “one box”, he will put the $1,000 in Box A. He is accurate “an overwhelming amount of the time”, but not 100%. At the time of your decision, the contents of Box A (i.e. whether or not there is anything in it) are fixed, and nothing you do at that point will change whether or not there is anything in the box.

It is a paradox of decision theory that rests on two principles of rational choice. According to the principle of strategic dominance:

There are only two possibilities, and you don’t know which one holds:

Box A is empty: Therefore you should choose both boxes, to get $100 as opposed to $0.

Box A is full: Therefore you should choose both boxes, to get $1,100 as opposed to just $1,000.

Therefore, you should always choose both boxes, since under every possible scenario, this results in more money.

BUT:

According to the principle of expected value:

Choosing one box is superior because you have a statistically higher chance of getting more money. Most of the people who have gone before you who chose one box got $1,000, and most who chose both boxes got only $100. Therefore, if you analyze the problem statistically, in terms of which decision has the higher probability of a larger payout, you should choose only one box. Imagine one billion people going before you, and actually seeing so many of them have this outcome; any outliers become insignificant.
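Using the post's dollar amounts, the expected-value side of the argument can be computed directly. The 99% predictor accuracy below is an assumed figure, since the post only says "an overwhelming amount of the time":

```python
p = 0.99  # assumed accuracy of the Predictor

# One-box: if the Predictor foresaw it (probability p), Box A holds $1,000;
# otherwise you walk away with nothing.
ev_one_box = p * 1000 + (1 - p) * 0            # about $990

# Two-box: if the Predictor foresaw it (probability p), Box A is empty and
# you collect only Box B's $100; otherwise you collect $1,100.
ev_two_box = p * 100 + (1 - p) * (1000 + 100)  # about $110

print(ev_one_box, ev_two_box)  # one-boxing wins on expected value
```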

In terms of strategic dominance, two-boxing is always superior to one-boxing because no matter what is in Box A, two-boxing results in more money. One-boxing, on the other hand, has a demonstrably higher probability of resulting in a larger amount of money. Both of these choices represent fundamental principles of rational choice. There are two rival theories, Causal Decision Theory (which supports strategic dominance) and Evidential Decision Theory (which supports expected utility). It is pretty arcane but one of the most difficult paradoxes in contemporary philosophy.

Robert Nozick summed it up well:

“To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

There is also an offshoot of Newcomb’s Paradox called medical Newcomb’s problems. I’ve been in a situation like this before, so I’ll describe it:

I went on an antidepressant, and there’s a history of manic depression in my family. My psychiatrist told me that for some people, antidepressants bring out their manic phase, and they find out they have manic depression. They already did have manic depression, so it doesn’t cause it, it just reveals it. She told me to watch out for any impulsive decisions I was making, as that can be a sign of a manic phase.

I was in line at a convenience store and thought: should I buy a black and mild? I don’t really smoke, but for some reason it seemed appealing. Then I realized, that seems like an impulsive decision. But, if it is an impulsive decision, and I go through with it, and do indeed have manic depression, then I should just do it anyways. After all, it’s not making me have manic depression, it’s simply revealing something to me that I already had. On the other hand, if I don’t do it, then I have no evidence that I have manic depression, meaning that there truly is less evidence, and therefore I have no reason to believe that I have manic depression.

Expected utility = don’t buy the black & mild

Strategic dominance = buy the black & mild

These situations aren’t quite as easy to see, but they’re interesting anyways.

I’m doing quite well now and all indication is that I do not have manic depression.

Were you familiar with any of these paradoxes? Share your thoughts in the comments now!