Good afternoon smart people...

I have two questions relating to the Law of Large Numbers and standard deviation. Any thoughts or responses would be greatly appreciated!

FYI – I will stipulate in advance that when I took my college entrance exams (lo these many years ago), my verbal-related scores were maxed out and my math-related scores were in the low 50s. Historically, that means that if you communicate your responses very clearly and very simply, there is an average chance that my math-challenged brain might understand them. I will probably need to ask a few follow-up questions to understand any responses. Thanks in advance for any help you fine folks might offer in overcoming my probabilistic obtuseness.

The Law of Large Numbers states that "the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed."

Well and good. That makes sense to me. If I flip a fair coin 10 times, it would be no surprise that even though the expected probability is 50% for each possible outcome, the observed result might well come out 6-4 or even 7-3 biased to one outcome or the other. That doesn't mean the coin is biased; it is just expected deviation. If I understand correctly, though, if I flip that fair coin 100 times, the odds of a given percentage deviation from the expected 50-50 outcome drop. And if I flip the coin 1000 times, those odds drop even more. The observed proportion should get closer to 50-50, correct?
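(Not part of the original question, but the convergence described above is easy to see in a quick simulation. This is just a sketch using Python's standard library, with a fixed seed so the run is repeatable; the exact counts will vary with the seed, but the shrinking percentage deviation will not.)

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Flip a simulated fair coin n times and watch the observed
# proportion of heads drift toward the expected 50% as n grows.
for n in [10, 100, 1_000, 10_000, 100_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: {heads} heads ({heads / n:.2%})")
```

The absolute gap between heads and tails can still grow with more flips; it is the gap as a percentage of the total that keeps shrinking.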

From googling around on the internet, I have determined that out of a sample size of 100 events (100 fair coin flips), one standard deviation from the expected result (50-50) would be 5, and that there is a 68.27% chance of the observed results falling within that range (45 to 55 heads). I have also learned that two standard deviations from the expected result would be 10, and that there is a 95.45% chance of the observed results falling within two standard deviations (40 to 60 heads) given 100 events / flips.
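(Editor's aside: those figures come from the binomial distribution. For n flips of a fair coin, the standard deviation of the head count is sqrt(n · p · (1 − p)) with p = 0.5, and the 68%/95% coverage figures come from the normal approximation. A stdlib-only Python sketch that reproduces the quoted numbers:)

```python
import math

def binomial_sd(n: int, p: float = 0.5) -> float:
    """Standard deviation of the number of successes in n trials."""
    return math.sqrt(n * p * (1 - p))

def normal_coverage(k: float) -> float:
    """Probability of landing within k standard deviations of the
    mean, under the normal approximation: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

print(binomial_sd(100))    # 5.0 -> one SD is 45 to 55 heads
print(normal_coverage(1))  # ~0.6827, the 68.27% figure
print(normal_coverage(2))  # ~0.9545, the 95.45% figure
```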

So here is my first question: what would be the standard deviation on 1000 flips of that same fair coin, and what would be the chance of the observed results falling within that range (within one standard deviation of the expected result)? What would be the chance of the observed results falling within TWO standard deviations of the expected result? If I understand the Law of Large Numbers correctly, I assume that the standard deviation should be a smaller percentage of the total number of events with 1000 events than it was with 100 events, but I do not know how to calculate that new standard deviation.
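(Editor's aside: the same sqrt(n · p · (1 − p)) formula scales directly to larger n, assuming a fair coin so p = 0.5. The sketch below prints the standard deviation both as an absolute head count and as a percentage of the flips; the absolute SD grows with n, but its share of the total shrinks, which is the Law of Large Numbers intuition in the question.)

```python
import math

for n in (100, 1_000, 2_000):
    sd = math.sqrt(n * 0.5 * 0.5)  # binomial standard deviation
    print(f"{n:>5} flips: SD = {sd:6.2f} heads = {sd / n:.2%} of flips")
```

The one- and two-SD coverage probabilities (about 68.27% and 95.45%) do not change with n; only the width of those bands relative to the total does.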

Next question: given that the observed results are supposed to get closer to the expected average as the sample size grows, how large a sample would you need before you could look at the results and conclude that the coin itself was biased or that the events were non-random? Say, for instance, I flipped a coin 1000 times and the results came out to be 550 heads and 450 tails. What would be the odds against that happening if the coin were truly unbiased and the flips were truly random? That is a deviation of 5%, which for 100 flips would be exactly one standard deviation, and results outside one standard deviation should be expected about 31.7% of the time. But how often should one expect that 5% deviation with 1000 flips? What are the odds of a 5% deviation on 2000 flips? How large a sample with a 5% deviation from 50-50 before you grow highly suspicious that the events are not random / that the coin is biased toward a particular result?
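(Editor's aside: for a scenario like 550/450, the exact two-sided probability can be summed from binomial coefficients with nothing but the standard library. This is a sketch of the exact tail calculation for a fair coin, not a full significance-testing recommendation; `two_sided_p` is a name introduced here for illustration.)

```python
from math import comb

def two_sided_p(n: int, k: int) -> float:
    """Probability that a fair coin in n flips deviates from n/2 by
    at least as much as the observed count k does (exact, two-sided)."""
    dev = abs(k - n / 2)
    hi = int(n / 2 + dev)  # e.g. 550 for k = 550, n = 1000
    lo = int(n / 2 - dev)  # e.g. 450
    tail = sum(comb(n, i) for i in range(hi, n + 1))   # upper tail
    tail += sum(comb(n, i) for i in range(0, lo + 1))  # lower tail
    return tail / 2 ** n

# Same 5% deviation, growing sample sizes:
print(two_sided_p(100, 55))    # common (well under 1 SD of surprise)
print(two_sided_p(1000, 550))  # rare (more than 3 SDs out)
```

Note how the same 5% deviation goes from unremarkable at 100 flips to very unlikely at 1000 flips.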

I have some observed results from a large sample of real-life events that would seem to intuitively indicate that they are non-random (a 5% bias towards one result out of 1000 events that should theoretically have an equally likely binary outcome). I am trying to decide whether there is a likely bias, or whether I am just seeing expected deviation and imputing bias where there may be none. I know that intuition is often disproved by mathematical reality. Perhaps being patient and increasing the sample size to 2000 events would clear this all up.

As my old Coach used to say, "There are no stupid questions, only stupid people!"

Thank You in advance!