The claim that the unvaccinated were less likely to get tested is spurious, inasmuch as the unvaccinated were systematically required to get tested to gain entrance to large attractions, clubs, football grounds, etc. Case numbers in the UK never fell below 25,000 from late June. This was most likely fully vaccinated people complacently going about their lives as if they were fully protected, believing the vaccines actually worked.
That's a good observation. The unvaccinated subgroup may have been tested more but exposed less, and the vaccinated subgroup tested less and exposed more, over a particular time window. But as I understand it, in the UK the unvaccinated were still allowed to participate in society, so the exposure should not be too dissimilar. This is also confounded by what we know about the inaccuracy of PCR testing, given it was applied at different rates to the two groups.
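The testing-rate confounder can be sketched numerically. This is a toy illustration only; the prevalence, false-positive rate, sensitivity, and testing rates below are all invented assumptions, not real figures. It shows how unequal testing rates alone can skew raw case counts between two groups with identical true infection rates:

```python
# Toy sketch: how a fixed false-positive rate plus unequal testing rates
# can distort case comparisons. All numbers are illustrative assumptions.

pop = 100_000          # hypothetical size of each group
true_prev = 0.01       # same true infection prevalence in both groups
fp_rate = 0.005        # assumed PCR false-positive rate
sensitivity = 0.9      # assumed PCR sensitivity

def apparent_cases(tests_per_person: float) -> float:
    """Positive test results, given how often the group is tested."""
    tests = pop * tests_per_person
    true_pos = tests * true_prev * sensitivity
    false_pos = tests * (1 - true_prev) * fp_rate
    return true_pos + false_pos

# Suppose the unvaccinated are tested twice as often (venue entry rules):
unvax = apparent_cases(2.0)
vax = apparent_cases(1.0)
print(unvax / vax)   # raw case counts differ despite equal true prevalence
```

Under these assumptions the unvaccinated group reports twice the cases purely because it is tested twice as often, which is why raw counts need adjusting for testing rates before either group is compared.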
In Australia there was a period of about five months (end of June till December) when the unvaccinated were locked out of all but essential activities, like purchasing food. When this was relaxed, Omicron started. So if the data were available here, it is likely biased over that period. However, even given the uncertainty, it's clear vaccination does not stop infection as originally promised, and restriction of civil liberties based on vaccination status is unsupportable.
The lack of expertise in the collection of data is one lesson to take from this pandemic.
Do you really think there is a "lack of expertise in the collection of data"? As you point out multiple times, there's a visible effort to control the narrative and fudging things like definitions in order to achieve that. It seems much less benign than lack of expertise.
This stuff's not rocket science; I find it hard to believe no proper statistical analysis has been done. When the inconsistencies are pointed out by the likes of Norman Fenton, they don't provide a proper response; they just make excuses and say things akin to "nothing to see here, move along". Anyone who's taken an undergraduate course in basic statistics should be smelling a rat.
The quality of the data throughout has been absolutely appalling, to the extent that one cannot say anything with any real degree of certainty. I have come to just try and work with the raw data, but then one is accused of 'missing context', which is nothing more than a 'failure' to frame the data within the official narratives.
Hey Andrew, perhaps you’ll find this link interesting https://www.bitchute.com/video/256KlyHOlxJV
Thanks, I had a look at it. For the week after this analysis, the reports switched to three-dose figures, to make the cases look better. And presumably, after initially seeming better, it has got worse. I think there is a way to get the two-dose numbers out of the figures. When I get time I'm going to follow that up.
Hi Andrew, if you like crunching numbers it might be worth looking at the numbers in the NSW Weekly Surveillance Reports and www.covid19data.com.au; there's a treasure trove of Australian raw data. I'm currently crunching the NSW data to try and untangle the time-shifted nature of "Two Effective Doses" etc., to work out the "true denominators" so I can properly estimate the vaccine effectiveness over time in NSW. My preliminary hunch is that it's a statistical illusion created by the time shifting caused by the definition of "effective dose", much like in one of Norman Fenton's videos where he time-shifts death reporting to show how easily data can be misinterpreted. Also, you should check out John Dee's Substack; he randomised a large UK NHS data set and amazingly still found vaccinations to be effective. Your simple example above also applies to the NSW data; however, just dividing the numbers like you have done doesn't take into account the group sizes. It gives you a clue as to what's going on, but not the whole picture.
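The group-size point can be made concrete with a minimal sketch. The case counts and population sizes below are invented for illustration (they are not NSW or UK figures); the point is only that a raw ratio of case counts and a comparison of per-100k rates can point in opposite directions:

```python
# Minimal sketch with invented numbers: why raw case counts mislead
# when the two groups have very different sizes.

vax_cases, unvax_cases = 9_000, 3_000        # hypothetical weekly cases
vax_pop, unvax_pop = 6_000_000, 500_000      # hypothetical group sizes

naive_ratio = vax_cases / unvax_cases        # ignores the denominators

vax_rate = vax_cases / vax_pop * 100_000     # cases per 100k population
unvax_rate = unvax_cases / unvax_pop * 100_000

print(f"naive ratio of counts: {naive_ratio:.1f}")
print(f"vaccinated rate:   {vax_rate:.0f} per 100k")
print(f"unvaccinated rate: {unvax_rate:.0f} per 100k")
```

With these made-up numbers the vaccinated group has three times the raw cases but a quarter of the per-capita rate, so which denominator is the "true" one is exactly the question that matters.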
P.S. Norman Fenton's Twitter post directed me here. He seems really good at what he does; not sure why more people aren't noticing the inconsistencies in the narrative.
Hi Ivo, I just did an analysis of the NSW data and posted
https://andrewmadry.substack.com/p/could-vaccination-be-more-effective
to compare with what was found for the UK. Similar results. There were a lot of challenges in getting the numbers needed. I've put some of that in the second part of the post.
Interested to follow what you find if you are looking at NSW data. It's very helpful to compare notes.
And yes, Professor Fenton's work was what inspired me to look further into this.
Hi Andrew, Professor Fenton has also inspired me to have a "good look" at the data, especially the "statistical illusion" created by shifting the time base, which I believe is distorting the true picture in the NSW data. I can privately send you my "hypothesis", which is a work in progress, and some unusual patterns in the data that led me to take the approach I have.

Basically, I'm fitting a function that has the "correct" characteristics to the rollout and case data and modelling a placebo (i.e. 0 effectiveness) to see if it lines up with the NSW data. Any variation from placebo should be the true effectiveness. I tried a number of functions, and the logistic function had all the "properties" I wanted. I think I may just be reinventing my own form of "logistic regression", but I started before I knew what logistic regression was, and it gave me an insight into how time shifting affects the data, i.e. what is the "right" number to use in the denominators for comparison.

I've looked at a study where they used logistic analysis to determine effectiveness, and it shows an "upward tick" at the end, indicating the effectiveness wanes over time then starts to rise again. This shouldn't happen, indicating to me that a time shift is occurring in their analysis or data. My background is in Engineering, not statistics, but the underlying differential equations are the same. My "math" should work, but my "statistical" explanation may be incorrect.
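The time-shifted-denominator idea can be sketched roughly as follows. Everything here is an assumption for illustration, not the commenter's actual model: the logistic parameters are invented (not fitted NSW values), a two-week "effective dose" lag is assumed, and the epidemic is taken as flat. The point of the sketch is that in a placebo world with zero true effectiveness, classifying cases by a lagged "effective dose" date while dividing by the raw rollout denominator yields a spurious positive effectiveness:

```python
import numpy as np

def logistic(t, L, k, t0):
    """Logistic curve: L / (1 + exp(-k * (t - t0)))."""
    return L / (1.0 + np.exp(-k * (t - t0)))

weeks = np.arange(41, dtype=float)

# Hypothetical two-dose rollout; L, k, t0 are invented, not fitted values.
coverage = logistic(weeks, L=0.9, k=0.4, t0=20.0)   # raw two-dose share
# "Effective dose" counts people only from 2 weeks after the second dose,
# i.e. the same curve shifted 2 weeks to the right (assumed lag).
effective = logistic(weeks, L=0.9, k=0.4, t0=22.0)

# Placebo world: zero true effectiveness, so the share of cases among the
# "effectively dosed" simply equals their population share, `effective`.
# Mismatch: the numerator is classified by the lagged definition while the
# denominator uses the raw rollout (and vice versa for the other group).
rate_vax = effective / coverage
rate_unvax = (1.0 - effective) / (1.0 - coverage)

apparent_ve = 1.0 - rate_vax / rate_unvax  # nonzero purely from the shift
```

Under these assumptions the apparent effectiveness sits around 50% during the rollout and decays toward zero once coverage saturates, which superficially resembles "waning". Whether the real NSW definitions actually produce this mismatch is exactly what would need checking against the surveillance reports.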
Hi Ivo you can contact me directly via andrewmadry@substack.com
Interested to compare notes. Sounds like an interesting approach. The example of shifting data by Prof Fenton is fascinating.