COVID19: Perspective From A (Sort Of...) Expert-Adjacent Person

What’s the likelihood that this becomes seasonal as opposed to a one-time phenomenon?

These are unlikely scenarios. At this time, we have no good reason to believe people don’t establish at least temporary immunity once they clear the virus (as AG indicated, there have been a few case-reports of ‘re-infection,’ but it isn’t at all certain these cases don’t simply represent testing failures).

As for mutating into a superbug…While of course no one can say this can’t happen, it is exceedingly unlikely. Unlike bacteria–for which inappropriate/indiscriminate antibiotic use breeds treatment-resistant strains–the fact that we can’t really treat most viruses means we can’t push them through a similar warp-speed (un)natural selection process.

5 Likes

Like AG, I’m not a virologist. But based on my limited knowledge of the subject, I’m
hard-pressed to come up with a reason why this virus wouldn’t continue to circulate through the population.

Fortunately, I also see no reason why we shouldn’t be able to come up with an effective vaccine.

2 Likes

Yep, this attitude of ‘old people can die so I don’t have to deal with a few weeks of boredom’ is headshaking stuff.

Do we have any idea of the age breakdown of people going into ICU?

Thanks for the knowledge bombs :+1:

My understanding is that a mask also serves as a not-so-subtle reminder to keep one’s hands away from one’s face. The less eye-rubbing, nose-scratching, and nail-biting the better.

2 Likes

A good article on Sensitivity and Specificity and the concept of accuracy in medical testing. Apologies for the wall of text but it’s a good article to introduce the concepts and why a test that is 90% sensitive could be much worse in a situation like this than our current policy of self-quarantining.

And the typical caveat that I’m not a Dr.


Headlines like these touting new medical tests often include impressive-sounding claims of accuracy.

Consider this HealthDay story about an experimental breath test said to be “85% accurate” for the detection of stomach cancer.

To many people, an 85% accuracy rate probably sounds pretty good — and the story about the test seemed to encourage a perception of precision. It speculated that the test could lead to “earlier diagnosis and treatment, and better survival” for individuals with stomach cancer.

But HealthNewsReview.org’s experts were not nearly as optimistic.

In their review of the news release that was the basis for the story, they pointed out that the test, if widely adopted, could conceivably lead to hundreds of false-positive results for every person who is correctly identified with stomach cancer. Those false-positive results weren’t mentioned in either the news release or the story that rehashed it. Our reviewers thought they should have been:

If the release is going to discuss potential unproven benefits, it should also mention the potential harms of screening tests including false-positives and false-negatives leading to over- or under-diagnosis. Chief among these harms would be falsely labeling healthy people as possibly having cancer and then subjecting them to invasive testing or even treatments that turn out to be unnecessary.

Sensitivity and Specificity

What else could have been done differently?

Both the news release and the news story would have been improved with discussion of two important concepts in medical testing: sensitivity and specificity.

They are the yin and yang of the testing world and convey critical information about what a test can and cannot tell us. Both are needed to fully understand a test’s strengths as well as its shortcomings.

Sensitivity measures how often a test correctly generates a positive result for people who have the condition that’s being tested for (also known as the “true positive” rate). A test that’s highly sensitive will flag almost everyone who has the disease and not generate many false-negative results. (Example: a test with 90% sensitivity will correctly return a positive result for 90% of people who have the disease, but will return a negative result — a false-negative — for 10% of the people who have the disease and should have tested positive.)

Specificity measures a test’s ability to correctly generate a negative result for people who don’t have the condition that’s being tested for (also known as the “true negative” rate). A high-specificity test will correctly rule out almost everyone who doesn’t have the disease and won’t generate many false-positive results. (Example: a test with 90% specificity will correctly return a negative result for 90% of people who don’t have the disease, but will return a positive result — a false-positive — for 10% of the people who don’t have the disease and should have tested negative.)
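The two definitions above map directly onto confusion-matrix counts. A minimal sketch (the function names are my own, not from the article):

```python
def sensitivity(true_pos, false_neg):
    """True-positive rate: the share of diseased people the test flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """True-negative rate: the share of healthy people the test clears."""
    return true_neg / (true_neg + false_pos)

# The article's 90% examples: of 100 people with the disease, 90 test
# positive; of 100 people without it, 90 test negative.
print(sensitivity(true_pos=90, false_neg=10))  # 0.9
print(specificity(true_neg=90, false_pos=10))  # 0.9
```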

It’s important to recognize that sensitivity and specificity exist in a state of balance. Increased sensitivity – the ability to correctly identify people who have the disease — usually comes at the expense of reduced specificity (meaning more false-positives). Likewise, high specificity — when a test does a good job of ruling out people who don’t have the disease – usually means that the test has lower sensitivity (more false-negatives).

Another everyday example

Airport security offers a good example of how these tradeoffs play out in practice. To ensure that truly dangerous items like weapons cannot be brought on board an aircraft, scanners at a security checkpoint may also alarm for harmless items like belt buckles, watches, and jewelry. The scanner prioritizes sensitivity and will flag almost anything that seems like it could be dangerous. But that means it also has low specificity and is prone to false alarms; a positive result is much more likely to be a shampoo bottle than it is an explosive device.

The same issues crop up when it comes to testing for deadly diseases like cancer. High sensitivity is desirable: missing cases of actual cancer could lead to delays in treatment that negatively affect outcomes. However, specificity is more important with cancer testing than it is at an airport checkpoint: false-positive results create anxiety and lead to unnecessary and invasive follow-up tests like biopsies. They raise costs for everyone involved and increase the likelihood of experiencing harm. Those harms can be significant enough to outweigh the potential benefits of the test. (Prostate specific antigen [PSA] testing is a good example of a low-specificity test that generates many false-positive results.)

Good and bad media models

Stories don’t have to get too technical to address the issues readers need to know about. This NPR story about a blood test for cancer never mentions sensitivity or specificity, and yet it effectively communicates the problems associated with false-positive and false-negative results. It quotes expert sources who warn that a negative result might be giving “false reassurance to the patient” and that a false-positive result might send patients on “needless and expensive medical odysseys.”

This CNN story about a test for Parkinson’s disease also drew kudos from our reviewers. They praised the story’s provision of sensitivity and specificity values, but noted that readers could have been given more context as to what these statistics mean. “The story could have gone further to explain, for example, that a low specificity test means it will have a high false-positive rate (more people who don’t have the disease are erroneously told that they have it),” they said.

It’s much more common for stories and news releases to overstate the accuracy of tests and gloss over potential harms as we saw in these examples:

  • This Guardian story touted an experimental blood test that was said not only to “detect autism in children” but also “could lead to earlier diagnosis.” Our reviewers noted that the story, based on a study of just 38 children, offered no data to back up its claim; nor did it warn of the harm that a false-positive or false-negative result could inflict on children and their parents.
  • Similar concerns were raised about this USA Today story about a genetic test for breast cancer. Reviewers said the story “doesn’t offer much information that readers can use to make decisions about the use of the recently-approved 23andMe test. For example, what is the rate of false-positive results? Or false negatives? What does that mean for actual risk of developing breast cancer?”
  • A New York Presbyterian Hospital news release touted a test that “detects prostate cancer with 92 percent accuracy.” But as pointed out on our blog, the 92% figure represents the sensitivity of the test – not the accuracy – which is a very different concept.

“The problem with the accuracy statistic is that it’s meaningless,” said Richard Hoffman, MD, MPH, director of the Division of General Internal Medicine at the University of Iowa Carver College of Medicine and the Iowa City VA Medical Center. He noted that in the medical literature, “accuracy” is usually defined as the sum of the true positive and true negative results divided by the sum of all test results. (Read the linked post above for more detail on how this is calculated.)

“This means that even a completely worthless test—unable to detect any patients with disease—would have a high accuracy if most patients do not have the disease. For example, if 10 of 100 patients have the disease, the test detects none of them, the accuracy—based on the true negatives—would be 90%,” he explained.
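Hoffman’s worthless-test example is easy to verify with his own numbers (100 patients, 10 with the disease, a test that never returns a positive):

```python
def accuracy(true_pos, true_neg, total):
    """'Accuracy' as defined in the medical literature:
    (true positives + true negatives) / all results."""
    return (true_pos + true_neg) / total

# A test that detects nobody: 0 true positives, but it still correctly
# "clears" the 90 patients who don't have the disease.
print(accuracy(true_pos=0, true_neg=90, total=100))  # 0.9
```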

Many stories aren’t clear on whether they’re using the medical literature’s definition of accuracy or using the term accuracy more generally — which can further add to the confusion.

What’s a ‘good’ test? It depends

The ideal test is one that has both high sensitivity and high specificity, but the value of a test depends on the situation, says Hoffman.

Generally speaking, “a test with a sensitivity and specificity of around 90% would be considered to have good diagnostic performance—nuclear cardiac stress tests can perform at this level,” Hoffman said.

Just as important as the numbers, though, is the kind of patient the test is being applied to. Hoffman noted that even a good test won’t offer much useful information if you’re testing the wrong population.

“If you’re testing people who you know going in are very likely to have the disease, they’re still likely to have the disease even if the test comes up negative,” he said.

The same is true of positive tests in people who are very unlikely to have the disease: “Just because the test comes up positive, that won’t give you much confidence that they have the disease if the prevalence of disease is very low in the patients that you’re testing.” As with an airport scanner looking for weapons, there’s a good chance any positive result is merely a false alarm.

The issue of false alarms is especially important when screening for diseases, such as cancer and HIV, in apparently healthy people who have a low likelihood of having the disease. In those cases the testing is done sequentially in a two-step process, Hoffman said.

“The initial tests are selected because they have high sensitivity (>99% in the case of HIV tests),” he said. “The expectation is that these tests do not miss patients with disease–and that all of those with positive tests (which could be a large proportion) will then undergo the highly-specific diagnostic gold standard test to confirm the diagnosis.”

The second step is meant to rule out the many false-positives resulting from the first test.
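The arithmetic of that two-step process can be sketched with round numbers. The 99% screening sensitivity comes from the quote above; the 95% screening specificity and 0.3% prevalence are my own illustrative assumptions, chosen only to show the shape of the tradeoff:

```python
population = 100_000
prevalence = 0.003          # assumed: 300 people actually have the disease
sens, spec = 0.99, 0.95     # step-1 screening test (spec is an assumption)

diseased = population * prevalence
healthy = population - diseased

true_pos = diseased * sens            # sick people correctly flagged
false_pos = healthy * (1 - spec)      # healthy people incorrectly flagged
flagged = true_pos + false_pos        # everyone sent on to step 2
missed = diseased * (1 - sens)        # the cost the screen tries to minimize

print(f"flagged for confirmatory testing: {flagged:.0f}")  # 5282
print(f"sick but missed by the screen:    {missed:.0f}")   # 3
```

Nearly all of the 5,282 people sent to the gold-standard test are false positives, which is exactly why the highly specific second step exists.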

Diagnosis vs. screening – a critical distinction

This brings us back to the stomach cancer breath test discussed at the top of this post.

Researchers claimed that the test could identify stomach cancer in otherwise healthy-seeming people who showed no signs of disease. Again, this refers to screening – which is finding early, non-symptomatic cases of disease in the general population. That’s different from diagnosis – which is when doctors try to find out exactly what’s wrong in people who are already complaining of symptoms.

Mammograms are used to screen for breast cancer; a positive result requires follow-up with an invasive breast biopsy to confirm the diagnosis.

Although the HealthDay story made claims about the test’s ability to screen for cancer, the study it was based on didn’t look at healthy people. About half of the samples tested came from people who were already known to have cancer, and most of those cases were in the advanced stages. While the test seemed to perform reasonably well in this population where most of the people had cancer (about 80% sensitivity and 80% specificity), applying the test to a healthy population would likely generate a disastrous outcome.

Our reviewers ran some hypothetical numbers on a healthy population where the stomach cancer rate is lower – say 1 out of 1,000. (They used round numbers for the purposes of explanation.) They calculated that for a test with 80% specificity (which corresponds to a 20% false-positive rate), there would be 200 false-positive results for every cancer that is accurately identified! This means that 200 people would suffer the anxiety of being told they may have stomach cancer, and then be referred for additional invasive testing to confirm or rule out the possibility of cancer.

Positive predictive value

This example raises a question that is usually top of mind for readers, but often not addressed in news stories: If I test positive for a disease, what are the chances that I actually have the condition that I was tested for?

Positive predictive value (PPV) – a statistic that encompasses sensitivity, specificity, as well as how common the condition is in the population being tested — offers an answer to that question.

In the breath test example, our reviewers calculated 200 false-positives for every person correctly diagnosed with disease. This means that the likelihood of a positive result correctly indicating disease is only 1 out of 201 or 0.5%. Not very good!
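Using the reviewers’ round numbers (1 case per 1,000 people, 80% sensitivity, 80% specificity), the positive predictive value follows from Bayes’ rule. The exact formula gives roughly 0.4%, consistent with the article’s rounded 1-in-201 ≈ 0.5%. A sketch:

```python
def ppv(sens, spec, prevalence):
    """Probability that a positive result is a true positive."""
    true_pos = sens * prevalence                 # P(positive and diseased)
    false_pos = (1 - spec) * (1 - prevalence)    # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Breath-test numbers from the article: 80%/80% test, 1-in-1,000 prevalence.
print(f"{ppv(0.80, 0.80, 0.001):.1%}")  # prints "0.4%"
```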

How could a test that‘s “85% accurate” in a study turn out to be so dismally inaccurate in practice?

“The problem is that tests are often evaluated in populations with a high prevalence of disease which inflates the positive predictive value,” says Hoffman. “When applied to a lower risk population the predictive value drops” because there are more false-positives. “This is particularly a problem when you are talking about screening, where the prevalence of disease in the population is usually quite low. This has important public health implications because the number of false-positive tests can be in the hundreds of thousands or even millions—and each of those patients will be advised to get the gold standard test.”
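Hoffman’s point about prevalence is easy to see by holding the test fixed and varying only the population. This sketch reuses the same hypothetical 80%-sensitive, 80%-specific test (the prevalence values are illustrative assumptions):

```python
def ppv(sens, spec, prev):
    # Positive predictive value via Bayes' rule.
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

# Identical test, very different populations: a high-prevalence study
# cohort versus general-population screening.
for prev in (0.50, 0.10, 0.001):
    print(f"prevalence {prev:>6.1%} -> PPV {ppv(0.80, 0.80, prev):.1%}")
```

The same test goes from a mostly reliable positive result in a half-diseased study cohort to an almost-always-wrong one in general-population screening.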

That’s one of the reasons why screening for disease in healthy people is so fraught; such tests inevitably flag many people who have nothing to worry about and turn them into patients.

(For reference, the PPV of a PSA test for prostate cancer is about 30%, whereas the PPV of mammograms for breast cancer is said to range from 4.3% to 52.4% depending on the expertise of the radiologist interpreting the image.)

The bottom line on medical tests

While many journalists are enamored with the thought of “simple” blood tests, medical testing is complicated. Readers aren’t well served by stories that blithely tout accuracy figures that don’t reflect reality.

Consumers and journalists can better inform themselves by always considering the sensitivity and specificity of the test, as well as the flip side of those statistics – the false-negative and false-positive rates.

Another useful barometer is the positive predictive value, which reflects the likelihood that a positive test result correctly indicates the presence of the disease.

Finally, journalists should always ask questions about the population that was studied, and whether those people are comparable to the people who would be tested in the real world. And remember that a test that’s reasonable to use in people who already have symptoms of disease (i.e. for diagnosis) may not be useful in people who seem healthy (i.e. for screening). News stories about medical tests must be mindful of the distinction.

https://www.healthnewsreview.org/toolkit/tips-for-understanding-studies/understanding-medical-tests-sensitivity-specificity-and-positive-predictive-value/

2 Likes

@ActivitiesGuy - What do you make of this?

The organ- and cell-specific expression of this gene suggests that it may play a role in the regulation of cardiovascular and renal function, as well as fertility. In addition, the encoded protein is a functional receptor for the spike glycoprotein of the human coronavirus HCoV-NL63 and the human severe acute respiratory syndrome coronaviruses, SARS-CoV and SARS-CoV-2 (COVID-19 virus).

https://www.ncbi.nlm.nih.gov/gene/59272

I’ve asked my mates about this, 2 virologists, and they are saying it’s a nasty habit of upper respiratory viruses in the COVID family, but the data isn’t there to make a determination yet.

I’m bordering on ignoring Chinese stats tbh. Italy may have been hit hard, but there seems little reason to trust the CCP’s stats on this.

2 Likes

I posted this in another thread but it really belongs here. Not to derail the thread, but to pause and reflect on these unusual circumstances we find ourselves in. I’m sure he’s got a LOT on his plate right now but he’s still taking the time to post here.

@ActivitiesGuy and I both started posting here around the same time. We both deadlifted 600+ lifetime natty. We both dig kettlebells (my Rogue 70lb is on its way to join the club). We’ve had a ton of great interactions in various training discussions over the years.

All these years later, when something big comes our way for the first time in a long time, he’s still here. He’s become a high level epidemiologist and he’s still here giving us the scoop when we need it the most.

I love you buddy. Thanks for all that you do!

11 Likes

As a guy with an ailing, cancer-surviving, diabetic grandfather, whom I dearly love, I have two words for the people who think that it’s fine if it ravages the elderly because that ain’t in my back yard: fuck off.

5 Likes

I have to catch up with this thread, but I am intensely interested in a good quality discussion and also glad to hear from you!

1 Like

My grandma’s 91 and my parents are both 68. Right there with your thinking buddy.

3 Likes

Jealous of all this. I’m getting a bigger KB as well, as it looks like the gym isn’t an option for a bit. But mine will be in the 55-60 range.

1 Like

Also not a virologist but feel I can contribute here. Will address mutation first and reinfection next.

  1. On the topic of mutation, I posted this in the other thread but feel it needs to be said again. Mutations are random, despite the influence of Hollywood movies like “Outbreak” and such. On a short timeline they are just as likely to be hurtful to the virus’s survival or infectiousness as helpful to its ability to spread and/or kill. Obviously on a long enough timeline with selection pressures that’s not likely to be true, as mutation rate is one factor that determines viral “fitness”, resistance to vaccination or vaccination strategy, likelihood of species hopping like our current pandemic, and treatment strategy (e.g. HIV-1 needs multiple drugs to suppress).

Basically in our current scenario I don’t know and I’m not sure that there’s ANY data on it yet (that I’m aware of). Since it’s an important factor I am sure someone is working on it or published a preprint I don’t know about. Read below only if you’re interested in a general idea of mutation rate.

In a general sense, it’s a very complicated thing to measure so things get messy very quickly. There’s no way I want to cover that whole topic here, and I couldn’t do it anyway because I’m not a virologist. But basically viruses in general have a higher mutation rate because of ineffective “proofreading” when they replicate their genomes. You zip along writing quickly and don’t ever look at the draft you’re typing, basically. RNA viruses lack any proofreading at all, except the family to which coronaviruses belong. These do have limited proofreading.

The vast majority of all mutations will be lethal/negative for the virus. Some will be neutral, very few beneficial. Due to the rate viruses replicate at, though, they can have rapid accumulation of mutations (depending on the copying mechanism, not relevant for this). Coronaviruses in general have a slower rate of mutation due to their proofreading ability, which also lets them maintain a larger genome. Continuing the writing analogy, the less proofreading you have the quicker your “book” turns to gibberish; the more proofreading, the longer a book you can write.

  2. On the topic of reinfection, some of this could be testing issues, but it’s not clear how much. Someone gets a false-positive reading, goes home and isolates, feels better, then goes out, catches the real thing, and tests positive again (this time for real). That sort of event can happen, and is happening, especially with the hundreds of thousands of tests being done. One of the downsides of a rapidly developed test for a rapidly evolving situation is that it may have a large likelihood of false positives. It’s “test triage” in a way–we need something that gives us information, so even if there’s a weakness of the test we take the bad with the good, if that makes sense.

However, it’s unclear how much it’s happening versus how much is the virus itself being that insidious or our lack of immunity to a novel pathogen. I do not have any data on current test kits being used (and there look to be several).

I posted in the other thread a study from Cell that showed antibodies to the original SARS virus had cross immunity to CoV2 (meaning the immune system reacts to the current virus if it has the “memory” of the original SARS).

This would mean some protection but not necessarily immunity to infection. This also suggests that people infected by CoV2 would retain protection via immune “memory” after recovering, like you would with other illnesses—this is what you would expect naturally from our life experiences getting bugs lol. Btw, this is somewhat linked to mutation rate as you surmised, since quick mutation means the potential to “hide” from the immune system by losing the sites to which the antibodies bind. Gross oversimplification on many levels. Professors would be mad at me, but you get the gist of it.

I view the cross reactivity reported in the Cell study as a good thing here because it suggests that there are highly conserved genome areas between strains that might make good vaccination targets. #notavirologist

The flip side however is that it is unknown if the weakened state of a recovered patient–or other specific factors like diminished lung capacity or lung fibrosis which AG mentioned above–allows easier reinfection even with antibodies active.

Typed on my phone, so forgive the hastiness/lack of clarity/typos

6 Likes

Statistically, within China the biggest mediators with regard to determining whether the disease had a lethal outcome or not were the presence of

  • cardiovascular disease
  • diabetes mellitus
  • hypertension
The vast majority of those 75+ will have at least one of these “pre-existing” conditions. Just because 99% of those who died were afflicted with some sort of ailment isn’t cause for comfort. With today’s sedentary lifestyles and shitty diets, you’re getting kids developing type 2 diabetes, hypertension, etc.

I am on an island; we are seeing a growth curve along the lines of the exponential growth that was seen in Italy, China, etc. Our government as a whole, health minister, etc., are idiots. They’ve refused to close down schools/universities despite mass gatherings of (now) 100+ being banned (school could easily fit within this category), and we aren’t testing adequately. Even those who are symptomatic aren’t receiving testing unless they’ve been in contact with known cases… It’s a real shit-show here; if you look at how Scott Morrison handled the bushfires, one doesn’t have to be a genius to surmise that perhaps he isn’t the best equipped to handle a pandemic. It may be easier to contain when borders can be effectively shut off (no connected neighboring countries), but if you don’t take adequate preventative measures within the community (post exposure) to begin with, this means fuck all.

Wouldn’t be surprised if we have a few million infected a month from now

It’s been reported to be hypertension, type 2 diabetes, CVD, etc.

Australia is a close contender to America when it comes to public health status.

It’s a little bit early to legitimately confirm an exact mortality rate. But within the USA, it appears a frighteningly high percentage of hospitalisations involve younger people (20-50) without any known prior medical conditions.

To my limited range of knowledge, masks are effective IF fitted to the individual wearing them, and even then they’re only effective until the mask becomes moist from the user’s breathing, at which point particles aren’t filtered adequately. This typically takes a few hours, so masks need to be changed multiple times per day. Given the shortage of masks (a major problem here in Aus), doctors are getting exposed. Infected doctors may become critically ill, and regardless will need to self-isolate, so with doctors being exposed… we may eventually face a shortage of them…

They’re partially protective. Coronavirus is transmitted primarily through droplets (coughing/sneezing), masks effectively protect against this. However I do believe tiny viral particles (aerosols) can make their way through the mask… Exponentially better than nothing, but not foolproof. Or am I wrong @EyeDentist, can these masks filter out even the tiniest of viral particles? Furthermore, how many hours is each mask effective for?

2 Likes

Surgical masks are not intended to protect the wearer so much as to protect others (specifically, the unconscious pt on the OR table) from the wearer. They prevent the wearer from sneezing/coughing/spitting/drooling on the surgical field. Now, this is not to say they don’t afford the wearer some protection from gross contamination; eg, blood from a cut artery spurting into your mouth/nose in the OR; spittle/snot droplets expelled during a sneeze or cough flying into your mouth and nose in other settings. Unfortunately, COVID can become aerosolized, which means it can easily flow through the huge spaces between a surgical mask and the wearer’s face. (It can also pass directly through the mask material itself, so taping the mask down wouldn’t help.) Thus, surgical masks are inadequate protection from contracting COVID.

Now, one might reason (as @unreal24278 did above), ‘Well, they provide some protection, so what’s the harm in wearing one?’ The harm is twofold, with the first being theoretic and the second quite real:

First: Wearing a mask may induce a false sense of security in the wearer, leading him/her to engage in behavior (eg, relaxing isolation and social distancing guidelines) that puts him/her and others at risk.

Now, one could argue ‘That’s not me–I’ll still stick to the guidelines, but will wear a mask for a teensy bit more protection when, say, I go to the store.’ But then there’s #2:

Second: We (America, but much of the rest of the world as well) are rapidly running out of surgical masks.

This is real, folks. I am now required to use the same mask from one day to the next, and my podunk hospital has yet to receive its first confirmed case of the virus. You may have seen news footage of hospitals around the country making their own masks with craft-store materials. This is not fake news. And if HC providers and ancillary staff (eg, the people who manage the O2 tanks) can’t get the limited protection afforded by surgical masks, and if we don’t have them to put on pts with suspected COVID…things will be even worse than they already are.

So please, resist the urge to buy surgical masks, and wear one only if instructed to do so by your healthcare provider. Unlike toilet paper, this sort of hoarding will have dire repercussions.

8 Likes

It’s the etc. that needs to be figured out.

At my job we haven’t discussed, at least officially, how to decon masks for re-use. Trawling the web, I have found that some agencies/facilities use bleach, alcohol, or UV light to clean “disposable” masks for re-use.

What are you guys doing, if anything?
If you’ve seen official recommendations for deconning masks from the CDC, or anyone else please share a link so I can kick the idea up the chain of command. Thanks!

I linked a generic magazine type article about copper in the other thread, but found this more scientific article:

Relevant parts: “SARS-CoV-2 was more stable on plastic and stainless steel than on copper and cardboard, and viable virus was detected up to 72 hours after application to these surfaces”; also: “On copper, no viable SARS-CoV-2 was measured after 4 hours and no viable SARS-CoV-1 was measured after 8 hours. On cardboard, no viable SARS-CoV-2 was measured after 24 hours and no viable SARS-CoV-1 was measured after 8 hours (Figure 1A).”