Low Carb 4000+ Calorie Diet?.....Please Help

As an aside (euphemism for ‘threadjack’), when Bill Roberts wrote “I don’t recall the specific paper on it that I have read or the exact outcome of the proof, but there’s a proof that to evaluate likelihood that result was caused, one takes the estimated likelihood of causation before the experiment, and then combines this in a given way with the p value,” I’m pretty sure he’s referring to Bayes’ theorem, and its use as an alternative method of hypothesis testing.

As an aside to this aside, the Bayesian approach is oft-touted but underutilized in clinical medicine, specifically with regard to ordering and interpreting diagnostic tests.
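To make the diagnostic-test idea concrete, here is a minimal sketch of Bayes' theorem in code; the prevalence, sensitivity, and specificity numbers are made up purely for illustration.

```python
# Minimal sketch of Bayes' theorem applied to a diagnostic test.
# All numbers are hypothetical, chosen only to illustrate the idea.

def post_test_probability(pre_test_prob, sensitivity, specificity):
    """Probability of disease given a positive test result."""
    p_pos_if_diseased = sensitivity
    p_pos_if_healthy = 1 - specificity
    p_positive = (p_pos_if_diseased * pre_test_prob
                  + p_pos_if_healthy * (1 - pre_test_prob))
    return p_pos_if_diseased * pre_test_prob / p_positive

# A seemingly good test (90% sensitive, 95% specific) for a rare condition:
print(post_test_probability(pre_test_prob=0.01, sensitivity=0.90, specificity=0.95))
# ~0.15 -- even after a positive result the disease is still unlikely,
# because the pre-test probability (the prior) was so low to begin with.
```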

[quote]EyeDentist wrote:
As an aside (euphemism for ‘threadjack’), when Bill Roberts wrote “I don’t recall the specific paper on it that I have read or the exact outcome of the proof, but there’s a proof that to evaluate likelihood that result was caused, one takes the estimated likelihood of causation before the experiment, and then combines this in a given way with the p value,” I’m pretty sure he’s referring to Bayes’ theorem, and its use as an alternative method of hypothesis testing.

As an aside to this aside, the Bayesian approach is oft-touted but underutilized in clinical medicine, specifically with regard to ordering and interpreting diagnostic tests.[/quote]

I am writing a long screed to reply to Bill, haha. And yes, it sounds as though he is describing Bayesian statistics.

Bill, I am going to (eventually) post my reply to you in a separate thread. I am currently doing some actual work, and intermittently returning to my reply for short breaks from actual work. But look for it later today. I’ll include your name in the subject line, but I’ll probably post in the Get A Life forum.

[quote]Mxdamien wrote:
So here is what I have drawn out now as far as things in my diet.

Protein - Chicken breast (6 oz/day), 93/7 ground beef (4 oz/day), Salmon (5 oz/day only 3 times per week on cardio/rest days), Whey isolate (1 serving (2 on cardio days)/ post-workout), Casein (1 serving Before Bed), Whole Eggs (4/day - 3/day on cardio days with salmon).

Carbs - Broccoli (8 oz/day), Quinoa (0.5 cup/day), Brown Rice (1 cup/day for lunch)

Fats - Olive Oil (8 Tbs spread through out the day), Heavy Whipping Creme (1 Cup/day), Coconut Oil (2 Tbs/day), Raw Macadamias (5 oz/ spread throughout the day),

Final Macros are:
With Salmon and 3 Whole Eggs ; Without Salmon ( only 4 Whole Eggs as substitute)
Calories - 3,970 w/ Salmon+3 eggs ; Calories - 3,773 w/ 4 eggs
Protein - 192 grams ; Protein - 171 grams
Carbs - 119 grams ; Carbs - 120 grams
Fats - 310 grams ; Fats - 301 grams
Sat Fats - 52 grams ; Sat Fats - 53 grams
MonoUnsat. - 185 grams ; MonoUnsat. - 183 grams

I am worried that the total saturated fat is a problem given my already high cholesterol. The heavy cream has a lot, and so does the coconut oil.
My training split is weight training MTRF and sometimes Saturday, supplemented with cardio MWF. So three days out of the week I choose to put down more calories, since I am basically doing two workouts.
I burn around 3,000 - 3,200 calories a day.
My new goal for now is just to get to 175-180 on a clean, healthy diet that won’t kill me, push me further toward becoming diabetic, or cause heart issues due to cholesterol.
Thoughts? Concerns? Add? Take away? Replace? Substitute? Portions? Any advice is greatly appreciated!
[/quote]

Personally I would rather have at least as much saturated as mono-unsaturated fat. I question 180 grams of monounsaturates because I am not sure you can get 180 without getting a lot of omega-6s too. Even olive oil is giving you nearly 20 grams of omega-6 at that point. Coconut will give you about 3. Same for macadamia I think.

Also, I am only telling you what I would do within your parameters. I don’t know the full extent of what your doc has done and why. I certainly would not cut egg yolk or cream based solely on the blood test results you have shown, but if I had a high calcium score, for example, or had already had a heart attack, I might do things differently; I don’t know, as I have not researched it much. Cholesterol, for example, is sent to damaged blood vessels to HEAL them, which is why it is found in arterial plaques, but whether at some point, for a non-healthy person, it becomes like putting too many bandages on a wound, I don’t know. The same goes for whatever your doctor has done regarding your digestive issues.

By the way, have you considered that you might have celiac disease or an absorption problem?

[quote]Bill Roberts wrote:
The practice so many authors, doctors, and scientists have of lumping fats into groups, instead of speaking of specific ones, and then branding the groups good or bad is just unwarranted oversimplification and comes to wrong conclusions. Almost all the studies they “prove” their points with change more than one thing at a time, or measure a thing which isn’t an accurate predictor of health (for example, total cholesterol or even LDL, versus observed cardiovascular effects or even versus a better measure such as oxidized LDL.)
[/quote]

One quick thought on this, though, while I am here. A primer on how this happens. Much of this will be familiar to well-read folks.

Epidemiologic studies showed that people with high cholesterol levels had a high risk of heart disease. Since cholesterol is found in atherosclerotic plaques, it was very easy to sell a biologically plausible hypothesis that high cholesterol actually causes heart disease, and thus the Collective Field Of Science rushed off to find ways to lower cholesterol.

What was not understood at that time is that high cholesterol may be a marker of disease risk, but does not actually cause heart disease. Cholesterol plays a part in an enormously complex biological process of a developing atherosclerotic plaque, yes. But it doesn’t “cause” the process to begin, nor does its removal stop that process. Yet that’s the way we tend to treat it - as though if we can just get that cholesterol out of people’s blood, it will stop clogging their arteries.

One of my favorite analogies for this is the firefighters. Imagine the town mayor saying that we have too many fires in our town, and since there are always firefighters around when there’s a fire, we’re going to get rid of some of the firefighters to reduce the number of fires.

The firefighters aren’t causing the fire; they’re arriving on the scene to fight the fire. Similarly, the cholesterol isn’t actually causing the atherosclerosis. It “arrives on the scene” in response to injurious processes that are already happening.

And also, high cholesterol being a “marker” of disease risk doesn’t mean that everyone who has high cholesterol has high disease risk. What it means is that, across the entire population, heart disease occurs more commonly in people with high cholesterol; but “people with high cholesterol” end up like that for all sorts of different reasons. Not everyone who has high cholesterol has the bad processes going on. Yet they’re treated that way. Again, something that’s poorly understood.

It is so ingrained in physicians’ minds that we must “treat” the high cholesterol that they just cannot wrap their brains around this revelation that cholesterol is not a causal factor in disease.

(One step further about my occupation: after starting in a women’s health group out of necessity, since that was the “available” job when I completed my doctorate, I now work as a biostatistician at the UPMC Heart and Vascular Institute, with cardiologists and cardiac surgeons.)

At our recent journal club, we reviewed an article about an efficacy/safety trial of the latest hot cholesterol-lowering medication (PCSK9 inhibitors). The trials to date have indeed shown massively impressive reductions in LDL-c, but we have not yet had a large conclusive trial about their effectiveness in reducing CVD morbidity and mortality.

This is understandable, by the way; that’s how research has to work. We have small studies that establish a drug can be given safely in the short term on a small number of people, then medium studies that show it actually does what it’s biologically presumed to do, then big studies to show that biological process actually translates to better clinical outcomes.

And yet…in journal club, the cardiologists and cardiology fellows sitting around the table were so blown away by the massive reductions in LDL-c that they were all pretty much ready to skip the next trial. It would be unethical to withhold this drug from people, in their minds, because hey, look, it offers huge reductions in LDL!

Again: complete lack of understanding that artificially deflating someone’s LDL does not mean that you will reduce their risk of heart disease, because LDL does not CAUSE the disease. LDL is part of the response to a disease process!

These are freaking cardiologists. People that treat heart disease!

It has been a long time for me; I was an ex-phys researcher in my early 20s for a couple of years, two careers ago. Also, my other hobby is advanced sabermetrics, where we look at things like whether a batter hitting at a certain level at home versus on the road is likely due to a psychological difference, a ballpark effect, or randomness.

If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? I remember being hammered about type 1 and type 2 errors. I think it means that you will only wrongly make a type 1 error less than 5% of the time.

But isn’t a type 1 error a wrong rejection of the null hypothesis? If the null hypothesis is “no relationship” and you have guaranteed that you are only rejecting a true null 5% of the time, doesn’t that mean that 95% of the time there is at least some non-chance relationship?

[quote]mertdawg wrote:
If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? I remember being hammered about type 1 and type 2 errors. I think it means that you will only wrongly make a type 1 error less than 5% of the time.

But isn’t a type 1 error a wrong rejection of the null hypothesis? If the null hypothesis is “no relationship” and you have guaranteed that you are only rejecting a true null 5% of the time, doesn’t that mean that 95% of the time there is at least some non-chance relationship?[/quote]

I will address these things in my long reply to Bill later. You’re sorta-right. The key thing you’re missing is that there’s a massive difference between a “relationship” and a “causal relationship” - and yet most people treat all “associations” as causal associations.

A “real relationship between the variables” is a hazy notion.

Suppose I collected data on the mean BMI of a whole bunch of U.S. cities and the number of miles to the nearest beach. I run a simple linear regression on my data and find that there’s a strong association that closer to the beach => lower BMI, p<0.01.

That doesn’t mean living near a beach magically “causes” people to be in better shape than their inland-dwelling countrymen. It might mean that people in cities closer to beaches value being outdoors, spend more time being outside/active in general, and thus have a lower BMI. But there’s not really a causal relationship between “living close to the beach” and “lower BMI”; the association is driven by unmeasured confounders, like the higher activity level of people who decide to live near the beach.
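A quick simulation of that kind of setup makes the point; the "activity" confounder and all of the numbers below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # hypothetical cities

# Invented data-generating process: activity level drives BOTH variables,
# and beach distance has no direct effect on BMI at all.
activity = rng.normal(0, 1, n)                           # unmeasured confounder
miles_to_beach = 50 - 20 * activity + rng.normal(0, 10, n)
mean_bmi = 27 - 1.5 * activity + rng.normal(0, 1, n)

slope, intercept, r, p, se = stats.linregress(miles_to_beach, mean_bmi)
print(f"slope = {slope:.3f}, p = {p:.2g}")
# The regression happily reports a "significant" beach-distance/BMI association,
# even though distance never appears in the equation that generated BMI.
```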

So just saying there’s a “low probability of the result occurring by chance” is NOT equivalent to saying that there’s a causal relationship between two variables.

[quote]ActivitiesGuy wrote:

[quote]mertdawg wrote:
If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? I remember being hammered about type 1 and type 2 errors. I think it means that you will only wrongly make a type 1 error less than 5% of the time.

But isn’t a type 1 error a wrong rejection of the null hypothesis? If the null hypothesis is “no relationship” and you have guaranteed that you are only rejecting a true null 5% of the time, doesn’t that mean that 95% of the time there is at least some non-chance relationship?[/quote]

I will address these things in my long reply to Bill later. You’re sorta-right. The key thing you’re missing is that there’s a massive difference between a “relationship” and a “causal relationship” - and yet most people treat all “associations” as causal associations.
[/quote]

Thanks, I’ll come back later. It came up in part because we were comparing different defensive metrics in baseball (for many specific players) to see if they were likely to be describing the same “thing”. I took a rejection of the null to mean that there was only a 5% chance that the two metrics described completely different things.

Also, on a different note, there is the misinterpretation of correlation coefficients. We found, for example, a .8 correlation between two defensive metrics (using many different players). Again, my memory was that if you square that (to get .64), the .64 means that 64% of the variance in one variable is explained by variance in the other variable.
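As a toy check of that reading, with simulated numbers that have nothing to do with any real defensive metric:

```python
import numpy as np

rng = np.random.default_rng(2)
metric_a = rng.normal(0, 1, 300)                          # hypothetical metric A
metric_b = 0.8 * metric_a + 0.6 * rng.normal(0, 1, 300)   # built to correlate ~0.8 with A

r = np.corrcoef(metric_a, metric_b)[0, 1]
print(r, r ** 2)  # r ~ 0.8, r^2 ~ 0.64: ~64% of the variance in B is shared with A
```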

Thanks, I’m out for a while… Hope the OP got some ideas.

Saying that cholesterol causes heart disease is analogous to saying that bandages cause wounds.

[quote]ActivitiesGuy wrote:

[quote]mertdawg wrote:
If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? [/quote]
[/quote]
Something I would add to AG’s reply is that there can be a high probability, even near certainty, of the result being by chance “despite” this p value.

Let’s say we ask a bunch of people to come up with the most idiotic ideas they can think of as to what treatments might change various measured blood values of rats.

One comes up with a treatment wherein the water for the rat’s water bottle is swirled as it’s added to the bottle, instead of just poured in as usual. Another conceives of a treatment wherein the researchers are required to speak only in pig Latin while working with the rats, instead of English. Still another changes brands of light bulbs in the labs, at the same lighting intensity and color temperature. In total, 100 idiot ideas are conceived.

And 10 blood test variables are examined for each study.

Most likely, out of these 1000 possible “effects” being studied, 50 or so will “show an effect” to p <= 0.05.

It will be almost certain that not a single real effect existed.

I know my original post was eye-glazing due to length and probably writing style, but it goes over this more. A p value in itself does not calculate or show the probability that a real causal effect exists, or even necessarily that one is particularly likely.

The more variables being measured in a study, and the less plausible the effects in the first place (most possible treatments in fact don’t provide benefit), the more likely it is that the outcome was not caused, but arose from chance alone.

There are many studies which measure 20 things at a time. Most such studies will generate, from chance alone rather than cause, an “effect” if effect is judged by meeting p <= 0.05. Most authors will at least suggest their data supports a causal relation. The reader must beware.

Or to give a sabermetrics analogy:

Suppose you start analyzing things that almost certainly can’t be causal.

For example, you count the steps the player takes from the bench to the batter’s box, and separate out cases according to whether it is an even or odd number of steps.

You separate out cases according to whether the three front-row seats closest to third base are occupied by three men, two men and one woman, two women and one man, or three women.

Etc, etc. You do a lot of them.

If you study 100 such things, and rely on p <= 0.05 as being good evidence of causal relation, about how many will show up as being an “effect” to p <= 0.05, versus how many are most likely really from chance alone?

Science studies tens or hundreds of thousands of such things each year. The only difference is that the things studied are unknown and not particularly likely to have a beneficial effect (as most unknown things do not), rather than being obviously silly.
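That arithmetic is easy to check with a simulation; here is a minimal sketch in which, by construction, none of the "effects" is real.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group = 1000, 20
false_positives = 0

for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution, so no real effect exists.
    control = rng.normal(100, 15, n_per_group)
    treated = rng.normal(100, 15, n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p <= 0.05:
        false_positives += 1

print(false_positives)  # roughly 50 of the 1000 null tests "show an effect" anyway
```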

[quote]Bill Roberts wrote:

[quote]ActivitiesGuy wrote:

[quote]mertdawg wrote:
If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? [/quote]
[/quote]
Something I would add to AG’s reply is that there can be a high probability, even near certainty, of the result being by chance “despite” this p value.

Let’s say we ask a bunch of people to come up with the most idiotic ideas they can think of as to what treatments might change various measured blood values of rats.

One comes up with a treatment wherein the water for the rat’s water bottle is swirled as it’s added to the bottle, instead of just poured in as usual. Another conceives of a treatment wherein the researchers are required to speak only in pig Latin while working with the rats, instead of English. Still another changes brands of light bulbs in the labs, at the same lighting intensity and color temperature. In total, 100 idiot ideas are conceived.

And 10 blood test variables are examined for each study.

Most likely, out of these 1000 possible “effects” being studied, 50 or so will “show an effect” to p <= 0.05.

It will be almost certain that not a single real effect existed.

I know my original post was eye-glazing due to length and probably writing style, but it goes over this more. A p value in itself does not calculate or show the probability that a real causal effect exists, or even necessarily that one is particularly likely.

The more variables being measured in a study, and the less plausible the effects in the first place (most possible treatments in fact don’t provide benefit), the more likely it is that the outcome was not caused, but arose from chance alone.

There are many studies which measure 20 things at a time. Most such studies will generate, from chance alone rather than cause, an “effect” if effect is judged by meeting p <= 0.05. Most authors will at least suggest their data supports a causal relation. The reader must beware.
[/quote]

What you’re talking about is sometimes referred to as the ‘familywise error rate’: the probability that at least one of multiple statistical tests will come up significant by chance. Fortunately, methods abound for controlling the familywise error rate. So to be fair, no reputable journal would publish a study of the sort you’re describing, i.e., one in which the authors shotgunned a bunch of tests at the p = .05 level.

No, it happens routinely. The only difference being, the tests will not appear silly as in the example. They would be ordinary things such as testing many blood values, etc.

[quote]EyeDentist wrote:

[quote]Bill Roberts wrote:

[quote]ActivitiesGuy wrote:

[quote]mertdawg wrote:
If p is less than alpha=.05 doesn’t it mean that the null hypothesis is <5% likely to be true? Well doesn’t that mean that there is a 95% chance of at least some real relationship between the variables? [/quote]
[/quote]
Something I would add to AG’s reply is that there can be a high probability, even near certainty, of the result being by chance “despite” this p value.

Let’s say we ask a bunch of people to come up with the most idiotic ideas they can think of as to what treatments might change various measured blood values of rats.

One comes up with a treatment wherein the water for the rat’s water bottle is swirled as it’s added to the bottle, instead of just poured in as usual. Another conceives of a treatment wherein the researchers are required to speak only in pig Latin while working with the rats, instead of English. Still another changes brands of light bulbs in the labs, at the same lighting intensity and color temperature. In total, 100 idiot ideas are conceived.

And 10 blood test variables are examined for each study.

Most likely, out of these 1000 possible “effects” being studied, 50 or so will “show an effect” to p <= 0.05.

It will be almost certain that not a single real effect existed.

I know my original post was eye-glazing due to length and probably writing style, but it goes over this more. A p value in itself does not calculate or show the probability that a real causal effect exists, or even necessarily that one is particularly likely.

The more variables being measured in a study, and the less plausible the effects in the first place (most possible treatments in fact don’t provide benefit), the more likely it is that the outcome was not caused, but arose from chance alone.

There are many studies which measure 20 things at a time. Most such studies will generate, from chance alone rather than cause, an “effect” if effect is judged by meeting p <= 0.05. Most authors will at least suggest their data supports a causal relation. The reader must beware.
[/quote]

What you’re talking about is sometimes referred to as the ‘familywise error rate’: the probability that at least one of multiple statistical tests will come up significant by chance. Fortunately, methods abound for controlling the familywise error rate. So to be fair, no reputable journal would publish a study of the sort you’re describing, i.e., one in which the authors shotgunned a bunch of tests at the p = .05 level.
[/quote]

Actually, reputable journals do routinely publish studies with a bunch of tests at the p = .05 level. I have done so myself, and I have reviewed several papers which do the same.

Here’s the thing: it’s not always wrong. Several prominent statisticians have published editorials about the problems with adjusting for multiple comparisons. The biggest argument, and IMO a valid one, is that we can usually squabble endlessly over which comparisons count in the “how many comparisons should we adjust for?” decision.

Suppose I do a randomized clinical trial. Two treatment arms.

My primary comparison of interest is whether Drug A reduced all-cause mortality in 5 years of follow-up compared to Drug B. My secondary analyses of interest are whether Drug A reduced a composite outcome of cardiovascular mortality (defined as MI, stroke, and whatever else), incidence of new peripheral vascular disease, and a few other outcomes such as quality of life, healthcare economics, etc. For those of you that haven’t published papers on large clinical trials, these are generally divided amongst several working groups.

It turns out that there’s a slight difference in five-year mortality (Drug A: 10% vs. Drug B: 13%) that is not quite significant, p=0.06 (bear with me, I’m making numbers up to prove a point). Same deal for cardiovascular mortality (Drug A: 8% vs. Drug B: 10%, p=0.08). There are no significant differences in PVD incidence or healthcare economics, either, although Drug A generally looks “slightly” better than Drug B on both counts. However, there is an indication that patients receiving Drug A experienced significantly better quality of life during follow-up, p=0.02.

How do we interpret this result? Can I make a statement that Drug A “significantly” improves QoL over Drug B because my p-value is less than 0.05? If I publish a paper that’s focused solely on QoL outcomes, this may be the main comparison presented in my paper.

Or do I have to couch it by noting that this is a secondary outcome, and when I adjust the p-value for multiple comparisons (some of which may not be published yet because they’re being written as separate papers) it no longer is statistically significant?

What if the group writing the PVD outcomes paper looks at multiple comparisons (say, low ABI as primary diagnosis of PVD, along with more severe sequelae like amputation or lower extremity revascularization)? Is my group writing the QoL paper supposed to know that the PVD group added a few additional PVD-relevant outcomes and now take that into consideration for how we adjust our p-values?

That’s an impossible way of doing things.
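To make the arbitrariness concrete, here is a rough sketch using the made-up QoL p-value above and a plain Bonferroni correction; the only point is that the conclusion flips depending on how many comparisons you decide to count.

```python
# Hypothetical numbers from the made-up trial above.
p_qol = 0.02
alpha = 0.05

# How many comparisons "count" depends on which working groups' tests you include.
for n_comparisons in (1, 2, 3, 5, 8):
    threshold = alpha / n_comparisons  # Bonferroni-adjusted threshold
    verdict = "significant" if p_qol <= threshold else "not significant"
    print(f"{n_comparisons} comparisons: threshold {threshold:.4f} -> QoL {verdict}")
# Counting 1 or 2 comparisons, the QoL result survives; count a few more and it doesn't.
```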

IMO, we really should get away from p-values entirely, or at least from the strict view of things as “significant” or “not significant” based on whether they fall above or below one magical threshold. Decisions about results of this type are much more nuanced than “Is it statistically significant?” and yet that’s the only way they are ever evaluated.

I couldn’t agree more.

Disclaimer: I’m not a mathematician. Math has always been a tool for me rather than something I could contribute to.

That said, I do find the question of how data should properly be treated very interesting, and I’m really looking forward to your reply!

In practice, what I find I have to do (and I genuinely don’t think I’d do better with rigorous math) is carefully evaluate all available evidence and known mechanisms, add in healthy skepticism, and treat it all in a gestalt manner. In and of itself, a finding of p <= 0.05, or 0.01 for that matter, is generally at best interesting and can warrant looking into something further, but is worth no conclusion.

Actual calculation of probability of being real effect? Too hard, maybe hopeless!

[quote]Bill Roberts wrote:
In and of itself, because variability in biological experiments is typically so great and there can be so many confounding factors, a finding of p <= 0.05, or 0.01 for that matter, is generally at best interesting and can warrant looking into something further but is worth no conclusion.

Actual calculation of probability of being real effect? Too hard, maybe hopeless![/quote]

I think this is what it boils down to. P-values are one tool we can use to weigh the evidence for a particular relationship. Whether it is 0.052 or 0.048 should not matter that much, really. P-values should be presented as one piece of the puzzle, and it should be left to the reader to decide exactly how valid the conclusions are.

The problem, and we’ll keep going in circles here, is that most people do not have the level of quantitative understanding to do that properly, even very highly educated people in their respective fields. I work with cardiologists and cardiac surgeons; these guys are no dummies, but even they really struggle to understand the nuances of this kind of stuff. They want a simple answer: So does it work, or not?

My sister-in-law is a high school math teacher (and a very bright girl) and even she knows virtually nothing about statistical analysis. It’s a complex field and a niche that almost no one is exposed to until college, and even then unless you major in it, you’re not likely to get more than a single perfunctory class that teaches you how to calculate a mean and do a t-test.

I believe that more applied/research methods classes would be a good start, but even that won’t get us all the way there. The truth is that I feel about 9,000 times more confident in myself doing this stuff today than I did even two years ago (when I finished graduate school and started my first job at the women’s research hospital). Game reps, as one might call them, are the only way to really understand this stuff in depth.

Anyway, this turned into a big threadjack. My apologies.

Well, I am the guilty party there. Apologies also to the OP.

AG, I wanted to point out the main section that was my original focus in this article: http://journal.diabetes.org/diabetesspectrum/00v13n3/pg132.htm

By the way, it turns out that I saved that article but I had not read most of it. It is very interesting:

“Nuttall et al.10 gave nine subjects with mild type 2 diabetes 50 g protein, 50 g glucose, or 50 g protein and 50 g glucose and determined the plasma glucose and insulin responses over the next 5 hours. The glucose response to glucose was as expected, but the glucose response to protein remained stable for 2 hours and then began to decline. When protein and glucose were combined, the peak response was similar to that of glucose alone. However, during the late postprandial period, the glucose response was reduced by 34%. The insulin responses for protein and glucose were similar, but when combined the insulin response was nearly doubled. The glucose decrease when protein and glucose were combined was attributed to the increased insulin response to the combination. See Figure 1.”

The key here is that protein, whether it is turned into glucose or not, is not going to raise blood glucose, and the reason is that it stimulates significant insulin.

This means that it may well contribute to long-term insulin resistance just like high carbs can. Protein (according to the quote) may even reduce blood sugar in the short term, because you get insulin in response to protein alone (group B) or protein plus glucose (group C).

So it may not convert to blood glucose, but it logically still taxes the beta cells and promotes cellular insulin resistance, and for diabetics who have insufficient insulin, the protein can compete with carbs for available insulin and still effectively net the same effect on blood sugar as adding some percentage of those protein grams as carbs might.

For diabetics or prediabetics who are insulin resistant, added protein is going to advance insulin resistance because it is going to stimulate insulin production. It may not destroy beta cells via the glucotoxic mechanism, but it will force them to overwork.

Here is a question.
First a couple of points,

  1. At 100 mg/dl, 4 liters of human blood contains only 4 grams of glucose. That works out to 25 mg/dl per gram of glucose.

  2. A type 1 diabetic with no pancreatic function typically gets only 2-4 points of blood glucose increase per gram of glucose ingested. This is true even for a fast-acting pure glucose drink. Yet by the arithmetic above, 15 grams of glucose, which typically causes only a 30-60 point surge in blood sugar, should be enough to raise 4 liters of blood by 300-400 points.

What is happening to the rest of the glucose? The only thing I can imagine is that the body has other, non-circulatory fluids that also absorb glucose, like the extracellular fluid around the intestines, or the lymphatic fluid. Anyway, just wondering if anyone has a guess.
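The back-of-the-envelope arithmetic is easy to play with. A sketch under two assumed distribution volumes: 4 liters if the glucose stayed in blood alone, versus a larger volume along the lines of the non-circulatory-fluid guess above (the 14-liter figure is just an assumed round number, and ongoing tissue uptake is ignored entirely).

```python
# Back-of-the-envelope: predicted rise in glucose concentration per dose ingested,
# under different ASSUMED distribution volumes (ongoing uptake ignored).
grams_ingested = 15
for label, liters in [("blood only", 4.0), ("blood + other fluids (assumed)", 14.0)]:
    rise_mg_dl = grams_ingested * 1000 / (liters * 10)   # mg divided by deciliters
    print(f"{label}: ~{rise_mg_dl:.0f} mg/dl total, ~{rise_mg_dl / grams_ingested:.1f} per gram")
# Spreading the same 15 g over a larger fluid volume cuts the predicted rise from
# ~375 to ~107 mg/dl -- still above the observed 30-60 points, so distribution
# alone can't be the whole story.
```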

Here is another interesting point mentioned in the article: Circulating amino acids stimulate insulin and glucagon secretion. The amino acids that stimulate glucagon are different from those that stimulate insulin secretion.

Anyone know which AAs stimulate insulin and which ones stimulate glucagon? It could provide an alternative to an emergency glucagon shot for someone with very low blood sugar.