May 1: Methods for measuring urine phosphate in patients with chronic kidney disease

Robinson-Cohen et al., Estimation of 24-hour urine phosphate excretion from spot urine collection: development of a predictive equation. Journal of Renal Nutrition, 2014; 24 (3): p. 194-9.

One thing that has always been tricky for doctors and scientists is how to interpret clinical tests. Most clinical tests are run on blood or urine samples, which are assayed for the presence and levels of various compounds. First, scientists must demonstrate that a compound is correlated (at either relatively high or low levels) with a pathology. Once this has been established, the normal/healthy level of the compound needs to be determined. However, just as your heart rate changes throughout the day as you wake up and perform various activities, so too do the levels of different compounds in your system. Thus most people have a range of values over the course of a day rather than a single "normal" number.

Patients with severe chronic kidney disease often suffer from hyperphosphatemia. The gold standard for monitoring a patient's phosphate intake is considered to be a 24-hour urine collection test. However, as Robinson-Cohen et al. point out, a full 24-hour collection is time-consuming and prone to error. They wanted to test whether spot urine phosphate measurements could be as effective as this "gold standard".

Hyperphosphatemia is treated with phosphate binders, which reduce the amount of dietary phosphate absorbed by the gut. As this would naturally affect phosphate levels in urine, Robinson-Cohen et al. piggy-backed their experiment onto the Phosphate Normalization Trial. Patients, all with hyperphosphatemia, were administered either a phosphate binder or a placebo. While the main purpose of the trial was to determine the efficacy of the phosphate binders, Robinson-Cohen et al. were mainly concerned with the accuracy of the measurement methods used. 143 patients were measured both with spot checks and with the traditional 24-hour method. Additionally, they used an external replication cohort to validate their results.

Robinson-Cohen et al. found that when certain corrections were applied to the spot checks, they could be indicative of the results of the 24-hour test. Notably, corrections had to be made for age, sex, weight, and urine creatinine concentration. The phosphate-to-creatinine ratio in the spot samples tracked the phosphate excretion in the 24-hour collections fairly accurately once these corrections were made. Thus, this simpler method seems to be an emerging option, although, judging by the tone of the paper, I'm not sure anyone will be calling it the "gold standard" just yet.
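To make the idea concrete, here's a minimal sketch of the general shape of an adjusted prediction equation like the one they fit. The coefficients below are placeholders I made up for illustration, not the fitted values from the paper:

```python
# Illustrative sketch of a spot-urine prediction equation of the kind
# Robinson-Cohen et al. fit. All coefficient values are made-up
# placeholders; the actual fitted values are reported in the paper.

def estimate_24h_phosphate(spot_phosphate_mg_dl: float,
                           spot_creatinine_mg_dl: float,
                           age_years: float,
                           weight_kg: float,
                           is_male: bool) -> float:
    """Estimate 24-hour urine phosphate excretion (mg/day)."""
    ratio = spot_phosphate_mg_dl / spot_creatinine_mg_dl  # phosphate:creatinine
    return (
        300.0                           # hypothetical intercept
        + 450.0 * ratio                 # hypothetical slope on the spot ratio
        - 2.0 * age_years               # hypothetical age adjustment
        + 4.0 * weight_kg               # hypothetical weight adjustment
        + 50.0 * (1 if is_male else 0)  # hypothetical sex adjustment
    )

print(estimate_24h_phosphate(40.0, 80.0, 62, 75, True))
```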

April 30: How much of a toy do you need to swallow before you’re at risk?

Guney and Zagury, Children’s exposure to harmful elements in toys and low-cost jewelry: Characterizing risks and developing a comprehensive approach. Journal of Hazardous Materials, 2014; 271: p. 321-30.

Before being placed on the shelf of a toy store, each individual Slinky is tested for its slinky-ness at the POOF-Slinky factory to ensure quality. You might think that is overkill, but we as consumers are very picky and, with modern technology, very vocal about our feelings on the products we purchase. And parents' standards when purchasing products for their children are even higher than the standards we apply when purchasing for ourselves.

In this study, Guney and Zagury attempted to determine the risk to children from "oral exposure" to metallic toys and jewelry. They studied 3 specific classes of oral exposure:

  1. Ingestion of parts or pieces
  2. Ingestion of smaller scraped-off bits of material
  3. “Saliva mobilization” of material

The US, Canada, and the EU all have different approaches to quality control for these products, including criteria, definitions, and exposure scenarios. Guney and Zagury measured a number of different metal toy and "cheap jewelry" items for the amount of cadmium, copper, nickel, and lead that might be released under the above scenarios. While scraped-off material never produced significant hazard index measurements, both ingestion of parts or pieces and saliva mobilization had the potential to lead to dangerous exposure.
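For context, the hazard index arithmetic behind statements like this is standard risk-assessment fare: each scenario's estimated intake of an element is divided by a tolerable intake to give a hazard quotient, and quotients above 1 flag potential risk. A minimal sketch, with made-up numbers rather than the paper's measurements:

```python
# Generic hazard-quotient calculation as used in exposure risk assessment.
# All numbers below are illustrative placeholders, not values from the paper.

def hazard_quotient(estimated_daily_intake_ug_kg: float,
                    tolerable_daily_intake_ug_kg: float) -> float:
    """HQ = exposure dose / reference dose; HQ > 1 flags potential risk."""
    return estimated_daily_intake_ug_kg / tolerable_daily_intake_ug_kg

# Hypothetical per-element intakes for one exposure scenario (ug/kg/day)
intakes = {"Cd": 0.3, "Cu": 5.0, "Ni": 2.0, "Pb": 1.5}
# Hypothetical tolerable daily intakes (ug/kg/day)
tdis = {"Cd": 0.8, "Cu": 140.0, "Ni": 12.0, "Pb": 3.6}

hqs = {el: hazard_quotient(intakes[el], tdis[el]) for el in intakes}
print(hqs)
print("any element of concern:", any(hq > 1 for hq in hqs.values()))
```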

Guney and Zagury propose a comprehensive approach that they argue should be adopted by regulatory authorities everywhere. This system would set bioaccessibility limits for 8 "priority elements": As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb. While they only tested 16 products in this study, Guney and Zagury call for a widespread reassessment of commercial toy and jewelry products using their system of metrics.

April 29: A positive role for amyloid proteins

Pajoohesh-Ganji et al., Inhibition of amyloid precursor protein secretases reduces recovery after spinal cord injury. Brain Research, 2014; 1560: p. 73-82.

Say the words "amyloid precursor protein" (APP) to any student majoring in Biochemistry and they'll rattle off a bunch of facts for you, all centered around how APP cleavage leads to amyloid aggregation into the "amyloid plaques" important in Alzheimer's disease. But no one really knows how it works, or what purpose the protein serves when not running around ruining people's lives. Well, no longer. Pajoohesh-Ganji et al. studied APP secretases, the enzymes that cleave APP into amyloid-β protein. The first reaction to this pathway is usually that preventing cleavage of APP would be an effective therapy for Alzheimer's and other amyloid-implicated neurological disorders. However, it has long been postulated that amyloid-β actually serves a positive purpose most of the time.

Pajoohesh-Ganji et al. decided to study the effect that inhibiting amyloid secretases would have on the ability of mice to recover from spinal cord injury. The secretases were inhibited either with DAPT, an inhibitor drug, or through the use of secretase knockout mice. When recovery after induced spinal cord injury was measured, both methods of inhibiting the secretases were found to impair functional recovery, both in terms of the amount of white matter spared and in behavioral testing of the mice. Thus, although a specific mechanism was not elucidated, a positive effect of amyloid-β has finally been shown in certain scenarios.

April 28: Efficacy of BROADLINE to prevent heartworm infection in cats

Baker et al., Efficacy of a single dose of a novel topical combination product containing eprinomectin to prevent heartworm infection in cats. Veterinary Parasitology, 2014; 202 (1-2): p. 49-53.

Unlike for dogs, there is no curative treatment for feline heartworm infection. Cats are infected through bites from infected mosquitoes. The larvae grow into adults inside the cat and are able to move into the vasculature of the lungs as well as other tissues, causing significant health problems.

This study by Baker et al. comprised 3 controlled, blinded laboratory studies to evaluate the preventative efficacy of BROADLINE®, a novel topical combination of fipronil, (S)-methoprene, eprinomectin, and praziquantel. In each study, 28 cats were inoculated with approximately 100 (studies 1 and 2) or 40 (study 3) infective third-stage D. immitis larvae by subcutaneous injection. To control for geographic variation, the larvae came from recent field isolates from naturally infected dogs from 3 distinct geographic areas (2 in the USA and one in Europe). In each study, the cats were randomly allocated either to receive BROADLINE 30 days after the larvae injection (0.5 mg eprinomectin/kg of body weight) or to a control group that remained untreated.

6 months after the larvae injection, the cats were all euthanized and examined closely for heartworms. 68% of the untreated control cats had one or more heartworms, whereas 100% of the treated cats had no heartworms (not even one). No side-effects of treatment were observed. While the manufacturers of this product probably consider this a complete win, the study actually raised more questions in my mind than it solved. For instance, if there are mosquitoes in an area carrying larvae, cats might be infected repeatedly, not just once. Additionally, the authors do not explain how the drug kills the heartworms, including whether it targets adults or larvae. If the drug only targets the heartworm at a particular stage of the life cycle, does the time since infection matter for when to apply the drug? If so, this would be an incomplete test, since it didn't compare different application timepoints, and a cat owner stands no chance of knowing at which point her cat was bitten. In a related query: if BROADLINE is meant to be preventative, why was it not applied to the cats prior to the larvae injection? Also, in terms of efficacy, Baker et al. do not look at the effect of applying the product regularly every 4-6 weeks, as is recommended for prevention, instead of the single dose tested here. Nevertheless, it is promising to see real studies being conducted to validate the efficacy of products we give our pets.
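As an aside, efficacy in trials like this is conventionally computed from worm counts at necropsy, comparing mean burdens between groups. A quick sketch of that arithmetic, with invented per-cat counts used purely for illustration:

```python
# Conventional efficacy calculation for a preventative trial:
# efficacy (%) = 100 * (mean worms in controls - mean worms in treated)
#                / mean worms in controls.
# The worm counts below are invented for illustration only.

control_worm_counts = [3, 0, 5, 2, 0, 4, 1, 0, 6, 2]  # hypothetical
treated_worm_counts = [0] * 10  # 100% of treated cats were worm-free

def efficacy(control: list[int], treated: list[int]) -> float:
    mean_c = sum(control) / len(control)
    mean_t = sum(treated) / len(treated)
    return 100.0 * (mean_c - mean_t) / mean_c

print(f"{efficacy(control_worm_counts, treated_worm_counts):.1f}%")  # 100.0%
```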

April 27: Cost-burden of kidney transplantation is directly related to post-surgical success of the transplant

Chamberlain et al., The economic burden of post-transplant events in renal transplant recipients in Europe. Transplantation, 2014; 97 (8): p. 854-61.

On September 15th, I wrote a post about how it matters whether a kidney transplant comes from a living or a deceased donor. However, regardless of the source of the transplant, and assuming that the transplant was matched properly, the main factor determining the lifespan of the transplanted kidney is how well the patient takes care of it. In this study, Chamberlain et al. examined the costs of managing patients post-transplantation, stratified by glomerular filtration rate measured at 1 year post-transplant.

Chamberlain et al. looked at over three thousand patients spread across 9 European countries. They recorded the 1-year glomerular filtration rate as well as the average 3-year costs (not including immunosuppression therapy or costs of abnormal post-transplant events). As they expected, a higher glomerular filtration rate, a marker of the success of the transplant and the functioning capability of the new kidney, was associated with lower costs over the three-year period post-transplantation. Accordingly, Chamberlain et al. stress that the cost burden of kidney transplants on European healthcare systems could be reduced through methods that improve renal function over the first few years post-transplantation.
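For the analytically minded, the core of the analysis is just stratifying patients by their 1-year glomerular filtration rate and comparing average follow-up costs across the strata. A toy sketch of that grouping, with invented patient records and GFR bands (the paper's exact strata may differ):

```python
# Sketch of the cost-by-renal-function comparison: stratify patients by
# 1-year GFR band and compare average 3-year post-transplant costs.
# Patient records and band cutoffs are invented for illustration.

from collections import defaultdict

patients = [  # (GFR in mL/min/1.73 m^2 at 1 year, 3-year cost in euros)
    (18, 62_000), (25, 48_000), (42, 31_000),
    (55, 24_000), (68, 21_000), (31, 39_000),
]

def gfr_band(gfr: float) -> str:
    if gfr < 30:
        return "<30"
    if gfr < 60:
        return "30-59"
    return ">=60"

costs_by_band = defaultdict(list)
for gfr, cost in patients:
    costs_by_band[gfr_band(gfr)].append(cost)

for band, costs in sorted(costs_by_band.items()):
    print(band, sum(costs) / len(costs))  # mean cost per GFR band
```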

April 26: A study of the metallic compound distribution within a 50 million year old fossilized leaf

Edwards et al., Leaf metallome preserved over 50 million years. Metallomics, 2014; 6 (4): p. 774-82.

In the past, studying fossils simply meant carefully excavating bones and assembling them into the sorts of exhibits you see in natural history museums. Recently, however, fossils have become even more interesting as we have developed technologies that can extract information that was always there but never previously accessible. Carbon dating was an early example of determining more from a fossilized sample than is visible to the eye. More recently, scientists have been able to extract DNA from recovered woolly mammoths and Neanderthals.

In this study Edwards et al. used a combination of x-ray spectroscopy and a technique called large-scale synchrotron rapid scanning x-ray fluorescence elemental mapping (SRS-XRF) to look at a fossilized leaf from the Green River Formation in the Rocky Mountains, dated to about 50 million years old. Previous studies had shown that SRS-XRF could measure trace metals and sulfur compounds that accumulated in a fossil during the organism's lifetime, giving insight into the biochemistry the animals used to survive. However, no one had applied this method to fossilized leaves before.

Evidence that the SRS-XRF mapping was genuine comes from the fact that different organometallic and organosulfur compounds were not spread evenly throughout the leaves, but rather were associated with discrete areas, outlining biological structures associated with different processes. Importantly, the compounds detected within the various areas of the leaf were not present in the embedding material, indicating that the mapping wasn't simply picking up contamination from the rock the fossil had been sitting in for millions of years, but signal that actually belonged to the fossil itself. Interestingly, Edwards et al. also used the SRS-XRF method to observe compounds in living leaf material, which can be directly compared and contrasted with the fossilized leaves. This method is very exciting and allows new information to be gleaned about plants fossilized millions of years ago.
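That contamination check is worth spelling out: an element genuinely belonging to the fossil should be enriched inside the leaf outline relative to the embedding matrix. Here's a toy version of that comparison on a simulated 2-D elemental map (all values invented, not data from the paper):

```python
# Toy version of the contamination check: an element genuinely belonging
# to the fossil should be enriched within the leaf outline relative to
# the surrounding embedding matrix. Map values and masks are simulated.

import numpy as np

rng = np.random.default_rng(0)
cu_map = rng.normal(1.0, 0.2, size=(50, 50))  # background matrix signal
leaf_mask = np.zeros((50, 50), dtype=bool)
leaf_mask[15:35, 10:40] = True                # crude "leaf" region
cu_map[leaf_mask] += 2.0                      # simulated enrichment in the leaf

leaf_mean = cu_map[leaf_mask].mean()
matrix_mean = cu_map[~leaf_mask].mean()
print(f"leaf/matrix signal ratio: {leaf_mean / matrix_mean:.1f}")
# A ratio well above 1 argues the signal maps to the fossil, not the matrix.
```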

April 25: How to create a drug powder for pill formation

Dong et al., Clay as a matrix former for spray drying of drug nanosuspensions. International Journal of Pharmaceutics, 2014; 465 (1-2): p. 83-9.

While much of the research into drug development is, understandably, focused on creating an effective drug and determining the correct dosage, attention also needs to be paid to the way that drugs are delivered. Consider a simple drug consisting of only one main active ingredient. If no drug packaging were used, you'd probably just receive a powder or solution of the drug. How would you accurately measure out how much you need? Most drugs are dosed in milligrams, and most people don't have that kind of scale at home. Additionally, even if you could weigh out the amount you need, you would inevitably lose some of the drug on the measuring container, resulting in large errors and user-to-user variation. Now consider what would happen if your medicine actually consisted of two active ingredients. Your problem would be doubled, because you'd have to measure them both out separately, since a mixture might not be homogeneous. Thus we see that even the most sophisticated drug would never have consistently positive results if there weren't a simple way of packaging it so that everyone received an identical, full dose.

In this paper Dong et al. investigated a new material for spray drying drugs: montmorillonite, a type of clay. Currently, a number of drugs on the market use sugars (for example, lactose or sucrose) as the material that forms a matrix with the drug when it is sprayed out and dried into discrete powder particles. However, sugars have three known disadvantages: they are unable to prevent drug particles from sticking to each other when in suspension, they often do not form uniform powders, and the powders retain too much water after spray drying. In contrast, Dong et al. demonstrated that the clay prevented aggregation of drug particles in suspension and showed less water retention and water absorption from the surrounding air. Thus, they conclude that this material deserves wider use in drug formulation.
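One common way studies in this area quantify the anti-aggregation claim is a redispersibility index: the particle size measured after the dried powder is resuspended, divided by the size before drying, with values near 1 meaning the matrix did its job. A sketch with invented sizes (I'm not claiming these are Dong et al.'s numbers or their exact metric):

```python
# Redispersibility index (RDI), a common metric in spray-drying studies:
# RDI = particle size after redispersing the dried powder / size before
# drying. An RDI close to 1 means the matrix prevented aggregation.
# All sizes below are invented for illustration.

def redispersibility_index(size_after_nm: float, size_before_nm: float) -> float:
    return size_after_nm / size_before_nm

formulations = {
    "sugar matrix (hypothetical)": (720.0, 310.0),  # aggregated on redispersion
    "clay matrix (hypothetical)": (330.0, 310.0),   # close to original size
}

for name, (after, before) in formulations.items():
    print(name, round(redispersibility_index(after, before), 2))
```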