Climate Models Fail Again, Didn’t Predict CO2 Would Green The Western US

Written by MICHAEL BASTASCH

A new study just gave people another reason to be skeptical of climate models relied upon by scientists to predict the future impacts of global warming.

Climate models have long predicted man-made global warming would cause the western U.S. to become more arid and brown, but that’s not what happened. A new study examining three decades worth of satellite data found the western U.S. — indeed, the world in general — is greening because of increased carbon dioxide emissions.

It’s another prediction failure from climate models, according to Chip Knappenberger, a scientist at the libertarian Cato Institute. Knappenberger pointed out on Twitter that climate models predicting “browning” in the western U.S. were dead wrong.

Current Solar Cycle Continues To Be The Weakest In Almost 200 Years …Planet At The Mercy Of The Sun

Written by P Gosselin

The following is the solar part of the latest post at Die kalte Sonne.

The Sun in March 2016 – Frank Bosse and Fritz Vahrenholt
(Translated, edited by P Gosselin)

Our mother star was once again less active than normal in March. The observed solar sunspot number (SSN) was 54.9, about two-thirds of the mean value (82.5) for this point in the cycle. Here’s what the current solar cycle (SC) looks like so far:
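As a quick arithmetic check, using only the two figures quoted above, the observed SSN against the cycle mean works out to roughly two-thirds:

```python
# Quick check of the ratio quoted above: observed sunspot number
# versus the mean value for this point in the cycle.
observed_ssn = 54.9   # observed solar sunspot number, March 2016
mean_ssn = 82.5       # mean value for this month into the cycle

ratio = observed_ssn / mean_ssn
print(f"Observed activity is {ratio:.0%} of the mean ({ratio:.3f})")
```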

Fusion technology breakthrough could herald demise of coal

Written by Ian Greenhalgh

They can’t keep suppressing new energy technologies indefinitely.

The time when the lid can be kept on energy technologies in order to continue the oil-based economy is coming to an end.  While Keshe may be the one we have all heard of, there are many other technologies that have been suppressed and which hold the potential to end our artificially-imposed dependence on black, gloopy hydrocarbons pumped out of the ground.

This latest disclosure of a new breakthrough in nuclear fusion is a reminder that ‘they’ have technologies that would be of immense benefit to mankind and are not letting us have them.

Recall the crushing of ‘cold fusion’ by Dr Stephen Jones back in the 90s and you will start to realise that these technologies exist and are being held back in order to allow the continued existence of the petrodollar and the usurious financial system built around it.

Worst of all, it is over a century since Tesla came up with ‘free’ electricity and they shut him down the moment he admitted that there was no way to put a meter on it.

On the bogus German detection of Einstein’s gravitational waves

Written by Stephen J. Crothers

Stephen Crothers deftly dissects the recent detection, by German scientists, of Einstein’s gravitational waves generated by the spiral merger of two black holes within some assumed Big Bang universe.

Crothers finds that the LIGO-Virgo Collaborations have either incompetently or intentionally and deceptively identified, using the LIGO apparatus, two black holes spiralling to merger, forming a single black hole.

Crothers writes:

NASA – Doubling Sea Level Rise By Data Tampering

Written by Tony Heller

NASA has doubled 1880 to 1980 sea level rise since Hansen 1983. In 1983, NASA showed very little sea level rise after 1950. Now they show rapid sea level rise from 1950 to 1980.

Sources: 1983 (1983_Hansen_etal_2.pdf); 2016 (Sea Level)

This fraud should not surprise anyone, because they have also doubled global warming via data tampering during that same time period.

Irish researchers sweep smartphones clear of super bugs

Written by Joe Fay

Biological cleansing agent bursts forth in home of Westlife

A team of Irish scientists has developed a way to neutralise that threatening sump of biological mayhem you just can’t leave home without – the mobile phone.

Happily the nano-technology can also be turned on lesser sources of harmful bacteria such as children’s toys, kitchen worktops, TV remotes and toilets.

A team led by Prof Suresh C Pillai at the Institute of Technology in Sligo, which developed the tech, has been tackling the problem of preventing the spread of resistant bugs by developing an antimicrobial surface that is neither itself toxic nor dependent on UV light to work.

The Exxon Climate Papers

Written by Andy May, judithcurry.com

New York Attorney General Eric T. Schneiderman has accused ExxonMobil of lying to the public and investors about the risks of climate change according to the NY Times and has launched an investigation and issued a subpoena demanding extensive financial records, emails and other documents.

Massachusetts, the US Virgin Islands, and California are also investigating ExxonMobil. It is interesting that all but one of the attorneys general are Democrats. The remaining attorney general is Claude Walker of the US Virgin Islands, who is a Green-leaning independent. So, this is a very partisan investigation, carefully coordinated with anti-fossil-fuel activists. How much is there to it?

I’ve reviewed the 22 internal documents from 1977 to 1989 made available by ExxonMobil here. I’ve also reviewed what I could find on 104 publications (most are peer-reviewed) with ExxonMobil personnel as authors or co-authors. For some of the peer-reviewed articles I only had an abstract, and for some I could find the reference but no abstract or text without paying a fee. Below this short essay is an annotated bibliography of all 22 internal documents and 89 of the published papers. The documents are interesting reading; they fill in the history of modern climate science very well. Much of the current debate on climate change was being debated in the same way, and often with the same uncertainties, in 1977.

Hiding The Decline In Missouri

Written by Tony Heller

One of the primary excuses used by NOAA to hide the decline in the US temperature record is Time of Observation Bias (TOBS). The theory is that people used to be incredibly stupid and reset their max/min thermometers only once per day, in the afternoon. This would cause double counting of some high temperatures and result in historical temperatures which were too high.

This theory is easy to test.  Half of the stations in Missouri took their min/max readings during July 1936 in the afternoon, and the other half took them at morning or night during that month. I separated those groups out and found that it makes no difference to summer temperature trends.

The group of morning/night stations has been cooling at almost exactly the same rate as the group of all stations. Also note that summer temperatures in the 1930s are essentially identical in the two groups. No matter what tricks NOAA uses to hide the decline in US temperatures, it doesn’t change the facts.
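The station-splitting test described above can be sketched in a few lines. This is a minimal illustration, not Heller’s actual code or data; the station labels and July readings below are entirely made up:

```python
# Sketch of the TOBS test described above: split stations by time of
# observation, then compare mean summer temperatures per group.
# All station names and readings below are hypothetical.
from statistics import mean

# (station, observation time, mean July max temperature in deg F)
stations = [
    ("A", "afternoon", 98.1),
    ("B", "afternoon", 97.4),
    ("C", "morning",   97.9),
    ("D", "night",     97.6),
]

afternoon = [t for _, obs, t in stations if obs == "afternoon"]
other = [t for _, obs, t in stations if obs in ("morning", "night")]

print(f"afternoon mean:     {mean(afternoon):.2f}")
print(f"morning/night mean: {mean(other):.2f}")
# If TOBS mattered strongly, the afternoon group would run notably
# warmer; comparable group means would support the article's claim.
```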

Huge coral reef discovered at Amazon river mouth

Written by John Vidal

Scientists astonished to find 600-mile long reef under the muddy water in a site already marked for oil exploration.

A huge 3,600 sq mile (9,300 sq km) coral reef system has been found below the muddy waters off the mouth of the river Amazon, astonishing scientists, governments and oil companies who have started to explore on top of it.

The existence of the 600-mile long reef, which ranges from about 30-120m deep and stretches from French Guiana to Brazil’s Maranhão state, was not suspected because many of the world’s great rivers produce major gaps in reef systems where no corals grow.

Atlantic Coccolithophores Thrive as the Air’s CO2 Content Rises

Written by co2science.org

As anthropogenic CO2 emissions continue to rise, Rivero-Calle et al. (2015) note that oceanic calcifiers “generally are expected to be negatively affected.” However, they say that “using data from the Continuous Plankton Recorder,” they showed that the abundance of coccolithophores (one-celled marine plants that live in large numbers throughout the upper layers of the ocean) in the North Atlantic “increased from ~2% to more than 20% from 1965 through 2010.”

And being curious about this phenomenon, they report how they used various models to examine more than 20 possible environmental drivers of this change, finding that CO2 and the Atlantic Multidecadal Oscillation were “the best predictors.”

This finding led them to further think that the rising CO2 content of Earth’s atmosphere might have been what had promoted the coccolithophores’ accelerating growth rate over the past half century. And in concluding their paper, the five U.S. researchers state that their study does indeed show (1) “a long-term basin-scale increase in coccolithophores” (see figure below) and that this finding does indeed suggest that (2) “increasing CO2 and temperature have accelerated the growth of a phytoplankton group that is important for carbon cycling,” while further noting that (3) “a compilation of 41 independent laboratory studies supports our hypothesis.”

Paper Reviewed
Rivero-Calle, S., Gnanadesikan, A., Del Castillo, C.E., Balch, W.M. and Guikema, S.D. 2015. Multidecadal increase in North Atlantic coccolithophores and the potential role of rising CO2. Science 350: 1533-1537.

Read more at co2science.org

More Climate ‘Science.’ Moving the Goalposts

Written by Mike Smith

“There is nothing mysterious about using the gap between models and observations at the end of the period as a measure of differing trends.

When Secretariat defeated the field in the 1973 Belmont by 25 lengths, even contemporary climate scientists did not dispute that Secretariat ran faster than the other horses.”  Steve McIntyre

That Secretariat was a very fast horse is about the only metric on which you can get universal agreement from climate scientists. When it comes to measurements of the earth’s temperature, which, of course, is their field, the actual data is often tweaked until it agrees with their global warming hypothesis (the exact opposite of how science should be conducted; more on that topic here).

Unfortunately, I have two new examples of climate scientists moving the goalposts from the back of the end zone to the 12-yard line.

Let’s begin with a discovery I made over at NASA’s climate website.

Climate: The Only Constant is Change

Written by Patrick Moore

That’s true about life. And it’s true about the climate. The climate has been constantly changing since the earth was formed 4.6 billion years ago.

For example, in just the past 2000 years, we have seen the Roman Warm Period, when it was warmer than today… Then came the cooler Dark Ages… Followed by the Medieval Warm Period, when it was at least as warm as today… Then we had the Little Ice Age, which drove the Vikings out of Greenland. And, most recently, a gradual 300-year warming to the present day. That’s a lot of changes. And, of course, not one of them was caused by humans.

During the past 400,000 years there have been four major periods of glaciation — meaning that vast sheets of ice covered a good part of the globe — interrupted by brief interglacial periods. We are in one of those periods right now. This is all part of the Pleistocene Ice Age which began in earnest two and a half million years ago. It’s still going on, which means that we are still living in an ice age. That’s the reason there’s so much ice at the poles. Thirty million years ago the earth had no ice on it at all.  

Fraud Leaves China’s Electric Car Demand in Doubt

Written by Jie Ma & Craig Trudell

The numbers may have looked too good to be true. Suzhou Gemsea Coach Manufacturing Co., operating out of a blue metal shed west of Shanghai, told the Chinese government it produced 3,700 electric vehicles last year — almost all of them in one month.

That month was December, the last before the government reduced subsidies it pays makers of electric and hybrid vehicles to help clear some of the world’s smoggiest skies. When inspectors, accompanied by state TV reporters, swarmed the site in Suzhou to verify the numbers, they found a few Gemsea brand vans parked outside and hardly any modern assembly equipment inside.

The investigation casts doubt on the accuracy of reported Chinese electric-vehicle sales that are double those in the U.S. As the Beijing Auto Show approaches next week, the government is examining whether it doled out money for fake transactions by domestic companies, with the answer potentially curbing bullish expansion plans by Tesla Motors Inc., BYD Co. and other makers.

“Such behavior is despicable and deserves severe punishment,” Stella Li, senior vice president for Berkshire Hathaway Inc.-backed BYD, said of possible cheating on subsidies. “By doing that, they disturbed the right market order and also led the government to have second thoughts as to whether they should change the existing funding structure.”

‘Impossible’ EmDrive flying saucer thruster may herald new theory of inertia

Written by Andrew Orlowski

An explanation for Roger Shawyer’s seemingly impossible EmDrive has been offered.

The RF resonant cavity thruster was first proposed by British aerospace engineer Roger Shawyer in 1999. It bounces microwaves around within a cone-nosed container, with the container moving in the direction of the cone end.

Despite skepticism from the scientific priesthood – who are adamant that an EmDrive would violate the conservation principle – tests in Germany, China and at NASA have found that the EmDrive produces thrust that could not be explained.

Cancer Research Is Broken

Written by Daniel Engber

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, April 21, Future Tense will hold an event in Washington, D.C., on the reproducibility crisis in biomedicine. For more information and to RSVP, visit the New America website.

The U.S. government spends $5 billion every year on cancer research; charities and private firms add billions more. Yet the return on this investment—in terms of lives saved, suffering reduced—has long been disappointing: Cancer death rates have drifted downward over the past 20 years, but not as quickly as we’d hoped. Even as the science makes incremental progress, it feels as though we’re going in a circle.

That’s a “hackable problem,” says Silicon Valley pooh-bah Sean Parker, who last week announced his founding of a $250 million institute for research into cancer immunotherapy, an old idea that’s come around again in recent years. “As somebody who has spent his life as an entrepreneur trying to pursue kind of rapid, disruptive changes,” Parker said, “I’m impatient.”

Many science funders share Parker’s antsiness over all the waste of time and money. In February, the White House announced its plan to put $1 billion toward a similar objective—a “Cancer Moonshot” aimed at making research more techy and efficient. But recent studies of the research enterprise reveal a more confounding issue, and one that won’t be solved with bigger grants and increasingly disruptive attitudes. The deeper problem is that much of cancer research in the lab—maybe even most of it—simply can’t be trusted. The data are corrupt. The findings are unstable. The science doesn’t work.

In other words, we face a replication crisis in the field of biomedicine, not unlike the one we’ve seen in psychology but with far more dire implications. Sloppy data analysis, contaminated lab materials, and poor experimental design all contribute to the problem. Last summer, Leonard P. Freedman, a scientist who worked for years in both academia and big pharma, published a paper with two colleagues on “the economics of reproducibility in preclinical research.” After reviewing the estimated prevalence of each of these flaws and fault-lines in biomedical literature, Freedman and his co-authors guessed that fully half of all results rest on shaky ground, and might not be replicable in other labs. These cancer studies don’t merely fail to find a cure; they might not offer any useful data whatsoever. Given current U.S. spending habits, the resulting waste amounts to more than $28 billion. That’s two dozen Cancer Moonshots misfired in every single year. That’s 100 squandered internet tycoons.
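The $28 billion figure follows from two inputs in Freedman’s paper: annual U.S. preclinical research spending (roughly $56 billion) and an estimated irreproducibility rate of about 50 percent. A back-of-envelope version, with those figures treated as the paper’s stated assumptions rather than hard data:

```python
# Back-of-envelope reconstruction of the waste estimate quoted above.
# Both inputs are Freedman et al.'s published estimates, not exact data.
preclinical_spend_usd = 56.4e9   # approx. annual US preclinical spending
irreproducible_rate = 0.50       # estimated share of non-replicable results

waste = preclinical_spend_usd * irreproducible_rate
print(f"Estimated annual waste: ${waste / 1e9:.1f} billion")
```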

How could this be happening? At first glance it would seem medical research has a natural immunity to the disease of irreproducible results. Other fields, such as psychology, hold a more tenuous connection to our lives. When a social-science theory turns out to be misguided, we have only to update our understanding of the human mind—a shift of attitude, perhaps, as opposed to one of practice. The real-world stakes are low enough that strands of falsehood might sustain themselves throughout the published literature without having too much impact. But when a cancer study ends up at the wrong conclusion—and an errant strand is snipped—people die and suffer, and a multibillion-dollar industry of treatment loses money, too. I always figured that this feedback would provide a self-corrective loop, a way for the invisible hands of human health and profit motive to guide the field away from bad technique.

Alas, the feedback loop doesn’t seem to work so well, and without some signal to correct them, biologists get stuck in their bad habits, favoring efficiency in publication over the value of results. They also face a problem more specific to their research: The act of reproducing biomedical experiments—I mean, just attempting to obtain the same result—takes enormous time and money, far more than would be required for, say, studies in psychology. That makes it very hard to diagnose the problem of reproducibility in cancer research and understand its scope and symptoms. If we can’t easily test the literature for errors, then how are we supposed to fix it up?

When cancer research does get tested, it’s almost always by a private research lab. Pharmaceutical and biotech businesses have the money and incentive to proceed—but these companies mostly keep their findings to themselves. (That’s another break in the feedback loop of self-correction.) In 2012, the former head of cancer research at Amgen, Glenn Begley, brought wide attention to this issue when he decided to go public with his findings in a piece for Nature. Over a 10-year stretch, he said, Amgen’s scientists had tried to replicate the findings of 53 “landmark” studies in cancer biology. Just six of them came up with positive results.
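Put as a percentage, the Amgen replication rate quoted above is strikingly low:

```python
# Replication rate from the Amgen figures quoted in the text:
# 6 of 53 "landmark" cancer biology studies replicated.
amgen_replicated, amgen_total = 6, 53

rate = amgen_replicated / amgen_total
print(f"Amgen replication rate: {rate:.0%}")  # roughly 11%
```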

Begley blames these failures on some systematic problems in the literature, not just in cancer research but all of biomedicine. He says that preclinical work—the basic science often done by government-funded, academic scientists—tends to be quite slipshod. Investigators fail to use controls; or they don’t blind themselves to study groups; or they selectively report their data; or they skip important steps, such as testing their reagents.

Begley’s broadside came as no surprise to those in the industry. In 2011, a team from Bayer had reported that only 20 to 25 percent of the studies they tried to reproduce came to results “completely in line” with those of the original publications. There’s even a rule of thumb among venture capitalists, the authors noted, that at least half of published studies, even those from the very best journals, will not work out the same when conducted in an industrial lab.

An international effort to pool the findings from this hidden, private research on reliability could help us to assess the broader problem. The Reproducibility Project for Cancer Biology, started in 2013 with money from the Laura and John Arnold Foundation, should be even better. The team behind the project chose 50 highly influential papers published between 2010 and 2012, and then set out to work in concert with the authors of each one, so as to reconstruct the papers’ most important, individual experiments. Once everyone agreed upon the details—and published them in a peer-reviewed journal—the team farmed out experiments to unbiased, commercial research groups. (The contracts are being handled through the Science Exchange, a Silicon Valley startup that helps to allocate research tasks to a network of more than 900 private labs.)

Problems and delays occurred at every level. The group started with about $2 million, says Brian Nosek, a psychologist and advocate for replication research, who helped to launch the Reproducibility Project for Cancer Biology as well as an earlier, analogous one for psychology. The psychology project, which began in late 2011 and was published last summer, reproduced studies from 100 different papers on the basis of a $250,000 grant and lots of volunteered time from the participants. The cancer biology project, on the other hand, has only tried to cover half that many studies, on a budget eight times larger—and even that proved to be too much. Last summer the group was forced to scale back its plan from evaluating 50 papers to looking at just 35 or 37.
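The budget comparison in the paragraph above checks out arithmetically:

```python
# Checking the "eight times larger" budget comparison quoted above.
psychology_budget = 250_000    # psychology project grant (100 papers)
cancer_budget = 2_000_000      # cancer biology project starting funds

print(f"Cancer project budget is {cancer_budget / psychology_budget:.0f}x larger")
```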

It took months and months just to negotiate the paperwork, says Elizabeth Iorns, a member of the project team and founder of Science Exchange. Many experiments required contracts, called “material transfer agreements,” that would allow one institution to share its cells, animals, or bits of DNA with another research group. Iorns recalls that it took a full year of back-and-forth communication to work out just one of these agreements.

For some experiments, the original materials could not be shared, red tape notwithstanding, because they were simply gone or corrupted in some way. That meant the replicating labs would have to recreate the materials themselves—an arduous undertaking. Iorns said one experiment called for the creation of a quadruple-transgenic mouse, i.e. one with its genome modified in four specific ways. “It would take literally years and years to produce them,” she said. “We decided that it was not going to happen.”

Then there’s the fact that the heads of many labs have little sense of how, exactly, their own experiments were carried out. In many cases, a graduate student or post-doc did most of the work, and then moved on to another institution. To reconstruct the research, then, someone had to excavate and analyze the former student or post-doc’s notes—a frustrating, time-consuming task. “A lot of time we don’t know what reagents the original lab used,” said Tim Errington, the project’s manager, “and the original lab doesn’t know, either.”

I talked to one researcher, Cory Johannessen of the Broad Institute in Cambridge, Massachusetts, whose 2010 paper on the development of drug-resistance in cancer cells has been selected for replication. Most of the actual work was done in 2008, he told me. While he said that he was glad to have been chosen for the project, the process turned into a nightmare. His present lab is just down the hall from the one in which he’d done the research, and even so he said “it was a huge, huge amount of effort” to sort through his old materials. “If I were on the other side of the country, I would have said, ‘I’m sorry, I can’t help.’ ” Reconstructing what he’d done eight years ago meant digging up old notebooks and finding details as precise as how many cells he’d seeded in each well of a culture plate. “It felt like coming up with a minute-by-minute protocol,” he said.

The Reproducibility Project also wants to use the same materials and reagents as the original researchers, even buying from the same suppliers, when possible. A few weeks ago, the team published its official plan to reproduce six experiments from Johannessen’s paper; the first one alone calls for about three dozen materials and tools, ranging from the trivial—“6-well plates, original brand not specified”—to key cell lines.

Johannessen’s experience, and the project’s as a whole, illustrate that the crisis of “reproducibility” has a double meaning. In one sense, it’s a problem of results: Can a given finding be repeated in another lab? Does the finding tell us something true about the world? In another sense, though, it’s a problem of methodology: Can a given experiment even be repeated in another lab, whatever its results might be? If there’s no way to reproduce experiments, there’s no way to know if we can reproduce results.

Research on the research literature shows the wideness of the methodology problem. Even basic facts about experiments—essential steps that must be followed while recreating research—are routinely omitted from published papers. A 2009 survey of animal studies found that just 60 percent included information about the number, strain, sex, age, or weight of the animals used. Another survey, published in 2013, looked at several hundred journal articles which together made reference to more than 1,700 different laboratory materials. According to the authors, only about half of these materials could be identified by reading the original papers.

One group of researchers tried to reach out to the investigators behind more than 500 original research papers published between 1991 and 2011, and found that just one-quarter of those authors said they had their data. (Though not all were willing to share.) Another 25 percent of the authors were simply unreachable—the research team could not find a working email address for them.

Not all these problems are unique to biomedicine. Brian Nosek points out that in most fields, career advancement comes with publishing the most papers and the flashiest papers, not the most well-documented ones. That means that when it comes to getting ahead, it’s not really in the interest of any researchers—biologists and psychologists alike—to be comprehensive in the reporting of their data and procedures. And for every point that one could make about the specific problems with reproducing biology experiments—the trickiness of identifying biological reagents, or working out complicated protocols—Nosek offers an analogy from his field. Even a behavioral study of local undergraduate volunteers may require subtle calibrations, careful delivery of instructions, and attention to seemingly trivial factors such as the time of day. I thought back to what the social psychologist Roy Baumeister told me about his own work, that there is “a craft to running experiments,” and that this craft is sometimes bungled in attempted replications.

That may be so, but I’m still convinced that psychology has a huge advantage over cancer research, when it comes to self-diagnosis. You can see it in the way each field has responded to its replication crisis. Some psychology labs are now working to validate and replicate their own research before it’s published. Some psychology journals are requiring researchers to announce their research plans and hypotheses ahead of time, to help prevent bias. And though its findings have been criticized, the Reproducibility Project for Psychology has already been completed. (This openness to dealing with the problem may explain why the crisis in psychology has gotten somewhat more attention in the press.)

The biologists, in comparison, have been reluctant or unable to pursue even very simple measures of reform. Leonard Freedman, the lead author of the paper on the economics of irreproducibility, has been pushing very hard for scientists to pay attention to the cell lines that they use in research. These common laboratory tools are often contaminated with hard-to-see bacteria, or else with other, unrelated lines of cells. One survey found such problems may affect as many as 36 percent of the cell lines used in published papers. Freedman notes that while there is a simple way to test a cell line for contamination—a genetic test that costs about a hundred bucks—it’s almost never used. Some journals recommend the test, but almost none require it. “Deep down, I think they’re afraid to make the bar too high,” he said.

Likewise, Elizabeth Iorns’ Science Exchange launched a “Reproducibility Initiative” in 2012, so that biomedical researchers could use her network to validate their own findings in an independent laboratory. Four years later, she says that not a single lab has taken that opportunity. “We didn’t push it very hard,” she said, explaining that any researchers who paid to reproduce their research might be accused of having “misused” their grant funding. (That problem will persist until the people giving out the grants—including those in government—agree to make such validations a priority.) Iorns now calls the whole thing an “awareness-building experiment.”

Projects like the ones led by Brian Nosek and Elizabeth Iorns may push cancer researchers to engage more fully with the problems in their field. Whatever number the Reproducibility Project ends up putting on the cancer research literature—I mean, whatever percentage of the published findings the project manages to reproduce—may turn out to be less important than the finding it has made already: Namely, that the process is itself a hopeless slog. We’ll never understand the problems with cancer studies, let alone figure out a way to make them “hackable,” until we’ve figured out a way to make them reproducible.

Read more at www.slate.com

Study: Global warming has improved U.S. weather

Written by JunkScience.com

“The vast majority of Americans have experienced more favorable weather conditions over the past 40 years, researchers from New York University and Duke University have found.”

The media release is below.

Recent warmer winters may be cooling climate change concern

The vast majority of Americans have experienced more favorable weather conditions over the past 40 years, researchers from New York University and Duke University have found. The trend is projected to reverse over the course of the coming century, but that shift may come too late to spur demands for policy responses to address climate change.

The analysis, published in the journal Nature, found that 80 percent of Americans live in counties where the weather is more pleasant than four decades ago. Winter temperatures have risen substantially throughout the United States since the 1970s, but summers have not become markedly more uncomfortable. The result is that weather has shifted toward a temperate year-round climate that Americans have been demonstrated to prefer.
