Cancer Research Is Broken

Written by Daniel Engber

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, April 21, Future Tense will hold an event in Washington, D.C., on the reproducibility crisis in biomedicine. For more information and to RSVP, visit the New America website.


The U.S. government spends $5 billion every year on cancer research; charities and private firms add billions more. Yet the return on this investment—in terms of lives saved and suffering reduced—has long been disappointing: Cancer death rates have drifted downward over the past 20 years, but not as quickly as we’d hoped. Even as the science makes incremental progress, it feels as though we’re going in a circle.

That’s a “hackable problem,” says Silicon Valley pooh-bah Sean Parker, who last week announced his founding of a $250 million institute for research into cancer immunotherapy, an old idea that’s come around again in recent years. “As somebody who has spent his life as an entrepreneur trying to pursue kind of rapid, disruptive changes,” Parker said, “I’m impatient.”

Many science funders share Parker’s antsiness over all the waste of time and money. In February, the White House announced its plan to put $1 billion toward a similar objective—a “Cancer Moonshot” aimed at making research more techy and efficient. But recent studies of the research enterprise reveal a more confounding issue, and one that won’t be solved with bigger grants and increasingly disruptive attitudes. The deeper problem is that much of cancer research in the lab—maybe even most of it—simply can’t be trusted. The data are corrupt. The findings are unstable. The science doesn’t work.

In other words, we face a replication crisis in the field of biomedicine, not unlike the one we’ve seen in psychology but with far more dire implications. Sloppy data analysis, contaminated lab materials, and poor experimental design all contribute to the problem. Last summer, Leonard P. Freedman, a scientist who worked for years in both academia and big pharma, published a paper with two colleagues on “the economics of reproducibility in preclinical research.” After reviewing the estimated prevalence of each of these flaws and fault-lines in biomedical literature, Freedman and his co-authors guessed that fully half of all results rest on shaky ground, and might not be replicable in other labs. These cancer studies don’t merely fail to find a cure; they might not offer any useful data whatsoever. Given current U.S. spending habits, the resulting waste amounts to more than $28 billion. That’s two dozen Cancer Moonshots misfired in every single year. That’s 100 squandered internet tycoons.
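Those last two figures follow from simple arithmetic. Here is a minimal sketch, assuming an annual U.S. preclinical research spend of roughly $56 billion (an inferred base, since half of it yields the article’s $28 billion; the exact figure is not stated above):

```python
# Back-of-the-envelope check of the article's waste figures.
preclinical_spend = 56e9   # assumed annual U.S. preclinical spending, dollars
irreproducible_rate = 0.5  # Freedman and co-authors' estimate

waste = preclinical_spend * irreproducible_rate
print(f"Annual waste: ${waste / 1e9:.0f} billion")      # ~$28 billion
print(f"Cancer Moonshots per year: {waste / 1e9:.0f}")  # ~28, i.e. "two dozen" ($1B each)
print(f"Parker-sized institutes: {waste / 250e6:.0f}")  # ~112, on the order of 100 ($250M each)
```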

How could this be happening? At first glance it would seem medical research has a natural immunity to the disease of irreproducible results. Other fields, such as psychology, hold a more tenuous connection to our lives. When a social-science theory turns out to be misguided, we have only to update our understanding of the human mind—a shift of attitude, perhaps, as opposed to one of practice. The real-world stakes are low enough that strands of falsehood might sustain themselves throughout the published literature without having too much impact. But when a cancer study ends up at the wrong conclusion—and an errant strand is snipped—people die and suffer, and a multibillion-dollar industry of treatment loses money, too. I always figured that this feedback would provide a self-corrective loop, a way for the invisible hands of human health and the profit motive to guide the field away from bad technique.

Alas, the feedback loop doesn’t seem to work so well, and without some signal to correct them, biologists get stuck in their bad habits, favoring efficiency in publication over the value of results. They also face a problem more specific to their research: The act of reproducing biomedical experiments—I mean, just attempting to obtain the same result—takes enormous time and money, far more than would be required for, say, studies in psychology. That makes it very hard to diagnose the problem of reproducibility in cancer research and understand its scope and symptoms. If we can’t easily test the literature for errors, then how are we supposed to fix it up?

When cancer research does get tested, it’s almost always by a private research lab. Pharmaceutical and biotech businesses have the money and incentive to proceed—but these companies mostly keep their findings to themselves. (That’s another break in the feedback loop of self-correction.) In 2012, the former head of cancer research at Amgen, Glenn Begley, brought wide attention to this issue when he decided to go public with his findings in a piece for Nature. Over a 10-year stretch, he said, Amgen’s scientists had tried to replicate the findings of 53 “landmark” studies in cancer biology. Just six of them came up with positive results.

Begley blames these failures on some systematic problems in the literature, not just in cancer research but all of biomedicine. He says that preclinical work—the basic science often done by government-funded, academic scientists—tends to be quite slipshod. Investigators fail to use controls; or they don’t blind themselves to study groups; or they selectively report their data; or they skip important steps, such as testing their reagents.

Begley’s broadside came as no surprise to those in the industry. In 2011, a team from Bayer had reported that only 20 to 25 percent of the studies they tried to reproduce came to results “completely in line” with those of the original publications. There’s even a rule of thumb among venture capitalists, the authors noted, that at least half of published studies, even those from the very best journals, will not work out the same when conducted in an industrial lab.

An international effort to pool the findings from this hidden, private research on reliability could help us to assess the broader problem. The Reproducibility Project for Cancer Biology, started in 2013 with money from the Laura and John Arnold Foundation, should be even better. The team behind the project chose 50 highly influential papers published between 2010 and 2012, and then set out to work in concert with the authors of each one, so as to reconstruct the papers’ most important, individual experiments. Once everyone agreed upon the details—and published them in a peer-reviewed journal—the team farmed out experiments to unbiased, commercial research groups. (The contracts are being handled through the Science Exchange, a Silicon Valley startup that helps to allocate research tasks to a network of more than 900 private labs.)

Problems and delays occurred at every level. The group started with about $2 million, says Brian Nosek, a psychologist and advocate for replication research, who helped to launch the Reproducibility Project for Cancer Biology as well as an earlier, analogous one for psychology. The psychology project, which began in late 2011 and was published last summer, reproduced studies from 100 different papers on the basis of a $250,000 grant and lots of volunteered time from the participants. The cancer biology project, on the other hand, has only tried to cover half that many studies, on a budget eight times larger—and even that proved to be too much. Last summer the group was forced to scale back its plan from evaluating 50 papers to looking at just 35 or 37.

It took months and months just to negotiate the paperwork, says Elizabeth Iorns, a member of the project team and founder of Science Exchange. Many experiments required contracts, called “material transfer agreements,” that would allow one institution to share its cells, animals, or bits of DNA with another research group. Iorns recalls that it took a full year of back-and-forth communication to work out just one of these agreements.

For some experiments, the original materials could not be shared, red tape notwithstanding, because they were simply gone or corrupted in some way. That meant the replicating labs would have to recreate the materials themselves—an arduous undertaking. Iorns said one experiment called for the creation of a quadruple-transgenic mouse, i.e. one with its genome modified in four specific ways. “It would take literally years and years to produce them,” she said. “We decided that it was not going to happen.”

Then there’s the fact that the heads of many labs have little sense of how, exactly, their own experiments were carried out. In many cases, a graduate student or post-doc did most of the work, and then moved on to another institution. To reconstruct the research, then, someone had to excavate and analyze the former student or post-doc’s notes—a frustrating, time-consuming task. “A lot of time we don’t know what reagents the original lab used,” said Tim Errington, the project’s manager, “and the original lab doesn’t know, either.”

I talked to one researcher, Cory Johannessen of the Broad Institute in Cambridge, Massachusetts, whose 2010 paper on the development of drug resistance in cancer cells had been selected for replication. Most of the actual work was done in 2008, he told me. While he said that he was glad to have been chosen for the project, the process turned into a nightmare. His present lab is just down the hall from the one in which he’d done the research, and even so, he said, “it was a huge, huge amount of effort” to sort through his old materials. “If I were on the other side of the country, I would have said, ‘I’m sorry, I can’t help.’ ” Reconstructing what he’d done eight years ago meant digging up old notebooks and finding details as precise as how many cells he’d seeded in each well of a culture plate. “It felt like coming up with a minute-by-minute protocol,” he said.

The Reproducibility Project also wants to use the same materials and reagents as the original researchers, even buying from the same suppliers, when possible. A few weeks ago, the team published its official plan to reproduce six experiments from Johannessen’s paper; the first one alone calls for about three dozen materials and tools, ranging from the trivial—“6-well plates, original brand not specified”—to key cell lines.

Johannessen’s experience, and the project’s as a whole, illustrate that the crisis of “reproducibility” has a double meaning. In one sense, it’s a problem of results: Can a given finding be repeated in another lab? Does the finding tell us something true about the world? In another sense, though, it’s a problem of methodology: Can a given experiment even be repeated in another lab, whatever its results might be? If there’s no way to reproduce experiments, there’s no way to know if we can reproduce results.

Research on the research literature shows the extent of the methodology problem. Even basic facts about experiments—essential steps that must be followed while recreating research—are routinely omitted from published papers. A 2009 survey of animal studies found that just 60 percent included information about the number, strain, sex, age, or weight of the animals used. Another survey, published in 2013, looked at several hundred journal articles that together made reference to more than 1,700 different laboratory materials. According to the authors, only about half of these materials could be identified by reading the original papers.

One group of researchers tried to reach out to the investigators behind more than 500 original research papers published between 1991 and 2011, and found that just one-quarter of those authors said they had their data. (Though not all were willing to share.) Another 25 percent of the authors were simply unreachable—the research team could not find a working email address for them.

Not all these problems are unique to biomedicine. Brian Nosek points out that in most fields, career advancement comes with publishing the most papers and the flashiest papers, not the most well-documented ones. That means that when it comes to getting ahead, it’s not really in the interest of any researchers—biologists and psychologists alike—to be comprehensive in the reporting of their data and procedures. And for every point that one could make about the specific problems with reproducing biology experiments—the trickiness of identifying biological reagents, or working out complicated protocols—Nosek offers an analogy from his field. Even a behavioral study of local undergraduate volunteers may require subtle calibrations, careful delivery of instructions, and attention to seemingly trivial factors such as the time of day. I thought back to what the social psychologist Roy Baumeister told me about his own work, that there is “a craft to running experiments,” and that this craft is sometimes bungled in attempted replications.

That may be so, but I’m still convinced that psychology has a huge advantage over cancer research, when it comes to self-diagnosis. You can see it in the way each field has responded to its replication crisis. Some psychology labs are now working to validate and replicate their own research before it’s published. Some psychology journals are requiring researchers to announce their research plans and hypotheses ahead of time, to help prevent bias. And though its findings have been criticized, the Reproducibility Project for Psychology has already been completed. (This openness to dealing with the problem may explain why the crisis in psychology has gotten somewhat more attention in the press.)

The biologists, in comparison, have been reluctant or unable to pursue even very simple measures of reform. Leonard Freedman, the lead author of the paper on the economics of irreproducibility, has been pushing very hard for scientists to pay attention to the cell lines that they use in research. These common laboratory tools are often contaminated with hard-to-see bacteria, or else with other, unrelated lines of cells. One survey found such problems may affect as many as 36 percent of the cell lines used in published papers. Freedman notes that while there is a simple way to test a cell line for contamination—a genetic test that costs about a hundred bucks—it’s almost never used. Some journals recommend the test, but almost none require it. “Deep down, I think they’re afraid to make the bar too high,” he said.

Likewise, Elizabeth Iorns’ Science Exchange launched a “Reproducibility Initiative” in 2012, so that biomedical researchers could use her network to validate their own findings in an independent laboratory. Four years later, she says that not a single lab has taken that opportunity. “We didn’t push it very hard,” she said, explaining that any researchers who paid to reproduce their research might be accused of having “misused” their grant funding. (That problem will persist until the people giving out the grants—including those in government—agree to make such validations a priority.) Iorns now calls the whole thing an “awareness-building experiment.”

Projects like the ones led by Brian Nosek and Elizabeth Iorns may push cancer researchers to engage more fully with the problems in their field. Whatever number the Reproducibility Project ends up putting on the cancer research literature—I mean, whatever percentage of the published findings the project manages to reproduce—may turn out to be less important than the finding it has made already: Namely, that the process is itself a hopeless slog. We’ll never understand the problems with cancer studies, let alone figure out a way to make them “hackable,” until we’ve figured out a way to make them reproducible.

Read more at www.slate.com


The regime shift of the 1920s and 1930s in the North Atlantic

Written by Paul Homewood

The warming of the Arctic in the 1920s and 30s is well documented, despite attempts to wipe it from the temperature record. But it is always good to come across another paper, even though this one dates back to 2006.


ABSTRACT

During the 1920s and 1930s, there was a dramatic warming of the northern North Atlantic Ocean. Warmer-than-normal sea temperatures, reduced sea ice conditions and enhanced Atlantic inflow in northern regions continued through to the 1950s and 1960s, with the timing of the decline to colder temperatures varying with location. Ecosystem changes associated with the warm period included a general northward movement of fish.


Think before you LEAP

Written by Dr. Klaus L.E. Kaiser

The LEAP Manifesto recently published by left-wing associates of the NDP party in Canada has been described as the brainchild of Naomi Klein and her hubby Avi Lewis. The subtitle of the manifesto “A Call for a Canada Based on Caring for the Earth and One Another” is reminiscent of Pope Francis’ recent encyclical “Laudato Si.”

[Image: stationary “e-bike” for electric power generation]


Climate Surprise: Why More CO2 is Good for the Earth

Written by WILLIAM M BRIGGS

I had the good fortune to attend a talk by conservative author Mark Steyn at the Princeton Club in midtown Manhattan on Tuesday, sponsored by Roger Kimball’s The New Criterion and co-sponsored by the newly formed CO2 Coalition, founded by Princeton physicist Will Happer. In the talk, Steyn warned that prostitution will increase because of global warming, and that global warming will also cause impotence in Italian men. This is a compounding tragedy because, of course, all those newly formed prostitutes won’t be able to find customers — at least, not in Italy.


It gets worse. Global warming is also responsible for Pre-Traumatic Stress Disorder, a mental malady affecting the reasoning centers of the brain, causing its sufferers to run nervously in ever tighter circles as they demand the government do the impossible and stop the climate from changing.

PreTSD was discovered in the maiden science of the psychology of global warming. We can only surmise that it’s caused when people are confronted with the reality that the average annual global temperature has swung dramatically in past ages — long before humans developed a rage for burning fossils — but that of late those same averages have failed to do anything dramatic and, indeed, have failed to cooperate with global warming predictions, which have soared ever upwards (see page 2 of this report).


The replication crisis in science has just begun. It will be big.

Written by fabiusmaximus.com

Summary: After a decade of slow growth out of public sight, the replication crisis in science is breaking into public view. First psychology and biomedical studies, now many other fields — overturning what we were told was settled science, the foundations of our personal behavior and public policy. Here is an introduction to the conflict (there is pushback), with the usual links to detailed information at the end, and some tentative conclusions about the effects on the public’s trust in science. It’s early days yet, with the real action yet to begin.

“Men only care for science so far as they get a living by it, and that they worship even error when it affords them a subsistence.”
— Goethe, from Conversations of Goethe with Eckermann and Soret.


Mickey Kaus referred to undernews as those “stories bubbling up from the blogs and the tabs that don’t meet MSM standards.” More broadly, it refers to information which mainstream journalists pretend not to see. By mysterious processes it sometimes becomes news. A sufficiently large story can mark the next stage in a social revolution. Game, the latest counter-revolution to feminism, has not yet reached that stage. The replicability crisis of science appears to be doing so, breaching like a whale from the depths of the sea in which it has silently grown.


Dinosaurs ‘in decline’ 50 million years before asteroid strike

Written by Pallab Ghosh

The dinosaurs were already in decline 50 million years before the asteroid strike that finally wiped them out, a study suggests.


The new assessment adds further fuel to a debate on how dinosaurs were doing when a 10km-wide space rock slammed into Earth 66 million years ago.

A team suggests the creatures were in long-term decline because they could not cope with the ways Earth was changing.

The study appears in the journal PNAS.

Researchers analysed the fossil remains of dinosaurs from the point they emerged 231 million years ago up to the point they went extinct.


Three Little Known Scientists Who Changed Our World View of Climate

Written by wattsupwiththat.com - Guest Opinion: Dr. Tim Ball

In this age of specialization, it is very difficult for scientists to integrate information and create a wider cross-discipline understanding of how the Earth works. Three scientists, Alfred Wegener, Milutin Milankovitch, and Vladimir Köppen, had such abilities and their work profoundly impacted our view and understanding of the world and climate.

Sadly, because of the glorification of specialization, the denigration of generalization, and the control of knowledge and education by the government, they are little known or understood today. As always happens with history, they are accused of saying things they never said, or of not saying things they did say. It is why, in all my classes, students were required to go back to the source and not perpetuate the practice of what I call “carping on carping.”

Assignment of the three to the arcane backwaters of the history of science and climate reflects the loss of perspective in climate science manifest in the work of the Intergovernmental Panel on Climate Change (IPCC). That political body deliberately directed climate science and world attention to anthropogenic global warming (AGW), and more narrowly to one greenhouse gas, CO2. They even proved the validity of their attention with computer models that pre-determined that CO2 from humans explained 95 percent of all temperature and climate change since 1950.

Wegener, Milankovitch, and Köppen knew each other very well (Wegener married Köppen’s daughter). The three produced groundbreaking individual and specific research, but the fruits of their collaboration led to the production of general global theories that underpin so much of climate and earth sciences today.

Vladimir Köppen’s global climate classification, the basis of most systems in use today, combined meteorology, climatology, and botany so that plants were a primary indicator of climate categories and regions. It introduced the important and mostly overlooked concept of the “effectiveness” of precipitation. Wegener produced the continental drift theory that provides the foundation for geophysics and the understanding of earthquakes and volcanic activity. Milutin Milankovitch, a Serbian mathematician and climatologist, combined the effects of changes in Sun/Earth relationships to determine their role in varying the amount of energy reaching the Earth and causing climate change.



The Big Ones: Scientist Warns up to 4 Quakes Over 8.0 Possible Under ‘Current Conditions’

Written by www.rt.com

Sunday’s devastating earthquake in Ecuador might just be the beginning, according to a seismologist who says that current conditions in the Pacific Rim could trigger at least four quakes with magnitudes greater than 8.0.


Roger Bilham, a University of Colorado seismologist, told the Express, “If (the quakes) delay, the strain accumulated during the centuries provokes more catastrophic mega earthquakes.”

A total of 38 volcanoes are currently erupting around the world, making conditions ripe for seismic activity in the Pacific area.

More than 270 people are now confirmed dead after Sunday’s quake in Ecuador, with the number expected to rise.


Study: Global warming has improved U.S. weather

Written by JunkScience.com

“The vast majority of Americans have experienced more favorable weather conditions over the past 40 years, researchers from New York University and Duke University have found.”

The media release is below.


Recent warmer winters may be cooling climate change concern

The vast majority of Americans have experienced more favorable weather conditions over the past 40 years, researchers from New York University and Duke University have found. The trend is projected to reverse over the course of the coming century, but that shift may come too late to spur demands for policy responses to address climate change.

The analysis, published in the journal Nature, found that 80 percent of Americans live in counties where the weather is more pleasant than four decades ago. Winter temperatures have risen substantially throughout the United States since the 1970s, but summers have not become markedly more uncomfortable. The result is that weather has shifted toward a temperate year-round climate that Americans have been demonstrated to prefer.


Lapse Rate Refutes Radiative Greenhouse Effect

Written by Joseph E Postma

Definitive Refutation: This is something I wrote about long ago (reference pg. 16), but in a recent Slayer email exchange I re-realized just how important it was.  Hopefully any Slayers will follow up in the comments if anything else needs to be added. SSD

In Misunderstood Basic Concepts and the Greenhouse Effect, it is stated:

“The tropospheric temperature lapse rate would not exist without the greenhouse effect. While it is true that convective overturning of the atmosphere leads to the observed lapse rate, that convection itself would not exist without the greenhouse effect constantly destabilizing the lapse rate through warming the lower atmosphere and cooling the upper atmosphere.”

However, let us look once again at the derivation for the lapse rate (also reference this previous post):

When there is no longer any gain or loss of energy in a column of gas (i.e., when it is in energy equilibrium), the energy U of an arbitrary parcel of gas is given by the sum of its thermal and gravitational potential energies:

U = m Cp T + m g h

However, this energy is constant since there is no other energy input (or loss), and so its differential is equal to zero:

dU = 0 = m Cp dT + m g dh

which results in

dT/dh = -g/Cp

Note that this equation and its derivation make no reference to greenhouse gases or thermal radiation at all. The lapse rate depends only on the gas’s heat capacity and the strength of gravity, and it will occur for any gas, even one that is totally “radiatively inert”, and even if the gas isn’t undergoing bulk convection; the lapse rate develops at the infinitesimal scale of the action of gravity on the particles of the gas.
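As a quick numerical check, here is a minimal sketch of what the formula gives for dry air, assuming the standard textbook values g ≈ 9.81 m/s² and Cp ≈ 1004 J/(kg·K):

```python
# Dry adiabatic lapse rate from dT/dh = -g/Cp.
g = 9.81     # gravitational acceleration, m/s^2
Cp = 1004.0  # specific heat of dry air at constant pressure, J/(kg*K)

lapse_rate = -g / Cp                        # K per metre
print(f"{lapse_rate * 1000:.2f} K per km")  # about -9.8 K/km

# Temperature of a parcel lifted from 288 K at sea level to 5 km:
T0, h = 288.0, 5000.0
print(f"T at {h / 1000:.0f} km: {T0 + lapse_rate * h:.1f} K")  # about 239 K
```

The result, roughly -9.8 K per kilometre, matches the measured dry adiabatic lapse rate of Earth’s atmosphere.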


Gavin Schmidt and Reference Period “Trickery”

Written by Steve McIntyre

In the past few weeks, I’ve been re-examining the long-standing dispute over the discrepancy between models and observations in the tropical troposphere. My interest was prompted in part by Gavin Schmidt’s recent attack on a graphic used by John Christy in numerous presentations (see recent discussion here by Judy Curry). Schmidt made the sort of offensive allegations that he makes far too often:

“@curryja use of Christy’s misleading graph instead is the sign of partisan not a scientist. YMMV.” (tweet)

“@curryja Hey, if you think it’s fine to hide uncertainties, error bars & exaggerate differences to make political points, go right ahead.” (tweet)

As a result, Curry decided not to use Christy’s graphic in her recent presentation to a congressional committee. In today’s post, I’ll examine the validity (or lack thereof) of Schmidt’s critique.

Schmidt’s primary dispute, as best as I can understand it, was about Christy’s centering of model and observation data to achieve a common origin in 1979, the start of the satellite period, a technique which (obviously) shows a greater discrepancy at the end of the period than if the data had been centered in the middle of the period. I’ll show support for Christy’s method from his long-time adversary, Carl Mears, whose own comparison of models and observations used a short early centering period (1979-83) “so the changes over time can be more easily seen.” While both Christy and Mears provided rational arguments for their baseline decisions, Schmidt’s argument was little more than shouting.
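To see why the centering choice matters, here is a minimal sketch with made-up series (purely illustrative trends, not the actual model or satellite data) comparing an early 1979-83 baseline against full-period centering:

```python
import numpy as np

years = np.arange(1979, 2016)
# Illustrative series only: a "model" warming at 0.25 K/decade
# and "observations" warming at 0.10 K/decade (invented numbers).
model = 0.025 * (years - 1979)
obs = 0.010 * (years - 1979)

def center(series, mask):
    """Express a series as anomalies from its mean over the chosen window."""
    return series - series[mask].mean()

early = (years >= 1979) & (years <= 1983)  # short early baseline (Christy/Mears style)
full = np.ones_like(years, dtype=bool)     # centering on the whole record

for name, mask in [("1979-83 baseline", early), ("full-period baseline", full)]:
    gap = center(model, mask)[-1] - center(obs, mask)[-1]
    print(f"{name}: model-minus-obs gap at end = {gap:.2f} K")
```

The trend difference between the two series is identical either way; only the apparent gap at the end of the record changes (here, roughly twice as large with the early baseline), which is precisely what the two sides are arguing about.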

Background

The full history of the controversy over the discrepancy between models and observations in the tropical troposphere is voluminous. While the main protagonists have been Christy, Douglass, and Spencer on one side and Santer, Schmidt, Thorne, and others on the other, Ross McKitrick and I have also commented on this topic in the past, and McKitrick et al (2010) was discussed at some length by IPCC AR5, unfortunately (as too often) deceptively on key points.

Starting Points and Reference Periods

Christy and Spencer have produced graphics in a similar style for several years. Roy Spencer (here) in early 2014 showed a similar graphic using 1979-83 centering (shown below). Indeed, it was this earlier version that prompted vicious commentary by Bart Verheggen, commentary that appears to have originated some of the prevalent alarmist memes.



Why Science Is Broken, and How To Fix It

Written by WILLIAM M BRIGGS

There’s been a spate of lamentations that science is broken (here, here, here, here). I am a credentialed, working scientist, and I’m here to tell you that, with some exceptions, these cris de coeur are right. Science is a mess.


All of science? No. Robotics is doing well, in a creepy sort of way. All sorts of facts about how proteins are formed, turn themselves into pretzels, and scuttle along chemical gradients are hoorayed about. Physicists still have plenty to say about how to make quark soup. And certain fields continue to be driven forward by their own momentum and profit motive, as in fields connected to information technology.

But these areas of flashy progress mask a deep and growing problem with the institution of science. What is the source of those problems? Politics, money and philosophy. Too much of the first two and not enough of the third; or, rather, too much of the wrong kind of all three.


They Came From Beyond Our Galaxy And Landed In The Ice!

Written by Richard Chirgwin

Boffins’ tale of a neutrino source beyond the Milky Way, spotted by the IceCube observatory.


“Big Bird”, a neutrino spotted in December 2012, probably started its life nine billion years ago in a quasar far, far away: so says the international team of boffins who run the IceCube detector beneath the Antarctic ice.

By 2013, the IceCube collaborators believed they’d spotted extragalactic events; now they believe they know which source outside the Milky Way they came from.

Neutrinos are hard to spot in the first place: they have nearly no interaction with normal matter, and when they do interact, the signature is really hard to separate from noise. Hence locations like IceCube – instruments watching a cubic kilometre of ice in the Antarctic for the Cherenkov radiation flashes that indicate a neutrino actually hit something.


British shale oil may be ready to boom

Written by Daniel J Graeber

LONDON, April 18 (UPI) — The so-called Gatwick Gusher, a shale basin in the United Kingdom, could add as much as $74 billion to the nation’s economy, a study finds.

U.K. Oil & Gas Investments commissioned Ernst & Young to examine the future potential of oil production from the Weald shale basin.

“Assuming it can be extracted from a development site at the volumes projected by U.K. Oil & Gas, [the oil] has the potential to generate significant economic value to the U.K. economy,” the report read.

Oil & Gas U.K., the industry’s lobbying group, said the North Sea oil sector is in for a long period of decline, with less than $1.4 billion in new spending expected in 2016. Inland shale, meanwhile, has the potential to add between $10 billion and $74.6 billion to the British economy in gross value, the commissioned report said.


Cosmic Ray Tech May Unlock Pyramids’ Secrets

Written by ROSSELLA LORENZI

A new generation of muon telescopes has been built to detect the presence of secret structures and cavities in Egypt’s pyramids, a team of researchers announced on Friday.


Built by CEA (the French Alternative Energies and Atomic Energy Commission), the devices add to an armory of innovative, non-destructive technologies employed to investigate four pyramids that are more than 4,500 years old: the Great Pyramid and the pyramid of Khafre (or Chephren) at Giza, and the Bent and Red pyramids at Dahshur.

The project, called ScanPyramids, is scheduled to last one year and is being carried out by a team from Cairo University’s Faculty of Engineering and the Paris-based non-profit organization Heritage, Innovation and Preservation (HIP Institute) under the authority of the Egyptian Ministry of Antiquities.


Coal’s Future Shifts To Developing World

Written by Graham Lloyd, The Australian

There are 2300 new coal plants with 1400GW of capacity planned worldwide. China is planning to keep burning coal and to ship electricity to Germany, where the renewable revolution has made power so expensive it may soon be cheaper to get it from half a world away, from coal. 


[….] The US still gets roughly one-third of its electricity from coal. Rather than climate change and renewables, the fall of Peabody is largely a story of heavy debt burden and increased competition from shale gas.

And on this front, coal is not alone. New-generation solar energy company and former Silicon Valley darling SunEdison is itself on the verge of bankruptcy after its value plunged from $10bn in July to $650m. The company was once the great hope of renewable energy but has surrendered under the weight of heavy borrowings used to make overly expensive acquisitions as part of a poorly thought through strategy.
