Welcome to Edition 2.26 of the Rocket Report! We’ve got a feature-length report today, stuffed like a stocking full of stories about launch from around the world. This is our final issue before a holiday break, but we’ll be back with all the news that’s fit to lift on January 9.
As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.
Rocket Lab to build second New Zealand pad. Mere days after completing a new launch site in Virginia, Rocket Lab announced this week that it has started work on a second pad at its original launch site in New Zealand. The new pad, Pad B at Launch Complex 1, is scheduled to become operational by late 2020, SpaceNews reports.
Weekly launches? … Rocket Lab Chief Executive Peter Beck said the decision to build the second pad was driven by an anticipated increase in its launch rate. After six launches of its Electron rocket in 2019, the company anticipates launching once a month in 2020 and eventually higher cadences. “The additional pad really gives us the capacity to get down to one launch every week, which is what we’ve always been driving to,” he said. (submitted by Ken the Bin)
Air Force plans active smallsat launch campaign in 2020. The small-launch division of the Air Force Space and Missile Systems Center is preparing to launch nine missions in 2020, just about doubling the number of launches conducted in 2019, according to SpaceNews. “It will be a busy year,” said Lt. Col. Ryan Rose, chief of the small launch and targets division, which is part of SMC’s Launch Enterprise Systems Directorate.
Five this year … In 2019, five missions were carried out by the small-launch directorate: STP-2 aboard a SpaceX Falcon Heavy rocket, the Ascent Abort-2 flight test of the launch abort system for NASA’s Orion spacecraft from a modified Peacekeeper missile, STP-27RD aboard a Rocket Lab Electron vehicle, an MDA intercept flight test of the Terminal High Altitude Area Defense system, and (most recently) a flight test of a prototype conventionally configured ground-launched ballistic missile in support of the Pentagon’s Strategic Capabilities Office. (submitted by Ken the Bin)
Vector files for bankruptcy protection. Four months after laying off nearly the entirety of its 150-person staff, the micro-launch company filed for bankruptcy on December 13, SpaceNews reports. Vector had been one of the leading companies in the small-launch-vehicle market until August, when the company said that a “significant change in financing” led it to pause operations.
Funding falls through … The publication said the August layoffs were triggered when one of the company’s major investors, venture fund Sequoia, withdrew its support due to concerns about how the company was managed. That came as Vector was working on a new funding round, and Sequoia’s decision had a domino effect, causing other investors to back out. (submitted by Ildatch and Ken the Bin)
India’s PSLV makes its 50th launch. Earlier this month, the 50th flight of India’s Polar Satellite Launch Vehicle successfully delivered 10 spacecraft from five nations into orbit. This mission was also the 75th overall launch from India’s primary spaceport at Sriharikota, Spaceflight Now reports.
Two failures … The PSLV’s first mission lifted off from Sriharikota on September 20, 1993, but failed to reach orbit after encountering a problem during separation of the rocket’s second and third stages. The PSLV’s only other launch failure occurred August 31, 2017, when the rocket’s payload shroud did not jettison, preventing the rocket from placing an Indian navigation satellite into orbit. The rocket has become a workhorse in the small-satellite launch industry.
Exos Aerospace identifies cause of launch failure. Exos has found the cause of the October launch failure of its Suborbital Autonomous Rocket with Guidance (aka SARGE), according to the company’s co-founder. John Quinn, Exos Aerospace’s co-founder and chief operating officer, told Space.com that a composite part just below the nose cone failed, causing the cone to slide down into the rocket. The booster then flew nearly horizontally, beyond any hope of recovery.
Sticking with it … “What’s really interesting is, the component that failed was one that we replaced,” Quinn said. The replacement was made based on data gathered during the company’s third launch, which was successful. But engineers saw some moderate signs of stress in the composite part, so they decided to put in a new piece for the fourth launch. The company is already targeting another launch date for the first quarter of 2020, depending on the progress of the redesign and certain external matters, such as the renewal of its government launch license. (submitted by Ken the Bin)
New Shepard conducts 12th test flight. On December 11, Blue Origin launched its suborbital New Shepard rocket and crew capsule on an uncrewed test flight from the company’s launch and landing facility in West Texas. This was the 12th total test flight of the New Shepard launch system, and the third such flight in the year 2019, NASASpaceFlight.com reports.
Human flights in 2020? … This mission is expected to be one of the last uncrewed flights before Blue Origin begins flying human passengers on the New Shepard vehicle, with the first crewed launch most likely occurring sometime next year. The company has yet to begin selling tickets or even set a price for the 10-minute, out-of-this-world experience. Maybe that changes next year. Maybe not. (submitted by Ken the Bin)
Trump ally buys Stratolaunch. Geek Wire reports that the new owner of Stratolaunch, the space venture started by late Microsoft co-founder Paul Allen, is Steve Feinberg, a secretive billionaire with close ties to President Donald Trump. In October, Stratolaunch announced that it had transitioned ownership from Allen’s holding company, Vulcan Inc., but it did not identify who had bought the company.
Jean Floyd says for now … Private-equity firms typically replace existing managers as a prelude to realigning businesses they buy, which can involve firing, automation and offshoring. However, it appears that Jean Floyd, Stratolaunch’s president and CEO since 2015, remains in his role for now. Last week, Floyd tweeted that Stratolaunch had grown from 13 to 87 employees over the past two months. He also reported that the company’s new mission was “to be the world’s leading provider of high-speed flight-test services.” Maybe in 2020 we’ll see what that means.
Copper-titanium alloys show promise. Titanium alloys commonly used in additive manufacturing tend to cool and bond into crystal structures that leave them prone to cracking. However, a new report in the journal Nature suggests that a titanium-copper alloy may solve these problems and allow for the stronger material to be used more widely.
Building a stronger booster? … “We report on the development of titanium–copper alloys that have a high constitutional super-cooling capacity as a result of partitioning of the alloying element during solidification, which can override the negative effect of a high thermal gradient in the laser-melted region during additive manufacturing,” authors write in the journal. They say this could have applications in the aerospace and biomedical industries. This work remains at the laboratory level for now, but it could have long-term implications for spaceflight. (submitted by rochefort)
Nearly half of all adults in the US will be obese just 10 years from now, according to new projections published in the New England Journal of Medicine. Nearly a quarter will be severely obese.
Currently, about 40 percent of US adults are obese and about 20 percent are severely obese.
The new modeling study, led by public health researchers at Harvard, attempts to provide the most accurate projections yet for the country’s obesity epidemic, which is increasing at a concerning rate. “Especially worrisome,” the researchers write, “is the projected rise in the prevalence of severe obesity, which is associated with even higher mortality and morbidity and health care costs” than obesity.
For the study, the researchers defined weight categories by body mass index (BMI), which is the weight in kilograms divided by the square of the height in meters. BMIs under 25 were considered underweight or normal weight, 25 to <30 overweight, 30 to <35 moderate obesity, and 35 or over severe obesity.
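The study's cutoffs are easy to express directly from the formula. Here is a minimal sketch (the function name and example weights are illustrative, not from the study):

```python
def bmi_category(weight_kg, height_m):
    """Classify weight status using the study's BMI cutoffs.

    BMI = weight (kg) / height (m)^2.
    """
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "underweight or normal weight"
    elif bmi < 30:
        return "overweight"
    elif bmi < 35:
        return "moderate obesity"
    else:
        return "severe obesity"

# Example: 95 kg at 1.75 m gives a BMI of about 31
print(bmi_category(95, 1.75))  # → moderate obesity
```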
To model the projections, the researchers drew from 20 years’ worth of data on a nationally representative sampling of more than six million people.
But unlike previous projections that attempted to predict the trajectory of the obesity epidemic, the new analysis tried to compensate for a weakness in the data gathered—that is, that the weight information was mainly collected in a nationwide telephone survey, and people have a general tendency to under-report their actual weight.
To get a more realistic picture of Americans’ weight and where it’s headed, the researchers essentially adjusted the biased survey data to better match the weight distribution seen in another study where weight and body measurements were collected in a standardized exam procedure. That study was smaller—involving just over 57,000 adults—but was still nationally representative.
The researchers then used the adjusted data to model projections. While the overall picture is not good, projections for certain demographics and states were particularly concerning. For instance, Alabama, Arkansas, Oklahoma, and West Virginia have projected obesity rates nearing 60 percent 10 years from now. And by 2030, severe obesity will be the most common BMI category nationwide among women, black non-Hispanic adults, and adults in low-income households (less than $50,000), the researchers report.
Overall, severe obesity will be more common in 2030 than obesity was in the 1990s.
The data is a signal that doctors need to be better at addressing and treating obesity, which carries a slew of health risks. Those include increased risks of type 2 diabetes, stroke, coronary heart disease, osteoarthritis, and some cancers, as well as higher all-cause mortality.
“In addition to the profound health effects,” the researchers write, “…the effect of weight stigma may have far-reaching implications for socioeconomic disparities as severe obesity becomes the most common BMI category among low-income adults in nearly every state.”
Given how difficult it can be to lose weight, the researchers say more emphasis should be placed on preventing weight gain to begin with.
One of the annoying things that happens when you track developing science is that you keep seeing interesting results on a topic, but none of them quite reaches the significance to justify a news story. This week, another paper that fit the description came to my attention. Again, these particular results weren’t especially exciting, but I’ve decided it gives me an excuse to introduce you to an interesting and potentially significant area of chemistry.
The area of research that keeps grabbing my attention is a fusion of photovoltaic technology and biochemistry. Photovoltaics are useful because they provide a way to conveniently liberate some electrons. And a lot of enzymes work because they do interesting things with electrons they obtain from other molecules. So, in theory, it should be possible to use a photovoltaic device to supply an enzyme with the electrons it needs to catalyze useful reactions. And, in many cases, reality matches up nicely with theory.
The new paper focuses on using photovoltaic nanoparticles to drive an enzyme that uses carbon dioxide, incorporating it into a larger molecule. But the researchers behind it also discover the process doesn’t work especially efficiently, and they make some progress toward figuring out why.
All chemical reactions involve rearranging the locations of electrons; electron transfer reactions, however, specifically involve changing the charge states of molecules, oxidizing some and reducing others. While enzymes catalyze a lot of bond rearranging reactions, many of the reactions central to metabolism involve electron transfers. There are entire electron transfer chains involved in breaking down sugars and others that are central to photosynthesis.
(The electrons involved in these reactions often come from metals, which is why so many of these enzymes have iron or some other metal incorporated as a co-factor.)
Many of these enzymes do things that we would be very interested in doing on an industrial scale. They form interesting molecules, make key intermediates in drugs, turn nitrogen gas into fertilizer, or even potentially pull carbon dioxide from the air, incorporating it into useful chemicals. So, it would be great if we could pull these enzymes out of the complex maze of biochemical pathways they’re embedded in and use them separated from all the complexities of the cell.
Unfortunately, that’s far, far easier said than done. Many of these enzymes take starting materials that only exist transiently within the cell. Others rely on chemicals or co-factors that are hard to produce or expensive; without those, they don’t have any way to get the electrons they need to do anything useful.
Photovoltaics offer an alternative to all this messy biochemistry. If the starting chemicals of a reaction are easy to obtain, a little bit of light and a photovoltaic will supply the electrons that are needed. By using photovoltaic nanoparticles, it’s also possible to tune the energy of the electrons to the needs of the enzyme. And it works. Photovoltaic-driven enzymes have made hydrogen from acidic solutions, converted nitrogen gas to ammonia, and pulled one of the oxygens off carbon dioxide.
While these were valuable demonstrations, we haven’t learned a lot about what’s going on at a biochemical level when an enzyme interacts with a photovoltaic material. As a result, it’s hard to know in advance which enzymes will play nice with them. And, if the process isn’t working as well as we’d like, there’s no obvious way to improve things.
In February 2018, an accident at a gas well in Ohio, near the West Virginia border, didn’t make as much national news as it should have. An explosion at the well caused a blowout, with billowing black smoke and gushing natural gas spewing into the air. It didn’t generate as much interest as the three-and-a-half-month-long leak from a California underground gas storage facility in 2015, but a new study published this week shows it was almost as bad.
The study, led by Sudhanshu Pandey at the SRON Netherlands Institute for Space Research, utilized measurements from ESA’s new Sentinel-5P satellite. Although the satellite wasn’t quite officially operational at the time of the accident, the researchers were able to grab data. Unfortunately, although the leak went on for 20 days, cloud cover made the data unusable on all but two of those days.
The data, however, is quite good, as this satellite can deliver methane measurements at much higher resolution (about 7 kilometers) than others. The researchers were able to compare against days before the leak and also to compare the levels from upwind and downwind of the leaking well. They also used a simulation including the weather conditions on those days, calculating what the methane plume would look like for different rates of release.
They estimated the rate of methane release at 120 (±32) metric tons per hour. If you don’t know gas well methane leakage rates off the top of your head, that’s a lot. Estimated leakage rates from entire natural gas fields in the US—like the Haynesville Shale straddling the Texas/Louisiana border or the Uinta Basin in northeastern Utah—come in lower than that.
To calculate a total for the entire 20-day release, the researchers use that as their average. That probably underestimates it, though, as the measurement comes two weeks in. You would expect the emissions to start higher and drop as the pressure declines. Using these numbers, the total release is 60,000 (±15,000) tons. The 2015 accident in California—the second biggest recorded methane leak ever in the US—released about 97,000 tons.
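The back-of-the-envelope total falls straight out of the measured rate and the leak's duration. A minimal sketch reproducing the paper's rounding (variable names are mine):

```python
rate_t_per_hr = 120      # estimated release rate, metric tons per hour
uncertainty = 32         # ± on that rate, metric tons per hour
hours = 20 * 24          # 20-day duration of the leak

total = rate_t_per_hr * hours   # 57,600 t, reported as ~60,000 t
total_err = uncertainty * hours # 15,360 t, reported as ~±15,000 t
print(total, total_err)  # → 57600 15360
```

This assumes the single measured rate holds for the whole event, which, as noted above, likely undercounts the early, higher-pressure phase of the leak.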
Or, for another kind of context, the researchers point out that this Ohio well released more methane in 20 days than the oil and gas activities in most nations around Europe do in an entire year—save only the UK, Germany, and Italy.
The Ohio incident highlights an important fact about methane leakage from the oil and gas industry: it is dominated by a small number of malfunctioning sites often termed “superemitters.” That has made it challenging to accurately calculate the total leakage from the industry. (And to compare the climate impact of natural gas vs. coal power plants.)
Some researchers have tried to visit many sites, measuring leakage rates around each type of equipment to estimate average behavior. Other teams have tried a less targeted approach, flying overhead in aircraft and attempting to measure the overall behavior (and separate it from other sources of methane). That has generally led to larger estimates of leakage than the on-the-ground approach, in part because it’s easier to catch the superemitters.
The researchers say this case illustrates the value of the new Sentinel-5P satellite. Rather than having to know something is malfunctioning, or hoping that the occasional measurement catches a representative day, the satellite may reveal superemitters simply because it’s always watching.
Skulls from central Java may come from the last surviving population of Homo erectus, suggests a new study dating the fossil bed and the surrounding landscape. The population’s disappearance roughly coincides with dramatic changes in the environment, which may have driven the species to extinction long before the first Homo sapiens reached Southeast Asia.
The “last stand” of Homo erectus?
University of Iowa anthropologist Russell Ciochon and his colleagues dated fossils and sediment layers from a site called Ngandong in a naturally terraced valley carved out of the surrounding hills by the Solo River. In the 1930s, archaeologists unearthed the tops of a dozen Homo erectus skulls, along with two tibiae (shin bones). These fossils seem to be different from older Homo erectus fossils in some important ways, like much larger cranial capacity (which suggests bigger brains) and higher foreheads.
“Ngandong Homo erectus has the largest brain size and highest foreheads of any known Homo erectus,” Macquarie University geochronologist Kira Westaway, a co-author of the study, told Ars. “This indicated an important evolutionary change. The timing of this change is crucial to our interpretation and understanding of our distant cousins.”
In a bid to make sense of it all, Ciochon and his colleagues used uranium-series dating on some newly excavated mammal fossils from the same layer as the Homo erectus skulls. To piece together the whole area’s geological history and see how it might relate to the Homo erectus fossils, they also used other dating methods on sediment and rocks from Ngandong and other sites in the river valley. The results suggest that the bone bed (and therefore the collection of Homo erectus fossils) is between 117,000 and 108,000 years old.
That makes Ngandong the last-known trace of Homo erectus in the world.
“There is always a possibility that someone will find new Homo erectus evidence that is younger and therefore that becomes the last appearance—but this is science! At present, we make an interpretation based on the evidence that we have, and this is that Ngandong represents the last appearance of Homo erectus,” Westaway told Ars.
Of course, that doesn’t mean these were definitely the last of their kind in the world. The fossil record is patchy and imperfect, and we haven’t actually found all of it yet. “Our work provides the age of the last-known appearance of Homo erectus, but this does not mean that it is the age of extinction,” Ciochon told Ars. “Small groups of Homo erectus may have lived longer without leaving fossil evidence. We know that there are no living Homo erectus, but it is difficult to prove when the extinction event happened.”
Close encounters of the hominin kind
The Ngandong dates also strongly suggest that Homo erectus may have gone extinct, at least in Indonesia, long before our species made it that far. The most adventurous Homo sapiens explorers were probably somewhere around the Levant or the Arabian Peninsula at the time and didn’t make it to the islands of Southeast Asia until around 73,000 years ago.
On the other hand, Ciochon and his colleagues’ timing leaves plenty of room for Homo erectus to have encountered Denisovans. That would help explain why the Denisovan genome contains a tiny fraction of genetic material from a much older species (just like many modern people’s genomes contain fragments of Neanderthal and Denisovan DNA).
“This older species is likely Homo erectus,” Ciochon told Ars. “There is considerable speculation about where and when the Denisovans meet Homo erectus and what the results of those interactions were. Our dates support the genetic evidence that Homo erectus could have interbred with the Denisovans.”
But it’s too soon to say for sure. “This is yet to be proven, but establishing a solid chronology for Homo erectus is the first step in this investigation,” Westaway told Ars. “The possibility of intermixing with Denisovans is an exciting prospect well worth exploring.”
Listing image by Rizal et al. 2019
Nearly a decade has passed since NASA first awarded funds to Boeing for the design of a crewed spaceflight capsule, and on Friday we should finally see the Starliner vehicle take flight for its first orbital test.
Officials from Boeing, United Launch Alliance, and NASA all said Tuesday that the Starliner capsule and its Atlas V rocket are ready for launch, scheduled for 6:36am ET (11:36 UTC) Friday from Space Launch Complex 41 in Florida. According to meteorologists, there is an 80 percent chance of favorable launch weather.
“When you see an integrated vehicle like this on the pad, it all becomes real,” said Kathy Lueders, NASA’s manager for the commercial crew program for much of the last decade.
This orbital test flight—if Starliner launches on Friday, it will dock with the International Space Station on Saturday and land back on Earth on December 28—is essentially a shakedown test for the vehicle before astronauts fly inside it. A mannequin nicknamed “Rosie” will ride along to gather in situ data on what astronauts will be exposed to.
“This is the culmination of years of hard work, and this is really setting up to be an incredible week,” said John Mulholland, who manages the Starliner program for Boeing.
The capsule will launch on an Atlas V rocket, which is among the most trusted boosters in the world. This configuration of the Atlas rocket will use two side-mounted boosters, and for the first time the Atlas’ Centaur upper stage will employ two RL-10 rocket engines rather than one. This will provide a smoother, flatter launch trajectory and subject Starliner to lower gravitational forces, Mulholland said.
Sets up a big 2020
Officials were careful during a news conference on Tuesday not to set a date for a crewed flight test, in which NASA astronauts Michael Fincke and Nicole Mann will fly with Boeing astronaut Chris Ferguson to the station. That will depend upon the success of Friday’s test flight but, if all goes well, should happen during the first half of 2020, Mulholland said.
After funding several development efforts in the early 2010s, NASA down-selected to Boeing and SpaceX in 2014 to finalize design and development of their commercial crew vehicles. All told, the agency has paid Boeing $4.8 billion, and SpaceX $3.1 billion for Starliner and the Crew Dragon spacecraft, respectively.
In March, the Dragon vehicle successfully performed an uncrewed flight test to the space station, but the company had a significant setback in April when the same vehicle exploded during a static fire test of its launch escape system. SpaceX and NASA have since diagnosed and fixed that problem, and a critical in-flight abort test is likely to occur in January.
That test, and Boeing’s uncrewed flight test, represent the final stages of the commercial crew development program. Sometime in 2020, nine years after the final space shuttle mission from Florida, humans should again launch into orbit from the United States.
Listing image by Trevor Mahlmann
Charles Babbage is widely recognized as a pioneer of the programmable computer due to his ingenious designs for steam-driven calculating machines in the 19th century. But Babbage drew inspiration from a number of earlier inventions, including a device invented in 1804 by French weaver and merchant Joseph Marie Jacquard. The device attached to a weaving loom and used punched cards to “program” intricate patterns in the woven fabric. One of these devices, circa 1850, just sold for $43,750 at Sotheby’s annual History of Science and Technology auction.
“Technically, the term ‘Jacquard loom’ is a misnomer,” said Cassandra Hatton, a senior specialist with Sotheby’s. “There’s no such thing as a Jacquard loom—there’s a Jacquard mechanism that hooks onto a loom.” It’s sometimes called a Jacquard-driven loom for that reason.
There were a handful of earlier attempts to automate the weaving process, most notably Basile Bouchon’s 1725 invention of a loom attachment using a broad strip of punched paper and a row of hooks to manipulate the threads. Jacquard brought his own innovations to the concept.
Per Sotheby’s auction website:
Jacquard… conceived of developing a semi-automatic tone-selection device, which would be integrated onto the loom, resulting in quicker production and more intricate patterns. Jacquard’s punch-card system worked much in the same way as a fax machine: each punch in the card directed a black or a white thread into the headstock of the loom, pinpointing the desired thread into place.
The invention was not popular with loom operators, many of whom lost their jobs and took to smashing Jacquard looms in protest.
The mechanism just sold at auction belonged to one of Hatton’s former clients, now deceased. The collector had been trying to establish a museum on the history of computing, grounding his vision in the early attempts to mechanize computation by Jacquard, Babbage, and others. The loom “was his pride and joy,” said Hatton. “He bought the original mechanism and then commissioned somebody to make the loom and a second [mechanism] so he could use it. So the loom is fully operational.”
The piece comes with a large box of accessories, including the original 19th-century punch cards—everything one would need to operate the machine.
Jacquard’s machine was capable of programming patterns with such precision, it was even used in 1886 to create a prayer book woven entirely in silk, using black and gray thread. Only around 50 copies were made. The Sotheby’s auction didn’t offer any rare prayer books this year, but there was a portrait of Jacquard—woven in silk on a Jacquard loom—with a mini-loom and punch card on his desk, as well as another woven silk portrait showing several men in front of a loom holding the aforementioned portrait (so very meta). Those portraits sold for $10,625 and $11,875, respectively.
As the epidemic of opioid abuse and overdoses ravaged the United States—claiming hundreds of thousands of lives—the Sackler family withdrew more than $10 billion from its company, OxyContin-maker Purdue Pharma. That’s according to a new 350-page audit commissioned by Purdue as part of the company’s Chapter 11 bankruptcy restructuring.
The revelation is likely to fuel arguments from some states that say the Sacklers should offer up more cash to settle the more than 2,800 lawsuits accusing them and Purdue of helping to spark the opioid crisis. The plaintiffs in those cases—mostly states and local governments—collectively allege that Purdue and the Sacklers used aggressive and misleading marketing to push their highly addictive painkillers onto doctors and patients.
In a proposed $10-$12 billion settlement, the family has offered at least $3 billion of its own fortune. The family also said it would give up ownership of Purdue, which will transform itself into a public-benefit trust.
While some of the states have tentatively agreed to the deal, 24 states say that the offer isn’t good enough—and that the information in the new audit proves it.
New York Attorney General Letitia James said in a statement:
The fact that the Sackler family removed more than $10 billion when Purdue’s OxyContin was directly causing countless addictions, hundreds of thousands of deaths, and tearing apart millions of families is further reason that we must see detailed financial records showing how much the Sacklers profited from the nation’s deadly opioid epidemic… We need full transparency into their total assets and must know whether they sheltered them in an effort to protect against creditors and victims.
The audit doesn’t reveal the family’s total worth or where all the Purdue money ended up. But what it does reveal doesn’t paint a flattering picture of the Sacklers.
Follow the money
Most strikingly, the audit shows that the family dramatically ramped up its withdrawals as the opioid epidemic raged. Between 1995 and 2007, the family withdrew just $1.3 billion from Purdue. But from 2008 to 2017, the family’s withdrawals totaled $10.7 billion, peaking at $1.7 billion in 2009, as The New York Times points out.
In 2007, Purdue and three of its executives pleaded guilty in federal court to misleading regulators, doctors, and patients about the addictiveness and abuse-potential of OxyContin.
The boost in withdrawals and its timing appear to support the argument from some states that the Sacklers were trying to shield OxyContin profits from the avalanche of litigation they saw coming. As NPR notes, a briefing filed in court and signed by 25 attorneys general read:
The Sacklers knew that the profits were not safe inside Purdue. Richard Sackler warned, in a confidential memo, that the company posed a “dangerous concentration of risk.” Purdue’s CFO stated that a single lawsuit by a state attorney general could “jeopardize Purdue’s long-term viability.” So the Sacklers pulled the money out of the company and took it for themselves. The Sacklers have directed Purdue to pay their family as much as $13 billion.
Where the money ended up is still unclear. As the NYT points out, money was often directed to trusts in places considered tax havens, such as Luxembourg or the British Virgin Islands. The audit also outlined the complex way in which the family sometimes moved money around. In some cases, Purdue money moved through a series of companies, including Rosebay Medical and Beacon Co., holding companies controlled by the family. In 2017, a set of a dozen transactions moved money through several companies before finally directing it to a Japanese division of Mundipharma, the Sacklers’ global pharmaceutical company. The audit does not provide an explanation for the transfers.
The rapidly increasing number of intermittent power sources on the electrical grid raises questions about how best to make sure a variable supply gets power to consumers when they need it. One idea that has been suggested is what’s called “demand-response pricing”—charging more for electricity when demand is high in order to shift some of it to other times. If people or companies are made aware of the high price, they can choose to forgo power-intensive activities. Or, with smart appliances, they can even set them to automatically avoid high-price periods.
In theory, if enough people turn down their air conditioners or postpone their laundry, demand can drop enough so that a variable supply can meet it.
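To make the arithmetic concrete, here’s a minimal sketch of how a flat rate compares to a time-of-use rate, and why shifting usage matters. All rates, peak hours, and the usage profile below are hypothetical, chosen for illustration; they are not the unnamed utility’s actual tariff.

```python
# Toy comparison of a flat rate vs. a time-of-use (demand-response) rate.
# Every number here is hypothetical, for illustration only.

FLAT_RATE = 0.13  # $/kWh, applied to all hours


def tou_rate(hour):
    """Hypothetical time-of-use tariff: pricier during the evening peak."""
    if 16 <= hour < 20:  # 4pm-8pm, when everyone gets home and runs the AC
        return 0.35
    return 0.08          # off-peak


def bill(usage_by_hour, rate_fn):
    """usage_by_hour: 24 hourly kWh values; rate_fn maps hour -> $/kWh."""
    return sum(kwh * rate_fn(h) for h, kwh in enumerate(usage_by_hour))


# A household using 1 kWh every hour, but 3 kWh during the evening peak.
usage = [1.0] * 24
for h in range(16, 20):
    usage[h] = 3.0

flat = bill(usage, lambda h: FLAT_RATE)
tou = bill(usage, tou_rate)

# Shifting 2 kWh of each peak hour to late night cuts the TOU bill sharply.
shifted = list(usage)
for h in range(16, 20):
    shifted[h] -= 2.0
    shifted[(h + 8) % 24] += 2.0
tou_shifted = bill(shifted, tou_rate)

print(f"flat: ${flat:.2f}  TOU: ${tou:.2f}  TOU after shifting: ${tou_shifted:.2f}")
```

Note the pattern: a household that can’t (or doesn’t) shift its usage pays more under the time-of-use rate than the flat rate, while a household that shifts pays less, which is exactly the divide between flexible and inflexible customers that the study examined.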
Right now, our grid isn’t set up for variable pricing; consumers pay a flat rate regardless of how plentiful or scarce the electrons are. But there are plans to make variable pricing a regular feature in some areas, and a few test cases have already been run. Researchers who tracked one of those tests found what may be a significant problem: the most vulnerable populations, like the elderly and poor, tended to end up paying more and having increased health problems.
The motivating principle behind demand-response pricing is that demand is flexible. People can put off running their clothes dryer, or put up with a few hours of somewhat warmer temperatures. But that’s actually not true for everyone. The elderly and disabled, for example, may not tolerate even a few hours of elevated temperatures or could have medical equipment that simply can’t be shut down. This could also hit the poor harder, as they tend to live in housing with less efficient appliances and poor insulation.
To find out whether there was any evidence that this sort of uneven impact was taking place, researchers Lee White and Nicole Sintov tracked a trial run of demand-response pricing. The trial took place in an unnamed utility in the US Southwest during summer. Because of the heat, there tends to be a spike in demand as people get home from work and turn on the air conditioning. To lower this demand, the utility raised the price of electricity used during this peak, offering two different plans. One simply had elevated prices for the whole period where demand was elevated; the second used even higher prices but limited them to a shorter period at the time of highest demand.
To figure out how this affected vulnerable groups, the researchers got thousands of the participants to fill out surveys regarding their experience. That may be a source of bias in the results, as the authors note this selects for people who have the time to handle the surveys, which may affect the most vulnerable more severely. In addition, things like medical problems and experiencing uncomfortable heat were determined by survey responses. Thus, these answers could be affected by subjective experience or poor recollection.
Those limitations aside, the high survey response rate and large population provide a bit of confidence in the data.
Hitting where it hurts
The researchers used the survey responses to identify a number of potentially vulnerable groups: those with children, the poor, the elderly, and the disabled. They then used utility data to find out what they ended up paying and survey data to determine whether they had health issues during the time the demand-based pricing scheme was being trialed. The data allowed them to analyze each group individually, comparing it both to customers using the flat-rate pricing and to the other customers in the trial.
One of the key conclusions of the trial was that everyone who tried demand-based pricing ended up paying more. This isn’t a problem with the concept but, rather, is an indication that the utility didn’t set up a consumer-friendly pricing structure for this trial. That’s something that utilities may want to be cautious about if they don’t want these schemes to pick up a bad reputation ahead of wider rollouts.
But, by comparing groups of people who were all on the scheme, it was possible to identify those who ended up paying more than others with the same pricing setup.
The results confirmed some of the worries. The elderly and disabled ended up paying more than others under the same pricing scheme. (Households with children saw no significant change.) On the plus side, the lower-income households managed to use the policy to cut down on their costs relative to others on a similar pricing setup. Not surprisingly, however, they reported increased discomfort during the period of the test, presumably because they ran the air conditioning less often.
When it comes to any physical consequences of that sort of change, the results were mixed. Those with low incomes or disabilities reported seeking medical attention for heat-related issues more often than others using demand-based pricing, but households with children reported needing it less often.
Notably, there were also subtle differences between the two pricing plans. The authors interpreted this as suggesting that a carefully designed rate might minimize the impacts on vulnerable populations or direct the impacts toward the groups that can most afford them. In either case, the data suggests we need to be cautious about rolling out this approach to handling mismatches between supply and demand.
A small but vociferous pack of anti-vaccination protesters deterred New Jersey lawmakers on Monday from voting on a bill to ban vaccination exemptions based on religious grounds.
The vote is now postponed. The bill’s sponsors plan to whip up support and another vote by the end of the legislative session on January 14, 2020.
The bill, S2173, would ban religious vaccination exemptions for kids attending all schools in the state—preschool through college, public and private—as well as those attending childcare centers. Under the proposed law, the only children allowed to be exempted from vaccination requirements are those who have medical grounds that are consistent with those laid out by the Centers for Disease Control and Prevention.
Federal medical exemption guidelines apply to children who have documented and severe reactions to vaccinations or children with medical conditions or treatments that compromise their immune systems, such as cancer patients receiving chemotherapy, children with congenital immunodeficiencies, or those immunocompromised by an HIV infection.
The strict vaccination requirements proposed come in the wake of massive outbreaks of measles this year that threatened the country’s status of having eliminated the highly contagious, vaccine-preventable disease.
The ban also follows a rise in religious exemptions in the state. In the 2018-2019 school year, 2.6 percent of New Jersey students held religious exemptions, up from 1.7 percent in the 2013-2014 school year, according to state records.
If the ban passes, New Jersey will join California, Maine, Mississippi, New York, and West Virginia in having similar bans on religious-based exemptions.
S2173 sailed through the state Assembly Monday. After about 20 minutes of debate, it passed in a 45-25 vote with six abstentions. But a raucous protest from anti-vaccine advocates seemed to derail the scheduled vote in the state Senate.
Trusting the Internet
Roughly two dozen incensed advocates stationed themselves just outside the Senate chamber doors, according to Politico. They taunted and hurled religious incantations at anyone entering or exiting. Some of their chants targeted lawmakers considered to be swing votes on the bill, such as Sen. Joe Lagana (D-Bergen). Protesters shouted “La-ga-na” through the Senate door.
Overall, the clamor was loud enough to drown out actions on the Senate floor.
When lawmakers announced just after 8pm that the vote would be postponed, the protesters cheered. Among them was anti-vaccine advocate Sue Collins, who told The New York Times that the postponed vote was a “victory.”
“The Legislature stood with us,” she said.
Senate President Stephen Sweeney strongly disagreed. “They can cheer all they want. We’re not walking away from it,” he told reporters afterward. Sweeney explained that they delayed the vote because there was “just some people changing back and forth” throughout the day, according to Politico. “We expected to pass the bill and we will pass this bill.”
He added to reporters that “It’s just remarkable how people are looking at this and not trusting the science on it at all. They’re trusting the Internet.”
Some Republican lawmakers opposed to the ban argue that it takes away people’s rights. According to an earlier report on the bill from NJ Advance Media, Sen. Gerald Cardinale (R-Bergen) said that “even though I would make a different choice from the [anti-vaccine] people in this room, it’s their right to be wrong. It’s their own right to follow their conscience.”
However, such an argument neglects the public health aspects of population-wide immunization efforts, proponents of the measure say. Vaccinations protect those immunized from dangerous and life-threatening diseases and, via herd immunity, indirectly shield the vulnerable who are too young or medically unable to be vaccinated.
“There is no exemption for drunk driving or wearing a seat belt, there should not be an exemption from a patently safe vaccine that, if not taken, puts the health and well-being of our children at risk,” Senate Majority Leader Loretta Weinberg, the bill’s co-sponsor, told the Associated Press.
Eukaryotes are the category of organisms that includes us; our DNA is partitioned into a nucleus instead of just hanging out loose with other cellular components.
Eukaryotes are thought to have first evolved when a host cell swallowed up a prokaryote, or bacterium. This bacterium paid for its new safe home by providing energy to the cell that engulfed it, eventually persisting in the form of the mitochondria. But the identity of the original host cell is still in dispute. Conventional wisdom held that it was a sort of proto-eukaryote, but what it looked like and how it initially subsumed a bacterial cell was never worked out.
Then, in the 1970s, archaea were discovered. These organisms are single-celled and lack nuclei, like prokaryotes, but their cell membranes and the way they make proteins from DNA are similar to eukaryotes. They are dissimilar enough to both prokaryotes and eukaryotes that they became their own third domain on the tree of life. And they became contenders for the role of eukaryotic ancestor—maybe the cell that initially swallowed a bacterium was an archaeon.
Researchers were able to home in on a likely candidate for this proto-eukaryote with the advent of metagenomics, the ability to sequence the DNA of species that cannot be grown in the lab. In 2015, they found a species of archaea that had all the requisite qualifications at a site called Loki’s Castle, a hydrothermal vent under the Arctic. Researchers duly named this organism Lokiarchaea. (Mythology thus informed not only the nomenclature of geologic features on the ocean floor, but of microbes as well.)
The identification of Lokiarchaea led scientists to related species that they inevitably called Thor-, Odin-, and Heimdallarchaeota. The whole group is obviously known as the Asgard superphylum.
But the three-domain tree of life taxonomy—like all other taxonomies—depends on the species used to build the tree, the genetic sequences chosen from those species, and the methods used to compare those sequences to each other. New thinking in the field is that the three-domain model was made using a highly limited dataset—36 genes from 104 taxa—and is too simplistic. A two-domain model, in which eukaryotes are a branch of archaea, fits the data much better.
The 3-D camp counters that the genetic similarities between archaea and eukaryotes, upon which much of the 2-D argument relies, are due to contamination of archaea by eukaryotes and that only fast-evolving archaea were included in the analysis, thus skewing the results.
So now a group of evolutionary biologists has undertaken a larger analysis, including over 3,000 gene families from 125 species analyzed using three different methods. Unlike the earlier study, this one incorporated many uncultivated microbes.
Regardless of which genomic data was used and how it was sliced and diced, the team concluded that a two-domain tree fits better. Eukaryotes seem to have arisen from the Asgard archaea branch, which carries genes once considered uniquely eukaryotic, not because its samples were contaminated by eukaryotes but because those genes are the ancestral versions of the ones eukaryotes now have.
The reality television game show Biggest Loser is getting a reboot and a revamp after numerous scientific studies and analyses concluded that the extreme weight-loss contest is harmful to both contestants and society at large. But the changes are unlikely to address the bulk of the criticism.
In an interview with People magazine this month, Biggest Loser host Bob Harper said that the new version is “not about getting skinny, it’s about getting healthy.”
Harper, who is also a personal trainer, told the magazine that “the whole look of the show is going to be so different,” and that it now takes a “whole-body approach” to weight loss.
“We’re looking at changing the way that [the contestants] eat, the way that they think and how they move their body,” Harper said. The show will also emphasize “the importance of managing their stress and how important sleep is when it comes to weight loss.” He added that the show will feature contestants “getting off medication, reversing their type 2 diabetes, lowering their blood pressure.”
The reboot will be out early next year on the USA Network. It previously had a 17-season, 13-year run on NBC, which ended in 2016.
Some of the new messaging and language is likely to draw praise from health experts and researchers, particularly the focus on being “healthy” rather than “skinny.” There are certainly many unhealthy ways to lose weight—which, according to some experts, includes some of the methods used on the show.
Health experts might also appreciate the more holistic approach that the show is taking. Obesity can have many contributing lifestyle and genetic factors.
That said, much of the criticism of the game show will likely stand. It still has contestants trying to lose large amounts of weight quickly, and it still uses competitive weigh-ins, placing emphasis on weight rather than health and potentially encouraging contestants to use dangerous methods to win, such as disordered eating.
Studies of past contestants have indicated that the extreme, rapid weight loss on the show—in some cases hundreds of pounds over the course of just months—can dramatically slow metabolic rates and make weight gain incredibly hard to avoid. Many gain the weight back and more in the months and years that follow. In a 2016 study, 14 former contestants saw a large drop in their resting metabolic rate that lasted at least six years. (Resting metabolic rate is the rate at which the body burns calories while resting. This generally makes up the bulk of calorie burning, exceeding calories burned from breaking down food and exercising.)
At the start of the show, the contestants’ mean resting metabolic rate was 2,607 ± 649 kilocalories per day. By the end, that mean fell to 1,996 ± 358 kcal per day and, six years later, was still 1,903 ± 466 kcal per day. Based on their individual weights, the researchers estimate that the contestants were burning an average of about 500 fewer kilocalories a day than would be expected of people their size.
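Putting those reported means in relative terms is a one-line calculation:

```python
# Mean resting metabolic rates reported in the 2016 follow-up study (kcal/day).
baseline, end_of_show, six_years_later = 2607, 1996, 1903

drop = baseline - six_years_later
pct = 100 * drop / baseline
print(f"{drop} kcal/day below baseline after six years, a {pct:.0f}% drop")
```

In other words, six years on, the former contestants’ bodies were still burning roughly a quarter less energy at rest than when they started the show.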
Another six-year follow-up study by the same researchers found that former contestants had to take on more than triple the amount of recommended exercise to keep from packing the pounds back on. Generally, the finding backs up other, larger studies suggesting that exercise may be key to maintaining weight loss (though not typically at such a high level).
Altogether, the findings suggest that the Biggest Loser contestants were unable to maintain their losses and may have been worse off after the show, in terms of metabolic rate.
In an interview with The Washington Post, Cynthia Thomson, a health promotion sciences professor at the University of Arizona, noted that these undesirable outcomes were unlikely to be lessened by the show’s new “whole-body approach.”
“When you take people who really have quite significant metabolic dysfunction and body size and you do this rapid weight loss, I don’t care if you help them with sleep or you give them a class on stress or teach them how to breathe and relax,” Thomson said. “It’s just not going to be enough if you have put them through this 100-pound weight loss in a very short time period.”
The revamp may not improve things for viewers, either. Several research groups have suggested that the show’s depictions of obese contestants taking on grueling exercise regimens and extremely restrictive diets to quickly lose large amounts of weight end up reinforcing in viewers the stigmatizing idea that weight gain is based on a lack of will-power. A 2012 study found that viewers had significantly higher levels of dislike of overweight people after watching just 40 minutes of the show. Likewise, a 2016 analysis argued that the show undermines public health efforts to combat obesity by sending the message that being obese is a personal failing and potentially ostracizes viewers interested in losing weight.
In the US, nearly 40 percent of adults and 18.5 percent of children and teens (aged 2 to 19) are obese. Experts recommend that people attempting to lose weight do so at a safe, realistic pace by adopting a sustainable diet and tangible, but attainable, exercise goals. And, rather than focusing on achieving some ideal weight, people trying to lose should focus on making changes that improve health, such as lowering cholesterol and blood pressure. Even modest weight loss of five percent of your starting weight can have health benefits, according to the Centers for Disease Control and Prevention.
There’s a rare human trait that doesn’t often make it into debates about what makes our species unique: menopause. Humans are among just a handful of species where females stop reproducing decades before the end of their lifespan. In evolutionary terms, menopause is intriguing: how could it be advantageous for reproductive ability to end before an individual’s life is over?
One possible answer: the power of the grandma’s guidance and aid to her grandchildren. A paper in PNAS this week reports evidence that supports this explanation, showing that killer whale grandmas who have stopped reproducing do a better job of helping their grandchildren to survive than grandmothers who are still having babies of their own.
It’s not all about the babies
The engine of evolution is offspring. In simple terms, the more babies you have that survive, the more your genes are passed on, and the better the chance of the long-term survival of those genes.
But there are other ways to improve the long-term survival of your genes, and that’s where evolution gets a little bit more complicated than just brute-force reproduction. If you invest in your siblings’ children, or your children’s children, you also improve the survival of the genes you share with them. Like every other survival problem that a species must overcome—food, safety, finding a mate—the dynamics of natural selection generate different solutions to the question of how to propagate your genes.
The “grandmother hypothesis” suggests that grandmas play a crucial role in the survival of their grandchildren, which obviously gives the grandmas’ own genes a boost. But that doesn’t explain why humans—along with killer whales, short-finned pilot whales, belugas, and narwhals—stop reproducing with decades left to live. Wouldn’t it be better to just keep having babies of your own and help your grandchildren? Possibly not: in certain species, with certain family dynamics, evolutionary models show that it’s more worthwhile for grandmas to invest all their resources in their grandchildren, rather than compete with their own daughters.
There’s evidence from humans to support this: the grandchildren of post-reproductive grandmas get a survival boost. But there hasn’t been any direct evidence of a post-reproductive grandma benefit in other species that have menopause—like killer whales. Similarly to humans, female killer whales stop reproducing around their late 30s or early 40s but can continue to live for decades after that point. Do killer whales also give their grandkids a boost?
Grandma has tricks up her sleeve
Like humans, killer whales live in intensely social family groups. Also like humans, young killer whales need help finding food even after they’ve been weaned. This means an important role for grandmothers, who can share food with their grandchildren and also impart their decades of accumulated experience and wisdom by guiding their families to historically successful feeding spots.
To test whether post-reproductive killer whale grandmas improve the survival of their offspring, a group of researchers collected data on killer whale populations off the coast of Washington state and British Columbia. They tracked the interactions between hundreds of individual whales, recording births and deaths and controlling for the all-important environmental factor of salmon abundance.
Just like humans, whales can become grandmothers while they’re still having babies themselves. Because they were interested in the effects of menopause, the researchers wanted to compare the effects of grandma whales that had stopped reproducing to those that were still having their own offspring.
The results showed that grandma whales played a significant role in the survival of their grandchildren. Survival rates dropped sharply for whales that had recently lost a grandmother—even adult whales of 15 or 20 years old. And this effect was more marked when the grandmother was no longer reproducing herself. It was also more extreme when salmon abundance was lower, suggesting that the ecological knowledge of grandmother killer whales is a crucial resource for their families.
This result ties in well with previous evidence on menopause in killer whales, which found that menopause meant a reduction in competition for resources between grandmas and their daughters.
Grandmothers with their own calves, the researchers suggest, might be more constrained in how much they’re able to offer leadership to their family groups. And when they’re lactating, they will simply need more food for themselves, reducing how much they can share with their grandchildren. Behavioral studies could help to figure out precisely how interactions change between breeding and post-reproductive grandmas.
With salmon populations declining, the researchers write, the role of grandmothers may become increasingly important in killer whale family groups. It’s a neat example of a delicate ecological balance that we’re only just starting to understand.
By some measures, SpaceX has had a relatively sedate 2019. After all, the company has launched a mere dozen rockets so far this year, in comparison to a record-setting 2018, with 22 overall missions. It should add one more flight to that tally on Monday, with the launch of a large, 6.8-ton communications satellite from Cape Canaveral Air Force Station (see details below).
However, the lower launch cadence masks a year in which SpaceX has made considerable technical progress toward some of its biggest goals—an optimized Falcon 9, satellite Internet, and total launch reusability.
SpaceX founder Elon Musk has long talked about rapid, reusable launch, and in 2019 he continued to make strides toward this vision. The Falcon 9 may have flown less in 2019 due to its lightened manifest, but individual boosters flew more.
During a Starlink satellite launch in November, a Falcon 9 first stage flew for the fourth time. That was the first time the same core, and its nine Merlin engines, had flown four missions, and it suggests that SpaceX will make good on its goal of eventually flying Falcon 9 first stages 10 times before major refurbishment or retirement.
Trevor Mahlmann for Ars
SpaceX did abandon its efforts to make the whole of the Falcon 9 reusable—it hopes to solve second stage recovery with its Starship project—but the company did have a major breakthrough with the payload fairings of its boosters this year. The company recovered a fairing for the first time in June, and reused one for the first time in November.
Big things are ahead for the Falcon 9 rocket in 2020, as it is likely to launch its first crewed mission—with Doug Hurley and Bob Behnken inside a Dragon spacecraft bound for the International Space Station.
SpaceX also got the jump on its broadband constellation competitors in 2019, with the launch of 120 Starlink Internet satellites over the course of two missions. These flights represented the vanguard of a fleet of more than 10,000 satellites in low-Earth orbit that will provide low-latency connectivity around much of the world and may set the company up to compete with traditional Internet service providers as early as 2020.
The company also stoked controversy among astronomers with its train of satellites, which were clearly visible immediately after launch and as they raised their orbits. However, SpaceX has recognized those concerns, and company officials have said they will experiment with making the underside of the satellites darker so they are less disruptive to nighttime skies.
SpaceX has received the brunt of these criticisms as it is the first of several ventures each planning to launch hundreds to thousands of satellites into low-Earth orbit for broadband Internet service. It remains to be seen whether SpaceX can master the considerable technical challenges of providing seamless Internet from orbit, but the company now has two considerable advantages over its competitors—plenty of operational experience in space from 120 satellites, and a peerless, low-cost, reusable rocket.
It is possible that SpaceX will launch a third batch of 60 Starlink satellites at the very end of this month.
The company also made considerable progress on developing its interplanetary Starship vehicle in 2019. It flew a six-story-tall prototype, Starhopper, twice, with the second flight rising to 150 meters before touching down at its launch site near Boca Chica Beach.
Musk also unveiled a full-scale Starship prototype in September, which the company subsequently lost in November during a pressurization test of the vehicle’s fuel tanks.
However, the actual construction of a Starship upper stage for the Super Heavy rocket provided the company, with its iterative design philosophy, invaluable experience. This should serve it well as SpaceX pushes forward with development of Starship prototypes that will be able to make suborbital and then orbital flights—perhaps as early as 2020.
To make this happen, the company’s engineers had to tackle several huge hurdles in 2019, including finalizing development of, and accelerating production of, the Raptor rocket engine that will power both the Starship and Super Heavy vehicles. According to Musk, the company is ready to ship its 17th Raptor engine to McGregor for testing this week.
Weather conditions appear to be favorable for Monday’s launch attempt of the JCSAT-18/Kacific1 commercial satellite from Space Launch Complex 40 in Florida. The launch window opens at 7:10pm ET (00:10 UTC Tuesday), and closes at 8:38pm ET (01:38 UTC). A backup launch window is available on Tuesday, opening at the same time.
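For readers juggling time zones, the ET-to-UTC conversion is easy to check with Python’s standard zoneinfo module. The calendar date below (December 16, 2019) is an assumption for the example; the times are the ones stated above.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Window open: 7:10pm Eastern. In mid-December, Eastern time is EST (UTC-5),
# so the UTC equivalent lands after midnight on the following day.
eastern = ZoneInfo("America/New_York")
window_open = datetime(2019, 12, 16, 19, 10, tzinfo=eastern)
window_open_utc = window_open.astimezone(timezone.utc)
print(window_open_utc.isoformat())  # 2019-12-17T00:10:00+00:00
```

The same conversion applied to the 8:38pm ET close gives 01:38 UTC, matching the window stated above.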
This particular first stage of the Falcon 9 rocket has flown twice, in May and July of 2019, on supply missions to the International Space Station. SpaceX will attempt to recover the stage after the launch of the heavy, geostationary-orbit-bound satellite on the Of Course I Still Love You droneship in the Atlantic Ocean. The company will also attempt to catch both fairing halves with separate recovery ships.
The webcast below should go live about 15 minutes before the launch attempt.
Deepfake technology uses deep neural networks to convincingly replace one face with another in a video. The technology has obvious potential for abuse and is becoming ever more widely accessible. Many good articles have been written about the important social and political implications of this trend.
This isn’t one of those articles. Instead, in classic Ars Technica fashion, I’m going to take a close look at the technology itself: how does deepfake software work? How hard is it to use—and how good are the results?
I thought the best way to answer these questions would be to create a deepfake of my own. My Ars overlords gave me a few days to play around with deepfake software and a $1,000 cloud computing budget. A couple of weeks later, I have my result, which you can see above. I started with a video of Mark Zuckerberg testifying before Congress and replaced his face with that of Lieutenant Commander Data (Brent Spiner) from Star Trek: The Next Generation. Total spent: $552.
The video isn’t perfect. It doesn’t quite capture the full details of Data’s face, and if you look closely you can see some artifacts around the edges.
Still, what’s remarkable is that a neophyte like me can create fairly convincing video so quickly and for so little money. And there’s every reason to think deepfake technology will continue to get better, faster, and cheaper in the coming years.
In this article I’ll take you with me on my deepfake journey. I’ll explain each step required to create a deepfake video. Along the way, I’ll explain how the underlying technology works and explore some of its limitations.
Deepfakes need a lot of computing power and data
We call them deepfakes because they use deep neural networks. Over the last decade, computer scientists have discovered that neural networks become more and more powerful as you add additional layers of neurons (see the first installment of this series for a general introduction to neural networks). But to unlock the full power of these deeper networks, you need a lot of data and a whole lot of computing power.
That’s certainly true of deepfakes. For this project, I rented a virtual machine with four beefy graphics cards. Even with all that horsepower, it took almost a week to train my deepfake model.
I also needed a heap of images of both Mark Zuckerberg and Mr. Data. My final video above is only 38 seconds long, but I needed to gather a lot more footage—of both Zuckerberg and Data—for training.
To do this, I downloaded a bunch of videos containing their faces: 14 videos with clips from Star Trek: The Next Generation and nine videos featuring Mark Zuckerberg. My Zuckerberg videos included formal speeches, a couple of television interviews, and even footage of Zuckerberg smoking meat in his backyard.
I loaded all of these clips into iMovie and deleted sections that didn’t contain Zuckerberg or Data’s face. I also cut down longer sequences. Deepfake software doesn’t just need a huge number of images, but it needs a huge number of different images. It needs to see a face from different angles, with different expressions, and in different lighting conditions. An hour-long video of Mark Zuckerberg giving a speech may not provide much more value than a five-minute segment of the same speech, because it just shows the same angles, lighting conditions, and expressions over and over again. So I trimmed several hours of footage down to 9 minutes of Data and 7 minutes of Zuckerberg.
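One simple way to keep a training set diverse, rather than stuffed with near-duplicate frames, is to sample frames at a fixed time interval instead of extracting every frame. This sketch is a generic illustration of that idea, not the actual extraction logic of any particular deepfake package:

```python
def sample_frames(n_frames, fps, seconds_between_samples=2.0):
    """Return indices of frames sampled evenly through a clip.

    Grabbing one frame every couple of seconds, rather than every frame,
    keeps the training set varied without thousands of near-duplicates
    of the same pose, expression, and lighting.
    """
    step = max(1, int(fps * seconds_between_samples))
    return list(range(0, n_frames, step))


# A 9-minute clip at 24 fps contains 12,960 frames; sampling one frame
# every 2 seconds yields 270 distinct training images.
indices = sample_frames(n_frames=9 * 60 * 24, fps=24)
print(len(indices))  # 270
```

In practice the sampled frames would then be cropped to just the face, but the principle is the same: variety per image beats raw image count.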
At this very moment, you’re a participant in one of the things that makes us human: the telling and consumption of stories. It’s impossible to say when our species began telling each other stories—or when we first evolved the ability to use language to communicate not only simple, practical concepts but to share vivid accounts of events real or imagined. But by 43,900 years ago, people on the Indonesian island of Sulawesi had started painting some of their stories in images on cave walls.
A newly discovered painting in a remote cave depicts a hunting scene, and it’s the oldest story that has been recorded. And if Griffith University archaeologist Maxime Aubert and his colleagues are right, it could also be the first record of spiritual belief—and our first insight into what the makers of cave art were thinking.
A 44,000-year-old hunting story
Across a 4.5 meter (14.8 foot) section of rock wall, 3 meters (9.8 feet) above the floor of a hard-to-reach upper chamber of a site called Liang Bulu’Sipong 4, wild pigs and dwarf buffalo called anoa face off against a group of strangely tiny hunters in monochrome dark red. A dark red hand stencil adorns the left end of the mural, almost like an ancient artist’s signature. Through an opening in the northeast wall of the cave, sunlight spills in to illuminate the scene.
Liang Bulu’Sipong 4 is a living cave, still being reshaped by flowing water, and layers of rock have begun to grow over the painting in spots. The minerals that form those layers include small traces of uranium, which over time decays into thorium-230. Unlike the uranium, the thorium isn’t water-soluble and can only get into the rock via decay. By measuring the ratio of uranium-234 to thorium-230 in the rock, archaeologists can tell how recently the rock layer formed.
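As a rough illustration of the dating principle—heavily simplified relative to what the archaeologists actually do, since it assumes the rock layer started with no thorium-230 at all and that uranium-234 is in equilibrium with its parent uranium-238—the age falls out of the measured activity ratio directly:

```python
import math

TH230_HALF_LIFE_YR = 75_584  # thorium-230 half-life, in years
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR  # decay constant

def u_th_age(th230_u234_activity_ratio):
    """Closed-system age from the (230Th/234U) activity ratio.

    Under the simplifying assumptions above, thorium-230 accumulates
    toward equilibrium with its parent:  ratio = 1 - exp(-lambda * t),
    so the age is  t = -ln(1 - ratio) / lambda.
    """
    r = th230_u234_activity_ratio
    if not 0 <= r < 1:
        raise ValueError("ratio must be in [0, 1) for a finite age")
    return -math.log(1 - r) / LAMBDA_230
```

A measured ratio of 0.5 corresponds to exactly one thorium-230 half-life, roughly 75,600 years; the smaller the ratio, the younger the rock layer—and the layer only gives a minimum age for the painting beneath it.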
The deposits have been slowly growing over the hunting mural for at least 43,900 years, which means the painting itself may be even older than that. That makes the Liang Bulu’Sipong 4 mural the oldest record (that we know of) of an actual story. At first glance, it seems to suggest a game drive, in which people flush animals from cover and drive them toward a line of hunters with spears or other weapons. If Aubert and his colleagues are right about that, it means that somebody 44,000 years ago created a firsthand record of how they made a living.
A scene from legend?
But the oldest story ever recorded by human hands may be something more than a hunting record. “Some, or all, aspects of this imagery may not pertain to human experiences in the real world,” wrote Aubert and his colleagues. Up close, the tiny hunters don’t look quite human; many of them have strangely elongated faces, more like animal muzzles or snouts. One has a tail, and another appears to have a beak.
The figures could represent human hunters clad in skins or masks. Aubert and his colleagues, however, say they look more like therianthropes: human-animal hybrids that show up in cultures around the world, including in 15,500-year-old paintings in the Lascaux caves of France and a 40,000-year-old carved figure from Germany.
Whether they’re human, animal, or a bit of both, the hunters are facing prey animals of monstrous or mythological proportions. In real life, an anoa stands about 100cm (39.4 inches) tall, and an Indonesian wild pig stands only 60cm (23.6 inches) tall. On the wall of Liang Bulu’Sipong 4, though, the creatures loom many times larger than the hunters arrayed against them. It looks like a scene out of a legend, not a dry record of another day’s hunting.
And its presence suggests that Liang Bulu’Sipong 4 may have been a sacred, or at least important, place to the people who once lived in the area. Archaeologists found no trace of the usual debris of human life—stone tools, discarded bones, and cooking fires—anywhere in the cave or in the much larger chamber beneath it. That’s no wonder: Liang Bulu’Sipong 4 is set in a cliff 20 meters above the valley floor, and one doesn’t simply walk in.
“Accessing it requires climbing, and this is not an occupation site,” Aubert told Ars. “So people were going in there for another reason.”
Listing image by Aubert et al. 2019
More than 13,000 artificial intelligence mavens flocked to Vancouver this week for the world’s leading academic AI conference, NeurIPS. The venue included a maze of colorful corporate booths aiming to lure recruits for projects like software that plays doctor. Google handed out free luggage scales and socks depicting the colorful bikes employees ride on its campus while IBM offered hats emblazoned with “I ❤️A👁.”
Tuesday night, Google and Uber hosted well-lubricated, over-subscribed parties. At a bleary 8:30 the next morning, one of Google’s top researchers gave a keynote with a sobering message about AI’s future.
Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task through experience or by seeing labeled examples of correct answers.
“We’re kind of like the dog who caught the car,” Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn’t immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. “All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren’t covered by that rubric at all,” he said.
Hours later, one of the three researchers seen as the godfathers of deep learning also pointed to the limitations of the technology he had helped bring into the world. Yoshua Bengio, director of Mila, an AI institute in Montreal, recently shared the highest prize in computing with two other researchers for starting the deep learning revolution. But he noted that the technique yields highly specialized results; a system trained to show superhuman performance at one videogame is incapable of playing any other. “We have machines that learn in a very narrow way,” Bengio said. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”
Bengio and Aguera y Arcas both urged NeurIPS attendees to think more about the biological roots of natural intelligence. Aguera y Arcas showed results from experiments in which simulated bacteria adapted to seek food and communicate through a form of artificial evolution. Bengio discussed early work on making deep learning systems flexible enough to handle situations very different from those they were trained on, and made an analogy to how humans can handle new scenarios like driving in a different city or country.
A federal judge on Tuesday roasted Arkansas’ law banning makers of meatless meat products from using words such as “burger,” “sausage,” “roast,” and “meat” in their labeling. The law also established fines of $1,000 for each individual label in violation.
Known as Act 501, the law passed state lawmakers in March but has yet to be enforced. If it had, meatless-meat makers, such as Tofurky, would be forced to stop selling their products in the state, face a ruinous amount of fines, or change their labeling of meatless burgers and sausages to unappetizing and vague descriptors, such as “savory plant-based protein” and “veggie tubes.”
The American Civil Liberties Union (ACLU), The Good Food Institute, and Animal Legal Defense Fund challenged Act 501 on behalf of Tofurky in July. Together, the groups argued that the law amounted to a ham-fisted attempt by meat-backed lawmakers to protect the profits of the dairy and meat industry and stifle popular meatless competition.
On Tuesday, the group earned a first win in the case.
Judge Kristine Baker, of the US District Court for the Eastern District of Arkansas, granted a preliminary injunction that prevents the state from enforcing the law while the legal case is ongoing. In her order, Judge Baker made clear that the law appears to violate the Free Speech Clause of the First Amendment—as Tofurky argued. She determined that the state will likely lose the case.
On the butcher block
Arkansas argued in the case that the purpose of the law is to protect consumers from being misled or confused by false or misleading labeling on meatless products that use meat-associated terms.
“The State argues that Tofurky’s labels for its plant-based products are inherently misleading because they use the names and descriptors of traditional meat items but do not actually include the product they invoke, including terms like ‘chorizo,’ ‘hot dogs,’ ‘sausage,’ and ‘ham roast,'” Judge Baker noted. Such misleading or false labels would not be protected commercial speech under the First Amendment, the state claimed.
But Judge Baker essentially called that argument bologna.
“The State appears to believe that the simple use of the word ‘burger,’ ‘ham,’ or ‘sausage’ leaves the typical consumer confused, but such a position requires the assumption that a reasonable consumer will disregard all other words found on the label,” Judge Baker wrote in her order. She noted that Tofurky product labels contained many terms, qualifiers, and other clues that the products are not, in fact, animal products.
She went on to cite a ruling in a similar case that determined that “Under Plaintiffs’ logic, a reasonable consumer might also believe that veggie bacon contains pork, that flourless chocolate cake contains flour, or that e-books are made out of paper.”
“That assumption is unwarranted,” she went on. “The labels in the record evidence include ample terminology to indicate the vegan or vegetarian nature of the products.” Moreover, she adds, there’s no evidence that any consumer was actually confused as to whether Tofurky’s products contain animal-based meat.
“As a result, Tofurky is likely to prevail on its arguments that its labeling is neither unlawful nor inherently misleading and that Tofurky’s commercial speech warrants First Amendment protection,” she concluded.
In a statement, Brian Hauss, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, who argued the case, said, “We’re glad the court blocked the state’s blatantly unconstitutional effort to stifle competition by censoring speech. Legislatures that have passed or are considering similarly absurd laws in their states should take note of this ruling and correct course now.”
Jaime Athos, CEO of Tofurky, said in an emailed comment: “I am thrilled by the Court’s ruling today, but not particularly surprised. The actions of the Arkansas state legislature were a betrayal of their duty to their constituents and a blatant perversion of the purpose of government. Unfortunately, it is a little early to break out the champagne because this injunction currently only protects Tofurky in Arkansas, and there are many other plant-based companies that are still at risk there, and many other states have passed similarly unconstitutional legislation.”
Meat and dairy industry groups have been increasingly working to try to limit the use of terms like “milk” and “meat” in other states and contexts as meatless and dairy-free products continue to grow in popularity. Missouri, Mississippi, Louisiana, and South Dakota have similar anti-veggie-meat labeling laws. In Wisconsin, lawmakers have considered banning non-dairy products from using the word “milk,” such as beverages labeled almond milk.
The latter issue led former FDA commissioner Scott Gottlieb to quip last year that “You know, an almond doesn’t lactate.” He said that the Food and Drug Administration is working on a guidance for the use of the term.
With a one-sentence order Tuesday, an Arkansas judge rejected a request from two unvaccinated University of Arkansas students to have the court block a public health decree that temporarily bars them from classes amid a mumps outbreak.
The Arkansas Department of Health reported that as of December 5, there have been 26 cases of mumps at the university since September. Twenty of those cases occurred in November. According to a recent report in The Washington Post, the outbreak decleated the school’s already struggling football team, knocking out as many as 15 players and a few coaches from the end of its dismal two-win season.
On November 22, the health department issued a directive that any student who had not received two doses of the MMR vaccine (which protects against mumps, measles, and rubella) must either get vaccinated immediately or be barred from classes and school activities for 26 days. As of last week, 168 students lacked the vaccinations and were barred from classes.
Two of those unvaccinated students were brothers Shiloh Isaiah and Benjamin Andrew Bemis, who objected to the health department’s directive. They asked the Washington County Circuit Court to issue a temporary injunction, arguing that the University of Arkansas is “acting in a manner that has violated our civil rights.”
Though Arkansas requires students to be vaccinated, it allows for medical, religious, and philosophical exemptions. The brothers claim that the university “failed to recognize and uphold our philosophical beliefs as enrolled students—beliefs which include the choice to abstain from vaccinations,” according to the Associated Press.
Public health measures
A spokesperson for the university said that it was acting in accordance with the health department. A health department official told the Democrat-Gazette last week that the exemptions don’t apply to a public health directive during an outbreak.
The brothers also argued in their court filing that tests showed they did not have mumps. James G. Hodge Jr., an expert in public health laws, told the Gazette that such an argument is meaningless. “Just because you don’t currently have mumps or measles doesn’t mean you couldn’t have it 24 hours later,” Hodge said.
Judge Doug Martin on Tuesday rejected the brothers’ request succinctly. He did not include an explanation of the decision in the order.
Judge Martin’s rejection is just the latest in a long string of court defeats for students and parents who oppose life-saving immunizations on non-medical grounds. Earlier this year, an unvaccinated Kentucky teen lost his court case and an appeal to block a similar public health directive that barred him from school during a chickenpox outbreak. His attorney revealed that amid the legal fight, the teen contracted the chickenpox.
Parents in Brooklyn, New York, also lost their court battle in April against a vaccination mandate issued during a massive measles outbreak there. Months later, anti-vaccine advocates lost a legal fight in New York to block a new state law that eliminated non-medical vaccination exemptions.
“Generally, parents and students lose, and there’s good reason for it,” Hodge told the Gazette, speaking of other legal challenges to the public health directive in Arkansas. “What is done here is not a punitive measure. It is a public health measure.”
Mumps is a viral infection spread by saliva and respiratory droplets. It causes fever, muscle aches, tiredness, headaches, and swollen salivary glands, which produce the tell-tale puffy cheeks and swollen jaw. The infection can lead to inflammation of the testicles, brain, and spinal cord. In some cases, it can cause deafness. The largest US mumps outbreak since 2015 struck a close-knit community in northwest Arkansas and involved nearly 3,000 cases.
It’s now possible to store the digital instructions for 3D printing an everyday object in the object itself (much like DNA stores the code for life), according to a new paper in Nature Biotechnology. Scientists demonstrated this new “DNA of things” by fabricating a 3D-printed version of the Stanford bunny—a common test model in 3D computer graphics—that stored the printing instructions to reproduce the bunny.
DNA has four chemical building blocks—adenine (A), thymine (T), guanine (G), and cytosine (C)—which constitute a type of code. Information can be stored in DNA by converting the data from binary to base 4 and assigning each base-4 digit one of the four letters. As Ars’ John Timmer explained last year:
Once a bit of data is translated, it’s chopped up into smaller pieces (usually 100 to 150 bases long) and inserted in between ends that make it easier to copy and sequence. These ends also contain some information about where the data resides in the overall storage scheme—i.e., these are bytes 197 to 300. To restore the data, all the DNA has to be sequenced, the locational information read, and the DNA sequence decoded. In fact, the DNA needs to be sequenced several times over, since there are errors and a degree of randomness involved in how often any fragment will end up being sequenced.
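The base-4 mapping and fragmenting described above can be sketched in a few lines of code. This is an illustration only—the researchers’ actual pipelines also add error-correcting codes and sequencing adapters—but it shows the core idea: two bits per nucleotide, plus an index on each fragment so the pieces can be reassembled in order:

```python
# Two bits of data per nucleotide: 00->A, 01->C, 10->G, 11->T.
BASE4_TO_NT = {"00": "A", "01": "C", "10": "G", "11": "T"}
NT_TO_BASE4 = {v: k for k, v in BASE4_TO_NT.items()}

def encode(data: bytes, fragment_len: int = 100):
    """Map bytes to A/C/G/T, then chop into fragments, each prefixed
    with a 4-base index recording its position in the whole."""
    bits = "".join(f"{b:08b}" for b in data)
    seq = "".join(BASE4_TO_NT[bits[i:i + 2]] for i in range(0, len(bits), 2))
    fragments = []
    for n, i in enumerate(range(0, len(seq), fragment_len)):
        index_bits = f"{n:08b}"  # supports up to 256 fragments
        index = "".join(BASE4_TO_NT[index_bits[j:j + 2]] for j in range(0, 8, 2))
        fragments.append(index + seq[i:i + fragment_len])
    return fragments

def decode(fragments):
    """Sort fragments by their 4-base index, strip the indexes,
    and map the nucleotides back to bytes."""
    def idx(frag):
        return int("".join(NT_TO_BASE4[nt] for nt in frag[:4]), 2)
    seq = "".join(frag[4:] for frag in sorted(fragments, key=idx))
    bits = "".join(NT_TO_BASE4[nt] for nt in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

Because each fragment carries its own position, the decoder doesn’t care what order the fragments come back in—which matters, since sequencing reads DNA fragments in essentially random order.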
DNA has significantly higher data density than conventional storage systems. A single gram can represent nearly 1 billion terabytes (1 zettabyte) of data. And it’s a robust medium: the stored data can be preserved for long periods of time—decades, or even centuries. But using DNA for data storage also presents some imposing challenges. For instance, storing and retrieving data from DNA usually takes a significant amount of time, given all the sequencing required. And our ability to synthesize DNA still has a long way to go before it becomes a practical data storage medium.
This latest breakthrough brings us one step closer to that goal. Several years ago, co-author Robert Grass of ETH Zurich developed a method for marking products with a DNA “barcode” embedded in minuscule glass beads (“nanobeads”), a technology now being commercialized by a spinoff company. That is one key development that enabled this latest approach. The other is a method for storing (at least theoretically) more than 250,000 terabytes of data in a gram of DNA, developed by co-author Yaniv Erlich, chief science officer at MyHeritage, a DNA-based genealogy company.
“All other known forms of storage have a fixed geometry: a hard drive has to look like a hard drive, a CD like a CD. You can’t change the form without losing information,” Erlich said. “DNA is currently the only data storage medium that can also exist as a liquid, which allows us to insert it into objects of any shape.”
The fabricated Stanford bunny holds about 100 kilobytes of data, thanks to the addition of the DNA-containing nanobeads to the plastic used to 3D print it. “Just like real rabbits, our rabbit also carries its own blueprint,” said Grass.
Grass and his colleagues were also able to cut off a piece of the rabbit’s ear to retrieve the embedded DNA. Then they used that information to fabricate a second bunny, repeating this process four times, for a total of five fabricated bunnies. The data did degrade a bit with each subsequent generation, but the decoding program can fill in any blanks so that usable data can still be retrieved.
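The paper’s actual coding scheme is far more sophisticated, but the basic idea of surviving that generational degradation can be illustrated with simple redundancy: store several copies of the sequence, and when reading back, take a per-position vote so that a dropout in one copy is outvoted by the others. This toy stand-in is not what the researchers use, but it conveys why "filling in blanks" is possible at all:

```python
from collections import Counter

def majority_decode(copies):
    """Recover a sequence from several degraded copies by a
    per-position majority vote. `None` marks a base that was
    lost from that copy."""
    out = []
    for position in zip(*copies):
        votes = Counter(base for base in position if base is not None)
        out.append(votes.most_common(1)[0][0])
    return "".join(out)
```

Real systems use error-correcting codes rather than brute repetition, because codes recover the same data with far less redundancy—but the principle is the same: extra structure in the stored sequence lets the decoder reconstruct what degradation destroys.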
As a further proof of principle, Grass et al. stored a short film in the glass nanobeads and then embedded them into the plexiglass lenses of a pair of glasses. “It would be no problem to take a pair of glasses like this through airport security and thus transport information from one place to another undetected,” said Erlich. It would also be possible to embed blueprint instructions in objects like medical implants, car parts, electronic components, and building materials, which can be difficult to replace.
“Imagine a societal norm in which every object must encode the instructions for making the object,” Stanford University bioengineer Drew Endy told IEEE Spectrum. “Given the incredible information density of DNA data storage, such information could, in some commonplace objects such as refrigerators, also include a fully unabridged guide to rebuilding all of civilization.”
Listing image by ETH Zurich / Julian Koch
Summer 2018 saw some notably extreme weather in multiple locations around the Northern Hemisphere. There were heatwaves in the Western United States, Western Europe, the Caspian region through Siberia, and Japan as well. That’s not necessarily interesting on its face, as there’s always weird weather going on somewhere. But this was not a coincidence: all of these events were linked by the physics of the jet stream. It’s a linkage that could contribute to a crisis for food production.
The Northern Hemisphere jet stream is a band of strong winds that marks a boundary between cold Arctic air and warmer mid-latitude air. As the jet stream slides farther north or south, it brings changes in temperatures with it, via the cold and warm fronts that can bring rain.
The jet stream’s path can range from a neat, east-west stripe around the planet to lazy meanders that form serpentine shapes. Large meanders tend to move slowly, setting the stage for extremes like heatwaves (or cold rain in the next meander over). These meanders are affected by the location of continents and oceans, as well as by wind patterns around mountain ranges. Because these locations are fixed, there are some common positions for jet-stream meanders that occur over and over again.
A new study led by Kai Kornhuber at Columbia University found that heatwaves form at these locations when the jet stream has five or seven meanders in its path around the world, but not for six or eight meanders. In a five-meander pattern, for example, the northward bends (called “ridges”) tend to set up over the middle of North America, Eastern Europe, and Eastern Asia at the same time. In these ridges, warm air is pulled northward, and warm, sunny conditions prevail because of high air pressure—the recipe for a heatwave.
In the seven-meander pattern, the ridges shift toward Western North America, Western Europe, Western Asia (around the Caspian Sea), and Central Asia.
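Meander counts like these correspond to zonal wavenumbers: how many full waves the jet traces on its way around the globe. A rough sketch of how one might count them—not the study’s actual methodology, which works from decades of reanalysis wind fields—is to sample the north–south (meridional) wind around a latitude circle and pick out the strongest Fourier component:

```python
import numpy as np

def dominant_wavenumber(v_wind, max_wavenumber=10):
    """Return the zonal wavenumber with the largest Fourier amplitude,
    given meridional wind sampled evenly around a latitude circle.
    Wavenumber 0 (the mean) is skipped, and only planetary-scale
    wavenumbers up to `max_wavenumber` are considered."""
    amplitudes = np.abs(np.fft.rfft(v_wind))
    k = np.arange(1, max_wavenumber + 1)
    return int(k[np.argmax(amplitudes[1:max_wavenumber + 1])])

# Synthetic example: a jet with five meanders around the globe,
# plus a weaker wavenumber-2 component.
lon = np.linspace(0, 2 * np.pi, 360, endpoint=False)
v = 12.0 * np.sin(5 * lon) + 2.0 * np.sin(2 * lon)
```

For this synthetic wind field the function reports wavenumber 5—the five-meander pattern associated with simultaneous ridges over central North America, Eastern Europe, and Eastern Asia.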
The researchers identified these patterns in wind data from the last few decades and then looked for heat extremes. That showed that heat extremes are much more likely in some of these areas during five- or seven-meander jet streams. In central North America, for example, 45 out of the 520 weeks of summertime data had a five-meander pattern. The probability of a heatwave in those 45 weeks was seven times greater than at other times.
When you look for heatwaves occurring in central North America, Eastern Europe, and Eastern Asia at the same time, this pattern jumps out even more clearly. Simultaneous heatwaves were 21 times more likely during five-meander jet-stream patterns. The story is similar for simultaneous heatwaves in North America, Western Europe, and Western Asia during seven-meander patterns.
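Numbers like “seven times more likely” and “21 times more likely” are conditional-probability ratios: the chance of a heatwave during pattern weeks divided by the chance during all other weeks. With made-up boolean series standing in for the study’s data, the computation looks like this:

```python
import numpy as np

def heatwave_risk_ratio(pattern_weeks, heatwave_weeks):
    """Ratio of P(heatwave | pattern present) to
    P(heatwave | pattern absent), given two boolean arrays
    covering the same sequence of weeks."""
    pattern_weeks = np.asarray(pattern_weeks, dtype=bool)
    heatwave_weeks = np.asarray(heatwave_weeks, dtype=bool)
    p_given_pattern = heatwave_weeks[pattern_weeks].mean()
    p_otherwise = heatwave_weeks[~pattern_weeks].mean()
    return p_given_pattern / p_otherwise
```

A ratio of 7, as reported for central North America, means a heatwave was seven times more probable in a five-meander week than in any other summertime week.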
Does climate change fit into this? That’s not easy to answer. Increasing temperatures mean that heatwaves are more extreme than they used to be regardless of the jet stream’s behavior. But the influence of climate change on the jet stream is genuinely debated. Some studies see a link and project an increase in meandering behavior, but untangling human-caused changes and natural variability has been particularly difficult on this question.
In this case, the dataset used doesn’t show a statistically significant increase in five-meander or seven-meander patterns since 1978. But it does show an impact on agriculture. The researchers also checked their data against crop production in these regions, noting an average drop of four percent in summers with more than one of these events.