September 2016

Megan Geuss
A Manhattan-based federal judge ruled on Monday (PDF) that a man accused of running an illegal Bitcoin exchange website could not have two charges of running an unlicensed money transfer business dropped because Bitcoin is money.
The defendant is Anthony Murgio of Florida, who was arrested in July 2015 in connection with a number of other American and Israeli men who allegedly hacked into JP Morgan Chase, ETrade, and News Corp., among others. Murgio was not directly charged with conducting any of the hacks, but the Justice Department did claim that Murgio ran a sketchy Bitcoin exchange website with Gery Shalon, the alleged mastermind of the JP Morgan hacks. According to a 2015 indictment, Murgio and others were able to accept shady money from co-conspirators through the exchange.
Murgio is also accused of misrepresenting his business to financial institutions by creating a front called the “Collectables Club,” as well as bribing a small New Jersey credit union to process its electronic payments. Judge Alison Nathan’s Monday order did not affect those charges.
In his motion to dismiss the unlicensed money transfer business charges, Murgio claimed that, because Bitcoins are not considered “funds,” he was not operating an illegal business.
In her order, Nathan denied Murgio’s argument, writing, “Bitcoins are funds within the plain meaning of that term. Bitcoins can be accepted as a payment for goods and services or bought directly from an exchange with a bank account. They therefore function as pecuniary resources and are used as a medium of exchange and a means of payment.”
Whether Bitcoin is considered money or not has been inconsistently decided in a variety of legal and regulatory settings. In July, a Florida judge ruled that cryptocurrency is not money in a case involving a Bitcoin vendor caught in a sting set up by a Miami police detective. In 2013, however, a Texas-based federal judge came to the opposite conclusion in a case involving a Bitcoin-based hedge fund. The Financial Crimes Enforcement Network (FinCEN) also advised in 2013 that Bitcoin-based businesses should be considered Money Services Businesses under US law, but the Internal Revenue Service treats the cryptocurrency as property rather than currency, meaning it’s subject to capital gains tax.
Judge Nathan responded to the IRS argument in her Monday order. “The fact that the IRS treats virtual currency as ‘property,’ rather than ‘currency,’ for tax purposes is irrelevant to the inquiry here,” she wrote. “In fact, the IRS Notice that Murgio cites makes clear that it ‘addresses only the US federal tax consequences of transactions in, or transactions that use, convertible virtual currency.’”

Beth Mole
Chiming in with reminders, data, and tips, our sleek gadgets and handy apps want to program us into being better versions of ourselves: more responsible, productive, healthy. But, sadly, some technology is no match for the chaotic code of an emotional human—particularly one struggling on a diet.
According to a two-year study, wearable fitness trackers designed to coax users into busting moves and burning calories throughout their daily lives didn’t help anyone lose weight. In fact, overweight dieters using the arm-mounted gizmos actually gained more weight on average than those using old-fashioned, tech-less dieting schemes. The study, published Tuesday in JAMA, contradicts earlier studies that found the trackers can boost weight loss. But those earlier trials tended to be smaller and shorter.
The new data, the authors say, suggests that tossing technology at big problems, like fitness, diet, willpower, and motivation, isn’t straightforward and requires more nuanced, long-term studies. “I think we have to be a little bit cautious about simply thinking that what we can do is just add technology to these already effective interventions and expect better results,” lead study researcher John Jakicic, of the University of Pittsburgh, said in an interview with JAMA.
For the study, Dr. Jakicic and his colleagues started with one of those effective behavioral interventions. They enrolled 471 young adults (aged 18-35) who were overweight (with an average weight of around 210 pounds) and wanted to slim down. For the first six months, the participants had to stick to a low-calorie diet and a prescribed fitness plan, log their progress in diet diaries, and attend weekly group counseling sessions.
After six months, everyone had lost weight—about 17-19 pounds on average.
Next, the participants were divided into two groups. One group got the fitness tracker for 18 months, while the other just had to log their activity into a study website (considered a standard dieting method). By the 24-month mark, many participants in both groups had regained some of the weight they lost in the first six months. Those on the standard plan were, on average, 13 pounds lighter than when they started the whole thing (before the six-month intervention). But those using the fitness trackers were, on average, only about eight pounds lighter.
While the results surprised the researchers, the data didn’t provide any clear clues as to why the fitness trackers seemed to sabotage dieters’ weight loss efforts. Perhaps the devices worked to get people moving, but then led them to be hungrier and overeat. Or it’s possible that people might have felt discouraged if they kept track of their fitness each day, felt they weren’t going to meet their daily goal, and then gave up early.
Jakicic says future studies will be necessary to tease such potential factors out, plus test the effectiveness of different wearable fitness tracking devices. “Probably more importantly,” he said, “is for us to try to understand for whom and when these devices are actually very effective.” For some people, fitness trackers might work, he said. For others, they might backfire.
JAMA, 2016. DOI: 10.1001/jama.2016.12858

Kyle Orland
We're pretty sure this supposed "leak" of an NX design is fake, but it's still a good mock-up based on the rumors we have heard about the supposed console/portable hybrid.
With the NX just over six months away from launch (if Nintendo's "March 2017" launch roadmap is still to be believed), we're still stuck grasping at straws when it comes to official info about the system. The latest detail drip comes from The Pokemon Company (TPC) president Tsunekazu Ishihara, who seemingly confirmed the long-standing rumors about the system's console/portable hybrid design in an interview with The Wall Street Journal.
"The NX is trying to change the concept of what it means to be a home console device or a hand-held device [emphasis added]," Ishihara said in the interview. "We will make games for the NX."
The wording of the quote leaves a little wiggle room for interpretation—perhaps Ishihara meant the NX will be a console or a hand-held as an either-or proposition, rather than a hybrid of the two. It's also unclear if Ishihara has any specific, insider knowledge of the NX or its development process. The Pokemon Company is partly owned by Nintendo (in conjunction with game developers Creatures and Game Freak), so there may have been some hardware information sharing between the two companies. But TPC largely operates as its own entity, and Ishihara might simply have been speaking based on previous reports suggesting the system's hybrid design.
In any case, add this small pebble to the very small pile of semi-reliable information we have regarding the still-mysterious NX nearly 18 months after its first announcement. As Daniel Ahmad recently pointed out on Twitter, it was only 191 days from the May 2013 unveiling of the Xbox One to its November 2013 launch. As of this writing, there are 192 days until March 31, 2017. Tick tock, Nintendo.
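The countdown arithmetic is easy to verify. Assuming a writing date of September 20, 2016 (the date that makes the 192-day figure come out), Python's standard datetime module gives:

```python
from datetime import date

# Days remaining until the end of Nintendo's stated launch window,
# assuming this piece was written on September 20, 2016.
nx_window_end = date(2017, 3, 31)
writing_date = date(2016, 9, 20)
days_left = (nx_window_end - writing_date).days
print(days_left)  # → 192
```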

Exploiting in-car Web browser, researchers gained access to car's control network.

Sean Gallagher

Researchers from Tencent's Keen Security Labs totally hack the Tesla S over Wi-Fi.

Security researchers at Chinese Internet company Tencent's Keen Security Lab privately disclosed a security bug in Tesla Model S cars that allowed an attacker to gain remote access to a vehicle's Controller Area Network (CAN) and take over functions of the vehicle while it was parked or moving. The Keen researchers were able to remotely open the doors and trunk of an unmodified Model S, and they were also able to take control of its display. Perhaps most notably, once the car had been breached through an attack on its built-in Web browser, the researchers remotely activated the brakes of a moving Model S.
Tesla has already issued an over-the-air firmware patch to fix the situation.
Previous hacks of Tesla vehicles have required physical access to the car. The Keen attack exploited a bug in Tesla's Web browser, which required the vehicle to be connected to a malicious Wi-Fi hotspot. This allowed the attackers to stage a "man-in-the-middle" attack, according to researchers. In a statement on the vulnerability, a Tesla spokesman said, "our realistic estimate is that the risk to our customers was very low, but this did not stop us from responding quickly." After Keen brought the vulnerability to Bugcrowd, the company managing Tesla's bug bounty program, it took just 10 days for Tesla to generate a fix.
Full details of the attack were not revealed. But in a video demonstrating the attack (shown above), researchers exploited the in-car browser of an unmodified vehicle by intercepting a search for the nearest charging station. The exploit then allowed the researchers to gain remote control over Wi-Fi to door locks, seat adjustments, signals, and other controls including the vehicle's displays. While moving, the researchers were also able to demonstrate remote control of the vehicle's rear hatch and the brakes, bringing the car to a very sudden stop from a computer 12 miles away.
Listing image by El monty

Google makes yet another attempt at instant messaging—this time with a cloud twist.

Ron Amadeo
Four months after announcing the product at Google I/O, Google Allo has finally launched. Allo is yet another attempt at a Google instant messaging platform, and while Google insists it won't shut down its current IM product, Google Hangouts, it's hard to imagine the new thing not replacing the old thing.
So what makes Allo different? The sales pitch is that Allo is an IM client with a Google cloud twist. Like Google Inbox, there's a "smart reply" feature that scans the current chat and generates several pre-typed responses using Google's cloud-powered machine learning. For instance, at a very basic level, if Google detects a "yes" or "no" question, you'll get "yes" or "no" buttons to reply with above the keyboard. Google says that smart reply will "improve over time and adjust to your style."
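Google hasn't published how smart reply classifies messages, and the real system uses machine learning, but the basic yes/no case the article describes can be caricatured with a crude heuristic (entirely hypothetical, not Google's code):

```python
def suggest_replies(message: str) -> list[str]:
    """Crude stand-in for a learned model: offer yes/no reply
    buttons when a message looks like a yes/no question."""
    text = message.strip().lower()
    # Common auxiliary verbs that open English yes/no questions.
    yes_no_openers = ("are ", "is ", "do ", "does ", "did ",
                      "can ", "could ", "will ", "would ", "should ")
    if text.endswith("?") and text.startswith(yes_no_openers):
        return ["Yes", "No"]
    return []

suggest_replies("Are you coming to dinner?")  # → ["Yes", "No"]
suggest_replies("See you at 8")               # → []
```

The real feature presumably also personalizes suggestions over time, which no rule-based sketch like this captures.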
The other cloud-powered feature is the Google Assistant, Google's new chatbot technology that lets you perform Google queries and see results right inside a chat window. This can mean things like asking questions, checking a plane flight, or finding nearby restaurants. While you can do all of this with a regular Google search, doing it inside Allo means you can collaborate with a friend. Being able to browse restaurant results together sounds like a great way to make dinner plans.
Allo also has an "Incognito mode" that will encrypt your chat session end-to-end and promises to not store it on a Google server. Encryption is an optional mode, though, not a default. There's also SMS support for your friends that aren't on Allo (which is everyone right now), lots of stickers, and the ability to take a picture and draw on it.
The rest of Allo is pretty much your standard messaging app. It doesn't use your Google account, though—you "sign in" with a phone number, and it doesn't know who you or any of your friends are, which is rather odd. There's also a big deal breaker for some people: client support. Google is currently rolling out clients for Android and iOS, and that's it. There are no desktop or Web clients.

Third judge rules that Playpen search warrant was invalid from the start.

Cyrus Farivar
Andrew Brookes / Getty Images News
A federal judge in Iowa has ordered the suppression of child pornography evidence derived from an invalid warrant. The warrant was issued as part of a controversial government-sanctioned operation to hack Tor users. Out of nearly 200 such cases nationwide that involve the Tor-hidden child porn site known as "Playpen," US District Judge Robert Pratt is just the third to make such a ruling.
"Any search conducted pursuant to such warrant is the equivalent of a warrantless search," Judge Pratt wrote Monday in his 19-page order in United States v. Croghan.
While the charges against Beau Croghan have not been dropped yet, the ruling significantly hinders the government's case.
Earlier this year, federal judges in Massachusetts and Oklahoma made similar rulings and likewise tossed the relevant evidence. Thirteen other judges, meanwhile, have found that the warrants to search the defendants' computers via the hacking tool were invalid but have declined to take the extra step of suppressing the evidence. Judges in the remaining cases have yet to rule on the warrant question.
In all of these cases related to Playpen, a federal magistrate judge in Virginia issued a warrant that was then used to authorize the deployment of this tool, known as a "network investigative technique," or NIT, as a way to locate users.
Under current rules of federal jurisprudence, magistrate judges only have the authority to issue warrants within their own district. However, a change in this rule will almost certainly expand this power to magistrate judges later this year, absent Congressional action. As of now, only more senior federal judges, known as district judges, have the authority to issue out-of-district warrants. So, Judge Pratt concluded, because the warrant was invalid ab initio, or from the beginning, any evidence that resulted from that search must be suppressed.
"Here, by contrast, law enforcement caused an NIT to be deployed directly onto Defendants' home computers, which then caused those computers to relay specific information stored on those computers to the Government without Defendants' consent or knowledge," Judge Pratt wrote.
"There is a significant difference between obtaining an IP address from a third party and obtaining it directly from a defendant’s computer."
As the judge continued:
If a defendant writes his IP address on a piece of paper and places it in a drawer in his home, there would be no question that law enforcement would need a warrant to access that piece of paper—even accepting that the defendant had no reasonable expectation of privacy in the IP address itself. Here, Defendants' IP addresses were stored on their computers in their homes rather than in a drawer.

Our tax dollars at work

As Ars has reported before, investigators in early 2015 used the NIT to force Playpen users to cough up their actual IP address, which made tracking them down trivial. In yet another related case prosecuted out of New York, an FBI search warrant affidavit described both the types of child pornography available to Playpen's 150,000 members and the malware's capabilities.
As a way to ensnare users, the FBI even took control of Playpen and ran it for 13 days before shutting it down. During that period, with many users' Tor-enabled digital shields down—revealing their true IP addresses—the government was able to identify and arrest the nearly 200 child porn suspects. (Nearly 1,000 IP addresses were revealed as a result of the NIT’s deployment, however, which suggests that even more charges could be filed.)
Privacy-minded experts applauded Judge Pratt's reasoning—that the government should not have the ability, absent proper warrants, to hack into people's computers.
"Judge Pratt correctly interpreted the NIT's function and picked the correct analogy," Fred Jennings, a New York-based lawyer who has worked on numerous computer crime cases, told Ars. Jennings continues:
[Pratt] correctly points out that the usual analogies, to tracking devices or IP information turned over by a third-party service provider, are inapplicable to this type of government hacking. A common theme in digital privacy, with Fourth Amendment issues especially, is the difficulty of analogizing to apt precedent—there are nuances to digital communication that simply don't trace back well to 20th-century precedent about physical intrusion or literal wiretapping.
By contrast to Judge Pratt, other courts have struggled with the basics of how Tor and IP addresses work.
"In attempting to salvage the mess they made with Playpen, [the Department of Justice] has tried to say that the NIT is like a GPS tracking device," Chris Soghoian, a technologist for the American Civil Liberties Union, told Ars.
"And, sadly, several judges have bought it, saying that the defendants traveled virtually to Virginia, and that the NITs were installed in Virginia while they were virtually there."
For its part, the government has said it is not sure how it will deal with the suppression order in Croghan.
"Our office is still in the process of reviewing the judge's order that was issued yesterday," Rachel Scherle, a federal prosecutor in Iowa, told Ars by e-mail. "No decisions have been made as to dismissal or appeal at this time, but I will keep you posted."

The return of Tiangong-1 will come within a year of the launch of Tiangong-2.

Eric Berger
Artist's concept of Tiangong-1 space station with a Shenzhou spacecraft docking.
China says its first space station, launched in 2011, will return to Earth sometime during the second half of 2017. However, Chinese space officials cannot say exactly when or where the Tiangong-1 laboratory will come down.
The small space station, named "Heavenly Palace," currently orbits at an altitude of about 370km, Chinese officials said. But it can no longer sustain such a high orbit and will gradually begin falling back to Earth. China's official news service, Xinhua, further reported:
"Based on our calculation and analysis, most parts of the space lab will burn up during falling," she said, adding that it was unlikely to affect aviation activities or cause damage to the ground.
China has always highly valued the management of space debris, conducting research and tests on space debris mitigation and cleaning, Wu said.
Now, China will continue to monitor Tiangong-1 and strengthen early warning for possible collision with objects. If necessary, China will release a forecast of its falling and report it internationally, said Wu.
The 8.5-ton, 10.4-meter-long facility served as an initial test bed for life-support systems in orbit and as a precursor to the larger space station China plans to launch in the 2020s. A second "Heavenly Palace," Tiangong-2, was launched earlier this month for further studies. It, too, will eventually return to Earth in an uncontrolled manner.

Exec says console upgraders should wait for "significantly more powerful" Scorpio.

Kyle Orland
I dunno... I still see some jagged edges on this logo...
Remember earlier in the week when we described the emerging competition over the coming world of 4K console gaming? That contest just got a little more direct and personal, judging by comments Microsoft head of Xbox planning Albert Penello made about the PS4 Pro in a recent Eurogamer interview.
"I know that 4.2 teraflops is not enough to do true 4K," Penello said, referencing the reported hardware power of the PS4 Pro, which launches in November at $400. "So, I feel like our product aspired a little bit higher, and we will have fewer asterisks around the 4K experiences we deliver on our box."
Penello's comments followed a more direct comparison between the "true 4K" capabilities of the upcoming Xbox One Scorpio (launching late next year, price unknown) and the PS4 Pro:
I think there are a lot of caveats they're giving customers right now around 4K. They're talking about checkerboard rendering and up-scaling and things like that. There are just a lot of asterisks in their marketing around 4K, which is interesting because when we thought about what spec we wanted for Scorpio, we were very clear we wanted developers to take their Xbox One engines and render them in native, true 4K. That was why we picked the number, that's why we have the memory bandwidth we have, that's why we have the teraflops we have, because it's what we heard from game developers was required to achieve native 4K.
It's a fair argument, at least as far as Sony's system is concerned. PlayStation President Andrew House has said that "the majority [of PS4 Pro games] will be upscaled" to full 4K resolution. That statement echoes what we've heard from developers working on PS4 Pro upgrades, though at least one said the upscaling should be "nearly indistinguishable" from a native 4K experience. In contrast, Microsoft has promised that all of its first-party games on Scorpio will be rendered in native 4K, without upscaling.
To be fair, Penello bracketed his Eurogamer interview by saying his comments "[don't] come with a disrespect for what [Sony is] doing." While Penello said he of course wants to "highlight the things we think make our product advantaged over their product," that doesn't mean engaging in "the historical 'Sega does what Nintendon't' kind of head-on jabs that have happened in the past."
Penello also acknowledged that the relative value of "native 4K" over the PS4 Pro-style upscaling can be subjective. "You and I both know there will be people who claim with absolute certainty that the difference between 1080p and 900p is the most significant thing, and anybody who claims otherwise is blind," he said. "And there will be people who say they can't see a difference. Both people are right in their own minds."
Still, Penello's comments are the clearest indication yet that Microsoft will be using Scorpio's claimed power advantage as a marketing cudgel to convince potential PS4 Pro buyers to wait for a better experience. In fact, Penello put that argument quite directly in the interview: "The guys who want to do this mid-generation upgrade, you're going to get something significantly more powerful next year."

City that wants better Internet gears up for court battle against AT&T.

Jon Brodkin
Google Fiber
The Nashville Metro Council last night gave its final approval to an ordinance designed to help Google Fiber accelerate deployment of high-speed Internet in the Tennessee city, despite AT&T and Comcast lobbying against the measure. Google Fiber's path isn't clear, however, as AT&T said weeks ago that it would likely sue Nashville if it passed the ordinance. AT&T has already sued Louisville, Kentucky, over a similar ordinance designed to help Google Fiber.
The Nashville Council vote approved a "One Touch Make Ready" ordinance that gives Google Fiber or other ISPs quicker access to utility poles. The ordinance lets a single company make all of the necessary wire adjustments on utility poles itself, instead of having to wait for incumbent providers like AT&T and Comcast to send work crews to move their own wires.
One Council member who opposed the ordinance asked AT&T and Comcast to put forth an alternative plan, but the council stuck with the original One Touch Make Ready proposal.
"It’s a great day for Nashville," Google Fiber said in response to the vote. "This will allow new entrants like Google Fiber to bring broadband to more Nashvillians efficiently, safely and quickly."
The ordinance now heads to Mayor Megan Barry, who said she plans to sign it into law, The Tennessean reported last night. But she is getting ready for a lawsuit. "Unfortunately, the likelihood of protracted litigation could delay implementation of this law designed to benefit Nashville’s consumers," she said, according to the report. "That is why I encouraged fiber providers to work together on a solution they could all agree upon, which they were not able to do. My hope now is that any potential legal disputes over this new law can be resolved quickly, and we can move forward with expanding fiber access throughout the city.”
Google Fiber owner Alphabet offered to share the company's attorneys with Nashville to fight a lawsuit. AT&T said last month that it expected the ordinance's passage would "result in litigation."
AT&T and Comcast both expressed disappointment in last night's vote. AT&T said the ordinance "is not a good solution for faster deployment of infrastructure," while Comcast said, "we thank the council members who were willing to take a deep look at the risks associated with, and inaccuracies of ‘One Touch’ and stand up for a better solution that is beneficial for all consumers."
AT&T previously complained that the ordinance could disrupt its contract with its workers' union and that Google Fiber crews sometimes have not followed safety codes.

The En-Gedi scroll was a lump of crumbling coal for over 1,700 years, but a new technique "unwrapped" it.

Annalee Newitz
When the En-Gedi scrolls were excavated from an ancient synagogue's Holy Ark in the 1970s, it was a bittersweet discovery for archaeologists. Though the texts provided further evidence for an ancient Jewish community in this oasis near the Dead Sea, the scrolls had been reduced to charred lumps by fire. Even the act of moving them to a research facility caused more damage. But decades later, archaeologists have read parts of one scroll for the first time. A team of scientists in Israel and the US used a sophisticated medical scanning technique, coupled with algorithmic analysis, to "unwrap" a parchment that's more than 1,700 years old.
Science Advances
Found in roughly the same area as the Dead Sea Scrolls, the En-Gedi scrolls were used by a Jewish community in the region between the 8th century BCE and 6th century CE. In the year 600 CE, the community and its temple were destroyed by fire. Archaeologists disagree on the exact historical provenance of the En-Gedi scrolls—carbon dating suggests fourth century, but stratigraphic evidence points to a date closer to the second. Either way, these scrolls could provide a kind of missing link between the biblical texts of the Dead Sea Scrolls and the traditional biblical text of the Tanakh found in the Masoretic Text from roughly the 9th century. As the researchers put it in a paper published in Science Advances:
Dating the En-Gedi scroll to the third or fourth century CE falls near the end of the period of the biblical Dead Sea Scrolls (third century BCE to second century CE) and several centuries before the medieval biblical fragments found in the Cairo Genizah, which date from the ninth century CE onward. Hence, the En-Gedi scroll provides an important extension to the evidence of the Dead Sea Scrolls and offers a glimpse into the earliest stages of almost 800 years of near silence in the history of the biblical text.

How to read a burned scroll with computers

But it wasn't until University of Kentucky computer scientist Brent Seales developed a technique he calls volume cartography that archaeologists actually got that "glimpse." Seales had previously worked on a project to read fire-damaged scrolls from the library of a wealthy Roman whose home in Herculaneum was destroyed in the eruption of Mount Vesuvius. He suggested that Israel Antiquities Authority archaeologist Pnina Shor scan the scrolls using X-ray micro-CT, essentially a much higher-resolution version of the CT scans you might get in a hospital. Indeed, Shor explained in a press conference that her team used a medical imaging facility to produce digital scans, which she sent to Seales to analyze in Kentucky.
Seales used a three-step process of "segmentation, texturing, and flattening" to recreate the writing on the surface of one of the scrolls. In segmentation, researchers break the 3D scan down into very small pieces, searching for the surfaces of each page. Because the scroll wasn't just rolled—but actually crushed and burned—each page surface has an arbitrary shape. But eventually, Seales and his team mapped a triangulated surface mesh to each surface and had a pretty good map of where in the scroll they might find ink.
During the texturing phase, Seales writes, "each point on the surface of the mesh is given an intensity value based on its location in the 3D volume." The higher the intensity, the more likely the point is to be writing. One scroll turned out to have been written in metal-based ink, which made the process slightly easier. Finally, during the flattening stage, the team "maps the geometric model (and associated intensities from the texturing step) to a plane, which is then directly viewable as a 2D image." In other words, the scroll is virtually unwrapped, with the letters appearing to glow on its surface. Of course, the scroll was severely damaged by fire, so there are big pieces missing.
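The texturing and flattening steps can be sketched in a few lines. This toy example (not Seales' actual code, which won't be released until next year) uses a tiny synthetic CT volume where high-density voxels stand in for metal ink, and a hand-made "segmented surface" in which each point carries both its 3D voxel coordinates and its flattened (u, v) coordinates:

```python
import numpy as np

# Toy CT volume: mostly low-density "parchment," with a few
# high-density voxels standing in for metal-based ink.
volume = np.zeros((4, 4, 4))
volume[1, 2, 3] = 1.0   # an "ink" voxel
volume[2, 0, 1] = 1.0   # another "ink" voxel

# A segmented page surface: each entry pairs an (x, y, z) voxel
# coordinate with its (u, v) position on the flattened page.
surface = [
    ((1, 2, 3), (0, 0)),
    ((1, 2, 2), (0, 1)),
    ((2, 0, 1), (1, 0)),
    ((2, 0, 0), (1, 1)),
]

def texture_and_flatten(volume, surface, out_shape):
    """Sample the volume's intensity at each surface point
    (texturing), then place that value at the point's (u, v)
    position in a 2D image (flattening)."""
    image = np.zeros(out_shape)
    for (x, y, z), (u, v) in surface:
        image[u, v] = volume[x, y, z]
    return image

page = texture_and_flatten(volume, surface, (2, 2))
# Bright pixels in `page` mark probable ink: [[1., 0.], [1., 0.]]
```

The hard part in practice is the segmentation that produces the surface and its (u, v) parameterization in the first place, since a crushed, burned scroll has an arbitrary, folded geometry.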
Science Advances
During a press conference, Seales explained the above image:
If you look at the top edge, the cutout pattern of the master view image... there are sections that are missing... Imagine that the scroll is being rolled from the left to the right across that figure two. On the left are the outermost layers, and on the right are the innermost layers. The notches along the top edge, and the larger cutouts actually on the bottom edge as well, those are places where the layer being unwrapped comes back around—a re-revolution to a damaged section. After about five complete revolutions, you get to the right-hand side of the master view and you can see the center of the scroll. And in those innermost layers, on the center part of the scroll, you can see scoring marks that look like cracks, but they’re all lined up. That’s where the scribe probably made lines to follow in writing the text.
Seales noted that his volume cartography technique will be released as open source software next year.

Historical significance

Once Seales and his team had this visualization, they still weren't sure what they had. None of them read Hebrew, so they waited with some excitement while Shor and her colleagues analyzed the text. It turned out that the scroll contained the first two chapters of Leviticus, which coincidentally deal with burnt offerings. What's incredible about these chapters, according to archaeologist Emanuel Tov, is that they are virtually identical to medieval Masoretic Text, written hundreds of years later. The En-Gedi scroll even duplicates the exact paragraph breaks seen later in the medieval Hebrew. The only difference between the two is that ancient Hebrew had no vowels, so these were added in the Middle Ages.
Science Advances
Tov called it "100 percent identical with the medieval texts, both in its consonants and in its paragraph divisions." He added, "The same central stream of Judaism that used this Levitical scroll in one of the early centuries of our era was to continue using it until the late Middle Ages when printing was invented... the scroll brings the good news that the ancient source or the medieval text did not change for 2,000 years." In other words, the Jewish community managed to retain the exact wording in their biblical texts over centuries, despite massive cultural upheavals and changes to their languages.
Archaeologist Michael Segal said the En-Gedi scroll "teaches us that the [biblical] text that we have that is used today as the traditional text is a very ancient text in all of its details." He cautioned that of course only the consonants are the same, and we have yet to read the rest of the En-Gedi scrolls. Still, this scroll provides strong evidence that today's Tanakh "already existed in a standardized form in the first century CE."

A boon to intelligence agents?

The archaeologists involved in this project are eager to use Seales' software to unwrap other damaged scrolls, particularly some of the Dead Sea Scrolls. Seales, for his part, wants to return to the Herculaneum collection to learn more about the reading habits of ancient Romans.
However, there are many other uses for volume cartography. Seales admitted that there had been interest from the intelligence community. "I'm sure that security and intelligence constantly is looking for ways to extract better information noninvasively from materials," he said. "So that’s what we’re doing, and we’re doing it at a very high resolution, so anything that requires the resolution that goes down to microns in the intelligence world will probably be interested in this technique."
So if you're considering a career in spycraft, just remember that burning the evidence may not be enough anymore. The same technique that allows scientists to read ancient burned scrolls will allow intelligence agents to read your charred secret messages, possibly years or decades later.

Science Advances, 2016. DOI: 10.1126/sciadv.1601247
Listing image by Science Advances

Transponders, mandatory training, and registration demanded under planned law.

Jennifer Baker
Members of the French senate have approved stricter new rules for the use of drones in the country's skies.
On Tuesday, a special commission for sustainable development approved the bill, which will require compulsory registration of drones, the mandatory installation of RFID or GSM transponders to broadcast owner details, and possibly automatic performance-limiting devices.
The law is based loosely on various US models introduced in recent years.
The text approved by the committee on Tuesday leaves it to the French government to define the parameters, but suggests that the rules should apply to any drone heavier than 800 grams. According to a report from France's national security secretariat (secrétariat général de la défense et de la sécurité nationale), drones heavier than 1kg could carry a "petite grenade" (small grenade).
Under the proposed law, commercial or heavy drone operators must have training, and manufacturers or importers would have to include a leaflet on the use of drones, as well as the relevant legislation and regulations.
GSM or RFID tags would be required to transmit the owner’s name, phone number, registration number, and GPS location.
Following a number of highly publicised drone flights over nuclear power plants and the Élysée Palace, the bill was submitted by Xavier Pintat and Jacques Gautier in March.
According to the committee, "recent years have been marked by the multiplication of incidents involving civilian drones: collisions, near misses with planes, flying over sensitive sites." The new law aims to deal with these problems "without slowing the development of a dynamic economic sector."
If the stringent new drone rules are approved by the national assembly, which will review them next Tuesday, legislation is expected to come into force in July 2018.
Last year, the UK dished out its first conviction for illegal drone activity to a Nottingham man who flew his drones over football stadiums in breach of British laws against flying drones over buildings or congested areas. He received a £1,800 fine and was banned from operating or helping someone else operate drones for two years. Sanctions under the proposed French law are yet to be defined, however.

Mark Walton
Samsung has unveiled its next-generation M.2 PCIe SSDs, the 960 Pro and 960 Evo. Like the 950 Pro, which was released last year, the 960 Pro and 960 Evo are PCIe 3.0 x4 drives that use the latest NVMe protocol for data transfer.
As you'd expect, both are faster: the 960 Pro offers a blistering peak read speed of 3.5GB/s and a peak write speed of 2.1GB/s, while the Evo offers 3.2GB/s and 1.9GB/s respectively. The 950 topped out at a mere 2.5GB/s and 1.5GB/s.
The 960 Pro and the 960 Evo are due for release in October. The Pro starts at $329 for 512GB of storage, rising up to a cool $1,299 for a 2TB version. The Evo is a little lighter on the wallet, starting at $129 for a 250GB version, rising to $479 for a 1TB version. UK pricing is yet to be confirmed, but a 512GB 950 Pro currently retails for around £300.
Both the Pro and the Evo use Samsung's brand new Polaris controller (not to be confused with AMD's Polaris graphics architecture), which features a five-core chip rather than the three-core chip used in the 950 Pro. One of the five cores on the controller is dedicated to host communication, while the other four cores are used for flash management.
Both drives also use Samsung's latest 3D V-NAND tech, which allows it to dramatically increase the number of layers present on each NAND flash module by stacking them vertically, thus increasing capacity without having to reduce the size of the fabrication process. The 950 Pro featured 32-layer NAND, but the 960 Pro and Evo get a bump to 48-layers, enabling Samsung to offer the spacious 2TB version.
The key difference between the Pro and the Evo is the type of NAND used: the Pro uses MLC V-NAND, while the Evo uses the cheaper, more densely packed TLC V-NAND. Random read performance (4KB QD32) on the 512GB 960 Pro is up to 330K IOPS (input/output operations per second), with random writes of up to 330K IOPS. Larger capacities bump that to 440K and 360K IOPS respectively.
Random read performance on the 250GB 960 Evo is 330K IOPS, with random writes of up to 300K IOPS. Larger capacities bump that to 380K and 360K IOPS respectively.
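For a rough sense of how those random-I/O figures relate to raw bandwidth, IOPS can be converted to throughput by multiplying by the block size. The helper below is a back-of-the-envelope sketch, not Samsung's methodology, assuming the 4,096-byte blocks of the QD32 test and decimal gigabytes:

```python
def random_iops_to_gbps(iops: int, block_bytes: int = 4096) -> float:
    """Convert a random I/O rate to throughput in GB/s (decimal GB)."""
    return iops * block_bytes / 1e9

# Figures quoted above for the 4KB QD32 random-read tests:
print(round(random_iops_to_gbps(330_000), 2))  # 512GB 960 Pro -> 1.35
print(round(random_iops_to_gbps(440_000), 2))  # larger 960 Pro -> 1.8
```

Even the peak random rates work out to well under the 3.5GB/s sequential figure, which is the usual gap between random and sequential workloads.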
Power draw details for either drive aren't available just yet, but Samsung claims that thermal throttling is less of an issue: the 960 Pro supposedly lasts 50 percent longer before throttling on a sequential read test. According to AnandTech, the improvement is partly down to a simple copper sticker placed on one side of the drive, which a Samsung engineer said accounts for about 30 percent of the improved thermal performance.
The 960 Pro comes with the same five-year warranty as the 950, or up to 1.2PB written, depending on capacity, while the Evo comes with a mere three-year warranty, or up to 400TB written.

Credentials discovered from workers at 97 percent of Forbes-ranked companies.

Tom Mendelsohn
The UK has proven particularly susceptible to data breaches involving compromised employee account data, according to new research.
Leaked credentials belonging to employees of almost every company in Forbes' list of the 1,000 biggest in the world are available to buy online, threat intelligence firm Digital Shadows has said.
E-mail and password combinations belonging to more than five million people working at 97 percent of the top 1,000 companies are apparently available, including around 300,000 stolen from adult dating sites like Ashley Madison, Adult Friend Finder, and Mate1.
Each British company on the list has an average of more than 9,000 sets of leaked employee credentials available, more than companies in the rest of Europe or North America.
In its report, Digital Shadows said that "in 2016, we have witnessed yet more data breaches made public, including LinkedIn, MySpace, and Dropbox. Data breaches are no longer an aberration; they are the norm."
The LinkedIn data dump alone, it's understood, has put more than 1.6 million corporate accounts into the criminal ecosystem, while nearly 1.4 million Adobe accounts and 1.2 million MySpace accounts have also been compromised. Digital Shadows said:
It’s perhaps of little surprise that the breaches impacting the global 1,000 companies the most were LinkedIn and Adobe—both services that employees can be expected to sign up to with their work accounts. However, there were also less expected sources.
The high level of corporate credentials from MySpace, for example, should cause organisations to pause for thought. Worse still, gaming sites and dating sites also affected organisations. For Ashley Madison alone, there were more than 200,000 leaked credentials from the top 1,000 global companies.
It highlighted five ways in which compromised credentials are used by criminals.
First, companies' public-facing social media accounts can be taken over. Second, criminals can use "spear-phishing," a technique that targets high-value senior executives using compromised internal accounts. Third, credential stuffing can be used to gain access to other internal accounts when employees reuse the same username/password combination across multiple applications. Fourth, post-breach extortion can be used to blackmail users of sensitive or embarrassing sites like Ashley Madison. And fifth, breached datasets can be used to feed botnets, which then send out spam or other malware.
The data is available to buy on the darknet, often reasonably cheaply. What's more, Digital Shadows found that less of it than expected turned out to be duplicated; only around 10 percent of breached credentials were found to be repeats.
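That duplication figure can be measured with a simple set-based comparison across dumps. Here is a minimal sketch with invented data; the e-mail/password pairs below are placeholders, not real breach records:

```python
# Hypothetical mini-corpus of (email, password) pairs from three dumps.
dumps = {
    "dump_a": [("a@corp.example", "pw1"), ("b@corp.example", "pw2")],
    "dump_b": [("a@corp.example", "pw1"), ("c@corp.example", "pw3")],
    "dump_c": [("d@corp.example", "pw4")],
}

# Flatten all dumps, then count entries that appear more than once.
all_pairs = [pair for pairs in dumps.values() for pair in pairs]
duplicates = len(all_pairs) - len(set(all_pairs))
print(f"{duplicates / len(all_pairs):.0%} of credentials are repeats")  # 20% of credentials are repeats
```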
"Simply put, too many people are putting their employer at risk by re-using workplace credentials, such as e-mail addresses and passwords, for their personal lives," Digital Shadows' research analyst Michael Marriott told Ars.
"In our sample of the world’s top 1,000 companies we found that, for companies headquartered in the UK, there are nearly half a million unique leaked credentials being traded by cybercriminals right now," he added.
"Many of these have leaked as a result of being used for clear 'non-work' purposes, such as dating and gaming sites. These compromised credentials hold significant value for cybercriminals and can be used for botnet spam lists, extortion attempts, spear-phishing, and account takeovers. It’s vital that firms get on the 'front foot' and gain cyber situational awareness to spot leaked credentials before it impacts on their business."

Shows like House of Cards and Stranger Things have started a revolution.

Valentina Palladino
Melinda Sue Gordon for Netflix
While Netflix gained popularity by streaming licensed content, the company has been switching gears. According to a Variety report, Netflix wants to make 50 percent of its content original programming over the next few years; the other half will continue to be licensed TV shows and movies.
At the start of 2016, the company announced it would launch 600 hours of original programming, a bump from the 450 hours it released in 2015. Over the next couple of years, Netflix plans to release content owned and produced by the company itself in addition to co-productions and acquisitions. According to Netflix CFO David Wells, the company is currently “one-third to halfway” to reaching its 50 percent goal.
In many cases, Netflix original programming has surpassed the popularity of its licensed content. Shows like House of Cards and Master of None have received numerous awards, and the new show Stranger Things has become a breakout hit in the past few months and has already been renewed for a second season. But Netflix acknowledges that not all of its original programs have been major hits, and the company doesn't believe that they need to be. "We don’t necessarily have to have home runs," Wells is quoted in Variety. "We can also live with singles, doubles, and triples especially commensurate with their cost."
Funding is required to produce that original content, and Netflix has "no immediate plans" to introduce any advertising to its service. Subscription fees not only bring in revenue for the company but also allow it to invest in more original content. Netflix has been in the process of upping its monthly subscription rate to $9.99 for all US users and £7.49 for all UK users, which enables two simultaneous HD streams per account. That's a $2 or £1.50 increase for those who have been paying $7.99 or £5.99 per month for years, and apparently Netflix has seen higher cancellation rates than it expected since the increase. However, the company claims between 33 and 50 percent of users eventually come back to the streaming service.

Dan Goodin
A recently fixed security vulnerability that affected both the Firefox and Tor browsers had a highly unusual characteristic that caused it to threaten users only during temporary windows of time that could last anywhere from two days to more than a month.
As a result, the cross-platform, malicious code-execution risk most recently visited users of browsers based on the Firefox Extended Support Release on September 3 and lasted until Tuesday, a total of 17 days. The same Firefox version was vulnerable for an even longer window last year, starting on July 4 and lasting until August 11. The bug was scheduled to reappear for a few days in November and for five weeks in December and January. Both the Tor Browser and the production version of Firefox were vulnerable during similarly irregular windows of time.
While the windows were open, the browsers failed to enforce a security measure known as certificate pinning when automatically installing NoScript and certain other browser extensions. That meant an attacker with a man-in-the-middle position and a forged certificate impersonating a Mozilla server could surreptitiously install malware on a user's machine. While it can be challenging to hack a certificate authority or trick one into issuing the necessary certificate, such a capability is well within the means of nation-sponsored attackers, who are precisely the sort of adversaries included in the Tor threat model. Such an attack, however, was only viable during the periods when Mozilla-supplied "pins" had expired.
"It comes around every once in a while," Ryan Duff, an independent researcher and former member of the US Cyber Command, told Ars, referring to the vulnerability. "It's weird. I've never seen a bug that presented itself like that."
Certificate pinning is designed to ensure that a browser accepts only specific certificates for a specific domain or subdomain and rejects all others, even if the certificates are issued by a browser-trusted authority. But because certificates inevitably must expire from time to time, the pins must periodically be updated so that newly issued certificates can be accepted. Mozilla used a static form of pinning for its extension update process that wasn't based on the HTTP Public Key Pinning protocol (HPKP). Due to lapses caused by human error, older browser versions sometimes scheduled static pins to expire before new versions pushed out a new expiration date.
During those times, pinning wasn't enforced. And when pinning wasn't enforced, it was possible for man-in-the-middle attackers to use forged certificates to install malicious add-on updates when the add-on was obtained through Mozilla's add-on site. Mozilla on Tuesday updated Firefox to fix the faulty expiration pins, and over the weekend, the organization also updated the add-ons server to make it start using HPKP. Tor officials fixed the weakness last week with the early release of a version based on Tuesday's release from Mozilla.
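To illustrate the failure mode, here is a minimal sketch (not Mozilla's actual code) of static key pinning with a hard-coded expiry; the key values are placeholders. Once the built-in expiry passes, the check silently stops rejecting unknown keys, which is exactly the window the attack required:

```python
import hashlib

# Placeholder byte strings standing in for DER-encoded server public keys.
LEGIT_SPKI = b"--legitimate Mozilla server key (placeholder)--"
FORGED_SPKI = b"--attacker-controlled key (placeholder)--"

# Static pins shipped with a browser build, plus a hypothetical
# build-time expiry (epoch seconds).
PINNED_HASHES = {hashlib.sha256(LEGIT_SPKI).hexdigest()}
PINS_EXPIRE_AT = 1472688000

def pin_check(spki: bytes, now: float) -> bool:
    """Return True if the presented server key should be accepted."""
    if now > PINS_EXPIRE_AT:
        # Pins have expired: enforcement silently lapses, so any
        # CA-trusted certificate passes -- the vulnerable window.
        return True
    return hashlib.sha256(spki).hexdigest() in PINNED_HASHES

assert pin_check(FORGED_SPKI, PINS_EXPIRE_AT - 1) is False  # pin enforced
assert pin_check(FORGED_SPKI, PINS_EXPIRE_AT + 1) is True   # window open
```

Serving pins dynamically via HPKP, as Mozilla's server-side fix does, ties pin lifetime to headers delivered at request time rather than to values baked into a particular release.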
Duff has a much more detailed explanation here.
The vulnerability was first described here by a researcher who goes by the handle movrcx and who complained that his attempts to privately report the weakness to Tor were "ridiculed." Duff eventually confirmed the reported behavior. The irregular windows in which the vulnerability was active likely contributed to some of the skepticism that initially greeted movrcx's report and made it hard to spot the problem.
"I’d be lying if I said luck didn’t play a significant role in the discovery of this bug," Duff wrote in the above-linked postmortem. "If movrcx had tried his attack before 3 Sept or after 20 Sept, it would have failed in his tests. It’s only because he conducted it within that 17 day window that this was discovered."

Peter Bright
It's chosen by default, ready to download and install if you're not paying attention.
For the first year of Windows 10's availability, the operating system was offered as a free upgrade to anyone running a consumer version of Windows 7 or Windows 8.1. To advertise this unusual offer, Microsoft pushed an update known as "Get Windows 10" to users of those operating systems, a move that proved more than a little contentious. The promotion used some shady techniques to trick people into upgrading to Windows 10.
The Get Windows 10 software, however, has finally been purged from user systems. Mary Jo Foley spotted a patch that shipped yesterday, KB3184143, which removes the Get Windows 10 promotional software.
Broadly speaking, the Get Windows 10 program seems to have been successful. Windows 10's uptake was unprecedented for a Windows release, with more than 350 million people now using the operating system—a number that hasn't been updated for several weeks. We hope to hear more at Microsoft's Ignite conference in Atlanta next week. The manner in which the program was operated, however, became increasingly underhanded; toward the end of the promotion, the ads felt straight-up deceptive, as they performed the upgrade even if you clicked the X to dismiss the window. That 350 million users number undoubtedly includes some number of Windows users who wanted to stick with Windows 7 or 8.1 but were tricked into upgrading.
The removal of the software isn't going to undo the reputational harm that Microsoft deliberately caused itself with the aggressive upgrade tactics, but it should at least provide some reassurance that Windows 7 or 8.1 will never again try to push a major update.
That promotion officially ended on July 30, and, for the most part, the advertisements stopped at around that time, too. Needless to say, for all the complaints about the aggression of the upgrade offer before the cut-off, we heard from many people who still wanted a little more time to upgrade and were concerned about being cut off.
If you still want to make the switch, it seems you're in luck, because it's not clear that the free upgrade program has truly ended. Paul Thurrott has been testing the use of Windows 7 or 8.1 serial numbers to install and activate Windows 10, and he reports that they continue to work. They may cease working at some point, but they haven't yet... so if you missed the free upgrade period but still want to switch to Windows 10, it seems that you still have time to do so.
On the one hand, removing the Get Windows 10 app suggests that Microsoft may be winding down the program completely and that the days of using Windows 7 and 8.1 keys with Windows 10 are drawing to a close. On the other hand, the free upgrade program continues to run for anyone using assistive software such as screen readers. There's no formal end date yet for this alternative upgrade scheme, and there appears to be zero enforcement of the assistive technology requirement, so anyone willing to lie has easy access to the Windows 10 upgrade if they want it.

Jon Brodkin
An antenna used by AT&T's Project AirGig.
AT&T is developing wireless technology that uses power lines to guide wireless signals to their destination and potentially deliver multi-gigabit Internet speeds. The technology is experimental and not close to commercial deployment, but it could potentially—in a few years—be used to deliver smartphone data or home Internet.
Project AirGig from AT&T Labs, announced yesterday, revives the possibility of using power lines for Internet service—but in a surprising way. Signals would not travel inside the power lines, but near the lines. "Low-cost plastic antennas and devices located along the power line" send wireless signals to each other, using the power lines as a guide, AT&T said.
“We’re experimenting with multiple ways to send a modulated radio signal around or near medium-voltage power lines,” AT&T’s announcement said. “There’s no direct electrical connection to the power line required, and it has the potential of multi-gigabit speeds in urban, rural, and underserved parts of the world.”
The choice of medium-voltage lines has nothing to do with their voltage, but rather their placement on utility poles. “Those lines are generally highest on the pole and provide the clearest line of sight,” AT&T told Ars.
The power lines themselves apparently don’t do anything to help the signals travel. The power lines simply “serve as a guide for the signals, not an antenna," AT&T said. Exactly what “guide” means in this context is unclear, and AT&T did not provide any further details in response to our questions.
AT&T was also somewhat vague in a call with reporters. "This is not a technology that utilizes the actual power line itself, the actual conductive material. It actually rides alongside it," AT&T CTO Andre Fuetsch said, before adding that he "can't go into much detail at this stage."

From antenna to smartphone, or your house

AT&T is using millimeter wave signals—those above 30GHz—to send data from pole to pole. “Initial testing shows that high-frequency millimeter wave signals result in better performance than lower frequency signals when transmitted along power lines,” AT&T told Ars.
But AT&T said its system provides the flexibility of being able to deliver Internet service using any part of the radio spectrum. While millimeter waves travel along the power lines from one antenna to the next, a signal going from an antenna to a mobile device or home Internet connection would use a different frequency. The signals can be converted to use 4G LTE spectrum or any other licensed or unlicensed spectrum that AT&T has in its arsenal, the company said.
The antennas essentially form a mesh network that connects back to AT&T’s core network for Internet access. AT&T said that its cell towers, or the central offices that provide wired home Internet access, could serve as the origin point for an Internet signal that would then be distributed across the antennas. AirGig could also be configured to work with small cells and distributed antenna systems, AT&T said.
Many AT&T customers in rural areas are suffering from slow, unreliable Internet access delivered over aging copper lines, and some people in AT&T territory can’t get wired Internet access at all because the company hasn’t upgraded its network to accommodate enough homes. AirGig could theoretically solve this problem, and it doesn’t require installation of new towers or cables, but it’s at least a few years away from being deployed widely.
Fuetsch said that the company needs “favorable regulation” and must coordinate with utility companies. AT&T aims to start field trials next year, but it's still looking for “the right global location,” whether that’s in the US or elsewhere, company officials said.
AirGig is about a year behind 5G cellular technology, which went into field trials at AT&T this year but doesn’t have a firm deployment date.
AT&T hopes utility companies will jump on board, as AirGig technology could help them deploy smart grid applications and even detect problems, like encroaching tree branches. “Power companies could use it to pinpoint specific locations, down to the line segment, where proactive maintenance could prevent problems,” AT&T said. “It could also support utility companies’ meter, appliance, and usage control systems.”
AT&T has the system up and running at some of its own facilities, delivering data for 4K TV and other applications. Earlier broadband-over-power-line technologies that sent data through the wires themselves weren't cost-effective and couldn't deliver fast enough speeds, Fuetsch said. “That's why we believe this is such a game changer and why it's revolutionarily different from the old broadband over power line technologies we're familiar with,” he said.

The Sports Xchange
Former World Boxing Champion Floyd Mayweather at the 2016 Rio Summer Olympics in Rio de Janeiro, Brazil, August 17, 2016. The five-time champion retired last September. Photo by Mike Theiler/UPI

Floyd Mayweather said he gave it his best shot.
Conor McGregor said he'd be more than happy to fight Mayweather if the undefeated boxer could come up with the money.
The fight always seemed like a long shot, and on Tuesday the idea of the two squaring off officially died.
"I tried to make the fight happen between me and Conor McGregor," Mayweather told Fight Hype over the weekend. "We wasn't able to make the fight happen, so we must move on."
According to multiple reports, it's unclear just how serious Mayweather was when talking about fighting McGregor.
One of the main stumbling blocks is that the 28-year-old McGregor is still under contract to the UFC and Dana White has said he never heard from McGregor's camp about making the fight happen.
And Mayweather, the five-time champion, has been out of the fight game for a year, having retired in September.
There were negotiations on how to split the proceeds, with each fighter demanding the majority of the purse.

The protective protein also protected DNA in genetically engineered human cells.

By Brooks Hays
An artistic rendering shows what a tardigrade, or water bear, looks like. Photo by Sebastian Kaulitzki/Shutterstock

Tardigrades are thought to be the most durable life form on Earth. The eight-legged, water-dwelling creatures can survive extreme temperatures, intense pressure and seemingly deadly levels of radiation.
New research reveals how the micro-animals -- sometimes called water bears -- protect their DNA from harmful radiation.
Tardigrades are short and fat creatures stretching just half a millimeter when fully grown. They prefer wet environs and are especially common in mosses and lichens, where they feed on dead plant matter and small invertebrates. They're most closely related to nematodes.
A team of scientists at the University of Tokyo recently sequenced the entire genome of the tardigrade species Ramazzottius varieornatus. The results revealed a special protein responsible for shielding the creatures' DNA from harmful radiation.
Researchers named the protective protein Dsup, short for Damage Suppressor.
When scientists engineered human cells to produce the Dsup protein, the cells experienced significantly less radiation damage than unprotected cells when irradiated.
The scientists detailed their findings in a new paper published this week in the journal Nature Communications.
"What's astonishing is that previously, molecules that repair damaged DNA were thought to be important for tolerating radiation," study co-author Takuma Hashimoto explained in a news release. "On the contrary, Dsup works to minimize the harm inflicted on the DNA."
The protective power of Dsup is just the first of what scientists expect to be many revelations in the wake of the sequencing of the water bear's genome.
Scientists expect further research to reveal other genes and Dsup-like proteins key to the tardigrade's ability to survive being boiled, frozen and exposed to the vacuum of space, among other feats of hardiness.

TUESDAY, Sept. 20, 2016 -- Introducing babies to eggs or peanuts early on may help reduce their risk of food allergies, a new analysis finds.
Researchers reviewed 146 previous studies that examined when babies were given foods that often trigger reactions, as well as their risk of food allergies or autoimmune diseases.
They discovered that the timing of food introduction may affect allergy risk, but they found no similar link for autoimmune disease.
The researchers reported with "moderate certainty" that babies who were given eggs when they were 4 months to 6 months old had a lower egg allergy risk. And children given peanuts between 4 months and 11 months of age had a lower peanut allergy risk than those who were older.
The study, published Sept. 20 in the Journal of the American Medical Association, said early introduction could head off 24 cases of egg allergy per 1,000 people and 18 cases of peanut allergy per 1,000 people.
The evidence was not as strong for early introduction of fish.
The researchers found low-certainty evidence that giving a baby fish between 6 and 12 months of age would reduce the risk of nasal allergies or hay fever (allergic rhinitis), and very low-certainty evidence that doing so between 6 and 9 months of age would reduce the risk of food allergies.
The evidence surrounding gluten was clearer: Timing does not appear to affect the likelihood of celiac disease, an immune disorder that causes bowel damage.
Guidelines on food introduction have been relaxed in recent years. Parents are no longer advised to delay offering foods like eggs and peanuts to their children for fear of triggering allergies, the study authors said.
More study would be needed before revising current guidelines, the authors said.
Dr. Matthew Greenhawt, an allergy specialist at Children's Hospital Colorado in Aurora, wrote an editorial accompanying the study.
"Delay of introduction of these foods may be associated with some degree of potential harm, and early introduction of selected foods appears to have a well-defined benefit," Greenhawt wrote.
"These important points should resonate with allergy specialists, primary care physicians, and other health care professionals who care for infants, as well as obstetricians caring for pregnant mothers, all of whom are important stakeholders in effectively conveying the message that guidance to delay allergen introduction is outdated," he said.
More information
The American College of Allergy, Asthma & Immunology provides more information on food allergies.
Copyright © 2016 HealthDay. All rights reserved.

