May 2016

Jon Brodkin

Aurich Lawson / Thinkstock

It's Memorial Day, the entire Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor the security world. Our Jon Brodkin willingly embraced a firsthand experience with low-grade scammers in April 2013, and we're resurfacing his piece for your holiday reading pleasure.
It all began with an annoying text message sent to an Ars reader. Accompanied by a Microsoft Office logo, the message came from a Yahoo e-mail address and read, "Hi, Do u want Microsoft Office 2010. I Can Remotely Install in a Computer."

An offer I couldn't refuse.

The recipient promptly answered "No!" and then got in touch with us. Saying the spam text reminded him of the "'your computer has a virus' scam," the reader noted that "this seems to be something that promises the same capabilities, control of your computer and a request for your credit card info. Has anyone else seen this proposal?"
I hadn't seen this particular scam, so there was only one thing to do: take the scammer up on his offer and let him go to town on a spare copy of Windows. Ultimately, I did get that copy of Microsoft Office, and there were no viruses sent my way. Even when I failed to pay the $30 fee we had agreed upon, the scammer didn't bother attacking my computer in any way. He was just a nice guy, basically—making a dishonest living from the comfort of his own home.

An offer from Itman Koool

I e-mailed "Itman Koool" (short for "IT man," apparently) last week from a spare Gmail account, saying, "i got youre message about a free microsoft office 2010 and was wondering how i get that." The conversation proceeded like this:
Itman Koool: yes i charge only $30 to install. you can pay me after the installaton
Me: that sounds like a good price for microsoft. what do i do next?
Itman Koool: ok open ur computer go to google.com , search TeamViewer , download
Me: ok, I've downloaded teamviewer, what should I do now
Itman Koool: Ok tell me id and pass
TeamViewer is a remote access tool for providing PC support and conducting online meetings. The software's intended purposes are legitimate, but it can be abused.
Before going through with this experiment, I got some safety tips from security expert Troy Hunt, who is well versed in the art of "scamming the scammers."
On Hunt's advice, I set up a fully updated and patched copy of Windows 7 in a virtual machine, with no read/write privileges to the host system, and installed antivirus software with the latest virus definitions. I didn't think Itman's intent was to load my system with viruses, but even if he did, the risk to my computer would be small—I could just wipe out the virtual machine and reinstall Windows. Borrowing a tactic from Hunt, I placed a file named "passwords" into my documents folder to see if Itman would access it.

“No, I don't work for Microsoft”

So this past weekend, after I gave my TeamViewer credentials to Mr. Koool, he took control of my Windows 7 virtual machine.
His mouse cursor started moving around—extremely slowly, as if he were distracted or wasn't quite sure what he was doing—and I started asking questions using the TeamViewer chat window.
Me: so what are you doing now?
Itman Koool: Im going to install microsoft office 2010
Me: do you work for microsoft?
Itman Koool: No i dont i work indivisually
Itman Koool: How much r u gonna pay ?
Me: you said it was 30, right?
Itman Koool: yes
By this point, Itman had opened Chrome and logged into a Yahoo Mail account with a username containing the letters "Coolboyusa" and some numbers. His list of e-mails showed he'd been busy looking for customers and had made at least some money. There was one "Notification of payment received" from PayPal, as well as various annoyed replies to his spam messages. "Hi, how the expletive did you get this number?" went one. Two other replies simply said "Who is this?" and another said "Stop."
The message Itman was looking for read "[username] has sent you a file." Opening this e-mail led Itman to a file sharing site called WeTransfer, where he downloaded a 654MB file titled OFFICE2010.zip.
"ok its downloadin.. its gonna take sometime," Itman told me. I asked Mr. Koool how he finds people to text about the Office offer, to which he answered that he sends the messages to numbers with certain prefixes and generates the last four digits himself. He then asked what number I had received his message on, and I let that question pass by without answering.
The file downloaded in just a few minutes, making me thankful for my 50Mbps home Internet service. How much time had Itman wasted waiting for the download to complete over slower Internet connections?
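Back-of-the-envelope arithmetic shows why connection speed matters so much here; a sketch, using the article's 654MB file and 50Mbps line (the slower 5Mbps figure is a hypothetical comparison point, and real transfers carry protocol overhead on top of this idealized number):

```python
# Idealized download-time arithmetic for a 654MB file.
# 654 MB ~= 654 * 8 = 5,232 megabits; time = size / throughput.
FILE_MEGABITS = 654 * 8

def download_minutes(mbps):
    """Return the idealized download time in minutes at a given link speed."""
    return FILE_MEGABITS / mbps / 60

print(f"50 Mbps: {download_minutes(50):.1f} min")  # the author's connection
print(f"5 Mbps:  {download_minutes(5):.1f} min")   # a hypothetical slower line
```

At 50Mbps the transfer takes under two minutes; at a tenth that speed, closer to twenty.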
Itman went into a folder titled "office2010proplusfiles," which contained two applications—office2010proplussetup and office2010proplusactivate—as well as a text document titled office2010propluskey.
Suddenly, Microsoft Security Essentials, the antivirus program I had installed, noticed something was amiss. "Security Essentials detected a potential threat that might compromise your privacy or damage your computer. Your access to this item might be suspended until you take action," the antivirus program said. It labeled the flagged program as "HackTool:Win32/Keygen," referring to a program meant to create a fake license key.
Itman wasn't worried. "don't worry, it's nothing. it just acts like that," he wrote. "I hate that thing," I replied.
Very slowly, clicking the wrong links a few times and having to start over, Itman found his way into the Windows firewall settings and allowed his Office installer to proceed. While he looked through a list of installed programs, I wondered if Itman would notice "VMware Tools"—which might tip him off that this was a virtual machine and that perhaps I wasn't really just looking for a copy of Microsoft Office.
Itman went forth, installing Microsoft Office Professional Plus 2010. He grabbed what I suppose was a temporary license key from the "office2010propluskey" text file in order to complete the installation. After installation, he opened "office2010proplusactivate," which opened a key generation program to give the program a more permanent activation (or at least one that lasts six months).

Seems legit.
Itman opened up Excel to demonstrate that Office had been installed. Now, he wanted money. He went into Chrome and opened a page where he set up a PayPal payment link and entered his e-mail address as the payment recipient and $30 in the amount field. All that was left was for me to pay up.

“What is PayPal? I’ll just mail you a check”

A small part of me felt guilty about not paying Itman for his hard work and generosity, but I didn't want even a small risk of my financial accounts being compromised. Moreover, I was curious about what someone like Itman does when he doesn't receive payment. Would he bombard my computer with viruses? Would he at least disable Microsoft Office, promising to re-enable it only after I pay?
Itman Koool: ok u can pay here
Me: what is that website
Itman Koool: this is paypal website.
Me: i don't have an account
Itman Koool: no u don't have to have account u can pay by credit or debit
Me: i'll just mail you a check
Itman Koool: just pay here
Me: what is your address?
Itman Koool: i don't accept checks. u can pay here
It went on like that for a little while, with me offering to send cash to Koool and him turning that down as well. I could buy a prepaid PayPal card from 7-11, he said. "I Need the money right now. i can wait 1hr," Itman said in the TeamViewer chat window. A few minutes later, I got an e-mail message saying only "u there".

I never did find out Itman's address.
I didn't reply, but I left TeamViewer on for nearly two hours. Nothing else happened. I closed TeamViewer and ran a virus scan, which turned up the same KeyGen file Security Essentials had previously flagged during the Office installation.

Microsoft's least favorite type of file.
I let Security Essentials wipe that file off my computer, but the activated version of Microsoft Office was still installed, functioning properly. Itman apparently never touched the "passwords" file I had created to tempt him.
I haven't heard from Itman again. And in case you were wondering, I uninstalled Microsoft Office.

Jonathan M. Gitlin


Americans have honored those lost in war in one form or another since just after the Civil War. Memorial Day as we know it—a federal holiday on the last Monday in May—is more recent, dating back to 1968. But the sentiment is the same—remembering those who paid the ultimate price in defense of their country. Since a recent trip happened to take us by the National Museum of the United States Air Force in Dayton, Ohio, we've decided to celebrate it here at Ars by bringing you this gallery of some fine-looking warbirds.
The museum can be found at Wright-Patterson Air Force Base. It's truly vast—even giants of the air like the B-36 and B-52 can seem small underneath the roof of one of its hangars. It also has some rather significant planes in its collection, notably Bockscar, one of the two B-29s that dropped atom bombs on Japan in World War II (the Enola Gay lives at the Smithsonian's Udvar-Hazy collection in Dulles, VA).
The collections under those massive hangars are organized chronologically, from the beginning of flight through World War II, Korea, Vietnam, the Cold War, through to today. Sadly, we weren't able to check out one of the museum's most fascinating aircraft, the remaining North American XB-70 Valkyrie; the new hangar for research and experimental aircraft (and old Air Force Ones) doesn't open until next week.
Listing image by Jonathan Gitlin

John Timmer
It's Memorial Day, the entire Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor news from the world of climate change. Our John Timmer examined the claims that scientists are in it solely for the money in February 2011, and we're resurfacing his piece for your holiday reading pleasure.
One of the more unfortunate memes that makes an appearance whenever climate science is discussed is the accusation that, by hyping their results, climate scientists are ensuring themselves steady paychecks, and may even be enriching themselves. A Google search for "global warming gravy train" pulls out over 50,000 results (six of them from our forums).
It's tempting to respond with indignation; after all, researchers generally are doing something they love without a focus on compensation. But, more significantly, the accusation simply makes no sense on any level.

You can't make a bundle pushing the consensus

So, are there big bucks to be had in climate science? Since it doesn't have a lot of commercial appeal, most of the people working in the area, and the vast majority of those publishing the scientific literature, work in academic departments or at government agencies. Penn State, home of noted climatologists Richard Alley and Michael Mann, has a strong geosciences department and, conveniently, makes the department's salary information available. It's easy to check, and find that the average tenured professor earned about $120,000 last year, and a new hire a bit less than $70,000.
That's a pretty healthy salary by many standards, but it's hardly a racket. Penn State appears to be on the low end of similar institutions, and is outdone by two other institutions in its own state (based on this report). But, more significantly for the question at hand, we can see that Earth Sciences faculty aren't paid especially well. Sure, they do much better than the Arts faculty, but they're somewhere in the middle of the pack, and get stomped on by professors in the Business and IT departments.
If they really wanted to make money at Penn State, they'd be coaching football or basketball. If they wanted to make money doing the sort of data analysis or modeling of complex systems that climatologists perform all the time, of course, they should go to Wall Street.
It's also worth pointing out what they get that money for, as exemplified by a fairly typical program announcement for NSF grants. It calls for studies of past climate change and its impact on the weather—pretty typical stuff.
This sort of research could support the current consensus view. But it just as easily might not. It's impossible to tell before the work's done. And that's true for pretty much every scientific funding opportunity—you can't dictate the results in advance.
So, even if the granting process were biased (and there's been no indication that it is), there is no way for it to prevent people from obtaining data that poses problems for the current consensus. The granting system is also set up to induce people to publish it, since receiving a grant that doesn't produce scientific papers can make it much less likely for a professor to obtain future funding.

There's not much money in climate (or green energy)

Maybe the money is in the perks that come with grants, which provide for travel and lab toys. Unfortunately, there's no indication that there's lots of money out there for the taking, either from the public or private sector.
For the US government, spending on climate research across 13 different agencies (from the Department of State to NASA) is tracked by the US Climate Change Science Program. The group has tracked the research budget since 1989, but not everything was brought under its umbrella until 1991. That year, according to CCSP figures, about $1.45 billion was spent on climate research (all figures are in 2007 dollars). Funding peaked back in 1995 at $2.4 billion, then bottomed out in 2006 at only $1.7 billion.
Funding has gone up a bit over the last couple of years, but it's at best brought us back to somewhere around the 1995 peak (not adjusting for inflation). It's clearly not a growth field, and it's not even one that's especially well funded to start with—the NIH alone has a $31 billion budget.
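Using the CCSP figures above (all in 2007 dollars), the scale of the decline is easy to check; a quick illustrative calculation:

```python
# Climate research funding, per the CCSP figures cited above (2007 dollars).
peak_1995 = 2.4      # billions, the 1995 peak
trough_2006 = 1.7    # billions, the 2006 trough
nih_budget = 31.0    # billions, the NIH annual budget, for comparison

decline = (peak_1995 - trough_2006) / peak_1995
share_of_nih = trough_2006 / nih_budget

print(f"Decline from peak: {decline:.0%}")                  # roughly 29%
print(f"As a share of the NIH budget: {share_of_nih:.1%}")  # roughly 5.5%
```

In other words, the field lost nearly a third of its real funding between 1995 and 2006, and even at its trough the entire budget was a small fraction of what one biomedical agency spends each year.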
Not all of this money went to researchers anyway; part of the budget goes to NASA, and includes some of that agency's (rather pricey) hardware. For example, the Orbiting Carbon Observatory cost roughly $200 million, but failed to go into orbit; its replacement cost another $170 million.
Might the private sector make up for the lack of government money? Pretty unlikely. For starters, it's tough to identify many companies that have a vested interest in the scientific consensus. Renewable energy companies would seem to be the biggest winners, but they're still relatively tiny.  In contrast, half of Fortune's top 10 global companies are in fossil fuels.
So, despite sporadic accusations otherwise, climate researchers are scrambling for a slice of a shrinking government-funded pie, and the resources of the private sector are far, far more likely to go to groups that oppose their conclusions.

They lose by winning

If you were paying careful attention to that last section, you would have noticed something funny: the industry that seems most likely to benefit from taking climate change seriously produces renewable energy products. However, those companies don't employ any climatologists. They probably have plenty of space for engineers, materials scientists, and maybe a quantum physicist or two, but there's not much that a photovoltaic company would do with a climatologist.
And that's generally true. Most of the places we'd look to for helping solve climate change would have no reason to employ a climatologist. Even by convincing the public of their findings—namely, climate change is real, and could have serious impacts—the scientists are not doing themselves any favors in terms of job security or alternative careers.
But, surely, by convincing the public, or at least the politicians, that there's something serious here, they ensure their own funding? That's arguably not true either. You can contrast the flat funding in climate science with the money going to clean energy. In 2012, funding for clean energy research was already more than twice that of climate science; just two years later, in 2014, it was more than three times as much.
This really shouldn't be a surprise. Climatologists are well equipped to identify potential problems, but very poorly equipped to solve them; it would be a bit like expecting an astronomer to know how to destroy a threatening asteroid.
The solutions to problems related to climate change are going to come in areas like renewable energy, carbon sequestration, and efficiency measures; that's where most of the current administration's efforts have focused. None of these are areas where someone studying the climate is likely to have a whole lot to add. So, when they advocate that the public take them seriously, they're essentially asking the public to send money to someone else.
To sum up: climate research doesn't pay well, the amount of money dedicated to it has been largely flat, and if the researchers were successful in convincing the public that climate change was a serious threat, the response would be to give money to someone else. If you come across someone arguing that scientists are in it for the money, then you can probably assume they are willing to make arguments without getting their facts straight.

Nate Anderson
Don't be stingy guys.
It's Memorial Day, the entire Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor the world of ISPs—especially how the major players handle big data users. Our Nate Anderson looked at the economic side of the decision in July 2010, and we're resurfacing his piece for your holiday reading pleasure.
Just over a year ago, Time Warner Cable rolled out an experiment in several cities: monthly data limits for Internet usage that ranged from 5GB to 40GB. Data costs money, and consumers would need to start paying their fair share; the experiment seemed to promise an end to the all-you-can-eat Internet buffet at which contented consumers had stuffed themselves for a decade. Food analogies were embraced by the company, with COO Landel Hobbs saying at the time, "When you go to lunch with a friend, do you split the bill in half if he gets the steak and you have a salad?"
In the middle of the controversy, TWC boss Glenn Britt told BusinessWeek something similar, though with less edible imagery. "We need a viable model to be able to support the infrastructure of the broadband business," he said. "We made a mistake early on by not defining our business based on the consumption dimension."
This basic argument has a compelling logic—pay for what you consume—and it came with a side order of "implied apocalypse." Unless a major shift in pricing happens in the near future, TWC's Internet business won't be "viable" and the infrastructure won't keep pace with demand.
This key assertion underlies numerous industry experiments with consumption pricing (AT&T just wrapped up a trial of its own tight data caps in a few test markets, and other ISPs have mooted the idea for years). Few consumers are in a position to judge such claims; maybe the sky is falling. Maybe home Internet use is unsustainable without far more caps or far less data. Maybe those Netflix and Hulu users really are pigs at the broadband trough.
But there's reason to doubt. Big ISPs usually rely on peered connections to other major ISPs, connections which incur no per-bit cost. As for the cables in the ground, they've been there for years. The equipment back at the headend must be installed once, after which it runs for years. Cable node splits and DOCSIS hardware upgrades are relatively cheap. Requesting one additional bit does not necessarily incur any additional charge to the ISP.
If most Internet costs are fixed (and the National Broadband Plan agrees that they are), and if bandwidth is dirt cheap, what "charges" are heavy Internet users ringing up for ISPs like Time Warner? As a New York Times writer summed it up in the middle of last year's debate:
I tried to explore the marginal costs with Mr. Hobbs. When someone decides to spend a day doing nothing but downloading every Jerry Lewis movie from BitTorrent, Time Warner doesn’t have to write a bigger check to anyone. Rather, as best as I can figure it, the costs are all about building the network equipment and buying long-haul bandwidth for peak capacity.
If that is true, the question of what is "fair" is somewhat more abstract than just saying someone who uses more should pay more. After all, people who watch more hours of cable television don’t pay more than those who don’t.
Mr. Hobbs declined to react to my hypothesis about how costs are almost all fixed costs.
To get some answers, we dug into TWC's financial statements, then spoke to the company and to its critics. One thing quickly became clear: it's good to be an ISP. In fact, it's better than being a cable operator, since there are no multibillion-dollar payments to content creators. As TWC said in a recent filing, "Once again, High Speed Data was our best performing Primary Service Unit category."

A very good year

TWC's revenues from Internet access have soared in the last few years, surging from $2.7 billion in 2006 to $4.5 billion in 2009. Customer numbers have grown, too, from 7.6 million in 2007 to 8.9 million in 2009.
But this growth doesn't translate into higher bandwidth costs for the company; in fact, bandwidth costs have dropped. TWC spent $164 million on data contracts in 2007, but only $132 million in 2009.
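A quick per-subscriber calculation from those figures makes the point concrete (this is illustrative arithmetic derived from the numbers above, not a figure TWC itself reported):

```python
# TWC bandwidth spend per Internet subscriber, from the figures above.
spend_2007 = 164e6   # dollars spent on data contracts, 2007
spend_2009 = 132e6   # dollars spent on data contracts, 2009
subs_2007 = 7.6e6    # Internet subscribers, 2007
subs_2009 = 8.9e6    # Internet subscribers, 2009

per_sub_2007 = spend_2007 / subs_2007  # annual bandwidth cost per subscriber
per_sub_2009 = spend_2009 / subs_2009

print(f"2007: ${per_sub_2007:.2f} per subscriber per year")  # about $21.58
print(f"2009: ${per_sub_2009:.2f} per subscriber per year")  # about $14.83
```

Per subscriber, bandwidth cost the company roughly a third less in 2009 than in 2007—even as each subscriber was presumably using more data.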
What about investing in its infrastructure? That's down too as a percentage of revenue. TWC does spend billions each year building and improving its network ($3.2 billion in 2009), but the raw number alone is meaningless; what matters is relative investment, and it has declined even as subscribers increased and revenues surged. "Total CapEx [capital expenses] as a percentage of revenues for the year [2009] was 18.1 percent versus 20.5 percent in 2008," said the company a few months ago.
In fact, CapEx has declined for the industry as a whole. As the National Broadband Plan noted, the big ISPs invested $48 billion in their networks in 2008 and $40 billion in 2009. (About half of this money can be chalked up to broadband; the rest of the improvements were done to aid cable or phone service.)
To recap: subscribers up, revenues up, bandwidth costs down, infrastructure costs down. This might seem like a textbook case of "viability"; what were execs like Britt and Hobbs talking about last year when data caps were held up as a necessary safeguard against doom?

It's about bandwidth labor

Several months ago, while on a business trip to Manhattan, I entered a nondescript building near the Flatiron building and rode the elevator to the top. Inside was one of TWC's main New York operations centers, hosting an astonishing array of cable and Internet gear. But the real showpiece was the monitoring room, a darkened room with control hardware, computers, and a wall of TVs showing every cable channel currently running out over TWC's network.
It looked brand new and obscenely expensive. Engineers slipped in and out in silence. A huge pile of boxes on the floor held a new set of replacement TVs. When I make my career shift from ink-stained wretch to Evil Genius, this is exactly the sort of room I will build in order to plot my world domination.
"It's not a cheap endeavor to run a network like we do," said TWC's tweeting VP of Public Relations, Alex Dudley, when I had spoken to him the week before. Here was an obvious reminder of what he meant.
This point is hammered home by most ISPs—the billions of dollars of new investment, the upgrades, the capacity building. But it's a point only meaningful in the context of revenues. A company's financials don't lie, and TWC's financials showed a declining percentage of revenue spent on infrastructure even as profits soared and bandwidth costs dropped. I pressed Dudley on Glenn Britt's statements about viability. If these are problems, they're problems most companies want to have.
Britt is "a long-term-view kind of guy," Dudley said, and with broadband use surging, "all of the ancillary costs affiliated with broadband are going up." This didn't quite compute, since bandwidth and network investment were actually declining as percentages of revenue.
But according to Dudley, those two numbers don't tell the whole story. TWC's single biggest expense for Internet access is not network investment or bandwidth. It's labor.
As Internet use increases, TWC techs, engineers, and executives need to make adjustments such as DOCSIS upgrades at the cable company headend or "node splits" that divide a shared cable loop in two when bandwidth use hits certain metrics. Paying all of these people costs money, and those costs increase as the network is more heavily used.
(This differs from how Landel Hobbs defended the company at the height of the backlash against TWC last year. He quite clearly stated that bandwidth creates real costs for the company and that those need to be covered. "For those who want to use a tremendous amount of bandwidth, there should be a charge, because that costs money," he told the Times.)
Besides, Dudley said, TWC does invest plenty of money in raw infrastructure. If CapEx spending was down in 2009, chalk it up to the company's video subscribers, which declined a bit over 2008. One big piece of TWC's CapEx is buying all those cable set-top boxes (which are then rented on a monthly basis by subscribers), and fewer subscribers mean fewer new boxes to purchase.
The company's critics couldn't disagree more with this entire line of argument.

"Greed"

"Hogwash," says Free Press research director S. Derek Turner. "Their OpEx [Operating Expenses, which include labor] is not growing; if anything, it's steady. Their CapEx is decreasing both in overall terms and as a percentage of revenue."
Turner has little patience for the "woe is me" arguments that ISPs trot out to defend a shift to data caps or per-bit pricing. Free Press, a constant critic of the big ISPs, says it has no philosophical problem with a move to a consumption model for broadband—but such a shift should accurately reflect costs, not serve as an excuse to gouge customers by companies already swimming in cash.
TWC's data capping trial in 2009 featured "literally ridiculous overage amounts that had no relation to underlying costs," Turner said. And the danger isn't just to consumer pocketbooks, it's to the entire Internet ecosystem. Who will start using the next high-bandwidth YouTube or Netflix when doing so results in big fees? If not done right, consumption pricing "will cripple innovation."
Turner concedes that networks cost money to build and maintain, but he argues that the costs are wildly overstated. For instance, Comcast is one of the ISPs furthest along with DOCSIS 3.0 upgrades, which do require a labor-intensive card swap at the headend and new modems in people's homes. But even as it makes this investment, the company's OpEx and CapEx are declining. As for node splits, many are "virtual" these days and don't require much labor.
Bandwidth has become dirt cheap; despite the fear-mongering about the "exaflood" and the "zettaflood" and (presumably) the "yottaflood," bandwidth costs drop significantly every year. As the National Broadband Plan noted earlier this year, international bandwidth has grown by 66 percent each year for the last five years—but the cost of IP transit has dropped 22 percent a year at the same time.
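Compounding those two National Broadband Plan rates over the five-year span shows how dramatically the per-bit economics shift:

```python
# Five years of compounding, per the National Broadband Plan figures above:
# international bandwidth grew 66%/year; IP transit prices fell 22%/year.
years = 5
capacity_factor = 1.66 ** years     # cumulative bandwidth growth
price_factor = (1 - 0.22) ** years  # fraction of the per-bit price remaining

print(f"Capacity: about {capacity_factor:.1f}x what it was")       # ~12.6x
print(f"Per-bit price: about {price_factor:.0%} of 5 years ago")   # ~29%
```

Capacity grew more than twelvefold while the per-bit price fell to under a third of its starting level—hardly the profile of an input cost spiraling out of control.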
Congestion can happen even on networks with tremendous bandwidth, but consumption pricing doesn't generally care about congestion (if it did, ISPs could exempt all traffic in the middle of the night, for instance, when congestion is generally absent).
So why the push for consumption pricing? Turner has his own theory.
"This is nothing more than greed," he says. "The industry may be maturing, and therefore margins aren't rapidly increasing the way they were." Consumption pricing could be a way to boost margins. As for ISP complaints that heavy users cost them more money, those are just "excuses that they give."

Still rare

But low data caps are still not widespread in the US wireline business. That's due in large part to public resistance to the idea. When TWC expanded its capping trial last year, it took only a couple of weeks for a New York Congressman (the now-disgraced-and-resigned serial tickler of his male staffers, Eric Massa) to pledge a "Broadband Internet Fairness Act" that would "prevent job killing broadband downloading caps."
Despite a few trials (sorry, Beaumont, Texas), consumption Internet pricing remains unusual. Unless ISPs find a way to make a more compelling case for its necessity—and its fairness—it may remain so.

Cyrus Farivar
President Toomas Hendrik Ilves, in conversation with Cyrus Farivar. Filmed by Chris Schodt/Edited by Jennifer Hahn.
PALO ALTO, Calif.—I don’t usually dress up for interviews, but I also don’t usually interview heads of state, either.
On a recent afternoon, I waited patiently in a generic conference room with yellow-tinted walls at the Westin Hotel, dressed in a grey suit and a tie, eagerly anticipating the arrival of Estonian President Toomas Hendrik Ilves. My videographer, Chris Schodt, busily set up his camera and light rig.
Minutes before his arrival, a Secret Service agent came by and introduced himself—he didn’t search us. As we made small talk with him, he did show us his red lapel pin, identifying him as an on-duty agent wearing the color-of-the-day—a signal to other agents that he’s friendly. (How do agents find out what color it is that day? Apparently via a Windows phone app!) The agent disappeared back into the hallway. We didn’t see him again.
I tried to make myself useful by removing some of the extraneous chairs and bottled water from the camera’s frame, but mostly I just waited around. Soon enough, in strolled President Ilves, dressed in an impeccable grey three-piece suit and his trademark bow tie.
While I’d met and interviewed him before, I was still a little nervous. After all, this is a guy with a deep knowledge of history, political science, and social science: the last time I saw him, about 2.5 years earlier at the United Nations, he was invoking the Peace of Westphalia, which ended the Thirty Years War in mid-seventeenth-century Europe, and comparing it to present-day Internet policy. (Indeed, he didn’t disappoint during our conversation, using the phrase "Lockean democracy" and even tossing in a "sine qua non" for good measure. No American politician I’ve ever met talks like this!)
The Estonian president has a fascinating background. He was born in 1953 to Estonian parents living in Sweden, who had fled their Soviet-occupied homeland. During his childhood, the family moved to New Jersey, and Ilves eventually earned degrees from Columbia University and the University of Pennsylvania; as a result, he speaks flawless American English.
After Estonia regained its independence in 1991, Ilves served as ambassador to the United States and Canada from 1993 to 1996, then later as foreign minister, a member of parliament, and a member of the European Parliament before being elected president in 2006. He was re-elected in 2011 to his second and final five-year term.
But just because the end of his time in office is rapidly approaching doesn't mean Ilves is out of things he wants to work on, both inside and outside of the Kadriorg Presidential Palace.
We talked for nearly an hour, touching on various topics, including digital prescriptions, e-residency, and his staunch support of the European Digital Single Market (DSM), an ambitious goal that seeks to make commerce flow as smoothly across the 28-member bloc as it does in the United States.
Estonian President Toomas Hendrik Ilves (left), with Ars editor Cyrus Farivar.
Chris Schodt

We have the technology

When I asked President Ilves how he views Estonia’s technological, social, and cultural changes from 2006 until now, the first thing he mentioned was the advent of fully digital prescriptions. Estonia, like nearly every other EU member state, has universal health care. Since 2002, Estonia has issued digital ID cards to all citizens and legal residents. These cards unlock a "citizen’s portal" that moves all kinds of government services entirely online: essentially any interaction with the government, from paying taxes to voting to picking up a prescription, can be done over the Internet.
"In the United States, 5,000 people die a year because of doctor’s bad handwriting," he said. "It’s very simple. You go to the doctor, and he writes the prescription in the computer, and you go to any pharmacy in the country, and you stick your card in the reader, and you identify yourself, and you get your prescription."
As he pointed out repeatedly, "the stumbling blocks are not technological" but bureaucratic.
In many ways, the European Union is far more federated than the United States. After all, its 28 nation-states, each with its own language and traditions, have bound themselves into a loose political union with a (mostly) unified economy and freedom of movement, yet in most cases they retain their own bureaucratic systems. But that’s slowly starting to change.
Late last year, Estonia and Finland became the first two countries to open a cross-border data exchange, making such transfers seamless. This would allow, for example, an Estonian in Finland, or a Finn in Estonia, to access services such as prescriptions away from home with no issue.
"You have different countries and different countries subsidizing different drugs to different degrees, so you have to balance all that out, so people don’t go cross-border for arbitrage," the president said. "Finland and Estonia are now working together on Version 7 [of X-Road], it would allow us to use all of these services across border, across the countries, at least across Northern Europe. I say Northern Europe, because I don’t think the rest of Europe is too keen on these kinds of things."

He just wants to spend €0.99, is that so much to ask?

What Estonia and Finland are doing is a step towards the DSM—but there remain all kinds of national-level laws that stop Europe from being truly unified.
"Take iTunes," President Ilves continued. "iTunes are based on credit cards. Credit cards are national. I cannot buy an iTunes record for my wife who has a Latvian credit card. I cannot buy her an iTunes record because I have an Estonian iTunes. This is true of virtually everything that is connected to digital services. And certainly this is why Estonia is at the forefront of the European Digital Single Market. As I like to say, it’s easier to ship a bottle of Portuguese wine from southern Portugal in the Algarve and sell it in northern Lapland, than it is for me to buy an iTunes record across the Estonian-Latvian border."
While President Ilves looks forward to erasing borders for digital services, he does want to use the power of national sovereignty to make the state the issuing authority of a person’s identity online.
"A secure identity is the sine qua non for any kind of process for technology in general," he added. "The new role in this age is the state as the guarantor of your identity."
He noted that as an e-resident, for example, I could now send "very encrypted" e-mail that was linked to my own identity. The recipient knows with 100 percent certainty that I was the one who sent the e-mail, because in order to issue the card and the digital identity behind it, Estonia not only verified my passport but also took my digital fingerprints.
"We know who you are," he said. "This does touch upon one other thing. We are strongly against any kind of backdoors, because basically the whole system would collapse if there were a backdoor. The whole system is based on trust. The state, on its side, has to offer that trust."
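Estonia's actual system relies on keys held in the ID card's chip and qualified certificates, but the principle behind an identity-backed signed e-mail can be sketched with a toy RSA-style sign/verify in Python. The numbers below are deliberately tiny and insecure; they exist only to illustrate that a value produced with the private key (which only the cardholder controls) can be checked by anyone holding the public key.

```python
import hashlib

# Toy RSA-style signature -- deliberately tiny, insecure parameters,
# purely to illustrate the idea of an identity-backed signature.
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 413          # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def sign(message: bytes) -> int:
    """Only the holder of d (the cardholder) can produce this value."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"Tere! This mail really is from me."
sig = sign(msg)
print(verify(msg, sig))   # True; a modified message or a forged key fails the check
```

A real deployment would use full-size keys, proper padding, and certificates chaining back to the state as the trust anchor, which is exactly the role Ilves describes.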
Listing image by Chris Schodt

Eric Berger
The spring of 1961 was a time of uncertainty and insecurity in America. The Soviets had beaten the United States to space four years earlier with Sputnik, and in April 1961, they flew Yuri Gagarin into space for a single orbit around the planet. Finally, on May 5th, America responded by sending Alan Shepard into space, but he only made a suborbital flight.
Few would have predicted then that just five years later the United States would not only catch the Soviets in space but surpass them on the way to the moon. Perhaps that is the greatness of John F. Kennedy, who found in such a moment not despair, but opportunity. When Kennedy spoke to Congress on May 25th, 55 years ago, NASA hadn’t even flown an astronaut into orbit. Yet he declared the U.S. would go to the moon before the end of the decade.
“No single space project in this period will be more exciting, or more impressive, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish,” Kennedy told Congress. “In a very real sense it will not be one man going to the moon, it will be an entire nation. For all of us must work to put him there.”
This was such a bold statement that some NASA personnel at the time were incredulous. A few years ago, legendary Mercury flight director Chris Kraft recalled thinking, “How the hell are we going to do that?” A little more than a year later Kennedy reiterated his ambitions even more eloquently in a speech at Rice University in Houston. Kennedy’s desire to surpass the Soviet Union led to arguably the greatest human technological achievement of the 20th century—the Apollo moon landings.
But as NASA contemplates undertaking an even greater adventure in the coming decades—sending humans safely to the surface of Mars and back—it's worth remembering exactly why Kennedy put America on a course to the moon. Those historical lessons remain relevant today, as the space agency attempts to muster the will and funding to send humans beyond low-Earth orbit for the first time since 1972.
During a September 1962 visit to Houston, President Kennedy told a crowd of 35,000 at Rice Stadium, "We intend to become the world's leading spacefaring nation."
NASA
Perhaps the best insight into Kennedy’s motives can be found in a recording of a November 21, 1962 meeting in the White House Cabinet Room. Kennedy had boasted of the lunar plan about two months earlier at Rice. The main participants that day were Kennedy and James Webb, administrator of the National Aeronautics and Space Administration. At issue was the true purpose of NASA and the Apollo program, and at the outset of the meeting Kennedy asked Webb, “Do you think this program is the top priority of the agency?”
In hindsight, Webb's answer was surprising: “No sir, I do not. I think it is one of the top priority programs, but I think it is very important to recognize here, that as you have found out what you could do with a rocket, as you find out how you could get out beyond the Earth’s atmosphere and into space to make measurements, several scientific disciplines that are very powerful have (begun) to converge on this area.”
To this, Kennedy responded that Apollo was the top priority, and that this ought to be very clear. “This is important for political reasons, for international political reasons,” Kennedy said. He told Webb he did not want to finish second to the Soviets in the “race” to the moon.
Later in the conversation, Webb mentioned scientists who had doubts about the importance and viability of the moon project. These “people that are going to furnish the brain work,” as Webb called them, thought the highest priority was to “understand the environment and the areas of the laws of nature that operate out there.” The scientists, in other words, wanted to do science.
But Kennedy did not. Science was all well and good, Kennedy replied, but only when it applied directly to the Apollo program. Webb argued further, saying the overall program should be tied to preeminence in space, including space science. Kennedy dismissed him: “You can’t because by God, we keep—we’ve been telling everybody we’re preeminent in space for five years and nobody believes us because they have the booster and the satellite.” By the end of the meeting, President Kennedy had left no doubt about what he wanted from his NASA administrator.
A couple of points stand out from this exchange: the Apollo program succeeded because it was the top priority of the President of the United States, and its success was linked directly to the national interests of the country. America was fighting a Cold War, and the best way to prove to the world that democracy was superior was to achieve something like landing humans on the moon.
After the strategic significance of NASA faded, so too did its budget, beginning a decline in the late 1960s from 4.5 percent of the federal budget to less than 0.5 percent today. During testimony to Congress in 2014, one of the final four space shuttle astronauts, Sandy Magnus, summed up the space agency’s predicament since then.
“For NASA, it became, to a certain extent, a survival game,” Magnus testified. “There was no committed long-term strategic plan, even though there was a community that was engaged in trying to define and institute one. In the absence of a strategic vision we instead planned and executed short-term tactical goals outside of a larger defined stable framework. This is the operational mode we are still working under today.”
No president has been a stronger champion for NASA than Kennedy, and no president has worked so hard to see his vision for NASA carried out. Did a lack of interest on the part of subsequent presidents lead to NASA's ever-shriveling budget and decreased relevance to national security? Or did NASA's declining budget and decreased relevance to national security mean that presidents were less willing to champion it? The debate persists to this day. Whatever the cause, stewardship of the agency has been left mostly to Congress, which often takes a parochial view of policy rather than an overarching view of how to move NASA forward. The legacy of all this is that NASA's “Journey to Mars” is more likely to benefit states with key congressional representation, such as Alabama, than it is to get off the ground.
The brilliance of the Apollo program is that it led the world on a grand voyage of discovery and demonstrated the superiority of a free world and free market system. Even today, nearly half a century later, we still marvel at images of astronauts on the moon. The unfortunate legacy is that this exploration paradigm led to an unsustainable space program. NASA planted flags on the moon, but lacked the funding and planning to make that presence permanent. Absent a national security mandate and strong presidential leadership, NASA has scaled back its ambitions. We’ve been locked in low-Earth orbit ever since.
But in the last 10 years another powerful aspect of democracy has emerged to push the United States back toward deep space—capitalism. A growing number of companies are working to lower the cost of launch in order to facilitate business models built around resources that can be found on the moon, asteroids, and beyond.
NASA, finally, is talking about going back into deep space. This is welcome. But equally welcome is the vision by companies such as SpaceX, which wants to colonize Mars, and United Launch Alliance, which wants to help other, smaller space businesses commercialize the moon. It is not clear whether government alone, private industry alone, or some combination will get us back into deep space. Nevertheless, more than half a century after Kennedy’s grand vision, we are finally moving into an era in which just going is not enough. We will go to stay.
Listing image by NASA

Andrew Cunningham
Many companies, Apple, Samsung, and Qualcomm included, like to rely on their own custom ARM CPU architectures for their chips, but the CPUs and GPUs that ARM itself designs for other companies to use are still important. They let commodity chipmakers like MediaTek and Rockchip offer chips with good performance for less money, and they serve as a sort of pace car for the rest of the mobile industry.
Enter the new Cortex A73 CPU architecture and the Mali G71 GPU. These are new high-end designs that target 2017’s flagship phones and tablets, but they’ve also been designed with virtual reality and augmented reality in mind.

Cortex A73: A new “big” core



Cortex A73 is being positioned as a replacement of sorts for Cortex A72, which in turn replaced Cortex A57. Like its two predecessors, it’s a high-end 64-bit CPU design, and it can be paired with “little” Cortex A53 or A35 cores that handle light or idle tasks to reduce power consumption.
ARM says that A73 will improve performance by 30 percent while “increasing power efficiency” by 30 percent. Some of that speed increase comes from a clock speed bump—ARM’s figures compare a Cortex A72 running at 2.5GHz to an A73 running at 2.8GHz—while the rest comes from architectural improvements. And at least some of that power efficiency will rely on new 10nm processes from the likes of Samsung and TSMC; today’s chips are being made on 14nm or 16nm processes.
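Because ARM's two figures compare different clock speeds, the architectural contribution can be separated from the frequency bump with simple arithmetic (assuming, as a rough sketch, that the two factors multiply):

```python
# Split ARM's quoted 30 percent speedup into clock and architecture parts,
# assuming (roughly) that the two factors multiply.
clock_a72 = 2.5           # GHz, ARM's A72 comparison point
clock_a73 = 2.8           # GHz, ARM's A73 comparison point
total_speedup = 1.30      # ARM's overall performance claim

clock_speedup = clock_a73 / clock_a72          # 1.12x from frequency alone
arch_speedup = total_speedup / clock_speedup   # ~1.16x from the new core itself

print(f"{clock_speedup:.2f}x from clock, {arch_speedup:.2f}x from architecture")
```

In other words, under this back-of-the-envelope split, roughly half of the headline gain comes from simply running the core faster, with the rest left to microarchitectural improvements.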
10nm parts should be ready to ship by early 2017, when ARM says A73 will begin to show up in consumer devices, but we may see some A73 SoCs built on the more mature 14nm or 16nm processes. Keep that in mind when it comes time to measure A73’s real-world performance.
New CPUs are expected to improve performance and/or reduce power consumption, but ARM also had another goal for A73: improving sustained CPU performance. As we’ve examined elsewhere, mobile processors in particular are designed for “bursty” usage. You whip your phone out, poke around at a few apps, and put it away. As a result, most mobile chips are designed to run at high clock speeds for short amounts of time, but they have to ramp down quite a bit if you’re pushing the hardware for an extended period of time (games are the best real-world example of this, but as phone and tablet apps become more varied and capable, it’s going to come up in more and more places).
ARM says that Cortex A73 CPUs ought to be able to perform at peak levels for much longer than A57 or A72 CPUs could—it’s not clear what activities ARM is using to measure this or for how long they’re running, but it’s interesting that ARM is talking about sustained performance at all, given that the focus is usually on peak performance. To date, Apple is really the only smartphone maker that has bragged about its sustained SoC performance.
Good sustained performance is especially important for VR and AR. Both applications are sensitive to dropped frames and inconsistent performance, and developers need to be able to make assumptions about how well the hardware will be able to perform when under load for extended periods of time. Android N even includes a “sustained performance mode” intended to smooth out performance and keep throttling from becoming an issue. As peak performance gains become harder to come by—an inevitability, if desktop and laptop CPUs are any indication—expect more OEMs to start talking about sustained performance and focusing on VR as much as they focus on general app and game performance.

Mali G71 and the “Bifrost” GPU architecture

Mali G71 is ARM’s new high-end GPU, and it’s built on a new architecture called “Bifrost” that will eventually trickle down to lower-end parts as well. ARM says that the GPU architecture is its “most scalable GPU to date,” and that it eliminates overhead present in the older “Midgard” architecture.
On the API side, Bifrost doesn’t enable G71 to do anything that current ARM GPUs can’t do, at least not yet. It was built with the low-overhead Vulkan API in mind, but Mali T600-, T700-, and T800-series GPUs can all support Vulkan just fine, provided they get the necessary driver and OS updates from OEMs.
The more significant advancement is support for HSA, which is important for GPU computing. HSA allows the CPU and the GPU to access the same data in system memory at the same time, which eliminates overhead since the two no longer have to work with two separate “pools” of memory. The CPU and GPU can each work with the data it's best suited to without having to wait for things to be copied between those memory pools.
Compared to a Mali T880 GPU on the same process node, ARM promises that G71 will deliver 1.5 times the graphics performance, thanks to improvements in energy efficiency, performance density, and memory bandwidth. The maximum number of shader cores doubles from 16 to 32, though lower-end GPUs with fewer cores will surely begin to appear in the months after G71 launches.
These improvements come primarily from reducing overhead; a new “quad vectorization” system can execute four threads at once, making more efficient use of the GPU hardware and leaving less of it idle during any given operation. And using “clause execution” to schedule multiple instructions at once without having to pause before and after every individual instruction also reduces overhead.
ARM says that SoCs using Cortex A73 and the Mali G71 GPU will enter production in late 2016 and be available in actual devices in early 2017. The GPU technology will trickle down to lower-end chips, but ARM isn’t upgrading its low-end CPU architectures. Cortex A53 is still the company’s go-to low-to-mid-end 64-bit CPU architecture, while truly low-power SoCs can use Cortex A35 instead. Both of those cores can be combined with different numbers of A73 CPU cores to create high- and mid-end chips depending on how fast you’d like your chip to be.
Listing image by ARM
