July 2016

Jonathan M. Gitlin
Ford mechanics and engineers prep the car for the six-hour race ahead.
The cockpit of the Ford GT. Rest assured the production car will look a lot more accommodating.
The only car that was able to challenge the Fords at Le Mans was the Ferrari 488 of Risi Competizione. The car arrived at Watkins Glen direct from Le Mans, giving the team just two days to change the engine and swap in shorter gear ratios before practice for the Sahlen's Six Hours at the Glen got underway. (Photo: Elle Cayabyab Gitlin)
The Ford team's attention to detail extended to these custom dollies, which are used to move the car in the pits if necessary.
Mark Rushbrook, motorsports engineering manager at Ford Performance.
Although the Ford GT is based on a road car, from behind it's much more reminiscent of the faster Prototype racing cars, with an enormous rear diffuser and a lot of attention paid to shaping the flow of air around the car.
Just as it was 50 years ago, the battle for sports car supremacy on the world's race tracks this year has been between Ford and Ferrari. At this year's 24 Hours of Le Mans, the two marques were head-and-shoulders ahead of their competition in the hotly contested GTE-Pro class (for racing versions of cars that you or I could buy). Ford emerged victorious, but the end of the race was somewhat acrimonious, with protests and counter-protests from both camps. We caught up with both teams at their next matchup—the Sahlen's Six Hours of the Glen at Watkins Glen in upstate New York—to check out their machinery and to find the hatchet well and truly buried.
Back in 1966, after Henry Ford II's attempt to buy the Italian car company was rebuffed, his company built the legendary GT40, beating Ferrari's V12-powered cars at Le Mans and most everywhere else. To celebrate the 50th anniversary of that matchup, Ford decided to build (and race) a new mid-engined supercar, the Ford GT. The road-legal Ford GT won't actually appear until 2017, but Ford's rivals all gave their permission for the Blue Oval to start racing the car this year—the rules insist on a minimum of 500 production cars built in order to be eligible to race.
Ford has been running a quartet of GTs on track, a pair in the WeatherTech Sportscar Championship here in the US, and another pair contesting the World Endurance Championship. The cars aren't just racing for glory either; Ford Performance (the division of the company responsible for the GT as well as the Shelby GT350 and Focus RS) is using the experience to develop and improve the road car ahead of production. We met with Mark Rushbrook, motorsports engineering manager at Ford Performance, to find out more.
"Le Mans went really well," he told us. "The goal was to go there and win the race." Just two weeks had elapsed since the big showdown in France, which left little time to get ready for the return to the States and the rest of the WeatherTech season. "There's been no physical development of the car or testing [since Le Mans] but we're constantly running our simulations and analytical tools," Rushbrook said. Ford took part in a test at the newly resurfaced Watkins Glen track a few weeks previously, though. "So we looked at our test data, ran some simulations to finalize our aero settings, our chassis settings, everything with the engine and shift points [for the gearbox] so there's been a lot of work in the last two weeks."
"There are two different downforce configurations, and this is the high downforce configuration, which has different dive planes to it and different wing angle settings, but we still have the ability to tilt the wing up and down," Rushbrook continued. "We don't have as much downforce here as we would for a track like Laguna Seca, but we do have more downforce than Le Mans. We're planed out about halfway down on the rear wing. We looked pretty good in practice [the Fords topped the time sheets] so the testing paid off. Again, it's a new car, we've continued to learn, even though we did a lot of development through the end of last year. Every time we race it learn something new about the car, and we keep getting better with it."
We asked Rushbrook whether he was surprised by the GT's performance during qualifying at Le Mans, where the cars suddenly started setting lap times that were 4-5 seconds faster than they had been during testing and practice. In fact, the sudden gain in speed by the Ford GTs and the Ferrari 488s led to much grumbling from other competitors and fans that the so-called "balance of performance" [where the organizers attempt to equalize the grid] had gone awry, the result of some extreme sandbagging during the preceding few months.
"Yes and no," he said. "Specifically in qualifying, that was the first time the car went flat-out. In practice we were working on race setup, so double-stinting the tires [running the same set of tires for more than one run between pit stops], a full load of fuel, and knowing we wanted the tires to go two stints. So in qualifying that was the first time on that track that we had run a low fuel load with brand new tires. Definitely we went faster than practice, but it was a great back-and-forth with Ferrari on that first qualifying session—there was a lot of adrenalin!"
Whether by design or happenstance, Ford and the Risi Ferrari team (which came second at Le Mans) had been placed right opposite each other in the Watkins Glen paddock. Now, endurance racing is a sport with a lot more camaraderie than the cut-throat nature of something like Formula 1, but we had to ask—what was the mood like between the two rivals now?
"I'd say it's a very good relationship," Rushbrook explained. "I was over there talking to David Sims [Risi's team manager] yesterday, we talked a lot after Le Mans, and we agreed that we're competitors that are very respectful of each other, and that just continued yesterday. We want to race each other on the track, compete, and beat each other on the track, but we don't want to do that off the track. We've got complete respect for those guys and I think they do for us."
This was indeed confirmed by Sims, with whom we also met to get his take. "Things are not bad at all," Sims told us. "When we heard they were going to protest us, the start was, 'if [Ferrari] win, we're going to protest you...' OK. Then they did protest us, for something the ACO [the organizers of Le Mans] weren't going to bring us in to fix. It wasn't the number plate, it was the leader lights." (Sims is referring to the LED display that lets spectators know what position each car is running in, which isn't required to be working throughout the entire race, unlike the electroluminescent panels showing each car's number.)
"So then Ferrari told us to protest Ford, and it went on and on," he said. "I said to the Ganassi and Ford guys, 'If you've never done this before, there isn't going to be a winner in this. It won't end at Le Mans.'" Sims explained that the ACO told the teams that any post-race protests would be settled in the coming weeks in Paris, until which time the finish of the race would remain in dispute. "And nobody's going to win that one. So we started talking. I said the best thing to do is to be sensible because we work together, we know the guys, and the team boys didn't want to protest, it was Ford Corporate. In the end the best thing to do was shake hands. You withdraw your protest, I'll withdraw mine, it's easy. The ACO were very happy with that—Ford you win, Ferrari you get second place. It was no good coming away with the protest still on, it would have been a taint on the whole race," Sims told us.



With a 50:50 mix of cars to trucks & SUVs, it isn't possible.

Jonathan M. Gitlin

The US love affair with big vehicles hasn't gone away.

Way back in 2012, the US government released a relatively ambitious plan to increase US passenger fleet average fuel efficiency to 54.5mpg. Back then, we looked at some of the new technologies that automakers were adopting in order to meet this goal, plenty of which can now be found in our cars. But despite lots of hard work by the boffins in automotive research centers in the US and elsewhere, the 54.5mpg Corporate Average Fuel Economy (CAFE) goal is dead in the water.
Americans, it seems, are just too in love with their light trucks and SUVs to make it happen. That's according to a new report from the Environmental Protection Agency, the National Highway Traffic Safety Administration, and the California Air Resources Board. The three agencies have published a Draft Technical Assessment Report, "Midterm Evaluation of Light-duty Vehicle GHG Emissions Standards for Model Years 2022-2025" (PDF), that lays out the case for why we could meet the 2012 plan—which would have doubled fleet fuel economy, halved greenhouse gas emissions, and saved 12 billion barrels of oil and prevented 6 billion tonnes of CO2 from entering the atmosphere between now and 2025—but won't.
The report—which stretches out to over 1,200 pages—spends plenty of time discussing cool technological advances, including improvements to gasoline internal combustion engines, better transmissions, mild (48v) and high-voltage hybrids, battery electric vehicles, fuel cell EVs, and more, but the bad news gets going in Chapter 12. The report projects that the fleet will only manage 46.3mpg under CAFE in 2025, a drop of 15 percent compared to where we'd hoped to be.
It's worth noting that the 2022-2025 CAFE targets were not finalized back in 2012, and that CAFE mpg numbers are not the same as the EPA fuel economy figures you or I might use when deciding which car to buy. But the assumptions that underpinned the 54.5mpg target were based on a fleet that was two-thirds passenger cars and one-third light trucks and SUVs. Now, the agencies have revised that split, based on consumer demand, to a near-50:50 mix (52 percent cars, 48 percent trucks, to be exact).
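As a rough illustration of why the sales mix matters so much: CAFE fleet averages are sales-weighted harmonic means, so shifting sales toward the lower-mpg truck and SUV category drags the whole average down even if every individual model hits its target. The car and truck mpg figures in the sketch below are invented for illustration, not numbers from the report.

```python
# Rough sketch of how the car/truck sales mix moves a CAFE-style fleet average.
# CAFE fleet averages are sales-weighted harmonic means; the 56mpg and 42mpg
# figures below are hypothetical stand-ins for car and light-truck compliance
# values, not numbers taken from the report.

def fleet_average(mix):
    """Sales-weighted harmonic mean of fuel economy (shares sum to 1.0)."""
    return 1.0 / sum(share / mpg for share, mpg in mix)

CAR_MPG, TRUCK_MPG = 56.0, 42.0  # hypothetical compliance values

mix_2012 = [(0.67, CAR_MPG), (0.33, TRUCK_MPG)]  # roughly two-thirds cars, as assumed in 2012
mix_2016 = [(0.52, CAR_MPG), (0.48, TRUCK_MPG)]  # the revised 52/48 split

print(f"2012 assumption: {fleet_average(mix_2012):.1f} mpg")
print(f"Revised mix:     {fleet_average(mix_2016):.1f} mpg")
```

Every hypothetical vehicle still hits its target in that sketch, yet the fleet number drops by a couple of mpg purely because of the mix shift, which is the dynamic the agencies are describing.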
Despite the unwelcome findings in the report, there is quite a lot in there that is interesting. For one thing, we wouldn't even need to buy that many battery EVs, strong hybrids (think Toyota Prius), or plug-in hybrid EVs to hit the target. In fact, EPA projects just three percent strong hybrids, two percent BEVs, and two percent PHEVs across the entire fleet by then.
Advanced gasoline engines would make up the bulk of our passenger fleet, with a mix of turbocharged engines (33 percent) and Atkinson cycle engines (44 percent). Expect plenty of engines to adopt cylinder deactivation, variable valve timing, and exhaust gas recirculation too. Nearly one in five cars will be a 48v mild hybrid, according to EPA.
The draft report is now open to a 60-day public comment period, with a final determination on just what the CAFE regulations will require for 2022-2025 due by April 1, 2018.

IBM Watson is another partner on the Olli project, supplying the intelligent interaction system through which passengers talk to the vehicle.
This is Olli. Its body is 3D-printed, and it drives itself at speeds of 12-18mph with a 60-mile range.
Meet the Swim, Local Motors' first highway-speed four-seat passenger vehicle.
This is the Strati, the first 3D-printed car. It only takes a few minutes comparing it to the Swim and Olli to see how the process is maturing.
David Woessner of Local Motors told us that the company is working with the Department of Energy on trials at Oak Ridge National Lab that would use inductive charging at shuttle stops to keep Olli continuously topped up.
3D-printed parts fresh from the gigantic printer.
We explored the design process that resulted in the Swim last year.
Jonathan Gitlin
Local Motors is not your regular car company. It's been pioneering the use of open source development to design its vehicles, starting with the Rally Fighter off-road sports car and a number of vehicles that have been the result of competitions, including one held in conjunction with the Department of Energy's ARPA-E. Most recently, the company unveiled Olli, its first autonomous vehicle. When we discovered that Olli was just up the road in National Harbor, Maryland, we decided it was time to head over there to find out more.
Local Motors has a large retail location at National Harbor (selling merchandise), along with a test lab complete with a gigantic 3D printer for rapid prototyping. Several of the company's designs were also on display—the Strati, which was the first 3D-printed car, as well as the Swim, which was the winning design from its Project Redacted competition. And of course, Olli the autonomous people mover.
As we looked at the Swim, David Woessner, general manager at Local Motors, explained the ongoing process that's expanding Local Motors' product line up. "We started in July of 2015. In September of that year, we did the first print and revealed the car in November. It's the next iteration in our path to a highway car."
"Right now we're still working on a test mule for highway certification," he continued. "We'll have a family of low-speed individual vehicles and a family of higher speed highway vehicles that we're now putting together for the crash certification process," he told us. Underneath the Swim's 3D-printed body is a rolling chassis from BMW's i3 electric vehicle. "It was the easiest way for us to get to a vehicle with a body that was in line with a highway certified car," he said.
At the far end of the lab, beyond that gigantic 3D printer, sat Olli. "This is the newest family of vehicles, the Olli. It's in a proof of concept phase and we've already taken orders for it. This is version 0.0 but we're finishing some engineering changes to make it version 1.0," he explained. "It seats 8-12 people, and it's very comfortable on the interior. A lot of the interior is 3D-printed—you can see the refinement versus the Strati. We didn't mill the finish but you can see the refinement of the printing. And a lot of the tooling for the form-pulled plastics were 3D-printed, as were the wheel wells. The other big thing is it's all-electric, and it uses lidar and autonomous technology." As currently configured, Olli has a range of 60 miles (100km) at speeds of between 12-18mph (19-29km/h).
Olli is a little reminiscent of the autonomous people movers that are now being tested in the UK. But the vehicle is not actually in service yet. "We're getting calls right now from people asking to ride Olli," Woessner said. "We're in the process of working with the team to make that a reality... but also to get the regulations in place. There's a couple of things from a final development and testing perspective we need to finish with the vehicle. And then for the state of Maryland there's currently no autonomous driving regulation. In National Harbor, though, we have some private roads with our partners at the Peterson companies that we're working on getting permission to use, and we're mapping out the right route to have people experience Olli," he told us, adding that the hope is to have everything ready for operations to begin later this summer.
Local Motors is an active participant in Maryland's process of crafting regulations for autonomous vehicles. "The one challenge we've seen is that the group is focused primarily on highway vehicles. We're interested in that application, but the low speed is where we think the initial consumer applications will be, particularly for environments like National Harbor or the National Mall in DC, or some large parks like Disney," Woessner said. "We want regulations that are appropriate for that use case versus highway speed. I think there's the potential to tailor legislation and regulation for driving on an interstate or state highway versus a county or local road at a speed of less than 25mph."

Listing image by Jonathan Gitlin

Two atomically thin materials can form functional circuits given the right pattern.

John Timmer
Once a channel is cut into the graphene, a molybdenum disulfide crystal can grow within it.
The features we're making in current semiconductor materials are shrinking to the point where, soon, they will be just a handful of atoms across. Unfortunately, the behavior you get from bulk materials is often different from what you see when there are just a few atoms present, and quantum effects begin to dominate. There is an alternative, however: start with a material that is already incredibly small and has well-defined properties. Graphene, for example, is a sheet of carbon just one atom thick, and it's an excellent conductor; a variety of similar materials have also been developed.
Materials that are just one atom thick are a big challenge to manipulate, though, so it's really hard to put together any sort of circuitry based on them. Now, however, researchers have figured out how to create a template where single-atom-thick materials will grow to create functional circuitry.
As we noted above, graphene is an excellent conductor of electrons, so the authors of the new paper decided to use it to create wiring. But getting sheets of graphene lined up to consistently create the wiring of even simple circuitry has been nearly impossible. The authors didn't even try. Instead, they took a larger sheet of graphene, dropped it onto silicon dioxide, and then etched away any material they didn't want. The etching involved a plasma of oxygen ions, which burned channels in the graphene that were about 15µm wide.
Once the wiring was in place, the authors added semiconducting features. These were based on another atomically thin substance, molybdenum disulfide (MoS2). This was put in place by a process called chemical vapor deposition, which is more or less exactly what it sounds like: the graphene/silicon substrate was placed in a chamber with vaporized MoS2, which then formed crystals on the surface.
The key to this working is the fact that graphene doesn't provide a surface that MoS2 is able to grow on; the silicon dioxide does. But the MoS2 vapor also preferentially starts growing crystals along the edges of a graphene sheet, moving out from there to fill in the gaps between sheets. As a result, everywhere the graphene had been etched away ended up filled with a single-atom-thick sheet of MoS2. And, since it started forming right at the edges of the graphene, the two materials integrated into a single electronic device.
Amazingly, all of this worked. After testing out the basic properties of the circuitry, the researchers constructed an inverter, which can also be viewed as a logical NOT gate.
The big limitation of the technique is our ability to control the oxygen plasma that etched away the graphene. Right now, the features are about 15µm wide, which is quite a bit larger than the latest silicon technology. But they are only a single atom thick, which means that you could potentially fit several layers of circuitry in the volume currently taken up by existing chips. And it should be possible to put the material between the layers to use for things like removing heat from the circuitry.
The real challenge will be showing that this approach can build something more complicated than the device demonstrated here. It's clear that the authors intend to try, as they've filed patent applications on this work.
Nature Nanotechnology, 2016. DOI: 10.1038/nnano.2016.115  (About DOIs).
Listing image by Berkeley Lab

Falcon returns to land as Dragon heads on a two-day voyage to the Space Station.

John Timmer
The launch lights up night-time clouds, and is later followed by the landing.
Early Monday morning, SpaceX achieved a successful launch and landing of its Falcon main stage, which sent a Dragon capsule loaded with supplies to the International Space Station. Unlike most previous attempts, the Falcon was able to return to Florida rather than dropping onto a barge in the Atlantic. The successful landing adds another item to the company's collection of lightly used boosters, some of which are intended to ultimately make return trips to space.
The Dragon capsule is expected to reach the ISS within two days. It contains a typical assortment of supplies and experiments in its pressurized portion. But it also carries a bit of hardware externally: an international docking adaptor, or IDA. The IDA is built to standards that different nations can adopt, allowing their hardware to interact with the system. According to NASA, "the adapter is built so spacecraft systems can automatically perform all the steps of rendezvous and dock with the station without input from the astronauts."
This is the second IDA sent to the Station, the first having exploded in one of SpaceX's rare failed launches.
The company has had a bit more trouble nailing the landings, as would be expected. But last night's went off without a hitch, possibly aided by the fact that the Falcon booster returned to land rather than a gently rocking barge. This is now the fifth booster the company has returned from space. While it intends to preserve the first, the others are slated to be tested and returned to service if they're found to be up for the job. We're still waiting for the first re-use at this point, however.

Specific charge can be increased by a factor of three.

Shalini Saxena



Battery research focuses on balancing three competing factors: performance, lifetime, and safety. Typically, you have to sacrifice one of these factors to get gains in the other two. But for applications like electric vehicles, we'd really like to see all three improved.
In an investigation recently published in Nature Energy, scientists demonstrated the ability to use a magnetic field to align graphite flakes within electrodes as they're manufactured. The alignment gives lithium ions a clearer path to transit the battery, leading to improved performance.
The electrodes of lithium-ion batteries are often composed of graphite, which balances attributes such as high energy density with non-toxicity, safety, and low cost. Graphite, composed of stacked sheets of carbon atoms, is often incorporated into these electrodes in the form of flake-like particles.
While graphite has many advantages, it has a downside: it limits the movement of lithium ions, which is a fundamental part of charging and discharging. The lithium ions are only able to move within the planes between stacked graphene sheets and often have to navigate a highly tortuous path as they move around during charge and discharge. This slow movement through the electrodes remains a critical challenge in the development of batteries with improved performance.
The authors of the new paper reasoned that it should be possible to align the graphite flakes so that they provide a more linear path for ions to move within the battery. To accomplish this, they decided to use magnetic fields. There was just one problem: graphite doesn't respond to magnetic fields.
To work around this, the scientists coated the flakes with superparamagnetic iron oxide nanoparticles. The coated graphite flakes were then suspended in ethanol. They homogenized the suspension and added a small amount of a chemical binder (2 percent by weight poly(vinyl pyrrolidone)) that helped ease the alignment process. A relatively dilute suspension was needed to give the flakes enough room to move during alignment.
During fabrication of the electrodes, the graphite particles were oriented using a rotating magnetic field aligned perpendicular to the part of the battery that would exchange charges with the graphite (called a current collector). The scientists found that a magnetic field as low as 100 mT was capable of aligning the flakes. For comparison, this magnetic strength is larger than the average fridge magnet (1 mT), but significantly smaller than an MRI magnet (1.5 T). As a control, they also prepared reference electrodes in the absence of a magnetic field.
After fabrication, the team evaluated the alignment of the graphite flakes deposited under both conditions. Visual analysis revealed a clear orientation of flakes in electrodes fabricated under the influence of the magnetic field. The flakes were tilted at an angle of 60 degrees above the plane of the current collector. By contrast, the graphite flakes in the reference electrodes fell mostly parallel to the current collector.
Next, the scientists carried out a series of experiments to evaluate the change in the path the lithium ions needed to navigate. Overall, they saw that the magnetic field decreased the tortuosity of the paths through the electrode by a factor of 4 compared to the reference electrodes.
Finally, they evaluated how this impacted the battery performance by testing the electrode in a half-cell configuration (meaning they didn't build a full battery). At practical charging rates, alignment of the graphite flakes increased the lithium storage capacity of the electrode by a factor of between 1.6 and 3.
This investigation demonstrates that chemistry isn’t the only important factor at play in battery design—optimization of the electrode architecture can help boost battery performance as well. Future studies will need to determine the scalability of this technique.
Nature Energy, 2016. DOI: 10.1038/NENERGY.2016.97 (About DOIs).
Listing image by Mario's Planet

GTX 1060 is faster than GTX 980, for just a wee bit more cash than AMD's RX 480.

Mark Walton


Specs at a glance: Nvidia GeForce GTX 1060
CUDA CORES 1280
TEXTURE UNITS 80
ROPS 48
CORE CLOCK 1,506MHz
BOOST CLOCK 1,708MHz
MEMORY BUS WIDTH 192-bit
MEMORY BANDWIDTH 192GB/s
MEMORY SIZE 6GB GDDR5
OUTPUTS 3x DisplayPort 1.4, 1x HDMI 2.0b with support for 4K60 10/12b HEVC Decode, 1x dual-link DVI
RELEASE DATE July 19
PRICE Founders Edition (as reviewed): £275/€320/$300; Partner cards priced at: £240/€280/$250




What a difference a little competition makes. Nvidia was always going to release the GTX 1060, just like it released the GTX 960, GTX 760, and GTX 560 before that. But few could have predicted how soon it would appear after the launch of the GTX 1080 and GTX 1070, the company's first Pascal-based graphics cards. Fewer still expected it to be faster than a GTX 980, a card that launched at £430/$550 and still sells for a hefty £320/$400 today.
We've got AMD to thank. Its aggressively priced RX 480—which offers excellent 1080p and VR-ready performance for a mere £180/$200—brought the budget fight to Nvidia in a segment where its competitor has traditionally struggled. If you want the fastest, buy Nvidia; if you want the best value, buy AMD. The GTX 1060 changes that. For the first time in a long time, Nvidia has a mainstream graphics card that can compete on price and performance with AMD.
[Update, July 20: This story has been updated below with information on the launch-day stock situation for the GTX 1060 in both the UK and US.]
The GTX 1060 is (mostly) faster than the GTX 980; it runs cool and quiet with a light 120W TDP; and best of all the GTX 1060 costs £240/$250. Yes, that's more expensive than the GTX 960's launch price, continuing Nvidia's tradition of jacking up prices this generation. And yes, AMD's RX 480 is a wee bit cheaper. But with around a 15 percent boost in performance on average for a 10 percent jump in price over the comparable 8GB RX 480, it's good value, and it overclocks like a champ with very little effort.
The GTX 970 might have been the people's champion in the last generation, commanding an impressive five percent share of the Steam audience, but I suspect the GTX 1060 will fill that role, particularly for those still on older 600- or 700-series cards. It's a beast at 1080p, VR-ready, and it does a great job with 1440p too. For the average guy or gal who plays on a 1080p monitor and wants to one-up their console gaming friends, this is the graphics card to buy.

But can I actually buy one this time?

That's not to say the GTX 1060 is flawless. Once again, Nvidia is offering two models: the more expensive Founders Edition, which costs £275/$300 and comes with a smaller version of the shard-like reference cooler used on the GTX 1070 and GTX 1080, and partner cards, which will come with a range of different coolers and overclocks. Both are said to be available on launch day (July 19, 2016). But if the GTX 1080 and GTX 1070 have taught us anything, it's that despite Nvidia's promises of a hard launch, getting hold of its latest and greatest graphics cards is easier said than done.
Even now, stock of the GTX 1080 and GTX 1070 is sporadic, and it's pretty much impossible to buy one at the advertised retail price. Nvidia's Founders Edition was launched under a questionable premise (guaranteed availability of reference designs over the full life cycle of the product) and while that's fine for system integrators and Nvidia, the cards have been a disaster for consumers. Nearly all the cards sold by partners have been priced the same as, or more expensively than, the Founders Editions. The early availability of those cards simply served as a fantastic litmus test for partners: if people were willing to pay Nvidia's high prices early on, why charge less afterwards?

GTX 1080 GTX 1070 GTX 1060 GTX Titan X GTX 980 Ti GTX 980 GTX 970 GTX 780 Ti
CUDA Cores 2,560 1,920 1,280 3,072 2,816 2,048 1,664 2,880
Texture Units 160 120 80 192 176 128 104 240
ROPs 64 64 48 96 96 64 56 48
Core Clock 1,607MHz 1,506MHz 1,506MHz 1,000MHz 1,000MHz 1,126MHz 1,050MHz 875MHz
Boost Clock 1,733MHz 1,683MHz 1,708MHz 1,050MHz 1,050MHz 1,216MHz 1,178MHz 928MHz
Memory Bus Width 256-bit 256-bit 192-bit 384-bit 384-bit 256-bit 256-bit 384-bit
Memory Speed 10GHz 8GHz 8GHz 7GHz 7GHz 7GHz 7GHz 7GHz
Memory Bandwidth 320GB/s 256GB/s 192GB/s 336GB/s 336GB/s 224GB/s 196GB/s 336GB/s
Memory Size 8GB GDDR5X 8GB GDDR5 6GB GDDR5 12GB GDDR5 6GB GDDR5 4GB GDDR5 4GB GDDR5 3GB GDDR5
TDP 180W 150W 120W 250W 250W 165W 145W 250W
Nvidia has crossed its heart, pinky sworn, and given me its word that this won't be the case this time, but I'm going to be keeping a very close eye on GTX 1060 stock. If I can't buy one at the advertised partner price on launch day, expect a strongly worded update to this review.
Update, July 20: As predicted, stock of the GTX 1060 is hard to come by. In the UK, cards were briefly sold at £239 at Scan, Ebuyer, and Overclockers, but have since sold out. Overclockers is taking pre-orders, or will gladly sell you a Gainward version of the card that's priced at £10 above the MSRP. Scan also has a Palit card in stock at £250.
In the US, NewEgg currently has stock of the Zotac GTX 1060 at the correct MSRP of $249, but orders are limited to one per customer. Be quick if you're interested: all other cards at $249 are sold out, with even the more expensive partner cards like the $329 Asus Strix on back order. Best Buy also had cards from PNY and EVGA at $249, but has also sold out. Some retail Best Buy stores may have stock on shelves if you're lucky.
Meanwhile, availability of AMD's RX 480 is mixed. There's plenty of stock of the 8GB version of the card in the UK, now at just £5 over the MSRP. That said, the cheaper 4GB card has all but disappeared from online stores, although Overclockers will sell you one for a hugely inflated £215—just £5 less than the 8GB version. In the US, neither Best Buy nor NewEgg currently has stock of either version of the RX 480.
Despite prices currently at £10 above the MSRP, the GTX 1060 launch is better than that of the GTX 1070 and GTX 1080. Both of those cards are still selling for inflated prices online. Given that the cheaper 4GB version of the RX 480 still isn't an option at the moment, the conclusion still stands: the GTX 1060 is the card I'd recommend to most gamers looking for the best graphics performance without spending a fortune.
And now back to the original story...
It's also worth noting that, by comparison, the RX 480 has had a far smoother retail rollout. Sure, AMD had a PR problem with the card's power draw—something that's been somewhat resolved by a recent driver update—but availability of the RX 480 has mostly been good. Right now it's possible to buy an 8GB model at just £5 above the MSRP.
If you do decide to plump for the pricier Founders Edition, you get a multifaceted shroud made out of aluminium, as you do with the GTX 1080 and GTX 1070, although it's slightly shorter at 240mm, and has an opaque black plastic section on top instead of a clear window. Inside are two copper heat pipes along with a dual-FET power-delivery system and custom voltage regulators. There's 6GB of 8GHz GDDR5 memory, along with a 1,506MHz GPU base clock and a 1,708MHz boost clock, just above that of the GTX 1070. Nvidia says the GTX 1060 will easily overclock to 2GHz, and my tests confirm that. There's plenty of headroom here for those who like to tweak.



The GP106 die.

Power is handled by a single six-pin PCIe power connector, with the card sporting a 120W TDP (the GTX 980 had a 165W TDP), continuing the impressive efficiency improvements of TSMC's 16nm FinFET manufacturing process. Connectivity is handled by three DisplayPort 1.4 ports, one HDMI 2.0b port with support for 4K60 10/12b HEVC decode, and one dual-link DVI port.
At the heart of the GTX 1060 is a new Pascal chip, dubbed GP106. Essentially, the 200mm² GP106 die is a chopped-in-half version of the GP104 (as used in the GTX 1080 and GTX 1070), leaving 10 streaming multiprocessors (SMs) for a total of 1,280 CUDA cores and 80 texture units. Those are bound to 48 ROPs and 1,536KB of L2 cache, while the 192-bit memory system results in 192GB/s of memory bandwidth. All are huge improvements over the GTX 960. Of note is the fact that the GTX 1060 uses the full implementation of GP106, leaving room for Nvidia to use a binned version of the chip for cheaper cards.
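For anyone wondering where that 192GB/s figure comes from, it follows directly from the bus width and the effective memory speed. Here's a quick back-of-the-envelope check (just arithmetic, not anything from Nvidia's documentation):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate per pin).
bus_width_bits = 192   # GTX 1060 memory bus
data_rate_gtps = 8     # 8GHz effective GDDR5, i.e. 8 gigatransfers per second per pin

bandwidth_gb_per_s = (bus_width_bits / 8) * data_rate_gtps
print(bandwidth_gb_per_s)  # 192.0, matching the quoted 192GB/s
```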
GPU Boost 3.0, Fast Sync, HDR, VR Works Audio, Ansel, and preemption (an alternative approach to asynchronous compute) make a return too (check out our GTX 1080 review for more details), as well as the ability to render multiple viewpoints in a single render-pass. The latter is especially useful for VR where, instead of rendering one eye and then rendering another, the GTX 1060 can render both viewpoints at once, drastically speeding up VR performance. Not many games have implemented the feature just yet, but Nvidia says that it's coming to major engines like Unreal and Unity soon.
What's missing from the GTX 1060 is support for SLI. Nvidia has been slowly dialling back multi-GPU support with Pascal, first limiting SLI to two cards in games (up to four work in synthetic benchmarks like 3DMark), and now removing it entirely from the GTX 1060. This is completely at odds with AMD, which actively pitched Crossfire when it launched the RX 480. It's a shame Nvidia has removed SLI support, but given that scaling and support vary drastically from game to game, going with a single card has always been the better option, particularly at this mainstream price point.

Performance (and why you should overclock)

Test system specifications
OS Windows 10
CPU Intel Core i7-5930K, 6-core @ 4.5GHz
RAM 32GB Corsair DDR4 @ 3,000MHz
HDD 512GB Samsung SM951 M.2 PCI-e 3.0 SSD, 500GB Samsung Evo SSD
Motherboard ASUS X99 Deluxe USB 3.1
Power Supply Corsair HX1200i
Cooling Corsair H110i GT liquid cooler
Monitor Asus ROG Swift PG27AQ 4K
The GTX 1060 was tested with a suite of games on the Ars Technica UK standard test rig, including three games that use DirectX 12. There's still no reliable way to capture frame data for DX12 games without a dedicated hardware setup, but for everything else there's a 99th percentile score, which shows the minimum frame rate you can expect to see 99 percent of the time. This is a great way to highlight the comparative smoothness of games—the bigger the gap between the average and the 99th percentile figure, the more jittery a game feels.
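For readers unfamiliar with the metric, here's a rough sketch of how a 99th-percentile frame-rate figure can be derived from captured frame times. This is a generic illustration of the idea, not the exact capture tooling used for these benchmarks, and the frame-time data is made up:

```python
# Generic illustration of a 99th-percentile frame-rate figure derived from
# per-frame render times. Not the exact capture pipeline used for these
# benchmarks; the frame-time data here is synthetic.
import random

random.seed(1)
frame_times_ms = [random.gauss(16.7, 2.5) for _ in range(5000)]  # fake captured data

def percentile_fps(times_ms, pct=99.0):
    ordered = sorted(times_ms)                       # slowest frames end up at the back
    idx = min(int(len(ordered) * pct / 100.0), len(ordered) - 1)
    worst_typical_ms = ordered[idx]                  # 99% of frames render at least this fast
    return 1000.0 / worst_typical_ms

average_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
print(f"average: {average_fps:.1f}fps, 99th percentile: {percentile_fps(frame_times_ms):.1f}fps")
# The bigger the gap between those two numbers, the more noticeable the stutter.
```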
Each game was tested at 1080p, 1440p, and UHD (4K) resolutions at high or ultra settings at stock speeds. I also overclocked the GTX 1060 to put Nvidia's 2GHz claims to the test, and I'm pleased to say that it passed and then some. With zero voltage or fan tweaks I was able to overclock the GTX 1060 to 2,025MHz on the GPU, and a hugely impressive 9,050MHz on the memory. Those that are willing to crank the fan speed or tweak the voltage are likely to get even more, while partner cards that add extra power delivery and better cooling will help things along.
On the synthetics and science side there's the standard 3DMark Firestrike benchmark (again, run across three resolutions), as well as LuxMark 3.0, CompuBench, and FAHBench (the official Folding@Home benchmark) to test compute performance.
The GTX 1060 is indeed faster than a GTX 980, but by how much? On Rise of the Tomb Raider (DX11, 1080p) it's a small, but not insubstantial, six percent. In most other games, however, the GTX 1060 just scrapes past the GTX 980 by one or two frames per second, or is just behind by the same amount. Interestingly, it fares better at 1440p, with a lead of seven percent on Metro Last Light and 14 percent on Hitman. The GTX 980's greater number of CUDA cores helps it regain the lead at 4K, but neither card is really suitable for gaming at that resolution unless you're willing to make some big sacrifices to visual fidelity.
As mentioned, the GTX 1060 overclocks well; if you've got a half-hour to spare, there are plenty of free performance gains to be had. I saw around an eight percent boost across the board with my overclock, which was great for those 1440p games that didn't quite reach a locked 60FPS at stock speeds. That eight percent boost also means the GTX 1060 comfortably surpasses the GTX 980, although it's worth noting that the GTX 980 was a good card for overclocking too. Against the much older GTX 780 Ti, the GTX 1060 comes out on top too, with significant gains across most games.


But with the GTX 1060, Nvidia comes back fighting. This is a graphics card that's not only significantly faster than the RX 480, but also uses less power, overclocks well, and offers a better VR experience to boot. Sure, you're paying a little more for the privilege—provided Nvidia and its partners actually get cards into stores at the MSRP this time—but if I had to choose between the two, the GTX 1060 is the card I'd save up a little longer for and buy. It's simply a better, more ambitious product.
1080p gamers, would-be VR explorers, and e-sports players who crave hundreds of frames per second: look no further. The GTX 1060 is the graphics card to buy.

The good

  • Better performance than a GTX 980 makes for fantastic 1080p gaming
  • Aggressively priced to take on the RX 480—and it's faster too
  • Sharp reference cooler design
  • 120W TDP
  • Plenty of overclocking headroom

The bad

  • No SLI support

The ugly

  • Yet another pricey Founders Edition that stiffs partners and consumers

Deputy Labour leader says judicial oversight is crucial as he attacks PM's snoop tactics.

Glyn Moody
Will the CJEU judges agree with the advocate general on DRIPA?
Politicians, lawyers, and civil rights groups have slammed the UK government's present and future surveillance laws in light of the advocate general's opinion on the Data Retention and Investigatory Powers Act (DRIPA)—which said that Theresa May's emergency spy law is legal if strong safeguards are in place.
On closer analysis, the full text of AG Henrik Saugmandsgaard Øe's opinion goes much further in implicitly criticising the UK's snooping approach than had been initially suggested by a press release put out by the Court of Justice of the European Union (CJEU) on Tuesday.
Labour's deputy leader Tom Watson—who, alongside Tory MP and the government's new Brexit chief David Davis, brought the original legal action against the UK's DRIPA legislation—said: "This legal opinion shows the prime minister was wrong to pass legislation when she was home secretary that allows the state to access huge amounts of personal data without evidence of criminality or wrongdoing."
Human rights group Liberty, which represented Watson in the courts, said that if the CJEU judges agree with the advocate general’s opinion, "the decision could stop the government’s fatally flawed Investigatory Powers Bill in its tracks and mark a watershed moment in the fight for a genuinely effective, lawful, and targeted system of surveillance that keeps British people safe and respects their rights."
Similarly, Privacy International labelled the opinion "a serious blow to the UK's Investigatory Powers Bill."
The Home Office unsurprisingly disputed the claims. A Whitehall spokesperson told Ars: "The government’s view remains that the existing regime for the acquisition of communications data and the proposals in the Investigatory Powers Bill are compatible with EU law."
Many of those responding to Tuesday's opinion emphasised the main finding that "solely the fight against serious crime is an objective in the general interest that is capable of justifying a general obligation to retain data, whereas combating ordinary offences and the smooth conduct of proceedings other than criminal proceedings are not."
Open Rights Group executive director Jim Killock said:
The advocate general has stated that data retention should only be used in the fight against serious crime, yet in the UK there are more than half a million requests for communications data each year. These do not only come from police but also local councils and government departments. It is difficult to see how the government can claim that these organisations are investigating serious crimes.
Defining what exactly counts as "serious crimes" looks set to be a hot topic in the data retention debate.

"Serious crime includes theft of a Mars bar"

Law lecturer TJ McIntyre, who played a crucial role in winning the earlier CJEU surveillance case for Digital Rights Ireland, tweeted: "Serious crime must become an autonomous EU concept, then. In Irish law serious crime includes theft of [a] Mars bar."
The need for greater clarity was a point picked up by Jan Philipp Albrecht, an expert on privacy and data protection in the European Parliament who recently helped steer the GDPR through the EU. He told Ars:
While naming the various high requirements as minimum standards and making clear that even if meeting those data retention laws may still be unproportionate [the advocate general] fails to deliver for clear indications whether and when these requirements would not be met by a member states’ law.
We can only hope that the judges of the court will not allow themselves to be that vague when interpreting the EU fundamental rights vis-à-vis member states’ laws.
Then home secretary Theresa May—speaking in November, 2015—revealing for the first time that British security services have intercepted bulk communications data of UK citizens for years.
The full opinion imposes some very stringent requirements that governments must meet if their data retention schemes are to be legal, added McIntyre in a series of tweets. He pointed out that the advocate general suggests that the safeguards mentioned by the CJEU when delivering its verdict on the Digital Rights Ireland case are all mandatory. These concern "access to the data, the period of retention, and the protection and security of the data."
Data security is particularly relevant to the UK's IP Bill, which will require ISPs to retain highly personal information about their subscribers' Internet use on local databases. Unless the security of these databases can be reasonably guaranteed, use of so-called Internet Connection Records may fall foul of EU law, if the CJEU judges follow the advocate general's reasoning on this issue.
Watson—who declined to comment when quizzed by Ars about Davis' exit from the DRIPA case following his appointment as Brexit secretary of state—also flagged up another concern: "The opinion makes it clear that information including browsing history and phone data should not be made available to the security services and other state bodies without independent authorisation," he said.
"The security services have an important job to do, but judicial oversight is vital if we are to maintain the right balance between civil liberties and state power."
Privacy International agreed: "All access to our data, including communications data, must be authorised by an independent authority such as a judge."
Killock also fretted about this aspect. The ORG boss said: "If the IP Bill is passed, data will be able to be analysed without a warrant through an intrusive tool known as the Request Filter."
As comments from different quarters suggest, the advocate general's opinion seems to offer plenty of scope for legal challenges to the IP Bill, provided the judges at Europe's highest court agree with his views. The fact that it is strongly rooted in the CJEU's reasoning in the Digital Rights Ireland case appears to make this more likely.
The final judgment is expected in a few months, at which point the DRIPA case will be passed back to the UK courts to consider in light of the CJEU's ruling on the underlying law.

Code-execution vuln resides in code used in cell towers, radios, and basebands.

Dan Goodin
A newly disclosed vulnerability could allow attackers to seize control of mobile phones and key parts of the world's telecommunications infrastructure and make it possible to eavesdrop or disrupt entire networks, security experts warned Tuesday.
The bug resides in a code library used in a wide range of telecommunication products, including radios in cell towers, routers, and switches, as well as the baseband chips in individual phones. Although exploiting the heap overflow vulnerability would require great skill and resources, attackers who managed to succeed would have the ability to execute malicious code on virtually all of those devices. The code library was developed by Pennsylvania-based Objective Systems and is used to implement a telephony standard known as ASN.1, short for Abstract Syntax Notation One.
"The vulnerability could be triggered remotely without any authentication in scenarios where the vulnerable code receives and processes ASN.1 encoded data from untrusted sources," researchers who discovered the flaw wrote in an advisory published Monday evening. "These may include communications between mobile devices and telecommunication network infrastructure nodes, communications between nodes in a carrier's network or across carrier boundaries, or communication between mutually untrusted endpoints in a data network."
Security expert HD Moore, who is principal at a firm called Special Circumstances, described the flaw as a "big deal" because of the breadth of gear that is at risk of complete takeover.
"The baseband vulnerabilities are currently biggest concern for consumers, as successful exploitation can compromise the entire device, even when security hardening and encryption is in place," he wrote in an e-mail. "These issues can be exploited by someone with access to the mobile network and may also be exposed to an attacker operating a malicious cell network, using products like the Stingray or open source software like OsmocomBB."
The library flaw also has the potential to put carrier equipment at risk if attackers figured out how to modify carrier traffic in a way that was able to exploit the vulnerability and execute malicious code. Moore went on to say the threat posed to carriers is probably smaller given the challenges of testing an exploit on the specific equipment used by a targeted carrier and the difficulty of funneling attack code into the vulnerable parts of its network.
"A carrier-side attack would require a lot more effort and funding than targeting the mobile phone basebands," he said. "For specific attack scenarios, carriers may be able to block the traffic from reaching the vulnerable components, similar to how SMS filtering is done today."
Dan Guido, an expert in cellular phone security and the CEO of a firm called Trail of Bits, agreed that the vulnerability will be hard to exploit. But Moore also described ASN.1 as the "backbone" of today's mobile telephone system. Even in the absence of working code-execution capabilities, attackers could use exploits to trigger denial-of-service outages that could interrupt key parts of a network or knock them out altogether.
Right now, only gear from hardware manufacturer Qualcomm is known to be affected, according to this advisory from the Department of Homeland Security-backed CERT. Researchers are still working to determine if a long list of other manufacturers—including AT&T, BAE Systems, Broadcom, Cisco Systems, Deutsche Telekom, and Ericsson—are similarly affected. For the moment, there's little end users can do to insulate themselves from the threat other than to monitor advisories from device makers and carriers.
Objective Systems has released a "hotfix" that corrects the flaw, but both Guido and Moore said the difficulty of patching billions of pieces of hardware, many scattered in remote places throughout the world, meant the vulnerability is likely to remain unfixed for the indefinite future.
"This kind of infrastructure just does not get patches," Guido said. "So [the vulnerability] is a stationary target that others can develop against. It's easy to set goals towards it."

Oral exposure to pathogens early in life may help develop proper immune responses.

Beth Mole
Kids who got teased for sucking their thumbs or biting their nails may, after all, get the last laugh.
It turns out that repeatedly sticking grimy digits into your pie-hole as a youngster may help strengthen your immune system and prevent the development of allergies later in life, researchers report in the August issue of Pediatrics. The finding is certainly a score for the underdogs of the schoolyard, but it also lends more support to the “hygiene hypothesis.” This decades-old hypothesis generally suggests that exposure to germs and harmless microbes in childhood can help develop a healthy, tolerant immune system—that is, one not prone to autoimmune diseases and hypersensitive responses such as allergies.
“Although we do not suggest that children should be encouraged to take up these oral habits, the findings suggest that thumb-sucking and nail-biting reduce the risk for developing sensitization to common aeroallergens,” the study authors conclude.
The researchers, led by Stephanie Lynch of Dunedin School of Medicine in New Zealand, got the idea following reports that infants whose mothers licked their pacifiers clean were less likely to develop asthma and eczema. Sucking on thumbs and biting nails might have the same effect, they reasoned.
Lynch and colleagues examined data from the Dunedin Multidisciplinary Health and Development Study, which had been tracking 1,037 kids born in Dunedin, New Zealand, between 1972 and 1973. In that study, parents were surveyed on their kids’ thumb-sucking and nail-biting ways when the kids were 5, 7, 9, and 11 years old. Most of the kids were also given skin-prick allergy tests at ages 13 and 32. Those tests looked for responses to common allergens, such as dust mites, grass pollen, cats, dogs, horses, wool, and mundane molds. The researchers also collected information on other factors that may influence allergy development, including breastfeeding, cat and dog ownership, parental smoking and allergies, crowded living conditions, and socioeconomic status.
According to the parent survey responses, 31 percent of the kids were “frequent” finger munchers—either nail-biters, thumb-suckers, or both—at some point between the ages of 5 and 11.
At 13 years old, 724 kids took the skin prick test, and 45 percent of them had at least one allergy. But, when the kids were sorted out, the ones with oral habits fared better. Of the kids that neither sucked their thumbs nor bit their nails, 49 percent had allergies. Of the kids that had one of those oral habits, 41 percent had allergies. And of the kids who did both, 31 percent had allergies.
At 32 years old, 946 of the kids—now adults—took the skin test again. The results were similar, except at this age the participants that had both oral habits in childhood fared about the same as those that had just one. Still, the results held up: shoving germy fingers in your mouth as a kid seemed to lessen the chances of allergies later in life by about 30 to 40 percent. And the results also held up after researchers controlled for the other allergy-altering environmental and genetic factors.
However, the researchers also looked at whether finger munching generally lessened the incidence of asthma and hay fever and didn’t find a link. They'll have to do more research with other groups of kids to validate the results and try to understand how the unhygienic habit may be affecting some allergies and not others.
Lastly, the authors note that finger biting and thumb-sucking have been linked to problems other than schoolyard bullying, such as misalignment of teeth and finger infections. But if the study is correct, these “bad habits” may not be all bad.
Pediatrics, 2016. DOI: 10.1542/peds.2016-0443  (About DOIs).

 Tokyo RPG Factory's debut is flat as fresh-fallen snow, empty as a snow angel.



At times I Am Setsuna is truly beautiful.
I Am Setsuna wears its influences on its sleeve—also on its pants, shirt, shoes, and company-branded baseball cap. The game pulls so heavily from SquareSoft’s SNES classic RPG Chrono Trigger that the inspiration is mentioned by name on the front page of the game's website.

That means you know going in that you're in for a top-down, turn-based JRPG where time ticks down actively during battles, and you can see your foes on-screen before facing them. There are no surprise encounters here—save the ones scripted into the story.
The story follows the titular Setsuna through the perspective of Endir, your masked, silent cipher of a protagonist. Setsuna has been selected as a sacrifice—like her aunt, mother, and many other women before them—on the theory that sacrificing one girl every few decades will cause the monsters that inhabit the world to leave them in peace.
By the time Endir enters the picture (on his own quest to kill Setsuna for unrelated reasons), monster activity is on the rise, and these beasts seem to be more organized than ever. Cue a fateful meeting between our hero and heroine where he decides against cold-blooded murder, and suddenly a journey ensues that pulls a growing cast of party members in its wake.

I've been here before...

The tale of a girl with a tragic destiny and her always encouraging entourage isn't the most original backbone for a JRPG. The greater problem with I Am Setsuna, though, is that it sprints through these clichés and archetypes without even letting them take root. Believe me, I’m not against the familiar tropes of airships, evil kings, and haggard swordsmen. In this game, however, there’s not enough to justify cleaving to those oh-so-traditional RPG elements so tightly.

You'll have to sell each new crafting material individually. 

This game sure does love penguins... 

She is Setsuna. 

No adventure is complete without a sea monster.

Setsuna is a case in point. She takes an immediate shine to Endir despite the fact that he barely speaks and tried to chop off her head at their first meeting. Of course, he promises to become her bodyguard until she reaches her place of sacrifice in the aptly named "Last Lands." It's as if the developers at Tokyo RPG Factory decided believable personal motivations weren't important, so long as the characters hit the predictable story beats dictated by the structure of games from over 20 years ago.


Keep the momentum going

I Am Setsuna does layer a few twists of its own onto that classic formula, though. The first is Momentum. When a character's turn comes up, they start filling up a second, ATB-style bar representing their Momentum. When Momentum is high, you can augment spells and attacks with a properly timed button press: a fireball might do more magical damage, or a healing spell could cure status effects as well as restore hit points. It's up to you to decide whether the bonus is worth the wait or whether you should act immediately.
Once you get the rhythm of this give-and-take down, Momentum makes ATB fights feel altogether a little more "active." It reminds me of Nintendo's Mario RPG games, breezier and more approachable affairs where timing also plays a role in battles.
At first, that's exactly what I thought I Am Setsuna was trying to be—a casual JRPG. The game is pretty linear from the outset, and not just in the sense that you only have one path to walk. It doesn't have many peaks and valleys in content, either. You'll hit the overworld, bounce through a cave or forest filled with monsters, and stop at the next town for a lengthy story vignette. Maybe there will be a boss fight or two in between.
Left at that, I Am Setsuna could have been a slim and slightly dull diversion. Yet it goes so, so much farther—and in so many wildly different directions with wildly varying levels of success.
Whenever you perform a skill with Momentum, for instance, there's a random chance that “Fluxation” will occur. Yet to understand Fluxation, you need to know about “Spritnite.”
Spritnite determines which skills your party members can perform—equip the "Fire" Spritnite to gain the Fire spell, for example. When a Momentum-linked Fluxation happens, that individual piece of Spritnite is imbued with a bonus from the character's equipped Talisman.
Oh, right. Talismans. These accessories grant bonuses, but not directly—they can only add attributes to specific pieces of Spritnite through random Fluxation. That's awfully annoying when you're stuck waiting for a lucky Fluxation to boost an ability you wish would simply upgrade on its own.
Not to worry, though, because certain Singularities—equally random events that occur during battle—can boost the odds that Momentum will cause Fluxation to give your Spritnite a bonus from your Talisman. Got all that?
If all that sounds like the ravings of a crazy person, that's because it is exactly what it sounds like. Yet this is my best understanding of the under-explained progression system in I Am Setsuna after several hours spent just trying to puzzle it out. Except for Momentum, none of these systems is ever directly demonstrated. Instead, you can consult otherwise mute shopkeepers for "advice," which reads like scans from a nonexistent and badly written instruction manual.
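For what it's worth, here's that understanding spelled out as a rough Python sketch. Every class name, number, and probability below is my own invention for the sake of illustration—none of it comes from the game's actual code or data—but it's roughly how the pieces seem to interlock.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Spritnite:
        """A skill stone: equipping it grants the matching skill."""
        name: str
        bonuses: list = field(default_factory=list)

    @dataclass
    class Talisman:
        """An accessory whose bonus only reaches a Spritnite via Fluxation."""
        bonus: str

    @dataclass
    class Character:
        name: str
        spritnite: Spritnite
        talisman: Talisman
        momentum: float = 0.0  # the second, ATB-style gauge

        def wait(self, ticks: int) -> None:
            # Holding off on acting lets the Momentum gauge build.
            self.momentum = min(1.0, self.momentum + 0.1 * ticks)

        def use_skill(self, timed_press: bool, fluxation_chance: float = 0.1) -> str:
            # A well-timed press spends full Momentum to augment the skill.
            augmented = timed_press and self.momentum >= 1.0
            if augmented:
                self.momentum = 0.0
                # Momentum-augmented skills seem to roll for Fluxation, which
                # permanently stamps the Talisman's bonus onto this Spritnite.
                # (Singularities, as I read them, temporarily raise these odds.)
                if random.random() < fluxation_chance:
                    self.spritnite.bonuses.append(self.talisman.bonus)
            return f"{self.spritnite.name} ({'augmented' if augmented else 'normal'})"

    endir = Character("Endir", Spritnite("Cyclone"), Talisman("+ critical rate"))
    endir.wait(10)                            # build Momentum instead of acting
    print(endir.use_skill(timed_press=True))  # may trigger Fluxation

Again, that is just my reading of it; the game itself never lays the loop out this plainly.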

I barely understand what I just wrote

It's not that important to understand everything, anyway. I Am Setsuna isn't terribly difficult, for the most part. Enemy encounters and placement are preset and thus not nearly as annoying as the usual random encounter grind-fest. What "challenge" there is mostly comes from what a hassle the game can be to play.
I Am Setsuna is missing many quality-of-life tweaks that even games from 1995 knew to include. When buffing or healing allies in combat, for instance, the game highlights their character models but not their portraits at the bottom of the screen. Since I Am Setsuna is a game played mostly through menus, you have to retrain your eyes to look up and pick your desired target out of a crowd.
Meanwhile, back in town, the only way to acquire new Spritnite (Spritnites?) is by selling material stolen from monsters. Yet there's no "sell all" option, meaning you need to crawl down a list of sometimes dozens of ingredients to sell them one, by one, by one.
While the list of annoyances goes on, for me the most egregious example is that you can't heal your party by sleeping at an inn in I Am Setsuna. When your party is hurt, tired, or disabled, the only way to restore everyone at once is to leave town and bust out a tent—one you bought from a shopkeeper back inside a town—to restore health and mana.
For a game that's all about RPG tropes, this is an odd one to ignore. But you'll want to hit the overworld anyway, since you're not allowed to save your game inside city limits, for no apparent reason.

Nostalgia over understanding

I Am Setsuna certainly captures elements of '90s JRPGs, but it does so sporadically. It's as if the developers pulled elements from that era off the top of their memories rather than from research or any deep understanding of why those things were the way they were.
In many cases, that research would have revealed that '90s RPGs were the way they were because of hardware limitations, or simply because no one had thought of a better way to do things yet. Today, technology and design have produced better solutions to many of the old problems that I Am Setsuna desperately clings to, seemingly out of nostalgia.
Square Enix (the parent company of developer Tokyo RPG Factory) has had similar problems with its Bravely Default games, which treat the symptoms of random-encounter design by letting you tweak the frequency and difficulty of encounters instead of looking for a better kind of battle system. Here, once I climbed the pile of mechanics and nonsense words, I did eventually fall into I Am Setsuna's awkward rhythm.
I'm just not sure a slight and sullen story is a worthy narrative reward for overcoming the game's many obstacles. The Momentum-driven combat is a treat on its own merits, though, and I'd like to see it survive, if not in a direct sequel, then at least in another game from the same developers.
With Setsuna, there's something here—it's just buried beneath the ice.

The good

  • "Momentum" is a great tweak to Active Time Battle combat.
  • The snow-covered world is a nice change of pace, and looks great to boot.
  • Love it or hate it, there's a lot of unexpected depth to character progression.

The bad

  • A clichéd story that cleaves to the past, without the same energy or endearment.
  • Unexpected, poorly explained, and incredibly complicated progression.
  • Missing quality of life tweaks that could make it much, much smoother.
  • A very linear and predictable journey.

The ugly

  • Trying to keep Singularities, Spritnite, Fluxations, and Momentum straight in my head.
Verdict
I Am Setsuna skims the surface of games long past without always understanding what made them memorable. Try it if you just want a game that looks the part, or if you want to see its admittedly cool combat.
