May 2015

By Ryan Whitwam
Android 5.0 Lollipop was a big step for Android, though it took a few iterations to get all the major bugs ironed out. The still-unnamed Android M is only out in developer preview, but it seems aimed at cleaning up some of Lollipop’s rough edges and addressing user complaints about how Android operates. Google didn’t go into detail on all the features of M, and there are a ton of subtle differences. Let’s take the new Android for a spin and see how it works.

Installing Android M

If you want to give the new Android preview a shot, you need several things: a newer Nexus device (5, 6, or 9), working ADB on your computer, and the system image from Google. Installing Android M on your phone or tablet will require a complete wipe of the device, and there are some bugs, so it’s best if this isn’t your day-to-day phone.
To flash M, simply unpack the archived image and place the files in the Android SDK folder that contains ADB and fastboot (usually platform-tools). Next, plug in your device and reboot it into the bootloader. After making sure it is detected, execute the flash-all.bat (Windows) or (Mac/Linux) script to begin installation.
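For reference, flash-all is just a thin wrapper around a handful of fastboot commands. If the script fails partway, you can run the equivalent steps manually; the image filenames below are placeholders, so substitute the exact names from your extracted archive:

```shell
# Run from the folder containing adb, fastboot, and the unpacked image files
adb reboot bootloader          # reboot the connected device into the bootloader
fastboot devices               # verify the device shows up before flashing

# The same steps the flash-all script performs:
fastboot flash bootloader bootloader-DEVICE-BUILD.img
fastboot reboot-bootloader
fastboot -w update image-DEVICE-BUILD.zip    # -w wipes user data, as noted above
```

On the phones (Nexus 5 and 6), the script typically also flashes a radio image with `fastboot flash radio` before the final update step.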

The entire process will take about 10 minutes, including the extremely lengthy first boot. From there, you can set up your device fresh and start playing with Android M.

The big stuff

Google announced Android M with six main features, but several of these aren’t actually specific to M (e.g. Android Pay and custom Chrome tabs). Probably the most important and user-facing change is the new permission system. Android no longer assumes that an app should get all of its requested permissions simply because you installed it. Things are much more granular now, but that also means more popups to tap through.
Any app that requests a sensitive permission on Android M will produce a popup the first time you run it. This covers things like location, contacts, microphone, and so on. You can approve or deny it, but there’s also a screen in the app info panel for each app that shows all of its permissions with a series of toggles. Just turn on and off whichever ones you want. Note: Apps that haven’t been updated for M (all of them right now) might behave strangely.

The M preview also contains Google’s long-rumored battery life optimization feature called Doze. This is a low-power mode for apps that reduces their activity in the background. Google says Doze can double the standby time of your device, which is great. That’s always been a problem for Android, especially compared with iOS.
Doze happens automatically, but there are some user-facing settings in the developer preview. Head into the application settings and then into Advanced. There you’ll see an Ignore Optimizations menu where you can control which apps remain in normal mode, even when the device is sleeping. This isn’t a high-power mode; it simply lets those apps behave the way they would in Lollipop and lower. Some of Google’s core services are set to ignore optimization by default, but you can add more if you want.
Google also bundled its new app data backup protocol into the M preview, though it didn’t even bother to announce this at I/O. This is all automatic and should happen in the background without any additional work for developers. That’s its advantage over Google’s ancient backup option, which only a few developers ever took advantage of.
Application data backups will happen about once per day when the device is on WiFi and plugged into power. At that time, all apps will upload a copy of their data to the user’s Google Drive, but there’s a maximum of 25MB per app. This data is then restored to any new devices automatically. It sounds like this feature will not, unfortunately, support true data sync between devices. It’s mainly aimed at making it easier to migrate from one device to another.
Android M also cleans up the priority interruption system, which was introduced in Lollipop. It’s plenty powerful, but also not easy to understand. The M preview goes back to using the “Do Not Disturb” moniker, and moves most of the advanced options into the settings menu. Now all you have to do in order to make your phone or tablet shut up is hold the volume toggle down, just like old times.

Turning the volume all the way down will enable the alarm-only mode of DND, which is the default. There’s a toggle in the quick settings that can also turn on DND, and this exposes the options for priority-only (i.e. apps marked as priority by you will still make noise) and totally silent. In the system settings for DND you can also create custom rules to control when and how the feature is turned on. That’s really handy if you have an irregular work schedule.

The small stuff

There are a huge, huge number of small tweaks to Android M, so I won’t go into all of them. Among the more interesting is a setting in the developer options to use a dark system UI theme. It’s much less retina-searing in a dark room. It can even turn on automatically at night and flip back to the light theme in the AM.

While this is not technically exclusive to Android M, the version of the Google Now Launcher included with the preview has a completely revamped app drawer. It scrolls vertically and is grouped alphabetically. The widget list is vertical now as well, and it’s actually a big improvement over the old widget picker. The app drawer is more questionable, though. This UI will probably come to pre-M Android devices at some point.
The developer options menu has another interesting tidbit in the form of the System UI Tuner. It’s not clear what Google’s plans are for this feature, but right now it can be used to rearrange the tiles shown in the quick settings panel. Most OEMs have included this for years, but it’s never been in stock Android before. It’s quite buggy in the developer preview, but maybe it’ll be cleaned up for the final release.

The stuff for later

Not everything works in the developer preview of Android M, including some of the most important forward-looking features. For example, Google Now on Tap is the next evolution of Google’s contextual search algorithms. On Tap will be able to use the content of the app you’re looking at to generate cards with helpful actions and pieces of info. If you’re having a text conversation about seeing a movie, Now on Tap might pull up reviews or show times for you. It also makes voice commands more accurate if you want to reference something about the content you’re looking at.
You will (eventually) be able to access On Tap with a long press of the Home button. However, the developer preview simply pops up a card telling you that On Tap isn’t available yet. Google doesn’t plan to make this feature available at any point during the preview either. You’ll have to wait for the final version to come out.

We also can’t get a good handle on how fingerprints will work in Android M. That’s because none of the test devices have a fingerprint sensor. This feature will eventually allow you to log into your phone, make payments with Android Pay, and access protected content in apps.
Android M also includes improved support for removable storage, and you can experience some of this by plugging in a USB flash drive (via a USB OTG adapter on the Nexus devices’ micro-USB ports). The device will show up in the storage menu, is accessible from the built-in file manager, and has a cool notification for ejecting the media safely. None of the current Nexus devices have a microSD card slot, but Android M has an interesting new feature for devices that do — you can adopt an SD card as internal storage.
So what does that mean? You can format an SD card and encrypt it so it can be merged with the internal storage partition. This prevents the card from working in any other device, but in your phone or tablet it can hold apps and the private data associated with them. It’s a little odd to see Google embracing removable storage after ignoring it for so long.
The final version of Android M won’t be out for at least several months, and that’s probably when we’ll find out the version number and name. It will hit Nexus devices very soon thereafter, but other phones and tablets will have to wait a few months as OEMs and carriers wrangle over the details.

AMD has launched new Catalyst beta drivers with specific improvements for Project Cars and The Witcher 3, and may be prepping an entirely new brand name for its upcoming Radeon with High Bandwidth Memory onboard. The new drivers target two games that have been in the news lately — we covered the dispute over GameWorks in some depth. The Project Cars situation, meanwhile, involved a war of words between the developer (who alleged AMD vanished off the radar and did no testing) and AMD, which disputes these allegations.
AMD is claiming performance improvements of up to 10% for The Witcher 3 and up to 17% for Project Cars when used with single-GPU R9 and R7 products. The Witcher 3’s Crossfire performance has also been improved, though AMD still recommends disabling post-processing AA in Crossfire for best visuals.

The GameWorks optimization question

One thing I want to touch on here is how driver releases play into the GameWorks optimization question. When AMD has said it can’t optimize for GameWorks, what that means is that AMD can’t optimize the specific GameWorks function. In other words, in The Witcher 3, AMD can’t really do much to improve HairWorks performance.
AMD can still optimize other aspects of the game that aren’t covered by GameWorks, which is why you’ll still see performance improve in GW-enabled titles. We took the new drivers for a spin in The Witcher 3 and saw some modest frame rate increases.

Improvements like this often depend on the specific video card, settings, and options enabled, so results can swing depending on preset options. AMD has published its own optimization guide for The Witcher 3 for users looking to improve game performance.

Upcoming Fiji to be sold as Radeon Fury?

Meanwhile, the rumor mill is predicting AMD won’t brand the upcoming Fiji GPU as an R9 390X, but will instead sell the card as the Radeon Fury. Whether this is accurate is an open question, but it makes some sense — AMD pioneered the use of flagship CPU branding with the Athlon 64 FX back in 2003, and while it’s never had a flagship GPU brand, Nvidia’s Titan demonstrated that there’s clearly a use for such products.
The name “Fury” also has some history behind it. Back when AMD’s graphics division was an independent company, called ATI, its first popular line of 3D accelerators was branded “Rage,” and the Rage Fury was the top-end part. A later chip, the Rage Fury Maxx, actually implemented AFR rendering in hardware, but driver issues and compatibility problems sullied the GPU brand somewhat. When ATI launched the Rage series’ successor, it adopted a new name — Radeon.
Radeon Fury, if true, is a nice callback — and the performance from Fiji is rumored to be furious. At 4,096 cores and an expected 500GB/s of memory bandwidth, AMD’s new GPU is going to be serious competition for Nvidia — including, just possibly, the Nvidia Titan X.

By Jamie Lendino
NASA has had a remarkable record when it comes to successful missions on the Red Planet, dating back to 1976 with Viking 1 and 2, Pathfinder and Sojourner in 1997, the Spirit and Opportunity rovers in 2004, and Curiosity’s crazy ‘7 minutes of terror’ landing in 2012. Each time, the spacecraft have been orders of magnitude more sophisticated, and two of the last three rovers are still doing science. Now NASA’s set to do it all over again come March 2016 with the InSight spacecraft, which will launch from Vandenberg Air Force Base in California and land on Mars roughly six months later.
Once on the surface, the mission is scheduled to last two years — 720 days, or 700 sols — and begin delivering science data in October 2016.
“Today, our robotic scientific explorers are paving the way, making great progress on the journey to Mars,” said Jim Green, director of NASA’s Planetary Science Division at the agency’s headquarters in Washington, in a statement. “Together, humans and robotics will pioneer Mars and the solar system.”
InSight will be as large as a car, and instead of looking for signs of life or studying surface rock composition, it’s directed at learning more about the interior of Mars. The name is an unwieldy acronym that reflects that: Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport. Currently, NASA has begun testing the craft’s ability to operate in and survive deep space travel, as well as the famously harsh conditions on the surface of the Red Planet.
Mars InSight lander, labeled (artist concept)
The testing phase will last about seven months. During that time, NASA will expose the lander to extreme temperatures, vacuum-space conditions with near-zero air pressure, and emulated Martian surface conditions. Engineering teams will also simulate the launch procedure and examine different parts of the craft for electronic interference.
“The assembly of InSight went very well and now it’s time to see how it performs,” said Stu Spath, InSight program manager at Lockheed Martin Space Systems, Denver, in the same statement. “The environmental testing regimen is designed to wring out any issues with the spacecraft so we can resolve them while it’s here on Earth. This phase takes nearly as long as assembly, but we want to make sure we deliver a vehicle to NASA that will perform as expected in extreme environments.”
Once testing is completed early next year, NASA will begin setting up the launch itself ahead of the March target date. “It’s great to see the spacecraft put together in its launch configuration,” said InSight Project Manager Tom Hoffman at NASA’s Jet Propulsion Laboratory, Pasadena, California. “Many teams from across the globe have worked long hours to get their elements of the system delivered for these tests. There still remains much work to do before we are ready for launch, but it is fantastic to get to this critical milestone.”
For more about the mission, check out NASA’s dedicated InSight page.

By John Hewitt
Many animals can survive prolonged periods of exposure to freezing temperatures. To do this, they run a sophisticated ‘freeze’ program on the way into the frozen state, and another ‘thaw’ program on the way out. Although there have been advances in freezing and thawing animals that lack these built-in cold survival responses, it hasn’t been made clear whether important higher-level functions, like memory, would emerge unscathed. Two researchers, Natasha Vita-More and Daniel Barranco, have now proven for the first time that cryogenically-suspended worms retain specific acquired memories after reanimation.
To do this, the researchers first trained the worms to move to specific areas when they smelled benzaldehyde (a component of almond oil). After mastering this new task, the worms were bathed in a glycerol-based cryoprotectant solution and put into a deep freeze. When the worms were thawed, they remembered their job and moved to the right spot when benzaldehyde wafted in. The researchers compared two different methods of cooling: The first one was based on the old-fashioned way to freeze cells or organs — a low concentration of cryoprotectant and a slow cool/thaw cycle. The second way was a more aggressive procedure known as vitrification.
Vitrification requires a higher concentration of cryoprotectant, but does the freezing and thawing so fast that damaging ice crystals don’t have much chance to form. Only about a third of the worms that are frozen by the slow method actually survive, while almost all of those vitrified will survive. Surprisingly, Vita-More and Barranco found that worms frozen by either method retained the proper memory for what to do.

While all that is good news for cryonics, if we expect the fragile filaments and tender excrescences of much larger nervous systems (like ours) to survive such an ordeal intact, a little more care will be needed. To deliver cryoprotectant into all the nooks and crannies of a larger body from the outside, you generally need to drain the blood out and pump the new solution in through the circulatory system. While that might work pretty well if done properly, the problem is on the flip side — namely, getting the cryoprotectant back out.
Animals like arctic fish, frogs or insects can survive multiple freeze/thaw cycles because they do it from the bottom up rather than top down. In other words, each cell has a local copy of the freezing protocol, which has been scripted uniquely for it. The cell can therefore manufacture or import not just the cryoprotectants and associated adjuvants it needs, but also make and export the products that the cell’s host organ needs (which in turn, must be delivered to the other organs that make demands on the host organ).
If all that was required to survive freezing was for each cell to reel off a few million copies of an antifreeze protein, synthesize some ice-crystal blocking glycerol, or import glucose, then specific genetic arrangements might be readily made to accommodate that. New DNA could be spliced in, along with warm-inducible ‘promoters’ to keep the freeze proteins properly suppressed during happy times.
Unfortunately, things don’t really work like that. Santa Claus doesn’t fill an order for 10,000 sleighs if there are no trees at the North Pole. In the same way, cells probably couldn’t meet the demands of massive, near-instantaneous antifreeze protein synthesis unless the entire genome, or at least the genes in the critical metabolic cycles that supply the building blocks (and degrade them afterwards), had been similarly adapted through deep evolutionary time. In the case of antifreeze proteins, it seems that the original proteins evolved from digestive trypsins in the gut, presumably to deal with cold-susceptible fluids that would tend to accumulate there.
Creatures that synthesize other cryoprotectants like glycerol or glucose have their own special needs. An organism-level operating system must be engaged so that each organ supplies what is needed, and then is powered down in the right sequence, so that the most essential functions remain online until the end. For example, at low temperature, Arctic frogs produce a special form of insulin to stimulate cells to gorge on blood-supplied glucose. That glucose order must be filled by the liver, which has painstakingly packed it into the form of large glycogen molecules, which must now be broken down by running their metabolic synthesis program in reverse. When spring comes and the frog warms, the extra glucose must be rapidly removed from the cells before it compromises proteins, and then recycled through kidney excretion and ultimately stored in the bladder.
When ice crystals do form, they generally start in the extracellular regions, driving the dissolved molecules distributed there into dense congregation. The subsequent high concentration osmotically draws water out of cell interiors and jams things up there as well. In freeze-adapted creatures, the body shuttles the extra water to various ‘safe’ compartments, where it is dealt with by various mechanisms, all highly planned and routinely executed.
Showing that worm brains can handle top-down freezing by artificial means is an important step towards doing the same for larger organisms. If more researchers pick up where Vita-More and Barranco have now led, survivable cryonic suspension may eventually be mainstreamed for those that would desire it.

By Joel Hruska
Bill Nye the Science Guy’s foray into solar sail propulsion is likely to come crashing back to Earth thanks to a software error. The craft launched on the 20th of May and spent a few days in space relaying information back to ground control before abruptly falling silent. The team in charge of the little vessel has tried repeatedly to reestablish communication, but has had no luck thus far. The LightSail spacecraft was meant to demonstrate whether solar sails could be used for high-speed spacecraft propulsion. Its solar sail is much larger than that of the Ikaros probe Japan launched in 2010, and the craft itself is built from three CubeSat units that handle data gathering and vehicle control.
Unfortunately, a simple error in the Linux telemetry software has frozen the flight computer. Every 15 seconds, LightSail transmits a telemetry beacon to Earth and writes the data from that transmission into a file called beacon.csv. The file gets larger over time, and when it hits 32MB, it crashes the flight system. Hard data on which CubeSat design and CPU were inside LightSail doesn’t seem to be readily available, but the first product generations were based on the TI MSP430F1612, a 16-bit CPU — and the fact that the file crashed at 32MB could support that read.
According to a blog post from the Planetary Society, the goal since the satellite went dead over the weekend has been to reboot the craft. Unfortunately, the error is “non-deterministic.” In 37 passes (as of Tuesday afternoon) no reboot command has been successfully accepted by the spacecraft. Right now, the team is hoping that a cosmic ray will strike the internal components and reboot the craft. That’s not as far-fetched as you might think — apparently most CubeSats experience cosmic ray-related reboots within 3-6 weeks in space.
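That hope is roughly quantifiable. If cosmic-ray upsets arrive at random at the cited rate, the wait for one is approximately exponentially distributed. Taking the midpoint of the 3-6 week figure as the mean (an assumption for illustration, not a number from the LightSail team):

```python
import math

MEAN_WEEKS_TO_UPSET = 4.5  # midpoint of the cited 3-6 week range (assumption)

def p_reboot_by(weeks, mean=MEAN_WEEKS_TO_UPSET):
    """Probability of at least one cosmic-ray reboot within `weeks`,
    modeling upsets as a Poisson process (exponential waiting time)."""
    return 1.0 - math.exp(-weeks / mean)

print(f"{p_reboot_by(6):.0%} chance within six weeks")    # ~74%
print(f"{p_reboot_by(26):.1%} chance within six months")  # ~99.7%
```

By this crude model, a rescue reboot at some point during the satellite’s orbital lifetime is very likely; the question is whether it comes soon enough.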
LightSail-A prepares for launch. Credit: The Planetary Society
The LightSail satellite will remain in orbit for up to six months in its undeployed CubeSat form (the original plan was to thoroughly evaluate the prototype before deploying the solar sail). If the system reboots in the next few weeks, it may still be possible to conduct the original experiment. Bug fixes have already been tested on the ground, which means the error could likely be corrected, provided that the system comes back online. If contact is re-established with the CubeSat, the team will begin a manual solar sail deployment as soon as possible.
Solar sails have been proven to work in earlier missions, providing low energy thrust similar to an ion engine. The goal of these larger deployments (the current craft, LightSail-A, was launched to collect data for a larger mission in 2016, dubbed LightSail-1) is to determine the exact characteristics and challenges of operating a solar sail for various types of missions. While they can’t provide anything like the delta-v of a rocket, the low-power steady thrust of a solar sail could be incredibly useful for interplanetary or even interstellar missions.
One of the problems with sending a vessel to another star is that the probe would need to carry enough fuel to brake and take readings of the other star system. A solar sail could theoretically perform this function — the steady light pressure from the approaching star could, over a period of years, provide enough thrust to allow for detailed readings of a target planet or sun.
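For a sense of the forces involved: a flat sail facing the Sun feels a force F = (1 + r)PA/c, where P is the solar flux (about 1,361 W/m² at Earth’s distance), A is the sail area, r is the reflectivity, and c is the speed of light. The sketch below plugs in LightSail’s roughly 32 m² sail; the 5 kg craft mass is a round-number assumption for illustration:

```python
SOLAR_FLUX = 1361.0   # W/m^2, solar constant at 1 AU
C = 2.998e8           # m/s, speed of light
SAIL_AREA = 32.0      # m^2, roughly LightSail's deployed sail
CRAFT_MASS = 5.0      # kg, round-number assumption for a 3U CubeSat plus sail

def sail_thrust(area, flux=SOLAR_FLUX, reflectivity=1.0):
    """Radiation-pressure force (N) on a sail facing the Sun; perfect
    reflection doubles the momentum transferred by each photon."""
    return (1.0 + reflectivity) * flux * area / C

thrust = sail_thrust(SAIL_AREA)     # ~2.9e-4 N, about a third of a millinewton
accel = thrust / CRAFT_MASS         # ~5.8e-5 m/s^2
delta_v_per_day = accel * 86400     # ~5 m/s of delta-v per day, for free
print(f"{thrust*1e3:.2f} mN -> {delta_v_per_day:.1f} m/s per day")
```

Tiny, but it never runs out: over months, a few meters per second per day compound into mission-scale velocity changes.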

By Aaron Krumins

Despite all the recent hullabaloo concerning artificial intelligence, in part fueled by dire predictions made by the likes of Stephen Hawking and Elon Musk, there have been few breakthroughs in the field to warrant such fanfare. The artificial neural networks that have caused so much controversy are a product of the 1950s and 60s, and remain relatively unchanged since then. The strides forward made in areas like speech recognition owe as much to improved datasets (think big data) and faster hardware as to actual changes in AI methodology. The thornier problems, like teaching computers natural language processing and leaps of logic, remain nearly as intractable now as they were a decade ago.
This may all be about to change. Last week, the British high priest of artificial intelligence, Professor Geoffrey Hinton, who was snapped up by Google two years back during its massive acquisition of AI experts, revealed that his employer may have found a means of breaking the AI deadlock that has persisted in areas like natural language processing.
AI Guru Geoffrey Hinton at the Google Campus
The hope comes in the form of a concept called “thought vectors.” If you have never heard of a thought vector, you’re in good company. The concept is both new and controversial. The underlying idea is that by assigning every word a set of numbers (a vector), a computer can be trained to understand the actual meaning of those words.
Now, you might ask, can’t computers already do that — when I ask Google the question, “Who was the first president of the United States?”, it spits back a short bit of text containing the correct answer. Doesn’t it understand what I am saying? The answer is no. The current state of the art has taught computers to understand human language much the way a trained dog understands it when squatting down in response to the command “sit.” The dog doesn’t understand the actual meaning of the words, and has only been conditioned to give a response to a certain stimulus. If you were to ask the dog, “sit is to chair as blank is to bed,” it would have no idea what you’re getting at.
Thought vectors provide a means to change that: actually teaching the computer to understand language much the way we do. The difference between thought vectors and the previous methods used in AI is in some ways merely one of degree. While a dog maps the word sit to a single behavior, using thought vectors, that word could be mapped to thousands of sentences containing “sit” in them.  The result would be the computer arriving at a meaning for the word more closely resembling our own.
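The dog’s failed analogy is exactly the kind of test word vectors can pass: subtract one vector from another, add a third, and look for the nearest word. Here’s a toy sketch; the tiny vocabulary and three-dimensional vectors are invented for illustration, whereas real systems learn hundreds of dimensions from billions of words:

```python
import math

# Hand-crafted toy vectors: (furniture-ness, resting-posture, upright-posture)
vocab = {
    "sit":   [0.1, 0.2, 0.9],
    "chair": [0.9, 0.2, 0.8],
    "lie":   [0.1, 0.9, 0.1],
    "bed":   [0.9, 0.9, 0.1],
    "table": [0.9, 0.1, 0.5],
    "sleep": [0.1, 0.9, 0.0],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = [vb - va + vc for va, vb, vc in zip(vocab[a], vocab[b], vocab[c])]
    candidates = (w for w in vocab if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("sit", "chair", "lie"))  # → bed
```

The arithmetic works because “chair minus sit” captures something like “the furniture for this posture,” and adding “lie” points the result at the furniture for lying down.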
While this sounds fine and dandy, in practice things will prove more difficult. For instance, there is the issue of irony, when a word is being used in more than just its literal sense. Taking a crack at his contemporaries across the pond, Professor Hinton remarked, “Irony is going to be hard to get, [as] you have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.” While this may provide some small relief to Hinton and his compatriots, regardless of which nationality gets bested by computers first, it’s going to come as a strange awakening when the laptop on the kitchen counter starts talking back to us.

By Ben Algaze
While I was researching another story, I ran across this Slate article referencing some futuristic Microsoft concept videos from 1999 and 2000. In these videos, it’s clear Microsoft did have vision (and continues to), and foresaw much of the evolution and innovation of the past 15 years. And it’s a bit of irony that this was on Slate, which started out as an online magazine on Microsoft’s MSN online service in 1996 (for kicks, check out Slate founding editor Michael Kinsley’s 2006 retrospective here).
At any rate, watch the videos and you’ll see rich online collaboration, smartphones, tablets, location-aware services, voice controlled devices, personalized cloud based content on multiple devices, and more. At the dawn of the millennium, Microsoft was one of the richest companies on the planet, sitting on over $20 billion in cash and continuing to grow revenues at a 25% annual clip. Despite its well documented foibles in many areas, today’s Microsoft continues to be cash rich and extremely profitable. But where did it go wrong?
Much has been written about Microsoft’s missteps of the past 15 years in particular. A large part of the blame has been directed at Steve Ballmer’s leadership of the company since 2000, and the company’s historically competitive culture. In most cases, the CEO of a company gets a disproportionate share of both the credit and blame for a company’s performance. Of course, credit or blame has to go somewhere, and the person at the top is the lightning rod.
But reality is usually more complicated than that. Most companies that once dominated their core markets, like IBM and Microsoft, also tend to be criticized for not being innovators. People say they’re just followers, adapters of others’ innovations, and better marketers. As companies build huge businesses, they tend to take fewer risks with them. While there is some truth to the innovation criticism, the real story is more nuanced.
The history of computing shows that one company rarely gets to dominate the next great technology shift. IBM dominated mainframes, successfully weathered the minicomputer wave, and created the PC architecture and market that opened the door for Intel and Microsoft. But IBM didn’t dominate these other businesses in the same way as mainframes. Microsoft dominated the market for PC operating systems, extended that dominance into PC applications, and successfully weathered the initial shift of computing to the Internet. But it failed to extend that dominance to Web services, mobile devices, cloud computing, or even gaming — despite investing tens of billions in those areas in the past two decades. 
Both IBM and Microsoft have other successful businesses, and each remains a powerful, profitable company in the Fortune 50. IBM has been around 100 years, and Microsoft for 40. The platform companies seen as leaders today, like Facebook, Google, and Amazon, have been around for 10 to 20 years. And today’s big gorilla, Apple, is a pioneer from Microsoft’s era, the era of the PC. Apple never dominated PCs despite being the early innovator, but its near-death experience in the mid-to-late 1990s caused management — aided by the return of Steve Jobs — to think about its core strengths and focus on a few key products. The other companies have seen one or two technology platform shifts. Time will tell if they will be able to be dominant in the next great platform, whatever that may be.
For now, let’s take a closer look at Microsoft, and some of the reasons why it may have missed creating the computing world it envisioned. This is by no means exhaustive or definitive. But there are some underlying themes that likely will apply — eventually — to some of Microsoft’s key competitors as well.

Early handhelds

As everyone knows, Microsoft is trying to maintain relevance in smartphones and tablets, two huge markets that have slowed the growth of PCs, and thus threaten Microsoft’s dominance. The irony, of course, is that Microsoft was a pioneer in this area, even though true to Microsoft form it was a fast follower, not an inventor of the product (as is today’s Apple).
It’s useful to trace back the history a bit. In the early 1990s, pen computing — stylus-based touch interfaces and handwriting recognition — was all the rage. Go introduced the PenPoint OS in 1992. Microsoft’s Windows for Pen Computing also debuted in 1992, aiming to make Windows the OS of choice for new handheld devices, even though Windows had yet to become the juggernaut it would be during the go-go PC growth years of the 1990s. Apple introduced the MessagePad in 1993, part of its Newton platform for these types of devices. For many reasons that would take too much space to cover here, none of these devices ever took off. Neither the hardware nor the software was mature enough at the time to make those products popular outside of some vertical applications. And Microsoft’s WinPad project, a collaboration with leading OEMs like Compaq, never even saw the light of day.
In the mid 1990s, Microsoft resurrected some of the OS work on that platform into a project called Pegasus, which became Windows CE. Windows CE was intended to be a smaller, lighter version of Windows, able to run on embedded devices (and non-Intel processors) and upcoming PDAs. Around the same time, Palm released its PalmPilot, one of the first commercially successful PDAs on the market; Nokia introduced the Communicator 9000; and IBM shipped the Simon. The Simon and the Communicator were probably the first devices we could consider smartphones, as each married mobile phone and PDA capabilities.

Microsoft was not left out of the PDA market. In 1996, the first Windows CE-powered devices appeared from Casio and NEC. The original OS was called Handheld PC, then PalmPC, and then Palm-sized PC, after Palm sued Microsoft over the name. The name changed again over time, to Pocket PC and then Windows Mobile. The evolution of the name itself is telling, as it was always tied to PCs and Windows. The original interface for Windows CE devices was a touch screen, operated with a finger (difficult) or a stylus, and it was essentially a miniature version of the Windows interface.
Using a Windows CE PDA was never as easy as using a Palm PDA; the Windows UI simply did not scale well to small handhelds. Relying on a stylus also made the devices tough to use one-handed, something we now do relatively easily with our smartphones. To be fair, Palm and other devices used styli too. The Holy Grail of useful handwriting recognition continued to be pursued, but the reality was that handwriting recognition remained clumsy and slow as an input mode, even though both Palm’s Graffiti and Microsoft’s own work made important strides.
In meetings at that time, Bill Gates didn’t use a laptop to take notes; he took copious notes on yellow legal pads. He also didn’t like having PowerPoint presentations projected, preferring them printed out so he could easily write on them and read ahead. He had long been personally passionate about embedding great handwriting recognition in a device. That passion drove much of the thinking around the user experience for Microsoft’s mobile efforts, and it would shape Microsoft’s later tablet efforts as well.


In the early 2000s, PDA functionality migrated to smartphones. BlackBerry (then Research In Motion) had started out manufacturing two-way pagers, but in 2000 it introduced the RIM 957, one of the first dedicated wireless email devices. In 2002, it followed with the BlackBerry 5810, its first product that was also a phone.
BlackBerry succeeded with these devices largely because of two factors. The first was a focus on email, the killer app that made the devices extremely popular with professionals. The second was the keyboard design, which let people input text quickly for email and other purposes, without slow and inaccurate handwriting recognition or cumbersome virtual keyboards.

What they grew were essentially miniature cortices
The cortex is basically the brain's outer layer of neural tissue. It's not very thick; in fact, it's been documented to measure merely 2 to 4 millimeters (0.079 to 0.157 inches).
Studies have shown that the cerebral cortex is involved in a great many processes, such as memory, awareness, perception, language, thought, and even consciousness.
Like other brain regions, the cortex can sometimes misfire. To be able to identify the problem and fix it, medical experts must first know exactly how the cortex works.
With this in mind, researchers at the National Institutes of Health set out to find a way to grow miniature cortices in the lab, just to be able to study them in detail. And they succeeded.
Well, they didn't grow cerebral cortices per se. What they did manage to create were three-dimensional structures in which cells talked to each other as if they were part of an actual brain, and which behaved strikingly like the real deal. Budding cortices, if you will.
Mind you, the scientists behind this research project are the first to admit that actual human brains are far more complex than the structures they grew in laboratory conditions.
Still, they say that their miniature cortices could serve as a model to study and better understand brain circuitry, and maybe even to test emerging treatments designed to address neurological issues.
“The cortex spheroids grow to a state in which they express functional connectivity, allowing for modeling and understanding of mental illnesses,” said scientist Thomas R. Insel.
“They do not even begin to approach the complexity of a whole human brain. But that is not exactly what we need to study disorders of brain circuitry,” he went on to explain.
The cortex is the brain's outer layer

About 10 are very important, while the rest are minor

Even though at first glance you might think that Android M doesn't bring many new features over the previous version of the operating system, once you actually install the developer preview there's quite a lot to take in.
There are lots of changes that weren't mentioned during Google's keynote where Android M was announced, but luckily the search giant listed all of them at the event.
Many of you who watched the keynote live already know about some of these new features like Doze, new volume controls, new settings theme and the brand-new drawer.
Others are specifically aimed at developers, while some of the features that haven't been listed by Google are simply “hidden.” We've already talked about three of them in our previous article, so let's focus on what Google actually announced.
Since the changelog is huge, we'll only discuss some of the most important changes and attach the full list at the end of the article.

Additional features might be introduced by the time it launches

Doze is a new feature meant to optimize your Android device's battery consumption. Using motion detection, it can tell when you've left your smartphone or tablet unattended.
If it detects that your device is idle, Android will “exponentially back off background activity, trading off a little bit of app freshness for longer battery life.” According to Google, Doze can improve battery life by up to 2x.
Then there are the simplified volume controls. Android M has three volume sliders: ringtone, apps, and alarms. You can set each of these three sliders to minimum, maximum, or vibration mode.
The new app drawer is a vertically scrolling pane with apps and services arranged in alphabetical order. It's also worth mentioning that there's a search box at the top that you can use anytime you need to find something.
For the full list of new features and improvements make sure you check out the changelog attached below.
1. Work Contacts in personal contexts
2. Hotspot 2.0 R1
3. VPN apps in settings
4. Flex Storage
5. Duplex Printing
6. App Standby
7. Seamless certificate installation for Enterprise
8. Undo/Redo keyboard shortcuts
9. Do Not Disturb automatic rules
10. Data Usage API for work profiles
11. Material design support library
12. Text selection actions
13. Improved text hyphenation & justification
14. Bluetooth SAP
15. Easy word selection and floating clipboard toolbar
16. Android Pay
17. Voice Interaction service
18. USB Type-C charging
19. App link verification
20. Battery historian v2
21. Simplified volume controls
22. Improved Bluetooth low energy scanning
24. Corporate-owned single-use device support
25. Do Not Disturb quick setting and repeat caller prioritization
26. Improved trusted face reliability
27. Fingerprint sensor support
28. Improved text layout performance
29. Alphabetic app list with search
30. Unified Google settings and device settings
31. Work status notification
32. MIDI support
33. 5GHz portable Wi-Fi hotspot
34. Bluetooth connectivity for device provisioning
35. Seven additional languages
36. Power improvements in Wi-Fi scanning
37. Data binding support library beta
38. Setup wizard: IMAP sign-in
39. Delegated certificate installation
40. Secure token storage APIs
41. Google Now Launcher app suggestions
42. New runtime permissions
43. Stylus support UI toolkit performance improvements
44. Chrome custom tabs
45. Auto backup for apps
46. Unified app settings view UI toolkit
47. Contextual assist framework
48. Enterprise factory reset protection
49. Direct Share
50. BT 4.2

Joab Jackson
The key to Java's success in remaining relevant on the ever-changing landscape of software development has been its relative simplicity.
On Wednesday, Oracle celebrated the 20-year anniversary of the birth of the Java programming language with a blitz of marketing. Certainly the largely pre-Internet IT landscape was far different when the language was introduced by Sun Microsystems (which was purchased by Oracle in 2010). Yet Java has remained on the development workbench when many other widely used languages of the 1990s, such as Delphi or Perl, have been pushed to the side or used only for a select set of duties.
 "The core values of the language, and the platform, are readability and simplicity," said Mark Reinhold, Oracle's chief architect for the company's Java platform group.
Today, you'd be hard-pressed to find another programming language in as many corners of the computer industry as Java. It routinely tops, or is near the top of, surveys of the most widely used programming languages. Oracle estimates that the language is used by over 9 million developers and powers more than 7 billion devices.
It acts as the engine for both very small devices and the largest cloud computing systems. Google uses the language as the basis for programs that run on Android-based mobile devices. On the other end of the spectrum, the Map/Reduce framework for the Hadoop processing platform requires Java code to crunch petabytes of data.
Programmers like Java because, among other things, it is a very readable language, compared to the thickets of dense code often produced using languages such as C++ or Perl. "It is pretty easy to read Java code and figure out what it means. There aren't a lot of obscure gotchas in the language," Reinhold said.
Readability is a particularly valuable trait for a programming language, especially one used for writing enterprise software, Reinhold explained. With complex software, programmers must be able to understand code that may have been written months, or even years earlier.
"Most of the cost of maintaining any body of code over time is in maintenance, not in initial creation," Reinhold said.
Other characteristics also have worked in Java's favor, Reinhold added. One is Java's long-touted "write once, run anywhere," capability. Because the code runs on the cross-platform Java virtual machine, developers can write a Java program on a Windows laptop, then run it on a Linux or Solaris server without recompiling the code for the new platform.
Oracle, and Sun before it, were also mindful about long-term compatibility, which helps keep perfectly serviceable software running for as long as possible. "Every time we do an update release, or a major release, we, and the entire ecosystem, are strongly committed that old applications will continue to work," Reinhold said.
For Al Hilwa, who covers software development for IDC, this long-term support, along with the "methodical evolution" of the language, is what gives Java its staying power.
"Using Java in Android was definitely something that has extended its life as a valuable skill-set and good Oracle governance in recent years has been helpful," Hilwa wrote in an e-mail. "The maturity of the technology ... should not be underestimated, especially when compared with the many dynamic languages that have become popular in recent years, though have not been able to exceed Java's adoption rate."
Oracle continues to move the language forward with these goals in mind. For the next major version of the language, Java 9, due in September 2016, the language's designers are reorganizing Java into a modular architecture.
The idea is to make Java more suitable for smaller devices, such as the expected wave of Internet of Things devices. "We want to subdivide it into modules so you can choose what you can use for your application," Reinhold said.
Such work may be instrumental in keeping Java vital for the next 20 years of computing.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is

By Jeff Friesen
Java is an object-oriented programming language, but there's more to Java than programming with objects. This first article in the Java 101: Foundations mini-series introduces some of the non-object-oriented features and syntax that are fundamentals of the Java language. Find out why Unicode has replaced ASCII as the universal encoding standard for Java, then learn how to use comments, identifiers, types, literals, and variables in your Java programs.
Note that examples in this article were written using Java 8.
Source code for "Elementary Java language features."
Created by Jeff Friesen for JavaWorld.

Unicode and character encoding

When you save a program's source code (typically in a text file), the characters are encoded for storage. Historically, ASCII (the American Standard Code for Information Interchange) was used to encode these characters. Because ASCII is limited to the English language, Unicode was developed as a replacement.
Unicode is a computing industry standard for consistently encoding, representing, and handling text that's expressed in most of the world's writing systems. Unicode uses a character encoding to encode characters for storage. Two commonly used encodings are UTF-8 and UTF-16.
Java supports Unicode. You'll learn later in this article how this support can impact your source code and compilation.
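To see how this support plays out in practice, here's a minimal sketch (the class name UnicodeDemo and its helper method are my own illustration, not from the article) that measures how many bytes the same five-character string occupies under the two common encodings mentioned above:

```java
import java.nio.charset.StandardCharsets;

public class UnicodeDemo {
    // Returns the number of bytes needed to store s in the given character encoding.
    static int encodedLength(String s, java.nio.charset.Charset charset) {
        return s.getBytes(charset).length;
    }

    public static void main(String[] args) {
        String text = "H\u00e9llo"; // five characters, one of them (é) non-ASCII
        // UTF-8 stores each ASCII character in one byte and é in two bytes.
        System.out.println(encodedLength(text, StandardCharsets.UTF_8));    // 6
        // UTF-16 stores every Basic Multilingual Plane character in two bytes.
        System.out.println(encodedLength(text, StandardCharsets.UTF_16BE)); // 10
    }
}
```

UTF-8 is compact for mostly-ASCII text, while UTF-16 spends two bytes on every BMP character; that trade-off is why both encodings remain in common use.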

Comments: Three ways to document your Java code

Suppose you are working in the IT department for a large company. Your boss instructs you to write a program consisting of a few thousand lines of source code. After a few weeks, you finish the program and deploy it. A few months later, users begin to notice that the program occasionally crashes. They complain to your boss and he orders you to fix it. After searching your projects archive, you encounter a folder of text files that list the program's source code. Unfortunately, you find that the source code makes little sense. You've worked on other projects since creating this one, and you can't remember why you wrote the code the way that you did. It could take you hours or even days to decipher your code, but your boss wanted a solution yesterday. Talk about major stress! What do you do?
You can avoid this stress by documenting the source code with meaningful descriptions. Though frequently overlooked, documenting source code while writing a program's logic is one of a developer's most important tasks. As my example illustrates, given some time away from the code, even the original programmer might not understand the reasoning behind certain decisions.
In Java, you can use the comment feature to embed documentation in your source code. A comment is a delimited block of text that's meaningful to humans but not to the compiler. When you compile the source code, the Java compiler ignores all comments; it doesn't generate bytecodes for them. Java supports single-line, multiline, and Javadoc comments. Let's look at examples for each of these.

Single-line comments

A single-line comment spans a single line. It begins with // and continues to the end of the current line. The compiler ignores all characters from // through the end of that line. The following example presents a single-line comment:

System.out.println((98.6 - 32) * 5 / 9); // Output Celsius equivalent of 98.6 degrees Fahrenheit.

A single-line comment is useful for specifying a short meaningful description of the intent behind a given line of code.

Multiline comments

A multiline comment spans multiple lines. It begins with /* and ends with */. All characters from /* through */ are ignored by the compiler. The following example presents a multiline comment:

/*
 An amount of $2,200.00 is deposited in a bank paying an annual
 interest rate of 2%, which is compounded quarterly. What is
 the balance after 10 years?

 Compound Interest Formula:

 A = P(1 + r/n)^(nt)

 A = amount of money accumulated after n years, including interest
 P = principal amount (the initial amount you deposit)
 r = annual rate of interest (expressed as a decimal fraction)
 n = number of times the interest is compounded per year
 t = number of years for which the principal has been deposited
*/
double principal = 2200;
double rate = 2 / 100.0;
double t = 10;
double n = 4;
System.out.println(principal * Math.pow(1 + rate / n, n * t));

As you can see, a multiline comment is useful for documenting multiple lines of code. Alternatively, you could use multiple single-line comments for this purpose, as I've done below:

// Create a ColorVSTextComponent object that represents a component
// capable of displaying lines of text in different colors and which
// provides a vertical scrolling capability. The width and height of
// the displayed component are set to 180 pixels and 100 pixels,
// respectively.
ColorVSTextComponent cvstc = new ColorVSTextComponent(180, 100);

Another use for multiline comments is in commenting out blocks of code that you don't want compiled, but still want to keep because you might need them in the future. The following source code demonstrates this scenario:

/*
if (!version.startsWith("1.3") && !version.startsWith("1.4"))
   System.out.println("JRE " + version + " not supported.");
*/

Don't nest multiline comments because the compiler will report an error. For example, the compiler outputs an error message when it encounters /* This /* nested multiline comment (on a single line) */ is illegal */.

Javadoc comments

A Javadoc comment is a special multiline comment. It begins with /** and ends with */. All characters from /** through */ are ignored by the compiler. The following example presents a Javadoc comment:
/**
 * Application entry point
 *
 * @param args array of command-line arguments passed to this method
 */
public static void main(String[] args)
{
   // TODO code application logic here
}

This example's Javadoc comment describes the main() method. Sandwiched between /** and */ is a description of the method and the @param Javadoc tag (an @-prefixed instruction to the javadoc tool).
Consider these commonly used Javadoc tags:
  • @author identifies the source code's author.
  • @deprecated identifies a source code entity (e.g., method) that should no longer be used.
  • @param identifies one of a method's parameters.
  • @see provides a see-also reference.
  • @since identifies the software release where the entity first originated.
  • @return identifies the kind of value that the method returns.
  • @throws documents an exception thrown from a method.
Although ignored by the compiler, Javadoc comments are processed by javadoc, which compiles them into HTML-based documentation. For example, the following command generates documentation for a hypothetical Checkers class:
javadoc Checkers.java
The generated documentation includes an index file (index.html) that describes the documentation's start page. For example, Figure 1 shows the start page from the Java SE 8 update 45 runtime library API documentation.

Figure 1. Java SE 8u45 runtime library API documentation was generated by javadoc.

Identifiers: Naming classes, methods, and more in your Java code

Various source code entities such as classes and methods must be named so that they can be referenced in code. Java provides the identifiers feature for this purpose, where an identifier is nothing more than a name for a source code entity.
An identifier consists of letters (A-Z, a-z, or equivalent uppercase/lowercase letters in other human alphabets), digits (0-9 or equivalent digits in other human alphabets), connecting punctuation characters (such as the underscore), and currency symbols (such as the dollar sign). This name must begin with a letter, a currency symbol, or a connecting punctuation character. Furthermore, it cannot wrap from one line to the next.
Below are some examples of valid identifiers:
  • i
  • count2
  • loanAmount$
  • last_name
  • $balance
  • π (Greek letter Pi -- 3.14159)
Many character sequences are not valid identifiers. Consider the following examples:
  • 5points, because it starts with a digit
  • your@email_address, because it contains an @ symbol
  • last name, because it includes a space
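These rules are easy to verify with the compiler itself. The hypothetical class below (my own illustration, not from the article) declares fields using only valid identifiers, with the invalid examples left in comments:

```java
public class IdentifierDemo {
    // All of these names follow the identifier rules described above.
    static int count2 = 2;              // letters plus a digit
    static double loanAmount$ = 1500.0; // currency symbol is legal
    static String last_name = "Doe";    // connecting punctuation is legal

    public static void main(String[] args) {
        // int 5points = 0;   // won't compile: begins with a digit
        // int last name = 0; // won't compile: contains a space
        System.out.println(count2 + " " + loanAmount$ + " " + last_name);
    }
}
```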
Almost any valid identifier can be chosen to name a class, method, or other source code entity. However, Java reserves some identifiers for special purposes; they are known as reserved words. Java reserves the following identifiers:
abstract  assert  boolean  break  byte  case  catch  char  class  continue
default  do  double  else  enum  extends  final  finally  float  for  if
implements  import  instanceof  int  interface  long  native  new  package
private  protected  public  return  short  static  strictfp  super  switch
synchronized  this  throw  throws  transient  try  void  volatile  while
The compiler outputs an error message when it detects any of these reserved words being used outside of their usage contexts; for example, as the name of a class or method. Java also reserves but doesn't use const and goto.

Types: Classifying values in your Java code

Java applications process characters, integers, floating-point numbers, strings, and other kinds of values. All values of the same kind share certain characteristics. For example, integers don't have fractions and strings are sequences of characters with the concept of length.
Java provides the type feature for classifying values. A type is a set of values, their representation in memory, and a set of operations for manipulating these values, often transforming them into other values. For example, the integer type describes a set of numbers without fractional parts, a twos-complement representation (I'll explain twos-complement shortly), and operations such as addition and subtraction that produce new integers.
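Twos-complement is explained shortly; as a quick preview of what that representation implies, here's a minimal sketch (the class name and helper method are mine, not from the article) showing that a 32-bit int wraps around at the edge of its range rather than growing indefinitely:

```java
public class TwosComplementDemo {
    // In twos-complement arithmetic, adding 1 to the largest 32-bit int
    // wraps around to the smallest, instead of overflowing with an error.
    static int increment(int value) {
        return value + 1;
    }

    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);            //  2147483647
        System.out.println(increment(Integer.MAX_VALUE)); // -2147483648
    }
}
```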
Java supports primitive types, reference types, and array types.

Primitive types

A primitive type is a type that's defined by the language and whose values are not objects. Java supports a handful of primitive types:
  • Boolean
  • Character
  • Byte integer
  • Short integer
  • Integer
  • Long integer
  • Floating-point
  • Double precision floating-point
We'll consider each of these before moving on to reference and array types.

Boolean

The Boolean type describes true/false values. The JVM specification indicates that Boolean values stored in an array (discussed later) are represented in memory as 8-bit integer values (a bit is a binary digit). Furthermore, when they appear in expressions, these values are represented as 32-bit integers. Java supplies AND, OR, and NOT operations for manipulating Boolean values. Also, its boolean reserved word identifies the Boolean type in source code.
Note that the JVM offers very little support for Boolean values. The Java compiler transforms them into 32-bit values with 1 representing true and 0 representing false.
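Here's a minimal sketch of those AND, OR, and NOT operations in source code (the class and its implies helper are my own illustration, not from the article):

```java
public class BooleanDemo {
    // Logical implication, built only from NOT (!) and OR (||).
    static boolean implies(boolean p, boolean q) {
        return !p || q;
    }

    public static void main(String[] args) {
        boolean isWeekend = true;
        boolean isHoliday = false;
        System.out.println(isWeekend && isHoliday); // AND: false
        System.out.println(isWeekend || isHoliday); // OR: true
        System.out.println(!isWeekend);             // NOT: false
        System.out.println(implies(isHoliday, isWeekend)); // true
    }
}
```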

Character

The character type describes character values (for instance, the uppercase letter A, the digit 7, and the asterisk [*] symbol) in terms of their assigned Unicode numbers. (As an example, 65 is the Unicode number for the uppercase letter A.) Character values are represented in memory as 16-bit unsigned integer values. Operations performed on characters include classification, for instance classifying whether a given character is a digit.
Extending the Unicode standard from 16 bits to 32 bits (to accommodate more writing systems, such as Egyptian hieroglyphs) somewhat complicated the character type. It now describes Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. If you want to learn about BMP, code points, and code units, study the Character class's Java API documentation. For the most part, however, you can simply think of the character type as accommodating character values.
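The classification operations just mentioned live in the java.lang.Character class. A short sketch (the class name CharDemo is mine) shows both the Unicode-number view of a char and character classification:

```java
public class CharDemo {
    public static void main(String[] args) {
        char upperA = 'A';
        // A char is a 16-bit unsigned integer, so its Unicode number is directly usable.
        System.out.println((int) upperA);            // 65
        // Classification: is a given character a digit or a letter?
        System.out.println(Character.isDigit('7'));  // true
        System.out.println(Character.isLetter('*')); // false
    }
}
```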

John Ribeiro
The administration of President Barack Obama sided with Oracle in a dispute with Google on whether APIs, the specifications that let programs communicate with each other, are copyrightable.
Nothing about the API (application programming interface) code at issue in the case materially distinguishes it from other computer code, which is copyrightable, wrote Solicitor General Donald B. Verrilli in a filing in the U.S. Supreme Court.
The court had earlier asked for the government's views in this controversial case, which has drawn the attention of scientists, digital rights groups, and the tech industry for its implications on current practices in developing software.
Although Google has raised important concerns about the effects that enforcing Oracle's copyright could have on software development, those concerns are better addressed through a defense on grounds of fair use of copyrighted material, Verrilli wrote.
Seventy-seven scientists, including Vinton "Vint" Cerf, Internet pioneer and Google's chief Internet evangelist, and Ken Thompson, co-designer of the Unix operating system, submitted to the court last year that the free and open use of APIs has been both routine and essential in the computer industry since its beginning, and has depended on the "sensible assumption" that APIs and other interfaces were not copyrightable.
Oracle accused Google of infringing its copyrights and patents related to Java in its Android operating system. Google was charged with copying the structure and organization of the Java API, in part to make it easier for developers, familiar with Java, to write programs for the mobile operating system.
The Internet giant, however, holds that the API code is not entitled to copyright protection because it constitutes a "method of operation" or "system" under Section 102(b) of the Copyright Act.
Judge William Alsup of the District Court for the Northern District of California ruled in 2012 that the APIs were not copyrightable, but this decision was overturned in May last year by the Court of Appeals for the Federal Circuit, which ruled that the Java API packages can be copyrighted. Google then asked the Supreme Court to review the Federal Circuit decision.
The uncopyrightable "method of operation" or "system" or "process" under Section 102(b) is the underlying computer function triggered by the written code, according to Verrilli. "The code itself, however, is eligible for copyright protection," he wrote.
The government in its filing asked the Supreme Court not to review the case and recommended its remand over Google's fair-use defense to the lower court.
"While we're disappointed, we look forward to supporting the clear language of the law and defending the concepts of interoperability that have traditionally contributed to innovation in the software industry," Google said in a statement Tuesday, in response to the government filing.
The Computer & Communications Industry Association said in a statement that the Justice Department got it wrong. Imposing legal constraints on the interoperation between programming languages can lead to serious competitive harm, it added.
Oracle said that the solicitor general's brief agrees with the Federal Circuit's decision and affirms the importance of copyright protection as an incentive for software innovation. The Federal Circuit had unanimously rejected Google's arguments that software is entitled to less copyright protection than other original, creative works, it added.
John Ribeiro covers outsourcing and general technology breaking news from India for The IDG News Service. Follow John on Twitter at @Johnribeiro. John's e-mail address is

Cameron Laird
Reflecting on Java's 20-year anniversary last week, the easiest future prediction to make is that someone will be suing someone else about Java in 2035. I assert this not just for the schadenfreude (are we still using that word in 2015?) but because it is true: Java has been a frequent presence in the courts since its launch in 1995.
The swirl of litigation around Java brings up a couple of key facts: First, different parts of the Java community see Java technology in different ways, sometimes to the point of almost complete disconnection. Depending on whom you ask, Java is a programming language, a platform, a library, an architecture, a virtual machine, or a family of different bundles of each of these. On the business side, it's also a brand, an academic tool, and a patent collection. Whenever you hear someone -- especially me! -- talking about Java in the abstract, make sure that you clearly understand which of these ideas about Java is being discussed.
Second, Java's prominence in the US legal system is one indicator of its importance. Whatever your own relationship to Java technology, it matters enough to the larger world of human affairs to be worth fighting about. Even if Java's stature wanes as much over the next two decades as, say, Fortran's has over the previous two decades, I predict that Java will remain a widely-used, indispensable programming platform and language.

What Java's past tells us about its future

Two classical parables help clarify Java. As I noted above, Java is very much the elephant in the parable of the blind men: In 2015, Java feels very different to an Android developer of children's games than to an enterprise programmer maintaining mission-critical ERP (enterprise resource planning) applications. When it was first released in 1995, Java's most important features included the following:
  • A relatively liberal license: Remember that it was almost three years later that open source became a term of art.
  • Portability: Moving software between Windows, early Macintosh operating systems, and a dozen different Unix variants was hard in the '90s; Java changed that.
  • Rigorous object-oriented syntax: Unlike other languages of that time, Java was designed for object orientation.
  • Security: Java eliminated the memory violations that were epidemic in C applications.
  • Performance: Java applications quickly surpassed the average attained by Perl and other languages with claims to portability and safety.
  • Built-in graphical user interface (GUI) toolkit: It's hard to express how radical it was in 1995 to conceive of portable GUI development, let alone portable GUI development without a licensing fee.
This list of Java's most laudable features in 1995 might look mundane today, but the technology was well ahead of its time; so much so that several of these features remained controversial for years. Well past the dot-com collapse, software engineers seriously argued about whether a language with automatic memory management or a virtual-machine architecture -- let alone both! -- could ever perform adequately when applied to enterprise-class problems.
One of Java's greatest accomplishments over the last two decades is to have successfully overcome many of the debates it spawned. Java itself is one of the proofs that mainstream programmers are better off leaving memory management to the compiler; that a portably-implemented language can also be swift; that serious organizations will rely at least in part on software for which they do not pay; and that even GUI elements can be programmed portably.
Java's subtler strengths have also been slower to be recognized. As an example, the first releases baked in Unicode, and they did so better than essentially all competing languages before or after. Unicode received less attention than it deserved in the mid-'90s, and many observers are only now catching on to the advantages of this universal standard for programming that's relevant across the globe.
Java platform innovation didn't end in 1995. Since its early days Java has made several order-of-magnitude leaps, from research project to desktop competitor to backend workhorse to, more lately, mobility language. Just as hundreds of millions of new consumers began to use Java on their mobile handsets, soon billions of networked devices will rely on Java-coded software.

How Java enables the present

This brings us to the second parable crucial to understanding Java: Heraclitus's river, or if you prefer Theseus's ship. In both parables the underlying theme is the paradox of essence: that even as its constituent water droplets flow past at each instant, the river itself persists through time. Similarly, almost all of Java's basic components have turned over in the past 20 years: Developers today use different JVMs, supporting libraries have mushroomed, and of course Java's target operating systems are light years away from what we were using in 1995.
Recall that Java was designed for television set-top boxes, which were proprietary embedded devices. Its first wide use was in Internet applets as a way to introduce dynamic elements to early HTML. Then it moved to the enterprise as a preferred medium for development of important "client-server" applications, and to college to teach the next generation of developers. In 2015, Java is evolving to become a serious real-time platform, making its way into medical devices, transportation equipment, and other safety-critical roles. Java is also well positioned to become the language of IoT.
One indicator and cause of Java's ongoing success is its place in schools. Java is widely used in classrooms both to introduce programming and to convey computer science, and it is also the vehicle for academic research that has improved garbage collection, compilation, encryption, networking, and many other practical computations.
At his most ambitious, Java inventor James Gosling couldn't have planned a more circuitous excursion for the language he created.

Future focus: Java in 2035

In 2035, I foresee developers putting in plenty of overtime to solve the looming 2038 datetime crisis. Java specialists will continue to fuss over new variations of the tools we use most -- new expectations for logging, for instance, haven't slackened at any point over the past 20 years, and I don't expect these eternal disputes to evaporate in the coming decades. Nigeria and Indonesia will be notable centers of Java talent. We'll also have plenty of surprises in the years ahead. But even if what we call programming looks unrecognizably different in 2035, a healthy portion of it will still be built on Java.

Paul Krill
Version 1.4 of the AngularJS JavaScript framework, focusing on enhancements for animation and performance, is now available.
AngularJS 1.4 offers refactored animation, which makes it possible to imperatively control CSS-based transitions/keyframes via a service called $animateCss. "We can now also animate elements across pages via animation anchoring," the Angular blog states.
JavaScript animation also is a focus. "AngularJS 1.4 and higher has taken steps to make the amalgamation of CSS and JS animations more flexible," AngularJS documentation states. "However, unlike earlier versions of Angular, defining CSS and JS animations to work off of the same CSS class will not work anymore."
To improve performance, version 1.4 implements a rewrite of the Angular expression parser and improvements to scope watching, the compiler, and the ngOptions attribute. For internationalization, version 1.4 improves i18n support for Angular apps. The first piece of this work features an ngMessageFormat module supporting the ICU MessageFormat interpolation syntax.
The $http service in AngularJS, for communication with remote HTTP servers, now implements a mechanism for providing custom URL parameter serialization, making it easy to connect to endpoints that expect parameters to follow jQuery-style serialization. Users will also be able to specify which version of jQuery, if any, they want to use. "This option also allows developers to instruct Angular to always use jqLite even if jQuery is present on the page, which can significantly improve performance for applications that are doing a lot of DOM manipulation via Angular's templating engine," says the documentation.
More than 100 bugs have been fixed in version 1.4, and two features -- the component helper and the component-oriented hierarchical router -- have been pulled. "The main reason for this decision," according to Angular, "was that both of these deliverables were not ready for the important task of simplifying the migration path from Angular 1 to Angular 2. Rather than delay the 1.4 release further, we decided to move these two deliverables into the 1.5 release."
This story, "AngularJS 1.4 is built for speed, animations" was originally published by InfoWorld.

Elliotte Rusty Harold
Remembering what the programming world was like in 1995 is no easy task. Object-oriented programming, for one, was an accepted but seldom practiced paradigm, with much of what passed as so-called object-oriented programs being little more than rebranded C code that used << instead of printf and class instead of struct. The programs we wrote in those days routinely dumped core due to pointer arithmetic errors or ran out of memory due to leaks. Source code could barely be ported between different versions of Unix. Running the same binary on different processors and operating systems was crazy talk.
Java changed all that. While platform-dependent, manually allocated, procedural C code will continue to be with us for the next 20 years at least, Java proved this was a choice, not a requirement. For the first time, we began writing real production code in a cross-platform, garbage-collected, object-oriented language; and we liked it ... millions of us. Languages that have come after Java, most notably C#, have had to clear the new higher bar for developer productivity that Java established.
James Gosling, Mike Sheridan, Patrick Naughton, and the other programmers on Sun’s Green Project did not invent most of the important technologies that Java brought into widespread use. Most of the key features they included in what was then known as Oak found their origins elsewhere:
  • A base Object class from which all classes descend? Smalltalk.
  • Strong static type-checking at compile time? Ada.
  • Multiple interface, single implementation inheritance? Objective-C.
  • Inline documentation? CWeb.
  • Cross-platform virtual machine and byte code with just-in-time compilation? Smalltalk again, especially Sun’s Self dialect.
  • Garbage collection? Lisp.
  • Primitive types and control structures? C.
  • Dual type system with non-object primitive types for performance? C++.
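Several entries in that list are visible together in just a few lines of Java. A minimal sketch (the Shape and Circle classes here are illustrative, not from any particular library):

```java
// A universal Object root (Smalltalk), single implementation inheritance plus
// multiple interfaces (Objective-C's model), compile-time static type-checking
// (Ada), and C++-style non-object primitives like 'double'.
import java.io.Serializable;

abstract class Shape {                      // implicitly extends Object
    abstract double area();
}

class Circle extends Shape implements Comparable<Circle>, Serializable {
    private final double radius;            // 'double' is a primitive, not an object

    Circle(double radius) { this.radius = radius; }

    @Override
    double area() { return Math.PI * radius * radius; }

    @Override
    public int compareTo(Circle other) {    // mismatched argument types fail at compile time
        return Double.compare(radius, other.radius);
    }
}
```

A Circle can be passed anywhere a Shape, a Comparable, a Serializable, or an Object is expected -- four types, one implementation chain.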
Java did, however, pioneer new territory. Nothing quite like checked exceptions existed in any language before, and nothing quite like them has appeared since. Java was also the first language to use Unicode in the native string type and in the source code itself.
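Both of those firsts are easy to demonstrate in a few lines (the Firsts class and the notes.txt file name below are made up for illustration):

```java
// Checked exceptions: the compiler refuses to compile a call to readFirstLine
// unless the caller either catches IOException or declares it.
import java.io.FileReader;
import java.io.IOException;

public class Firsts {
    static String readFirstLine(String path) throws IOException {
        try (FileReader reader = new FileReader(path)) {
            StringBuilder line = new StringBuilder();
            int c;
            while ((c = reader.read()) != -1 && c != '\n') {
                line.append((char) c);
            }
            return line.toString();
        }
    }

    public static void main(String[] args) {
        // Unicode in the native string type: no special wide-string variant needed.
        String greeting = "こんにちは, svět";
        System.out.println(greeting.length());   // length in UTF-16 code units

        try {
            readFirstLine("notes.txt");          // omitting this try/catch is a compile error
        } catch (IOException e) {
            System.out.println("handled: " + e.getMessage());
        }
    }
}
```

Delete the try/catch and the program stops compiling -- that enforcement at compile time, rather than a runtime surprise, is the whole point of checked exceptions.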
But Java’s core strength was that it was built to be a practical tool for getting work done. It popularized good ideas from earlier languages by repackaging them in a format that was familiar to the average C coder, though (unlike C++ and Objective-C) Java was not a strict superset of C. Indeed it was precisely this willingness to not only add but also remove features that made Java so much simpler and easier to learn than other object-oriented C descendants.
Java did not (and still does not) have structs, unions, typedefs, and header files. An object-oriented language not shackled by a requirement to run legacy code didn’t need them. Similarly Java wisely omitted ideas that had been tried and found wanting in other languages: multiple implementation inheritance, pointer arithmetic, and operator overloading most noticeably. This good taste at the beginning means that even 20 years later, Java is still relatively free of the “here be dragons” warnings that litter the style guides for its predecessors.
But the rest of the programming world has not stood still. Thousands of programming languages have risen since we first started programming Java, but most never achieved more than a minuscule fraction of collective attention before eventually disappearing. What sold us on Java were applets, small programs running inside of Web pages that could interact with the user and do more than display static text, pictures, and forms. Today, this doesn’t sound like much, but remember -- in 1995, JavaScript and the DOM didn’t exist, and an HTML form that talked to a server-side CGI script written in Perl was state of the art.
The irony is that applets never worked very well. They were completely isolated from the content on the page, unable to read or write HTML as JavaScript eventually could. Security constraints prevented applets from interacting with the local file system and third-party network servers. These restrictions made applets suitable for little more than simple games and animations. Even these trivial proofs of concept were hampered by the poor performance of early browser virtual machines. And by the time applets’ deficiencies were corrected, browsers and front-end developers had long since passed Java by. Flash, JavaScript, and most recently HTML5 caught our eyes as far more effective platforms for delivering the dynamic Web content Java had promised us but failed to deliver.
Still, applets were what inspired us to work with Java, and what we discovered was a clean language that smoothed out many of the rough edges and pain points we’d been struggling with in alternatives such as C++. Automatic garbage collection alone was worth the price of admission. Applets may have been overhyped and underdelivered, but that didn’t mean Java wasn’t a damn good language for other problems.
Originally intended as a cross-platform client library, Java found real success in the server space. Servlets, JavaServer Pages, and an array of enterprise-focused libraries that were periodically bundled together and rebranded in one confusing acronym or another solved real problems for us and for business. Marketing failures aside, Java achieved near-standard status in IT departments around the world. (Quick: What’s the difference between Java 2 Enterprise Edition and Java Platform Enterprise Edition? If you guessed that J2EE is the successor of JEE, you got it exactly backward.) Some of these enterprise-focused products were on the heavyweight side and inspired open source alternatives and supplements such as Spring, Hibernate, and Tomcat, but these all built on top of the foundation Sun set.
Arguably the single most important contribution of open source to Java and the wider craft of programming is JUnit. Test-driven development (TDD) had been tried earlier with Smalltalk. However, like many other innovations of that language, TDD did not achieve widespread notice and adoption until it became available in Java. When Kent Beck and Erich Gamma released JUnit in 2000, TDD rapidly ascended from an experimental practice of a few programmers to the standard way to develop software in the 21st century. As Martin Fowler has said, "Never in the field of software development was so much owed by so many to so few lines of code," and those few lines of code were written in Java.
Twenty years since its inception, Java is no longer the scrappy upstart. It has become the entrenched incumbent other languages rebel against. Lighter-weight languages like Ruby and Python have made significant inroads into Java’s territory, especially in the startup community where speed of development counts for more than robustness and scale -- a trade-off that Java itself took advantage of in the early days when performance of virtual machines severely lagged compiled code.
Java, of course, is not standing still. Oracle continues to incorporate well-proven technologies from other languages such as generics, autoboxing, enumerations, and, most recently, lambda expressions. Many programmers first encountered these ideas in Java. Not every programmer knows Java, but whether they know it or not, every programmer today has been influenced by it.
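Those later borrowings layer together naturally in everyday code. A small sketch (the ModernJava class is illustrative):

```java
// Generics, autoboxing of int into Integer, a type-safe enumeration, and a
// Java 8 lambda expression -- all features Java adopted after its 1.0 release.
import java.util.Arrays;
import java.util.List;

public class ModernJava {
    enum Priority { LOW, HIGH }     // an enumeration, not a bare int constant

    public static void main(String[] args) {
        List<Integer> counts = Arrays.asList(3, 1, 2);   // int literals autoboxed to Integer

        int total = counts.stream()
                          .filter(n -> n > 1)            // lambda expression (Java 8)
                          .mapToInt(Integer::intValue)   // method reference, unboxing back to int
                          .sum();

        System.out.println(total + " " + Priority.HIGH); // prints "5 HIGH"
    }
}
```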
This story, "Java at 20: How it changed programming forever" was originally published by InfoWorld.

