November 2017

Joel Hruska
Earlier this week, Intel confirmed it would work with AMD to develop a new GPU for use in its NUC (Next Unit of Computing) systems. The news sent waves through the tech community, both because it had been previously rumored (and specifically denied by Intel) and because it’s the first such collaborative product effort between AMD and Intel in… well, basically ever. The two companies may work together in joint efforts to write standards or as members of other tech organizations, but they haven’t jointly announced collaborative products in decades.
Thanks to an earlier leaked roadmap and an image from Chiphell, we can now start putting things together on what the new NUC will look like and be capable of. First, here’s the leaked photo from Chiphell:
[Image: Hades Canyon package, via Chiphell]
A few thoughts, in no particular order: The CPU is going to be the top left block (the dark patch could be burn damage), while the GPU and a single stack of HBM2 are at the other end of the package. What kind of performance can we expect from that kind of configuration?
[Image: SK Hynix HBM2 implementations]
This slide is a few years old, but it illustrates the HBM2 product stack fairly well. HBM2 supports capacities of 2GB to 8GB per stack, with 128GB/s to 256GB/s of memory bandwidth per stack. A comparable dual-channel APU or Intel on-die GPU using DDR4-3200 would offer 51.2GB/s of total memory bandwidth, which means this new GPU will decisively outpace the old–the only question is by how much.
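For the curious, the arithmetic behind those bandwidth figures is simple enough to check. Here's a quick back-of-the-envelope sketch in Python; the bus widths are standard (64 bits per DDR4 channel, 1,024 bits per HBM2 stack), and the per-pin data rates are illustrative:

# Peak-bandwidth arithmetic for dual-channel DDR4 vs. a single HBM2 stack.
def ddr4_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    # transfers per second * 8 bytes per 64-bit channel * channel count
    return mt_per_s * bus_bytes * channels / 1000.0

def hbm2_bandwidth_gbs(gbps_per_pin, bus_width_bits=1024):
    # per-pin data rate * 1,024-bit stack interface, converted to bytes
    return gbps_per_pin * bus_width_bits / 8.0

print(ddr4_bandwidth_gbs(3200))  # 51.2 GB/s for dual-channel DDR4-3200
print(hbm2_bandwidth_gbs(1.0))   # 128.0 GB/s, slowest HBM2
print(hbm2_bandwidth_gbs(2.0))   # 256.0 GB/s, full-speed HBM2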
According to an earlier leaked roadmap courtesy of PC Perspective, Intel is planning three separate Hades Canyon SKUs with a 46W, 66W, and 96W TDP:
[Image: Hades Canyon roadmap, via PC Perspective]
One of the two Hades Canyon devices shown on the left roadmap is labeled as “Hades Canyon VR,” but the three potential SKUs on the right differ only in their TDPs. If we assume the 46W part is the already-launched Core i7-6770HQ, that leaves the 66W and 96W TDPs still to populate, and the difference between them is likely to come down to the GPU. And a GPU in a 96W combined-TDP form factor may well be capable of at least basic VR, though we’ll have to see how performance looks to determine that.
It’s not hard to see how this would play out. Intel can offer different Vega configurations to hit different performance targets by adjusting the GPU and HBM2 clock speeds. We’ve spent several years speculating on how HBM2 could deliver the APU performance AMD has long promised–I admit, when we started covering the topic we didn’t think it would arrive in an Intel SKU, but this program should yield dividends for both companies.
*No images of Hades Canyon have yet been made available; our feature image is an earlier model.

Bill Howard
There’s little clarity in the future of Faraday Future, the three-year-old California startup hoping to build high-end electric vehicles. Reports have top executives exiting, Faraday backing away from plans to build a $1 billion Nevada factory, and rumored documents discussing bankruptcy possibilities.
Meanwhile, the company has kept a high profile, sponsoring Formula E (EV) racing, taking part in the Pikes Peak Hill Climb, and staging large booths at the annual CES show.
Faraday Future at CES 2017: big booth, big turnout, one car on display.
Stefan Krause, a veteran of BMW and Deutsche Bank, resigned in mid-October. He had been brought on in March 2017 as chief financial officer in hopes of securing substantial investment for FF, on the order of $1 billion. Jalopnik reported Thursday that Krause was likely to be gone in coming days, then learned he had bailed a month earlier. According to Jalopnik, Krause confirmed his October 14 departure but declined further comment, and Faraday Future also declined to comment.
Faraday Future FF 91 climbing Pikes Peak in record time for an EV.
FF sponsored Formula E racing (source: LAT/Formula E)
More executives are apparently on the way out. The Verge said chief technology officer Ulrich Kranz, another BMW veteran who joined in July, “is leaving or has already left the company,” along with Bill Strickland, formerly of Ford, who was charged with running Faraday Future’s future production line.
FF’s initial plan was to build a $1 billion manufacturing facility in Nevada. Faraday had early funding from Jia Yueting, former CEO of Chinese technology company LeEco; Jia was also the controlling shareholder of Faraday Future. Over the summer, Faraday adjusted its sights downward, from a new plant in Nevada to a used one in the Golden State, leasing an existing manufacturing facility in Hanford, California, 200 miles north of FF’s Los Angeles headquarters. The lease was arranged by Krause, who pledged the headquarters facility as collateral.
According to Jalopnik, leaked documents say Faraday Future may be considering a bankruptcy filing. FF says that’s not the case, and if there are “bankruptcy documents” being shown around, they’re fake.
All in all, Faraday Future has come a long way down from its aspirations three years ago to create a half-dozen models, at least one of them with more performance than the fastest Tesla. With Krause gone, it’s unclear who at Faraday Future has the ability to bring on new investors. And without new money, Faraday faces a difficult future.

Joel Hruska
Supernovae are among the oldest recorded astronomical phenomena in human history. In 185 AD, Chinese astronomers recorded the appearance of a star that appeared suddenly in the night sky, did not move like a comet, and remained visible for eight months before fading again. Over 2,000 years, Chinese astronomers recorded roughly 20 supernovae, in some cases corroborated by Islamic, European, and Indian sources.
While the modern history of supernova observation is much shorter, we’ve trained telescopes on the areas of sky where the ancient “guest stars” appeared and, in some cases, found likely candidates for the historical event. In all our observations, there’s been one steady assumption–that a supernova is the final cataclysmic death of a star, in which the outer shell of material around the core is blown outwards at up to 10 percent of the speed of light. Stars, in other words, don’t go supernova more than once. Except… we’ve found one that has. Repeatedly.
Writing in Nature, an international research team discusses the highly unusual case of iPTF14hls, first classified as a Type II-P supernova on January 8, 2015. At first, this appears to have been an open-and-shut designation (Type II-P supernovae are the only known phenomena that produce the spectra observed for iPTF14hls). The team writes:
In a type II-P supernova, the core of a massive star collapses to create a neutron star, sending a shock wave through the outer hydrogen-rich envelope and ejecting the envelope. The shock ionizes the ejecta, which later expand, cool and recombine. The photosphere follows the recombination front, which is at a roughly constant temperature (T ≈ 6,000 K) as it makes its way inward in mass through the expanding ejecta (that is, the photosphere is moving from material that is further out from the exploding star towards material that is further in, but the material inside the photosphere is expanding in the meantime). This leads to the approximately 100-day ‘plateau’ phase of roughly constant luminosity in the light curve and prominent hydrogen P Cygni features in the spectrum.
But iPTF14hls didn’t play nice. Instead of plateauing over 100 days, it lasted more than 600, with five distinct peaks in its light curve over that time.
[Image: iPTF14hls light curve]
In the image above, there’s an implicit peak to the far left of the graph (since the light emission continued to decrease after the star was first observed, it must have been higher in the past). We then see it rise, dip, and rise again. Then the star moved behind our sun (that’s the gap in the data), only to re-emerge at a higher apparent magnitude than it had previously. The light plateau of a standard II-P supernova, SN1999em, is shown in the bottom left. Moreover, the temperature has stayed fairly constant, while its brightness varied by as much as 50 percent.
What’s even stranger–and this is already plenty strange–is that we observed a similar phenomenon over 50 years ago. In 1954, an eruption was recorded at the same position as iPTF14hls, as shown in the plate below. By 1993, the explosion had vanished, but now it’s back again. Supernovae are incredibly bright; our feature image above shows a supernova that’s literally outshining the nearby galaxy. But a star that repeatedly explodes? That’s something new.
One potential explanation, the BBC notes, is that this star is actually a pulsational pair-instability supernova. If true, it would be the first one we’ve ever seen (they’ve been predicted, but we’ve never found one). In theory, the star could be creating antimatter in its core, which would lead to “pair instability” between positrons and electrons. In a full pair-instability supernova, the production of antimatter in the core reduces its internal pressure, which leads to a partial collapse, which kicks off an explosion so massive that not even a black hole or stellar remnant is left behind. Pulsational pair-instability theory predicts that a star could blow off a substantial percentage of its total mass in such an eruption without completely exploding.
The trouble with this explanation is that it doesn’t explain why large amounts of hydrogen continue to be detected around the star decades after the 1954 burst. In short, we don’t have a great explanation for this star’s behavior, yet–and it’s an excellent example of how, even after millennia of watching the sky, we’re still learning how much we don’t know.

Ryan Whitwam
Most of the mechanical keyboards marketed as “gaming” boards don’t have many features that make them demonstrably better for gaming. That’s not the case with the Wooting One, an intriguing mechanical keyboard that launched on Kickstarter earlier this year. It looks like a fairly standard keyboard, but the switches are an entirely new design that features optical analog input–they’re not just on or off like other switches.
The Wooting One gives you more control over movement in games, similar to the analog stick on a controller, but with the added precision of a mouse. You might see a real advantage in certain titles. However, not all games play nicely with analog switches.

The Flaretech Switches

The designers of this keyboard have come up with two switches: a red switch and a blue one. The stems aren’t colored to match the names, but the properties of the switches are an approximate match for the standard Cherry Red and Blue switches. The reds are linear, so there’s no click or bump as you press. They have a 55g operating force, which makes them a medium-weight switch (slightly heavier than a Cherry Red). The Flaretech Blue has the same operating force, but there’s a click after about 1.7mm of travel.
If you’re not sure which of these switches sounds more appealing, there’s good news: The Wooting One supports swappable switches. Most keyboards require switches to be soldered into place, and even those with “hotswap sockets” often fail after a few uses. There are no pins on these switches, so they just clip into the plate and sit above the PCB. If you buy a Wooting One, you can choose red or blue for the board, but you get a kit of four extras of both types. So, you can swap some of the other flavor on your board to test them out. There’s also a premium bundle that comes with a set of both switches.
When you take a keycap off the Wooting One, it doesn’t look much different than a standard mechanical keyboard. The Flaretech switches use the standard Cherry-style cross stem, which means they can accept custom keycaps made for MX boards. When you look inside the switches, things get weird.
Other mechanical switches have a metal contact of some sort inside that is triggered when you push the stem down. The Flaretech switches don’t have that. Inside is just the stem, spring, and a light pipe. The light pipe is a nice touch as it allows light from the RGB LEDs on the circuit board to shine up through the top of the switch and the transparent stem.
The inside of a Flaretech Red. Note the prism on the left side of the stem and light pipe on the far right.
The spring appears to be a typical design you’d find in any Cherry-style switch, but the stem is unique. There’s a small prism protruding from the side, so it moves up and down as you press the switch. The PCB does all the work with an infrared optoelectronic sensor. As the prism moves up and down, the sensor registers the distance as analog data. Because this is all handled by the PCB and not the switch, you can do a lot of wild stuff with the Wooting One.
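To make that concrete, here's a hypothetical sketch of the normalization step involved, not Wooting's actual firmware; the ADC calibration values and the 4mm total travel are assumptions for illustration:

# Hypothetical: normalize a raw optical-sensor reading into key travel.
ADC_AT_REST = 120      # assumed reading with the key up
ADC_BOTTOMED = 900     # assumed reading with the key fully pressed
TOTAL_TRAVEL_MM = 4.0  # typical full travel for a Cherry-style switch

def key_depth_mm(adc_value):
    # Clamp out sensor noise, then map the reading onto 0-4mm of travel.
    adc_value = max(ADC_AT_REST, min(ADC_BOTTOMED, adc_value))
    fraction = (adc_value - ADC_AT_REST) / (ADC_BOTTOMED - ADC_AT_REST)
    return fraction * TOTAL_TRAVEL_MM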

The Board and Software

To customize your experience with this keyboard, you’ll need the cleverly named Wootility desktop client. It’s available for Windows and macOS with a Linux version coming soon. This app lets you customize the color of each LED on the board however you like, and multiple profiles can be configured for different games.
The keycaps are ABS with shine-through legends. The quality is okay—similar to what you get with other consumer keyboards.
The Wooting One has analog data from all the switches, so it can basically pretend to be a game controller–it uses either XInput or DirectInput. So, you can press a key down a little, and it’s like you nudged an analog stick slightly in one direction. The most obvious advantage here is that your WASD cluster movements can speed up or slow down based on how hard you press the switch.
There are some very cool options built into Wootility that are only possible thanks to the optical analog switches. You can change the analog curve of the gamepad output, which controls how much stick movement is emulated as you press. You can even change the actuation point of the switches to be higher or lower.
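As a rough illustration of what those two settings are doing under the hood (a sketch under assumed values, not Wootility's actual code), both boil down to simple mappings from key depth:

# Hypothetical: a movable actuation point plus an analog response curve.
ANALOG_START_MM = 1.0  # assumed start of the analog window
ANALOG_END_MM = 3.0    # ~2mm of analog travel, per this review

def digital_pressed(depth_mm, actuation_mm=2.0):
    # Adjustable actuation point: the key "fires" wherever you set it.
    return depth_mm >= actuation_mm

def analog_axis(depth_mm, curve_exponent=1.0):
    # Map key depth onto a 0.0-1.0 gamepad axis value. Exponents below 1
    # front-load sensitivity; exponents above 1 push it toward bottom-out.
    t = (depth_mm - ANALOG_START_MM) / (ANALOG_END_MM - ANALOG_START_MM)
    t = max(0.0, min(1.0, t))
    return t ** curve_exponent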
The board itself has a tenkeyless layout, so there’s no number pad. The top plate is aluminum, and the keys have a floating design. That exposes the edges of the switches under the keycap. You’ll be able to see some of the light spilling out under there, which is a fun effect.
On the underside, you’ve got flip-out feet and a micro USB port in a small recess. Removable cables are a nice bonus on a keyboard. You can get fancy themed cables to match your keycaps or just bundle up the stock cable when you’re moving the board around, so it doesn’t get in the way.

Gaming

I tested the Wooting One in a few games, and it worked mostly as expected. Analog input on a keyboard takes some getting used to, and some games won’t work correctly. While many PC games do support controllers, they won’t let you use a controller and a mouse at the same time. I was unable to get Fallout 4 to work with the Wooting One, but Doom and Counter-Strike seem to work well. Rocket League works great after some fiddling with profiles.
The Wooting One is far from a plug-and-play gaming experience right now, but it seems like you can get most games working. Is the analog input actually an advantage? I’m not completely sold on that yet. You’ve got about 2mm of travel that can be used for analog sensing (about half the total switch travel), and that means you need to carefully control how much pressure you apply. It’ll take practice to master this.
So, is this the gaming keyboard you’ve been wanting? Maybe… if you don’t mind tinkering with things to get them working. I don’t know that you’ll benefit much in an FPS, but driving and flying games could be much easier to play on the Wooting One. The optical analog switches are also super-cool. Pricing starts at $160, which is similar to other high-end keyboards marketed to gamers.

Joel Hruska
Yesterday, we reported on Logitech’s bone-headed decision to pull all support for its Harmony Link system in March 2018. The issue was exacerbated by Logitech’s baffling decision to offer a free upgrade to the Harmony Hub only to Harmony Link customers with an existing warranty, while giving earlier customers a 35 percent discount on the purchase price of the same upgrade. None of this went over well with customers who had perfectly functional hardware already and weren’t interested in replacing it.
Now, Logitech has pulled something of an about-face. The company now promises that at some point between now and March 2018, it will provide a free Harmony Hub to existing Link owners. If you already redeemed your 35 percent discount on the Harmony Hub, the full amount of your purchase will be refunded and you will still receive the item.
HarmonyHub
First of all, it’s good Logitech is taking a better approach to this situation; offering customers a free upgrade is a better resolution than suddenly turning off their old hardware. But there’s a larger issue at stake here that Logitech’s offer of a free Harmony Hub really doesn’t address: When is the hardware you purchase actually yours?
In its FAQ, Logitech writes:
We made the business decision to end the support and services of the Harmony Link when the encryption certificate expires in the spring of 2018 – we would be acting irresponsibly by continuing the service knowing its potential/future vulnerability.
That’s a valid concern. Security problems within the IoT are legion, and consumers need solutions they can trust. The problem is, security systems also need to ultimately be answerable to the end user in cases where the company in question isn’t interested in maintaining the product. End-user communities have formed to perform this task for routers and phones in many cases. While their work has never been perfect (being open source doesn’t magically make projects free of security flaws), there needs to be a way either to deploy additional security systems in place of certificate-based security or to allow an outside group to take over such projects.
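For readers wondering what an expiring encryption certificate means in practice, here's a minimal sketch (Python standard library; the hostname is a placeholder, not Logitech's actual endpoint) that reports how long a server's TLS certificate has left. Once that number hits zero, clients that validate certificates, as the Harmony Link does, simply stop trusting the connection:

# Check how many days remain on a server's TLS certificate.
import socket, ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname, port=443):
    # Note: the handshake itself fails if the certificate has already expired.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

print(cert_days_remaining("example.com"))  # placeholder host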
In an interview with Wired, Rory Dooley, head of Logitech Harmony, stated: “You’re always learning. The best way of learning is when you stumble, as we did here. Having an easy path for the customer that’s using a product and using a service is the right way of looking at this. We didn’t look at it that way, unfortunately. And we’ve learned from it.”
But Dooley seems to be learning the wrong lesson. The lesson here is not that Logitech has to have a path forward for all customers in perpetuity. At a certain point, in fact, it would become economically infeasible to keep giving an expanding customer base an upgrade to the latest and greatest model simply because they owned the previous one. And while I genuinely give Logitech credit for its prompt response to customers, kicking the can down the road doesn’t solve the fundamental ownership issue. So long as Logitech or any other company has the right to brick your hardware at a moment of their choosing, it isn’t even your IoT device. You’ve paid $100 to $300 (possibly more, depending on configuration options) to rent hardware that could literally be worthless in six months. And since total compatibility for IoT and smart home devices is still more an aspiration than reality, you could find yourself deploying an entirely new system to replace it, with no guarantee it’ll fare better than the first.
Any company that wants to actually popularize IoT would do well to pay attention to issues like this. Customers certainly will. People may not understand device security, but they can certainly figure out that paying $100 to $300 for products with a 2-4 year lifespan is a much worse deal than doing things the old-fashioned way.

Ryan Whitwam
Amazon has released a lot of products over the years that raise some eyebrows among the more security-conscious. The concerns are usually about digital security, though. The newly available Amazon Key service allows the company’s delivery personnel to open your locked door in order to deliver a package. People are understandably a little uncomfortable with the idea.
To use Amazon Key, you need to be in one of 37 cities where the company has its own warehouses and delivery operation. Then, it’s a simple matter of buying one of the Key bundles, which start at $250. They include one of Amazon’s new Cloud Cam home security cameras and one of several different smart locks specially equipped for Amazon Key. The system is controlled via the Key app, so you can remotely lock and unlock your door whenever you like.
The camera appears to have a smart hub of some sort inside (either Z-Wave or Zigbee, probably) to control the smart lock—it’s a little more expensive than the non-Key edition of the camera. When you have a package coming from Amazon, the delivery person can simply unlock your Key-enabled door lock and set the package inside. Amazon stresses this is only possible when you’ve granted access. The connected camera will also record the delivery, assuming it’s pointed at your door.
Amazon is far from the first company to offer a home security camera or a smart lock, but other systems don’t let a third party actually open your door or trigger camera recordings. That has traditionally been a major no-no from a privacy and security standpoint. It also raises some interesting legal questions. Could police obtain an order for Amazon to unlock your doors or trigger your camera? Would that even require a warrant? Some lawyers have suggested that installing Key would mean a homeowner could have no “reasonable expectation” of privacy.
In the terms and conditions, Amazon specifies that it accepts responsibility for making sure its personnel complete the delivery and lock up afterward. However, it says unlocking or providing access via Amazon Key hardware is entirely up to you. So, if you remotely unlock the door for someone else and they steal your Xbox, that’s not Amazon’s fault. Additionally, by using Key, you agree to use binding arbitration to settle any disagreements.
Key bundles are shipping now, but you have to be in a supported area to order them. You also need to have a Prime membership.

 Ryan Whitwam
Shortly after self-driving car company Waymo was spun off from Google, it announced a new generation of autonomous vehicles that were rolling out in real-world tests in select areas. Now, Waymo is taking the next step toward a truly driverless vehicle by actually making them driverless. The company is conducting a limited test of its self-driving cars in Chandler, Arizona without a safety driver.
Waymo started as a project in Google’s secretive X division around eight years ago. In every public road test since then, the Google/Waymo cars have been under the supervision of a human driver who sat in the front seat. The idea was they could take control if the car was about to do something wrong and cause an accident. However, Google says that rarely happened. In fact, the only accidents with Google’s self-driving cars occurred when the human was driving.
Even with years of successful tests behind it, there’s understandable hesitance to begin testing cars without drivers. Still, Waymo is forging ahead in Chandler, where the cars are going it alone on the open road. Technically, most of these tests will still have a Waymo employee on board to monitor the car’s systems. However, they’ll be in the back seat, where they cannot easily intervene in the event of an error. The cars can’t just drive off into the sunset, though. Waymo has locked the cars to a geofence covering about 100 square miles of Chandler (a Phoenix suburb) and the surrounding area.
Waymo’s cars use radar, lasers, and regular cameras to spot objects up to 300 feet away. They can see much better than a human, but making sense of the world is tough for a computer. Waymo seems to be confident with this truly driver-free test. The local government in Chandler seems happy to let Waymo conduct its tests, and Arizona does not have many laws governing the use of driverless vehicles. Companies aren’t even required to disclose accidents, but it’s safe to say Chandler wouldn’t be so welcoming if Waymo’s cars started running into things.
Waymo previously started an Early Rider program to give Chandler residents a chance to get around the city by calling for an autonomous vehicle. They’ll be the first to ride around in the cars sans driver. Maybe Waymo will eventually feel confident enough to roll this out as a real service, but we’re probably still at least a few years away from that.

By Joel Hruska
Sixty-six million years ago, an asteroid roughly 9 kilometers in diameter slammed into the Gulf of Mexico near the Yucatán Peninsula. The massive impact created the Chicxulub crater (named for a nearby town) and wiped out the dinosaurs. It’s one of the most-studied mass extinction events. A new analysis of the impact suggests that we, by which I mean mammals and other modern species, were extremely lucky. If the impact had happened a few minutes later, we might never have existed at all.
Here’s why: In 1980, a team led by Luis Alvarez discovered evidence of a thin layer of iridium deposited across the Earth. That was a noteworthy discovery because iridium is comparatively rare on the planet’s surface–finding a thin layer of it distributed across huge swaths of the planet at a specific moment in geologic time suggested a massive impact by an asteroid rich in iridium. This boundary layer is referred to as the K-T or K-Pg boundary.
But that’s not all we’ve found. There’s a layer of soot distributed across the world as well. One of the coolest things about the K-Pg boundary is that you can actually see it in various rock formations with no prior geological training:
[Image: The Cretaceous-Paleogene boundary clay at Geulhemmergroeve]
Image courtesy of Wikipedia
A recent paper by Kunio Kaiho and Naga Oshima investigated why the K-Pg impact was so destructive and came to a startling conclusion: If the impact had happened just a few minutes later, it might not have kicked off a mass extinction at all. According to their study, the Chicxulub impact happened in an area of the Earth that was unusually rich in hydrocarbons and decayed organic matter, as shown below:
[Image: Hydrocarbon-rich regions at the time of the Chicxulub impact]
The Chicxulub impact smashed into one of the few spots on Earth where there were huge concentrations of hydrocarbons laid down by the decay of animals and plants over millions of years. Just 13 percent of the planet held those deposits at the time. A huge volume of soot from the burned material was ejected into the air, leading to a catastrophic drop in global temperatures, possibly aided by high levels of sulfur.
While Kaiho and Oshima argue that this global layer of soot drove the mass extinction event, other scientists aren’t so sure. “The 13 percent number they’re quoting has a lot of assumptions based around it,” Sean Gulick, a geophysicist at the University of Texas at Austin, told the Washington Post. The asteroid churned up soot, he said, but soot was “not the driver” that killed the dinosaurs.
In truth, there are multiple potential drivers that could have collectively contributed to the event. The asteroid struck a relatively shallow body of water, increasing the amount of ejected material flung back into the atmosphere. The impact event could have fed the ongoing eruption of the Deccan Traps, a massive volcanic formation in India that may have played a role in multiple extinction events. If the Chicxulub impact event had occurred over the deepest part of the Pacific, the asteroid would have had to vaporize seven miles of water before hitting the bottom of the ocean. While that’s still a tremendous impact, it would have bled off a non-trivial amount of the asteroid’s impact energy and limited the amount of material released into the atmosphere.
In short, this is an interesting argument for the uniqueness of the Chicxulub impact and the evolution of mammals leading to the existence of our own species. But it’ll be difficult to ever come up with a single unified explanation that absolutely answers our questions about what led to the extinction of the dinosaurs and our own existence.


Geezgo - Tester's Entry Page


Brief Introduction

Geezgo is a generic social network service that allows users to connect, collect, save, and share anything online through its data-sharing process, called streaming. It was created by, and is owned by, Geezgo Limited, based in Wellington, New Zealand. Geezgo does not charge users to use the platform. It was initially created in 2015 as a private social network where close networks of families and friends, otherwise known as Closenets, could share messages, photos, news, and other ideas. As a private social network service (PSNS), Geezgo did not discriminate on age or gender when admitting users, which allowed the platform to be built with people of all ages and genders in mind. Geezgo operated as a PSNS for about two years, from November 2015 to November 2017, when the developer decided to let the public test-drive the platform via an invitee code with unlimited-time use, with the intention of releasing the entire platform to the world by midnight on December 31, 2017, New Zealand time.

While Geezgo is made in New Zealand, it is a global platform, accessible through its base URL at https://www.geezgo.com. It currently has over 180 combined market and social Streams (and counting) created by users from all over the world. Users can access Geezgo through any web browser.



How Geezgo Works
This information has been left out intentionally; it will be emailed to you, along with your invitee code, when you request to test-drive the platform.

How To Request Geezgo's Invitee Code

1) Go to https://www.geezgo.com.
2) A page like the image above will open. Click Request Invitation, fill out the form, and submit it to Geezgo.
3) Receive your invitee code at the email address you provided in step 2.
4) Return to https://www.geezgo.com, enter your name and invitee code (from anywhere, at any time), and click Test-Drive-Geezgo to enter the platform.
5) Once you land on the Geezgo platform, sign up and activate your account.

That's all there is to test-driving Geezgo. 

Expect more updates on Geezgo here as they come. See you soon on geezgo.com.


Geezgo Team

By Ryan Whitwam
The FBI and Apple could be headed for another showdown after the FBI reportedly found itself stymied by an encrypted iPhone. The device in question belonged to Devin Kelley, the suspected shooter in last weekend’s Texas church shooting. The agency initially refused to identify the phone’s make, but it’s now telling media that it’s an Apple device. The FBI has had a lot to say about encryption in recent years, and this could add fuel to the fire.
The Sutherland Springs shooter’s phone is encrypted, which prevents investigators from accessing its contents. That’s the default setting on most phones now–both Android and iOS encrypt device storage, and there’s no way to access that data if a phone has a secure unlock method like a PIN or fingerprint. Even biometrics like fingerprint unlock aren’t enough to unlock a phone after it has been idle for too long. At that point, you need the phone’s password to gain access.
Apple and the FBI previously butted heads over the phone belonging to the San Bernardino shooter in late 2015. In that case, the iPhone was issued by the county government that employed the shooter before the attack. County IT specialists reset the device’s iCloud password after the attack, but that only served to make backups inaccessible. The FBI asked Apple to bypass the device’s encryption, but the company refused, saying any tool to break its encryption would be dangerous for all of its users.
In the end, the FBI dropped the case after it found a third-party firm to unlock the phone. These sorts of undisclosed vulnerabilities are highly prized among security firms, and the FBI paid handsomely for the unlocking service.
FBI Director Christopher Wray in 2017.
The agency appears to have learned its lesson from the mistakes in its San Bernardino investigation. Following the Sutherland Springs, TX shooting, the FBI began a forensic examination of the shooter’s phone. It didn’t immediately demand that Apple unlock the device. In fact, Apple had to reach out to the agency to find out whether the phone was an iPhone.
The FBI is still looking for backups of the phone’s data on a laptop or online. That could provide the agency with what it needs and save it from another messy court battle. Last time, some Apple engineers pledged to quit before they built a tool that could break encryption on the iPhone. Apple’s legal team also seemed ready to take the case all the way to the Supreme Court.
If the data on the shooter’s phone is of that much interest to the FBI, we could be looking at another legal showdown.

By Joel Hruska
At least some iPhone X units have a significant cold-weather bug that causes the screen to become unresponsive once the temperature drops. The problem, as described by various users, is that the display becomes unresponsive in cold weather, recognizing only ~20 percent of touches. Apple has acknowledged the issue and says it will be fixed in an upcoming software update.
“We are aware of instances where the iPhone X screen will become temporarily unresponsive to touch after a rapid change to a cold environment. After several seconds the screen will become fully responsive again,” Apple said in a statement to ZDNet. “This will be addressed in an upcoming software update.”
The funny thing about the iPhone X’s panel issues in cold weather is that almost none of the iPhones I’ve ever owned has handled cold weather well. I can’t honestly say if this is a problem with Android devices–I’ve never personally tested one in Svalbard-like conditions–but here’s what I’ve seen:
My iPhone 4s’ screen would never stop responding altogether, but either the phone itself or the display would slow to a crawl. Swiping left or right was extremely sluggish. My iPhone 3G (the first I owned) and my iPhone 5c both turned off, even when new, if the temperature was cold enough. My fiancée’s iPhone 5 is even worse and turns off if the temperature is ~45 degrees. All of these devices exhibited these behaviors from the moment they were purchased; none developed them over time. Attempting to keep the device warm by playing games, hooking it to a portable battery, or running the flashlight while gaming never made a difference. I’m curious–if you have a newer Apple device (or have used one in the past), did you ever see this behavior? (Where you live will definitely make a difference; New York State in winter is much colder than the typical Midwest.)
[Image: iPhone X]
I’m glad that Apple is attacking these issues on its new flagship device and I hope they can be resolved. But it’s amusing that this was apparently never an issue that needed fixing until it hit the company’s $1,000 flagship.
In other news, Apple has also acknowledged that the OLED panel on the iPhone X will inevitably suffer burn-in, and that this can only be mitigated, not eliminated. A support document for the iPhone X states:
With extended long-term use, OLED displays can also show slight visual changes. This is also expected behavior and can include “image persistence” or “burn-in,” where the display shows a faint remnant of an image even after a new image appears on the screen. This can occur in more extreme cases such as when the same high contrast image is continuously displayed for prolonged periods of time. We’ve engineered the Super Retina display to be the best in the industry in reducing the effects of OLED “burn-in.”
The company suggests shutting the screen off after a short period of time via Auto Lock and using automatic brightness.
