When ‘Her’ Becomes a Reality, She’ll Be a Digital Booth Babe

My latest Huffington Post blog post is up. It starts off with me talking about the Tamagotchi in relation to the Spike Jonze movie ‘Her’ and ends with a denouncement of sexism in tech trade shows. I’m pretty confident that the progression makes sense, but I’ll let you decide:

When ‘Her’ Becomes a Reality, She’ll Be a Digital Booth Babe

Her

An exclusive for this blog is coming soon, as well as, potentially, some pretty awesome news.

Mat

Moto G Review

Selecting a budget smartphone usually means compromising on performance and features just to stay within a sub-£200 price range. But Motorola’s first smartphone to get a UK release since being acquired by Google – the Moto G – comes packing an impressive set of specs for a paltry £135 price tag. So what’s the catch?

Moto G

The device itself has a fairly typical layout: power button and volume rocker on the right-hand edge, 3.5mm headphone jack atop and micro-USB port beneath. At the fore we have the Moto G’s 4.5-inch LCD touchscreen, speaker, mic and 1.3 MP front-facing camera. The notification light next to the front camera was a great design choice on Motorola’s part, as it glows softly rather than flashing brightly, meaning you could happily ignore it in a darkened bedroom at night but still notice it when you want to.

Unlike a lot of Android phones, the Moto G has no physical or capacitive navigation buttons below the screen; these are instead drawn on-screen by the OS. This was presumably a way to save costs on the casing, since the gap left behind isn’t filled with anything and makes the screen seem a little off-centre, though it does act as a handy place to grip the phone while watching videos.

Considering Motorola’s history of designing handsets with quirky and interesting form factors, it’s a little disappointing that the Moto G is such a generic black rectangle, but this is understandable given the price. Many low-cost phones try to make up for lacklustre specs with a gimmicky design and the results are often hideous and tacky, so Motorola’s cost limitations may have turned out to be a strength.

Having said that, the Moto G comes out of the box sporting a glossy black back-cover that gives it a fragile and distinctly toy-like feel. The back can be replaced with a selection of coloured shells (£8.99) or flip covers (£18.99) slated to reach UK shores before the end of the year. The flip covers in particular, as they’re made of a more durable textured plastic, seem like they’d offer the best protection against the elements long-term, though they strike me as a little pricey for what they are.

Moto G Flip covers and back shells

But really it’s what’s under the shell that has everyone talking about the Moto G, and for good reason. It’s powered by a Qualcomm Snapdragon 400 chipset with a quad-core Cortex-A7 CPU clocked at 1.2GHz – not mind-blowing, but very impressive for the price – and packs a respectable 1GB of memory. Navigating menus and using less processor-intensive features was as slick as you’d expect, and the phone coped admirably even when switching rapidly between apps, with no visible latency. Though you wouldn’t expect a supposedly budget device to be much good for gaming, its Adreno 305 graphics chip is shared by a number of mid-range phones and, paired with that capable CPU, delivers decent frame rates, making the Moto G a competent gaming device.

It comes with a comparatively meagre 8GB of storage, though a 16GB model is available for an extra £25, and there’s no way of supplementing that with an SD card. It also lacks 4G connectivity, which may be a dealbreaker in the US and some other countries but isn’t really a problem if you’re in the UK and live outside the major cities.

The Moto G flaunts a crisp 720p screen, matching that of yesteryear’s flagships like the Nexus 4 and Galaxy S3, and plays HD video with incredible sharpness. My only complaint is that the LCD display lacks the colour richness you’d get with an AMOLED screen, giving videos a slightly washed-out appearance. The rear camera is perfectly serviceable and about what you’d expect for this price bracket. It won’t win any awards, but it’s decent enough for the casual photographer, and it runs Motorola’s own camera software, with a varied but straightforward menu of settings to control photo quality.

None of this comes at the expense of battery life either: the 2,070 mAh battery is a stalwart companion in keeping the Moto G running. With Android’s built-in battery saver systems, I was able to eke out a good 36 hours of life with moderate use, and even a little over 12 hours when I was hammering it with updates, games and music streaming. Given the hardware it has to support, Motorola might have rendered the Moto G almost unusable if they’d skimped on the battery, so it’s encouraging to see that thought went into details like this.

Android KitKat

At the moment, the Moto G comes running the slightly older Android 4.3 Jelly Bean but is slated to receive an update to the latest version (KitKat) in January, with reports that this has already begun rolling out to certain devices. Whilst the Android OS itself hasn’t undergone much alteration, Motorola has thrown in a ‘Migrate’ app that streamlines the process of copying the data on your old handset over to the Moto G (assuming it was also an Android phone). There’s also ‘Assist’, a somewhat grandly named app that simply lets you set times for your phone to fall silent automatically, such as during meetings or at night.

Along with the normal selection of apps for Google’s services pre-installed on the phone, you’ll be invited to enable ‘Google Now’ on first startup. This is effectively a system that automatically delivers time- and location-sensitive information to your phone’s notifications window, such as traffic conditions for your commute home, the weather and nearby restaurants. It’s a nice idea, but I found it lacking in customisation, since it’s almost entirely automated rather than letting you adjust when certain notifications arrive. Eventually I just switched it off.


The Moto G is a great device all round and almost indistinguishable in performance from a mid-range handset costing upwards of £100 more. It’s not without compromises, but Motorola has clearly taken pains to make them strategically: saving money in specialist areas, like the camera and case design, and putting it into improving the experience for the general user. It’s received rave reviews elsewhere, and I think you can fairly predict that it’s going to be a game-changer in the budget mobile arena for 2014.

Christmas adverts are weird

I realise the title of this post will probably draw in the anti-consumerism crowd, which is misleading since I love Christmas and (as a gadget reviewer) have a vested interest in its commercialisation. However, by mentioning it I’ve already skewed the Google ranking, so while I’m at it: Free iPad Air, Star Wars Episode VII leaked trailer, Miley Cyrus and cute cat videos. Anyway, my enjoyment of Christmas does not blind me to just how bizarre the elaborate seasonal adverts put out by high-street shops each year have become.


Samsung’s Missed Opportunity

In announcing the Galaxy Gear, Samsung took the opportunity to address one of the most frequent criticisms of smartwatches: the fact that they run on a battery at all. Being characteristically power-hungry gadgets, with a form factor that limits their battery size to the thickness of a toenail, smartwatches are likely to run out of juice at inconvenient times – after which their users are sporting the latest in wrist-paperweights.

However, rather than unveil some revolutionary new way to keep it chugging along for aeons, all Samsung did was acknowledge the problem and boast vaguely about its battery life.


[Timestamps make embedded YouTube videos cry, so skip to 3:26 to see what I mean]

Don’t get me wrong: 25 hours is pretty impressive if they can deliver on it, and the idea of charging their tech overnight is nothing new for most people. Initial reviews would appear to indicate that the Gear can indeed sustain a full day’s worth of charge, even with heavy use, despite only sporting a 315 mAh battery.
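As a rough back-of-the-envelope check (my own arithmetic, not a figure quoted by Samsung or the reviewers), a 315 mAh cell lasting 25 hours implies a very modest average current draw:

```latex
\bar{I} \approx \frac{315\,\text{mAh}}{25\,\text{h}} \approx 12.6\,\text{mA}
```

That’s plausible for a device whose screen is dark most of the day, but it leaves very little headroom once the display, camera or Bluetooth link sees heavier use.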

However, the multitude of ways that people will (eventually) find to use smartwatches, and the fact that the Gear’s li-ion battery will deteriorate with age, mean that the time it lasts on one charge can vary drastically. The battery life is far from clockwork, and users will inevitably find themselves limp-wristed at impractical times.

Portable gadgets have been around long enough for people to clock the idea of buying a spare charger to keep in the office or to carry their primary one with them. If you’re at your desk, it’s not usually a problem to leave your mobile phone charging at a nearby socket but continue to make use of it as normal. Every feature of your phone can be used without having to drastically change the way you manipulate it.

Image source: Gizmag


The whole appeal of smartwatches is to offload some of your phone’s functionality onto an easily accessible wrist-worn device. But in order to charge the Galaxy Gear, you must remove the wrist-straps and set the main device into a cradle that looks like an S&M rack for Smurfs.

Whilst this holds the device in a semi-usable position during its recharging cycle, it means you are no longer using it for its primary purpose. You’ve relegated the smartwatch to a superfluous miniature smartphone that can only be used to control your other smartphone: separated from your wrist and tied to the wall socket, where using it is no easier – if anything, harder – than whipping out your phone.

This is not a problem exclusive to the Galaxy Gear, but as one of the first major companies to jump into this potentially competitive market (apart from Sony’s oddly underplayed entry), Samsung has missed an opportunity to distinguish itself from the existing competitors and from those yet to come. Even an unidentified Samsung executive has supposedly concurred with several underwhelmed reviews in saying that the Galaxy Gear “lacks something special”.

To be useful, the smartwatch concept must include a way of charging the device without removing it from your wrist or tethering yourself to a mains socket like a cyborg-imposed leash law. That can only mean that smartwatch charging must go wireless.


Inductive charging had its commercial heyday a few years ago in the form of third-party accessories for the major smartphones. But these were simply middlemen, since they usually came as a pad or surface that the handset (sporting a specialised case) still had to make physical contact with. Another form of wireless charging exists without this limitation.

Electrodynamic induction (otherwise known as resonant inductive coupling) enables the wireless transmission of electricity across short distances. The process uses a resonating magnetic coil connected to a power supply, which causes it to produce a low-frequency electromagnetic field. When a secondary “capture” coil resonating at the same frequency is introduced within that field, it can absorb the energy that the source is transmitting. This can then be converted into electricity in the recipient device in order to charge it.
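The “tuning” involved is ordinary LC resonance. As a rough sketch of the principle (my own illustration, not a detail taken from WiTricity’s materials): each coil, with inductance L and capacitance C, has a natural frequency

```latex
f_0 = \frac{1}{2\pi\sqrt{LC}}
```

and the transmitting and receiving coils are built so their values of f_0 coincide. Objects that don’t resonate at that frequency couple only weakly to the field, which is why the transfer can be reasonably efficient and selective over short distances.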

The technology has been around for a while but was most recently developed by a team of MIT researchers led by Marin Soljačić, which spawned the company WiTricity. CEO Eric Giler demonstrated the technology at the TED Global Conference in 2009.


Imagine a smartwatch fitted with a miniature capture coil “tuned” to the resonant frequency of a coil in its charger, plugged into the mains on the other side of the room. The user could continue to wear and use the device as normal, as well as move freely within the admittedly limited range of the field, as it charged itself. Then your battery life is preserved exclusively for when you’re on the move and away from a plug socket.

Samsung have missed an opportunity to innovate by failing to see the potential of wireless charging in wearable technology. Not only would it have resolved one of the biggest drawbacks of smartwatches, it would have given the Galaxy Gear a distinctive edge to help Samsung seize that crucial early dominance in the market. Moreover, a successful proof of concept for wireless power would have given the technology the long-overdue legitimacy it needs to be integrated into other devices, kick-starting a revolution in electronics that history would say started with Samsung.

Wearable Technology will succeed. Eventually.

You’d be forgiven for getting optimistic about wearable technology lately – almost every public appearance by Sergey Brin has made him look like a motivational speaker for the Borg, and several tech firms have shown off their first offerings of carpal-computers, like Samsung’s Galaxy Gear. Even I, though usually a cynic – sorry, skeptic – can see the rise of wearable tech resulting in something really inventive, but I think it has a long way to go yet.

Wisely, technology pundits have been careful not to write off the idea of wearable technology too early, as such predictions usually come back to bite them – witness the embarrassing backlog of 2006 articles laughing off the iPhone. Many have suggested that, as with the Jesus Phone, whilst it may be difficult for us ivory-tower tech writers to conceive of a practical use for the technology (or ‘weartech’, as it’s sometimes referred to, by me alone), surely those cleverclogs app developers can. The result is a repeated insistence that someone might, maybe, perhaps hit upon an idea for a weartech-specific app so darn helpful that it launches the category into the mainstream. However, this comparison is not a valid one, as it ignores the circumstances that allowed the iPhone and its app ecosystem to thrive.

Galaxy Gear

This is the first time that manufacturers have created form factors with no precedent and are looking to – indeed, depending on – app developers to assign them a purpose. Mobile phones were already ubiquitous when the iPhone was announced, and manufacturers had long since hit upon the idea of the handset being more than just your basic blower. There was a proven market for mobile devices – old enough to have already refined the form factor and normalised it with consumers – and clear demand for them to be multi-purpose tools.

The original iPhone was successful even without third-party apps (only introduced with the iPhone 3G) because it did all the things we’d come to expect from a phone (and more) really well. But without an antecedent market for mobile phones, the iPhone would have been attempting to create and popularise an entirely new type of contraption, rather than build on an existing one, and its success would have been far less assured.

Even tablet computers had a precursor (of sorts) in the form of netbooks. Their fleeting success towards the end of the last decade proved that a market for smaller computers existed, to complement smartphones rather than compete with them. Steve Jobs introduced the original iPad to replace netbooks as this third-category device.

Google Glass Fitness App (yes, really)

That’s not to say that no useful applications for wearable tech exist, but these tend to be gimmicky or niche, or both. For smartwatches at least, their use as fitness monitors could result in respectable sales amongst exercise enthusiasts. But when many cheaper wrist-worn activity trackers already exist, it’s hard to see how these users will regard the Galaxy Gear’s other features as anything other than expensive add-ons. Samsung may have announced what will turn out to be the most versatile pedometer in history. There are far more worthy uses for wearable technology than just calorie-counting, of course, such as medical applications, but nothing that would put a smartwatch on every wrist or a Glass over every (other) iris in the consumer space.

The supposed selling point behind a lot of the consumer weartech being created at the moment is that it’ll link with your smartphone and mirror some of its functionality and notifications – such as SMS and email messages – onto a screen visible somewhere on your body. Given that most of these products currently match (or exceed) the average price of a smartphone, it’s not wise for manufacturers to position the tech as a mere accessory to your phone.

Moreover, the limitations of the form factor would soon outweigh the novelty of using it. Sneaking a sideways glance at your phone is much more compatible with our sense of decorum than bellowing “OK Glass” in the middle of a crowded room, or having an intimate conversation with someone whilst stroking your temple like you’re trying to coax a tapeworm out of your skull. For a much more in-depth look at why using a watch as a phone would be a surreal and impractical experience, see this biased rant – I mean, objective analysis.

“OK Glass”

This reliance on the inventiveness of third-party developers is a backwards and potentially ruinous strategy for companies like Google and Samsung who, though in different ways, are trying to be the first movers in the weartech market – especially since the sheer variety of forms that wearable technology can take means it will initially be very difficult to create apps without heavily fragmented support. Whereas a smartphone or tablet has a very limited and easily generalised set of interfaces, an application for weartech will have to account for each device’s unique ergonomics and quirks.

As the field matures and the myriad types of wearable tech become more clearly defined, this will become easier, since the best ways of handling user interaction will evolve over successive generations. But with no comparable precedent and a lack of useful applications right now, wearable technology must rely on its novelty driving enough sales to reach that level of development. The company that makes wearable technology a success will need to be patient, attentive to feedback and tolerant of making a loss at first, but (if done right) the result could be truly revolutionary. Wearable technology can succeed, but now is not the right time.

Nokia Lumia and Windows Phone – Needs of the Many

Since September, Nokia have churned out ten different Lumia devices of massively varying specifications and sizes – not including the 810, which was discontinued in April. Whilst Microsoft’s Windows Phone software is licensed to HTC, Samsung and Huawei for use on their handsets, around 80% of WP7 and WP8 devices currently in use worldwide are Nokias. The Finnish company, in particular, has a vested interest in helping Windows Phone to grow, since their strong association – albeit not an exclusive one – will hopefully feed back into a resurgence in Nokia sales. The overload of Lumias seems to be an attempt to appeal to every section of the market, but is that really the best strategy?

The Nokia Lumia range

For all concerned, the separation of device and OS into two distinct entities has been a welcome change. For the hardware makers, this frees up time and resources to focus on the device itself without having to go through the rigamarole of tailoring bespoke software to run on it. Naturally Nokia, who clung to its own Symbian software as late as 2011, has taken full advantage of this judging by the plethora of new Lumias. However, entrusting the OS – and by extension most of the user experience – to a different company altogether is risky. If the user is dissatisfied because of a problem in the OS then they’ll think less of every logo attached to it, regardless of their culpability in the fault.

Whilst the relative homogeneity of an OS makes this less of a risk, the more handsets the manufacturer produces the less time they have to perfect the integration between hardware and software. Apple’s iPhone – being a single device with a homegrown OS – has the benefit of being tightly integrated whereas other manufacturers have to adapt both their hardware and the OS to smooth the synthesis. Having to repeat this process for many handsets, each with varying specifications and quirks, means that corners will inevitably be cut.

Of course, the average consumer doesn’t usually notice these things, so you could argue that it makes sense for manufacturers to offer as wide a variety of handsets as possible so that people are more likely to find a device that suits their needs (not to mention their wallet). Whilst this is true in theory, it assumes that the average consumer has the time or inclination to exhaustively research every handset presently on the market, let alone make sense of what the information means in practice and how the handsets compare – a choice now exacerbated by the need to pick a preferred OS as well.

This is where the simplicity of Apple’s single-device approach shines – albeit helped largely by the power of their brand – as it allows people to choose the most up-to-date version of a phone that they (at least anecdotally) know to be good without having to weigh up all the options. Thereafter, the deep platform lock-in that has been ingrained into iOS since the very first iPhone means that customers are far less likely to stray after they’ve sunk a great deal of time, money and content into the Apple ecosystem. The fact that Apple got there first means that this success could not easily be replicated, even by them.


Infographic showing Android device fragmentation in 2013. Source: OpenSignal

But how can that be when Android has a majority market share and continues to grow each year? Consider that all the major spikes in Android’s growth since its introduction have been on the back of single flagship devices. The HTC Dream, better known as the T-Mobile G1, kicked off this trend, and a succession of distinctly recognisable HTC devices (the Desire, Hero and Nexus One, for example) facilitated Android’s rise in its early years. More recently, as the infographic above demonstrates, the most prominent Android phones have all been from Samsung’s Galaxy line (primarily the S3), and presently the S4 seems to be the most recognisable “iPhone-alternative”.

The app ecosystem is a factor that even the most technophobic smartphone users will take into account when selecting a mobile OS, and it’s an area where Windows Phone has a lot of catching up to do. Nokia has a strategic role to play in helping to tempt app developers to the Windows Phone platform and bolster Microsoft’s claim to the “third ecosystem”. Too many varying handsets and the result will be fragmented support and deterred developers, as we’ve seen happen to Android. Its initial popularity, before the discrepancies became too obvious, helped Android survive as a profitable system for developers, but Nokia and Microsoft have no such head start.

Microsoft imposes strict hardware requirements on the manufacturers it licenses Windows Phone to, which should prevent the OS from becoming fragmented. Nokia needs to appreciate the necessity of this and resist using the influence it has with Microsoft – as the most popular Windows Phone manufacturer – to demand that the restrictions be lifted so it can churn out more phones.

With ten impressive Lumias already on the market, Nokia should slow down and let the most popular ones shine through, giving them a basis on which to create a more recognisable smartphone brand that will endure regardless of Windows Phone’s ultimate fate.