Moto G Review

Selecting a budget smartphone usually means compromising on performance and features just to stay within a sub-£200 price range. But Motorola’s first smartphone to get a UK release since being acquired by Google – the Moto G – comes packing an impressive set of specs for a paltry £135 price tag. So what’s the catch?

Moto G

The device itself has a fairly typical layout: power button and volume rocker on the right-hand edge, 3.5mm headphone jack atop and micro-USB port beneath. At the fore we have the Moto G’s 4.5-inch LCD touchscreen, speaker, mic and 1.3 MP front-facing camera. The notification light next to the front camera was a great design choice on Motorola’s part, as it glows softly rather than flashing brightly, meaning you could happily ignore it in a darkened bedroom at night but still notice it when you want to.

Unlike a lot of Android phones, the Moto G has no dedicated navigation buttons below the screen; these are instead drawn on-screen by the OS. This was presumably a way to save costs on the casing, since the bezel left behind isn’t filled with anything and makes the screen seem a little off-centre, though it does act as a handy place to grip the phone while watching videos.

Considering Motorola’s history of designing handsets with quirky and interesting form factors, it’s a little disappointing that the Moto G is such a generic black rectangle, but this is understandable given the price. Many low-cost phones try to make up for lacklustre specs with a gimmicky design and the results are often hideous and tacky, so Motorola’s cost limitations may have turned out to be a strength.

Having said that, the Moto G comes out of the box sporting a glossy black back-cover that gives it a fragile, distinctly toy-like feel. The back can be replaced with a selection of coloured shells (£8.99) or flip covers (£18.99) slated to reach UK shores before the end of the year. The flip covers in particular, being made of a more durable textured plastic, seem like they’d offer the best long-term protection against the elements, though they strike me as a little pricey for what they are.

Moto G Flip covers and back shells

But really it’s what’s under the shell that has everyone talking about the Moto G, and for good reason. The Moto G is powered by a Qualcomm Snapdragon 400 chipset with a quad-core Cortex-A7 CPU clocked at 1.2GHz – not mind-blowing, but very impressive for the price – and packs a respectable 1GB of memory. Navigating menus and lighter tasks felt as slick as you’d expect, and the phone coped admirably with rapid app-switching with no visible latency. Though you wouldn’t expect a supposedly budget device to be much good for gaming, its Adreno 305 graphics chip is shared by a number of mid-range phones and, combined with the decent frame rates the CPU can sustain, makes the Moto G a competent gaming device.

It comes with a comparatively meagre 8GB of storage, though a 16GB model is available for an extra £25, and there’s no way of supplementing that with an SD card. It also lacks 4G connectivity, which may be a dealbreaker in the US and some other countries but isn’t really a problem if you’re in the UK and live outside the major cities.

The Moto G flaunts a crisp 720p screen, matching that of yesteryear’s flagship phones like the Nexus 4 and Galaxy S3, and plays HD video with incredible sharpness. My only complaint is that the LCD display lacks the colour richness you’d get with an AMOLED screen, giving videos a slightly washed-out appearance. The rear camera is perfectly serviceable and about what you’d expect for this price bracket. It won’t win any awards, but it’s decent enough for the casual photographer, and it runs Motorola’s own camera software, which features a varied but straightforward menu of settings to control photo quality.

None of this comes at the expense of draining the phone’s power source either, since the 2,070 mAh battery is a stalwart companion in keeping the Moto G running. With Android’s built-in battery saver systems, I was able to eke out a good 36 hours of life with moderate use, and a little over 12 hours when I was hammering it with updates, games and music streaming. Given the hardware it has to support, Motorola might have rendered the Moto G almost unusable if they’d skimped on the battery, so it’s encouraging to see that thought went into even these minute details.

Android KitKat

At the moment, the Moto G comes running the slightly older Android 4.3 Jelly Bean but is slated to receive an update to the latest version (KitKat) in January, with reports that this has already begun rolling out for certain devices. Whilst the Android OS itself hasn’t undergone much alteration, Motorola has thrown in a ‘Migrate’ app that streamlines the process of copying the data on your old handset over to the Moto G (assuming it was also an Android). There’s also ‘Assist’, a somewhat grandly named app that simply lets you set times for your phone to fall silent automatically, such as during meetings or at night.

Along with the normal selection of apps for Google’s services pre-installed on the phone, you’ll be invited to enable ‘Google Now’ on first startup. This is effectively a system that automatically delivers time- and location-sensitive information to your phone’s notifications window, such as traffic conditions for your commute home, the weather and nearby restaurants. It’s a nice idea, but I found it lacking in customisation, since it’s almost entirely automated rather than letting you adjust when certain notifications arrive. Eventually I just switched it off.


The Moto G is a great all-round device and almost indistinguishable in performance from a mid-range handset costing upwards of £100 more. It’s not without compromises, but Motorola has clearly taken pains to make them strategically: saving money in specialist areas, like the camera and case design, and putting it into improving the experience for the general user. It’s received rave reviews elsewhere, and I think you can fairly predict that it’s going to be a game-changer in the budget mobile arena for 2014.

Christmas adverts are weird

I realise the title of this post will probably draw in the anti-consumerism crowd, which is misleading since I love Christmas and (as a gadget reviewer) have a vested interest in its commercialisation. However, by mentioning it I’ve already skewed the Google ranking, so while I’m at it: Free iPad Air, Star Wars Episode VII leaked trailer, Miley Cyrus and cute cat videos. Anyway, my enjoyment of Christmas does not blind me to just how bizarre the elaborate seasonal adverts that high-street shops put out each year have become.

Continue reading Christmas adverts are weird

Samsung’s Missed Opportunity

In announcing the Galaxy Gear, Samsung took the opportunity to address one of the most frequent criticisms of smartwatches: battery life. Being characteristically power-hungry gadgets, with a form factor that limits their battery size to the thickness of a toenail, smartwatches are likely to run out of juice at inconvenient times – after which their users are sporting the latest in wrist-paperweights.

However, rather than unveil some revolutionary new way to keep it chugging along for eons, all Samsung did was acknowledge the problem and vaguely boast about its battery life.


[Timestamps make embedded YouTube videos cry, so skip to 3:26 to see what I mean]

Don’t get me wrong, 25 hours is pretty impressive if they can deliver on it, and the idea of charging their tech overnight is nothing new for most people. Initial reviews would appear to indicate that the Gear can indeed last a full day on a charge, even with heavy use, despite sporting only a 315 mAh battery.
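To put that in perspective (my own back-of-the-envelope arithmetic, not a figure from Samsung), a 315 mAh cell stretched over 25 hours implies a very modest average current draw:

```python
# Rough estimate only; assumes Samsung's quoted capacity and runtime are accurate.
capacity_mah = 315     # Galaxy Gear battery capacity (mAh)
runtime_hours = 25     # claimed battery life (hours)

average_draw_ma = capacity_mah / runtime_hours
print(f"Average current draw: ~{average_draw_ma:.1f} mA")  # roughly 12.6 mA
```

That average only holds if the watch spends most of its day sipping power in standby, which is exactly why real-world usage patterns matter so much.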

However, the multitude of uses people will (eventually) find for smartwatches, combined with the fact that the li-ion battery the Gear uses will deteriorate with age, means the amount of time it lasts on one charge can vary drastically. The battery life is far from clockwork, and users will inevitably find themselves limp-wristed at impractical times.

Portable gadgets have been around long enough for people to clock the idea of buying a spare charger to keep in the office or to carry their primary one with them. If you’re at your desk, it’s not usually a problem to leave your mobile phone charging at a nearby socket but continue to make use of it as normal. Every feature of your phone can be used without having to drastically change the way you manipulate it.

Image source: Gizmag

The whole appeal of smartwatches is to offload some of your phone’s functionality onto an easily accessible wrist-worn device. But in order to charge the Galaxy Gear, you must remove the wrist-straps and set the main device into a cradle that looks like an S&M rack for Smurfs.

Whilst this holds the device in a semi-usable position during its recharging cycle, it means you are no longer using it for its primary purpose. You’ve relegated the smartwatch to a superfluous miniature smartphone that can only be used to control your other smartphone; separate from your wrist and tied to the wall socket where using it is no easier, if not harder, than whipping out your phone.

This is not a problem exclusive to the Galaxy Gear, but as one of the first major companies to jump into this potentially competitive market (apart from Sony’s oddly underplayed entry), Samsung has missed an opportunity to distinguish itself from the existing competitors and from those yet to come. Even an unidentified Samsung executive has supposedly concurred with several underwhelmed reviews in saying that the Galaxy Gear “lacks something special”.

To be useful, smartwatches need a way of charging that doesn’t involve removing them from the wrist or tethering yourself to a mains socket like a cyborg-imposed leash law. That can only mean that smartwatch charging must go wireless.


Inductive charging had its commercial heyday a few years ago in the form of third-party accessories for the major smartphones. But these were simply middlemen, since they usually came as a pad or surface that the handset (sporting a specialised case) still had to make physical contact with. Another form of wireless charging exists without this limitation.

Electrodynamic induction (otherwise known as resonant inductive coupling) enables the wireless transmission of electricity across short distances. The process uses a resonant coil connected to a power supply, which produces an oscillating, relatively low-frequency electromagnetic field. When a secondary “capture” coil tuned to the same resonant frequency is introduced within that field, it absorbs the energy the source is transmitting, which the recipient device can then convert back into electricity to charge itself.
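The “tuning” boils down to both coils forming LC circuits with the same resonant frequency. Here’s a minimal sketch of that calculation – the coil and capacitor values are purely illustrative assumptions, not the specs of any real charger:

```python
from math import pi, sqrt

def resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC circuit: f0 = 1 / (2 * pi * sqrt(L * C))."""
    return 1 / (2 * pi * sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 24 uH coil paired with a 100 nF capacitor.
source_f0 = resonant_frequency(24e-6, 100e-9)
capture_f0 = resonant_frequency(24e-6, 100e-9)  # receiver tuned to match the source

print(f"Source coil resonates at ~{source_f0 / 1e3:.0f} kHz")
print(f"Capture coil resonates at ~{capture_f0 / 1e3:.0f} kHz")
```

Matching those frequencies is what lets the capture coil soak up energy efficiently; mistune the receiver and the transfer drops off sharply.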

The technology has been around for a while but was most recently developed by a team of MIT researchers led by Marin Soljačić, which spawned the company WiTricity. CEO Eric Giler demonstrated the technology at the TED Global Conference in 2009.


Imagine a smartwatch fitted with a miniature capture coil “tuned” to the resonant frequency of a coil in its charger, plugged into the mains on the other side of the room. The user could continue to wear and use the device as normal, and move freely within the admittedly limited range of the field, as it charged itself. The battery would then be reserved exclusively for when you’re on the move and away from a plug socket.

Samsung have missed an opportunity to innovate by failing to see the potential of wireless charging in wearable technology. Not only would it have resolved one of the biggest drawbacks of smartwatches, it would have given the Galaxy Gear a distinctive edge that could have helped them seize that crucial early dominance in the market. Moreover, a successful proof of concept would have given wireless power the long-overdue legitimacy it needs to be integrated into other devices, kick-starting a revolution in electronics that history would say started with Samsung.

Wearable Technology will succeed. Eventually.

You’d be forgiven for getting optimistic about wearable technology lately – almost every public appearance of Sergey Brin has made him look like a motivational speaker for the Borg, and several tech firms have shown off their first offerings of carpal-computers, like Samsung’s Galaxy Gear. Even I, though usually a cynical skeptic, can see the rise of wearable tech resulting in something really inventive – but I think it has a long way to go yet.

Wisely, technology pundits have been careful not to totally write off the idea of wearable technology too early, as such predictions usually come back to bite them – remember the embarrassing backlog of 2006 articles laughing off the iPhone. Many have suggested that, as with the Jesus Phone, whilst it may be difficult for us ivory-tower tech writers to conceive of a practical use for the technology (or ‘weartech’ as it’s sometimes referred to, by me alone), surely those cleverclogs app developers can. The result is a repeated insistence that someone might, maybe, perhaps hit upon an idea for a weartech-specific app so darn helpful that it launches the category into the mainstream. However, this comparison is not a valid one, as it ignores the circumstances that allowed the iPhone and its app ecosystem to thrive.

Galaxy Gear

This is the first time that manufacturers have created form factors with no precedent and are looking to – indeed, depending on – app developers to assign them a purpose. Mobile phones were already ubiquitous when the iPhone was announced, and manufacturers had long since hit upon the idea of the handset being more than just your basic blower. There was a proven market for mobile devices – old enough to have already refined the form factor and normalised it with consumers – and clear demand for them to be multi-purpose tools.

The original iPhone was successful even without third-party apps (only introduced with the iPhone 3G) because it did all the things we’d come to expect from a phone (and more) really well. But without an antecedent market for mobile phones, the iPhone would have been attempting to create and popularise an entirely new type of contraption, rather than build on an existing one, and its success would have been far less assured.

Even tablet computers had a precursor (of sorts) in the form of netbooks. Their fleeting success towards the end of the last decade proved that a market for smaller computers existed, to complement smartphones rather than compete with them. Steve Jobs introduced the original iPad to replace netbooks as this third-category device.

Google Glass Fitness App (yes, really)

That’s not to say that no useful applications for wearable tech exist, but these tend to be gimmicky or niche or both. At least for smartwatches, their use as fitness monitors could result in respectable sales amongst exercise enthusiasts. But with so many cheaper wrist-worn activity trackers already on the market, it’s hard to see how these users will regard the Galaxy Gear’s other features as anything other than expensive add-ons. Samsung may have announced what will turn out to be the most versatile pedometer in history. There are far more worthy uses for wearable technology than just calorie-counting, of course, such as medical applications, but nothing that would put a smartwatch on every wrist or a Glass over every (other) iris in the consumer space.

The supposed selling point behind a lot of the consumer weartech being created at the moment is that it’ll link with your smartphone and mirror some of its functionality and notifications – such as SMS and email messages – onto a screen visible somewhere on your body. Given that most of these products currently match (or exceed) the average price of a smartphone, it’s not wise for manufacturers to position the tech as a mere accessory to your phone.

Moreover, the limitations of the form factor would soon outweigh the novelty of using it. Sneaking a sideways glance at your phone is much more compatible with our sense of decorum than bellowing “OK Glass” in the middle of a crowded room, or having an intimate conversation with someone whilst stroking your temple like you’re trying to coax a tapeworm out of your skull. For a much more in-depth look at why using a watch as a phone would be a surreal and impractical experience, see this biased rant masquerading as objective analysis.

“OK Glass”

This reliance on the inventiveness of third-party developers is a backwards and potentially ruinous strategy for companies like Google and Samsung who, though in different ways, are trying to be the first movers in the weartech market – especially since the sheer variety of forms that wearable technology can take means it will initially be very difficult to create apps without heavily fragmented support. Whereas a smartphone or tablet has a very limited and easily generalised set of interfaces, an application for weartech will have to account for each device’s unique ergonomics and quirks.

As the field matures and the myriad types of wearable tech become more clearly defined, this will get easier, since the best ways to handle user interaction will emerge over successive generations. But with no comparable precedent and a lack of useful applications right now, wearable technology must rely on its novelty driving enough sales to reach that level of development. The company that makes wearable technology a success will need to be patient, attentive to feedback and tolerant of making a loss at first, but (if done right) the result could be truly revolutionary. Wearable technology can succeed, but now is not the right time.

Nokia Lumia and Windows Phone – Needs of the Many

Since September, Nokia have churned out ten different Lumia devices of massively varying specifications and sizes – not including the 810, which was discontinued in April. Whilst Microsoft’s Windows Phone software is licensed to HTC, Samsung and Huawei to use on their handsets, around 80% of WP7 and WP8 devices currently in use worldwide are Nokias. The Finnish company, in particular, has a vested interest in helping Windows Phone to grow, since their strong association – albeit not an exclusive one – will hopefully echo back into a resurgence in Nokia sales. The glut of Lumias seems to be an attempt to appeal to every section of the market, but is that really the best strategy?

The Nokia Lumia range

For all concerned, the separation of device and OS into two distinct entities has been a welcome change. For the hardware makers, this frees up time and resources to focus on the device itself without having to go through the rigamarole of tailoring bespoke software to run on it. Naturally Nokia, who clung to its own Symbian software as late as 2011, has taken full advantage of this judging by the plethora of new Lumias. However, entrusting the OS – and by extension most of the user experience – to a different company altogether is risky. If the user is dissatisfied because of a problem in the OS then they’ll think less of every logo attached to it, regardless of their culpability in the fault.

Whilst the relative homogeneity of an OS makes this less of a risk, the more handsets the manufacturer produces the less time they have to perfect the integration between hardware and software. Apple’s iPhone – being a single device with a homegrown OS – has the benefit of being tightly integrated whereas other manufacturers have to adapt both their hardware and the OS to smooth the synthesis. Having to repeat this process for many handsets, each with varying specifications and quirks, means that corners will inevitably be cut.

Of course, the average consumer doesn’t usually notice these things, so you could argue that it makes sense for manufacturers to offer as wide a variety of handsets as possible so that people are more likely to find a device that suits their needs (not to mention their wallet). Whilst this is true in theory, it assumes that the average consumer has the time or inclination to exhaustively research every handset presently on the market, let alone make sense of what the information means in practice and how the handsets compare – a task exacerbated now by the need to choose a preferred OS as well.

This is where the simplicity of Apple’s single-device approach shines – albeit helped largely by the power of their brand – as it allows people to choose the most up-to-date version of a phone that they (at least anecdotally) know to be good without having to weigh up all the options. Thereafter, the deep platform lock-in that has been ingrained into iOS since the very first iPhone means that customers are far less likely to stray after they’ve sunk a great deal of time, money and content into the Apple ecosystem. The fact that Apple got there first means that this success could not easily be replicated, even by them.

Android Fragmentation
Infographic showing Android device fragmentation in 2013. Source: OpenSignal

But how can that be when Android has a majority market share and continues to grow each year? Consider that all the major spikes in Android’s growth since its introduction have been on the back of single flagship devices. The HTC Dream, better known as the T-Mobile G1, kicked off this trend, and a succession of distinctly recognisable HTC devices (the Desire, Hero and Nexus One, for example) facilitated Android’s rise in its early years. More recently, as the infographic above demonstrates, the most prominent Android phones have all been from Samsung’s Galaxy line (primarily the S3), and presently the S4 seems to be the most recognisable “iPhone alternative”.

The app ecosystem of a mobile OS is a factor that even the most technophobic smartphone users will take into account when choosing a platform, and it’s an area where Windows Phone has a lot of catching up to do. Nokia has a strategic role to play in helping tempt app developers to the Windows Phone platform and bolster Microsoft’s claim to the “third ecosystem”. Too many varying handsets will fragment support and deter developers, as we’ve seen happen to Android. Android’s initial popularity, before the discrepancies became too obvious, helped it survive as a profitable system for developers, but Nokia and Microsoft have no such head-start.

Microsoft imposes strict hardware requirements on the manufacturers it licenses Windows Phone to, which should prevent the OS from becoming fragmented. Nokia needs to appreciate the necessity of this and not use the influence it has with Microsoft – as the biggest Windows Phone manufacturer – to demand that the restrictions be lifted so it can churn out more phones.

With ten impressive Lumias already on the market, Nokia should slow down and let the most popular ones shine through, giving them a basis on which to create a more recognisable smartphone brand that will endure regardless of Windows Phone’s ultimate fate.

iPad Mini Review

In 2010, Steve Jobs vociferously denounced the batch of 7-inch tablets being created by Apple’s competitors to fend off the iPad, bemoaning the sacrifice in usability that had to be made to cram a tablet into the smaller chassis. Two years later, his successor as Apple CEO, Tim Cook, took the stage to unveil the more diminutive iPad – the iPad Mini – that Jobs said should never happen. Was he right all along, or has Cook found the formula to condense the iPad without compromise?

White iPad Mini

Strictly speaking, the iPad Mini rocks a 7.9-inch display, nearly a full inch larger than its competition: Google’s Nexus 7 and Amazon’s Kindle Fire tablets. The Mini is thinner and lighter than both rivals though bigger in other dimensions to accommodate the larger screen, home button and 1.2 megapixel camera on the front face. The 5 megapixel rear-camera is embedded in an aluminium aft which, while alluring, will probably show some battle scars before long.

Atop the Mini you’ll find the 3.5mm headphone jack and standby button, whilst the edges are clear of all but the volume rocker and lock switch. Between the dual speaker grilles along the bottom sits the new proprietary Lightning port that has featured on all Apple devices since the iPhone 5. Other than being considerably smaller, the main benefit of this new connector is that it can be plugged in either way up, if you ever had trouble with that before.

Conspicuously absent is a Retina display, which may be a way to save on cost or battery life, or simply an incentive held back for the next-generation Mini. Either way, its absence may be disappointing for those looking to use it to watch videos on the move, and leaves the Mini with a more underwhelming display than its less wallet-draining competitors.

Under the hood is Apple’s A5 chip clocked at 1GHz: notably less powerful than the A6 and A6X CPUs powering the latest generation iPhone and larger iPads. This may be another concession to bring down the cost or power consumption of the device. However, the Nexus 7 carries a faster quad-core NVidia chip and boasts the same 10-hour battery life, so this seems unnecessary.

Moving away from the hardware, the Mini runs iOS but with one crucial difference: thumb detection, which lets you use the multi-touch display when your thumb is resting on the screen without causing interference. Given the tiny bezel on either side of the display, this is a welcome feature and shows that Apple are putting real thought into the limitations of a smaller form factor. In terms of usability, it was surprisingly easy to type for prolonged periods, likely due to the slightly larger screen allowing the onscreen keyboard to be more spacious.

Green Smart Cover

The iPad Mini is available directly from Apple, priced at £269 for the 16GB model and scaling up to £429 for 64GB of storage. Tack on another £100 if you want it to come with a 3G receiver. Like its more commodious counterpart, the Mini can be decked out with a Smart Cover (£35), though with only three folding segments on this version it doesn’t feel nearly as sturdy.

Apple seems to have taken pains to avoid compromising usability or design – two of its core principles – when coming up with the iPad Mini. Unfortunately, either a desire to reduce cost or a reluctance to show their hand too early means that unnecessary sacrifices have been made elsewhere. Time will tell if Apple has enough clout to sell the Mini despite its limitations, or if people will be drawn to the cheaper, more powerful Nexus 7.

Doctor Who – Series 7B Review

In the mid-Eighties, when the popularity of Doctor Who was waning and the show was awkwardly spluttering towards its eventual cancellation, new script editor Andrew Cartmel put together a plan to restore life to the show and mystery to the titular character: the eponymous Cartmel Masterplan. Whilst hints were dropped towards it, the “indefinite” hiatus of Who in 1989 meant that it was never fully realised on-screen.

The revival of the show in 2005 meant that they could start again, with references to an unseen war and a main character radically altered from the cravat-sporting fop who’d last graced our screens. However, with ever-more candid references to the Time War, this enigma has also been gradually unwrapped to the point of becoming stale, and the conclusion of Doctor Who’s seventh series seems to be the beginning of a shake-up. But before we discuss that, let’s look at the series as a whole.

The new TARDIS console
I can’t help it, I bloody love this new TARDIS console!

The previous series saw Smith’s portrayal of the titular Doctor solidify, but it’s only now that the Ponds have been jettisoned that the differences between the Eleventh Doctor and his predecessor come into focus. This is probably helped by the gorgeous new-look TARDIS console and the wider variation in what this Doctor wears (anchored by the bow-tie, naturally), but the latter half of this series definitely felt like the period in which Smith finally became comfortable in the role. Hopefully, he will stick around for a long time to come so that this incarnation can gain the distinctness that Troughton, Baker (the bescarfed one) and Tennant enjoyed before him.

Though Moffat is often criticised for his cookie-cutter approach to writing female characters, the modern show has always established that only a certain “type” of person is suitable for The Doctor to choose as a companion. This leads to the erroneous claim that each companion is simply a rehash of the same character with a different backstory, but I think the contrast between Amy and Clara gives the lie to that. Amy constantly wanted to run away from her boring Leadworth life – leaving with the Doctor so quickly she didn’t even bother to get dressed – and put off dealing with the consequences until her experiences with him gave her new focus, enabling her to get over the Raggedy Man and mature. Clara, on the other hand, is torn between her desire to travel (as seen in her book with her age crossed out) and the need to cling on to the memory of her mother, holding on to the leaf and inhabiting a maternal role as a nanny to similarly bereaved children. The fact that she doesn’t live in the TARDIS indicates that travelling with The Doctor allows her to fulfil both needs – going on adventures and back in time for tea.

Clara’s leaf
Page One

‘Asylum of the Daleks’ notwithstanding, the story arcs over the course of series seven can be nicely compartmentalised into their two parts: the long goodbye to the Ponds for the former half and the mystery of Clara for the latter. As in the Russell T Davies era, the story arc for the series is back to being a background feature of the run, bookended by its introduction at the start of the series and its payoff at the end. Personally, I liked that The Doctor didn’t spend all eight episodes constantly obsessing over Clara’s identity, but you could see it was on his mind enough to influence his choice of locations (such as seeking out the clairvoyant Emma Grayling in ‘Hide’) and never seemed entirely forgotten. The payoff was clever but not exactly hard to figure out, though I definitely didn’t think they’d have the stones to integrate JLC into archive footage in the way they did. Kudos to them on a brave but worthwhile (if somewhat ropey) attempt.

I suggested in my speculation on what would be seen in the anniversary special that it was unlikely we’d get full appearances from past Doctors and that some form of trickery would be used to reference them. The finale of series seven has met the fan-service obligation of showing past Doctors, and now the event itself is a little more free to call back to the show’s history as part of its story, rather than for its own sake. Given that John Hurt has appeared in set photos, the ending of the series seems to be setting the stage for the anniversary special, and I suspect that, rather than simply reference the show’s past, Moffat will use this episode to reveal hitherto unseen parts of the Doctor’s history. Other than the fact that he is (in some manner) The Doctor, the real identity of Hurt’s character in the pantheon remains to be seen.

The Time War book
So that’s Who…

I doubt it will be as clear-cut as Hurt playing the true but disowned Ninth Doctor, shifting everyone after him down the line, as this would upend a lot of established continuity, and Moffat even had Clara affirm Smith’s status as the Eleventh Doctor before the reveal. My prediction is that he’ll be some intermediate form between Eight and Nine – artificially forced into partially regenerating by the Time Lords and manipulated into fighting a genocidal Time War, against his own nature – “without choice”. The Doctor has already freely admitted to his actions in ending the Time War, so perhaps this incarnation broke free of the control of the corrupted Time Lords and ended it – “in the name of peace and sanity” – likely triggering the completion of his regeneration. In doing so, he reclaimed the mantle of The Doctor and renewed his promise to help people. In all likelihood, this will turn out to be mere fanon when the truth comes out in November, but I like the idea all the same.

When Cartmel conceived of his plan to renew the mystery around The Doctor, he aimed to retcon large parts of established canon by revealing that The Doctor was actually the reincarnation of one of the founding figures of Time Lord society. The Moffat Masterplan (as I’m calling it) seems to be doing much the same: overturning seemingly entrenched continuity to reveal more about the character, but deepening the mystery by the nature of what we learn and its implications. Of course, it won’t affect the overall premise of the show or the nature of the series going forward, but it will add new depth to the character and reinvigorate the mythos.

Still Got Legs

Because my final year project is the biggest piece of academic work I’ve ever had to do, it naturally attracts the biggest opportunities for procrastination. I’ve been meaning to switch my blog over to a new host and redesign its template since 2011 but only now that I’m in the deepest, darkest, deadliest parts of writing my dissertation does the deed demand my diversion.

Continue reading Still Got Legs