This week I was lucky enough to be a judge at the most recent TechPitch 4.5 event in London. I say lucky for a number of reasons: it’s nice to be chosen, of course, but more than that, judging offers a rare opportunity to really think about what’s going on.

The range of candidates was diverse to say the least — from an enterprise-scale AI solution as a service to a widget that you can put on your web site, from a new way of making music to an asset management solution for estate agencies.

Largely because of this diversity, it was possible to see what made a good pitch and what didn’t, and indeed why it matters. I’m reminded of a recent conversation with a colleague who fielded a (relatively cold) sales call. “I wasn’t clear on what they were trying to sell me,” she said. “I doubt I’ll be using it.”

While this may appear to be short-term thinking, in these cluttered, time-strapped times we really don’t have the bandwidth to investigate every new possibility that comes along. Failure to realise this shrinks the addressable market to the subset of “people who will spend the extra time trying to work out what I didn’t articulate.”

It shouldn’t be necessary to say this, but clearly it is. The presenters at TechPitch 4.5 had only 3 minutes to tell their stories: some, but not all, succeeded. This isn’t the place to run through the qualities...

Today's leading minds talk AI with host Byron Reese

In this episode, Byron and Jem discuss machine learning, privacy, ethics, and Moore’s law.


In this episode, Byron and Rand discuss intelligence, AGI, consciousness and more.


In this episode Byron and Dennis discuss machine learning.


In this episode Byron and David discuss intelligence, consciousness, Moore’s Law, and an AI crisis.


While the ecommerce market has grown rapidly in recent years, and is set to continue to boom, the fact is that most retail transactions are still actually completed in bricks and mortar stores. However, as more and more consumers get used to the convenience and quick process of buying online, it’s imperative that retailers use every tool at their disposal to streamline transactions in store, and to offer customers an excellent experience at every touchpoint.

One of the ways they can do that is through mobile point-of-sale (mPOS) payments. A BI Intelligence report forecast that there will be a whopping 27.7 million mPOS devices in circulation in the United States by 2021, up from just 3.2 million devices seven years prior.
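As a quick sanity check on those figures, the forecast implies a compound annual growth rate of roughly 36 percent. A minimal sketch, using only the numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) of the mPOS forecast above:
# 3.2 million devices growing to 27.7 million over seven years.
start, end, years = 3.2, 27.7, 7

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 36% per year
```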

For many retailers though, the introduction of mobile payments isn’t a priority yet, so mPOS adoption continues to lag. However, if you’re an entrepreneur who hasn’t started using this tech, you’re probably not just missing out on sales, but also losing the opportunity to build consumer loyalty and increase referrals.

Mobile payments allow retailers to help customers complete checkouts more quickly, and locate stock in store. They also enable businesses to...


In this episode Byron and Carolina discuss computer vision, machine learning, biology and more.


In this episode, Byron and Mike talk about AGI, Turing Test, machine learning, jobs, and Takt.


It’s pretty easy to be a digital transformation consultant these days. Here’s what you do.

First, you report on the amount of data growth, the increasing rate of change and other exponential factors; you flag up the massive growth of recent, tech-first companies such as Amazon and Alibaba (whilst carefully ignoring those who tried and failed to follow similar models); you list out conveniently acronymised manifestations of technological progress — Social, Mobile, Analytics and Cloud. Oh and IoT. And AI. You get the picture.

Having engendered a suitable level of fear and uncertainty among your target audience, namely executive decision makers (who happen to control consulting budgets), you go in with the scoop: that the only possible response is to transform. Not to tweak, nor encourage stepwise progress, but to make a ground-to-sky, soup-to-nuts matrix-style inversion of the entire organisation.

How should we do this, you are asked. Well, how fortunate that you have an answer, you say. The response is to run a series of very expensive strategy workshops, which will generate a new vision for the company. You will then review existing lines of business and operational departments, looking at existing processes and advising on the best way to bring them into the new world.

Oh, and you will also propose a cloud-first innovation strategy, which means shifting an organisation’s IT capabilities (planned and legacy) from where they are “into the cloud”. The “cloud” in question just happens to be your data centres, or those...

Watch out, there’s a new term on the block. Even as the initial flurry of excitement over Oculus-primed virtual reality seems to be in a perpetual state of prototyping, and as other forms of augmentation are hanging about like costume options for Ready Player One, discussion is turning to enhanced reality. I know this not because of some online insight (Google Trends isn’t showing much), but because it has come up in conversation more than once with enterprise technology strategists.

So, what can we take from this? All forms of adjusted reality are predicated on a real-time feed of information that acts directly on our senses:

  • At one end of the scale, we have fully immersive environments known as Virtual Reality (VR). These are showing themselves to be enormously powerful tools, with potential not just in gaming or architecture but also in areas such as healthcare: imagine being able to shrink to the size of a tumour, then control microscopic lasers to burn it away. At the same time, the experience is isolating and restricted, which is both a blessing and a curse.
  • Augmented Reality (AR) rests on the fulcrum between virtual reality and, ahem, reality. The argument goes: why take a real-time video feed and add data to it, if you can project data or images directly onto what you are seeing? It’s a good argument, but it demonstrates just how fraught and complex the debate quickly...

Machine learning sits at the forefront of innovation across a growing number of industries in today’s business world. Still, it’s a mistake to think of machine learning as one monolithic business solution — there are many forms of machine learning and each is capable of solving different sets of problems. The most popular forms of ML used in business today are supervised, unsupervised, semi-supervised, and reinforcement learning. At Vidora, we’ve used these techniques to help Fortune 500 partners solve some of their most pressing problems in innovative ways. This article draws from our experiences to demystify these four common approaches to ML, introducing practical applications of each technique so that anyone in your organization can recognize how machine learning can enhance your business.

Machine Learning at a Glance

Machine learning is an approach to Artificial Intelligence which borrows principles from computer science and statistics to model relationships in data. Unlike other AI systems which distill human knowledge into explicit rules (e.g. Expert Systems), ML instructs an algorithm to learn for itself by analyzing data. The more data it processes, the smarter the algorithm gets.
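To make that "learn for itself from data" idea concrete, here is a deliberately tiny, illustrative sketch (not any particular vendor's implementation) in the spirit of Rosenblatt's perceptron. The algorithm is never told the rule; it infers weights from labelled examples:

```python
# A minimal perceptron: learn the logical-OR rule from labelled data
# instead of encoding it as an explicit rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> label

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data are plenty for this problem
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred          # zero when correct, so no update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), label in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred)  # predictions now match the labels
```

The more (and more varied) data such an algorithm processes, the better its learned weights become, which is the essential contrast with hand-written rule systems.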

Machine learning is not a new concept. Its theoretical foundation was laid in the 1950s when Alan Turing conceptualized a “learning machine”. That same decade, Frank Rosenblatt invented the “perceptron” to roughly simulate the learning process of the brain. More algorithms followed, but machine learning remained largely...

I’ve been in a number of conversations recently about Functions as a Service (FaaS), and more specifically, AWS’ Lambda instantiation of the idea. For the lay person, this is where you don’t have to actually provide anything but program code — “everything else” is taken care of by the environment.

You upload and press play. Sounds great, doesn’t it? Unsurprisingly, some see application development moving inexorably towards a serverless, i.e. FaaS-only, future. As with all things technological however, there are plusses and minuses to any such model. FaaS implementations tend to be stateless and event-driven — that is, they react to whatever they are asked to do without remembering what position they were in.

This means you have to manage state within the application code. FaaS frameworks are vendor-specific by nature, and tend to add transactional latency, so are good for doing small things with huge amounts of data, rather than lots of little things each with small amounts of data. For a more detailed explanation of the pros and cons, check Martin Fowler’s blog (HT Mike Roberts).
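To illustrate the statelessness point, here is a toy sketch of an event-driven handler. The event shape, the function name and the in-memory `fake_store` are illustrative stand-ins, not a real AWS API; in a real deployment the store would be an external service such as a database or cache, because the function itself remembers nothing between invocations:

```python
# Sketch of a stateless, event-driven FaaS-style handler. Any state that
# must survive between invocations has to live outside the function.
fake_store = {}  # stand-in for an external store (e.g. a database or cache)

def handler(event, context=None):
    """React to one event, persist state externally, then forget everything."""
    key = event["session_id"]
    count = fake_store.get(key, 0) + 1  # read prior state from outside...
    fake_store[key] = count             # ...and write it back before returning
    return {"session_id": key, "events_seen": count}

print(handler({"session_id": "abc"}))  # events_seen: 1
print(handler({"session_id": "abc"}))  # events_seen: 2 — only the store remembered
```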

So, yes, horses for courses as always. We may one day arrive in a place where our use of technology is so slick, we don’t have to think about hardware, or virtual machines, or containers, or anything else. But right now, and as with so many over-optimistic predictions, we are continuing to fan-out into more complexity (cf the Internet of Things).

Plus, each time we...

There is a vigorous debate about the effects of automation on jobs. Everyone agrees that some jobs will be lost to automation and, in turn, some jobs will be created by it. The pivotal question is how all of that nets out.

Often lost in the abstract debate is the question of exactly which jobs are likely to be automated. I have created a test to try to capture just that.

The idea is simple: Some things are quite easy for computers and robots to do, and other things are quite hard. Jobs in the “safe” category have lots of things about them that are hard for machines to do.

The good news is that it doesn’t take very many hard things to make a job, practically speaking, impervious to automation, at least in this century. While jobs like “hostage negotiator” are clearly better done by people than machines, even jobs that look like good candidates for automation have difficulties. In theory, a robot should be able to clean the windows on my home; in practice, this isn’t likely to happen for quite a long time.

The test is ten questions, and each one can be scored from 0 to 10. For each one, I give examples of some jobs at 0, 5, and 10. My examples are meant to show each extreme and a midpoint, but you should not score using only those three points: use 7s and 2s and 9s.
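Tallying the result is then simple addition. A minimal sketch, assuming the ten answers are captured as numbers:

```python
# Tally the automation-risk test described above: ten questions, each 0-10.
# Higher totals mean more things about the job that are hard for machines.
def score_job(answers):
    if len(answers) != 10 or not all(0 <= a <= 10 for a in answers):
        raise ValueError("expected ten answers, each between 0 and 10")
    return sum(answers)

# Example using in-between values (7s, 2s, 9s), not just the 0/5/10 anchors
print(score_job([2, 7, 9, 5, 7, 2, 9, 7, 5, 7]))  # 60 out of a possible 100
```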

When you are...

These days, your business computer system faces many threats: malware and viruses aren’t the only things you have to worry about. Phishing attacks, social engineering, and password crackers all pose risks to the security of your system, and to the safety of your business’s, your employees’, and your customers’ personal information.

By practicing good cyber hygiene, you can protect your business system from the many threats it faces. Cyber hygiene involves mitigating risks by implementing best security practices. Even without a dedicated IT security staff, you can protect your business by using strong passwords, implementing multiple levels of security, updating software regularly, and training your employees to resist social engineering attacks.

Use Strong Passwords

It might seem simple, but using strong passwords is a fundamental aspect of cyber hygiene, and one that many system users still struggle with. It’s all too common for users to create generic, easily guessed passwords, like password123, often because they’re worried about remembering a complicated one. Even a more personal password, like the name of a child or pet, can be easily guessed by hackers who have access to your or your employees’ social media feeds, or by software that can crack passwords...
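One practical remedy is to generate passwords rather than invent them. As a brief sketch, Python's standard-library secrets module (designed for exactly this kind of security-sensitive randomness) can produce passwords no social media feed will reveal:

```python
# Generate a strong random password instead of relying on memorable guesses.
import secrets
import string

def make_password(length=16):
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. a 16-character string unrelated to pets or children
```

Pairing generated passwords with a password manager removes the "I'll never remember it" objection that drives people to password123 in the first place.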


In this episode Byron and Bill talk about SRI International, aging, human productivity and more.


This week’s backlash against Facebook’s practices was as predictable as it was unavoidable. The smoking gun is in the hands of Cambridge Analytica, which is either the epitome of corruption or simply the poor kid who happened to be in the orchard when the farmer walked by, even though every other child had been stealing apples with impunity.

Let’s pick this apart. The backlash was predictable — why, yes. We have been handing over our data in the vain hope that some random collection of profit-driven third parties might nonetheless act in our best interests. How very naive of us all, but we did so apparently eyes-open: if you are getting something for free, then you are the product, goes the adage.

What we perhaps didn’t realise was just how literally this might be taken. We, or our virtual representations, have been bundled into containers and sold to whoever took an interest, feeding algorithms like mincing machines that have, in turn, fed highly targeted and manipulative campaigns. Keep in mind that the perpetrators maintain that they have not broken any rules.

And indeed, maybe they haven’t. The ‘crime’, if there is one, revolves around the potential that Facebook’s data (well, it’s ours really, but let’s come back to that) was in some way ‘breached’. If that is the case, at least privacy law has something to go on. Let’s be realistic, however: this is trying to fit an ethical square peg into a legal round hole.

Meanwhile, the...


In this episode, Byron and Lorien talk about intelligence, AGI, jobs, and the human genome project.


For the world’s largest and most regulated organizations, understanding each employee’s day-to-day activities and behaviors is simply impossible and, oftentimes, unnecessary. But uncovering internal issues – such as operational inefficiencies or even criminal activities – that could result in wasted time, lost money or damaged reputations is critical. Therefore, it is important that businesses invest in tools to effectively identify and help correct these problems, and in turn drive sizable ROI.

Artificial intelligence is one technology that many enterprises have implemented to solve these internal challenges. In fact, IDC forecasts worldwide spending on cognitive and artificial intelligence (AI) systems to reach $57.6 billion in 2021. But which AI applications are actually helping enterprises, and how are their investments driving returns?

Forget Big Brother: How AI Surveillance Helps Enterprises “Know Your Employee”

There’s a general consensus among consumers that AI technology will create an Orwellian world, but the fact of the matter is that, from a surveillance standpoint, this technology has the potential to do a lot of good for the modern-day enterprise and its workforce.

Oftentimes, enterprises leave employee communications untapped. There simply aren’t enough hours in the day to monitor every message someone sends via email or business chat – nor is this a necessary practice, as the majority of communications are benign. But within every e-communication lie unique insights that could lead businesses to uncover some harsh truths about employee activity.

AI is about to reshape the enterprise workplace in a big and fundamental way, and any organization that hasn’t already started thinking about, planning for and adopting the new wave of smart AI tools is at risk of being left behind by its competitors.

Even at this early stage, it’s clear the benefits of AI in the office are going to be enormous, as these new tools work alongside employees — becoming a personal digital “coworker” — and augment our productivity and creative-thinking skills while freeing us from the monotony of the routine tasks that currently consume our workdays.

But it’s also clear that not all workplace AI is created equal — some of these new AI tools will be seamlessly adopted into your employees’ daily tech stack and workflows, like Slack, while others won’t be a good fit, getting the cold shoulder and ending up unused and unloved despite the best efforts of management and IT.

In other words, there is good workplace AI and bad workplace AI. The challenge enterprise leadership is now facing is figuring out how to tell the two apart.

Signs of bad workplace AI

Bad workplace AI creates more problems than it solves, holding back adoption rates and wasting everyone’s time. Look out for these warning signs.

Bad workplace AI requires significant customization to interact with and understand your office’s digital data. If the AI tool doesn’t work out of the box with the APIs of Office...

Human endeavour is a powerful thing. It saw Amelia Earhart fly solo across the Atlantic, put Neil Armstrong on the moon, and no end of people trace the steps of others up the slopes of Everest, in the knowledge that they might not come back.

Many of our enterprises were originally formed on the basis of similar, beyond-the-call-of-duty effort: “one percent inspiration, 99 percent perspiration,” as Thomas Edison is purported to have said. The combination of money, vision, acumen and consistent, focused effort, even when all seems pointless — Sir Ranulph Fiennes calls this ’motivation’ — is only occasionally rewarded by remarkable success: the legions of failure go silently to their doom, like all those films we will never see because the hero gets killed in the opening scene.

It is against this background that we should view today’s technology superheroes. Like Jobs and Gates before them, and many more before that (a shout-out to unsung hero Tommy Flowers, who had the drive and the chops but not the PR – hat-tip Jon Pyke), the current raft of Brin, Bezos, Zuckerberg and, of course, Musk have had to deliver all of the above over a period of decades.

Elon Musk has been close to failure more times than he dares think about, but the announcements keep on coming. For example, Tesla’s latest smart power grid experiment in Nova Scotia in collaboration with the Canadian Government, or DHL’s suggestion that the...