Wednesday, October 18, 2017

AI-driven tool produces high-quality online learning for global company in days, not months

You have a target of two thousand apprentices by 2020 and a sizeable £2 million-plus pot from the Apprenticeship Levy. By law, this money has to be spent on training. The Head of Apprenticeships in this global company is a savvy manager with a track record in the delivery of online learning, so they decided to deliver a large portion of that training online.
Blended Learning
Our first task was to identify what was most useful in the context of Blended Learning. It is important to remember that Blended Learning is not Blended TEACHING. The idea is to analyse the types of learning, the types of learners, the context and the resources to identify your optimal blend – not just a bit of classroom and a bit of online stuff, stuck together like Velcro and called ‘blended’. In this case the company will be training a wide range of apprentices over the coming years, a major part of its recruitment strategy and important both to the company and to the young people joining it.
Learning
The apprenticeship ‘frameworks’ identify knowledge, behaviours and competences as the three desired types of learning, and all three have to be assessed. The first project, therefore, looked at the ‘knowledge’ component. This was substantial, as few new apprentices have much in the way of knowledge in this sector, and behaviours and competences need to be primed and supported by underlying knowledge.
Assessment
Additionally, assessment matters in apprenticeships, both formatively, as the apprentices progress, and summatively, at the end. Assessment is a big deal, as funding, and the successful attainment of the apprentice, depend on objective, external assessment. It can’t be fudged.
Context
These young apprentices are widely distributed in retail outlets and other locations, at home and abroad. They may also work weekends and shifts. One of our goals was to provide training where and when it was needed, on demand, at times when workload was low. Content, at Level 3 and Level 2, had to be available 24/7 on a range of devices, as tablets were widespread and mobile increasingly popular.
Solution
WildFire was chosen, as it could produce powerful online content that is:

  • Highly retentive
  • Aligned with assessment
  • Deliverable on all devices
  • Quick to produce
  • Low cost

Using an AI-driven content creation tool, we produced 158 modules (60 hours of learning) in days, not months. After producing the Level 3 courses, we could quickly produce the Level 2 courses and load them onto the LMS to track learner performance. Learners answer high-retention, open-input questions rather than weak multiple-choice questions. The tool not only produced high-quality online content quickly, it also generated links out to supplementary content that proved extremely useful for further learning. It only accepts completion when 100% competence is achieved; the learner has to persevere in a module until that standard is reached.
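To make that mastery rule concrete, here is a minimal sketch in Python of the kind of loop such tools enforce: open-input questions that keep coming back until every one is answered correctly. The item bank, exact-match scoring and function names are my own illustrative assumptions, not WildFire’s actual implementation, which uses far more sophisticated answer matching.

```python
# Minimal sketch of a mastery loop: open-input items that recur until all are
# answered correctly. Item bank, wording and scoring are illustrative only.

ITEMS = [
    {"prompt": "The levy that funds this training is the _____ Levy.",
     "answer": "Apprenticeship"},
    {"prompt": "Assessment taken as learners progress is called _____ assessment.",
     "answer": "formative"},
]

def normalise(text: str) -> str:
    """Crude matching; a real tool would use NLP to accept fair paraphrases."""
    return text.strip().lower()

def run_module(items: list) -> None:
    """Keep cycling unanswered items: completion only at 100% competence."""
    remaining = list(items)
    while remaining:
        retry = []
        for item in remaining:
            response = input(item["prompt"] + " ")
            if normalise(response) == normalise(item["answer"]):
                print("Correct.")
            else:
                print("Not yet - this one will come around again.")
                retry.append(item)  # unmastered items return
        remaining = retry
    print("Module complete: 100% competence achieved.")

if __name__ == "__main__":
    run_module(ITEMS)
```

In practice the normalisation step would be replaced by smarter open-input matching, and completion would be reported back to the LMS, but the principle is the same: no completion short of 100%.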
Conclusion

AI is the new UI. Google has long been used in learning, and AI shapes almost all online experiences – Facebook, Twitter, Amazon, Netflix and so on. AI can now be used to shape online experiences in learning. It can create high-quality content in minutes, not months, at a fraction of the cost, from a document, PPT, podcast or video. I think this changes the game in the e-learning market.
For more detail or a demonstration, contact me here.


Saturday, October 14, 2017

Is there one book you’d recommend as an introduction to AI? Yes. Android Dreams by Toby Walsh

Although there are books galore on AI, from technical textbooks to potboilers, few are actually readable. Nick Bostrom’s ‘Superintelligence’ is dense and needed a good edit; ‘The Future of the Professions’ is too dense; ‘The Rise of the Robots’ is good but a bit dated, and lacks depth; and ‘Weapons of Math Destruction’ is a one-sided and exaggerated contrarian tract. At last there’s an answer to the question “Is there one book you’d recommend as an introduction to AI?” That book is Android Dreams by Toby Walsh.
I met Toby Walsh in Berlin; he’s measured and a serious researcher in AI, so I was looking forward to this book and wasn’t disappointed. The book, like the man, is neither utopian nor dystopian. He rightly describes AI as an IDIOT SAVANT, and this sets the tone for the whole book. In general, you could summarise his position on AI as: overestimated in the short term, underestimated in the long term. He sees AI as having real limitations, and argues that progress in robotics, and even the much-lauded deep learning, have their Achilles’ heels – back-propagation being one.
On ethics he focuses not on the surface criticisms about algorithmic bias but on whether weaponised AI is a threat – it is, and it’s terrifying. I loved it when he skewered the Frey & Osborne Oxford report and its idea that 47% of jobs are under threat from AI. He explains why they got so much wrong by going through a series of job types, showing why robots will not be cutting your hair or serving your food in restaurants any time soon. He also takes a healthy potshot at academics and teachers who think that everyone else’s jobs are at risk, except their own.

The book has all the hallmarks of being written by an expert in the field, with none of the usual exaggeration or ill-informed negativity that many commentators bring to AI. AI is not one thing, it is many things – he explains that well. AI can be used for good as well as evil – he explains that well. AI is probably the most important technological development since language, writing and printing – he explains that well. Worth reading, if only for some of his speculative predictions: driverless cars, your doctor being a computer, Marilyn Monroe back in the movies, computer recruitment, talking to rooms, AI/robot sports, ghost ships, planes and trains, TV news made without humans, a personal bot that lives on after you die. This review was partly written using AI. Really.


Tuesday, September 26, 2017

AI on land, sea, air (space) & cyberspace – it’s truly terrifying

Vladimir Putin announced, to an online audience of a million, that “Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today.” Elon Musk tweeted a reply: “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo”, then, “May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory.”
That pretty much sums up the problem. Large and even small nations, even terrorist groups, may soon have the ability to use ‘smart’, autonomous, AI-driven tech in warfare. To be honest, it doesn’t have to be that smart: a mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country by the time it takes off and delivers its deadly payload. And here’s the rub – sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit.
In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes, thankfully contributing to the end of the Second World War, and let’s not imagine that it has been dormant for the last half-century. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine: what we now face are small, autonomous landmines, armed with deadly force, on land, sea, air and even in space.
AI is already a major force in intelligence, security and the theatre of war. It exists in all war zones, on all four fronts – land, sea, air (space) and cyberspace.
AI on land
Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, though not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here, although we have to be careful, as the cognitive side of soldiering is far from being achieved.
Nevertheless, in the DMZ between South and North Korea, armed robot sentries are already deployed. Known as Lethal Autonomous Weapons Systems (LAWS), they will shoot on sight – and by sight we mean infrared detection and laser identification and tracking of a target. The system has AI-driven voice recognition, asks for identification, and can shoot autonomously. This is a seriously scary development, as such systems are already mounted on tanks. You can see why these sentry and rapid-response systems have become autonomous: humans are far too slow to detect incoming attacks or to target with enough accuracy. Many guns are now aimed automatically, with sensors and systems way beyond the capabilities of any human.
AI at sea
Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let’s call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad – the delivery of drugs using autonomous GPS navigation, for example, as well as finding aircraft that have gone down in mid-ocean. In military terms, large submarines capable of travelling thousands of miles, sensor-rich and carrying payloads, are already in play. Russian drone submarines have already been detected; code-named Kanyon by the Pentagon, they are thought to have a range of up to 6,200 miles and speeds of up to 56 knots. They can also deliver nuclear payloads.
AI in the air
I flew to Oslo to give a talk on AI in the National Gallery. The pilot of the Norwegian Air 737 switched to autopilot at 1,000 feet, and we were then technically flying in a robot for the rest of the flight, albeit supervised by the pilots, who monitored fuel consumption, weather and so on. They could have landed using autoland, but most pilots still prefer to land the aircraft themselves. The bottom line is that software does most flying better than humans and will soon outclass them on all tasks. Flying is safe precisely because it is highly regulated and smart software is used to ensure safety.
Drones are the most obvious example. Largely controlled from the ground, often at huge distances, they are now AI-driven, operate from aircraft carriers, can defend themselves against other aircraft and, worryingly, deliver deadly missiles to selected targets. The days of the fighter plane may be numbered: drones, free from the problem of seating and sustaining a human pilot, are cheaper and can be produced in larger numbers. Even ISIS uses drones to spy and drop bombs.
A terrifying vocabulary of nanoweapons, mosquito-like robots and mini-nukes has entered the language. Nanoweapons: A Growing Threat to Humanity by Louis A. Del Monte is a chilling account of how nanoweapons may change the whole nature of warfare, making other forms almost redundant. It is the miniaturisation of weaponry that makes this an even more lethal threat.
AI in cyberspace
War used to be fought on land, sea and air, with the services – army, navy and air force – representing those three theatres of war. It is thought that a brand new front has opened up on the internet, but this is not entirely true, as the information and communications war has always been the fourth front. The Persians waged it, the Romans were masters of it, and it has featured in all modern conflicts. Whenever a new form of communications technology has been invented – clay tablets, paper, printing, broadcast media, the internet – it has been used as a weapon of war.
However, the internet offers a much wider, deeper and more difficult arena, as it is global and encrypted. Russia, China and the US are the major players, with billions invested. China also wages a war against freedom of expression within its own borders with its infamous Great Firewall. Russia has banned LinkedIn, and Putin has been explicit in seeing this as the new battlefield. The US is no different, with explicit lies about the surveillance of its own citizens. But it is the smaller actors that have had real wins – ISIS, North Korea and others. With limited resources, they see this amphitheatre as somewhere they can compete and outwit the big boys.
It is here that AI comes into play. AI has a habit of being demoted: no sooner has an algorithm been invented than it is relegated to being mere software, part of the landscape. So it has been with encryption – one of the great successes of the field, it keeps the financial system afloat and secure, as well as preserving privacy in our communications. However, it also allows private and untraceable communication between criminals, terrorists and hostile states.
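As a small illustration of just how commoditised strong privacy has become, here is a minimal Python sketch using the open-source cryptography package – my choice of library, not one mentioned in the post – to encrypt and decrypt a message with a shared key.

```python
# Minimal symmetric-encryption sketch using the 'cryptography' package
# (pip install cryptography). Purely illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key, shared out-of-band
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")   # ciphertext, safe to transmit
print(token)

print(cipher.decrypt(token).decode())  # only a key holder recovers the plaintext
```

A few lines of free software give anyone, good or bad, communications that no eavesdropper can read – exactly the double edge described above.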
AI as a weapon of peace
When I landed at Oslo airport, I walked through a gate that scanned my passport. A chip in my passport stores an image of my face, and face recognition software, along with other checks, identified me as eligible to enter the country. I never spoke to a human on my entire trip, from home to Oslo. You will soon be able to cross borders using a mobile phone alone. Restricting the movement of criminals and terrorists is being achieved through many types of AI. The war on terror is being fought using AI. It is AI that is identifying and taking down ISIS propaganda. What is required is a determined effort to use AI to police AI. All robots may have to carry black boxes, like aircraft, so that rogue behaviour can be forensically examined. AI may be our best defence against offensive (in both senses of the word) AI.
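The black-box idea lends itself to a simple technical sketch: a tamper-evident, hash-chained event log, in which each entry commits to the one before it, so any after-the-fact edit breaks the chain. Everything below – names, fields, events – is a hypothetical illustration, not a description of any real system.

```python
# Sketch of a tamper-evident "black box" log: each entry's hash chains to the
# previous one, so altering any past entry invalidates everything after it.
import hashlib
import json
import time

def append_entry(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"time": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for record in log:
        body = {k: record[k] for k in ("time", "event", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev"] != prev_hash or record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"action": "sensor_sweep", "result": "clear"})
append_entry(log, {"action": "target_query", "result": "no_id"})
print(verify(log))                    # True
log[0]["event"]["result"] = "edited"  # tamper with history...
print(verify(log))                    # ...and verification fails: False
```

Running this prints True, then False after the tampering: a forensic examiner can tell the record was altered, even without knowing what it originally said.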
Conclusion

What is worrying is that, while most of the above is known, you can bet it is merely the tip of a chilling iceberg, as most of these weapons and systems are being developed in deep secrecy. Musk and many others, especially the AI research and development community, are screaming out for regulation at an international level on this front. Our politicians seem ill-equipped to deal with these developments, so it is up to the AI community and those in the know to press this home. This is an arms race far more dangerous than the nuclear race, where only large nations, and humans, were in control, and it calls for a declaration of war on AI weaponry. We are facing a future where even small nations, rogue states and actors within states could get hold of this technology. That is a terrifying prospect.


Sunday, September 10, 2017

ResearchED – 1,000 teachers turn up on a Saturday for a grassroots event…

Way back, I wrote a piece on awful INSET days and how inadequate they are as CPD, often promulgating half-baked myths and fads. Organisations don’t, these days, throw their customers out of the door for an entire day of training. The cost and load on parents in terms of childcare is significant, and kids lose about a week of schooling a year. There is no convincing research evidence that INSET days have any beneficial effect. Many are hotchpotches of non-empirical training; many (not all) are ill-planned, dull and irrelevant. So here’s an alternative.
ResearchED is a welcome antidote. A thousand teachers rock up at a school in the East End of London to spend their Saturday with 100 speakers (none of whom are paid), sharing their knowledge and experience. What’s not to like? This is as grassroots as it gets. No gun to the head by the head – just folk who want to be there, most as keen as mustard. They get detailed talks and discussions on a massive range of topics, but above all it tries to build an evidence-based approach to teaching and learning.
Judging from some on Twitter, conspiracy theories abound that Tom Bennett, its founder, is a bounder in the pocket of… well, someone or other. The truth is that this event is run on a shoestring, and there are no strings attached to what minimal sponsorship there is to host it. It’s refreshingly free of the usual forced feel of quango-led events, large conferences and festivals of education. Set in a school, with pupils as volunteers and even a band playing soul numbers, it felt real. And Tom walks the floor – I’m sure, by the end, he had talked to every single person there that day.
Tom invited me to speak about AI and technology, hardly a ‘trad’ topic. I did, to a full house, with standing room only. Why? Education may be a slow learner, but young teachers are keen to learn about research, examples and what’s new. Pedro De Bruyckere was there from Belgium to give an opposing view, with some solid research on the use of technology in education. It was all good. Nobody got precious.
But most of the sessions were on nuts-and-bolts issues, such as behaviour, teaching practice and assessment. For example, Daisy Christodoulou gave a brilliant and detailed talk on assessment, first demolishing four distorting factors, then giving teachers practical advice on alternatives. I can’t believe any teacher walked out of that talk without reflecting deeply on their own attitudes towards assessment and practice.
What was interesting for me was the absence of the usual ‘teachers always know best’ attitude – you know, that defensive pose that it’s all about practice and that theory and evidence don’t matter, which simply begs the question: what practice? People were there to learn and to see what’s new, not to be defensive.
Even more important was Tom’s exhortation at the end to share. I have already done two podcasts on the experience, got several emails, and Twitter was twittering away like fury. He asked that people go back to school and talk, write, blog… whatever… so that’s what I’ve done here. Give it a go – you will rarely learn more in a single day. Isn’t that what this is all about?


Thursday, August 17, 2017

LearnDirect - lessons to learn?

Brown’s dream
I was a Trustee of LearnDirect for many years and played a role in its sale to Lloyds Capital and in setting up the charity, Ufi, from the proceeds of the sale. It’s a salutary tale of a political football, started by Gordon Brown with great intentions. It was originally seen as a brake on the University system, aimed at the majority of young people who were being failed by that system. Its aim was vocational – hence the name, University for Industry. However, it morphed into something a little different – essentially a vehicle for whatever educational ailment the government in power identified as in need of a sticking plaster: numeracy, literacy, ILAs, Train to Gain… In this manifestation it was a charity that delivered whatever the Government asked it to deliver. Good people doing a good job, but straitjacketed by a succession of oddball policies around low-level skills and vocational learning. It was a sort of public/private hybrid model, with a charity at the core and a network of delivery centres. Eventually, as things went online, we trimmed the network – that was the right thing to do. What it didn’t do was stay true to the original aim of being a vocational alternative to HE, with a strong online offer. It was basically a remedial plaster for the failure of schools on literacy and numeracy. The lesson here was to have a policy around vocational learning that really does offer a major channel for the majority of young people who do not go to University. Lesson – we now have that with the Apprenticeship Levy. There is no need for a LearnDirect now.
Sheffield factor
Based in Sheffield, it was also a sizeable employer in the North, stimulating the e-learning industry in that city. The city never really exploited this enough, with the hapless, EU-funded Learning Light, which was hijacked by some locals who simply turned it into a ‘let’s spend the money’ entity. I was a Director of this and resigned when the Chair was ousted and stupid local politics caused chaos. A missed opportunity. Nevertheless, the city grew its e-learning sector. Interestingly, both Line and Kineo started production studios there out of London and Brighton – where the real action was. It was a good skills base, with some really good, local, home-grown companies. Lesson – something should be salvaged here. Lesson – a smooth transition of contracts could encourage companies and organisations to take on redundant staff; the problem would be the terms and conditions, and general practical difficulties.
Gun to the head
Then came the crunch: the Conservative Government came in and the bonfire of the quangos started. LearnDirect (Ufi) was seen as a quango (there was some truth in this) and the trustees were told that contracts would not be renewed unless it was sold. It was a gun to the head – we had no choice. So we sold the company in 2011 – that was our duty. I remember the day the Lloyds Capital guy turned up in a red Ferrari – he was an arrogant, asinine fool. Remember that Lloyds at that time was 40% owned by the Government. I didn’t like the deal – it stank.
Phoenix - Ufi
What we did, however, was not simply hand the £50 million-plus cheque back to the Conservative-run Treasury. Out of the ashes, a few of us set up Ufi as a new charity, with a focus on technology in vocational learning. It is still going strong. It has stimulated the sector with MOOCs on blended learning for vocational teachers, and with projects that are now being used in apprenticeships and vocational learning. Lesson – don’t give in; be imaginative in finding new solutions that push innovation and technology.
LearnDirect and Ofsted
Then came the second crunch – an Ofsted inspection. I don’t have much time for all those people who whine on about Ofsted, then turn around and praise it when it suits their political agenda. Ofsted did what it is meant to do – act as a quality-control mechanism to stop these excesses and failures – good for them. It wasn’t as if there was much ambiguity here: LearnDirect was failing to deliver and failing the young people in its care. Lesson – we need Ofsted/TEF and quality control across education.
Ugly private equity
What happened to LearnDirect was sad. The private equity guys started to rip out the cash. Their first act was to spend £504,000 on F1 sponsorship (the hopelessly inept Marussia team, partly owned by Darryl Eales, the LDC guy who did the deal) – probably one of the most dishonest acts I’ve ever seen in business. F1 was his hobby – he should have been crucified. They then merged with another weak, almost broken training company (JHP Group) and tried to rebuild, not by building on what they had, but by cutting costs and rapaciously stripping out cash in dividends.
What next?

It will limp on. The owners have an asset that has plunged in value, and their goal will simply be to salvage the residual value. The contracts will run for another year, but the Government has announced that they will end in July 2018 – so the game is up. The 53 suppliers may be in real trouble, but if they failed to deliver, that is their problem. The owners tried to sell before this calamity and failed, as the business was hopelessly overvalued; it is now practically worthless once the government contracts dry up – as they should. That’s a shame, as it could survive if it were taken over by someone in the sector – I don’t mean a College, and certainly not a University. The government should intervene here and effect a transition to another entity, to protect what jobs it can but, more importantly, to provide a better deal for the tens of thousands of young people who have had their life chances dented by these clowns. Lesson – for my money, City and Guilds would be a great candidate. They have the brand, they have financial stability, they’re a charity and they know what they’re doing.
