We were having a discussion about what jobs would get killed off by ‘AI’ first.
I find a lot of the articles about AI taking jobs rather odd and uninformed. By ‘AI’ they generally mean ‘machine learning’ - calling it AI is a bit rich, as it has yet to make two critical leaps.
It’s not yet very good at getting from ‘I can classify cats correctly’ to ‘I can provide you a meaningful model of how to classify cats that you can act upon’.
It can’t discuss its model of cats with other systems, or debate and reason about it and possible improvements. When Alexa and Siri start arguing with each other about the best way for you to get to the airport on time - then worry.
There are, IMHO, four things that define whether a human job can usefully be done by machine learning.
The first is simple - what happens when it breaks? If there is a defined, safe, simple behaviour for ‘wtf, I don’t know’ then it’s much easier to automate. It’s why we have years-old self-driving production trains on things like the Docklands Light Railway but no serious self-driving cars. The ‘help, I’ve gone wrong’ response for a light railway vehicle is to brake at a precalculated rate and stop ASAP without hurting the people inside. The ‘help, I’ve gone wrong’ response for a car is seriously complicated, and one humans often get wrong. Car accidents are full of ‘if only I had xyz then’.
The second is that it has to be reasonably predictable and have lots of labelled training data. If it’s not predictable then you lose (human or otherwise). The more complex it gets, the more data you need (and current systems need far more than humans do, and are fragile). That also plays into the first problem: if you have a complex system where ‘help’ is not an acceptable response, then you need a hell of a lot of data. Not good for self-driving cars, which have to deal with bizarre rare events like deer jumping over fences, people climbing out of manholes and tornadoes - none of which feature prominently in data sets. Does a Google self-driving car understand a tornado? I’d love to know.
The third is context. A system can have a lot of inputs that are not obvious and require additional information to process. A human who finds a line of cones blocking the path from their driveway to the road is likely to have the contextual data to conclude that drunk students have been at work, for example. In a system with very little context, life is a lot easier.
The most critical of all, though, is what in systems thinking is called variety: the total number of different states you have to manage. A system that properly manages another has (we believe) to have at least as many states as the system it manages. This is ‘Ashby’s law’, although ‘law’ might be the wrong word for it, given that in the general armwaving systems context there isn’t a mathematical proof for it.
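Ashby’s law can be illustrated in miniature. This sketch (a toy model, not a formal treatment - the names and outcome table are invented for illustration) shows that a regulator with fewer responses than there are disturbances must leave some disturbances unhandled:

```python
# Toy illustration of Ashby's law of requisite variety: a regulator can
# only keep the outcome 'good' if it has at least as much variety
# (distinct responses) as the disturbances it must counter.

def unregulated(disturbances, responses, outcome):
    """Return the set of disturbances for which no available response
    produces a 'good' outcome."""
    return {d for d in disturbances
            if not any(outcome(d, r) == "good" for r in responses)}

# Invented outcome table: response r neutralises disturbance d iff r == d.
outcome = lambda d, r: "good" if d == r else "bad"

# Regulator variety matches disturbance variety: everything is handled.
print(unregulated({1, 2, 3}, {1, 2, 3}, outcome))     # set()

# Regulator variety falls short: some states are unmanageable.
print(unregulated({1, 2, 3, 4}, {1, 2}, outcome))     # {3, 4}
```

The drunk-student examples below are exactly the second case: the world keeps producing disturbances the machine has no matching response for.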
It’s why an IT department can handle lost passwords but falls apart when someone phones up to complain that the printer is telling them to subscribe to YouTube videos. It’s why the US tax system is so complicated, and it leads to a whole pile of other fun things (such as never being able to understand yourself entirely). It’s also the other half of why a drunk student can outwit a self-driving car.
Variety is a huge challenge for machine systems. It’s why burger-flipping robots are easier than serving robots, and why security may well be the last job in a shop to be automated. Automatic shelf stocking - not too hard, though there are challenges. Automatic sales - usually easy. Dealing with eight drunk people swaying around the shop taking and drinking cans… difficult. Security folks may not be well paid, but they have to deal with an enormous amount of variety and context.
Whilst it’s not usually phrased in AI terms, we actually know a hell of a lot about variety, about systems that cope with it, and about structure - through systems modelling, management cybernetics and the like, going back to work in the early 1970s by folks like Stafford Beer (who is as interesting as his name) on viable system models: all the feedback loops and arrangements you need to make something that is actually functional and adaptable.
Back, however, to ‘what will machine learning kill off first’ (and not in the sense of running over in automated cars). We need something that has:
a ‘meh’ failure case
a large amount of training data, preferably well labelled, online and easily bulk fed to the learning end of the system
as little need for complex contextual information as possible
not too much complex variety and state
It would also be nice if people would rate the output for free to help improve the model.
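The criteria above amount to a crude screening test. A minimal sketch of it as code - the field names, thresholds and example tasks are all invented for illustration, not a real framework:

```python
# Toy screening function for the four criteria above. Every field name
# and threshold here is an assumption made up for illustration.

def easy_ml_target(task):
    """Return True if the task ticks all four boxes."""
    return (task["safe_failure_mode"]            # a 'meh' failure case
            and task["labelled_examples"] >= 1_000_000  # arbitrary cutoff
            and not task["needs_context"]        # little hidden context
            and task["variety"] <= 100)          # few states to manage

still_images = {"safe_failure_mode": True, "labelled_examples": 10**9,
                "needs_context": False, "variety": 50}
shop_security = {"safe_failure_mode": False, "labelled_examples": 10**4,
                 "needs_context": True, "variety": 10_000}

print(easy_ml_target(still_images))   # True
print(easy_ml_target(shop_security))  # False
```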
There are two obvious candidates. The first - cat pictures - doesn’t have enough commercial value, so while it would be funny to create a site that posts an endless stream of made-up cat pictures every Saturday, it’s probably an academic or for-laughs project.
The second is photographic porn (not video - far too much context and variety in physics models). There is a vast amount of training data, lots of labels and rating information and relatively low context and variety. The failure case is ‘wtf, reload’ and a lot of the training is already being done - for filters.
That, therefore, was my guess for the debate: the obvious early deployment of machine learning is actually a non-internet-connected, unfirewallable app that produces still pornography on demand - without having to employ any models (except mathematical ones).
Good thoughts, Alan. I've been wondering about this as well, though my perspective is a bit different. Not AI, but the precursor automation has been taking over tasks which comprise jobs. This frees up manpower, with the results driven by management style: either "we have excess manpower, let someone go to save money" or "we have excess manpower, find something productive for these people to do so we make more money." My hunch is that successive iterations of automation simplify the states, paving the way for AI or ML to take over the tasks. I automated much of my previous job in self-defense: they gave me six times the work but no extra help. My replacement is better at coding and automated it further. Nothing in the job description relates to app development... but I digress. I don't think most jobs will be replaced, as most jobs have too much variety, like you described. Individual tasks, however, are ripe for ML and AI to take over, and I think this will eat away at jobs like bookworms eating pages of books.
While it may be fun to guess about what will be first, I don't think that it's a very important question. I'm far more interested in what the changes will be over a given time period, say 10 or 20 years.
In 20 years I'd see it as likely that truck drivers have been mostly replaced. The economic incentive is enormous, so if the domain turns out to be too complex we'll adapt the domain - for example, dedicated lanes for automated vehicles. Tornadoes go under force majeure, so as long as the failure mode is merely ineffective and not catastrophic, it will be accepted.
Once we've gotten this far, the step to automated personal transport isn't that large.
I also think that people tend to get stuck on the definition of AI. It's a very interesting academic question, but for society it doesn't really matter whether the thing is "intelligent" or just a simple algorithm. It might even be that a very effective human cat categoriser would fail your initial points.
All those who talk about using automated vehicles to replace drivers have never driven in Boston. No automated vehicle can navigate our cow paths and alleyways in the middle of winter... one misplaced piece of lawn furniture put out to protect a parking space is all that's needed to cause a traffic jam (and the concept of a separate lane would require razing a third of the city's buildings... not going to happen).
"People find ways of getting money by impeding society. Once they can impede society, they can be paid to leave people alone. The waste inherent in owning information will become more and more important and will ultimately make the difference between the utopia in which nobody really has to work for a living because it's all done by robots and a world just like ours where everyone spends much time replicating what the next fellow is doing."
If we could solve this problem, it would be OK for AI to do all of the work. That's the world I'd like to have. If we don't solve this problem, the robots will do all of the impeding and that will be very bad.