social.sokoll.com

Search

Items tagged with: ai

"If robots steal so many jobs, why aren't they saving us now?" "Because the machines are far, far away from matching our intelligence and dexterity. You’re more likely to have a machine automate part of your job, not destroy your job entirely."

If robots steal so many jobs, why aren't they saving us now?

#solidstatelife #ai #technologicalunemployment
 

Dennis Demmer on Twitter: "My #styleGAN has learned to create new plants based on the @BioDivLibrary #OpenAccess artworks. #ArtificialIntelligence #ai #MachineLearning #ml #OpenScience https://t.co/vA7ErAzvX6" / Twitter


This is totally cool!

https://twitter.com/DemmerDennis/status/1234189594180161536
 
TextFooler makes adversarial text examples. "Adversarial examples" are images where, if you add imperceptible noise, the neural network will change its classification from "panda" to "gibbon", even though to you, the human, it still looks exactly like a panda.

The idea behind TextFooler is to change text in such a way that a human wouldn't change its classification but an AI would. The new version should have the same meaning as the original, and it should have correct spelling and grammar and otherwise look natural.

Examples: "The characters, cast in impossibly contrived situations, are totally estranged from reality." changed to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality." The task was to categorize the movie review as "positive" or "negative". The first is classified as negative, but the second confuses the AI system (in this case one called WordLSTM) into thinking it's positive.

"It cuts to the knot of what it actually means to face your scares, and to ride the overwhelming metaphorical wave that life wherever it takes you." changed to "It cuts to the core of what it actually means to face your fears, and to ride the big metaphorical wave that life wherever it takes you." This flips the classification from positive to negative.

"Two small boys in blue soccer uniforms use a wooden set of steps to wash their hands. The boys are in band uniforms." changed to "Two small boys in blue soccer uniforms use a wooden set of steps to wash their hands. The boys are in band garments." The idea here is to classify the sentence pair as "contradiction", "entailment" (the second idea follows from the first), or "neutral". The second pair of sentences flips the classification, produced here by a model trained on the SNLI dataset, from "contradiction" to "entailment".

"A child with wet hair is holding a butterfly decorated beach ball. The child is at the beach." changed to "A child with wet hair is holding a butterfly decorated beach ball. The youngster is at the shore." The second pair flips the classification from "neutral" to "entailment".
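The swap-until-the-label-flips loop is easy to sketch. Here is a toy illustration of the idea (the keyword classifier, synonym table, and greedy search are all invented for this example; the real TextFooler ranks words by importance and checks semantic similarity with embeddings):

```python
# Toy sketch of the TextFooler idea: greedily swap words for synonyms
# until a classifier's label flips. Everything here (the classifier, the
# synonym table) is made up for illustration -- not the real TextFooler.

SYNONYMS = {"contrived": ["engineered"], "totally": ["fully"]}
NEGATIVE_CUES = {"contrived", "estranged"}

def toy_classifier(words):
    """Label 'negative' if enough negative cue words appear, else 'positive'."""
    hits = sum(1 for w in words if w in NEGATIVE_CUES)
    return "negative" if hits >= 2 else "positive"

def attack(sentence):
    words = sentence.lower().replace(",", "").replace(".", "").split()
    original = toy_classifier(words)
    for i, word in enumerate(words):
        for synonym in SYNONYMS.get(word, []):
            candidate = words[:i] + [synonym] + words[i + 1:]
            if toy_classifier(candidate) != original:
                # the swap fooled the classifier while keeping the meaning
                return " ".join(candidate), toy_classifier(candidate)
    return " ".join(words), original  # no flip found

adv, label = attack("The characters, cast in impossibly contrived "
                    "situations, are totally estranged from reality.")
print(label)  # the toy classifier is fooled into 'positive'
```

A human would still read the adversarial sentence as a negative review; only the classifier's label changes.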

Hey Alexa: Sorry I fooled you

#solidstatelife #ai #nlp #textfooler
 
The authorities in the #USA must drop all charges against #Julian #Assange that relate to his work with Wikileaks. The US has relentlessly pursued Assange for years, and that is an attack on the right to freedom of expression!

#Amnesty #international #AI
 
A machine learning algorithm has identified an antibiotic that kills E. coli and many other disease-causing bacteria, including some strains that are resistant to all known antibiotics. To test it, mice were deliberately infected with A. baumannii and C. difficile, and the antibiotic cleared both infections.

"The computer model, which can screen more than a hundred million chemical compounds in a matter of days, is designed to pick out potential antibiotics that kill bacteria using different mechanisms than those of existing drugs."

"The researchers also identified several other promising antibiotic candidates, which they plan to test further. They believe the model could also be used to design new drugs, based on what it has learned about chemical structures that enable drugs to kill bacteria."

"The machine learning model can explore, in silico, large chemical spaces that can be prohibitively expensive for traditional experimental approaches."

"Over the past few decades, very few new antibiotics have been developed, and most of those newly approved antibiotics are slightly different variants of existing drugs." "We're facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anemic pipeline in the biotech and pharmaceutical industries for new antibiotics."

"The researchers designed their model to look for chemical features that make molecules effective at killing E. coli. To do so, they trained the model on about 2,500 molecules, including about 1,700 FDA-approved drugs and a set of 800 natural products with diverse structures and a wide range of bioactivities."

"Once the model was trained, the researchers tested it on the Broad Institute's Drug Repurposing Hub, a library of about 6,000 compounds. The model picked out one molecule that was predicted to have strong antibacterial activity and had a chemical structure different from any existing antibiotics. Using a different machine-learning model, the researchers also showed that this molecule would likely have low toxicity to human cells."

"This molecule, which the researchers decided to call halicin, after the fictional artificial intelligence system from '2001: A Space Odyssey,' has been previously investigated as a possible diabetes drug. The researchers tested it against dozens of bacterial strains isolated from patients and grown in lab dishes, and found that it was able to kill many that are resistant to treatment, including Clostridium difficile, Acinetobacter baumannii, and Mycobacterium tuberculosis. The drug worked against every species that they tested, with the exception of Pseudomonas aeruginosa, a difficult-to-treat lung pathogen."

"Preliminary studies suggest that halicin kills bacteria by disrupting their ability to maintain an electrochemical gradient across their cell membranes. This gradient is necessary, among other functions, to produce ATP (molecules that cells use to store energy), so if the gradient breaks down, the cells die. This type of killing mechanism could be difficult for bacteria to develop resistance to, the researchers say."

"The researchers found that E. coli did not develop any resistance to halicin during a 30-day treatment period. In contrast, the bacteria started to develop resistance to the antibiotic ciprofloxacin within one to three days, and after 30 days, the bacteria were about 200 times more resistant to ciprofloxacin than they were at the beginning of the experiment."

The way the system works: they developed a "directed message passing neural network", open-sourced as "Chemprop", that learns to predict molecular properties directly from the graph structure of the molecule, where atoms are represented as nodes and bonds as edges. For every molecule, the molecular graph corresponding to each compound's simplified molecular-input line-entry system (SMILES) string was reconstructed, and the set of atoms and bonds determined, using an open-source package called RDKit. From this, a feature vector describing each atom and bond was computed, covering the number of bonds for each atom, formal charge, chirality, number of bonded hydrogens, hybridization, aromaticity, atomic mass, bond type for each bond (single/double/triple/aromatic), conjugation, ring membership, and stereochemistry. "Aromatic" refers to rings of bonds. "Conjugation" refers to those chemistry diagrams where you see what look like alternating single and double (or sometimes triple) bonds -- what's going on is that the molecule has connected p orbitals with electrons that move around. "Stereochemistry" refers to the fact that molecules with the same formula can form different "stereoisomers", which have different 3D arrangements (for example, mirror images of each other).
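To make the featurization concrete, here is a stripped-down, hand-rolled sketch of per-atom and per-bond feature vectors (the real pipeline derives these from SMILES strings with RDKit; the feature set and helper names here are simplified assumptions, covering only a few of the properties listed above):

```python
# Hand-rolled illustration of atom/bond feature vectors for a molecular
# graph. NOT the Chemprop/RDKit featurization -- just the same shape of idea:
# one-hot categorical features plus a few scalars per atom and per bond.

ATOM_TYPES = ["C", "N", "O"]
BOND_TYPES = ["single", "double", "triple", "aromatic"]

def atom_features(symbol, n_bonds, formal_charge, n_hydrogens, aromatic):
    onehot = [1.0 if symbol == t else 0.0 for t in ATOM_TYPES]
    return onehot + [float(n_bonds), float(formal_charge),
                     float(n_hydrogens), 1.0 if aromatic else 0.0]

def bond_features(bond_type, conjugated, in_ring):
    onehot = [1.0 if bond_type == t else 0.0 for t in BOND_TYPES]
    return onehot + [1.0 if conjugated else 0.0, 1.0 if in_ring else 0.0]

# Ethanol (SMILES "CCO"): two carbons and an oxygen, two single bonds.
atoms = [atom_features("C", 1, 0, 3, False),
         atom_features("C", 2, 0, 2, False),
         atom_features("O", 1, 0, 1, False)]
bonds = [(0, 1, bond_features("single", False, False)),
         (1, 2, bond_features("single", False, False))]
print(len(atoms[0]), len(bonds[0][2]))  # prints: 7 6
```

Each node (atom) and edge (bond) of the graph ends up as a fixed-length numeric vector; those vectors are what the message-passing network consumes.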

From here (and this is the reason the system is called "directed message passing"), the model applies a series of message-passing steps in which it aggregates information from neighboring atoms and bonds to build an understanding of local chemistry. "On each step of message passing, each bond's featurization is updated by summing the featurization of neighboring bonds, concatenating the current bond's featurization with the sum, and then applying a single neural network layer with non-linear activation. After a fixed number of message-passing steps, the learned featurizations across the molecule are summed to produce a single featurization for the whole molecule. Finally, this featurization is fed through a feed-forward neural network that outputs a prediction of the property of interest. Since the property of interest in our application was the binary classification of whether a molecule inhibits the growth of E. coli, the model is trained to output a number between 0 and 1, which represents its prediction about whether the input molecule is growth inhibitory."

The system has additional optimizations, including 200 extra molecule-level features computed with RDKit. These address a limitation of the message-passing paradigm: it captures local chemistry well but does not do well with global molecular features, and this gets worse the larger the molecule and the larger the number of message-passing hops involved.

They used a Bayesian hyperparameter optimization system, which optimized such things as the number of hidden and feed-forward layers in the neural network and the amount of dropout (a regularization technique) involved.

On top of that they used ensembling, which in this case involved independently training several copies of the same model and combining their output. They used an ensemble of 20 models.
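The ensembling step itself is just averaging the models' outputs; a sketch with stand-in models (the lambdas below fake 20 slightly-disagreeing trained copies):

```python
# Ensemble prediction: independently trained copies of the same model
# each score a molecule, and the ensemble's output is the mean score.
def ensemble_predict(models, x):
    scores = [m(x) for m in models]
    return sum(scores) / len(scores)

# 20 stand-in "models" that disagree slightly about one (fake) molecule.
models = [lambda x, b=i: min(1.0, max(0.0, 0.6 + 0.01 * (b - 10)))
          for i in range(20)]
print(round(ensemble_predict(models, "fake-smiles"), 3))  # prints: 0.595
```

Averaging washes out the individual models' noise, which is why an ensemble of 20 beats any single copy on average.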

The training set was 2,335 molecules, with 120 of them having "growth inhibitory" effects against E. coli.

Once trained, the system was set loose on the Drug Repurposing Hub library, which was 6,111 molecules, the WuXi anti-tuberculosis library, which was 9,997 molecules, and parts of the ZINC15 database thought to contain likely antibiotic molecules, which was 107,349,233 molecules.

A final set of 6,820 compounds was found, and further reduced using the scikit-learn random forest and support vector machine classifiers.

To predict the toxicity of the molecules, they retrained Chemprop on a different training set, called the ClinTox dataset. This dataset has 1,478 molecules with clinical trial toxicity and FDA approval status. Once this model was made it was used to test the toxicity of the candidate antibiotic molecules.

At that point they hit the lab and started growing E. coli in 96-well flat-bottomed assay plates. 63 molecules were tested. The chemical they named halicin did the best and went on to further testing against other bacteria and in mice.

Artificial intelligence yields new antibiotic

#solidstatelife #ai #biochemistry #antibiotics
 
Some good points in here:

Quick, cheap to make and loved by police – facial recognition apps are on the rise - Clearview AI may be controversial but it’s not the first business to identify you from your online pics

#technology #facerecognition #clearview #ai
 
More people would trust a robot than their manager

Yeah. Seems like many managers are just idiots.
#work #AI
 

You looking for an AI project? You love Lego? Look no further than this Reg reader's machine-learning Lego sorter • The Register


That's cool ;)
#ai #lego
 

How to recognize AI snake oil


#ai #snakeOil
The thing with the interview software reminds me of Google search engine optimization in the early days: cram in as many keywords as possible.
 

Opinion: AI For Good Is Often Bad | WIRED

Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
Exactly. Technology alone does not solve social issues.

#technology #AI
 
#AI #DeepLearning #vision #Python #Google #image #EXIF

The dumb reason your fancy Computer Vision app isn’t working: Exif Orientation



Exif metadata is not a native part of the Jpeg file format. It was an afterthought taken from the TIFF file format and tacked onto the Jpeg file format much later. This maintained backwards compatibility with old image viewers, but it meant that some programs never bothered to parse Exif data.

Most Python libraries for working with image data like numpy, scipy, TensorFlow, Keras, etc, think of themselves as scientific tools for serious people who work with generic arrays of data. They don’t concern themselves with consumer-level problems like automatic image rotation — even though basically every image in the world captured with a modern camera needs it.

This means that when you load an image with almost any Python library, you get the original, unrotated image data. And guess what happens when you try to feed a sideways or upside-down image into a face detection or object detection model? The detector fails because you gave it bad data.
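A minimal fix, assuming Pillow is available, is to apply the EXIF orientation before handing pixels to the model:

```python
# Load an image upright for inference, using Pillow (assumed installed).
# Most array-oriented loaders (numpy/scipy/TensorFlow readers) skip this
# step, so a portrait phone photo arrives sideways at the detector.
from PIL import Image, ImageOps

def load_upright(path):
    img = Image.open(path)
    # exif_transpose reads the EXIF Orientation tag (0x0112) and
    # rotates/flips the pixels accordingly; if the tag is absent,
    # the image comes back unchanged.
    return ImageOps.exif_transpose(img)
```

Feed the result of `load_upright(...)` to the face or object detector instead of the raw decoded array, and the sideways-photo failure mode goes away.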

You might think this problem is limited to Python scripts written by beginners and students, but that's not the case! Even Google's flagship Vision API demo doesn't handle Exif orientation correctly.
 
And the teeny-tiny bottle of AI whisky goes to... • The Register

#whiskey #AI
 
"Deep learning can't progress with IEEE-754 floating point. Here's why Google, Microsoft, and Intel are leaving it behind." "The de facto standard for floating point is IEEE-754. It's available in all processors sold by Intel, AMD, IBM, and NVIDIA. But as the deep learning renaissance blossomed, researchers quickly realized that IEEE-754 would be a major constraint limiting the progress they could make. IEEE floating point was designed 30 years ago, when processing was expensive and memory access was cheap. The current technology stack is reversed: memory access is expensive, and processing is cheap. And deep learning is memory bound."

"Google developed the first version of its Deep Learning accelerator in 2014, which delivered two orders of magnitude more performance than the NVIDIA processors that were used prior, simply by abandoning IEEE-754. Subsequent versions have incorporated a new floating-point format, called bfloat16, optimized for deep learning to further their lead."

"Now, even Intel is abandoning IEEE-754 floating point for deep learning. Its Cooper Lake Xeon processor, for example, offers Google's bfloat16 format for deep learning acceleration. Thus, it comes as no surprise that competitors in the AI race are all following suit and replacing IEEE-754 floating point with their own custom number systems. And researchers are demonstrating that other number systems, such as posits and Facebook's DeepFloat, can even improve on Google's bfloat16."
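To see what bfloat16 gives up, here is a sketch that keeps only the top 16 bits of an IEEE-754 float32: the same sign bit and 8-bit exponent (so the same dynamic range), but the mantissa cut from 23 bits to 7 (real hardware usually rounds to nearest rather than truncating, so treat this as an approximation):

```python
# bfloat16 as "the top half of a float32": truncate the 32-bit pattern
# to 16 bits, keeping sign + 8-bit exponent + 7 mantissa bits.
import struct

def to_bfloat16_bits(x):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16  # drop the low 16 mantissa bits

def from_bfloat16_bits(b16):
    (x,) = struct.unpack(">f", struct.pack(">I", b16 << 16))
    return x

approx = from_bfloat16_bits(to_bfloat16_bits(3.14159))
print(approx)  # close to pi, but only about 2-3 decimal digits survive
```

Halving each value's footprint doubles the effective memory bandwidth, which is exactly the trade deep learning wants: it tolerates the low precision but is starved for memory access.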

Deep Learning Can't Progress With IEEE-754 Floating Point. Here's Why Google, Microsoft, And Intel Are Leaving It Behind

#solidstatelife #ai
 

The case against teaching kids to be polite to Alexa

When parents tell kids to respect AI assistants, what kind of future are we preparing them for?
https://www.fastcompany.com/40588020/the-case-against-teaching-kids-to-be-polite-to-alexa

Link given as a comment on the reshared post, and reposted at @Birne Helene's request :)
#article #tech #AI #Alexa #education #society #future
 

Thanks to Microsoft AI, Mackmyra creates world's first AI-generated whiskey - MSPoweruser

Lol, tastes like bugs?
#Microsoft #Whiskey #AI
 
Can an Algorithm Be Racist? | Mind Matters
No, the machine has no opinion. It processes vast tracts of data, and as a result the troubling hidden roots of some of that data are exposed.
It is important to differentiate between the algorithm model and the data it processes, which is why it is dangerous to blame the algorithm without understanding why the results look the way they do.

#AI #machineLearning #science
 

Nationalism Destroying Science

And an idiotic populace is exactly what nationalists need


If this is happening in AI research, how much is happening in other fields that are going unreported?
Last year respected Dutch scientist and Professor of Information Retrieval at the University of Amsterdam Maarten de Rijke had six publications accepted by SIGIR 2018, the world's top conference in the field of information retrieval. De Rijke was a scheduled panelist and, along with his students, was organizing a workshop and presenting a tutorial for the conference at the University of Michigan in Ann Arbor, USA. However, because de Rijke had spoken on data science in Iran in November 2017, his US visa application was denied.
https://medium.com/syncedreview/abandon-us-petition-protests-ai-conference-visa-denials-b90dd5a808c4

#Science #AI #Politics #CliffBramlettArchivesPolitics
 

Are you a top model? Poor thing, I feel for you: AI has stolen your job, what will you do now?
#AI #jobs #body #clothes #model

Amazing AI Generates Entire Bodies of People Who Don’t Exist



The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.
 
We were having a discussion about what jobs would get killed off by ‘AI’ first.

I find a lot of the articles about AI taking jobs rather odd and uninformed. What they mean by 'AI' is machine learning, and calling that AI is a bit rich, as it generally has yet to make two critical leaps:
  • It's not yet very good at getting from 'I can classify cats correctly' to 'I can provide you a meaningful model of how to classify cats that you can act upon'.
  • It can't discuss its model of cats with other systems and debate and reason about it and about improvements. When Alexa and Siri start arguing with each other about the best way for you to get to the airport on time, then worry.
There are, IMHO, four things that determine whether a human job can usefully be done by machine learning.

The first is simple: what happens when it breaks. If there is a defined, safe, simple behaviour for 'wtf, I don't know', then it's much easier to automate. It's why we have had self-driving production trains for years on things like the Docklands Light Railway, but no serious self-driving cars. The 'help, I've gone wrong' response for a light railway vehicle is to brake at a precalculated rate and stop ASAP without hurting the people inside. The 'help, I've gone wrong' response for a car is seriously complicated, and one humans often get wrong. Car accidents are full of 'if only I had xyz, then...'

The second is that the task has to be reasonably predictable and have lots of labelled training data. If it’s not predictable then you lose (human or machine). The more complex it gets, the more data you need (and current systems need far more than humans do, and are fragile). That also plays into the first problem: if you have a complex system where ‘help’ is not an acceptable response, then you need a hell of a lot of data. Not good for self-driving cars, which have to deal with bizarre rare events like deer jumping over fences, people climbing out of manholes and tornadoes - none of which feature prominently in data sets. Does a Google self-driving car understand a tornado? I’d love to know.

The third is context. A system can have a lot of inputs that are not obvious and require additional information to process. A human finding that a line of cones blocks the path from their driveway to the road is likely to have the contextual knowledge to conclude that drunk students have been at work, for example. In a system with very little context, life is a lot easier.

The most critical of all, though, is what is called in systems thinking ‘variety’: the total number of different states you have to manage. A system that can properly manage something has (we believe) to have more states than the system it manages. This is known as ‘Ashby’s law’, although ‘law’ might be the wrong word, given that in the general arm-waving systems context there isn’t a mathematical proof of it.

It’s why an IT department can handle lost passwords but falls apart when someone phones up to complain that the printer is telling them to subscribe to YouTube videos. It’s why the US tax system is so complicated, and it leads to a whole pile of other fun things (such as never being able to entirely understand yourself). It’s also the other half of why a drunk student can outwit a self-driving car.

Variety is a huge challenge for machine systems. It’s why burger-flipping robots are easier than serving robots. It’s why security may well be the last job that gets automated in a shop. Automatic shelf stocking - not too hard, though there are challenges. Automatic sales - usually easy. Dealing with 8 drunk people swaying around the shop, taking and drinking cans… difficult. Security folks may not be well paid, but they actually have to deal with an enormous amount of variety and context.

Whilst it’s not usually phrased in AI terms, we actually know a hell of a lot about variety, the systems that cope with it, and structure - through systems modelling, management cybernetics and the like, going back to work in the early 1970s by folks like Stafford Beer (who is as interesting as his name) on viable system models: all the feedback loops and arrangements you need for something that is actually functional and adaptable.
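Ashby’s law can be made concrete with a back-of-the-envelope sketch (my own illustration, with made-up numbers, not from any formal treatment): if each kind of disturbance needs its own distinct corrective response, a regulator with fewer responses than there are disturbances simply cannot hold the line.

```python
# Toy illustration of Ashby's law of requisite variety: a regulator can
# fully control a system only if it has at least as many distinct
# responses as the system has disturbance states.

def regulable_fraction(n_disturbances, n_responses):
    """Best case: each response neutralises exactly one disturbance,
    so at most n_responses of the n_disturbances can be handled."""
    return min(n_disturbances, n_responses) / n_disturbances

# Password-reset desk: a handful of known situations, a canned response
# for each - full variety, fully manageable.
print(regulable_fraction(5, 5))      # 1.0

# Shop security at closing time: far more situations than any fixed
# rule book has responses for.
print(regulable_fraction(1000, 20))  # 0.02
```

The point of the sketch is only the inequality: once the managed system’s variety exceeds the manager’s, some states are necessarily unhandled - which is exactly the ‘wtf I don’t know’ failure case above.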

Back, however, to ‘what will machine learning kill off first’ (and not in the sense of running over in automated cars). We need something that has:
  • a ‘meh’ failure case
  • a large amount of training data, preferably well labelled, online and easily bulk-fed to the learning end of the system
  • as little need for complex contextual information as possible
  • not too much complex variety and state
It would also be nice if people would rate the output for free to help improve the model.

There are two obvious candidates. The first - cat pictures - doesn’t have enough commercial value, so while it would be funny to create a site that posts an infinite supply of made-up cat pictures every Saturday, it’s probably an academic or for-laughs project.

The second is photographic porn (not video - far too much context and variety in physics models). There is a vast amount of training data, lots of labels and rating information and relatively low context and variety. The failure case is ‘wtf, reload’ and a lot of the training is already being done - for filters.

That, therefore, was my guess for the debate: the obvious early deployment of machine learning is actually a non-internet-connected, unfirewallable app that produces still pornography on demand - without having to employ any models (except mathematical ones).

#ai #machinelearning #randomramblings
 
Be unpredictable and #AI loses its power immediately.
#AI
 

Notes on AI Bias — Benedict Evans


#AI #science #intelligence

“Machine Learning can do anything you could train a dog to do - but you’re never totally sure what you trained the dog to do.”
 

#AI #surveillance #escape

This colorful printed patch makes you pretty much invisible to AI - The Verge



The rise of AI-powered surveillance is extremely worrying. The ability of governments to track and identify citizens en masse could spell an end to public anonymity. But as researchers have shown time and time again, there are ways to trick such systems.

The latest example comes from a group of engineers at KU Leuven in Belgium. In a paper shared last week on the preprint server arXiv, these students show how simple printed patterns can fool an AI system that’s designed to recognize people in images.

If you print off one of the students’ specially designed patches and hang it around your neck, from an AI’s point of view, you may as well have slipped under an invisibility cloak.

As the researchers write: “We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras.” (They don’t mention it, but this is, famously, an important plot device in the sci-fi novel Zero History by William Gibson.)
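The principle behind such patches can be sketched in a few lines (a hedged toy, not the KU Leuven method, which optimizes a printed patch against a full person detector): craft a small perturbation that pushes an input across a model’s decision boundary, shown here with a one-step gradient-sign attack on a hand-made linear classifier.

```python
import numpy as np

# Toy linear "person detector": predicts "person" when the score is
# positive. Weights and input are made up for illustration.
w = np.array([1.0, -2.0, 3.0])
x = np.array([2.0, 1.0, 0.5])

def score(v):
    return float(w @ v)

# One-step gradient-sign attack (FGSM-style): for a linear model the
# gradient of the score w.r.t. the input is just w, so stepping each
# feature by eps against sign(w) lowers the score as fast as possible.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(score(x))      # 1.5  -> classified "person"
print(score(x_adv))  # -2.1 -> classified "no person"
```

Each feature moved by at most 0.6, yet the decision flipped - a printed patch plays the same trick in pixel space, just optimized against a far bigger model.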
 

The Intelligent Programming Language | HNF Blog

If a computer is as clever as a human, it’s down to the software. Nobody understood that better than the American mathematician John McCarthy. In April 1959 he published the language LISP, which is superbly suited to artificial-intelligence programs. From 1979 on, computer companies even built so-called Lisp machines, tailored to the language.
LISP drives you mad with all the nested parentheses...

#LISP #retrocomputing #AI
The Intelligent Programming Language
 
Image/Photo
GauGAN turns your doodles into photorealistic landscapes

NVIDIA's deep learning AI model GauGAN, cleverly named after post-impressionist painter Paul Gauguin, turns simple sketches into realistic scenes in seconds by leveraging generative adversarial networks, or GANs, to convert segmentation maps into lifelike images.

GauGAN allows users to draw their own segmentation maps and manipulate the scene, labeling each segment with labels like sand, sky, sea or snow. The tool also allows users to add a style filter, changing a generated image to adapt the style of a particular painter, or change a daytime scene to sunset.

Source: https://blogs.nvidia.com/blog/2019/03/18/gaugan-photorealistic-landscapes-nvidia-research/
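As a hedged sketch of the input side (labels and sizes invented for illustration, not taken from NVIDIA’s code): the “doodle” such a model consumes is just a grid of class labels, typically one-hot encoded per pixel before the generator turns it into an image.

```python
import numpy as np

# A 3x3 segmentation "doodle": each cell is a class label.
LABELS = {0: "sky", 1: "sea", 2: "sand"}
seg_map = np.array([[0, 0, 0],
                    [1, 1, 1],
                    [2, 2, 1]])

# One-hot encode per pixel: indexing the identity matrix with the label
# grid yields shape (H, W, num_classes).
one_hot = np.eye(len(LABELS))[seg_map]

print(one_hot.shape)   # (3, 3, 3)
print(one_hot[2, 0])   # [0. 0. 1.] - the bottom-left pixel is "sand"
```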

#AI #deeplearning #art
 
HNF - Consciousness in the Computer?
For centuries, people have tried to understand the relationship between mind and brain - and, for a few decades now, even to reproduce its workings in a computer. We still face a riddle, but a simple solution seems possible.

Without preconceptions, the renowned speaker surveys the specialist fields involved and examines how brain and mind are connected. AI might achieve the breakthrough of creating autonomous organisms with consciousness.
Sounds very interesting - I think I’ll definitely go and listen to it.

https://www.hnf.de/veranstaltungen/vortraege/date/2019/04/11/cal/event/tx_cal_phpicalendar/bewusstsein_im_computer.html

#KI #AI #Geist #Gehirn
 
Instagram is best when you write automated software that abuses its algorithms for your personal gain.

#programming #algorithm #python #ai #newyork #instagram
 
Image/Photo
In a future not so far away, one Artificial Intelligence prevailed above all other AIs and their governments. Society has migrated to a permanently integrated reality connected to a single neural network, which continuously optimizes people’s experiences by processing their personal data.

Nathan, an outsider still refusing to comply with the new system, is making a living off the grid as a smuggler of modded hardware and cracked software. Geared up with his custom headset, he is among the few that can still switch AR off and see reality for what it is.

VALENBERG [Pixel Art, Animation] MASTER BOOT RECORD [Story, Music, FX] ELDER0010 [Code, Text Mode, Hacking]

The music is fucking cool! And it’s VALENBERG on the pixel art - you know, the guy behind the Perturbator music videos!!!


Take a look at the announcement.

#game #linux #pixelart #pixel #metal #music
#art #hack #privacy #hardware #software
#hacking #cyberpunk #cyber #punk #future
#AI #government #steam #code #reality
 
Facebook Reminds Us That Binary Deep Learning Classifiers Don't Work For Content Moderation

#deepLearning #AI
 
The Gold Rush of #Singularity | #Science & #Technology #AI
Japan's billionaire Masayoshi Son has sold the idea of singularity to Saudi Arabia. But is this investment a good idea?
https://www.aljazeera.com/indepth/opinion/mohammed-bin-salman-gold-rush-singularity-180522101213108.html
 