social.sokoll.com

Items tagged with: ai

"With neural filters, Photoshop can adjust a subject's age and facial expression, amplifying or reducing feelings like 'joy,' 'surprise,' or 'anger' with simple sliders. You can remove someone's glasses or smooth out their spots. One of the weirder filters even lets you transfer makeup from one person to another. And it's all done in just a few clicks, with the output easily tweaked or reversed entirely."

"Adobe is harnessing the power of generative adversarial networks -- or GANs -- a type of machine learning technique that's proved particularly adept at generating visual imagery. Some of the processing is done locally and some in the cloud, depending on the computational demands of each individual tool."

Photoshop's AI neural filters can tweak age and expression with a few clicks

#solidstatelife #ai #adobe #photoshop
 
"With neural filters, Photoshop can adjust a subject's age and facial expression, amplifying or reducing feelings like 'joy,' 'surprise,' or 'anger' with simple sliders. You can remove someone's glasses or smooth out their spots. One of the weirder filters even lets you transfer makeup from one person to another. And it's all done in just a few clicks, with the output easily tweaked or reversed entirely."

"Adobe is harnessing the power of generative adversarial networks -- or GANs -- a type of machine learning technique that's proved particularly adept at generating visual imagery. Some of the processing is done locally and some in the cloud, depending on the computational demands of each individual tool."

Photoshop's AI neural filters can tweak age and expression with a few clicks

#solidstatelife #ai #adobe #photoshop
 
The Guardian’s GPT-3-written article misleads readers about AI. Here’s why. – TechTalks https://bdtechtalks.com/2020/09/14/guardian-gpt-3-article-ai-fake-news/

Using the label AI, or claiming something is AI, is usually a strong indicator of bullshit.
GPT-3 is just an advanced text generator that stitches words together.

#gpt3 #ai
The Guardian’s GPT-3-written article misleads readers about AI. Here’s why.
 
Mmmm. AI "You Keep Using That Word, I Do Not Think It Means What You Think It Means"

These students figured out their tests were graded by AI — and the easy way to cheat

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He’d completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He’d received a 50 out of 100. That wasn’t on a practice test — it was his real grade.
“He was like, I’m gonna have to get a 100 on all the rest of this to make up for this,” said Simmons in a phone interview with The Verge. “He was totally dejected.”
At first, Simmons tried to console her son. “I was like well, you know, some teachers grade really harshly at the beginning,” said Simmons, who is a history professor herself. Then, Lazare clarified that he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.
Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.
Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like... ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”
“I wanted to game it because I felt like it was an easy way to get a good grade,” Lazare told The Verge. He usually digs the keywords out of the article or video the question is based on.
Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.
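Edgenuity hasn't published how its scoring works, but the behavior Simmons describes is easy to picture. Here is a minimal sketch, assuming the grader simply checks keyword coverage; the keyword list and threshold are invented for illustration:

```python
# Hypothetical sketch of a keyword-matching autograder. Edgenuity's actual
# scoring logic is not public; the keyword list and scoring rule are made up.
EXPECTED = {"wealth", "caravan", "ship", "india", "china", "trade"}

def grade_short_answer(answer, expected=EXPECTED, threshold=0.5):
    """Score purely by keyword coverage, ignoring coherence entirely."""
    words = set(answer.lower().split())
    coverage = len(expected & words) / len(expected)
    return 100 if coverage >= threshold else round(100 * coverage)

salad = ("Constantinople was located well. That helped the empire. "
         "wealth caravan ship India China trade")
print(grade_short_answer(salad))  # 100: word salad scores a perfect grade
```

A grader like this can't tell word salad from an essay, which is why the trick works on any question whose keyword list can be guessed.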
#AI #education #hanginthere
 

This ‘Cloaking’ Algorithm Breaks Facial Recognition by Making Tiny Edits
A team of researchers at the University of Chicago has developed an algorithm that makes tiny, imperceptible edits to your images in order to mask you from facial recognition technology. Their invention is called Fawkes, and anybody can use it on their own images for free.

The algorithm was created by researchers in the SAND Lab at the University of Chicago, and the open-source software tool that they built is free to download and use on your computer at home.

The program works by making "tiny, pixel-level changes that are invisible to the human eye," but that nevertheless prevent facial recognition algorithms from categorizing you correctly. It's not so much that it makes you impossible to categorize; it's that the algorithm will categorize you as a different person entirely. The team calls the result "cloaked" photos, and they can be used like any other:
You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo.
The only difference is that a company like the infamous startup Clearview AI can't use them to build an accurate database that will make you trackable.
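Conceptually, cloaking is an optimization problem: nudge the pixels until a face-recognition feature extractor maps the photo near a different identity, while keeping every change inside a tiny perceptual budget. Here's a minimal PyTorch sketch of that idea; it is not the actual Fawkes code, and `feature_extractor` stands in for any face-embedding model:

```python
import torch

def cloak(image, target_image, feature_extractor, steps=100, lr=0.01, budget=0.03):
    """Perturb `image` so its features land near `target_image`'s identity.

    image, target_image: float tensors in [0, 1] of shape (1, 3, H, W).
    budget: max per-pixel change, keeping the edit visually imperceptible.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = feature_extractor(target_image)  # the decoy identity
    for _ in range(steps):
        loss = torch.norm(feature_extractor(image + delta) - target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # stay inside the perceptual budget
    return (image + delta).clamp(0, 1).detach()
```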

Here's a before-and-after that the team created to show the cloaking at work. On the left is the original image, on the right a "cloaked" version. The differences are noticeable if you look closely, but they look like the result of dodging and burning rather than actual alterations that might change the way you look:
You can watch an explanation and demonstration of Fawkes by co-lead authors Emily Wenger and Shawn Shan below:

According to the team, Fawkes has proven 100% effective against state-of-the-art facial recognition models. Of course, this won't make facial recognition models obsolete overnight, but if technology like this caught on as "standard" when, say, uploading an image to social media, it would make maintaining accurate models much more cumbersome and expensive.

"Fawkes is designed to significantly raise the costs of building and maintaining accurate models for large-scale facial recognition," explains the team. "If we can reduce the accuracy of these models to make them untrustable, or force the model's owners to pay significant per-person costs to maintain accuracy, then we would have largely succeeded."

To learn more about this technology, or if you want to download Version 0.3 and try it on your own photos, head over to the Fawkes webpage. The team will be (virtually) presenting their technical paper at the upcoming USENIX Security Symposium running from August 12th to the 14th.

(via Fstoppers via Gizmodo)

#finds #software #technology #ai #algorithm #artificialintelligence #clearview #clearviewai #cloaking #face #facialrecognition #fawkes #photoediting #privacy #security
This ‘Cloaking’ Algorithm Breaks Facial Recognition by Making Tiny Edits

PetaPixel: This 'Cloaking' Algorithm Breaks Facial Recognition by Making Tiny Edits (DL Cade)
 
Watch "ORDEN OGAN - In The Dawn Of The AI (2020) // Official Music Video // AFM Records" on YouTube: https://youtu.be/cAYvwbUUhD0

Really cool, and a nice topic to cover!
#music #metal #ai
 
Machine Learning Summer School starts June 28th. All virtual. From the Max Planck Institute for Intelligent Systems, Tübingen, Germany. Topics covered include symbolic AI, statistical AI, causality and learning theory, AI fairness, computational neuroscience in AI, Bayesian AI, game theory in AI, kernel methods, AI in healthcare, deep learning, geometric deep learning, deep reinforcement learning, and quantum machine learning, whatever that is.

The Machine Learning Summer School

#solidstatelife #ai #aieducation
 
High resolution neural face swapping. Deepfakes taken to the next level. There's one encoder for any input face, but every output face has its own output decoder trained for that face. The decoding has a "progressive" system where it adds resolution in steps, rather than going end-to-end in high resolution. Face alignment by detecting facial landmarks, combined with an "ablation" system, eliminates any jitter. The background is composited back in using a separate process.
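A rough PyTorch sketch of that one-encoder, per-face-decoder layout (the real pipeline adds the progressive-resolution training, landmark alignment, and compositing on top of this skeleton; the layer sizes here are arbitrary):

```python
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, identities, latent=256):
        super().__init__()
        # One encoder shared by every input face...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent, 4, stride=2, padding=1), nn.ReLU(),
        )
        # ...but a dedicated decoder trained for each output identity.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(latent, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            for name in identities
        })

    def forward(self, image, target_identity):
        # Encode anyone's face, decode it as the chosen identity.
        return self.decoders[target_identity](self.encoder(image))
```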

#solidstatelife #ai #computervision #generativenetworks #deepfakes
 
"When to assume neural networks can solve a problem. A pragmatic guide to the powers and limits of neural networks" from Skynet Today. "A neural network can almost certainly solve a problem if another ML algorithm has already been used to solve it." "A neural network can almost certainly solve a problem very similar to ones already solved by neural nets." "A neural network can solve problems that a human can solve if these problems are 'small' in data and require little-to-no context." (Yeah but we like big data, right?) "A neural network might be able to solve a problem when we are reasonably sure that a) it's deterministic, b) we provide any relevant context as part of the input data, and c) the data is reasonably small."

When to Assume Neural Networks Can Solve a Problem

#solidstatelife #ai
 
"When to assume neural networks can solve a problem. A pragmatic guide to the powers and limits of neural networks" from Skynet Today. "A neural network can almost certainly solve a problem if another ML algorithm has already been used to solve it." "A neural network can almost certainly solve a problem very similar to ones already solved by neural nets." "A neural network can solve problems that a human can solve if these problems are 'small' in data and require little-to-no context." (Yeah but we like big data, right?) "A neural network might be able to solve a problem when we are reasonably sure that a) it's deterministic, b) we provide any relevant context as part of the input data, and c) the data is reasonably small."

When to Assume Neural Networks Can Solve a Problem

#solidstatelife #ai
 
Why is artificial intelligence so useless for business? Ponders Matthew Eric Bassett. "Today's work in artificial intelligence is amazing. We've taught computers to beat the most advanced players in the most complex games. We've taught them to drive cars and create photo-realistic videos and images of people. They can re-create works of fine-art and emulate the best writers. Yet I know that many businesses still need people to, e.g., read PDF documents about an office building and write down the sizes of the leasable units contained therein. If artificial intelligence can do all that, why can't it read a PDF document and transform it into a machine-readable format? Today's artificial intelligence algorithms can recreate playable versions of Pacman just from playing games against itself. So why can't I get a computer to translate my colleague's financial spreadsheet into the format my SAP software wants? Despite two decades of advancements in artificial intelligence, it feels that the majority of office work consists of menial mental tasks."

Why is Artificial Intelligence So Useless for Business?

#solidstatelife #ai
 
"AI techniques in medical imaging may lead to incorrect diagnoses." The researchers tested 6 medical AI systems on MRI and CT images. They made tiny perturbations to the images to see if those destabilized the AI algorithms. This was done by adding small bits of random noise or small samples from a Fourier transform. They also tested making "structural" changes to the images, in this case adding characters to them. They also tested upsampling the images.

Tiny perturbations lead to a myriad of different artifacts. Not only that, but different AI systems have different artifacts and instabilities. There is no common denominator.

Likewise, there are a variety of failures in trying to recover from structural changes to images. Failures range from complete removal of details to more subtle distortions and blurring of features.

AI systems have to be retrained from scratch on any subsampling pattern. Even increasing the number of samples can cause the quality of reconstruction to deteriorate.

These instabilities are not necessarily rare events. A key question regarding instabilities with respect to tiny perturbations is how much they occur in practice. There can be noise in the images, machines can malfunction, patients can move while images are being made, there can be subtle anatomic differences between patients, and so on.

Current deep learning methods lack any easy way to make the instability problem go away.
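The testing recipe itself is simple to sketch. Here's a hedged numpy version, where `reconstruct` stands in for any of the tested reconstruction systems; note that the paper crafts worst-case perturbations, so random noise like this is only the naive baseline:

```python
import numpy as np

def stability_gap(reconstruct, measurements, noise_level=1e-3, trials=10, seed=0):
    """Relative change in the reconstruction under tiny input perturbations."""
    rng = np.random.default_rng(seed)
    baseline = reconstruct(measurements)
    scale = noise_level * np.abs(measurements).mean()
    gaps = []
    for _ in range(trials):
        noisy = measurements + rng.normal(scale=scale, size=measurements.shape)
        gaps.append(np.linalg.norm(reconstruct(noisy) - baseline))
    # A stable method keeps this ratio near noise_level; the systems
    # described above can amplify it into visible artifacts.
    return max(gaps) / np.linalg.norm(baseline)
```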

AI techniques in medical imaging may lead to incorrect diagnoses

#solidstatelife #ai #medicalai
 
"AI techniques in medical imaging may lead to incorrect diagnoses." The researchers tested 6 medical AI systems on MRI and CT images. They made tiny perturbations to the images to see if those destabilized the AI algorithms. This was done by adding small bits of random noise or small samples from a Fourier transform. They also tested making "structural" changes to the images, in this case adding characters to them. They also tested upsampling the images.

Tiny perturbations lead to a myriad of different artifacts. Not only that, but different AI systems have different artifacts and instabilities. There is no common denominator.

Likewise, there are a variety of failures in trying to recover from structural changes to images. Failures range from complete removal of details to more subtle distortions and blurring of features.

AI systems have to be retrained from scratch on any subsampling pattern. Even increasing the number of samples can cause the quality of reconstruction to deteriorate.

These instabilities are not necessarily rare events. A key question regarding instabilities with respect to tiny perturbations is how much they occur in practice. There can be noise in the images, machines can malfunction, patients can move while images are being made, there can be subtle anatomic differences between patients, and so on.

Current deep learning methods lack any easy way to make the instability problem go away.

AI techniques in medical imaging may lead to incorrect diagnoses

#solidstatelife #ai #medicalai
 
DeepDesigns.ai. AI-designed face masks and other fashion. You pick an initial design, and it generates mutations, and then you pick one of those, and keep iterating as long as you like.

DeepDesigns.ai

#solidstatelife #ai #fashion
 
A benchmark for evaluating the ability of natural language processing systems to discover shared underlying structure between languages has been developed. It's important to note that this is a "benchmark": not a solution, but a test to measure proposed solutions against, to see if they're any good. Historically, the development of good benchmarks has helped spur advancement in the field; for example, ImageNet spurred the invention of good image-classifying AIs.

"One of the key challenges in natural language processing (NLP) is building systems that not only work in English but in all of the world's ~6,900 languages. Luckily, while most of the world's languages are data sparse and do not have enough data available to train robust models on their own, many languages do share a considerable amount of underlying structure. On the vocabulary level, languages often have words that stem from the same origin -- for instance, 'desk' in English and 'Tisch' in German both come from the Latin 'discus'. Similarly, many languages also mark semantic roles in similar ways, such as the use of postpositions to mark temporal and spatial relations in both Chinese and Turkish."

"In NLP, there are a number of methods that leverage the shared structure of multiple languages in training in order to overcome the data sparsity problem. Historically, most of these methods focused on performing a specific task in multiple languages. Over the last few years, driven by advances in deep learning, there has been an increase in the number of approaches that attempt to learn general-purpose multilingual representations (e.g., mBERT, XLM, XLM-R), which aim to capture knowledge that is shared across languages and that is useful for many tasks. In practice, however, the evaluation of such methods has mostly focused on a small set of tasks and for linguistically similar languages."

"To encourage more research on multilingual learning, we introduce 'XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization', which covers 40 typologically diverse languages (spanning 12 language families) and includes nine tasks that collectively require reasoning about different levels of syntax or semantics. The languages in XTREME are selected to maximize language diversity, coverage in existing tasks, and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil (spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the Niger-Congo languages Swahili and Yoruba, spoken in Africa."

The first test asks whether a premise sentence entails, contradicts, or is neutral toward a hypothesis sentence. The next requires the NLP system to determine whether two sentences are paraphrases. The next is part-of-speech tagging. The next requires the NLP system to annotate entities in Wikipedia with LOC, PER, and ORG tags. (LOC means location, PER means person, ORG means organization. Paris is a location, Chilly Gonzales is a person, and Warner Bros. Records is an organization. Any other proper noun gets tagged MISC.) The next test requires the system to answer a question as a span in a paragraph. The next test requires the NLP system to answer questions that are unanswerable as spans in the passage text. The next test requires it to extract parallel sentences from text in English and another language.

You may have seen similar tests before but what's different here is these are all cross-lingual. "Models must first be pre-trained on multilingual text using objectives that encourage cross-lingual learning. Then, they are fine-tuned on task-specific English data, since English is the most likely language where labelled data is available. It then evaluates these models on zero-shot cross-lingual transfer performance, i.e., on other languages for which no task-specific data was seen."
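In outline, the evaluation loop is just this (a schematic sketch, not XTREME's official harness; `finetune` and `evaluate` are caller-supplied stand-ins for a real stack such as mBERT with a task head):

```python
def xtreme_zero_shot(pretrained_model, task, finetune, evaluate,
                     langs=("de", "ru", "zh", "sw", "ta", "yo")):
    """Fine-tune on English labels only, then test on languages never
    seen during fine-tuning. `finetune` and `evaluate` are supplied by
    whatever model stack you use; nothing here is XTREME's own code."""
    tuned = finetune(pretrained_model, task["train_en"])  # English-only step
    return {lang: evaluate(tuned, task[f"test_{lang}"])   # zero-shot transfer
            for lang in langs}
```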

The blog post goes on to describe how an existing set of cross-lingual models perform on the test.

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization

#solidstatelife #ai #nlp
 
"If robots steal so many jobs, why aren't they saving us now?" "Because the machines are far, far away from matching our intelligence and dexterity. You’re more likely to have a machine automate part of your job, not destroy your job entirely."

If robots steal so many jobs, why aren't they saving us now?

#solidstatelife #ai #technologicalunemployment
 
"If robots steal so many jobs, why aren't they saving us now?" "Because the machines are far, far away from matching our intelligence and dexterity. You’re more likely to have a machine automate part of your job, not destroy your job entirely."

If robots steal so many jobs, why aren't they saving us now?

#solidstatelife #ai #technologicalunemployment
 

Dennis Demmer on Twitter: "My #styleGAN has learned to create new plants based on the @BioDivLibrary #OpenAccess artworks. #ArtificialIntelligence #ai #MachineLearning #ml #OpenScience https://t.co/vA7ErAzvX6" / Twitter


This is totally cool!

https://twitter.com/DemmerDennis/status/1234189594180161536
 
TextFooler makes adversarial text examples. "Adversarial examples" are images where, if you add imperceptible noise, the neural network will change its classification from "panda" to "gibbon", even though to you, the human, it still looks exactly like a panda.

The idea behind TextFooler is to change text in such a way that a human wouldn't change its classification but an AI would. The new version should have the same meaning as the original, and it should have correct spelling and grammar and otherwise look natural.

Examples: "The characters, cast in impossibly contrived situations, are totally estranged from reality." changed to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality." The idea was to categorize the movie review as "positive" or "negative". The first is classified as negative but in the second, the AI system, in this case a system called WordLSTM, gets confused into thinking it's positive.

"It cuts to the knot of what it actually means to face your scares, and to ride the overwhelming metaphorical wave that life wherever it takes you." changed to "It cuts to the core of what it actually means to face your fears, and to ride the big metaphorical wave that life wherever it takes you." This flips the classification from positive to negative.

"Two small boys in blue soccer uniforms use a wooden set of steps to wash their hands. The boys are in band uniforms." changed to "Two small boys in blue soccer uniforms use a wooden set of steps to wash their hands. The boys are in band garments." The idea here is to classify as "contradiction", "entailment" (second idea follows from the first), or "neutral". The second pair of sentences flips the classification, done by a system called SNLI, from "contradiction" to "entailment".

"A child with wet hair is holding a butterfly decorated beach ball. The child is at the beach." A child with wet hair is holding a butterfly decorated beach ball. The youngster is at the shore." The second pair flips the classification from "neutral" to "entailment".

Hey Alexa: Sorry I fooled you

#solidstatelife #ai #nlp #textfooler
 
The authorities in the #USA must drop all charges against #Julian #Assange that relate to his work with Wikileaks. The USA has relentlessly pursued Assange for years; this is an attack on the right to free expression!

#Amnesty #international #AI
 
A machine learning algorithm has identified an antibiotic that kills E. coli and many other disease-causing bacteria, including some strains that are resistant to all known antibiotics. To test it, mice were infected on purpose with A. baumannii and C. difficile and the antibiotic cleared the mice of both infections.

"The computer model, which can screen more than a hundred million chemical compounds in a matter of days, is designed to pick out potential antibiotics that kill bacteria using different mechanisms than those of existing drugs."

"The researchers also identified several other promising antibiotic candidates, which they plan to test further. They believe the model could also be used to design new drugs, based on what it has learned about chemical structures that enable drugs to kill bacteria."

"The machine learning model can explore, in silico, large chemical spaces that can be prohibitively expensive for traditional experimental approaches."

"Over the past few decades, very few new antibiotics have been developed, and most of those newly approved antibiotics are slightly different variants of existing drugs." "We're facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anemic pipeline in the biotech and pharmaceutical industries for new antibiotics."

"The researchers designed their model to look for chemical features that make molecules effective at killing E. coli. To do so, they trained the model on about 2,500 molecules, including about 1,700 FDA-approved drugs and a set of 800 natural products with diverse structures and a wide range of bioactivities."

"Once the model was trained, the researchers tested it on the Broad Institute's Drug Repurposing Hub, a library of about 6,000 compounds. The model picked out one molecule that was predicted to have strong antibacterial activity and had a chemical structure different from any existing antibiotics. Using a different machine-learning model, the researchers also showed that this molecule would likely have low toxicity to human cells."

"This molecule, which the researchers decided to call halicin, after the fictional artificial intelligence system from '2001: A Space Odyssey,' has been previously investigated as possible diabetes drug. The researchers tested it against dozens of bacterial strains isolated from patients and grown in lab dishes, and found that it was able to kill many that are resistant to treatment, including Clostridium difficile, Acinetobacter baumannii, and Mycobacterium tuberculosis. The drug worked against every species that they tested, with the exception of Pseudomonas aeruginosa, a difficult-to-treat lung pathogen."

"Preliminary studies suggest that halicin kills bacteria by disrupting their ability to maintain an electrochemical gradient across their cell membranes. This gradient is necessary, among other functions, to produce ATP (molecules that cells use to store energy), so if the gradient breaks down, the cells die. This type of killing mechanism could be difficult for bacteria to develop resistance to, the researchers say."

"The researchers found that E. coli did not develop any resistance to halicin during a 30-day treatment period. In contrast, the bacteria started to develop resistance to the antibiotic ciprofloxacin within one to three days, and after 30 days, the bacteria were about 200 times more resistant to ciprofloxacin than they were at the beginning of the experiment."

The way the system works is, they developed a "directed message passing neural network", open sourced as "Chemprop", that learns to predict molecular properties directly from the graph structure of the molecule, where atoms are represented as nodes and bonds are represented as edges. For every molecule, the molecular graph corresponding to each compound's simplified molecular-input line-entry system (SMILES) string was reconstructed, and the set of atoms and bonds determined using an open-source package called RDKit.

From this a feature vector describing each atom and bond was computed, with the number of bonds for each atom, formal charge, chirality, number of bonded hydrogens, hybridization, aromaticity, atomic mass, bond type for each bond (single/double/triple/aromatic), conjugation, ring membership, and stereochemistry.

"Aromatic" refers to rings of bonds. "Conjugation" refers to those chemistry diagrams you see where they look like alternating single and double (or sometimes triple) bonds -- what's going on here is the molecule has connected p orbitals with electrons that move around. "Stereochemistry" refers to the fact that molecules with the same formula can form different "stereoisomers", which have different 3D arrangements that are mirror images of each other.

From here, and the reason the system is called "directed message passing", the model applies a series of message passing steps where it aggregates information from neighboring atoms and bonds to build an understanding of local chemistry. "On each step of message passing, each bond's featurization is updated by summing the featurization of neighboring bonds, concatenating the current bond's featurization with the sum, and then applying a single neural network layer with non-linear activation. After a fixed number of message-passing steps, the learned featurizations across the molecule are summed to produce a single featurization for the whole molecule. Finally, this featurization is fed through a feed-forward neural network that outputs a prediction of the property of interest. Since the property of interest in our application was the binary classification of whether a molecule inhibits the growth of E. coli, the model is trained to output a number between 0 and 1, which represents its prediction about whether the input molecule is growth inhibitory."
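Read literally, that message-passing step is only a few lines. Here's a toy PyTorch rendering (Chemprop's real implementation is open source and considerably more involved, with directed bond messages, atom features, and dropout; the dimensions here are arbitrary):

```python
import torch
import torch.nn as nn

class TinyDMPNN(nn.Module):
    """Toy rendering of the quoted message-passing step."""

    def __init__(self, bond_dim=16, hidden=64, steps=3):
        super().__init__()
        self.update = nn.Linear(2 * bond_dim, bond_dim)  # concat -> one layer
        self.readout = nn.Sequential(
            nn.Linear(bond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # P(inhibits E. coli growth)
        )
        self.steps = steps

    def forward(self, bond_feats, neighbors):
        # bond_feats: (num_bonds, bond_dim) featurization of each bond.
        # neighbors: (num_bonds, num_bonds) 0/1 matrix saying which bonds
        # pass messages into which.
        h = bond_feats
        for _ in range(self.steps):
            summed = neighbors @ h                    # sum neighboring bonds
            h = torch.relu(self.update(torch.cat([h, summed], dim=1)))
        molecule = h.sum(dim=0)                       # whole-molecule vector
        return self.readout(molecule)
```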

The system has additional optimizations, including 200 additional molecule-level features computed with RDKit. These address the problem that while the message-passing paradigm works well for local chemistry, it does not do well with global molecular features, and this is especially true the larger the molecule gets and the larger the number of message-passing hops involved.

They used a Bayesian hyperparameter optimization system, which optimized such things as the number of hidden and feed-forward layers in the neural network and the amount of dropout (a regularization technique) involved.

On top of that they used ensembling, which in this case involved independently training several copies of the same model and combining their output. They used an ensemble of 20 models.
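Ensembling needs no special machinery; something like this, where `models` would be the 20 independently trained copies and `predict` is a stand-in method name:

```python
import numpy as np

def ensemble_score(models, molecule_features):
    """Mean prediction over independently trained copies of the model;
    the mean score is what gets ranked when screening a library."""
    return float(np.mean([m.predict(molecule_features) for m in models]))
```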

The training set was 2,335 molecules, with 120 of them having "growth inhibitory" effects against E. coli.

Once trained, the system was set loose on the Drug Repurposing Hub library, which was 6,111 molecules, the WuXi anti-tuberculosis library, which was 9,997 molecules, and parts of the ZINC15 database thought to contain likely antibiotic molecules, which was 107,349,233 molecules.

A final set of 6,820 compounds was found, and further reduced using the scikit-learn random forest and support vector machine classifiers.
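One plausible reading of that filtering step, sketched with scikit-learn (the featurization, and the exact way the two classifiers' votes were combined, are assumptions here):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def consensus_filter(X_train, y_train, candidates, threshold=0.5):
    """Keep only candidates that both classical models also score highly."""
    rf = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
    svm = SVC(probability=True).fit(X_train, y_train)
    return [x for x in candidates
            if rf.predict_proba([x])[0, 1] >= threshold
            and svm.predict_proba([x])[0, 1] >= threshold]
```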

To predict the toxicity of the molecules, they retrained Chemprop on a different training set, called the ClinTox dataset. This dataset has 1,478 molecules with clinical trial toxicity and FDA approval status. Once this model was made it was used to test the toxicity of the candidate antibiotic molecules.

At that point they hit the lab and started growing E. coli in 96-well flat-bottomed assay plates. 63 molecules were tested. The chemical they named halicin did the best and went on to further testing against other bacteria and in mice.

Artificial intelligence yields new antibiotic

#solidstatelife #ai #biochemistry #antibiotics
 
Some good points in here:

Quick, cheap to make and loved by police – facial recognition apps are on the rise - Clearview AI may be controversial but it’s not the first business to identify you from your online pics

#technology #facerecognition #clearview #ai
 
More people would trust a robot than their manager

Yeah. Seems like many managers are just idiots.
#work #AI
 

You looking for an AI project? You love Lego? Look no further than this Reg reader's machine-learning Lego sorter • The Register


That's cool ;)
#ai #lego
 

How to recognize AI snake oil


#ai #snakeOil
The thing about the interview software reminds me of Google search engine optimization in its early days: cram in as many keywords as possible.
 

Opinion: AI For Good Is Often Bad | WIRED

Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
Exactly. Technology alone does not solve social issues.

#technology #AI
 
#AI #DeepLearning #vision #Python #Google #image #EXIF

The dumb reason your fancy Computer Vision app isn’t working: Exif Orientation



Exif metadata is not a native part of the Jpeg file format. It was an afterthought taken from the TIFF file format and tacked onto the Jpeg file format much later. This maintained backwards compatibility with old image viewers, but it meant that some programs never bothered to parse Exif data.

Most Python libraries for working with image data like numpy, scipy, TensorFlow, Keras, etc, think of themselves as scientific tools for serious people who work with generic arrays of data. They don’t concern themselves with consumer-level problems like automatic image rotation — even though basically every image in the world captured with a modern camera needs it.

This means that when you load an image with almost any Python library, you get the original, unrotated image data. And guess what happens when you try to feed a sideways or upside-down image into a face detection or object detection model? The detector fails because you gave it bad data.

You might think this problem is limited to Python scripts written by beginners and students, but that’s not the case! Even Google’s flagship Vision API demo doesn’t handle Exif orientation correctly.
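On the Python side the fix is one call, if you remember to make it: Pillow (6.0 and later) can bake the EXIF orientation into the pixel data before the image reaches a model. A minimal sketch:

```python
from PIL import Image, ImageOps
import numpy as np

def load_upright(path):
    """Open an image and apply its EXIF Orientation tag to the pixels."""
    img = ImageOps.exif_transpose(Image.open(path))
    return np.asarray(img.convert("RGB"))  # now safe to feed a detector

pixels = load_upright("portrait.jpg")  # hypothetical file name
```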
 
And the teeny-tiny bottle of AI whisky goes to... • The Register

#whiskey #AI
 
"Deep learning can't progress with IEEE-754 floating point. Here's why Google, Microsoft, and Intel are leaving it behind." "The de facto standard for floating point is IEEE-754. It's available in all processors sold by Intel, AMD, IBM, and NVIDIA. But as the deep learning renaissance blossomed researches quickly realized that IEEE-754 would be a major constraint limiting the progress they could make. IEEE floating point was designed 30 years ago when processing was expensive, and memory access was cheap. The current technology stack is reversed: memory access is expensive, and processing is cheap. And deep learning is memory bound."

"Google developed the first version of its Deep Learning accelerator in 2014, which delivered two orders of magnitude more performance than the NVIDIA processors that were used prior, simply by abandoning IEEE-754. Subsequent versions have incorporated a new floating-point format, called bfloat16, optimized for deep learning to further their lead."

"Now, even Intel is abandoning IEEE-754 floating point for deep learning. Its Cooper Lake Xeon processor, for example, offers Google's bfloat16 format for deep learning acceleration. Thus, it comes as no surprise that competitors in the AI race are all following suit and replacing IEEE-754 floating point with their own custom number systems. And researchers are demonstrating that other number systems, such as posits and Facebook's DeepFloat, can even improve on Google's bfloat16."

Deep Learning Can't Progress With IEEE-754 Floating Point. Here's Why Google, Microsoft, And Intel Are Leaving It Behind

#solidstatelife #ai
 

The case against teaching kids to be polite to Alexa

When parents tell kids to respect AI assistants, what kind of future are we preparing them for?
https://www.fastcompany.com/40588020/the-case-against-teaching-kids-to-be-polite-to-alexa

Link given as a comment on this reshared post, and reposted at @Birne Helene's request :)
#article #tech #AI #Alexa #education #society #future
 
Thanks to Microsoft AI, Mackmyra creates world's first AI-generated whiskey - MSPoweruser

Lol, tastes like bugs?
#Microsoft #Whiskey #AI
Thanks to Microsoft AI, Mackmyra creates world’s first AI-generated whiskey
 
Can an Algorithm Be Racist? | Mind Matters
No, the machine has no opinion. It processes vast tracts of data. And, as a result, the troubling hidden roots of some data are exposed
It is important to differentiate between the algorithm itself and the data it processes; it is dangerous to blame the algorithm without understanding why it produced the result it did.

#AI #machineLearning #science
Can an Algorithm Be Racist?
 