social.sokoll.com

Items tagged with: ai

Some good points in here:

Quick, cheap to make and loved by police – facial recognition apps are on the rise - Clearview AI may be controversial but it’s not the first business to identify you from your online pics

#technology #facerecognition #clearview #ai
 
More people would trust a robot than their manager

Yeah. Seems like many managers are just idiots.
#work #AI
 

You looking for an AI project? You love Lego? Look no further than this Reg reader's machine-learning Lego sorter • The Register


That's cool ;)
#ai #lego
 

How to recognize AI snake oil


#ai #snakeOil
The thing with the interview software reminds me of Google search-engine optimization in its early days: cram in as many keywords as possible.
 

Opinion: AI For Good Is Often Bad | WIRED

Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
Exactly. Technology alone does not solve social issues.

#technology #AI
 
#AI #DeepLearning #vision #Python #Google #image #EXIF

The dumb reason your fancy Computer Vision app isn’t working: Exif Orientation



Exif metadata is not a native part of the JPEG file format. It was an afterthought taken from the TIFF file format and tacked onto the JPEG file format much later. This maintained backwards compatibility with old image viewers, but it meant that some programs never bothered to parse Exif data.

Most Python libraries for working with image data, like numpy, scipy, TensorFlow, Keras, etc., think of themselves as scientific tools for serious people who work with generic arrays of data. They don’t concern themselves with consumer-level problems like automatic image rotation — even though basically every image in the world captured with a modern camera needs it.

This means that when you load an image with almost any Python library, you get the original, unrotated image data. And guess what happens when you try to feed a sideways or upside-down image into a face detection or object detection model? The detector fails because you gave it bad data.

You might think this problem is limited to Python scripts written by beginners and students, but that’s not the case! Even Google’s flagship Vision API demo doesn’t handle Exif orientation correctly.
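
(Editor's aside: a minimal sketch rather than the article's own code. Pillow can apply the EXIF Orientation tag for you before the pixels go anywhere near a model; the filename is a placeholder.)

```python
# Minimal sketch: honour the EXIF Orientation tag before feeding pixels
# to a detector. "photo.jpg" is a placeholder filename.
import numpy as np
from PIL import Image, ImageOps

img = Image.open("photo.jpg")
img = ImageOps.exif_transpose(img)  # rotate/flip according to EXIF Orientation
pixels = np.asarray(img)            # now the array is upright, safe for a model
```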
 
And the teeny-tiny bottle of AI whisky goes to... • The Register

#whiskey #AI
 
"Deep learning can't progress with IEEE-754 floating point. Here's why Google, Microsoft, and Intel are leaving it behind." "The de facto standard for floating point is IEEE-754. It's available in all processors sold by Intel, AMD, IBM, and NVIDIA. But as the deep learning renaissance blossomed researches quickly realized that IEEE-754 would be a major constraint limiting the progress they could make. IEEE floating point was designed 30 years ago when processing was expensive, and memory access was cheap. The current technology stack is reversed: memory access is expensive, and processing is cheap. And deep learning is memory bound."

"Google developed the first version of its Deep Learning accelerator in 2014, which delivered two orders of magnitude more performance than the NVIDIA processors that were used prior, simply by abandoning IEEE-754. Subsequent versions have incorporated a new floating-point format, called bfloat16, optimized for deep learning to further their lead."

"Now, even Intel is abandoning IEEE-754 floating point for deep learning. Its Cooper Lake Xeon processor, for example, offers Google's bfloat16 format for deep learning acceleration. Thus, it comes as no surprise that competitors in the AI race are all following suit and replacing IEEE-754 floating point with their own custom number systems. And researchers are demonstrating that other number systems, such as posits and Facebook's DeepFloat, can even improve on Google's bfloat16."

Deep Learning Can't Progress With IEEE-754 Floating Point. Here's Why Google, Microsoft, And Intel Are Leaving It Behind

#solidstatelife #ai
 

The case against teaching kids to be polite to Alexa

When parents tell kids to respect AI assistants, what kind of future are we preparing them for?
https://www.fastcompany.com/40588020/the-case-against-teaching-kids-to-be-polite-to-alexa

Link given in a comment on the reshared post; reposted at @Birne Helene's request :)
#article #tech #AI #Alexa #education #society #future
 

Thanks to Microsoft AI, Mackmyra creates world's first AI-generated whiskey - MSPoweruser

Lol, tastes like bugs?
#Microsoft #Whiskey #AI
 
Can an Algorithm Be Racist? | Mind Matters
No, the machine has no opinion. It processes vast tracts of data. And, as a result, the troubling hidden roots of some data are exposed.
It is important to differentiate between the algorithmic model and the data it processes. It is therefore dangerous to blame the algorithm without understanding why it produced a given result.

#AI #machineLearning #science
 

Nationalism Destroying Science

And an idiotic populace is exactly what nationalists need


If this is happening in AI research, how much is happening in other fields that are going unreported?
Last year, the respected Dutch scientist and Professor of Information Retrieval at the University of Amsterdam, Maarten de Rijke, had six publications accepted by SIGIR 2018, the world’s top conference in the field of information retrieval. De Rijke was a scheduled panelist and, along with his students, was organizing a workshop and presenting a tutorial for the conference at the University of Michigan in Ann Arbor, USA. However, because de Rijke had spoken on data science in Iran in November 2017, his US visa application was denied.
https://medium.com/syncedreview/abandon-us-petition-protests-ai-conference-visa-denials-b90dd5a808c4

#Science #AI #Politics #CliffBramlettArchivesPolitics
 

Are you a top model? Poor thing, I feel for you: AI has stolen your job, what will you do now?
#AI #jobs #body #clothes #model

Amazing AI Generates Entire Bodies of People Who Don’t Exist



The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.
 
We were having a discussion about what jobs would get killed off by ‘AI’ first.

I find a lot of the articles about AI taking jobs rather odd and uninformed. What they really mean is ‘machine learning’ - calling it AI is a bit rich, as it generally has yet to make two critical leaps.
  • It’s not yet very good at getting from ‘I can classify cats correctly’ to ‘I can provide you a meaningful model of how to classify cats that you can act upon’
  • It can’t discuss its model of cats with other systems, debating and reasoning about it and about improvements. When Alexa and Siri start arguing with each other about the best way for you to get to the airport on time - then worry.
There are, IMHO, four things that define whether a human job can usefully be done by machine learning.

The first is simple - what happens when it breaks. If there is a defined, safe, simple behaviour for ‘wtf, I don’t know’, then it’s much easier to automate. It’s why we have had self-driving production trains for years on things like the Docklands Light Railway but no serious self-driving cars. The ‘help, I’ve gone wrong’ response for a light railway vehicle is to brake at a precalculated rate to stop ASAP without hurting the people inside. The ‘help, I’ve gone wrong’ response for a car is seriously complicated, and one humans often get wrong. Car accidents are full of ‘if only I had xyz, then…’.

The second one is that it has to be reasonably predictable and have lots of labelled training data. If it’s not predictable then you lose (human or otherwise). The more complex it gets, the more data you need (and current systems need way more than humans do, and are fragile). That also plays into the first problem. If you have a complex system where ‘help’ is not an acceptable response, then you need a hell of a lot of data. Not good for self-driving cars that have to be able to deal with bizarre rare events like deer jumping over fences, people climbing out of manholes and tornadoes. None of which feature prominently in data sets. Does a Google self-driving car understand a tornado? I’d love to know.

The third is context. A system can have a lot of inputs that are not obvious and require additional information to process. A human finding that a line of cones blocks the path from their driveway to the road is likely to have the contextual data to conclude that drunk students have been at work, for example. In a system with very little context, life is a lot easier.

The most critical of all, though, is what is known in systems thinking as variety: the total number of different states you have to manage. A system that can properly manage something has (we believe) to have more states than the system it manages. It’s a thing called ‘Ashby’s law’, although ‘law’ might be the wrong word for it, given that in the general armwaving systems context there isn’t a mathematical proof for it.
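
(Editor's aside, a hedged gloss rather than the author's words: in Ashby's log-count formulation, the law of requisite variety bounds how far a regulator can narrow down outcomes.)

```latex
% Law of requisite variety, varieties measured as log-counts of states:
% the variety of outcomes O can be driven no lower than the variety of
% disturbances D minus the variety of the regulator R.
V(O) \;\ge\; V(D) - V(R)
```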

It’s why an IT department can handle lost passwords but falls apart when someone phones up to complain that the printer is telling them to subscribe to YouTube videos. It’s why the US tax system is so complicated, and it leads to a whole pile of other fun things (such as never being able to understand yourself entirely). It’s also the other half of why a drunk student can outwit a self-driving car.

Variety is a huge challenge for machine systems. It’s why burger-flipping robots are easier than serving robots. It’s why security may well be the last job that gets automated in a shop. Automatic shelf stocking - not too hard, though there are challenges. Automatic sales - usually easy. Dealing with 8 drunk people swaying around the shop, taking and drinking cans… difficult. Security folks may not be well paid, but they actually have to deal with an enormous amount of variety and context.

Whilst it’s not usually phrased in AI terms, we actually know a hell of a lot about variety, about systems that cope with it, and about structure, through systems modelling, management cybernetics and the like, going back to work in the early 1970s by folks like Stafford Beer (who is as interesting as his name) on viable system models - all the feedback loops and arrangements you need to make something that is actually functional and adaptable.

Back, however, to ‘what will machine learning kill off first’ (and not in the sense of running people over in automated cars): we need something that has
  • a ‘meh’ failure case
  • a large amount of training data, preferably well labelled, online and easily bulk-fed to the learning end of the system
  • as little need for complex contextual information as possible
  • not too much complex variety and state
It would also be nice if people would rate the output for free to help improve the model.

There are two obvious candidates. The first - cat pictures - doesn’t have enough commercial value, so while it would be funny to create a site that posts infinitely many made-up cat pictures every Saturday, it’s probably an academic or for-laughs project.

The second is photographic porn (not video - far too much context and variety in physics models). There is a vast amount of training data, lots of labels and rating information and relatively low context and variety. The failure case is ‘wtf, reload’ and a lot of the training is already being done - for filters.

That, therefore, was my guess for the debate: the obvious early deployment of machine learning is actually a non-internet-connected, unfirewallable app that produces still pornography on demand - without having to employ any models (except mathematical ones).

#ai #machinelearning #randomramblings
 
Be unpredictable and #AI loses its power immediately.
#AI
 

Notes on AI Bias — Benedict Evans


#AI #science #intelligence

“Machine Learning can do anything you could train a dog to do - but you’re never totally sure what you trained the dog to do.”
 

#AI #surveillance #escape

This colorful printed patch makes you pretty much invisible to AI - The Verge



The rise of AI-powered surveillance is extremely worrying. The ability of governments to track and identify citizens en masse could spell an end to public anonymity. But as researchers have shown time and time again, there are ways to trick such systems.

The latest example comes from a group of engineers from KU Leuven in Belgium. In a paper shared last week on the preprint server arXiv, these students show how simple printed patterns can fool an AI system that’s designed to recognize people in images.

If you print off one of the students’ specially designed patches and hang it around your neck, from an AI’s point of view, you may as well have slipped under an invisibility cloak.

As the researchers write: “We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras.” (They don’t mention it, but this is, famously, an important plot device in the sci-fi novel Zero History by William Gibson.)
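
(Editor's aside: for the curious, a hedged sketch of the general adversarial-patch recipe, not the KU Leuven authors' code. `detector` and `paste_patch` are toy stand-ins for a real person detector and a renderer that warps the patch onto each person; the attack is plain gradient descent pushing the detector's confidence down.)

```python
# Toy sketch of adversarial-patch training: optimise patch pixels so a
# detector's person-confidence drops. All components here are stand-ins.
import torch

def detector(images):            # stand-in for a real detector (e.g. YOLO)
    return images.mean(dim=(1, 2, 3))   # pretend "objectness" score per image

def paste_patch(images, patch):  # stand-in renderer: paste at a fixed spot
    out = images.clone()
    out[:, :, 32:96, 32:96] = patch
    return out

patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(100):
    images = torch.rand(8, 3, 128, 128)                 # placeholder photos
    loss = detector(paste_patch(images, patch)).mean()  # confidence to minimise
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)                             # keep the patch printable
```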
 

Die intelligente Programmiersprache ("The Intelligent Programming Language") | HNF Blog

If a computer is as clever as a human, it comes down to the software. Nobody understood that better than the American mathematician John McCarthy. In April 1959 he published the language LISP; it is superbly suited to artificial-intelligence programs. From 1979 onward, computer companies built so-called Lisp machines tailored to this language.
LISP drives you mad with all the nested parentheses...

#LISP #retrocomputing #AI
 
GauGAN turns your doodles into photorealistic landscapes

NVIDIA's deep learning AI model GauGAN, cleverly named after post-impressionist painter Paul Gauguin, turns simple sketches into realistic scenes in seconds by leveraging generative adversarial networks, or GANs, to convert segmentation maps into lifelike images.

GauGAN allows users to draw their own segmentation maps and manipulate the scene, labeling each segment with labels like sand, sky, sea or snow. The tool also allows users to add a style filter, changing a generated image to adapt the style of a particular painter, or change a daytime scene to sunset.

Source: https://blogs.nvidia.com/blog/2019/03/18/gaugan-photorealistic-landscapes-nvidia-research/
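
(Editor's aside, a hedged sketch of the input format only, not NVIDIA's code: a GauGAN-style generator consumes a per-pixel label map, typically one-hot encoded, rather than RGB pixels. The class ids and the final generator call below are placeholders.)

```python
# Sketch: turn a "doodle" into the one-hot semantic layout a GauGAN-style
# (SPADE) generator expects. Class ids here are made up for illustration.
import numpy as np

SKY, SEA, SAND = 0, 1, 2
labels = np.zeros((256, 256), dtype=np.int64)  # whole canvas starts as sky
labels[128:, :] = SEA                          # bottom half of the doodle
labels[220:, 64:192] = SAND                    # a strip in the foreground

layout = np.eye(3, dtype=np.float32)[labels]   # (256, 256, 3) one-hot map
# A pretrained generator would then map `layout` to an RGB image, roughly:
# image = generator(layout)                    # placeholder for the real model
```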

#AI #deeplearning #art
 
HNF - Bewusstsein im Computer? ("Consciousness in the Computer?")
For centuries, people have tried to understand the relationship between mind and brain - and for a few decades, even to recreate its workings in a computer. We still face a riddle, but a simple solution seems possible.

Without preconceptions, the renowned speaker surveys the specialist fields involved and examines how brain and mind are connected. AI might achieve the breakthrough of creating autonomous organisms with consciousness.
Sounds very interesting; I think I will definitely go and listen to this.

https://www.hnf.de/veranstaltungen/vortraege/date/2019/04/11/cal/event/tx_cal_phpicalendar/bewusstsein_im_computer.html

#KI #AI #Geist #Gehirn
 
Instagram is best when you write automated software that abuses its algorithms for your personal gain.

#programming #algorithm #python #ai #newyork #instagram
 
In a future not so far away, one Artificial Intelligence prevailed above all other AIs and their governments. Society has migrated to a permanently integrated reality connected to a single neural network which continuously optimizes their experiences by processing personal data.

Nathan, an outsider still refusing to comply to the new system, is making a living off the grid as a smuggler of modded hardware and cracked software. Geared up with his custom headset, he is among the few that can still switch AR off and see reality for what it is.

VALENBERG [Pixel Art, Animation] MASTER BOOT RECORD [Story, Music, FX] ELDER0010 [Code, Text Mode, Hacking]

The music is fucking cool! And it’s VALENBERG on the pixel art, you know, the guy behind the Perturbator music videos!!!


Take a look at the announcement.

#game #linux #pixelart #pixel #metal #music
#art #hack #privacy #hardware #software
#hacking #cyberpunk #cyber #punk #future
#AI #government #steam #code #reality
 
Facebook Reminds Us That Binary Deep Learning Classifiers Don't Work For Content Moderation

#deepLearning #AI
 
The Gold Rush of #Singularity | #Science & #Technology #AI
Japan's billionaire Masayoshi Son has sold the idea of singularity to Saudi Arabia. But is this investment a good idea?
https://www.aljazeera.com/indepth/opinion/mohammed-bin-salman-gold-rush-singularity-180522101213108.html
 
#NVIDIA acquires #Mellanox: yesterday's rumour became reality today.

Why did they do that? #Datacenter operators accelerate computing power with NVIDIA #GPUs for growing #AI and #DL/#ML workloads. Mellanox empowers networks to reach ever-new speeds with nearly eliminated latency. The latter matters most for cloud-computing setups and for the increasing number of service providers such as Zalando, Booking or Amazon.

The spicy thing about this acquisition is NVIDIA's engagement in future #technology like autonomous driving and industrial networks generally. Processing huge amounts of data in a maximally distributed environment is a big challenge - and in the case of self-driving cars it literally becomes life-critical. I understand that a company like NVIDIA wants as much control as possible over as many components as possible. Why else would Intel have been so keen on the same subject?

By picking the sweetest cherry from the tree, NVIDIA gets an enormous head start in the race.
 