social.sokoll.com


Items tagged with: Security

Three npm packages found opening shells on Linux, Windows systems | ZDNet
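Not related to these specific packages, but as general hygiene against malicious or typosquatted dependencies, a quick sketch using standard npm commands (the package name below is a placeholder):

npm ci                          # install exactly what the lockfile pins, nothing newer
npm audit                       # check the dependency tree against the public advisory database
npm audit --audit-level=high    # in CI, fail only on high/critical findings
npm ls some-suspicious-package  # see where a given package gets pulled in from before trusting it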


#nodejs #npm #security
 
3-D biometric authentication based on finger veins almost impossible to fool https://techxplore.com/news/2020-09-d-biometric-authentication-based-finger.html

I see a new challenge for the #CCC; they have already shown that hand veins can be faked/copied to circumvent scanners.
Or an attacker just cuts your finger off.

#security #biometry
 

FinSpy analyzed


German-made FinSpy spyware found in Egypt, and Mac and Linux versions revealed


• FinSpy is a commercial spyware suite produced by the Munich-based company FinFisher GmbH. Since 2011 researchers have documented numerous cases of targeting of Human Rights Defenders (HRDs) - including activists, journalists, and dissidents - with the use of FinSpy in many countries, including Bahrain, Ethiopia, the UAE, and more. Because of this, Amnesty International’s Security Lab tracks FinSpy usage and development as part of our continuous monitoring of digital threats to Human Rights Defenders.
• Amnesty International published a report in March 2019 describing phishing attacks targeting Egyptian human rights defenders and staff of media and civil society organizations, carried out by an attacker group known as “NilePhish”. While continuing research into this group’s activity, we discovered it has distributed samples of FinSpy for Microsoft Windows through a fake Adobe Flash Player download website. Amnesty International has not documented human rights violations by NilePhish directly linked to FinFisher products.
• Through additional technical investigations into this most recent variant, Amnesty’s Security Lab also discovered new samples of FinSpy for Windows and Android, exposed online by an unknown actor, as well as previously undisclosed versions for Linux and macOS computers.
• This report provides technical information on these recent FinSpy samples in order to aid the cybersecurity research community in further investigations, enable cybersecurity vendors to implement protection mechanisms against these newly discovered variants, and raise awareness among HRDs of evolving digital attack techniques.
#security #finfisher #finspy #spyware #crapware
 


This ‘Cloaking’ Algorithm Breaks Facial Recognition by Making Tiny Edits








A team of researchers at the University of Chicago has developed an algorithm that makes tiny, imperceptible edits to your images in order to mask you from facial recognition technology. Their invention is called Fawkes, and anybody can use it on their own images for free.

The algorithm was created by researchers in the SAND Lab at the University of Chicago, and the open-source software tool that they built is free to download and use on your computer at home.

The program works by making "tiny, pixel-level changes that are invisible to the human eye," but that nevertheless prevent facial recognition algorithms from categorizing you correctly. It's not so much that it makes you impossible to categorize; it's that the algorithm will categorize you as a different person entirely. The team calls the result "cloaked" photos, and they can be used like any other:
You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo.
The only difference is that a company like the infamous startup Clearview AI can't use them to build an accurate database that will make you trackable.

Here's a before-and-after that the team created to show the cloaking at work. On the left is the original image, on the right a "cloaked" version. The differences are noticeable if you look closely, but they look like the result of dodging and burning rather than actual alterations that might change the way you look:




Co-lead authors Emily Wenger and Shawn Shan explain and demonstrate Fawkes in a video accompanying the original article.

According to the team, Fawkes has proven 100% effective against state-of-the-art facial recognition models. Of course, this won't make facial recognition models obsolete overnight, but if technology like this caught on as "standard" when, say, uploading an image to social media, it would make maintaining accurate models much more cumbersome and expensive.

"Fawkes is designed to significantly raise the costs of building and maintaining accurate models for large-scale facial recognition," explains the team. "If we can reduce the accuracy of these models to make them untrustable, or force the model's owners to pay significant per-person costs to maintain accuracy, then we would have largely succeeded."

To learn more about this technology, or if you want to download Version 0.3 and try it on your own photos, head over to the Fawkes webpage. The team will be (virtually) presenting their technical paper at the upcoming USENIX Security Symposium running from August 12th to the 14th.
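If you'd rather script it than use the downloadable binaries, the team also publishes Fawkes as a Python package; a rough sketch (the package name and flags here are from memory and may differ between versions, so treat them as assumptions and check the Fawkes webpage for current usage):

pip3 install fawkes
fawkes -d ./my_photos --mode low   # writes cloaked copies next to the originals; exact output naming varies by version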

(via Fstoppers via Gizmodo)


#finds #software #technology #ai #algorithm #artificialintelligence #clearview #clearviewai #cloaking #face #facialrecognition #fawkes #photoediting #privacy #security
posted by pod_feeder_v2

PetaPixel: This 'Cloaking' Algorithm Breaks Facial Recognition by Making Tiny Edits (DL Cade)

 


Corona-Warn-App: Play Services transmit personal data to Google


Transmitted data includes, among other things:
- IP address
- the phone's device ID
- phone number
- email address
- smartphone usage data (i.e. which apps are used, and how)

and all of that, on average, every 20 minutes!

https://www.deutschlandfunk.de/corona-warn-app-play-services-uebermitteln-daten-an-google.684.de.html?dram:article_id=481253

#covid #COVID19 #corona #cwa #CoronaWarnApp #Security #Datenschutz #dsgvo
 

New ‘Meow’ attack has wiped over 1,800 unsecured databases https://www.bleepingcomputer.com/news/security/new-meow-attack-has-wiped-over-1-800-unsecured-databases/
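The wiped instances were databases exposed to the internet without authentication. A minimal hardening sketch for MongoDB - the same idea applies to Elasticsearch and Redis; the config path, port, and hostname below are defaults/placeholders:

# /etc/mongod.conf - listen on localhost only and require auth:
#   net:
#     bindIp: 127.0.0.1
#   security:
#     authorization: enabled

sudo ufw deny 27017/tcp            # second line of defence: block the default port at the host firewall
nc -vz -w 3 db.example.org 27017   # self-check from an external machine - this should be refused or time out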

#security #dba
 

INFOSEC: FUCK YOUR "BLACK/WHITE NEUTRALITY"!

By Catalin Cimpanu for Zero Day | July 4, 2020

The information security (infosec) community has reacted angrily today to calls to abandon the use of the 'black hat' and 'white hat' terms, arguing that the two, and especially 'black hat,' have nothing to do with racial stereotyping.



Discussions about the topic started late last night after David Kleidermacher, VP of Engineering at Google, and in charge of Android Security and the Google Play Store, withdrew from a scheduled talk he was set to give in August at the Black Hat USA 2020 security conference.

In his withdrawal announcement, Kleidermacher asked the infosec industry to consider replacing terms like black hat, white hat, and man-in-the-middle with neutral alternatives.

These changes remove harmful associations, promote inclusion, and help us break down walls of unconscious bias. Not everyone agrees which terms to change, but I feel strongly our language needs to (this one in particular).

— David Kleidermacher (@DaveKSecure) July 3, 2020

While Kleidermacher only asked the industry to consider changing these terms, several members mistook his statement for a direct request to the Black Hat conference to change its name.

With Black Hat being the biggest event in cyber-security, online discussions on the topic quickly became widespread among cyber-security experts, dominating the July 4th weekend.

While a part of the infosec community agreed with Kleidermacher, the vast majority did not, and called it virtue signaling taken to the extreme.

Most security researchers pointed to the fact that the terms had nothing to do with racism or skin color, and had their origins in classic western movies, where the villain usually wore a black hat, while the good guy wore a white hat.

Others pointed to the dualism between black and white as representing evil and good, concepts that have been around since the dawn of civilizations, long before racial divides even existed between humans.

Right now, the infosec community doesn't seem to be willing to abandon the two terms, which they don't see as a problem when used in infosec-related writings.
MORE COMMENTS: https://www.zdnet.com/article/infosec-community-disagrees-with-changing-black-hat-term-due-to-racial-stereotyping/

#programming #computer #science #software #development #infosec #blackhat #resistance #google #hackers #internet #censorship #freedom #sexism #social #web #humanrights #sanctimony #activism #activist #correctness #metoo #blacklivesmatter #racism #racist #USA #research #cybersecurity #security #privacy
 
#Apple #iOS #security
 

Haveibeenpwned.com pwned our helpdesk! GLPI 9.4.5 SQL Injection – fyr.io


Literally Bobby Tables
#sql #injection #bobbyTables #security
Haveibeenpwned.com pwned our helpdesk! GLPI 9.4.5 SQL Injection
 
Counting the hours, days, weeks until a patch #Security @Apple
 

reCaptcha


W3C has published an extensive list of reCAPTCHA alternatives:

https://www.w3.org/TR/turingtest/

W3C is requesting feedback on the document; if you'd like to make suggestions, please open an issue: https://github.com/w3c/apa/issues

"Google wants you to think reCaptcha is the ONLY tool. That way they can get more user data."

People should start complaining to every site that uses reCAPTCHA, because it is just one of Google's data-hoarding tools (it fingerprints users and collects device information, IP addresses, and plenty of other private data).

There are alternatives; don't use this Google service!

#recaptcha #google #w3c #w3 #linux #bot #bots #gnu #security #spam #bsd
 

 
Autopsy - A Digital Forensic Lab #security
 

If you're stuck at home and use Zoom as a video conferencing solution that works for you, that's fine. Keep using it. Here are some options you might want to check to enhance the overall security for you and your guests.

First, log in at https://zoom.us/signin and head to your settings at https://zoom.us/profile/setting.

* In the "Meeting" tab:
1. Set "Audio Type" to "Computer Audio". This will block people from using their phone to join a meeting - but that's required if you want to use End-to-End encryption all the time. Phones can't do encryption.
1. Make sure "Use Personal Meeting ID (PMI) when scheduling a meeting" is disabled. The PMI is a meeting ID that never changes, so don't use it. It should be disabled by default, but make sure.
1. Enable "Require a password for Personal Meeting ID (PMI)", so people can't join via your PMI even if you accidentally share it.
1. Make sure "Join before host" is disabled. If enabled, people can join your meetings before a host is there - meaning there won't be moderation.
1. Enable "Play sound when participants join or leave". That's useful, as everyone will be aware when someone joins unexpectedly.
1. Enable "Require Encryption for 3rd Party Endpoints (H323/SIP)".
* In the "Recording" tab:
1. Disable "Cloud recording". You can still record meetings to your local disk, but there is no need to store potentially private conversations on Zoom's servers.

If you have a more "presentation"-like format scheduled, where only you or a small number of presenters will be speaking to a high number of consuming participants, there are a couple of additional tips in addition to the settings above:

* Before the meeting: Require people to sign up, and collect their email addresses. Do not share the join link publicly, and only send the credentials via email to the people who signed up.
* In the "Meeting" tab:
1. Enable "Mute participants upon entry" - this will force-mute everyone joining. You will have the option to decide whether people can speak or not.
1. Enable "Co-host" and promote someone you trust as Co-host to assist with muting/unmuting people as needed.
1. Set "Screen sharing" to "Host-Only" to avoid random people sharing their screens, which can be used for abuse. Promote people who need to share as Co-hosts, if you trust them.
1. Enable "Nonverbal feedback". This is useful if you have force-muted everyone. People can raise their hands if they want to say something, allowing you to unmute people for a short period.
1. Enable "Waiting room" for all participants if the nature of the call is sensitive/private. This means that people will not be able to join your meeting directly, but will be placed in a virtual waiting room, waiting for you to approve them to join the meeting. If you enable this, make sure to keep an eye on the participant list to avoid missing someone.
1. Make sure "Allow removed participants to rejoin" is disabled. This means that people that got kicked out of the meeting will not be able to rejoin, even if they know the credentials.
#zoom #privacy #security
 

Jackie Singh✨ auf Twitter: "Now that’s what I call a combination lock! https://t.co/mYGYHsRx1T" / Twitter


Maximum #security #lock!

https://twitter.com/find_evil/status/1241717265038479360

Twitter: IanWatson on Twitter (IanWatson)

 

 
Hackers can clone millions of Toyota, Hyundai, and Kia keys | Ars Technica

Anyone got a car with keyless go?
#security #toyota #hyundai #kia
 
FYI: When Virgin Media said it leaked 'limited contact info', it meant p0rno filter requests, IP addresses, IMEIs as well as names, addresses and more • The Register

That's a lot of content
#security #data
 
"A Virgin Media database containing the personal details of 900,000 people was left unsecured and accessible online for 10 months, the company has admitted."

The breach was not due to a hack or a criminal attack, but because the database had been "incorrectly configured" by a member of staff not following the correct procedures, Virgin Media said.

Which shows yet again that people still don't get it: digital security is not about giving people enormous power and then writing a policy and process document that basically says thou shalt not f**k up.

A database does not spend 10 months unsecured because it was "incorrectly configured". It gets to spend 10 months unsecured because a) your 'process' allowed the mistake to be made in the first place, without enough automated and human checking, and b) more importantly, because nobody had effective monitoring and regular scanning of their assets in place to catch the problem later and sound the alarm.

An industrial site does not get burgled "because someone left the window open for 10 months"; it gets burgled because nobody was doing basic, commonsense daily security checks and closing it. Ditto in digital space.
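For what it's worth, that "basic commonsense daily check" doesn't have to be fancy. A sketch of a nightly job (the address range, file paths, and mail address are placeholders; it assumes nmap and a mail command are installed):

#!/bin/sh
# Scan your own public ranges and compare against a known-good baseline.
nmap -Pn --open -oG - 203.0.113.0/28 | grep '/open/' > /tmp/ports.today
if ! diff -q /var/lib/scans/ports.baseline /tmp/ports.today >/dev/null 2>&1; then
    mail -s "Port exposure changed" ops@example.com < /tmp/ports.today
fi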

#security #rant #virgin #verminmedia
 
"A Virgin Media database containing the personal details of 900,000 people was left unsecured and accessible online for 10 months, the company has admitted."

The breach was not due to a hack or a criminal attack, but because the database had been "incorrectly configured" by a member of staff not following the correct procedures, Virgin Media said.
Which shows yet again people still don't get that digital security is not about giving people enormous power and writing a policy and process document that basically says thou shalt not f**k up.

A database does not spend 10 months unsecured because it was "incorrectly configured". It gets to spend 10 months unsecured becasuse a) your 'process' allowed the mistake to be made in the first place and didn't have enough automated and human checking and b) more importantly because someone didn't have effective monitoring and regular scanning of their assets in place to catch the problem later and sound the alarm.

An industrial site does not get burgled "because someone left the window opened for 10 months", it gets burgled because someone didn't have their security doing basic commonsense daily checks and closing it". Ditto in digital space.

#security #rant #virgin #verminmedia
 
Concerned with #security? #NetBSD includes #Postfix as its mail agent.
 
#eff #letsencrypt #security #tls #https
 
OMG. Just… no.

#InternetOfShit #IoT #TroyHunt #Security #InfoSec #RemoteControlDetonator
 

FBI recommends passphrases over password complexity | ZDNet


Correct horse battery staple
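If you'd rather generate one than invent it, a quick sketch using a system word list (the dictionary path is an assumption - it comes from the "words"/"wamerican" package on many distros; a proper diceware list is even better):

shuf --random-source=/dev/urandom -n 6 /usr/share/dict/words | tr '\n' ' '; echo   # six random words as a passphrase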

#password #security
 

Private WhatsApp groups visible in Google searches

Your #WhatsApp groups may not be as secure as you think they are


Google is indexing invite links to private WhatsApp group chats. This means that with a simple search anyone can discover and join these groups, including ones the administrator may want to keep private.

Does #Google care about your privacy and security? No.

Does #Facebook honestly care about your privacy and security? No.

https://www.dw.com/en/private-whatsapp-groups-visible-in-google-searches/a-52468603

#Facebook #chat #apps #privacy #security #surveillance #messaging #im
 

SHA-1 is a Shambles

First Chosen-Prefix Collision on SHA-1 and Application to the PGP Web of Trust


https://eprint.iacr.org/2020/014.pdf

Below is the abstract from the article. The most concerning thing here is the ability to forge signatures of keys. As you know if you read my posts, I have always argued that we should never sign other people's keys. Even without the problem of possible forged signatures using the technique in the article, key-signing harms privacy.

The only key signature created by EasyGPG is the signature on a newly created key pair.

printf "${newkeyattr}" | env TZ=UTC gpg --homedir "${keydir}" --batch --use-agent --cert-digest-algo "SHA512" --s2k-cipher-algo "AES256" --s2k-digest-algo "SHA512" --s2k-mode 3 --s2k-count 32000000 --status-file "${temp}" --gen-key 2> /dev/null

Notice that SHA512 is used. As for signatures on messages and encrypted files, see below (after the abstract). EasyGPG always uses SHA512.
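If you want a stock GnuPG setup to behave the same way, a minimal sketch - these are standard gpg.conf options, the same ones the EasyGPG commands in this post pass on the command line, and the algorithm numbers are the RFC 4880 identifiers (the signed file name is a placeholder):

cat >> ~/.gnupg/gpg.conf <<'EOF'
cert-digest-algo SHA512
personal-digest-preferences SHA512 SHA384 SHA256
EOF

# Check which digest an existing signature actually used
# (digest algo 2 = SHA-1, 8 = SHA-256, 10 = SHA-512):
gpg --list-packets some_signed_file.asc | grep 'digest algo'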

Abstract. The SHA-1 hash function was designed in 1995 and has been widely used during two decades. A theoretical collision attack was first proposed in 2004 [WYY05], but due to its high complexity it was only implemented in practice in 2017, using a large GPU cluster [SBK+17]. More recently, an almost practical chosen-prefix collision attack against SHA-1 has been proposed [LP19]. This more powerful attack allows building colliding messages with two arbitrary prefixes, which is much more threatening for real protocols.

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

Therefore, the same attacks that have been practical on MD5 since 2009 are now practical on SHA-1. In particular, chosen-prefix collisions can break signature schemes and handshake security in secure channel protocols (TLS, SSH). We strongly advise removing SHA-1 from those types of applications as soon as possible.

We exemplify our cryptanalysis by creating a pair of PGP/GnuPG keys with different identities, but colliding SHA-1 certificates. A SHA-1 certification of the first key can therefore be transferred to the second key, leading to a forgery. This proves that SHA-1 signatures now offer virtually no security in practice. The legacy branch of GnuPG still uses SHA-1 by default for identity certifications, but after notifying the authors, the modern branch now rejects SHA-1 signatures (the issue is tracked as CVE-2019-14855).
$ grep "gpg" easygpg.sh | grep " -s " 
  encryptedText=`printf "%s\n" "${theText}" | gpg --homedir "${keydir}" -a --trust-model always --textmode -s -u "${senderID}" -e ${recipients} --no-emit-version --no-encrypt-to --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty -` 
  printf "%s\n" "${theText}" | gpg --homedir "${keydir}" -a --trust-model always --textmode -s -u "${senderID}" --no-emit-version --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty - | xclip -i -selection clipboard 
      (tar --numeric-owner -c "$(basename "${filename}")" | gpg --homedir "${keydir}" --trust-model always -a -s -u "${senderID}" -e ${recipients} --no-emit-version --no-encrypt-to --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty --yes -o "${savepath}" -) | zenity --progress --text="Encrypting..." --pulsate --auto-close --no-cancel 
      (tar --numeric-owner -c "$(basename "${filename}")" | gpg --homedir "${keydir}" --trust-model always -s -u "${senderID}" -e ${recipients} --no-emit-version --no-encrypt-to --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty --yes -o "${savepath}" -) | zenity --progress --text="Encrypting..." --pulsate --auto-close --no-cancel 
    tar --numeric-owner -c "$(basename "${filename}")" | gpg --homedir "${keydir}" -a --trust-model always -s -u "${senderID}" --no-emit-version --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty --yes -o "${savepath}" - 
    printf "%s\n" "${theText}" | gpg --homedir "${keydir}" -a --trust-model always --textmode -s -u "${senderID}" -e -R "${senderID}" --no-emit-version --no-encrypt-to --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty - > "${savepath}" 
    printf "%s\n" "${theText}" | gpg --homedir "${keydir}" -a --trust-model always --textmode -s -u "${senderID}" -e -R "${senderID}" --no-emit-version --no-encrypt-to --personal-digest-preferences "SHA512 SHA384 SHA256" --personal-compress-preferences "ZLIB BZIP2 ZIP" --personal-cipher-preferences "AES256 TWOFISH CAMELLIA256 AES192 AES" --use-agent --no-tty - > "${savepath}"

#easygpg #gpg #encryption #privacy #surveillance #security #cryptography
 


#UK #police deny responsibility for poster urging parents to report kids for using #Kali #Linux


source: https://www.zdnet.com/article/uk-police-distance-themselves-from-poster-warning-parents-to-report-kids-for-using-kali-linux/
Virtual machines, the #Tor Browser, Kali Linux, #WiFi Pineapple, #Discord, and #Metasploit are all deemed terrible finds, and the poster urges parents to call the cops "so we can give advice and engage them into positive diversions."
Just a few years ago I would have been burnt at the stake.

#Danger #Warning #fail #Technology #Security #Crime #Cyber #children #news
 

Shared via Fedilab @realramnit@chaos.social 🔗 https://chaos.social/users/realramnit/statuses/103645804448003773

The Loki Foundation has taken the #Signal source code and removed one of the messenger's biggest weaknesses - its dependence on phone numbers!

They also route all traffic through Tor. That makes "Session" - as the new messenger has been christened - an interesting Delta Chat competitor.

https://getsession.org/

#Session #Signal #Messenger #Chat #Privacy #Security

chaos.social: Matthias Kneiss (@realramnit@chaos.social) (Matthias Kneiss)

 
Image/Photo
This image is typical of the current state of #cyber #security.

#code #source #lol #fail #fun #humor
 