Oh dear. Signal now includes payments.
This is fundamentally a bad idea, because it makes Signal a target for scammers. Until now it was only a target for intelligence agencies. Bad enough.
But Signal is not only adding payments. No, no. Signal is building in blockchain.
That's the final nail in the coffin. I had already stopped recommending Signal when they shipped that disastrous PIN update. In substance, it meant they were uploading your phonebook to the cloud. It wasn't communicated that clearly, but that's what it amounted to: they run a pile of bullshit voodoo over your PIN, use the result to encrypt your phonebook, and then upload that to the cloud.
Because smartphones are terrible for entering text or passwords, the PIN is useless either way: either you can't realistically type it in, or it has so little entropy that it can simply be brute-forced on the server.
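To put numbers on the entropy point: a PIN short enough to type comfortably on a phone leaves a keyspace a server operator could exhaust quickly. A back-of-the-envelope sketch in Python (the guess rate is an arbitrary illustration, not a measured figure):

```python
import math

def pin_entropy_bits(length: int, alphabet_size: int = 10) -> float:
    """Entropy in bits of a uniformly random PIN of `length` digits."""
    return length * math.log2(alphabet_size)

def keyspace(length: int, alphabet_size: int = 10) -> int:
    """Number of guesses needed to exhaust every possible PIN."""
    return alphabet_size ** length

# A typical 4-digit PIN: ~13.3 bits of entropy, only 10,000 possibilities.
bits = pin_entropy_bits(4)    # ~13.29 bits
guesses = keyspace(4)         # 10,000

# Even at a modest (hypothetical) 100 guesses per second, the whole
# keyspace falls in under two minutes unless something rate-limits it.
seconds_to_exhaust = guesses / 100
```

With so few bits, any protection has to come from rate-limiting on the server side, which is exactly what you cannot independently verify.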
And Signal didn't make this crappy PIN opt-in but forced it on everyone at once. That was the moment I stopped using Signal.
But now? They're turning it into a blockchain bullshit sandwich on top of that?
And not even a real blockchain but some voodoo handwaving hybrid snake oil.
I took a closer look at it last year. The whole thing is based on Intel SGX, Intel's enclave voodoo tech. According to Signal, it works like this: their crypto-blockchain voodoo code is open source and you can build it yourself. Then you can compute a cryptographic checksum of it. Then you ask a proprietary, unverifiable, closed-source cloud service from Intel, and it tells you the checksum of what's running in the SGX enclave. Then you can see that the software is unmodified.
How does Intel's proprietary, unverifiable, closed-source voodoo service in the cloud know the checksum of the software in the SGX enclave? Well, it relies on Intel's proprietary, unverifiable, closed-source voodoo-ware in your Intel CPU (in the management engine that Intel imposes on all customers, with no way to disable it) together with SGX.
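The attestation chain described above boils down to comparing two hashes, where everything interesting hides behind whoever produces the second one. A deliberately simplified Python sketch of that trust structure (function names and the quote format are illustrative stand-ins, not the real SGX attestation API):

```python
import hashlib

# Schematic only: real SGX attestation (quotes, signing keys, IAS/DCAP
# services) is far more involved. This shows where the trust ends up.

def measure(enclave_binary: bytes) -> str:
    """Stand-in for the enclave measurement: a hash of the enclave code."""
    return hashlib.sha256(enclave_binary).hexdigest()

def attestation_service_reports(quote: dict) -> str:
    """Stand-in for Intel's attestation service. It vouches for the
    measurement in the quote; you cannot audit this step yourself,
    you can only trust that Intel's service and CPU are honest."""
    return quote["measurement"]

# 1. You reproducibly build the open-source enclave code yourself...
my_build = b"open-source payments enclave, reproducibly built"
expected = measure(my_build)

# 2. The server's CPU produces a quote; Intel's service relays it.
quote = {"measurement": measure(my_build)}  # the honest-server case
reported = attestation_service_reports(quote)

# 3. Your whole check is one string comparison. Every guarantee behind
#    `reported` rests on closed-source Intel components.
matches = (reported == expected)
```

The point of the sketch: the verifiable part is trivial; the load-bearing part is precisely the part you cannot look inside.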
You can already see: it's been a long time since a concept left me this unconvinced. The number of layers of "don't look too closely here"-grade crypto is staggering. The humanists among you will know the original meaning of the word "crypto", and can at this point enjoy the irony of crypto-voodoo being used as a flimflam ingredient in a bullshit cocktail of wild diversionary hand movements otherwise known only from shell-game hustlers and stage magicians.
https://blog.fefe.de/?ts=9e9221ad
#signal #payments #komment #thinkabout
📡 @nogoolag 📡 @blackbox_archiv
How a 30-year-old technology – WiFi – will turn into our next big privacy problem
WiFi can be traced back to 1991, but it was in the late 1990s that the technology began to be taken up by the general public. Its success back then was by no means guaranteed. Although largely forgotten now, there was a rival approach called Home RF, which only gave up the fight in 2003 when it became clear that WiFi would become the standard for wireless local area networks (WLANs).
Since then, WiFi has become an increasingly important part of modern life, with hotels, restaurants and many other venues providing it for free as an expected part of their services. Over the years, the technology has improved, mostly in terms of speed and range. But there is a new iteration of the WiFi standard being developed that will have massive implications for privacy and surveillance. It goes by the unmemorable name of 802.11bf. Here’s how the IEEE, the organization that is drawing up the new standard, describes it:
"IEEE 802.11bf will enable stations to inform other stations of their WLAN sensing capabilities and request and set up transmissions that allow for WLAN sensing measurements to be performed, among other features. WLAN sensing makes use of received WLAN signals to detect features of an intended target in a given environment. The technology can measure range, velocity, and angular information; detect motion, presence, or proximity; detect objects, people, and animals; and be used in rooms, houses, cars, and enterprise environments."
The idea is simple. Radio waves are emitted from WiFi units that support the new 802.11bf, but not only to transfer data, as today. Instead, details of how those waves bounce off objects in their vicinity are gathered and analyzed to detect key features. Different uses are made possible by the availability of license-exempt frequency bands between 1 GHz and 7.125 GHz, and also above 45 GHz. The former will allow relatively large-scale motions to be detected – people or animals moving around, for example – and have the useful ability to pass through obstacles such as walls. The high frequencies, on the other hand, will have a shorter range, but be more precise: as well as gestures, it will be possible to track finer movements on a keyboard, for example. The two might be used in tandem, with the lower frequencies deployed to guide the tighter radio beam used with the higher frequencies.
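The underlying principle can be illustrated with a toy example: movement perturbs the multipath channel, so the spread of received signal amplitudes over time grows. This Python sketch fakes the channel samples and is in no way the real 802.11bf processing pipeline, which works on full channel state information with far more sophisticated analysis:

```python
import random
import statistics

def simulated_amplitudes(motion: bool, samples: int = 200) -> list:
    """Fake per-packet received amplitudes; motion adds fading variation."""
    rng = random.Random(42)  # seeded so the toy example is repeatable
    jitter = 0.5 if motion else 0.02  # a moving body churns the multipath
    return [1.0 + rng.gauss(0, jitter) for _ in range(samples)]

def motion_detected(amplitudes: list, threshold: float = 0.1) -> bool:
    """Crude detector: flag motion when amplitude spread exceeds a threshold."""
    return statistics.stdev(amplitudes) > threshold

assert motion_detected(simulated_amplitudes(motion=True))
assert not motion_detected(simulated_amplitudes(motion=False))
```

Even this crude variance test separates "occupied room" from "empty room", which is what makes the standardized, always-on version of the idea a privacy problem.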
https://www.privateinternetaccess.com/blog/how-a-30-year-old-technology-wifi-will-turn-into-our-next-big-privacy-problem/
#wifi #privacy #thinkabout
📡 @nogoolag 📡 @blackbox_archiv
Life Inside North Korea’s Hacker Army
North Korea has sent hundreds of programmers abroad to make money by any means necessary. With the latest U.S. hacking charges, we take a look at the lives of this secret army, their fears and dreams.
https://www.youtube.com/watch?v=7A6I-NLzIOI
#nk #northkorea #hacker #programmers #video
📽@cRyPtHoN_INFOSEC_FR
📽@cRyPtHoN_INFOSEC_EN
📽@cRyPtHoN_INFOSEC_DE
📽@BlackBox_Archiv
📽@NoGoolag
Scraped data of 500 million LinkedIn users being sold online, 2 million records leaked as proof
Updated on 07/04: We updated our personal data leak checker database with more than 780,000 email addresses associated with this leak. Use it to find out if your LinkedIn profile has been scraped by the threat actors.
Days after a massive Facebook data leak made the headlines, it seems like we’re in for another one, this time involving LinkedIn.
An archive containing data purportedly scraped from 500 million LinkedIn profiles has been put up for sale on a popular hacker forum, with another 2 million records leaked as a proof-of-concept sample by the post author.
https://cybernews.com/news/stolen-data-of-500-million-linkedin-users-being-sold-online-2-million-leaked-as-proof-2/
#linkedIn #leak #leaked #data
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
Intel announced "Bleep" to remove offensive speech from gaming voice chat
https://www.youtube.com/watch?v=W9f0h4nB6VM
#intel #bleep #gaming
📡 @nogoolag 📡 @blackbox_archiv
Apple's stricter rules on digital tracking to take effect soon
Beginning with iOS 14.5, due out in the next couple of weeks, iPhone apps will have to ask users for permission to track their digital activity.
Why it matters: Only if a user gives permission will apps have access to the unique advertising identifier assigned to each device. Apple will also take action against apps that try to fingerprint individual devices via other methods.
👉🏼 Apple first announced the plan last June, but delayed making it mandatory until now to give the industry more time to prepare.
👉🏼 Apple is continuing to prepare customers, app makers and the ad industry for the change. Today it is making changes to a cartoon it uses to illustrate a hypothetical example of how apps can track people's activity, including sharing information with data brokers.
👉🏼 Facebook and others remain opposed to what Apple is doing, but are preparing their apps to comply with the rules.
💡 Between the lines: One place you won't see the ad-tracking permission prompt is within Apple's own apps. The rules do apply to Apple, but the company said none of its apps, including those with ads, use such tracking.
https://www.axios.com/apple-digital-tracking-rules-17cd4625-bea8-4f69-8ef2-eafdb6ed76ab.html
#apple #digital #tracking
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
New Wormable Android Malware Spreads by Creating Auto-Replies to Messages in WhatsApp
Check Point Research (CPR) recently discovered malware on Google Play hidden in a fake application that is capable of spreading itself via users’ WhatsApp messages. If the user downloaded the fake application and unwittingly granted the malware the appropriate permissions, the malware is capable of automatically replying to victims’ incoming WhatsApp messages with a payload received from a command-and-control (C&C) server. This unique method could have enabled threat actors to distribute phishing attacks, spread false information or steal credentials and data from users’ WhatsApp accounts, and more.
As the mobile threat landscape evolves, threat actors are constantly developing new techniques to distribute malware successfully. In this specific campaign, Check Point’s researchers discovered a new and innovative malicious threat on the Google Play app store that spreads itself via mobile users’ WhatsApp conversations, and can also send further malicious content via automated replies to incoming WhatsApp messages.
Researchers found the malware hidden within an app on Google Play called "FlixOnline." The app is a fake service that claims to allow users to view Netflix content from all around the world on their mobiles. However, instead of letting the user view Netflix content, the application is actually designed to monitor the user’s WhatsApp notifications and send automatic replies to the user’s incoming messages using content it receives from a remote command-and-control (C&C) server.
‼️ The malware sends the following response to its victims, luring them with the offer of a free Netflix service:
“2 Months of Netflix Premium Free at no cost For REASON OF QUARANTINE (CORONA VIRUS)* Get 2 Months of Netflix Premium Free anywhere in the world for 60 days. Get it now HERE https://bit[.]ly/3bDmzUw.”
💡 Utilizing this technique, a threat actor could perform a wide range of malicious activities:
❌ Spread further malware via malicious links
❌ Steal data from users’ WhatsApp accounts
❌ Spread fake or malicious messages to users’ WhatsApp contacts and groups (for example, work-related groups)
❌ Extort users by threatening to send sensitive WhatsApp data or conversations to all of their contacts
https://research.checkpoint.com/2021/new-wormable-android-malware-spreads-by-creating-auto-replies-to-messages-in-whatsapp/
#android #malware #whatsapp #DeleteWhatsapp
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
Attackers Blowing Up Discord, Slack with Malware
One Discord network search turned up 20,000 virus results, researchers found.
Workflow and collaboration tools like Slack and Discord have been infiltrated by threat actors, who are abusing their legitimate functions to evade security and deliver info-stealers, remote-access trojans (RATs) and other malware.
The pandemic-induced shift to remote work drove business processes onto these collaboration platforms in 2020, and predictably, 2021 has ushered in a new level of cybercriminal expertise in attacking them.
Cisco’s Talos cybersecurity team said in a report on collaboration app abuse this week that during the past year threat actors have increasingly used apps like Discord and Slack to trick users into opening malicious attachments and deploy various RATs and stealers, including Agent Tesla, AsyncRAT, Formbook and others.
“One of the key challenges associated with malware delivery is making sure that the files, domains or systems don’t get taken down or blocked,” Talos researchers explained in their report. “By leveraging these chat applications that are likely allowed, they are removing several of those hurdles and greatly increase the likelihood that the attachment reaches the end user.”
https://threatpost.com/attackers-discord-slack-malware/165295/
#discord #malware
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
Peter Thiel warns: Bitcoin could be weaponized by China
Billionaire and PayPal co-founder Peter Thiel recommends that the US government strictly regulate Bitcoin. Otherwise, he says, it cannot be ruled out that China will use Bitcoin to damage the U.S. financial system.
“Even though I'm a pro-crypto, pro-Bitcoin maximalist person, I do wonder whether if at this point Bitcoin should also be thought of in part as a Chinese financial weapon against the U.S.” says @Paypal co-founder Peter Thiel.
https://nitter.pussthecat.org/nixonfoundation/status/1379894036060864516
#thiel #paypal #bitcoin #china
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
The best of Yahoo! Answers - The wisdom of the crowd
https://www.theverge.com/22368753/yahoo-answers-best-funny-shut-down
#yahoo #answers #funny
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
Samsung's 'iTest' Lets You Try a Galaxy Device on Your iPhone
Samsung has launched "iTest," an interactive website experience that's designed to allow iPhone users to test out Android on a Galaxy device, or "sample the other side," as Samsung puts it.
The iTest website is being advertised in New Zealand, according to a MacRumors reader who came across the feature. Visiting the iTest website on an iPhone prompts users to install a web app to the Home screen.
From there, tapping the app launches into a simulated Galaxy smartphone home screen complete with a range of apps and settings options. You can open the Galaxy Store, apply Themes, and even access the messages and phone apps.
https://www.macrumors.com/2021/04/08/samsung-itest-galaxy-device-iphone-experience/
#samsung #SumSum #apple #iphone #itest
📡 @nogoolag 📡 @blackbox_archiv
Facebook is down
https://developers.facebook.com/status
#facebook #DeleteFacebook #down
📡 @nogoolag 📡 @blackbox_archiv
Facebook axes 16,000 accounts for trading fake reviews after UK intervenes
(Reuters) - Social media company Facebook Inc suspended 16,000 accounts for selling or buying fake reviews of products and services on its platforms, after Britain’s competition watchdog intervened for the second time, the regulator said.
U.S.-based Facebook also made further changes to detect, remove and prevent paid content that could mislead users on its platforms, including the popular photo-sharing app Instagram, the UK’s Competition and Markets Authority (CMA) said on Friday.
“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews,” a Facebook representative said.
The CMA began a crackdown on false reviews from 2019 when it first asked Facebook and e-commerce platform eBay Inc to check their websites after it found evidence of a growing marketplace for misleading customer reviews on the platforms.
Facebook has also been under scrutiny by the CMA for antitrust concerns over the technology company’s acquisition of GIF website Giphy. It has been under pressure the world over for its data sharing practices as well as fake news and hate speech.
“The pandemic has meant that more and more people are buying online, and millions of us read reviews to enable us to make informed choices when we shop around. That’s why fake and misleading reviews are so damaging,” said CMA Chief Executive Andrea Coscelli.
CMA’s crackdown on Facebook coincides with Britain’s efforts to set up a dedicated digital markets unit within the regulatory authority to specifically look at governing digital platforms.
https://www.reuters.com/article/us-facebook-britain-reviews/facebook-axes-16000-accounts-for-trading-fake-reviews-after-uk-intervenes-idUSKBN2BW168
#facebook #DeleteFacebook #fake #reviews #uk
📡 @nogoolag 📡 @blackbox_archiv
(Reuters) - Social media company Facebook Inc suspended 16,000 accounts for selling or buying fake reviews of products and services on its platforms, after the Britain’s competition watchdog intervened for the second time, the regulator said.
U.S.-based Facebook also made further changes to detect, remove and prevent paid content which could mislead users on its platforms, including popular photo-sharing app Instagram, UK’s Competition and Markets Authority (CMA) said on Friday.
“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews,” a Facebook representative said.
The CMA began a crackdown on false reviews from 2019 when it first asked Facebook and e-commerce platform eBay Inc to check their websites after it found evidence of a growing marketplace for misleading customer reviews on the platforms.
Facebook has also been under scrutiny by the CMA for antitrust concerns over the technology company’s acquisition of GIF website Giphy. It has been under pressure the world over for its data sharing practices as well as fake news and hate speech.
“The pandemic has meant that more and more people are buying online, and millions of us read reviews to enable us to make informed choices when we shop around. That’s why fake and misleading reviews are so damaging,” said CMA Chief Executive Andrea Coscelli.
The CMA's crackdown on Facebook coincides with Britain's efforts to set up a dedicated digital markets unit within the regulatory authority to specifically look at governing digital platforms.
https://www.reuters.com/article/us-facebook-britain-reviews/facebook-axes-16000-accounts-for-trading-fake-reviews-after-uk-intervenes-idUSKBN2BW168
#facebook #DeleteFacebook #fake #reviews #uk
📡 @nogoolag 📡 @blackbox_archiv
CyberBattleSim
CyberBattleSim is an experimentation research platform to investigate the interaction of automated agents operating in a simulated abstract enterprise network environment. The simulation provides a high-level abstraction of computer networks and cyber security concepts. Its Python-based OpenAI Gym interface allows for training of automated agents using reinforcement learning algorithms.
The simulation environment is parameterized by a fixed network topology and a set of vulnerabilities that agents can utilize to move laterally in the network. The goal of the attacker is to take ownership of a portion of the network by exploiting vulnerabilities that are planted in the computer nodes. While the attacker attempts to spread throughout the network, a defender agent watches the network activity and tries to detect any attack taking place and mitigate the impact on the system by evicting the attacker. We provide a basic stochastic defender that detects and mitigates ongoing attacks based on pre-defined probabilities of success. We implement mitigation by re-imaging the infected nodes, a process abstractly modeled as an operation spanning multiple simulation steps.
To compare the performance of the agents we look at two metrics: the number of simulation steps taken to attain their goal and the cumulative rewards over simulation steps across training epochs.
https://github.com/microsoft/CyberBattleSim
https://www.microsoft.com/security/blog/2021/04/08/gamifying-machine-learning-for-stronger-security-and-ai-models/
#simulation #CyberBattleSim #machine #learning #ai #security #microsoft
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
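The Gym-style loop the description implies can be sketched with a toy stand-in environment. Everything below is invented for illustration (the class name, node counts, and the 50% exploit-success probability are not from CyberBattleSim, whose real environment ids, observation spaces, and action spaces differ); it only shows the classic reset()/step() interface and the two metrics the post mentions, steps to goal and cumulative reward:

```python
# Toy stand-in for a Gym-style lateral-movement environment (hypothetical).
import random

class ToyNetworkEnv:
    """Attacker starts owning one node and tries to 'own' 3 of 5 nodes."""
    def __init__(self, n_nodes=5, goal=3, seed=0):
        self.n_nodes, self.goal = n_nodes, goal
        self.rng = random.Random(seed)

    def reset(self):
        self.owned = {0}            # attacker starts with a foothold node
        return frozenset(self.owned)

    def step(self, action):
        # action: index of a node to attack; the exploit succeeds 50% of the time
        reward = 0.0
        if action not in self.owned and self.rng.random() < 0.5:
            self.owned.add(action)
            reward = 1.0            # reward for each newly owned node
        done = len(self.owned) >= self.goal
        return frozenset(self.owned), reward, done, {}

def run_episode(env, max_steps=100):
    """Run one episode with a random agent; return the two metrics."""
    env.reset()
    total_reward, steps = 0.0, 0
    for steps in range(1, max_steps + 1):
        action = random.randrange(env.n_nodes)   # random "policy"
        _, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return steps, total_reward
```

A real reinforcement-learning agent would replace the `random.randrange` call with a learned policy and track these metrics across training epochs, as the post describes.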
Anthropomorphic Webcam - The Open-Hardware Human-eye webcam.
Sensing devices are everywhere, up to the point where we become unaware of their presence.
Eyecam is a critical design prototype exploring the potential futures of sensing devices. Eyecam is a webcam shaped like a human eye. It can see, blink, look around and observe you.
https://marcteyssier.com/projects/eyecam/
https://marcteys.github.io/eyecam/
#anthropomorphic #webcam #eyecam #video
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
Elon Musk shows off cyborg monkey that can play ping-pong video game with its mind
The macaque had a chip inserted on each side of his brain, created by Elon Musk's AI company Neuralink
Billionaire Elon Musk has unveiled a video showing a cyborg monkey playing the 1970s video game Pong entirely with its mind using brain implants.
The footage shows a nine-year-old macaque called Pager with a chip inserted on each side of his brain, created by Musk's AI company Neuralink.
https://www.telegraph.co.uk/technology/2021/04/09/elon-musk-shows-cyborg-monkey-can-play-video-games-mind/
#musk #cyborg #monkey #videogames #neuralink #video
📡 @nogoolag 📡 @blackbox_archiv
Interview with Hanna from Tutanota
Interview with Hanna from Tutanota about the importance of encryption in email, some of Tutanota's offerings and more.
https://www.youtube.com/watch?v=vLvxf6IxhPQ
#tutanota #encryption #email #interview #video
📡 @nogoolag 📡 @blackbox_archiv
Social Media Use in 2021
A majority of Americans say they use YouTube and Facebook, while use of Instagram, Snapchat and TikTok is especially common among adults under 30.
To better understand Americans’ use of social media, online platforms and messaging apps, Pew Research Center surveyed 1,502 U.S. adults from Jan. 25 to Feb. 8, 2021, by cellphone and landline phone. The survey was conducted by interviewers under the direction of Abt Associates and is weighted to be representative of the U.S. adult population by gender, race, ethnicity, education and other categories. Here are the questions used for this report, along with responses, and its methodology.
https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/
#socialmedia #facebook #youtube #instagram #snapchat #tiktok #research #usa
📡 @nogoolag 📡 @blackbox_archiv
How A Facial Recognition Tool Found Its Way Into Hundreds Of US Police Departments, Schools, And Taxpayer-Funded Organizations
A BuzzFeed News investigation has found that employees at law enforcement agencies across the US ran thousands of Clearview AI facial recognition searches — often without the knowledge of the public or even their own departments.
(updated on April 8, 2021)
A controversial facial recognition tool designed for policing has been quietly deployed across the country with little to no public oversight. According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies nationwide have used Clearview AI to search through millions of Americans’ faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members.
BuzzFeed News has developed a searchable table of 1,803 publicly funded agencies whose employees are listed in the data as having used or tested the controversial policing tool before February 2020. These include local and state police, US Immigration and Customs Enforcement, the Air Force, state healthcare organizations, offices of state attorneys general, and even public schools.
In many cases, leaders at these agencies were unaware that employees were using the tool; five said they would pause or ban its use in response to questions about it.
Our reporting is based on data that describes facial recognition searches conducted on Clearview AI between 2018 and February 2020, as well as tens of thousands of pages of public records, and outreach to every one of the hundreds of taxpayer-funded agencies included in the dataset.
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-local-police-facial-recognition
#clearview #ai #police #facial #recognition
📡@cRyPtHoN_INFOSEC_FR
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_DE
📡@BlackBox_Archiv
📡@NoGoolag
The App Store is broken because it wasn't designed to work
When Kosta Eleftheriou first started revealing scam upon scam in the App Store, I have to admit I didn't quite get it. How were all these multi-million dollar scams being allowed into the App Store in the first place? And why weren't they being expediently removed when scores of customers complained in their 1-star reviews?
The answer turns out to be as simple as it is depressing: Apple's App Store was never designed to work. At least not in the way the company purports that it does. Apple presents the App Store as a highly curated, secure mall of apps which have been thoroughly vetted, and that you can safely install without any due diligence. But it's not and you shouldn't.
As part of Epic's lawsuit against Apple, we've come to learn that app reviewers typically review 50-100 apps per day, sometimes spending less than a minute on an individual app. We've also learned that these reviewers are hired without any technical background, let alone any particular expertise with the iOS or macOS platforms.
There's a term for a practice like this: security theater.
https://world.hey.com/dhh/the-app-store-is-broken-because-it-wasn-t-designed-to-work-aa479eb5
#apple #appstore #thinkabout
📡 @nogoolag 📡 @blackbox_archiv