No new Bitcoin: Don't touch Facebook's Libra!
Facebook wants to become a central bank with Libra and profit from the Bitcoin hype. But the blockchain is primarily a facade: Libra is neither decentralized nor a cryptocurrency.
If Facebook has its way, Libra will develop into a world currency. But the Facebook coin has little in common with Bitcoin and other cryptocurrencies. Libra is a digital currency that resembles WeChat Pay rather than Bitcoin. And the question of whether Libra is a cryptocurrency is directly tied to considerations of privacy and user trust.
Cryptocurrencies vs. central banks
With blockchain as a technology, and with cryptocurrencies in particular, a great deal revolves around trust. In principle this is very similar to conventional currencies such as the euro, which also work only because we trust, for example, that the state and the central bank will not devalue them. In the case of fiat currencies, i.e. unbacked money, recent history - essentially the 20th century onwards - has shown that this trust in the state is not always justified. Replacing this blind trust in a central authority that controls the monetary system has been one of the core promises of cryptocurrencies from the outset, and it can be found in Bitcoin's first announcement, written by Satoshi Nakamoto.
Facebook, too, wants to give the impression that its digital currency is decentralized, so that users do not have to rely on a central authority. Libra is to be controlled by the Libra Association, based in Switzerland, whose members include companies such as PayPal, Visa, Uber and Mastercard. The mere fact that many well-known companies are on board - each having paid at least ten million US dollars for the privilege - combined with the ambitious goal of creating a global financial network, is generating a lot of hype. Stick the label "blockchain" on a project this ambitious and you can be sure everyone will be talking about it.
Decentralised, my ass: the Libra Association acts as a central bank
"[The new blockchain for the global currency] is a decentralized, programmable database designed to support a low-volatility cryptocurrency that acts as a medium of exchange for billions of people," the Libra white paper says. Admittedly, Libra shows many superficial technical echoes of Ethereum and Bitcoin: smart contracts, dapps, Move, a programming language of its own - and all of it, supposedly, faster and better. The Libra blockchain is meant to serve the roughly 2.7 billion people who have a Facebook profile and to process up to 1,000 transactions per second; Bitcoin manages around seven.
Unlike Bitcoin or Ethereum, the Libra blockchain is not a public blockchain but a consortium blockchain, in which only paying members of the Libra Association run the nodes that validate transactions. According to Facebook, this is necessary to avoid problems such as the high energy consumption, slow transactions and other difficulties that plague Bitcoin. For this reason alone, the Libra Association acts as a sort of central bank. Facebook says this will change after five years, when the Libra blockchain will be opened up, but one may well be sceptical.
https://t3n.de/news/libra-ist-keine-kryptowaehrung-kein-bitcoin-1172551/
#DeleteFacebook #libra #CryptoCurrency #decentralized #paypal #visa #uber #mastercard #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🎧 Dating App Privacy and NASA Cyberattack
* A ransomware webinar hosted by Threatpost editor Tara Seals, which included experts from Recorded Future, Malwarebytes and Moss Adams. The webinar looked at the top ransomware trends and threats, and outlined how enterprises can protect themselves.
* A Florida city hit three weeks ago by a ransomware attack voted this week to pay the hackers a ransom of $600,000.
* A Threatpost feature that looked at top dating apps like Match.com and Tinder found that the services are collecting and sharing a disturbing range of data, from chat messages to sexual orientation.
* Rampant security-operations bungling allowed cyberattackers to infiltrate NASA’s JPL network, which carries human mission data.
📻 #DatingApp #Privacy and #NASA #Cyberattack #podcast
https://threatpost.com/podcast-dating-app-privacy-and-nasa-cyberattack/145902/
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Can’t Fight the Future, Suckers! #PropagandaWatch
Don’t want an Alexa in your home spying on everything you say? Well too bad, suckers! It’s the way of the future. Can’t argue with that, right?
Who would have ever guessed that the creepy spy gadget that’s listening to everything you do is listening to everything you do? Anyone with half a brain, that’s who.
❗️ Don’t buy this garbage, and don’t let your friends buy it, either.
📺 #Corbettreport #alexa #why #video #podcast
https://www.corbettreport.com/cant-fight-the-future-suckers-propagandawatch/
📺 Don’t Be An Idiot! Get Rid of Alexa!
https://www.corbettreport.com/dont-be-an-idiot-get-rid-of-alexa/
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Insider Blows Whistle & Exec Reveals Google Plan to Prevent “Trump situation” in 2020 on Hidden Cam
Project Veritas has released a new report on Google which includes undercover video of a Senior Google Executive, leaked documents, and testimony from a Google insider. The report appears to show Google’s plans to affect the outcome of the 2020 elections and “prevent” the next “Trump situation.”
“Elizabeth Warren is saying we should break up Google. And like, I love her but she’s very misguided, like that will not make it better it will make it worse, because all these smaller companies who don’t have the same resources that we do will be charged with preventing the next Trump situation, it’s like a small company cannot do that.”
📺 https://www.projectveritas.com/2019/06/24/insider-blows-whistle-exec-reveals-google-plan-to-prevent-trump-situation-in-2020-on-hidden-cam/
📡 @BlackBox
#whistleblower #google #DeleteGoogle #HiddenCam #undercover #insider #why
🎧 For Police, Social Media Is Now Part of the Job
When police officer David Gomez was first stationed at a school in rural Idaho, he thought he’d spend his time breaking up fights in bathrooms and scanning the hallways for weed. Instead, he found that almost every problem was either happening on social media or started there. This week on Decrypted, reporter Shelly Banjo explores how age-old dangers like drugs, child predators and school shooters have shifted onto new platforms, and how one school has tried to adapt.
📻 #Bloomberg #podcast
https://www.bloomberg.com/news/audio/2019-06-24/for-police-social-media-is-now-part-of-the-job-podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
4 Times the US Threatened to Stage an Attack and Blame it on Iran
The US has threatened to stage an attack and blame it on Iran over and over in the last few years. Don’t let a war based on false pretenses happen again. Please share this video.
📺 #corbettreport #video #podcast
https://www.corbettreport.com/iranfalseflag/
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Witness Speaks Out on Organ Harvesting Taking Place in China
A former intern at a #military #hospital in #Shenyang witnessed the #crime firsthand and spoke with NTD about his harrowing experience.
https://news.ntd.com/witness-speaks-out-on-organ-harvesting-taking-place-in-china_347497.html
📺 An independent people’s #tribunal has unanimously concluded that #prisoners of #conscience have been—and continue to be #killed in #China for their #organs “on a significant scale,” after a year-long #investigation
https://www.youtube.com/watch?v=nM1ZzWeshFk
#HumanRights #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🇪🇸 The challenge of computing in the cloud on encrypted data without decrypting it.
Like everything in life, the cloud has its advantages and its drawbacks. We all know the advantages: lower costs, zero maintenance, enormous flexibility, full availability, high scalability, and so on. Its security problems are just as obvious: a compromised server means the compromise of the data hosted on it.
The immediate countermeasure everyone thinks of to protect data stored in the cloud is to encrypt it. Encryption is satisfactory as long as the data remains at rest and no operations need to be performed on it. But what if computations have to be carried out in the cloud? How can that be done without decrypting the data or revealing the encryption keys to the software running in the cloud?
The challenge is formidable. A major research effort is under way to develop cryptographic methods that allow computation on encrypted data without decrypting it, for example:
✳️ Fully homomorphic encryption (FHE), which tackles the problem by requiring a client to encrypt the data before sending it to the cloud and to supply code that runs over that data without decrypting it. The results are returned to the client in encrypted form. Since only the client controls the decryption key, nobody else can decrypt either the original data or the results, which keeps that information secure. Unfortunately, while computing on encrypted data is theoretically possible, it is slowed down by almost ten orders of magnitude, making it unworkable with the algorithms available today.
✳️ Another strategy is secure multi-party computation (SMPC), in which multiple entities can perform computations jointly while keeping each entity's data private. As with FHE, these protocols add considerable computational overhead, around two orders of magnitude.
✳️ Finally, threshold cryptography requires that, in order to decrypt or sign a message, several parties (more than a predetermined threshold) cooperate in the decryption or signing protocol. The message is encrypted with a public key, and the corresponding private key is shared among the participants.
In this article we will look at how FHE works in more detail; a second article will dig into the other two strategies.
Fully homomorphic encryption (FHE)
Homomorphic encryption would be the "Holy Grail" of cloud security. It is defined as the ability to perform operations on encrypted data whose result, once decrypted, is identical to the result of the same operations performed on the plaintext.
Although at first glance it may look like magic, the truth is that everyday cryptographic algorithms that partially support homomorphic encryption are all around us - public-key algorithms, for example. They are called "partially" homomorphic because they are homomorphic with respect to a single operation, such as addition or multiplication, but not to any other algebraic operation. An example with the well-known RSA will make everything clearer.
Imagine you store two quantities on the server, x1 and x2, encrypted with your RSA public key (n and e), so that nobody but the legitimate holder of the corresponding private key - that is, you - can decrypt them. Now, RSA (without padding and without the modifications added to harden it) is partially homomorphic with respect to multiplication, since:
E(x1) · E(x2) = x1^e · x2^e = (x1 · x2)^e = E(x1 · x2) (mod n)
Therefore, the server could multiply your two encrypted quantities and hand you back the encrypted result without ever learning the values of x1 or x2. When you decrypt the returned result, you get the same value as if you had multiplied the two original, unencrypted quantities. Impressive, isn't it?
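This multiplicative property is easy to check with toy numbers. A minimal Python sketch of textbook RSA (the parameters are deliberately tiny and insecure, for illustration only):

```python
# Textbook RSA (no padding) is multiplicatively homomorphic.
# Toy parameters -- never use keys this small in practice.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m):
    """E(m) = m^e mod n"""
    return pow(m, e, n)

def dec(c):
    """D(c) = c^d mod n"""
    return pow(c, d, n)

x1, x2 = 12, 7
c = (enc(x1) * enc(x2)) % n   # the server multiplies ciphertexts only
assert dec(c) == x1 * x2      # ...yet the client decrypts the product
print(dec(c))  # 84
```

Note the caveat hidden in the assert: the homomorphism is exact only while x1 · x2 stays below n, since everything is reduced modulo n.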
There are many other cryptographic algorithms that, like RSA, are partially homomorphic, such as ElGamal (also for multiplication) or Paillier (for addition).
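Paillier's additive counterpart can be sketched the same way. Again, the parameters below are toy values chosen only so the arithmetic is easy to follow:

```python
from math import lcm  # Python 3.9+

# Toy Paillier parameters -- illustration only, never keys this small.
p, q = 11, 13
n = p * q               # 143
n2 = n * n
g = n + 1               # standard simple choice of generator
lam = lcm(p - 1, q - 1) # Carmichael function of n
mu = pow(lam, -1, n)    # with g = n+1, L(g^lam mod n^2) = lam mod n

def enc(m, r):
    """E(m) = g^m * r^n mod n^2, with r coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def L(u):
    return (u - 1) // n

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

x1, x2 = 15, 20
c = (enc(x1, 7) * enc(x2, 9)) % n2   # multiply ciphertexts...
assert dec(c) == x1 + x2             # ...decrypt to the SUM of plaintexts
print(dec(c))  # 35
```

Multiplying Paillier ciphertexts adds the underlying plaintexts, which is exactly the "homomorphic for one operation" behaviour the article describes.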
Things get enormously more complicated when what you want is "fully" homomorphic encryption (FHE), capable of supporting both addition and multiplication. Although there are many FHE proposals in the scientific literature, the most prominent is the one put forward by Craig Gentry in 2009 and evolved by him and other authors over the years. His proposal rests on an abstract algebraic concept known as a "lattice". You have surely seen hundreds of lattices on windows and balconies. The ones sold in DIY stores are two-dimensional lattices: wooden or metal slats crossing at certain points. Now picture that same lattice in 3D. Then add another dimension. And another. And another. And so on, up to n dimensions. Got an n-dimensional lattice in your head? Complicated, right? You can take it on faith that finding the shortest vector in such a lattice, or the lattice point nearest to a given point, is no easy task. In fact, it is so hard that it is known as the Shortest Vector Problem (SVP), and it is precisely the "intractable" mathematical problem underpinning lattice-based encryption. Indeed, this cryptosystem is one of the most serious cryptographic candidates for the post-quantum era.
Best of all, with the right variants, lattices also support fully homomorphic encryption. But - and here comes a big, big BUT - these algorithms are tremendously inefficient. Operating on the encrypted data can be up to ten orders of magnitude slower than on the plaintext (that is, 10^10 times slower: a one followed by ten zeros, 10,000,000,000). In short, they are unusable for real-world applications, and until they reach acceptable speeds we will not see large-scale deployment in cloud services. Research in the field, meanwhile, continues apace.
Cryptographers are not sitting on their hands in the meantime. If operating on encrypted data is a formidable challenge, why not tackle simpler versions of the problem? Perhaps you don't trust your cloud provider: could the workload be split between the two of you? Other cryptographic schemes aim to let several mutually distrusting parties operate on the data without having to reveal it to one another.
https://empresas.blogthinkbig.com/computacion-segura-en-la-nube-datos-cifrados-sin-descifrarlos-parte-1/
#nube #seguridad #cifrado
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Secure multi-party computation (SMPC)
Imagine you are chatting with two coworkers and the subject of your bonuses comes up. All three of you would like to know who earns the highest bonus, but none of you wants to reveal the amount of your own. How can you find out? One solution is to trust a third party: each of you reveals your bonus amount to them and, once all three are known, they announce who earns the most.
Now imagine you work in the threat-intelligence unit of a cybersecurity company. An attack has taken place and you have a list of suspects. The intelligence units of other companies have their own suspect lists. You would all like to know which suspects appear on every list, but neither your company nor the others want to reveal their full lists. How can you compute the intersection of these lists? Once again, an immediate solution is for each company to hand its list to a trusted third party, which then computes the intersection of all the suspect lists.
Both scenarios fall back on a trusted third party. But what if you don't trust that third party? After all, assuming a party is trustworthy is assuming a lot. How else could these dilemmas be resolved, without third parties and with the same security guarantee?
This is precisely what secure multi-party computation proposes: protocols that emulate the trusted third party. They allow a function to be computed over several input values so that only the result of evaluating the function is revealed, while the input values themselves remain private.
Expressed mathematically: n participants, p1, p2, …, pn, each holding private data d1, d2, …, dn respectively, want to compute the value of a public function over that private data, F(d1, d2, …, dn), while keeping their own inputs secret.
Back to the bonus example. If the inputs x, y, z represent your bonuses, you want to learn the highest of the three without revealing the value of any of them. In other words, you want to compute:
F(x, y, z) = max(x, y, z)
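Computing max securely needs comparison circuits, but the flavour of MPC can be shown with a simpler function, the sum, using additive secret sharing. A sketch in Python (the bonus amounts are made-up values; this is the idea, not a full protocol):

```python
import random

P = 2**61 - 1  # public prime modulus; shares are uniform mod P

def share(secret, n_parties):
    """Split `secret` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

bonuses = [4200, 5100, 3900]   # each party's private input
n = len(bonuses)

# Each party i splits its input and sends share j to party j.
dealt = [share(b, n) for b in bonuses]
# Each party j locally adds the shares it received...
partial = [sum(dealt[i][j] for i in range(n)) % P for j in range(n)]
# ...and only these partial sums are published and combined.
total = sum(partial) % P
assert total == sum(bonuses)
print(total)  # 13200
```

Each individual share is uniformly random, so no party (or the public partial sums) reveals anything about any single bonus; only the aggregate comes out.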
These protocols are expected to guarantee a series of security requirements:
✳️ Correctness: even if some of the parties cheat, the final result will be correct.
✳️ Privacy: only the result of evaluating the function is learned, not the values of the inputs (except each party's own, of course).
✳️ Input independence: no party can choose its input as a function of another party's input.
✳️ Fairness: if any party learns the result of the evaluation, then all parties learn the same result.
✳️ Guaranteed output delivery: corrupted parties cannot prevent the honest parties from receiving the result.
There are various cryptographic protocols for carrying out this secure computation by distributing it among the parties. The best known is Yao's Garbled Circuit protocol. The idea is to simulate any mathematical function with a Boolean circuit built exclusively from logic gates, specifically AND and XOR. For very simple functions, such circuits can even be designed by hand. Obviously, as the functions become more and more complex, the circuits grow in complexity alongside them. You can imagine that simulating AES out of AND and XOR gates is no easy feat, yet it is possible - with some 32,000 gates! In fact, the most recent implementations reach very efficient speeds, on the order of a few milliseconds.
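The core trick of garbling can be shown for a single AND gate. In this toy Python sketch (no oblivious transfer, no point-and-permute, and a zero-tag validity check stands in for the real row-selection machinery), the garbler picks two random labels per wire and encrypts each output label under the pair of input labels; the evaluator, holding exactly one label per input wire, can open exactly one row:

```python
import hashlib, os, random

LBL = 16            # label length in bytes
TAG = b"\x00" * 16  # validity tag appended before encryption

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(ka, kb):
    # Derive a 32-byte one-time pad (LBL + tag length) from two labels.
    return hashlib.sha256(ka + kb).digest()

# --- Garbler ---------------------------------------------------------
wa = [os.urandom(LBL), os.urandom(LBL)]  # wire-a labels for bits 0 / 1
wb = [os.urandom(LBL), os.urandom(LBL)]  # wire-b labels
wc = [os.urandom(LBL), os.urandom(LBL)]  # output-wire labels
rows = [xor(pad(wa[a], wb[b]), wc[a & b] + TAG)  # encrypt AND's output
        for a in (0, 1) for b in (0, 1)]
random.shuffle(rows)  # hide which row belongs to which input pair

# --- Evaluator -------------------------------------------------------
def evaluate(la, lb):
    """Holding one label per input wire, recover the one output label."""
    for row in rows:
        plain = xor(pad(la, lb), row)
        if plain.endswith(TAG):   # only the matching row decrypts cleanly
            return plain[:LBL]
    raise ValueError("no row decrypted")

# Holding the labels for a=1, b=1, the evaluator learns wc[1] and
# nothing about the other rows or the garbler's remaining labels.
assert evaluate(wa[1], wb[1]) == wc[1]
assert evaluate(wa[1], wb[0]) == wc[0]
print("AND gate garbled and evaluated correctly")
```

Chaining such gates (feeding output labels into the next gate's encryption) is what lets an arbitrary Boolean circuit, even the 32,000-gate AES circuit mentioned above, be evaluated on hidden inputs.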
Imagine you are chatting with two coworkers. The topic of your bonuses comes up. All three of you would like to know who earns the highest bonus, but none of you wants to reveal the amount of your own. How can you find out? One solution is to trust a third party: each of you reveals your bonus amount to them and, once all three are known, they announce who earns the highest.
Now imagine you work in the threat intelligence unit of a cybersecurity company. An attack has taken place and you have a list of suspects. The intelligence units of other companies have their own suspect lists. You would all like to know which suspects appear on every list, but neither your company nor the others want to reveal their complete lists. How can you compute the intersection of these lists? Once again, an immediate solution would be for each company to hand its list to a trusted third party, who then computes the intersection of all the suspect lists.
Both scenarios rely on a trusted third party. But what if you don't trust that third party? After all, assuming a party is trustworthy is a big assumption. How else could these dilemmas be resolved, without third parties and with the same security guarantees?
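The trusted-third-party baseline for the bonus example can be sketched in a few lines of Python (a toy illustration of the trust assumption, not a protocol; the function and party names are ours):

```python
# Toy "trusted third party" for the bonus example: every party reveals
# its private input to the arbiter, who announces only the winner.
# This concentration of trust is exactly what SMPC is designed to remove.

def trusted_third_party_max(bonuses: dict[str, int]) -> str:
    # The arbiter sees ALL private values - the weak point of this design.
    return max(bonuses, key=bonuses.get)

winner = trusted_third_party_max({"alice": 900, "bob": 1200, "carol": 1100})
print(winner)  # bob
```

The point of the sketch is what it leaks: the arbiter learns every input, not just the answer.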
This is precisely what secure multi-party computation (SMPC) offers: protocols that emulate the trusted third party. They make it possible to compute a function over several input values in such a way that only the result of the evaluation is revealed, while the input values themselves remain private.
Expressed mathematically: n participants, p1, p2, …, pn, each holding private data, respectively d1, d2, …, dn, want to compute the value of a public function over that private data, F(d1, d2, …, dn), while keeping their own inputs secret.
Back to the bonus example. If the inputs x, y, z represent your bonuses, you want to learn the highest of the three without revealing any of the values. In other words, you want to compute:
F(x, y, z) = max(x, y, z)
These protocols are expected to guarantee a number of security requirements:
✳️ Correctness: even if some of the parties cheat, the final result will be correct.
✳️ Privacy: only the result of the function evaluation is learned, not the values of the evaluated inputs (except, of course, each party's own).
✳️ Independence of inputs: no party can choose its input as a function of another party's input.
✳️ Fairness: if any party learns the result of the evaluation, then every party learns it; a cheating party cannot obtain the output while denying it to the others.
✳️ Guaranteed output delivery: corrupted parties cannot prevent the honest parties from receiving the result.
There are different cryptographic protocols for carrying out this secure computation by distributing it among the parties. The best known is Yao's Garbled Circuit protocol. The idea behind it is to simulate any mathematical function with a Boolean circuit built exclusively from logic gates, specifically AND and XOR. For very simple functions, these circuits can even be designed by hand. Naturally, as the functions grow more complex, the circuits grow in complexity along with them. As you can imagine, simulating AES with AND and XOR gates alone is no easy task, though it is possible, at around 32,000 gates! In fact, the most recent implementations reach very efficient speeds, on the order of a few milliseconds.
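To make the circuit idea concrete, here is the kind of hand-built circuit Yao's protocol evaluates: a 1-bit "greater than" comparator built only from AND and XOR gates (NOT b is expressed as b XOR 1). This shows only the plain circuit; the garbling and oblivious-transfer machinery that make the evaluation private are omitted.

```python
# A Boolean circuit using only AND and XOR gates, the gate set named
# above for garbled-circuit constructions. NOT(b) = b XOR 1.

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

def greater_than_1bit(a: int, b: int) -> int:
    # a > b for single bits: a AND (NOT b)
    return AND(a, XOR(b, 1))

# Truth table check: only (a=1, b=0) yields 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, greater_than_1bit(a, b))
```

Chaining comparators like this one bit by bit yields the multi-bit max(x, y, z) from the bonus example, which is exactly the sense in which "any function" can be expressed as a circuit.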
Of course, secure multi-party computation is far more complicated than this. The adversary may be passive or active; the functions to evaluate may be more or less complex; protocols may tolerate a larger or smaller number of active adversaries, impose stronger or weaker security constraints, require more or less computation time, demand that every node in the network be connected to every other or only that some path exist between any two nodes, communicate synchronously or asynchronously, and so on.
Some companies have begun to commercialize SMPC solutions in real-world scenarios: Private Data as a Service applications, such as the Sharemind or Jana databases; key management applications, such as the products from Sepior or Unbound; and point solutions, such as Partisia's.
In short, secure multi-party computation is a field in continuous expansion, with a multitude of protocols, scenarios and use cases, and we are still very far from having heard the last word.
Threshold cryptography
Cryptography has become a standard technology for protecting the confidentiality of data. In cryptography, a basic design rule is known as Kerckhoffs's Principle: everything about a cryptosystem is known except the key.
The question is: if you store the data encrypted, where do you store the encryption key? Ultimately, the security of an encryption system rests on the management of its keys. Keys become the Achilles' heel of cryptography. In fact, they are not even safe in a computer's memory: Heartbleed, Spectre and Meltdown come to mind as recent examples of vulnerabilities that allowed reading private regions of memory and obtaining, among other data, encryption keys. Side-channel attacks, in turn, can leak information about keys through electromagnetic emissions or variations in power consumption. What's more, keys can remain recorded in DRAM even after the machine is powered off. Is there no way, then, to guarantee the security of keys?
One solution is to split the key into two or more shares, so that the encrypted information cannot be decrypted unless all (or a minimum number of) the key shares are brought together. For example, to split a key K into three shares, K1, K2 and K3, two keys K1 and K2 are selected at random, each the same length as K. The third share is computed as K3 = K1 ⊕ K2 ⊕ K, where ⊕ is the exclusive-OR operation. No two shares reveal any information about the secret key: all three shares are needed to recover K (we leave it as an exercise for the reader to verify that this is indeed the case).
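The XOR split just described can be checked directly (a minimal sketch; a real system would also handle share distribution and secure erasure, and the function names are ours):

```python
import secrets

def split_3_of_3(key: bytes) -> tuple[bytes, bytes, bytes]:
    # K1 and K2 are uniformly random and the same length as K;
    # K3 = K1 xor K2 xor K, so any two shares alone look random.
    k1 = secrets.token_bytes(len(key))
    k2 = secrets.token_bytes(len(key))
    k3 = bytes(a ^ b ^ c for a, b, c in zip(k1, k2, key))
    return k1, k2, k3

def recombine(k1: bytes, k2: bytes, k3: bytes) -> bytes:
    # XOR-ing all three shares cancels K1 and K2, leaving K.
    return bytes(a ^ b ^ c for a, b, c in zip(k1, k2, k3))

key = secrets.token_bytes(16)
shares = split_3_of_3(key)
assert recombine(*shares) == key  # all three shares recover K
```

The cancellation works because x ⊕ x = 0: combining the shares gives K1 ⊕ K2 ⊕ (K1 ⊕ K2 ⊕ K) = K.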
The scheme described has the "3 of 3" property. Generalizing, a secret sharing scheme is "k of n" (with n ≥ k ≥ 1) if any k shares together can recover a secret shared among n parties, but k − 1 shares reveal nothing about the secret.
And that is how we arrive at threshold cryptography. It is no longer simply a matter of splitting the key into shares, as in the simple example above, but of performing cryptographic operations with each key share in such a way that, when they are combined, the result is the same as if the operation had been performed with the complete key. RSA will again help us understand this more clearly.
We saw in the previous installment that the public key consists of two numbers: an exponent, e, and a modulus, n, which in turn is the product of two primes, n = p · q. The private key, for its part, is a number d such that e · d = 1 mod (p − 1) · (q − 1).
To sign a message m with RSA, one computes s = m^d mod n. Anyone who knows the public key can easily verify the signature by computing s^e = m^(e·d) = m mod n.
How can a group of people cooperate to sign a message? Instead of a single person signing the message with the private key d, the key can be split into several shares, for example three: d1, d2, d3, such that d1 + d2 + d3 = d mod (p − 1) · (q − 1).
Now each party can sign the same message m on its own: s1 = m^d1, s2 = m^d2, s3 = m^d3, and the complete signature is the product of the three partial signatures: s = s1 · s2 · s3. It is easy to verify that s1 · s2 · s3 = m^(d1 + d2 + d3) = m^d mod n. In other words, a complete signature can only be created if every party signs the message with its share of the private key. This protects the private key, d, since it is never stored in full on any server or in any memory. The three key shares never even need to be brought together, since each party's signing operation is independent of the rest. One key share, or even two, could be compromised and the complete key would still remain secure.
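Continuing with the same toy RSA numbers, the additive split of d and the product of partial signatures can be verified directly (again an illustrative sketch; deployed threshold RSA schemes generate shares far more carefully and add robustness against misbehaving signers):

```python
import secrets

# Toy textbook parameters; real deployments use large moduli.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

# Split d additively: d1 + d2 + d3 = d (mod phi).
d1 = secrets.randbelow(phi)
d2 = secrets.randbelow(phi)
d3 = (d - d1 - d2) % phi

m = 65
# Each party signs independently with its own share...
s1, s2, s3 = (pow(m, di, n) for di in (d1, d2, d3))
# ...and the partial signatures multiply into the full signature.
s = (s1 * s2 * s3) % n
assert s == pow(m, d, n)      # same result as signing with the whole key
assert pow(s, e, n) == m      # and it verifies under the public key
```

Note that the full d is computed here only to check the result; the parties themselves would each hold one di and never reconstruct d.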
The most sophisticated threshold cryptography schemes have the "k of n" property already mentioned. This property provides fault tolerance: a key share could be lost or compromised and the cryptographic operation could still be performed with the remaining shares. It also enforces cooperation: no single party can perform the complete cryptographic operation; at least k parties must agree. From an attacker's perspective, compromising one key share achieves nothing: at least k shares must be compromised.
As we can see, threshold cryptography eliminates single points of failure in cryptography, making it possible to redistribute responsibility for the custody of keys. And don't think it all stays at the level of mathematical exercises for graduate courses: the key management products from Sepior and Unbound are among the most advanced threshold cryptography solutions available today. Like the other fields discussed, it is constantly expanding, and we will see new results soon.
https://empresas.blogthinkbig.com/computacion-segura-en-la-nube-datos-cifrados-sin-descifrarlos-parte-2/
#nube #seguridad #cifrado
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Telefónica Tech
The great challenge of secure cloud computing: using encrypted data without decrypting it (II)
NSA Starts Contributing Low-Level Code to UEFI BIOS Alternative
The NSA has started assigning developers to the Coreboot project, an open source alternative to proprietary BIOS/UEFI firmware. The NSA's Eugene Myers has begun contributing SMI Transfer Monitor (STM) implementation code for the x86 processor. Myers works for NSA's Trusted Systems Research Group, which, according to the agency's website, is meant to "conduct and sponsor research in the technologies and techniques which will secure America's information systems of tomorrow."
Can The NSA Be Trusted With Such Low-Level Code?
NSA has worked on security projects embraced by the public before, including Security-Enhanced Linux, a security module for Linux. More recently, the NSA released the Ghidra reverse engineering tool as open source, which has also been adopted by Coreboot developers so that they can more easily reverse-engineer hardware firmware.
Myers published a paper last year on how NSA's STM implementation could work. All Coreboot code, including the STM contributions from the NSA, is open source, so anyone could verify that there is no backdoor in there -- in theory.
In practice, the NSA could also have written the code in a less-than-secure way, with vulnerabilities that only highly experienced security researchers would detect. Alternatively, the NSA could update this implementation years later, when there are fewer eyes on the STM code and the update would no longer make headlines.
This wouldn’t be completely out of the question for an agency like the NSA. After all, the NSA succeeded in pushing a backdoor through the NIST standardization process years ago. The agency was also accused by EFF co-founder John Gilmore of sabotaging the IPsec protocol by making it too complex to ever be secure (something that would benefit an espionage agency).
More recently, it also tried to push two encryption algorithms through the ISO standardization process, but the reviewers overwhelmingly rejected the algorithms based on trust concerns and NSA’s failure to answer some technical questions.
Read more:
https://www.tomshardware.com/news/nsa-contributes-low-level-stm-coreboot,39704.html
#nsa #code #UEFI #BIOS #coreboot
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
I was 7 words away from being spear-phished
Three weeks ago I received a very flattering email from the University of Cambridge, asking me to judge the Adam Smith Prize for Economics:
"Dear Robert,
My name is Gregory Harris. I’m one of the Adam Smith Prize Organizers.
Each year we update the team of independent specialists who could assess the quality of the competing projects:
https://people.ds.cam.ac.uk/grh37/awards/Adam_Smith_Prize
Our colleagues have recommended you as an experienced specialist in this field.We need your assistance in evaluating several projects for Adam Smith Prize.
Looking forward to receiving your reply.
Best regards, Gregory Harris"
I wouldn’t say I’m an “expert” in economics exactly, but the university’s request wasn’t that surprising. I do have a subscription to The Economist, and I do understand - very roughly - how and why central banks set interest rates. I’ve read “Capital in the Twenty-First Century” and basically got the gist of the first half. I’ve written a few blog posts that I’ve generously tagged as “economics”, and perhaps there’s a new discipline of computational economics that I might be able to shed some software industry insight onto. Overall it felt perfectly plausible that the organizers of the Adam Smith prize would want my perspective. I assumed that being a judge for the Adam Smith Prize would be a lot of work and would not be paid, but it would still be great fuel for the ole ego.
All of this said, in my heart of hearts I knew that some wires had probably got crossed somewhere. There was no doubt a Professor Hobert Reaton at UC San Diego, expert in Heckscher-Ohlin trade theory, who was patiently waiting for the chance to further his career through a Transatlantic collaboration. Nonetheless, I judged this a thread worth pulling and a mild fantasy worth entertaining.
I reflexively did some basic security hygiene checks. The email was from an @cam.ac.uk email address. I hovered over the link in the email - https://people.ds.cam.ac.uk/grh37/awards/Adam_Smith_Prize. It pointed to the same URL that the email text claimed it did, and was located on a valid cam.ac.uk subdomain. It did strike me as a little odd that the page was hosted inside gh327's personal directory instead of the main economics department's site; but hey, it's probably less bureaucracy that way. I clicked on the link and read a little about the history of the Adam Smith prize.
If "Gregory" had added just 7 extra words to this page - "THIS PAGE MUST BE VIEWED IN FIREFOX" - I would have been screwed. More on that later.
Next I think I visited the root cam.ac.uk website to make sure that this really was the domain of the University of Cambridge. I did a quick Google for gregory harris cambridge to see how much of a big deal he was. I couldn't find much - I vaguely remember turning up only a very sparse LinkedIn account. But that's fine; not everyone has to have a Twitter profile or a cooking blog.
I remember thinking that Gregory's email seemed very curt and poorly phrased, and that he could use a few lessons on how to most effectively ask strangers on the internet to do free work for him. He was lucky that I didn't care about such trivialities. He was also lucky that I didn't care that he'd missed a "the" in We need your assistance in evaluating several projects for Adam Smith Prize. Apparently I further didn't care that he'd unnecessarily capitalized the word Organizers in Adam Smith Prize Organizers, or that he didn't seem to understand that a paragraph can contain more than a single sentence.
At the time I just thought he wasn’t a very good writer.
I sent Gregory a short reply, expressing preliminary interest and asking for more information.....
Read more:
https://robertheaton.com/2019/06/24/i-was-7-words-away-from-being-spear-phished/
#phishing #firefox #zeroday
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Your Hard Drive May Be Listening
Researchers demonstrated that a hard drive can be used as a microphone, allowing attackers to listen in to conversations.
If you are already nervous about malicious computer attacks, then here’s some unwelcome news: there are many ways in which our technology is vulnerable to attacks based on physics, rather than on software. University of Michigan computer scientist Kevin Fu and his colleagues have found several unsettling ways that sound waves and other sources of interference could be used to commandeer household devices and personal electronics. At the American Association for the Advancement of Science (AAAS) conference in Washington, DC, two weeks ago, he reported his latest scary find: your computer hard drive could—without you knowing it—be used to record your voice.
Sensors are ubiquitous and essential—think of the thermometers in freezers for human eggs, accelerometers in airbags, and voltage monitors in pacemakers. The devices reading these sensors almost universally accept their data without question, but Fu and his colleagues have repeatedly shown that, using carefully crafted electromagnetic and acoustic interference, an attacker can take control of sensor outputs.
For example, the team has shown that appropriate electromagnetic waves can cause a thermocouple—a sensor that produces a voltage to represent the temperature—to be read as showing −1847 degrees Fahrenheit when it was actually at room temperature. They similarly caused the voltage sensor in a pacemaker to provide inaccurate signals.
The researchers produced additional mayhem with sound waves, demonstrating that accelerometers in Fitbits, smart phones, and other devices are vulnerable. In one experiment, they showed that certain high-frequency sound waves can cause a Fitbit to add steps without moving. In another test, they used a specific acoustic waveform to force the graph of the voltage output of an accelerometer to spell out the word “WALNUT.” This waveform worked even when the sound was surreptitiously embedded in a sound track, so an attacker could, in principle, control your phone’s accelerometer by tricking you into watching an online video.
The team’s latest trick is to turn a hard drive into a microphone. They tapped into the feedback system that helps control the position of the read head above the magnetic disk. When the head is buffeted by sound waves, the vibrations are reflected in the voltage signal produced by the drive’s position sensors. By reading this signal, Fu and his colleagues were able to make high-quality recordings of people speaking near the drive.
In another test, they showed that music played nearby could be recorded with high enough fidelity that the music recognition app Shazam could successfully identify the song. Malicious software could use this technique to record audio and then secretly upload it to a remote site, thus bugging a room without ever planting a microphone.
The team proposes defenses against every attack they develop, but Fu is still concerned. He worries most about the security of sensor-dependent systems that make independent decisions, such as temperature controllers in embryo labs, self-driving cars, and even spacecraft. “We just blindly trust these sensors,” he says. The industry needs to take these threats more seriously, and “computer scientists need to spend more time in physics labs.”
https://physics.aps.org/articles/v12/24
#Researchers #HardDrive #listening #conversations #attackers
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Researchers demonstrated that a hard drive can be used as a microphone, allowing attackers to listen in to conversations.
If you are already nervous about malicious computer attacks, then here’s some unwelcome news: there are many ways in which our technology is vulnerable to attacks based on physics, rather than on software. University of Michigan computer scientist Kevin Fu and his colleagues have found several unsettling ways that sound waves and other sources of interference could be used to commandeer household devices and personal electronics. At the American Association for the Advancement of Science (AAAS) conference in Washington, DC, two weeks ago, he reported his latest scary find: your computer hard drive could—without you knowing it—be used to record your voice.
Sensors are ubiquitous and essential—think of the thermometers in freezers for human eggs, accelerometers in airbags, and voltage monitors in pacemakers. The devices reading these sensors almost universally accept their data without question, but Fu and his colleagues have repeatedly shown that, using carefully crafted electromagnetic and acoustic interference, an attacker can take control of sensor outputs.
For example, the team has shown that appropriate electromagnetic waves can cause a thermocouple—a sensor that produces a voltage to represent the temperature—to be read as showing −1847 degrees Fahrenheit when it was actually at room temperature. They similarly caused the voltage sensor in a pacemaker to provide inaccurate signals.
The researchers produced additional mayhem with sound waves, demonstrating that accelerometers in Fitbits, smart phones, and other devices are vulnerable. In one experiment, they showed that certain high-frequency sound waves can cause a Fitbit to add steps without moving. In another test, they used a specific acoustic waveform to force the graph of the voltage output of an accelerometer to spell out the word “WALNUT.” This waveform worked even when the sound was surreptitiously embedded in a sound track, so an attacker could, in principle, control your phone’s accelerometer by tricking you into watching an online video.
The team’s latest trick is to turn a hard drive into a microphone. They tapped into the feedback system that helps control the position of the read head above the magnetic disk. When the head is buffeted by sound waves, the vibrations are reflected in the voltage signal produced by the drive’s position sensors. By reading this signal, Fu and his colleagues were able to make high-quality recordings of people speaking near the drive.
In another test, they showed that music played nearby could be recorded with high enough fidelity that the music recognition app Shazam could successfully identify the song. Malicious software could use this technique to record audio and then secretly upload it to a remote site, thus bugging a room without ever planting a microphone.
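The signal-processing side of this attack is conceptually simple: the drive's position signal is dominated by slow servo motion, while nearby speech shows up as a faint high-frequency component that a band-pass filter can isolate. A minimal sketch on synthetic data (the function name, signal parameters, and use of SciPy are illustrative assumptions, not the researchers' code):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def recover_audio(pes, fs, lo=300.0, hi=3400.0):
    """Band-pass a position-error signal (PES) into the speech band,
    discarding the much larger low-frequency servo motion."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, pes)

# Synthetic demo: a faint 1 kHz "speech" tone riding on servo drift
fs = 44_100
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
speech = 0.01 * np.sin(2 * np.pi * 1000 * t)  # acoustic vibration
drift = 1.0 * np.sin(2 * np.pi * 2 * t)       # normal head motion
pes = drift + speech + 0.001 * rng.standard_normal(fs)

audio = recover_audio(pes, fs)
```

In the real attack the samples would come from the drive's internal servo feedback rather than a clean array, and the usable bandwidth depends on how fast that loop is sampled.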
The team proposes defenses against every attack they develop, but Fu is still concerned. He worries most about the security of sensor-dependent systems that make independent decisions, such as temperature controllers in embryo labs, self-driving cars, and even spacecraft. “We just blindly trust these sensors,” he says. The industry needs to take these threats more seriously, and “computer scientists need to spend more time in physics labs.”
https://physics.aps.org/articles/v12/24
#Researchers #HardDrive #listening #conversations #attackers
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Western intelligence hacked 'Russia's Google' Yandex to spy on accounts - sources
Hackers working for Western intelligence agencies broke into Russian internet search company Yandex in late 2018, deploying a rare type of malware in an attempt to spy on user accounts, four people with knowledge of the matter told Reuters.
The malware, called Regin, is known to be used by the “Five Eyes” intelligence-sharing alliance of the United States, Britain, Australia, New Zealand and Canada, the sources said. Intelligence agencies in those countries declined to comment.
Western cyberattacks against Russia are seldom acknowledged or spoken about in public. It could not be determined which of the five countries was behind the attack on Yandex, said sources in Russia and elsewhere, three of whom had direct knowledge of the hack. The breach took place between October and November 2018.
Yandex spokesman Ilya Grabovsky acknowledged the incident in a statement to Reuters, but declined to provide further details. “This particular attack was detected at a very early stage by the Yandex security team. It was fully neutralized before any damage was done,” he said.
The company also said that “the Yandex security team’s response ensured that no user data was compromised by the attack.”
The company, widely known as “Russia’s Google” for its array of online services from internet search to email and taxi reservations, says it has more than 108 million monthly users in Russia. It also operates in Belarus, Kazakhstan and Turkey.
The sources who described the attack to Reuters said the hackers appeared to be searching for technical information that could explain how Yandex authenticates user accounts. Such information could help a spy agency impersonate a Yandex user and access their private messages.
The hack of Yandex’s research and development unit was intended for espionage purposes rather than to disrupt or steal intellectual property, the sources said. The hackers covertly maintained access to Yandex for at least several weeks without being detected, they said.
The Regin malware was identified as a Five Eyes tool in 2014 following revelations by former U.S. National Security Agency (NSA) contractor Edward Snowden.
Reports by The Intercept, in partnership with a Dutch and Belgian newspaper, tied an earlier version of Regin to a hack at Belgian telecom firm Belgacom in 2013 and said British spy agency Government Communications Headquarters (GCHQ) and the NSA were responsible. At the time GCHQ declined to comment and the NSA denied involvement.
Read more:
https://www.reuters.com/article/us-usa-cyber-yandex-exclusive/exclusive-western-intelligence-hacked-russias-google-yandex-to-spy-on-accounts-sources-idUSKCN1TS2SX
#hacker #attack #russia #spy #malware #google #yandex
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
US security company discovers numerous vulnerabilities in Huawei network equipment
According to the US company Finite State, 55 percent of firmware images have at least one serious security vulnerability. One reason for this is outdated open-source components such as OpenSSL.
The American IoT security company Finite State has investigated the firmware of Huawei's network devices and discovered numerous security holes: "There is clear evidence that zero-day vulnerabilities based on memory corruption are abundant in Huawei firmware. In summary, if you add known remote access vulnerabilities and possible backdoors, there seems to be a high risk of compromise with Huawei devices," Finite State writes in its study.
Finite State also claims to have found that Huawei's public commitment to improving the security of its products has not yet produced results. Instead, the situation has worsened. "From a technical point of view, the Huawei devices are among the worst I have ever analyzed," Finite State states.
According to the company, the study is based on examining 1.5 million files from 10,000 firmware images from 558 Huawei enterprise network products. In more than 55 percent of the firmware images, security researchers found at least one critical vulnerability. These include preset credentials, insecure handling of cryptographic keys, and signs of poor software development.
On average, Finite State found 102 known vulnerabilities in each Huawei firmware image, as well as evidence of zero-day vulnerabilities. Open-source components such as OpenSSL, in particular, are reportedly not updated regularly: on average they are more than five years old, and thousands of instances are said to be more than ten years old. The oldest OpenSSL version found in a Huawei firmware image was released in 1999.
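Findings like a 1999-era OpenSSL build are detectable with a very simple static scan, because OpenSSL compiles its version banner into the library as a plain string. A hypothetical sketch of that idea (the regex, function name, and fake firmware bytes are illustrative, not Finite State's tooling):

```python
import re

# OpenSSL embeds a banner like "OpenSSL 1.0.2k  26 Jan 2017" in its
# compiled library, so a raw byte scan reveals the version in use.
BANNER = re.compile(rb"OpenSSL (\d+\.\d+\.\d+[a-z]?)")

def openssl_versions(blob: bytes) -> list[str]:
    """Return every OpenSSL version banner found in a firmware blob."""
    return [m.decode() for m in BANNER.findall(blob)]

# Demo on a fake firmware image containing an ancient build
firmware = b"\x7fELF...junk...OpenSSL 0.9.8e 23 Feb 2007...more junk"
print(openssl_versions(firmware))  # → ['0.9.8e']
```

Real firmware images are usually packed or compressed, so a scanner would first unpack them (as Finite State's pipeline presumably does) before searching the extracted filesystems.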
Finite State PDF:
https://finitestate.io/wp-content/uploads/2019/06/Finite-State-SCA1-Final.pdf
Read more:
https://www.zdnet.de/88363849/us-sicherheitsfirma-entdeckt-zahlreiche-sicherheitsluecken-in-netzwerkausruestung-von-huawei/
#huawei #FiniteState #study #analyzed #security #vulnerabilities #network #devices
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
The Pentagon has a laser that can identify people from a distance—by their heartbeat
The Jetson prototype can pick up on a unique cardiac signature from 200 meters away, even through clothes.
Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.
It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.
Long-range detection
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”
Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).
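Once the vibrometer returns a surface-displacement trace, extracting the pulse is a textbook filtering problem: band-pass to the cardiac band and count beats. A rough illustration on synthetic data (the function name, frequency bands, and thresholds are assumptions for this sketch, not details of the Jetson device):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def heart_rate_bpm(vib, fs, lo=0.8, hi=3.0):
    """Estimate heart rate (bpm) from a vibrometry trace by
    band-passing to the cardiac band (0.8-3 Hz, i.e. 48-180 bpm)
    and counting the dominant peaks."""
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    cardiac = sosfiltfilt(sos, vib)
    # Peaks must be at least 0.4 s apart and reasonably tall
    peaks, _ = find_peaks(cardiac, distance=int(0.4 * fs),
                          height=0.5 * cardiac.max())
    return len(peaks) / (len(vib) / fs / 60.0)

# Synthetic 30-second trace: a 1.2 Hz heartbeat (72 bpm) buried
# under much larger breathing motion plus sensor noise
fs = 200
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(0)
vib = (0.05 * np.sin(2 * np.pi * 1.2 * t)    # heartbeat
       + 0.5 * np.sin(2 * np.pi * 0.25 * t)  # breathing
       + 0.01 * rng.standard_normal(t.size))

bpm = heart_rate_bpm(vib, fs)
```

Identifying *who* the heart belongs to requires comparing the full beat waveform against an enrolled template, not just the rate, but the front-end filtering step is the same.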
The most common way of carrying out remote biometric identification is face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.
Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.
More info:
https://www.technologyreview.com/s/613891/the-pentagon-has-a-laser-that-can-identify-people-from-a-distanceby-their-heartbeat/
#pentagon #laser #heartbeat #recognition #biometric #identification
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
I Shouldn’t Have to Publish This in The New York Times
The way we regulated social media platforms didn’t end harassment, extremism or disinformation. It only gave them more power and made the problem worse.
I shouldn’t have to publish this in The New York Times.
Ten years ago, I could have published this on my personal website, or shared it on one of the big social media platforms. But that was before the United States government decided to regulate both the social media platforms and blogging sites as if they were newspapers, making them legally responsible for the content they published.
The move was spurred on by an unholy and unlikely coalition of media companies crying copyright; national security experts wringing their hands about terrorism; and people who were dismayed that our digital public squares had become infested by fascists, harassers and cybercriminals. Bit by bit, the legal immunity of the platforms was eroded — from the judges who put Facebook on the line for the platform’s inaction during the Provo Uprising to the lawmakers who amended section 230 of the Communications Decency Act in a bid to get Twitter to clean up its Nazi problem.
While the media in the United States remained protected by the First Amendment, members of the press in other countries were not so lucky. The rest of the world responded to the crisis by tightening rules on acceptable speech. But even the most prolific news service — a giant wire service like AP-AFP or Thomson-Reuters-TransCanada-Huawei — only publishes several thousand articles per day. And thanks to their armies of lawyers, editors and insurance underwriters, they are able to make the news available without falling afoul of new rules prohibiting certain kinds of speech — including everything from Saudi blasphemy rules to Austria’s ban on calling politicians “fascists” to Thailand’s stringent lèse-majesté rules. They can ensure that news in Singapore is not “out of bounds” and that op-eds in Britain don’t call for the abolition of the monarchy.
But not the platforms — they couldn’t hope to make a dent in their users’ personal expressions. From YouTube’s 2,000 hours of video uploaded every minute to Facebook-Weibo’s three billion daily updates, there was no scalable way to carefully examine the contributions of every user and assess whether they violated any of these new laws. So the platforms fixed this the Silicon Valley way: They automated it. Badly.
Which is why I have to publish this in The New York Times.
The platforms and personal websites are fine if you want to talk about sports, relate your kids’ latest escapades or shop. But if you want to write something about how the platforms and government legislation can’t tell the difference between sex trafficking and sex, nudity and pornography, terrorism investigations and terrorism itself or copyright infringement and parody, you’re out of luck. Any one of those keywords will give the filters an incurable case of machine anxiety — but all of them together? Forget it.
If you’re thinking, “Well, all that stuff belongs in the newspaper,” then you’ve fallen into a trap: Democracies aren’t strengthened when a professional class gets to tell us what our opinions are allowed to be.
And the worst part is, the new regulations haven’t ended harassment, extremism or disinformation. Hardly a day goes by without some post full of outright Naziism, flat-eartherism and climate trutherism going viral. There are whole armies of Nazis and conspiracy theorists who do nothing but test the filters, day and night, using custom software to find the adversarial examples that slip past the filters’ machine-learning classifiers.
It didn’t have to be this way. Once upon a time, the internet teemed with experimental, personal publications. The mergers and acquisitions and anticompetitive bullying that gave rise to the platforms and killed personal publishing made Big Tech both reviled and powerful, and they were targeted for breakups by ambitious lawmakers. Had we gone that route, we might have an internet that was robust, resilient, variegated and dynamic.
Think back to the days when companies like Apple and Google — back when they were stand-alone companies — bought hundreds of start-ups every year. What if we’d put a halt to the practice, re-establishing the traditional antitrust rules against “mergers to monopoly” and acquiring your nascent competitors? What if we’d established an absolute legal defense for new market entrants seeking to compete with established monopolists?
Most of these new companies would have failed — if only because most new ventures fail — but the survivors would have challenged the Big Tech giants, eroding their profits and giving them less lobbying capital. They would have competed to give the best possible deals to the industries that tech was devouring, like entertainment and news. And they would have competed with the news and entertainment monopolies to offer better deals to the pixel-stained wretches who produced the “content” that was the source of all their profits.
But instead, we decided to vest the platforms with statelike duties to punish them for their domination. In doing so, we cemented that domination. Only the largest companies can afford the kinds of filters we’ve demanded of them, and that means that any would-be trustbuster who wants to break up the companies and bring them to heel first must unwind the mesh of obligations we’ve ensnared the platforms in and build new, state-based mechanisms to perform those duties.
Our first mistake was giving the platforms the right to decide who could speak and what they could say. Our second mistake was giving them the duty to make that call, a billion times a day.
https://www.nytimes.com/2019/06/24/opinion/future-free-speech-social-media-platforms.html
#Facebook #DeleteFacebook #USA #harassment #extremism #disinformation
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN