BlackBox (Security) Archiv
👉🏼 Latest viruses and malware threats
👉🏼 Latest patches, tips and tricks
👉🏼 Threats to security/privacy/democracy on the Internet

👉🏼 Find us on Matrix: https://matrix.to/#/!wNywwUkYshTVAFCAzw:matrix.org
Therefore, the server could multiply your two encrypted quantities and hand you back the encrypted result without ever learning the values of x1 or x2. When you decrypt the returned result, you obtain the same value you would have gotten by multiplying the two original, unencrypted quantities. Impressive, isn't it?
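
This multiplicative homomorphism is easy to check in a few lines of Python. The sketch below uses tiny textbook RSA parameters (p = 61, q = 53, e = 17), chosen purely for illustration and utterly insecure in practice:

```python
# Toy demonstration of RSA's multiplicative homomorphism.
# Textbook parameters only -- NOT secure for real use.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (requires Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

x1, x2 = 7, 9
c1, c2 = encrypt(x1), encrypt(x2)

# The "server" multiplies the ciphertexts without ever seeing x1 or x2...
c_product = (c1 * c2) % n

# ...and decrypting the result yields the product of the plaintexts.
print(decrypt(c_product))  # 63
```

Note that the identity Encrypt(x1) · Encrypt(x2) = Encrypt(x1 · x2) mod n only holds as long as the plaintext product stays below the modulus n.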

Many other cryptographic algorithms are, like RSA, partially homomorphic: ElGamal also supports multiplication, while Paillier supports addition.

Things get enormously more complicated when you seek "fully" homomorphic encryption (FHE), capable of supporting both addition and multiplication. Although the scientific literature contains many FHE proposals, the most prominent is the one put forward by Craig Gentry in 2009 and refined over the years by Gentry himself and by other authors. His proposal rests on an abstract algebraic structure known as a "lattice". You have surely seen hundreds of lattices on windows and balconies. The ones sold in DIY stores are two-dimensional lattices: wooden or metal slats that cross at certain points. Now picture that same lattice in 3D. Then add another dimension. And another. And another, all the way up to n dimensions. Got an n-dimensional lattice in your head? Complicated, isn't it? As you can imagine, finding short vectors in such a lattice is no easy task. In fact, finding the shortest nonzero vector is so hard that it is known as the Shortest Vector Problem (SVP), and it is precisely the "intractable" mathematical problem underlying lattice-based encryption. Indeed, this family of cryptosystems is one of the most serious cryptographic candidates for the post-quantum era.

Best of all, with the right variants, lattices also support fully homomorphic encryption. But, and here comes a big, big BUT, these algorithms are tremendously inefficient. Operating on encrypted data can be up to 10 orders of magnitude slower than operating on plaintext (that is, 10¹⁰ times slower or, put another way, a one followed by ten zeros: 10,000,000,000). In short, they remain unusable for real-world applications. Until they reach acceptable speeds, we will not see large-scale deployment in cloud services. Meanwhile, research in this field continues apace.

In the meantime, cryptographers are not sitting on their hands. If computing on encrypted data is such a formidable challenge, why not tackle simpler versions of the problem? Perhaps you don't trust your cloud provider. Could the workload be split between the two of you? Other cryptographic schemes aim to let several mutually distrustful parties compute on data without having to reveal it to one another.
https://empresas.blogthinkbig.com/computacion-segura-en-la-nube-datos-cifrados-sin-descifrarlos-parte-1/

#nube #seguridad #cifrado
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Secure multi-party computation (SMPC)

Imagine you are chatting with two coworkers. The conversation suddenly turns to the bonuses you each earn. All three of you would like to know who gets the highest bonus, but none of you wants to reveal the amount of your own. How can you find out? One solution is to trust a third party: each of you reveals your bonus amount to them and, once all three are known, they announce who earns the highest one.

Now imagine you work in the threat-intelligence unit of a cybersecurity company. An attack has occurred and you have a list of suspects. The intelligence units of other companies have their own suspect lists. You would all like to know which suspects appear on every list, but neither your company nor the others want to reveal their complete lists. How can you compute the intersection of these lists? Once again, an immediate solution would be for each company to hand its list to a trusted third party, who would then compute the intersection of all the suspect lists.

Both scenarios rely on a trusted third party. But what if you don't trust that third party? After all, assuming that a party is trustworthy is assuming a lot. How else could these dilemmas be solved, without resorting to third parties and with the same security guarantees?

This is precisely what secure multi-party computation offers: protocols that emulate the trusted third party. They make it possible to compute a function over several input values so that only the result of evaluating the function is revealed, while the input values themselves remain private.

Expressed mathematically: n participants, p1, p2, …, pn, each holding private data d1, d2, …, dn respectively, want to compute the value of a public function over that private data, F(d1, d2, …, dn), while keeping their own inputs secret.

Back to the bonus example. If the inputs x, y, z represent your bonuses, you want to learn the highest of the three without revealing any individual value. In other words, you want to compute:

F(x, y, z) = max (x, y, z)
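
Computing a maximum privately requires comparison protocols that are too involved to sketch here, but the classic SMPC warm-up, privately computing the *sum* of the bonuses by additive masking, shows the core idea: each party only ever sees a randomly masked running total. The sketch below simulates the message flow of that ring protocol in one process; the modulus and bonus figures are illustrative assumptions, not part of the original example:

```python
import random

M = 2**32  # work modulo a large number so any partial total looks uniformly random

def sum_of_bonuses(bonuses):
    """Ring protocol: party 1 adds a secret random mask, each party adds
    its bonus to the masked total, and party 1 finally removes the mask."""
    r = random.randrange(M)       # party 1's secret mask
    running = r
    for b in bonuses:             # each party in turn adds its private bonus
        running = (running + b) % M
    return (running - r) % M      # party 1 unmasks and announces the sum

print(sum_of_bonuses([5000, 7200, 6100]))  # 18300
```

No party learns another party's individual bonus, only the final sum, provided the true sum stays below M and no two parties collude against a third.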

These protocols are expected to guarantee a number of security requirements:

✳️ Correctness: even if some of the parties cheat, the final result will be correct.
✳️ Privacy: only the result of evaluating the function is learned, not the values of the inputs (other than one's own, of course).
✳️ Independence of inputs: no party can choose its input as a function of another party's input.
✳️ Fairness: if any party learns the result of the evaluation, then every party will learn the same result.
✳️ Guaranteed output delivery: corrupted parties cannot prevent the honest parties from receiving the result.

There are different cryptographic protocols for carrying out this secure computation by distributing it among the parties. The best known is Yao's Garbled Circuit protocol. The idea is to simulate any mathematical function with a Boolean circuit built exclusively from logic gates, specifically AND and XOR. For very simple functions, these circuits can even be designed by hand. Naturally, as functions grow more and more complex, the circuits grow in complexity along with them. As you can imagine, simulating AES with AND and XOR gates is no easy task, yet it is possible, with some 32,000 gates! In fact, the most recent implementations are highly efficient, running in just a few milliseconds.
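
As a toy illustration of building functions from AND and XOR gates only, here is a 1-bit "greater than" comparator, the kind of gate-level building block a circuit for the bonus comparison would be assembled from. This sketch only evaluates the circuit in the clear; a garbled-circuit protocol would additionally encrypt each gate's truth table:

```python
# Evaluate a tiny Boolean circuit using only AND and XOR gates.
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def greater(a, b):
    """1-bit 'a > b' from AND/XOR only: a AND (NOT b), where NOT b = b XOR 1."""
    return AND(a, XOR(b, 1))

# Truth table: only a=1, b=0 yields 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, greater(a, b))
```

Multi-bit comparators (and eventually something as large as AES) are built by wiring thousands of such gates together.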
Of course, secure multi-party computation is far more complicated than this. The adversary may be passive or active; the functions to be evaluated may be more or less complex; protocols may tolerate a larger or smaller number of active adversaries, impose stronger or weaker security requirements, demand more or less computation time, require every node in the network to be connected to every other or only that some path exist between any two nodes, communicate synchronously or asynchronously, and so on.

Some companies have begun to commercialize SMPC solutions in real-world scenarios: Private Data as a Service applications, such as the Sharemind and Jana databases; key-management applications, such as the products from Sepior and Unbound; and point solutions, such as Partisia's.

In short, secure multi-party computation is a field in continuous expansion, with a multitude of protocols, scenarios and use cases, and we are still very far from having heard the last word on it.
Threshold cryptography

Cryptography has become a technological standard for protecting the confidentiality of data. A basic design rule in cryptography is known as Kerckhoffs's Principle: everything about a cryptosystem is public knowledge except the key.

The question is: if you store your data encrypted, where do you store the encryption key? Ultimately, the security of an encryption system rests on how its keys are managed. Keys become cryptography's Achilles' heel. In fact, they are not even safe in a computer's memory: Heartbleed, Spectre and Meltdown come to mind as recent examples of vulnerabilities that allowed attackers to read private regions of memory and obtain, among other data, encryption keys. Likewise, side-channel attacks can leak information about keys through variations in electromagnetic emissions or power consumption. Worse still, keys can remain imprinted in DRAM even after the machine is powered off. Is there no way, then, to guarantee the security of keys?

One solution is to split the key into two or more shares, so that the encrypted information cannot be decrypted unless all (or a minimum number of) the key shares are brought together. For example, to split a key K into three shares, K1, K2 and K3, two keys K1 and K2 are chosen at random, each the same length as K. The third share is computed as K3 = K1 ⊕ K2 ⊕ K, where ⊕ is the exclusive-OR (XOR) operation. No two shares reveal any information about the secret key: all three are needed to recover K (we leave it as an exercise for the reader to check that this is indeed the case).
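
This 3-of-3 XOR split takes only a few lines of Python (the key length and helper names below are our own choices for illustration):

```python
import secrets

def split_key(key):
    """3-of-3 XOR sharing: K1 and K2 are uniformly random, K3 = K1 xor K2 xor K."""
    k1 = secrets.token_bytes(len(key))
    k2 = secrets.token_bytes(len(key))
    k3 = bytes(a ^ b ^ c for a, b, c in zip(k1, k2, key))
    return k1, k2, k3

def combine(k1, k2, k3):
    """XOR the three shares back together to recover the original key."""
    return bytes(a ^ b ^ c for a, b, c in zip(k1, k2, k3))

key = secrets.token_bytes(16)
k1, k2, k3 = split_key(key)
assert combine(k1, k2, k3) == key
# Any two shares alone are uniformly random and reveal nothing about the key.
```

Because each of K1 and K2 is uniformly random, any pair of shares is statistically independent of K, which is exactly the property the exercise above asks the reader to verify.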

The scheme just described has the "3 of 3" property. More generally, a secret sharing scheme is "k of n" (with n ≥ k ≥ 1) if combining k shares recovers a secret shared among n parties, while combining k − 1 shares reveals nothing about the secret.
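
The canonical way to realize a "k of n" scheme is Shamir's secret sharing, which the text does not name explicitly but which fits here: hide the secret as the constant term of a random degree-(k − 1) polynomial and hand each party one point on it. Any k points determine the polynomial by Lagrange interpolation; k − 1 points reveal nothing. A minimal sketch over a prime field (the prime and parameters are illustrative assumptions):

```python
import random

P = 2**61 - 1  # a Mersenne prime large enough to hold the secret

def share(secret, k, n):
    """Shamir k-of-n: secret is f(0) for a random degree-(k-1) polynomial f."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

pieces = share(123456789, k=3, n=5)
assert reconstruct(pieces[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(pieces[2:]) == 123456789
```

With k − 1 shares, every possible secret remains equally likely, which is what makes the scheme information-theoretically secure.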

And this is how we arrive at threshold cryptography. The goal is no longer simply to split the key into several shares, as in the simple example above, but to perform cryptographic operations with each key share in such a way that, when they are combined, the result is the same as if the operation had been performed with the complete key. RSA will once again help us understand this more clearly.

We saw in the previous installment that the public key consists of two numbers: an exponent, e, and a modulus, n, which in turn is the product of two primes, n = p · q. The private key, for its part, is a number d such that e · d = 1 mod (p − 1) · (q − 1).

To sign a message m with RSA, one computes s = m^d mod n. Verifying the signature is easy for anyone who knows the public key, by computing s^e = m^(e·d) = m mod n.
How can a group of people cooperate to sign a message? Instead of one person signing the message with the private key d, the key can be split into several shares, for example three: d1, d2, d3, such that d1 + d2 + d3 = d mod (p − 1) · (q − 1).

Now each party can independently sign the same message m: s1 = m^d1, s2 = m^d2, s3 = m^d3, so that the full signature is the product of the three partial signatures: s = s1 · s2 · s3. It is easy to verify that s1 · s2 · s3 = m^(d1 + d2 + d3) = m^d mod n. In other words, a complete signature can only be produced if every party signs the message with its share of the private key. This protects the private key, d, since it is never stored whole on any server or in any memory. The three key shares never even need to be brought together, because each party's signing operation is independent of the rest. One key share, or even two, could be compromised and the complete key would still remain secure.
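
The three-way signature can be checked numerically with the same kind of tiny textbook RSA parameters used earlier (insecure, for illustration only):

```python
import random

# Toy textbook-RSA parameters -- NOT secure for real use.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)  # full private exponent (requires Python 3.8+)

# Split the private exponent additively: d1 + d2 + d3 = d (mod phi).
d1 = random.randrange(phi)
d2 = random.randrange(phi)
d3 = (d - d1 - d2) % phi

m = 42  # the message (already hashed and encoded, in a real scheme)

# Each party signs independently with its own share...
s1, s2, s3 = pow(m, d1, n), pow(m, d2, n), pow(m, d3, n)

# ...and the partial signatures multiply into the full signature.
s = (s1 * s2 * s3) % n
assert s == pow(m, d, n)       # identical to signing with the complete key
assert pow(s, e, n) == m % n   # and it verifies against the public key
```

The shares are combined only as ciphertext products, so the full exponent d never has to exist in any single machine's memory.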

The most sophisticated threshold cryptography schemes have the "k of n" property already mentioned. This property provides fault tolerance: a key share could be lost or compromised and the cryptographic operation could still be performed with the remaining shares. It also enforces cooperation: no single party can perform the complete cryptographic operation; at least k shares must take part. From an attacker's perspective, compromising one key share is useless: at least k shares would have to be compromised.

As we can see, threshold cryptography eliminates single points of failure in cryptography, allowing the responsibility for key custody to be redistributed. And don't go thinking it all stays at the level of mathematical exercises for graduate courses: the key-management products from Sepior and Unbound are among the most advanced threshold-cryptography-based solutions available today. Like the other fields discussed, it is in constant expansion, and we will see new results soon.
https://empresas.blogthinkbig.com/computacion-segura-en-la-nube-datos-cifrados-sin-descifrarlos-parte-2/

#nube #seguridad #cifrado
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
NSA Starts Contributing Low-Level Code to UEFI BIOS Alternative

The NSA has started assigning developers to the Coreboot project, an open source alternative to proprietary UEFI/BIOS firmware. The NSA's Eugene Myers has begun contributing SMI Transfer Monitor (STM) implementation code for the x86 processor. Myers works for the NSA's Trusted Systems Research Group, which, according to the agency's website, is meant to "conduct and sponsor research in the technologies and techniques which will secure America's information systems of tomorrow."

Can The NSA Be Trusted With Such Low-Level Code?

The NSA has worked on security projects embraced by the public before, including Security-Enhanced Linux (SELinux), a security module for Linux. More recently, the NSA released the Ghidra reverse engineering tool as open source, and Coreboot developers have adopted it to more easily reverse-engineer hardware firmware.

Last year, Myers published a paper describing how the NSA's STM implementation could work. All Coreboot code, including all the STM contributions from the NSA, is open source, so anyone could verify that there is no backdoor in there -- in theory.

In practice, the NSA could also have written the code in a less-than-secure way, with vulnerabilities that are hard to detect without experienced security researchers reviewing them. Alternatively, the NSA could update this implementation years later, when there are fewer eyes on the STM code and an update would no longer make headlines.

This wouldn’t be completely out of the question for an agency like the NSA. After all, the NSA succeeded in pushing a backdoor through the NIST standardization process years ago. The agency was also accused by EFF co-founder John Gilmore of sabotaging the IPsec protocol by making it too complex to ever be secure (something that would benefit an espionage agency).

More recently, it also tried to push two encryption algorithms through the ISO standardization process, but the reviewers overwhelmingly rejected the algorithms based on trust concerns and NSA’s failure to answer some technical questions.

Read more:
https://www.tomshardware.com/news/nsa-contributes-low-level-stm-coreboot,39704.html

#nsa #code #UEFI #BIOS #coreboot
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
I was 7 words away from being spear-phished

Three weeks ago I received a very flattering email from the University of Cambridge, asking me to judge the Adam Smith Prize for Economics:

"Dear Robert,

My name is Gregory Harris. I’m one of the Adam Smith Prize Organizers.

Each year we update the team of independent specialists who could assess the quality of the competing projects:
https://people.ds.cam.ac.uk/grh37/awards/Adam_Smith_Prize

Our colleagues have recommended you as an experienced specialist in this field.

We need your assistance in evaluating several projects for Adam Smith Prize.

Looking forward to receiving your reply.

Best regards, Gregory Harris
"

I wouldn’t say I’m an “expert” in economics exactly, but the university’s request wasn’t that surprising. I do have a subscription to The Economist, and I do understand - very roughly - how and why central banks set interest rates. I’ve read “Capital in the Twenty-First Century” and basically got the gist of the first half. I’ve written a few blog posts that I’ve generously tagged as “economics”, and perhaps there’s a new discipline of computational economics that I might be able to shed some software industry insight onto. Overall it felt perfectly plausible that the organizers of the Adam Smith prize would want my perspective. I assumed that being a judge for the Adam Smith Prize would be a lot of work and would not be paid, but it would still be great fuel for the ole ego.

All of this said, in my heart of hearts I knew that some wires had probably got crossed somewhere. There was no doubt a Professor Hobert Reaton at UC San Diego, expert in Heckscher-Ohlin trade theory, who was patiently waiting for the chance to further his career through a Transatlantic collaboration. Nonetheless, I judged this a thread worth pulling and a mild fantasy worth entertaining.

I reflexively did some basic security hygiene checks. The email was from an @cam.ac.uk email address. I hovered over the link in the email - https://people.ds.cam.ac.uk/grh37/awards/Adam_Smith_Prize. It pointed to the same URL that the email text claimed it did, and was located on a valid cam.ac.uk subdomain. It did strike me as a little odd that the page was hosted inside grh37’s personal directory instead of the main economics department’s site; but hey, it’s probably less bureaucracy that way. I clicked on the link and read a little about the history of the Adam Smith prize.

If “Gregory” had added just 7 extra words to this page - “THIS PAGE MUST BE VIEWED IN FIREFOX” - I would have been screwed. More on that later.

Next I think I visited the root cam.ac.uk website to make sure that this really was the domain of the University of Cambridge. I did a quick Google for gregory harris cambridge to see how much of a big deal he was. I couldn’t find much - I vaguely remember turning up only a very sparse LinkedIn account. But that’s fine; not everyone has to have a Twitter profile or a cooking blog.

I remember thinking that Gregory’s email seemed very curt and poorly phrased, and that he could use a few lessons on how to most effectively ask strangers on the internet to do free work for him. He was lucky that I didn’t care about such trivialities. He was also lucky that I didn’t care that he’d missed a “the” in We need your assistance in evaluating several projects for Adam Smith Prize. Apparently I further didn’t care that he’d unnecessarily capitalized the word Organizers in Adam Smith Prize Organizers, or that he didn’t seem to understand that a paragraph can contain more than a single sentence.

At the time I just thought he wasn’t a very good writer.

I sent Gregory a short reply, expressing preliminary interest and asking for more information…

Read more:
https://robertheaton.com/2019/06/24/i-was-7-words-away-from-being-spear-phished/

#phishing #firefox #zeroday
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Your Hard Drive May Be Listening

Researchers demonstrated that a hard drive can be used as a microphone, allowing attackers to listen in to conversations.

If you are already nervous about malicious computer attacks, then here’s some unwelcome news: there are many ways in which our technology is vulnerable to attacks based on physics, rather than on software. University of Michigan computer scientist Kevin Fu and his colleagues have found several unsettling ways that sound waves and other sources of interference could be used to commandeer household devices and personal electronics. At the American Association for the Advancement of Science (AAAS) conference in Washington, DC, two weeks ago, he reported his latest scary find: your computer hard drive could—without you knowing it—be used to record your voice.

Sensors are ubiquitous and essential—think of the thermometers in freezers for human eggs, accelerometers in airbags, and voltage monitors in pacemakers. The devices reading these sensors almost universally accept their data without question, but Fu and his colleagues have repeatedly shown that, using carefully crafted electromagnetic and acoustic interference, an attacker can take control of sensor outputs.

For example, the team has shown that appropriate electromagnetic waves can cause a thermocouple—a sensor that produces a voltage to represent the temperature—to be read as showing −1847 degrees Fahrenheit when it was actually at room temperature. They similarly caused the voltage sensor in a pacemaker to provide inaccurate signals.

The researchers produced additional mayhem with sound waves, demonstrating that accelerometers in Fitbits, smart phones, and other devices are vulnerable. In one experiment, they showed that certain high-frequency sound waves can cause a Fitbit to add steps without moving. In another test, they used a specific acoustic waveform to force the graph of the voltage output of an accelerometer to spell out the word “WALNUT.” This waveform worked even when the sound was surreptitiously embedded in a sound track, so an attacker could, in principle, control your phone’s accelerometer by tricking you into watching an online video.

The team’s latest trick is to turn a hard drive into a microphone. They tapped into the feedback system that helps control the position of the read head above the magnetic disk. When the head is buffeted by sound waves, the vibrations are reflected in the voltage signal produced by the drive’s position sensors. By reading this signal, Fu and his colleagues were able to make high-quality recordings of people speaking near the drive.

In another test, they showed that music played nearby could be recorded with high enough fidelity that the music recognition app Shazam could successfully identify the song. Malicious software could use this technique to record audio and then secretly upload it to a remote site, thus bugging a room without ever planting a microphone.

The team proposes defenses against every attack they develop, but Fu is still concerned. He worries most about the security of sensor-dependent systems that make independent decisions, such as temperature controllers in embryo labs, self-driving cars, and even spacecraft. “We just blindly trust these sensors,” he says. The industry needs to take these threats more seriously, and “computer scientists need to spend more time in physics labs.”

https://physics.aps.org/articles/v12/24

#Researchers #HardDrive #listening #conversations #attackers
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Western intelligence hacked 'Russia's Google' Yandex to spy on accounts - sources

Hackers working for Western intelligence agencies broke into Russian internet search company Yandex in late 2018 deploying a rare type of malware in an attempt to spy on user accounts, four people with knowledge of the matter told Reuters.

The malware, called Regin, is known to be used by the “Five Eyes” intelligence-sharing alliance of the United States, Britain, Australia, New Zealand and Canada, the sources said. Intelligence agencies in those countries declined to comment.

Western cyberattacks against Russia are seldom acknowledged or spoken about in public. It could not be determined which of the five countries was behind the attack on Yandex, said sources in Russia and elsewhere, three of whom had direct knowledge of the hack. The breach took place between October and November 2018.

Yandex spokesman Ilya Grabovsky acknowledged the incident in a statement to Reuters, but declined to provide further details. “This particular attack was detected at a very early stage by the Yandex security team. It was fully neutralized before any damage was done,” he said.

The company also said that “the Yandex security team’s response ensured that no user data was compromised by the attack.”

The company, widely known as “Russia’s Google” for its array of online services from internet search to email and taxi reservations, says it has more than 108 million monthly users in Russia. It also operates in Belarus, Kazakhstan and Turkey.

The sources who described the attack to Reuters said the hackers appeared to be searching for technical information that could explain how Yandex authenticates user accounts. Such information could help a spy agency impersonate a Yandex user and access their private messages.

The hack of Yandex’s research and development unit was intended for espionage purposes rather than to disrupt or steal intellectual property, the sources said. The hackers covertly maintained access to Yandex for at least several weeks without being detected, they said.

The Regin malware was identified as a Five Eyes tool in 2014 following revelations by former U.S. National Security Agency (NSA) contractor Edward Snowden.

Reports by The Intercept, in partnership with a Dutch and Belgian newspaper, tied an earlier version of Regin to a hack at Belgian telecom firm Belgacom in 2013 and said British spy agency Government Communications Headquarters (GCHQ) and the NSA were responsible. At the time GCHQ declined to comment and the NSA denied involvement.

Read more:
https://www.reuters.com/article/us-usa-cyber-yandex-exclusive/exclusive-western-intelligence-hacked-russias-google-yandex-to-spy-on-accounts-sources-idUSKCN1TS2SX

#hacker #attack #russia #spy #malware #google #yandex
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
US security company discovers numerous vulnerabilities in Huawei network equipment

According to the US company Finite State, 55 percent of firmware images have at least one serious security vulnerability. The main reason is outdated open source components such as OpenSSL.

The American IoT security company Finite State has investigated the firmware of Huawei's network devices and discovered numerous security holes: "There is clear evidence that zero-day vulnerabilities based on memory errors are abundant in Huawei firmware. In summary, if you add known remote access vulnerabilities and possible backdoors, there seems to be a high risk of compromise with Huawei devices," Finite State writes in its study.

Finite State also claims to have found that Huawei's public commitment to improving the security of its products has not yet produced results. Instead, the situation has worsened. "From a technical point of view, the Huawei devices are among the worst I have ever analyzed," Finite State writes.

According to the company, the study is based on examining 1.5 million files from 10,000 firmware images from 558 Huawei enterprise network products. In more than 55 percent of the firmware images, security researchers found at least one critical vulnerability. These include preset credentials, insecure handling of cryptographic keys, and signs of poor software development.

On average, Finite State found 102 known vulnerabilities in each Huawei firmware image, as well as evidence of zero-day vulnerabilities. Open source components such as OpenSSL, in particular, are not updated regularly. On average, the open source components are more than five years old, and thousands of instances of these components are said to be more than ten years old. The oldest OpenSSL version found in a Huawei firmware image was released in 1999.

Finite State PDF:
https://finitestate.io/wp-content/uploads/2019/06/Finite-State-SCA1-Final.pdf

Read more:
https://www.zdnet.de/88363849/us-sicherheitsfirma-entdeckt-zahlreiche-sicherheitsluecken-in-netzwerkausruestung-von-huawei/

#huawei #FiniteState #study #analyzed #security #vulnerabilities #network #devices
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
The Pentagon has a laser that can identify people from a distance—by their heartbeat

The Jetson prototype can pick up on a unique cardiac signature from 200 meters away, even through clothes.

Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.

It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.

Long-range detection

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combating Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

The most common way of carrying out remote biometric identification is face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.

Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.

More info:
https://www.technologyreview.com/s/613891/the-pentagon-has-a-laser-that-can-identify-people-from-a-distanceby-their-heartbeat/

#pentagon #laser #heartbeat #recognition #biometric #identification
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
I Shouldn’t Have to Publish This in The New York Times

The way we regulated social media platforms didn’t end harassment, extremism or disinformation. It only gave them more power and made the problem worse.

I shouldn’t have to publish this in The New York Times.

Ten years ago, I could have published this on my personal website, or shared it on one of the big social media platforms. But that was before the United States government decided to regulate both the social media platforms and blogging sites as if they were newspapers, making them legally responsible for the content they published.

The move was spurred on by an unholy and unlikely coalition of media companies crying copyright; national security experts wringing their hands about terrorism; and people who were dismayed that our digital public squares had become infested by fascists, harassers and cybercriminals. Bit by bit, the legal immunity of the platforms was eroded — from the judges who put Facebook on the line for the platform’s inaction during the Provo Uprising to the lawmakers who amended section 230 of the Communications Decency Act in a bid to get Twitter to clean up its Nazi problem.

While the media in the United States remained protected by the First Amendment, members of the press in other countries were not so lucky. The rest of the world responded to the crisis by tightening rules on acceptable speech. But even the most prolific news service — a giant wire service like AP-AFP or Thomson-Reuters-TransCanada-Huawei — only publishes several thousand articles per day. And thanks to their armies of lawyers, editors and insurance underwriters, they are able to make the news available without falling afoul of new rules prohibiting certain kinds of speech — including everything from Saudi blasphemy rules to Austria’s ban on calling politicians “fascists” to Thailand’s stringent lèse-majesté rules. They can ensure that news in Singapore is not “out of bounds” and that op-eds in Britain don’t call for the abolition of the monarchy.

But not the platforms — they couldn’t hope to make a dent in their users’ personal expressions. From YouTube’s 2,000 hours of video uploaded every minute to Facebook-Weibo’s three billion daily updates, there was no scalable way to carefully examine the contributions of every user and assess whether they violated any of these new laws. So the platforms fixed this the Silicon Valley way: They automated it. Badly.

Which is why I have to publish this in The New York Times.

The platforms and personal websites are fine if you want to talk about sports, relate your kids’ latest escapades or shop. But if you want to write something about how the platforms and government legislation can’t tell the difference between sex trafficking and sex, nudity and pornography, terrorism investigations and terrorism itself or copyright infringement and parody, you’re out of luck. Any one of those keywords will give the filters an incurable case of machine anxiety — but all of them together? Forget it.

If you’re thinking, “Well, all that stuff belongs in the newspaper,” then you’ve fallen into a trap: Democracies aren’t strengthened when a professional class gets to tell us what our opinions are allowed to be.

And the worst part is, the new regulations haven’t ended harassment, extremism or disinformation. Hardly a day goes by without some post full of outright Naziism, flat-eartherism and climate trutherism going viral. There are whole armies of Nazis and conspiracy theorists who do nothing but test the filters, day and night, using custom software to find the adversarial examples that slip past the filters’ machine-learning classifiers.

It didn’t have to be this way. Once upon a time, the internet teemed with experimental, personal publications. The mergers and acquisitions and anticompetitive bullying that gave rise to the platforms and killed personal publishing made Big Tech both reviled and powerful, and they were targeted for breakups by ambitious lawmakers. Had we gone that route, we might have an internet that was robust, resilient, variegated and dynamic.

Think back to the days when companies like Apple and Google — back when they were stand-alone companies — bought hundreds of start-ups every year. What if we’d put a halt to the practice, re-establishing the traditional antitrust rules against “mergers to monopoly” and acquiring your nascent competitors? What if we’d established an absolute legal defense for new market entrants seeking to compete with established monopolists?

Most of these new companies would have failed — if only because most new ventures fail — but the survivors would have challenged the Big Tech giants, eroding their profits and giving them less lobbying capital. They would have competed to give the best possible deals to the industries that tech was devouring, like entertainment and news. And they would have competed with the news and entertainment monopolies to offer better deals to the pixel-stained wretches who produced the “content” that was the source of all their profits.

But instead, we decided to vest the platforms with statelike duties to punish them for their domination. In doing so, we cemented that domination. Only the largest companies can afford the kinds of filters we’ve demanded of them, and that means that any would-be trustbuster who wants to break up the companies and bring them to heel first must unwind the mesh of obligations we’ve ensnared the platforms in and build new, state-based mechanisms to perform those duties.

Our first mistake was giving the platforms the right to decide who could speak and what they could say. Our second mistake was giving them the duty to make that call, a billion times a day.

https://www.nytimes.com/2019/06/24/opinion/future-free-speech-social-media-platforms.html

#Facebook #DeleteFacebook #USA #harassment #extremism #disinformation
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
How to speak Silicon Valley: 53 essential tech-bro terms explained

Your guide to understanding an industry where capitalism is euphemized

Airbnb (n) – A hotel company that figured out how to avoid the expense of owning hotels or employing hotel workers. See unicorn. (v) – To illegally convert an apartment into a vacation rental in a city with an affordable housing crisis.

Amazon (n) – A website that went from selling books to selling virtually all items on Earth; it’s also a movie studio, book publisher, major grocery chain owner, hardware manufacturer, and host for most of the internet, to name just a few endeavors. Competitors in nearly every industry fear its might. Formerly known as “the everything store”; soon to be known as “the only store”.

angel investor (phrase) – A wealthy individual who invests a small amount of startup capital at the earliest stages of a company or idea. Often, the angel is part of the entrepreneur’s extended network, whether because they went to the same college, worked together at a previous company, or are family friends. Frequently a vocal opponent of affirmative action. See also meritocracy.

apology (n) – A public relations exercise designed to change headlines. In practice, a promise to keep doing the same thing but conceal it better. “People need to be able to explicitly choose what they share,” said Mark Zuckerberg in a 2007 apology, before promising better privacy controls in a 2010 mea culpa, vowing more transparency in 2011, and acknowledging “mistakes” in the Cambridge Analytica scandal. See Facebook, privacy.

Apple (n) – America’s first trillion-dollar company, which achieved inordinate success through groundbreaking products such as the Macintosh, iPod and iPhone. After it ran out of ideas for new products, Apple maintained its dominance by coming up with new ways to force its customers to purchase expensive accessories. See dongle.

artificial intelligence (ph) – Computers so smart that their behavior is indistinguishable from that of humans. Often achieved by secretly paying real humans to pretend they’re robots.

Autopilot (n) – The name Tesla gives to its advanced driver assistance system, ie souped-up cruise control. Named after the advanced technology that allows pilots to take their hands off the controls of a plane, but very much not an invitation for Tesla drivers to take their hands off the wheel, right, Elon?

bad actors (ph) – People who use a social media platform in a way that results in bad press. Bad actors usually take advantage of features of the platform that were clearly vulnerable for abuse but necessary to achieve scale. “The Russian intelligence operatives who used Facebook’s self-serve advertising system to target US voters with divisive and false messages were ‘bad actors’.”

biohacking (n) – Applying the DIY hacker ethos to one’s own body to achieve higher performance. Often involves bizarre eating habits, fasting, inserting microchips into one’s body, and taking nootropics (AKA expensive nutritional supplements). When done by women, dieting. In extreme forms, an eating disorder.

bootstrap (v) – To start a company without venture capital. The only option for the vast majority of people who start companies, but a point of pride for the tiny subset of entrepreneurs who have access to venture capital and eschew it. “My dad is friends with Tim Draper but I wanted to do something on my own so I’m bootstrapping” – a tech bro.

cloud, the (n) – Servers. A way to keep more of your data off your computer and in the hands of big tech, where it can be monetized in ways you don’t understand but may have agreed to when you clicked on the Terms of Service. Usually located in a city or town whose elected officials exchanged tens of millions of dollars in tax breaks for seven full-time security guard jobs.

data (n) – A record of everything you do involving the internet – which is increasingly synonymous with everything you do, period. Corporations use the digital trails you and millions of others leave to sell you things – in other words, your actions, relationships, and desires have become currency. See privacy.

deprecated (adj) – A description for a software feature that is no longer being updated and will probably be phased out soon.

disrupt (v) – To create a new market, either by inventing something completely new (ie the personal computer, the smartphone) or by ignoring the rules of an old market. If the latter, often illegal, but rarely prosecuted. Uber disrupted the taxi industry by flooding the market with illegal cabs, while Airbnb disrupted the hotel market by flooding the market with illegal sublets. See sharing economy.

diversity and inclusion (ph) – Initiatives designed to sugarcoat Silicon Valley’s systematic failure to hire, promote and retain African American and Latinx employees. The phrase is usually invoked when a company is expounding on its “values” in response to incontrovertible evidence of widespread racial or gender discrimination.

dongle (n) A small, expensive and easily misplaced piece of computer gear. Usually required when a company revolutionizes its products by getting rid of all the ports that are compatible with the accessories you already own. See Apple.

Don’t Be Evil (ph) Google’s original corporate motto. Deprecated.

employee (n) People who work for a tech company and are eligible for health insurance and retirement benefits. Importantly, this does not necessarily include the vast majority of people who perform work for the company and create its value, such as the people who drive for transportation companies, the people who deliver for delivery companies, and the cooks, cleaners, security guards and parking attendants on tech campuses. Less than 50% of Google’s global workforce. See Uber, sharing economy, disruption, scale.

evangelist (n) A job title for salespeople who are slightly creepy in their cultish devotion to the product they are selling. “I used to work in sales but now I evangelize Microsoft’s products.”

FAANG (ph) An acronym for Facebook, Apple, Amazon, Netflix and Google. Originally coined to refer to the companies’ high-performing tech stocks, but also used to denote a certain amount of status. “His boyfriend is a software engineer, but not at a FAANG so he’s not really marriage material.”

Facebook (n) Your mom’s favorite social media platform.

5G (n) – The next generation of mobile internet, which promises to enable digital surveillance at blindingly fast speeds.

free speech (ph) A constitutionally protected right in the US that is primarily invoked by tech bros and internet trolls when they are asked to stop being assholes. Syn: hate speech. See ideological diversity.

GDPR (ph) A comprehensive data protection law that applies to companies operating in Europe, including American ones. Though the safeguards don’t apply directly to people outside Europe, the measure may push companies to step up their privacy efforts everywhere – handy for Americans, whose own government has done a pretty poor job of protecting them.

gentrifier (n) – A relatively affluent newcomer to a historically poor or working-class neighborhood whose arrival portends increased policing, pricier restaurants and the eviction or displacement of longtime residents. Often used by gentrifiers as a general epithet for anyone who arrived in their neighborhood after they did.

Google (n) – The privacy-devouring tech company that does everything that Facebook does, but manages to get away with it, largely because its products are useful instead of just depressing. (v) – To make the bare minimum effort to inform oneself about something. What a tech bro did before he insisted on explaining your area of expertise to you.

ideological diversity (ph) – The rallying cry for opponents of diversity and inclusion programs. Advocates for ideological diversity argue that corporate efforts to increase the representation of historically marginalized groups – women, African Americans and Latinos, among others – should also be required to increase the representation of people who believe that women, African Americans and Latinos are inherently unsuited to work in tech.

incubator (n) A parent company that takes baby companies under its wing until they can fly on their own; a playgroup for tech bros. See meritocracy.

IPO (n) Initial public offering – when a company begins allowing regular people to buy shares. A way for everyone, not just venture capital firms, to lose money, as in Uber’s recent disappointing IPO.

meritocracy (n) A system that rewards those who most deserve it, as long as they went to the right school. The tech industry is a meritocracy in much the same way that America is a meritocracy. See diversity and inclusion.

microdosing (n) – Taking small amounts of illegal drugs while white. It may be possible to microdose without writing a book or personal essay about it, but the evidence suggests otherwise.

mission (n) – What separates a tech bro and a finance bro: the tech bro works for a company that has a “mission”. Usually something grandiose, utopian, and entirely inconsistent with the company’s business model. Facebook’s mission is to make the world more open and connected; Facebook’s business model is to sell ads by dividing people into incredibly narrow marketing profiles.

monetize (v) – To charge money for a product, or, to figure out how to extract money from people without their understanding or explicit consent. Though having a plan to monetize is usually the first step for a small business or startup (“You mean I shouldn’t just give the lemonade away for free?”), angel investors and venture capitalists have created an environment in which companies can attempt to scale first and monetize later. “My app is free because I’m monetizing my users’ data.”

Move fast and break things (ph) – Facebook’s original corporate motto. In hindsight, a red flag. Deprecated, allegedly.

off-site (n) – A work event at a non-work location. Often includes alcohol and socializing. Primarily used when describing a sexual harassment complaint.

pivot (v) – What tech startups do when they realize scaling is not a business model without a monetization strategy.

platform (n) – A website that hosts user-generated content. Platforms are distinct from publishers, which more directly commission and control the content they publish. In the US, platforms enjoy special legal status protecting them from liability for the content they host and allowing them to exercise broad discretion over which content they want to ban or delete. Facebook, YouTube, Reddit and Craigslist are examples of platforms. The reason Facebook says it does not “have a policy that stipulates that the information you post on Facebook must be true”.

privacy (n) – Archaic. The concept of maintaining control over one’s personal information.

revolutionize (v) – To change something that does not need to be changed in order to charge money for its replacement. “Apple revolutionized the experience of using headphones when it killed the headphone jack on iPhones.”

runway (n) – The amount of venture capital a startup has left before it has to either monetize its product, pivot or start selling the office furniture. “I can’t believe Topher spent half our runway on a Tesla Roadster.”

scale (v) – The holy grail. To create a business that can accommodate exponential increases in users with minimal increases in costs. Also applicable if the costs can be externalized to taxpayers or countries in the global south. In the negative, a surprisingly effective excuse not to do something that any non-tech company would do. “We would prefer not to foment genocide in Myanmar, but content moderation simply does not scale.”

shadowban (v) – The conspiracy theory that no one is responding to a social media post because the platform is secretly preventing the user’s content from being seen and/or going viral. “Brandon was convinced that Twitter had shadowbanned him when no one responded to his demand that an SJW feminazi debate him.”

sharing economy (ph) A system in which working does not mean being employed. See employees.

smart (adj) – A product that is capable of being hooked up to the internet – thus rendering it capable of being hacked or abusing your data.

Snapchat (n) – Facebook’s research and development department.

tech bro (n) – A US-born, college-educated, Patagonia-clad male whose entry level salary at one of the FAANG companies was at least $125,000 and who frequently insists that his female co-workers give him high-fives. Typically works in product management or marketing. Had he been born 10 years earlier, he would have been a finance bro instead.

the FTC (n) The US Federal Trade Commission. Capable of levying enormous fines against companies like Facebook, potentially whittling down its revenues to just a handful of billions of dollars. Not really in that much of a hurry to do anything, however.

thought leader (n) – An unemployed rich person.

Twitter (n) – A mid-sized business with outsized importance due to its three primary users: Donald Trump, Elon Musk and journalists. A useful tool for journalists to gauge public opinion by talking to other journalists, and for Elon Musk to provoke lawsuits and federal investigations into securities fraud.

Uber (n) – A unicorn startup that disrupted the taxi industry by revolutionizing the sharing economy at incredible scale thanks to unprecedented amounts of venture capital. In the first earnings report after a lackluster IPO, revealed that it lost $1bn in three months.

unicorn (n) – A startup valued at at least $1bn. At one point, rare. Increasingly, not even that exciting.

UX designer (n) The person responsible for a website or app user’s experience (UX). They make the buttons they want you to click on – Share! Buy! Sign Up! – large and noticeable, and the buttons that turn off location tracking very small.

venture capital (ph) A system by which wealthy individuals can invest in startups before they go public. A legal and surprisingly respectable form of gambling. An alternate retirement plan for fortysomething multimillionaires who never developed hobbies.

https://www.theguardian.com/us-news/2019/jun/26/how-to-speak-silicon-valley-decoding-tech-bros-from-microdosing-to-privacy

#howto #techbro
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Sky Census, Dumb City, 5G Glastonbury

This week on the New World Next Week:

#US #government turns to aerial #surveillance for its 2020 census; #Google promises it won’t sell your data in its #smartcity and #Glastonbury goes 5G.

📺 New World Next Week #5G #google #panopticon #corbettreport #video #podcast
https://www.corbettreport.com/sky-census-dumb-city-5g-glastonbury-new-world-next-week/

📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🇪🇸 China accused of installing malware on tourists crossing its border, to download and search for “prohibited” data.

Outlets as prominent as The Guardian, the New York Times and Motherboard have carried out a joint investigation revealing that the Chinese government is installing malware on the smartphones of tourists who want to cross the border.

Apparently, this malware downloads messages, contacts, the call log and calendar entries. As if that weren’t enough, it also searches the device for some 73,000 “prohibited” files.

These prohibited files reportedly include photos or PDF documents related to the Dalai Lama and to Islamic State (quotes from the Quran or even Arabic dictionaries).
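A “prohibited file” sweep like this typically boils down to hashing every file on the device and comparing the result against a blocklist. Here is a minimal Python sketch of the idea; the choice of MD5 and the example blocklist entry are assumptions for illustration, not details extracted from the actual app:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist: reports describe ~73,000 hashes of banned files.
# The single entry below is md5(b"hello"), standing in for a real target.
BLOCKLIST = {
    "5d41402abc4b2a76b9719d911017c592",
}

def md5_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list:
    """Return every file under `root` whose MD5 appears on the blocklist."""
    return [p for p in root.rglob("*") if p.is_file() and md5_of(p) in BLOCKLIST]
```

Scanning a whole phone this way is slow but simple, which matches the reports that the app runs for several minutes while officials hold the device.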

The outlets traveled to China to verify it themselves; they report that it does not happen at every border crossing, but they were able to confirm that the malware really exists.

They state that on arrival at these border crossings (in the Xinjiang region), Chinese government officials ask tourists for their smartphones, and that when the devices are returned they have the malware (called BXAQ or Fengcai) installed.

That said, any reasonably alert user would notice on getting the phone back that a new app (called CellHunter or MobileHunter) has been installed. If you delete the app you should be safe again (although they will likely already have extracted part of your information).

In fact, for anyone who wants to dig into it, the app has been uploaded to GitHub so that anyone can take it apart and see what kind of information it can access. Many users and media outlets have done exactly that, and the truth is that it is a kind of spider’s web that tries to capture as much data as possible:

✳️ Alibaba logins
✳️ Weibo logins
✳️ Phone number
✳️ Payment information
✳️ Network operator information
✳️ Smartphone manufacturer, Android version, IMEI

If a tourist carries an iPhone, officials extract the data by connecting it via USB to a device about which little is currently known.
https://www.genbeta.com/actualidad/acusan-a-china-instalar-malware-a-turistas-que-cruzan-frontera-para-descargar-buscar-datos-prohibidos

#china #spyware #vigilancia #privacidad
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🇪🇸 When a company suffers a ransomware attack, they call me to fix it: the uphill fight against the malware of the moment.

Friday, eight in the morning. After weaving through a crowd on public transport, you arrive at the company’s offices and turn on your computer. The week’s fatigue weighs on you like a stone and, while the PC boots, your mind wanders to your weekend plans. You enter your username and password and press Enter, like every day, but unlike yesterday, today the desktop is empty.

After several minutes probing the empty screen with the pointer, you opt for the classic solution and restart the machine. You repeat the login process (username, password, Enter) and this time, instead of the desktop, a disconcerting message appears: “Hello, I have encrypted all of your company’s important data. You can recover it quickly and safely by sending bitcoins worth 3,000 euros to the following address…”.

In recent years many companies have faced a situation like this: vital data encrypted by ransomware, leaving them with two critical options, either pay a large sum to ransom their files with no guarantee whatsoever, or lose them irretrievably.

Ransomware attacks came to the general public’s attention with the massive data hijacking of major companies in 2017, when a variant of this malware known as WannaCry put companies like Telefónica in check. The day-to-day of this kind of cybercrime, however, plays out on a much smaller scale, among small businesses and the self-employed, where cybersecurity and media attention are weaker and the opportunities for extortion therefore greater.

“At our company we receive dozens of diagnostic requests every day, from both individuals and companies that have suffered a ransomware attack and seen their data compromised,” says Ricardo Labiaga, technical director of the cybersecurity company OnRetraival.

Once a system has been infected, there are two possible cases: either the company has an up-to-date backup and can recover its compromised files or, on the contrary, the encryption has hijacked key data of which no copies exist.

The latter is “the most apocalyptic scenario there is,” according to Marco Antonio Lozano, head of Cybersecurity Services for Companies and Professionals at Spain’s National Cybersecurity Institute (Incibe), since this type of malware is very hard to decrypt and very few companies worldwide offer any guarantee of recovering the files.

In those circumstances the affected company or user can turn to Incibe, to private cybersecurity firms such as OnRetraival, or to the international No More Ransom project, in which Europol participates. Often, though, the most they will be able to do is identify the type of ransomware, contain the infection and isolate the compromised machines.

✳️ Should I pay the ransom?

The user can thus find themselves in the desperate situation of seeing files of capital importance to the company’s operation compromised, with no possible solution from cybersecurity technicians and payment as the only alternative to the company going under. What should they do?

“Under no circumstances do we recommend paying,” Lozano stresses, an opinion shared both by OnRetraival and by No More Ransom. “Paying does not guarantee a solution to the problem. Moreover, it shows cybercriminals that this kind of extortion works,” the international anti-ransomware project explains.

In this regard, the experts emphasize that the user cannot know whether the malware that infected their system even has decryption functionality. That is, some malicious programs can only lock the data, not release it, so paying will solve nothing.

Likewise, Lozano points out that even if the cybercriminals do unlock the files after payment, nothing guarantees that the malware is not still in the system, ready to demand another ransom a few months later. “There can be flare-ups. In the end, if you pay, you are left with a system you don’t know whether is compromised or not. You don’t know whether the files are still infected,” he stresses.

✳️ Why is it so hard to fight?

The difficulty of fighting ransomware lies in its complexity as malware: to hijack the files it uses a legitimate tool, data encryption, which many ordinary programs use every day as part of their normal operation.

“The first cryptographic (data-encrypting) malware used a symmetric key, that is, the same key to encrypt and decrypt. That way, the corrupted information could be successfully decrypted by a cybersecurity company. Over time, cybercriminals began to use asymmetric encryption algorithms, with two different keys: a public one to encrypt the files and a private one for decryption that only they hold,” No More Ransom explains.

So in the past, cybersecurity firms could defeat this kind of malware by tracking down the encryption key, which necessarily had to be present on the infected machine in order to lock the files. Now, however, the criminals have made the encryption and unlocking keys different, so no usable trace is left behind and the attack cannot be undone by this method.
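The shift described above can be sketched in a few lines of Python. This is a toy illustration, not real cryptography (a trivial XOR “cipher” and textbook RSA with tiny primes): it only shows why recovering key material from the infected machine stops working once the decryption key lives elsewhere.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Symmetric scheme: the SAME key encrypts and decrypts, so an analyst
    # who finds it in memory or on disk can recover everything.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- symmetric case: the key must exist on the victim's machine ---
sym_key = b"secret"
locked = xor_encrypt(b"payroll.xlsx contents", sym_key)
assert xor_encrypt(locked, sym_key) == b"payroll.xlsx contents"

# --- asymmetric case (toy RSA, illustration only) ---
p, q = 61, 53
n = p * q                          # modulus, part of the public key
e = 17                             # public exponent: present on the victim's machine
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: only the attacker has it

m = 42                             # a (tiny) secret, e.g. a per-file key
c = pow(m, e, n)                   # encrypting needs only (e, n)
assert pow(c, d, n) == m           # decrypting needs d, which never touches the host
```

In real ransomware the asymmetric key typically protects a per-victim symmetric key rather than the files directly, but the consequence is the same: nothing recoverable from the machine can undo the lock.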

Thus, the only way to resolve such an attack is for the cybercriminals to have made a mistake when creating the ransomware, leaving a gap through which cybersecurity firms and tools can break the encryption, or for the police to seize the servers holding the keys.

Another difficulty this malware presents is that it is booming: every so often a new variant appears with stronger encryption and new features that complicate the work of police and cybersecurity companies. In fact, more than 50 ransomware families are currently in circulation.

✳️ How should you respond to ransomware?

Despite all these complications, public and private bodies follow a series of protocols to try to resolve ransomware attacks whenever possible.

Incibe recommends that the first step after an infection be to create a copy of the compromised hard drive and attempt to recover the data on the clone, leaving the original machine intact in case the files get damaged during decryption attempts. That way you can always return to the starting point, and the original can also serve as evidence in a judicial investigation.
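That cloning step is usually done with a raw bit-for-bit copy, e.g. with `dd`. A hedged sketch follows; a throwaway file stands in for the real block device (in practice you would image something like /dev/sdb onto separate forensic storage), and the paths are placeholders:

```shell
# Stand-in for the infected disk; with a real device, skip this line and
# use its path (e.g. /dev/sdb) as the "if=" argument instead.
printf 'pretend this is the infected disk' > /tmp/infected.disk

# Bit-for-bit clone; conv=noerror keeps going past read errors on failing media.
dd if=/tmp/infected.disk of=/tmp/infected.img bs=4M conv=noerror status=none

# Verify the copy is faithful, and record hashes for the chain of custody.
cmp /tmp/infected.disk /tmp/infected.img || echo "COPY MISMATCH"
sha256sum /tmp/infected.disk /tmp/infected.img
```

All recovery and decryption attempts then run against the `.img` clone (or a further copy of it), never against the original drive.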

After that, the user should disinfect the copy with an antivirus so that, if the documents are eventually unlocked, the malware does not encrypt them again. This removes the malicious program that blocked access to the data, but not the encryption itself: the system is now clean, but all of the affected files remain encrypted.

To try to fix that, the National Cybersecurity Institute recommends using No More Ransom’s Crypto Sheriff tool, which helps identify the malware variant that attacked the system. Once it is recognized, the Europol-led project will recommend the most suitable decryption program for that ransomware variant, if one exists.
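Variant identification of this kind amounts to matching artifacts of the infection, such as the extension appended to encrypted files or the ransom-note text, against a catalog of known families. A minimal Python sketch of the idea; the extension table is illustrative only, not Crypto Sheriff’s actual logic:

```python
# Tiny, illustrative catalog: real services match thousands of indicators,
# including ransom-note wording and encrypted-file headers.
KNOWN_EXTENSIONS = {
    ".wncry": "WannaCry",
    ".locky": "Locky",
    ".cerber": "Cerber",
}

def identify(filename: str) -> str:
    """Guess the ransomware family from an encrypted file's name."""
    lowered = filename.lower()
    for ext, family in KNOWN_EXTENSIONS.items():
        if lowered.endswith(ext):
            return family
    return "unknown (submit a sample for analysis)"
```

Knowing the family matters because decryptors, where they exist at all, are written per family: a WannaCry tool is useless against Cerber.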

Even so, the decryption program may not work. In that case, Incibe recommends keeping the encrypted hard drive in case a solution appears in the future.