CodeQL code scanning launches Kotlin analysis support
Starting November 28, GitHub code scanning includes beta support for analyzing code written in Kotlin, powered by the CodeQL engine.
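To try the beta, a repository can enable CodeQL through the standard code-scanning workflow. A minimal sketch, assuming the default `github/codeql-action` setup (at beta launch, Kotlin sources are handled by the Java extractor, so the language is specified as `java`):

```yaml
# .github/workflows/codeql.yml — minimal CodeQL scan for a Kotlin project
name: "CodeQL"
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: java   # the Java extractor also analyzes Kotlin (beta)
      - uses: github/codeql-action/autobuild@v2
      - uses: github/codeql-action/analyze@v2
```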
Advent of Code is an annual set of Christmas-themed computer programming challenges that follow an Advent calendar. It has been running since 2015. The programming puzzles cover a variety of skill sets and skill levels and can be solved using any programming language.
OpenAI Solved Part 1 in 10 Seconds
https://www.reddit.com/r/adventofcode/comments/zb942v/2022_day_03_first_place_for_part_1_today_10/
Reddit
r/adventofcode on Reddit: [2022 Day 03] First place for part 1 today (10 seconds!) was fully automated using new OpenAI language…
ENASE 2023
Position/Regular Paper Submission: January 19, 2023
Doctoral Consortium Paper Submission: March 1, 2023
Abstracts Track Submission: March 1, 2023
enase.scitevents.org
ENASE, 18th International Conference on Evaluation of Novel Approaches to Software Engineering
Theory and Practice of Systems and Applications Development, Challenges and Novel Approaches to Systems and Software Engineering (SSE), Systems and Software Quality, Systems and Software Engineering (SSE) for Emerging Domains
Ransomware Detection (Huawei)
* A baseline model built from historical data checks the metadata of copies for abnormal changes in feature values.
* Abnormal copies are compared further to determine file-size changes, entropy values, and similarities.
* A machine learning (ML) model then determines whether the file changes were caused by ransomware encryption and flags them accordingly.
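The copy-comparison step above can be sketched with a simple entropy check. A minimal sketch, assuming only that encrypted data pushes byte entropy toward the 8-bit maximum; the threshold values are illustrative, not Huawei's:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(before: bytes, after: bytes, threshold: float = 7.5) -> bool:
    """Flag a copy whose entropy is near the 8-bit maximum AND jumped
    sharply relative to the earlier copy -- a common signal that the
    file contents were replaced by ransomware ciphertext."""
    e_after = shannon_entropy(after)
    return e_after >= threshold and e_after - shannon_entropy(before) > 1.0
```

In a real pipeline this check would run only on copies the baseline model already marked as abnormal, alongside the size and similarity comparisons.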
Huawei BLOG
The Ransomware Story: Predicting the Unpredictable
Part 1 of a four-part series examining the features of ransomware, its devastating effects & the best way your company can stop it in its tracks.
DeepMind’s AlphaCode Conquers Coding, Performing as Well as Humans
DeepMind’s new coding AI just trounced roughly 50 percent of human coders in a highly competitive programming competition.
Singularity Hub
DeepMind’s AlphaCode Conquers Coding, Performing as Well as Humans
AlphaCode paves the road for a novel way to design AI coders: forget past experience and just listen to the data.
Microsoft is preparing to add OpenAI’s ChatGPT chatbot to its Bing search engine
OpenAI, the AI research shop backed by a $1 billion investment from Microsoft, publicly released ChatGPT for users to test in November. The chatbot’s ability to spout everything from cocktail recipes to authentic-seeming school essays has since catapulted it into the spotlight. While the AI service sometimes confidently offers incorrect information with a patina of authority, some analysts and experts have suggested its ability to summarize publicly available data can make it a credible alternative to Google search and a list of search-generated links.
Bloomberg.com
Microsoft Hopes OpenAI’s Chatbot Will Make Bing Smarter
ChatGPT’s accuracy will be key to timing of any rollout
The analysis of several major underground hacking communities shows that there are already first instances of cybercriminals using OpenAI to develop malicious tools.
– creating an infostealer
– creating an encryption tool
– using ChatGPT to facilitate fraud activity
Check Point Research
OPWNAI : Cybercriminals Starting to Use ChatGPT - Check Point Research
Introduction At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber…
The Art of LaTeX
Some common mistakes that are made by LaTeX practitioners (even in heavily cited papers)
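For illustration, a few pitfalls that such guides typically flag (representative examples, not necessarily the article's own list):

```latex
% 1. Quotes: use `` and '' rather than the " character.
Wrong: "quoted text".  Right: ``quoted text''.

% 2. Non-breaking space before references and citations,
%    so the number never starts a new line.
See Section~\ref{sec:eval} and the results in~\cite{smith2020}.

% 3. \eqref for equation references (adds the parentheses itself).
Equation~\eqref{eq:loss} defines the objective.

% 4. Ellipsis: \dots, not three periods.
Wrong: items 1, 2, ...  Right: items 1, 2, \dots

% 5. Math operators: \log, \max, etc., not italic letter runs.
Wrong: $log(x)$  Right: $\log(x)$
```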
On the Security Vulnerabilities of Text-to-SQL Models
The authors show that the Text-to-SQL modules of two commercial black-box systems (Baidu-UNIT and the Codex-powered Ai2sql) can be manipulated into producing malicious code, potentially leading to data breaches and denial of service. This demonstrates the danger of NLP models being exploited as attack vectors in the wild. Moreover, experiments on four open-source frameworks verified that simple backdoor attacks achieve a 100% success rate on Text-to-SQL systems with almost no impact on prediction performance.
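The flip side of the attack is defense: model-generated SQL should never reach the database unchecked. A minimal defensive sketch (an illustrative allow-list filter, not the paper's method):

```python
import re

# Keywords a read-only Text-to-SQL endpoint should never emit.
FORBIDDEN = re.compile(
    r"\b(drop|delete|truncate|alter|update|insert|grant|exec|shutdown)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Coarse allow-list check for model-generated SQL: accept only a
    single SELECT statement with no mutating keywords.  A real
    deployment would additionally parse the query and execute it under
    a read-only database role; this is only a first-line filter."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                      # reject stacked queries
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```

For example, `is_safe_select("SELECT 1; DROP TABLE users")` is rejected both for query stacking and for the `DROP` keyword.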
LineVul: A Transformer-based Line-Level Vulnerability Prediction
The authors propose a novel approach to detecting vulnerabilities in source code. The approach uses machine learning and works at line level.
Code
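The line-level interface can be sketched as follows. LineVul itself fine-tunes a transformer (CodeBERT) and localizes the vulnerable lines; the keyword heuristic below is only a hypothetical stand-in that shows the shape of the input and output:

```python
# Toy stand-in for a line-level vulnerability scorer.  The real system
# learns the scores; here a fixed keyword table merely illustrates
# ranking individual lines of a function by predicted risk.
RISKY_TOKENS = {"gets": 0.95, "strcpy": 0.9, "sprintf": 0.8, "system": 0.7}

def score_lines(source: str) -> list[tuple[int, float, str]]:
    """Return (line_number, risk_score, line) sorted by descending risk."""
    scored = []
    for i, line in enumerate(source.splitlines(), start=1):
        score = max(
            (w for tok, w in RISKY_TOKENS.items() if tok in line),
            default=0.0,
        )
        scored.append((i, score, line.strip()))
    return sorted(scored, key=lambda t: -t[1])
```

Given a C function using `gets`, the call to `gets` surfaces as the top-ranked line, which is the kind of output a reviewer would triage first.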