ml4se
Machine Learning for Software Engineering
Minerva: Solving Quantitative Reasoning Problems with Language Models (Google)

Minerva is a large language model pretrained on general natural language data and further trained on technical content. The main novelty of the paper is a large training dataset that juxtaposes natural language with the correct use of formal mathematical language, such as equations and diagrams. The data is collected from the arXiv preprint server and from web pages.
Code as Policies: Language Model Programs for Embodied Control (Google)

Large language models trained on code completion have been shown to be capable of synthesizing simple Python programs from docstrings. These models can be re-purposed to write robot policy code, given natural language commands.

Project website: https://code-as-policies.github.io/
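A hypothetical sketch of the idea: given a natural-language command and a prompt describing the robot API, a code LLM emits a small Python policy. The API below (`get_obj_pos`, `move_to`) is invented for illustration and is not the paper's actual interface.

```python
# Stub robot API (hypothetical names, stand-ins for real perception
# and actuation calls).
trajectory = []

def get_obj_pos(name):
    # Stub perception call: returns a fixed (x, y) for the demo.
    positions = {"red block": (0.1, 0.2), "blue bowl": (0.5, 0.5)}
    return positions[name]

def move_to(xy):
    # Stub actuation call: just records the target here.
    trajectory.append(xy)

# Policy code an LLM might synthesize for the command
# "put the red block next to the blue bowl":
block = get_obj_pos("red block")
bowl = get_obj_pos("blue bowl")
move_to(block)
target = (bowl[0] - 0.25, bowl[1])  # stop short so we end up next to the bowl
move_to(target)

print(trajectory)  # [(0.1, 0.2), (0.25, 0.5)]
```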
Fixing Dockerfile Smells: An Empirical Study

RQ1: How do developers fix Dockerfile smells?
RQ2: Which Dockerfile smells are developers willing to address?
Microsoft sued for open-source piracy through GitHub Copilot

Programmer and lawyer Matthew Butterick has sued Microsoft, GitHub, and OpenAI, alleging that GitHub's Copilot violates the terms of open-source licenses and infringes the rights of programmers.

Beyond the license violations, Butterick also alleges that Copilot violates the following:
- GitHub's terms of service and privacy policies,
- DMCA 1202, which forbids the removal of copyright-management information,
- the California Consumer Privacy Act,
- and other laws that give rise to related legal claims.

The complaint was filed in the U.S. District Court for the Northern District of California and seeks statutory damages of $9,000,000,000.
TOSS: Revisiting Code Search in a Two-Stage Paradigm (Microsoft)

The paper combines the two main DL-based approaches to code search in a fusion of bi-encoder and cross-encoder methods. The framework achieves state-of-the-art accuracy, with an overall mean reciprocal rank of 0.763 on the CodeSearchNet benchmark versus 0.713 for the best baseline.
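A toy sketch of the two-stage paradigm: a cheap "bi-encoder" score pre-filters candidates, and a more expensive "cross-encoder" score re-ranks the survivors. Both scorers here are trivial bag-of-words stand-ins for the neural encoders used in the paper.

```python
from collections import Counter
import math

def bi_encoder_score(query, code):
    # Stage-1 stand-in: independent "embeddings" (token counts)
    # compared by cosine similarity.
    q, c = Counter(query.split()), Counter(code.split())
    dot = sum(q[t] * c[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def cross_encoder_score(query, code):
    # Stage-2 stand-in: joint scoring of the (query, code) pair.
    q_tokens = query.split()
    code_tokens = set(code.split())
    return sum(2.0 for t in q_tokens if t in code_tokens) / len(q_tokens)

def two_stage_search(query, corpus, recall_k=2):
    # Stage 1: fast recall of the top-k candidates over the whole corpus.
    candidates = sorted(corpus, key=lambda c: bi_encoder_score(query, c),
                        reverse=True)[:recall_k]
    # Stage 2: precise re-ranking of the small candidate set.
    return max(candidates, key=lambda c: cross_encoder_score(query, c))

corpus = [
    "def add a b return a plus b",
    "def read file path return contents",
    "def sort items return sorted items",
]
print(two_stage_search("add a b", corpus))  # def add a b return a plus b
```

The point of the split is cost: the bi-encoder can be precomputed and indexed over millions of snippets, while the cross-encoder only ever sees the small recalled set.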
μBERT: Mutation Testing using Pre-Trained Language Models

μBERT is a mutation testing tool that uses CodeBERT to generate mutants. The approach is compared with PiTest on fault detection and assertion inference.
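A toy sketch of this style of mutant generation: mask one token of the program and ask a model to propose replacements. Here a hand-written candidate table stands in for a masked language model's fill-mask predictions.

```python
# Hand-written replacement candidates, standing in for the top
# predictions a model like CodeBERT would return for a masked token.
ALTERNATIVES = {"+": ["-", "*"], "<": ["<=", ">"], "==": ["!="]}

def generate_mutants(tokens):
    mutants = []
    for i, tok in enumerate(tokens):
        for repl in ALTERNATIVES.get(tok, []):
            if repl != tok:  # keep only mutants that differ from the original
                mutants.append(tokens[:i] + [repl] + tokens[i + 1:])
    return mutants

original = ["return", "a", "+", "b"]
for m in generate_mutants(original):
    print(" ".join(m))
# return a - b
# return a * b
```

Each surviving mutant (one the test suite fails to kill) then points at a gap in the tests.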
The Illustrated Stable Diffusion

A gentle introduction to how Stable Diffusion works.
TiCoder: Interactive Code Generation via Test-Driven User-Intent Formalization (Microsoft)

Test-driven user-intent formalization (or test-driven user-intent discovery) is an interactive framework that (a) refines and formalizes the user intent through generated tests, and (b) generates code consistent with those tests.
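A minimal sketch of the pruning step, with hard-coded candidates in place of model samples: the user approves a generated test, and candidate implementations inconsistent with it are discarded. All names here are illustrative.

```python
# Two candidate implementations a model might have sampled for
# "compute the difference of two numbers":
def cand_abs_diff(a, b):
    return abs(a - b)

def cand_plain_diff(a, b):
    return a - b

candidates = [cand_abs_diff, cand_plain_diff]

# A generated test shown to the user; approving it formalizes the
# intent "the difference should never be negative".
approved_tests = [lambda f: f(2, 5) == 3]

# Keep only candidates consistent with every approved test.
consistent = [f for f in candidates if all(t(f) for t in approved_tests)]
print([f.__name__ for f in consistent])  # ['cand_abs_diff']
```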
Time-Series Anomaly Detection with Implicit Neural Representation

Some ML4SE tasks involve time series (anomaly detection in logs, forecasting for resource management, etc.). The paper proposes Implicit Neural Representation-based Anomaly Detection (INRAD), an error-based strategy: an MLP learns to predict the value of the time series from the timestamp, which is its only input, and points the model reconstructs poorly are flagged as anomalies.
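A small self-contained sketch of the error-based idea on synthetic data: a tiny MLP trained by full-batch gradient descent stands in for the paper's INR, and the point with the largest reconstruction error is reported as the anomaly. All hyperparameters are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: a smooth sine with one injected spike (the anomaly).
t = np.linspace(0.0, 1.0, 200)[:, None]   # timestamps, shape (200, 1)
x = np.sin(2 * np.pi * 3 * t)             # normal signal
x[120] += 3.0                             # anomaly at index 120

# One-hidden-layer MLP mapping timestamp -> value.
W1 = rng.normal(0.0, 1.0, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.tanh(t @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - x
    # Backpropagate the mean-squared error.
    gW2 = h.T @ err / len(t); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Error-based detection: the smooth model cannot fit the isolated
# spike, so the largest residual marks the anomaly.
pred = np.tanh(t @ W1 + b1) @ W2 + b2
residual = np.abs(pred - x).ravel()
print(int(residual.argmax()))  # 120
```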
HyperTime: Implicit Neural Representation for Time Series

The architecture leverages INRs to learn a compressed latent representation of an entire time-series dataset. The HyperNet outputs a one-dimensional embedding of 7,500 values containing the network weights of an INR (the HypoNet), which encodes the input time series.
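A minimal sketch of the hypernetwork pattern itself: one network's flat output vector is interpreted as the weights of another network (the "HypoNet") that maps a timestamp to a value. The shapes are tiny toy choices, not HyperTime's, and the embedding is random here in place of a learned encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

H = 8  # hyponet hidden width (toy size; HyperTime's embedding has 7,500 values)

# Flat parameter vector the HyperNet would emit: W1, b1, W2, b2 of a
# 1-H-1 MLP, concatenated.
embedding = rng.normal(0.0, 0.5, H + H + H + 1)

def hyponet(t, theta):
    # Unpack the flat vector into MLP weights and evaluate at time t.
    W1, b1 = theta[:H], theta[H:2 * H]
    W2, b2 = theta[2 * H:3 * H], theta[3 * H]
    h = np.tanh(t * W1 + b1)
    return h @ W2 + b2

print(hyponet(0.5, embedding))
```

Because the whole series is encoded in `embedding`, comparing or generating series reduces to operating on these fixed-length vectors.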
Cloud Intelligence/AIOps – Infusing AI into Cloud Computing Systems (Microsoft)

AIOps is a rapidly emerging technology trend and an interdisciplinary research direction spanning the systems, software engineering, and AI/ML communities. Through years of research on Cloud Intelligence, Microsoft Research has built up rich technology assets in detection, diagnosis, prediction, and optimization.
Scientists and government representatives meeting at a conference in France have voted to scrap leap seconds by 2035, the organisation responsible for global timekeeping has said.

At the 27th General Conference on Weights and Measures, held in November 2022 at the Palace of Versailles (the conference convenes roughly every four years), delegates decided to abandon the leap second by or before 2035. From then on, the difference between atomic and astronomical time will be allowed to grow to a larger, yet-to-be-determined value.
CS598: Machine Learning for Software Engineering

- Code representation and embeddings
- Source code analysis
- Code summarization
- Test input generation
- Fuzz testing
- Oracle inference
- Fault localization
- Program (bug) repair
- Regression testing
- Security testing and vulnerability detection
- Code completion
- Clone detection
Course: Machine Learning for Software Engineering (Ural State University)

- Introduction to machine learning
- Introduction to Transformer
- Code representation 1
- Code representation 2
- Code generation
- Code summarization
- Clone detection
- Code search 1
- Code search 2
- Code completion
- Vulnerabilities
Large Language Models Can Self-Improve

CoT prompting + multi-path decoding + self-consistency = effective self-training

74.4%->82.1% on GSM8K
78.2%->83.0% on DROP
90.0%->94.4% on OpenBookQA
63.4%->67.9% on ANLI-A3
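The self-consistency step behind these gains can be sketched as follows: sample several reasoning paths, extract each final answer, and keep the majority answer; in self-improvement, the majority-voted (question, answer) pairs then become pseudo-labels for further fine-tuning. The sampled answers below are hypothetical stand-ins for model outputs.

```python
from collections import Counter

def majority_answer(sampled_answers):
    # Self-consistency: the most common final answer across sampled
    # reasoning paths wins; its vote share acts as a confidence score.
    counts = Counter(sampled_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(sampled_answers)

# Five hypothetical chain-of-thought samples for one math question:
samples = ["18", "18", "20", "18", "16"]
answer, conf = majority_answer(samples)
print(answer, conf)  # 18 0.6
```

High-confidence majority answers can be kept as training data, which is what turns this decoding trick into a self-training loop.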
Is effective self-training possible for small and medium-sized models?
Anonymous Poll
57%
Yes
43%
No