ml4se
Machine Learning for Software Engineering
UL2: Unifying Language Learning Paradigms (Google)

A novel language pre-training paradigm called Unified Language Learner (UL2) frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations.
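The denoising setup can be sketched with a toy span-corruption routine. The span lengths, corruption rates, and sentinel format below are illustrative assumptions, not the paper's exact configurations:

```python
# Illustrative sketch of a UL2-style mixture-of-denoisers. The configs
# below are assumptions standing in for the paper's actual settings.
DENOISERS = [
    {"mode": "R", "span_len": 3, "rate": 0.15},  # short spans, light corruption
    {"mode": "X", "span_len": 6, "rate": 0.50},  # long spans, heavy corruption
]

def corrupt(tokens, span_len, rate):
    """Replace spans with sentinels; the model must emit the spans back."""
    stride = max(span_len + 1, round(span_len / rate))  # spacing between spans
    inputs, targets, sid, i = [], [], 0, 0
    while i < len(tokens):
        if i % stride == 0:                      # start a masked span here
            inputs.append(f"<extra_id_{sid}>")
            targets.extend([f"<extra_id_{sid}>"] + tokens[i:i + span_len])
            sid += 1
            i += span_len
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
for d in DENOISERS:                              # UL2 samples one per example
    inp, tgt = corrupt(tokens, d["span_len"], d["rate"])
```

During pre-training each example is paired with one sampled denoiser, so the model sees the whole mixture of objectives.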
What is it like to program with artificial intelligence (Microsoft)

The authors explore how programming with large language models (LLM-assisted programming) is similar to, and differs from, prior conceptualisations of programmer assistance. They find that while LLM-assisted programming shares some properties of compilation, pair programming, and programming via search and reuse, there are fundamental differences both in the technical possibilities as well as the practical experience. Thus, LLM-assisted programming ought to be viewed as a new way of programming with its own distinct properties and challenges.
Forest: Structural Code Editing with Multiple Cursors (ETH)

Forest allows performing a single action simultaneously in multiple program locations, thus supporting complex refactorings.
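The core idea of one edit applied at every matching location can be sketched as follows; Forest operates on the syntax tree, while a word-boundary regex stands in for structural matching here:

```python
import re

# Sketch of the multi-cursor idea: a single rename applied at every
# location at once (a regex stands in for Forest's structural matching).
def rename_everywhere(src, old, new):
    return re.sub(rf"\b{re.escape(old)}\b", new, src)

code = "total = 0\nfor x in xs:\n    total = total + x"
renamed = rename_everywhere(code, "total", "acc")
```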
Minerva: Solving Quantitative Reasoning Problems with Language Models (Google)

Minerva is a large language model pretrained on general natural language data and further trained on technical content. The main novelty of the paper is a large training dataset that juxtaposes natural language with the correct use of formal mathematical language, such as equations and diagrams. The data is collected from the arXiv preprint server and from web pages.
Code as Policies: Language Model Programs for Embodied Control (Google)

Large language models trained on code completion have been shown to be capable of synthesizing simple Python programs from docstrings. These models can be re-purposed to write robot policy code, given natural language commands.

Project website: https://code-as-policies.github.io/
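The flavor of the approach can be sketched as below: the perception/control primitives (`get_obj_pos`, `pick_place`) and the generated policy are hypothetical stand-ins, not the paper's actual API:

```python
# Hypothetical sketch: the kind of policy code an LLM might emit for the
# command "stack the red block on the blue block". The primitives are stubs.
positions = {"red block": (0.1, 0.2), "blue block": (0.4, 0.5)}

def get_obj_pos(name):            # stubbed perception primitive
    return positions[name]

def pick_place(src, dst):         # stubbed control primitive
    positions[src] = dst
    return dst

# --- LLM-generated policy (illustrative) ---
def stack_red_on_blue():
    target = get_obj_pos("blue block")
    return pick_place("red block", target)

stack_red_on_blue()
```

The language model only writes the last function; the primitives are provided to it as an API surface in the prompt.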
Fixing Dockerfile Smells: An Empirical Study

RQ1: How do developers fix Dockerfile smells?
RQ2: Which Dockerfile smells are developers willing to address?
Microsoft sued for open-source piracy through GitHub Copilot

Programmer and lawyer Matthew Butterick has sued Microsoft, GitHub, and OpenAI, alleging that GitHub's Copilot violates the terms of open-source licenses and infringes the rights of programmers.

Apart from the license violations, Butterick also alleges that Copilot violates the following:
- GitHub's terms of service and privacy policies,
- DMCA 1202, which forbids the removal of copyright-management information,
- the California Consumer Privacy Act,
- and other laws giving rise to the related legal claims.

The complaint was submitted to the U.S. District Court for the Northern District of California, seeking statutory damages of $9,000,000,000.
TOSS: Revisiting Code Search in a Two-Stage Paradigm (Microsoft)

The paper proposes combining the two main DL-based approaches to code search: bi-encoder and cross-encoder methods. The two-stage framework achieves state-of-the-art accuracy, with an overall mean reciprocal rank of 0.763 on the CodeSearchNet benchmark versus 0.713 for the best baseline.
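The two-stage pipeline can be sketched with toy scorers; the bag-of-words "encoders" below are stand-ins for the real bi- and cross-encoder models:

```python
# Minimal two-stage retrieval sketch in the spirit of TOSS.
def embed(text):                       # stage 1: cheap bi-encoder embedding
    return set(text.lower().split())

def bi_score(q, c):                    # Jaccard similarity as a stand-in
    a, b = embed(q), embed(c)
    return len(a & b) / len(a | b)

def cross_score(q, c):                 # stage 2: "expensive" joint scorer
    # A real cross-encoder attends over the concatenated pair; here we
    # just weight exact phrase containment on top of the cheap score.
    return bi_score(q, c) + (1.0 if q.lower() in c.lower() else 0.0)

def search(query, corpus, k=2):
    # Stage 1: recall top-k candidates quickly with the bi-encoder.
    candidates = sorted(corpus, key=lambda c: bi_score(query, c), reverse=True)[:k]
    # Stage 2: rerank the small candidate set with the cross-encoder.
    return max(candidates, key=lambda c: cross_score(query, c))

corpus = [
    "read a file from disk",
    "sort a list of numbers",
    "write data to a file",
]
```

The point of the split is efficiency: the expensive pairwise scorer only sees the handful of candidates the cheap retriever recalls.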
μBERT: Mutation Testing using Pre-Trained Language Models

μBERT is a mutation testing tool. It exploits CodeBERT to generate mutants. The proposed approach is compared with PiTest on fault detection and assertion inference.
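The mutant-generation idea can be sketched as masking one code token and substituting model-proposed replacements. Real μBERT queries CodeBERT for the predictions; a hard-coded candidate table stands in for the model here:

```python
# Toy sketch of μBERT's idea: mask a code token, let a language model
# propose replacements, and emit one mutant per prediction.
CANDIDATES = {"+": ["-", "*"], "<": ["<=", ">"]}  # stand-in for CodeBERT output

def mutants(code):
    out = []
    for tok, repls in CANDIDATES.items():
        idx = code.find(tok)
        if idx == -1:
            continue
        for r in repls:                       # one mutant per predicted token
            out.append(code[:idx] + r + code[idx + len(tok):])
    return out
```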
The Illustrated Stable Diffusion

A gentle introduction to how Stable Diffusion works.
TiCoder: Interactive Code Generation via Test-Driven User-Intent Formalization (Microsoft)

Test-driven user-intent formalization (or test-driven user-intent discovery) is an interactive framework that (a) refines and formalizes the user intent through generated tests, and (b) generates code that is consistent with those tests.
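The filtering step can be sketched as follows; the candidate functions and the approved test are illustrative stand-ins for model output:

```python
# Sketch of the test-driven intent idea: among model-generated candidates,
# keep only those consistent with tests the user has approved.
candidates = [
    lambda xs: sorted(xs),               # candidate 1: ascending sort
    lambda xs: sorted(xs, reverse=True), # candidate 2: descending sort
]

def approved_test(f):
    # The user confirmed this generated test captures their intent.
    return f([3, 1, 2]) == [1, 2, 3]

consistent = [f for f in candidates if approved_test(f)]
```

Each approved test both disambiguates the intent and prunes the candidate pool.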
Time-Series Anomaly Detection with Implicit Neural Representation

Some ML4SE tasks involve time series (anomaly detection in logs, forecasting in resource management, etc.). The paper proposes a novel method called Implicit Neural Representation-based Anomaly Detection (INRAD), which uses an error-based anomaly detection strategy: an MLP learns to predict the value of a time series from a timestamp, which is the model's only input.
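The error-based strategy can be sketched as below; a sine function stands in for the trained MLP, since the detection logic only needs some fitted model f(t):

```python
import math

# Sketch of INRAD's strategy: a model f(t) is fit to the series (an MLP in
# the paper; a sine here as a stand-in), and points with large
# reconstruction error are flagged as anomalies.
def inr(t):                          # stand-in for the trained MLP f: t -> value
    return math.sin(t)

series = [(t / 10, math.sin(t / 10)) for t in range(100)]
series[50] = (5.0, 5.0)              # inject an anomaly

def anomalies(series, threshold=1.0):
    return [t for t, v in series if abs(v - inr(t)) > threshold]
```

Points the representation fits well have near-zero error; the injected outlier is the only one exceeding the threshold.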
HyperTime: Implicit Neural Representation for Time Series

This architecture leverages INRs to learn a compressed latent representation of an entire time series dataset. The output of the HyperNet is a one-dimensional embedding of 7,500 values containing the network weights of an INR (the HypoNet), which encodes the time series data from the input.
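The HyperNet/HypoNet split can be sketched at toy scale: one function emits a flat weight vector, which is reshaped into the parameters of a small model evaluated at timestamps. The tiny sizes and linear "networks" here are stand-ins for the real architecture and its 7,500-value embedding:

```python
# Toy sketch of the HyperNet/HypoNet split for a linear series.
def hypernet(series):
    # Stand-in "network": derive HypoNet weights (slope, intercept) directly.
    slope = series[1] - series[0]
    return [slope, series[0]]                  # flat weight vector

def hyponet(weights, t):
    slope, intercept = weights                 # reshape flat vector into params
    return slope * t + intercept               # INR: timestamp -> value

series = [2.0, 5.0, 8.0]
w = hypernet(series)
reconstructed = [hyponet(w, t) for t in range(len(series))]
```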
Cloud Intelligence/AIOps – Infusing AI into Cloud Computing Systems (Microsoft)

AIOps is a rapidly emerging technology trend and an interdisciplinary research direction across system, software engineering, and AI/ML communities. With years of research on Cloud Intelligence, Microsoft Research has built up rich technology assets in detection, diagnosis, prediction, and optimization.