ml4se
Machine Learning for Software Engineering
Category Theory for AI

Program
Week 1: Why Category Theory?
Week 2: Essential building blocks: Categories and Functors
Week 3: Categorical Dataflow: Optics and Lenses as data structures for backpropagation
Week 4: Geometric Deep Learning & Naturality
Week 5: Monoids, Monads, Mappings, and LSTMs
No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence

The authors compare fine-tuning and prompt tuning on three code intelligence tasks: defect prediction, code summarization, and code translation. They conclude that prompt tuning is more effective than fine-tuning across different pre-trained models and different programming languages. The advantage of prompt tuning is more pronounced for smaller pre-trained models and in low-resource scenarios: the fewer the training instances, the larger the improvement achieved by prompt tuning. Prompt tuning also shows superior performance on the cross-domain code intelligence task.
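A minimal sketch of the idea behind prompt tuning, assuming a generic frozen Transformer encoder (the class and names here are illustrative, not the paper's code): instead of updating all model weights as fine-tuning does, only a small set of continuous "soft prompt" embeddings prepended to the input (plus a tiny head) is trained.

```python
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    """Prompt-tuning sketch: a frozen encoder plus trainable prompt embeddings."""

    def __init__(self, encoder: nn.Module, embed_dim: int, prompt_len: int = 20, num_labels: int = 2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False              # fine-tuning would leave these trainable
        # the only task-specific parameters: soft prompt and a small classification head
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_labels)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) embeddings of the input code
        batch = token_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompt, token_embeds], dim=1)   # prepend the soft prompt
        hidden = self.encoder(x)                       # (batch, prompt_len + seq_len, embed_dim)
        return self.head(hidden[:, 0])                 # classify from the first position

# toy usage, e.g. for defect prediction over pre-embedded code snippets
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True), num_layers=2)
model = SoftPromptClassifier(encoder, embed_dim=768)
logits = model(torch.randn(4, 128, 768))
```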
Stanford, CS 349M: Machine Learning for Software Engineering

In recent years, tools based on machine learning have become increasingly prevalent in software engineering. The ubiquity of machine learning is an important factor, but just as important is the availability of software engineering data: there are billions of lines of code in public repositories (e.g. on GitHub), there is the change history of that code, there are discussion forums (e.g. Stack Overflow) that contain a wealth of information for developers, companies have access to telemetry on their apps from millions of users, and so on. The scale of software engineering data has allowed machine learning and statistical approaches to enable tools that are beyond the capabilities of traditional, semantics-based approaches. In this graduate seminar, students will learn the various ways in which code and related artifacts can be treated as data, and how various developer tools can be built by applying machine learning to this data. The course consists of discussions of selected research papers, as well as a hands-on project that can be done in small groups.

Prerequisites: Familiarity with basic machine learning, and either CS143 or CS295.
BigCode project

Large Language Models (LLMs) are fast becoming an essential tool for all fields of AI research. One striking feature of these large pre-trained models is that they can be adapted to a wide variety of language tasks, often with very little in-domain data.

BigCode is focused on developing state-of-the-art LLMs for code. Code LLMs enable the completion and synthesis of code, both from other code snippets and natural language descriptions, and work across a wide range of domains, tasks, and programming languages. These models can, for example, assist professional and citizen developers with coding new applications.

BigCode invites AI researchers to collaborate on the following topics:
* A representative evaluation suite for code LLMs, covering a diverse set of tasks and programming languages
* Responsible data governance and development for code LLMs
* Faster training and inference methods for LLMs
Hi everyone! Does anyone know good datasets for the clone detection problem?
Anomaly Detection in Time Series: A Comprehensive Evaluation

The authors collected and re-implemented 71 anomaly detection algorithms from different domains and evaluated them on 976 time series datasets. Paper: https://vldb.org/pvldb/vol15/p1779-wenig.pdf
RING: Repair Is Nearly Generation: Multilingual Program Repair with LLMs (Microsoft)

An LLM-based approach to multilingual repair powered by Codex. It enables a flipped model for AI-assisted programming, in which the user writes code and the assistant suggests fixes for last-mile mistakes. The approach is built on the intuition that fixing such mistakes can be decomposed into three tasks: fault localization, code transformation, and candidate ranking.
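A hedged sketch of that three-stage view, with a placeholder `complete` function standing in for the LLM (the helper names, prompt format, and ranking heuristic here are illustrative assumptions, not the actual RING implementation):

```python
import re
from typing import Callable, List

def parse_line_number(error_message: str) -> int:
    """Toy fault localization: trust the 'line N' reported by the toolchain."""
    match = re.search(r"line (\d+)", error_message)
    return int(match.group(1)) if match else 1

def repair(code: str,
           error_message: str,
           complete: Callable[[str], List[str]]) -> List[str]:
    """Illustrative last-mile repair pipeline: localize, transform, rank."""
    # 1. Fault localization
    fault_line = parse_line_number(error_message)

    # 2. Code transformation: prompt an LLM with the buggy code and the error,
    #    asking it to rewrite the offending region (few-shot examples omitted)
    prompt = (
        "### Buggy code\n" + code + "\n"
        "### Error\n" + error_message + "\n"
        f"### Fixed code (edit around line {fault_line})\n"
    )
    candidates = complete(prompt)          # several sampled completions

    # 3. Candidate ranking: prefer fixes that change as few lines as possible;
    #    a real system would also re-run the compiler/tests on each candidate
    def changed_lines(candidate: str) -> int:
        return sum(a != b for a, b in zip(candidate.splitlines(), code.splitlines()))

    return sorted(set(candidates), key=changed_lines)
```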
Transformers in Time Series: A Survey

The application of the Transformer architecture looks natural for time series. In a sense, time series are an even more native domain for seq2seq approaches than CV or NLP. What peculiarities of time series should be taken into account when using Transformers? Quite a few. Let's list some.
- it is often necessary to be able to process very long sequences (periodicity, seasonality)
- sequences in time series often carry timestamp labels that contain more information than just a position index as in NLP or CV, for example the day of the week or holidays (see the sketch after this list)
- sometimes step-by-step generation is inefficient, and the decoder needs to be able to generate whole subsequences in one pass
- time series pose special tasks: spatio-temporal forecasting, event forecasting, anomaly detection, etc.
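As a small illustration of the timestamp point above (a generic sketch, not tied to any particular model from the survey): calendar attributes of each timestamp can be embedded and summed with the usual value and positional embeddings.

```python
import torch
import torch.nn as nn

class TimeFeatureEmbedding(nn.Module):
    """Embeds calendar attributes of each timestamp (day of week, month, holiday flag)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.day_of_week = nn.Embedding(7, d_model)
        self.month = nn.Embedding(12, d_model)
        self.holiday = nn.Embedding(2, d_model)   # 0 = ordinary day, 1 = holiday

    def forward(self, dow: torch.Tensor, month: torch.Tensor, holiday: torch.Tensor) -> torch.Tensor:
        # each argument: (batch, seq_len) integer tensor derived from the timestamps
        return self.day_of_week(dow) + self.month(month) + self.holiday(holiday)

# these timestamp embeddings are typically added to value and positional embeddings
# before the sequence is fed to the Transformer encoder
```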

The survey systematizes approaches to applying Transformers to time series. In addition, the authors benchmark different algorithms on the ETTm2 dataset.
Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

As you know, there are several severe issues with the Transformer that prevent it from being directly applied to long-sequence time-series forecasting, including:
- quadratic time complexity
- high memory usage
- inherent limitation of the encoder-decoder architecture.
To address these issues, the authors design an efficient transformer-based model named Informer.

Informer includes:
- a ProbSparse self-attention mechanism, which achieves O(L log L) time and memory complexity and has comparable performance on sequence dependency alignment
- self-attention distilling, which highlights dominating attention by halving the cascading layer input and efficiently handles extremely long input sequences
- a generative-style decoder, which, while conceptually simple, predicts long time-series sequences in a single forward pass rather than step by step, drastically improving the inference speed of long-sequence predictions.
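A rough sketch of the idea behind ProbSparse attention (simplified: the real mechanism estimates the sparsity measure from a log(L)-sized sample of keys to keep the O(L log L) bound, while this toy version scores all keys):

```python
import torch

def probsparse_top_queries(Q: torch.Tensor, K: torch.Tensor, u: int) -> torch.Tensor:
    """Return indices of the u most 'active' queries.

    Q: (L_q, d), K: (L_k, d). The sparsity measure is
    max(q.K / sqrt(d)) - mean(q.K / sqrt(d)): queries whose attention
    distribution is far from uniform matter most.
    """
    scores = Q @ K.T / (Q.size(-1) ** 0.5)                  # (L_q, L_k)
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)
    return sparsity.topk(min(u, Q.size(0))).indices

# only the selected queries attend normally; the remaining ones reuse an average
# of the values, which is how Informer avoids the full L x L attention matrix
```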
COMBO: Pre-training Representations of Binary Code Using Contrastive Learning

Binary code analysis is critical to applications in reverse engineering and computer security tasks where source code is not available.

In the paper, the authors propose a COntrastive learning Model for Binary cOde Analysis, or COMBO, that incorporates source code and comment information into binary code during representation learning. Specifically, COMBO has three components:
- a primary contrastive learning method for cold-start pre-training (a minimal sketch of such an objective follows this list)
- a simplex interpolation method to incorporate source code, comments, and binary code
- an intermediate representation learning algorithm to provide binary code embeddings.
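As promised above, a minimal sketch of the kind of contrastive objective used for such pre-training (a standard InfoNCE loss over paired embeddings; pairing each binary function with its own source/comments is the assumption here, not COMBO's exact formulation):

```python
import torch
import torch.nn.functional as F

def info_nce(binary_emb: torch.Tensor, source_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss: each binary function's embedding should be closest to
    the embedding of its own source code/comments within the batch."""
    b = F.normalize(binary_emb, dim=-1)            # (batch, d)
    s = F.normalize(source_emb, dim=-1)            # (batch, d)
    logits = b @ s.T / temperature                 # (batch, batch) similarity matrix
    targets = torch.arange(b.size(0), device=b.device)
    return F.cross_entropy(logits, targets)        # diagonal entries are the positives
```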

The effectiveness of the pre-trained representations produced by COMBO is evaluated using three indicative downstream tasks relating to binary code:
- algorithmic functionality classification
- binary code similarity
- vulnerability detection.
Bug Analysis in Jupyter Notebook Projects: An Empirical Study

The paper presents a systematic study of the bugs and challenges that Jupyter practitioners face, based on a large-scale empirical investigation. The authors mined 14,740 commits from 105 open-source GitHub projects containing Jupyter notebook code. They then analyzed 30,416 Stack Overflow posts, which gave them insight into the bugs practitioners face when developing Jupyter notebook projects.

• RQ1. What types of bugs are more frequent?
• RQ2. What are the root causes of bugs?
• RQ3. What are the frequent impacts of bugs?
• RQ4. What challenges do data scientists face in practice on Jupyter Projects?
UL2: Unifying Language Learning Paradigms (Google)

A novel language pre-training paradigm called Unified Language Learner (UL2) frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations.
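A rough sketch of what one such denoising objective looks like (generic span corruption with sentinel tokens; the parameters are illustrative, not the paper's exact R/S/X-denoiser configurations; a mixture-of-denoisers would sample among several settings of them):

```python
import random
from typing import List, Tuple

def corrupt_spans(tokens: List[str],
                  corruption_rate: float = 0.15,
                  mean_span_len: int = 3) -> Tuple[List[str], List[str]]:
    """Mask contiguous spans with sentinels; the model must reconstruct the targets.

    Different denoisers vary corruption_rate and mean_span_len
    (short/frequent spans vs. long/rare spans vs. prefix continuation).
    """
    inputs, targets = [], []
    i, sentinel = 0, 0
    while i < len(tokens):
        # start a span with probability chosen so that ~corruption_rate of tokens get masked
        if random.random() < corruption_rate / mean_span_len:
            span_len = max(1, round(random.gauss(mean_span_len, 1)))
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(tokens[i:i + span_len])
            sentinel += 1
            i += span_len
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# example: corrupt_spans("the model has to recover missing sub sequences".split())
```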
What is it like to program with artificial intelligence? (Microsoft)

The authors explore how programming with large language models (LLM-assisted programming) is similar to, and differs from, prior conceptualisations of programmer assistance. They find that while LLM-assisted programming shares some properties of compilation, pair programming, and programming via search and reuse, there are fundamental differences both in the technical possibilities as well as the practical experience. Thus, LLM-assisted programming ought to be viewed as a new way of programming with its own distinct properties and challenges.
Forest: Structural Code Editing with Multiple Cursors (ETH)

Forest allows a single action to be performed simultaneously in multiple program locations, thus supporting complex refactorings.
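Not Forest's actual mechanism, but a tiny Python illustration of the underlying idea of applying one structural edit at many program locations at once (here: renaming every occurrence of an identifier via the AST rather than via text search):

```python
import ast  # ast.unparse requires Python 3.9+

class Rename(ast.NodeTransformer):
    """One logical edit applied simultaneously at every matching AST location."""

    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.AST:
        if node.id == self.old:
            node.id = self.new
        return node

code = "total = 0\nfor x in xs:\n    total = total + x\nprint(total)"
tree = Rename("total", "acc").visit(ast.parse(code))
print(ast.unparse(tree))   # all four occurrences are renamed in one action
```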