ml4se
Machine Learning for Software Engineering
QP4SE workshop, call for papers: https://sites.google.com/view/qp4se/call-for-papers

The main focus topics of the workshop are:
- Quantum optimization for software engineering
- Quantum machine learning for software engineering
- Quantum artificial intelligence for software engineering
- Quantum solutions to computational problems in software engineering
- Quantum software and algorithms
- Quantum programming for detecting quality issues in software engineering
- Quantum programming languages
- Quantum empirical evaluations
- Industrial applications on quantum programming for software engineering

Important Dates:
- Paper submission: July 15th, 2022
- Notification: August 15th, 2022
- Camera-ready: September 9th, 2022
BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks.
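The key idea behind the last sentence is that any task can be rephrased as plain text continuation. A minimal sketch of that casting, using an illustrative few-shot prompt format (the function and formatting conventions here are assumptions for the example, not BLOOM's official API):

```python
def cast_as_generation(instruction, examples, query):
    """Turn a task into a text-continuation prompt: the model is asked
    to continue the text, and the continuation is the answer."""
    lines = [instruction, ""]
    for src, tgt in examples:
        lines.append(f"Input: {src}")
        lines.append(f"Output: {tgt}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model's continuation completes the task
    return "\n".join(lines)

# Example: translation, a task the model was never explicitly trained for.
prompt = cast_as_generation(
    "Translate French to English.",
    [("Bonjour", "Hello"), ("Merci", "Thank you")],
    "Au revoir",
)
```

The resulting string is then fed to the model as an ordinary generation prompt; whatever it emits after the final "Output:" is taken as the task result.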
MLGOPerf: An ML Guided Inliner to Optimize Performance (Huawei)

MLGOPerf is the first end-to-end framework capable of optimizing performance using LLVM's ML-Inliner.

The experimental results show MLGOPerf achieves speedups of up to 1.8% and 2.2% over LLVM's -O3 optimization when trained for performance on the SPEC CPU2006 and Cbench benchmarks, respectively. Furthermore, the proposed approach provides up to 26% more opportunities to autotune code regions on those benchmarks, which translates into an additional 3.7% speedup.
CodeT: Code Generation with Generated Tests (Microsoft)

The work explores the use of pre-trained language models to automatically generate test cases. The method is titled CodeT (Code generation with generated Tests). CodeT executes the candidate code solutions against the generated test cases and then chooses the best solution based on dual execution agreement: agreement with both the generated test cases and the other generated solutions.
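The dual execution agreement can be illustrated with a toy sketch (my simplification, not the paper's implementation): candidates are grouped by the exact set of generated tests they pass, and each group is scored by the number of agreeing solutions times the number of tests passed.

```python
# Toy sketch of CodeT-style dual execution agreement. Solutions that
# agree with each other AND pass many generated tests rank highest.

def dual_execution_agreement(solutions, tests):
    """solutions: list of callables; tests: list of (input, expected)."""
    groups = {}  # frozenset of passed test indices -> agreeing solutions
    for sol in solutions:
        passed = set()
        for i, (arg, expected) in enumerate(tests):
            try:
                if sol(arg) == expected:
                    passed.add(i)
            except Exception:
                pass  # a crashing candidate simply fails that test
        groups.setdefault(frozenset(passed), []).append(sol)

    # Score = (solutions in group) x (tests the group passes).
    best_key = max(groups, key=lambda k: len(groups[k]) * len(k))
    return groups[best_key][0]

# Example: three generated "solutions" for squaring, one buggy.
cands = [lambda x: x * x, lambda x: x ** 2, lambda x: 2 * x]
gen_tests = [(2, 4), (3, 9), (0, 0)]
best = dual_execution_agreement(cands, gen_tests)  # picks a squaring candidate
```

Here the two correct candidates form a consensus group passing all three tests, which outscores the lone buggy candidate even though `2 * x` happens to pass two of them.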
ESEC/FSE 2023
https://conf.researchr.org/home/fse-2023
Sat 11 - Fri 17 November 2023 San Francisco, California, United States

Thu 26 Jan 2023 Research Papers Paper registration
Thu 2 Feb 2023 Research Papers Full paper submission
Thu 4 May 2023 Research Papers Initial notification
Thu 29 Jun 2023 Research Papers Revised manuscript submissions (major revisions only)
Thu 27 Jul 2023 Research Papers Final notification for major revisions
Thu 24 Aug 2023 Research Papers Camera ready
Amazon CodeWhisperer is a machine learning (ML)-powered service that helps improve developer productivity by generating code recommendations based on developers' natural-language comments and the code in the integrated development environment (IDE).
List of code generation models
So Much in So Little: Creating Lightweight Embeddings of Python Libraries (JetBrains, Huawei)

- Python library embeddings
- A prototype tool for suggesting relevant libraries for a given project
The Universal Approximation Theorem for neural networks
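The theorem states that a feed-forward network with a single hidden layer and enough nonlinear units can approximate any continuous function on a compact interval to arbitrary precision. A constructive toy demonstration with ReLU units (the interpolation-based construction below is one illustrative choice, not the canonical proof):

```python
import math

def relu(z):
    return max(0.0, z)

def build_relu_net(f, a, b, n_hidden):
    """Build a one-hidden-layer ReLU network that linearly
    interpolates f at n_hidden + 1 evenly spaced knots on [a, b]."""
    h = (b - a) / n_hidden
    xs = [a + i * h for i in range(n_hidden + 1)]
    ys = [f(x) for x in xs]
    slopes = [(ys[i + 1] - ys[i]) / h for i in range(n_hidden)]
    # The coefficient of ReLU(x - xs[i]) is the change in slope at knot i.
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1]
                            for i in range(1, n_hidden)]
    bias = ys[0]

    def net(x):
        return bias + sum(c * relu(x - xs[i]) for i, c in enumerate(coeffs))

    return net

# Approximate sin on [0, pi] with 50 hidden units.
net = build_relu_net(math.sin, 0.0, math.pi, 50)
max_err = max(abs(net(x) - math.sin(x))
              for x in [i * math.pi / 1000 for i in range(1001)])
```

Adding more hidden units shrinks the error: for this piecewise-linear construction the worst-case error falls roughly quadratically in the knot spacing, which is the intuition the theorem makes precise for general continuous functions.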