AutoML-Related Papers

This article is a translation of Awesome-AutoML-Papers.

1. Introduction to AutoML

Machine learning has achieved considerable success in recent years, and an ever-growing number of disciplines rely on it. However, this success crucially depends on human machine learning experts to perform the following tasks:

  • Preprocess the data
  • Select appropriate features
  • Select an appropriate model family
  • Optimize model hyperparameters
  • Postprocess machine learning models
  • Critically analyze the results obtained
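Of these steps, hyperparameter optimization is usually the first to be automated. As a toy sketch, plain random search can already replace hand tuning; note that `train_and_score` and its loss landscape are invented for the example, where a real system would train and validate an actual model:

```python
import random

def train_and_score(lr, reg):
    # Stand-in for an expensive train/validate cycle: returns a validation
    # "error" that is lowest near lr = 0.1, reg = 0.01 (a toy landscape;
    # a real system would fit an actual model here).
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials=50, seed=42):
    random.seed(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # Sample hyperparameters log-uniformly, since useful scales often
        # span several orders of magnitude.
        params = {
            "lr": 10 ** random.uniform(-3, 0),    # learning rate: 0.001 .. 1.0
            "reg": 10 ** random.uniform(-4, -1),  # regularization: 0.0001 .. 0.1
        }
        score = train_and_score(**params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search()
print("best params:", best_params, "score:", best_score)
```

Random search is a common baseline in the hyperparameter-optimization papers listed below; Bayesian optimization improves on it by choosing each new trial based on the results of all previous ones.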

Since the complexity of these tasks is often beyond the reach of non-experts, the rapid growth of machine learning applications has created a strong demand for off-the-shelf machine learning methods that are easy to use and require no expert knowledge. We call the resulting research area, which targets the progressive automation of machine learning, AutoML.

AutoML draws on many areas of machine learning, prominently including:

  • Bayesian optimization
  • Regression models for structured data and big data
  • Meta learning
  • Transfer learning
  • Combinatorial optimization
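To make the first of these techniques concrete, here is a toy sketch of the Bayesian-optimization loop in plain Python. A simple kernel smoother stands in for the Gaussian-process surrogate that real libraries use, and the objective function is invented for the example:

```python
import math
import random

def objective(x):
    # Toy "expensive" black-box function to minimize (optimum at x = 2).
    return (x - 2.0) ** 2 + 0.5

def surrogate(x, xs, ys, h=0.5):
    # Kernel smoother standing in for a Gaussian process: a weighted-mean
    # prediction plus a crude uncertainty that shrinks near evaluated points.
    weights = [math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi in xs]
    total = sum(weights)
    if total > 1e-12:
        mean = sum(w * y for w, y in zip(weights, ys)) / total
    else:
        mean = 0.0
    uncertainty = 1.0 / (1.0 + total)
    return mean, uncertainty

def bayes_opt(n_iter=20, kappa=2.0, seed=0):
    random.seed(seed)
    candidates = [i * 0.05 for i in range(101)]        # search space [0, 5]
    xs = [random.uniform(0.0, 5.0) for _ in range(3)]  # initial random design
    ys = [objective(x) for x in xs]
    for _ in range(n_iter):
        # Lower-confidence-bound acquisition: prefer candidates with a low
        # predicted mean (exploitation) or high uncertainty (exploration).
        def acquisition(x):
            mean, unc = surrogate(x, xs, ys)
            return mean - kappa * unc
        x_next = min(candidates, key=acquisition)
        xs.append(x_next)
        ys.append(objective(x_next))
    best_y, best_x = min(zip(ys, xs))
    return best_x, best_y

best_x, best_y = bayes_opt()
print("best x:", best_x, "best y:", best_y)
```

The loop alternates between fitting the surrogate to all evaluations so far and picking the candidate that minimizes the acquisition, trading off exploitation (low predicted mean) against exploration (high uncertainty).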

2. Contents

Papers

Automated Feature Engineering

  • Expand Reduce

    • 2017 | AutoLearn — Automated Feature Generation and Selection | Ambika Kaul, et al. | ICDM | PDF
    • 2017 | One button machine for automating feature engineering in relational databases | Hoang Thanh Lam, et al. | arXiv | PDF
    • 2016 | Automating Feature Engineering | Udayan Khurana, et al. | NIPS | PDF
    • 2016 | ExploreKit: Automatic Feature Generation and Selection | Gilad Katz, et al. | ICDM | PDF
    • 2015 | Deep Feature Synthesis: Towards Automating Data Science Endeavors | James Max Kanter, Kalyan Veeramachaneni | DSAA | PDF
  • Hierarchical Organization of Transformations

    • 2016 | Cognito: Automated Feature Engineering for Supervised Learning | Udayan Khurana, et al. | ICDMW | PDF
  • Meta Learning

    • 2017 | Learning Feature Engineering for Classification | Fatemeh Nargesian, et al. | IJCAI | PDF
  • Reinforcement Learning

    • 2017 | Feature Engineering for Predictive Modeling using Reinforcement Learning | Udayan Khurana, et al. | arXiv | PDF
    • 2010 | Feature Selection as a One-Player Game | Romaric Gaudel, Michele Sebag | ICML | PDF

Architecture Search

  • Evolutionary Algorithms

    • 2017 | Large-Scale Evolution of Image Classifiers | Esteban Real, et al. | PMLR | PDF
    • 2002 | Evolving Neural Networks through Augmenting Topologies | Kenneth O. Stanley, Risto Miikkulainen | Evolutionary Computation | PDF
  • Local Search

    • 2017 | Simple and Efficient Architecture Search for Convolutional Neural Networks | Thomas Elsken, et al. | ICLR | PDF
  • Meta Learning

    • 2016 | Learning to Optimize | Ke Li, Jitendra Malik | arXiv | PDF
  • Reinforcement Learning

    • 2018 | Efficient Neural Architecture Search via Parameter Sharing | Hieu Pham, et al. | arXiv | PDF
    • 2017 | Neural Architecture Search with Reinforcement Learning | Barret Zoph, Quoc V. Le | ICLR | PDF
  • Transfer Learning

    • 2017 | Learning Transferable Architectures for Scalable Image Recognition | Barret Zoph, et al. | arXiv | PDF

Frameworks

  • 2017 | Google Vizier: A Service for Black-Box Optimization | Daniel Golovin, et al. | KDD | PDF
  • 2017 | ATM: A Distributed, Collaborative, Scalable System for Automated Machine Learning | T. Swearingen, et al. | IEEE | PDF
  • 2015 | AutoCompete: A Framework for Machine Learning Competitions | Abhishek Thakur, et al. | ICML | PDF

Hyperparameter Optimization

  • Bayesian Optimization

    • 2016 | Bayesian Optimization with Robust Bayesian Neural Networks | Jost Tobias Springenberg, et al. | NIPS | PDF
    • 2016 | Scalable Hyperparameter Optimization with Products of Gaussian Process Experts | Nicolas Schilling, et al. | PKDD | PDF
    • 2016 | Taking the Human Out of the Loop: A Review of Bayesian Optimization | Bobak Shahriari, et al. | IEEE | PDF
    • 2016 | Towards Automatically-Tuned Neural Networks | Hector Mendoza, et al. | JMLR | PDF
    • 2016 | Two-Stage Transfer Surrogate Model for Automatic Hyperparameter Optimization | Martin Wistuba, et al. | PKDD | PDF
    • 2015 | Efficient and Robust Automated Machine Learning | PDF
    • 2015 | Hyperparameter Optimization with Factorized Multilayer Perceptrons | Nicolas Schilling, et al. | PKDD | PDF
    • 2015 | Hyperparameter Search Space Pruning - A New Component for Sequential Model-Based Hyperparameter Optimization | Martin Wistuba, et al. | PDF
    • 2015 | Joint Model Choice and Hyperparameter Optimization with Factorized Multilayer Perceptrons | Nicolas Schilling, et al. | ICTAI | PDF
    • 2015 | Learning Hyperparameter Optimization Initializations | Martin Wistuba, et al. | DSAA | PDF
    • 2015 | Scalable Bayesian optimization using deep neural networks | Jasper Snoek, et al. | ACM | PDF
    • 2015 | Sequential Model-free Hyperparameter Tuning | Martin Wistuba, et al. | ICDM | PDF
    • 2013 | Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms | PDF
    • 2013 | Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures | J. Bergstra | JMLR | PDF
    • 2012 | Practical Bayesian Optimization of Machine Learning Algorithms | PDF
    • 2011 | Sequential Model-Based Optimization for General Algorithm Configuration(extended version) | PDF
  • Evolutionary Algorithms

    • 2018 | Autostacker: A Compositional Evolutionary Learning System | Boyuan Chen, et al. | arXiv | PDF
    • 2017 | Large-Scale Evolution of Image Classifiers | Esteban Real, et al. | PMLR | PDF
  • Lipschitz Functions

    • 2017 | Global Optimization of Lipschitz functions | Cédric Malherbe, Nicolas Vayatis | arXiv | PDF
  • Local Search

    • 2009 | ParamILS: An Automatic Algorithm Configuration Framework | Frank Hutter, et al. | JAIR | PDF
  • Meta Learning

    • 2008 | Cross-Disciplinary Perspectives on Meta-Learning for Algorithm Selection | PDF
  • Particle Swarm Optimization

    • 2017 | Particle Swarm Optimization for Hyper-parameter Selection in Deep Neural Networks | Pablo Ribalta Lorenzo, et al. | GECCO | PDF
    • 2008 | Particle Swarm Optimization for Parameter Determination and Feature Selection of Support Vector Machines | Shih-Wei Lin, et al. | Expert Systems with Applications | PDF
  • Random Search

    • 2016 | Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization | Lisha Li, et al. | arXiv | PDF
    • 2012 | Random Search for Hyper-Parameter Optimization | James Bergstra, Yoshua Bengio | JMLR | PDF
    • 2011 | Algorithms for Hyper-parameter Optimization | James Bergstra, et al. | NIPS | PDF
  • Transfer Learning

    • 2016 | Efficient Transfer Learning Method for Automatic Hyperparameter Tuning | Dani Yogatama, Gideon Mann | JMLR | PDF
    • 2016 | Flexible Transfer Learning Framework for Bayesian Optimisation | Tinu Theckel Joy, et al. | PAKDD | PDF
    • 2016 | Hyperparameter Optimization Machines | Martin Wistuba, et al. | DSAA | PDF
    • 2013 | Collaborative Hyperparameter Tuning | Rémi Bardenet, et al. | ICML | PDF

Miscellaneous

  • 2018 | Accelerating Neural Architecture Search using Performance Prediction | Bowen Baker, et al. | ICLR | PDF
  • 2017 | Automatic Frankensteining: Creating Complex Ensembles Autonomously | Martin Wistuba, et al. | SIAM | PDF

Tutorials

Bayesian Optimization

  • 2010 | A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning | PDF

Meta Learning

  • 2008 | Metalearning - A Tutorial | PDF

Articles

Bayesian Optimization

  • 2016 | Bayesian Optimization for Hyperparameter Tuning | Link

Meta Learning

  • 2017 | Why Meta-learning is Crucial for Further Advances of Artificial Intelligence? | Link
  • 2017 | Learning to learn | Link

Slides

Automated Feature Engineering

  • Automated Feature Engineering for Predictive Modeling | Udayan Khurana, et al. | PDF

Hyperparameter Optimization

Bayesian Optimization

  • Bayesian Optimisation | PDF
  • A Tutorial on Bayesian Optimization for Machine Learning | PDF

Books

Meta Learning

  • 2009 | Metalearning - Applications to Data Mining | Springer | PDF

Projects

  • Advisor | Python | Open Source | Code
  • auto-sklearn | Python | Open Source | Code
  • Auto-WEKA | Java | Open Source | Code
  • Hyperopt | Python | Open Source | Code
  • Hyperopt-sklearn | Python | Open Source | Code
  • SigOpt | Python | Commercial | Link
  • SMAC3 | Python | Open Source | Code
  • RoBO | Python | Open Source | Code
  • BayesianOptimization | Python | Open Source | Code
  • Scikit-Optimize | Python | Open Source | Code
  • HyperBand | Python | Open Source | Code
  • BayesOpt | C++ | Open Source | Code
  • Optunity | Python | Open Source | Code
  • TPOT | Python | Open Source | Code
  • ATM | Python | Open Source | Code
  • Cloud AutoML | Python | Commercial | Link
  • H2O | Python | Commercial | Link
  • DataRobot | Python | Commercial | Link
  • MLJAR | Python | Commercial | Link
  • MateLabs | Python | Commercial | Link


Original by MARSGGBO

2018-7-14