Column: Machine Learning Research Society
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center of Peking University, aiming to build a platform where machine learning practitioners can exchange ideas. Besides sharing timely news from the field, the society also holds lectures by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other activities.

[Learning] ZOOpt 0.1, a Python toolkit for derivative-free optimization

Machine Learning Research Society · Official Account · AI · 2017-04-19 18:57

Abstract
 

Reposted from: eyounx_俞扬

A Python package for zeroth-order optimization (ZOOpt).


Zeroth-order optimization (a.k.a. derivative-free optimization or black-box optimization) does not rely on the gradient of the objective function; instead, it learns from sampled points of the search space. It is suitable for optimizing functions that are non-differentiable, have many local minima, or are even unknown and only testable by evaluation.
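To make "learning from samples" concrete, here is a minimal sketch of the simplest derivative-free strategy, plain random search: evaluate the objective at random points and keep the best one. This is a generic illustration, not the algorithm ZOOpt itself uses (ZOOpt implements more sample-efficient methods).

```python
import random

def random_search(f, dim, bounds, budget):
    """Minimal derivative-free optimizer: sample points uniformly at
    random within `bounds` and keep the best solution seen so far."""
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = [random.uniform(*bounds) for _ in range(dim)]
        y = f(x)  # only function evaluations, no gradients
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Minimize the sphere function sum(x_i^2) over [-1, 1]^2
best_x, best_y = random_search(lambda x: sum(v * v for v in x),
                               dim=2, bounds=(-1, 1), budget=1000)
```

With enough samples, `best_y` approaches the minimum 0, but the number of evaluations needed grows quickly with the dimension, which is why dedicated derivative-free methods exist.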


A quick example

We define the Ackley function for minimization using Theano:

import math
import theano
import theano.tensor as T

x = T.dvector('x')
# Shifted Ackley function: global minimum 0 at x = 0.2 * ones.
# Note: the first term needs the *mean* of the squared coordinates,
# so we use T.sqr(...).mean() rather than a dot product (whose .mean()
# would be a no-op on a scalar and omit the division by the dimension).
f = theano.function(
    [x],
    -20 * T.exp(-0.2 * T.sqrt(T.sqr(x - 0.2).mean()))
    - T.exp(T.cos(2 * math.pi * (x - 0.2)).mean())
    + math.e + 20,
)
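Theano is only used here to define the objective; since the library is no longer maintained, an equivalent plain-NumPy definition of the same shifted Ackley function (an assumption of mine, not part of the original post) may be easier to run today:

```python
import numpy as np

def ackley(x, bias=0.2):
    """Shifted Ackley function: global minimum 0 at x = bias * ones."""
    x = np.asarray(x, dtype=float)
    return (-20 * np.exp(-0.2 * np.sqrt(np.mean((x - bias) ** 2)))
            - np.exp(np.mean(np.cos(2 * np.pi * (x - bias))))
            + np.e + 20)
```

Any callable with this signature can be wrapped into a ZOOpt `Objective` in the same way as the Theano function below.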

The Ackley function is a classical benchmark with many local minima. In two dimensions, its surface is a deep global minimum surrounded by a rippled landscape (figure from Wikipedia omitted).

Then use ZOOpt to minimize the 100-dimensional Ackley function:

from zoopt import Dimension, Objective, Parameter, Opt, Solution
dim = 100  # dimension
# setup objective
obj = Objective(lambda s: f(s.get_x()),
                Dimension(dim, [[-1, 1]] * dim, [True] * dim))
# perform optimization
solution = Opt.min(obj, Parameter(budget=100 * dim))
# print result
solution.print_solution()

After a few seconds, the optimization is done. We can then visualize the optimization progress:

from matplotlib import pyplot
pyplot.plot(obj.get_history_bestsofar())
pyplot.savefig('figure.png')

Link:

https://github.com/eyounx/ZOOpt


Original post:

http://weibo.com/1852429147/EFcDc704I?type=comment#_rnd1492598342556
