Column: Machine Learning Research Society
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center at Peking University, aiming to build a platform where machine learning practitioners can exchange ideas. Beyond sharing timely news from the field, the society hosts talks by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other events.

[Learning] The Creator of Keras: The Future of Deep Learning

Machine Learning Research Society · WeChat official account · AI · 2017-07-19 23:51

Main text




Reposted from: 网路冷眼

This post is adapted from Section 3 of Chapter 9 of my book, Deep Learning with Python (Manning Publications). It is part of a series of two posts on the current limitations of deep learning, and its future. You can read the first part here: The Limitations of Deep Learning.


Given what we know of how deep nets work, of their limitations, and of the current state of the research landscape, can we predict where things are headed in the medium term? Here are some purely personal thoughts. Note that I don't have a crystal ball, so a lot of what I anticipate might fail to become reality. This is a completely speculative post. I am sharing these predictions not because I expect them to be proven completely right in the future, but because they are interesting and actionable in the present.


At a high level, the main directions in which I see promise are:

  • Models closer to general-purpose computer programs, built on top of far richer primitives than our current differentiable layers—this is how we will get to reasoning and abstraction, the fundamental weakness of current models.

  • New forms of learning that make the above possible—allowing models to move away from just differentiable transforms.

  • Models that require less involvement from human engineers—it shouldn't be your job to tune knobs endlessly.

  • Greater, systematic reuse of previously learned features and architectures; meta-learning systems based on reusable and modular program subroutines.
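The contrast drawn in the first bullet can be made concrete. The sketch below is illustrative only (it is not from the post, and the function names are hypothetical): today's deep models are fixed chains of differentiable transforms through which gradients flow end to end, whereas the "program-like" models envisioned here would mix learned components with discrete primitives such as data-dependent branching and looping, which plain backpropagation cannot differentiate through.

```python
# Today's deep models: a fixed pipeline of differentiable transforms.
# Every step is a smooth operation, so gradients flow end to end.
def differentiable_model(x, weights):
    for w in weights:
        x = max(0.0, w * x)  # scalar stand-in for a ReLU-activated layer
    return x

# A "program-like" model: learned weights mixed with discrete control
# flow. The number of steps depends on the data, so the function is
# piecewise and not differentiable in the ordinary sense.
def program_like_model(x, weights, threshold=1.0, max_steps=10):
    steps = 0
    while x < threshold and steps < max_steps:  # data-dependent loop
        x = max(0.0, weights[steps % len(weights)] * x)
        steps += 1
    return x, steps

print(differentiable_model(0.5, [1.2, 1.3]))
print(program_like_model(0.1, [1.5, 2.0]))
```

The point of the toy example is that the second model's structure (how many steps it runs) is itself an output of the computation, which is exactly the kind of richer primitive that would require new forms of learning beyond gradient descent over differentiable transforms.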


Additionally, do note that these considerations are not specific to the sort of supervised learning that has been the bread and butter of deep learning so far—rather, they are applicable to any form of machine learning, including unsupervised, self-supervised, and reinforcement learning. It is not fundamentally important where your labels come from or what your training loop looks like; these different branches of machine learning are just different facets of the same construct.
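The claim that the branches differ mainly in where the labels come from can be illustrated with a toy sketch (mine, not the post's; the scalar model and names are hypothetical): the same gradient-descent update serves both a supervised setting, where targets are human-provided labels, and a self-supervised one, where targets are derived from the data itself.

```python
# One generic training step for a scalar linear model pred = weight * x,
# trained with squared error. The branches of ML reuse this loop; only
# the source of `target` changes.
def training_step(weight, x, target, lr=0.1):
    pred = weight * x                  # the "model"
    grad = 2 * (pred - target) * x     # d(loss)/d(weight) for squared error
    return weight - lr * grad          # gradient-descent update

# Supervised: targets are human-provided labels.
w = 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0)]:
    w = training_step(w, x, y)

# Self-supervised: targets are extracted from the data itself
# (here, predict the next element of a sequence from the current one).
seq = [1.0, 2.0, 4.0, 8.0]
w2 = 0.0
for cur, nxt in zip(seq, seq[1:]):
    w2 = training_step(w2, cur, nxt)
```

In both loops the update rule is identical; only the pairing of inputs with targets differs, which is the sense in which these branches are facets of one construct.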


Let's dive in.


Link:

https://blog.keras.io/the-future-of-deep-learning.html


Original post link:

https://m.weibo.cn/5501429448/4131227164344283
