Column: Call4Papers
Dedicated to helping researchers publish their work by providing timely deadlines for well-known conferences and call-for-papers information from journals in every field.

Artificial Intelligence | 4 SCI Journal Special Issue Announcements

Call4Papers · Official Account · Research · 2020-10-28 22:38

Main Text

Artificial Intelligence

Neural Networks

Special Issue on Artificial Intelligence and Brain Science

Full-paper deadline: 2021-01-10
Impact factor: 5.785
CCF rank: B
CAS JCR partition:
• Major category: Engineering & Technology - Zone 1
• Subcategory: Computer Science, Artificial Intelligence - Zone 2
• Subcategory: Neuroscience - Zone 2
Website: http://www.journals.elsevier.com/neural-networks/



Recent advances in "deep learning" have realized artificial intelligence (AI) that surpasses humans in certain tasks such as visual object recognition and game playing. Today's AI, however, still lacks the versatility and flexibility of human intelligence, which motivates AI researchers to learn the brain's working principles. Neuroscientists also need the help of AI in making sense of the massive data produced by sequencing, imaging, and other techniques. This special issue aims to capture recent advances at the crossing forefront of AI and neuroscience and to point to the next targets in creating brain-like intelligence and further advancing neuroscience.

This special issue features perspective articles based on the discussions at the International Symposium on Artificial Intelligence and Brain Science, held online in October 2020 with leading speakers from both fields and more than a thousand registrants from around the globe (http://www.brain-ai.jp/symposium2020/). We call for submissions of papers aiming at a fruitful fusion of AI and brain science.

Topics of interest include, but are not limited to:

Brain-inspired Artificial Intelligence

Deep Learning

Reinforcement Learning

World Model Learning and Inference

Metacognition and Metalearning

AI for Neuroscience

Neuromorphic Technologies

Social Impact and Ethics of Neuro-AI Technologies



Artificial Intelligence

Applied Soft Computing

Special Issue on Predictive Intelligence: Humans Meet Artificial Intelligence

Full-paper deadline: 2021-05-31
Impact factor: 4.873
CCF rank: none
CAS JCR partition:
• Major category: Engineering & Technology - Zone 2
• Subcategory: Computer Science, Artificial Intelligence - Zone 2
• Subcategory: Computer Science, Interdisciplinary Applications - Zone 2
Website: http://www.journals.elsevier.com/applied-soft-computing/



Artificial Intelligence (AI) has seized the attention of the business world. AI is the next step on the journey from big data to full automation. Human needs are the motivation behind improvements in computing paradigms. Examples include collecting brainwave data via "wearables" and using that information to monitor health and predict issues, tracking the movements of mobile phones on roads to predict traffic jams (Google Maps), and using natural language processing to learn and "predict" correct spelling and offer human-like speech (Amazon Alexa, Apple Siri). The more data that is collected, the wider the variety of predictions that can be offered. Each of these examples reflects an implicit or explicit human need or expectation, and each is an attempt to satisfy that need via a specific approach. However, humans expect more as technology develops. To this end, AI continuously interacts with us by simulating our thinking patterns and behaviors and by bringing other relevant information into play. Given the number of similar studies in this field, we suggest the introduction of a new computing paradigm, "Predictive Intelligence."

“Predictive Intelligence” utilizes three types of data: (1) training data for building the AI model, (2) input data for prediction functionality, and (3) feedback data for tuning the model parameters and improving the prediction accuracy. Strong predictions also serve as inputs that are factored into subsequent decisions. The economic field has developed a reliable framework that aids in the understanding of how decisions are made. Recent advances in prediction technology have created implications that are not well-understood, and decision theory derived from economics can provide deeper insight. “Predictive Intelligence” outperforms humans when the complex interactions of various dimensions are considered, especially when huge amounts of data are involved. Increasing the number of interacting dimensions exposes the progressive limitation of the human ability to make accurate predictions, especially when compared to the abilities of a machine. On the other hand, humans often outperform machines—especially when small amounts of data are involved—because their ability to understand the process that generates the data gives them a prediction advantage. This phenomenon offers the opportunity to raise challenging issues within the field of computer science.
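The three data roles described above can be made concrete with a toy sketch. The class and method names below are illustrative, not from the call itself: a tiny linear model is (1) fit on training data, (2) queried with input data, and (3) tuned with feedback data.

```python
class PredictiveModel:
    """Toy 1-D linear model y = w * x + b, illustrating the three data roles."""

    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def fit(self, xs, ys):
        # (1) Training data: ordinary least-squares fit of w and b.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        self.w = cov / var
        self.b = my - self.w * mx

    def predict(self, x):
        # (2) Input data: produce a prediction for a new observation.
        return self.w * x + self.b

    def feedback(self, x, y_true, lr=0.1):
        # (3) Feedback data: one gradient step on the squared error,
        # nudging the parameters toward the observed outcome.
        err = self.predict(x) - y_true
        self.w -= lr * err * x
        self.b -= lr * err
```

In a deployed system the feedback step would run continuously as outcomes are observed, which is what lets the prediction accuracy improve over time.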

Machine learning refers to the design and analysis of algorithms with which computers can "learn" automatically, allowing machines to generate rules by analyzing data and employing that data to “predict” unknowns. Machine learning has been applied to solve complex problems in human society for years, and it has been successful because of advances in computing capabilities and sensing technology. As artificial intelligence and soft computing approaches evolve, they will soon have a considerable impact on the field. Recently, deep learning has matured in the field of supervised learning. Machine learning is only incipient in such areas as unsupervised learning and reinforcement learning using methodologies that involve soft computing. Developments in artificial intelligence and high-speed computing performance have brought recent dramatic changes. Thus, deep learning serves as an excellent example of using feature engineering to exceed the limits of machine learning, offering far greater performance and making possible a number of extremely complex applications.

Prediction can be broken into four distinct categories: known knowns, known unknowns, unknown unknowns, and unknown knowns. "Predictive Intelligence" deals extremely well with known knowns. Machine learning works best with rich data. "Predictive Intelligence" is good at filling in the gaps around known unknowns. These are things humans know intuitively, but machines cannot know. The "unknown" is used in the sense of the discovery of something not known previously. Data can be hard to collect because the rarity of some events makes them a challenge to predict. Unlike machines, humans excel at making predictions with small amounts of data. The majority of "deep learning" technologies build on the concept of supervised learning to determine classifiers that allow the system to recognize various data patterns or events. A generative adversarial network (GAN) can also overcome the "too little data" issue that creates the known-unknowns bottleneck. In order to generate a prediction, human experts and knowledge engineers need to inform the machine regarding the kinds of things for which a prediction is valuable. Unprecedented events cannot be predicted by a machine because they have never occurred. Therefore, the machine is disoriented by data of which it is unaware or that is entirely unexpected, i.e., the unknown unknowns. Google Flu Trends (GFT) was a failed attempt at using machine intelligence to predict unknown unknowns. This means that linking predictions and effective predictions can also incur some risks and bias. "Soft computing and metaheuristic algorithms" are applicable to this situation. The concept of unsupervised learning is then used to determine efficacious solutions within a solution space, the infinite space in which the unknown-unknowns issues can be overcome. Finally, the greatest weakness of "Predictive Intelligence" is the unknown knowns, i.e., when the wrong answer is provided with complete confidence that it is actually right. That sends AI down the wrong path. If the decision process by which the data was generated is not fully understood by the machine, prediction failure is likely. Therefore, large-scale incremental learning and transfer learning methods can be used to detect possible knowns from the current knowns and ameliorate this weakness.
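The paragraph above mentions GANs as one way around the "too little data" bottleneck. A full GAN is beyond a short sketch, so the illustration below uses the simplest augmentation idea in the same spirit: synthesizing extra samples by perturbing the few known ones. The function name and parameters are hypothetical.

```python
import random

def augment(samples, n_new, scale=0.05, seed=0):
    """Generate n_new synthetic samples by jittering real feature vectors.

    samples: list of tuples of floats (the small real dataset)
    scale:   standard deviation of the Gaussian noise added per dimension
    """
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        base = rng.choice(samples)          # pick a real sample to perturb
        synth.append(tuple(v + rng.gauss(0.0, scale) for v in base))
    return synth
```

A generative model such as a GAN plays the same role but learns the perturbation distribution from data instead of assuming isotropic noise.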

For this special issue, we solicit original contributions that address challenges and issues relating to the exploitation of soft computing or deep learning methods to build prediction models and resolve situations involving unknown unknowns and unknown knowns. Classification names should not be derived by learning from past knowns but rather from predicting the expected answer. The best predictions are achieved when humans and machines work in combination, as the strengths of each make up for the weaknesses of the other. The main goal of this special issue is to collect manuscripts reporting the latest advances in standards, models, algorithms, technologies and applications, and to highlight the paradigm shifts in this field.

We solicit original contributions that fall within the following topics of interest; each submission must contribute to soft-computing-related methodology:

- Methodologies and Techniques

Adaptive machine learning and soft computing algorithms for data streams

New methods combining soft computing and deep learning

New learning methods involving soft computing concepts for extant architectures and structures of predictive intelligence

Evolutionary and soft computing-based tuning and optimization of predictive intelligence

Metaheuristics aspects and soft computing algorithms in deep learning for improved convergence of predictive intelligence

Robust data augmentation methods for predictive intelligence learning

Faster incremental learning and transfer learning methods for predictive intelligence self-learning

- Human Behavior

Human behavior and user interfaces for human-centered predictive intelligence

Human participation and social sensing for human-centered predictive intelligence

The applications of personality and social psychology for predictive intelligence

Artificial intelligence and mental processes in human-centered predictive intelligence

Trust, security, and privacy issues for human-centered predictive intelligence

- Real-World Applications

Economic and financial applications

Intelligent e-learning & tutoring

Internet of Things (IoT) applications

Smart healthcare

Social computing

Smart living and smart cities



Artificial Intelligence

Pattern Recognition Letters

Few-shot Learning for Human-machine Interactions (FSL-HMI)

Full-paper deadline: 2021-07-20
Impact factor: 2.81
CCF rank: C
CAS JCR partition:
• Major category: Engineering & Technology - Zone 3
• Subcategory: Computer Science, Artificial Intelligence - Zone 3
Website: http://www.journals.elsevier.com/pattern-recognition-letters/



The widespread use of Web technologies, mobile technologies, and cloud computing has paved the way for a new surge of ubiquitous data available for business, human, and societal research. Nowadays, people interact with the world via various Information and Communications Technology (ICT) channels, generating a variety of data that contain valuable insights into business opportunities, personal decisions, and public policies. Machine learning has become a common component of applications in various scenarios, e.g., e-commerce, health, transport, security and forensics, sustainable resource management, and emergency and crisis management, supporting intelligent analytics, predictions, and decision-making. It has proven highly successful in data-intensive applications and has revolutionized human-machine interactions in many ways in modern society.

Essential to machine learning is dealing with small datasets, i.e., few-shot learning, which aims to develop learning models that can generalize rapidly from a few examples. Though challenging, few-shot learning has gained increasing popularity since its inception and has mostly focused on general machine-learning contexts. Meanwhile, traditional human-machine interaction research has primarily focused on interaction design and local adaptation for user-friendliness, ergonomics, or efficiency. Emerging topics such as brain-computer interfaces, multimodal user interfaces, and mobile personal assistants, as new means of human-machine interaction, are still in their infancy. Few-shot learning is especially important for such new types of human-machine interaction because acquiring examples with supervised information is difficult owing to privacy, safety, expense, or ethical concerns. Although the related research is relatively new, it promises a fertile ground for research and innovation.
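The few-shot setting described above can be sketched in a few lines: classify a query from only a handful of labelled "support" examples by comparing it to per-class mean embeddings (class prototypes), the idea behind prototypical networks. This is a minimal illustration, not a method from the call; all names and data are hypothetical.

```python
def prototypes(support):
    """support: {label: [feature_vectors]} -> {label: mean_vector}.

    Each prototype is the per-dimension mean of that class's few examples.
    """
    protos = {}
    for label, vecs in support.items():
        n = len(vecs)
        protos[label] = tuple(sum(col) / n for col in zip(*vecs))
    return protos

def classify(query, support):
    """Assign the query to the class with the nearest prototype (squared L2)."""
    protos = prototypes(support)

    def dist(label):
        return sum((q - p) ** 2 for q, p in zip(query, protos[label]))

    return min(protos, key=dist)
```

In a real few-shot system the raw features would first pass through a learned embedding network; the nearest-prototype rule itself stays the same.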

This special issue aims at gathering recent advances and novel contributions from academic researchers and industry practitioners on the vibrant topic of few-shot learning, to achieve the full potential of human-machine interaction applications. It calls for innovative methodologies, algorithms, and computational methods that incorporate the most recent advances in data analytics, artificial intelligence, and interaction research to solve theoretical and practical problems. It also requires reexamining existing architectures, models, and techniques in machine learning and deep neural networks to address the challenges and advance state-of-the-art knowledge in this area.

Topics of interest include but are not limited to:

Novel few-shot, one-shot, or zero-shot learning models and algorithms for sense-making of humans, systems, and their interactions

Conceptual frameworks, computational design for few-shot learning or human-centric computing

Methods that improve the learnability, efficiency, or usability of systems that interact with humans

Techniques to address small datasets, e.g., data imputation/augmentation, generative models, reinforcement learning, active learning.

Novel recommender systems in HCI related aspects

Trust, security/privacy, and performance evaluations for few-shot learning

Interface or interaction designs based on few shot examples to enable humans to interact with computers in novel ways

Other technologies and applications that advance a better understanding of, or extract value from, human-machine interactions



Artificial Intelligence

Pattern Recognition Letters

Mobile and Wearable Biometrics (VSI:MWB)

Full-paper deadline: 2021-09-20
Impact factor: 2.81
CCF rank: C
CAS JCR partition:
• Major category: Engineering & Technology - Zone 3
• Subcategory: Computer Science, Artificial Intelligence - Zone 3
Website: http://www.journals.elsevier.com/pattern-recognition-letters/





