Artificial Intelligence
Knowledge-Based Systems
Robust, Explainable, and Privacy-Preserving Deep Learning
Full paper deadline: 2021-08-31
Impact factor: 5.101
CCF category: C
CAS JCR partition:
• Major category: Engineering & Technology - Zone 2
• Sub-category: Computer Science: Artificial Intelligence - Zone 2
Website:
http://www.journals.elsevier.com/knowledge-based-systems/
This Special Issue aims to: 1) improve the understanding and explainability of deep neural networks; 2) improve the accuracy of deep learning by leveraging new stochastic optimization and neural architecture search; 3) strengthen the mathematical foundation of deep neural networks; 4) design new data privacy mechanisms that optimally trade off utility and privacy; and 5) increase the computational efficiency and stability of the deep learning training process with new algorithms that scale. Potential topics include but are not limited to the following:
Novel theoretical insights into deep neural networks
Exploration of post-hoc interpretation methods which can shed light on how deep learning models produce a specific prediction and generate a representation
Investigation of interpretable models which aim to construct self-explanatory models and incorporate interpretability directly into the structure of a deep learning model
Quantifying or visualizing the interpretability of deep neural networks
Stability improvement of deep neural network optimization
Optimization methods for deep learning
Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data; see the illustrative sketch after this list)
Novel deep learning approaches in the applications of image/signal processing, business intelligence, games, healthcare, bioinformatics, and security
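As a purely illustrative aside (not part of the call for papers), the following minimal sketch shows the federated averaging pattern behind federated machine learning, one of the privacy-preserving approaches listed above: each client trains on its private data locally, and the server only aggregates model parameters. All function names, shapes, and data here are hypothetical.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient-descent steps of logistic
    # regression, starting from the current global weights.
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)          # logistic-loss gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    # One communication round: clients train locally on private data; the
    # server sees and averages only the resulting model weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
print("global weights after 10 rounds:", w)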
Important Dates
Submission Deadline: August 31, 2021
First Review Decision: September 30, 2021
Revisions Due: October 31, 2021
Final Decision: November 30, 2021
Final Manuscript: December 31, 2021
Artificial Intelligence
Pattern Recognition Letters
Self-Learning Systems and Pattern Recognition and Exploitation (SeLSPRE)
Full paper deadline: 2021-10-20
Impact factor: 2.81
CCF category: C
CAS JCR partition:
• Major category: Engineering & Technology - Zone 3
• Sub-category: Computer Science: Artificial Intelligence - Zone 3
Website:
http://www.journals.elsevier.com/pattern-recognition-letters/
Self-Learning Systems aim to achieve a goal, without being pre-programmed, in an environment that may be completely unknown initially. Self-learning algorithms are inspired by neuroscience and mimic the way the brain achieves cognition: they explore the environment following a trial-and-error approach, or acquire knowledge from demonstrations provided by experts. The development of such systems is pushed forward by AI technologies such as Reinforcement Learning, Inverse Reinforcement Learning, and Learning by Demonstration. Their applications span from robotics and autonomous driving to healthcare and precision medicine.
This special issue focuses on pattern recognition and the subsequent exploitation of the recognized patterns by Self-Learning Systems. The way Inverse Reinforcement Learning or Learning by Demonstration extracts patterns from 'demonstrated trajectories', and how such patterns are subsequently exploited by a self-learning algorithm to optimize its policy or speed up its learning process, is of interest to this special issue.
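As a purely illustrative aside (not part of the call for papers), the sketch below shows the simplest Learning-by-Demonstration setting described above: state-action patterns are extracted from demonstrated trajectories and used to fit a policy (behavioral cloning). The toy corridor environment, function names, and demonstrations are all hypothetical.

import numpy as np
from collections import Counter

def extract_patterns(trajectories):
    # Flatten demonstrated trajectories into (state, action) training pairs.
    return [(s, a) for traj in trajectories for (s, a) in traj]

def fit_tabular_policy(pairs):
    # Fit a tabular policy: for each observed state, pick the expert's most
    # frequent action (a supervised view of the demonstrated patterns).
    counts = {}
    for s, a in pairs:
        counts.setdefault(s, Counter())[a] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical expert demonstrations on a 1-D corridor: states 0..4, goal at 4,
# and the expert always moves right (+1).
demos = [[(s, +1) for s in range(4)] for _ in range(3)]

policy = fit_tabular_policy(extract_patterns(demos))
state = 0
while state < 4:                      # roll out the cloned policy
    state += policy[state]
print("reached goal state:", state)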
Topics of interest
Inverse Reinforcement Learning
Learning-by-Demonstration and Imitation Learning
Pattern Recognition via Inverse Reinforcement Learning
Pattern Recognition from Demonstrations
Pattern exploitation in Self-Learning Systems
Pattern recognition in partially observable environments
Analysis of action-state trajectories for pattern recognition and reward engineering
Pattern recognition and exploitation in Multi-Agent Self-Learning Systems
Pattern recognition and exploitation in Hierarchical Self-Learning Systems
Artificial Intelligence
Pattern Recognition Letters
Computational Linguistics Processing in Indigenous Language (CLPIL)
Full paper deadline: 2021-11-20
Impact factor: 2.81
CCF category: C
CAS JCR partition:
• Major category: Engineering & Technology - Zone 3
• Sub-category: Computer Science: Artificial Intelligence - Zone 3
Website:
http://www.journals.elsevier.com/pattern-recognition-letters/
Natural language processing (NLP) involves building models of the language environment and inferring the consequences of inter-language processing. In Machine Learning (ML) research, this has traditionally been supported by state-of-the-art machine translation, in which a translation model is developed and used to extract the meaning of each word of the source language. Such a model can be extended to several different languages, which makes it useful when words that are identical in meaning or form are found to share a common meaning across languages. Analysis of text helps to facilitate natural language interpretation, to enable computer applications, and to characterize how text is interpreted by natural language systems. Automated algorithms for lexical tasks can be extended to a wide range of applications in natural language processing. In particular, automated parsing tools play a crucial role in developing a general-purpose computational approach to natural language processing.
The aim of this virtual special issue is to investigate the computational complexity of indigenous languages and to approach the problem from a clear starting point: how do we solve a classification problem? Natural language processing is the application of artificial intelligence to human language. Additionally, this issue will give an introduction to the mathematical machinery behind the classification of indigenous languages. The purpose of this issue is to summarize research techniques related to future trends in Artificial Intelligence (AI), computational engineering, information science, and Natural Language Processing. It will also present several interesting open problems and future research directions for data engineering, computational engineering, data science, multilingual models, social media mining, and big data.
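As a purely illustrative aside (not part of the call for papers), the sketch below shows one minimal answer to the classification question raised above: a character-bigram Naive Bayes classifier that could serve as a baseline for identifying which language a short text belongs to. The two toy language labels, example strings, and function names are all hypothetical.

import math
from collections import Counter, defaultdict

def bigrams(text):
    # Pad with spaces so word-initial and word-final characters form bigrams too.
    text = " " + text.lower() + " "
    return [text[i:i + 2] for i in range(len(text) - 1)]

def train(samples):
    # samples: list of (text, label). Returns per-label bigram counts and label counts.
    counts, totals = defaultdict(Counter), Counter()
    for text, label in samples:
        counts[label].update(bigrams(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    # Pick the label with the highest log-posterior under add-one smoothing.
    vocab = {b for c in counts.values() for b in c}
    best, best_score = None, -math.inf
    for label, c in counts.items():
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(c.values()) + len(vocab)
        for b in bigrams(text):
            score += math.log((c[b] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training data for two made-up language labels.
data = [("kia ora koutou", "lang_A"), ("tena koe", "lang_A"),
        ("hello everyone", "lang_B"), ("good morning", "lang_B")]
counts, totals = train(data)
print(classify("kia ora", counts, totals))   # expected: lang_A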