Column: Call4Papers
Dedicated to helping researchers publish academic papers by providing timely deadlines of well-known conferences and call-for-papers information from journals across all fields

Computer Science | 5 SCI journal special-issue calls

Call4Papers · Official Account · Research · 2021-03-08 09:25

Main text

Computer architecture, parallel and distributed computing

Computers & Electrical Engineering

Securing IoT-based Critical Infrastructure (VSI-cei)






Full-paper deadline: 2021-05-15

Impact factor: 2.663

CCF rank: none

CAS JCR ranking:

  • Major category: Engineering & Technology - Q3

  • Subcategory: Computer Science, Hardware - Q3

  • Subcategory: Computer Science, Interdisciplinary Applications - Q4

  • Subcategory: Engineering, Electrical & Electronic - Q4

Website:
http://www.journals.elsevier.com/computers-and-electrical-engineering/


Critical Energy Infrastructure (CEI) refers to specific engineering information about proposed or existing critical infrastructure. Modern infrastructures are increasingly moving to distributed and complex cyber-physical systems, which require proactive protection and fast restoration to mitigate physical or cyber attacks. Combined physical-cyber attacks are especially challenging and are expected to become the most intrusive form of attack. This is particularly true for Critical Energy Infrastructures: the US Industrial Control Systems Cyber Emergency Response Team, for example, responded to more than 245 incidents during 2015, and 32% of these came from the energy sector.

Considering the importance of energy in our lives and its impact on other critical infrastructures, CEI requires comparatively greater attention. Federated machine learning allows data to remain on-premise within the infrastructure while still being used to build robust defense mechanisms for critical infrastructures. For example, a wind-turbine system is considered one of the most complex cyber-physical infrastructures, and its failure can cascade into other critical infrastructures such as transportation, the healthcare sector, communications, industry, finance, and electrical power systems. Such threats lead the responsible authorities to weigh the advantages of machine learning and IoT while simultaneously protecting privacy: attacks against these infrastructures remain an ever-present possibility, but they can be predicted and detected efficiently.

This special section aims to stimulate discussion on the design, use, and evaluation of machine learning models for Critical Energy Infrastructure toward improving its privacy and security. We invite theoretical work and review articles on practical use cases of Federated Learning in CEI that discuss adding a layer of trust to powerful algorithms for delivering near real-time intelligence.
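As a toy sketch of the federated learning setting invited above, the following trains a one-parameter linear model with federated averaging. All numbers and the per-client data sets are invented for illustration; the point is that only model weights, never raw measurements, leave each client.

```python
# Minimal federated averaging (FedAvg) sketch: each client fits a
# one-parameter linear model y = w * x on its own local data; only the
# updated weight (never the raw data) is sent to the server, which
# averages the clients' weights to form the next global model.

def local_update(w, data, lr=0.01, epochs=20):
    """Run gradient descent on one client's private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """One round: broadcast w_global, collect local updates, average."""
    updates = [local_update(w_global, data) for data in clients]
    return sum(updates) / len(updates)

# Three hypothetical sites, each observing roughly y = 3 * x locally.
clients = [
    [(1.0, 3.1), (2.0, 5.9)],
    [(1.5, 4.4), (3.0, 9.2)],
    [(0.5, 1.6), (2.5, 7.4)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# w converges near 3.0 without the server ever seeing the (x, y) pairs.
```

Real FedAvg additionally weights each client's update by its sample count and adds secure aggregation; this sketch keeps only the core averaging idea.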

Topics:

This special section will respond to these research challenges by encouraging researchers in the computing world to bring to bear novel techniques, combinations of tools, and so forth to build effective ways to enhance the security of Critical Energy Infrastructures. We solicit papers covering various topics of interest that include:

Securing Critical Infrastructure

Federated Learning for Critical Infrastructure

Data privacy solutions for critical infrastructure

Automated Protection of CEI

Security and privacy of big data in Energy

Enhancing the Security of CEI

Cyber Attacks on CEI

Machine Learning in Energy Sector

Model and Infrastructure for Federated Learning in Energy

Advances and Open Problems in Critical Infrastructure

Scalable Federated Learning in the Energy Sector

Securing Federated learning

Federated Learning for Crisis in Critical Infrastructure

Management of Cloud-based critical infrastructure

Deep learning for Industrial control systems

Schedule:

Submission of manuscript: May 15, 2021

First notification: July 30, 2021

Submission of revised manuscript: August 28, 2021

Notification of the re-review: October 28, 2021

Final notification: November 16, 2021

Final paper due: December 1, 2021

Publication: March 2022

Computer architecture, parallel and distributed computing

Computers & Electrical Engineering

Intelligent Approaches in Security and Privacy Computing (VSI-spc)






Full-paper deadline: 2021-06-02

Impact factor: 2.663

CCF rank: none

CAS JCR ranking:

  • Major category: Engineering & Technology - Q3

  • Subcategory: Computer Science, Hardware - Q3

  • Subcategory: Computer Science, Interdisciplinary Applications - Q4

  • Subcategory: Engineering, Electrical & Electronic - Q4

Website:
http://www.journals.elsevier.com/computers-and-electrical-engineering/


Conferences on the security of information and networks address a wide range of academic, technical, and practical aspects of security and privacy. Recently, there has been interest in how to meet ordinary as well as advanced computation needs in security and privacy using smart approaches. Collecting effective attempts to employ artificial intelligence across this wide span of security and privacy research should both highlight existing solutions and advance similar approaches.

Topics:

Suggested topics include themes of artificial intelligence, machine learning, and other intelligent approaches to computation for smart security and privacy, especially in the following areas:

Network Security and Protocols,

Security of Cyber-Physical Systems,

Intrusion Detection and Remediation,

Cryptographic Techniques,

Key Management,

Computational Intelligence Techniques in Security,

Cryptographic Protocol Security, and

Formal Verification Techniques.

Computer architecture, parallel and distributed computing

Computers & Electrical Engineering

Security and Privacy in IoT and Cloud (VSI-spiot)






Full-paper deadline: 2021-06-30

Impact factor: 2.663

CCF rank: none

CAS JCR ranking:

  • Major category: Engineering & Technology - Q3

  • Subcategory: Computer Science, Hardware - Q3

  • Subcategory: Computer Science, Interdisciplinary Applications - Q4

  • Subcategory: Engineering, Electrical & Electronic - Q4

Website:
http://www.journals.elsevier.com/computers-and-electrical-engineering/


With the changing industrial and economic landscape of the Internet, individuals and enterprises increasingly store and process personal and organizational data on cloud platforms. Cloud and IoT infrastructures are becoming more capable of serving the emerging needs of users. Clients and IoT devices acquire data from the environment and send it to the cloud for processing, but this transmission faces challenges of privacy, integrity, and authentication. When the data owner stores or processes data on the cloud, it must be encrypted; the most important challenge is processing the data on the cloud without decrypting it, which homomorphic cryptosystems can ensure. Further, the devices on the network are heterogeneous and, in the case of IoT, embedded. Most IoT devices have limited resources such as memory, energy, and processing power. Hence, they need lightweight and ultra-lightweight encryption algorithms suitable for hardware implementation.
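To make the "processing without decrypting" idea concrete, here is a classic toy demonstration: textbook (unpadded) RSA is multiplicatively homomorphic, so a server can multiply two ciphertexts and the owner decrypts the product. The tiny key below is purely illustrative; real deployments use padded RSA (which loses this property) or dedicated schemes such as Paillier or lattice-based fully homomorphic encryption.

```python
# Toy multiplicative homomorphism with textbook RSA:
# Enc(m1) * Enc(m2) mod n  decrypts to  m1 * m2 mod n,
# so the "cloud" can multiply values it cannot read.

p, q = 61, 53                       # toy primes; never this small in practice
n = p * q                           # 3233, the public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(6), encrypt(7)
c_prod = (c1 * c2) % n    # computed entirely on ciphertexts
assert decrypt(c_prod) == 42
```

The server performing the multiplication never learns 6, 7, or 42; only the key holder can decrypt the result.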

This special section plans to address the above security challenges by inviting original research, tools, techniques, algorithms, and designs for meeting security challenges in cloud and IoT infrastructure.
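As one concrete instance of the lightweight block ciphers mentioned above, the sketch below implements XTEA, a well-known 64-bit block cipher with a 128-bit key that is frequently cited for constrained devices; the demo key and plaintext are arbitrary.

```python
# XTEA sketch: 32 ARX (add-rotate-xor style) rounds on a 64-bit block
# held as two 32-bit words; all arithmetic is modulo 2**32.

MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9

def xtea_encrypt(v0, v1, key, rounds=32):
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(v0, v1, key, rounds=32):
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = [0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210]  # arbitrary demo key
ct = xtea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
assert xtea_decrypt(*ct, key) == (0xDEADBEEF, 0xCAFEBABE)
```

The entire round function is additions, shifts, and XORs, which is exactly why ciphers of this family map well onto small hardware; ultra-lightweight designs push the same idea further with smaller blocks and simpler key schedules.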

Topics:

Suggested topics include:

Homomorphic encryption techniques for cloud

Homomorphic encryption techniques for Surveillance

Lightweight encryption algorithms for IoT network

Ultra-lightweight block cipher

Low-latency block cipher

Embedded and FPGA implemented security solutions for IoT network

Hardware designed new cryptosystem for cloud and IoT devices

Security vulnerabilities in cyber physical systems

Lightweight authentication for cyber physical systems

Adversarial neural cryptography for cloud and IoT

Blockchain in cyber physical systems

Secure solutions for healthcare, smart city, smart grid, etc.

Cyber forensics

Computer architecture, parallel and distributed computing

Future Generation Computer Systems

Special Issue on Explainable Artificial Intelligence for Healthcare






Full-paper deadline: 2021-07-01

Impact factor: 6.125

CCF rank: C

CAS JCR ranking:

  • Major category: Engineering & Technology - Q2

  • Subcategory: Computer Science, Theory & Methods - Q1

Website:
http://www.journals.elsevier.com/future-generation-computer-systems/


The use of artificial intelligence techniques is now pervasive and unstoppable. However, it brings with it not only opportunities but also risks and problems that must be addressed so as not to compromise an effective evolution. eXplainable AI (XAI) is one of the answers to these problems, bringing humans closer to machines.

While from a research perspective the discussions on XAI date back a few decades, the concept re-emerged with renewed vigour at the end of 2019 when Google, after announcing its "AI-first" strategy in 2017, released a new XAI toolset for developers.

Nowadays, many machine and deep learning applications do not let you fully understand how they work or the logic behind them, because of the so-called "black box" effect: machine learning models are mostly black boxes.

This is considered one of the biggest problems in the application of AI techniques; it makes machine decisions opaque and often incomprehensible even to experts or to the developers themselves.

Explainable AI systems can explain the logic of decisions, characterize the strengths and weaknesses of decision making, and provide insights into their future behaviour.
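One simple flavor of such an explanation is input-gradient saliency: the gradient of the model's output with respect to each input feature shows which features the decision is most sensitive to. The sketch below applies it to a hypothetical logistic-regression "risk" model; the weights and the input case are invented for illustration.

```python
# Input-gradient saliency for logistic regression: for output
# p = sigmoid(w . x + b), the gradient with respect to feature x_j is
# w_j * p * (1 - p), computed here in closed form.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def saliency(w, b, x):
    """Return d(output)/d(x_j) for each feature of input x."""
    p = predict(w, b, x)
    return [wi * p * (1.0 - p) for wi in w]

w = [2.0, -0.5, 0.1]   # hypothetical learned weights
b = -1.0
x = [0.8, 0.3, 0.9]    # one input case to explain

s = saliency(w, b, x)
# The feature with the largest |gradient| dominates this prediction;
# the sign says whether increasing it raises or lowers the output.
top = max(range(len(s)), key=lambda j: abs(s[j]))
```

For deep networks the same gradient is obtained by backpropagation rather than in closed form, but the interpretation, "which inputs move the output most", is identical.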

Consider autonomous driving systems, or AI applications used in healthcare and in the financial, legal, or military sectors. In these cases, it is easy to understand that, to trust the decisions and the data obtained, it is necessary to know how the artificial partner has "reasoned".

The most popular AI architecture at present is Deep Learning (DL), in which a neural network (NN) of tens or even hundreds of layers of "neurons", or elementary processing units, is used.

The complexity of DL architectures makes them behave like "black boxes", so it is practically impossible to identify the exact mechanism for which the system provides specific answers.

The applications of artificial intelligence in healthcare, in particular in diagnostic imaging, are growing rapidly. But the involvement of deep learning architectures turns the spotlight on the "accountability" of these processes.

Given the widespread use of DL solutions, this problem will be felt increasingly in the future. It must be emphasized that in the medical field the accountability, or responsibility, of the professional is of primary importance: any medical decision must be justifiable a posteriori, possibly through objective evidence.

The same must be true when the outcome of an AI process contributes to the clinical decision, which makes "black box" architectures hardly compatible with the healthcare sector. Furthermore, since these software applications have to be certified, the difficulty of certifying an unexplained algorithm is evident.

Doctors are happy to be able to use neural networks in the most complex or challenging diagnoses, but they need to understand how the networks reach their conclusions in order to validate the report.

The main objective of this special issue is to bring together diverse, novel and impactful research work on Explainable Deep Learning for Medicine, thereby accelerating research in this field.

Topics of Interest

The topics of interest for this special issue include, but are not limited to:

Explainable AI on graph structured medical data;

Real-time Explainable AI for medical image processing;

Intelligent feature selection for interpretable deep learning classification;

Explainable Artificial Intelligence for Internet of Medical Things;

Explainable deep Bayesian learning for medical data;

Fusion of emerging Explainable AI methods with conventional methods;

Explainable Artificial Intelligence methodologies for detecting emerging medical threats from social media;

Relations between Explainability and other Quality Criteria (such as Interpretability, Accuracy, Stability, etc.)

Hybrid approaches (e.g., neuro-fuzzy systems) for Explainable AI.


Evaluation Criterion

Novelty of approach (how is it different from what exists already?)

Technical soundness (e.g., rigorous model evaluation)

Impact (how does it change our current state of affairs)

Readability (is it clear what has been done)

Reproducibility and open science: pre-registration if confirmatory claims are being made; open data, materials, and code as far as ethically possible.

Computer architecture, parallel and distributed computing

Computers & Electrical Engineering

Recent Advances in Deep Learning (VSI-radl)






Full-paper deadline: 2021-12-15

Impact factor: 2.663

CCF rank: none

CAS JCR ranking:

  • Major category: Engineering & Technology - Q3

  • Subcategory: Computer Science, Hardware - Q3

  • Subcategory: Computer Science, Interdisciplinary Applications - Q4

  • Subcategory: Engineering, Electrical & Electronic - Q4

Website:
http://www.journals.elsevier.com/computers-and-electrical-engineering/


Deep learning (DL) is one of the most promising artificial intelligence (AI) methods. It tries to imitate the workings of the human brain in processing information and automatically generates patterns for decision making and other complicated tasks. DL is able to learn with or without human supervision, drawing on data that may be unstructured and/or unlabelled. However, DL's achievements do not stop at matching the results of other AI algorithms: on tasks such as image recognition or game playing, DL often surpasses human performance, beyond the expectations of the experts.
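To ground the brain-inspired terminology, here is the simplest possible "elementary processing unit": a single perceptron neuron trained with the classic error-correction rule on a toy, linearly separable task (logical AND). Deep learning stacks many layers of such units and trains them with backpropagation; this sketch shows only the basic weight-update idea.

```python
# One artificial neuron learning logical AND with the perceptron rule:
# on each sample, nudge the weights by (target - output) * input.

def step(z):
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data]
# preds reproduces the AND truth table: [0, 0, 0, 1]
```

A single neuron can only learn linearly separable functions (it famously fails on XOR), which is precisely why depth, the topic of this section, matters.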

The aim of this special section is to provide a diverse, but complementary, set of contributions to demonstrate new developments and applications of DL to solve problems in diverse fields. The ultimate goal is to promote research and development of DL by publishing high-quality survey and research articles in this rapidly growing field.

Topics:

The topics of interest include

New architectures, theories, analytics for DL

Deep convolutional neural network

Deep graph neural network

DL with attention mechanism

Deep auto-encoders

Reinforcement learning

DL applications, e.g., IoT, medical image analysis, multimedia technology, and enhanced learning


END
