Delve into the Top Research Papers on Deep Learning to understand the latest advancements and technologies in this exciting field. Whether you're a researcher, student, or enthusiast, these pivotal studies provide valuable insights and innovative approaches that are shaping the future of AI. Enhance your knowledge and stay ahead with these must-read papers.
Sahil Sharma, A. Srinivas, Balaraman Ravindran
2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT)
An overview of Deep Reinforcement Learning is provided, including its basic components, key algorithms and techniques, and applications in areas such as robotics, game playing, and autonomous driving.
This intensive workshop provides a comprehensive exploration of deep learning applications in life sciences, focusing on practical techniques for analyzing complex biological datasets. Participants will gain theoretical and hands-on experience with deep learning tools such as TensorFlow and Keras, learning to construct neural networks and apply them to areas like genomics and personalized medicine.
Nir Shlezinger, Y. Eldar, Stephen P. Boyd
IEEE Access
This work describes model-based optimization and data-centric deep learning as edges of a continuous spectrum varying in specificity and parameterization, and provides a tutorial-style presentation of the methodologies lying in the middle ground of this spectrum, referred to as model-based deep learning.
A survey of modular architectures is offered, providing a unified view over several threads of research that evolved independently in the scientific literature, and various additional purposes of modularity are explored, including scaling language models, causal inference, programme induction, and planning in reinforcement learning.
A systematic overview of trends, challenges, and opportunities in applications of deep-learning methods in seismology is presented, covering current approaches and their limitations.
Yoshua Bengio, Yann LeCun, Geoffrey E. Hinton
Communications of the ACM
How can neural networks learn the rich internal representations required for difficult tasks such as recognizing objects or understanding language?
Moritz Blumenthal, Guanxiong Luo, M. Schilling + 2 more
Magnetic Resonance in Medicine
To support reproducible research in MRI, a deep-learning-based image reconstruction framework is developed, with supervised experiments and real-time measurements demonstrating the power of deep learning in image reconstruction.
Zhihan Lv, Dongliang Chen, Bin Cao + 2 more
IEEE Transactions on Computers
A network intrusion detection algorithm is proposed that integrates a Deep Neural Network (DNN) model with a trust model based on Keyed-Hashing-based Self-Synchronization (KHSS); it predicts the security state and detects attacks based on known malicious behavior, ensuring the regular operation of the network security defense system.
Ching-Hao Wang, Kang-Yang Huang, Yi Yao + 3 more
IEEE Consumer Electronics Magazine
This work presents a fresh overview of recent developments and challenges in model compression, the process of compressing DNN models into more compact ones suitable for execution on edge devices.
Tri-Hai Nguyen, Heejae Park, Kihyun Seol + 2 more
2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN)
An overview of the applications and advancements of DL and DRL in 6G networks is provided and the latest research is discussed to identify areas for further exploration in this field.
Jeroen Berrevoets, Krzysztof Kacprzyk, Zhaozhi Qian + 1 more
ArXiv
Causal deep learning enables us to make progress on a variety of real-world problems by leveraging partial causal knowledge and quantitatively characterising causal relationships among variables of interest (possibly over time).
The study highlights the potential of CCT, PatchUp, and the novel CamCenterLoss in processing single-modality clinical data within deep learning frameworks, paving the way for future multimodal medical research and promoting precision and personalized healthcare.
Lakshin Pathak, Mili Virani, Drashti Kansara
International Journal of Innovative Science and Research Technology (IJISRT)
The paper shows that artificial intelligence is highly useful for automatic disease detection tasks, and that transfer learning (a method for reusing existing knowledge in a new application) is also beneficial.
S. Hambardzumyan, Abhina Tuli, Levon Ghukasyan + 7 more
ArXiv
Deep Lake maintains the benefits of a vanilla data lake with one key difference: it stores complex data in the form of tensors and rapidly streams the data over the network to (a) Tensor Query Language, (b) in-browser visualization engine, or (c) deep learning frameworks without sacrificing GPU utilization.
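To make the streaming idea concrete, here is a minimal sketch (not taken from the paper) of how loading a Deep Lake dataset into a PyTorch training loop might look; the hub:// path and the .pytorch() loader follow the project's public documentation and may differ across library versions.

```python
# Assumption-laden sketch of Deep Lake streaming; API details may vary by version.
import deeplake

ds = deeplake.load("hub://activeloop/mnist-train")  # tensors are streamed on demand

# Wrap the tensor storage in a PyTorch dataloader; batches are fetched over the
# network while the accelerator works on the previous batch.
dataloader = ds.pytorch(batch_size=64, shuffle=True, num_workers=2)

for batch in dataloader:
    images, labels = batch["images"], batch["labels"]
    # ... a model's forward/backward pass would go here ...
    break
```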
This is the first rigorous, self-contained treatment of the theory of deep learning and provides guidance on how to think about scientific questions, and leads readers through the history of the field and its fundamental connections to neuroscience.
R. Fioresi, F. Zanchetta
ArXiv
This expository paper gives a brief introduction to the inner workings of the new and successful algorithms of Deep Learning and Geometric Deep Learning, with a focus on Graph Neural Networks.
Gaurav Menghani
ACM Computing Surveys
This is the first comprehensive survey in the efficient deep learning space, covering the landscape of model efficiency from modeling techniques to hardware support, along with the seminal work in each area.
This new volume addresses a number of important issues related to reading and writing that were not attended to in the previous volume, Deep Reading: Teaching Reading in the Writing Classroom (NCTE 2017)—especially those related to identity, culture, and positionality. In this volume the authors address the broad question of equity and social justice in the acquisition and practice of literacy, and the multifaceted lived reality of positionality related especially to race, class, language, and gender as experienced by students in the classroom.
Maximilian Pichler, F. Hartig
Methods in Ecology and Evolution
It is concluded that ML and DL are powerful new tools for predictive modelling and data analysis, comparable to other traditional statistical tools.
This review introduces deep neural networks, covering methods such as classifiers, regression models, generative AI, and embedding models, as well as applications including classification, document digitization, record linkage, and data exploration in massive-scale text and image corpora.
Deep meaningful learning is the higher-order thinking and development through manifold active intellectual engagement aiming at meaning construction through pattern recognition and concept association. It includes inquiry, critical thinking, creative thinking, problem-solving, and metacognitive skills. It is a theory with a long academic record that can accommodate the demand for excellence in teaching and learning at all levels of education. Its achievement is verified through knowledge application in authentic contexts.
P. Bartlett, A. Montanari, A. Rakhlin
Acta Numerica
This article surveys recent progress in statistical learning theory that provides examples illustrating these principles in simpler settings, and focuses specifically on the linear regime for neural networks, where the network can be approximated by a linear model.
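For readers unfamiliar with the term, the "linear regime" refers to the standard first-order expansion of a network around its initial parameters (a textbook formulation, not a quotation from the article):

```latex
% Linearization of a network f around the initial parameters \theta_0
f(x;\theta) \approx f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^{\top}(\theta - \theta_0)
```

In this regime, training behaves like linear regression on the fixed features given by the gradient at initialization, which is what makes the statistical analysis tractable.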
Tom Tirer, Raja Giryes, Se Young Chun + 3 more
IEEE Signal Processing Magazine
This survey article aims to cover deep internal learning techniques proposed in the past few years for signal and image processing problems; most of the approaches are derived for general signals and are therefore applicable to other modalities.
Jiang Hua, Liangcai Zeng, Gongfa Li + 1 more
Sensors (Basel, Switzerland)
A state-of-the-art survey on an intelligent robot with the capability of autonomous deciding and learning reveals that the latest research in deep learning and reinforcement learning has paved the way for highly complex tasks to be performed by robots.
Jaya Gupta, Sunil Pathak, G. Kumar
Journal of Physics: Conference Series
This paper uses deep learning to uncover higher-level representational features, clearly explains transfer learning, presents current solutions, and evaluates applications in diverse areas of transfer learning as well as deep learning.
Yi Zhang, Ziying Fan
Academic Journal of Science and Technology
This paper reviews mechanisms including GRUs, MANNs, LSTMs, and self-attention that help capture particular locations within a video dataset in order to understand patterns in the data.
Daniel A. Roberts, Sho Yaida, B. Hanin
ArXiv
For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.
P. Micikevicius, Dusan Stosic, N. Burgess + 12 more
ArXiv
This paper proposes an 8-bit FP8 binary interchange format consisting of two encodings - E4M3 and E5M2 - and demonstrates the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
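As a rough numerical illustration of what these encodings imply, the sketch below simulates rounding values onto FP8-like grids; the dynamic-range limits for E4M3 (max finite value 448) and E5M2 (max finite value 57344) follow the paper's encodings, but subnormals and special values are deliberately ignored, so this is a toy approximation rather than the bit-exact format.

```python
import numpy as np

# Toy simulation of FP8-style rounding (not bit-exact): clamp to the format's
# largest finite magnitude, then round the mantissa to `man_bits` bits at each
# value's binary exponent. Subnormals and special values are ignored.
def fake_fp8(x, man_bits, max_value):
    x = np.clip(np.asarray(x, dtype=np.float32), -max_value, max_value)
    mag = np.abs(x)
    exponent = np.where(mag > 0, np.floor(np.log2(np.where(mag > 0, mag, 1.0))), 0.0)
    step = 2.0 ** (exponent - man_bits)          # grid spacing at this exponent
    return np.where(mag > 0, np.round(x / step) * step, 0.0)

w = np.array([0.1234, -3.7, 512.0, 70000.0], dtype=np.float32)
print(fake_fp8(w, man_bits=3, max_value=448.0))     # E4M3-like: wider mantissa
print(fake_fp8(w, man_bits=2, max_value=57344.0))   # E5M2-like: wider exponent range
```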
Marko Radeta, Agustin Zuniga, Naser Hossein Motlagh + 6 more
Computer
A research vision for deep learning in the oceans is presented, collating applications and use cases as well as identifying opportunities, constraints, and open research challenges.
Junguang Jiang, Yang Shu, Jianmin Wang + 1 more
ArXiv
This survey connects different isolated areas in deep learning with their relation to transferability, and provides a unified and complete view to investigating transferability through the whole lifecycle of deep learning.
T. Tran, Vuong Le, Hung Le + 1 more
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
This tutorial reviews recent developments to extend the capacity of neural networks to "learning-to-reason" from data, where the task is to determine if the data entails a conclusion.
Z. Huang, Fan Li, Zhanliang Wang + 1 more
International Journal of Future Computer and Communication
Current methodologies and techniques for improving the interpretability of Deep Learning are reviewed from different research directions, and an outlook for future Deep Learning research is provided.
D. G., A. Karegowda
Deep Learning Applications and Intelligent Decision Making in Engineering
This chapter provides a detailed account of the IoT domain, machine learning, and DL techniques and applications and current challenges and potential areas for future research.
Mohammad Mahdi Forootan, I. Larki, Rahimov M. Zahedi + 1 more
Sustainability
A comprehensive and detailed study has been conducted on the methods and applications of Machine Learning (ML) and Deep Learning (DL), which are the newest and most practical models based on Artificial Intelligence for use in energy systems.
Iain Mackie, Jeffrey Dalton, Andrew Yates
Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
DL-HARD contains fifty topics from the official DL 2019/2020 evaluation benchmark, half of which are newly and independently assessed, and a framework for identifying challenging queries is introduced.
Mr. Gopi K
International Journal of Scientific Research in Engineering and Management
An extensive survey of deepfake generation and recognition techniques using neural networks is provided, along with a detailed study of the different technologies used in deepfake detection.
Haifeng Jin, François Chollet, Qingquan Song + 1 more
J. Mach. Learn. Res.
AutoKeras is an Automated Machine Learning (AutoML) library that automates the process of model selection and hyperparameter tuning, which enables novice users to solve standard machine learning problems with a few lines of code.
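For a sense of the "few lines of code" claim, a minimal usage sketch along the lines of the AutoKeras documentation is shown below; the dataset choice, max_trials, and epochs are illustrative assumptions, and the architecture search can take a while.

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Let AutoKeras search over candidate architectures and hyperparameters.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=5)

print(clf.evaluate(x_test, y_test))   # [loss, accuracy] on held-out data
model = clf.export_model()            # best model exported as a regular Keras model
```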
Le Lyu, Yang Shen, Sicheng Zhang
2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA)
The development of reinforcement learning is introduced, including classic reinforcement learning methods and deep reinforcement learning methods, and the challenges faced by reinforcement learning are discussed.
Moisés Cordeiro-Costas, D. Villanueva, Pablo Eguía-Oller + 2 more
Applied Sciences
The results show that LSTM is the model with the least dispersion on both the validation and test sets, demonstrating the robustness of the most suitable AI techniques for modeling and forecasting electricity consumption in buildings.
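As an illustration of the kind of LSTM setup such a comparison typically involves (a generic sketch, not the authors' architecture), a Keras model trained on sliding windows of hourly consumption might look like this; the window length, layer sizes, and the placeholder series are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Generic sketch: predict the next value of a consumption series from the
# previous `window` hours. The random series stands in for real hourly kWh data.
window = 24
series = np.random.rand(1000).astype("float32")
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=5, validation_split=0.2, verbose=0)
```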
Yang Li, Quanbiao Pan, E. Cambria
Knowl. Based Syst.
This work proposes a reinforcement learning-based attack framework that jointly considers effectiveness and stealthiness, introduces a new metric to evaluate the attack model on these two aspects, and validates the model's transferability as well as its robustness under adversarial training.
Julius Berner, P. Grohs, Gitta Kutyniok + 1 more
ArXiv
The new field of mathematical analysis of deep learning is described, which emerged around a list of research questions that were not answered within the classical framework of learning theory, and an overview of modern approaches that yield partial answers.
Georg Ostrovski, P. S. Castro, Will Dabney
journal unavailable
This work proposes the "tandem learning" experimental paradigm, and identifies function approximation in conjunction with fixed data distributions as the strongest factors, thereby extending but also challenging hypotheses stated in past work.
Haider Ali, Dian Chen, Matthew Harrington + 5 more
IEEE Access
This paper comprehensively discussed the attacks and defenses in four popular DL models, including DNN, DRL, FL, and TL, and highlighted the application domains, datasets, metrics, and testbeds in these fields.
Da-Wei Zhou, Qiwen Wang, Zhi-Hong Qi + 3 more
ArXiv
This paper comprehensively surveys recent advances in deep class-incremental learning, summarizes the methods from three aspects (data-centric, model-centric, and algorithm-centric), and provides a rigorous, unified evaluation of 16 methods on benchmark image class-incremental learning tasks to empirically characterize the different algorithms.
Eleonora Giunchiglia, Mihaela C. Stoian, Thomas Lukasiewicz
journal unavailable
This survey retraces such works and categorizes them based on (i) the logical language they use to express the background knowledge and (ii) the goals they achieve.
Yuchen Dang, Ziqi Chen, Heng Li + 1 more
Applied Artificial Intelligence
The proposed XGBoost-DL achieves the best forecasting performance in the comparison, outperforming the best non-deep-learning model SARIMA, the best deep learning model Informer, and NASA's forecast (in RMSE and MAE).
Zhiwen Xiao, Huanlai Xing, Bowen Zhao + 5 more
IEEE Transactions on Emerging Topics in Computational Intelligence
This work presents a deep contrastive representation learning with self-distillation (DCRLS) for the time series domain and demonstrates that the DCRLS-based structures achieve excellent performance on classification and clustering on 36 UCR2018 datasets.
Condell Eastmond, Aseem Pratap Subedi, S. De + 1 more
Neurophotonics
The application of DL techniques to fNIRS studies has been shown to mitigate many of the hurdles present in fNIRS research, such as lengthy data preprocessing or small sample sizes, while achieving comparable or improved classification accuracy.
H. V. Ribeiro, Diego D. Lopes, Arthur A. B. Pessa + 6 more
ArXiv
A series of deep learning models based on the GraphSAGE framework are developed that are able to recover missing criminal partnerships, distinguish among types of associations, predict the amount of money exchanged among criminal agents, and even anticipate partnerships and recidivism of criminals during the growth dynamics of corruption networks, all with impressive accuracy.
This approach does not require any assumptions and proves that most networks composed of convolutions and Transformers can be written mathematically in the same form as the fully proven universal approximation theorem, thus establishing them as specific implementations of the universal approximation theorem.