
NLP: “NLP Year in Review 2019 & NLP_2019_Highlights”, a review and interpretation of the major advances in natural language processing in 2019



 

 

Contents

NLP_2019_Highlights

Publications

ML/NLP Creativity and Society

ML/NLP Tools and Datasets ⚙️

Articles and Blog posts ✍️

Ethics in AI

ML/NLP Education


 

NLP_2019_Highlights
2019 was an impressive year for the field of natural language processing (NLP). In this report, I want to highlight some of the most important stories related to machine learning and NLP that I came across in 2019. I will mostly focus on NLP but I will also highlight a few interesting stories related to AI in general. The headlines are in no particular order. Stories may include publications, engineering efforts, yearly reports, the release of educational resources, etc.
Warning! This is a very long article so before you get started I would suggest bookmarking the article if you wish to read it in parts. I have also published the PDF version of this article which you can find at the end of the post.

 

Publications

Google AI introduces ALBERT, which is a lite version of BERT for self-supervised learning of contextualized language representations. The main improvements are reducing redundancy and allocating the model’s capacity more efficiently. The method advances state-of-the-art performance on 12 NLP tasks.

Earlier this year, researchers at NVIDIA published a popular paper (coined StyleGAN) which proposed an alternative generator architecture for GANs, adapted from style transfer. Here is a follow-up work that focuses on improvements such as redesigning the generator normalization process.


One of my favorite papers this year was code2seq which is a method for generating natural language sequences from the structured representation of code. Such research can pave the way for applications such as automated code summarization and documentation.

Ever wondered if it’s possible to train a biomedical language model for biomedical text mining? The answer is BioBERT which is a contextualized approach for extracting important information from biomedical literature.


After the release of BERT, Facebook researchers published RoBERTa which introduced new methods for optimization to improve upon BERT and produced state-of-the-art results on a wide variety of NLP benchmarks.

Researchers from Facebook AI also recently published a method based on an all-attention layer for improving the efficiency of a Transformer language model. More work from this research group includes a method to teach AI systems how to plan using natural language.


 

Explainability continues to be an important topic in machine learning and NLP. This paper provides a comprehensive overview of works addressing explainability, taxonomies, and opportunities for future research.

Sebastian Ruder published his thesis on Neural Transfer Learning for Natural Language Processing.

A group of researchers developed a method to perform emotion recognition in the context of conversation which could pave the way to affective dialogue generation. Another related work involves a GNN approach called DialogueGCN to detect emotions in conversations. This research paper also provides code implementation.

The Google AI Quantum team published a paper in Nature where they claim to have developed a quantum computer that is faster than the world’s largest supercomputer. Read more about their experiments here.


As mentioned earlier, one of the areas of neural network architectures that require a lot of improvement is explainability. This paper discusses the limitations of attention as a reliable approach for explainability in the context of language modeling.

Neural Logic Machine is a neural-symbolic network architecture that is able to do well at both inductive learning and logic reasoning. The model performs remarkably well on tasks such as sorting arrays and finding shortest paths.

And here is a paper that applies Transformer language models to extractive and abstractive neural document summarization.

Researchers developed a method that focuses on using comparisons to build and train ML models. Instead of requiring large amounts of feature-label pairs, this technique compares images with previously seen images to decide whether the image should be assigned a certain label.

Nelson Liu and others presented a paper discussing the type of linguistic knowledge being captured by pretrained contextualizers such as BERT and ELMo.


XLNet is a pretraining method for NLP that showed improvements upon BERT on 20 tasks. I wrote a summary of this great work here.

This work from DeepMind reports the results from an extensive empirical investigation that aims to evaluate language understanding models applied to a variety of tasks. Such extensive analysis is important to better understand what language models capture so as to improve their efficiency.

VisualBERT is a simple and robust framework for modeling vision-and-language tasks including VQA and Flickr30K, among others. This approach leverages a stack of Transformer layers coupled with self-attention to align elements in a piece of text and the regions of an image.

This work provides a detailed analysis comparing NLP transfer learning methods along with guidelines for NLP practitioners.


Alex Wang and Kyunghyun Cho propose an implementation of BERT that is able to produce high-quality, fluent generations. Here is a Colab notebook to try it.

Facebook researchers published code (PyTorch implementation) for XLM, a method for cross-lingual language model pretraining.

This work provides a comprehensive analysis of the application of reinforcement learning algorithms for neural machine translation.

This survey paper published in JAIR provides a comprehensive overview of the training, evaluation, and use of cross-lingual word embedding models.

The Gradient published an excellent article detailing the current limitations of reinforcement learning and also providing a potential path forward with hierarchical reinforcement learning. And in a timely manner, a couple of folks published an excellent set of tutorials to get started with reinforcement learning.

This paper provides a light introduction to contextual word representations.


 

ML/NLP Creativity and Society

Machine learning has been applied to solve real-world problems but it has also been applied in interesting and creative ways. ML creativity is as important as any other research area in AI because at the end of the day we wish to build AI systems that will help shape our culture and society.

Towards the end of this year, Gary Marcus and Yoshua Bengio debated on the topics of deep learning, symbolic AI and the idea of hybrid AI systems.

The 2019 AI Index Report was finally released and provides a comprehensive analysis of the state of AI which can be used to better understand the progress of AI in general.


 

Commonsense reasoning continues to be an important area of research as we aim to build artificial intelligence systems that are not only able to make predictions on the data provided but can also understand and reason about those decisions. This type of technology can be used in conversational AI where the goal is to enable an intelligent agent to have more natural conversations with people. Check out this interview with Nasrin Mostafazadeh having a discussion on commonsense reasoning and applications such as storytelling and language understanding. You can also check out this recent paper on how to leverage language models for commonsense reasoning.

Activation Atlases is a technique developed by researchers at Google and OpenAI to better understand and visualize the interactions happening between neurons of a neural network.


 

Check out the Turing Lecture delivered by Geoffrey Hinton and Yann LeCun who were awarded, together with Yoshua Bengio, the Turing Award this year.

Tackling climate change with machine learning is discussed in this paper.

OpenAI published an extensive report discussing the social impacts of language models covering topics like beneficial use and potential misuse of the technology.

Emotion analysis continues to be used in a diverse range of applications. The Mojifier is a cool project that looks at an image, detects the emotion, and replaces the face with the emojis matching the emotion detected.


Work on radiology with the use of AI techniques has also been trending this year. Here is a nice summary of trends and perspectives in this area of study. Researchers from NYU also released a PyTorch implementation of a deep neural network that improves radiologists’ performance on breast cancer screening. And here is a major dataset release called MIMIC-CXR which consists of a database of chest X-rays and text radiology reports.

The New York Times wrote a piece on Karen Spärck Jones remembering the seminal contributions she made to NLP and information retrieval.

OpenAI Five became the first AI system to beat a world champion at an esports game.


The Global AI Talent Report provides a detailed report of the worldwide AI talent pool and the demand for AI talent globally.

If you haven’t subscribed already, the DeepMind team has an excellent podcast where participants discuss the most pressing topics involving AI. Talking about AI potential, Demis Hassabis did an interview with The Economist where he spoke about futuristic ideas such as using AI as an extension to the human mind to potentially find solutions to important scientific problems.


This year also witnessed incredible advancement in ML for health applications. For instance, researchers in Massachusetts developed an AI system capable of spotting brain hemorrhages as accurately as humans.

Janelle Shane summarizes a set of “weird” experiments showing how machine learning can be used in creative ways to conduct fun experimentation. Sometimes this is the type of experiment that’s needed to really understand what an AI system is actually doing and not doing. Some experiments include neural networks generating fake snakes and telling jokes.


Learn to find planets with machine learning models built on top of TensorFlow.

OpenAI discusses the implications of releasing (including the potential for malicious use of) large-scale unsupervised language models.

This Colab notebook provides a great introduction on how to use Nucleus and TensorFlow for “DNA Sequencing Error Correction”. And here is a great detailed post on the use of deep learning architectures for exploring DNA.

Alexander Rush is a Harvard NLP researcher who wrote an important article about the issues with tensors and how some current libraries expose them. He also went on to talk about a proposal for tensors with named indices.


 

ML/NLP Tools and Datasets ⚙️

Here I highlight stories related to software and datasets that have assisted in enabling NLP and machine learning research and engineering.

Hugging Face released a popular Transformer library based on PyTorch called pytorch-transformers. It allows NLP practitioners and researchers to easily use state-of-the-art general-purpose architectures such as BERT, GPT-2, and XLM, among others. If you are interested in how to use pytorch-transformers there are a few places to start, but I really liked this detailed tutorial by Roberto Silveira showing how to use the library for machine comprehension.
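
For a feel of the API, here is a minimal sketch (my own, not from the article) of loading a pretrained BERT with pytorch-transformers and extracting contextualized features; the sentence is just a placeholder:

```python
import torch
from pytorch_transformers import BertModel, BertTokenizer  # pip install pytorch-transformers

# Load a pretrained tokenizer and model; weights are downloaded on first use.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Encode a sentence and extract its contextualized representations.
input_ids = torch.tensor([tokenizer.encode("NLP had an impressive year in 2019.")])
with torch.no_grad():
    last_hidden_state = model(input_ids)[0]  # shape: (1, sequence_length, 768)
```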


TensorFlow 2.0 was released with a bunch of new features. Read more about best practices here. François Chollet also wrote an extensive overview of the new features in this Colab notebook.

PyTorch 1.3 was released with a ton of new features including named tensors and other front-end improvements.
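
As a quick illustration of named tensors (a sketch based on the release notes; the dimension names below are arbitrary):

```python
import torch

# Named tensors (experimental in PyTorch 1.3): dimensions carry names, so code
# can refer to 'C' (channels) instead of a fragile positional index.
imgs = torch.randn(2, 3, 32, 32, names=('N', 'C', 'H', 'W'))
per_channel_mean = imgs.mean('C')  # reduce over the channel dimension by name
print(per_channel_mean.names)      # ('N', 'H', 'W')
```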

The Allen Institute for AI released Iconary, an AI system that can play Pictionary-style games with a human. This work incorporates visual/language learning systems and commonsense reasoning. They also published a new commonsense reasoning benchmark called Abductive-NLI.


 

spaCy releases a new library to incorporate Transformer language models into their own library so as to be able to extract features and use them in spaCy NLP pipelines. This effort is built on top of the popular Transformers library developed by Hugging Face. Maximilien Roberti also wrote a nice article on how to combine fast.ai code with pytorch-transformers.
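
In practice the integration looks roughly like this (a sketch; the model name comes from the initial spacy-pytorch-transformers release and may have changed since):

```python
import spacy

# Assumes: pip install spacy-pytorch-transformers
#          python -m spacy download en_pytt_bertbaseuncased_lg
nlp = spacy.load("en_pytt_bertbaseuncased_lg")
doc = nlp("Transformer features inside a regular spaCy pipeline.")

# BERT-derived features aligned to spaCy tokens, usable by downstream components.
print(doc.tensor.shape)
```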

The Facebook AI team released PHYRE which is a benchmark for physical reasoning aiming to test the physical reasoning of AI systems through solving various physics puzzles.


 

StanfordNLP released StanfordNLP 0.2.0, a Python library for natural language analysis. You can perform different types of linguistic analysis, such as lemmatization and part-of-speech tagging, on over 70 different languages.
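
For example, a basic run of the pipeline looks roughly like this (a sketch following the stanfordnlp documentation; the sentence is a placeholder):

```python
import stanfordnlp  # pip install stanfordnlp

stanfordnlp.download('en')             # fetch the English models on first run
nlp = stanfordnlp.Pipeline(lang='en')  # tokenization, POS tagging, lemmas, parsing
doc = nlp("Barack Obama was born in Hawaii.")
for word in doc.sentences[0].words:
    print(word.text, word.lemma, word.upos)
```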

GQA is a visual question answering dataset for enabling research related to visual reasoning.

exBERT is a visual interactive tool to explore the embeddings and attention of Transformer language models. You can find the paper here and the demo here.


 

Distill published an article on how to visualize memorization in Recurrent Neural Networks (RNNs).

Mathpix is a tool that lets you take a picture of an equation and then provides you with the LaTeX version.

Parl.ai is a platform that hosts many popular datasets for work involving dialog and conversational AI.

Uber researchers released Ludwig, an open-source tool that allows users to easily train and test deep learning models with just a few lines of code. The whole idea is to avoid any coding while training and testing models.
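
A hedged sketch of that idea using Ludwig's 2019-era Python API (the feature names and CSV paths below are hypothetical):

```python
from ludwig.api import LudwigModel

# Declarative model definition: only input/output features, no modeling code.
model_definition = {
    'input_features': [{'name': 'text', 'type': 'text'}],
    'output_features': [{'name': 'label', 'type': 'category'}],
}
model = LudwigModel(model_definition)
train_stats = model.train(data_csv='train.csv')  # CSV with 'text' and 'label' columns
predictions = model.predict(data_csv='test.csv')
```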

Google AI researchers release “Natural Questions” which is a large-scale corpus for training and evaluating open-domain question answering systems.


 

Articles and Blog posts ✍️

This year witnessed an explosion of data science writers and enthusiasts. This is great for our field and encourages healthy discussion and learning. Here I list a few interesting and must-see articles and blog posts I came across:

Christian Perone provides an excellent introduction to maximum likelihood estimation (MLE) and maximum a posteriori (MAP) which are important principles to understand how parameters of a model are estimated.
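
To make the two estimates concrete, here is a small self-contained toy example (my own, not from Perone's post) for the mean of a Gaussian under a Gaussian prior:

```python
import numpy as np

# MLE vs. MAP for the mean of a Gaussian with known variance sigma^2
# and a Gaussian prior N(mu0, tau^2) on that mean.
rng = np.random.default_rng(0)
sigma, mu0, tau = 1.0, 0.0, 0.5
x = rng.normal(loc=2.0, scale=sigma, size=10)

mle = x.mean()  # argmax of the likelihood alone
# MAP is a precision-weighted average of the prior mean and the sample mean.
map_ = (len(x) / sigma**2 * x.mean() + mu0 / tau**2) / (len(x) / sigma**2 + 1 / tau**2)
print(f"MLE: {mle:.3f}  MAP: {map_:.3f}")  # the MAP estimate is pulled toward mu0
```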

Reiichiro Nakano published a blog post discussing neural style transfer with adversarially robust classifiers. A Colab notebook was also provided.

Saif M. Mohammad started a great series discussing a diachronic analysis of ACL anthology.


The question is: can a language model learn syntax? Using structural probes, this work aims to show that it is possible to do so using contextualized representations and a method for finding tree structures.

Andrej Karpathy wrote a blog post summarizing best practices and a recipe on how to effectively train neural networks.

Google AI researchers and other researchers collaborated to improve the understanding of search using BERT models. Contextualized approaches like BERT are adequate to understand the intent behind search queries.


Rectified Adam (RAdam) is a new optimization technique based on the Adam optimizer that helps to improve AI architectures. There are several efforts to come up with better and more stable optimizers, but the authors claim to focus on other aspects of optimization that are just as important for delivering improved convergence.

With a lot of development of machine learning tools recently, there are also many discussions on how to implement ML systems that enable solutions to practical problems. Chip Huyen wrote an interesting chapter discussing machine learning system design, emphasizing topics such as hyperparameter tuning and data pipelines.


NVIDIA breaks the record for creating the biggest language model, trained with billions of parameters.

Abigail See wrote this excellent blog post about what makes a good conversation in the context of systems developed to perform natural language generation task.


 

Google AI published two natural language dialog datasets with the idea of using more complex and natural dialog datasets to improve personalization in conversational applications like digital assistants.

Deep reinforcement learning continues to be one of the most widely discussed topics in the field of AI and it has even attracted interest in the space of psychology and neuroscience. Read more about some highlights in this paper published in Trends in Cognitive Sciences.

Samira Abnar wrote this excellent blog post summarizing the main building blocks behind Transformers and capsule networks and their connections. Adam Kosiorek also wrote this magnificent piece on stacked capsule-based autoencoders (an unsupervised version of capsule networks) which was used for object detection.


Researchers published an interactive article on Distill that aims to show a visual exploration of Gaussian Processes.

Through this Distill publication, Augustus Odena makes a call to researchers to address several important open questions about GANs.

Here is a PyTorch implementation of graph convolutional networks (GCNs) used for classifying spammers vs. non-spammers.
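
For readers new to GCNs, a minimal layer in the style of Kipf and Welling looks something like this (a sketch of the idea, not the linked implementation):

```python
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, a_hat, h):
        # a_hat: (N, N) normalized adjacency with self-loops; h: (N, in_features)
        return torch.relu(self.linear(a_hat @ h))

# Tiny usage example: 4 nodes, 8 input features, 2 output classes.
a_hat = torch.eye(4)      # stand-in for a real normalized adjacency matrix
h = torch.randn(4, 8)
layer = GraphConvolution(8, 2)
logits = layer(a_hat, h)  # e.g. spammer vs. non-spammer scores per node
```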

At the beginning of the year, VentureBeat released a list of predictions for 2019 made by experts such as Rumman Chowdhury, Hilary Mason, Andrew Ng, and Yann LeCun. Check it out to see if their predictions were right.


Learn how to fine-tune BERT to perform multi-label text classification.
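
The core change versus single-label classification is the loss: one sigmoid per label instead of a softmax across labels. A hedged sketch (the label count and text are placeholders):

```python
import torch
from pytorch_transformers import BertForSequenceClassification, BertTokenizer

NUM_LABELS = 6  # hypothetical number of tags a document can carry
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
                                                      num_labels=NUM_LABELS)

input_ids = torch.tensor([tokenizer.encode("An example document to tag.")])
labels = torch.tensor([[1., 0., 1., 0., 0., 1.]])  # multi-hot target vector

logits = model(input_ids)[0]                         # (1, NUM_LABELS)
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)  # independent sigmoid per label
loss.backward()                                      # then step an optimizer as usual
```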

Due to the popularity of BERT, in the past few months many researchers developed methods to “compress” BERT with the idea of building faster, smaller, and more memory-efficient versions of the original. Mitchell A. Gordon wrote a summary of the types of compression and the methods developed around this objective.


Superintelligence continued to be a topic of debate among experts. It’s an important topic that needs a proper understanding of frameworks, policies, and careful observations. I found this interesting series of comprehensive essays (in the form of a technical report by K. Eric Drexler) to be useful to understand some issues and considerations around the topic of superintelligence.

Eric Jang wrote a nice blog post introducing the concept of meta-learning which aims to build and train machine learning models that not only predict well but also learn well.

A summary of AAAI 2019 highlights by Sebastian Ruder.

Graph neural networks were heavily discussed this year. David Mack wrote a nice visual article about how they used this technique together with attention to perform shortest path calculations.

Bayesian approaches remain an interesting subject, in particular how they can be applied to neural networks to avoid common issues like over-fitting. Here is a list of suggested reads by Kumar Shridhar on the topic.


 

Ethics in AI

Perhaps one of the most highly discussed aspects of AI systems this year was ethics, including discussions around bias, fairness, and transparency, among others. In this section, I provide a list of interesting stories and papers around this topic:

The paper titled “Does mitigating ML’s impact disparity require treatment disparity?” discusses the consequences of applying disparate learning processes through experiments conducted on real-world datasets.

HuggingFace published an article discussing ethics in the context of open-sourcing NLP technology for conversational AI.

Being able to quantify the role of ethics in AI research is an important endeavor going forward as we continue to introduce AI-based technologies to society. This paper provides a broad analysis of the measures and “use of ethics-related research in leading AI, machine learning and robotics venues.”


 

This work presented at NAACL 2019 discusses how debiasing methods can cover up gender bias in word embeddings.

Listen to Zachary Lipton presenting his paper “Troubling Trends in ML Scholarship”. I also wrote a summary of this interesting paper which you can find here.

Gary Marcus and Ernest Davis published their book on “Rebooting AI: Building Artificial Intelligence We Can Trust”. The main theme of the book is to talk about the steps we must take to achieve robust artificial intelligence. On the topic of AI progression, François Chollet also wrote an impressive paper making a case for better ways to measure intelligence.

Check out this Udacity course created by Andrew Trask on topics such as differential privacy, federated learning, and encrypted AI. On the topic of privacy, Emma Bluemke wrote this great post discussing how one may go about training machine learning models while preserving patient privacy.

At the beginning of this year, Mariya Yao posted a comprehensive list of research paper summaries involving AI ethics. Although the list of paper references was from 2018, I believe they are still relevant today.


 

ML/NLP Education

Here I will feature a list of educational resources, writers, and people doing amazing work educating others about difficult ML/NLP concepts and topics:

CMU released materials and syllabus for their “Neural Networks for NLP” course.

Elvis Saravia and Soujanya Poria released a project called NLP-Overview that is intended to help students and practitioners get a condensed overview of modern deep learning techniques applied to NLP, including theory, algorithms, applications, and state-of-the-art results (Link).


Microsoft Research Lab published a free ebook on the foundations of data science, with topics ranging from Markov Chain Monte Carlo to Random Graphs.

“Mathematics for Machine Learning” is a free ebook introducing the most important mathematical concepts used in machine learning. It also includes a few Jupyter notebook tutorials describing the machine learning parts. Jean Gallier and Jocelyn Quaintance wrote an extensive free ebook covering mathematical concepts used in machine learning.

Stanford releases a playlist of videos for its course on “Natural Language Understanding”.

On the topic of learning, OpenAI put together this great list of suggestions on how to keep learning and improving your machine learning skills. Apparently, their employees use these methods on a daily basis to keep learning and expanding their knowledge.


Adrian Rosebrock published an 81-page guide on how to do computer vision with Python and OpenCV.

Emily M. Bender and Alex Lascarides published a book titled “Linguistic Fundamentals for NLP”. The main idea behind the book is to discuss what “meaning” is in the field of NLP by providing a proper foundation on semantics and pragmatics.

Elad Hazan published his lecture notes on “Optimization for Machine Learning”, which aim to present the training of machine learning models as an optimization problem with beautiful math and notation. Deeplearning.ai also published a great article that discusses parameter optimization in neural networks using a more visual and interactive approach.

Andreas Mueller published a playlist of videos for a new course in “Applied Machine Learning”.

Fast.ai releases its new MOOC titled “Deep Learning from the Foundations”.

MIT published all videos and syllabus for their course on “Introduction to Deep Learning”.


 

Chip Huyen tweeted an impressive list of free online courses to get started with machine learning.

Andrew Trask published his book titled “Grokking Deep Learning”. The book serves as a great starter for understanding the fundamental building blocks of neural network architectures.

Sebastian Raschka uploaded 80 notebooks about how to implement different deep learning models such as RNNs and CNNs. The great thing is that the models are all implemented in both PyTorch and TensorFlow.

Here is a great tutorial that goes deep into understanding how TensorFlow works. And here is one by Christian Perone for PyTorch.

Fast.ai also published a course titled “Intro to NLP” accompanied by a playlist. Topics range from sentiment analysis to topic modeling to the Transformer.

Learn about Graph Convolutional Neural Networks for Molecular Generation in this talk by Xavier Bresson. Slides can be found here. And here is a paper discussing how to pre-train GNNs.


 

On the topic of graph networks, some engineers use them to predict the properties of molecules and crystals. The Google AI team also published an excellent blog post explaining how they use GNNs for odor prediction. If you are interested in getting started with Graph Neural Networks, here is a comprehensive overview of the different GNNs and their applications.

Here is a playlist of videos on unsupervised learning methods such as PCA by Rene Vidal from Johns Hopkins University.

If you are ever interested in converting a pretrained TensorFlow model to PyTorch, Thomas Wolf has you covered in this blog post.
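
One quick route (separate from the manual walkthrough in that post, and assuming the conversion support in pytorch-transformers) is the from_tf flag; the paths below are hypothetical:

```python
from pytorch_transformers import BertForPreTraining

# from_tf=True asks the library to load TensorFlow checkpoint weights;
# the directory must contain the TF checkpoint plus a matching config.
model = BertForPreTraining.from_pretrained('./tf_bert_checkpoint', from_tf=True)
model.save_pretrained('./pytorch_bert')  # write out the converted PyTorch weights
```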


Want to learn about generative deep learning? David Foster wrote a great book that teaches data scientists how to apply GANs and encoder-decoder models for performing tasks such as painting, writing, and composing music. Here is the official repository accompanying the book; it includes TensorFlow code. There is also an effort to convert the code to PyTorch as well.

A Colab notebook containing code blocks to practice and learn about causal inference concepts such as interventions, counterfactuals, etc.

Here are the materials for the NAACL 2019 tutorial on “Transfer Learning in Natural Language Processing” delivered by Sebastian Ruder, Matthew Peters, Swabha Swayamdipta and Thomas Wolf. They also provided an accompanying Google Colab notebook to get started.

Another great blog post from Jay Alammar on the topic of data representation. He also wrote many other interesting illustrated guides that include GPT-2 and BERT. Peter Bloem also published a very detailed blog post explaining all the bits that make up a Transformer.


Here is a nice overview of trends in NLP at ACL 2019, written by Mihail Eric. Some topics include infusing knowledge into NLP architectures, interpretability, and reducing bias among others. Here are a couple more overviews if you are interested: link 2 and link 3.

The full syllabus for CS231n 2019 edition was released by Stanford.

David Abel posted a set of notes for ICLR 2019. He was also kind enough to provide an impressive summary of NeurIPS 2019.

This is an excellent book that gives learners a proper introduction to deep learning, with accompanying notebooks provided as well.


An illustrated guide to BERT, ELMo, and co. for transfer learning in NLP.

Fast.ai releases its 2019 edition of the “Practical Deep Learning for Coders” course.

Learn about deep unsupervised learning in this fantastic course taught by Pieter Abbeel and others.

Gilbert Strang released a new book related to Linear Algebra and neural networks.

Caltech provided the entire syllabus, lecture slides, and video playlist for their course on “Foundation of Machine Learning”.

The “Scipy Lecture Notes” is a series of tutorials that teach you how to master tools such as matplotlib, NumPy, and SciPy.


Here is an excellent tutorial on understanding Gaussian processes. (Notebooks provided).
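
As a taste of the subject, sampling functions from a GP prior with an RBF kernel takes only a few lines (a self-contained sketch, not code from the tutorial):

```python
import numpy as np

def rbf_kernel(xa, xb, length_scale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sqdist = (xa[:, None] - xb[None, :]) ** 2
    return np.exp(-0.5 * sqdist / length_scale**2)

x = np.linspace(-3, 3, 50)
cov = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability
samples = np.random.multivariate_normal(np.zeros(len(x)), cov, size=3)
print(samples.shape)  # three draws from the prior: one function value per x
```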

This is a must-read article in which Lilian Weng provides a deep dive into generalized language models such as ULMFit, OpenAI GPT-2, and BERT.

Papers with Code is a website that shows a curated list of machine learning papers with code and state-of-the-art results.

Christoph Molnar released the first edition of “Interpretable Machine Learning” which is a book that touches on important techniques used to better interpret machine learning algorithms.

David Bamman releases the full syllabus and slides for the NLP courses offered at UC Berkeley.

Berkeley releases all materials for their “Applied NLP” class.

Aerin Kim is a senior research engineer at Microsoft and writes about topics related to applied math and deep learning. Some topics include intuitions for conditional independence, the gamma distribution, perplexity, etc.

Tai-Danae Bradley wrote this blog post discussing ways to think about matrices and tensors. The article is written with some incredible visuals which help to better understand certain transformations and operations performed on matrices.


 

I hope you found the links useful. I wish you a successful and healthy 2020!

Due to the holidays, I didn’t get much chance to proofread the article so any feedback or corrections are welcomed!


 

Original article: https://medium.com/dair-ai/nlp-year-in-review-2019-fb8d523bcb19
GitHub: https://github.com/omarsar/nlp_highlights/blob/master/NLP_2018_Highlights.pdf

 

 

 
