爱可可 AI Paper Recommendations (October 13) (Part 3)


Creating a descriptive grammar of a language is an indispensable step for language documentation and preservation; it is, however, a tedious, time-consuming task. In this paper, we take steps towards automating this process by devising a framework for extracting a first-pass grammatical specification from raw text in a concise, human- and machine-readable format. We focus on extracting rules describing agreement, a morphosyntactic phenomenon at the core of the grammars of many of the world's languages. We apply our framework to all languages included in the Universal Dependencies project, with promising results. Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data. We confirm this finding with human expert evaluations of the rules our framework produces, which have an average accuracy of 78%. We release an interface demonstrating the extracted rules at this https URL.
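As a rough illustration of what first-pass agreement-rule extraction from a Universal Dependencies treebank could look like, here is a minimal Python sketch. The CoNLL-U parsing, the `amod`/Gender example, the 0.9 threshold, and the file path in the usage comment are illustrative assumptions, not the paper's actual framework (which additionally relies on cross-lingual transfer and learned parsers).

```python
# Minimal sketch: count how often a dependent agrees with its head on a
# morphological feature in a CoNLL-U treebank, and emit a candidate rule
# when agreement is (nearly) categorical. Illustrative only.
from collections import defaultdict

def parse_feats(feats_col):
    """Turn a CoNLL-U FEATS column like 'Gender=Fem|Number=Sing' into a dict."""
    if feats_col in ("_", ""):
        return {}
    return dict(pair.split("=", 1) for pair in feats_col.split("|") if "=" in pair)

def agreement_stats(conllu_text, deprel="amod", feature="Gender"):
    """For each (head UPOS, dependent UPOS) pair linked by `deprel`, compute how
    often head and dependent carry the same value of `feature`."""
    agree, total = defaultdict(int), defaultdict(int)
    for sent in conllu_text.strip().split("\n\n"):
        rows = [line.split("\t") for line in sent.splitlines()
                if line and not line.startswith("#")]
        tokens = {row[0]: row for row in rows if row[0].isdigit()}
        for row in tokens.values():
            if row[7] != deprel or row[6] not in tokens:
                continue
            head = tokens[row[6]]
            dep_val = parse_feats(row[5]).get(feature)    # column 5 = FEATS
            head_val = parse_feats(head[5]).get(feature)
            if dep_val and head_val:
                key = (head[3], row[3])                   # (head UPOS, dependent UPOS)
                total[key] += 1
                agree[key] += int(dep_val == head_val)
    return {k: agree[k] / total[k] for k in total}

def extract_rules(stats, feature="Gender", threshold=0.9):
    """Report a human-readable rule for pairs whose agreement rate is near-categorical."""
    return [f"{dep} agrees with its {head} head in {feature}"
            for (head, dep), rate in stats.items() if rate >= threshold]

# Usage (hypothetical treebank path):
# stats = agreement_stats(open("es_ancora-ud-train.conllu").read())
# print(extract_rules(stats))
```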
5、[CV]Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
(Under review as a conference paper at ICLR 2021)
Combines generative models with differentiable rendering to extract and disentangle the 3D knowledge learned by generative image-synthesis models: a GAN is exploited as a multi-view data generator to train an inverse graphics network with a differentiable renderer, and the trained inverse graphics network in turn serves as a teacher that disentangles the GAN's latent code into interpretable 3D properties; the whole architecture is trained iteratively with cycle-consistency losses. The approach yields significantly higher-quality 3D reconstructions while requiring 10,000× less annotation effort than standard datasets.
Differentiable rendering has paved the way to training neural networks to perform “inverse graphics” tasks such as predicting 3D geometry from monocular photographs. To train high-performing models, most current approaches rely on multi-view imagery, which is not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be manipulated simply by manipulating the latent codes. However, these latent codes often lack further physical interpretation, and thus GANs cannot easily be inverted to perform explicit 3D reasoning. In this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers. Key to our approach is to exploit GANs as a multi-view data generator to train an inverse graphics network using an off-the-shelf differentiable renderer, and the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties. The entire architecture is trained iteratively using cycle consistency losses. We show that our approach significantly outperforms state-of-the-art inverse graphics networks trained on existing datasets, both quantitatively and via user studies. We further showcase the disentangled GAN as a controllable 3D “neural renderer”, complementing traditional graphics renderers.
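The training signal described in the abstract can be sketched, very loosely, in PyTorch. Everything below (the module names PretrainedGAN, InverseGraphicsNet, LatentMapper and differentiable_render, the toy dimensions, and the exact loss terms) is an illustrative assumption rather than the paper's architecture, which builds on a pretrained image GAN, a camera-aware inverse graphics network, and an off-the-shelf differentiable renderer.

```python
# Conceptual sketch only: stand-in modules showing how a frozen GAN can act
# as a multi-view data generator, an inverse graphics network can predict 3D
# properties from its samples, and cycle-consistency losses can tie the GAN's
# latent code to those properties. Sizes, modules and losses are toy assumptions.
import torch
import torch.nn as nn

LATENT, N_VERTS, IMG = 128, 64, 32 * 32 * 3        # toy sizes, not the paper's
_PROJ = torch.randn(N_VERTS * 3, IMG) * 0.01        # fixed stand-in projection

class PretrainedGAN(nn.Module):
    """Stand-in for a frozen image GAN whose latent code controls viewpoint."""
    def __init__(self):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG))
    def forward(self, z):                            # latent code -> flat "image"
        return torch.sigmoid(self.g(z))

class InverseGraphicsNet(nn.Module):
    """Predicts 3D properties (here just a flat vertex tensor) from an image."""
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, N_VERTS * 3))
    def forward(self, img):
        return self.f(img)

class LatentMapper(nn.Module):
    """Maps disentangled 3D properties back into the GAN's latent space."""
    def __init__(self):
        super().__init__()
        self.m = nn.Linear(N_VERTS * 3, LATENT)
    def forward(self, shape):
        return self.m(shape)

def differentiable_render(shape):
    """Placeholder for an off-the-shelf differentiable renderer (e.g. DIB-R)."""
    return torch.sigmoid(shape @ _PROJ)

gan, inv_net, mapper = PretrainedGAN(), InverseGraphicsNet(), LatentMapper()
opt = torch.optim.Adam(list(inv_net.parameters()) + list(mapper.parameters()), lr=1e-3)

for step in range(100):
    z = torch.randn(8, LATENT)                       # sample GAN "viewpoints"
    with torch.no_grad():
        views = gan(z)                               # GAN as multi-view data generator
    shape = inv_net(views)                           # inverse graphics: image -> 3D
    rendered = differentiable_render(shape)          # re-render the predicted 3D
    z_back = mapper(shape)                           # 3D properties -> latent code
    loss = (nn.functional.mse_loss(rendered, views)  # image reconstruction cycle
            + nn.functional.mse_loss(z_back, z))     # latent-code cycle
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper itself the inverse graphics network is first trained on GAN-generated multi-view images and only then used as a teacher to disentangle the GAN's latent code; the sketch above collapses those stages into a single loop for brevity.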