<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="ru">
		<id>http://neerc.ifmo.ru/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=176.59.19.246&amp;*</id>
		<title>Викиконспекты - User contributions [ru]</title>
		<link rel="self" type="application/atom+xml" href="http://neerc.ifmo.ru/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=176.59.19.246&amp;*"/>
		<link rel="alternate" type="text/html" href="http://neerc.ifmo.ru/wiki/index.php?title=%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F:%D0%92%D0%BA%D0%BB%D0%B0%D0%B4/176.59.19.246"/>
		<updated>2026-04-16T16:11:28Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>http://neerc.ifmo.ru/wiki/index.php?title=%D0%93%D0%B5%D0%BD%D0%B5%D1%80%D0%B0%D1%86%D0%B8%D1%8F_%D0%B8%D0%B7%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D1%8F_%D0%BF%D0%BE_%D1%82%D0%B5%D0%BA%D1%81%D1%82%D1%83&amp;diff=76489</id>
		<title>Генерация изображения по тексту</title>
		<link rel="alternate" type="text/html" href="http://neerc.ifmo.ru/wiki/index.php?title=%D0%93%D0%B5%D0%BD%D0%B5%D1%80%D0%B0%D1%86%D0%B8%D1%8F_%D0%B8%D0%B7%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D1%8F_%D0%BF%D0%BE_%D1%82%D0%B5%D0%BA%D1%81%D1%82%D1%83&amp;diff=76489"/>
				<updated>2021-01-06T16:01:43Z</updated>
		
		<summary type="html">&lt;p&gt;176.59.19.246: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{В разработке}}&lt;br /&gt;
&lt;br /&gt;
Automatic synthesis of realistic images from text would be both interesting and useful, but current artificial intelligence systems are still far from this goal. In recent years, however, generic and powerful recurrent neural network architectures have been developed for learning discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (''Generative Adversarial Nets, GANs'') have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. We will review the deep architecture and formulation of GANs and combine advances in text and image modeling to translate visual concepts from characters to pixels.&lt;br /&gt;
&lt;br /&gt;
== GAN ==&lt;br /&gt;
=== DCGAN ===&lt;br /&gt;
=== Attribute2Image ===&lt;br /&gt;
=== StackGAN ===&lt;br /&gt;
=== StackGAN++ ===&lt;br /&gt;
=== Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis ===&lt;br /&gt;
=== AttnGAN ===&lt;br /&gt;
=== Stacking VAE and GAN ===&lt;br /&gt;
=== ChatPainter ===&lt;br /&gt;
=== MMVR ===&lt;br /&gt;
=== FusedGAN ===&lt;br /&gt;
=== MirrorGAN ===&lt;br /&gt;
=== Obj-GANs ===&lt;br /&gt;
=== LayoutVAE ===&lt;br /&gt;
=== TextKD-GAN ===&lt;br /&gt;
=== MCA-GAN ===&lt;br /&gt;
=== LeicaGAN ===&lt;br /&gt;
== See also ==&lt;br /&gt;
*[[Generative Adversarial Nets (GAN)|Generative adversarial networks]]&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
*[https://arxiv.org/abs/1605.05396 Scott R. {{---}} Generative Adversarial Text to Image Synthesis, 2016]&lt;br /&gt;
*[https://arxiv.org/abs/1512.00570 Xinchen Y. {{---}} Conditional Image Generation from Visual Attributes, 2015]&lt;br /&gt;
*[https://arxiv.org/abs/1612.03242 Han Z., Tao X. {{---}} StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, 2017]&lt;br /&gt;
*[https://arxiv.org/abs/1710.10916 Han Z., Tao X. {{---}} StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1801.05091 Seunghoon H., Dingdong Y. {{---}} Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis, 2018]&lt;br /&gt;
*[https://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_AttnGAN_Fine-Grained_Text_CVPR_2018_paper.pdf Tao X., Pengchuan Z. {{---}} AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks, 2018]&lt;br /&gt;
*[https://ieeexplore.ieee.org/document/8499439 Chenrui Z., Yuxin P. {{---}} Stacking VAE and GAN for Context-aware Text-to-Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1802.08216 Shikhar S., Dendi S. {{---}} ChatPainter: Improving Text to Image Generation using Dialogue, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1809.10274 Shagan S., Dheeraj P. {{---}} Semantically Invariant Text-to-Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1801.05551 Navaneeth B., Gang H. {{---}} Semi-supervised FusedGAN for Conditional Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1903.05854 Tingting Q., Jing Z. {{---}} MirrorGAN: Learning Text-to-image Generation by Redescription, 2019]&lt;br /&gt;
*[https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Object-Driven_Text-To-Image_Synthesis_via_Adversarial_Training_CVPR_2019_paper.pdf Wenbo L., Pengchuan Z. {{---}} Object-driven Text-to-Image Synthesis via Adversarial Training, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1907.10719 Akash A.J., Thibaut D. {{---}} LayoutVAE: Stochastic Scene Layout Generation From a Label Set, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1905.01976 Md. Akmal H. and Mehdi R. {{---}} TextKD-GAN: Text Generation using Knowledge Distillation and Generative Adversarial Networks, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1909.07083 Bowen L., Xiaojuan Q. {{---}} MCA-GAN: Text-to-Image Generation Adversarial Network Based on Multi-Channel Attention, 2019]&lt;br /&gt;
*[http://papers.nips.cc/paper/8375-learn-imagine-and-create-text-to-image-generation-from-prior-knowledge.pdf Tingting Q., Jing Z. {{---}} Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge, 2019]&lt;br /&gt;
&lt;br /&gt;
[[Категория: Машинное обучение]]&lt;br /&gt;
[[Категория: Порождающие модели]]&lt;/div&gt;</summary>
		<author><name>176.59.19.246</name></author>	</entry>

	<entry>
		<id>http://neerc.ifmo.ru/wiki/index.php?title=%D0%93%D0%B5%D0%BD%D0%B5%D1%80%D0%B0%D1%86%D0%B8%D1%8F_%D0%B8%D0%B7%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D1%8F_%D0%BF%D0%BE_%D1%82%D0%B5%D0%BA%D1%81%D1%82%D1%83&amp;diff=76488</id>
		<title>Генерация изображения по тексту</title>
		<link rel="alternate" type="text/html" href="http://neerc.ifmo.ru/wiki/index.php?title=%D0%93%D0%B5%D0%BD%D0%B5%D1%80%D0%B0%D1%86%D0%B8%D1%8F_%D0%B8%D0%B7%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D1%8F_%D0%BF%D0%BE_%D1%82%D0%B5%D0%BA%D1%81%D1%82%D1%83&amp;diff=76488"/>
				<updated>2021-01-06T15:51:53Z</updated>
		
		<summary type="html">&lt;p&gt;176.59.19.246: Add the contents and categories&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{В разработке}}&lt;br /&gt;
&lt;br /&gt;
== GAN ==&lt;br /&gt;
=== DCGAN ===&lt;br /&gt;
=== Attribute2Image ===&lt;br /&gt;
=== StackGAN ===&lt;br /&gt;
=== StackGAN++ ===&lt;br /&gt;
=== Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis ===&lt;br /&gt;
=== AttnGAN ===&lt;br /&gt;
=== Stacking VAE and GAN ===&lt;br /&gt;
=== ChatPainter ===&lt;br /&gt;
=== MMVR ===&lt;br /&gt;
=== FusedGAN ===&lt;br /&gt;
=== MirrorGAN ===&lt;br /&gt;
=== Obj-GANs ===&lt;br /&gt;
=== LayoutVAE ===&lt;br /&gt;
=== TextKD-GAN ===&lt;br /&gt;
=== MCA-GAN ===&lt;br /&gt;
=== LeicaGAN ===&lt;br /&gt;
== See also ==&lt;br /&gt;
*[[Generative Adversarial Nets (GAN)|Generative adversarial networks]]&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
*[https://arxiv.org/abs/1605.05396 Scott R. {{---}} Generative Adversarial Text to Image Synthesis, 2016]&lt;br /&gt;
*[https://arxiv.org/abs/1512.00570 Xinchen Y. {{---}} Conditional Image Generation from Visual Attributes, 2015]&lt;br /&gt;
*[https://arxiv.org/abs/1612.03242 Han Z., Tao X. {{---}} StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, 2017]&lt;br /&gt;
*[https://arxiv.org/abs/1710.10916 Han Z., Tao X. {{---}} StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1801.05091 Seunghoon H., Dingdong Y. {{---}} Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis, 2018]&lt;br /&gt;
*[https://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_AttnGAN_Fine-Grained_Text_CVPR_2018_paper.pdf Tao X., Pengchuan Z. {{---}} AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks, 2018]&lt;br /&gt;
*[https://ieeexplore.ieee.org/document/8499439 Chenrui Z., Yuxin P. {{---}} Stacking VAE and GAN for Context-aware Text-to-Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1802.08216 Shikhar S., Dendi S. {{---}} ChatPainter: Improving Text to Image Generation using Dialogue, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1809.10274 Shagan S., Dheeraj P. {{---}} Semantically Invariant Text-to-Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1801.05551 Navaneeth B., Gang H. {{---}} Semi-supervised FusedGAN for Conditional Image Generation, 2018]&lt;br /&gt;
*[https://arxiv.org/abs/1903.05854 Tingting Q., Jing Z. {{---}} MirrorGAN: Learning Text-to-image Generation by Redescription, 2019]&lt;br /&gt;
*[https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Object-Driven_Text-To-Image_Synthesis_via_Adversarial_Training_CVPR_2019_paper.pdf Wenbo L., Pengchuan Z. {{---}} Object-driven Text-to-Image Synthesis via Adversarial Training, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1907.10719 Akash A.J., Thibaut D. {{---}} LayoutVAE: Stochastic Scene Layout Generation From a Label Set, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1905.01976 Md. Akmal H. and Mehdi R. {{---}} TextKD-GAN: Text Generation using Knowledge Distillation and Generative Adversarial Networks, 2019]&lt;br /&gt;
*[https://arxiv.org/abs/1909.07083 Bowen L., Xiaojuan Q. {{---}} MCA-GAN: Text-to-Image Generation Adversarial Network Based on Multi-Channel Attention, 2019]&lt;br /&gt;
*[http://papers.nips.cc/paper/8375-learn-imagine-and-create-text-to-image-generation-from-prior-knowledge.pdf Tingting Q., Jing Z. {{---}} Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge, 2019]&lt;br /&gt;
&lt;br /&gt;
[[Категория: Машинное обучение]]&lt;br /&gt;
[[Категория: Порождающие модели]]&lt;/div&gt;</summary>
		<author><name>176.59.19.246</name></author>	</entry>

	</feed>