See Gatt and Krahmer (2018, p. 118)[1] for a survey.

Approaches

Using recurrent neural networks

Fortuin et al. (2018)[2]:

  • Devised a tool to help authors overcome writer's block
  • Used NLP tools to extract event sequences, where an event is a tuple of (character, location, action, object, keyword)
  • Used a vanilla LSTM to predict the next event
  • Training details: "We use batch normalization (Ioffe and Szegedy 2015) and dropout (Srivastava et al. 2014) on the LSTM output features and train the network with the Adam optimizer (Kingma and Ba 2015). Moreover, we use gradient norm clipping (Pascanu, Mikolov, and Bengio 2012) and a cyclic learning rate scheme (Smith 2017). The parameters are initialized using the Glorot method (Glorot and Bengio 2010). During training we also use teacher forcing (Williams and Zipser 1989)."
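
The gradient-clipping and cyclic-learning-rate pieces of that recipe can be sketched in a few lines. This is an illustrative stand-alone sketch, not Fortuin et al.'s code; the learning-rate values and clipping threshold below are invented for the example.

```python
import math

def clip_grad_norm(grads, max_norm):
    """Gradient norm clipping (Pascanu et al. 2012): if the L2 norm of the
    full gradient vector exceeds max_norm, rescale the whole vector so its
    norm equals max_norm."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

def triangular_lr(step, base_lr, max_lr, step_size):
    """Cyclic learning rate, triangular policy (Smith 2017): oscillate
    linearly between base_lr and max_lr with period 2 * step_size."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

For example, clipping the gradient `[3.0, 4.0]` (norm 5) to a max norm of 1 rescales it to `[0.6, 0.8]`, and the triangular schedule starts at `base_lr`, peaks at `max_lr` after `step_size` steps, and returns to `base_lr` after a full cycle.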

Martin et al. (2018)[3]:

  • Represent events as a 4-tuple ⟨s, v, o, m⟩: "Following Pichotta and Mooney (2016a), we developed a 4-tuple event representation where v is a verb, s is the subject of the verb, o is the object of the verb, and m is the modifier or “wildcard”, which can be a propositional object, indirect object, causal complement (e.g., in “I was glad that he drove,” “drove” is the causal complement to “glad.”), or any other dependency unclassifiable to Stanford’s dependency parser."

  • Used an LSTM to predict event sequences
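
The 4-tuple representation can be sketched with a plain namedtuple. The slot values and the serialization format below are illustrative, not the output of Stanford's parser or Martin et al.'s actual preprocessing:

```python
from collections import namedtuple

# Martin et al.'s event representation: subject s, verb v, object o, modifier m.
Event = namedtuple("Event", ["s", "v", "o", "m"])

# Hand-built event for "I was glad that he drove": "drove" is the causal
# complement of "glad", so it lands in the modifier slot m. (Illustrative
# values; a real pipeline would fill these from a dependency parse.)
event = Event(s="I", v="was", o="glad", m="drove")

def to_sequence_token(e):
    """Serialize an event tuple into a single token a sequence model can consume."""
    return "<{}|{}|{}|{}>".format(e.s, e.v, e.o, e.m)
```

With events serialized this way, "predicting the next event" reduces to ordinary next-token prediction over event tokens rather than words.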

Harrison et al. (2017)[4]:

  • Used MCMC to sample from the event-sequence distribution, with an LSTM guiding the sampling
  • Represented an event as a tuple: <subject, verb, object, modifier>
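
A toy Metropolis-Hastings loop over event sequences might look like the following. The hand-written `log_score` function (which merely penalizes immediate repetition) stands in for an LSTM's sequence log-probability, and the event vocabulary is invented:

```python
import math
import random

VOCAB = ["wake", "eat", "travel", "fight", "sleep"]

def log_score(seq):
    """Toy stand-in for an LSTM's log-probability of an event sequence:
    penalize immediately repeated events."""
    return sum(-1.0 if a == b else 0.0 for a, b in zip(seq, seq[1:]))

def mh_sample(seq, steps, rng):
    """Metropolis-Hastings: propose replacing one event at random (a
    symmetric proposal), accept with probability min(1, p(new)/p(old))."""
    seq = list(seq)
    for _ in range(steps):
        proposal = list(seq)
        proposal[rng.randrange(len(seq))] = rng.choice(VOCAB)
        delta = log_score(proposal) - log_score(seq)
        if rng.random() < math.exp(min(0.0, delta)):
            seq = proposal
    return seq

sample = mh_sample(["wake"] * 5, steps=200, rng=random.Random(0))
```

Because the single-event replacement proposal is symmetric, the acceptance rule only needs the score ratio, so any model that can score a whole sequence (such as an LSTM) can drive the chain.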

Khalifa et al. (2017)[5]:

  • Used two layers of LSTMs
  • Experimented with character-based models but opted for word-based ones instead
  • Training details: "trained for 2,500 epochs using the Adam Optimizer with fixed learning rate 0.0001"
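
The character-based vs. word-based choice comes down to what counts as a sequence element, which can be sketched in two lines (the example sentence is invented):

```python
def char_tokens(text):
    """Character-based model: each character is one sequence element."""
    return list(text)

def word_tokens(text):
    """Word-based model: each whitespace-delimited word is one element."""
    return text.split()

sentence = "the hero opened the door"
# Word sequences are much shorter (5 elements vs. 24 characters here), so
# the LSTM has to carry information across fewer steps, at the cost of a
# far larger output vocabulary.
```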

References

  1. Gatt, A., & Krahmer, E. (2018). Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61, 65–170.
  2. Fortuin, V., Weber, R. M., Schriber, S., Wotruba, D., & Gross, M. (2018). InspireMe: Learning Sequence Models for Stories, 7747–7752.
  3. Martin, L. J., Ammanabrolu, P., Wang, X., Hancock, W., Singh, S., Harrison, B., & Riedl, M. O. (2018). Event Representations for Automated Story Generation with Deep Neural Nets. AAAI.
  4. Harrison, B., Purdy, C., & Riedl, M. O. (2017). Toward Automated Story Generation with Markov Chain Monte Carlo Methods and Deep Neural Networks.
  5. Khalifa, A., Barros, G. A. B., & Togelius, J. (2017). DeepTingle.