“Shanshui-DaDA” is an interactive installation that uses a machine-learning algorithm to help amateur participants create traditional-style Chinese ink wash paintings. It is the first in a series of explorations, “DaDA: Design and Draw with AI”, that seek out AI’s role in areas traditionally centered on human creativity. By challenging the creator’s conventional position, the artist poses the question: can we collaborate with AI to facilitate, or even enhance, human creativity?
What it does
The audience is invited to sketch a simple line drawing of their ideal landscape painting in the software interface; “Shanshui-DaDA” then generates a Chinese “Shanshui” painting based on the drawing input by the user.
How we built it
"Shanshui-DaDA" is trained with CycleGAN on 108 Shanshui paintings (later expanded to 205) collected from open data online. The raw painting scans are preprocessed into 1,772 pairs of edge maps (sketches) and Shanshui paintings. The trained machine-learning model is then wrapped in a client-server system: participants sketch on the front interface (a 2018 iPad) and see the generated painting in real time, while a Node server and the machine-learning model run on the backend on a separate computer on the local network.
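To illustrate the edge-painting pairing idea, here is a minimal, dependency-free sketch of edge extraction with a simple gradient threshold. This is an illustrative toy only, not the pipeline used for the actual dataset (which likely relies on a proper edge detector); the threshold value and the 4x4 "painting" below are made-up assumptions.

```python
def extract_edges(image, threshold=32):
    """Toy edge extraction: mark a pixel as an edge when the
    horizontal or vertical intensity gradient exceeds `threshold`.
    `image` is a 2D list of grayscale values in 0-255."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(image[y][x + 1] - image[y][x])   # horizontal gradient
            dy = abs(image[y + 1][x] - image[y][x])   # vertical gradient
            if max(dx, dy) > threshold:
                edges[y][x] = 255
    return edges

# Toy 4x4 "painting": a dark brush stroke on a light background.
img = [
    [240, 240, 240, 240],
    [240,  20,  20, 240],
    [240,  20,  20, 240],
    [240, 240, 240, 240],
]
edge_map = extract_edges(img)
```

Each resulting edge map, paired with its source painting, would form one training example for the sketch-to-painting translation.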
More detailed documentation, covering how to preprocess the collected paintings, the training setup and process, and everything about how the project was developed from concept to implementation, is here!
Challenges we ran into
In the short time we had to experiment with TF 2.0, we have not gotten the same results as the previous implementation (based on PyTorch). The TF 1.7 version kind of works, but the results are not satisfying; as of submitting this, we are still figuring out why. We also haven't found a working pipeline for porting the TF 2.0 model to tf.js yet. We did successfully port the TF 1.7 model to tf.js, but for some reason it doesn't work well: the tf.js model generates very different results from its TF Python counterpart. Again, we are still working on it, and any suggestions would be appreciated.
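For context, the conversion path we are debugging looks roughly like this (a sketch only; the exact flags depend on the TF and tensorflowjs versions, and `saved_model/` and `web_model/` are placeholder paths):

```shell
# Export the trained model as a TF SavedModel, then convert it for
# the browser with the tensorflowjs converter (pip install tensorflowjs).
tensorflowjs_converter \
    --input_format=tf_saved_model \
    saved_model/ \
    web_model/
```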
Accomplishments that we're proud of
More than 40,000 participants interacted with ShanshuiDaDA and co-created Shanshui paintings during its appearance at the Shanghai Duolun Museum of Modern Art as the main installation of the art exhibition "The Love of Shanshui". Many more interacted with ShanshuiDaDA at other art spaces and exhibitions, such as Fouhaus +, Yeah!Nah! gallery, and NeurIPS 2018 (our paper on ShanshuiDaDA was accepted to the Machine Learning for Creativity and Design workshop at NeurIPS 2018, and also to the AI Art Gallery that runs in parallel with the workshop).
What we learned
TF 2.0 is way more straightforward in every way! Love it!
What's next for ShanshuiDaDA
Port it to tf.js, or rather, figure out why it isn't working as well as the TF Python version. With tf.js we can run it with pure front-end code and make it available to everyone. It can then eventually fulfil the original idea of ShanshuiDaDA: enhancing everyone's creativity, and helping all amateurs draw Shanshui and use it as an expressive medium in their daily life!