Painted in a Good Light: Method Makes Any Still-life Painting Dynamic

“A Painterly Rendering Approach to Create Still-life Paintings With Dynamic Lighting” © 2020 Texas A&M University

SIGGRAPH 2020 Posters selection “A Painterly Rendering Approach to Create Still-life Paintings With Dynamic Lighting” presents a method that turns still-life paintings with global illumination effects into dynamic paintings with moving lights. Presented by Meena Subramanian, this project took home second prize in the graduate category of the ACM Student Research Competition. We caught up with Dr. Ergun Akleman, who advised Meena throughout her capstone project, to learn more about the method presented and how the SIGGRAPH Posters program is an inclusive venue that amplifies all types of research.

SIGGRAPH: Share some background on “A Painterly Rendering Approach to Create Still-life Paintings With Dynamic Lighting.” What inspired this research?

Ergun Akleman (EA): I have been working on two ideas to obtain desired visual styles with simple yet complete control. For modeling, we use anamorphic bas-reliefs, which are perspective-embedded shapes. Anamorphic bas-reliefs simplify the modeling process and allow us to obtain non-realistic shapes. For shading, we use Barycentric shading, a concept borrowed from computer-aided design that allows us to control images the way we control parametric surfaces. Because these methods make it possible to design a desired style effectively, they can give technical directors a competitive advantage. Since graduates of our M.S. in visualization program mostly work as technical directors, these methods evolved while I taught a variety of graduate classes in the program.

Meena Subramanian took a class of mine, digital compositing, in spring 2019 [1]. In that class, we seamlessly integrate global illumination effects created by virtual objects into a photograph, painting, video, or animation. Using Barycentric methods, we can also recreate entire visuals and turn them virtual. For her final project, Meena chose one of her still-life paintings with a wine glass and grapes. (See Meena’s original painting here.) This is a great problem because grapes have subsurface scattering and reflection, and wine glasses have reflection, refraction, and scattering across a wide range of regions, such as thick glass, glass and water, thin glass, and the top surface of the wine. Meena did a good job by the end of the semester, but there was still room for significant improvement.

Meena also took my image synthesis course in the same semester. In that course, she developed her own ray tracer, and as a result she became very interested in rendering and shading. For her capstone project, she chose to improve her digital compositing project. As we had theoretically expected, using a very simple rendering process and Barycentric shading, she obtained subsurface scattering on the grapes and was able to precisely control every aspect. The two images are from her animated dynamic painting with moving lights. She completed her capstone in spring 2020, and we submitted the resulting work as a SIGGRAPH poster. She did great work explaining the process and method, and she received second place in the competition.

SIGGRAPH: Tell us how you developed your method to turn still-life paintings with global illumination into dynamic paintings with moving area lights.

EA: This direction started almost 20 years ago when I created the digital compositing course [1]. We initially used standard tools, such as gazing balls, to collect data from the environment and recover camera parameters. There were still many problems, such as registration of shape boundaries. If we did not get the shape boundaries right, shadows cast on the real objects or reflections of virtual objects onto real objects did not come out quite right. I realized that there must be a simple way to include global illumination in digital compositing. I developed an idea that I initially called Mock3D shapes. These are shapes that are not really 3D but appear 3D. It was easier to model them as proxy shapes, and we did not need to know camera parameters or the true 3D shapes to register Mock3D shapes perfectly. Two of my students earned their Ph.D.s working on Mock3D shapes, in 2014 and 2017 respectively [2,3].

However, this was really a fringe direction, and we published it as “Global Illumination for 2D Artworks With Vector Field Rendering” at SIGGRAPH 2014 [4]. Today, I call Mock3D shapes anamorphic bas-reliefs. They can be 3D models, but the key idea is that the shapes appear correct from only one point of view. For instance, the figure on the right shows the anamorphic bas-relief model we used to create the dynamic wine glass painting. Note that this model is not correct from any other view.

Another problem with digital compositing was color registration. Based on my background in geometric modeling, I developed a new concept called Barycentric shading. Using Barycentric shading, it became easier to imitate any material as an interpolation of intrinsic images [5]. My dream was to one day develop a completely new digital compositing system that could significantly simplify the technical director’s work. I was also thinking that the same type of approach could be used to obtain augmented reality with global illumination. However, these directions needed a significant amount of additional research and development, and I realized that this direction would not become mainstream any time soon. Instead of giving up, I decided to demonstrate the power of this approach in many small projects.
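To make the idea of imitating a material as an interpolation of intrinsic images more concrete, here is a minimal sketch of a Barycentric blend. It assumes a small set of artist-made control images (for example, dark, mid, and bright versions of the painting) and a per-pixel shading parameter in [0, 1]; the function names, the Bernstein-basis weighting, and the NumPy implementation are illustrative assumptions, not the exact formulation in [5].

    import numpy as np
    from math import comb

    def barycentric_shade(controls, t):
        """Blend control images with Bernstein (barycentric) weights.

        controls : list of HxWx3 float arrays, e.g. [dark, mid, bright]
        t        : HxW float array in [0, 1], e.g. a cosine-like shading term
        """
        n = len(controls) - 1
        result = np.zeros_like(controls[0])
        for i, image in enumerate(controls):
            # Bernstein weights are non-negative and sum to 1, so every output
            # pixel stays inside the convex hull of the control-image colors.
            weight = comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))
            result += weight[..., None] * image
        return result

In this reading, the control images play the role of the control points of a parametric surface: the artist paints the extreme looks once, and the shading parameter only moves the result between them.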

Over the last five years, my students completed many thesis and capstone projects, and we published almost all of them as SIGGRAPH posters. We were able to obtain a wide variety of styles, from Chinese painting to charcoal and crosshatching, and from painters such as Edgar Payne, Anne Garney, and Georgia O’Keeffe [6,7,8,9,10,11,12,13]. In each case, we included a variety of global illumination effects. We also developed web-based interactive systems; you can find them online here. Using these systems, artists can create their own dynamic paintings by uploading intrinsic images directly to our website.
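A dynamic painting then follows by re-evaluating that blend as the light moves. The sketch below is a rough assumption rather than the actual system: it computes a clamped cosine term from a proxy normal map (standing in for the anamorphic bas-relief) for each light direction and re-blends the control images per frame, reusing the barycentric_shade function sketched above.

    import numpy as np

    def render_frames(controls, normals, light_dirs):
        """Re-blend control images for each light direction of a moving light.

        controls   : list of HxWx3 control images (see barycentric_shade above)
        normals    : HxWx3 array of unit normals from a proxy, bas-relief-like model
        light_dirs : list of unit 3-vectors describing the moving light
        """
        frames = []
        for light in light_dirs:
            # Clamped cosine term drives the barycentric weights for this frame.
            t = np.clip(np.einsum('ijk,k->ij', normals, np.asarray(light)), 0.0, 1.0)
            frames.append(barycentric_shade(controls, t))
        return frames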

SIGGRAPH: How did you employ machine learning to inform this research?

EA: We did not employ machine learning; however, this whole approach is a perfect fit for it. If, instead of trying to reconstruct intrinsic parameters for physically based materials, we reconstruct intrinsic parameters for a Barycentric shader, we can potentially speed up the process. In addition, we can turn non-realistic images, such as paintings, into renderable objects.

SIGGRAPH: How do you envision your method being used in the future? By artists? In future research?

EA: I am a computer graphics researcher, but I am essentially an artist. I started as a professional cartoonist, and as a cartoonist I have always broken the rules slightly. This concept is hard to explain to a non-practitioner. As representational artists, we need to stay between two worlds: our artworks do not necessarily have to be real, but they must be rooted in realism. For instance, a perfect perspective is not desirable, but we do not want a completely wrong perspective, either. We will have global illumination, but it will not be completely correct. Our models and shaders must support this type of flexibility. To me, this direction also helps so-called “photorealism”: if we can create imperfect artworks, it is actually easier to create the illusion of reality.

SIGGRAPH: Share a bit about your experience attending the virtual SIGGRAPH 2020. How did it go? Any favorite memories or sessions you enjoyed?

EA: I personally do not like to travel, so I like virtual conferences. Although I couldn’t attend many SIGGRAPH 2020 sessions due to the start of Texas A&M University’s fall semester, I did not miss Meena’s final Posters presentation. I was really proud of her great presentation and her knowledge.

SIGGRAPH: What advice would you share with others considering submitting to the SIGGRAPH Posters program?

EA: This direction would not have happened without the SIGGRAPH Posters program. The program allows non-mainstream research to be heard by the community. Many people now know about Barycentric shading, mainly because of our SIGGRAPH Posters publications. I am very thankful that SIGGRAPH allows non-mainstream voices to be heard through the Posters program. I believe this work will make a big impact in the future, and having these breadcrumbs is important so that other researchers can learn about this direction.

There’s still time! Submit your innovative research to the SIGGRAPH 2021 Posters program by Tuesday, 20 April.

References

[1] http://people.tamu.edu/~ergun/courses/viza665/
[2] Wang, Youyou. “Qualitative Global Illumination of Mock-3D Scenes.” Doctoral dissertation, Texas A&M University, 2014. Available electronically from https://hdl.handle.net/1969.1/157921.
[3] Gonen, Mehmet Ozgur. “Quad Dominant 2-Manifold Mesh Modeling.” Doctoral dissertation, Texas A&M University, 2017. Available electronically from https://hdl.handle.net/1969.1/161442.
[4] Wang, Youyou, Ozgur Gonen, and Ergun Akleman. “Global illumination for 2d artworks with vector field rendering.” In ACM SIGGRAPH 2014 Posters, pp. 1-1. 2014.
[5] Akleman, Ergun, S. Liu, and Donald House. “Barycentric shaders: Art directed shading using control images.” In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering, pp. 39-49. 2016.
[6] Liu, Siran, and Ergun Akleman. “Chinese ink and brush painting with reflections.” In ACM SIGGRAPH 2015 Posters, pp. 1-1. 2015.
[7] Du, Yuxiao, and Ergun Akleman. “Charcoal rendering and shading with reflections.” In ACM SIGGRAPH 2016 Posters, pp. 1-2. 2016.
[8] Du, Yuxiao, and Ergun Akleman. “Designing look-and-feel using generalized crosshatching.” In ACM SIGGRAPH 2017 Talks, pp. 1-2. 2017.
[9] Akleman, Ergun, Fermi Perumal, and Youyou Wang. “Cos Θ shadows: an integrated model for direct illumination, subsurface scattering and shadow computation.” In ACM SIGGRAPH 2017 Posters, pp. 1-2. 2017.
[10] Castaneda, Saif, and Ergun Akleman. “Shading with painterly filtered layers: a technique to obtain painterly portrait animations.” In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, pp. 1-2. 2017.
[11] Justice, Matthew, and Ergun Akleman. “A process to create dynamic landscape paintings using barycentric shading with control paintings.” In ACM SIGGRAPH 2018 Posters, pp. 1-2. 2018.
[12] Clifford, Jack, and Ergun Akleman. “A Barycentric Shading Process to Create Dynamic Paintings in Contemporary Fauvist-Expressionist Style with Reflections.” In Proceedings of Eurasia Graphics 2020, pp. 32-39. 2020.
[13] Subramanian, Meena, and Ergun Akleman. “A Painterly Rendering Approach to Create Still-Life Paintings with Dynamic Lighting.” In ACM SIGGRAPH 2020 Posters, pp. 1-2. 2020.


Dr. Ergun Akleman is a professor in the Departments of Visualization and Computer Science and Engineering. Akleman has been at Texas A&M University for 25 years. He received his Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in 1992. Akleman is a living embodiment of transdisciplinary teaching, research, and creative activity. He has over 150 publications in leading journals and conferences across a wide variety of disciplines, from computer graphics, computer-aided design, and mathematics to art, architecture, and the social sciences. He also is a professional cartoonist who has published over 500 cartoons. He has a bimonthly corner called “Computing Through Time” in IEEE Computer, the flagship magazine of the IEEE Computer Society. He has illustrated and written several children’s books in Turkish, and he writes monthly popular science articles with his own illustrations. His most significant and influential contributions as a researcher have been in shape modeling and computer-aided sculpting. His work on topological mesh modeling has resulted in a powerful manifold mesh modeling system called TopMod, which many people have downloaded. Many talented artists have created very interesting sculptures using TopMod, and there are approximately 100 YouTube videos on TopMod. He teaches both technical and artistic courses. Seventy-five students have received graduate degrees under his supervision. Most of his former students now work for companies such as Pixar, Disney, DreamWorks, Digital Domain, Google, Amazon, and Facebook.
