Our discussion first focused on my recent work. We talked about what I had done during the Tate Exchange. I explained that I had attended all four Digital Maker Collective days during February and March, held over the entire 5th floor of the Switch House, Tate Modern. I led an activity entitled ‘Virtual meets Reality’, ably assisted by Kirstin Barnes (MFA CSM) and Aurelie Freoua (MA FAD Alumni), and in the February sessions also by some BA students from Camberwell and Wimbledon. The activity involved helping visitors experience for themselves Google Tilt Brush, a 3D painting app used with HTC Vive virtual reality equipment; mixed reality using the Microsoft HoloLens headset; and 3D scanning using the Occipital Structure Sensor attached to an iPad mini. These activities were well attended and had a ‘Wow’ factor for most people trying them out for the first time.
I also offered a similar activity during our Low Residency in February, with much-needed help from Manolis Perrakis, an MA Fine Art Digital first-year online student from Greece, who had prior experience with the HoloLens. Jonathan tried both the HoloLens and Tilt Brush during that session. He has a strong preference for the HoloLens, as you can still see what is around you when using it, the holograms being projected into the real-world space of the Camberwell Photography Studio, whereas with the HTC Vive you are in another virtual world altogether.
I have assisted my wife, Suzy, with her installation for an MA Museums and Galleries exhibition at the Platform Gallery, Kingston University, which finished last week. The exhibition transfers to the Museum of the Future, Surbiton next week. Here, we exhibited ‘The Scream 2030’. This is a 3D printed sculpture produced from a highly detailed scan made using the Veronica Scanner developed by the Factum Foundation (who pioneered 3D printing in archaeology, allowing destroyed artefacts from antiquity, such as those in Palmyra, to be reproduced). It is also a hologram, produced from the same scan and shown using the HoloLens. The idea was that an original sculpture which had been removed for conservation, was away on loan, or perhaps had been ‘conserved’ as a hologram, could still be seen in its original setting: to show what is possible now, but may be commonplace by 2030.
Above you can see ‘The Scream 2030’ on a plinth, and then on the floor, with one of the attendees viewing the exhibit holographically using the HoloLens. Below you can see what the viewer saw in the HoloLens (this picture was taken during the Tate Exchange, hence a different colour plinth).
This has some relevance to my proposed post-MA research, as it illustrates how an object could be ‘conserved’ as a hologram. See my blog post on my PhD research proposal.
Our discussion then moved on to my proposed MA show (which I will be installing in only 14 weeks’ time!!!). I explained that my exhibit was planned in three layers and based on the book ‘The Optician of Lampedusa’. The three layers are: a large video projection of seagulls diving and screeching as a backdrop scene; two life-size sculptures, one representing Theresa, the optician’s wife, and one representing a refugee; and, I had hoped, a third layer of holographic recordings of the actors’ narratives viewed and heard in the HoloLens.
I said that I would be using video from a photo library for the backdrop, as previously discussed with Prof. Lucy Orta and in my earlier blog post.
The sculpture of Theresa is the same as the maquette used in ‘The Scream 2030’, except that it will be life-size and will represent the moment when she first saw refugees drowning in the sea. The face of this sculpture has just been 3D printed and can be seen below. The sculpture is being printed in ten parts, so that each part fits within the bounding box (maximum print size) of the ProJet 360 printer at CSM. These will need to be assembled and sanded, which I plan to do over the Easter break.
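As an aside, the ‘fits within the bounding box’ check can be automated before sending anything to print. The sketch below is only illustrative: it assumes a Python environment with the trimesh library, hypothetical part file names, and an approximate build volume for the ProJet 360 (the printer’s own spec sheet is the authority).

```python
# A minimal sketch, assuming the trimesh library and hypothetical part file names.
# The build volume figures are an assumption -- confirm against the ProJet 360 spec sheet.
import trimesh

BUILD_VOLUME_MM = (203.0, 254.0, 203.0)  # assumed maximum print size (x, y, z) in mm

def fits_build_volume(mesh_path, build=BUILD_VOLUME_MM):
    """True if the part's axis-aligned bounding box fits the build volume in some axis order."""
    part = trimesh.load(mesh_path)
    extents = sorted(part.extents)        # bounding-box dimensions, smallest first
    return all(e <= b for e, b in zip(extents, sorted(build)))

# Check the ten exported parts of the sculpture (hypothetical file names).
for i in range(1, 11):
    path = f"theresa_part_{i:02d}.stl"
    print(path, "fits" if fits_build_volume(path) else "too large - split further")
```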
I also showed Jonathan a small 3D print that I had produced (on my Ultimaker 3D printer) alongside one of the figure I intend to use for the refugee; the latter was produced from the scan I had made of the actor Leo Wringer. Both can be seen below. Jonathan commented that the pose for the refugee was a perfect choice. Well done Leo.
I related the issue I was having with making a 3–5 minute holographic video of the actors narrating their parts, to be seen in the HoloLens. At that time I only knew that it was proving difficult; now I know why (see my last blog post): it is not possible with my current knowledge and resources. So I set the expectation that it would instead be a 2D video seen in the HoloLens, so that a viewer could also see the rest of the physical exhibit. This is not so easy either, as I am now beginning to discover (more about this in a future blog post).
I talked about Lucy Orta’s comment that perhaps all three layers were too much: that the purpose of the piece, to evoke empathy for the refugee situation generally, may be better achieved with either the sculptures or the HoloLens narratives alongside the backdrop video, but not both, perhaps even down-scaling further to only the voices of the actors. Jonathan could see her point and, given the difficulties I was having with the HoloLens narrative, thought that the sculptures against the backdrop video were good enough to be my finished exhibit. This may well turn out to be the case, but I will continue the learning experience by devising a script for the actors, directing and video recording their performances against a green screen, editing the video to remove the background using the software Isadora, and finally exporting the edited videos to the HoloLens or another device (perhaps editing their performances into the backdrop video of the sea).
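For reference, the background-removal step is essentially chroma keying. The sketch below, assuming a Python/OpenCV environment and a hypothetical exported test frame, shows the underlying idea only; my actual keying will be done in Isadora.

```python
# A minimal chroma-key sketch, assuming OpenCV and a hypothetical test frame;
# the real keying is done in Isadora -- this just illustrates the idea.
import cv2
import numpy as np

def key_out_green(frame_bgr, lower=(35, 60, 60), upper=(85, 255, 255)):
    """Return the frame as BGRA, with green-screen pixels made transparent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green_mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    alpha = cv2.bitwise_not(green_mask)          # opaque everywhere that is NOT green
    b, g, r = cv2.split(frame_bgr)
    return cv2.merge((b, g, r, alpha))

# Hypothetical single frame exported from a recorded performance.
frame = cv2.imread("actor_frame.png")
if frame is not None:
    cv2.imwrite("actor_frame_keyed.png", key_out_green(frame))  # PNG keeps the alpha channel
```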
I concluded that I would make as many of these ‘assets’ as I could in the time left, and then decide which to use in my final exhibit, a decision that will depend upon the exhibition space yet to be allocated to me and upon the other MA Show exhibits sharing it.