Jeanne Kelly

Posts Tagged ‘digital’

Google Art Project | As an Artist

In Thesis Research, Wow on February 14, 2011 at 12:27 pm

I think this is brilliant, wonderful.

Who ever gets to see works of art this close up?

Very few people are allowed to study these collections in this way. And rightly so. Too many visitors getting as close as Google Art Project does would destroy a work of art in no time. Yet it’s one of those many things I want to do in museums that’s not allowed: getting up close and studying the hand of the artist.

I learn a lot from being up close. I used to look at engraving stones and etching plates with a magnifying glass. But that was my own work. I love that I can now study other artists’ works this way.

But I’m interested to hear what the museums and galleries have to say. There will never be a replacement for the original, but the original isn’t always available, and it certainly can’t be examined this closely by a million people all the time. [That would surely destroy it.] A certain distance is important for the life and health of the work and the viewer. In a way, the fragility of the piece, the unique nature of it being one of a kind, gives it a life we must protect. There will never be another of van Gogh’s The Bedroom, and I may never be able to get to it.

If museums get really desperate, they could sell personal viewings to a few people to help them pay the rent. But a few other people might get angry over that, maybe the people who couldn’t afford it.

Seems to me a double-edged sword. A museum’s greatest competition can be the wealthy elite. They drive and sustain the price of art; they are who the museums bid against. At the same time, they provide free labor and funding, and donate entire collections to museums. Tricky, tricky. I understand that I’m oversimplifying a complex problem, but I think it has a simple relevance here. Who gets what kind of access?

And, what’s in the best interest of the work? Should everyone, no one, or only a few people a year be allowed to breathe on Rembrandt’s Self-Portrait? Whatever the right answer, Google says it’s everyone.

For now the following museums are included in the project:

  • Alte Nationalgalerie, Berlin – Germany
  • Freer Gallery of Art, Smithsonian, Washington DC – USA
  • The Frick Collection, NYC – USA
  • Gemäldegalerie, Berlin – Germany
  • The Metropolitan Museum of Art, NYC – USA
  • MoMA, The Museum of Modern Art, NYC – USA
  • Museo Reina Sofia, Madrid – Spain
  • Museo Thyssen – Bornemisza, Madrid – Spain
  • Museum Kampa, Prague – Czech Republic
  • National Gallery, London – UK
  • Palace of Versailles – France
  • Rijksmuseum, Amsterdam – The Netherlands
  • The State Hermitage Museum, St Petersburg – Russia
  • State Tretyakov Gallery, Moscow – Russia
  • Tate Britain, London – UK
  • Uffizi Gallery, Florence – Italy
  • Van Gogh Museum, Amsterdam – The Netherlands
Here’s a little more about it in Google’s own words:
What is the ‘Art Project’?
A unique collaboration with some of the world’s most acclaimed art museums to enable people to discover and view more than a thousand artworks online in extraordinary detail.

  • Explore museums with Street View technology: virtually move around the museum’s galleries, selecting works of art that interest you, navigate through interactive floor plans and learn more about the museum as you explore.
  • Artwork View: discover featured artworks at high resolution and use the custom viewer to zoom into paintings. Expanding the info panel allows you to read more about an artwork, find more works by that artist and watch related YouTube videos.
  • Create your own collection: the ‘Create an Artwork Collection’ feature allows you to save specific views of any of the 1000+ artworks and build your own personalised collection. Comments can be added to each painting and the whole collection can then be shared with friends and family.
Are the images on the Art Project site copyright protected?
The high resolution imagery of artworks featured on the art project site is owned by the museums, and these images may be subject to copyright laws around the world. The Street View imagery is owned by Google. All of the imagery on this site is provided for the sole purpose of enabling you to use and enjoy the benefit of the art project site, in the manner permitted by Google’s Terms of Service. The normal Google Terms of Service apply to your use of the entire site.

A Recent Show I Entered

In Thesis Research on February 13, 2011 at 3:44 pm


“Digital art defines the contemporary. The Los Angeles Center For Digital Art is dedicated to the propagation of all forms of digital art: new media, digital video art, net art, digital sculpture, interactive multimedia, and the vast panorama of hybrid forms of art and technology that constitute our moment in culture. We are committed to supporting local, international, emerging and established artists through exposure in our gallery.”

LACDA 2011 INTERNATIONAL JURIED COMPETITION

Jurors:
Edward Robinson, L.A. County Museum of Art (LACMA)
Rex Bruce,
L.A. Center for Digital Art

All styles of artwork and photography where digital processes of any kind were integral to the creation of the images are acceptable. The competition is international, open to all geographical locations.

The winner of this competition will be the inaugural exhibit for the new 4,000 square foot gallery at 102 West Fifth, directly across from our current location! The selected winner receives 10 prints up to 44×60 inches on canvas or museum quality paper (approximately a $2,500-$3,000 value) to be shown in a solo exhibition in the main gallery from March 10-April 2, 2011. The show will be widely promoted and will include a reception for the artist.

Second place prizes: Ten second place winners will receive one print of their work up to 24×36 inches ($150-$200 in value) to be included in upcoming group exhibits. Second place winners will be scheduled into group shows within twelve months of the announcement of winners. Consideration is given to placing these works in shows appropriate to their style, genre and/or content. These shows will be widely promoted and will include a reception for the artists.

Artist’s Reception: March 10, 7-9pm. The artist’s reception will be the opening gala at our new expanded location in conjunction with the Downtown Art Walk which is attended by up to 20,000 gallery goers.

Deadline for entries: February 15, 2011
Winners Announced: February 21, 2011
Exhibit Dates: March 10-April 2, 2011

I entered The Rope Walker and Our Child Murderer.


Submedia | The First Zoetropes

In Spring 2010 on January 27, 2011 at 9:14 pm

To truly understand how zoetropes work you have to just make one or two (or twenty 🙂).

I started with a simple copy-paper prototype fashioned after Josh Spodek’s small plastic example. Not so successful, though; the material was too flimsy and there was no smooth way to spin it quickly. The breakdown of space and time, however, did work. The slots, although slightly varied in width, seem to work well; ideally, I think, they should be completely uniform. So by the end of class it seemed to be just a matter of better materials and better images.
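Getting the slots uniform is really just a little geometry. Here’s a quick sketch of how I could lay them out for the next build; the 20 cm diameter and the open-slit fraction are assumptions for illustration, not measurements from the prototype:

```python
import math

def slot_layout(n_frames=12, diameter_cm=20.0, slot_fraction=0.1):
    """Return (center_spacing_cm, slot_width_cm) measured along the cylinder wall.

    One frame plus one slit per segment; slot_fraction is the share of each
    segment left open as a slit. Narrower slits give a sharper but dimmer image.
    """
    circumference = math.pi * diameter_cm
    spacing = circumference / n_frames
    slot_width = spacing * slot_fraction
    return spacing, slot_width

spacing, width = slot_layout()
# With 12 frames on a 20 cm drum, slot centers sit about 5.2 cm apart.
```

Marking the wall at those even intervals before cutting would take the guesswork out of the slit placement.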

On my subway ride home I began to think about how analogous zoetropes are to animated gifs, each containing very few frames and usually viewed in a loop. Although the two mediums are vastly different, the images and optical effects they create are very similar.

The example Josh showed us, and the one I created in class after that (the first image above/with new and improved dots),  both had 12 images and 12 slits between them. I knew that a few of my favorite gifs also have only 12 frames. I decided to rotoscope one, print the images and use them to test the next zoetrope construction.  Some of you will recognize it. If not, then go to I Am Not An Artist to check out a few other little gems along with the original of the one above.

I have to say that usually when I’m learning a new technology, medium or skill, I try to focus on learning just that. Creativity, for me at least, can sometimes get in the way of learning the left-brain stuff. So I try to stick with something simple in concept; that way I’m less likely to get distracted by being creative. I find this method works best for me. Once I know and understand the technology, then I can go crazy in the creative department, no holds barred.

Next up, Muybridge’s Galloping Horse.

I chose this series of stills to also test the zoetrope. It’s a reference I’m familiar with, and I know it works as an animation in several formats. So my reasoning goes: if these images don’t work, the fault more than likely lies with the mechanism, not with poor rotoscoping or poor animation on my part. This makes it a good measure of the machine.

Using the lazy Susan that serves as the spice rack in my kitchen cabinet as the spinning mechanism, I could concentrate on the aspects of the outside cylinder: deciding on the slats, how many I would need, and how wide each opening needed to be. I used black foam core I had on hand to construct the outer cylinder.

Because foam core can’t be bent into a smooth circle, I instead cut “planks” and evenly spaced them around the circumference of the lazy Susan. I had to create a way to keep the “planks” together at the top, however; they had a tendency to spread open as soon as the lazy Susan was spun. Again I used materials I found on hand, straightening out paper clips and punching them through the slats to attach them to an inner ring at the top.

This solution caused its own problems, blocking out most of the light needed to see the animation. I attempted different forms of lighting to compensate for this “ceiling,” as you can see in the video below, but nothing was quite successful enough in my opinion.

http://vimeo.com/19663037

Another thing I discovered in constructing this first zoetrope was that the slats had to be closed where the images were, meaning no light should be allowed to break through between the images. As you can see in the video, what your eye is most drawn to is the flash of light coming through the back of the zoetrope. This is easily fixed by wrapping the outside in a sheet of black paper. For my next construction I will simply cut the slots only halfway down.

Zini in Osirix and Maya

In Fall 2010, Thesis Research on December 26, 2010 at 4:21 pm

So I finally got a good working render from OsiriX and opened it in Maya, and just as I’d thought, it was a mesh mess to an extreme degree.


Maya can’t seem to automatically fix the nonmanifold geometry. Nonmanifold geometry is, simply put, a mesh that could not exist in the real world. Maya refuses to convert it to subdivisions, booleans won’t work, and smooth operations can lead to strange results.

There are three different types of nonmanifold geometry (actually four, since lamina faces are technically also nonmanifold):

• Three or more faces share the same edge on an object
• Two or more faces share the same vertex, yet they share no edge
• Two or more adjacent faces have opposite normal directions
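The first of those checks is the easiest to picture: count how many faces touch each edge, and anything at three or more is nonmanifold. Here’s a minimal sketch of that idea in plain Python; the face-list representation is my own assumption for illustration, not Maya’s internal data or API:

```python
from collections import Counter

def nonmanifold_edges(faces):
    """Return edges shared by three or more faces.

    Each face is a list of vertex indices; edges are counted undirected,
    so (a, b) and (b, a) are the same edge.
    """
    edge_count = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edge_count[(min(a, b), max(a, b))] += 1
    return [edge for edge, n in edge_count.items() if n >= 3]

# A "T" junction: three quads all meeting along the edge (0, 1).
faces = [[0, 1, 2, 3], [0, 1, 4, 5], [0, 1, 6, 7]]
print(nonmanifold_edges(faces))  # [(0, 1)]
```

A real cleanup pass would also have to handle the vertex and normal cases, but this is the kind of edge a 3D printer (or a boolean operation) chokes on.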

I can’t clean the geometry up myself in the state it’s in. Opposite normal directions can’t be checked because I can’t even see all the faces.
I’m not sure at this point if it can be any other way. I need to fix the geometry completely if I ever want to print these. As of right now, however, I need to just concentrate on the reconstructions, even if that means creating a 2-dimensional likeness first for each of the eight subjects in the narrative. I just have to get the ball rolling.

Click on the image below for a video of a fly-through I put together in OsiriX. There has got to be a workaround for posting videos to WordPress without having to click, click, click to actually see them. I’ll work on this, friends.

Here are a few screen captures of the OsiriX interface.  It’s pretty intuitive if you’ve ever used a 3D program before, but it does take some processing power. I’ll have to get it onto a computer at Parsons to save myself some headaches.

The visuals of the build are interesting as well. Click on the image below for a video of OsiriX in action, stitching the layers back together.

3D Magic With DICOM Data

In Fall 2010, Thesis Research on November 19, 2010 at 10:35 pm

OSIRIX IMAGING
BUT THE MESH IS RIDICULOUS!

I received the first four CT scan sets from the University of Philadelphia’s Anthropology Department today.  Tom Schoenemann, the gentleman working on getting me those scans, recommended OsiriX as a good program for viewing and processing DICOM (Digital Imaging and Communications in Medicine) data created from the CT scans.  DICOM is a comprehensive set of standards for handling, storing and transmitting information in medical imaging.  These images can come from not only CT scans but other medical imaging modalities such as MRI, PET scans, etc.

OsiriX is a freeware program available to the public on the Apple Inc. website. It is seamlessly tied into the Mac OS X platform. Biomedical visualizers can use this software to visualize anatomical data sets and extract visual information for reference.

OsiriX software has a “C-STORE SCP” capability, and is therefore capable of storing incoming DICOM images into a local database.  I’ll use the DICOM data from the CT scans of the Hyrtl skulls to create 3D volume renderings using the Osirix program to emulate a PACS system (Picture Archival and Communication System) on my local drive.
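The core of what OsiriX does with a CT series before rendering is conceptually simple: order the 2D slices by their position along the scan axis and stack them into a volume. Here’s a toy sketch of that step; the slices are plain dicts standing in for parsed DICOM headers (an assumption for illustration — a real pipeline would read them with a DICOM library such as pydicom, and DICOM stores the position in tags like Slice Location):

```python
def stack_slices(slices):
    """Order 2D slices by slice location and stack them into a 3D volume.

    Each slice is a dict with a millimeter position and a 2D pixel grid;
    the result is a list of pixel grids, lowest slice first.
    """
    ordered = sorted(slices, key=lambda s: s["slice_location_mm"])
    return [s["pixels"] for s in ordered]

# Three tiny 2x2 "slices" arriving out of order, as files often do.
scan = [
    {"slice_location_mm": 2.5, "pixels": [[1, 1], [1, 1]]},
    {"slice_location_mm": 0.0, "pixels": [[0, 0], [0, 0]]},
    {"slice_location_mm": 5.0, "pixels": [[2, 2], [2, 2]]},
]
volume = stack_slices(scan)  # slices now ordered 0.0, 2.5, 5.0 mm
```

The point is that the filenames can’t be trusted for ordering; it’s the positional header data that makes the 3D reconstruction come out right.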

The 3D renderings will then be imported into image editing programs such as Maya, Mudbox and Photoshop. From these renderings, forensic facial reconstructions will be made for the different Hyrtl subjects. The first subject of the narrative will also have their skull printed 3-dimensionally to create a traditional 3D forensic facial reconstruction. Printouts will be made of one of the subjects’ skulls so a 2D forensic reconstruction can be made. But all of the characters will have digital facial reconstructions made and printed.

These reconstructions will be the basis for the figures in the dioramas. They will be miniature versions of how the Hyrtl subjects looked in real life. Facial expressions will be changed in Maya and Mudbox to reflect the different emotions experienced by the characters. These different faces will be printed on one of Parsons’ 3D printers and used throughout the scenes.

On the NewTek discussion forums a user going by “mrxd” was having similar issues and offered some images that I’ll share with you until the Hyrtl skulls are done. All of the images on this post are from his attempts to use DICOM data from CT scans, imported into Lightwave to make a model.

YUKI

In Fall 2010 on September 21, 2010 at 9:20 pm

http://vimeo.com/6620739
via  QNQ/AUJIK on Vimeo.

Lasers and Photoluminescent Paint

In Fall 2010, Wow on September 19, 2010 at 11:26 am

Fade Out, an eye-catching visual display system developed by media artists Daito Manabe and Motoi Ishibashi, uses laser beams to “print” ephemeral glow-in-the-dark images on a wall-mounted screen coated with photoluminescent paint.

After the computer receives and processes a digital image (in this case, a webcam snapshot), ultraviolet laser beams are fired at the photoluminescent screen to produce square pixels of glowing green light. Subtle gradations are created by controlling the timing of the laser shots and allowing the darker portions of the image to fade. The completed image gradually disappears as the glow of the screen dims.
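That timing trick is easy to sketch: brighter pixels get longer laser dwell, so they glow longer before fading. Here’s a toy version of the brightness-to-dwell mapping; the 10 ms maximum and the linear curve are my assumptions for illustration, not Fade Out’s actual parameters:

```python
def dwell_times(gray_row, max_dwell_ms=10.0):
    """Map a row of 0-255 grayscale values to per-pixel laser dwell times (ms).

    A linear map: black (0) gets no exposure, white (255) gets the full dwell,
    so dark regions of the image fade out of the phosphorescent glow first.
    """
    return [round(max_dwell_ms * g / 255.0, 3) for g in gray_row]

row = dwell_times([0, 128, 255])  # dark pixels get little or no exposure
```

A real system would likely need a nonlinear curve, since phosphor decay isn’t linear in exposure time, but the principle is the same.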

Creators are now looking at ways to create glowing images in liquid and on irregular surfaces.

via Pink Tentacle.

Battle of Branchage

In Wow on June 27, 2010 at 12:27 pm

Architectural Projection Mapping @ Branchage Film Festival 2009

Projection mapping by seeper | video by flat-e | via Vimeo.

Motion Graphics | Fixing the Roosevelt Island Animatic

In Major Studio Narrative, Motion Graphics 1, Spring 2010 on May 14, 2010 at 9:16 pm

The reworking of the video turned out okay.  I think the music really adds the right feeling.  I’m thinking of replacing some of the drawings with photographs.  It still needs a lot of work but it’s getting there.

http://vimeo.com/11733182