LipSync Capability - Testing the Water



JHinkle
09-23-2012, 12:15 PM
I'm just testing the water to see how many are interested.

NOT for this Holiday season!

How many would be interested if HLS processed your WAV file and provided a channel that depicted the suggested phonemes?

Another process would then take that phoneme channel and automatically fill in sequence data for the channels being used to display your faces.
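
In rough pseudocode terms, the proposed flow might look something like the sketch below - the names and data are purely illustrative, and none of this is actual HLS code:

# Hypothetical sketch only: audio + lyrics -> phoneme track -> channel events.
def phonemes_to_events(phoneme_track, face_channel_map):
    """Turn (start_ms, end_ms, phoneme) tuples into on/off events for the
    channels that draw each mouth position."""
    events = []
    for start_ms, end_ms, phoneme in phoneme_track:
        for channel in face_channel_map.get(phoneme, []):
            events.append({"channel": channel, "on": start_ms, "off": end_ms})
    return events

# A phoneme track that the wav/lyric alignment step would have produced,
# plus a made-up mapping of phonemes to face channels.
track = [(0, 250, "AI"), (250, 400, "MBP"), (400, 700, "E")]
faces = {"AI": [1, 2, 5], "MBP": [1, 7], "E": [1, 3]}
print(phonemes_to_events(track, faces))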

Kind of like one-stop shopping for singing Halloween monsters.

If you would use such a capability - please reply.

Joe

chelmuth
09-23-2012, 12:19 PM
Would use!

Materdaddy
09-23-2012, 12:19 PM
That would be perfect for next year. My wife and I discussed doing Halloween for our first time next year! (This year we'll just do simple chases/fades with the pixels).

kingofkya
09-23-2012, 01:07 PM
Yeah, if it was there I would definitely use it; then I'd have someone to blame for buying more stuff.

Xenia
09-23-2012, 04:02 PM
Absolutely!

intwoit2002
09-23-2012, 04:05 PM
I agree that would be a great addition.

Thanks,
Joe

tjetzer
09-23-2012, 05:16 PM
VERY interested!

n_gifford
09-23-2012, 05:26 PM
+1 Would definitely be useful!

XmasInGalt
09-23-2012, 06:22 PM
I would use it. I've been wanting to do a Halloween display with singing skeletons for years.

jess_her
09-23-2012, 07:58 PM
Hello Joe,
Yes, I would.

Are you thinking of using DMX on a phoneme channel output?
The reason I ask is that Dave Hoppe and Mike Ardai designed and built the Bobcat Controller.
It has the ability to control 8 RC servos and has an output for 8 LEDs, all controlled by DMX.
You can find it on the other site, DIY Light Animation.
Jess

JHinkle
09-23-2012, 08:50 PM
Jess:

My initial, fairly under-educated, thought on this was to enable editing of sequence channels so as to light the various channels that make up the different eye/mouth positions.

I did not know anyone was doing RC servos.

What is entailed in DMX servo control?

Joe

Henedce
09-24-2012, 12:19 AM
+1 definitely would use

jess_her
09-24-2012, 01:26 AM
Joe
As I understand it, 8 DMX channels = 8 RC servo controls from the Bobcat controller.
A DMX value from 0 to 255 gives you the position of an RC servo, with 128 the center.
The servo board has a PIC with Dave’s firmware loaded, and from his utility
you can set CW or CCW rotation.
You can also set the endpoints of a servo so you won't overrun the mechanical endpoints,
and what the DMX start channel for the controller should be.

I’ve used this with Vixen and it works great. The only drawback I saw was with RJ’s DMX dongle and Vixen: when the show ended, RJ’s dongle would send out a DMX value of 0 and the servo controller would move all the servo channels to the 0 programmed endpoint.

If I'm understanding you, you're using a phoneme channel for lighted face movement.

It would be great to do this with an RC servo-driven animatronic face.
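
Roughly, the value-to-position mapping works like the sketch below - the 1000-2000 microsecond pulse range and the function name are just assumptions for illustration, not the actual Bobcat firmware:

def dmx_to_pulse_us(dmx_value, min_us=1000, max_us=2000, reverse=False):
    """Map a DMX slot value (0-255) onto a servo pulse width, clamped to the
    programmed endpoints so the horn never hits its mechanical stops."""
    dmx_value = max(0, min(255, dmx_value))
    if reverse:                        # CW vs CCW rotation
        dmx_value = 255 - dmx_value
    return min_us + (max_us - min_us) * dmx_value / 255.0

print(dmx_to_pulse_us(128))   # ~1502 us, near center
print(dmx_to_pulse_us(0))     # 1000 us - the endpoint a show-end value of 0 slams to
print(dmx_to_pulse_us(255))   # 2000 us, the other programmed endpoint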

JHinkle
09-24-2012, 05:44 AM
Jess:

Not all channels in HLS are output channels.

As an example - a Beat Track is not output but is used as an editing aid.

I foresee a channel that will accept "word" positions for text input. This channel will drive a phoneme channel (or mouth positions). From that (both of the aforementioned channels are not output channels), a process will drive the output channels - hence the 8 RC channels you mention, or any number of channels.

Most of the time a channel contains illumination data - in the editor you specify intensity from 0 to 100%. During the output stage, that intensity is translated into a numerical value between 0 and 255.

HLS also works with DMX values in the channels. These are not 0 to 100% illumination values but numeric values with no translation in the output stage. If you want a DMX value of 128 to be sent - you simply set the DMX value to 128. You need not attempt to figure out what 0 to 100% illumination value is needed to produce the 128 value.
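
A minimal sketch of the difference (illustrative only, not HLS code):

def intensity_to_byte(percent):
    """Illumination channel: 0-100% in the editor becomes 0-255 at output."""
    return round(max(0.0, min(100.0, percent)) * 255 / 100)

def dmx_passthrough(value):
    """DMX channel: the value you enter is the value that gets sent."""
    return max(0, min(255, int(value)))

print(intensity_to_byte(50))   # 128 - roughly half brightness
print(dmx_passthrough(128))    # 128 - no translation, no guessing at a percentage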

In my research I have found that Disney uses 14 mouth positions (13 plus silence) in all of their animations. Your face will not use all 14, so your job will be to tell me how to convert from the 14 to however many positions you have - those positions will then drive your DMX values and your channels.
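
As an illustration of the kind of mapping the user would supply (the phoneme names, groupings and DMX values below are made up, not anything built into HLS):

# Collapse a larger phoneme set down to the four mouth positions this face has.
PHONEME_TO_POSITION = {
    "AI": 1, "E": 1, "O": 1,        # wide-open vowels    -> position 1
    "U": 2, "WQ": 2,                # rounded vowels      -> position 2
    "MBP": 3, "FV": 3,              # closed consonants   -> position 3
    "L": 4, "etc": 4,               # everything else     -> position 4
    "rest": 0,                      # silence             -> mouth closed
}

# One possible choice of DMX value per position for a servo-driven jaw.
POSITION_TO_DMX = {0: 10, 1: 230, 2: 160, 3: 60, 4: 120}

def phoneme_to_dmx(phoneme):
    return POSITION_TO_DMX[PHONEME_TO_POSITION.get(phoneme, 0)]

print(phoneme_to_dmx("AI"))    # 230 - jaw wide open
print(phoneme_to_dmx("rest"))  # 10  - mouth closed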

Hope that helps explain the direction I'm looking at.

Joe

jess_her
09-24-2012, 11:00 AM
Good morning Joe

Yes it does make sense.

Let me think about this.

I have a bare Bobcat Servo Board I can give you if you like to touch one.

Jess

JHinkle
09-24-2012, 11:03 AM
Thanks - but not right now.

What protocol is used to talk to your Bobcat board?

Joe

intwoit2002
09-24-2012, 11:07 AM
Where can I find more information about the Bobcat Controller that Dave Hoppe and Mike Ardai designed and built, mentioned above?

Thanks,
Al

jess_her
09-24-2012, 11:22 AM
Joe, it's the standard DMX512 protocol.

Al, http://diylightanimation.com/ - the Bobcat has its own place on that forum.

Jess
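
For reference, a DMX512 packet is just a break, a mark-after-break, a start code (0x00 for ordinary channel data) and up to 512 channel slots, sent at 250 kbit/s with 8 data bits, no parity and 2 stop bits. How the break is generated depends entirely on the dongle or interface, so the sketch below only builds the byte payload:

def build_dmx_packet(universe):
    """universe: dict of {channel number (1-512): value (0-255)}."""
    slots = bytearray(512)                  # unused channels stay at 0
    for channel, value in universe.items():
        slots[channel - 1] = max(0, min(255, value))
    return bytes([0x00]) + bytes(slots)     # start code + channel slots

# Center the first servo (DMX channel 1) on a Bobcat-style controller.
packet = build_dmx_packet({1: 128})
print(len(packet), packet[:4])              # 513 b'\x00\x80\x00\x00'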

Onewish1
10-19-2012, 02:01 PM
I would be interested as well

JHinkle
10-19-2012, 03:05 PM
LipSync was released last week - as Animation.

Joe

SLSettles
10-19-2012, 08:44 PM
There is a good app out already that takes a text entry, breaks it into phonemes, and allows you to align the words (or lyrics) to an imported WAV file. It's called Papagayo. Certainly worth taking a look at. If nothing else, it uses a huge (I believe open-source) dictionary to break words down into phonemes, which would give a big leg-up to your project. A fellow has even written a converter to take the Papagayo output and convert it into a LOR clipboard according to how you want the phonemes displayed ("oh" is channels 1, 2, & 5 while "m" is 1 & 7, for instance). Anyway, some good stuff to look at for inspiration. I think having this functionality directly available in a sequence editor would be amazing!
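
To make that last step concrete, here is a rough sketch (the mapping and timings are made up) of turning timed phonemes plus a phoneme-to-channels table into a per-channel timeline, merging back-to-back intervals so a channel that stays lit across neighbouring phonemes becomes one continuous block:

from collections import defaultdict

PHONEME_CHANNELS = {"oh": [1, 2, 5], "m": [1, 7], "rest": []}

def channel_timeline(phoneme_track):
    """phoneme_track: list of (start_ms, end_ms, phoneme) tuples."""
    timeline = defaultdict(list)
    for start, end, phoneme in phoneme_track:
        for channel in PHONEME_CHANNELS.get(phoneme, []):
            blocks = timeline[channel]
            if blocks and blocks[-1][1] == start:    # touches the previous block
                blocks[-1] = (blocks[-1][0], end)    # so extend it
            else:
                blocks.append((start, end))
    return dict(timeline)

print(channel_timeline([(0, 200, "oh"), (200, 350, "m"), (350, 500, "rest")]))
# {1: [(0, 350)], 2: [(0, 200)], 5: [(0, 200)], 7: [(200, 350)]}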

JHinkle
10-19-2012, 08:52 PM
I use the same engine as Papagayo.

It's already fully integrated into HLS.

Joe

budude
10-19-2012, 08:55 PM
Joe's integration of the text-to-phonemes makes Papagayo no longer necessary - it effectively does exactly the same thing and is much easier to use as well. Go through his training video if you haven't already and you'll see how it's better in many ways if you are trying to drive a set of channels for lighting.

edit - - uh - - like Joe just said... lol...

JHinkle
10-19-2012, 09:35 PM
Brian - you have your 20 faces and more ...
Joe