
View Full Version : My New Sequencer



JHinkle
01-29-2012, 01:55 PM
After I retired, I was looking for something to focus my skills on.

In December, I saw my first Holdman Christmas light show and said - I can do that.

I found your web site shortly thereafter and have designed my own 24-channel controller and my own sequencer. My sequencer is more like your Vixen 3 in that it uses all high-level commands.

Please note - I have never used Vixen 2. I compiled a controller for Vixen 3 and tried it - found I wanted to make my own.

This forum is mostly a Vixen forum and I don't want to step on toes here. After I have proven my design, if others wish to use it, I can make it available.

But what I'm asking today is for clarification on a few sequence implementation points as I move to complete my first song.

The song I am using for my development is "Music Box Dancer".

Since I am constructing a sequence to music, I always have the audio displayed. I have the ability to apply simple low- and high-pass DSP filtering techniques to the audio so as to highlight the key audio notes I am interested in.

Shown below is Music Box Dancer passed through a low pass - showing the segment that appears about 35 seconds into the song. This is the first place where the song actually starts taking on a beat. I've aligned a 5-unit ramp-down command with each beat (my time unit for this song is 50 msec).

[Attachment 13083]

In the same audio segment, the song has what sounds like a piano playing. I ran the audio through a high pass and aligned a single 50 msec cell to each note played.

[Attachment 13085]

What the actual audio looks like:

[Attachment 13086]

Question - it's nice in a tool to see the actual notes and associate lights with them - but - in reality - can someone actually see an LED string turn on for only 50 msec, so that the notes being played are visible?

I don't have my controller back from the board manufacturer yet - so I have not actually illuminated any lights to correlate what I put on my tool and how it looks in reality.

I am building a mega-tree. I believe what Vixen calls a "Shimmer" is a ramp-up, ramp-down --- either butted against each other or at a specified frequency. Please see my attempt at a "Shimmer" across multiple channels. Is this a shimmer?

[Attachment 13087]

Thanks in advance for your comments.

kychristmas
01-29-2012, 02:10 PM
Please understand that even KC (the creator of Vixen) would welcome you and your attempts. I have never read anything other than positive comments from him towards other people's efforts. Also, while most members here use Vixen, this is NOT a Vixen forum. It's about DIY lighting. Your efforts certainly fall into that. What you are doing could certainly develop into something we can all use, whether it's a standalone tool or something that can be wrapped into a module for Vixen. Nobody should try to put that down. We should all welcome it with open arms.

I run 50 msec and changes are noticeable. In fact, some folks think that 20 frames per second is not enough. For me, I run incans, and anything more is wasted because the response from them is just too slow.

To me, that is not a shimmer. That looks like the entire tree would be pulsing on and off. I think of a shimmer as having things on at some level all the time and then having portions of the element get brighter in a random fashion.

JHinkle
01-29-2012, 02:28 PM
More like this? Constant illumination at 30% with random up-down from 30 to 100%.

[Attachment 13088]

djulien
01-29-2012, 06:24 PM
This forum is mostly a Vixen forum and I don't want to step on toes here. After I have proven my design, if others wish to use it, I can make it available.

There are some alternate tools used also, so this would still be of interest. What is the overall workflow you have in mind? Some of the steps may still be useful in the context of Vixen, or maybe also to some of the non-Vixen members.


Since I am constructing a sequence to music, I always have the audio displayed. I have the ability to apply simple low- and high-pass DSP filtering techniques to the audio so as to highlight the key audio notes I am interested in.

What are you using for the filters? I was thinking about how to add this type of functionality to a plug-in, so I was starting to look at fmod (used by Vixen for audio functions), but if you already have some filters that would be a head start.

don

JHinkle
01-29-2012, 08:02 PM
djulien wrote
What are you using for the filters? I was thinking about how to add this type of functionality to a plug-in, so I was starting to look at fmod (used by Vixen for audio functions), but if you already have some filters that would be a head start.

DSP filters can be complex and/or simple.

I chose a simple set since I did not care about distortion - I just wanted brute force manipulation to extract the information I wanted.

I settled on CFxRbjFilters. I acquired the C version and converted it to a C++ class.

It has LO, HI, BandPass, and a few others - but I only use LO and HI. I was considering adding an FFT function to help identify the center frequency of a band pass for use on the melody tones, but the HI pass alone has worked well enough so far (keep Q at 1 or less for HI pass).

The arguments are Freq, Q, and Gain. By manipulating those three, I have been able to extract and display the main low-frequency beat tone and the higher-frequency melody tones.

Send me a PM with your email address and I will return the CPP and H files.
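[Editor's note: "RBJ" refers to Robert Bristow-Johnson, whose Audio EQ Cookbook gives the biquad formulas these filters use. A minimal sketch of an RBJ-style low-pass biquad (an illustration, not the actual CFxRbjFilters code) looks like this:]

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// RBJ-style biquad low-pass (Audio EQ Cookbook formulas).
// Class and member names are illustrative, not CFxRbjFilters itself.
class RbjLowPass {
public:
    RbjLowPass(double sampleRate, double cutoffHz, double q) {
        const double w0    = 2.0 * kPi * cutoffHz / sampleRate;
        const double cosw0 = std::cos(w0);
        const double alpha = std::sin(w0) / (2.0 * q);
        const double a0    = 1.0 + alpha;
        // Normalize all coefficients by a0 up front.
        b0_ = ((1.0 - cosw0) / 2.0) / a0;
        b1_ =  (1.0 - cosw0)        / a0;
        b2_ = ((1.0 - cosw0) / 2.0) / a0;
        a1_ = (-2.0 * cosw0)        / a0;
        a2_ =  (1.0 - alpha)        / a0;
        x1_ = x2_ = y1_ = y2_ = 0.0;
    }

    // Direct Form I: one input sample in, one filtered sample out.
    double process(double x) {
        double y = b0_ * x + b1_ * x1_ + b2_ * x2_ - a1_ * y1_ - a2_ * y2_;
        x2_ = x1_; x1_ = x;
        y2_ = y1_; y1_ = y;
        return y;
    }

private:
    double b0_, b1_, b2_, a1_, a2_;
    double x1_, x2_, y1_, y2_;
};
```

The high-pass variant is the same structure with different b coefficients; the Q argument maps directly to the cookbook's Q, which is why keeping Q at 1 or less tames the high-pass resonance as described above.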

Joe

JHinkle
01-29-2012, 08:33 PM
djulien wrote
There are some alternate tools used also, so this would still be of interest. What is the overall workflow you have in mind? Some of the steps may still be useful in the context of Vixen, or maybe also to some of the non-Vixen members.

Not quite sure what you want to encompass in workflow.

I don't assign controller information or assignments until the very end. To me, this allows someone to start work without a commitment to hardware.

Up until that point, channels are identified, labeled and sequenced.

Preview display is also based on just channels.

Since the work in creating a sequence is the detailed job of deciding what channels to illuminate and at what intensity, I make use of what I call "Custom Effects".

I have four basis effects on which everything is based: Ramp-Up, Ramp-Down, Level, and DMX Specific Value.

All custom effects are composed by grouping one or more of the above. A custom effect can then be added to the sequence as a single entity, multiple shared, multiple unique, multiple with random placement, multiple butted together end-to-end, or multiple at a given frequency/period. All of them can also have a random intensity applied.
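[Editor's note: to illustrate the idea - the struct and function names here are a sketch, not the tool's actual code - the four basis effects can be rendered to per-unit intensity values and butted end-to-end like this:]

```cpp
#include <vector>

// Illustrative sketch of the four basis effects, each rendered to
// per-time-unit intensity values (0.0..100.0 per 50 msec frame).
enum class EffectType { RampUp, RampDown, Level, DmxValue };

struct BasisEffect {
    EffectType type;
    int        units;   // duration in time units (e.g. 50 msec frames)
    double     level;   // peak/steady intensity 0..100 (or raw DMX value)
};

// Expand one basis effect into its per-unit output values.
std::vector<double> render(const BasisEffect& e) {
    std::vector<double> out(e.units, 0.0);
    for (int i = 0; i < e.units; ++i) {
        switch (e.type) {
        case EffectType::RampUp:   out[i] = e.level * (i + 1) / e.units;        break;
        case EffectType::RampDown: out[i] = e.level * (e.units - i) / e.units;  break;
        case EffectType::Level:
        case EffectType::DmxValue: out[i] = e.level;                            break;
        }
    }
    return out;
}

// A custom effect is basis effects butted together end-to-end.
std::vector<double> renderCustom(const std::vector<BasisEffect>& parts) {
    std::vector<double> out;
    for (const auto& p : parts) {
        auto v = render(p);
        out.insert(out.end(), v.begin(), v.end());
    }
    return out;
}
```

A 10-unit ramp-up, 5-unit level, 10-unit ramp-down grouped this way expands to 25 output cells while remaining a single editable entity in the sequence.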

Right now I am attempting to figure out what custom effect equates to Vixen's Shimmer, Sparkle, etc.

Throughout the sequencing work, the audio is always displayed (unfiltered or filtered) so effects can be easily positioned.

Preview (I use straight-line graphics to construct my preview, versus what I think Vixen uses, which is a bitmap).

As far as the back-end goes. If others are interested, I will add a back-end plug-in.

For me - my back-end provides each controller with an array of FLOATS (channel illumination values - from 0.0 to 100.0). I convert the float to a byte (0-255) now, or to 10-bit values in the future. I have an LED mapping array for each type of color/light I use (my back-end allows this mapping along with controller assignment). The actual channel value sent to the controller will be an illumination value that more closely represents the intensity I want to see instead of what the tool produces (0 to 100%).
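[Editor's note: a minimal sketch of that back-end conversion - the gamma-style mapping table and the function names are assumptions for illustration, not the actual back-end code:]

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Build a 256-entry mapping table for one light type. The gamma value is
// an illustrative assumption; the point is that the byte actually sent to
// the controller reflects perceived intensity, not the tool's linear 0-100%.
std::array<uint8_t, 256> buildLedMap(double gamma) {
    std::array<uint8_t, 256> map{};
    for (int i = 0; i < 256; ++i) {
        double norm = i / 255.0;
        map[i] = static_cast<uint8_t>(std::lround(std::pow(norm, gamma) * 255.0));
    }
    return map;
}

// Convert a sequencer float (0.0..100.0) to the mapped controller byte.
uint8_t percentToByte(double percent, const std::array<uint8_t, 256>& map) {
    if (percent < 0.0)   percent = 0.0;    // clamp out-of-range input
    if (percent > 100.0) percent = 100.0;
    int linear = static_cast<int>(std::lround(percent * 255.0 / 100.0));
    return map[linear];
}
```

One table per color/light type matches the per-type mapping array described above; a future 10-bit controller would just use a wider table and return type.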

I will package my channel data into a TCP/IP message and send it to an Ethernet-to-RS485 device. From there my lighting bus consists of 10 of my new controllers - each driving 24 channels. I have not decided if I will use full or half duplex on the 485 bus. Half would be more compatible with the rest of the world, but the two-way communication I want with the controller may be easier using full. I am designing the controller to be compatible with both.

I hope that was the question you asked.

Joe

Zeph
01-29-2012, 10:11 PM
I love hearing the updates on this project, Joe. Very interesting to see your from-scratch approach. I'm glad you will be making your controller compatible with half or full duplex - I am hoping that eventually this may evolve into another option for others to play with too, mixed with what we already have.

djulien
01-30-2012, 12:25 AM
I hope that was the question you asked.

Yes. Sounds very interesting! I've been tinkering with some hard-coded effects, and wanted to hook them up with some frequency-based triggering so CFxRbjFilters might be helpful. Is that licensed or public domain? I found some discussions about using it, but no pointer back to the original source so I wasn't sure how restricted it was.


Right now I am attempting to figure out what custom effect equates to Vixen's Shimmer, Sparkle, etc.

Do you have any tools such as .NET Reflector? Then you could look at the actual code that Vixen is using. If not, I could look up the logic for you.


Throughout the sequencing work, the audio is always displayed (unfiltered or filtered) so effects can be easily positioned.

I was thinking in terms of using filters to tag sections of the audio, and then allow the user to manually adjust those if desired (like a Papagayo, but for any effect and not just phonemes). It sounds like you would be displaying the results of the filtered audio, which actually sounds more powerful. Will it be something like a side-by-side original to filtered, or combined into one?


Preview (I use straight line graphics to construct my preview verses what I think Vixen uses is a BitMap).

Yes, the Adjustable Preview plug-in renders lists of cells overlaid on a background image and the cells are just a list of pixels so it is like a bitmap.

don

frankv
01-30-2012, 05:02 AM
Is a 50ms light pulse visible?

Yes, I believe so. My understanding is that people are most sensitive to a 10Hz (i.e. 100ms period, which I guess would be 50ms on, 50ms off) flash rate. The downside is that 10Hz can trigger epileptic fits. This would apply to LEDs -- I doubt that incans could switch on/off fast enough.

One point: What are you using for your GUI, etc? Hopefully this will be Linux/Unix compatible?

JHinkle
01-30-2012, 06:01 AM
Originally posted by djulien
I was thinking in terms of using filters to tag sections of the audio, and then allow the user to manually adjust those if desired (like a Papagayo, but for any effect and not just phonemes). It sounds like you would be displaying the results of the filtered audio, which actually sounds more powerful. Will it be something like a side-by-side original to filtered, or combined into one?


The thumbnails are small - I posted larger versions in the first entry of this thread.

I convert MP3 to WAV and always draw an oscilloscope type display positioned just above the sequence. Sequence and Audio are then synchronized as scrolling takes place throughout the song.

When I want to apply a filter - I create a new file which is the original audio passed through the filter. So what is now displayed is the filtered audio - thereby enhancing the spectrum of frequencies of interest.

This is a pic of Music Box Dancer around 35 secs into the song - unfiltered.
[Attachment 13093]

I apply a high-Q low-pass filter - designed to only pass the beat frequency - and that becomes my audio display. As you can see - only the beat is shown.
[Attachment 13092]

I apply a 0.8-Q high-pass filter to the audio and get the resulting audio shown below. The high pass is not as good as the low pass because the harmonics associated with the frequencies you are not interested in are also present.

But it works well enough to easily see the piano notes being played.
[Attachment 13094]

I can display and play audio at full, 3/4, 1/2, 1/4 speeds. It really helps when you want to pick the exact note out of audio.

This is my DSP configuration dialog.
[Attachment 13095]

So - for my tool - audio is a critical part - always displayed to aid in constructing the sequence.

Joe

Slite
01-30-2012, 06:08 AM
This looks like an incredibly interesting tool.

I think one of my major stumbling blocks is the sequencing, and anything that can make that easier would be way cool!

So keep up the good work and if possible, add support for exporting to other sequencing tools :)

/Stefan

JHinkle
01-30-2012, 06:33 AM
Originally posted by djulien
Yes. Sounds very interesting! I've been tinkering with some hard-coded effects, and wanted to hook them up with some frequency-based triggering so CFxRbjFilters might be helpful. Is that licensed or public domain? I found some discussions about using it, but no pointer back to the original source so I wasn't sure how restricted it was.

Sorry - I gave you my class name.

There are tons of non-licensed code available on DSP filtering.

Here is one example of a link.
http://www.musicdsp.org/archive.php?classid=3

The type of filter I'm using is referred to as RBJ. I believe those are someone's initials. This is not the site I acquired it from, but it serves as an example.

Joe

JHinkle
01-30-2012, 06:56 AM
Here is what I do for Preview - very crude but allows somewhat of a detailed view of the display.

I capture starting and ending line points and use "LineTo" instead of bit pixels.

This is just a sample of a house, mega-tree, arch, mini-tree, and an actual tree with lights.

I've seen Holdman's preview display - you get a good idea of what his display/sequence will look like.



Joe

djulien
01-30-2012, 10:22 PM
So - for my tool - audio is a critical part - always displayed to aid in constructing the sequence.

I also like to sequence the lights and specific props directly to the music, rather than just running general effects in a loop while the music is playing. But, that is more work so the filtering approach will be very helpful.


There is tons of non-licensed code available on DSP filtering.

Thanks for clarifying! I assumed you were using a packaged library, but I like the idea of custom editing a filter function better. That fits nicely with some other stuff I was working on.

EDIT: Attachment 13096 seems to be broken - I can't display it. Can you check it?

don

JHinkle
01-31-2012, 05:48 PM
EDIT: Attachment 13096 seems to be broken - I can't display it. Can you check it?


Fixed it - see my previous post - 2 up.

I made several uploads yesterday - don't know if the site has a limit - but it would not accept my upload file so I tried an alternate approach. I could see it - but I guess I was the only one.

The attachment is a sample of a drawing on my preview - line segments - not bits in a bit map.

Joe

JHinkle
01-31-2012, 06:00 PM
Question:

Why would someone copy a channel? That is, duplicate a complete channel into another channel.

I have the ability to copy blocks of effects and paste them in other channels - but I can't think of a single reason why a channel copy would be used. No two channels should be the same - so why copy?

Please let me know when you would use a channel copy.

Thanks.

Joe

kychristmas
01-31-2012, 06:06 PM
I quite often want the exact same sequencing on similar elements. While in the "effects" driven world it may be different, I copy complete channel data just about every sequence I make. I might copy a channel that is one color on one display element and a different color on another element.

There are a number of ways to do this in the Vixen editor.

If every channel was completely different in every one of my sequences, I think that would look nasty and very chaotic. So I would argue that indeed there "should" be channels that are exactly the same.




JHinkle
01-31-2012, 06:23 PM
So I would argue that indeed there "should" be channels that are exactly the same.

Don't take me wrong - I was asking - not stating.

I have not done a sequence yet - trying to think of all aspects of the tool before I start. Changing data structures and/or some capabilities mid-stream can cause the whole sequence to be thrown away.

After your comment - and a bit of brain pain - is this an example of when?

I have three channels driving a mini tree. I want another mini tree driven exactly the same way. I could just wire them together, or make them unique in the sequence - in which case I would want to copy the three channels from one to the other.

Once I lock this aspect of the tool down - I will actually start to use it to build my first sequence for 2012.

Joe

KeithW
01-31-2012, 08:03 PM
I also use channel copy when I do specific 'chase' or 'spin' effects: at one part of a sequence I might want a perimeter set of lights 'spinning', then later I might want the left side of the yard to spin clockwise and the right side of the yard to spin counterclockwise. I copy, then use the mirror option (either left-right or up-down) to accomplish this.

JHinkle
01-31-2012, 08:45 PM
Of the mirroring functions in Vixen - which do you use the most - mirror Vertical or mirror Horizontal?

When you sequence a song - approximately how many times would you use mirror Vertical and how many times Horizontal?

Thanks

Joe

kychristmas
01-31-2012, 09:30 PM
For me personally, I use them equally. I really don't like how they work in Vixen - the usability is a bit tough. Personally, I would like to see the mirror applied to the selection rather than copying the "mirror" to the clipboard.

Hard to say how often per song, but I know I use them regularly enough that I would know if they weren't there :)






JHinkle
01-31-2012, 09:34 PM
Thanks for your response.

I can see a vertical mirror - example - chase left becomes chase right.

When would you use a horizontal mirror?

Joe

Zeph
01-31-2012, 09:48 PM
Fade out becomes fade in?

JHinkle
01-31-2012, 09:53 PM
Thanks - now that you say it - I see how stupid my question was.

Sometimes you can get so deep into things you can't see the forest for the trees.

Joe

JHinkle
01-31-2012, 10:01 PM
Originally posted by kychristmas
Personally, I would like to see the Mirror Applied to the selection rather than copy the "mirror" to the clipboard.


I'll implement mirroring by having the effect or multiple effects selected - then apply the selected mirror action in place. No translate then paste.

Thanks again for all of your comments.

Joe

kychristmas
01-31-2012, 10:13 PM
NP - anxious to see what you end up with. You are building sort of what I would call a Vixen bridge: it is Vixen 2.x with the object-oriented sequencing of Vixen 3.0. Looking forward to it, especially if your audio timing tools work. I suck at doing that part. I have gotten better at sequencing, but I am still terrible at the beat part. Now, I start with other people's beat tracks and go from there.

KeithW
01-31-2012, 11:27 PM
Of the mirroring functions in Vixen - which do you use the most - mirror Vertical or mirror Horizontal?

When you sequence a song - how many times (approx) you would use mirror Vertical and how many times Horizontal?

Thanks

Joe

Sorry for the 'correct' answer, but in that I'm kinda ADD... it changes depending upon the 'feel' of the song. Usually Vertical if I want to vary a spin from CW to CCW; Horizontal if I want to 'sweep' lights left to right, then in the next measure 'sweep' right to left - just finding a way to beat the same tricks out with a slightly different flavor. Part of the reason I wanted to do RGB was that I'd have 10 or so elements in a spin, then copy/paste them back in so that I can go from a single element spinning to 2 elements spinning, then increase to four. During Wizards of Winter, when I wanted to go even faster, I copied a 2-on/4-off series offset by one on each lower channel, giving a 'spinning faster than the eye can see' type effect (how I rationalized it). I guess I forgot to mention that I'll also use random functions - say full saturation, varying 30%-60% - to simulate candles, too.

djulien
02-01-2012, 01:38 AM
I'll implement mirroring by having the effect or multiple effects selected - then apply the selected mirror action in place.

Would a matrix transformation be more flexible? You could then do horizontal, vertical, as well as others.


Why would someone copy a channel? That is duplicate a complete channel into another channel.

I've copied channels as a starting point, and then done further editing on the copy. For example, maybe scale the copy down 50%, mirror it, offset it, or some other alteration of the original channel, possibly at different times during the sequence. That way, the 2 channels are in sync with each other (and the audio) so they are coordinated, but they will still have a different appearance.

don

JHinkle
02-01-2012, 02:36 AM
Would a matrix transformation be more flexible? You could then do horizontal, vertical, as well as others.


In Vixen 2+ - each time unit cell has a level value in it. Mirroring/copying is just transposing those cells and their contents.

I keep using the term Effect in my tool because it is a high-level description of many Vixen cells.

As an example - using Vixen - you want a 10 sec ramp-up from 0 to 100% at a resolution of 50 msec. Vixen would process that as 200 cells with different intensities. To mirror that you would have to flip the 200 cells.

In my tool - there would be a single cell that contains data - that data fully describes the 200 output cells.

I design in the concepts you think in for the sequence, and the tool produces the 50 msec output values - so my output matches Vixen's, but I sequence at a much higher level.

So all that being said - I can't do matrix type manipulation - I do command substitution. A Ramp-up becomes a Ramp-down, etc.

[Attachment 13121]

If you look at the segment of sequence above - the RED channel - there are 32 equivalent Vixen cells. I designed those 32 cells with just 4 effects.

From left to right:

A Level at 100% covering 2 units.
A custom effect covering 25 units - a 10 unit ramp-up, followed by a 5 unit level, followed by a 10 unit ramp-down.
A Level at 100% covering 2 units.
A Level at 50% covering 3 units.

I move them, copy them, and mirror them - not as 32 individual cells but as 4 effects.
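[Editor's note: that command substitution can be sketched like this - the struct and names are illustrative, not the tool's code. Reversing the run of effects and swapping Ramp-Up for Ramp-Down gives the same output as flipping every underlying cell:]

```cpp
#include <algorithm>
#include <vector>

// Illustrative effect record for a run of effects on one channel.
enum class Kind { RampUp, RampDown, Level };

struct Effect {
    Kind   kind;
    int    units;   // duration in time units
    double level;   // intensity 0..100
};

// Horizontal mirror (flip in time) at the effect level: reverse the order
// of the effects and substitute Ramp-Up <-> Ramp-Down. No per-cell data is
// touched, yet the expanded output equals flipping all the underlying cells.
void mirrorHorizontal(std::vector<Effect>& run) {
    std::reverse(run.begin(), run.end());
    for (auto& e : run) {
        if      (e.kind == Kind::RampUp)   e.kind = Kind::RampDown;
        else if (e.kind == Kind::RampDown) e.kind = Kind::RampUp;
        // Level is symmetric in time; it is left unchanged.
    }
}
```

Mirroring the 32-cell RED channel above is then four record swaps instead of 32 cell moves.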

Joe

kychristmas
02-01-2012, 09:33 AM
Joe,
That makes sense to me. I just think that this is all perception, because most of us do not have enough experience with effect-based sequencing.

I think it's going to be difficult to define what a mirror is for each effect, but I guess you will eventually get it. It may be that certain effects don't have both a vertical and a horizontal mirror. Or maybe you should just remove mirror and have a "Switch To" available for each effect, i.e. Ramp Up would have a "Switch to Ramp Down" option.

My only issue would be multiple effects applied to a cell. Will that be able to happen? For instance, I quite often will do a Chase, Spin, or Arch effect where I gradually ramp up and ramp down. For instance, at the end of a song I may continue the effects, but then gradually fade out to black. Would this require a special custom effect, or can I overlay the effects?

Keep at it.

JHinkle
02-01-2012, 10:16 AM
My only issue would be multiple effects applied to a cell. Will that be able to happen? For instance, I quite often will do a Chase, Spin, or Arch effect where I gradually ramp up and ramp down. For instance, at the end of a song I may continue the effects, but then gradually fade out to black. Would this require a special custom effect, or can I overlay the effects?


I'm still trying to understand the difference between a spin and chase. I thought they were the same.

Once I do my first sequence - it will probably be as clear as day. I want to define everything up front in case a change is required.

Your statement above is interesting. Take what I consider to be a chase: a multi-channel, multi-cell effect where the same effect is applied to adjacent channels, only offset by a defined number of time units - like a long stair step, or a mirror of a stair step.

My first gut (not thinking much) reply was going to be - OK - it's a chase using step-down. By your statement, the complete time period you are looking at will cover multiples of these (10+). It would be a pain to individually change the hi-to-lo intensity range of all of these (in the same channel) to get a nice decay to zero.

I am going to add an effect modifier. Select a group of common ramp-type effects (like all of the ramp-downs that would compose your last chase to end), tell the tool the starting intensity and the ending intensity, and the tool will then modify each individual effect to produce the slowly decaying chase sequence.


Now, do I implement it (as you stated) as two effects - one controlling some aspect of the other (this would preserve the design intent and make it apparent) - or have the second effect do an immediate change to the first - thereby producing the wanted outcome but leaving no trail as to how it was accomplished?

I'm going to look hard into the first - the second is easy as a last resort.

Consider this - your chase where each segment is a ramp down to zero - have a higher controlling effect that determines the starting/ending intensity of the ramp - so you are getting a chase using fades to zero - and the overall chase scene slowly fades to zero.

This is a data structure change - to do it now is easy - later (after a sequence was started) - maybe impossible.

Thanks ky - I really like this!

Joe

JHinkle
02-01-2012, 04:09 PM
I've added what I call "Over Control". Over Control is a secondary ramp effect applied to multiple effects to obtain an overall ramp appearance.

Below are two views of a chase scene. In the first, all lights are at 100% and fading to zero as shown. The solid color indicates there is no Over Control in effect.

[Attachment 13124]

In the next view - I simply selected everything (all channels and all effects) - and once selected, I applied an "Over Control" Ramp-Down effect to the scene.

The hatching indicates an Over Control is in place.

[Attachment 13125]

As you can see - now the chase scene decays from 100% at the beginning down to the last block of Ramp-Downs which are at 25% - fading to 0.
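[Editor's note: the Over Control idea can be sketched as a linear controlling ramp multiplied over the per-frame output of the underlying effects - the function name and the linear interpolation are assumptions for illustration:]

```cpp
#include <vector>

// Apply an "Over Control" ramp over a selection: the underlying effects
// have already been rendered to per-frame intensities (0..100); a secondary
// ramp from startPct down to endPct scales them frame by frame, so a chase
// built from 100% ramps decays smoothly across the whole selection.
std::vector<double> applyOverControl(const std::vector<double>& frames,
                                     double startPct, double endPct) {
    std::vector<double> out(frames.size());
    const int n = static_cast<int>(frames.size());
    for (int i = 0; i < n; ++i) {
        // Linear interpolation of the controlling ramp across the selection.
        double t     = (n > 1) ? static_cast<double>(i) / (n - 1) : 0.0;
        double scale = (startPct + (endPct - startPct) * t) / 100.0;
        out[i] = frames[i] * scale;
    }
    return out;
}
```

Keeping the scaling as a separate pass preserves the design intent: the underlying chase effects are untouched, and removing the Over Control restores the original scene.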

I think that is what kychristmas was referring to - and I like it.

Joe

dslynx
02-01-2012, 04:14 PM
Looks and sounds sweet.. can't wait to give it a try ;)

JHinkle
02-01-2012, 06:20 PM
I now have Mirror Vertical just like kychristmas requested - select the block and it flips in place.

[Attachment 13127]

Zeph
02-01-2012, 06:37 PM
Joe - we are back to some degree to why graphic raster editors (state = N x M with RGB each) still exist even with the existence of graphic vector editors (various stroked and filled objects in layers). The latter is more powerful for later editing (you can make a blue circle into a bigger green one with a yellow border, even if it's partially behind another object).

When you stroke a brush, are you creating a parameterized object, or modifying the RGB values of a raster of pixels? In the latter case the saved state is further from capturing the original intent in abstract form.

It's kind of an early binding/late binding thing - eventually you will be creating a raster image, but the later you do that the more flexibility you have in modifying your design.

However, if you are going to be layering effects with fine granularity (airbrushing, sharpens), sometimes it's a lot less complex to just edit the pixels in successive passes (yeah, with a temporary multi-level undo etc). Thus such editors still exist.

Back to lighting rather than raster vs vector graphic editors.

You are surfing that edge. For now, you are considering a fully object based layering of a specific effect (the "brush" creates a new fade object which dynamically modifies other chase objects), and a hybrid (your brush would do a one time static modification of the parameters in a group of objects). I hope and expect that you can pull off the former because this is a pretty straightforward (and high utility) case. But if you continue to layer arbitrary effects, a time could come when the complexity isn't worth it.

It's actually kind of a fun edge to explore, balancing complexity and aesthetics and practicality.

Cheering from the sidelines for now.

JHinkle
02-01-2012, 07:49 PM
Zeph:

I'm not understanding what you are trying to graciously convey to me.

I considered vectored graphics when I started.

How to convey effect illumination information to the user - fast and easy. I decided it was easier to simply change the color associated with a brush and PatBlt an area. I use one brush normally - and simply change TextColor.

When kychristmas suggested an overlay effect - I wanted some way to convey to the user that the effect being presented to them was really a union of two.

So when I perform a paint of a channel cell - I look to see if the cell is governed by an effect and also by an overlay effect. Based on the results, I choose one of two brushes.

I don't see adding any more types of overlays - so it will probably stay at two.

Was your concern a performance concern or a resource concern?

Please share.

Thanks.

Joe

JHinkle
02-01-2012, 07:49 PM
I've got both Horizontal and Vertical mirroring working. I have limited Horizontal mirroring to my simple effects (Ramp-Up, Ramp-Down, and Level). In a way it does not make sense to flip a complex effect horizontally. I say that until I need it - then I'll make it happen.

Joe

Zeph
02-01-2012, 09:00 PM
Joe, I apologize for being unclear. I was using vector graphics editing and raster graphics editing as a metaphor. Your approach is more like vector graphics editing (you create objects which will later become a series of explicit levels at controller refresh time); Vixen 2.x is more like raster graphics editing (you edit in the actual values for each cell).

I was NOT talking about your display to the user per se, but about the architecture, using an analogy about related tradeoffs in the graphical editing world. My words might make at least a notch more sense in this light (sorry I didn't make it more clear).

My point was more about design philosophy - not a complaint or problem, but a reflection or musing on the design space territory you are so aptly exploring. I've done some thinking along these same lines and as I contemplate the (always dangerous but sometimes exciting) Generalized Case, um, well it gets interesting.

If I'm understanding, you are dealing with two effects affecting the same set of channels in the same time period and thus interacting. One does a chase by more or less copying one channel to the next but with a time delay; this is represented by an object which specifies the channels and time of course, along with chase parameters. The other effect fades down the maximum brightness; this would potentially affect the same channels and time range, but of course it has its own fade parameters. What if there were a third effect, like a color shift (at a different rate than the chase or fade)? Or a fourth? (Easy answer - don't go there).

If one was editing a grid of values per channel/timeslot, then just like editing a raster image in an editor, you can apply 100 effects to the same image pixels with no problem - each just affects the results produced by earlier edits. With an object-based sequencer such as I understand you are creating, it gets architecturally more interesting. So it's interesting to me to see how you approach the challenges.

In my case, I've only done some conceptual work - no code. The approach I'm exploring (on paper) is sort of a cross between the Moog synthesizer and animation sprites. And it may never be implemented. Whereas you are making great progress on a real world new approach, so I'm enjoying hearing and thinking about it as you share.

JHinkle
02-01-2012, 09:21 PM
What if there were a third effect, like a color shift (at a different rate than the chase or fade)?


I see RGB strings between the lines.

Right now - I don't own any RGB strings so they are outside my consideration. Once I have the tool where I can easily work with constant color strings - I'll probably step in.

I did do a search in the forum on the various RGB strings that are out there and the different controllers driving them. I looked to see if there was a combination of custom controller/ software sequencer that could help. After about 6 hours I decided to keep on track with standard LEDs for my 2012 display.

I initially had no intention of doing an overlay - but Kelly presented such a perfect case - I had to do it. It works great.

Thanks to everyone on their mirror comments - I have them fully functional now also.

I can't think of any more features to add - so I will actually start sequencing - THAT will be where the tire meets the road.

Joe

djulien
02-01-2012, 10:14 PM
In my tool - there would be a single cell that contains data - that data fully describes the 200 output cells.

I design in concepts you think of for the sequence and the tool produces the 50msec output values - so my output matches Vixen - but I sequence at a much higher level.

So all that being said - I can't do matrix type manipulation - I do command substitution. A Ramp-up becomes a Ramp-down, etc.

I was assuming it was a function that is called to generate the cell values dynamically, so I thought it could be chained with a matrix transform. Thanks for clarifying.

don

JHinkle
02-02-2012, 10:05 AM
I am designing my output plug-in.

It will be C++ based - not C#/.NET.

Does Microsoft provide "free" access to a VC C++ compiler as they do for C#?

Joe

rokkett
02-02-2012, 10:20 AM
Yes, I think the "Express" compiler series is freely available....

http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express

JHinkle
02-02-2012, 11:00 AM
This is a departure for me in that I will be providing a plug-in capability to allow an end user to attach an output driver of their design/choice.

I'm asking for a sanity check on my interface to make sure it embraces those that might want to try the tool.

The plug-in will have three function calls:

Initialize();
ShutDown();
Output(int Controller, float *data, int DataSize);

I envision a user who needs to use multiple COM ports, for example, to initialize them and then create a mechanism to associate Controller to Com port.

Output will provide the Controller number the data is destined for (from that the plug-in should acquire access to the proper COM port), the size of the data array and a pointer to an array of floats (each float value will be between 0.0 and 100.0)

To convert the data to the actual value being sent to the physical controller ...

byte V;

V = (byte)((data[i] * 255.0) / 100.0);

if someone was using 10 bit intensity values (1023 is the 10-bit maximum - scaling by 1024 would overflow at 100.0)

word V;

V = (word)((data[i] * 1023.0) / 100.0);


If someone wanted to take that value and transform it into an intensity value tailored for a specific type and color of lamp/led ...

assuming V is associated with a RED LED - whose translation table is TranRedLed[]

V = TranRedLed[V];
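Putting those pieces together, a plug-in's conversion path might look like this sketch (the function names `to_byte`, `to_word` and `correct` are mine for illustration, not part of the interface; note the 10-bit case scales by 1023, the 10-bit maximum, so 100.0 doesn't overflow):

```c
#include <stdint.h>

/* Sketch only - names are illustrative, not part of the plug-in API. */

static uint8_t to_byte(float pct)            /* 8-bit controllers  */
{
    return (uint8_t)((pct * 255.0f) / 100.0f + 0.5f);   /* rounded */
}

static uint16_t to_word(float pct)           /* 10-bit controllers */
{
    /* 1023, not 1024: 100.0 must map to the 10-bit maximum */
    return (uint16_t)((pct * 1023.0f) / 100.0f + 0.5f);
}

/* per-lamp correction via a 256-entry table such as TranRedLed[] */
static uint8_t correct(const uint8_t table[256], float pct)
{
    return table[to_byte(pct)];
}
```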

My question for those using Renard type controllers - do you send the COM port data one byte at a time - or do you package everything in an array and then pass the array?

I ask - so my sample plug-in will provide the code structure that fits what you do today.

If you see anything I'm missing - please yell.

Note:

I'm implementing 1 plugin per communication mechanism - not per controller.

I'm also considering having a text file that defines which COM ports and channel order are used. That way if the user needs to make a change - it can be done simply by changing the config file.


Thanks.

Joe

djulien
02-02-2012, 11:47 AM
My question for those using Renard type controllers - do you send the COM port data one byte at a time - or do you package everything in an array and then pass the array?

I ask - so my sample plug-in will provide the code structure that fits what you do today.

The Renard output plugins, as well as others, load up the bytes into an in-memory buffer and then call the Windows write function to send them out all at once. What happens to it after that is up to Windows.


Does Microsoft provide "free" access to a VC c++ compiler as they do for c#.

There is also an open source tool called #develop (SharpDevelop) that can be used.
www.icsharpcode.net

don

JHinkle
02-02-2012, 12:37 PM
If a user had 2 Renards - say a 24 and 64 - would they be on different COM ports or would they be on 1 COM port and viewed as 88 sequential channels?

Joe

stenersonj
02-02-2012, 12:47 PM
Normally one port, sequential

Zeph
02-02-2012, 01:28 PM
Maybe this could be organized per com port = per daisy chain of controllers (rather than per controller per se). From the driver's perspective, I don't think it matters how many controllers there are, just how many channels the daisy chain of controllers has for Renard; or 512 for DMX.

And the port might be virtual rather than a COM port, if using ethernet and e1.31. All part of the config of course, which is outside the spec of the call interface.

Small note which doesn't change your interface but might affect the documentation: If the output module writer wants to accommodate light curves, they might want to do that before converting to integer V. For gamma correction they might do the math in floating point before converting to an integer (byte or word size). Or for a lookup table they may want to convert the float to a different number of bits for input to the lookup table than they get out of the table. (i.e. V into the table need not be the same size as V out of the table).

JHinkle
02-02-2012, 01:58 PM
Zeph:

I have to keep separating the way I am going to communicate and the way those using a Renard type controller do.

For Renard type systems - I am implementing Com Port - Controller Number (Only to help identify which channels go to what COM port if multiple ports are used) and Channel count (total number of channels transmitted on that COM port).

For my protocol - I will have uniquely addressable controllers all on the same bus - so that's where my terminology has to be watched.

If a Renard User only has 1 COM port - then all channels will go to that device.

I just threw the light curve concept out there to see who would bite.

There are many ways of addressing the implementation, as shown by you presenting several more.

Joe

JHinkle
02-02-2012, 11:31 PM
I made a short youtube video demonstrating what the sequencer will do so far.

All comments - good or bad - are welcome.


http://www.youtube.com/watch?v=aOwrz-P5Bf0

Joe

kychristmas
02-03-2012, 01:04 AM
Looking Good Joe.

Based on your description of the Generic Com Drivers, I'm not sure about the usefulness of that. What gets output to the COM ports?

budude
02-03-2012, 01:42 AM
Very impressive - especially with such a short time of development.

djulien
02-03-2012, 01:55 AM
I made a short youtube video demonstrating what the sequencer will do so far.
All comments - good or bad - are welcome.

Joe,

Looks great so far! The periodic repeat feature is nice; that would save a bunch of copy + paste effort for effects that repeat at a regular interval or are based on the beat.

Applying the DSP filtering is very nice. I think it would be very powerful if you could apply the output of the filter directly into the channels rather than, or in addition to, having to manually set channels by looking at the results of the filter. For example, if output of a filter could actually "trigger" the effects or supply parameters to the effects functions that would be very nice (that's the direction I'm probably headed - it allows effects to be automatically generated, but they are still synced directly to the music rather than just playing them asynchronously while the music is going).

For the areas that have multiple effects ("overlays"?), it might be helpful to have a popup Tool Tip when the mouse is over that area to show which effects those are, or maybe watermark it into the dithering. (this would save having to remember or open another window to find out).

It would be helpful to have some kind of channel grouping if the #channels gets up into the 1000s as it would be for some of the larger DIYC displays.

Regarding the channel colors, channel triplets are often used to represent RGB channels or props. Since you are allowing a color to be assigned to each channel, it might be nice to take the current intensity of a R, G and B channel and combine those into an RGB value and then use that to show the current color of the RGB channel at that point in the sequence. I guess that might also be distracting to see the channels changing color while editing them, although kind of cool also.

A way to copy + paste shapes for the previews would be convenient if there are many similar props. That way a "library" of prop shapes could be built up over time. (ie, various M-trees, rather than having to draw one yourself). There is another thread about 3D models that would fit very nicely with this (it was dealing with spiral trees, for example).

don

JHinkle
02-03-2012, 08:07 AM
Originally posted by kychristmas
Based on your description of the Generic Com Drivers, I'm not sure about the useful ness of that. What gets ouput to the COM ports?

Kind of hard to think, talk, and do something at the same time making a video - I guess my intent did not come across.

I believe I have implemented an output capability to handle Renard type controllers. In the Output section - as an example - you identify 3 COM ports - and their characteristics (baud, parity, stop). You assign to each COM port the number of channels required by all of your controllers on that COM-port-based bus. My output driver performs the standard float-to-byte illumination translation, packages up the byte intensities for each COM port and each channel - and sends them at each sequence tic. I believe that is what is required - I'll get my scope out shortly and verify the bit stream.


Originally post by djulien
For the areas that have multiple effects ("overlays"?), it might be helpful to have a popup Tool Tip when the mouse is over that area to show which effects those are, or maybe watermark it into the dithering. (this would save having to remember or open another window to find out).


I have a tool-tip pop-up that describes the effect under it - just didn't show it.


Originally post by djulien
Applying the DSP filtering is very nice. I think it would be very powerful if you could apply the output of the filter directly into the channels rather than, or in addition to, having to manually set channels by looking at the results of the filter.

That could be done with a low frequency beat as I've shown in the Music box dancer. The hi-frequency musical tones I think are a lot harder since you also have all of the harmonics of "what you don't want" in there also. I was looking into a DFFT to identify the frequency of the musical instrument I was interested in and then define a hi-Q band-pass to see if it could be extracted. I've not walked that path as of yet.


Originally posted by djulien
It would be helpful to have some kind of channel grouping if the #channels gets up into the 1000s as it would be for some of the larger DIYC displays.

I believe I have that - my channel grouping quickie at the beginning did not provide the insight I was hoping for.

My concept was - with your 1000 channel sequence - to group them into "groups" to work on - I used a tree as an example. I would have one for a Mega-tree, a group of minis - etc. If I wanted a completed sequence from one group to be used as a cue in another, I would just make those channels part of that group also.

Currently - I do not have the ability to compress a group of channels into one visible "virtual" channel where all effects are then placed in each of the compressed channels automatically. I considered that but could not relate to a real situation that would require it.

Thanks for your comments.

Joe

Traneman
02-03-2012, 10:47 AM
I have read all the posts here and don't have a clue about what you guys are talking about.
After watching the video I don't care. I JUST WANT IT !!!!!!!!!
From a relative newbie's point of view I think it is awesome and I am very jealous I can't do things like that.
I really like how you can just drag a sequence and change the timing on it.
As far as the beat track viewer (not sure what to call it) - for me, not being musically inclined, it would be extremely helpful.
Do you think this will be ready for 2012 ?
Great job, and I for one am glad you are retired and have the time to develop something like this.

Thank you.

craftylag
02-03-2012, 11:10 AM
What do you mean ready for 2012? Do you think it will be ready today?

Traneman
02-03-2012, 11:26 AM
What do you mean ready for 2012? Do you think it will be ready today?

Sorry I meant for Christmas 2012 geeesh.

kychristmas
02-03-2012, 02:38 PM
Kind of hard to think, talk, and do something at the same time making a video - I guess my intent did not come across.

I believe I have implemented an output capability to handle Renard type controllers. In the Output section - as an example - you identify 3 COM ports - and their characteristics (baud, parity, stop). You assign to each COM port the number of channels required by all of your controllers on that COM-port-based bus. My output driver performs the standard float-to-byte illumination translation, packages up the byte intensities for each COM port and each channel - and sends them at each sequence tic. I believe that is what is required - I'll get my scope out shortly and verify the bit stream.


Joe

For the most part that would work, but there are protocol-specific things that need to happen. It's not just raw channel data. Mostly yes, but there are leading bytes and then there is a sort of "Sync" or "Catch Up" byte that is sent every 100 channels. I have played with some coding for Renard, but can't remember exactly. Your concept sounds more like the Vixen "Generic Serial" plugin.

JHinkle
02-03-2012, 03:04 PM
For the most part that would work, but there are protocol-specific things that need to happen. It's not just raw channel data. Mostly yes, but there are leading bytes and then there is a sort of "Sync" or "Catch Up" byte that is sent every 100 channels. I have played with some coding for Renard, but can't remember exactly. Your concept sounds more like the Vixen "Generic Serial" plugin.

If someone has the specs - I will incorporate them.

If I can find source for the Renard Output module - I can extract it from there also.

If push comes to shove - I can always look at the PIC source and reverse engineer it that way. (I'd rather not)

Joe

Zeph
02-03-2012, 03:06 PM
Very cool Joe!

I thought the video really covered it well, taking us through the functionality at a good pace. I look forward to betas or whatever when you can share it...

I see that unlit channels in the preview are drawn in black (the upper left square has a chunk taken out where an unlit stroke overlaps it). Might be a useful and tiny change to omit drawing strokes for 0 values.

You will probably continue to get lots of suggestions (as well as your own inspirations).

I understand that you have the functionality well enough nailed down to begin creating sequences. At that point you may start getting some inertia - some caution about making any change in structure which would be problematic for already created sequences (unless you create an upgrade function). I'm guessing however that you save your sequences in a fairly simple format which will be fairly easy to maintain over versions - as a set of parameterized effect objects tied to channel groups and time ranges. Hopefully these will be upward compatible; for example, if you add a new parameter to an effect with a newer version, there will be some default value which can be applied to that parameter when loading an effect object saved from an earlier version which did not have the new parameter.

The thing I'm most curious about (besides when we might get to play with it) is how you implemented the overlay effect. For example the fade out; did you create one fade overlay effect per channel for the selected area? Can you delete or move the overlay effect, leaving the underlying primary effects unchanged? If you add a new primary effect within the time range of an existing overlay effect, will it too be controlled by the overlay?

I'm guessing that the overlay effect may detect primary effects within its channel and time range, and "reach into them" to change their brightness level parameters. Or each primary effect could check to see if it is within an overlay effect, and if so ask the overlay for a scaled brightness range to use.

The following gets a bit abstract; nobody should read it unless they want to. I'm trying to put Vixen 2, Joe's editor, and Vixen 3 in a common conceptual framework. (Also an editor in incubation which I call Pixie).

I think you've chosen a good compromise of simplicity and power, between Vixen 2 and Vixen 3.

Vixen 2 was a 0 dimensional object system - the "objects" were like points of varied intensity - a point being a cell. The editor could create a fade on a channel (a 1 dimensional line along the time axis as described to the user), but this was saved as a bunch of independent values in particular timeslots (independent "points"). The individual channel values as saved (0 D "points") don't know they were originally part of a fade (a 1 D edit operation as seen by the user). There are also 2D editing operations (operating on a rectangle of channels and time) which also save point-like channel values in timeslots - a channel value doesn't know it was part of a chase.

Your editor saves 1D objects. A fade or level or custom effect is a 1 dimensional object, along the time axis. When one draws a fade in the editor, it is saved as a "1D" object rather than as a set of channel/timeslot/values. There's a lot of bang per buck in that level of abstraction. When you do an editor operation in 2d (channels x time), this can create a set of 1D effects along the channel axis (not unlike Vixen 2 creating a set of 0 D channel values along the time axis). If you create a chase of a set of fade effects, each fade effect does not know or care that it was part of a chase.
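To make that concrete, here is a minimal sketch of a "1D" fade object being expanded into per-timeslot values only at render time (my own strawman struct and names, not Joe's actual format):

```c
/* A fade stored as parameters ("1D object") rather than as
   per-timeslot channel values; expansion happens only at render. */
typedef struct {
    int   start_tic, end_tic;       /* time range [start_tic, end_tic) */
    float start_level, end_level;   /* 0.0 .. 100.0                    */
} fade_effect;

/* fills out[0 .. end_tic-start_tic) with interpolated levels */
static void render_fade(const fade_effect *f, float *out)
{
    int n = f->end_tic - f->start_tic;
    for (int i = 0; i < n; i++) {
        float t = (n > 1) ? (float)i / (float)(n - 1) : 0.0f;
        out[i] = f->start_level + t * (f->end_level - f->start_level);
    }
}
```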

Vixen 3 is somewhat of a 2d object system, where the parameterized object can control a timespan as well as a channel group. So (as I'm understanding so far), a chase can be an object which knows it's a chase. This has power, but also complexity. I look forward to the amazing things it will do.

I am glad to have your fascinating new sequencer in the mix.

I mentioned that I'm working on the paper level on a different sort of light controller (which I'm tentatively calling Pixie). The Pixie sequencer is inspired by the need to control pixels in more interesting (and less time intensive) ways, but would also handle regular lights. Pixie is currently mostly 1D, but that dimension can switch between time and space (channels but also cartesian space of a sort). Your 1D effects are similar to what in Pixie would be transferred to a node as a sprite moves across it while following a track, so some of the concepts will sound familiar when I eventually go into more detail :-) I do know that sounds like gibberish! At some point I'll describe Pixie to the group such that the above would make good sense (with diagrams), but it's not ready yet. And Pixie is entirely vaporware at this point - a design being formulated, no code. That's all I'll say here, as this thread is about YOUR great new sequencer. I just wanted to mention a fourth example with still a different take on the 0 D / 1 D / 2 D object orientation of sequence editors.

You rock! And you are fast.

JHinkle
02-03-2012, 03:42 PM
Very cool Joe!

I see that unlit channels in the preview are drawn in black (the upper left square has a chunk taken out where an unlit stroke overlaps it). Might be a useful and tiny change to omit drawing strokes for 0 values.

The thing I'm most curious about (besides when we might get to play with it) is how you implemented the overlay effect. For example the fade out; did you create one fade overlay effect per channel for the selected area? Can you delete or move the overlay effect, leaving the underlying primary effects unchanged? If you add a new primary effect within the time range of an existing overlay effect, will it too be controlled by the overlay?

I'm guessing that the overlay effect may detect primary effects within its channel and time range, and "reach into them" to change their brightness level parameters. Or each primary effect could check to see if it is within an overlay effect, and if so ask the overlay for a scaled brightness range to use.



I found the Renard protocol and will update my output section

http://www.doityourselfchristmas.com/wiki/index.php?title=Renard#Protocol
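Going by that wiki page, the framing can be sketched roughly like this (escape values as I read them from the wiki; the periodic pad/sync byte kychristmas mentioned is omitted, so treat this as a sketch, not a reference implementation):

```c
#include <stddef.h>
#include <stdint.h>

#define REN_PAD    0x7D   /* special bytes that must be escaped */
#define REN_SYNC   0x7E
#define REN_ESCAPE 0x7F
#define REN_CMD    0x80   /* address/command byte for the first device */

/* Packs nchan channel bytes into a Renard frame; out must hold at
   least nchan*2 + 2 bytes (worst case: every byte escaped). Returns
   the number of bytes to hand to the serial write, all at once. */
static size_t renard_packetise(const uint8_t *chan, size_t nchan,
                               uint8_t *out)
{
    size_t o = 0;
    out[o++] = REN_SYNC;
    out[o++] = REN_CMD;
    for (size_t i = 0; i < nchan; i++) {
        switch (chan[i]) {
        case REN_PAD:    out[o++] = REN_ESCAPE; out[o++] = 0x2F; break;
        case REN_SYNC:   out[o++] = REN_ESCAPE; out[o++] = 0x30; break;
        case REN_ESCAPE: out[o++] = REN_ESCAPE; out[o++] = 0x31; break;
        default:         out[o++] = chan[i];                     break;
        }
    }
    return o;
}
```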

I draw black segments because I don't erase the page and redraw each time - that slows things down.

What you saw was a bad example of overlaying the Mega-Tree on to the Box (Window). In reality - I would create a Preview sprite in that fashion.

I pondered a long time on whether I should preserve the "Over Lay" effect. I usually keep to the KISS principle - and that's what I did here - I am leaving the higher level of intelligence to the user.

When a group of effects are identified by the selection rectangle - I do the following:

Determine an intensity decay percentage for each sequence tic, defined by the width of the selection. Process each effect in the selection and apply the appropriate decay percentage (based on position in the selection rectangle) to the effect's illumination start/end setting - depending on whether it's a ramp up/down.

The wider the selection containing effects - the more gradual the decay to black.

What I defined above is not exactly what was shown on the video - I did not like the results - but it was OK for show and tell - so I revised it this morning.
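In code, the position-based scaling described above might look something like this (a linear sketch under my own assumptions - the actual decay curve in the tool may differ):

```c
/* Scale factor for an effect at a given tic inside the selection:
   1.0 at the left edge, falling linearly to 0.0 at the right edge. */
static float overlay_scale(int tic_in_selection, int selection_width)
{
    if (selection_width <= 1)
        return 0.0f;
    return 1.0f - (float)tic_in_selection / (float)(selection_width - 1);
}

/* applied to an effect's illumination start/end setting (0..100) */
static float apply_overlay(float intensity, int tic, int width)
{
    return intensity * overlay_scale(tic, width);
}
```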

I put a lot of work into serializing the saved sequence data in and out of the program. It's all in simple XML. So if I need to make BIG changes - I can simply pre-process the document.

My intent is to compile and export an XML file containing the output data so that the sequence can be played by a stand-alone program (I'm working on that also).
I will use it to manage and play several songs and light sequences instead of having the editor do that.

JHinkle
02-03-2012, 04:07 PM
For the most part that would work, but there are Protocol specific things that need to happen. It's not just Raw Channel data. Mostly yes, but there are leading bytes and then there is a sort of "Sync" or "Catch Up" byte that is sent every 100 channels. I have played with some coding for Renard, but can't remember exactly. Your concept sounds more like the Vixen "Generic Serial" pluging.

It's all done.

By the way - the link:

http://doityourselfchristmas.com/forums/showthread.php?18969-Renard-protocol-what-does-it-look-like&highlight=renard+protocol

has an attached file from frankv. Frank has a nice implementation.

Joe

frankv
02-03-2012, 05:40 PM
Hi Joe,
That's my file, and I know it works because I used it last Xmas.
The function ser_puts(char *x) does all the packetising work, and it has a buffer sized to [MAX_CHANS*2+2] to allow for any escape characters plus the sync plus the device ID.
Data presented to it should be an array of bytes, one per channel. e.g. if you have one Ren24, then MAX_CHANS should be 24. In my case, I have 2 Ren24s daisy-chained, so MAX_CHANS is 48.
Making this work with multiple serial ports would take some work.

Frank

JHinkle
02-03-2012, 06:03 PM
Hi Joe,
That's my file, and I know it works because I used it last Xmas.
The function ser_puts(char *x) does all the packetising work, and it has a buffer sized to [MAX_CHANS*2+2] to allow for any escape characters plus the sync plus the device ID.
Data presented to it should be an array of bytes, one per channel. e.g. if you have one Ren24, then MAX_CHANS should be 24. In my case, I have 2 Ren24s daisy-chained, so MAX_CHANS is 48.
Making this work with multiple serial ports would take some work.

Frank

Frank:

My old eyes ---- I saw --- [MAX_CHAN+2] ---- not [MAX_CHAN*2+2]; In fact you did the same thing I did ...

I am going to retract my statement above - I apologize for not looking closer.

Joe

frankv
02-03-2012, 10:12 PM
OK, no problem. I was just confused by your statement :)

JHinkle
02-04-2012, 01:39 AM
Update:

I've now started using the tool in a production mode. Changing small things as I find them.

I have reworked the Overlay/Over Control effect - I now like how it ramps up or fades a range of effects to darkness. I keep track of both the underlying effect and the overlay effect with the ability to remove the overlay at any time.

The biggest hurdle now is gaining an understanding of HOW to create the specific scenes or animations I want. I guess as I gain more experience - this will come easier. Just like life - the more you live it the more experience you have to draw upon. Learning to see at the macro level now instead of the micro level.

I see another feature of the tool that is a must-have (will be done over the weekend). I need to be able to take a multichannel sequence (over a specific length of time) that I am happy with - and save it in a reusable - independent - library. I will then be able to pick a scene animation feature from the library and reuse it many times within the same song sequence or within different song sequences.

This would also allow easy sharing among users. Animation that is beat-specific might be difficult - but non-beat would be just like picking a custom effect - only it would be a BIG - multichannel - effect.

I'm going to make a similar library for my Preview sprites. I will be able to reuse them or share with others.

If one or two people would like to beta test - I would be pleased to share. I would prefer a beta tester NOT be a newbie like me. I would prefer comments coming back from beta testers based on experience - not newbie questions like I keep asking myself and you on the forum.

PM if you are interested.

Thanks.

Joe

JHinkle
02-04-2012, 03:31 PM
I have two gentlemen that have agreed to do some beta testing for me.

One of them has a large number of RGB strings - so I am now going to implement RGB.

The tool will process an RGB effect on a single channel - not three separate channels. The translation from one channel to three will occur only in the output stage.

Question:

Controllers that currently work with RGB strings - are the three channels (R, G, B) always sequential in the message that is transmitted to the controller or is there a reason NOT to have them sequential. To me - sequential should be the only way - but I'm just asking.

Can someone recommend a good E1.31 document that explains the protocol, or what subset needs to be implemented to drive RGB string controllers?

Thanks in advance for your reply.

Joe

budude
02-04-2012, 04:01 PM
I have two gentlemen that have agreed to do some beta testing for me.

One of them has a large number of RGB strings - so I am now going to implement RGB.

The tool will process an RGB effect on a single channel - not three separate channels. The translation from one channel to three will occur only in the output stage.

Question:

Controllers that currently work with RGB strings - are the three channels (R, G, B) always sequential in the message that is transmitted to the controller or is there a reason NOT to have them sequential. To me - sequential should be the only way - but I'm just asking.

Can someone recommend a good E1.31 document that explains the protocol, or what subset needs to be implemented to drive RGB string controllers?

Thanks in advance for your reply.

Joe

Here you go:
http://www.esta.org/tsp/documents/docs/E1-31_2009.pdf

JHinkle
02-04-2012, 05:04 PM
Thanks Brian.

JHinkle
02-04-2012, 09:28 PM
I have enhanced the fade to black or black to color ---- and have fully implemented RGB capabilities (I'll have E1.31 implemented in the next couple of days).

As always - comments - good and bad are always welcomed.


http://youtu.be/RTr6tZ7JiaU

Joe

TimW
02-05-2012, 02:15 AM
Question:

Controllers that currently work with RGB strings - are the three channels (R, G, B) always sequential in the message that is transmitted to the controller or is there a reason NOT to have them sequential. To me - sequential should be the only way - but I'm just asking.
Joe

For reasons that defy logic (!) some of the RGB pixel strips implement combinations that are not RGB sequenced (say BGR). I assume it might be different LED pinout configs? Anyway I guess the appropriate place to pick this up is in the bridge controller... it's a hardware problem that is specific to the string/strip. But it's not impossible that you may need to correct for order somewhere in the path...

JHinkle
02-05-2012, 09:23 AM
Is a hardware flip of RGB to BGR the only one out there? I can account for that.

Are there any others?

Depending on how many exist - I will set up a config to properly handle the translation.

Joe

JHinkle
02-05-2012, 10:03 AM
Question for all the E1.31 users:

Are your E1.31 controllers always expecting a full packet of 512 slots (used or not) or can they accept a packet with only the number of active slots?

Do your controllers expect priority 100 or do you expect something higher?

Do you expect your universes to be updated at a 20 msec rate even if your sequence runs slower (i.e. 50 msec)?

Is there any reason not to run the E1.31 protocol at the same rate as your sequence? (As long as we don't get into timeout conditions)

Thanks - I'm just wrapping up my implementation of E1.31.

Joe

budude
02-05-2012, 12:52 PM
Is a hardware flip of RGB to BGR the only one out there? I can account for that.

Are there any others?

Depending on how many exist - I will set up a config to properly handle the translation.

Joe

Depending on which pixel controller you use this may be a non-issue. The E680/681 have settings to make the strip RGB order to the E1.31 stream. My 1809 pixel strips are BGR but the E681 takes care of that.

JHinkle
02-05-2012, 01:03 PM
Thanks Brian:

My Channel Config is now designed to allow the user to select RGB or BGR if the channel is an RGB string channel.

Joe

rokkett
02-05-2012, 01:57 PM
My thoughts on the RGB ordering: this should be handled in the hardware, i.e. the plugin/module should always send the data in R,G,B order.

(Although, if a software vendor was nice, it would be nice to have an option. But options add complexity. A user who was given the choice of configuring RGB order in two places may choose to do this in both places and never get the desired outcome...)

(If you are giving them the choice, then you should offer all permutations of the order. The Wikipedia page on permutations (http://en.wikipedia.org/wiki/Permutation) is apropos, wouldn't you say? :) )

JHinkle
02-05-2012, 02:26 PM
E1.31 Communication is complete.

Packet size is dependent on the number of active channels/slots.

Keep-alives are sent to make sure the stream continues to all sinks at least every 800 to 1000 msec.

Stream rate is the same as your sequence rate.

The stream is only active while a song/sequence or multiple songs/sequences are being displayed. The stream is terminated using the required three-packet transmission with the terminate bit set.

Most of these E1.31 features are not required by the controllers you are using today - but they are required by the specification. Better to fully implement them now than have it break in the future when new hardware becomes available.
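For anyone following along, the variable packet sizing works out like this sketch (offsets as I read them from the E1.31-2009 spec - 16 bytes of preamble + ACN identifier, framing layer at byte 38, DMP layer at byte 115 - so double-check against the document Brian linked; struct and names are illustrative):

```c
#include <stdint.h>

/* Length bookkeeping for a variable-size E1.31 data packet.
   Each flags/length field carries 0x7 in the top nibble plus the
   byte count from that field to the end of the packet. */
typedef struct {
    uint16_t total;        /* bytes on the wire            */
    uint16_t root_fl;      /* root layer flags + length    */
    uint16_t framing_fl;   /* framing layer flags + length */
    uint16_t dmp_fl;       /* DMP layer flags + length     */
    uint16_t prop_count;   /* DMX start code + data slots  */
} e131_sizes;

static e131_sizes e131_size_for(uint16_t slots)   /* 1 .. 512 */
{
    e131_sizes s;
    s.prop_count = (uint16_t)(slots + 1);         /* +1: start code    */
    s.total      = (uint16_t)(126 + slots);       /* 638 for 512 slots */
    s.root_fl    = (uint16_t)(0x7000 | (s.total - 16));
    s.framing_fl = (uint16_t)(0x7000 | (s.total - 38));
    s.dmp_fl     = (uint16_t)(0x7000 | (s.total - 115));
    return s;
}
```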

Joe

rokkett
02-05-2012, 02:38 PM
Joe, you trying to make us look bad? Slow up, dude! (Just kidding - drive on, my friend...) :)

dpitts
02-05-2012, 11:38 PM
Joe,

First off, you are making great progress and I am excited and look forward to seeing each and every post. Nice work.

I watched the last video on your RGB implementation. I have a question. Are the three channels that make up a single RGB channel displayed on a single row? For example, if three channels (say channels 1, 2, 3) together create a certain color, is that color displayed on a single row - the row being channels 1, 2 and 3 combined into a single RGB channel? The reason I ask is that as channel counts get high, combining the channels that make up an RGB channel will minimize the number of rows as well as let the user see the color created by the three channels. This may already be the case, but I wasn't sure from the video.

JHinkle
02-06-2012, 01:13 AM
When a RGB channel is identified - it is shown and processed as a single channel in the editor - so my demo showed a single channel with changing RGB values.

I separate the combined (single RGB) channel into three discrete ones as they are being processed for transmission. You can select RGB or BGR decoding.

This should cut the channel count down (for editing) by a factor of three for anyone using RGB.

To directly answer your question - yes - three channels (red, green and blue) are shown and processed as one.

Did I answer the question?

Joe
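
The split Joe describes boils down to a simple reordering at output time; a minimal sketch (function name and data shape are assumptions, not his actual code):

```python
def split_rgb_channel(frame_colors, order="RGB"):
    """Split one editor-level RGB channel into three discrete output
    channels, honoring the controller's wire order (RGB or BGR).
    frame_colors is a list of (r, g, b) tuples, one per event period."""
    index = {"RGB": (0, 1, 2), "BGR": (2, 1, 0)}[order]
    return [[color[i] for color in frame_colors] for i in index]
```

For example, a single yellow-ish frame on a BGR strip: `split_rgb_channel([(255, 0, 10)], "BGR")` yields the blue row first, then green, then red.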

dpitts
02-06-2012, 05:27 AM
Yes it did. thanks.

JHinkle
02-06-2012, 03:59 PM
I am shipping the software out to three gentlemen who are kind enough to beta test it for me.

Instead of taking time to write a manual addressing RGB and E131 and Renard communication - I made a video.

You might find it interesting.

Once the beta team feels it's good enough for the public - I will make it available to you all on the forum.


http://youtu.be/6zus4w6DRKs

Joe

JHinkle
02-07-2012, 12:20 PM
HLS is blinking lights with a Renard controller.

I would like to export a minimal sequence file that a scheduler (which currently runs Vixen 2 sequence) could run.

I am searching the forum, etc - but if someone knows what the data structure looks like - please point me in the right direction.

Thanks.

Joe

rstehle
02-07-2012, 12:51 PM
Wow, great job on the video!! I really like your new sequencer. I'm thinking that this is sort of what I had envisioned the next generation of Vixen to be. IMO, this sequencer may be a lot easier to pick up for us old timers who are used to using Vixen 2.1.4. Color me very impressed!!! :pragnie:

jrock64
02-07-2012, 03:24 PM
Starting to look really interesting.

When you added channels and said they were type RGB - did the three channels represent 1 pixel or 3?

I don't see why you would assign any color to an RGB-capable channel. I guess I got confused when you made a red RGB channel turn yellow without changing the corresponding green channel.

Joel

dslynx
02-07-2012, 03:28 PM
From what I took of it, the 1 channel in the software was actually 4 channels in real life (RGB+W), and the color of the channel is what is displayed. So the sequencer takes care of what levels each of the channels should be at to produce yellow.

Edit: I guess the confusion would be that he used 3 RGB channels to show you how it works, when in fact, it was 9 (or 12, I don't know if he is doing +W) channels.

angus40
02-07-2012, 03:30 PM
This looks great , can't wait for public version.

Thanks JHinkle for creating this .

JHinkle
02-07-2012, 03:36 PM
RGB requires three physical channels (one for each color).

The sequencer hides the fact that there are three distinct channels and lets the user design with one - one that is not fixed in color but has the ability to display any color on any pixel.

I said color in the beginning so that when I laid the first effect in the channel - it would have a color (default). As you change the RGB pixel color - the color of the channel actually changes also.

So - when designing with RGB - you no longer have to think in terms of three colors - three channels - but just think of one channel (like a normal non-RGB channel) and what color is this pixel going to be. The software does the rest for you.

Joe

JHinkle
02-07-2012, 03:39 PM
I'm adding a Show capability and want to be able to play Vixen 2 files along with my own.

Joe

rstehle
02-07-2012, 03:56 PM
Starting to drool.................:pragnie:


I'm adding a Show capability and want to be able to play Vixen 2 files along with my own.

Joe

Traneman
02-07-2012, 04:15 PM
Starting to drool ????????
I have been drooling for awhile now,can't wait

dslynx
02-07-2012, 04:21 PM
Yup, I've been drooling for a while now as well. It's all appearing so fast. Any chance of making it open source or a Linux port?

JHinkle
02-07-2012, 05:48 PM
I've been looking at some VIX files to understand the format so that I can play them.

I pulled several from Holdman.

All of his VIX files have an EventPeriodInMilliseconds of 10.

Do people really run 10 millisec sequences?

Joe

rstehle
02-07-2012, 05:57 PM
Most of us run 50. I have one sequence that I use 25 on, because it has a lot of real fast movement. I believe Holdman uses LOR. Someone devised a LOR to Vixen converter but I'm not sure if it changes the event period.

miw01
02-07-2012, 06:12 PM
Joe

Fantastic work - I've been up through the night watching the videos. Don't keep me in suspense; I'm about to learn my first sequencing app. Vixen 2.x - why would I? Vixen 3.x - yes, but not at the current development pace. So why don't I beta test in Europe for you? PM me.

Mike

budude
02-07-2012, 09:12 PM
Randy has it - the default event period in LOR is 10 ms - the converter does not change the timing, so after the conversion you need to change the interval in Vixen. It will do its best to fill in the missing bits here and there.

JHinkle
02-07-2012, 09:43 PM
I'm going to do the conversion.

I'm going to import Vixen 2 files - convert them - so they can be played as part of my show.

Depending on how they look, I may even be able to import them into my format. I would have to look at the cell pattern and see if I could match multiple cells into one of my building blocks.

Joe

intwoit2002
02-07-2012, 11:38 PM
I really like where you are taking this. Any thoughts about when and at what cost it might be available? I am now up to 4000 channels and what you are doing would really help.

Thanks for supporting the hobby.

Great job, really anxious.

Thanks,
Al

JHinkle
02-08-2012, 12:35 AM
Cost - free.

When - in the next 30 to 45 days.

Joe

djulien
02-08-2012, 12:35 AM
I would like to export a minimal sequence file that a scheduler (which currently runs Vixen 2 sequence) could run.

I am searching the forum, etc - but if someone knows what the data structure looks like - please point me in the right direction

There are a lot of examples at sequencecenter.com. A Vixen sequence file is just an XML file, so you can look at it with any text editor. The general structure is as follows:



<?xml version="1.0" encoding="utf-8"?>
<Program>
<Time>310730</Time>
<EventPeriodInMilliseconds>25</EventPeriodInMilliseconds>
<MinimumLevel>0</MinimumLevel>
<MaximumLevel>255</MaximumLevel>
<AudioDevice>-1</AudioDevice>
<AudioVolume>0</AudioVolume>
<Channels>
<Channel color="-16744193" output="0" id="1278797815796" enabled="True">mTree 2 Blue</Channel>
<Channel color="-16711872" output="1" id="1278797815797" enabled="True">mTree 2 Green</Channel>
<!-- etc -->
</Channels>
<PlugInData>
<PlugIn name="Adjustable preview" key="-1193625963" id="0" enabled="True" from="1" to="38">
<BackgroundImage />
<RedirectOutputs>False</RedirectOutputs>
<Display>
<Height>47</Height>
<Width>117</Width>
<PixelSize>5</PixelSize>
<Brightness>10</Brightness>
</Display>
<Channels>
<Channel number="0">JAAHACUABwAmAAcA</Channel>
<Channel number="1">JAAGACUABgAmAAYA</Channel>
<!-- etc -->
</Channels>
<DialogPositions>
<PreviewDialog x="402" y="262" />
</DialogPositions>
</PlugIn>
</PlugInData>
<SortOrders lastSort="-1" />
<Audio filename="music.mp3" duration="308323">music title</Audio>
<EventValues>AAAAAAGNhoIAAA <!-- base-64 encoded array of cell values -->AAA=</EventValues>
<LoadableData />
<EngineType>Standard</EngineType>
<Extensions>
<Extension type=".vix" />
</Extensions>
<WindowSize>1314,658</WindowSize>
<ChannelWidth>125</ChannelWidth>
</Program>


The contents of the plugins section will vary depending on which plugins are used by the sequence. That info will either be in the .vix file itself, or in a profile file that is referenced by the .vix file.
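
A minimal sketch of reading the core fields back out of a .vix file (this is illustrative, not code from the thread; it assumes the decoded EventValues bytes are laid out one contiguous row per channel, channel-major, one byte per event period):

```python
import base64
import xml.etree.ElementTree as ET

def load_vix_events(path):
    """Extract period, duration, channel names, and raw cell values from a
    Vixen 2 .vix file. Assumes EventValues decodes to one row of bytes per
    channel (channel-major ordering)."""
    root = ET.parse(path).getroot()
    period_ms = int(root.findtext("EventPeriodInMilliseconds"))
    total_ms = int(root.findtext("Time"))
    # Only the top-level <Channels>, not the one nested inside <PlugInData>
    names = [ch.text for ch in root.findall("Channels/Channel")]
    raw = base64.b64decode(root.findtext("EventValues"))
    per_channel = len(raw) // len(names)
    rows = [raw[i * per_channel:(i + 1) * per_channel]
            for i in range(len(names))]
    return period_ms, total_ms, names, rows
```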

The Vixen scheduler is a file named "timers", also an XML file, in the following format:


<?xml version="1.0" encoding="utf-8"?>
<Timers enabled="True">
<Timer>
<StartDateTime>10/31/2010 6:00:00 PM</StartDateTime>
<TimerLength>03:00:00</TimerLength>
<Item length="00:07:18.4640000" type="Program">programfile.vpr</Item>
<RepeatInterval>0</RepeatInterval>
</Timer>
</Timers>


The program file is also an XML file, and will contain a list of .vix files to play, along with some other info.

If you need details about any specific section of the files, just indicate which ones.

hope this helps.

don

JHinkle
02-08-2012, 01:38 AM
Thanks Don:

My feeling is right now I only need Time, EventPeriod, Levels, output channels, and EventValues. Don't need the plugin info as I will require some configuration by the user.

All of the <MinimumLevel>0</MinimumLevel> and <MaximumLevel>255</MaximumLevel> values I have seen have always been 0 and 255. Is there a reason for anything different?

I have found some LOR converted vix files (Holdman) that have the event period at 10 msec - so I will have to come up with some time based compaction process that will provide a decent result. - Have to sleep on that a night or two.

I will be able to play Vixen 2 sequence files along with mine.

I will extract the data I want and save it in a modified format (not base 64). This will also give the user the ability to make small changes outside of Vixen.

Joe

djulien
02-08-2012, 02:13 AM
My feeling is right now I only need Time, EventPeriod, Levels, output channels, and EventValues. Don't need the plugin info as I will require some configuration by the user.

The EventValues are the actual cell values in Vixen. By "Levels", are you referring to MinimumLevel and MaximumLevel?


All of the <MinimumLevel>0</MinimumLevel> and <MaximumLevel>255</MaximumLevel> values I have seen have always been 0 and 255. Is there a reason for anything different?

I've used different values a few times. Since my lights are off below "20" and full-on at 220 or so, sometimes I set those as the min and max levels so that Ramp and Fade functions will start or stop at those levels. If I applied a Ramp or Fade that went all the way from 0 to 255, then there would be no visible effect at the beginning and the end, which made it look like the timing was incorrect (ie, a Ramp would end up with a delay until the lights became visible).

I've also used Min and Max to apply clipping. For example, when I had to convert some dimmed channels for use with on/off relays, I set them one apart to force a binary value, and then converted that back to 0 and 255 again afterward.

don
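
Don's Min/Max usage amounts to a clamp in the output stage; a one-line sketch (illustrative only):

```python
def clip_level(value, minimum=0, maximum=255):
    """Clamp an output level to the sequence's MinimumLevel/MaximumLevel,
    e.g. minimum=20 so a Ramp becomes visible the moment it starts."""
    return max(minimum, min(maximum, value))
```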

frankv
02-08-2012, 03:08 AM
I'm going to do the conversion.

I'm going to import Vixen 2 files - convert them - so they can be played as part of my show.

Depending how they look, I may even be able to import them into my format. I would have to look at the cell pattern and see if I could match multiple cells into one of my building blocks.


Seems to me that matching multiple cells to building blocks is very similar to data compression.

Joe, I have Java source code to read both LOR and one of the several VIX formats if you want it.

Frank

budude
02-08-2012, 03:49 AM
Some folks use an 80 or 90% max to save energy while still providing a relatively bright output. Over the course of a show and the several weeks it runs it adds up.

Slite
02-08-2012, 03:54 AM
Suggestion:

With the Raspberry Pi looking like an interesting (and very portable) solution for running a show, and xLights being able to run on Linux... would you look into adding the possibility to export to other schedulers?

Or even better, provide the guy writing xLights with your file specification so it can be imported into the xLights scheduler?

JHinkle
02-08-2012, 08:21 AM
The EventValues are the actual cell values in Vixen. By "Levels", are you referring to MinimumLevel and MaximumLevel?



I've used different values a few times. Since my lights are off below "20" and full-on at 220 or so, sometimes I set those as the min and max levels so that Ramp and Fade functions will start or stop at those levels. If I applied a Ramp or Fade that went all the way from 0 to 255, then there would be no visible effect at the beginning and the end, which made it look like the timing was incorrect (ie, a Ramp would end up with a delay until the lights became visible).

I've also used Min and Max to apply clipping. For example, when I had to convert some dimmed channels for use with on/off relays, I set them one apart to force a binary value, and then converted that back to 0 and 255 again afterward.

don

Makes a lot of sense.

I will add the capability to my scheduler. My thought process was to address that concern through the use of dimming curves, but this alternative provides acceptable results - it's just clipping in the output stage - so it has been implemented.

JHinkle
02-08-2012, 08:45 AM
Suggestion:

With the Raspberry Pi looking like an interesting (and very portable) solution for running a show and Xlights being able to run on Linux... Would you look into adding possibility to export to other schedulers?

Or even better, provide the guy writing Xlights with your file specification so it can be imported into the xlights scheduler?

Absolutely.

I too use XML as the mechanism to save and transport data.

I don't use "attributes" but address each item as an individual field.

I'm still sleeping on "illumination" data - what Vixen calls EventValues. (I figure out many issues in my sleep).

I am not concerned about file size - hard drive space is cheap. For the design - I save the hi-level effect information - so it is big by its nature. The illumination data is the only place compression can take place - and if you are already big - what's a little bigger.

I can always add the capability to zip and unzip if necessary.

I also design my XML files so that they can be opened in a simple editor. You have to watch how many characters are in a line as some editors will clip the line or add unwanted characters to it.

My thoughts right now are to define a block size which contains data for a specified number of EventValues. The block would be "time stamped" as to where it belongs in the sequence and would represent the values in hex (less the 0x). Example - every block describes 1 second of illumination data. At a sequence resolution of 25 msec, that's 40 data points at 3 characters per data point (I'm including a comma delimiter) - that line length should not cause most editors to do bad nasty things.

What I'm still pondering is whether to make provisions to exclude "black" areas of time (illumination values of 0 after being passed through a dimming curve or after applying Min/Max clipping values). What it saves in space might not be worth the effort and coding.

Joe
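
The block layout Joe sketches could be encoded like this (a hypothetical encoder illustrating the format he describes; the function name and timestamp/values separator are assumptions):

```python
def encode_block(start_ms, values):
    """Encode one illumination block: a time stamp saying where the block
    belongs in the sequence, then each cell value as two hex digits (no 0x)
    with a comma delimiter - 3 characters per data point."""
    return "%d:%s" % (start_ms, ",".join("%02X" % v for v in values))
```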

dowdybrown
02-08-2012, 11:01 AM
Absolutely.

I too use XML as the mechanism to save and transport data. [...]

Joe

Joe,

This is really exciting. I am the creator of xLights and I do plan to port xLights to the Raspberry Pi. Feel free to PM me with any questions you may have about xLights. Keep up the good work!

Matt

JHinkle
02-08-2012, 11:28 AM
I'm implementing Dimming Curves.

Here are my thoughts - if you have an alternate suggestion - please reply quickly because I'm locking it down.

I know there might be dimming curves already in existence. If I find the current structure acceptable - I may just use it.

To me a dimming curve is a simple translation array, 256 in size. In the output stage - once an illumination value is acquired - it is used as an index into this array to acquire the illumination value the user really wants.

Each dimming curve will reside in its own XML file. The file name defines the intent of the dimming curve (example - RedLED, Incandescent, BlueLED, etc).
The dimming curve file can contain the complete 256 byte array - or only those values that deviate from the default - default being same value out as in.

<dcv>200,235</dcv> .... this is an example of an entry in the dimming curve file - take incoming illumination value 200 and output it as 235.

These dimming curves would then be associated with a channel. If you don't use one - you get the default.

Your thoughts?

Joe
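
Joe's proposal - a 256-entry translation array with sparse overrides falling back to the identity default - can be sketched as (illustrative only; names are made up):

```python
def build_curve(overrides):
    """Build the 256-entry translation array from sparse <dcv>in,out</dcv>
    entries; any input not listed keeps the default (output == input)."""
    table = list(range(256))
    for src, dst in overrides:
        table[src] = dst
    return table

def apply_curve(table, level):
    """Output-stage lookup: the acquired illumination value indexes the array."""
    return table[level]
```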

budude
02-08-2012, 11:55 AM
That is how dimming curves are implemented in Vixen 2.5. A tool pops up with an initial linear graph from 0 to 255 (left to right) and you adjust that curve how you like and save it (you can save multiple curves) and then you can apply any saved curve to any one channel. It's very simple to use. Typically you would build one per light strand brand/type.

JHinkle
02-08-2012, 12:13 PM
Thanks.

I was taking the easy way out for now - just use a file that someone can create in an editor or from a dimming curve generator program.

How does Vixen save its dimming curves? What format?

Joe

JHinkle
02-08-2012, 12:33 PM
That is how dimming curves are implemented in Vixen 2.5. A tool pops up with an initial linear graph from 0 to 255 (left to right) and you adjust that curve how you like and save it (you can save multiple curves) and then you can apply any saved curve to any one channel. It's very simple to use. Typically you would build one per light strand brand/type.

If I do an internal curve generator - I would probably provide the user with 256 points and allow the points to be moved/positioned.

My first reaction was to use lines - the default curve is just a simple line.

My concern is that to properly curve the illumination response of a type/style of LED/lamp - the curve is not linear in nature. Using line segments would also mask the true nature of the dimming response --- so I think it needs to be done point by point - producing something more like a logarithmic response in looks.

Joe

budude
02-08-2012, 02:00 PM
If I do an internal curve generator - I would probably provide the user with 256 points and allow the points to be moved/positioned.

My first reaction was to use lines - the default curve is just a simple line.

My concern is that to properly curve the illumination response of a type/style of LED/lamp - the curve is not linear in nature. Using line segments would also mask the true nature of the dimming response --- so I think it needs to be done point by point - producing something more like a logarithmic response in looks.

Joe

Yes - there are 256 points to adjust and the default is a simple straight line (i.e. 10 = 10, 126 = 126, etc) in Vixen. And yes - ideally you use some light metering equipment and adjust the range accordingly to truly get 256 steps from full off to full on (at least the perception of it), but even just a simple readjustment of the curve to give more levels at the lower limits and fewer at the top end helps a bit. The initial reason for having the curves was to make incan and LED lighting as similar as possible when mixed together. This way, when folks did their "Amazing Grace" whole-house fades or something, you got somewhat even results across all light types.

JHinkle
02-08-2012, 06:21 PM
For everyone using addressable RGB strings.

How do you intend to use them?

Are you going to create a plane (wall) of them and want to process stuff like video, pictures, walking text, etc across it ......

OR

Are you using them as low count strings and want some special effects (around windows, spiral mega-tree, etc).

Please tell me what YOU are doing - so I can sleep on it.

Thanks.

Joe

boarder3
02-08-2012, 06:44 PM
JHinkle, thanks for your contribution to the sequencing world. If you were to make a matrix editor you would have the ultimate software. There is nothing but Madrix and LightFactory that can do it. A lot of people would love to take the RGBs and make screens out of them, or just use video transitions in their show. I have three E681 boards plus Ren24 boards, and I'm currently testing Vixen 3 - if you need another tester let me know.

JHinkle
02-08-2012, 06:52 PM
That's exactly what I am thinking about!

So what matrix size are we talking about?

Joe

angus40
02-08-2012, 07:48 PM
Would it be possible to give the user an option to select a matrix from, say, 2x by 2y up to, say, 48x by 48y, and make the matrix mappable?

JHinkle
02-08-2012, 07:57 PM
I'm trying to figure out how big those numbers need to be to help 95% of the users.

Joe

boarder3
02-08-2012, 09:19 PM
I would be happy to just be able to make a matrix about 60x60 inches in size and be able to put images, video and text in it. Also, if we can get everyone to follow a set mega tree like this one: http://www.youtube.com/watch?v=bPgqNXjf-MY - I set my tree up exactly like it but can't do these effects without a matrix.

angus40
02-08-2012, 09:30 PM
Check this mega tree out, but scale it down to fit the E680/1 controller (x3). Say 48 strings @ 42 nodes per string - that would give a matrix size of 42x48, with every 4 strings requiring 1 universe/512 channels. 16 universes would be needed total.

It would be great if this software was capable of doing this.

http://www.youtube.com/watch?feature=player_embedded&v=9hNorlgJcQs
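
The string-to-universe layout angus40 describes works out to a simple address calculation; a sketch (assuming 3 channels per RGB node and 1-based universes/slots - the function name is made up):

```python
def dmx_address(string, node, nodes_per_string=42, strings_per_universe=4):
    """Map (string, node), both counted from 0, to a 1-based E1.31
    (universe, slot) pair. 4 strings x 42 RGB nodes uses 504 of the 512
    slots available in a universe."""
    universe = string // strings_per_universe + 1
    slot = (string % strings_per_universe) * nodes_per_string * 3 + node * 3 + 1
    return universe, slot
```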

JHinkle
02-08-2012, 09:32 PM
How many strings on a mega-tree?

How many lights per string.

Joe

boarder3
02-08-2012, 09:40 PM
10 strings of 16 high, up/down/up; I cut 2 off, so 48 pixels per string.

JHinkle
02-08-2012, 10:26 PM
10 strings of 16 high, up/down/up; I cut 2 off, so 48 pixels per string.

Please explain - I did not follow at all.

Joe

boarder3
02-08-2012, 10:32 PM
OK - figure 10 strings of 50 GE pixels. I cut off 2 pixels to give me 48 total per string. Just like the mega tree in the video, I go up 16 pixels, then down, then up. So one string does 3 strings - or at least it looks like 3 strings. So it looks like I have 30 strings but it's really 10. It looks great, and if we had a matrix editor we could do exactly what is done in the video.
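
boarder3's fold can be expressed as a grid-to-string mapping; a sketch (the function name and the bottom-up row numbering are assumptions - each physical 48-pixel string covers three visual columns of 16):

```python
def pixel_index(visual_column, row, fold_height=16):
    """Map a (visual column, row) cell to the index along one physical
    48-pixel string folded up/down/up. visual_column is 0..2 within the
    fold; row 0 is the bottom. Even segments run upward, odd run downward."""
    segment = visual_column % 3
    if segment % 2 == 0:
        return segment * fold_height + row
    return segment * fold_height + (fold_height - 1 - row)
```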

JHinkle
02-08-2012, 10:43 PM
May I ask the distance between pixels - 12 inches (which would give a little under a 16 ft tree) - or are they closer together?

Thanks

Slite
02-09-2012, 03:23 AM
How many strings on a mega-tree?

How many lights per string.

Joe

That's like asking "How long is a string?" :)

People are doing mega trees with lots of different numbers of strings and lights per string. Most use a multiple of 8 strings though, so 8, 16, 32, 64 are the most common values for number of strings.

As for lights per string, that's more difficult to specify. Some use one string from top to bottom, some use strings that go bottom->top->bottom (looping back down), and there are also examples of bottom->top->bottom->top (i.e. one string doubles up as three strings).

And then it comes down to, how long is a string? :)

But "most" pixel strings are 25, 50, 75 or 100 pixels per string... You can however cut them to your own length as you go along.

miw01
02-09-2012, 03:52 AM
Joe

smeighan is doing some interesting work on RGB Sequence building and has posted on this forum, link to his site is here

http://meighan.net/seqbuilder/index.php

Mike

Zeph
02-09-2012, 06:01 AM
Joe,

There are two types of light curves which matter. They often get confused.

One is about matching incans and LEDs, ideally linearizing them.

1) LED light output is fairly linear with average current once you get above the minimum Vfwd it takes to conduct, but incandescents have more complex higher power curves with time delays.
2) If AC powered with equal-size time slices, the time slices near the zero crossings conduct less power to the lights than time slices near the sine wave peaks. If DC powered, this doesn't apply.

The other independent aspect is the eye's perception, which tends to be ratiometric rather than linear (i.e. it's more logarithmic, or exponential, depending on which end of the telescope you look down). This shows up even in PWM modulation of DC LEDs (e.g. 2801-controlled constant current pixels), where LED brightness is actually pretty linear with pulse width - none of the incan high-exponent curves or AC phase issues.
3) Nevertheless, even with a very linear light, going from 3/255 to 4/255 is a notable jump while going from 203/255 to 204/255 brightness is invisible (even avoiding the extreme ends of the scale, that is).

As it turns out, the inherent non-linearities of the incans partially compensate for the eye's non-linearities (they don't really have the same curves, one being exponential and the other a high-order polynomial, but the curves at least bend in the same directions), while LEDs don't. They are all too linear! (once they are above Vfwd and conducting, at least)

The same light curve (lookup table) can potentially be used to compensate for both factors (light source non-linearities and human eye perceptual non-linearities), but they are different concepts; it's easy to confuse or conflate them.

One common and simple approximation, which can be tweaked to some degree to help either problem, is gamma curves. Remarkably, the one floating-point parameter creating the curve can improve many situations. Obviously no single-parameter curve can have the flexibility of an arbitrary lookup table which can map anything to anything, though.

RJ's the expert on linearizing a variety of incans and LEDs; I believe he used a light meter to measure actual output for various models and incorporated the empirical results into Lynx Express tables. On the other hand, Jim St Johns uses gamma curves on the e68x, where he's starting with nice linear DC PWM pixels and thus only sets out to compensate for the eye. Not just Vixen, but many photo editing programs allow you to create an arbitrary curve which is really a 256->256 lookup table (the curve may be constrained to being monotonic).
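
The one-parameter gamma curve Zeph mentions produces exactly the kind of 256-entry lookup table under discussion; a sketch (illustrative, not anyone's production code):

```python
def gamma_table(gamma=2.2):
    """Build a 256-entry lookup table from a single gamma parameter,
    compensating for the eye's roughly ratiometric (non-linear) response.
    gamma > 1 devotes more output resolution to the dim end."""
    return [round(255 * (i / 255.0) ** gamma) for i in range(256)]
```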

Zeph
02-09-2012, 06:13 AM
I'm implementing Dimming Curves.

To me a dimming curve is a simple translation array, 256 in size. In the output stage - once an illumination value is acquired - it is used as an index into this array to acquire the illumination value the user really wants.

Each dimming curve will reside in its own XML file. The file name defines the intent of the dimming curve (example - RedLED, Incandescent, BlueLED, etc).
The dimming curve file can contain the complete 256 byte array - or only those values that deviate from the default - default being same value out as in.

<dcv>200,235</dcv> .... this is an example of an entry in the dimming curve file - take incoming illumination value 200 and output it as 235.

Joe

I would not suggest the input,output pair style. The reason is that it's well designed to handle sparse exceptions, but less so for representing continuous curves. Basically, once you add one pair to the list, more than likely you will need to adjust everything above or below it; the output=input default immediately ceases to make sense. With pairs, it's too easy to create gaps, or to double-specify the output for the same input (yes, you can have first-wins or last-wins rules, but why even create the conflict in the first place?).

So the simplest approach is to expect a 256-byte table, providing the output equivalents for the implicit input values of 0..255. Fill to 255 with the last value if there are fewer than 256 values. No table means output=input. There are no gaps, no double specifications, no redundancy.

A more complex approach would be to use pairs, but interpolate between them. So if you get 64,128 as the only pair, interpolate 0..64 inputs to 0..128, and then 65..255 to 129..255. Sort the pairs by input value, then interpolate. This allows a sparse set of points to approximate an arbitrary curve, again without gaps.

Zeph
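
Zeph's interpolated-pairs approach can be sketched as follows (an illustration of his idea, with the end anchors pinned at (0, 0) and (255, 255) - that detail is an assumption, as is the function name):

```python
def curve_from_pairs(pairs):
    """Sort the sparse (input, output) anchor points, pin the ends at
    (0, 0) and (255, 255), and linearly interpolate a gap-free 256-entry
    lookup table between consecutive anchors."""
    anchors = sorted(set(pairs) | {(0, 0), (255, 255)})
    table = []
    for x in range(256):
        for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
            if x0 <= x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / float(x1 - x0)
                table.append(int(round(y0 + t * (y1 - y0))))
                break
    return table
```

With Zeph's single pair (64, 128), inputs 0..64 interpolate to 0..128 and 65..255 to roughly 129..255.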

JHinkle
02-09-2012, 09:07 AM
Zeph:

Nice write-up for those who may not understand what a dimming curve is attempting to do and all of the limitations/trade-offs that come with it.

Nice Job.

Joe

JHinkle
02-09-2012, 02:19 PM
I've got Dimming Curves implemented.

Use them if you like or not at all.

Joe

Slite
02-09-2012, 02:21 PM
Joe

I just have to say I admire your effort and if you keep this up, you'll give the community one heck of a tool down the road.

Thank you so much!

JHinkle
02-09-2012, 03:50 PM
Here is a short video on Dimming Curves and Effect Library.


http://youtu.be/qYSaSB5Dioc

Joe

boarder3
02-09-2012, 05:36 PM
JHinkle, my tree height was determined by the length of 16 GE pixels, just like in the video: I strapped pixel one, went up 16 GE pixels until tight, and that was my height. But is there a possibility of adding a matrix editor to your software? And do you need any more testers? I still have 2 boards out for testing of Vixen 3. Vixen 3 doesn't look like it's gonna have matrix ability, and I'd really love to do some nice effects with my mega tree and make some sort of a sign out of pixels.

JHinkle
02-09-2012, 05:46 PM
I hear people say they need a matrix editor to do trees - but I'm having a hard time wrapping my head around how someone takes a square grid matrix and easily sequences a tree; the tree being of a conical form, where a grid matrix better represents a plane.

I have been thinking a lot on this topic - and it appears two types of editors are required: a matrix for those using RGB in a plane or long strings, and a radial-style editor for trees.

I'm still sleeping on this one.

Joe

boarder3
02-09-2012, 05:52 PM
I'm not sure which is better - I was just told that to do those effects in that video, I need some sort of matrix editor. But I do want to do a sign with video and text, which I'm sure would use a normal flat matrix.

Zeph
02-09-2012, 05:54 PM
But a pixel based tree typically is a regular 2d array conceptually, along with the concept of wrapping one axis around. It's just a flat grid rolled into a cylinder and then squeezed on one end :-)

When I've seen videos of text or images (eg: a sleigh) moving on a pixel tree, they usually concentrate on the lower part of the tree and do not seem to take into account the shrinking geometry aspect (ie: they treat it as a horizontally wrapped flat grid or cylinder). With enough pixels you could, for example, make letters wider at the top to compensate for the cone vs cylinder, but unless you have a lot of pixels that would cause more damage (jaggies) than improvement - and could be more distorted from some viewing angles.
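
Zeph's rolled-grid view makes the horizontal wrap-around a one-liner; a sketch (names are assumptions):

```python
def tree_string(grid_x, scroll_offset=0, num_strings=48):
    """Treat the tree as a flat grid rolled into a cylinder: the horizontal
    axis wraps, so a scrolling pattern leaving one edge re-enters on the
    other side."""
    return (grid_x + scroll_offset) % num_strings
```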

dirknerkle
02-09-2012, 05:56 PM
Looks pretty good so far. On your effects library, have you considered inserting a timebase? For example, if a person creates the effect based on a specific BPM (beats per minute) tempo, they obviously are trying to match the visual effect to the musical effect. Very often, users also use one or more tracks as a "beat track" for the music the sequence is designed to use. In a sense, the beat track is a visual timebase, or at least a form of it.

Inserting an effect that was made for one sequence/music matched pair that is subsequently transplanted into a sequence having a different tempo probably won't match the new sequence's beat. I understand that you have the ability to stretch a sequence, but it would be nice to be able to match the timebase from the source to the timebase of the destination. If done visually, the user would just have to match up the timebase markers and it would be in perfect sync. This would be vastly preferred to the trial-and-error method.

Just an idea...
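The timebase idea above boils down to scaling event times by the ratio of the two tempos. A minimal sketch - the function name and the millisecond event list are made up for illustration, not any actual sequencer's API:

```python
# Sketch of the timebase idea: an effect authored at one tempo is rescaled
# so its events land on the beats of a sequence with a different tempo.

def retime_events(event_times_ms, source_bpm, target_bpm):
    """Scale event timestamps from one tempo to another.

    A beat lasts 60000/bpm milliseconds, so the stretch factor is just
    source_bpm / target_bpm (a slower target tempo stretches the effect).
    """
    factor = source_bpm / target_bpm
    return [round(t * factor) for t in event_times_ms]

# An effect authored at 120 BPM (one event per beat = every 500 ms),
# transplanted into a 100 BPM song, should fire every 600 ms instead.
print(retime_events([0, 500, 1000, 1500], 120, 100))  # [0, 600, 1200, 1800]
```

Matching visible timebase markers, as suggested, is just a visual way of picking that same factor.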

angus40
02-09-2012, 05:57 PM
Basically the matrix is vertical or horizontal strands. Picture one x-y location on the grid being 1 pixel or node, configured as either 1, 2, 3, or 4 mappable channels.

Here is a screen shot of a 42-node x 48-string mega tree.

JHinkle
02-09-2012, 05:58 PM
How many pixels are you going to have? Single character height text can be done with 8 - but 10-12 looks better. Video - I would think - needs a resolution of 100+, or am I wrong?

JHinkle
02-09-2012, 06:04 PM
So what very specific features are people looking for in a matrix editor?

boarder3
02-09-2012, 06:05 PM
I will do as big as I can - I'll do 30 high, I don't care. I figured I'll take one of my 681 boards and max it out with pixels.

JHinkle
02-09-2012, 06:07 PM
Looks pretty good so far. On your effects library, have you considered inserting a timebase ... it would be nice to be able to match the timebase from the source to the timebase of the destination.

Just an idea...

Nice idea.

boarder3
02-09-2012, 06:14 PM
You could be our only hope for some sort of matrix, since Vixen doesn't seem to be working on it and LSP doesn't work for most people. If you could make a video with text and import it into a matrix editor, that would be insane.

angus40
02-09-2012, 06:16 PM
Here is a good demo :)

http://www.youtube.com/watch?v=aArcIiAbrD4&feature=related

aussiephil
02-09-2012, 06:31 PM
I hear people say they need a matrix editor to do trees - but I'm having a hard time wrapping my head around how someone takes a square grid matrix and easily sequences a tree ...

I'm still sleeping on this one.

Joe

A Megatree for all intents and purposes is a Flat Plane Grid Matrix wrapped around a Conical Form... now if you want to do perspective/keystone corrections for the conical shape, then you would need to do more maths.

Cheers
Aussiephil

JHinkle
02-09-2012, 06:46 PM
I've looked at some of the Madrix videos that have been sent to me.

I consider that fairly easy since it requires little to no user interface - you tell the computer to process music through some dsp filters and the output from the filters drives channels and colors.

What I want to know - is if YOU want to design a custom sequence that requires hand manipulation of pixels (like a new mega-tree spiral effect) - what do you see yourself doing and what do you need to do it well?

boarder3
02-09-2012, 07:37 PM
Do you think you will be able to import videos into it, no problem? That would save a tremendous amount of time when scrolling text and making our pixel grid look like a TV.

JHinkle
02-09-2012, 07:43 PM
Scrolling text - no problem.

Video??? I will have to investigate how to get access to video frame data - never done that before.

Joe

boarder3
02-09-2012, 08:02 PM
this might help for video
http://auschristmaslighting.com/forums/index.php/topic,438.msg3063.html#msg3063

JHinkle
02-09-2012, 08:17 PM
That was a very helpful link.

Is that what you guys are looking for?

JHinkle
02-09-2012, 08:25 PM
OK - explain how you would address this.

I've got to properly match grid points to channels.

I'm fine with a single string being horizontal or vertical.

What about the string that goes up and down several times?

I would have to have you configure each row or column - and tell me if single or looped. Where the head starts and ends.

Does that sound right?
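The "single or looped" configuration described here is essentially the serpentine layout Madrix calls snake mode. A sketch of that grid-to-pixel mapping, under assumed conventions (the string starts at the bottom of column 0 and alternate columns run the opposite direction) - the function and its parameters are illustrative, not any tool's actual config format:

```python
def grid_to_pixel(col, row, n_rows, start="bottom", snake=True):
    """Map a grid cell to a pixel index along one zig-zag string.

    Assumes the string runs up column 0, down column 1, up column 2, ...
    (snake mode). With snake=False every column runs the same direction,
    as if each column were a separate straight string.
    """
    if start == "top":
        row = n_rows - 1 - row    # flip so row 0 is where the string starts
    if snake and col % 2 == 1:
        row = n_rows - 1 - row    # odd columns run the opposite way
    return col * n_rows + row

# 4 columns x 5 rows, string starting at the bottom-left:
# bottom of column 0 is pixel 0, TOP of column 1 is pixel 5 (the fold).
assert grid_to_pixel(0, 0, 5) == 0
assert grid_to_pixel(1, 4, 5) == 5
```

Per-row/column configuration, as proposed, would just pick the `start` and `snake` values for each run.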

boarder3
02-09-2012, 08:27 PM
I'm not sure if that's what is needed - I just remember seeing that about using video and thought it would help. Hey, if I'm able to set up a grid of RGBs and make it play text and video, I'm all for it. No need for Madrix or anything else.

boarder3
02-09-2012, 08:29 PM
Most people would have to set up strings as up, down, up, down.

angus40
02-09-2012, 10:42 PM
For up-down as one string or a folded string, in Madrix you are able to select this feature in the matrix generator - they call it snake mode.

here is a pic.

jrock64
02-09-2012, 11:26 PM
There is no set rule.
They may start on the top or the bottom.
On the left or the right.
They may be straight runs in only one direction.
They may zig - zag
or they may zig - zag - zig.

Anything is possible and has to be expected.

The E68x hardware can take all the kinks out of your strings and turn them into only straight runs.
Not all hardware can do this.

I have a 24w x 21h pixel matrix that starts in the lower left corner and zig-zags bottom to top to bottom.
My tree will be 24w x 25h matrix that starts in the lower left corner and zig-zags bottom to top to bottom.
My LEDancer sticks will form a 56h x 32w matrix, bottom to top only.

I would not worry about compensating for the distortion introduced in a mega tree,
though it would be nice if the preview could represent how it will actually look.

If you are going to break the video down into frames to apply to event periods, the option to apply a video, or a directory of individual files, would give maximum flexibility.

There really has to be some kind of animation program out there we could use to create the graphics.
You do not have to write everything yourself.:biggrin:

I think you are having way too much fun doing this.
You need to get a hobby.:thup2:

JOel

JHinkle
02-10-2012, 12:05 AM
If you were to play video - what frame rate?

Can it be played at a 25 or 50 msec rate?

Joe
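As a rough sketch of what playing video at a fixed event rate involves: frames arrive at the camera's rate (about 33 ms apart at 30 fps), and each 25 or 50 ms event period simply shows the nearest earlier frame, skipping or repeating frames as needed. The function and its names are illustrative assumptions, not any tool's actual pipeline:

```python
def frames_for_events(n_frames, frame_ms, event_ms, duration_ms):
    """Pick which video frame to show in each sequencer event period.

    Video shot at ~30 fps has a frame every ~33 ms; a sequencer running
    50 ms event periods shows the nearest earlier frame, so some frames
    are skipped. At a 25 ms event rate frames get repeated instead.
    """
    picks = []
    t = 0
    while t < duration_ms:
        idx = min(int(t / frame_ms), n_frames - 1)  # nearest earlier frame
        picks.append(idx)
        t += event_ms
    return picks

# One second of 30 fps video (30 frames) resampled to 50 ms event periods
# gives 20 event cells, with roughly every third frame dropped.
picks = frames_for_events(30, 1000 / 30, 50, 1000)
assert len(picks) == 20
```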

jrock64
02-10-2012, 12:17 AM
fast forward to 0:35
those were produced by Vixen with a 50ms timing

http://www.youtube.com/watch?v=Q-b_Oc_5gmI

the spiral at 1:10 is so smooth you can't even see it move.
To me, the max the eye can see is 50ms, so why bother with more.

JOel

angus40
02-10-2012, 12:23 AM
joel how close were you when you video'd this ?

Could you record from further away ? does it get clearer at a distance ?

JHinkle
02-10-2012, 12:23 AM
Can someone provide me with a file that contains the type of data you want to show?

I want to see if I can process it - and play.

Joe

jrock64
02-10-2012, 01:18 AM
That video was taken from only 10 to 12 feet away.
I have a bad habit of doing a maximum zoom on anything I am filming.
I hate black frames, it seems a waste.
Those pixels are on 4in centers, and the pixels are so bright they always overpower my camera.

The files were already in AVI format, so they are already individual frames.

Here is the vertigo spiral - change the .binary extension to .avi.
If you can make this look good, you can do anything.

jrock64
02-10-2012, 01:48 AM
If you are an overachiever, see this thread.
http://forums.planetchristmas.com/index.php?/topic/50051-led-video-curtain/page__fromsearch__1

I can't even count the number of pixels.

Joel

frankv
02-10-2012, 02:00 AM
Generally speaking, all displays are a 2D plane... they're designed to be viewed from a limited range of locations.

So a mega-tree is really a triangle, not a cone.

NB: Most of my strings are multi-channels on a single string... typically one string is 2 or 4 channels, 10 metres long, with 100 LEDs. I think in the USA they also have 3-channel strings.

2-channel strings have a channel controlling alternate LEDs. These LEDs may not all be the same colour... multi-colour strings have red and yellow as one channel, and green and blue as the other. 4-channel strings have every 4th LED on a particular channel.

It would be awesome to be able to draw a multi-coloured line on the screen, over a (darkened) photo of my house as a background, to be able to preview what the show will look like from the planned viewpoint.

I also have quite a few strings which are not computer-controlled, generally are just running some kind of random blinky pattern. It would be nice to see these on the preview.

JHinkle
02-10-2012, 07:14 AM
Generally speaking, all displays are a 2D plane... 2-channel strings have a channel controlling alternate LEDs. ... 4-channel strings have every 4th LED on a particular channel.

Enlighten me.

Maybe this is where my newbie knowledge is shining through:

I currently support the following - because that's all I thought was out there.

1. Single channel string (most likely 95% of what is in use today).
2. Three channel dumb RGB strings. I let you design as a single channel, then break out either RGB or BGR channels in the output stage.
3. Considering addressable pixel three channel RGB strings.

Please explain these 2 and 4 channel strings in more detail.

Thanks.

Joe

stenersonj
02-10-2012, 10:31 AM
Joe allowed me to do some beta testing of HLS. I spent about 20 minutes developing a simple test sequence in HLS, connected it to a Renard, and it worked. I am impressed with how capable it is already.

Zeph
02-10-2012, 02:58 PM
Joe,

Many people wrap 2 to 4 single color light strings (originally mostly 120v incan strings, but they can be LED too) into a single thick strand of a mtree (or a segment of an arch). Two colors are often R+G for Christmas. Three colors are often Red, Green and Blue. Four colors are often R, G, B and White. But of course you could use orange and purple instead if you want.

These lights still often show as Red lights plus Green lights (for example), rather than being intimately blended into a different color as true RGB lights are (the latter being within the same diffuser). That's why they would sometimes bother having White lights in the multistrand too, even if they already have Red, Green and Blue.

Logically you could consider these twisted multistrands as adjacent strands (so a 12-strand RGBW tree would be 48 strands in R G B W color sequence); that they are physically wrapped into one bundle is a mere real-world detail which the software would not NEED to care about.
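That last paragraph in code form - enumerating a hypothetical 12-bundle RGBW tree as 48 logical channels, with an assumed ordering convention (all four colors of bundle 0, then bundle 1, and so on; the names are made up for illustration):

```python
# A 12-strand RGBW mtree is, logically, 48 channels. The physical
# twisting is invisible to the sequencer; only the ordering convention
# (here: one bundle's colors together, bundle by bundle) matters.

COLORS = ["R", "G", "B", "W"]  # one single-color string per entry, per bundle

def channel_index(bundle, color):
    """Channel number for one color strand in one twisted bundle."""
    return bundle * len(COLORS) + COLORS.index(color)

channels = [(b, c) for b in range(12) for c in COLORS]
assert len(channels) == 48
assert channel_index(0, "R") == 0 and channel_index(11, "W") == 47
```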

JHinkle
02-10-2012, 03:08 PM
Zeph:

The only RGB I'm concerned about are the single string lights that require 3 channels and can produce the illusion of colored pixels.

What you described, to me, are just separate light strings which have a color the user would like to identify (I do nothing special with them except dimming).

The true RGB, first sentence of this post - I provide a 3 to 1 reduction in channel count as far as designing the sequence goes.

Joe

rstehle
02-10-2012, 03:51 PM
Not to steer this in a different direction, but for those 90% of us that aren't using pixels, one of the things we probably do the most of is 'chasing' - flowing from one element to another with adjacent ramps/fades. This effect is used almost all the time when working with mini trees, or the segments of an arch or Mega Tree. Will your new sequencer have the capability of creating 'chases' for elements without the user having to enter into each cell individually? Sorry if I missed this earlier.
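For readers wondering what generating such a chase automatically might look like, here is an illustrative sketch only - the cell grid, the ramp lengths, and the 0-100 intensity scale are assumptions, not how any particular sequencer stores its data:

```python
def build_chase(n_channels, ramp_cells, offset_cells, total_cells):
    """Fill a grid of intensity cells with a simple chase.

    Each channel gets a ramp-down (100 -> 0) that starts offset_cells
    later than the previous channel's, then the pattern repeats - the
    effect described above as hand-entering cell by cell.
    """
    period = n_channels * offset_cells
    grid = [[0] * total_cells for _ in range(n_channels)]
    for ch in range(n_channels):
        for t in range(total_cells):
            phase = (t - ch * offset_cells) % period
            if phase < ramp_cells:
                # linear ramp from full brightness down to off
                grid[ch][t] = round(100 * (1 - phase / ramp_cells))
    return grid

# 4 mini trees, 5-cell ramps, each tree starting 5 cells after the last:
grid = build_chase(4, 5, 5, 40)
assert grid[0][0] == 100 and grid[1][5] == 100  # tree 2 fires one offset later
```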

JHinkle
02-10-2012, 03:58 PM
Just a click of a button.

Joe

Traneman
02-10-2012, 04:10 PM
Arrrrghhhhh... reading this thread is like your neighbor having a BBQ, the smell is blowing your way,
and you're not invited LOL

JHinkle
02-10-2012, 04:18 PM
Depending on what the Beta testers say - I may very well open a beta copy to the forum next week.

Joe

boarder3
02-10-2012, 06:40 PM
I'm not sure if you tried Vixen 3.0, but they have a chase effect with a curve editor that you can put points in, so you could stretch it across 6 seconds, ramp it 3 times, and even reverse it just by dragging the line from bottom to top on the right and from top to bottom on the left. Do you have, or can you do, something similar? That is a great feature that saves a lot of time.

JHinkle
02-10-2012, 07:01 PM
I'm not trying to duplicate Vixen.

I have not even run Vixen 2 or Vixen 3. I did implement the V3 output module so I could attempt to ask some questions on the Vixen 3 Beta thread back in December when I first found the site.

I wanted to take a fresh - unbiased approach to the sequencing issues I saw myself facing in this hobby.

Sounds like Vixen has a very nice feature - sounds like it does more than mine.

I think we are similar in some areas and very different in others.

Once I get out of development mode and into user mode - I may find that I want to modify my chase to better fit the needs/issues I face.

Time will tell.

Joe

boarder3
02-10-2012, 07:37 PM
You should check it out - it's one of the best features I've used thus far. You can just drag it to the timeline, drag it to more or fewer seconds, then click on it, add intervals, and make it start at one end, then start at the other end, then go back again - all from one effect. They also have a spin, which is the same as a chase, but you can say spin 2, 3, 4, 5 - however many you want. Adding these features to yours wouldn't be copying - it's just a feature that I think every editor should have to make your life easier. The fact that you are even working on something is a relief to us all; Vixen 3.0 looks great but doesn't seem to be going anywhere. An editor should really be ready by May or June if you really want to use it in a big show. Heck, I'm already starting some stuff for this year, and I bet a lot of users are doing the same. My bet is pixels will go from 3 percent to 60 percent of users implementing them, and will only grow from there - that's why a matrix editor would give your software an edge over all. Thanks for your work so far.

JHinkle
02-10-2012, 09:39 PM
I'm all for implementing features that make a difference.

As a newbie in this area, I have yet to create a "real" chase for a "real" sequence.

Once I am faced with producing results, I will modify the design to make life easier.

It's the chicken and the egg - you can't design a product unless you understand how it is expected to work.

That's one of the reasons I ask a lot of questions.

Any insight and suggestions are welcomed.

What I really prefer is not "this is how this tool works" (please - don't take that statement with respect to what you stated - just a comment) - but tell me what the issue is - in detail - and what you think needs to be done to address it.

There are many ways to address issues and/or create features - it just depends on your understanding of all the facts.

The Vixen team has a very good understanding of all the chase requirements - me - I'm still learning.

Thanks again.

Joe

smeighan
02-11-2012, 01:21 AM
Hi Joe;

Your sequencer looks cool!

I started building a tool to create sequences for RGB devices. One difference is my tool is web based (php+mysql). It is meant to create an animated sequence and then produce the xml file for either LOR, Vixen or LSP.

You can see what I have so far at http://meighan.net/seqbuilder/index.php

I will share with you the new part I am working on - it may help you visualize the different targets that have been made out of rgb strings.

http://meighan.net/seqbuilder/main_login.php

use
login: smeighan
password: welcome1

I am still writing code, so it may be bumpy for the next few days.

You can use user 'guest' and password 'welcome1' to try out the software. This code is still alpha (I just wrote it yesterday).

I thought the various prompts and pictures might help you. Feel free to create and save. I will be emptying the database and reloading, so whatever gets put in right now is play data.

When you save, here is what the tool does. It builds a target to your dimensions and assigns every pixel to its place. The X,Y,Z locations of those pixels are saved in an array. Now that I know the physical shape of the tree, I can do things like animating circles, snowflakes, text, etc.

Feel free to reach out to me and I will share the algorithms I have developed so far. I am about 100 hours into the project. I am expecting it to take 1000 hours to finish by this summer.

thanks
sean

frankv
02-11-2012, 04:25 AM
Enlighten me.

Please explain these 2 and 4 channel strings in more detail.


Joe, I'm no expert either. My terminology is that a 'string' is a physical thing, whereas a 'channel' is an electrical thing. When I put up the lights, I hang a string on the house. When I wire up the lights to the Renard, I connect each channel to a Renard output.

Typically, these are the cheap Chinese strings of LEDs, usually with hair-thin conductors, clear unstrippable insulation, and unreliably crimped connections. They come with an 8-function controller wired to one end (which I throw away). But they are CHEAP.

A 2-channel string is physically wired up as

----A-------------B------------A------------B-------------
-----------C------------D-------------C--------------D----

There's also a common wire. The wires are twisted together to make a single 'string'.
The A-B wire is one channel, so all A and B LEDs are the same brightness, and can be controlled independently of the other channel. The C-D wire is the other channel.
On multicoloured strings, the LEDs A & B are Red and Yellow, and LEDs C & D are Blue and Green.
I guess that, conceptually, you might think of this as a string of Red and a string of Yellow, both controlled by the same channel, and offset from each other by the distance A-B. And similarly strings of Green and Blue, offset from each other by distance A-B, and from the Red-Yellow string by 0.5 * A-B.
On single-coloured strings A, B, C, and D are all the same colour (obviously!), but there's still the same wiring of alternate LEDs into 2 channels.

A 4-channel string is wired up as

----A--------------------------A--------------------------
------------B-------------------------B-------------------
-------------------C-------------------------C------------
-------------------------D--------------------------D-----

Again, there's also a common wire, and the wires are twisted together to make a single 'string'.
All 4 wires are separate channels, and can be controlled independently.
On multicoloured strings, LED A is Red, B is Green, C is Yellow, and D is Blue.
On single-coloured strings A, B, C, and D are all the same colour (obviously!), but there's still the same wiring.
So turning on/off channel A, B, C, D in turn creates a chase sequence along the string.
Whether they're all the same colour or different, there's no particular new concept here... just 4 wires along the same line, each translated from the previous by 0.25 * distance A-A.

http://www.dhgate.com/8-display-modes-multicolour-100-led-string/p-ff80808133cfdacc0134174308ea337a.html?recinfo=8,32,6 is an example. Looking closely at the controller, this is a 2-channel string, since there are 5 wires going to the controller (P & N from the mains plug, 2 channels plus a common).

As someone else commented, the difference between these and simple strings of LEDs is just their physical position. That's irrelevant to your sequencer, UNLESS you have a preview mode where you're trying to display what the display will look like.
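Both wiring diagrams above reduce to a modulo rule: the n-th LED along the string belongs to channel n mod (number of channels). A tiny sketch:

```python
def led_channel(led_index, n_channels):
    """Which electrical channel drives the n-th LED along the string.

    On a 2-channel string the LEDs alternate between channels 0 and 1
    (the physical order A, C, B, D collapses to 0, 1, 0, 1); on a
    4-channel string every 4th LED shares a channel, so turning the
    channels on in order 0, 1, 2, 3 chases along the string.
    """
    return led_index % n_channels

# 2-channel string: alternating channels along the string.
assert [led_channel(i, 2) for i in range(6)] == [0, 1, 0, 1, 0, 1]
# 4-channel string: 0, 1, 2, 3 repeating.
assert [led_channel(i, 4) for i in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```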

JHinkle
02-11-2012, 08:50 AM
Thanks for the very nice explanation.

I probably did an injustice in asking RGB (I'll explain my definition below) related questions in a thread really related to sequencing the intensity associated with a group of lights. The color associated with these lights has no impact on the sequence editor other than to tell the user "You said red here - make sure you hook up red in the field".

Last Saturday, one of my beta testers stated that he had a large number of RGB "strings" and was inquiring as to my intent to handle those. Being unknowledgeable on RGB technology - I gave a put-off. He persisted in the discussion, educating me along the way, (a little knowledge is very dangerous) to the point where I said I could support that.

The THAT is this: RGB - a physical string of lights that are not pixel addressable. All pixels will display the same color, and the color is determined by the RGB intensity values provided by using 3 channels in the sequencer. He explained that the number of channels grew by a factor of 3 just for displaying a single "string" of color-producing RGB leds. I stated that I could provide a 3 to 1 sequence channel reduction by simply letting the user sequence that RGB channel the same way as someone would sequence any single colored lamp channel - and I would provide control over the color that the string would show.

I have been told that these types of RGB "string" are also referred to as "dumb" RGB, in the sense that all leds produce the same color - they don't have the ability to be addressable and produce different colors.

Now I understand by your statement above that people have created their own version of colored strings. From the sequencer's viewpoint - you can program them as single channels, or tell me they are an RGB type and hook your 3 different physical strings to them ---- the difference is --- when you use my RGB attribute, the defined "RGB color" now drives much of the information to the channels and the intensity levels change the hue. If you were to take a RED led physical string and a GREEN, and a BLUE - and tie wrap them together and call it an RGB string - you can do it, but I don't think the color produced will be as expected.

I created an injustice when I started getting questions, and answering them, and asking more questions - about addressable RGB strings (a completely different animal). I created a new thread to discuss those because I believe my questions (trying to learn) were being interpreted differently by people talking about dumb RGB and those talking about smart (addressable) RGB.

The HLS tool today will sequence standard lights. If you use a string of leds that can show color based on encoding an RGB color - then the tool can provide a 3 to 1 channel count reduction.
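The 3 to 1 reduction described above can be sketched as an output-stage expansion: one sequenced intensity cell plus a configured color becomes three channel values. The function below is purely illustrative - its name, the 0-100 intensity scale, and the order parameter are assumptions for the sketch, not HLS internals:

```python
def expand_rgb(intensity, color, order="RGB"):
    """Expand one sequenced intensity cell into 3 output channel values.

    The user sequences a dumb-RGB string as ONE channel (0-100 intensity
    here, an assumed scale) with a chosen display color; the output stage
    scales the color by the intensity and emits the 3 real channels in
    RGB or BGR order, matching how the string is wired.
    """
    r, g, b = color
    scaled = {"R": r * intensity // 100,
              "G": g * intensity // 100,
              "B": b * intensity // 100}
    return [scaled[c] for c in order]

# Full-on orange, then the same cell dimmed to 50% and emitted BGR:
assert expand_rgb(100, (255, 128, 0)) == [255, 128, 0]
assert expand_rgb(50, (255, 128, 0), order="BGR") == [0, 64, 127]
```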

It appears I will add a matrix capability to HLS so those with smart leds can produce higher level effects.

Thanks - and sorry if I created any confusion.

Joe

JHinkle
02-11-2012, 08:57 AM
Hi Joe;

Your sequencer looks cool!

I started building a tool to create sequences for RGB devices. One difference is my tool is web based (php+mysql). It is meant to creat an animated sequence and then produce the xml file for either LOR, Vixen or LSP.

You can see what i have so far at http:/meighan.net/seqbuilder/index.php

I will share with you the new part i am working on, it may help you visiualize the different targets that have been made out of rgb strings

http://meighan.net/seqbuilder/main_login.php

use
login: smeighan
password: welcome1

I am still writing code , so it may be bumpy for the next few days.

I have imported 4000 user names from DiyLightAnimation and LOR. Uers could use their login names. Passwords are welcome1 for everyone. This code is still alpha (I just wrote it yesterday).

I thought the various prompts and pictures might help you. Feel free to create and save. I will be emptying the database and reloading. so whatever gets put in right now is play data.

When you save here is what the tool does. It builds a target to your dimensions and assigns every pixel to its place. The X,Y,Z location of those pixels are saved in an array. Now i know the physical shape of the tree, i can do things like animating circles, snowflakes, text .etc.

Fell free to reach out to me and i will share the algorithms i have developed so far. I am about 100 hours into the project. I am expecting it to take 1000 hours to finish by this summer.

thanks
sean

Someone had suggested earlier in the thread to check out your site - I did - very nice.

I was going to PM you once I decided to provide matrix programming capability to HLS.

I see someone using your tool to produce a time sequenced mega-tree, exporting that to a file, and then importing that file into a matrix sequencer (like HLS) and presto - you have an instant custom sequenced mega-tree.

I will PM you soon.

Thanks - and again - very nice site.

Joe

JHinkle
02-13-2012, 10:34 PM
I am pleased to provide you access to my Christmas Light Sequencer - HLS.

The link below is to a self installing/extracting installation file.

It contains the HLS.exe program, 2 supporting dlls, 1 VC distribution installation program from Microsoft if you require it, and a HLS - Startup Manual that was created by StenersonJ (to whom I am very grateful for the hours he put into beta testing and writing this release of the manual to help anyone interested in HLS - come up to speed very quickly).

The installer will create a folder called HLS and install all of the files there. It will also place a shortcut to HLS on your Desktop.


http://www.joehinkle.com/HLS/HLS_Install.exe


Since my last video, I have added a playlist and Show scheduler.

I provide (what I call) a compiled illumination output file that can be used to transport the output of HLS to a third party for transmission.

All of my files are of type XML.

Anyone interested in looking at an output file will find it simple and straightforward, with no applied data compression.

The HLS Show player uses these compiled output files instead of having to deal with the hi-level effect structures.

Any who wish to use HLS - you are welcome (I maintain ownership of the program).

I feel I have a very capable sequencer to help me develop my 2012 Christmas display.

Any and all comments are welcome.

Joe

djulien
02-13-2012, 10:39 PM
I am pleased to provide you access to my Christmas Light Sequencer - HLS.

The link below is to a self installing/extracting installation file.

Joe, thanks! What are the system requirements?

don

JHinkle
02-13-2012, 10:49 PM
It has been tested on XP and Windows 7 machines. Sorry - it will only run on Windows.

I have no data at this time regarding cpu speed / memory size and how they may affect performance.

The program is written in native C++. The output thread is running at priority level 15 - (as close as I can get to real-time driver priority in Windows).

This is still in Beta - so treat it as such. It has been put through its paces, but something will show up - not enough hours on it yet.

Again - all those that try it - please provide feedback.

Joe

kychristmas
02-14-2012, 12:03 AM
I was pretty impressed with the first version you provided 2 weeks ago, so I'm anxious to see. Wish I had more time to play, but it will have to wait a week or so.

Thanks for all of the work. I like the alternatives.

frankv
02-14-2012, 04:27 AM
Sorry - it will only run on Windows.


Are you sure Joe?

I'm running Ubuntu Linux 11.10 with Wine 1.3.28, and it installed happily, and ran happily.

I got as far as telling it to use COM1, but with no sample show to try, and nothing connected to my serial port, it's hard to know whether it will play the show OK or not. But it does look promising :)

I then read the documentation :pinch: and tried creating a sequence.

However, I'm stuck on loading the WAV file. (The *.mp3 in the WAV selection dialog confused me, but I've got past that). First I tried a WAV file I had lying around, and got "File does not appear to be a WAV file". :( Then I tried using Audacity to create one, but got the same result. Can you tell me what the requirements for a valid WAV file are?

JHinkle
02-14-2012, 09:42 AM
I went on the net and acquired a free MP3 to WAV converter - there are several.

Take the resulting WAV and identify it to HLS.

I use WAV because of the many things I do with the audio - I don't have to worry about decompression time, etc.

I think this is the one I use:

http://download.cnet.com/MP3-to-WAV-Decoder/3000-2140_4-10060498.html

Joe

JHinkle
02-14-2012, 09:50 AM
Are you sure Joe?

I'm running Ubuntu Linux 11.10 with Wine 1.3.28, and it installed happily, and ran happily.


I'm just using vanilla Windows - so it may very well work.


However, I'm stuck on loading the WAV file. (The *.mp3 in the WAV selection dialog confused me, but I've got past that).


You found my first mistake. I originally started out doing the MP3 conversion to WAV myself - but then a sanity check: why reinvent the wheel? I missed changing the mp3 in the OPEN File dialog.

The software on the download site has been corrected - thanks. New version is 0.C --- look in the lower left corner of HLS for the version identifier.

Joe

Imperialkid
02-14-2012, 09:53 AM
However, I'm stuck on loading the WAV file. First I tried a WAV file I had lying around, and got "File does not appear to be a WAV file". :( Then I tried using Audacity to create one, but got the same result. Can you tell me what the requirements for a valid WAV file are?

I had the same exact issue. I can't remember the exact converter I downloaded, as I am at work. I will have at it again tonight with the one you suggested, Joe.

EDIT: Issue resolved in post #185

JHinkle
02-14-2012, 09:57 AM
When you select the WAV file - it may take 4 or 5 seconds to do what I need to do with the file - so don't be impatient and get click happy - please wait for my response.

I will add a WAIT cursor (if I didn't do it already).

Here's the download link again:



http://www.joehinkle.com/HLS/HLS_Install.exe




Joe

stenersonj
02-14-2012, 10:48 AM
I will send a more complete manual tonight that covers dimming curves, creating playlists, scheduling, and many other features, to be included with the HLS download.

djulien
02-14-2012, 10:57 AM
When you select the WAV file - it may take 4 or 5 seconds to do what I need to do with the file - so don't be impatient and get click happy - please wait for my response.

I will add a WAIT cursor (if I didn't do it already).

Could you load the WAV file using a background thread, so the UI thread remains responsive to user actions?

don

JHinkle
02-14-2012, 11:03 AM
It is being worked on by a worker thread - the UI is still responsive. I acquire data from the audio that is required to move forward - like how long the song is, so I can set up all of the data arrays to the proper lengths.

It's just not loading - I have to wait for my loading results.

It only occurs the first time it analyzes and loads the file (when you initially set up the sequence for the first time) - so it's not every time.

Joe

Here's the download link again:

http://www.joehinkle.com/HLS/HLS_Install.exe

boarder3
02-14-2012, 08:19 PM
I can't get it to load any wav file I have. Do I have to load controllers first or something?
Update: FIXED - also used the step from post #185.

boarder3
02-14-2012, 08:40 PM
Sorry - just watched the video, it helped a lot.

JHinkle
02-14-2012, 08:49 PM
sorry just watched the video it helped alot.
Jon has put a lot of time into the User Manual included in the download package.

Jon walks you through getting a sequence up and running step by step - so there need not be any trial and error.

I am adding Version number to my download so you can easily see if you have the latest.

With that in mind - I will now only provide the download site - you can select the file and download it if you like.

Download Site: http://www.joehinkle.com/HLS

Joe

JHinkle
02-14-2012, 09:31 PM
I'm starting a new thread so people don't get lost in this one.

http://doityourselfchristmas.com/forums/showthread.php?19568-HLS-Sequencer-Beta-Free-to-use

angus40
05-31-2012, 11:06 PM
Woot! Thanks Joe.

I can now use Rpm's USB-DMX dongle that I have waiting to run HLS sequences. :)

What a great addition, as I can have DMX testing abilities from a simple USB rather than hooking up all the other boards.

Cheers