
View Full Version : HLS - Matrix capability for addressable pixel strings



JHinkle
02-10-2012, 07:42 AM
I started a new thread to address my questions regarding addressable pixel strings to evaluate how easy it would be to integrate into my existing data structures.

I need people to provide detailed info --- nothing like thinking you understand all of the requirements to design a 3-wheel automobile, only to find out at the end of the development cycle that it really needed 4 wheels.

Questions.

1. Are all addressable pixel strings three channel devices?
2. Are they always broken out in order of R, G, B?
3. Will a maximum size matrix of 100 x 100 handle 95% of the needs?
4. Are all of these arrangements transmitted via E131?

Now from a user point of view - I need to know 95% of what controllability is needed.

1. Insert avi file (or some type of file) into sequence at a specific point? File contents need to be synced to a 50 msec frame rate. I'm concerned about performance issues with large channel counts at 25 msec - I have no feel for that yet.
2. All pixels on or off.
3. All pixels shift from one color to another color.
4. All pixels ramp up from black.
5. All pixels fade to black.

If you noticed - there is no individual pixel editing. Content would be supplied via an external file - the file contents would govern content and length of time in the sequence. I can provide color shifting and fades as overlays.

A matrix would appear in HLS as a single editable channel (effects defined above) - one channel for each universe required for transmission.

I need your comments now so I can make design decisions - I don't want a three wheeled vehicle two months from now.

Joe

jrock64
02-10-2012, 10:02 AM
1. Are all addressable pixel strings three channel devices?
not always. I have two potential matrices that are just white.
2. Are they always broken out in order of R, G, B?
you have already addressed this, and some hardware can also make the correction.
3. Will a maximum size matrix of 100 x 100 handle 95% of the needs?
95% of the time, maybe; what is considered extreme today will be normal tomorrow.
4. Are all of these arrangements transmitted via E131?
not even close. The device that talks to the pixels can be E131, DMX, or even Renard.
now there are devices out there that can listen to E131 and output pixel direct, or DMX to pixel, or Renard to pixel.
why does it matter??

Now from a user point of view - I need to know 95% of what controllability is needed.

1. Insert avi file (or some type of file) into sequence at a specific point? File contents need to be synced to a 50 msec frame rate. I'm concerned about performance issues with large channel counts at 25 msec - I have no feel for that yet.
I see this as the most common use.
2. All pixels on or off.
3. All pixels shift from one color to another color.
4. All pixels ramp up from black.
5. All pixels fade to black.
so you would be doing these as hardcoded effects instead of thru video files???


If you noticed - there is no individual pixel editing. Content would be supplied via an external file - the file contents would govern content and length of time in the sequence. I can provide color shifting and fades as overlays.
That could work if it had to, but what about snowfall, meteor, or fireworks effects, which are just short random chases??

A matrix would appear in HLS as a single editable channel (effects defined above) - one channel for each universe required for transmission.
not sure why multiple channels. A single matrix will be 3 universes on average and maybe even more.

I know, I want it all.
You never know until you ask.

Joel

mschell
02-10-2012, 10:10 AM
Not all RGB elements (generic word for pixels, strips, nodes, etc.) use the same channel order. Some are RGB, some RBG, or some other variant. However, most controllers (E68x, PIXAD, others) can mask those differences and always expect RGB-order output.
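For anyone curious what that masking amounts to, here is a hedged sketch of doing it in software for a controller that can't reorder channels itself. The function name and data layout are illustrative only - not from HLS or any particular controller:

```python
# Illustrative sketch: reorder per-pixel channel data from the sequencer's
# canonical R,G,B order into a device's native order (e.g. "GRB").
# Controllers like the E68x can do this in hardware; this shows the idea.

def reorder_pixels(rgb_data, order="RGB"):
    """rgb_data: flat list [R0, G0, B0, R1, G1, B1, ...].
    Returns a new flat list with each pixel's channels permuted to `order`."""
    index = {"R": 0, "G": 1, "B": 2}
    out = []
    for i in range(0, len(rgb_data), 3):
        pixel = rgb_data[i:i + 3]
        out.extend(pixel[index[c]] for c in order)
    return out

# Two pixels (pure red, pure green) remapped for a GRB string:
print(reorder_pixels([255, 0, 0, 0, 255, 0], "GRB"))
# -> [0, 255, 0, 255, 0, 0]
```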

Not everything goes thru E1.31 today. There have been some Renard protocol based RGB "pixel" controllers, and some DMX based ones that could be run via the "normal" dongle output. It's also possible to have some "dumb" RGB devices that are run via traditional (Renard, DMX, others) controller channels.

With this in mind, don't assume that every RGB element is a pixel or just a single point of light. I have 2 ft strips that are stacked vertically in columns across the front of the house and diffused with coro. Each strip is controlled with 3 channels of a Ren48LSD, and the entire length of strip lights up as one element. But it still could be treated as an RGB fixture and mapped to an effect or video.

Most pixel strings/strips/nodes are 3 channel devices.

JHinkle
02-10-2012, 11:26 AM
With this in mind, don't assume that every RGB element is a pixel or just a single point of light. I have 2 ft strips that are stacked vertically in columns across the front of the house and diffused with coro. Each strip is controlled with 3 channels of a Ren48LSD, and the entire length of strip lights up as one element. But it still could be treated as an RGB fixture and mapped to an effect or video.



Mark:

I think I have this covered today as a "dumb" RGB - I treat it as a single channel - export out 3 channels (R,G,B) - but no addressability.

This RGB string would be part of a normal sequence - not a matrix.

Joe

JHinkle
02-10-2012, 11:32 AM
If you noticed - there is no individual pixel editing. Content would be supplied via an external file - the file contents would govern content and length of time in the sequence. I can provide color shifting and fades as overlays.
That could work if it had to, but what about snowfall, meteor, or fireworks effects, which are just short random chases??

A matrix would appear in HLS as a single editable channel (effects defined above) - one channel for each universe required for transmission.
not sure why multiple channels. A single matrix will be 3 universes on average and maybe even more.

Joel

I can see providing hi-level effects like those identified - rather than just having the user work at the pixel level.

If a matrix required 3 universes to encompass the number of channels - my internal data structures want at least a channel per universe - I need to know what mechanism sends them, whether they may be on different COM ports, etc.

mschell
02-10-2012, 02:08 PM
Mark:

I think I have this covered today as a "dumb" RGB - I treat is as a single channel - export out 3 channels(R,G,B) - but no addressability.

This RGB string would be part of a normal sequence - not a matrix.

Joe

Joe,

Great! I haven't had much time to look at your software in any detail, but from what I've seen, it looks good!

JHinkle
02-11-2012, 09:38 AM
I will expand HLS and provide a matrix editor for addressable RGBs - but I can't begin until I get a concrete understanding of how I have to communicate channel information to the controller.

I am asking that a knowledgeable person in this area take a few minutes and educate me.

Here is what I need clarification on.

I provide you with an X x Y matrix and then provide a way of filling all of those pixels each time tick in the sequencer.

Before I can send this information to a controller, I have to organize it into "channels" of data.

Let's ignore for now that my use of the term channel here actually refers to three physical RGB channels sent to the controller.

I looked at a Madrix demo - and they asked the user to configure the matrix as up-down "somethings" (I'm trying hard not to use the word channel) or zig-zag "somethings".

I have read Jim St. John's manuals on his controllers where you configure the physical RGB strings as straight, reversed, or zig-zag.

I conclude that since Jim's controllers are zig-zag aware, when the controller is not aware (like a Renard), the zig-zag needs to be handled by the sequencing software - which is where my incomplete understanding lies.

Use the following as an example - probably not real - but it's for education purposes.

User has three 50 pixel strings and one 150 pixel string.

User wants to create a panel look by laying out the strings in this fashion

C1  C2  C3  C4  C5
 |   |   |---|   |
 |   |   |   |   |
 |   |---|   |   |

(The top link joins columns 3 and 4; the bottom link joins columns 2 and 3.)

There are five columns, 1 to 5. Columns 2 - 4 form a left-to-right zig-zag (the single 150-pixel string).

For sequencing - pixels would be addressed by column and row.

I now need to channelize the grid (ignore the 3 to 1 RGB expansion).

I think (please tell me if this is not the case) that if the user was hooking up to a Jim St. John controller - the sequencer would send data from C1 (pixels 1 - 50), then C2, C3, C4 and C5. The user would then configure Jim's controller to convert the C2-C3-C4 data into the proper zig-zag physical pixel locations.

What needs to be sent to a controller that is not zig-zag aware (those using Renard as an example)?

Does the data need to be sent as follows:

C1 - pixels 1 - 50
C2 - pixels 1 - 50
C3 - pixels 100 - 51
C4 - pixels 101 - 150
C5 - pixels 1 - 50

Is this correct?

Can everything be configured as columns even if the physical display has them horizontal? That keeps me from having to do two types of config - one that sends column-related data and one for row-based data.

I probably need to have reverse also - Right? So instead of sending pixels 1 - 50, I would send 50 - 1.

Thanks for your help in advance.

Joe

JHinkle
02-11-2012, 09:58 AM
Now from a user point of view - I need to know 95% of what controllability is needed.

1. Insert avi file (or some type of file) into sequence at a specific point? File contents need to be synced to a 50 msec frame rate. I'm concerned about performance issues with large channel counts at 25 msec - I have no feel for that yet.
I see this as the most common use.
2. All pixels on or off.
3. All pixels shift from one color to another color.
4. All pixels ramp up from black.
5. All pixels fade to black.

so you would be doing these as hardcoded effects instead of thru video files???


Joel

I am looking at giving you the ability to overlay an effect on top of your image - as in ramp-up, fade-to-black, random sparkle - NO file data, just all one color shifting to another - these would be programmable within the sequencer. The matrix would be viewed as a single channel, and the effects you could lay down are any of the above plus play a file.
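Two of those overlays can be sketched as simple per-frame transforms. This is an illustrative sketch only (names, data layout, and the 0-to-1 time parameter are my assumptions, not HLS internals):

```python
# Illustrative sketch of two overlay effects applied to one frame of matrix
# data. `t` runs 0.0 -> 1.0 across the effect's duration in the sequence;
# frame data is a flat list of 8-bit channel values.

def fade_to_black(frame, t):
    """Scale every channel toward 0 as t goes from 0 to 1."""
    return [int(v * (1.0 - t)) for v in frame]

def color_shift(color_a, color_b, t, num_pixels):
    """All pixels blend from color_a to color_b as t goes from 0 to 1."""
    blended = [int(a + (b - a) * t) for a, b in zip(color_a, color_b)]
    return blended * num_pixels

frame = [200, 100, 50] * 4                            # 4 pixels
print(fade_to_black(frame, 0.5))                      # half brightness
print(color_shift((255, 0, 0), (0, 0, 255), 0.5, 2))  # red -> blue midpoint
```

A ramp-up from black is just the fade run in reverse (scale by `t` instead of `1.0 - t`).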

Joe

TimW
02-12-2012, 08:48 AM
Joe - sorry in advance if I'm stating the obvious below. I can't see exactly where the gap in your thinking is (you seem to be working it out just fine so far!! )



I provide you with an X x Y matrix and then provide a way of filling all of those pixels each time tick in the sequencer.


To date that has been the approach in all systems I am aware of. It is necessary to define an XY matrix (for grid or other surface) in the sequencer in order to drive meaningful effects (be they transitions from a video, hard coded animations or other transforms). The matrix needs to define the coordinate relationships between the pixels. I assume you're starting initially in 2d (although I expect ultimately transitions will evolve to 3d... so you'd need another axis ). Note also that some sequencers allow positioning of the hardware relative to one another (so a transition can be applied just to a grid or across a range of fixtures to sweep across a yard for example)



Before I can send this information to a controller, I have to organize it into "channels" of data.


Yes - the XY matrix needs to be mapped to reality.

All the zig zags, reverses etc just overlay the physical pixel configuration to the XY grid. The variants exist primarily to help us overcome the physical reality of most pixel addressing implementations (ie the first pixel in any string is closest to the controller... and/or it is possible to feed the end of one column to the start of the next)

Also, sometimes multiple controllers are required to cover the grid, so you can get gaps in channel numbering. For example, the 512 channels of a DMX-universe pixel controller don't divide evenly into 3-channel pixels, so you may have channels left over if your grid spans more than one controller across universes. The grid needs to know this!

In early implementations (before the controllers started to provide options to 'simplify' this relationship) we used to create a 'map' that connected XY coordinate (as per the video or animation file of the grid) to the physical channels as they actually appeared on the grid. All the hardware configuration does is make this process less taxing...



I looked at a Madrix demo - and they asked the user to configure the matrix as up-down "somethings" (I'm trying hard not to use the word channel) or zig-zag "somethings".


matrix orientation? just describes the mapping of XY grid to physical pixel addressing order?



I have read Jim St. John's manuals on his controllers where you configure the physical RGB strings as straight, reversed, or zig-zag.

I conclude that since Jim's controllers are zig-zag aware, when the controller is not aware (like a Renard), the zig-zag needs to be handled by the sequencing software - which is where my incomplete understanding lies.

Yes. You make that connection in the sequencing software. (Actually you still make that connection even with Jim's zig-zags, but it's easier because as far as the sequencer is concerned all the columns (or rows) are ascending (or descending).)



Use the following as an example - probably not real - but it's for education purposes.

User has three 50 pixel strings and one 150 pixel string.

User wants to create a panel look by laying out the strings in this fashion

There are five columns, 1 to 5. Columns 2 - 4 form a left-to-right zig-zag.

For sequencing - pixels would be addressed by column and row.

I now need to channelize the grid (ignore the 3 to 1 RGB expansion).

I think (please tell me if this is not the case) that if the user was hooking up to a Jim St. John controller - the sequencer would send data from C1 (pixels 1 - 50), then C2, C3, C4 and C5. The user would then configure Jim's controller to convert the C2-C3-C4 data into the proper zig-zag physical pixel locations.


The sequencer would just send out

C1 1-50
C2 1-50
C3 1-50
C4 1-50
C5 1-50

and within the sequencer's configuration you would tell it that the grid is comprised of 5 columns of 50 pixels, each column in the same ascending or descending order... you would also tell the sequencer which column is leftmost.


What needs to be sent to a controller that is not zig-zag aware (those using Renard as an example)?

Does the data need to be sent as follows

C1 - pixels 1 - 50
C2 - pixels 1 - 50
C3 - pixels 100 - 51
C4 - pixels 101 - 150
C5 - pixels 1 - 50

Is this correct?


What you are doing is mapping the physical location to the 'grid' location, so yes

In this case, in the sequencer's configuration you would account for the zig-zags. It's messier to configure, but it has to be done somewhere (either SW or controller).
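The mapping being confirmed here can be sketched in a few lines. This is purely illustrative - the names and data layout are mine, not HLS's - with pixels addressed by (column, row) and a per-column reverse flag standing in for the "up" leg of a zig-zag:

```python
# Hypothetical sketch of channelizing a column-based grid, ignoring the
# 3-to-1 RGB expansion as Joe does in the thread.

def channelize(grid, column_reversed):
    """grid[col][row] holds a pixel value; column_reversed[col] is True for
    columns wired bottom-to-top. Returns the flat per-pixel output order."""
    out = []
    for col, reverse in zip(grid, column_reversed):
        out.extend(reversed(col) if reverse else col)
    return out

# The 5-column example: label each pixel "C<col>P<row>".
grid = [[f"C{c}P{r}" for r in range(1, 51)] for c in range(1, 6)]

# Zig-zag-aware controller: the sequencer sends every column ascending
# and the controller unravels the physical zig-zag itself.
plain = channelize(grid, [False] * 5)

# Controller that is NOT zig-zag aware: the sequencer reverses column 3
# (the "up" leg of the C2-C3-C4 zig-zag) before sending.
zigzag = channelize(grid, [False, False, True, False, False])
print(zigzag[100], zigzag[149])  # column 3 arrives as C3P50 ... C3P1
```

The same reverse flag also covers the "controller in the middle of two strings" case Tim mentions below - it is just another column (or row) sent descending.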



Can everything be configured as columns even if the physical display has them horizontal? That keeps me from having to do two types of config - one that sends column-related data and one for row-based data.


Your call. In everything I've seen so far (megatrees + grids) the pixels seem to be arranged in columns - just because they are hung vertically. However, some pixel strips could lend themselves to horizontal mounting. But isn't that all in how you organise the mapping from xy grid to output channels? Other sequencers seem to allow both orientations.



I probably need to have reverse also - Right? So instead of sending pixels 1 - 50, I would send 50 - 1.


This feature is about the location of the controller relative to the strings. If you have the controller in the middle of 2 strings - which often makes sense physically - reversing allows you to logically order the pixels left to right from end to end, for example (by reversing the string to the left of the controller).

JHinkle
02-12-2012, 09:03 AM
Thanks Tim.

I was looking for confirmation my understanding was correct.

Joe

JHinkle
02-12-2012, 09:14 AM
To date that has been the approach in all systems I am aware of. It is necessary to define an XY matrix (for grid or other surface) in the sequencer in order to drive meaningful effects (be they transitions from a video, hard coded animations or other transforms). The matrix needs to define the coordinate relationships between the pixels. I assume you're starting initially in 2d (although I expect ultimately transitions will evolve to 3d... so you'd need another axis ). Note also that some sequencers allow positioning of the hardware relative to one another (so a transition can be applied just to a grid or across a range of fixtures to sweep across a yard for example)




The third dimension is already there - it's time, and it's configured in the normal sequence editor. Each PLANE will be shown as a single channel where you can apply effects - avi file, some other type of file - like a spiral tree from Sean Meighan - intensity ramp-ups, fades to black, solid color shifts, random twinkles, and any other high-level effect I can come up with.

Joe