Friday, 7 May 2010

Transmission - Omneon

Even though we have been talking with Timeline about the possibility of getting an EVS from them to deal with playout, I still felt that it would be useful to get the Omneon up and running as a backup. I connected the Director to two media ports via FireWire and then powered up the store. At this point I realised that I didn’t have the connector between the Fibre Channel connections on the Director and the 9-pin D-type connections on the store. After a fruitless search for this cable I emailed Omneon support using a link on their website.
 
 N-Store

Simon from Omneon support got back to me the same day and has been incredibly useful, helping to identify our system as a D1 (Director) with an N-Store (storage). However, he also stated that “This is a very old unsupported system” (no surprises there), but it appears that all we are missing is the connection cable, which is hopefully being sent through as I write this.

 Director

Although we may not use the Omneon on the day, we will certainly get the system up and running and ingest content onto it just to make sure that it still works. At the end of the day the EVS will have to be returned, and while it is a viable workaround, I think we still need to look at re-integrating the Omneon with the Pharos control platform in the long term. Maybe this could be done next year as a side project, integrating it with the mobile presentation racks.
 
Fibre Channel Connections

Tuesday, 4 May 2010

Transmission - Bug Burners


Over the past few weeks I have been involved in several smaller projects that I just haven't had the time to blog about, but I'll try to cover all of that in the next few days.

The first is the bug-burners. After messing around with these a few weeks ago, Jamie and I managed to get one Probel bug-burner keying and filling the old logo correctly, while the other appeared to have trouble with its SDI input, so we had to use it to generate the bug and then use a Microvideo bug-burner to key and fill (this was the only solution at the time because we couldn't figure out whether or not the Microvideos had a way to store the image to be keyed).

I was still not happy with this situation, so a couple of weeks ago I went back to the bug-burners and had another look. This started with a repeat of the original test, which again proved that Channel 2's bug-burner wasn't working, so I disassembled the casing to take a look at the different cards inside. James Uren and I were able to isolate the fault to the SDI input card, which was different on each bug-burner. After testing voltages across the card we narrowed the problem down further to a small pot on the card. Adjusting this proved to be the key, as the background signal began to appear. I will openly admit, however, that we are still not sure exactly what this pot is for; we believe it to be some kind of input timing adjustment, which would make sense.

After doing this I began looking at how to get our new bugs generated and keyed. I had previously been told that I would need to use the Mirada Animation tool on the Router PC to generate a different file type (.oxt), but after several attempts at doing this, all ending up with files that were far too large to fit on a floppy disk (1.44 MB), I began looking for a user manual to see what I really needed.

As it happens, the bug-burners will take TARGA (.tga) files, which can be stored as either a key or an image and must be named accordingly (imag.tga, key.tga). I got branding to create a white bug, correctly positioned on a 720 x 576 pixel black background, which I then renamed as imag.tga and uploaded to the bug-burners. Thankfully it worked and the bug was keyed over the video; the positioning and transparency can then be readjusted, but the basic point is that it is working. The only problem now is that this bug has serious amounts of aliasing on it, which of course won't pass QC. That's another problem for another day though, because at least the system is working!
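Out of interest, a bug file like this could also be knocked together programmatically. Below is a minimal sketch using Pillow; the bug artwork, its position and the source filename are placeholders rather than what branding actually supplied. Rendering the bug with some anti-aliasing at this stage might also help with the aliasing problem mentioned above.

```python
# Minimal sketch of preparing imag.tga for the bug-burners with Pillow.
# "bug_white.png" and the placement are hypothetical placeholders.
from PIL import Image

CANVAS_SIZE = (720, 576)      # full PAL frame on a black background
BUG_POSITION = (620, 40)      # hypothetical top-right placement

canvas = Image.new("RGB", CANVAS_SIZE, color="black")

bug = Image.open("bug_white.png").convert("RGBA")
canvas.paste(bug, BUG_POSITION, mask=bug)   # use the bug's own alpha as the paste mask

# The burner looks for the image under this exact filename.
canvas.save("imag.tga", format="TGA")
```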

Sunday, 11 April 2010

Transmission - Bug Burners

Last week Jamie (Head of TX) and I spent the morning trying to find ways to get the bug-burners to work. Using the bug files from last year, we each started work on an individual channel to see who could find the most efficient way of doing this. There was further complexity in the fact that the bug-burners had been wired in a very strange way last year, as they were also used to insert GPIs for the WSS inserters.

After many different attempts we finally figured out that one of the bug-burners didn't want to store the file on its internal storage, so we took a step back and used a spare burner to store and generate the key image, which was then passed into the original burner to fill the key.

Luckily the other bug-burner was working fine and was able to both key and fill the image by itself. 

More information about the rest of the week's tasks can be found on Jamie's blog.


Pres 1 Standalone Bug/Keyer


 
  Pres 2 Bug Generator

Pres 2 Keyer


 Pres 2 Output

Tuesday, 6 April 2010

Engineering - Update

Seeing as all the engineering departments are now moving into TV Systems to do their work, we decided to have a massive tidy-up. Not the most exciting of jobs, but something that needed to be done nonetheless! We began by sorting the piles of equipment into different groups: transmission, audio, video, etc. Then we created separate areas for all of these so that we would know where everything was when we needed to get to it.

Then we had the daunting task of sorting through and recoiling the cabling that was all over the floor; however, with all of us helping out it didn't take that long before we had loads of neat piles of video and audio cabling.


After moving a few of the defunct racks about and re-positioning them at the back of TV Systems we then had a space in which we could place our newly created TX, OB and Interactive racks.


The last thing to do was to install a couple of internet-connected PCs and a mega sound system at the back of the room, and place a few tools here and there. As an afterthought we also obtained a whiteboard and some engineering-esque posters to brighten the place up. TV Systems has now affectionately been dubbed the "engineering office" and hopefully we'll be able to get a fair amount of Rave Live stuff done!

Transmission - Racking the Equipment

We have finally managed to find and sort out some half-height mobile racks that we think will be suitable for the TX, Distribution and Presentation areas on the day of the event. Jamie has been busy drawing up system diagrams and working out what pieces of equipment would need to appear on the CTPs of each rack.

Meanwhile we racked the TX equipment (encoders, mux, IRDs, etc.) into one of these racks in preparation. It got a bit fiddly in terms of leaving enough space so that everything could be cooled and each piece of equipment would be easily accessible from the back of the rack. In the end everything managed to fit, and after plugging up and sorting out a few minor problems with the encoder configurations the rack was up and running again.

 

Saturday, 3 April 2010

Engineering - Musion

On Wednesday Emily (Sponsorship & Branding), Kat (Branding), David (Talent Showdown Producer), Dave and I went up to Regent's Park to visit Musion.

Musion are the world leaders in 3D holographic projection systems, and what we saw during their demonstration was fairly incredible. The basic idea is that an HD image is projected onto a transparent foil, which makes it appear as if it were floating in mid-air. A rehearsed presenter can then interact with the image, providing a very realistic holographic feel to the set-up.

The idea for Rave Live would be to have this holographic set-up on the main stage and use it to present channel listings and student animations, along with hosting the initial opening of the event. There was also another idea to provide some kind of live 'telepresence' (making a person appear as a hologram), for which we would need to provide the camera.

Dave and I went along to view this from a more technical perspective, looking at the real-world implications of using Musion on the day, and we came up with a list of pros and cons:

Pros:
  • Looks really good when it is shot and filmed well
  • They will provide the equipment to achieve this and also the manpower to rig it up
  • They will provide some help to us when filming and producing the content for this
  • They can do it with an HD-SDI feed, which we can achieve with an HDX900 from stores
  • If we can include this in one of the shows as an insert then it will look really professional and engage the audience really well
Cons:
  • It needs a lot of light to work, and it also needs a light show to be effective
  • The equipment that they will bring is heavy, which will put some weight restrictions on our show rigs
  • It will take 6+ hours to rig, which eats into a whole day's rigging time; assuming that we cannot rig around them on the stage, this cuts the stage's rig time by a day
  • The screen is expensive and fragile; with students on and off stage all the time there is a possibility that it could get broken
  • It works best with high contrast, meaning there should be as little light off the stage as possible. With the venue also being used as an exhibition there will be a lot of light spilling onto the stage
  • We're not sure how this will fit into the schedule; with the shows on stage and the time needed for the turnaround, will there be time to show anything on this?
  • We do not have the content yet, and it seems a little too late to be making content now
Obviously a lot of these points require us to liaise with representatives from Musion, especially the items concerning the actual rig. The idea is to start doing this as soon as management have decided how they wish to proceed and whether or not we will actually be using Musion.

Wednesday, 31 March 2010

Engineering - Talent Showdown

Last weekend we visited Coopers Technology College to give the producers of Talent Showdown a hand with the rig of their flyaway. Unfortunately we ran into a few technical hitches, notably due to a couple of unknown flaws in the sync rig. The initial system diagram (drawn by George Alton) can be seen below.

During the run-up to the show there were minor flaws, mostly due to dodgy cable ends, which meant that colour information kept being dropped on a couple of the camera outputs. Eventually we also realised that several of the cameras were losing their colour when they were run through the main programme bank of the vision mixer. After an hour or so of tweaking we couldn't find a solution, and seeing as the preview outputs weren't losing their colour information we decided that it was something to do with the main programme bank. As a workaround we took the PVW output, ran it into a VDA and distributed it to the VTRs and the monitors.

Once the programme started we ran into more problems: after the first five minutes a couple of the cameras kept dropping their sync during cuts from the vision mixer. This eventually got worse until they were not genlocking at all. The only feasible solution was to put tapes into all of the cameras and iso-record them; all we had to do was sync up the timecode between the cameras to make things easier during the edit. We also recorded the camera that we were originally going to use as a back-up onto a VT deck so that production would have plenty of options when it came to the final edit.

After the shoot we realised that part of the reason some of the cameras may have been losing their sync is that when shooting with composite video you have to be incredibly careful with timing, far more careful than we had been. Originally we ran the Black & Burst into a DA and distributed it across all of the equipment. What we apparently should have done is run a feed of B&B into the vision mixer, take a separately assigned B&B output for each of its inputs and run these to the cameras, then take the final B&B output of the mixer into a DA and distribute that accordingly. We still need to verify that this is the case, but at the moment that appears to be what went wrong; however, it doesn't explain why the timing was pretty much fine all morning!

At the end of the day we all managed to find a suitable workaround; it does mean that the post-production team have a little more work to do, but hopefully the footage won't turn out to be that bad.

Engineering - Cable Pulling

Here are a few photos of the cable looms that we pulled out of the Presentation and TX rack areas. After working out how we were going to organise them, we untangled them from this mess and either coiled them as single cables or made them into new looms containing 8 cables each. This should hopefully allow the rig for Talent Showdown to go a lot quicker.





We also set up a small testing rig consisting of an SPG and a WFM, both of which had long lengths of cable attached which could then be barrelled onto the end of the cable being tested. This served two purposes: firstly, we could test the ends of the cables, ensuring that there was no signal dropout due to dodgy ends; secondly, we could test that the overall length of the three cables did not affect the analogue test signal too much, which gave us a test length that we could rig around.


Tuesday, 23 March 2010

Engineering - Cable Pulling

On Monday we began the new term by removing all of the cabling from the Presentation suite. As we had previously taken all of the equipment out and stacked it up in TV Systems, it seemed pointless to leave the cabling underneath the bays just waiting to be taken out after the move. Instead we saw an opportunity to re-use this cabling and, in doing so, hopefully save ourselves some money.

The plan is to use a lot of the long coax in the Talent Showdown pre-record, which is scheduled for this Sunday (28th), basically making up the long cable runs from the cameras to the vision mixer by barrelling lengths of coax together. Once this is over we can begin cleaning up the cables and readying them for Matter. Although we tried to keep the ends on many of the cables (again to decrease the amount of material we'd have to buy in), we will inevitably have to re-end some of them, and while this is being done we can ensure that we have batches of cables that are the right length for inter-equipment connections in our flyaway.

I'll post some pictures of the cabling up when we start inspecting it for Talent Showdown.

Wednesday, 17 March 2010

Interactive - Update

After speaking with Alex we have gathered a list of things completed and a list of things to do.

From last term here is what was achieved:
  • Installed the Red5 server, which streams to our Flash pods.
  • Set up the proxy server for the trolley network, so that the internet can be accessed from one box.
  • We have a much better understanding of how the Red5 streaming service should work and how the Flash application (the pods' interface) will play the streams.
  • The trolley has been racked correctly.
  • The server and client are also set up, with the server now running the DHCP service required.

Things that need to be done:
  • We need to develop a server-side application for the pods using one of the demos provided by Red5; an alternative would be to use Adobe's Flash Media Streaming Server, although we would need to talk to sponsorship about this.
  • For the client side of this application, Max will need to come up with an interface that we can use to bridge the gap and provide the data that he requires.
  • We need to set up the FCS workflow; James Uren said that he would help out with this in the first week back. All of the information to be used as metadata also needs to be gathered.
  • We need to run power to all of the equipment in the racks.
  • A live stream needs to be sent to Narrowstep via the Cisco media encoder; Max will need to re-skin the interface of the Narrowstep player to re-brand it for Rave Live 2010.
  • Jon was having trouble configuring the net-booting process, and we also found out that the Dell towers we are using as the pods might not be able to net-boot at all; this may simply require a BIOS upgrade, or we may revert to booting from the USB pens.
  • We need to work out the ports required (a rough connectivity check is sketched below).
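As a starting point for that last item, something along these lines could be run from a pod to see which services are reachable before we ask for anything to be opened. This is only a sketch: the server address is a placeholder and the port list simply uses the common defaults for the services mentioned above (RTMP 1935, RTSP 554, HTTP 80, MySQL 3306).

```python
# Minimal reachability check -- host and port list are placeholders.
import socket

SERVER = "192.168.0.10"   # hypothetical address of the streaming/database server
PORTS = {1935: "RTMP", 554: "RTSP", 80: "HTTP", 3306: "MySQL"}

for port, service in PORTS.items():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect((SERVER, port))
        print(f"{service} ({port}): reachable")
    except OSError:
        print(f"{service} ({port}): blocked or closed")
    finally:
        sock.close()
```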

There is still a lot of work to be done and we believe that the only way to break the back of this is to get the first years involved. This requires us sitting down and working out what roles they can fill. We also need to contact Max about getting access to the Media Bugs database and the Rave Live sign-up page on Media Bugs; this should preferably be done before we go back next week.

Engineering - Talent Showdown

Talent Showdown:
This is a pre-recorded programme (due to be shot on the 28th March) and the producers have asked for a hand with the flyaway and the rig. The plan is to build the flyaway at Coopers Technology College, from which they can shoot their pre-recorded content; this will then be posted online for a viewer vote which dictates which "talent" will perform during Rave Live at Matter.
We accompanied them on a site visit last week to see what would be required. In order to minimise the fuss we are aiming to keep this as simple as possible: a basic 4-camera rig with a vision mixer, 2 VT decks and monitoring. While on the site visit we drew up the rough system diagram shown below to give us a better idea of what we would need.


Several points arose during these initial planning stages:

  • We would need to provide an SPG to keep everything genlocked
  • The cable runs to the cameras were around 100m; these would have to be done over coax as the only cameras available were DSR500s and 570s. We now need to find out whether these have digital or composite outputs, as this will dictate the nature of the entire flyaway.
  • Would we take both the monitoring and TX feeds from the cameras (requiring twice as many cable runs), or would we just run the signals through a DA before the vision mixer, thus eliminating the need for these extra runs?
Other than these points the rig should not be too difficult; as such we have notified the rest of the engineers and are waiting to see whether anyone would be willing to plan and then rig this show by the 26th March.

Wednesday, 3 March 2010

Engineering - Update

There have been several developments over the past week. The first concerns the transmission equipment, which was all removed from Pres today and placed up in TV Systems for the time being. The idea is to store it here whilst the rest of the chain is being constructed, which allows easy access to the individual items of equipment. Although this seems like a good idea on the surface, it may prove to be a bad move because there is now no reference to build the new system from: before, if a certain item was proving difficult to implement, it could be checked against the installation in Pres, whereas now it will all have to be assembled completely from scratch.

The interactive build is proceeding well, and after speaking to Max last week there are several things that he needs to implement before the next stage:
  • The sign-up section on the Media Bugs website (where production crews will register for their "Rave Live ID Numbers") needs an opt-out checkbox or drop-down so that people have the option to sign up to Rave Live but not remain on the Media Bugs system after the event.
  • There are a lot of engineers waiting to do some coding, so we need to know what needs to be done in PHP and MySQL so that we can get on with it.
  • To proceed with Branding's "augmented reality" logo we need the Rave Live logo as a .dae (Digital Asset Exchange) file.

The OB builds are fairly stagnant at the moment because the programme producers are yet to see the space that they have to work with at the venue (the next site visit is scheduled for the 8th March). Once they have visited we can begin to gather their equipment requirements and contact the various companies with regard to borrowing some kit off them. There was also bad news in that SiS could not provide us with an OB unit this year due to scheduling constraints. After explaining this to management the decision was taken to approach Telegenic, who kindly provided us with a second OB unit last year, to see if they would be interested again this year. We have also begun to build up a contingency plan should we not be able to obtain a truck; this would be in the form of a flyaway, although positioning and equipment specifications have yet to be decided.

The last major news is that Jacques and Richard have finally managed to discuss where the lines are going to be run; they are now drawing up cable schedules for the day, which can be forwarded on to Matter for approval. Once this is done we can begin costing and budgeting for the coax runs around the venue. We have also sent an email to be forwarded on to AEG, enquiring whether positioning an antenna on the O2 would be possible, whether the various ports needed for Adobe RTSP and MySQL access could be opened, and whether there would be a cost involved in doing either of these.

Monday, 1 March 2010

Transmission - Encoders

After several frustrating weeks we now have three encoders up and running. Dan and I spent Saturday figuring out what was wrong and how to get a functional system. We began by connecting each encoder to the Thesys Controller one at a time via a cross-over cable; this let us boot them via TFTP, and as such all three began to work individually.

Installing a switch into the system meant that we could boot all three encoders without having to constantly replug the cross-over cable. Once the encoders were running it was then a case of providing a system diagram for them to read from. This diagram also allows the user to adjust several other parameters such as bit-rates and IP addresses, along with detailing any alarms in the system; it also means that the individual sources (video/audio) can be defined, along with the type of output required from the encoders. Using an existing file from a previous year that we knew worked, we began tailoring the individual parameters to the current system.

Once this was working we moved on to the Tandberg multiplexer. In previous years this had been used as a re-multiplexer, with the initial multiplexing being done by the now-defunct Divicom mux. However, this didn't present too much of a problem: all we needed to do was identify the incoming Programme Association Tables (PATs), which was done using the Transport Stream Analyser (TSA), and re-assign the Packet Identifiers (PIDs) to the values we required. Other things done at this point were to ensure that the PCR was assigned so that it appeared with the video stream, and to define the services so that the channels had names and the video and audio were associated together.
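For anyone unfamiliar with what the PID re-assignment stage actually does, the sketch below shows the basic operation on a raw transport stream. It is a deliberate simplification (a real re-mux also has to rewrite the PAT/PMT entries so the tables point at the new PIDs), and the PID values and filenames are placeholders rather than the ones we actually used.

```python
# Sketch of remapping PIDs in an MPEG transport stream (188-byte packets).
# PID values and filenames are illustrative placeholders only.
PACKET_SIZE = 188
PID_MAP = {0x101: 0x200, 0x102: 0x201}   # old PID -> new PID

def remap_pids(src_path, dst_path, pid_map):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            packet = bytearray(src.read(PACKET_SIZE))
            if len(packet) < PACKET_SIZE:
                break
            assert packet[0] == 0x47, "lost sync byte"
            # The 13-bit PID sits in the low 5 bits of byte 1 plus all of byte 2.
            pid = ((packet[1] & 0x1F) << 8) | packet[2]
            if pid in pid_map:
                new_pid = pid_map[pid]
                packet[1] = (packet[1] & 0xE0) | ((new_pid >> 8) & 0x1F)
                packet[2] = new_pid & 0xFF
            dst.write(packet)

if __name__ == "__main__":
    remap_pids("mux_in.ts", "mux_out.ts", PID_MAP)
```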

Once all of this had been achieved it was merely a case of looping the mux output through the TSA and then looping the stream again through three Integrated Receiver Decoders (IRDs), which we used to decode the pictures and audio so that we could see the channel outputs on monitors. All three encoders were used to code three separate channels because, even though this isn't how it would be done at the event (the plan is to have a 1-to-'n' configuration in terms of redundancy), we wanted to make sure that all of them could work.

The next step is to rig up the equipment on either side of the encoders: the modulator and the presentation equipment (bug-burners, WSS coders, etc.).

Wednesday, 10 February 2010

Engineering - Update

At the beginning of the week we had a meeting with the Rave Live upper management and the newly commissioned programme makers. Although much of the content was of little relevance to engineering there were a few ideas that had cropped up and needed to be looked into.

The first of these was an idea revolving around the BBC's coverage of last year's Wimbledon tournament. The producer wanted to create an interactive element to their pre-recorded show by taking individual isolated feeds (isos) of each camera and then providing a red button option so that the viewer could choose which camera iso to watch.

The first thing to consider is building the 'Red Button' platform; usually this is done via an MHEG-5 stream (MHEG-5 is a language used to describe interactive TV services). An MHEG platform was built last year by George Alton (Head of OBs) and Alex Govett (Head of Interactivity), and although it was never implemented in our final DTT output we were still able to showcase a fully functional platform. So rebuilding and re-branding this shouldn't be too much of a problem, although the interactive team does have a lot on their plate as it stands. The only other trouble would be the playout and multiplexing of this stream, for which we would need to obtain some more TX equipment.

The other problem comes with the output of the two camera isos over DTT. The only way we could see of doing this would be to effectively create two more channels and then, using the red button service, get users to redirect their STB's receiver to these other channels. After checking with Martin I found out that this was also how the BBC had achieved their services during Wimbledon. However, outputting another two channels would be a huge feat for the already overstretched TX department: we would need to obtain new encoders, and the multiplexer would need to be reconfigured. This would also have the detrimental knock-on effect of reducing our output quality; the idea would be to output all of the channels at once, and the two channels that are only needed for the half-hour isos would have slates put on them for the rest of the time. However, because in our current situation we cannot achieve statistical multiplexing (where the channel stream bit-rates are adjusted according to how busy the video is), all of the channels would suffer a drop in quality.
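To put some rough numbers on that last point: with a fixed mux payload and no statistical multiplexing, every service gets a constant slice of the bit-rate, so adding channels directly eats into the quality of the others. The figures below are assumptions for illustration (an 18 Mbit/s usable payload and a 0.4 Mbit/s allowance per service for audio and overheads), not our actual modulator settings.

```python
# Back-of-the-envelope CBR bit-rate split -- the payload and per-service
# overhead figures are assumptions, not our actual configuration.
MUX_PAYLOAD_MBIT = 18.0                 # assumed usable DVB-T payload
AUDIO_AND_OVERHEAD_PER_SERVICE = 0.4    # rough allowance per service

def video_bitrate_per_channel(channel_count):
    usable = MUX_PAYLOAD_MBIT - channel_count * AUDIO_AND_OVERHEAD_PER_SERVICE
    return usable / channel_count

for n in (3, 5):
    print(f"{n} channels -> ~{video_bitrate_per_channel(n):.1f} Mbit/s of video each")
# 3 channels -> ~5.6 Mbit/s of video each
# 5 channels -> ~3.2 Mbit/s of video each
```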

As such it was decided that we could not feasibly achieve this producer's goal over DTT. We didn't want to limit his ideas, though, so we presented him with the option of having his iso feeds available and selectable online, so that the same kind of effect could be achieved. Maybe during the output of the programme there could be a notice detailing where viewers should click to obtain the isos (much like at the end of a YouTube video). How we achieve this still needs to be talked over with interactivity.

Other major points from talking with the producers revolved around the limitations that had been placed upon them because of budget and physical constraints. As such we tried to keep an open mindset: instead of just saying no to every difficult idea, we would look into it further to try to achieve the producers' goals, and if it couldn't be done we would present a viable workaround that could be implemented.

There should be a meeting later this week or at the beginning of next week to obtain final production requirements for the live shows. Another visit to Matter has also been in the pipeline for this Friday, but as yet we are unsure whether or not it will go ahead.

The proposed programme guide can be found on the wiki.

Thursday, 4 February 2010

Engineering - Update

Things have been progressing at a steady pace over the past few weeks. The transmission equipment has been pulled out and reassembled on the back bench, the OBs have begun putting together system diagrams for the flyaway, and the interactive team have built their server rack.

Dave (Head of Engineering), Adam (Logistics Liaison), Richard (Lines) and I went on another site visit on 28th January; the main reason was to investigate where the cable runs would go, along with speaking again to the Matter technical staff. The main questions that had come out of the past few meetings were:

  • What are the bandwidth capabilities within Matter itself, and if we need to open up some ports, who do we need to contact?
  • In terms of DTT, what facilities are available and who else do we need to speak to?
  • What communications facilities are currently installed, and can we make use of them?
It turns out that the bandwidth into and out of Matter ranges from 2Gb to 60Gb, which is perfectly suitable for pushing the live channels onto Narrowstep. However, when it comes to opening up a MySQL port, so that the local database and the website database can communicate, we need to get in contact with a higher authority within the O2.

For DTT we had previously been told that there may be some existing facilities within the O2; however, the Matter staff didn't know anything about this, so again they suggested getting in contact with the O2.

There is already a pre-installed communications architecture within Matter. As it stands we have been given access to one of their radio channels for use on Rave Live, along with another channel which can be used to contact the Matter staff directly. This is good news as it reduces the number of cable runs that need to be made, since hard-wired comms are probably no longer required (a redundant system will still need to be looked into).

The next stage in planning for this event relies on commissioning. The programmes should be commissioned by the end of this week, meaning that next week we can meet up with the producers alongside Operations and begin to gather some technical requirements; once we have these, the flyaway and truck engineers can work the requirements into their plans.

Tuesday, 26 January 2010

Interactive - Update

After the initial meetings with the Head of Interactivity (Max Saunders) and our interactive engineering team, the ball is finally rolling. The main focus has been on the 'interactive pods'; these will be placed around the venue and delegates will be able to use them to access live streams of the channels along with on-demand content. The defining feature this year is the addition of a CV selection interface. The idea is that delegates can select a piece of media, watch it and then, using the touch-screen, access a list of the crew that worked on the production. From there they will enter a pin number that will enable them to select students' CVs to be sent to their registered email address.

This requires several back-end database components to be built (a rough schema sketch follows the list):
  • Student CV Database – this will contain each crew member's contact details, tied to their stored CVs. The proposed idea is to use an existing database that has been created for Media Bugs.
  • Media Asset Database – this will contain all of the metadata (crew student numbers) that is assigned to each media asset when it is created on the Final Cut Server (FCS).
  • Delegate Database – this will contain, at the very least, the delegates' email addresses and assigned pin numbers.
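To make the relationships between these three stores a bit more concrete, here is a minimal sketch of how they might look as relational tables. It uses SQLite purely for illustration; the real system is expected to sit alongside the existing Media Bugs MySQL database, and all of the table and column names here are assumptions.

```python
# Illustrative schema only -- table and column names are assumptions, and
# SQLite stands in for whatever database the Media Bugs system actually uses.
import sqlite3

conn = sqlite3.connect("rave_live_sketch.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS student_cv (
    student_number TEXT PRIMARY KEY,
    name           TEXT,
    email          TEXT,
    cv_path        TEXT                 -- location of the stored CV file
);

CREATE TABLE IF NOT EXISTS media_asset (
    asset_id       TEXT PRIMARY KEY,    -- FCS asset identifier
    title          TEXT
);

-- Crew metadata: which students worked on which asset.
CREATE TABLE IF NOT EXISTS asset_crew (
    asset_id       TEXT REFERENCES media_asset(asset_id),
    student_number TEXT REFERENCES student_cv(student_number)
);

CREATE TABLE IF NOT EXISTS delegate (
    pin            TEXT PRIMARY KEY,
    email          TEXT
);
""")
conn.commit()
conn.close()
```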
For the website it has been suggested that we look into using Silverstripe to help with the construction, although there were several concerns that it may not be flexible enough for what we want to use it for.

Live Streaming:

When it comes to live streaming there are two ways that this can be achieved. The first is to use LiveStream, a free, advertising-subsidised platform. The other is to continue using Narrowstep, with whom we already have a working relationship. Both options require the streams to be pushed out of Matter after having been encoded; this will be done using the Cisco 2000 box, but it will require Matter to open up some ports on their connection.

Ticketing System:

The original idea was to use a free, third party ticketing system. However this would present the problem of gaining access to their database to obtain the relevant information (delegates email addresses) due to privacy laws.

The proposal now is to build our own system where a delegate is assigned a pin number upon entry into Matter; then, in order to obtain a student's CV, they enter their pin on the touch-screen. What still needs to be looked into is whether the email sending is done automatically and instantaneously, or whether it is done manually after the event.
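If it does end up being automatic, the lookup-and-send step could be as simple as the sketch below, which assumes the tables from the earlier schema sketch and a local SMTP relay. The sender address, database path and PDF attachment are all placeholders rather than decisions that have been made.

```python
# Sketch of the "enter a pin, get a CV by email" step. Assumes the tables from
# the schema sketch above; the relay, sender and file format are placeholders.
import smtplib
import sqlite3
from email.message import EmailMessage

def send_cv(pin, student_number, db_path="rave_live_sketch.db"):
    conn = sqlite3.connect(db_path)
    delegate = conn.execute(
        "SELECT email FROM delegate WHERE pin = ?", (pin,)).fetchone()
    student = conn.execute(
        "SELECT cv_path FROM student_cv WHERE student_number = ?",
        (student_number,)).fetchone()
    conn.close()
    if not delegate or not student:
        return False   # unknown pin or student number

    msg = EmailMessage()
    msg["Subject"] = "Rave Live CV request"
    msg["From"] = "pods@example.ac.uk"        # placeholder sender address
    msg["To"] = delegate[0]
    msg.set_content("Please find the requested CV attached.")
    with open(student[0], "rb") as cv_file:
        msg.add_attachment(cv_file.read(), maintype="application",
                           subtype="pdf", filename="cv.pdf")

    with smtplib.SMTP("localhost") as smtp:   # assumed local mail relay
        smtp.send_message(msg)
    return True
```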

These are just the initial points and ideas that were raised by everyone. There are a couple of questions that need to be put to Matter, in particular how fast their connection is and whether or not we can get certain ports opened.

Thursday, 21 January 2010

Engineering - Update

The Matter contract was signed yesterday, so now things can really get going. Since the start of term we have had two meetings with all of the engineers (including the first years), along with several other meetings with each department. We have decided to split engineering into three areas, each led by a different person:
  • Outside Broadcasts (George Alton) - dealing with all of the live shows on the day, building the flyaway and operating the truck; QC and post-production support have also been grouped under this area this year.
  • Interactivity (Alex Govett) - creating the website and media management assets for the pods inside Matter.
  • Transmission (Jamie Fletcher) - applying for the DTT licence, along with managing playout and the transmission flyaway.

On the day itself there are plans for:
  • A live channel and a prerecorded channel to be broadcast via DTT
  • 3 live events - 2 run from the OB truck and the other from a flyaway
  • A Post Production area where there is to be continuous ingest of material recorded throughout the day to create the final montage
  • Interactive pods placed throughout the venue, on which delegates will be able to view the live streams along with the pre-recorded content; they will also be able to access students' CVs via email.
These areas are the main focus of each department; once general plans have been drawn up we can start assembling equipment and testing different methodologies to find the best way to make Rave Live happen.

A wiki area for engineering has been set up so that at every stage we know exactly where we are heading:

http://confluence.rave.ac.uk/confluence/display/FComm/Rave+Live+Engineering+Page