Hi Terry,

I'm happy to see that the administrative part of the project is nearly concluded, so that we can get back to technical matters. Of course we should get together to discuss the design documents. It turns out that week 46 is the last week of operation of the PS accelerator this year. It would probably be good for you to come to us, so that you can still see the machine in operation and observe its signals. It would be the last opportunity to do so before 16 April 2007, when the machine is expected to be started again.

I have visited your web site and picked up a few texts that I haven't yet had time to read. I'll try to read them tonight and come up with comments.

Now for your questions:

> 1. The system can process 1 to 21 bunches of particles at a time.
> It is not clear in the specification if the system's memory
> should be fixed to store 21 bunches

The number 21 is not a hard limit; there is talk of going to 24, to be used for accelerating Pb54+ ions. As I see it, the number of measurements stored per beam revolution corresponds to the harmonic number of the machine at that instant. This harmonic number can be changed on the fly during an acceleration cycle. The sequence of harmonic numbers is known at the start, and the instant of switching from one to the next is given by the H-change triggers.

> 2. I will shortly be looking at the TMS software APIs. Do you have
> any information on the current APIs in use or any ideas
> on this ?

There are two levels of API in the system. One is the interface between the acquisition stations or modules (one per PU) and the hub computer or system controller; the other is that between the hub computer and the accelerator control system. For the former, we can do basically whatever we please. If I were to do it, and if communication between PU station and hub were Ethernet based, I'd probably make up some simple special-purpose protocol based on UDP.
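Just to make the idea concrete, here is a minimal sketch of what one datagram of such a special-purpose UDP protocol could look like. The wire format, field names, and magic value are all my assumptions, not a design:

```python
import struct

# Hypothetical wire format for one PU-station datagram:
# magic (2 bytes), station id, harmonic number, cycle number,
# then n 16-bit signed samples.  All fields big-endian, no padding.
HEADER = struct.Struct(">HBBIH")   # magic, station, h, cycle, n_samples
MAGIC = 0x544D                     # "TM", an arbitrary marker

def pack_acquisition(station, h, cycle, samples):
    """Build one datagram carrying the samples for one revolution."""
    head = HEADER.pack(MAGIC, station, h, cycle, len(samples))
    return head + struct.pack(">%dh" % len(samples), *samples)

def unpack_acquisition(datagram):
    """Inverse of pack_acquisition; rejects datagrams with a bad magic."""
    magic, station, h, cycle, n = HEADER.unpack_from(datagram)
    if magic != MAGIC:
        raise ValueError("not a TMS datagram")
    samples = struct.unpack_from(">%dh" % n, datagram, HEADER.size)
    return station, h, cycle, list(samples)

# Round-trip check; in a real system the datagram would go to the
# hub with socket.sendto() on whatever port we agree on.
msg = pack_acquisition(station=3, h=21, cycle=42, samples=[-5, 0, 17])
assert unpack_acquisition(msg) == (3, 21, 42, [-5, 0, 17])
```

The point is only that a fixed, self-describing header plus a sample payload is about all such a protocol needs; retransmission and ordering would have to be discussed separately.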
For the latter, we have to conform to the accelerator control system's way of doing things; we'll ask the local specialists to deal with it.

> 3. The OS software will be based on Linux. Do you have a particular
> Linux distribution you would like us to use ? We would
> probably base the system on Fedora Core 5 unless you have
> special preferences.

The locally favoured flavour is Red Hat, but I have no strong feelings about that. The system still has only weak real-time constraints at that level. Do you think it likely that this will cause trouble?

> 4. We will need to decide on the actual hardware for the system
> controllers. I understand from previous discussions that
> you may want to implement some of your control/data applications
> on this system rather than on a separate system.
> Our recommendation thus is for a dual Intel Xeon system in a 4U
> 19inch rack mount case with 2 GBytes of RAM, 2 Gigabit Ethernet
> controllers and dual raided disks either SCSI or SATA.
> However we could also use a lower power single processor system.
> We could also supply the system with a 17inch LCD display,
> keyboard and mouse if required.
> Do you have any preference or opinions on this ?

I know some people who would object to having hard disks in an operational system, for reliability and maintenance reasons. Personally, I believe that with a pair of RAIDed disks, even if I have to change one every two or three years, this is probably not the biggest maintenance headache. Your proposal sounds fine to me. Leave out the local display, keyboard and mouse; I expect to access the system almost exclusively remotely, even from the same room it is installed in.

> 5. A processing cycle lasts about 1.2 seconds. Will processing cycles be
> back to back or will there be a delay between cycles ?

Yes, the basic machine cycle is 1200 ms, and there are no delays in between. However, beam is injected only 170 ms after the start of a basic period, and is normally ejected well before its end.
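To put those numbers in one place, a small sketch (the figures are the ones above; the helper names are mine):

```python
# Illustration only.  Basic periods run back to back; beam is injected
# 170 ms into a period and normally ejected before the period ends.
BASIC_PERIOD_MS = 1200
INJECTION_MS = 170

def locate(t_ms):
    """Map a time since the start of operation to
    (basic-period index, offset within that period, in ms)."""
    return t_ms // BASIC_PERIOD_MS, t_ms % BASIC_PERIOD_MS

def beam_possible(t_ms):
    """True once the injection point of the current period has passed.
    A real check would also need the ejection time and the cycle
    composition, which vary from cycle to cycle."""
    _, offset = locate(t_ms)
    return offset >= INJECTION_MS

assert locate(1370) == (1, 170)   # second period, at its injection point
assert beam_possible(1370) and not beam_possible(1250)
```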
Often, but not always, there can be basic periods without any beam at all. On the other hand, the machine can execute long cycles, lasting up to three basic 1.2 s periods, with beam in the machine continuously for up to about two seconds.

> 6. If after a processing cycle the data collection takes longer
> than the time available (as other processing cycles are
> happening and the data buffer is of finite size) what
> should we do ? Should we hold off the next cycle or
> abort the data collection with an error ?

In the (hopefully rare) case that data hasn't been processed and is in danger of being overwritten by the acquisition of the next acceleration cycle, I think the best course of action is to allow the processing to run to completion, sacrificing the new acquisition.

> 7. We assume at the moment that there will be a CYCLE_STOP
> timing signal (ELFT) provided. Will this be the case
> or should we include a cycle timer to stop the cycle after a
> programmed time interval ?

You may use ELFT as an end-of-cycle marker.

> 8. The Cycle Timing Table has entries every millisecond, giving
> 1 ms accuracy in data timing. Would it be useful to have
> more accurate timing information available ?

Well, measurement requests are usually stated as a millisecond timing plus some number of revolution-frequency periods, so 1 ms resolution matches exactly what we need. The revolution frequency doesn't change all that much in 1 ms, anyway.

I hope this helps.

Regards,
Jeroen