Terry Barnaby wrote:
> [...]
> Note that quite a few of the documents on the web site are preliminary
> and fluid during this early stage of the project.
[...]

Quite naturally.

> Ok, the spec stated "All RF buckets are thus treated and stored,
> irrespective of whether there’s beam in them or not. This is necessary
> to limit the complexity of locating a requested measurement in memory".
> I assumed this meant that the machine would be set to store, let's say,
> 21 RF buckets every orbit even if the harmonic number was not 21; some
> of these RF buckets would be zero. This would make it easy to retrieve
> the required information from memory, as it is a simple array operation
> irrespective of harmonic changes.
> If the actual number of bunches (RF buckets) stored to memory changes
> on each H-change, this will make memory retrieval harder. As the Cycle
> Timer table (1 ms C-Timer table) is not coherent with the orbit and the
> H-change signal can occur at any time, we would need to store
> information on the exact point of the H-change (in orbits/turns) after
> the C-Timer table mark at which the H-change occurred, and modify our
> memory access as appropriate, rather than simply use A = C(x) + 3ht + 3b.
> This would also make it hard to collect a set of data for just one
> bunch, as the offsets from one orbit to another in memory would change
> on each H-change.
> Is my understanding correct here?

The harmonic number *is* the number of RF buckets in one revolution period, and yes, an H-change implies that this number changes. It is indeed necessary to build a table that marks the exact addresses where such changes occurred. Fortunately, that table won't be very large: say, no more than 8 entries per machine cycle. It can hold the INJ event too.

Collecting data *across* the occurrence of an H-change makes little sense, and I think I'll simply prohibit it in software. One could only collect *up to* the discontinuity, or start right after it. Even though the actual H-change is gradual, the acquisition system must decide at one discrete instant to go from 'before' to 'after' the change.

For some information about how these H-changes work, please have a look at
http://psdoc.web.cern.ch/PSdoc/ppc/md/md990115/md990115.pdf and
http://accelconf.web.cern.ch/accelconf/e00/PAPERS/WEOAF102.pdf .
One deals with bunch splitting, the other with bunch compression. (If you have trouble accessing either, let me know.)
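To make the bookkeeping concrete, here is a small sketch, in C, of what such a table and the corresponding address calculation could look like. All names, field widths and the entry limit are illustrative assumptions on my part, not a proposed interface; I have kept the 3-bytes-per-bucket scaling of your formula A = C(x) + 3ht + 3b, with the table's base address playing the role of C(x), anchored at each discontinuity instead of at each C-timer slot:

    /* Sketch only: names, widths and the entry limit are assumptions. */
    #include <stdint.h>
    #include <stddef.h>

    #define MAX_SEGMENTS 8          /* "no more than 8 entries per machine cycle" */

    struct hchange_entry {
        uint32_t start_turn;        /* first turn stored with this harmonic number */
        uint32_t harmonic;          /* RF buckets per turn in this segment         */
        uint64_t base_addr;         /* memory address where this segment begins    */
    };

    struct hchange_table {          /* first entry marks INJ, later ones H-changes */
        size_t count;
        struct hchange_entry seg[MAX_SEGMENTS];
    };

    /* Address of bucket b in turn t (both counted from the start of the cycle),
     * keeping 3 bytes per bucket as in A = C(x) + 3ht + 3b.                      */
    static int64_t bucket_addr(const struct hchange_table *tab,
                               uint32_t t, uint32_t b)
    {
        for (size_t i = tab->count; i-- > 0; ) {
            const struct hchange_entry *e = &tab->seg[i];
            if (t >= e->start_turn) {
                if (b >= e->harmonic)
                    return -1;      /* no bucket b at this harmonic number */
                return (int64_t)(e->base_addr
                                 + 3ull * e->harmonic * (t - e->start_turn)
                                 + 3ull * b);
            }
        }
        return -1;                  /* t precedes the first recorded entry */
    }

Retrieving one bunch over many turns then remains a simple stride within a single segment, which is also why I would rather stop an acquisition at the discontinuity than try to follow it across.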
> So are you happy for us just to implement the PU Module API in an
> appropriate, documented manner and let CERN implement the appropriate
> code to communicate between our API and the accelerator control system
> API?

Yes. I think I cannot reasonably bother you with the intricacies of our accelerator control system.

> Do you have information on the accelerator control system API or other
> information on how that control process works?
[...]

It's hard to capture the flavour of the thing in a few paragraphs. Let me try.

The system uses a large number of VME crates containing processor modules with PowerPCs and various I/O modules, according to specific needs. The processors run the LynxOS RTOS and one or more real-time equipment control and acquisition programs. They all tie into a private Ethernet.

A central 'timing generator' sends messages over a dedicated network to timing receiver modules sitting in each crate. By means of a special software library, an RT program can decode these messages and wait on specific conditions to occur. The messages contain data about the type of particle to be accelerated and the type of cycle to be executed, and also convey a number of global machine events, such as SCY (Start of CYcle) and ELFT; there are many more. Thus, an RT program knows at all appropriate instants what the accelerator is doing.

Settings and acquisitions are stored in shared memory segments in each VME crate. The structure of the data in these segments is kept in on-line databases; the data can be accessed by the local RT programs and also, via remote procedure calls, by programs running on operator consoles in the central control room, or indeed from anywhere on the site. The details of access to these shared memory segments are hidden by another layer of software at both ends. This whole construct, the shared memory segments and the software libraries to access them, is called an 'Equipment Module' (EM). The specific settings and bits and pieces of data therein, as well as the software routines to access them, are called 'properties'. There are many sorts of EMs: for timing, power supplies, function generators, RF equipment, pumps and valves, and a host of measuring instruments. Each has its own set of properties.

Of course, this doesn't even start to scratch the surface. To dig just a little deeper, you might want to have a look at "The PS controls for newcomers", http://ab-dep-co-ex.web.cern.ch/ab-dep-co-ex/doc/CONote9922.PDF . Comprehensive documentation for all this is surprisingly hard to come by; I keep a set of printed copies which I annotate when I discover something I didn't know yet.
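If a toy example helps to picture the 'property' idea: in essence, a handful of named settings and acquisitions live in a shared segment, and everybody, whether a local RT program or a console going through the RPC layer, reaches them only through access routines. The self-contained sketch below (in C, with a fictitious 'PUTMS' equipment) illustrates just that shape; none of the names resemble the real control system software, which is far more elaborate.

    /* Toy illustration of the Equipment Module / property idea.
     * Invented names throughout; not the real control system API. */
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for one shared memory segment of a fictitious 'PUTMS' EM.
     * In reality this lives in VME shared memory, described in a database. */
    struct putms_segment {
        double rev_freq_init;   /* a 'setting' property                         */
        double pll_gain;        /* another setting                              */
        long   cycle_number;    /* an 'acquisition' property, filled by RT code */
    };

    static struct putms_segment seg;

    /* A 'property' couples a name to the code that reads or writes part of
     * the segment; consoles reach these same routines through RPC.           */
    static int putms_set(const char *prop, double value)
    {
        if (strcmp(prop, "REV_FREQ_INIT") == 0) { seg.rev_freq_init = value; return 0; }
        if (strcmp(prop, "PLL_GAIN")      == 0) { seg.pll_gain      = value; return 0; }
        return -1;              /* unknown or read-only property */
    }

    static int putms_get(const char *prop, double *value)
    {
        if (strcmp(prop, "REV_FREQ_INIT") == 0) { *value = seg.rev_freq_init;        return 0; }
        if (strcmp(prop, "PLL_GAIN")      == 0) { *value = seg.pll_gain;             return 0; }
        if (strcmp(prop, "CYCLE_NUMBER")  == 0) { *value = (double)seg.cycle_number; return 0; }
        return -1;
    }

    int main(void)
    {
        double g;
        putms_set("PLL_GAIN", 0.25);        /* what a console would do, via RPC */
        if (putms_get("PLL_GAIN", &g) == 0) /* what a local RT program would do */
            printf("PLL_GAIN = %g\n", g);
        return 0;
    }

On our side, something of this general shape will eventually sit between the operator consoles and your PU Module API.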
> Ok, as part of the TMS maintenance agreement, disk and fan changing at
> regular intervals is included. If you don't need/want to run your own
> software on the TMS system controllers, we could downsize them from
> dual Xeon systems if required.

Splendid. I believe a bit of processing power here is no luxury: if it's there, we'll end up using it, for sure.

> Ok, so the accuracy of the timing of the bunch will be +- the revolution
> frequency period (approx +- 2us).

Yes, that's right.

I've read through the preliminary design documents. I have little to say about the architecture, not being familiar with cPCI and Xilinx FPGAs; Greg will certainly have more precise ideas.

Regarding the minutes of the initial project meeting, and in particular the point about using the Xilinx internal PowerPC cores: if we want to put those to good use, I imagine that would eat deeply into the available resources of the FPGA, possibly to the point of not leaving enough room for our algorithm.

Regarding the proposed register map for the PU Processing Engine, that's a good start. There are a few things I'd like to add (a rough sketch of the additions is in the P.S. below):

- We need a register in which to write the initial value of the revolution frequency. This value should be transferred into the actual frequency register of the PLL a few ms before injection. Since we do not have a hardware trigger for that event, we must either add one or, as I would prefer, generate one internally, which implies yet another register.

- We'll certainly want a register to set the PLL loop gain.

- You mention a phase delay register, which defines the PU position in the ring with respect to the injection point. Yes, we need such a register; I dubbed it the 'azimuth register'. Originally I wanted simply to compare its value with the phase accumulator and trigger when the difference changes sign, but Greg pointed out that if we also apply the difference to the phase tables, these would end up being the same for all PUs, which is a definite advantage.

- Your proposal for pattern selection DMA is certainly very useful.

- And then there is the issue of diagnostics. We'll want to observe the behaviour of several signals, as described in the IT3384 specification, p. 4.12. Several registers will be needed to control all that.

- It would probably be handy to have the Cycle number register writable as well, so that we can make it correspond to the actual accelerator cycle number.

That's all for the moment. Quite a bit of text, to be sure. Please don't hesitate to point out any inconsistencies or vague areas.

Have a nice weekend,

Jeroen
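P.S. To make the register discussion a little more tangible, here is a rough sketch of how the additions could sit in the map. Every offset, width and name below is a placeholder of mine, to be replaced by whatever fits your actual layout:

    /* Sketch only: offsets and names are placeholders, not a proposal
     * for the final PU Processing Engine register map.               */

    #define PU_REG_FREV_INIT    0x40  /* initial revolution frequency, loaded into
                                         the PLL frequency register shortly before
                                         injection                                  */
    #define PU_REG_INJ_PRETRIG  0x44  /* control of the internally generated
                                         'injection imminent' trigger               */
    #define PU_REG_PLL_GAIN     0x48  /* PLL loop gain                              */
    #define PU_REG_AZIMUTH      0x4C  /* PU position in the ring with respect to
                                         the injection point ('azimuth register')   */
    #define PU_REG_CYCLE_NUM    0x50  /* cycle number, writable so it can be made
                                         to match the accelerator cycle number      */
    #define PU_REG_DIAG_CTRL    0x54  /* selection of diagnostic signal taps
                                         (cf. IT3384, p. 4.12)                      */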