This document attempts to explain how the ACNET DAQ works.
Data acquisition begins when damen sends a request to the Beams Division machine (bdmachine). The request contains the ACNET device to read, how often to read it, the size of a data packet, and where to send the data (in our case, back to damen).
Two points are worth noting:
Data must be pushed from bdmachine to damen because damen does not know when data will occur. At best, we could build a trigger that looks for beam, asks bdmachine for data, and then accepts the data. However, issues such as speed (remember, the spill can occur at up to 15 Hz) and synchronization arise. Furthermore, relying on a beam trigger assumes we only want data when beam hits the target. In practice, beam may be sent out whenever the proper clock event occurs in the control system. Beam may not reach our target, even though this event occurs, for many reasons (e.g., a safety system inhibit).
Thus, data is read from a buffer and sent to damen at a fixed rate. The buffer is deep enough, and the rate fast enough, that all data is sent.
The data is sent from bdmachine to damen via the internet just like any web page would be. The XML-RPC protocol is used to format the data, which allows for complex data types and platform independence. Each packet encodes the packet size, the number of new records, the timestamps, and the data.
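The packet encoding described above can be sketched with Python's standard XML-RPC marshalling. This is a minimal illustration, not the actual interface: the field names ("size", "new_records", etc.) and the method name "acnet.push" are assumptions.

```python
import xmlrpc.client

def encode_packet(records, n_new):
    """Marshal one packet as an XML-RPC request.

    `records` is a list of (timestamp, value) pairs; `n_new` is the
    number of records added since the last packet. Field names here
    are illustrative assumptions.
    """
    packet = {
        "size": len(records),
        "new_records": n_new,
        "timestamps": [t for t, _ in records],
        "data": [v for _, v in records],
    }
    return xmlrpc.client.dumps((packet,), methodname="acnet.push")

def decode_packet(xml):
    """Unmarshal a packet on the receiving (damen) side."""
    (packet,), method = xmlrpc.client.loads(xml)
    return packet, method
```

Because XML-RPC rides on ordinary HTTP, a packet built this way travels through the network exactly as a web page would.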
Because the internet is used, there may be many network links in between. When damen receives the data, it signals bdmachine; bdmachine will not send more data until a reply is received from damen. The underlying protocol is TCP/IP.
When the data arrives at damen, it is unpacked by a CGI routine. The routine unpacks each packet into records, one record per clock event, and writes each record to disk. The name of the record is the timestamp suffixed by the device descriptor. The packet header also contains the number of new records since the last transmitted packet; only the new records are unpacked.
Although all events in a packet are time-ordered, the packets may pass each other over the internet. Thus, the disk acts as a buffer for the data, and because each record is named by its timestamp, a lexical listing automatically sorts the data into time order.
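The naming trick can be shown in a few lines. The name format (timestamp, underscore, device descriptor) and the device name "M:TOR109" are illustrative assumptions; any fixed-width, most-significant-digit-first timestamp makes lexical order equal time order.

```python
# Records are named timestamp + device descriptor, so a plain lexical
# sort of the filenames restores time order even when packets arrived
# out of order. Name format and device name are assumptions.
def record_name(timestamp, device):
    return f"{timestamp}_{device}"

arrivals = [                                   # packets passed each other...
    record_name("20240101T120503", "M:TOR109"),
    record_name("20240101T120501", "M:TOR109"),
    record_name("20240101T120502", "M:TOR109"),
]
in_time_order = sorted(arrivals)               # ...but sorting the names fixes it
```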
Of course, with data for seven devices arriving at up to 5 Hz (average), an eight-hour run would swamp the disk. Thus, an additional process concatenates the individual records onto a larger file. This process (with the clever name "concatenate") lists the files on the disk, ignores those written in the previous 15 or so seconds (the buffer time allowed for packets to pass each other on the internet), and appends the rest to the merged file. Once appended, the record files are deleted. Thus, there are usually fewer than 500 files on the disk at any given time.
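The "concatenate" pass described above can be sketched as follows. The paths and the one-record-per-file layout are assumptions; the 15-second settling window is from the text.

```python
import os
import time

SETTLE = 15  # seconds allowed for packets to pass each other in transit

def concatenate(record_dir, merged_path):
    """Append settled record files onto the merged file, then delete them.

    Files younger than SETTLE seconds are skipped; they may still be
    overtaken by a late packet.
    """
    now = time.time()
    for name in sorted(os.listdir(record_dir)):      # lexical == time order
        path = os.path.join(record_dir, name)
        if now - os.path.getmtime(path) < SETTLE:
            continue                                 # still in the buffer window
        with open(merged_path, "a") as merged, open(path) as record:
            merged.write(record.read())
        os.remove(path)                              # record merged; delete it
```

Run in a loop, this keeps the record directory small while the merged file grows.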
The merged file is named by run number. Unfortunately, the merging program does not know a new run has occurred until after it starts. When a new run begins, the concatenation is temporarily suspended, the start time of the new run is noted, the old run file is searched, and any records appended after the end of the old run are popped off, to be remerged onto the correct run.
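The pop-and-remerge step might look like the sketch below. It assumes one record per line, each line beginning with the same timestamp used for the on-disk record names; that format, and the function name, are assumptions for illustration.

```python
def split_run(old_run_path, new_run_path, run_end):
    """Move records time-stamped after `run_end` from the old run file
    onto the new run's file.

    Assumes one record per line, each line starting with its timestamp
    (an assumed format, matching the on-disk record names).
    """
    with open(old_run_path) as f:
        lines = f.readlines()
    keep = [l for l in lines if l.split("_")[0] <= run_end]
    move = [l for l in lines if l.split("_")[0] > run_end]
    with open(old_run_path, "w") as f:
        f.writelines(keep)          # old run keeps only its own records
    with open(new_run_path, "a") as f:
        f.writelines(move)          # late records land on the correct run
```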
Obviously, there are a lot of links in the chain, and breaking any one could result in disaster (scenarios range from losing one record to losing several hours of data). An additional process, called "keepalive", monitors all the other processes. "keepalive" is in turn monitored by the online GUI (which is monitored by the shifter). "keepalive" continually checks that the concatenation process is extant on damen, and that the acquisition jobs are extant on damen. If any of these processes disappears, "keepalive" restarts it.
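One pass of the "keepalive" policy can be sketched generically. The process names and launch commands are placeholders, and the liveness check and restart action are passed in as functions so the policy itself is testable.

```python
def keepalive_pass(watched, is_running, restart):
    """One monitoring pass: restart every watched process that is gone.

    `watched` maps a process name to its (assumed) launch command;
    `is_running(name)` and `restart(cmd)` are injected callables.
    Returns the names that had to be restarted.
    """
    restarted = []
    for name, cmd in watched.items():
        if not is_running(name):
            restart(cmd)
            restarted.append(name)
    return restarted
```

In production this pass would run in a loop, with `is_running` doing a real process-table lookup and `restart` re-launching the job.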
A final process, "checkmerged", continually reads the merged data file. Each record contains an incrementer, which increments each time the clock event occurs. "checkmerged" writes the last incrementer for each device to a file, which, in turn, is read by the online GUI. Thus, if the incrementer stops incrementing, the GUI alarms.
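The stall test at the heart of "checkmerged" reduces to comparing the last-seen incrementer per device across two passes. The per-device dictionary shape is an assumption for illustration.

```python
def stalled_devices(previous, current):
    """Return devices whose incrementer did not advance between passes.

    `previous` and `current` map device name -> last-seen incrementer
    (an assumed representation of the file "checkmerged" writes).
    """
    return [dev for dev, count in current.items()
            if previous.get(dev) == count]
```

A device that appears stalled here is exactly the condition on which the GUI would alarm.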
This figure shows a lot of information. First, here are the machines and what they do:
When damen wants to begin acquiring data, a command is sent to a servlet on dueXX (in the drawing, this is done through "keepalive"). The servlet passes this request to a process in the DAE on dueXX and returns a status to damen. The DAE gets data from the IRM and pushes it to damen. All communication between damen and the DAE goes through the servlet -- the only direct communication between the DAE and damen is when the DAE pushes data.
In the illustration, blue lines indicate TCP/IP, green lines indicate UDP, and red lines indicate internal communication.
There are two main directories, the "current" directory (where all the action takes place) and the "test" directory (where you can test new code). Both have identical structures. Furthermore, the "current" directory is defined as a link to one of several directories -- this allows switching the data buffering area on the fly (in case of overloading a directory).
Here is the directory structure:
acnet/acnet-current -> acnet/dir2 or acnet/dir1
acnet/dir1/data -> acnet/data
acnet/dir1/info -> acnet/info
acnet/dir1/status -> acnet/status
acnet/dir2/data -> acnet/data
acnet/dir2/info -> acnet/info
acnet/dir2/status -> acnet/status
Note that the final destination for merged data, information, and status does not change. All the directories named "work" have subdirectories "irm" and "mwr".
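Retargeting the "current" link on the fly can be sketched as below, assuming the acnet/ layout above. The function name is an assumption; the create-then-rename dance avoids a window in which "acnet-current" does not exist.

```python
import os

def switch_current(base, target):
    """Point base/acnet-current at a different buffer directory.

    e.g. switch_current("acnet", "dir1"). The new symlink is created
    under a temporary name and renamed into place, so readers never
    see "acnet-current" missing.
    """
    tmp = os.path.join(base, "acnet-current.tmp")
    if os.path.lexists(tmp):
        os.remove(tmp)                         # clear any stale leftover
    os.symlink(target, tmp)                    # build the new link aside
    os.replace(tmp, os.path.join(base, "acnet-current"))  # atomic swap
```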
To see all this in action, go to The ACNET DAQ Monitoring Page. It provides oodles of obscure information, along with links to monitoring and logging files, and shows what the ACNET DAQ is doing right now.