Flow of usage records

This document describes the handling of network usage records 1) by OSB.

The essential steps of the processing chain are shown in the diagram below.

Diagram Overview flow of CDR

Mediation retrieves the usage records from the network switches, converts them into the OSB Cdr format and assembles partial records.
Rating rates the records for all involved parties and stores them in the Usage Record database.
In the last step, Billing reads the records for the business partner being billed and writes them to the generated Invoice.

In real life the flow of usage records is more complicated than shown in the diagram.

1) The principles for record collection, stream registration and communication between the modules processing the stream are not restricted to usage records. Instead they are valid for all kinds of records processed by OSB.

Communication between modules

In OSB every stream of usage records is registered in the overview database table RECORD_STREAM. This table indirectly identifies the module that should process the records in the stream:

The communication between the modules is based on the concept that every module generating record streams is a record source, similar to a network or a switch.
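As a rough illustration of the control data involved, one row of RECORD_STREAM might carry fields like the following. The real column set is defined in the database design; all names and types in this sketch are assumptions, not the actual table layout:

```cpp
#include <string>

// Hypothetical sketch of one row in the RECORD_STREAM control table.
// The real column set is defined in the database design; all names and
// types here are assumptions for illustration only.
struct RecordStreamRow {
    long        streamId;      // unique id of the record stream
    std::string sourceModule;  // record source that created the stream
    std::string targetModule;  // module that should process the records
    std::string fileName;      // input file; empty for TCP/IP streams
    long        recordCount;   // records written by the source, if known
    char        status;        // e.g. 'N' = new, 'P' = processing, 'D' = done
};
```

The row identifies the processing module only indirectly: a module finds its work by selecting the rows whose target matches its own name.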

Modules processing usage records should be able to read records from a file or from a TCP/IP socket. Both modes of operation have different characteristics:

File input

When a program reads its input from files, the fetch principle applies. The program periodically browses the table RECORD_STREAM for any files to be processed. If an input file is available, it has already been closed by its creator.

TCP/IP input

When a program reads its input from a TCP/IP socket, the bring principle applies. The creator of the record stream first registers the stream in the database table and then contacts the processing module on a published TCP/IP socket. For each record stream a dedicated socket connection should be used.
In this mode the stream is not already complete and closed by its creator while the current module processes the records. Instead, the latter assumes that the stream has been completely written by its creator once the connection is closed.
Note that it is possible to verify whether the connection was regularly closed by the record source or whether it was broken. In the former situation, the table RECORD_STREAM contains the number of records sent by the source, and the receiver (processor) of the stream can compare this against its own statistics.
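This count comparison can be sketched in a few lines. Representing the registered count as an optional value, absent when the connection broke before the source could store it, is an assumption for illustration:

```cpp
#include <optional>

// Hedged sketch: decide whether a TCP/IP record stream was received
// completely. If the source closed the connection regularly, it has
// stored its record count in RECORD_STREAM; std::nullopt models a broken
// connection where that count was never written. Names are assumptions.
bool streamComplete(std::optional<long> countInRecordStream,
                    long recordsReceived)
{
    // No count registered: the connection broke before the source finished.
    if (!countInRecordStream)
        return false;
    // Regular close: the receiver's own statistics must match the source's.
    return *countInRecordStream == recordsReceived;
}
```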

  For the first version OSB will support the file mode only. The design of the library, however, must also accommodate the TCP/IP mode. This means, e.g., that FILE* must be used to read the input instead of std::istream&, because there is no portable way to create an std::istream from a file descriptor as used in socket connections.
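The reason FILE* covers both modes can be illustrated with the POSIX call fdopen(), which wraps an existing descriptor such as a socket in a FILE* stream. The helper below is a hedged sketch of this idea, not part of the OSB library:

```cpp
#include <cstdio>
#include <unistd.h>

// Why FILE* works for both modes: a plain file is opened with fopen(),
// while a socket descriptor (e.g. returned by accept()) can be wrapped
// with the POSIX fdopen() call. Both yield a FILE* that the same record
// reading code can consume. This helper is an illustrative sketch only.
FILE* openRecordInput(const char* path, int socketFd)
{
    if (path)                        // file mode: fetch principle
        return std::fopen(path, "rb");
    return fdopen(socketFd, "rb");   // TCP/IP mode: bring principle
}
```

No equivalent of fdopen() exists for std::istream in portable C++, which is what rules out std::istream& in the library interface.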

Module details

For each module in the usage record processing chain, this chapter describes the input and output in detail for file streams.

A general rule applies for all modules: if a whole input stream is rejected, for whatever reason, no output streams should be generated, i.e., registered in the control table RECORD_STREAM.

As previously explained, the input and output streams do not change when working with TCP/IP sockets; the main difference is how the receiving module becomes aware of a new stream to process.

Record collection

Record collection retrieves the records from the elements of the operator's own network, from partner networks and from other record sources:

Diagram Record collection with output files

The module is special in several ways.

If the source provides the records as a permanent stream (as opposed to files), record collection has to apply some kind of artificial packaging in order to create an entry in the RECORD_STREAM table. At the same time a backup file should be maintained that contains the records of the registered stream.
The main reason for this is that call accounting is based on this table.
Another reason is recovery after a system crash, where the stored information (yet to be determined) can be used to identify which records need reprocessing. Details for this scenario must be worked out case by case.


Use the following guideline to decide whether record collection needs to create a backup:
The backup must provide easy access to the records of a stream as described by its data in RECORD_STREAM.
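A minimal sketch of such artificial packaging, assuming a fixed package size and cutting by record count only (the real cut criteria, e.g. time-based packaging, are yet to be defined):

```cpp
#include <algorithm>
#include <vector>

// Hedged sketch of "artificial packaging": cut a permanent record stream
// into fixed-size packages so that each package can be registered as one
// entry in RECORD_STREAM and backed up to file. The fixed package size
// is an assumption for illustration.
struct Package {
    long firstRecord;  // index of the first record in the package
    long size;         // number of records in the package
};

std::vector<Package> packageStream(long totalRecords, long packageSize)
{
    std::vector<Package> packages;
    for (long first = 0; first < totalRecords; first += packageSize)
        packages.push_back({first,
                            std::min(packageSize, totalRecords - first)});
    return packages;
}
```

Each resulting package corresponds to one RECORD_STREAM entry, so call accounting and crash recovery can refer to a well-defined set of records.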

Conversion and Aggregation

Conversion is the process that converts the usage records from the format of the network elements into the OSB internal Cdr format. Aggregation is responsible for assembling partial records. The diagram below shows the input and output record streams of both processes:

Diagram Record streams for Conversion and Aggregation


Conversion determines the Raw usage records files to process from the control table RECORD_STREAM. For each of its input files the process generates up to three output files, each of which is registered in the control table:


Aggregation uses the control table RECORD_STREAM to get the partial OSB Cdr files that it should process. These are the partial Cdr files generated by Conversion and the leftover files created and maintained by the process itself.
Aggregation has three different types of output files, each of which is registered in the control table:

Note that in Aggregation there is no one-to-one relationship between the input and output files. This characteristic is the major problem in call accounting: several original (partial) records are combined into one complete Cdr, and we must be able to trace each of the original records.
The handling of input and output files must be described in detail in the design of Aggregation.
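One conceivable way to keep the required trace is a mapping from each complete Cdr to the original partial records it was assembled from. The identifier types below are illustrative assumptions, not the actual Aggregation design:

```cpp
#include <map>
#include <string>
#include <vector>

// Hedged sketch: trace from a complete Cdr back to the original partial
// records it was assembled from. The identifier types are assumptions;
// the real keys would come from the Cdr and RECORD_STREAM data.
using PartialId = std::string;
using CdrId     = std::string;

class AggregationTrace {
public:
    void addPartial(const CdrId& cdr, const PartialId& partial) {
        parts_[cdr].push_back(partial);
    }
    // All partial records that were combined into the given complete Cdr.
    const std::vector<PartialId>& partsOf(const CdrId& cdr) const {
        static const std::vector<PartialId> empty;
        auto it = parts_.find(cdr);
        return it == parts_.end() ? empty : it->second;
    }
private:
    std::map<CdrId, std::vector<PartialId>> parts_;
};
```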


Rating

The process Rating is responsible for rating the Cdr for all involved parties and for storing the rated Cdr in the UsageRecord database. The discussion of the record streams is based on the deployment shown in the diagram below, where the three sub-processes are incorporated into the overall rating process. Depending on the actual requirements of an OSB installation this arrangement may change, and additional record streams may be introduced accordingly.
The input stream OSB Cdr as well as the two output streams Error and Filter contain OSB Cdr encoded in ASN.1.

Diagram Record streams of the rating process

Party analysis

This sub-process determines the input files for the rating process from the RECORD_STREAM control table. For each Cdr in the input stream, Party Analysis creates a Cdr for every party involved in the network usage. The generated Cdr are forwarded to the Rating Engine in memory, which means that no record stream is created. Any Cdr with errors are written to the Error file.
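The fan-out performed by Party Analysis can be sketched as follows; the simplified Cdr structure and its field names are assumptions for illustration only:

```cpp
#include <string>
#include <vector>

// Hedged sketch of the Party Analysis fan-out: one input Cdr yields one
// copy per involved party (e.g. A-party, B-party, carrier). The Cdr
// structure here is a simplified assumption for illustration.
struct Cdr {
    std::string callId;  // identifies the original network usage
    std::string party;   // the party this copy of the Cdr is rated for
};

std::vector<Cdr> fanOut(const std::string& callId,
                        const std::vector<std::string>& parties)
{
    std::vector<Cdr> out;
    for (const auto& p : parties)
        out.push_back({callId, p});  // one Cdr per involved party
    return out;
}
```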

Rating Engine

The Rating Engine validates the network usage for the involved party and rates each used service. Rated Cdr are forwarded to Record Storage in memory; Cdr with one or more errors are written to the Error file, and filtered Cdr are written to the Filter file.

Currently the Rating Engine is the only process that should filter records, because it is the only module working with tariff objects, which provide a generic mechanism to define filter criteria, e.g., service or tariff classes.
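As an illustration of such a generic filter criterion, a tariff object might carry a set of service classes whose Cdr are to be filtered. The class below is a hedged sketch, not the actual tariff interface:

```cpp
#include <set>
#include <string>
#include <utility>

// Hedged sketch of a generic filter criterion as provided by tariff
// objects: a Cdr is filtered if its service class matches one of the
// configured classes. Names are assumptions for illustration.
class ServiceClassFilter {
public:
    explicit ServiceClassFilter(std::set<std::string> classes)
        : classes_(std::move(classes)) {}

    // True if a Cdr with this service class should go to the Filter file.
    bool matches(const std::string& serviceClass) const {
        return classes_.count(serviceClass) != 0;
    }
private:
    std::set<std::string> classes_;
};
```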

Record Storage

The main responsibility of Record Storage is to store the rated Cdr in the UsageRecord table(s) and to update the balance sheets accordingly 3). For all Cdr where both tasks are successfully completed, the module updates the control table RECORD_STREAM accordingly.
Record Storage breaks the relationship between a Cdr and its record stream: for performance and database maintenance reasons, the usage record database stores the Cdr grouped by time and/or subscriber. The module has another property that should be mentioned: it converts the general OSB Cdr format into the project-specific record format, because for a given OSB installation not all information that is available in the Cdr is needed on invoices.
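The grouping by time and/or subscriber could be modelled with a partition key like the following; the field names and the month-based granularity are assumptions, not the actual database design:

```cpp
#include <string>

// Hedged sketch: Record Storage groups rated Cdr by time and/or
// subscriber rather than by record stream. Field names and the
// month-based granularity are assumptions for illustration.
struct PartitionKey {
    int         yearMonth;   // e.g. 200103 for March 2001
    std::string subscriber;  // subscriber / business partner id
};

PartitionKey keyFor(int year, int month, const std::string& subscriber)
{
    return {year * 100 + month, subscriber};
}
```

A key of this kind would decide which UsageRecord table or partition receives the Cdr, independently of the stream it arrived in.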

3) Other project-specific tasks, e.g. the creation of statistics or daily summary records, may be added to the functionality of Record Storage. Such additional tasks, however, do not impact the flow of CDR and its control via RECORD_STREAM.


Billing

To be described after all of the above has been understood and implemented. Billing completes the call accounting chain for the original incoming files. The key issue is performance: how do we store the call accounting information in a way that allows for updates without overhead?

Call accounting

Goes into a separate document