CP Split™ Technology



This post describes how the publisher/subscriber functions of the node network and the CP Split technology are used to transmit information between nodes in a node-to-node (n2n) architecture.

[Image: Cps_basic_components_3]

The image above describes the basic components and processes. The two arrows coming out of the Template Models box depict both publisher and subscriber/presenter grid-based template-models (spreadsheet workbooks) used by the nodes.

The Node as Publisher box depicts a node using one or more of its publisher template-models to:

  • Acquire content from the sources specified by its models -- as depicted by the arrow from Data Input to Data/Information Sources -- and place each content element in the cells of software grids (spreadsheet worksheets).

  • Manipulate the content in the cells as necessary using mathematical, statistical, financial, logical, inferential, date & time, and/or text functions as defined by the algorithms in its template-models.

  • Package the resulting content in encrypted, delimited (e.g., comma-separated value) Content Files and transmit them to its subscribers via e-mail or any other method, as depicted by the Content File box. Note that these files contain the actual content to be reported, including (a) ranges of calculated and raw numeric values and (b) text (e.g., strings and blocks of alphanumeric text, hyperlinks to documents and graphic files, web site URLs, etc.). They do not, however, contain any formatting instructions or metadata, which makes the files very small.
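The three publisher steps above (acquire, manipulate, package) can be sketched roughly as follows. This is an illustrative toy, not the actual CP Split implementation; the grid layout, field names, and the derived-total calculation are invented for the example:

```python
import csv
import io

def build_content_file(raw_rows):
    """Sketch of a publisher node: place acquired values in grid rows,
    manipulate them with a function (here, a derived total), and package
    everything as a delimited Content File with no formatting or metadata."""
    grid = []
    for name, q1, q2 in raw_rows:            # acquire content into grid cells
        grid.append([name, q1, q2, q1 + q2])  # manipulate: compute a total
    buf = io.StringIO()
    csv.writer(buf).writerows(grid)           # package: delimited text only
    return buf.getvalue()

content_file = build_content_file([("visits", 120, 135), ("referrals", 8, 11)])
print(content_file)
```

Because the file carries only delimited values, its size stays a small fraction of what an equivalent marked-up or formatted document would be.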

The Node as Subscriber box depicts a node’s subscriber/presenter template-model taking the contents of the Content File it receives and placing each content element in pre-defined cells of its own template-model grid, whose structure mirrors the template-model the publisher used to create the Content File. In this way, the subscriber/presenter template-model “knows” what content element is located in each cell by virtue of the cell’s location in the grid. The subscriber/presenter template-model then does two things:

  • It immediately formats the content obtained from the Content File and displays it in interactive reports responsive to user requests, as depicted by the arrow to Report Display.

  • It allows the user to query or input new data into its user interface or to modify existing data, which is then sent to a database for storage, as depicted by the arrow from Data Input to Data/Information Sources.
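The mirrored-grid idea can be sketched as below. The cell layout is a made-up example; the point is that the subscriber recovers meaning purely from an element's position in the grid, without tags or queries:

```python
import csv
import io

# Hypothetical shared layout: both publisher and subscriber templates
# agree that row 0 holds metric, Q1, Q2, and total, in that column order.
LAYOUT = {(0, 0): "metric", (0, 1): "Q1", (0, 2): "Q2", (0, 3): "total"}

def consume(content_file_text):
    """Sketch of a subscriber/presenter: parse the Content File into the
    mirrored grid and label each element by its pre-defined cell position."""
    grid = list(csv.reader(io.StringIO(content_file_text)))
    return {LAYOUT[(0, c)]: grid[0][c] for c in range(4)}

report = consume("visits,120,135,255\n")
print(report["total"])  # meaning known from the cell's location alone
```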

Note that a single node can have both publisher and subscriber functionalities and a single node can publish to any number of subscribers. Also note that a node can interface with just about any software application via APIs.

[Image: Cps_node_2]

The image above depicts a node with both publisher and subscriber/presenter functionality. In this image, a node's:

  • Publisher functions involve managing subscriptions and transmitting Content Files to its subscribers as defined by its Publisher Template-Model.
  • Subscriber/Presenter functions, which are defined by a node's Subscriber/Presenter Template-Model, include:
    • Subscriber functions, which involve retrieving Content Files sent from publisher nodes
    • Presenter functions, which involve formatting/rendering the Content File's contents and presenting them as interactive reports.
  • Node Processing functions, which are also defined by a node's Template-Models, include the processes that:
    • A publishing node uses to obtain and manipulate content
    • A subscribing/presenting node uses to (a) convert (format/render) a Content File into user-interactive reports, (b) enable manual data input, (c) modify existing data in reports, and (d) query external data sources for additional content and add it to the report
    • Enable a node to interface with external software applications through APIs.

[Image: Cps_nodenetwork_3]

The graphic above depicts how a network of nodes operates to exchange information:

Step 1: The solid black line depicts the node at the top retrieving and processing content to create a Content File using node functions defined in its Publisher Template-Model.

Step 2: The solid blue arrows show the node at the top using the publisher functions defined in its Publisher Template-Model to send Content Files via encrypted e-mail attachments to the node at the upper right, the nodes on the left, and the node at the bottom.

Step 3: The dashed arrow shows that the top node, after sending Content Files to the node on its left, subsequently receives Content Files from that same node via the subscriber functionality of its Subscriber/Presenter Template-Model. This means both of these nodes invoke their publisher and subscriber functionality.

Step 4: These two nodes only receive Content Files; their publisher functionality is not invoked.

Step 5: These dotted arrows show Content Files being passed sequentially from one node to the next, with each node adding new information and/or modifying the files it receives, before sending extended Content Files to the next node.

Step 6: The bottom node receives Content Files from two other nodes. After forming a composite Content File from the accumulated content, as defined by its Publisher Template-Model, it sends the composite Content File back to the node at the top.
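Steps 5 and 6 reduce to two simple operations, extending a Content File in a chain and merging accumulated files into a composite. A minimal sketch, with invented node names and row contents:

```python
def extend(content_rows, new_rows):
    """Step 5 sketch: a node appends its own information to the
    Content File it received before forwarding it to the next node."""
    return content_rows + new_rows

def composite(*files):
    """Step 6 sketch: the final node merges the accumulated content
    from multiple incoming Content Files into one composite file."""
    merged = []
    for rows in files:
        merged.extend(rows)
    return merged

chain = extend([["clinic_a", 12]], [["clinic_b", 7]])   # passed node to node
final = composite(chain, [["lab_c", 3]])                # merged at the bottom node
print(final)
```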




We are introducing a novel technology that offers a simple, transparent way to exchange information securely and economically between any software applications and data stores via asynchronous, publish/subscribe, node-to-node networks using our patented CP Split™ software method.

This unique software technology is especially useful for industries in which loosely connected networks of people and computers analyze & exchange information from disparate sources in a variety of communication & working environments. It accommodates the needs of all users, from people with continuous broadband to occasionally-connected individuals using low speed dial-up service. And it facilitates collaboration across all organizational and physical boundaries (e.g., from functional unit to functional unit, company to company, and country to country).

The unique value proposition of our technology is that it:

  • Saves businesses time, money, resources and hassle by:
    • Being non-disruptive to existing I.T. systems and networks
    • Reducing complexity and problems since it requires no VPN configuration, avoids firewall issues and needs little if any ongoing I.T. support
    • Reducing demands on central servers and conserving precious resources
    • Enhancing network resiliency
    • Providing uniquely powerful security methods
    • Being fully compatible with XML and able to maintain hierarchical relationships, yet operating more efficiently.
  • Fosters learning, knowledge-building, and collaborative decision-making by:
    • Tailoring reports and just-in-time instructional materials to end-users' particular roles, responsibilities and needs
    • Enabling exceptionally rich and responsive portable reporting
    • Enabling networks of individuals across organizational boundaries to share diverse experiences, data sources, knowledge, ideas and insights to increase innovation and more effective decision-making.

The primary purpose of this blog is to make people aware of our innovation and its unique set of benefits in order to expand our collaborative network of information technology experts, software companies, and government agencies. While the discussion on this site focuses on use cases in healthcare, the technology can be used in any knowledge worker industry and profession.


Let's begin by defining key components and processes in a node-to-node network.

1. What is a node and a node-to-node network?

A node is a software application, with publisher and subscriber functionality, that manages the transfer of information between two or more computers in an asynchronous manner. A node on one computer is the publisher (sender) of information, and at least one other computer in its network is the subscriber (recipient) of that information. This node-to-node (N2N) information exchange is, in effect, an application-to-application data transfer process.

The data transfer process requires each computer in a network of nodes to support an operating system and a connection to the Internet via broadband, dial-up, or other communication service. At one end of the connection, the Publishing node must authorize the information transfer by authenticating that the Subscribing node is allowed to receive the information. At the other end of the connection, each Subscribing node must allow the Publishing node to deposit the information in an accessible place.
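The authorization step described above can be sketched minimally. The node identifiers and allow-list below are invented for illustration; a real deployment would use the subscription-management and authentication machinery discussed in later posts:

```python
# Toy sketch: a publishing node checks an allow-list before depositing
# information for a subscriber, and refuses unknown nodes.
AUTHORIZED_SUBSCRIBERS = {"node-17", "node-23"}  # hypothetical node IDs

def may_publish_to(node_id):
    """Return True only if the subscribing node is authorized to receive."""
    return node_id in AUTHORIZED_SUBSCRIBERS

print(may_publish_to("node-17"))  # authorized
print(may_publish_to("node-99"))  # not authorized
```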

2. What other technologies do similar things (such as TCP/IP, the Internet protocol suite used by e-mail that includes the application-layer File Transfer Protocol, FTP)?

The term File Transfer means copying a file from one machine to another. FTP allows authorized users to log into a remote system, identify themselves, list remote directories, copy files to or from the remote machine, and execute a few simple commands remotely. Although FTP allows direct interactive use by humans, the protocol is designed for program manipulation at the application layer for automating the file transfer process. FTP allows a user to access multiple machines in a single "session" and maintains separate TCP connections.

FTP can handle third party transfers. A client opens a control connection to servers on two remote machines, A and B. The client must have permission to transfer a file from A and permission to transfer a file to B. The client asks the server on A to transfer the file to B. The server on A forms a direct TCP connection with server B and transfers the data across the Internet to B. The client retains control of the transfer, but does not participate in moving data.

3. What are CP Split™ (CPS) Nodes?

CPS Nodes leverage the CP Split™ software method, as explained below and in subsequent posts. Briefly, CPS Nodes use automated data grid template (spreadsheet) software to interact with each other at the presentation level. A CPS Publisher Template (PT) retrieves data from the requisite data stores and assembles the data in an organized (meaningful/logical) way to form preplanned data structures in the cells of the grid template. The Publisher Node then ships the data to its subscribing nodes by automatically taking the data from the grid template, storing them in an encrypted, delimited CPS Data File, and sending the file. This creates an interoperable platform for the simple, secure, fluid exchange of information between disparate system architectures through the transmission of content stored in highly efficient data files.

Upon receipt, the CPS Subscriber Nodes use their corresponding Subscriber Templates to render & present (and/or export) the contents of the CPS Data Files.

I will show how the CP Split method provides the only software codec (coder-decoder) that enables an encoder to organize data elements into configurations from which a decoder locates content elements for processing (e.g., formatting) based solely on their positions within the configurations, without using database queries or markup tags.
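The encoder/decoder idea can be sketched as below. The positional layout is a made-up example; the key property is that the decoder locates every content element by its position alone, with no markup tags and no database queries:

```python
# Shared positional agreement between encoder and decoder (hypothetical
# fields): element meaning is fixed by column position, nothing else.
POSITIONS = ["patient_id", "systolic", "diastolic"]

def encode(record):
    """Encoder sketch: write content elements into agreed positions."""
    return ",".join(str(record[name]) for name in POSITIONS)

def decode(line):
    """Decoder sketch: recover each element from its position alone."""
    values = line.split(",")
    return {name: values[i] for i, name in enumerate(POSITIONS)}

encoded = encode({"patient_id": "p01", "systolic": 120, "diastolic": 80})
print(decode(encoded)["systolic"])  # located purely by position
```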

4. What is CPS Universal Translation?

Universal translation is a process by which a Subscribing Node notifies a Publishing Node as to how the information must be formatted or translated to accommodate the requirements of the subscribing node. This enables the Publishing Node to transform the information as necessary, so it can be used by different Subscribing Nodes (e.g., performing language translations, terminology replacements, data set modifications, and data format transformations).
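A toy sketch of this idea follows. The spec format, terminology map, and scaling transformation are all invented for illustration; the point is that the subscriber's requirements drive a transformation the publisher applies before sending:

```python
def translate(rows, spec):
    """Sketch of universal translation: apply a subscriber-supplied spec
    (terminology replacements and a unit-conversion factor) to content
    before it is packaged for that subscriber."""
    out = []
    for term, value in rows:
        term = spec.get("terms", {}).get(term, term)  # terminology replacement
        value = value * spec.get("scale", 1)          # data format/unit change
        out.append((term, value))
    return out

# A hypothetical subscriber asks for its own terminology, no unit change.
spec = {"terms": {"wt": "weight_kg"}, "scale": 1}
print(translate([("wt", 72)], spec))
```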

5. What are CPS Composite Reports?

Composite reports are generated when (a) a Publishing Node accesses information from disparate sources, integrates the information into a single CPS Data File, and sends it to its subscribers where a composite report is generated or when (b) a subscribing node receives CPS Data Files containing different information from multiple publishing nodes and integrates it all into a composite report.

Exceptionally high-level security is maintained end-to-end using encrypted data and template files, virtual drives, and MultiCryption technology (discussed in a subsequent post).

Introducing the CP Split™ Technology

CP Split refers to the way our patented technology splits content (data & information) from presentation (reports) using grid software (spreadsheets). Separating content from presentation is familiar to all of us from XML and HTML, but only the CP Split does it with grid software templates and configurations of content in delimited files.

I will show how the CP Split technology -- interoperating with any Health IT tools -- enables mesh networks of nodes to composite comprehensive patient profile reports from disparate sources, while delivering these powerful benefits:

  • Saves time, money and resources by minimizing data transmission and storage costs, while consuming minimal bandwidth.
  • Has minimal impact on existing IT systems and networks, so current operations can continue without disruption.
  • Reduces complexity and hassle by requiring no VPN configuration, avoiding firewall issues, and needing little if any IT support.
  • Tailors reports to end-users' needs by supporting both report compositing, whereby different reports can be combined into an integrated report of the "big picture," and report fragmenting, whereby components of a single report can be divided into multiple smaller ones.
  • Tailors instructional materials to end-users' particular needs by enabling competency-based and just-in-time eLearning, whereby the curriculum content delivered to an individual is determined by the person's current level of knowledge and/or particular knowledge needs.
  • Allows people to obtain, compute, distribute and present information asynchronously using local resources and only brief, occasional network connectivity, which reduces demands on central servers, speeds reporting, increases mobility/portability, and enhances network resiliency (i.e., the network keeps working even when individual nodes are disrupted, unlike a central server disruption, which brings its entire network down).
  • Enables loosely coupled networks of individuals across organizational boundaries to share diverse experiences, data sources, information, knowledge, expertise, perspectives, ideas and insights, which increases innovation and fosters more effective decision-making.
  • Is fully compatible with XML and able to maintain hierarchical relationships, but it does not require markup tags, namespaces, schemas, XSLT, stylesheets, etc.
When applied to Health IT, it provides a flexible, economical, robust, secure, and open information architecture that supports the broad information management needs of clinicians, researchers, and the public. It fosters the acquisition, analysis, organization, dissemination, and reporting of comprehensive content relevant to:
  • Biomedical informatics, including managing healthcare delivery information, reducing medical errors, providing decision support for clinicians, extracting outcome and public health information from large datasets, and predicting health events and
  • Bioinformatics involving managing and interpreting scientific research data.

I will discuss all of this in subsequent posts and welcome your questions and comments.

Steve Beller, PhD



The CP Split can utilize MultiCryption™ software security tools to provide a unique, multi-level, data security process for exceptional data protection.

MultiCryption software uses four special levels of encryption for a virtually foolproof way to secure data files as they move across the Internet. It sets a new standard for data protection -- that is even immune to brute force attacks -- with these unique security methods:

  1. File Decomposition/Recomposition - Breaks a data file into several sub-files for transmission and reassembles them upon receipt using keys
  2. Data Expansion/Contraction - Separates the words, punctuation and numeric data in each sub-file into individual characters for transmission and puts them back into the correct words and numbers upon receipt using keys
  3. Counter-Crypto - Adds additional characters, based on a statistical distribution analysis, for transmission and removes the extra characters upon receipt using keys
  4. Data Scrambling/Unscrambling - Mixes up all the characters in a random manner for transmission and rearranges them upon receipt using keys
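To make level 4 concrete, here is a toy model of key-driven scrambling and unscrambling. This is not the actual MultiCryption algorithm, just an illustration of the idea that a key determines a reversible character permutation:

```python
import random

def scramble(text, key):
    """Toy sketch: a key seeds a pseudo-random permutation that mixes up
    all the characters for transmission."""
    order = list(range(len(text)))
    random.Random(key).shuffle(order)        # key-derived permutation
    return "".join(text[i] for i in order)

def unscramble(scrambled, key):
    """Reproduce the same key-derived permutation and invert it to
    rearrange the characters back upon receipt."""
    order = list(range(len(scrambled)))
    random.Random(key).shuffle(order)
    restored = [""] * len(scrambled)
    for pos, i in enumerate(order):
        restored[i] = scrambled[pos]
    return "".join(restored)

s = scramble("secure data", key=42)
print(unscramble(s, key=42))  # round-trips to the original text
```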

For more, see: MultiCryption™ technology



As discussed in previous posts, the nodes use the CP Split’s patented process, which employs template-models to:

  1. Create Content Configurations in a Content File Production (CFP) Grid using spreadsheet and third-party applications
  2. Transfer these Content Configurations into a Content File for storage and distribution
  3. Transfer the Content Configurations from the Content File into a Content File Consumption (CFC) Grid where (a) formatting instructions from spreadsheet and third-party applications present reports by rendering each content element based on its location within the spreadsheet and (b) the content elements may also be sent to populate databases.

This process differentiates the CP Split from all other technologies used for the distribution and presentation of reports. Following is a discussion of these differences and the benefits of using the CP Split technology.

How Does the CP Split Differ from Database Report Writers?

The CP Split technology differs from database report writers in the following operations:

  • Database report writers and the CP Split differ in the way they retrieve the content for a report:
    • With a database report writer, the end user/client (i.e., a Subscriber node) must query a database, and indicate how the returned data are to be analyzed and formatted. The user selects predefined report formats or creates new ones.
    • With the CP Split, the end user does not query a database when generating a report. Instead, a Publisher node does any required database querying, as well as any required data analyses and other non-formatting manipulations of the data returned from the queries. It then organizes the resulting content into Content Configurations in its CFP Grid. The framework (grid-based structure) for organizing the content into configurations is created when the Publisher and Subscriber/Presenter Templates are developed, and it assures that the Content Configurations in the Publisher's CFP Grid correspond to its Subscribers’ CFC Grids (the next post discusses best practices for building Content Configurations). As with database reports, the queries, analytics, and formats may be predefined for its Subscribers, or new ones may be created via instructions from Subscriber nodes.
  • Once the data queries and analytics are done, the resulting content must be formatted for report presentation. Types of report formats include columnar, crosstab, form, label, and OLAP/pivot table, and their views may include graphs/charts, lists, tables, text boxes, and more. Database report writers and the CP Split differ in the way they manage and format content for presentation:
    • Database report writers render queried content through instructions that format content elements based on their fields (and possibly other attributes). Reports can be published to a variety of file formats for distribution, including XML, PDF, HTML, RTF, Word, Excel, text, and more.
    • With the CP Split, the Publisher node places the Content Configurations from the CFP Grid to Content Files for storage and transmission, and then sends the files to its Subscriber nodes. Upon receipt, each Subscriber node places the Content Configurations from the Content File into its CFC Grid and formats the content elements for presentation through formatting instructions applied to particular content elements based on their cell locations in the CFC Grid.

Compared to database report writers, the CP Split has distinct advantages when disseminating interactive reports containing numeric values and related visualizations (e.g., charts/graphs, etc.). This is because the CP Split technology keeps the numeric content "live" – i.e., the numbers are not embedded in markup tags or converted to text – so they are ready for reuse immediately. This means there is no need to re-enter the data, use screen scrapers, or do time-consuming data parsing and transformations when using the CP Split. Furthermore, the CP Split enables content to be transmitted in its most efficient form, i.e., in delimited formats (such as CSV files) that contain no formatting instructions, markup tags, or programming code.
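The "live numbers" point can be illustrated in a few lines: values parsed out of a delimited Content File are immediately ready for new calculations, with no markup stripping or screen scraping. The sample values are invented:

```python
import csv
import io

# A row from a hypothetical delimited Content File: pure values, no tags.
content = "120,135,141\n"
row = next(csv.reader(io.StringIO(content)))
values = [float(v) for v in row]       # numbers usable at once
print(sum(values) / len(values))       # e.g., recompute a mean on the fly
```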

How Does the CP Split Differ from Spreadsheet Reports?

To understand the CP Split more fully, it is necessary to compare it to technologies beyond database report writers.

For example, it is possible to distribute entire spreadsheet workbooks filled with content, formatting instructions and macros. This is a very inefficient approach because every time the content is updated or used by a different model, new workbooks, which can be very large (many megabytes), must be distributed. This approach also makes it difficult to track changes made to the content or models over time (for auditing purposes, for example), since multiple versions of the workbooks must be stored, which can require complex versioning controls.

A more sensible and elegant method for delivering report updates is to use the CP Split to distribute the content, and only the content, in delimited text Content Files. These files are a tiny fraction of the size of entire workbooks because they do not contain formatting instructions, code, or markup tags. In addition, they provide easy auditing (through change management methods) and file management (by using ID numbers to maintain the proper association between Content Files and the template files that produce and consume them). The workbooks containing the models are only redistributed if the models represented in the templates change, which may be necessary, for example, if the schema of the source data changes.
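The ID-based association mentioned above might look roughly like this. The header row convention and the template identifier are invented for illustration:

```python
TEMPLATE_ID = "PT-0042"  # hypothetical Publisher Template ID

def tag_content_file(rows):
    """Sketch: prepend an ID row so the Content File stays associated
    with the template files that produce and consume it."""
    return [["#template", TEMPLATE_ID]] + rows

def check_match(tagged_rows, expected_id):
    """Sketch: a consuming node verifies the file matches its template."""
    header = tagged_rows[0]
    return header[0] == "#template" and header[1] == expected_id

tagged = tag_content_file([["visits", 120]])
print(check_match(tagged, "PT-0042"))  # consumed only on a match
```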

Benefits of the CP Split Technology

The benefits of this unique approach are realized when content is shared between nodes using different template models to generate different reports, or between different nodes with the same template model to generate the same report.

The CP Split technology, therefore, offers this unique set of benefits:

  • Content from diverse sources can be integrated and stored in a uniform structure for use across multiple platforms and applications, which provides a simple, transparent (human readable), and auditable interoperable architecture

  • Different audiences can receive different portions of the content and have it rendered in particular ways based on what they need for personalized reports

  • Content can be integrated from different sources easily for distributing composite reports

  • Portions of a Content File can be sent to different data stores (e.g., to populate a data warehouse)

  • Content can be manipulated (e.g., analyzed, adjusted, transformed) without costly conversion from appearance-based text representations

  • Content can be repurposed quickly and easily for different presentation media

  • Content is transported in smaller files that consume less bandwidth because they do not contain formatting instructions, markup tags, or code

  • Exceptional speed and efficiency are achieved when dealing with numbers and computations, and when generating charts and graphs, because the Content Configurations can be distributed in their proper "serialized" order, which enables rendering engines to generate them more quickly and easily

  • Different models with different formulas can be used to calculate the same set of data from a Content File, and sets of data from Content Files can be modified on-the-fly to accommodate different computational models (e.g., when doing real-time "what-if" scenarios, when slicing & dicing aggregated data) -- all without complex or time-consuming pre-processing, e.g., there is no need for database queries, for XML parsing and XSLT transformations, and for online analytic processing (OLAP) by the client.

  • Changes can be made to the data in a model and those changes distributed as necessary, which is important when (a) editing out mistakes or updating sets of data, (b) creating and analyzing what-if scenarios, and (c) computing the same set of data using several analytic models because these activities may require somewhat different data.

  • Data can be added to a model and those additions distributed when necessary, which is important in collaborative situations when (a) different people must input data to complete a data set, (b) automated (unmanned) nodes supply data, and (c) several analytic models are used to compute the same set of data, and certain models require additional data

  • Portions of a data set can be restricted from being accessed by particular models, which is important when different users with different roles require only a portion of a data set; it helps assure people get to see only the data they need to minimize information overload and protect data from being viewed by unauthorized persons

  • Content can be "scrambled" for a unique form of security that is forever immune to brute force attacks (see MultiCryption).



Up to this point, I described how the nodes' asynchronous publish-subscribe process works, and discussed the use of spreadsheet templates for producing and consuming content files. This post describes the inner workings of the CP Split technology.

The Publisher Template

If a node has publisher functionality, its Publisher Template must be pre-configured to execute the operations required for Content File creation and transmission. During this process, a reference tuple is created. Once CP Split is configured, the reference tuple is used in the process to ensure continuous referential integrity and entity integrity. Hence, this reference tuple promotes versioning.

One type of pre-configuration involves database queries. That is, if a Content File will contain data from one or more databases, the proper SQL/ODBC query code (macro/script) must be written in its template’s grid code layer. In an MS Windows operating system using Excel as the node’s underlying application, for example, the query for each database must be configured with the correct login ID & password, and it must use the correct ODBC drivers. In addition, the proper fields and records must be identified, as well as the particular spreadsheet cells into which the queried data are to be sent.
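The pre-configured query step can be sketched as below, using the standard-library sqlite3 module as a stand-in for the SQL/ODBC drivers, login credentials, and Excel macro layer the post describes; the table, fields, and cell assignments are invented:

```python
import sqlite3

# Stand-in data store (the post's scenario would use ODBC to a real DB).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vitals (patient TEXT, systolic INTEGER)")
conn.execute("INSERT INTO vitals VALUES ('p01', 120), ('p02', 135)")

# Queried records land in the particular grid cells the template designates.
grid = {}  # (row, col) -> value, standing in for spreadsheet cells
for r, (patient, systolic) in enumerate(
        conn.execute("SELECT patient, systolic FROM vitals ORDER BY patient")):
    grid[(r, 0)] = patient
    grid[(r, 1)] = systolic

print(grid[(1, 1)])
```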

Using a Publisher Template’s macros to execute the queries enables them to be initiated automatically via remote request by having the Publisher node execute the correct queries for each of its Subscriber nodes based on ad hoc and pre-scheduled requests from its Subscriber(s). The following key processes occur automatically:

  • Prior to creating a Content File for a Subscribing node, a subscription manager in the Publisher Template must store the IP addresses and authorization information for all Subscriber nodes authorized to communicate with it. This can be done, for example, by distributing to each node a list of authorized nodes, by using a centralized directory, or by having information requests and responses go to intermediate nodes that do the authorization before allowing them to be sent.

  • The Publisher node must also be pre-configured to establish communication standards with each Subscriber, i.e., a “hand-shake” that ensures the nodes communicating with one another (a) are allowed to connect (i.e., their connections are authenticated) and (b) agree on how the transmission will proceed by defining how information requests from each Subscribing node must be structured and how the Content File is to be organized by the Publisher node to enable the Subscriber node’s template to read and present it as a report. For example, during the establishment of the node network, each node’s Template File would be set up with the forms, code, and metadata needed to send information requests and Content Files to each other in a way that enables each node to understand the specific data in specific cells by virtue of their cell locations in a spreadsheet.

  • For ad hoc requests – such as a doctor (Subscriber) requesting the data of one or more patients from other healthcare providers (Publishers) with different EHR systems who treat the same patients – each Publisher node’s Template File bases its queries on the requested information sent by an authorized Subscriber’s node, including patient identifiers and requested data sets. Its macros use this information to define the correct tables, records and fields to query via metadata in the models (e.g., database schema maps) and subscription rules (e.g., rules defining allowable fields to be queried based on each Subscriber’s healthcare specialty).

  • For pre-scheduled requests, a node’s Publisher Template executes rules for performing such functions as sending certain Subscribers particular patient data automatically whenever the Publisher updates those data. These rules determine the queries to be executed and the spreadsheet cells into which the queried data are sent.

  • Whether ad hoc or scheduled, once the required data sets are queried and stored in the appropriate spreadsheet(s), these data are then processed by cell formulas (which may be in other spreadsheets) and macro functions as defined by the Publisher Template’s models. This functionality may be integrated into third-party products such as statistical/data mining applications, inferential logic engines, etc. This processing performs any required data analytics and transformations. It then organizes the resulting data values and text strings into pre-defined cellular configurations (i.e., “Content Configurations”) in a spreadsheet (i.e., the “Content File Production Grid” or “CFP Grid”), the cell positions of which are known by the Subscriber/Presenter Template as discussed below.

  • When this processing is done, the Publisher Template then saves the arrays of values and strings, without any formatting instructions and code, in a delimited text file (such as CSV format) for maximum efficiency, or in other less efficient file formats (such as spreadsheets, XML, etc.). This file is the Content File.

  • Once the Content File is created, other Publisher Template functions send it to the appropriate Subscriber nodes as an e-mail attachment or via other means (e.g., FTP).
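The final transmission step, packaging a Content File as an e-mail attachment, can be sketched with the standard-library email module. Addresses are placeholders, and actual sending (e.g., via smtplib or FTP) is omitted:

```python
from email.message import EmailMessage

def package_for_email(content_csv, sender, recipient):
    """Sketch: wrap a delimited Content File as a CSV e-mail attachment
    ready for transmission to a Subscriber node."""
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Content File"
    msg.set_content("Content File attached.")
    msg.add_attachment(content_csv.encode(), maintype="text",
                       subtype="csv", filename="content.csv")
    return msg

msg = package_for_email("visits,120\n", "pub@example.com", "sub@example.com")
print(msg["Subject"])
```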

The Subscriber/Presenter Template

If a node has subscriber and report presentation functionality, its Subscriber/Presenter Template must be pre-configured to execute the operations required for consuming and rendering particular Content Files. Using its template’s spreadsheets and macros to consume and render a Content File enables a node to composite and generate reports without ever having to query a database or connect to other data sources. Following are key processes:

  • Prior to receiving a Content File from a Publisher node, the Subscriber node’s Subscriber/Presenter Template must be pre-configured to establish certain communication standards with the Publisher, as discussed above.

  • Once a Subscriber node receives a Content File from an authenticated Publisher node, it uses the pre-configured models that have been assigned to that Publisher node to consume the Content File. This process involves using a macro to take the data and information contained in the Content File and parsing them into specific, pre-determined cells in a spreadsheet (the “Content File Consumption Grid” or “CFC Grid”). There is a semantic correspondence between the cellular locations of the content in the CFP Grid and the CFC Grid, which enables both the Publisher and Subscriber to “know” the particular content elements stored in each cell.

  • Once the Content File’s contents are in the CFC Grid, Excel macros and cell functions format the contents of single cells or cell ranges and present them in reports as populated user forms, charts/graphs, grids, lists, text blocks, hyperlinks, etc. as specified by the template’s models. The models may limit reports to single views or provide user interactivity that enables different views of the data (e.g., data slicing/dicing/drill-down, “what-if” scenarios, choice of graphs, etc.). And if a composite report is to be generated, the Subscriber node takes multiple Content Files from one or more Publisher nodes and parses each on to pre-defined portions of the CFC Grid. It then combines parts of this content as defined by its models and renders it accordingly.

  • In addition to (or instead of) presenting reports through the Excel workbook, the Subscriber/Presenter Template could enable third-party report writers to access the CFC Grid and generate the reports. Or it can send the data from the Content File to a database the third-party report writers can access.

  • If a Subscriber/Presenter Template is configured to populate databases with the data from Content Files for report generation or other purposes, it must have the proper SQL/ODBC query code (macro/script) including the correct login ID & password and ODBC drivers. In addition, the proper fields and records must be identified, as well as the particular spreadsheet cells from which the data are to be obtained.



In this post, I discuss CP Split "Content Configurations," which are elements of data and information (content elements) that have been structured in configurations by the Content File Production (CFP) Grid of a node's Publisher Template. One or more of these configurations are transported in a Content File (by e-mail, FTP, streaming, etc.) and rendered into reports through the Content File Consumption (CFC) Grid of a node's Subscriber/Presenter Template. Content Configurations are unique to the CP Split technology and are fundamental to its operation.

Creating Content Configurations via the CFP Grid

The image below is an example of a Content Configuration in a Publisher's CFP Grid spreadsheet, which contains Basic Hematology Diff/Morph blood test data. After obtaining the content by querying the original data source, macro routines placed the content elements in their predefined cells.


Grid Framework. The location of each content element in a Content Configuration depends on the CFP Grid's “framework,” which provides a blueprint for the structuring the Content Configurations based on a report's data model. Best practice rules for defining such a framework include the following:

  • For reports containing charts, the data for the charts should be placed in contiguous cells that have corresponding labels located in the left column and in the row(s) at the top of each column containing data.
  • If the array contains multiple dates, place them in contiguous cells in ascending or descending time-series order.
  • Place content sharing the same "domain" of information into contiguous cell ranges. In the image above, the individual lab tests comprising the DIFF/MORPH panel are grouped together in rows 5-18 (with the reference ranges in rows 2-3) and the footnotes are grouped together in rows 21-24.
  • Use macros to make the cell ranges "dynamic," so they will expand and contract automatically as new content is added or deleted. In the current example, the Content Configuration should be able to expand to the right by adding data into additional contiguous columns of blood test data across time. Note that rows can also be added if, for example, new footnotes are added or new tests were added to the DIFF/MORPH panel.
  • If the report using the Content Configuration is to have user-interactive views, then operations such as sorting and filtering require a portion of at least one row or column should contain data providing the criteria for executing such functions. In the current example, the lab tests can be sorted alphabetically or filtered using the contents of row, and sorted and filtered by date & time using the contents of columns A & B.
  • When creating complex or multiple arrays in a single CFP Grid, the cells may be color coded and have notations attached to help visually identify their semantic relationships, which can be done with a few mouse clicks.

Macros and Cell Functions. Macro modules and cell functions are used to place the content elements into the proper cells, which create Content Configuration that are consistent with the defined framework.

Intermediate Grids & Third-Party Applications. Note that there may be intermediate spreadsheet grids. These grids that may work in conjunction with third-party products to process the content elements before storing the results in the CFP Grid. This may include the use of statistical/analytic/data mining tools, inferential logic tools, and text mining and manipulation tools.

Intermediate grids may, for example:

  • Have a pivot tables that computes aggregate data by multiple dimensions (as per OLAP views). The resulting aggregate data values and their labels can then be placed in a Content Configuration through the CFP Grid to enable rapid data slicing & dicing and interactive digital dashboard reports that are fully operational even when offline.
  • Exchange data with a statistical package via an API connection and receive from them statistical values that are placed into the Content Configuration(s).
  • Place hyperlinks, text from concept extraction tools, proofs from a Prolog-based applications, etc. into the Content Configuration.
  • Temporarily store data obtained from multiple Content Files and other sources, which are later combined into Content Configuration for composite reports.
  • Be used to store lists that populate menus in forms, hold data that signal the execution of specific macro functions when certain conditions are met.
  • Manage metadata used for indexing, defining content element attributes, managing data hierarchies, etc.
  • Hold data input via forms temporarily, before placing them in a Content Configuration.

Placing Content Configurations into a Content File

Once the CFP Grid's cells are populated with content, a macro places the Content Configurations into a Content File. The image below shows a Content File in comma delimited value (CSV) format with the Content Configuration from the blood test example. In it, a comma separates each column and a line break (not visible) separates each row. [Note that since Microsoft Excel was used, the dates and times were automatically converted to their date & time value equivalents.]


While there are many different ways to convey Content Configurations in a Content File, converting an exact replica of the configurations in CSV format, as done here, is the most efficient.

Note even though the labels ("DIFF/MORPH:", "Ref. Range", "Date", "Time", "Polys", "Bands", etc.) have been added to the Content File above, it is not necessary to include them since the labels already appear in the Subscriber/Presenter Template as discussed below.

Consuming Content Configurations via the CFC Grid

The image below is a Subscriber's CFC Grid spreadsheet before it is populated with the Content Configurations. Note the cell locations of the CFC Grid match the CFP Grid exactly, although the Content Configurations may be placed anywhere in the CFC Grid.


The graphic below shows the CFC Grid spreadsheet after macros in the Subscriber/Presenter Template populated it with the Content Configurations from the Content File it received.


Presenting the Reports

It's now time for the Subscriber/Presenter Template's presentation functions to generate reports by rendering the Content Configurations now stored in the CFC Grid spreadsheet. This is done by accessing content elements from the cells of the CFC Grid and formatting them for viewing and printing. This presentation process is executed via macros and cell functions, which may operate with third-party products via APIs for other visualizations, as well as using chart sheets and formatted spreadsheets.

For example, the image below shows a section of a spreadsheet used to display the blood test results. It contains the formatted labels for the report  -- created by merging certain cells, adding color to certain cells, as well as selecting fonts and alignment options -- but has not yet been populated with the Content Configuration values residing in the CFC Grid. Note that it also has two Active-X "Graph" buttons, which generate charts.


The image below shows the same spreadsheet after a macro located the content elements from the Content Configuration, based on their cell positions in the CFC Grid, and placed the values in the proper cells.

Notice that the certain cells have been colored dark blue and red to indicate values above and below the reference range. These colors were added automatically at run time via the "conditional formatting" functions of the spreadsheet cells, which associate specific colors with value outside the reference ranges.


And this final graphic (below) visualizes the Lymph data over time in a chart, which was done by pre-configuring a chart sheet to access and format a data series in rows 1, 2 and 5 of the CFC Grid spreadsheet.


Blog url:

Follow Us: