HL7 for Busy Professionals


by Rahul Bhagat


  To assign a unique address to each device, an addressing system called MAC ID (Media Access Control ID) was developed. It is a six-byte number permanently stamped on every Network Interface Card (the part of the device that connects to the network) by the manufacturer. That’s why it is also called the hardware address or physical address. You cannot change it; it is permanently etched into the chip.

  Packets carry this MAC ID in their header. When a device receives a packet, it compares the MAC ID in the packet with its own MAC ID. If they match, it stores the packet; otherwise, it ignores it. Once all the packets are received, the system stitches together the original message and hands it to the application for processing.
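
  To make the accept-or-ignore decision concrete, here is a tiny Python sketch of the comparison a network card performs. The MAC IDs are made up for illustration; real cards do this in hardware, not in code like this.

```python
# A toy sketch of the accept/ignore decision described above.
# The addresses are invented; real network cards do this comparison in hardware.

MY_MAC = "3C:52:82:4F:9A:D1"        # this device's own (hypothetical) MAC ID
BROADCAST = "FF:FF:FF:FF:FF:FF"     # special address meaning "everyone on the network"

def should_accept(destination_mac: str) -> bool:
    """Keep the packet only if it is addressed to this device (or to everyone)."""
    return destination_mac.upper() in (MY_MAC, BROADCAST)

print(should_accept("3C:52:82:4F:9A:D1"))   # True  - store the packet
print(should_accept("AA:BB:CC:DD:EE:01"))   # False - ignore it
```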

  TCP/IP and the Internet

  MAC IDs work great for local networks, where the administrator knows all the devices and their addresses. Communication breaks down when messages have to travel between networks. How are you going to find out the MAC ID of a computer on a faraway network?

  Back in the day, this was a serious problem for the American armed forces. The Army, Navy and Air Force networks didn’t have the capability to communicate with each other. Imagine the confusion in a theatre of war! But believe it or not, this was not the primary reason for the invention of the Internet. It was much more mundane. Computer science researchers were looking for a way to access supercomputers on other networks. There were only a few supercomputers around, and if you were in San Diego and wanted to use the supercomputer at the University of California, Los Angeles, then your only recourse was to get on I-5 and drive to LA.

  So the researchers at ARPA (Advanced Research Projects Agency) built ARPANET, which connected four research sites, three in California and one in Utah, and later created TCP/IP to let networks like it talk to each other. That first network of networks was the acorn that grew into the massive oak tree we know today as the Internet.

  So how did the folks at ARPA do it?

  They created a system of virtual, global addresses. Instead of using the MAC ID, which is fixed and burned into a device, they developed a virtual addressing system of four numbers, where each number can have a value from 0 to 255, for example, 125.0.200.75. A device was assigned a unique combination of these four numbers, which became its globally unique IP (Internet Protocol) address.
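
  If you want to see that such an address is really just four bytes, here is a small sketch using Python’s standard ipaddress module; the address is the example from the text.

```python
# A dotted address like 125.0.200.75 is just four bytes (one 32-bit number).
import ipaddress

addr = ipaddress.IPv4Address("125.0.200.75")
print(addr.packed)                         # b'}\x00\xc8K' - the four raw bytes
print(int(addr))                           # 2097203275   - the same address as one integer
print(ipaddress.IPv4Address(2097203275))   # 125.0.200.75 - and back again
```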

  If you are wondering how each device on the Internet gets a globally unique IP address, you should know that there is an entire organizational structure dedicated to the task. At the top is an organization called ICANN, based in LA, which does high-level coordination and decides on big questions, such as whether to allow the .xxx domain. Under it are five regional organizations that manage the actual allocation and assignment of IP addresses for their region.

  The regional organizations are ARIN (North America), RIPE NCC (Europe, the Middle East and parts of Central Asia), APNIC (Asia Pacific), LACNIC (Latin America & Caribbean) and AFRINIC (Africa).

  If a network wants to connect to the Internet, it makes a request for a block of IP numbers to one of these organizations or their affiliates. Depending on the size of its network, it can get one of three classes of IP addresses: Class A, Class B or Class C.

  For a large company like AT&T, which has a network with hundreds of thousands of devices and still needs room for more, a Class A block of addresses is assigned, for example, 12.x.x.x. All packets starting with the IP address 12 will go to AT&T’s network, everything from 12.0.0.0 to 12.255.255.255. That’s almost seventeen million addresses!

  But if AT&T were smaller, it would receive a Class B block of addresses, for example, 12.200.x.x. In this case, all IP addresses from 12.200.0.0 to 12.200.255.255 would belong to the AT&T network.

  For Class C addresses, the first three numbers are fixed, for example 12.200.50.x. All IP addresses between 12.200.50.0 and 12.200.50.255 belong to the network.
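
  A quick sketch with Python’s ipaddress module shows the size of the three classes, using the 12.x.x.x examples above (the classful system has since been replaced by CIDR, but the block sizes still illustrate the idea). The "almost seventeen million" figure is 2^24 = 16,777,216.

```python
# Block sizes for the Class A/B/C examples used in the text.
import ipaddress

blocks = {
    "Class A": ipaddress.IPv4Network("12.0.0.0/8"),       # first number fixed
    "Class B": ipaddress.IPv4Network("12.200.0.0/16"),    # first two numbers fixed
    "Class C": ipaddress.IPv4Network("12.200.50.0/24"),   # first three numbers fixed
}

for name, net in blocks.items():
    print(f"{name}: {net[0]} to {net[-1]} ({net.num_addresses:,} addresses)")
# Class A: 12.0.0.0 to 12.255.255.255 (16,777,216 addresses)
# Class B: 12.200.0.0 to 12.200.255.255 (65,536 addresses)
# Class C: 12.200.50.0 to 12.200.50.255 (256 addresses)
```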

  Once a network has its list of IP addresses, it creates an ARP (Address Resolution Protocol) table. An ARP table is nothing but a long list of physical addresses (MAC ID) of all the devices on the network and their corresponding IP addresses. With the help of this ARP table, the network can easily translate the IP address of an incoming packet to its corresponding MAC ID and vice versa.
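
  Conceptually, an ARP table is just a lookup from IP address to MAC ID. Here is a minimal sketch, with invented addresses:

```python
# A toy ARP table: IP address -> MAC ID (all entries invented for illustration).
arp_table = {
    "12.200.50.10": "3C:52:82:4F:9A:D1",
    "12.200.50.11": "00:1B:44:11:3A:B7",
    "12.200.50.1":  "F4:8E:38:C2:00:5E",   # the router's entry
}

def ip_to_mac(ip: str):
    """Translate a destination IP into the local MAC ID, or None if it is not a local device."""
    return arp_table.get(ip)

print(ip_to_mac("12.200.50.11"))   # 00:1B:44:11:3A:B7
print(ip_to_mac("8.8.8.8"))        # None - not on this network
```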

  This setup devised by ARPA freed the networks from the requirement of having to know each other’s MAC IDs. For a device that wanted to communicate with the rest of the world, all it had to do was share its IP address. When the local network received a packet with this IP address, it would use the ARP table to find out the MAC ID of the device and attach it to the header of the packet before releasing it on its network.

  If you are wondering why they didn’t just use MAC IDs as addresses, keep in mind that there were already millions of devices out in the world before they started working on the problem. Cataloguing existing IDs and coordinating between manufacturers would have been a nightmare. It was easier to just start over with a clean slate.

  To use this virtual address, when a packet is first created, a header with the IP address of the destination is attached to it. After that, another header with the MAC ID of the destination is added and then the packet is released on the Ethernet for transmission to the destination system.
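
  The nesting of the two headers can be pictured with a short sketch. Real headers are binary structures with many more fields; strings are used here only to make the wrapping visible.

```python
# Conceptual sketch of wrapping a packet in an IP header, then a MAC header.
def add_ip_header(payload: str, dest_ip: str) -> str:
    return f"[IP dst={dest_ip}]{payload}"

def add_mac_header(packet: str, dest_mac: str) -> str:
    return f"[ETH dst={dest_mac}]{packet}"

chunk = "HELLO, PART 1 OF 3"                            # one chunk of the original message
packet = add_ip_header(chunk, "12.200.50.75")           # virtual address goes on first
frame = add_mac_header(packet, "3C:52:82:4F:9A:D1")     # then the physical address
print(frame)
# [ETH dst=3C:52:82:4F:9A:D1][IP dst=12.200.50.75]HELLO, PART 1 OF 3
```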

  If the destination is on the same network, then there is no need for the IP address. The device with the matching MAC ID picks up the packet and strips away both the physical address (MAC ID) and the virtual address (IP) headers to get to the actual content.

  Things are different when the message is for a device that is not on the local network.

  After a packet is created and the destination IP header is attached, the local system looks up the ARP table to get the corresponding MAC ID of the destination. If the device is on a different network, there will be no entry for it in the ARP table.

  In that case, the physical address of a special device called a router is used. The packet is sent to the router on the network. The router reads the IP address of the destination and consults another table called the Forwarding Table, to determine where to send the packet on the Internet.

  Forwarding tables are similar to ARP tables but instead of the physical address of local devices, they contain addresses of routers and gateways for other networks. If the destination network is known then the packet is sent directly to it, otherwise it gets sent to the gateway, which is like a central post office. From there it gets bounced around the world, based on a host of factors, till it reaches its destination network.
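
  In spirit, the forwarding decision looks like the sketch below: pick the most specific matching entry, and fall back to the gateway when nothing else matches. The table entries and next-hop names are invented.

```python
# A simplified forwarding decision: the longest matching prefix wins,
# with a default route acting as the "central post office".
import ipaddress

forwarding_table = {
    ipaddress.ip_network("12.200.0.0/16"): "router-B",   # a destination network we know about
    ipaddress.ip_network("99.10.0.0/16"):  "router-C",
    ipaddress.ip_network("0.0.0.0/0"):     "gateway",    # default route
}

def next_hop(destination_ip: str) -> str:
    dest = ipaddress.ip_address(destination_ip)
    matches = [net for net in forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)   # most specific prefix
    return forwarding_table[best]

print(next_hop("12.200.50.75"))   # router-B - destination network is known
print(next_hop("203.0.113.9"))    # gateway  - send it out toward the wider Internet
```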

  The router at the destination network uses the IP address of the packet to find the corresponding physical address in its ARP table. It then adds the corresponding physical address header to the packet and releases it on its local network, where the destination device with matching MAC ID picks it up for processing.

  This is how ARPA was able to connect different networks and usher in the Internet age.

  Layers

  If you noticed, with the Internet, two headers were added to every packet, which allowed it to travel between networks. One was the virtual address header (IP) and the second was the physical address header (MAC ID). In techie speak, the packets passed through two layers where these headers were added. Layers can be seen as process steps that transform a packet.

  If you recall the discussion at the beginning of the chapter, communication between two applications is independent of the underlying system. The system does not understand the message. It only facilitates the movement of messages from one application to the other.

  But this system-level facilitation is not just taking a message, chopping it into packets and passing them down the wire to the other system. There is more to it, and we have already seen some of it: the packets had two address headers attached to them. In the real world, there are many other things that can happen to a packet before it is sent down the wire.

  For example, a message can be compressed or encrypted before it is transmitted.

  For a better understanding of the message transformation process, let’s compare it to an automobile assembly line. Before a car rolls off the assembly line, its chassis passes through a number of workstations or stops. At each stop, it undergoes a transformation. First the chassis is welded, then the paint job is done, then the dashboard is put in place, the seats are assembled, and so on.

  Similarly, before a message is transmitted down the wire, it passes through a number of stops or what technical folks like to call layers. At each layer the message undergoes a transformation.

  Very often there are multiple ways of performing a transformation. At the welding stop, you can choose between gas welding, arc welding, or even laser welding for something really precise. Similarly, there are different methods for transforming a message at each layer. Technical folks like calling them protocols.

  These are the fundamental concepts in network communication. Packets that travel on the Internet today go through many layers that implement all kinds of functionality. At each layer, there could be many different protocols to bring about the desired transformation.
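
  As a rough sketch of “layers as transformation steps”, the snippet below pushes a message through a small pipeline using two standard-library transformations. The pipeline itself is invented for illustration; real protocol stacks do far more at each step.

```python
# A message passing through a pipeline of transformations, one per "layer".
import zlib
import base64

def compress(data: bytes) -> bytes:      # one possible choice at a "presentation" step
    return zlib.compress(data)

def encode(data: bytes) -> bytes:        # another transformation choice
    return base64.b64encode(data)

message = b"Patient admitted to Ward 7. " * 20
for layer in (compress, encode):         # each layer transforms the output of the previous one
    message = layer(message)
print(len(message), message[:40])
```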

  To take an example, the researchers at ARPA also wanted to make sure that if a packet was lost during transmission, there was a process to resend it. To achieve this, they added another layer before the virtual address layer to ensure guaranteed delivery of each packet.

  This new layer was the transport layer. It added another header with a tracking number to the packet. When the destination system received the packet, it was required to send back a short acknowledgement with the original tracking number of the packet.

  This way the source system was able to figure out which packets were received by the destination. If a packet was lost, there would be no acknowledgement for it. After waiting for a couple of seconds the source system would automatically resend that packet.
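
  The resend logic can be sketched as a simple stop-and-wait loop. This is not real TCP; the send function is a placeholder, and the acknowledgement set would be filled in by the receiving side.

```python
# A toy stop-and-wait sketch: send, wait a couple of seconds for the
# acknowledgement, and resend if it never arrives.
import time

ACK_TIMEOUT = 2.0          # seconds to wait before resending
received_acks = set()      # tracking numbers echoed back by the destination

def send_packet(tracking_number: int, data: bytes) -> None:
    print(f"sending packet {tracking_number}: {data!r}")   # placeholder for the real transmission

def send_reliably(tracking_number: int, data: bytes, max_retries: int = 3) -> bool:
    for _ in range(max_retries):
        send_packet(tracking_number, data)
        deadline = time.time() + ACK_TIMEOUT
        while time.time() < deadline:
            if tracking_number in received_acks:   # acknowledgement arrived
                return True
            time.sleep(0.1)
    return False   # still unacknowledged after several resends

send_reliably(1, b"chunk one of the message")
```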

  But not every application wanted guaranteed delivery of packets. For some applications, packets have to be delivered in real time. Think of video conferencing or streaming radio. If a packet is lost, it’s lost. For these applications, techies developed a different protocol, called UDP, which simply sends packets as fast as it can, with no tracking and no acknowledgement. If a packet doesn’t arrive on time, too bad, that will be a blip; we are moving on to the next packet.
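
  For contrast, a UDP send in Python really is fire-and-forget: there is no tracking number and no acknowledgement. The address and port below are placeholders.

```python
# Fire-and-forget: a single UDP datagram, no tracking, no acknowledgement.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)              # SOCK_DGRAM = UDP
sock.sendto(b"frame 42 of the video stream", ("127.0.0.1", 5005))    # placeholder address/port
sock.close()
# If this datagram is lost, nothing resends it - the stream simply moves on.
```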

  Eventually, this led to a proliferation of protocols. There were protocols for Internet chat, protocols for peer-to-peer file sharing, protocols for Internet telephony and on and on. People were building all kinds of crazy things and someone needed to step in and bring order to the situation.

  Open Systems Interconnection (OSI) Model

  The anarchy with the protocols prompted the International Organization for Standardization (ISO) to try and bring some order to the chaos. Academics huddled together and came up with a clean, seven-level framework for network communication called Open Systems Interconnection (OSI). It divided the process of sending and receiving messages into seven levels (steps) and defined which actions can be performed at each level.

  Level 7 - Application Layer: At this layer, data to be sent is organized into a message according to the structure and rules of the application protocol (e.g. HL7).

  Level 6 - Presentation Layer: At this layer, work like encryption and compression of data is carried out. Large payloads like video and images benefit from compression at this layer.

  Level 5 - Session Layer: At this layer, functionality can be added to maintain an ongoing conversation without having to confirm the identity of the system for every message. Other functionalities like voice and video synchronization can also be added.

  Level 4 - Transport Layer: This is the layer where the sending system divides the message into packets and the receiving system reassembles them. Tracking and acknowledgement of packets is also done at this level. A commonly used protocol for this layer is TCP.

  Level 3 - Network Layer: At this layer, a virtual address (IP) is added to the packet. The IP protocol operates at this layer.

  Level 2 - Data Link Layer: This is the layer where the physical address (MAC ID) is added to the packet.

  Level 1 - Physical Layer: This layer transmits the 0s and 1s of the packet as electric pulses down the wire. For cell phones the signal travels as microwaves and for fiber optic cables it is a pulse of light.

  The messaging process starts with the application layer or level 7 of the OSI model. After the transformation, the message is passed to level 6, which does its transformation and passes it further down the levels until the message reaches level 1. At this point the packet gets converted to 0s and 1s and is transmitted down the wire.

  At the receiving end, the packet undergoes a reverse transformation. It starts at level 1 and moves to upper levels until it reaches level 7. At that point, the message is handed to the receiving application, which processes and consumes the message.

  By creating this seven-level model of network communication, the standards committee expected everyone to adopt OSI and establish it as the standard. Unfortunately, that’s not how it turned out in real life.

  By the time OSI was developed, TCP (for guaranteed delivery) and IP (virtual addresses for communication on Internet) were well entrenched in the networking world. Together the TCP/IP combo was sufficient to ensure transmission of packets over the Internet. As a result, organizations just added layers for application-to-application communication and other features as needed, and didn’t bother conforming to the OSI model.

  Still, OSI has survived as a good reference model for understanding network communication. Many protocols have been developed, and continue to be developed, that implement the functionality of a specific level. A protocol can simply state that it conforms to a particular level of the OSI model, and everyone will know what functionality it implements.

  Health Level 7 (HL7)

  Health Level 7 is one such specialized protocol that conforms to level 7 of the OSI model. It is an application layer protocol, specifically created for communication between healthcare applications. So whenever there is a need to exchange health data between applications, guess which protocol is going to be used? HL7 of course!

  Admittedly, Health Level 7 is not exactly a friendly name. But if you look at it from the perspective of the people who developed it, you can see why they chose it. It is an application layer protocol, which corresponds to level 7 of the OSI model, and the protocol is for the exchange of health information, hence the name, Health Level 7. I would argue that the name makes a lot of sense.

  HL7 conforms to the OSI model, but it only defines the protocol for the seventh level. For the other levels, the implementer is free to choose any combination of protocols. Usually MLLP (Minimum Lower Layer Protocol) with TCP/IP is used to implement the lower-level functionality.
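
  To make this concrete, here is a sketch of an HL7 v2 message being framed with MLLP and pushed over a TCP connection. The message content, host name and port are invented; the MLLP wrapper bytes (0x0B to start, 0x1C 0x0D to end) are the standard framing characters.

```python
# A sketch: an HL7 v2 admission message, MLLP-framed, sent over TCP.
import socket

hl7_message = "\r".join([   # HL7 v2 segments are separated by carriage returns
    "MSH|^~\\&|ADT_APP|CITY_HOSP|EMR_APP|CLINIC|202401150930||ADT^A01|MSG00001|P|2.3",
    "PID|1||12345^^^CITY_HOSP||DOE^JANE||19800101|F",
    "PV1|1|I|WARD7^101^A",
])

START_BLOCK, END_BLOCK = b"\x0b", b"\x1c\x0d"            # standard MLLP framing bytes
framed = START_BLOCK + hl7_message.encode("utf-8") + END_BLOCK

# Host and port are placeholders for a real interface engine.
with socket.create_connection(("interface-engine.example.org", 6661)) as conn:
    conn.sendall(framed)           # TCP/IP and MLLP take care of the lower layers
    ack = conn.recv(4096)          # the receiver replies with an HL7 ACK, also MLLP-framed
    print(ack)
```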

  3. Integration Concepts

  From a technical perspective, the word integration means to connect different systems (and applications) together. When systems are integrated, they can communicate with each other and exchange information.

  This automatic flow of information is the reason we integrate systems. When systems are able to share information, it leads to a lot of benefits. One very important benefit is the maintenance of data consistency between different systems. Let me elaborate.

  A big issue with isolated systems is the creation of data silos. Over a period of time, a system will accumulate a mountain of clinical information in its database. But that database is for its exclusive use; the information is locked away. This limits the usefulness of the data. Others who need that data have to copy it and maintain a separate database. Over time, the copies become inconsistent, and this leads to unwanted headaches.

  Consider a stand-alone lab system that keeps a perfect record of tests ordered and their corresponding results. Since the information is locked away in the lab system, a physician who needs that information will receive a printout of the result. That paper result will eventually get filed away in the patient’s folder for future reference.

  So far so good. Both the lab system and the doctor’s office have the same information. But what happens if there is a correction to the lab test? The lab system will update its database and also send a paper copy to the doctor’s office. What happens if a staff member forgets to file it or the report is misplaced or ends up somewhere else?

  The point is, there are endless ways for data to become inconsistent between systems. An integrated system avoids this situation by automatically exchanging information with other systems whenever there is new data to be shared or an existing data element changes.

  Another benefit of integration is the ability to automate business processes. Systems don’t just have to talk one-on-one. They can also be part of an information assembly line where one system takes the order, another checks the validity of the credit card, and yet another coordinates shipment of the order. Integration allows for automatic movement of relevant information from one system to the next and this makes business process automation very simple.

  HL7 and the healthcare industry are late to the game of business process automation. The granddaddy protocol is EDI (Electronic Data Interchange), one of the oldest and most widely used standards for data integration. It is now primarily used by the retail and manufacturing industries. Banks and other financial organizations have a standard of their own, called SWIFT, which takes into account their need for ultra-high accuracy and security.

  And finally, the ability to integrate systems gives us a better way to create large aggregate systems without having to worry about the doomsday scenario - the day when a large, monolithic system comes crashing down. By stringing together smaller, independent systems through message-based integration, we can isolate them from each other and minimize the impact any one system can have on the entire ecosystem. To use our example from before, if the order delivery system goes down, it will delay order shipment, but the organization is still open for business and accepting orders - probably with a warning that shipments could be delayed.

 
