
dicomParser

dicomParser is a lightweight library for parsing DICOM P10 byte streams, as well as raw (not encapsulated in Part 10) byte streams, in modern HTML5 based web browsers (IE10+), Node.js and Meteor. dicomParser is fast, easy to use and has no required external dependencies.

Live Examples

The best way to see the power of this library is to actually see it in use. A number of live examples are included that are not only useful but also show how to use dicomParser. Click here for a list of all live examples. Make sure you try out the DICOM Dump with Data Dictionary, which is a very useful tool and an excellent example of most features.

Community

Have questions? Try posting on our google groups forum.

Install

Get a packaged source file:

  • dicomParser.js
  • dicomParser.min.js

Or install via NPM:

npm install dicom-parser

Or install via atmosphere for Meteor applications:

meteor add chafey:dicom-parser

  • Note - make sure you install pako if you need to support the Deflated Explicit VR Little Endian transfer syntax

Usage

    // create a Uint8Array or node.js Buffer with the contents of the DICOM P10 byte stream
    // you want to parse (e.g. XMLHttpRequest to a WADO server)
    var arrayBuffer = new ArrayBuffer(bufferSize);
    var byteArray = new Uint8Array(arrayBuffer);

    try {
        // Allow raw files
        const options = { TransferSyntaxUID: '1.2.840.10008.1.2' };
        // Parse the byte array to get a DataSet object that has the parsed contents
        var dataSet = dicomParser.parseDicom(byteArray, options);

        // access a string element
        var studyInstanceUid = dataSet.string('x0020000d');

        // get the pixel data element (contains the offset and length of the data)
        var pixelDataElement = dataSet.elements.x7fe00010;

        // create a typed array on the pixel data (this example assumes 16 bit unsigned data)
        var pixelData = new Uint16Array(dataSet.byteArray.buffer, pixelDataElement.dataOffset, pixelDataElement.length / 2);
    } catch (ex) {
        console.log('Error parsing byte stream', ex);
    }

See the live examples for more in-depth usage of the library.

Note that actually displaying DICOM images is quite complex due to the variety of pixel formats and compression algorithms that DICOM supports. If you are interested in displaying images, please take a look at the cornerstone library and the cornerstoneWADOImageLoader, which uses this library to extract the pixel data from DICOM files and display the images with cornerstone. You can find the actual code that extracts grayscale pixel data using this library here.

Options

dicomParser.parseDicom accepts an optional second argument that is an options object. The accepted properties are:

TransferSyntaxUID

A string value used as the default transfer syntax UID for parsing raw DICOM (not encapsulated in Part 10). For raw DICOM files, this value should be the Little Endian Implicit (LEI) UID value.

untilTag

A tag in the form xggggeeee (where gggg is the hexadecimal group number and eeee is the hexadecimal element number, e.g. 'x7fe00010') that specifies the final tag to parse. Any tags occurring after this in the file will be ignored. Useful for partial reading of byte streams.
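The xggggeeee string can be built from numeric group/element values with a small helper (a sketch; `formatTag` is not part of the library's public API, just an illustration of the format):

```javascript
// Build a dicomParser-style tag string: 'x' followed by the group and
// element numbers, each as 4 lower-case, zero-padded hex digits.
function formatTag(group, element) {
  const hex = (n) => n.toString(16).padStart(4, '0');
  return 'x' + hex(group) + hex(element);
}

// Pixel Data (7FE0,0010) becomes the string you would pass as untilTag:
formatTag(0x7fe0, 0x0010); // 'x7fe00010'
```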

vrCallback

A callback that, given a tag, will return the 2-character Value Representation associated with that tag (see PS 3.5 of the DICOM standard for more information). It may return undefined to indicate that the VR was not provided.
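A vrCallback can be as simple as a lookup into whatever partial data dictionary you happen to have (a sketch; the table contents below are illustrative assumptions, not something the library requires):

```javascript
// A minimal partial data dictionary mapping dicomParser tag strings to VRs.
const vrTable = {
  'x00100010': 'PN', // Patient's Name
  'x0020000d': 'UI', // Study Instance UID
  'x7fe00010': 'OW', // Pixel Data
};

// Suitable for passing as options.vrCallback; returning undefined tells
// the parser that no VR is known for the tag.
function vrCallback(tag) {
  return vrTable[tag]; // undefined for unknown tags
}
```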

inflater

A callback that, given the underlying byteArray and the position of the deflated buffer, returns a byteArray containing the DICOM P10 header and the inflated data set concatenated together.

Key Features

  • Parses all known valid DICOM Part 10 byte arrays
    • Explicit and implicit
    • Little endian and big endian
    • Deflated Explicit VR Little Endian transfer syntax
      • Uses zlib when running in Node.js
      • Requires pako in web browsers
      • Has a callback to support use of other inflate libraries
  • Supports all VRs including sequences
  • Supports elements with undefined length
  • Supports sequence items with undefined length
  • Provides functions to convert from all VR types to native JavaScript types
  • Does not require a data dictionary
  • Designed for use in the browser
  • Each element exposes the offset and length of its data in the underlying byte stream
  • Packaged using the module pattern, as an AMD module and as a CommonJS module for Node.js
  • No external dependencies
  • Supports extraction of encapsulated pixel data frames
    • Basic offset table decoded
    • Fragments decoded
    • Function to extract an image frame when a basic offset table is present
    • Function to extract an image frame from fragments when no basic offset table is present
  • Convenient utility functions to parse strings formatted in DA, TM and PN VRs and return JavaScript objects
  • Convenient utility function to create a string version of an explicit element
  • Convenient utility function to convert a parsed explicit dataSet into a JavaScript object
  • Convenient utility function to generate a basic offset table for JPEG images
  • Supports reading incomplete/partial byte streams
    • By specifying a tag to stop reading at (e.g. parseDicom(byteArray, {untilTag: "x7fe00010"}); )
    • By returning the elements parsed so far in the exception thrown during a parse error (the elements parsed will be in the dataSet property of the exception)
  • Supports reading from Uint8Arrays and Node.js Buffers
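The basic offset table mentioned above is simply a list of unsigned 32-bit little endian frame offsets; the decoding can be sketched like this (a simplified illustration of the format, not the library's actual implementation):

```javascript
// Read a basic offset table: numFrames unsigned 32-bit little endian
// offsets starting at `offset` in the byte array, one per frame.
function readBasicOffsetTable(byteArray, offset, numFrames) {
  const view = new DataView(byteArray.buffer, byteArray.byteOffset);
  const offsets = [];
  for (let i = 0; i < numFrames; i++) {
    offsets.push(view.getUint32(offset + i * 4, true)); // true = little endian
  }
  return offsets;
}
```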

Build System

This project uses Webpack to build the software.

Pre-requisites:

Node.js - click to visit the web site for installation instructions.

Common Tasks

Update dependencies (after each pull):

npm install

Running the build:

npm run build

Automatically running the build and unit tests after each source change:

npm run watch

Publish

This library uses semantic-release to publish packages. The syntax of commits against the master branch determines how the new version is calculated.

| Example Commit | Release Type |
| --- | --- |
| fix(pencil): stop graphite breaking when too much pressure applied | Patch Release |
| feat(pencil): add 'graphiteWidth' option | Feature Release |
| perf(pencil): remove graphiteWidth option<br><br>BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons. | Major Breaking Release |

Backlog

Future:

  • Add unit tests for sequence parsing functionality and encapsulated pixel frames
  • Figure out how to automatically generate documentation from the source (jsdoc)
  • Optimize findItemDelimitationItemAndSetElementLength() for speed
  • Optimize functions in byteArrayParser.js for speed
  • Add an example that allows you to compare two sop instances against each other
  • Figure out how to not have a global dicomParser object when used with an AMD loader
  • See what needs to be done to support different character sets (assumes ASCII currently)
  • Support for parsing from streams on Node.js and Meteor
  • Switch to JavaScript ES6
  • Separate the parsing logic from the dataSet creation logic (e.g. parsing generates events from which the dataSet creation logic creates the dataSet). Similar concept to SAX parsers.
    • dataSet creation logic could filter out unwanted tags to improve performance of the parse
    • dataSet creation logic could defer creation of sequence dataSets to improve performance of the parse
  • Function to parse non-P10 byte streams given the byte stream and the transfer syntax
  • Support for encrypted DICOM

Contributors

  • @neandrake for help with getting Node.js support
  • @ggerade for implementing support for floats/doubles with VM > 1
  • @yagni for issue fixes related to parsing implicit little endian files and big endian support
  • @snagytx, @doncharkowsky - for issue fixes related to reading encapsulated frames
  • @bbunderson, @jpambrun - bug fixes for reading encapsulated frames
  • @henryqdineen, adil.tiadi@gmail.com - problem report for sequences with undefined lengths and zero items
  • @swederik - bug fixes on sequences with undefined lengths and zero items
  • @jkrot - performance enhancement in byteArrayParser
  • @cancan101 - issue related to multi-frame with multiple fragments and no basic offset table

Why another JavaScript DICOM parsing library?

While building the WADO Image Loader for cornerstone, I couldn't find a JavaScript DICOM parser that exactly met my needs. DICOM actually isn't that difficult to parse so I figured I would just make my own. Here are some of the key things that I really wanted out of a DICOM library that I am hoping to deliver:

  • License is extremely liberal so it could be used in any type of project
  • Only deals with parsing DICOM - no code to actually display the images
  • Designed to work well in a browser (modern ones at least)
  • Follows modern JavaScript best practices
  • Has documentation and examples on how to use it
  • Does not hide the underlying data stream from you
  • Does not require a data dictionary
  • Decodes private elements "on demand" - this goes with not needing a data dictionary
  • Code guards against corrupt or invalid data streams by sanity checking lengths and offsets
  • Does not depend on any external dependencies - just drop it in and go
  • Has unit tests
  • Code is easy to understand

Interested in knowing why the above goals are important to me? Here you go:

License is extremely liberal so it could be used in any type of project

DICOM is an open standard and parsing it is easy enough that it should be freely available for all types of products - personal, open source and commercial. I am hoping that the MIT license will help it see the widest possible adoption (which will in the end help the most patients). I will dual license it under GPL if someone asks.

Only deals with parsing DICOM - no code to actually display the images

I am a big believer in small reusable pieces of software and loose coupling. There is no reason to tightly couple the parser with image display. I hope that keeping this library small and simple will help it reach the widest adoption.

Designed to work well in a browser (modern ones at least)

There are some good JavaScript DICOM parsing libraries available for server development on node.js but they won't automatically work in a browser. I needed a library that let me easily parse WADO responses and I figured others would also prefer a simple library to do this with no dependencies. The library does make use of the ArrayBuffer object, which is widely supported except for IE (it is available on IE10+). I have no current plans to add support for older versions of IE but would be open to contributions if someone wants to do the work.

Follows modern JavaScript best practices

This of course means different things to different people but I have found great benefit from making sure my JavaScript passes jshint and leveraging the module pattern. I also have a great affinity for AMD modules but I understand that not everyone wants to use them. So for this library I am just shooting for making sure the code uses the module pattern and passes jshint.

Has documentation and examples on how to use it

Do I really need to convince you that this is needed?

Does not hide the underlying data stream from you

I have used many DICOM parsing libraries over the years and most of them either hide the underlying byte stream from you or make it difficult to access. There are times when you need to access the underlying bytes - and it is frustrating when the library works against you. A few examples of the need for this include UN VRs, private attributes, encapsulated pixel data and implicit little endian transfer syntaxes (which unfortunately are still widely being used) when you don't have a complete data dictionary.

This library addresses this issue by exposing the offset and length of the data portion of each element. It also defers parsing (and type converting) the data until it is actually asked to do so. So what you get from a parse is basically a set of pointers to where the data for each element is in the byte stream, and then you call the function you want to extract the type you want. An awesome side effect of this is that you don't need a data dictionary to parse a file even if it uses implicit little endian. It also turns out that parsing this way is very fast as it avoids doing unneeded type conversions.
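The deferred conversion described above boils down to slicing by offset and length at the moment a value is requested; a conceptual sketch of what a string accessor does (a simplified illustration, not the library's exact implementation):

```javascript
// Lazily convert an element's raw bytes to a string: read length bytes
// starting at dataOffset, decode as ASCII, and strip the trailing null
// or space padding DICOM uses to keep value lengths even.
function elementToString(byteArray, element) {
  let s = '';
  for (let i = 0; i < element.length; i++) {
    s += String.fromCharCode(byteArray[element.dataOffset + i]);
  }
  return s.replace(/[\0 ]+$/, '');
}
```

Until such an accessor is called, the element is nothing more than its offset and length, which is why no type conversion cost is paid for elements you never read.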

Note that you cannot 100% reliably parse sequence elements in an implicit little endian transfer syntax without a data dictionary. I therefore strongly recommend that you work with explicit transfer syntaxes whenever possible. Fortunately most Image Archives should be able to give you an explicit transfer syntax encoding of your sop instance even if it received it in implicit little endian.

Note that WADO's default transfer syntax is explicit little endian, so one would assume that an Image Archive supporting WADO would have a good data dictionary management system. Initially I wasn't going to support parsing of implicit data at all but decided to mainly for convenience (and the fact that many of my test data sets are in implicit little endian transfer syntax and I am too lazy to convert them to an explicit transfer syntax).

Does not require a data dictionary

As a client, you usually know which elements you want to access and what type they are, so designing a client oriented parser around a data dictionary adds unnecessary complexity, especially if you can stick to explicit transfer syntaxes. I also believe it is the server's responsibility to provide the client safe and easily digestible data (i.e. explicit transfer syntaxes). A server typically supports many types of clients, so it makes sense to centralize data dictionary management in one place rather than burden each client with it.

Data dictionaries are not required for most client use cases anyway, so I decided not to support them in this library at all. For those use cases that do require a data dictionary, you can layer it on top of this library. An example of doing so is provided in the live examples. If you do want to know the VR, request the instance in an explicit transfer syntax and you can have it. If your Image Archive can't do this for you, get a new one - seriously.

Decodes private elements "on demand" - this goes with not needing a data dictionary

See above, this is related to not requiring a data dictionary. Usually you know exactly what elements you need and what their types are. The only time this is not the case is when you are building a DICOM Dump utility or you can't get an explicit transfer syntax and have one of those problematic elements that can be either OB or OW (and you can usually figure out which one it is without the VR anyway).

Code guards against corrupt or invalid data streams by sanity checking lengths and offsets

Even though you would expect an Image Archive to never send you data that isn't 100% DICOM compliant, that is not a bet I would make. As I like to say - there is no "DICOM police" to punish vendors who ship software that creates byte streams that violate the DICOM standard. Regardless, it is good practice to never trust data from another system - even one that you are in full control of.

Does not depend on any external dependencies - just drop it in and go

Sort of addressed above: maximizing adoption requires that the library minimize the burden on its users. I did find a few interesting libraries that were targeted at making it easier and safer to parse byte streams, but they just seemed like overkill, so I decided to do it all in one to keep it as simple as it could be. In general I am a big fan of building complex systems from lots of smaller simpler pieces. Some good references on this include the microjs site and the cujo.js manifesto.

Has unit tests

I generally feel that unit tests are often a waste of time for front end development. Where unit tests do make sense is code that is decoupled from the user interface - like a DICOM parsing module. I did use TDD on this project and had unit tests covering ~80% of the code paths passing before I even tried to load my first real DICOM file. Before I wrote this library, I did a quick prototype without unit tests that actually took me much less time (writing tests takes time...). So in the end I don't think it saved me much time getting to a first release, but I am hoping it will pay for itself in the long run (especially if this library receives wide adoption). I also know that some people out there won't even look at it unless it has good test coverage.

Interesting note here - I did not write unit tests for sequence parsing and undefined lengths, mainly because I found the standard hard to understand in these areas and didn't want to waste my time building tests that were not correct. I ended up making these work by throwing a variety of data sets at it and fixing the issues that I found. Getting this working took about 3x longer than everything else combined, so perhaps it would have been faster if I had used TDD on this part.

Code is easy to understand

In my experience, writing code that is easy to understand is far more important than writing documentation or unit tests for that code. The reason is that when a developer needs to fix or enhance a piece of code, they almost never start with the unit tests or documentation - they jump straight into the code and start thrashing about in the debugger. If another programmer is looking at your code, you probably made a mistake - either a simple typo or a design issue if you really blew it. In either case, you should have mercy on them in advance and make their unenviable task of fixing or extending your code the best it can be. Some principles I try to follow include:

  • Clear names for source files, functions and variables. These names can get very long but I find that doing so is better than writing comments in the source file
  • Small source files. Generally I try to keep each source file to under 300 lines or so. The longer it gets, the harder it is to remember what you are looking at
  • Small functions. The longer the function is, the harder it is to understand

You can find out more about this by googling for "self documenting code".

In the Wild

  • VS Code: DICOM Dump Extension

Copyright

Copyright 2016 Chris Hafey chafey@gmail.com


Source: https://www.npmjs.com/package/dicom-parser
