Emergency Medical Services

Mobile Intensive Care Units

Data Base Management System

EMS MICU DBMS

Sometime in 1979, a gentleman named Harlan Felt approached me at The Data Domain of Schaumburg to write what seemed, at the time, a simple report generator.

The State of Illinois required that statistics be collected for ambulance companies describing the care patients were given en route to the hospital; in Harlan's case, Loyola University Hospital.

It took a month to assemble the statistics, and Harlan thought maybe these new personal computers would be up to the task; particularly a 64K Apple ][ with dual 5¼" 140K diskette drives, a printer, and a monitor.

Apple ][ computer with monochrome monitor and dual 5¼" drives.

We came up with a budget and an estimated time frame of about two weeks. The report formats were already fixed; all that seemed to be needed was a robust data entry screen.

EMS MICU DBMS Proposal with report designs (15MB PDF)

What follows is a classic case of well-intended feature creep.

The Overview

For each ambulance run, the crew fills out a Run Sheet. The Run Sheet identifies the Ambulance Service.

The Run Sheet is a questionnaire covering patient information, the disorder the patient was experiencing (heart attack, burn, etc., all from a set of standardized code numbers), what services were rendered, and the condition of the patient.

Loyola Ambulance Run Worksheet

I thought some of the questions were redundant, but then I realized they were asking the same question from a different direction to arrive at the truth. One question was the condition of the patient at the end of the run: "Worse", "Same", or "Better". Harlan told me the crews would virtually never answer "Worse"; the attitude was that once they were on the scene, the patient wouldn't get worse. Even if the patient died en route, that wasn't something officially declared until a doctor examined the body, which was technically after the ambulance run.

The Run Sheets would be entered into the computer and stored on 5¼" floppy diskettes that could hold up to 140K (a K being 1,024 characters, for a total of 143,360 characters per diskette). [Dealing with the floppy diskettes would be problematic; that is detailed extensively later.] Multiple data floppy diskettes would be required to hold a month's worth of data for the 15 different ambulance companies, which we thought would be the maximum (Hah ha ha, famous last words).

The reports would read the data from the floppy diskettes, building the totals in memory, before generating the output.

Easy Peasy.

Data Entry

The Apple ][ screen was 40 columns wide and 24 lines deep. The Applesoft BASIC INPUT statement did not have the fine-grained control that was needed for this project.

So I wrote what was called an "Amper Routine", a machine language subroutine (invoked through Applesoft's ampersand command) to handle user input.

The writing of the Amper Routine for input is detailed on my Language Plus page.

Apple ][ Data Entry Screen

The red characters in the layout above represent the type of input that was allowed. The green characters represent output displayed in response to the user's entry, as input validation.

    N - Numeric characters only
    A - Alpha characters only
    X - Output in response to input

My Amper Input Routine responded to CTRL characters so the user could tab forwards and backwards through the input fields.
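
To give a feel for what the routine enforced, here is a rough modern sketch of the field-mask idea in Python; it is illustrative only, since the real thing was 6502 machine code driven from Applesoft.

    # Rough sketch of the field-mask idea; names are illustrative, not the original code.
    def accept(ch: str, mask: str) -> bool:
        """Return True if the typed character is allowed by the field's mask type."""
        if mask == "N":              # numeric characters only
            return ch.isdigit()
        if mask == "A":              # alpha characters only
            return ch.isalpha()
        return False                 # "X" positions are output-only, never typed into

    def validate_field(text: str, mask: str) -> bool:
        """A field is valid when every typed character passes its mask."""
        return all(accept(ch, mask) for ch in text)

    # Example: a numeric field accepts "0427" but rejects "04A7".
    assert validate_field("0427", "N")
    assert not validate_field("04A7", "N")
    assert validate_field("JD", "A")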

Report Generation

The writing of the report generator was as straightforward as I had predicted.

    The user would select the report they wanted to run.
    The report program would create arrays in memory to store the counts.
    The diskettes would be read to extract the information and add them to the internal arrays.
    After all the diskettes were read, the reports would be generated using the data in the arrays.

With the report-generating program loaded, there was just enough free memory left to hold the arrays storing the information for up to 15 ambulance services. (Remember that limitation.)
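
In modern terms, the accumulation pass looked roughly like the Python sketch below; the record fields and code numbers are made up for illustration, and the real program used fixed-size Applesoft arrays rather than a dictionary.

    # Sketch of the report pass: read each diskette, add its runs to in-memory totals.
    from collections import defaultdict

    MAX_SERVICES = 15                      # the in-memory limit mentioned above

    def tally_diskette(records, counts):
        """Fold one diskette's run records (service, disorder code) into the totals."""
        for service, disorder in records:
            counts[(service, disorder)] += 1

    counts = defaultdict(int)
    diskettes = [[(1, 410), (1, 220)], [(2, 410)]]   # stand-in data, two diskettes
    for diskette in diskettes:
        tally_diskette(diskette, counts)

    # After all diskettes are read, the reports are printed from these totals.
    print(counts[(1, 410)])                # -> 1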

Diskette Storage

We recognized early on that diskette management was going to be an issue.

The idea was to pre-format the number of diskettes that were projected to be needed over the course of a month. Over-allocating diskettes was not going to be a problem, due to the genius way I was going to manage them.

    Each diskette would have a number pre-assigned to it.
    There would be a flag saying if the diskette was in use.
    There would be a flag saying if the diskette was full.

Knowledge of usage was local to a diskette.

So if you mounted a diskette that was not in use, it would ask you to mount the diskette previous to it. If that diskette was not in use, it would ask you to mount the diskette previous to it. And so forth.

Until you finally mounted a diskette that was in use, but not full.

In a similar manner, if you mounted a full diskette, it would ask you to mount the diskette after it, until you finally mounted the diskette that was in use, but not full.

In the real world, we figured they'd keep track of the diskette they were entering data on and this elaborate method of diskette handling was just a backup in case they forgot.
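
A rough Python sketch of that chaining decision, assuming each diskette's label carries its pre-assigned number plus the in-use and full flags (the real check lived in Applesoft and DOS 3.3, not Python):

    # Decide what to ask the operator for, based only on the mounted diskette's label.
    def next_prompt(label):
        if not label["in_use"]:
            return f"Please mount diskette {label['number'] - 1}"   # walk backward
        if label["full"]:
            return f"Please mount diskette {label['number'] + 1}"   # walk forward
        return "OK: this diskette is in use and not full; continue entering data"

    print(next_prompt({"number": 5, "in_use": False, "full": False}))
    print(next_prompt({"number": 3, "in_use": True,  "full": True}))
    print(next_prompt({"number": 4, "in_use": True,  "full": False}))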

Apple ][ DOS 3.3 was a tad primitive in that you could not query a diskette to see whether you had room to write a record. You had to attempt the write, and if you didn't get an error, you were good.

You couldn't pre-calculate how many records you could store on a diskette because someone might store another file on the diskette that would ruin your planned storage count. (Not supposed to happen, but people make mistakes and I program in a failsafe manner.)

So you had to write until you encountered a disk full error, then...

    The computer would mark the current disk full and no longer in use.
    Then ask the user to mount the next diskette in the series.
    After the next diskette is mounted, the computer would mark it in use.
    Finally, the program would write the record that caused the error to the new floppy.

Brilliant, n'est-ce pas? Ha, ha, ha.
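
For the flavor of it, here is a rough Python sketch of that write-until-full flow. The Diskette class and its fields are stand-ins for the real DOS 3.3 files and flags, not the actual interface.

    # Write until DISK FULL, then mark the diskette done, swap, and retry the record.
    class DiskFull(Exception):
        pass

    class Diskette:
        def __init__(self, number, capacity):
            self.number, self.capacity = number, capacity
            self.records, self.in_use, self.full = [], True, False

        def write(self, record):
            if len(self.records) >= self.capacity:
                raise DiskFull()                 # only discovered by attempting the write
            self.records.append(record)

    def store_record(record, disk, mount_next):
        try:
            disk.write(record)
        except DiskFull:
            disk.full, disk.in_use = True, False # mark full and no longer in use
            disk = mount_next(disk.number + 1)   # operator mounts the next diskette
            disk.in_use = True
            disk.write(record)                   # the record that hit the error goes first
        return disk

    # Tiny demo: a two-record diskette overflowing into the next one in the series.
    current = Diskette(1, capacity=2)
    for rec in ["run A", "run B", "run C"]:
        current = store_record(rec, current, mount_next=lambda n: Diskette(n, 2))
    print(current.number, current.records)       # -> 2 ['run C']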

EMS MICU DBMS Design Notes (2.8MB PDF)

Life would be fine without users

First failure

If you remember, the plan was to pre-format diskettes with sequence information embedded in them. The suggested amount was 10. If not all were used, they could be recycled.

It seemed the users were unaware of this and merrily pre-formatted a series consisting of only one diskette.

So when they filled the first diskette and it asked for the next, they said, "What next diskette?"

We realized then the users were never going to use the system in the manner I had intended. They lacked the foresight to project how many diskettes they'd need in a month and were unwilling to overcommit (diskettes were considered expensive at $50 a box of ten). Besides, they'd then need to keep track of the unused diskettes over the month. Easier to just do one diskette at a time.

So I rewrote the Disk Full handler, complete with safeguards so they couldn't overwrite the previous diskette, because I was beginning to suspect they would if they could.
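
The guard amounted to something like this Python sketch, assuming the label of whatever diskette the user mounted could be read back before formatting it (the names are illustrative):

    # Refuse to format a diskette that already holds entered runs.
    def safe_to_initialize(label):
        if label is None:                        # no recognizable label: treat as blank
            return True
        if label.get("in_use") or label.get("full"):
            return False                         # would overwrite the previous diskette
        return True

    assert safe_to_initialize(None)
    assert not safe_to_initialize({"number": 3, "in_use": True, "full": True})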

Second Failure

After the first disk handling fix, things proceeded smoothly for a few months.

Then at the end of one day the data entry person had gotten the "Disk Full, Insert Blank diskette" message.

They looked around and didn't have another blank floppy handy, so they turned off the computer and went home. :-O

The next day, the computer wanted the next diskette and there was none.

I was called in to patch the disk record to get them back on the air again. They had also lost the last record that had been entered. Remember, you don't get to Disk Full until you try to write one more record than the diskette can handle. The record that triggered the error would then be the first record written to the next diskette.

So, how to deal with this situation?

The solution was to temporarily write the overflow record to the Program diskette in the other drive. These were two-drive systems: one drive held the programs and the second drive was used to write data.

The Program diskette had space to spare on it. So I set up a temporary data file on it and wrote the overflow record to it.

At startup, if the program found a record in the overflow file, it knew that the record had not been written to the next diskette in the series and that one needed to be created.

After the new diskette was initialized, it copied the overflow record to the diskette and removed the copy from the Program diskette.
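
A rough Python sketch of that recovery logic; the file name and helpers are made-up stand-ins for what actually lived on the Program diskette under DOS 3.3.

    # Park the overflow record on the program diskette, and replay it at startup.
    import json, os

    OVERFLOW_FILE = "overflow.json"              # stand-in for the file on drive 1

    def save_overflow(record):
        """Called when DISK FULL hits and no new data diskette is ready yet."""
        with open(OVERFLOW_FILE, "w") as f:
            json.dump(record, f)

    def recover_overflow(write_to_new_diskette):
        """At startup: an overflow record means a new data diskette must be created."""
        if not os.path.exists(OVERFLOW_FILE):
            return False
        with open(OVERFLOW_FILE) as f:
            record = json.load(f)
        write_to_new_diskette(record)            # copy it onto the fresh diskette...
        os.remove(OVERFLOW_FILE)                 # ...then remove it from the program disk
        return True

    save_overflow({"run": "C"})
    recover_overflow(lambda rec: print("written to new diskette:", rec))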

The Second System

Harlan went off and sold the software to another hospital, Ingalls, in Harvey, IL. I'd end up having a long term relationship with them due to the changes they wanted made to the software and hardware.

The initial problem was that, while we had thought Loyola was going to be the largest system we'd sell to, Ingalls turned out to be almost twice as large.

I was already at the limits of internal memory. This was going to require packing ten pounds in a five pound sack. Grrrr.

EMS MICU DBMS Proposal 1982 (½ MB PDF)

Not Quite Rewriting Everything
(but it was close)

Ingalls Ambulance Run Sheet
Ingalls Ambulance Run Worksheet

The original system used a simplistic mapping: array 1 signified ambulance 1. Now I was going to have to make a translation table, so that array 1 might represent ambulance 12.

So all references now had to be bounced off a translation table.
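
Roughly, in Python terms (the service numbers, and the use of a dictionary, are just for illustration; the original used an Applesoft array as the lookup):

    # Array slots no longer match service numbers one-to-one; map through a table first.
    selected_services = [12, 3, 27]              # the services chosen for this pass
    slot_of = {svc: slot for slot, svc in enumerate(selected_services)}

    counts = [0] * len(selected_services)        # one counter slot per selected service

    def count_run(service_id):
        slot = slot_of.get(service_id)
        if slot is not None:                     # other services wait for another pass
            counts[slot] += 1

    count_run(12)
    count_run(99)                                # not selected this pass; ignored
    print(counts)                                # -> [1, 0, 0]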

When it came time to run the reports, the user would select up to 12 services to generate statistics for. To do all the services, they'd have to run all the data diskettes through twice.

It took about six hours to generate all the reports.

Pretty good I thought, compared to the thirty days it used to take.

Turns out it wasn't fast enough for them.

"Tell me how six hours is not fast enough compared to thirty days?" I asked.

"Well, if the operator makes a mistake, there's not enough time for a second attempt, and they'd have to wait until the next day to try again," was the response.

OK, I still didn't see the need to speed it up, but...

Rewriting it all
(again)

With the computer already running at full speed, the solution was going to require a change in hardware; in this case, a Synetix 294K RAM card.

Synetix 294K RAM card

I wrote drivers to access the RAM card and the user now fed all the diskettes into the system while the programs collected the statistics into the card.

This time around, it took three hours to run the reports. So the user could screw up in the morning and still have time to screw up in the afternoon.

Upgrade to Apple IIgs

Ingalls decided they wanted to upgrade to the Apple IIgs when it came out.

Apple IIgs with 3½" & 5¼" drives.

The Apple IIgs had 256K of RAM, ran at 2.8 MHz instead of the Apple ]['s 1 MHz, used a 16-bit processor, the 65816, compared to the 8-bit 6502, and used 3½" floppy drives that could hold 800K instead of just 140K.

I obtained a quote and Ingalls said to get it and they'd reimburse me.

That was impossible for me; I couldn't afford to buy them a system and wait to be reimbursed, with their Accounts Payable running at least 90 days behind.

I told them they'd have to cut a check and then I'd buy it. They agreed, but it took a year for them to get approval and the check printed.

By then, the price of the GS had dropped to about half, and with payment from the work I had been doing for them, I was able to purchase one for myself as well.

Adding an Optical Mark-Sense Scanner

Ingalls decided that data entry was too time consuming and wanted to switch to a mark-sense (pencil bubble) reader.

The thought was to have the ambulance crews fill out preprinted sheets with the bubble marks. They already had to fill out the old ambulance run sheets; this way the data would be ready for machine reading instead of requiring a human operator to enter it from the manual run sheets.

Sample of Ingalls Optical Mark-Sense ambulance run sheet.

Nice idea; the problem was that the mark-sense machine they chose did not have a reject bin. If I detected a problem with a form, all I could do was stop the scanner on that sheet and show an error message on the screen.

Their final solution was to hand examine the sheets and correct any errors before feeding the sheets into the scanner. (I'll assume that made it faster in the end.)

Conclusion

The EMS MICU DBMS was the first large system that I wrote for the Apple ][.

It taught me a lot about user interactions and deficiencies of the Apple ][ DOS.

I'd fix a lot of those deficiencies with my Language Plus Amper Routines.

But the number 1 lesson was about feature creep.

How well-intended attempts to make the software function in a failsafe manner can make a project expand beyond the initial specifications and take much longer than expected.

Also how parts of a project you did not think would be a problem (I thought the data entry routines were going to be the major task) can jump out and bite you in the end.

Rewriting the disk handling routines to be idiot proof consumed far more time than expected.

One last bit, Harlan Felt went on to work at Apple and no doubt influenced Steve Jobs to mention the system (without mentioning it by name) in one of Apple's national ad campaigns.

I wish I had a copy of that ad. (sigh)


