|Import & Export|
Export means to output the data on a view to an external file. Helix exports data in ASCII text format. When data is exported, it is not deleted from the collection: it is copied to the export file. By default, Helix creates a standard tab delimited file with return (ASCII 13) as the record delimiter.
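The default format described above can be sketched in a few lines of Python (the field names and records here are hypothetical; this is just an illustration of tab-delimited, CR-terminated output, not anything Helix itself runs):

```python
# Sketch of Helix's default export format: tab (ASCII 9) between fields,
# carriage return (ASCII 13) between records. Sample data is made up.

records = [
    ["Smith", "John", "42"],
    ["Jones", "Mary", "17"],
]

def to_helix_export(rows):
    """Join fields with tab and terminate each record with CR (ASCII 13)."""
    return "".join("\t".join(row) + "\r" for row in rows)

text = to_helix_export(records)

# newline="" keeps Python from translating the CR characters on write.
with open("export.txt", "w", encoding="ascii", newline="") as f:
    f.write(text)
```

Opening such a file in a modern text editor may show all records on one line, because many editors expect LF rather than CR line endings.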
Export is referred to as “dump” in older versions of Helix.
Import means to read data from an external file into the collection. Helix imports ASCII text data through a view (form) in the collection. The data is processed just as if you had typed it into the view. Field validations are respected and calculations (abaci) on the view are performed. Posting icons attached to the On Entry column are used.
Import is referred to as “load” in older versions of Helix.
|Changing the Field and Record Delimiters||
In the import/export dialog box there is a button labeled “Options” which displays a dialog box that lets you specify the start character, field separator (field delimiter), and stop character (record delimiter) of the text file. Any ASCII value from 0–255 may be specified for each of the delimiters.
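To make the three roles concrete, here is a small Python sketch of how a start character, field separator, and stop character divide a text stream into records. The specific characters chosen ("[", "|", "]") are arbitrary examples, not Helix defaults:

```python
# Illustration of start / field-separator / stop delimiters.
# Hypothetical choices: record starts with "[" (ASCII 91), fields are
# separated by "|" (ASCII 124), record stops with "]" (ASCII 93).

START, SEP, STOP = "[", "|", "]"

def parse(text):
    """Split text into records at the stop character, strip the start
    character, and split each record into fields at the separator."""
    records = []
    for chunk in text.split(STOP):
        if not chunk:
            continue
        if chunk.startswith(START):
            chunk = chunk[len(START):]
        records.append(chunk.split(SEP))
    return records

sample = "[Smith|John|42][Jones|Mary|17]"
print(parse(sample))  # two records of three fields each
```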
The options dialog also allows you to specify whether the text file includes a header row at the top, what to do if an error is encountered, and more.
|Importing Windows (or Unix) Text Files||
Trouble importing a file that comes from a Windows or Unix system is almost certainly caused by the record delimiter.
Classic Macintosh computers use a single character known as carriage return, return, or simply CR (ASCII 13) as a record delimiter. Unix computers use a single character known as linefeed or simply LF (ASCII 10) as a record delimiter. Windows computers combine the two (CR+LF) in a format commonly referred to as CRLF. macOS is transitioning from the CR character to the LF character, but it is doing it slowly. Each version of macOS makes the LF character more prevalent, and third party products are making the transition as well.
For backwards compatibility Helix continues to use the CR character as its default record delimiter, but this can be changed.
One solution is to first open the text file in a program (such as TextWrangler or BBEdit) that can modify the record delimiters, switching them to match the Helix view into which you are trying to import the data. Alternatively, if the text file uses a single character record delimiter (i.e. Mac or Unix, but not Windows) you can change the record delimiter the Helix view uses. This will also work with Windows files, but the last field imported for each record will have a CR appended to it.
|Importing From a Spreadsheet||
When importing data from a spreadsheet (such as Microsoft® Excel™) make sure the columns in the spreadsheet are in the same order as fields in the view, and save (or export) your spreadsheet as tab delimited ASCII text.
A common format for data exchange is the CSV, or Comma Separated Values, format. In a CSV file, a comma (ASCII 44) is used instead of the tab that Helix uses by default. Importing or exporting a pure CSV file is simply a matter of switching the field separator to comma.
However, the CSV format has a major drawback: sometimes a comma is actually part of the data, as in “John Smith, Jr.” To work around this problem of ambiguity, CSV files often enclose the fields in quote marks (" — ASCII 34), making the actual delimiters (the separator between fields) into ",". Because Helix only supports single character delimiters, this format presents a problem. (It presents problems outside of Helix as well: if the data actually contains the text "," it must be further encoded, creating a larger problem.)
Keep in mind that there is no single CSV standard, because of the ambiguities referred to above. If you need to import CSV formatted files, you will need to examine them in a text editor to determine how they handle ambiguities and program Helix to handle them.
Various programming techniques have been developed in Helix to handle CSV files. When exporting data that may include a comma, create an abacus that wraps the data in quote marks and export that abacus instead of the field itself. Importing CSV files where quote marks are used requires additional programming: the most popular method is to create the importing form in an inert relation (a relation containing only inert fields) and to use abaci to strip away the extraneous quotes before posting the data into the actual relation where the data is to be stored.
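The logic behind those two techniques can be sketched in Python (as plain functions standing in for the Helix abaci; the embedded-quote doubling shown on export is the common CSV convention mentioned above, not something Helix provides natively):

```python
# Illustrative sketch of the two CSV workarounds described above,
# written as ordinary functions rather than Helix abaci.

def quote_for_export(value):
    """Wrap a field in quote marks so an embedded comma cannot be
    mistaken for a field separator; double any embedded quotes."""
    return '"' + value.replace('"', '""') + '"'

def strip_quotes_on_import(field):
    """Remove one layer of surrounding quotes and un-double the rest,
    mimicking an abacus in an inert relation that cleans a field
    before posting it."""
    if field.startswith('"') and field.endswith('"'):
        field = field[1:-1]
    return field.replace('""', '"')

row = ["John Smith, Jr.", "Accounting"]
csv_line = ",".join(quote_for_export(v) for v in row)
print(csv_line)  # "John Smith, Jr.","Accounting"
```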
|Errors When Importing||
Data is validated on import to make sure it is the right data type for the field. If the data is the wrong type (e.g., importing text into a number field) Helix reports an error. Likewise, field validations are also checked during import: data that fails a validation check will also generate an error.
The Import Options dialog offers three actions when an error is detected: Stop Import, Ignore Field, or Ignore Record. The default option is to stop on error, so that you can compare the last record entered to the data in the next record in the text file to find the problem.
A good troubleshooting technique is to copy a portion of the problematic line from the text file (starting from the left), switch to Helix, and use the Paste Record command to store the line. By repeating this with progressively longer portions, you can identify the field that is triggering the error.
|Pictures and Documents||
Pictures (that is, the fields formatted as Picture type) are not imported or exported through the standard ASCII text file, because picture data is binary, not text. The only method of exporting pictures is to use AppleScript to place the picture data on the clipboard, then use an external application (such as Photoshop or GraphicConverter) to create a new document using the clipboard data.
Documents (that is, the fields formatted as Document type) can be imported and exported. On import, the text file must contain a valid file path in the column where the Document field is found in the Helix view. On export, the text file contains the file path to the exported document. (The options dialog provides settings for controlling document export.)
The Picture data type is older and more limited than the Document data type. We recommend using the Document data type wherever possible.