OS 2200 is the operating system for the Unisys ClearPath Dorado family of mainframe systems. The operating system kernel of OS 2200 is a lineal descendant of Exec 8 for the UNIVAC 1108 and was previously known as OS 1100. Documentation and other information on current and past Unisys systems can be found on the Unisys public support website.[note 1]

OS 2200
  • Developer: Unisys
  • OS family: OS 2200
  • Working state: Current
  • Source model: Closed source; most source is available to clients under license
  • Initial release: 1967, as Exec 8
  • Latest release: 20.0 (EXEC 50R1) / March 30, 2023
  • Marketing target: Enterprise / mainframes
  • Update method: Exec and some other components: line-number-based packaged changes; most components: interim corrections (ICs)
  • Package manager: PRIMUS (internal); COMUS and SOLAR (client and internal)
  • Platforms: UNIVAC 1100/2200 series; Unisys ClearPath Dorado systems; ClearPath Software Series 2.1 and 3.0 (over VMware)
  • Kernel type: Monolithic kernel (uniquely hardware-assisted)[citation needed]
  • Default user interface: Command-line interface
  • License: Proprietary; term license or pay-for-use (metered) licenses
  • Official website: OS 2200 site

See Unisys 2200 Series system architecture for a description of the machine architecture and its relationship to the OS 2200 operating system. Unisys stopped producing ClearPath Dorado hardware in the early 2010s, and the operating system is now run under emulation.[1]

History

There were earlier 1100 systems going back to the 1101 in 1951, but the 1108 was the first 1100 Series computer designed for efficient support of multiprogramming and multiprocessing. Along with this new hardware came the operating system Exec 8 (Executive System for the 1108).

The UNIVAC 1108 computer was announced in 1964 and delivered in late 1965. The first 1108 computers used Exec I and Exec II, which had been developed for the UNIVAC 1107. However, UNIVAC planned to offer symmetric multiprocessor versions of the 1108 with up to 4 processors and the earlier operating systems (really basic monitor programs) weren't designed for that, even though they supported limited multiprogramming.

[Figure: Genealogy of software]

When the UNIVAC 1110 was introduced in 1972, the operating system name was changed to OS 1100 to reflect its support for the wider range of systems. The name OS 1100 was retained until 1988, when the Sperry 2200 Series was introduced as a follow-on to the 1100 Series and the name was changed to OS 2200. Since then, the 2200 Series has become the Unisys ClearPath IX Series and then the Unisys ClearPath Dorado Series, but the operating system has retained the OS 2200 name.

The company name and its product names also changed over time.[2] Engineering Research Associates (ERA) of Saint Paul was acquired by Remington Rand Corporation. Remington Rand also acquired the Eckert–Mauchly Computer Corporation of Philadelphia, which was then building the UNIVAC computer. The two were combined into the UNIVAC division of Remington Rand under the direction of William Norris, one of the founders of ERA, who later left Remington Rand to start Control Data Corporation. The UNIVAC division of Remington Rand Corporation became the UNIVAC division of Sperry Rand Corporation after Remington Rand merged with Sperry Corporation. In the 1970s Sperry Rand began a corporate identity program that changed its name to Sperry Corporation and prefixed all the division names with Sperry, so the computer systems division became Sperry UNIVAC. Later the division names were dropped and everything simply became Sperry.

The operating system kernel is still referred to as "the Exec" by most Unisys and customer personnel. However, when Unisys began releasing suites of products tested together as system base releases, later called "ClearPath OS 2200 Release n", the term OS 2200 changed to refer to the entire suite of products in the system release and others, such as BIS, released asynchronously for the Dorado hardware platforms.

In 1986 the Burroughs and Sperry corporations merged to become Unisys (which some long-time 2200 Series clients say stands for "UNIVAC Is Still Your Supplier").[3] The major mainframe product lines of both companies have continued in development, including the MCP operating system from Burroughs and OS 2200 from Sperry.

In 2016 Unisys made a virtual version of OS 2200, running under Microsoft Windows, available at no cost for educational and leisure purposes.[4]

Exec 8

EXEC 8 (sometimes referred to as EXEC VIII) was UNIVAC's operating system developed for the UNIVAC 1108 in 1964. It combined the best features of the earlier operating systems EXEC I and EXEC II, which were used on the UNIVAC 1107. EXEC 8 was one of the first commercially successful multiprocessing operating systems. It supported simultaneous mixed workloads comprising batch, time-sharing, and real-time processing. Its single file system had a flat naming structure across many drums and spindles. It also supported a well-received transaction processing system.

Previous systems were all real-mode systems with no hardware support for protection and separation of programs and the operating system. While there had been support for multiprogramming in previous systems, it was limited to running one user job concurrently with multiple supporting functions known to be well-behaved, such as the card reader, printer, and card punch spoolers.

The Exec 8 operating system was designed from the very beginning to be a multiprogramming and multiprocessing operating system because the 1108 was designed to have up to four CPUs. Memory and mass storage were the primary system constraints. While the 1100 Series was envisioned as targeting a more general market, extreme real time processing was a primary requirement.[5]

The specifications for Exec 8 were drawn up by December 1964 as a preliminary Programmers Reference Manual (user guide) and work began in May 1965.[6][7]

Exec 8 began as a real time operating system with early use mostly in general scientific and engineering work, but it was also used in message switching, process control, simulation, and missile firing control. It was designed to run on systems that often had only 128K words (576 K bytes—less than the maximum memory size for the IBM PC XT), and was focused on real time and batch processing. While the earliest release levels did work in 128KW, increasing functionality in later releases made that untenable, since it didn't leave enough space for programs of useful size. The maximum memory capacity of an 1108 was 256KW (1,152 KB) so efficient use of memory was the most important constraint since core memory was the most expensive part of the system.

Mass storage consisted of 6-foot long rotating drums that held 256KW (in the FH-432) to 2MW (in the FH-1782). The highest capacity mass storage was the FASTRAND drum, which held 22 MW (99 MB). File fragmentation was dealt with by a process called a "file save", which was generally done once per day, at night. It involved rolling all files out to tape, reinitializing the drum file system, then reading the files back in.

With severe memory constraints and real time use, keeping only a single copy of code loaded into core was a requirement. Since the 1108 was designed for multitasking, the system was fully "reentrant" (thread safe). Each reentrant module accessed program data through a single memory "base address", which was different for each instance of run data. Switching execution contexts could be done in a single instruction merely by setting a different base address in a single register. The system used fine-grained locking to protect shared data structures. The executive, compilers, utilities, and even sophisticated user applications that might have multiple copies running concurrently were written so that their code could be shared. This required loading only one copy into memory, saving both space and the time it took to load the code.

Another reason to separate code and data into different load entities was that memory was implemented as two independent banks (separate physical cabinets) called IBANK and DBANK (instruction and data). Each had its own access path, so the CPU could read both banks simultaneously. By loading executable code into one memory bank and data into the other, the run time of many programs could be almost halved.

Re-entrant code had to be thread safe (execute only); self-modifying code was not allowed. For other programs, modifying executable code during runtime was still an acceptable programming technique in the time of 1100-series computers, but users were encouraged not to do it because of the performance hit. Security benefits were touted but not highly valued because hacking most 1100-series applications would provide no benefit to anyone, and because few hackers were malevolent then.

Exec 8 was primarily a batch processing system that gave applications (called "tasks") very fine control of CPU scheduling priority for its threads (called "activities"). Processor switching was preemptive, with higher priority threads gaining control of the processor currently running the lowest priority thread of any program. Except in realtime systems, even the lowest priority tasks got some processor time. It was a multiprogramming and multiprocessing operating system with fully symmetric processor management. A test-and-set instruction built into the hardware allowed very efficient and fine-grained locking both within the OS and within multi-threaded applications.

In Exec 8, work was organized into jobs, called "runs," which were scheduled based on their priority and need for lockable resources such as Uniservo tape drives or Fastrand drum files. The control language syntax used the "@" symbol (which Univac called "the master space") as the control statement recognition symbol. It was immediately followed by the command or program name, then a comma and any option switches. After a space character, the remainder of the statement differed for particular commands. A command to compile a FORTRAN program would look like "@FOR[,options] sourcefile, objectfile". Input data for an application could be read from a file (generally card images), or could immediately follow the @ command in the run stream. All lines until the sentinel command "@END" were assumed to be input data, so forgetting to insert it led to the compiler interpreting subsequent commands as program data. For this reason, it was preferable to keep data in files rather than placing it in the run stream.
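
The general statement layout just described can be shown with a short parsing sketch. This is illustrative Python, not part of any Unisys software; the pattern is a simplification, and the option letter and file names in the example are invented.

```python
import re

# Sketch of the control statement layout described above: "@" (the master
# space), the command or program name, an optional ",options" group, then a
# space and command-specific fields.
STATEMENT = re.compile(r"^@(?P<command>[^,\s]+)(?:,(?P<options>\S*))?(?: (?P<fields>.*))?$")

def parse_control_statement(line: str):
    m = STATEMENT.match(line.strip())
    if m is None:
        raise ValueError("not a control statement: " + line)
    return m.group("command"), m.group("options"), m.group("fields")

# A FORTRAN compilation of the form "@FOR[,options] sourcefile, objectfile";
# the option letter and file names are made up for the example:
print(parse_control_statement("@FOR,S PAYROLL*SRC.CALC, PAYROLL*OBJ.CALC"))
# -> ('FOR', 'S', 'PAYROLL*SRC.CALC, PAYROLL*OBJ.CALC')
```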

In 1968, work began on adding time-sharing capability to Exec 8. It was delivered with level 23 of the executive in 1969. Time-sharing (called demand mode) had the same capabilities as batch and real time processes. Everything that could be done in batch could be done from an ASCII terminal. In demand mode, job stream I/O was attached to a terminal handler rather than card image (input) and spool (output) files. The same run control language was used for both. A few years later, more specific time-sharing commands were added, and some control statements could be issued asynchronously for immediate processing, even when neither the executive nor the running program was expecting data. Those commands, which could be entered only from a terminal, began with "@@". Because they could be performed without stopping other work in progress from the same terminal, they were called transparent commands. At first these were just statements to kill the current program or redirect terminal output to a file, but eventually almost all control statements were allowed to be "immediate."

Both batch and demand runs terminate with an @FIN statement, and if a demand user terminates the session while a run is active, the Exec automatically terminates the run without requiring @FIN.

Communications software

A transaction processing capability was developed in the late 1960s as a joint project with United Airlines and later refined in another joint project with Air Canada. This capability was fully integrated into the operating system in 1972 and became the basis of much of the future growth of the 1100 Series. Early users controlled communication lines directly from within their real time programs. Part of the development of transaction processing included a communication message system that managed the communication lines and presented messages to Exec 8 to be scheduled as transactions. This moved all the low level communication physical line management and protocols out of the applications and into the CMS 1100 application.

CMS 1100 itself ran as a real time multi-threaded program with the privilege of acquiring control of communication lines and submitting transaction messages for scheduling. This led to the notion in Exec 8 that applications of any nature needed to be carefully controlled to ensure that they could not cause integrity issues. Security was certainly a concern, but in the early days system reliability and integrity were much larger issues. The system was still primarily batch and transaction processing, and there was little chance that anyone could install unauthorized code on the system. CMS 1100 later added the capability to be the interface for demand terminals as well as transaction terminals so that terminals could be used for both and the early terminal drivers could be removed from the Exec. CMS 1100 was later replaced by a combination of CPComm (ClearPath Enterprise Servers Communications Platform) and SILAS (System Interface for Legacy Application Systems).[8][9] For the Intel-based Dorado server models, the lower level communications were moved to firmware, with the upper levels handled by SILAS and CPCommOS (ClearPath Enterprise Servers Communications Platform for Open Systems).[10]

The Exec

The Exec contains all the code in the system that is allowed to run at the highest privilege levels. There are no mechanisms for other code to be promoted to those privilege levels.

The Exec is responsible for managing the system hardware, scheduling and managing work, and communicating with operators and administrators.

In Release 16.0, the Exec is level 49R2 (49.70.5). The internal system levels use a three-part number such as 21.92.42 (the first widely used production level, although earlier releases were used in production at a number of sites). The first number part is the major level and indicates a new version of the Exec with all previous updates integrated into a new base version. This is an infrequent process and occurs at intervals of years. The second number part indicates versions of updates to the major level; these often occur several times per week. When a decision is made to freeze the feature content and prepare for release, the third part comes into play and indicates versions of the pre-release level as fixes and minor feature updates are applied. Concurrently with preparing a level for release, updates to the "mainline" continue as engineers integrate changes in preparation for a future release. For many years the official release level was the full three-part number. Later releases were named simply 44R1, 44R2, 49R2, and so on, although the three-part number is still used internally.

Performing work

The Exec is at heart a real time, multi-threaded batch processing system. Everything has been built around that model. The Exec itself is largely structured as a real time program. Functions that are performed as Services in Windows or Daemons in Linux and UNIX are implemented as either activities within the Exec or as batch programs that are always running in the background.

Time-sharing (known as demand mode) and transaction processing are implemented as special cases of batch. One result is that there are few restrictions on what a time-sharing user or transaction program can do. Writers of transaction programs are warned that they will not be happy with performance if, for example, they call for a tape mount, but doing so is permitted.

The largest unit of work is the "Run." This is taken from the factory "production run" terminology and generally equates to job or session on other systems. A Run is defined by its "run stream." A run stream is a sequence of control statements that represent the steps to be taken. They may include file handling, program execution, and branches of control. A batch Run is typically stored as a file and is scheduled by a "Start" command from within another Run or by the operator. A time sharing Run is initiated by logging in from a time-sharing terminal and inputting the @RUN command. Often the @RUN statement and the second control statement (often @ADD or a program execution) are generated automatically based on the user profile. Security authorizations are validated based on the authenticated user-id and other information supplied on the Run control statement.

Transactions are a special case. There aren’t actually any control statements, but the internal data structures of a run are created. This enables the Exec to associate the same security, accounting, debugging, etc. mechanisms with transaction programs. Generally a security profile is cached in memory at the time the transaction user is authenticated and is copied from the user's session data to the transaction run state when the transaction is scheduled. Because each transaction instance is essentially a Run, accounting, logging, and error handling are all encapsulated by the Run mechanism.

Batch

Batch jobs (Runs) are characterized by having a runstream (job control language statements) stored in a file. A batch job always contains an @RUN statement as the first record in the file. This statement gives the run a name (runid), defines priorities, and defines the maximum number of SUPS (Standard Units of Processing) the job is expected to use. The job is started from some other job with an @START control statement or by the operator via an ST keyin. The system may be configured to automatically issue @START statements for any number of jobs when it boots. These jobs perform initialization, recovery, and background functions.

All of the fields on the @RUN statement may be overridden by corresponding fields on the @START statement. Except when the @START is executed by a privileged user, the userid and other security state are always taken from the run doing the @START.

There are two priority fields on the @RUN statement. One is used to specify the backlog priority. There are 26 backlog priority levels (A – Z). The Exec has a configured maximum number of open batch runs. When that level is reached, jobs are selected from the backlog queues in priority order. Within a priority, selection is usually FIFO. However, the Exec pre-scans the job control statements up to the first program execution looking for file names and reel numbers. If the job would immediately stall because some resources it needs are not available, it may be bypassed to start other jobs at the same priority level.
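
A minimal Python sketch of that backlog selection policy follows; the job and resource representations are invented for illustration, and the Exec's actual scheduler is more elaborate.

```python
from collections import deque

# Illustrative sketch of backlog selection: 26 priority queues (A-Z), FIFO
# within a priority, but a job whose needed resources (file names and reel
# numbers found by the pre-scan) are unavailable may be bypassed.
def select_next_job(backlog, available_resources):
    """backlog: dict mapping 'A'..'Z' to deques of jobs; job.needs is a set
    of resource names the pre-scan found."""
    for priority in map(chr, range(ord("A"), ord("Z") + 1)):
        queue = backlog.get(priority, deque())
        for job in list(queue):                   # FIFO order within the priority
            if job.needs <= available_resources:  # would not stall immediately
                queue.remove(job)
                return job
    return None  # nothing startable; wait for resources or an open-run slot
```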

The second priority field defines an execution processor resource group. Higher execution group priorities generally get more processor time.

While the OS 2200 job control language does not support full programmability, it does allow dynamic additions of sequences of control language through an @ADD control statement. The file to be added may have been created by the same job immediately preceding adding it. The @ADD and most other control statements may also be submitted from within a running program via an API.[11] Additional programmability is available indirectly through the use of the Symbolic Stream Generator (SSG).[12] SSG is a programming language for manipulating and creating text files from input parameters and system information. It is used heavily for configuration management (make) processing and other functions where text images need to be created programmatically. The resulting output can be "@ADD"ed in the same run thus providing the indirectly programmable runstream.

Operator commands are available to change both the backlog and execution priorities of runs. As all operator commands are available by API to suitably privileged users, this can be automated or controlled by a remote administrator.

Deadline is a special case of batch. A deadline run looks just like any other batch run except that a deadline time is specified on the @RUN or @START control statement. The deadline time is used in conjunction with the maximum SUPS (time estimate) on the control statement. A deadline job runs at normal batch priorities unless or until it appears that it could miss its deadline time. From then on, the greater the mismatch between the time remaining until the deadline and the remaining SUPS estimate, the higher the priority. While deadline can’t totally shut off transactions and has no effect on real time, it can effectively shut off most other processing in the system if necessary to achieve its goal.
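
As a rough illustration of that escalation, here is a hedged Python sketch; the scaling, the priority numbering (larger means more urgent here), and the field names are invented and simplified, not the Exec's actual algorithm.

```python
# Invented, simplified sketch of deadline escalation: the run keeps its normal
# batch priority while it is on track, then gains priority as the time left
# shrinks relative to the SUPS (time estimate) still needed.
def deadline_priority(normal_priority: int,
                      seconds_to_deadline: float,
                      sups_remaining: float,
                      max_boost: int = 35) -> int:
    if seconds_to_deadline > sups_remaining:
        return normal_priority                      # on track: ordinary batch priority
    shortfall = sups_remaining - seconds_to_deadline
    boost = 1 + int(max_boost * shortfall / max(sups_remaining, 1.0))
    return normal_priority + min(boost, max_boost)  # worse mismatch, higher priority

print(deadline_priority(10, seconds_to_deadline=300, sups_remaining=600))  # -> 28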

Demand

OS 2200 time-sharing sessions are called demand (from "on demand") runs. They use the same control language as batch runs with a few additions known as "immediate" control statements. Immediate control statements use the "@@" sentinel which indicates that they are to be executed immediately even if a program is running. While they can be used to create or assign files, the most important ones allow a demand user to error terminate a running program or even send it a signal.

Transactions

[Figure: Transaction processing diagram]

Transactions execute as runs but without any stored or submitted control statements. Instead when a message is received from a session defined as a transaction session, it is scanned to determine the transaction queue on which it is to be placed. This is normally determined by the first characters of the message but user-written scanners may be added.[13]

The communication manager, which is capable of handling up to 250,000 active sessions, takes incoming transaction messages and passes them to the message queuing software. It can handle an unlimited number of queued messages using the message queuing architecture. A call is made to the Transaction Interface Package (TIP) APIs in the operating system to queue the transaction on the appropriate queuing point. Each queuing point identifies the priority and concurrency level of the work and the associated transaction program to be executed.

[Figure: Transaction scheduling diagram]

A transaction program scheduling tree allows the client to establish relative usage for groups of transaction programs. Concurrency limits avoid one type of work dominating the system to the exclusion of other work and avoid creating an overcommitment of resources. Up to 4094 nodes may be created in the tree.

  • Maximum concurrency specified for each node in the tree
  • Concurrency of higher node limits total concurrency of dependent nodes
  • Concurrency of highest node limits system concurrency

Priority (0 to 63) and concurrency level (1 to 2047) can be specified for each transaction program.

The highest priority transaction is selected for scheduling except as limited by the concurrency policies in effect for its node and higher nodes.
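
A hedged Python sketch of that selection rule follows; the node and queue representations are invented, and the real scheduler is considerably more involved.

```python
# Invented, simplified sketch: a waiting transaction is eligible only while its
# program's node and every node above it are under their concurrency limits;
# among eligible transactions the highest priority (0-63) is scheduled first.
class Node:
    def __init__(self, limit, parent=None):
        self.limit = limit          # maximum concurrency configured for this node
        self.running = 0            # transactions currently counted against it
        self.parent = parent        # None for the highest node

def has_capacity(node):
    while node is not None:
        if node.running >= node.limit:
            return False            # a higher node's limit caps all dependent nodes
        node = node.parent
    return True

def schedule_next(waiting):
    """waiting: list of (priority, node, transaction) tuples."""
    for priority, node, txn in sorted(waiting, key=lambda w: w[0], reverse=True):
        if has_capacity(node):
            n = node
            while n is not None:    # the new transaction counts at every level
                n.running += 1
                n = n.parent
            waiting.remove((priority, node, txn))
            return txn
    return None                     # everything eligible is blocked by a limit
```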

Real time

Real time is not another type of run. Rather it is a set of priority levels which any activity may request. Real time is most typically used by long running batch programs, like the OS 2200 communications manager CPComm, but is not restricted to such.

There are 36 real time priority levels available by API for applications to use. The user and account must have the privilege to use real time priorities. It is up to the site to control how their applications use the priority levels. Real time priorities totally dominate all lower priorities, so it's quite possible for a misbehaving real time program to tie up one or more processors.

The real time priority applies to an individual activity (thread) so a program may have both real time and non-real time threads executing at the same time.

CPU dispatch

Once a run has been started, getting access to the processor controls its rate of progress. The heart of the Exec is the Dispatcher which manages all the processors.[14]

[Figure: Dispatching priorities diagram]

The Exec supports up to 4095 dispatching priorities although most sites define only a small subset of those. The two highest "priorities" aren’t switchable. They are recognition of certain types of processing that must be allowed to continue on the processor on which they started until they voluntarily give up control. Interrupt lockout occurs when an interrupt arrives or in a few special cases when other Exec code prevents all interrupts (in order to change some data that an interrupt handler may also access).

Interlock is used by interrupt post processing routines that either need to run on the same physical processor or simply should not be interrupted. The Dispatcher, I/O completions, and I/O initiation are some examples. All locks used by both of these priorities are spin locks as the only way they can be set by someone else is on another processor and the design requires that they only be set for very short instruction sequences.

High Exec priority is used by the operator command handler and some other functions that may have to run even when a real time program has control. They are expected to use only very short amounts of time. If they need more time, they should queue the work to be processed by a Low Exec activity.

Real time activities have an unlimited processor quantum and run without switching unless interrupted by a higher priority real time activity or High Exec activity. Real Time activities are given control of any available processor that is running something of lower priority. Interrupts are sent between processors when necessary to ensure immediate availability. Real time is used by customers to fly missiles, run simulators, and other functions that require immediate response.

Transaction priorities may be handled in two ways as defined by the site. They may be a sort of lower priority real time in that only the priority matters and the quantum size is essentially infinite. This is appropriate for very short-lived transactions such as airline reservations; if one loops due to a programming error, the Exec will terminate it when it reaches its very small configured maximum time. The other form allows the Exec to vary the priority within a range to optimize system resource usage. The approach gives higher priority and shorter time slices to programs that are I/O limited and progressively lower priorities but longer time slices to those that are computing. The Exec dynamically adjusts these priorities based on behavior as programs often behave both ways at different times. This approach is appropriate for longer running transactions like database queries or airline fare quotes.

Batch and demand always use dynamically adjusted priorities. Programs that are I/O limited or are in a conversation with a time-sharing user get higher priorities but short time slices. More compute-oriented programs get lower priorities and longer time slices.
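
The following hedged Python sketch illustrates that dynamic adjustment; the thresholds, bounds, and field names are invented, and the Exec's actual heuristics are not described here in that detail.

```python
# Invented thresholds and bounds: an activity that spent most of its recent
# quantum waiting on I/O or terminal input is treated as I/O-limited (higher
# priority, shorter slice); one that used its whole quantum is treated as
# compute-bound (lower priority, longer slice).
def adjust_priority(activity, io_wait_fraction):
    if io_wait_fraction > 0.5:                                   # I/O-limited behavior
        activity.priority = min(activity.priority + 1, activity.ceiling)
        activity.quantum_ms = max(activity.quantum_ms // 2, 5)
    else:                                                        # compute-bound behavior
        activity.priority = max(activity.priority - 1, activity.floor)
        activity.quantum_ms = min(activity.quantum_ms * 2, 400)
    return activity
```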

The Exec has two additional mechanisms for optimizing dispatching. One is affinity-based dispatching. When possible the Exec will run an activity on the same processor that it was on the last time to get the greatest advantage of residual cache contents. If that isn't possible it tries to keep the activity on the "nearest" processor from the standpoint of cache and memory access times. The second is a "fairness" policy mechanism. The site can define the relative percentage of resources to be allocated to each of transactions, demand, and batch. Within transactions and batch there are priority groupings that can further indicate what percentage of their group's time is to be allocated to the priority. This ensures that transactions cannot so dominate the system that no batch work gets done. Within the various priority groupings it ensures that some progress can be assured for each group (unless the group percentage is zero). These "fairness" algorithms only come into play when the processors are very busy, but OS 2200 systems often run with all processors near 100% utilization.

Metering

OS 2200 supports several models for system performance management.[15] Customers may purchase a certain fixed performance level, and the Exec will monitor processor usage to ensure that performance does not exceed that level. Customers can also purchase additional performance either temporarily or permanently up to the full capacity of the system if their workload increases or an emergency requires it.

More recently the system has added a metered usage capability. In this mode the full power of the system is always available to the customer (although they may administratively limit that). The usage is accumulated over a month and then the reported usage is submitted to Unisys billing. Depending on the specific contract terms the client may receive a bill for excess usage above some contracted baseline for the month or just a statement showing that the total contracted usage has been decremented. The first form is like a cell phone bill with the potential for charging for excess minutes. The latter is like buying a pre-paid phone card.
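
As a hedged illustration of the two contract styles, here is a small Python sketch; the field names, units, and settlement shapes are invented.

```python
# Invented sketch of month-end settlement under the two metered-usage styles
# described above: billing for usage above a contracted baseline (like excess
# cell phone minutes) or drawing down a pre-paid balance (like a phone card).
def month_end_settlement(metered_usage, contract):
    if contract["style"] == "excess-billing":
        overage = max(0.0, metered_usage - contract["monthly_baseline"])
        return {"bill_for_excess": overage}
    else:  # pre-paid style
        contract["prepaid_balance"] -= metered_usage
        return {"contracted_usage_remaining": contract["prepaid_balance"]}

print(month_end_settlement(120.0, {"style": "excess-billing", "monthly_baseline": 100.0}))
# -> {'bill_for_excess': 20.0}
```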

File system

OS 2200 does not have a hierarchical file system like those of most other operating systems. Rather, it has a structured naming convention and the notion of container files called program files.

Files in OS 2200 are simply containers that may be addressed either by word offset in the file or by sector (28-word unit) offset in the file. The 28-word unit is a historical unit from an early mass storage device (the FASTRAND drum) that could hold 64 such units per physical track. Nonetheless, it is a fortunate historical accident. Four such 28-word units, or 112 words, occupy 504 bytes. With today's mass storage devices all using 512-byte physical records, OS 2200 clients have almost all adopted some multiple of 112 words as their physical record size and database page size. I/O processors automatically adjust for the 504<->512 byte mapping, adding 8 bytes of zeros on writes and stripping them off on reads of each physical record. OS 2200 handles applications that use sizes other than multiples of 112 words by indivisibly reading the containing physical records and writing back out the unchanged and changed portions with data chaining. Special locking functions guarantee indivisibility even when there are device errors and across multiple systems in a cluster.
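
The 504-to-512 byte padding is simple enough to show in a short Python sketch; the function names are invented, and the real work is done in the I/O processors rather than in software like this.

```python
LOGICAL = 504   # 112 words of 36 bits = 504 bytes
SECTOR = 512    # modern physical record size

def to_physical(logical: bytes) -> bytes:
    """Pad each 504-byte logical record with 8 zero bytes, as on writes."""
    assert len(logical) % LOGICAL == 0
    out = bytearray()
    for i in range(0, len(logical), LOGICAL):
        out += logical[i:i + LOGICAL] + bytes(8)
    return bytes(out)

def to_logical(physical: bytes) -> bytes:
    """Strip the 8 pad bytes from each 512-byte physical record, as on reads."""
    assert len(physical) % SECTOR == 0
    out = bytearray()
    for i in range(0, len(physical), SECTOR):
        out += physical[i:i + LOGICAL]
    return bytes(out)

data = bytes(range(256)) * 2 + bytes(496)      # exactly two 504-byte records
assert to_logical(to_physical(data)) == data   # the round trip is lossless
```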

File formats and other internal data structures are described in the Data Structures Programming Reference Manual.[16]

File names

Ever since Exec 8, file names have taken the form: Qualifier*Filename(f-cycle) (e.g., "PERSONNEL*EMPLOYEES(+1)").[11] Qualifier and filename are simply twelve-character strings used to create whatever naming structure the client desires. F-cycle is a number from 0 to 999 that allows multiple generations of a file. These may be referenced by relative numbers: (+1) next or new cycle, (-1) previous cycle, (+0) current cycle. Leaving the cycle off defaults to the current cycle. Batch production runs that create new generations of files use this approach. The numbers wrap around after 999. Only 32 consecutive relative cycle numbers may exist at one time; creating a (+1) deletes (-31).

Any file may be used as a program file. A program file contains elements which generally act as files. Element naming is Qualifier*Filename(f-cycle).Element/version(e-cycle) (e.g., "PERSONNEL*PROGRAMS.TAXCALC/2008"). Element and version are twelve-character names used in any way a user desires. E-cycle is similar to f-cycle in that it represents a generation number but without the restriction to 32 concurrent cycles and the limit is 256K cycles. However, e-cycle only applies to text elements and each line in a text element is marked with the cycle numbers at which it was inserted and deleted. Elements also have a type and sub-type. The most commonly used types are "text" and "object." If the default type is not suitable, options select the appropriate type. Text elements also have sub-types that typically represent the programming language (e.g., "ASM", "C", "COB", "FOR"). The default element name of an object file is the same as the text file from which it was created.
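
A hedged Python sketch of this naming syntax follows; the regular expression is a simplification (real names have further character and length rules), and the relative-cycle resolution mirrors only the wrap-around behavior described above.

```python
import re

# Simplified pattern for Qualifier*Filename(f-cycle).Element/version(e-cycle);
# everything after the filename is optional.
NAME = re.compile(
    r"^(?P<qualifier>[^*]+)\*(?P<filename>[^(.]+)"
    r"(?:\((?P<fcycle>[+-]?\d+)\))?"
    r"(?:\.(?P<element>[^/(]+)(?:/(?P<version>[^(]+))?(?:\((?P<ecycle>[+-]?\d+)\))?)?$"
)

def parse_name(name: str) -> dict:
    m = NAME.match(name)
    if m is None:
        raise ValueError("not a recognizable file or element name: " + name)
    return m.groupdict()

def resolve_f_cycle(relative: str, current: int) -> int:
    """Resolve a relative f-cycle such as '+1' or '-1' against the current
    absolute cycle; absolute cycles run 0-999 and wrap around after 999."""
    if relative.startswith(("+", "-")):
        return (current + int(relative)) % 1000
    return int(relative)

print(parse_name("PERSONNEL*PROGRAMS.TAXCALC/2008"))
print(parse_name("PERSONNEL*EMPLOYEES(+1)"))
print(resolve_f_cycle("+1", 999))   # -> 0: cycle numbers wrap after 999
```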

An object element may be executed if it is a main program or linked with other object elements including a main program. The linking may be static or dynamic. A main program may be executed without pre-linking provided all required sub-programs are in the same program file, are system libraries, or are otherwise known. Rules may be included in a program file to direct the dynamic linker's search for unfulfilled references. The linker may also be used to statically link multiple object modules together to form a new object module containing all instructions, data, and other information in the original object modules.

Omnibus elements may be used as data by applications or may serve to hold structured information for applications and system utilities. There is no assumed structure to an omnibus element.

For compatibility with earlier (basic mode) programming models, there are relocatable and absolute element types. Relocatable elements are the output of basic mode compilers. They may be combined by the basic mode static linker (@MAP – the collector) to form an "absolute" element which is executable.

File management

OS 2200 implements a fully virtual file system. Files may be allocated anywhere across any and all mass storage devices. Mass storage is treated as a large space pool similar to the way virtual memory is managed. While contiguous space is allocated if possible, mass storage is treated as a set of pages of 8KB size and a file can be placed in as many areas of the same or different devices as is required. Dynamic expansion of files attempts to allocate space adjacent to the previous allocation, but will find space wherever it is available. In fact, files need not even be present on mass storage to be used. The Exec and the file backup system are fully integrated. When file backups are made, the tape reel number(s) are recorded in the file directory. If space gets short on mass storage, some files are simply marked as "unloaded" if they have a current backup copy, and their space is available for use. If enough space can't be found that way, a backup is started.

Any reference to an unloaded file will be queued while the file is copied back to mass storage. The whole system is automatic and generally transparent to users.[17]

Access methods

In general, the Exec does not provide access methods. Files are simply containers. Access methods are provided by the language run time systems and the database manager. The one exception is a fixed-block access method provided for high-volume transaction processing.[18] It has much less overhead than the database manager, but does participate in all locking, clustering, and recovery mechanisms.

Removable packs

When clients want more explicit control over the location of files, they can use the "removable pack" concept. At one time these truly represented physically removable disk packs, and the operating system would automatically generate pack mount requests to operators as needed.

Today they are still used to place files, usually database files or transaction files, on one or more disk volumes. Files may still span multiple disk volumes, and now the list of volume names is given when the file is created. Files that are on such volume groups are still backed up but are not subject to automatic virtual space management.

CIFS

OS 2200 also provides a full implementation of the Common Internet File System (CIFS).[19] CIFS implements the SMB protocol used by Microsoft servers and the UNIX/Linux Samba software. CIFS for ClearPath OS 2200 is both a file server and file client to other CIFS-compliant systems. This includes desktop PCs running Windows. CIFS supports SMB message signing.

To maintain OS 2200 security, CIFS for ClearPath OS 2200 provides two levels of protection. First, OS 2200 files are not visible to the network until they have been declared as "shares" with a CIFS command. A specific privilege exists to control who may declare a share. The second level of control is that all access is still protected by OS 2200 security. Clients accessing OS 2200 via CIFS will either have to be automatically identified via NTLM or Kerberos or they will be presented with a query for their OS 2200 user id and password.

CIFS allows OS 2200 files to be presented in a hierarchical view. Typically the qualifier will appear as the highest level in the tree followed by filename, element name, and version. In addition, files may be stored on OS 2200 servers using the full Windows filename format. Windows applications will see OS 2200 as another file server. OS 2200 applications have APIs available to read and write files existing on other CIFS-compliant servers, such as Windows file servers, in the network. Text files are automatically converted to and from OS 2200 internal formats. Binary files must be understood by the application program.
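
A minimal sketch of that hierarchical presentation, in Python; the separator and function are illustrative only, not the product's actual mapping code.

```python
# Illustrative only: qualifier at the top of the tree, then filename,
# element name, and version, joined as a Windows-style share path.
def to_share_path(qualifier, filename, element=None, version=None):
    parts = [qualifier, filename]
    if element:
        parts.append(element)
        if version:
            parts.append(version)
    return "\\".join(parts)

print(to_share_path("PERSONNEL", "PROGRAMS", "TAXCALC", "2008"))
# -> PERSONNEL\PROGRAMS\TAXCALC\2008
```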

The CIFSUT utility running under OS 2200 can exchange encrypted compressed files with other software, such as WinZip.

Subsystems

The concepts of subsystems and protected subsystems are central to the design of OS 2200. A subsystem is most analogous to a .dll in Windows. It is code and data that may be shared among all programs running in the system.[20] In OS 2200 each subsystem has its own set of banks that reside in a separate part of the address space that cannot be directly accessed by any user program. Instead the hardware and the OS provide a "gate" that may be the target of a Call instruction. See Unisys 2200 Series system architecture for more information.

The database managers, run time libraries, messaging system, and many other system functions are implemented as subsystems. Some subsystems, usually consisting of pure code, such as the run time libraries, may be the direct target of a Call instruction without requiring a gate. These subsystems run in the user program's protection environment. Other subsystems, such as the database managers, consist of code and data or privileged code and may only be called via a gate. These subsystems may also have access control lists associated with them to control who may call them. More importantly, the gate controls the specific entry points that are visible, the protection environment in which the subsystem will run, and often a user-specific parameter that provides additional secure information about the caller.

Security

B1 security

The OS 2200 security system is designed to protect data from unauthorized access, modification, or exposure. It includes an implementation of the DoD Orange Book B1 level specification.[21] OS 2200 first obtained a successful B1 evaluation in September, 1989. That evaluation was maintained until 1994. After that point, OS 2200 developers continued to follow development and documentation practices required by the B1 evaluation.

Central to a B1 system are the concepts of users and objects.[22][23] Users have identities, clearance levels, compartments and privileges. Objects require certain combinations of those for various types of access. Objects in OS 2200 consist of files, protected subsystems, devices, and tape reels.

The security profile of a user session includes the user identity, clearance level (0-63), compartment set, and set of allowed privileges. OS 2200 implements both Mandatory Access Control (MAC) and Discretionary Access Control (DAC) based on the Bell-La Padula model for confidentiality (no read up, no write down) and the Biba integrity model (no read down, no write up). For a run to read or execute a file, the run's executing clearance level must be greater than or equal to the clearance level of the file, and the file's clearance level must be 0 or within the clearance level range of the run; in addition, the run's executing compartment set must contain the file's compartment set. Because OS 2200 combines the Bell-La Padula and Biba model requirements, a run's executing clearance level and compartment set must exactly match those of a file to permit writing to the file or deleting it.
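
The clearance and compartment rules in the preceding paragraph can be restated as a hedged Python sketch; the parameter names and the run's clearance range are modeled loosely, and the Exec's real checks involve more state.

```python
# Loose model of the mandatory access checks described above.
def may_read_or_execute(run_clearance, run_clearance_range, run_compartments,
                        file_clearance, file_compartments):
    low, high = run_clearance_range
    return (run_clearance >= file_clearance
            and (file_clearance == 0 or low <= file_clearance <= high)
            and file_compartments <= run_compartments)   # run contains file's compartments

def may_write_or_delete(run_clearance, run_compartments,
                        file_clearance, file_compartments):
    # Combining Bell-La Padula and Biba: clearance level and compartment set
    # must match exactly to write to or delete the file.
    return (run_clearance == file_clearance
            and run_compartments == file_compartments)

print(may_read_or_execute(5, (0, 10), {"HR"}, 3, {"HR"}))   # -> True
print(may_write_or_delete(5, {"HR"}, 3, {"HR"}))            # -> False: levels differ
```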

DAC associates an access control list with an object; the list identifies users and user groups that have access and defines the type of access that user or group is allowed (read, write, execute, or delete).

Because the full set of B1 controls is too restrictive for most environments, system administrators can configure servers by choosing which controls to apply. A set of security levels from Fundamental Security through Security Level 3 serves as a starting point.

Security officer

Every OS 2200 system has one user designated as the security officer. On systems configured with fundamental security, only the security officer is allowed to perform certain tasks. On systems configured with higher levels of security, other trusted users may be allowed to perform some of these tasks.

OS 2200 provides a fine-grained security mechanism based on the principle of least privilege. This principle requires that only the minimum privilege necessary to perform the required task be granted. Thus, OS 2200 has no concept of a "Super User" role that can be assumed by any user. Rather it uses a large set of specific privileges which may be granted separately to each user. Each privilege is associated with a specific authority.

File security

On systems configured with security level 1 or higher levels, the user who creates an object is the object's owner. The default is that the object is private to the creating user, but it may also be public or controlled by an access control list. The owner or the security officer may create an access control list for that object.

On systems configured with fundamental security, files do not have owners. Instead, they are created private to an account or project, or they are public. Access to them can be controlled by read and write keys.

Authentication

When users log on to the system, they identify themselves and optionally select the clearance level and compartment set they will use for this session.

OS 2200 offers a flexible authentication system. Multiple authentication mechanisms are supported concurrently. Client- or third party-written authentication software may also be used. Standard authentication capabilities include:

  • User id and password maintained in an encrypted file by OS 2200
  • Authentication performed by an external system such as Microsoft Windows using its user id and password mechanism
  • NTLM
  • Kerberos
  • LDAP

The last two permit the use of biometrics, smart cards, and any other authentication mechanism supported by those technologies.

Encryption

OS 2200 provides encryption for data at rest through Cipher API, a software subsystem that encrypts and decrypts caller data.[24] Cipher API also supports the use of a hardware accelerator card for bulk data encryption.

For CMOS-based Dorado servers, CPComm provides SSL/TLS encryption for data in transit. For Intel-based Dorado servers, SSL and TLS are provided by openSSL, which is included in the Dorado firmware. All Dorado servers support TLS levels 1.0 through 1.2, as well as SSLv3, but SSL is disabled by default because of vulnerabilities in the protocol.

Both CPComm and Cipher API use the encryption services of CryptoLib, a FIPS-certified software encryption module. The AES and Triple DES algorithms are among the algorithms implemented in CryptoLib.

OS 2200 also supports encrypting tape drives, which provide encryption for archive data.

Clustering

OS 2200 systems may be clustered to achieve greater performance and availability than a single system. Up to 4 systems may be combined into a cluster sharing databases and files via shared disks. A hardware device, the XPC-L, provides coordination among the systems by providing a high-speed lock manager for database and file access.[25]

A clustered environment allows each system to have its own local files, databases, and application groups along with shared files and one or more shared application groups. Local files and databases are accessed only by a single system. Shared files and databases must be on disks that are simultaneously accessible from all systems in the cluster.

The XPC-L provides a communication path among the systems for coordination of actions. It also provides a very fast lock engine. Connection to the XPC-L is via a special I/O processor that operates with extremely low latencies. The lock manager in the XPC-L provides all the functions required for both file and database locks. This includes deadlock detection and the ability to free up locks of failed applications.

The XPC-L is implemented with two physical servers to create a fully redundant configuration. Maintenance, including loading new versions of the XPC-L firmware, may be performed on one of the servers while the other continues to run. Failures, including physical damage to one server, do not stop the cluster, as all information is kept in both servers.

Operations and administration

Operations

OS 2200 operations is built around active operators and one or more consoles. Each console is a terminal window, part of which is reserved for a fixed display that is frequently updated with summary information about activity in the system.[26]

The rest of the console is used as a scrolling display of events. When a message is issued that requires an operator response, it is given a number from 0 to 9 and remains on the display until it is answered. Tape mount messages do scroll with other messages but will be repeated every two minutes until the tape is mounted.

Operations Sentinel is used for all OS 2200 operations.[27] OS 2200 consoles are simply windows within an Operations Sentinel display. There may be as many display PCs as desired. Remote operation is typical. Operations Sentinel supports any number of ClearPath, Windows, Linux, and UNIX systems.

An auto-action message database is released with the product.[28] This database allows Operations Sentinel to recognize messages. Scripts may be written to automatically respond to messages that require a response, hide unwanted messages, translate them to other languages, create events, etc. Full dark room operation is used by some clients. At most they will have Operations Sentinel displays at remote locations monitoring the system and creating alerts when certain events occur.

Administration

Administration of OS 2200 systems is performed using a wide variety of tools, each specialized to a particular area of the system. For example, there is a tool used for administering the transaction environment that allows new transaction programs to be installed, specifies all the necessary information about them, changes the queuing structure, priorities, and concurrency levels, and so on.[29]

Other tools are specific to the security officer and allow creation of users, changing allowed privileges, changing system security settings, etc.[22][30][23]

Most of the tools have a graphical interface although some do not. All provide a batch stored file interface where all actions are specified in the control stream. This allows scripting any and all of the administrative interfaces from either local sites, maybe based on time of day or other events, or from remote sites. Unique privileges are required for each administrative area.

Application groups

Application groups are a logical construct consisting of an instance of the Universal Data System (UDS),[31] an instance of the message queue subsystem, and some set of transactions. Each application group has its own audit trail. OS 2200 supports a maximum of 16 application groups in a system.

The notion of application group corresponds to what is often called "an application." That is, a set of programs and data that represent some larger unit of connected processing. For example, an application group might represent an airline system. Another application group might represent the corporate finance system. Or, application groups might represent instances of the same application and data models, as in bank branches. The important thing is that each application group has its own environment, sessions, recovery, etc.

Application groups may be started, stopped, and recovered independently.

Application groups do not have their own accounting and scheduling rules. Transactions in multiple application groups may share the same priorities and have interleaved priorities. This permits the site to control the relative priorities of transactions across the entire system.

Other locations of source material

The Unisys History Newsletter contains articles about Unisys history and computers. In addition to all of the Unisys History Newsletter issues, there are links to other sites.

Most of the historical archives of Unisys are at the Charles Babbage Institute at the University of Minnesota and at the Hagley Museum and Library in Delaware. The Charles Babbage Institute holds the archives from ERA, some early Remington Rand archives from Saint Paul, MN, and the Burroughs archives. The Hagley Museum and Library holds the bulk of the Sperry archives.

A very helpful introductory article about OS 2200 in the 2020s is available at Arcane Sciences.

References

Footnotes
