Writing VMS Privileged Code
Part III: The Fundamentals, Part 3
Edward A. Heinrich
The term pool memory refers to one or more portions of memory that are reserved for dynamic memory allocations. The term pool refers to the pooling, or grouping, of available memory resources. Memory is allocated from the pool and may or may not be released back to that pool. Under OpenVMS, there are two kinds of pooled memory: nonpaged pool and paged pool. As the names suggest, the memory reserved for nonpaged pool is always present in physical memory, while paged pool can be paged in and out of physical memory.
Part of the physical memory of a VAX or Alpha AXP system running OpenVMS is dedicated for use by the OpenVMS operating system. This memory, which is the nonpaged portion of S0 space, is used to contain OpenVMS data structures that must be available to the operating system under these circumstances:
- when executing without process context;
- when executing at elevated IPL;
- when executing code that must be constantly present and that cannot be paged out.
The section of this memory that can be allocated for use by kernel code is referred to as nonpaged pool. The protection on the memory contained in nonpaged pool is ERKW, which allows code running in executive mode and kernel mode to read the data structures, but only kernel-mode code to write to them. The initial allocation of nonpaged pool is determined by the SYSGEN parameter NPAGEDYN and can be expanded dynamically, via the EXE$EXTENDPOOL executive routine, up to the limit defined by the SYSGEN parameter NPAGEVIR. Prior to OpenVMS VAX V6.0, EXE$EXTENDPOOL is defined in the OpenVMS source module MEMORYALC.MAR. In OpenVMS VAX V6.0 and OpenVMS AXP, it is defined in either MEMORYALC_MON, the module that implements pool checking, or in MEMORYALC_MIN, the version that does not include the pool checking code.
System programmers often need to allocate a chunk of nonpaged pool for use as a data structure or to contain code. For example, AST Control Blocks (ACB) are allocated from nonpaged pool. There are multiple entry points in the source module MEMORYALC for the allocation and deallocation of nonpaged pool. Some of these routines, such as EXE$ALLOCTQE, which allocates a Timer Queue Entry (TQE), allocate an explicit data structure and set the size and type fields in the structure, while others, such as EXE$ALONONPAGED, are general purpose allocation routines. Beginning with VAX/VMS V5.0, the executive contains a routine, EXE$DEBIT_BYTCNT_ALO, that can be called to allocate nonpaged pool and to also charge the number of bytes of allocated pool against the calling process’s byte limit quota.
The EXE$ALONONPAGED routine accepts the size of the desired memory block in R1. If the allocation is successful, the low bit of R0 is set and R1 and R2 contain the size and address of the allocated memory, respectively. The EXE$ALONONPAGED routine is called via a JSB instruction:
        MOVL    #BLOCK_SIZE,R1          ; Put size in R1
        JSB     G^EXE$ALONONPAGED       ; Allocate nonpaged pool space
        BLBC    R0,ERROR                ; Branch on allocation error
Prior to OpenVMS VAX V6.0, nonpaged pool is divided into four regions, which include the three lookaside lists:
- SRP (Small Request Packets)
- IRP (I/O Request Packets)
- LRP (Large Request Packets)
Lookaside lists are queues of fixed-length memory packets that have been grouped together to provide faster allocation and deallocation of memory for commonly-used data structures.
There is also a region of pool used when a request is made for a piece of nonpaged pool whose size does not fit the allocation requirements from one of the lookaside lists; this area is known as the variable-length list. In pre-V6.0 systems, you may call EXE$ALONONPAGVAR to force the allocation from the variable-length list, since EXE$ALONONPAGED may allocate from one of the lookaside lists.
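Assuming that EXE$ALONONPAGVAR follows the same register conventions as EXE$ALONONPAGED (the label name is illustrative), a pre-V6.0 allocation from the variable-length list might look like:

```
        MOVL    #BLOCK_SIZE,R1          ; Put requested size in R1
        JSB     G^EXE$ALONONPAGVAR      ; Allocate from the variable-length list
        BLBC    R0,ERROR                ; Branch on allocation failure
                                        ; R1 = actual size, R2 = block address
```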
With the memory management changes in OpenVMS VAX V6.0 and OpenVMS AXP, the lookaside lists have been made dynamic and are created from the variable nonpaged pool. This redesign eliminates the need to maintain separate SYSGEN parameters to control the size of the lookaside lists. There is a new macro, ALLOC_NPOOL in LIB.MLB, that will optimize the allocation of nonpaged pool by first attempting to grab the pool from one of the lookaside lists, if the requested size falls in the range of one of the lookaside lists. It will call EXE$ALONONPAGED if either the requested size is larger than that of any of the lookaside lists or if the matching lookaside queue is empty.
The OpenVMS AXP V1.5 implementation is a hybrid of V5.4 and V6.0, and, unfortunately, LIB.MLB does not contain the ALLOC_NPOOL macro.
Nonpaged pool is deallocated by calling EXE$DEANONPAGED with the address of the pool block in R0. The TYPE and SIZE fields in the block header must be properly set in order for EXE$DEANONPAGED to return the block to the proper pool.
NOTE If the TYPE field in a block of memory being released to pool contains a negative value, EXE$DEANONPAGED will return that block to a shared memory pool, which is NOT part of nonpaged pool.
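A minimal sketch of a deallocation sequence, assuming the standard structure header layout (the SIZE word at offset 8 and the TYPE byte at offset 10) and using DYN$C_BUFIO purely as an illustrative type code:

```
        ; After a successful EXE$ALONONPAGED call: R1 = size, R2 = address
        MOVW    R1,8(R2)                ; Set the SIZE field (offset 8)
        MOVB    #DYN$C_BUFIO,10(R2)     ; Set the TYPE field (offset 10)
        ...                             ; ... use the block ...
        MOVL    R2,R0                   ; EXE$DEANONPAGED wants address in R0
        JSB     G^EXE$DEANONPAGED       ; Return the block to nonpaged pool
```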
Code threads owning spinlocks ranked as high as MAILBOX can deallocate nonpaged pool; threads owning higher-ranked spinlocks can dequeue pool from a lookaside list directly. But with the changes to the lookaside lists, this mechanism is not portable between versions of OpenVMS, and it presents the problem of what recovery is available to the thread if the list is empty. Nonpaged pool cannot be deallocated by a thread holding any spinlock ranked higher than SCHED. OpenVMS does, however, provide a routine, COM$DRVDEALMEM, which allows a thread holding a spinlock ranked above SCHED to request pool deallocation once spinlock ownership has fallen to SCHED or lower. COM$DRVDEALMEM converts the block of nonpaged pool into a fork block and requests a software interrupt at IPL$_QUEUEAST.
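A hedged sketch of deferring a deallocation via COM$DRVDEALMEM, assuming it accepts the block address in R0 just as EXE$DEANONPAGED does:

```
        MOVL    R2,R0                   ; Address of pool block to release
        JSB     G^COM$DRVDEALMEM        ; Convert block to fork block and
                                        ; ... queue the deferred deallocation
```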
Code threads that execute in process context and allocate pool for data structures usually execute at IPL$_ASTDEL or higher while they are responsible for the structure’s existence. Since process deletion is implemented in OpenVMS by the delivery of a special kernel-mode AST, processes should block AST delivery to prevent a loss of pool.
Paged pool, which is allocated by calling EXE$ALOPAGED and deallocated by calling EXE$DEAPAGED, can be used for data structures that do not always have to be memory-resident. Since paged pool cannot be guaranteed to be in memory, only data structures that will be accessed by code threads executing at IPL$_ASTDEL or lower are allocated from paged pool. Examples of OpenVMS data structures that are allocated from paged pool include the logical name tables, global section descriptors, object rights blocks, and known file entries.
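The paged-pool calling sequences mirror their nonpaged counterparts; a sketch, assuming the same register conventions (requested size in R1, status in R0, size and address returned in R1 and R2, and the block address passed in R0 for deallocation):

```
        MOVL    #BLOCK_SIZE,R1          ; Requested size in bytes
        JSB     G^EXE$ALOPAGED          ; Allocate from paged pool
        BLBC    R0,ERROR                ; Branch on failure
        ...                             ; R2 = address of the new block
        MOVL    R2,R0                   ; Address of block being released
        JSB     G^EXE$DEAPAGED          ; Return the block to paged pool
```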
Like nonpaged pool, the protection on paged pool is ERKW, allowing executive-mode and kernel-mode reads and kernel-mode writes.
There are a few conventions regarding the use of registers in OpenVMS privileged code. These conventions include the consistent use of certain registers and the avoidance of certain instructions in time-critical code.
“Standard” register usages
When writing privileged code, it is important to be aware of the more common register usage conventions. These conventions will be discussed in greater detail throughout this series.
- R0 and R1: Used as temporary work registers. Status values are returned to calling routines in R0, and sometimes in R1.
- R2: Not used for anything specific; usually a scratch register.
- R3: In device driver code, normally points to the IRP (I/O Request Packet) for the I/O request.
- R4: Points to a process’s control block (PCB). The PCB address is automatically put in R4 by the $CMKRNL system service.
- R5: In device driver code, normally points to the UCB for the device. For ASTs, R5 points to the ACB (AST Control Block).
Device driver routines have certain rules for general register usage:
- FDT routines can freely use R0, R1, R2, R9, R10, and R11. The other registers may be used only if they are saved before use and restored before exiting the routine.
- Most other driver routines obtain their inputs in some subset of R2 – R5, and usually must save and restore any registers, other than R0 or R1, that they modify. Check the Device Driver Support Manual for details on the inputs for a particular routine.
- Inside special kernel-mode ASTs, R0-R5 are available for use; they are automatically preserved by the AST delivery code.
There are VAX instructions whose primary purpose is to save and restore registers. PUSHL pushes a longword on the stack; for example, the following instruction saves the contents of R5:
PUSHL R5 ; Push R5 contents on the stack
The corresponding instruction to pop the value off the stack, POPL, isn’t a real instruction; it’s an assembler-defined macro that expands to a MOVL instruction. To restore the contents of R5, you’d typically write
POPL R5 ; Pop contents back into R5
which is really
MOVL (SP)+,R5 ; Pop contents back into R5
There are two VAX instructions, PUSHR and POPR, whose only purpose is to save and restore registers. Both instructions take as an operand a word bitmask; each bit corresponds to one of the general purpose registers, R0 through R11. For each bit set in the mask, the corresponding register is pushed onto the stack. The MACRO-32 assembler and compiler make it easy to specify the register masks:
        PUSHR   #^M<r0,r1,r2,r3,r4,r5>  ; Save R0--R5
        [...]
        POPR    #^M<r0,r1,r2,r3,r4,r5>  ; Restore R0--R5
Despite their ease of use, though, these instructions are very slow. If you only need to push a few registers, you’d be better off using several PUSHL/POPL combinations. In fact, on most, if not all VAXen, 12 PUSHLs are faster than a PUSHR with all 12 registers specified. For this reason, you may wish to implement PUSHREG and POPREG macros that perform the needed number of PUSHLs and POPLs:
        .MACRO  PUSHREG REG0,REG1,REG2,REG3,REG4,REG5,REG6,REG7,REG8,REG9,-
                REG10,REG11
        .IRP    REG,<REG0,REG1,REG2,REG3,REG4,REG5,REG6,REG7,REG8,REG9,REG10,REG11>
        .IIF    NB,REG, PUSHL REG
        .ENDR
        .ENDM   PUSHREG

        .MACRO  POPREG  REG0,REG1,REG2,REG3,REG4,REG5,REG6,REG7,REG8,REG9,-
                REG10,REG11
        .IRP    REG,<REG11,REG10,REG9,REG8,REG7,REG6,REG5,REG4,REG3,REG2,REG1,REG0>
        .IIF    NB,REG, POPL REG
        .ENDR
        .ENDM   POPREG
To use these macros, simply specify the registers. It is important that the registers be specified in the same order for the PUSHREG and POPREG calls:
        PUSHREG R4,R2,R6,R7             ; Push the registers
        [...]
        POPREG  R4,R2,R6,R7             ; Pop the registers
Note that on a VAX, one significant advantage of using PUSHR and POPR is that the condition codes are not modified (they are modified by PUSHL and POPL). Code that depends on the values of the condition codes can easily restore registers without wiping out those codes.
On the AXP architecture, there are no condition codes. Instead, there are conditional move and branch instructions that test for specific relationships between two values.
On an AXP system, it may be necessary to save all 64 bits of a register. In this case, there are built-in macros defined in the MACRO compiler that can be used. These macros are $PUSH64 and $POP64. An example of a portable macro that will determine which environment your code is being built for and will ensure that the registers are saved correctly is:
;
;  Define macros for saving and restoring registers independent of platform
;
;  PUSHREG reg
;       Saves the register specified in the REG argument on the stack.
;
        .MACRO  PUSHREG reg
        .IF     DEFINED EVAX            ; AXP architecture
        $PUSH64 reg                     ; AXP - use $PUSH64 macro
        .IF_FALSE                       ; VAX platform
        PUSHL   reg                     ; Simple quick PUSHL instruction
        .ENDC                           ; End architectural differences
        .ENDM   PUSHREG
;
;  POPREG reg
;       Restores previously saved register from the stack
;
        .MACRO  POPREG  reg
        .IF     DEFINED EVAX            ; AXP platforms
        $POP64  reg                     ; Use the $POP64 macro
        .IF_FALSE                       ; VAX platforms
        POPL    reg                     ; Simple POPL pseudo-opcode
        .ENDC                           ; End architectural differences
        .ENDM   POPREG                  ; End of macro definition
Writing “safe code”
As has been mentioned before, systems code can crash the system when it is improperly written. The four most common reasons that user-written privileged code causes OpenVMS bugchecks are:
- Page faults at elevated IPL (above 2)
- Access violations
- Bad return address for JSB/RSB linkage
- Programming errors
By following a few simple rules, it’s fairly easy to write kernel-mode code that avoids most system crashes.
Avoiding page faults at elevated IPL
Page faults above IPL 2 are probably the most common causes of system crashes when new software is being debugged. In complex software spanning several modules, it’s often difficult to determine which data and code regions must be locked into memory and whether or not the region is locked in at the right time. The problem is compounded by the fact that the software may run on a memory-rich system without ever incurring a page fault, because the target addresses may always be present in memory.
There are two ways code and data are usually locked into memory:
- the code and data are copied to nonpaged pool and executed or referenced from there;
- the code and data are locked into a process’s working set, and thus into physical memory, using a system service ($LKWSET or $LCKPAG).
The method that should be used varies with the type of code that is executing. In general, code executing or data residing within process context is locked down using the $LKWSET system service. Code that executes (or can execute) in system context or data that must be referenced from system-context code is usually copied to nonpaged pool.
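For process-context code, a minimal sketch of locking a region down with $LKWSET might look like this (the label names are illustrative):

```
LOCK_RANGE:
        .ADDRESS LOCK_START             ; First byte of region to lock
        .ADDRESS LOCK_END               ; Last byte of region to lock
        ...
        $LKWSET_S INADR=LOCK_RANGE      ; Lock the pages into the working set
        BLBC    R0,LOCK_FAILED          ; Branch if the service failed
        ...                             ; Critical code references happen here
        $ULWSET_S INADR=LOCK_RANGE      ; Unlock the pages when done
```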
A common mistake when writing code dealing with nonpaged pool is the reference to pool memory that has already been deallocated. Such bugs can be very difficult to trace, since the memory may have already been modified for use by other code. To aid programmers in locating such problems, the OpenVMS memory management routines provide a mechanism known as pool checking. When pool checking is enabled by setting the SYSGEN parameter POOLCHECK to a certain bitmask, the deallocation routines will “poison” a deallocated memory packet with a pattern. Any subsequent attempt to reference this poisoned pool will almost certainly cause a system crash.
Using nonpaged pool to prevent page faults
The use of nonpaged pool is very common in code that must always execute in system context. A chunk of memory from nonpaged pool is allocated using one of several system routines such as EXE$ALONONPAGED. Once the memory has been allocated, the appropriate code routines and data regions are copied to the allocated nonpaged memory. The code is then executed from that memory by any number of methods, depending on the function of the software.
Note that, due to the requirement that AXP linkage code must reside in a different program section (PSECT) than the actual executable code, the practice of copying code into nonpaged pool is only applicable to VAX systems.
This process will be demonstrated a number of times throughout this series.
Using $LKWSET and $LCKPAG
A process’s working set is the set of memory pages used by the process while it is executing. OpenVMS can add and remove pages from a process’s working set via its memory reclamation procedures. The $LKWSET (Lock Pages in Working Set) and $LCKPAG (Lock Pages in Memory) system services work by locking specific pages into the working set; pages thus locked cannot be removed from the working set until they are specifically unlocked using the $ULWSET and $ULKPAG system services. The difference between $LKWSET and $LCKPAG is that pages locked with $LKWSET may be swapped out if the process is swapped out, while pages locked with $LCKPAG reside in physical memory even if the process is not in memory. For code executing in process context, the $LKWSET service will suffice because the locked pages will only be referenced when the process is currently executing (and, by definition, in memory).
Both $LKWSET and $LCKPAG accept as parameters an address range (as a quadword array) that represents the beginning and ending addresses of the memory that is to be locked down. Because both services work with pages (or pagelets under OpenVMS AXP), the amount of memory actually locked down will probably be greater than the specified address range; all of the pages within the address range must be locked down. The services will, if requested, return an array containing the actual address range that was locked down.
When locking code and data into memory, care should be taken to ensure that all of the critical portions are locked down. If you are not sure which portions of a routine or data section should be locked down, you should go ahead and lock down the entire routine or section. It’s always better to have too much memory locked down than not enough (within reason, of course, because the size of the process working set is governed by the working set quotas established for an account).
Because the $LKWSET and $LCKPAG system services lock pages into a process working set, they are suitable only for use in code that executes in process context. Code that executes in system context must rely on other methods of preventing page faults. This is normally accomplished by copying the desired code and data into blocks of nonpaged pool and then referencing them from those blocks. Because the blocks of memory were allocated from nonpaged pool, they are guaranteed to reside in memory at all times.
Many of the program examples that will be provided in future articles in this series will demonstrate the use of these techniques to lock down memory.
Alternate methods of avoiding page faults
There are a couple of other less-frequently used methods of locking down code and data. The first of these involves using two macros provided in LIB.MLB: LOCK_SYSTEM_PAGES and UNLOCK_SYSTEM_PAGES. These macros are used to lock down pages that have been allocated from paged pool. If you have code that must reside in system memory, but does not always run at elevated IPL, you can copy the code to paged pool and use these macros to lock down the appropriate pages when necessary by locking them into the system working set. Outside of OpenVMS code, these macros are rarely used, because most programmers always use nonpaged pool. For more information on the use of these macros, please consult the VMS Device Support Reference Manual.
The second method is known as “poor-man’s lockdown.” This method will not work under OpenVMS AXP, hence its use is not recommended. You will find a lot of older OpenVMS MACRO code that uses “poor-man’s lockdown,” so you should be aware of what it is and how it works.
The VAX architecture guarantees that at least two pages of memory will reside in physical memory. “Poor-man’s lockdown” works by faulting in up to one more page of memory and raising IPL with the same non-interruptable instruction, thus blocking AST deliveries and preventing page faults. Code that is to be locked down using this method cannot be larger than 512 bytes, because only two pages are guaranteed to be resident. (Two pages are necessary because a routine of up to 512 bytes can actually span two 512-byte pages, depending upon the starting address of the routine.) The VAX instruction MTPR (Move To Processor Register) is the instruction used to perform the two functions. It is usually invoked via the DSBINT (DiSaBle INTerrupts) macro. The target IPL value is stored as a longword of data at the end of the routine; the address of that longword is then passed as the argument to DSBINT. When the MTPR accesses the longword containing the new IPL value, the page of memory holding that longword is faulted into physical memory, if it wasn’t already there. Because the MTPR then raises IPL in one non-interruptable cycle, the entire routine is effectively locked into memory until IPL is lowered again and page faulting is allowed to resume.
The following brief VAX MACRO code fragment shows how poor-man’s lockdown is usually implemented. Notice that the DSBINT macro is used to raise IPL. Also, the ASSUME macro is used to ensure that the portion of the routine locked down via poor-man’s lockdown is no more than 512 bytes in length.
10$:    DSBINT  IPL=20$,ENVIRON=UNIPROCESSOR ; Set IPL and save old in -(SP)
        LOCK    LOCKNAME=SCHED,-        ; Now grab the SCHED spinlock
                CONDITION=NOSETIPL      ; ... but don't change IPL
        [...]
        UNLOCK  LOCKNAME=SCHED,-        ; Now release the lock and
                NEWIPL=(SP)+            ; ... reset the IPL
        RET                             ; Return to caller
20$:    .LONG   IPL$_SCHED              ; The new IPL value
        ASSUME  <.-10$> LE 512          ; Make sure it fits on 1 page
If the code fragment is longer than 512 bytes, the ASSUME macro will generate a warning at assembly time. It works by taking the difference between the current address (.) and the starting address (10$) and comparing that difference to 512 bytes. If the difference is greater than 512 bytes, the warning message is generated.
NOTE “Poor-man’s lockdown” works because the VAX architecture allows it. The Alpha AXP architecture does not support the use of poor-man’s lockdown for a number of reasons, including the different page sizes and the fact that MTPR is implemented in PALcode; it is no longer a single, non-interruptable instruction.
Testing input and output buffers
When you are developing privileged code, you should always write paranoid code that expects that every input and output address passed to it is invalid. This is especially important if you are writing routines that are to be included in a privileged shareable image or device driver; improper checking of input addresses can allow non-privileged users to crash the system. If the privileged routines are part of a complete system that will not be called by other applications, you can easily comment out the address checks once you’ve ensured that the program works.
The VAX architecture provides the PROBER and PROBEW instructions, which probe an address for read or write access. If the address is invalid or the specified access is not allowed from the current access mode, code can branch to an error label based on the condition codes set by PROBEx. Under OpenVMS AXP, PROBE is implemented in PALcode.
Fortunately, there are macros in SYS$LIBRARY:LIB.MLB that make validating addresses even easier. These macros are IFNORD (IF NO ReaD), IFRD (IF ReaD), IFNOWRT (IF NO WRiTe), and IFWRT (IF WRiTe). These macros accept a size, an address, and a destination address to branch to if the check is true. The generated PROBE instruction or call checks that the first and last byte in the range (determined by the address and the size) are valid and accessible. It is important to note that the bytes within that range are not checked by PROBE; only the first and last bytes are checked.
Inside of a device driver, you can also call the executive routines EXE$READCHK[R] and EXE$WRITECHK[R], which are documented in the Device Driver Support Manual.
The following MACRO-32 fragment uses IFWRT to ensure that the routine has write access to the longword whose address was passed in as a parameter to the routine:
        P1 = 4
        .ENTRY  ROUTINE, ^M<r2,r3,r4,r5>
        TSTL    P1(AP)                  ; Anything passed to us?
        BEQL    200$                    ; No - that's an error
        IFWRT   #4, @P1(AP), 400$       ; Continue if write access to it
                                        ; Else fall thru if no write access
200$:   MOVL    #SS$_ACCVIO, R0         ; Indicate an "ACCESS VIOLATION"
        RET                             ; And return to our caller
400$:
If the parameter is missing, the address is invalid, or resides on a page protected from writing at the current access mode, control would transfer to the 200$ label to return an access violation status to the caller.
To ensure that an entire buffer is valid and accessible, a loop can be written to check every page between the buffer’s starting and ending addresses.
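One way to sketch such a loop on a VAX (512-byte pages; the labels and symbols are illustrative). Since protection is applied per-page, probing one byte on each page is sufficient:

```
        MOVL    BUF_ADDR,R2             ; R2 = current address to test
        MOVL    BUF_END,R3              ; R3 = last byte of the buffer
10$:    IFNOWRT #1,(R2),50$             ; One byte per page is sufficient
        BICL2   #^X1FF,R2               ; Back up to the page boundary
        ADDL2   #512,R2                 ; Step to the start of the next page
        CMPL    R2,R3                   ; Past the end of the buffer?
        BLEQU   10$                     ; No - check the next page
        BRB     60$                     ; All pages are accessible
50$:    MOVL    #SS$_ACCVIO,R0          ; Signal the access violation
        RET
60$:
```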
Validating system addresses
A very good guideline to follow when writing privileged code is to never assume an address is what it is supposed to be. Well-written code should always check to be sure that an address is, first, valid, and second, that it points to an expected data structure. Many system crashes occur because a code thread used a register containing garbage, when a valid address was assumed. There are a couple of useful rules you can follow to help catch such programming errors.
First, any program that is going to access an OpenVMS data structure should include a $DYNDEF macro invocation. It should then, whenever it references an OpenVMS structure for the first time, compare the TYPE code in the structure with the known type code from the $DYNDEF macro.
Most OpenVMS data structures have a 12-byte fixed header that contains, at offset 10 from the base of the structure, a TYPE field; the symbolic value contained in the TYPE field is prefaced with a DYN$C_ prefix. These structure type codes should always be tested to prevent programming errors.
        MOVL    IRP$L_UCB(R3), R5       ; Extract UCB address from IRP
        CMPB    #DYN$C_UCB, UCB$B_TYPE(R5) ; Check the type code for UCB
        BNEQ    NOT_A_UCB               ; Don't use it if it's not a UCB
Testing the type field is necessary, but it is also possible that the value used as the base address of the structure is invalid. One way to ensure that the address in R5 in the fragment above is a legal S0 address (greater than %X80000000) is to check that the most significant bit is set, i.e., that it is a negative value. If a BGEQ instruction falls through without branching, the value is negative; the address can then be checked against the maximum S0 address allowed. On OpenVMS VAX systems prior to V6.0, this value is contained in the global longword MMG$GL_MAXSYSVA. On V6.0 and AXP systems, it is called MMG$GL_FRESVA.
        MOVL    IRP$L_UCB(R3), R5       ; Extract UCB address from IRP
        BGEQ    INVALID_S0_ADDRESS      ; Branch if it can't be S0 address
        .IF     DEFINED EVAX
        CMPL    R5, G^MMG$GL_FRESVA     ; Insure it REALLY is valid S0
        .IF_FALSE
        .IF     DEFINED DPT$M_XPAMOD    ; V6.0+ symbol defined?
        CMPL    R5, G^MMG$GL_FRESVA     ; Insure it REALLY is valid S0
        .IF_FALSE                       ; Otherwise use VAX pre-6.0 name
        CMPL    R5, G^MMG$GL_MAXSYSVA   ; Insure it REALLY is valid S0
        .ENDC
        .ENDC
        BGTRU   INVALID_S0_ADDRESS      ; Greater than is invalid S0 space
        CMPB    #DYN$C_UCB, UCB$B_TYPE(R5) ; Check the type code for UCB
        BNEQ    NOT_A_UCB               ; Don't use if it isn't a UCB
There is still one potential problem that this test will not detect. The address could be a valid system address, but it could point to a page in paged pool. Accessing it above IPL$_ASTDEL could still result in a page fault, resulting in a system crash.
Ensuring Proper JSB/RSB Linkage
Under OpenVMS VAX, subroutines called via the JSB (Jump SuBroutine) instruction return to their callers via the RSB (Return from SuBroutine) instruction. The RSB pops a longword off the top of the stack and uses that as the return address. A routine that uses the stack for work space and fails to return the stack pointer to its initial value will break the JSB/RSB linkage, which will more than likely result in a system crash one way or another. Extra care should be taken when the stack is used to make sure that all values pushed onto the stack are removed before returning.
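The hazard can be illustrated with a pair of sketches; the first breaks the linkage, the second restores the stack before returning:

```
BAD_SUB:
        PUSHL   R5                      ; Save a work register
        ...                             ; ... body of routine ...
        RSB                             ; BUG: "returns" to saved R5 contents

GOOD_SUB:
        PUSHL   R5                      ; Save a work register
        ...                             ; ... body of routine ...
        MOVL    (SP)+,R5                ; Restore R5, rebalancing the stack
        RSB                             ; Now returns to the real caller
```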
The AXP MACRO-32 compiler will detect some cases where the stack has been manipulated so that the return address is not the last value on the stack when an RSB instruction is encountered.
Preventing Programming Errors
Naturally, programming errors account for the majority of the problems caused by privileged code under OpenVMS. While this article can’t provide you with rules to guarantee bug-free code, there are some additional suggestions that will help minimize coding errors (which are not the same as logic errors).
One of the advantages of using a mid-level language such as BLISS or C when writing privileged code is that the languages shield you from accidentally referencing wrong registers. When writing in MACRO-32, it’s all too easy to mis-type a register number, which can have disastrous effects when a routine is executed. All MACRO-32 code should be carefully checked to ensure that the register usage is consistent and accurate.
One trick that can help locate references to registers that are not inputs is to save all non-output registers and zero all non-input registers. This will likely cause a system crash if your code accidentally accesses a register that you did not believe to be an input register. The following example highlights this technique within MACRO-32 code that can be used on either an AXP or a VAX machine:
XX_HANDLER:
        .IF     DEFINED EVAX            ; VMS/AXP architecture
        .JSB_ENTRY -
                input = <r3,r4,r5>, -   ; IRP, PCB, UCB as inputs
                output = <r0,r7>, -     ; Resultant values into R0/R7
                scratch = <r1,r9,r10,r11>, -
                preserve = <r2,r3,r4,r5,r6,r8>
        .IF_FALSE                       ; For VMS/VAX architecture
        PUSHREG <r2,r3,r4,r5,r6,r8>     ; Save the non-volatile registers
        .ENDC                           ; .IF DEFINED EVAX

        .IF     DEFINED DEVELOPMENT_CODE ; During development/debugging phase
        CLRL    R0                      ; Clear all non-input registers
        CLRL    R1                      ; ... to assist in locating
        CLRL    R2                      ; ... bugs related to incorrect
        CLRL    R6                      ; ... register usage
        CLRL    R7
        CLRL    R8
        CLRL    R9
        CLRL    R10
        CLRL    R11
        .ENDC                           ; .IF DEFINED DEVELOPMENT_CODE
Another problem that is especially crucial when writing code in MACRO-32 is the accidental improper reference to data lengths. For example, it’s very easy to treat a longword value as a word value; depending on the application, such a mistake can produce unexpected results. For instance, moving a word to a register via the VAX instruction MOVW leaves the high word of the register unchanged. In most cases, an instruction like MOVZWL is needed to make sure that the high word is properly initialized.
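As an illustration, assuming R0 initially contains ^X12345678 and WORD_VAL contains the word value ^X0001:

```
        MOVW    WORD_VAL,R0             ; R0 = ^X12340001 - high word stale
        MOVZWL  WORD_VAL,R0             ; R0 = ^X00000001 - high word zeroed
```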
A number of topics have been introduced in the first three installments in this series. For more information about how such concepts as IPL and spinlocks are implemented, please consult the VMS Internals and Data Structures manual. All of the topics covered in this introduction will be examined in detail from a programmer’s viewpoint in following articles.
The most important point of this introduction is that the OpenVMS privileged programmer must exercise extreme caution when writing code. The smallest typo or logic error can result in an OpenVMS system crash. You should always visually check over your code not only for logic errors, but also for such things as correct usage of the synchronization techniques, correct locking down of code and data, and correct usage of registers and addresses.
Hunter Goatley, Western Kentucky University, Bowling Green, KY.
Edward A. Heinrich, Vice-President, LOKI Group, Inc., Sandy, UT.