Processing a typical job under JES2 or JES3 follows a predictable series of steps:
How a Job is Entered into the System
In the early days of the System/360-370, the phrase "entering a job into the system" meant that a system operator removed a deck of cards containing a job's JCL and data from a file cabinet, placed the cards into a card reader, and pressed the card reader's "start" button. The card reader, under the control of a system program called a reader task, read the cards and placed the job in a system file on DASD called the job queue. From the job queue, the job would be selected for execution by the job scheduler.
Although many installations still have a card reader and may use it occasionally, it is no longer the predominant way to enter a job into the system. Rather than create the job using punched cards, a programmer today uses a display terminal to create the JCL and data for the job. In the process, the job stream is stored in a file on a DASD unit. At this point, the job has not been entered into the system. Even though it resides on a DASD unit that is attached to the system, MVS (or, more accurately, JES2 or JES3) does not know about the job.
To enter, or submit, the job into the system, the terminal user issues a SUBMIT command. That causes JES2 or JES3 to read the job stream from the DASD file and copy it to a job queue, which is a part of a special DASD file called the JES spool. Even though the job originated from DASD rather than cards, the process is essentially the same as if a card reader was used. In fact, the JES component that processes the input job stream is called an internal reader.
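A job stream like this sketch (the job name, accounting information, and library name are invented for illustration) might be created at a terminal and then entered through the internal reader:

```
//MYJOB    JOB (36512),'J SMITH',CLASS=A
//STEP1    EXEC PGM=IEFBR14
```

From TSO, a command such as SUBMIT 'USER01.TEST.JCL(MYJOB)' would pass this job stream to JES. (IEFBR14 is IBM's do-nothing program, often used to test JCL.)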
In some cases, a facility called Remote Job Entry, or RJE, is used as an alternative way to submit jobs. Originally, RJE provided a way to use an RJE station—a special facility that consisted of a card reader, card punch, and a printer—to enter jobs from a remote location; the RJE stations were connected to the computer system via telephone lines. Today, most RJE stations are themselves separate computer systems that include a facility to emulate a card reader, card punch, and printer. When you send a job stream to an MVS system using RJE, JES2/JES3 treats the job as if it originated at a local card reader or an internal reader.
How a Job is Scheduled for Execution
As we have already seen, MVS does not necessarily process jobs in the order in which they are submitted. Instead, JES examines the jobs in the job queue and selects the most important jobs for execution. That way, JES can prioritize its work, giving preference to more important jobs.
JES uses two characteristics to classify a job's importance, both of which can be specified in the job's JCL—job class and priority. Of the two, job class is more significant. If two or more jobs of the same class are waiting to execute, the JES scheduler selects the one with the higher priority.
Typical Job Class Assignments
Each job class is represented by a single character, either a letter (A-Z) or a digit (0-9). Job classes are assigned based on the processing characteristics of the job. To illustrate, the table below shows seven typical job class assignments:

Class  Characteristics
A      Scheduled for execution within 15 minutes
B      Scheduled for execution within 30 minutes
C      Scheduled for execution within one hour
D      Scheduled for overnight execution
H      Held until released by a system operator
L      Scheduled within 15 minutes; each job step limited to one minute of CPU time
T      Requires tape volumes to be mounted

Job classes A through D classify jobs based on how quickly they must be scheduled for execution. Class A jobs execute within 15 minutes, class B jobs within 30 minutes, class C jobs within an hour, and class D jobs are scheduled for overnight execution. Class H jobs are held; they are not scheduled at all until a system operator explicitly releases them for execution. Class L jobs, like class A jobs, are scheduled within 15 minutes, but with the added restriction that each job step can use no more than one minute of CPU time. Finally, class T is reserved for jobs that require tape volumes to be mounted; by placing those jobs in a special class, the system operators gain better control over tape processing. Bear in mind that these job class assignments are only an example. Each installation makes its own job class assignments, so the job classes your installation uses will almost certainly be different.
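Assuming class assignments like the typical ones just described, a JOB statement such as this one (the job name and accounting information are invented for illustration) asks JES to schedule the job in class A:

```
//PAYROLL  JOB (36512),'R JONES',CLASS=A
```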
At this point, it is not essential that you understand the details of how jobs are selected for execution. But I do want you to know about a special type of program called an initiator, because that knowledge will help you understand not only job scheduling, but MVS multiprogramming as well. An initiator is a program that runs in the system region of an address space that is eligible for batch job processing. (Not all of the address spaces on an MVS system can process batch jobs, so they do not all have initiators.) Each initiator can handle one job at a time. It examines the JES spool, selects an appropriate job for execution, executes the job in its address space, and returns to the JES spool for another job.
The number of active initiators on a system and, as a result, the number of address spaces eligible for batch job processing determines the number of batch jobs that can be multiprogrammed at once. Initiators (and their address spaces) can be started when MVS is activated. And, they can be started or stopped by an operator while MVS is running. That way, an installation can vary the number and type of active initiators to meet changing processing needs.
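Under JES2, for example, an operator can manage initiators from the console with commands like these (the initiator number and classes are only illustrative, and the exact command syntax varies by JES2 level):

```
$SI4        Start initiator 4
$PI4        Drain (stop) initiator 4 when its current job ends
$TI4,C=BC   Assign job classes B and C to initiator 4
```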
Assignment of Job Classes to Initiators
Each initiator has one or more job classes associated with it; it executes jobs only from those classes. That way, an installation can control how many jobs of each class can be executed simultaneously, and in what combinations. To illustrate, suppose an installation runs six initiators and that only one of them is eligible for class A jobs. Then only one class A job can execute at a time, while jobs of other classes can execute in various combinations. (Once again, the job classes in use at your installation are undoubtedly different from the classes described here.)

Within a job class, initiators select jobs for execution based on their priorities, which can range from 0 to 15. Jobs with higher priority values are selected for execution before jobs with lower priority values. As a result, a class A job with priority 13 will be executed before a class A job with priority 10. If two or more jobs have the same class and priority, they are executed in the order in which they were submitted.
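For example, if these two jobs (names and accounting data are invented) are submitted to the same class, an initiator serving that class selects JOBA first because of its higher priority:

```
//JOBA     JOB (36512),'HIGH PRIORITY',CLASS=A,PRTY=13
//JOBB     JOB (36512),'LOW PRIORITY',CLASS=A,PRTY=10
```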
How a Job is Executed
The following figure shows how a job is executed once an initiator has selected it for execution. As you can see, the initiator and several other MVS programs run in the system region, a part of the private area of a user's address space. The first thing an initiator does after it selects a job for execution is invoke a program called the interpreter. The interpreter's job is to examine the job information passed to it by JES and create a series of control blocks in the scheduler work area (SWA), another part of the address space's private area. Among other things, these control blocks describe all of the data sets the job needs.
Figure 3.11 Data Set Allocation and Job Step Execution
After the interpreter creates the SWA control blocks, the initiator goes through three phases for each step in the job. First, it invokes allocation routines that analyze the SWA control blocks to see what resources (units, volumes, and data sets) the job step needs. If the resources are available, they are allocated so the job step can process them. Next, the initiator builds a user region where the user's program can execute, loads the program into the region, and transfers control to it. As the user program executes, it uses the control blocks for the resources allocated to it. When the program completes, the initiator invokes unallocation routines that release any resources used by the job step.
That, in a nutshell, is how a job is executed. For each job step, three activities occur: first, resources are allocated; second, a region is created and the program is loaded and executed; third, resources are released. This process continues until there are no more job steps to process. Then, the initiator releases the job and searches the spool again for another job of the proper class to execute.
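The cycle of allocation, execution, and unallocation can be seen in a simple two-step job like this sketch (the program and data set names are invented for illustration):

```
//MYJOB    JOB (36512),'J SMITH',CLASS=A
//STEP1    EXEC PGM=PROG1
//SALESIN  DD  DSN=USER01.SALES.DATA,DISP=SHR
//SALESOUT DD  DSN=USER01.SALES.EXTRACT,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(5,1))
//STEP2    EXEC PGM=PROG2
//EXTRACT  DD  DSN=USER01.SALES.EXTRACT,DISP=SHR
```

Before STEP1 executes, its two data sets are allocated; when STEP1 ends, they are released, and the allocation phase begins again for STEP2.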
As a user's program executes, it can retrieve data that was included as part of the job stream and stored in the JES spool. Input data processed in this way is called SYSIN data or in-stream data; the user's program treats the data as if it was read from a card reader. Similarly, the user's program can produce output data that is stored in the JES spool; the program treats the data, called SYSOUT data, as if it was written to a printer. SYSOUT data is held in a SYSOUT queue until JES2/JES3 can process it.
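In JCL terms, SYSIN data is coded after a DD * statement and SYSOUT data is requested with the SYSOUT parameter, as in this sketch (the program name is invented):

```
//STEP1    EXEC PGM=PROG1
//REPORT   DD  SYSOUT=A       Output spooled as SYSOUT data
//SYSIN    DD  *              In-stream (SYSIN) data follows
data record 1
data record 2
/*
```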
How a Job's Output is Processed
Like jobs, SYSOUT data is assigned an output class that determines how the output will be handled. Most likely, an output class indicates which printer or printers can be used to print the output. In some cases, an output class specifies that the output not be printed; instead, it is held so that you can view it from a display terminal.
Common output classes are A for standard printer output, B for standard card punch output, and Z for held output. (Held output stays on the SYSOUT queue indefinitely; usually, output is held so that it can be examined from a TSO terminal.)
A single job can produce SYSOUT data using more than one output class. For example, you might specify that job message output (that is, MVS and JES information relating to your job) be produced using class A. Similarly, you might specify class A for output produced by one or more of your programs. Then, all of that class A output is gathered together and printed as a unit. However, you might specify class D for some of your job's spooled output. Then, the class D output will be treated separately from the class A output.
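A job's JCL might request that mix of output classes like this (the job name, program name, and choice of class D are illustrative): MSGCLASS directs the job message output, and each SYSOUT DD statement names its own class.

```
//MYJOB    JOB (36512),'J SMITH',MSGCLASS=A
//STEP1    EXEC PGM=PROG1
//REPORT1  DD  SYSOUT=A       Printed together with the class A job messages
//REPORT2  DD  SYSOUT=D       Handled separately as class D output
```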
JES lets you control how SYSOUT data is handled in other ways besides specifying output classes. For example, you can specify that output be routed to a specific printer, or you can specify that two or more copies of the output should be produced or that the output should be printed on special forms rather than on standard paper.
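A DD statement can combine several of these options; in this sketch (the destination and form name are invented, and exact parameters vary by installation), the class A output is printed on form INV1, routed to remote destination RMT5, and produced in three copies:

```
//INVOICES DD  SYSOUT=(A,,INV1),DEST=RMT5,COPIES=3
```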
How a Job is Purged
After the job's output has been processed, the job is purged from the system. Simply put, that means that the JES spool space the job used is freed so it can be used by other jobs. And any JES control blocks associated with the job are deleted. Once a job has been purged, JES no longer knows of its existence. To process the job again, you return to step 1—submit the job.
Two Alternative Ways to Allocate Data Sets
In the job processing steps we have just seen, data sets are allocated to jobs by MVS on a step-by-step basis. MVS allocates data sets before it executes your program, and it deallocates them when your program is completed. This method of data set allocation is often called job-step allocation. There are two other ways in which data sets can be allocated. The first, called JES3 allocation, is used only on JES3 systems. The second, called dynamic allocation, is used on both JES2 and JES3 systems, primarily by time-sharing users.
When JES3 allocation is used, JES3 examines a job's JCL and allocates some or all of the units, volumes, and data sets the job requires before the job is scheduled for execution. Then, when the job executes, the MVS allocation routines are used to allocate the data sets that were not pre-allocated by JES3. The advantage of JES3 allocation is that JES3 knows the allocation needs of all the jobs submitted for execution. As a result, it can avoid scheduling jobs together if they have conflicting resource requirements. Another advantage of JES3 allocation is that initiators, and their address spaces, are not tied up during allocation. So the overall efficiency of the system is increased.
In a sense, dynamic allocation is the opposite of JES3 allocation. Rather than allocate resources before the job is scheduled and its programs are executed, dynamic allocation does not allocate data sets until an executing program requests that they be allocated. In other words, dynamic allocation allocates data sets after normal step allocation rather than before it.
The main user of dynamic allocation is TSO, the MVS time-sharing facility. TSO lets you, as a terminal user, allocate data sets whenever you need them and deallocate them when you do not need them any more. The key to understanding this is realizing that MVS treats each TSO terminal session (from LOGON to LOGOFF) as a single job step. As a result, step allocation can be used only for data sets that are required for the duration of your terminal session. Dynamic allocation is used for data sets that you need only during part of your terminal session.
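From a TSO session, for example, the ALLOCATE and FREE commands invoke dynamic allocation directly (the ddname and data set name here are invented for illustration):

```
ALLOCATE DDNAME(INFILE) DATASET('USER01.TEST.DATA') SHR
  ...commands or programs that process the data set...
FREE DDNAME(INFILE)
```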