All inclusive plus tax. Deluxe.

[Photo: Ein-Gedi.jpg]

You have a new script developed and ready to be automated as part of a bigger business flow. What are the next steps?

  • Create a new job definition and point it to the script location. Use your company's naming convention for the job, application and other categorization attributes.
  • Specify the OS account that will be used to execute the script on the target server.
  • Define the scheduling criteria (the days on which the job should run) and the submission criteria (the time window and the prerequisites that need to be fulfilled before the job can run).
  • Define the dependencies between the job and other processes in the business flow.
  • Configure post-processing actions to recover from the various scenarios in which the script can fail, and the notifications that should be issued in such cases. This may also include opening helpdesk tickets to audit the failure or to get the relevant support team involved.
  • Add job documentation to have all related information available for Production Control.
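To make the steps above concrete, here is a rough sketch of what such a job definition could look like in a JSON-style format, loosely modeled on Control-M's Automation API conventions. Every name, field and value here is an illustrative assumption, not an exact schema; consult your scheduler's documentation for the real attribute names:

```json
{
  "NightlyBatch": {
    "Type": "Folder",
    "DailyLoad": {
      "Type": "Job:Command",
      "Command": "/opt/batch/daily_load.sh",
      "RunAs": "batchusr",
      "Host": "app-server-01",
      "When": {
        "WeekDays": ["MON", "TUE", "WED", "THU", "FRI"],
        "FromTime": "0200",
        "ToTime": "0500"
      }
    }
  }
}
```

The same categories appear regardless of the tool: what to run, where, as whom, when, and under which prerequisites.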

Deadlines and SLAs can be defined at the individual job level, but in most cases a much better approach is to define them at the batch service level.

Is that it?

[Photo: Ein-Gedi-Pool.jpg]

You’ve ticked all the above checkboxes. You’ve run a forecast simulation to verify the scheduling definitions. You think you are ready to promote this change to production.

Before you do that – think about the following scenario: what will happen if someone modifies the script on the target server? Will that change be audited? If the next run of the job fails because of the script modification – will Production Control be able to associate the change (assuming they are aware of it) with the failure? What tools does Production Control have to recover from the error?
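Without embedded scripts, one lightweight way to even notice such an unaudited change is to record a checksum of the script at promotion time and verify it before each run. A minimal sketch, assuming a Linux target with GNU coreutils; all paths and file names are illustrative:

```shell
#!/bin/sh
# Illustrative paths - in reality these would live on the target server.
SCRIPT=/tmp/batch/daily_load.sh
BASELINE=/tmp/batch/daily_load.sha256

# Stand in for the promoted script (for the sake of a runnable example).
mkdir -p /tmp/batch
printf '#!/bin/sh\necho running nightly load\n' > "$SCRIPT"

# At promotion time: record the approved checksum.
sha256sum "$SCRIPT" > "$BASELINE"

# Before each run: verify the script still matches the promoted version.
if sha256sum -c "$BASELINE" >/dev/null 2>&1; then
  echo "script matches promoted version"
else
  echo "WARNING: script was modified after promotion"
fi
```

This catches the modification but still tells you nothing about who changed it, what changed, or how to roll it back – which is exactly the gap embedded scripts close.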

Without getting into complicated or expensive change management practices, there is one very simple thing you can do:

Use Embedded Scripts.

When embedding scripts within the job definition you can immediately get the following benefits:

  1. Any change to the script is audited and can be identified by Production Control as a potential cause of the job failure.
  2. Changes can be rolled back by Production Control without the need to access the target server and manually modify the script. In some cases Production Control is not even authorized to remotely access application servers. If the target server is a mainframe, iSeries, Tandem, Unisys or OpenVMS system, for example, we are talking about a whole different skill set – one that is not required when using embedded scripts.
  3. You can roll back all changes made up to a certain point in time, including both the job attributes and the embedded script. Deleted jobs will be restored. Modified jobs will be rolled back. New jobs that were created after that point in time will be deleted.
  4. You can compare job versions and see whether the script was modified and, if so, what the change was.
  5. The embedded script can be more than just a batch or a shell script. It can be a PowerShell script, a Perl script, or a script in another language. You can also embed SQL statements if you run database jobs.
  6. You can run a single copy of an embedded script on multiple target servers. This way, if changes are required, you modify the script only once. Add agentless scheduling to the equation and you have a real “zero footprint” environment.
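Benefit 4 above – comparing job versions – can be sketched with a plain unified diff between two exported versions of an embedded script. The version files and their contents here are invented for illustration:

```shell
#!/bin/sh
# Create two hypothetical exported versions of a job's embedded script.
mkdir -p /tmp/jobver
cat > /tmp/jobver/daily_load.v12.sh <<'EOF'
#!/bin/sh
extract_orders
load_orders --retries 3
EOF
cat > /tmp/jobver/daily_load.v13.sh <<'EOF'
#!/bin/sh
extract_orders
load_orders --retries 1
EOF

# A unified diff pinpoints exactly what was modified between versions.
# (diff exits non-zero when the files differ, which is expected here.)
diff -u /tmp/jobver/daily_load.v12.sh /tmp/jobver/daily_load.v13.sh || true
```

When the script lives inside the job definition, this kind of comparison comes from the scheduler's own version history instead of ad-hoc file copies.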

Including scripts as embedded parts of job definitions can be a very efficient practice for both Application Developers and Production Control. It will reduce the time it takes to recover from errors and increase your ability to meet your auditors' requirements.

Do you use embedded scripts? If so – share the details with us: what type of scripts do you run, how does it work for you, what challenges do embedded scripts help you address and what challenges still remain…


Note: the photos in this post are my idea of an “all inclusive deluxe” vacation. It’s the most beautiful place in the world, and the lowest point on earth. This is where I live!

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


Tom Geva

Tom Geva is an IT Workload Automation expert. Tom served as a Control-M product manager for more than 10 years and was responsible for translating the workload automation market requirements gathered from customers and analysts into product and business strategies, roadmaps and specifications. Tom is deeply involved in the development process of all Control-M releases. As a Sr. Solution Marketing Manager he frequently speaks at conferences and workload automation events. Tom has 19 years of experience in the IT industry. Prior to joining BMC Software in 2001, Tom spent 4 years as a production control manager in the Israel Defense Forces, where he was responsible for workload automation and mainframe education services.