Manual for the Design and Implementation of Recordkeeping Systems (DIRKS)
Planning for ongoing monitoring
Plans and timetables for ongoing monitoring of systems and system components should be a part of any DIRKS project. Mechanisms such as:
- help desk support
- suggestion forms, or
- user groups
should be considered and adequately resourced. Problems identified through observation, day-to-day use, reports and random checks may highlight where corrective action is needed.
Responsibility for ongoing monitoring should be assigned to people with the appropriate skills and knowledge to find and address problems.
Example: Using skilled staff for monitoring
One organization had a project team of representatives from each business unit who were trained in, and assisted with, the development and implementation of the system. This same team then became the core members of a user group, feeding complaints and suggestions from their business areas back to the project leaders, both during implementation and afterward, so that remedial action or further development and refinement of the system could be planned. Because the team members worked in the business areas themselves, they heard more about the problems and issues that arose.
Planning the post-implementation review
Overview
The scope of the review
Who should review?
When should you review?
Performance indicators
Review methods
What should you review?
Documentation required for review
Overview
This section outlines what you should consider when planning for a review of your recordkeeping system. Suitable resources should be allocated. You will need to decide:
- the scope of the review (what you want to evaluate)
- who will perform it
- when it will occur
- what performance indicators should be used
- what methods should be used, and
- the documentation required.
Ideally an evaluation framework should be developed as part of the original design process so that any performance data requirements can be built into management and administrative processes and the review can be conducted with minimal intrusion on work practices or the delivery of services.
The scope of the review
The scope of the review will be dependent on the original scope of your project, the resources available and organizational needs and priorities.
Who should review?
The choice of reviewer(s) depends on a number of factors, such as:
- the specific circumstances
- needs
- organizational culture
- knowledge possessed, and
- the size of the review.
Example: Who should review
A large organization, such as the United Nations, with multiple layers of management and complex business activities that has implemented major new systems may choose to engage external consultants to prepare an in-depth report. A small organization with few unique or high-risk functions may opt for peer review.
To avoid any actual or perceived bias, broad system reviews should preferably be undertaken by personnel who were not involved in the system design and implementation process.
Members of the review team will need to have good analytical skills and knowledge appropriate to the task at hand. They should understand and have access to the project goals and design and implementation documentation, so that they are able to assess whether the system or system component is adequately meeting the project goals and organizational needs.
When should you review?
An initial post-implementation review of a system should be carried out between six and twelve months after the system has been implemented, and then repeated on an agreed cycle. Smaller reviews of individual system elements may be conducted at more frequent intervals, or in accordance with organizational needs.
Performance indicators
As part of initial project planning, the project team should have established performance indicators to measure the success of the project. These indicators (e.g. timeliness, teamwork, budget, satisfaction of sponsors and other stakeholders) should be used in the review process to measure project progress.
The project team should also have established expected project outcomes (e.g. comparison of inputs to outputs, behavioral change, cost savings, level of satisfaction or involvement) as part of initial planning. These should also be measured in the review along with other outcomes evident along the way.
Depending on the scope of the project other measurement tools may have been developed during the project.
Example: Information produced during other steps can become measurement tools
The list of agreed recordkeeping requirements developed by the end of Step C: Identification of Recordkeeping Requirements can serve as performance indicators for assessing how well the system meets those requirements. Reports arising from earlier steps that include recommendations for improving existing systems will also help inform the review process.
Any criteria used must be objective, verifiable and quantifiable and should allow for comparisons to be drawn over time. Organizational constraints (cultural, technical, economic, political and other factors) and their impacts should also be assessed.
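To illustrate what "objective, verifiable and quantifiable" criteria that allow comparison over time might look like in practice, here is a minimal sketch that records each indicator with a baseline and a target and reports how much of the gap has been closed at review time. The indicator names and figures are entirely hypothetical; a real review would draw them from the project's own planning documentation.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A quantifiable performance indicator with a baseline for comparison over time."""
    name: str
    baseline: float   # value measured before (or at) implementation
    target: float     # value the project aimed for
    observed: float   # value measured at review time

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed (1.0 = target met)."""
        gap = self.target - self.baseline
        if gap == 0:
            return 1.0
        return (self.observed - self.baseline) / gap

# Hypothetical indicators, for illustration only.
indicators = [
    Indicator("average retrieval time (minutes)", baseline=15.0, target=5.0, observed=7.0),
    Indicator("records captured at creation (%)", baseline=40.0, target=90.0, observed=75.0),
]

for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of target gap closed")
```

Keeping the baseline alongside each measurement is what makes successive reviews comparable: the same calculation can be rerun on each review cycle without reinterpreting the criteria.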
Questions for the review should be related to the particular project you have undertaken. See What should you review? for a list of possible questions (based on an entire system review).
There are various types of evaluation depending on the particular questions you want answered. Generally, they fall into the following areas:
- appropriateness
  - appropriateness of solution compared to the organization's needs
  - objectives compared with available resources, and
  - comparison of need now with original need.
- effectiveness
  - original objectives compared with outcomes (what was desired and what was achieved)
  - outcomes compared with needs
  - outcomes compared with standards
  - present outcomes compared with past outcomes, and
  - comparison between target groups within the organization.
- efficiency
  - current costs compared with past costs (people, processes, technology and tools)
  - costs compared with similar systems elsewhere (benchmarking), and
  - extent of implementation compared with targets.
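The efficiency comparisons listed above reduce to simple ratios, so they can be computed directly once the review has gathered the figures. A minimal sketch, in which every name and number is hypothetical:

```python
# Hypothetical review figures; all names and numbers are illustrative only.
past_annual_cost = 120_000.0      # cost of the old system (people, processes, tools)
current_annual_cost = 90_000.0    # cost of the new system at review time
units_rolled_out = 18             # business units using the system so far
units_targeted = 24               # business units in the implementation plan

# Current costs compared with past costs (negative = saving).
cost_change = (current_annual_cost - past_annual_cost) / past_annual_cost

# Extent of implementation compared with targets.
rollout = units_rolled_out / units_targeted

print(f"cost change vs past: {cost_change:+.0%}")
print(f"implementation vs target: {rollout:.0%}")
```

The benchmarking comparison works the same way, substituting the cost figures of a similar system elsewhere for the past-cost figure.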
Tip: Develop clear performance criteria
A common fault in performance criteria and measures is that they are vague and ambiguous. Be clear about exactly what you are measuring.