Tuesday, October 13, 2009

Assignment #9

Identify an information environment of your choice and write an essay to address the following questions (3,000 words):
• What should be your role within this environment?
• How can the principles of information organization and representation help you in performing this role?
• What are the challenges facing you in performing the role? How will you address these challenges?

Technology today is evolving quickly, and things we do not know can easily be learned through the Internet. In this assignment, my task is to identify an information environment of my choice and state my role as a student in this environment. In addition, I should be able to explain how the principles of information organization and representation help me in performing my role. Lastly, I will identify the challenges that I am facing in performing the role and explain how I will address them. But before anything else, what is an information environment?

An information environment is the aggregate of individuals, organizations, and systems that collect, process, or disseminate information, together with the information itself. It helps people gain access to electronic resources and to new environments for learning, teaching and research, offers guidance on institutional change, and provides advisory and consultancy services.

There is now a critical mass of digital information resources that can be used to support researchers, learners, teachers and administrators in their work and study. The production of information is on the increase, and ways to deal with it effectively are required. There is a need to ensure that quality information is not lost amongst the masses of digital data created every day. If we can continue to improve the management, interrogation and serving of 'quality' information, there is huge potential to enhance knowledge creation across learning and research communities. The aim of the Information Environment is to help provide convenient access to resources for research and learning through the use of resource discovery and resource management tools and the development of better services and practice. The Information Environment aims to allow discovery, access and use of resources for research and learning irrespective of their location.

The Information Environment that I have chosen is The Meta Information Environment of Digital Libraries. The meta-information environment of a library is the aspect of library structure that is likely to be most affected by Digital Library technology. It is important to design meta-information environments for Digital Libraries that simultaneously compensate for the loss of many of the services of librarians and take advantage of the ability to apply digital processing to information objects in the collection of Digital Libraries.

Meta-Information Environment of Digital Libraries

Libraries are organized to facilitate access to controlled collections of information. Traditional libraries (TL's) possess three organizational characteristics that, together, provide a basis for such access. These are the organization of information into physical information objects (IO's) such as books; the physical organization of the collections of IO's according to various attributes, such as subject matter and author; and an organized information environment that facilitates direct access to the IO's based on such attributes as author, title, and subject matter, as well as a limited degree of indirect access to the information contained in the IO's.

This last characteristic of a TL typically involves multiple sources of information to support access, such as librarians, catalogs, and the manner in which the collections are organized physically. Since it involves information about information, we term this characteristic the meta-information environment of a library.

As currently conceived, digital libraries (DL's) are libraries in which the controlled collections are in digital form and access to the information in the collections is based almost entirely on digital technology. From a user's point of view, digital technology changes the three organizational characteristics of TL's. First, the organization of information into physical IO's is replaceable with a more flexible organization into logical IO's. Second, the single physical organization of a collection of IO's is replaceable with multiple logical organizations of IO's.

The third and most significant changes, however, occur in the meta-information environment of a library. In terms of advantages, having the IO's in digital form permits the use of digital technology in extracting information from the IO's. The extracted information may satisfy a user's ultimate need for information or it may be employed by "digital librarians" in characterizing the IO's in the collection. In the latter case, this meta-information may be employed in providing access to the information encoded in the IO's. In terms of disadvantages, important interactions between librarians and users that occur in the meta-information environments of TL's may be lost with the near-automation of information access in DL's.

The term "metadata" has been applied in a large variety of contexts. For example, the topics of papers at a recent conference on metadata ranged from metadata in data dictionaries and its use in controlling the operations of database management systems; to metadata used for describing scientific datasets and supporting data sharing among scientists; to metadata used in DL's to support user access to information.

The concept of metadata, when applied in the context of current libraries, digital or traditional, typically refers to information that provides a (usually brief) characterization of the individual IO's in the collections of a library; is stored principally as the contents of library catalogs in TL's; is used principally in aiding users to access IO's of interest.

As an example of its use in the context of TL's, the term "metadata" is sometimes used to describe the descriptive cataloging that is specified by the Anglo-American cataloging rules and the MARC interchange format. Such information constitutes a major component of the cataloging information in most TL's. As an example of its use in the context of DL's, the term "metadata" has been used to describe the information of the "Dublin Core" and the associated "Warwick Framework" which is intended to support access to information on the World Wide Web. The Core specifies the concrete syntax for a small set of meta-information elements, and the Framework specifies a container architecture for aggregating additional metadata objects for interchange.
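
As a rough illustration of the kind of meta-information such a core element set specifies, a record for a single IO can be written as a small set of element-value pairs. The sketch below is hypothetical: the element names are familiar Dublin Core-style elements, but the record values are invented for illustration.

```python
# A minimal, hypothetical Dublin Core-style record for one information object (IO).
# The element names are familiar core elements; the values are invented examples.
dublin_core_record = {
    "title": "Condor Re-introduction in California",
    "creator": "Example Author",
    "subject": ["California condor", "Re-introduction programs", "Captive breeding"],
    "date": "1996",
    "format": "text/html",
    "identifier": "http://example.org/items/condor-reintroduction",
}

# Such a record is meta-information: a brief model of the IO that a digital
# library can index and match against user queries, separately from the IO itself.
for element, value in dublin_core_record.items():
    print(f"{element}: {value}")
```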

More generally, however, if one surveys the many contexts in which it has been applied, it becomes apparent that the concept associated with the term "metadata" is the principal focus of an emerging area of the information sciences whose goal is to discover appropriate methods for the modeling of various classes of IO's. Since a model of an IO is itself typically an IO, and since the concept that is generally associated with the term "data" is subsumed by the concept associated with the term "information object", it seems preferable to use the term "meta-information" and to define it as a model of an information object.

A Scenario for the Use of the Meta-information Environment in a Traditional Library

For the sake of concreteness, let us assume a user whose interest is in finding information on condor re-introduction programs in California. In order to access such information in a TL, the user may engage in a variety of activities. The four most important activities include consulting a librarian; consulting available catalog and reference materials; browsing through the open collections of the library; and processing the information that has been accessed.

Let us assume that the user begins a search by consulting a librarian, and indicates an initial interest in discovering whether programs for re-introducing condors from captive breeding populations have been a success. Several important processes may co-occur during these interactions. First, the librarian may build a "cognitive model" of the user that is employed in helping the user. As an example, the librarian may note the user's level of knowledge about the use of a library, and discover that the user does not understand the value of subject heading catalogs in searching for references to information on the decline of the condors.

Second, the librarian may build a cognitive model of the user's information requirements, or "query", typically in an iterative process during which the user may change the initial query. The librarian may discover, for example, that the user would like to know the locations of the release sites in order to visit them. Third, and depending on the context of the query, the librarian may also construct a model of the user's information processing requirements. In terms of our example, these might include estimating the time to hike to the release sites.

In conjunction with these emerging models of the user's knowledge base and information needs, the librarian employs a cognitive model of the library's information resources to determine an appropriate set of actions that will lead to the satisfaction of the user's information needs. Three classes of activities are worthy of note. First, the librarian may direct the user to meta-information, such as the subject catalog, that points directly to IO's of interest. Second, the librarian may guide the user to "general" meta-information that can be used in a less direct manner in finding IO's of interest. For example, the user may be directed to a gazetteer in order to find the geographical coordinates of the release sites, whose names the librarian may happen to know. These coordinates may then be used in accessing the appropriate maps from the library's map collection. Third, the librarian may suggest that the user browse in the ornithology section of the library to look for books that may be relevant to the topic of condors. In so doing, the user may assess meta-information in the form of titles and tables of contents.

Before leaving the library, the user may employ the relevant maps to estimate the time it would take to hike to the condor release areas.

A Characterization of the Meta-information Environment of a Traditional Library

The preceding example, which is by no means artificial, emphasizes the fact that the meta-information accessed by users of TL's in satisfying their information needs is not restricted to the meta-information in the author, title, and subject catalogs. In particular, the scenario was devised to emphasize that, during search, a user may conceivably employ as meta-information almost all the information sources in a library. Such sources range from the librarian's general knowledge of the world to information encoded in the IO's on the stacks.

An analysis of the preceding and similar usage scenarios suggests that one may further characterize the meta-information environment of a library in terms of a simple model involving sets of services for:
• coordinating user interactions with the meta-information environment, exemplified in the above scenario by the user's interactions with the librarian;
• constructing models of the user, the user's query, and the user's workspace requirements, exemplified in our scenario by interactions with the librarian;
• providing access to models of IO's, exemplified in our scenario by use of the subject catalog and browsing among the stacks;
• making matches between the model of user queries and models of IO's, exemplified in our scenario in part by actions of the librarian and in part by actions of the user in relation to such library resources as the subject catalog;
• extracting information from retrieved IO's, exemplified in our scenario by the computation from the maps of the time it would take the user to hike to the release sites; and
• creating models of IO's which, although an important service of the meta-information environment of libraries, is not exemplified in the preceding scenario.
The scenario emphasizes the key role played by librarians in providing services in the meta-information environment of many TL's.

Knowledge Representation Systems in the Meta-information Environments of Libraries

In order to analyze further the manner in which the preceding sets of services provide support for user access to information, it is useful to introduce the concept of knowledge representation systems (KRS's). We argue that an important component of the functionality of the six sets of meta-information services in TL's is provided by a diverse set of KRS's. This conceptualization in terms of KRS's provides a useful theoretical framework for the design and analysis of DL's.
A KRS may be defined as a system for representing and reasoning about the knowledge in some domain of discourse, and is generally comprised of:
• an underlying knowledge representation language (KRL), whose expressions are intended to represent knowledge about some domain of discourse;
• a semantics that gives meaning to the expressions of the KRL in terms of the domain of discourse;
• a set of reasoning rules that may be employed in inferring further useful expressions from a given set of expressions; and
• a body of knowledge about the domain of discourse expressed in terms of the KRL.
Concepts similar to the concept of a KRS that have been used by other researchers in relation to meta-information include formal systems with interpretations and semi-formal systems.

In general, we may view the KRS's of a library as providing a diverse set of services that are of particular value in the modeling of both IO's and user queries. They are, for example, of particular significance in supporting the modeling of IO's in terms of their content, since, in principle, the content of library materials may refer to any representable aspect of our knowledge.
In order to gain further insight into the nature and significance of KRS's, we provide examples of their use in supporting key sets of services in the meta-information environments of TL's.

KRS supporting the User Query and IO Modeling Services
Thesauri are an important class of KRS's that are employed in constructing models of the subject matter (or "content") of IO's for the catalog systems of TL's. The motivation for the use of thesauri is the difficulties that arise from using a KRS based on natural language (NL) in this context. These difficulties arise from the syntactic and semantic complexity and the high levels of ambiguity that are typically associated with general expressions in NL. The KRL of a thesaurus, on the other hand, is designed to possess a restricted syntax and semantics that permits the representation of restricted domains of discourse in an unambiguous manner. These restrictions result in the construction of many domain-specific thesauri, which in essence represents a "divide-and-conquer" approach to building unambiguous representations of a complex world.

For the present purpose, we may use a highly-simplified view of a thesaurus that is abstracted from the ANSI-NISO standard for thesauri.

The KRL of a thesaurus may be viewed as specifying the terms of a simple language and a few relations (or predicates) defined on the terms. These predicates include the three "broad term/narrow-term" predicates, the "related term" predicate, and the "synonymous term" predicate.

In relation to the semantics associated with its KRL, a term defined in a thesaurus is intended to denote a single concept. Typically, terms represent classes of entities, although class instances are permitted. Ambiguity arising from synonymous and homonymous terms is effectively removed. The mapping from terms to concepts is provided informally by the cognitive processing of the reader of the terms.

With respect to reasoning procedures, by using the basic inference rules of logic (such as "if A and 'A implies B' are both true, then B is true"), together with axioms involving the various predicates (such as "if A is a narrow term for B, and B is a narrow term for C, then A is a narrow term for C"), it is possible to carry out simple reasoning that is interpretable in terms of the concepts being represented in the KRL.

In terms of viewing a thesaurus as representing a body of knowledge about some aspect of the world, the terms and predicates of a thesaurus represent a set of concepts and their relations that model some aspect of the world.
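
To make this concrete, here is a minimal sketch of how the narrower-term predicate and its transitivity axiom might be represented and used for the kind of simple reasoning described above. The thesaurus fragment, its terms, and the function names are all invented for illustration; it does not follow any particular published thesaurus.

```python
# A tiny, hypothetical thesaurus fragment: narrower_term[A] lists terms narrower
# than A, and synonyms maps non-preferred terms to their preferred terms.
narrower_term = {
    "Birds": ["Birds of prey"],
    "Birds of prey": ["Condors"],
    "Condors": ["California condor"],
}
synonyms = {"Raptors": "Birds of prey"}

def narrower_closure(term):
    """Apply the transitivity axiom: if A is a narrow term for B, and B is a
    narrow term for C, then A is a narrow term for C."""
    result, stack = set(), [term]
    while stack:
        for nt in narrower_term.get(stack.pop(), []):
            if nt not in result:
                result.add(nt)
                stack.append(nt)
    return result

def expand_query_term(term):
    """Resolve a synonym to its preferred term, then add all narrower terms, so a
    query on a broad subject also matches IO's indexed under narrower subjects."""
    preferred = synonyms.get(term, term)
    return {preferred} | narrower_closure(preferred)

print(expand_query_term("Raptors"))
# expected: {'Birds of prey', 'Condors', 'California condor'}
```

In a full thesaurus the broader-term, related-term and synonymous-term predicates would be handled in the same tabular way; the point of the sketch is only that the reasoning rules are simple enough to automate.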

Large numbers of thesauri are currently employed in library contexts. The representation of the content of IO's is typically achieved by choosing a relatively small number of terms from some domain-specific thesaurus.

Another important research issue concerns the construction of semantic mappings between the KRL's of different KRS's. It is possible to employ different sets of KRS's for modeling user queries and for modeling IO's. There is therefore a need for translation during the application of matching services. One approach to constructing such mappings involves the use of human experts working in a top-down manner, which is likely to be a time-consuming and controversial process. An approach that is promising in terms of automation involves bottom-up techniques based on empirical analyses of the use of language.

As a student, I do research most of the time. Through digital libraries, I no longer need to go to the library, consult the card catalog and find the book in order to get the information I need; I can simply refer to digital libraries over the Internet. It is my responsibility to use this technological advantage well, while still giving credit and importance to the traditional library and its ways.

Assignment #8

What is outsourcing? What is insourcing? Which is better, outsourcing or insourcing? Before anything else, and before I state my stand on this topic, let us become acquainted with outsourcing and insourcing, and with their advantages and disadvantages.

Outsourcing is an arrangement in which one company provides services for another company that could also be, or usually have been, provided in-house. Outsourcing is a trend that is becoming more common in information technology and other industries for services that have usually been regarded as intrinsic to managing a business. In some cases, the entire information management of a company is outsourced, including planning and business analysis as well as the installation, management, and servicing of the network and workstations.
Outsourcing can range from the large contract in which a company like IBM manages IT services for a company like Xerox to the practice of hiring contractors and temporary office workers on an individual basis.

Outsourcing involves the transfer of the management and/or day-to-day execution of an entire business function to an external service provider. The client organization and the supplier enter into a contractual agreement that defines the transferred services. Under the agreement the supplier acquires the means of production in the form of a transfer of people, assets and other resources from the client. The client agrees to procure the services from the supplier for the term of the contract. Business segments typically outsourced include information technology, human resources, facilities, real estate management, and accounting. Many companies also outsource customer support and call center functions like telemarketing, CAD drafting, customer service, market research, manufacturing, designing, web development, print-to-mail, content writing, ghostwriting and engineering. Offshoring is the type of outsourcing in which the buyer organization belongs to another country.

Outsourcing and offshoring are used interchangeably in public discourse despite important technical differences. Outsourcing involves contracting with a supplier, which may or may not involve some degree of offshoring. Offshoring is the transfer of an organizational function to another country, regardless of whether the work is outsourced or stays within the same corporation/company.

Multisourcing refers to large outsourcing agreements. Multisourcing is a framework to enable different parts of the client business to be sourced from different suppliers. This requires a governance model that communicates strategy, clearly defines responsibility and has end-to-end integration.

Strategic outsourcing is the organizing arrangement that emerges when firms rely on intermediate markets to provide specialized capabilities that supplement existing capabilities deployed along a firm’s value chain. Such an arrangement produces value within firms’ supply chains beyond those benefits achieved through cost economies. Intermediate markets that provide specialized capabilities emerge as different industry conditions intensify the partitioning of production. As a result of greater information standardization and simplified coordination, clear administrative demarcations emerge along a value chain.

Due to the complexity of work definition, codifying requirements, pricing, and legal terms and conditions, clients often utilize the advisory services of outsourcing consultants or outsourcing intermediaries to assist in scoping, decision making, and vendor evaluation.

The competitive pressures on firms to bring out new products at an ever more rapid pace to meet market needs are increasing. As such, the pressures on the R&D department are increasing. In order to alleviate this pressure, firms have to either increase R&D budgets or find ways to use their resources more productively. There are situations in which a firm may consider outsourcing some of its R&D work to contract research organizations or universities.

Reasons why a firm could consider outsourcing are:
-new product design does not work
-project time and cost overruns
-loss of key staff
-competitive response
-problems of quality/yield.


Here are some reasons why companies prefer outsourcing:

Cost savings. The lowering of the overall cost of the service to the business. This will involve reducing the scope, defining quality levels, re-pricing, re-negotiation, and cost re-structuring. It also includes access to lower-cost economies through offshoring, the so-called "labor arbitrage" generated by the wage gap between industrialized and developing nations.

Focus on Core Business. Resources are focused on developing the core business. For example, organizations often outsource their IT support to specialised IT services companies.

Cost restructuring. Operating leverage is a measure that compares fixed costs to variable costs. Outsourcing changes the balance of this ratio by offering a move from fixed to variable cost and also by making variable costs more predictable.

Improve quality. Achieve a step change in quality through contracting out the service with a new service level agreement.

Knowledge. Access to intellectual property and wider experience and knowledge.

Contract. Services will be provided under a legally binding contract with financial penalties and legal redress. This is not the case with internal services.

Operational expertise. Access to operational best practice that would be too difficult or time consuming to develop in-house.

Access to talent. Access to a larger talent pool and a sustainable source of skills, in particular in science and engineering.

Capacity management. An improved method of capacity management of services and technology where the risk in providing the excess capacity is borne by the supplier.

Enhance capacity for innovation. Companies increasingly use external knowledge service providers to supplement limited in-house capacity for product innovation.

Reduce time to market. The acceleration of the development or production of a product through the additional capability brought by the supplier.

Commodification. The trend of standardizing business processes, IT Services and application services enabling businesses to intelligently buy at the right price. Allows a wide range of businesses access to services previously only available to large corporations.

Risk management. An approach to risk management for some types of risks is to partner with an outsourcer who is better able to provide the mitigation.
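
To make the cost-restructuring point above concrete, here is a minimal sketch with invented figures showing how outsourcing can shift a fixed in-house cost into a volume-dependent variable cost. The cost figures and transaction volumes are assumptions for illustration, not data from any real contract.

```python
# Hypothetical figures: an in-house back office carries a fixed yearly cost,
# while an outsourced service is billed per transaction (a variable cost).
FIXED_IN_HOUSE_COST = 500_000           # assumed yearly fixed cost
OUTSOURCED_COST_PER_TRANSACTION = 4.0   # assumed unit price charged by the supplier

def annual_cost(transactions, outsourced):
    """Yearly cost under the in-house (fixed) or outsourced (variable) model."""
    if outsourced:
        return transactions * OUTSOURCED_COST_PER_TRANSACTION
    return FIXED_IN_HOUSE_COST

for volume in (50_000, 100_000, 150_000):
    in_house = annual_cost(volume, outsourced=False)
    external = annual_cost(volume, outsourced=True)
    print(f"{volume:,} transactions: in-house {in_house:,.0f} vs outsourced {external:,.0f}")
```

Under the in-house model the cost stays the same regardless of volume, while under the outsourced model it scales with activity; that is what the move from fixed to variable cost means in practice.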

When done for the right reasons, outsourcing will actually help your company grow and save money. There are other advantages of outsourcing that go beyond money and here are some of them:

Focus On Core Activities. In rapid growth periods, the back-office operations of a company will expand also. This expansion may start to consume resources (human and financial) at the expense of the core activities that have made your company successful. Outsourcing those activities will allow refocusing on those business activities that are important without sacrificing quality or service in the back-office.

Cost And Efficiency Savings. Outsourcing is also advantageous for back-office functions that are complicated in nature but that the size of your company prevents you from performing at a consistent and reasonable cost.

Reduced Overhead. When the overhead costs of performing a particular back-office function are extremely high, consider outsourcing the functions that can be moved easily.

Operational Control. Operations whose costs are running out of control must be considered for outsourcing. Departments that may have evolved over time into uncontrolled and poorly managed areas are prime motivators for outsourcing. In addition, an outsourcing company can bring better management skills to your company than what would otherwise be available.

Staffing Flexibility. Outsourcing will allow operations that have seasonal or cyclical demands to bring in additional resources when you need them and release them when you’re done.

Continuity & Risk Management. Periods of high employee turnover will add uncertainty and inconsistency to the operations. Outsourcing will provided a level of continuity to the company while reducing the risk that a substandard level of operation would bring to the company.

Along with these advantages come the disadvantages of outsourcing and the public's criticism of it. Here are some of them.

Loss Of Managerial Control. Whether you sign a contract to have another company perform the function of an entire department or single task, you are turning the management and control of that function over to another company. True, you will have a contract, but the managerial control will belong to another company. Your outsourcing company will not be driven by the same standards and mission that drives your company. They will be driven to make a profit from the services that they are providing to you and other businesses like yours.

Hidden Costs. You will sign a contract with the outsourcing company that will cover the details of the service that they will be providing. Anything not covered in the contract will be the basis for you to pay additional charges. Additionally, you will incur legal fees to retain a lawyer to review the contracts you will sign. Remember, this is the outsourcing company's business. They have done this before and they are the ones who write the contract. Therefore, you will be at a disadvantage when negotiations start.

Threat to Security and Confidentiality. The Life-blood of any business is the information that keeps it running. If you have payroll, medical records or any other confidential information that will be transmitted to the outsourcing company, there is a risk that the confidentiality may be compromised. If the outsourced function involves sharing proprietary company data or knowledge (e.g. product drawings, formulas, etc.), this must be taken into account. Evaluate the outsourcing company carefully to make sure your data is protected and the contract has a penalty clause if an incident occurs.

Quality Risk is the propensity for a product or service to be defective, due to operations-related issues. Quality risk in outsourcing is driven by a list of factors. One such factor is opportunism by suppliers due to misaligned incentives between buyer and supplier, information asymmetry, high asset specificity, or high supplier switching costs. Other factors contributing to quality risk in outsourcing are poor buyer-supplier communication, lack of supplier capabilities, resources or capacity, or weak buyer-supplier contract enforceability. Two main concepts must be considered when considering observability as it relates to quality risks in outsourcing: the concepts of testability and criticality.

There is a strong public opinion regarding outsourcing (especially when combined with offshoring) that outsourcing damages a local labor market. Outsourcing is the transfer of the delivery of services which affects both jobs and individuals. It is difficult to dispute that outsourcing has a detrimental effect on individuals who face job disruption and employment insecurity; however, its supporters believe that outsourcing should bring down prices, providing greater economic benefit to all.

In the area of call centers end-user-experience is deemed to be of lower quality when a service is outsourced. This is exacerbated when outsourcing is combined with off-shoring to regions where the first language and culture are different. The questionable quality is particularly evident when call centers that service the public are outsourced and offshored.

Outsourcing sends jobs to the lower-income areas where work is being outsourced to, which provides jobs in these areas and has a net equalizing effect on the overall distribution of wealth. Some argue that the outsourcing of jobs (particularly off-shore) exploits the lower paid workers. A contrary view is that more people are employed and benefit from paid work. Despite this argument, domestic workers displaced by such equalization are proportionately unable to outsource their own costs of housing, food and transportation.

On the issue of high-skilled labor, such as computer programming, some argue that it is unfair to both the local and off-shore programmers to outsource the work simply because the foreign pay rate is lower. On the other hand, one can argue that paying the higher-rate for local programmers is wasteful, or charity, or simply overpayment. If the end goal of buyers is to pay less for what they buy, and for sellers it is to get a higher price for what they sell, there is nothing automatically unethical about choosing the cheaper of two products, services, or employees.

Social responsibility is also reflected in the costs of benefits provided to workers. Companies outsourcing jobs effectively transfer the cost of retirement and medical benefits to the countries where the services are outsourced. This represents a significant reduction in total cost of labor for the outsourcing company. A side effect of this trend is the reduction in salaries and benefits at home in the occupations most directly impacted by outsourcing.

Quality of service
Quality of service is measured through a service level agreement (SLA) in the outsourcing contract. In poorly defined contracts there is no measure of quality or SLA defined. Even when an SLA exists, it may not be to the same level as previously enjoyed. This may be because proper objective measurement and reporting are being implemented for the first time. It may also be lower quality by design, to match the lower price.
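
As a small illustration of how one SLA measure might be checked, the sketch below compares a measured monthly availability figure against an availability target. Both the 99.5% target and the downtime figures are invented for the example; real contracts define their own metrics, measurement windows and penalties.

```python
# Hypothetical SLA check: compare measured monthly availability against a target.
MINUTES_PER_MONTH = 30 * 24 * 60     # simplified 30-day month
SLA_AVAILABILITY_TARGET = 0.995      # assumed 99.5% availability target

def availability(downtime_minutes):
    """Fraction of the month the service was available."""
    return 1 - downtime_minutes / MINUTES_PER_MONTH

for downtime in (90, 300):           # assumed downtime figures, in minutes
    measured = availability(downtime)
    status = "meets the SLA" if measured >= SLA_AVAILABILITY_TARGET else "breaches the SLA"
    print(f"{downtime} minutes of downtime -> {measured:.4f} availability ({status})")
```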

There are a number of stakeholders who are affected and there is no single view of quality. The CEO may view the lower quality acceptable to meet the business needs at the right price. The retained management team may view quality as slipping compared to what they previously achieved. The end consumer of the service may also receive a change in service that is within agreed SLAs but is still perceived as inadequate. The supplier may view quality in purely meeting the defined SLAs regardless of perception or ability to do better.

Quality in terms of end-user experience is best measured through customer satisfaction questionnaires that are professionally designed to capture an unbiased view of quality. Such surveys are one form of research. This allows quality to be tracked over time and corrective action to be identified and taken.

Staff turnover
The staff turnover of employees who originally transferred to the outsourcer is a concern for many companies. Turnover is higher under an outsourcer, and key company skills may be lost, with retention outside the control of the company.
In offshore outsourcing there is also an issue of staff turnover in the outsourcer's call centers. It is quite normal for such companies to replace their entire call-center workforce each year. This inhibits the build-up of employee knowledge and keeps quality at a low level.

Company knowledge
Outsourcing could lead to communication problems with transferred employees. For example, before the transfer, staff have access to broadcast company e-mail informing them of new products, procedures, etc. Once in the outsourcing organization, the same access may not be available. Also, to reduce costs, some outsourced employees may not have access to e-mail, and any new information is instead delivered in team meetings.

Failure to deliver business transformation
Business transformation has traditionally been promised by outsourcing suppliers, but they have usually failed to deliver. In a commoditised market where any half-decent service provider can do things cheaper and faster, smart vendors have promised a second wave of benefits that will improve the client's business outcomes. According to Vinay Couto of Booz & Company, "Clients always use the service provider's ability to achieve transformation as a key selection criterion. It's always in the top three and sometimes number one." Often vendors have promised transformation on the basis of wider domain expertise that they did not really have, though Couto also says that this is often down to the client's unwillingness to invest in transformation once an outsourcing contract is in place.

Productivity
Offshore outsourcing for the purpose of saving cost can often have a negative influence on the real productivity of a company. Rather than investing in technology to improve productivity, companies gain non-real productivity by hiring fewer people locally and outsourcing work to less productive facilities offshore that appear to be more productive simply because the workers are paid less. Sometimes, this can lead to strange contradictions where workers in a developing country using hand tools can appear to be more productive than a U.S. worker using advanced computer controlled machine tools, simply because their salary appears to be less in terms of U.S. dollars.

In contrast, increases in real productivity are the result of more productive tools or methods of operating that make it possible for a worker to do more work. Non-real productivity gains are the result of shifting work to lower-paid workers, often without regard to real productivity. The net result of choosing non-real over real productivity gains is that the company falls behind and makes itself obsolete over time rather than making investments in real productivity.

Security
Before outsourcing, an organization is responsible and liable for the actions of all its staff. When these same people are transferred to an outsourcer they may not even change desks, but their legal status has changed: they are no longer directly employed by, or responsible to, the organization. This causes legal, security and compliance issues that need to be addressed through the contract between the client and the suppliers. This is one of the most complex areas of outsourcing and requires a specialist third-party adviser.

Fraud is a specific security issue, and it is criminal activity whether it is committed by employees or by the supplier's staff. It can be disputed, however, whether fraud is more likely when outsourcers are involved; one example is credit card theft when there is scope for fraud by credit card cloning. In April 2005, a high-profile case involving the theft of $350,000 from four Citibank customers occurred when call center workers acquired the passwords to customer accounts and transferred the money to their own accounts opened under fictitious names. Citibank did not find out about the problem until the American customers noticed discrepancies with their accounts and notified the bank.

I have already mentioned some advantages and disadvantages of outsourcing; in addition, I will cite what insourcing is and what its advantages and disadvantages are.

Insourcing often involves bringing in specialists to fill temporary needs or training existing personnel to perform tasks that would otherwise have been outsourced. An example is the use of in-house engineers to write technical manuals for equipment they have designed, rather than sending the work to an outside technical writing firm. In this example, the engineers might have to take technical writing courses at a local college, university, or trade school before being able to complete the task successfully. Other challenges of insourcing include the possible purchase of additional hardware and/or software that is scalable and energy-efficient enough to deliver an adequate return on investment (ROI).

Insourcing can be viewed as outsourcing as seen from the opposite side. For example, a company based in Japan might open a plant in the United States for the purpose of employing American workers to manufacture Japanese products. From the Japanese perspective this is outsourcing, but from the American perspective it is insourcing. Nissan, a Japanese automobile manufacturer, has in fact done this.

The opposite of outsourcing can be defined as insourcing. When an organization delegates its work to another entity that is internal yet not a part of the organization, it is termed insourcing. The internal entity will usually have a specialized team that is proficient in providing the required services. Organizations sometimes opt for insourcing because it enables them to maintain better control of what they outsource. Insourcing has also come to be defined as transferring work from one organization to another organization that is located within the same country. Insourcing can also mean an organization building a new business centre or facility that specializes in a particular service or product.

Organizations involved in production usually opt for insourcing in order to cut down the cost of labor and taxes, amongst others. The trend towards insourcing has increased since 2006. Organizations that have been dissatisfied with outsourcing have moved towards insourcing. Some organizations feel that by insourcing their work rather than outsourcing it they can have better customer support and better control over the work. According to recent studies, there is more work insourced than outsourced in the U.S. and U.K., which are currently the largest outsourcers in the world; the U.S. and U.K. outsource and insource work in roughly equal measure.

If I were to choose between outsourcing and insourcing, my stand would be with insourcing. In my own opinion, it is much better if the school does insourcing, for the following reasons: the university has capable individuals who can perform the tasks regarding the information system; it is safer for the university to let its own employees manage the system than to let others manipulate it; and it is also cheaper to insource because free resources are abundant nowadays. In outsourcing, on the other hand, the outsourcing company will be motivated by profit. Since the contract will fix the price, the only way for them to increase profit will be to decrease expenses, and as long as they meet the conditions of the contract, you will pay. In addition, you will lose the ability to respond rapidly to changes in the business environment; the contract will be very specific, and you will pay extra for changes.

Assignment #7

The SONA, or State of the Nation Address, is delivered every year by the President of the Republic of the Philippines to report on the status of the nation. The SONA is given by the President before a joint session of both houses of Congress, pursuant to Article VII, Section 23 of the 1987 Constitution, which reads: “The President shall address the Congress at the opening of its regular session. He may also appear before it at any other time.”

From what I have understood of the full text of the State of the Nation Address of President Gloria Macapagal Arroyo, I have identified three areas that are related to ICT, and they are the following:

· Telecommunications
· Development of BPO (Business Process Outsourcing)
· Formation of Department of Information and Communications Technology (DICT)


Telecommunications
“Sa telecommunications naman, inatasan ko ang Telecommunications Commission na kumilos na tungkol sa mga sumbong na dropped calls at mga nawawalang load sa cellphone. We need to amend the Commonwealth-era Public Service Law. And we need to do it now.” (As for telecommunications, I have directed the Telecommunications Commission to act on the complaints about dropped calls and disappearing cellphone load. We need to amend the Commonwealth-era Public Service Law. And we need to do it now.)

The Philippines is one of the texting capitals of Asia; almost all Filipinos in urban areas, and even in rural areas, own a cellular phone. It is part of the daily life of many Filipinos, and telecommunications has always been strong in supporting the growth and development of the economy, especially in modernization and technology. The telecommunication networks are the most likely to be blamed for missing load, since they are the ones that distribute cellphone load. Fortunately, the government has found ways to prevent this from going further. If telecommunication companies reduce their rates, it will be a great help in minimizing the expenses of the people.

BPO (Business Process Outsourcing)
“Kung noong nakaraan, lumakas ang electronics [If in the past it was electronics that grew strong], today we are creating wealth by developing the BPO and tourism sectors as additional engines of growth. Electronics and other manufactured exports rise and fall in accordance with the state of the world economy. But BPO remains resilient. With earnings of $6 billion and employment of 600,000, the BPO phenomenon speaks eloquently of our competitiveness and productivity.”

Business process outsourcing (BPO) is a form of outsourcing that involves contracting the operations and responsibilities of a specific business function (or process) to a third-party service provider. BPO is distinct from information technology (IT) outsourcing, which focuses on hiring a third-party company or service provider to do IT-related activities, such as application management and application development, data center operations, or testing and quality assurance. Frequently, BPO is also referred to as ITES -- information technology-enabled services. Since most business processes include some form of automation, IT enables these services to be performed.

Formation of the DICT
The House of Representatives has approved on second reading a bill seeking to create a Department of Information and Communications Technology (DICT). The bill must also be passed by the Senate and signed by President Arroyo before it becomes law.

After the DICT is created, all offices of the Department of Transportation and Communications dealing with communications and information technology would be transferred to the new department. These include the National Telecommunications Commission and the Philippine Postal Corp. The National Computer Center, which is attached to the Department of Science and Technology, would also be transferred to the DICT.

Catanduanes Rep. Joseph Santiago, chairman of the House information technology committee, said a DICT must be created because the DOTC’s administrative and jurisdictional foundations can no longer cope with the rapid advances in ICT. “(The new DICT would) ensure the provision of strategic, dependable and cost-efficient ICT infrastructure, systems and resources as instruments for nation-building and global competitiveness,” he said. It would also promote a “policy environment of fairness, broad private sector participation in ICT development and balanced investment between high-growth and economically depressed districts,” he added.

Santiago’s committee, together with the committees on appropriations and government reorganization, has endorsed the DICT bill. Santiago listed other objectives of the proposed department:
• Ensure universal access and high-speed connectivity at fair and reasonable cost; provide ample ICT services in areas not sufficiently served by the private sector;
• Promote the widespread use and application of emerging ICT; and
• Establish a strong and effective regulatory system.

In the bill ICT is defined as “the aggregate of all electronic means to collect, store, process, and present information to end-users in support of their activities.” ICT consists of computer systems, office channels and consumer electronics, as well as networked information infrastructure, the components of which include telephone systems, the Internet, and satellite and cable television.

If the DICT is successfully formed, the development of information technology in the Philippines will continue. It will give more jobs to the people and bring more knowledge regarding the continuous evolution of technology.

Assignment #6

If you were hired by the university president as an IT consultant, what would you suggest (technology, infrastructure, innovations, steps, processes, etc.) in order for the internet connectivity to be improved? (3,000 words)

If I were hired by the University President as an IT consultant, I would suggest innovation in order for the internet connectivity to be improved, since the Internet connection is an important factor for communication. It is really relevant to the students and also to all the people inside the campus; it is the fastest way to communicate, send files, and work within the campus. So why would I suggest innovation? Innovation refers to a new way of doing something. It may refer to incremental and emergent or radical and revolutionary changes in thinking, products, processes, or organizations.

Working online nowadays has a far wider meaning than just surfing to while away your time. There are many more reasons to log on to the internet and be in touch with the rest of the world, which stresses the importance of good connectivity all the more. It is important to compare the broadband deals that are available so as to obtain the most suitable deal with the best connectivity for your needs. The necessity of being net-savvy has grown manifold in the past years. People in every country all over the world have started to expand their horizons by connecting to the internet. To keep track of all such developments in the world of technology, it is important that you keep in touch too. Getting cable or DSL connections for the net can be a task that involves many hassles.
The best way to be net-savvy now is to obtain broadband internet, which provides superb bandwidth and great downloading speed. By comparing all the deals that are available, you can make sure that you secure only the best deal. This will help you get the best connection to log on to the net with good speed and bandwidth, and at the lowest of prices. Compare broadband deals on the internet itself and choose the one that suits you best. With the increase in the provision of broadband services, it becomes all the more necessary to compare the broadband deals that are available. Internet, phone and television connections are often made available by the same provider, and the cost is low too. This does not mean that a cheap service provider will get you the best connectivity; this is what consumers have to compare before taking up a broadband deal, to make sure that only the services of the best in the industry are obtained. Compare broadband deals online, and only then decide on the internet service provider. The right choice of service provider will make it easier for you to always keep in touch.

Broadband Internet access, often shortened to just broadband, is high data rate Internet access, typically contrasted with dial-up access using a 56k modem. Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line, whereas broadband technologies supply more than double this rate and generally without disrupting telephone use. Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 2.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States (US) Federal Communications Commission (FCC), as of 2009, defines "Basic Broadband" as data transmission speeds exceeding 768 kilobits per second (Kbps), or 768,000 bits per second, in at least one direction: downstream (from the Internet to the user's computer) or upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services.
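
To put these rates in perspective, the short sketch below compares how long a single file download would take at the dial-up and broadband speeds mentioned above. The 10 MB file size is an arbitrary example, and the calculation ignores protocol overhead and congestion.

```python
# Compare download times for a 10 MB file (arbitrary example size) at the data
# rates mentioned above: 56 kbit/s dial-up, 256 kbit/s (OECD broadband floor),
# 768 kbit/s (FCC "Basic Broadband"), and 2.0 Mbit/s.
FILE_SIZE_BITS = 10 * 1024 * 1024 * 8    # 10 megabytes expressed in bits

rates_kbit_s = {
    "dial-up, 56 kbit/s": 56,
    "OECD broadband floor, 256 kbit/s": 256,
    "FCC Basic Broadband, 768 kbit/s": 768,
    "2.0 Mbit/s": 2000,
}

for name, kbit_s in rates_kbit_s.items():
    seconds = FILE_SIZE_BITS / (kbit_s * 1000)   # kbit/s -> bit/s
    print(f"{name}: about {seconds / 60:.1f} minutes")
```

At 56 kbit/s the example file takes roughly 25 minutes, while at 2.0 Mbit/s it takes well under a minute, which is why the choice of connection type matters so much for a campus.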
Programs of organizational innovation are typically tightly linked to organizational goals and objectives, to the business plan, and to market competitive positioning.
For example, one driver for innovation programs in corporations is to achieve growth objectives. As Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone . . . Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results".

In general, business organisations spend a significant amount of their turnover on innovation i.e. making changes to their established products, processes and services. The amount of investment can vary from as low as a half a percent of turnover for organisations with a low rate of change to anything over twenty percent of turnover for organisations with a high rate of change.
The average investment across all types of organizations is four percent. For an organisation with a turnover of say one billion currency units, this represents an investment of forty million units. This budget will typically be spread across various functions including marketing, product design, information systems, manufacturing systems and quality assurance.
The investment may vary by industry and by market positioning.

One survey across a large number of manufacturing and services organisations found, ranked in decreasing order of popularity, that systematic programs of organizational innovation are most frequently driven by:

-Improved quality
-Creation of new markets
-Extension of the product range
-Reduced labor costs
-Improved production processes
-Reduced materials
-Reduced environmental damage
-Replacement of products/services
-Reduced energy consumption
-Conformance to regulations

These goals vary between improvements to products, processes and services and dispel a popular myth that innovation deals mainly with new product development. Most of the goals could apply to any organisation be it a manufacturing facility, marketing firm, hospital or local government.

There are two fundamentally different types of measures for innovation: the organizational level and the political level. The measure of innovation at the organizational level relates to individuals and team-level assessments in private companies from the smallest to the largest. Measurement of innovation for organizations can be conducted by surveys, workshops, consultants or internal benchmarking. There is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation, such as business measures related to finances, innovation process efficiency, employees' contribution and motivation, as well as benefits for customers. Measured values will vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations. At the political level, measures of innovation focus more on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various evaluation frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1995) suggests standard guidelines on measuring technological product and process innovation. Some people consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2005 takes a wider perspective on innovation and includes marketing and organizational innovation. These standards are used, for example, in the European Community Innovation Surveys.

Other ways of measuring innovation have traditionally been expenditure, for example, investment in R&D (Research and Development) as percentage of GNP (Gross National Product). Whether this is a good measurement of Innovation has been widely discussed and the Oslo Manual has incorporated some of the critique against earlier methods of measuring. This being said, the traditional methods of measuring still inform many policy decisions. The EU Lisbon Strategy has set as a goal that their average expenditure on R&D should be 3 % of GNP.
The Oslo Manual is focused on North America, Europe, and other rich economies. In 2001, the Bogotá Manual was created for Latin America and the Caribbean countries.

Many scholars claim that there is a great bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is widely ignored. For example, a firm can have the better high technology or software, but there are also crucial learning tasks that are important for innovation, and these measurements and kinds of research are rarely done.

A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research (CER) is a form of price control which, by reducing returns to industry, limits R&D expenditure, stifles future innovation and compromises new products' access to markets. Some academics claim that CER is a valuable value-based measure of innovation which accords truly significant advances in therapy (those that provide 'health gain') higher prices than free market mechanisms do. Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse. The Australian academic Thomas Alured Faunce has developed the case that national comparative cost-effectiveness assessment systems should be viewed as measuring 'health innovation' as an evidence-based concept distinct from valuing innovation through the operation of competitive markets (a method which requires strong anti-trust laws to be effective), on the basis that both methods of assessing innovation in pharmaceuticals are mentioned in annex 2C.1 of the AUSFTA.

I also suggest that the university use high-end computer devices to perform tasks faster, upgrade the speed of the internet, or change the internet connection to a better one.

The Internet is a worldwide network comprising government, academic, commercial, military and corporate networks. The Internet was originally used by the US military before becoming widely used for academic and commercial research. Users accessing the Internet can read and download data from almost anywhere in the world, and can communicate across the Internet using Internet e-mail.

In order to connect to the Internet, use e-mail and access the World Wide Web, you must obtain and set up a modem. This allows the PC to access the Internet over a telephone line; alternatively, a LAN modem provides WAN access to several people simultaneously on your LAN, allowing several PCs to share a single connection to the Internet. You must also obtain an Internet account from an Internet Service Provider, or ISP. An ISP is a company that can provide access to the Internet and give you an Internet e-mail address; you then access the Internet by using your modem to dial into the ISP server. Finally, obtain and install a web browser, which allows you to view web pages as well as send and receive e-mail.

For a high-speed internet connection, the university must subscribe to a broadband or DSL type of connection with the highest speed available. Broadband is transmission capacity with sufficient bandwidth to permit the combined provision of voice, data and video; according to an ITU report, it refers to DSL and cable modem services with bandwidth greater than 128 kbit/s in at least one direction. In determining the connection speed, we should determine the bandwidth of the internet service subscribed to by the university. Bandwidth is the range of frequencies available to be occupied by signals. In analogue systems it is measured in Hertz (Hz) and in digital systems in bits per second (bit/s); the higher the bandwidth, the greater the amount of information that can be transmitted in a given time. High-bandwidth channels are referred to as broadband, which typically means 1.5/2.0 Mbit/s or higher. So if the number of connections in the laboratories increases, more bandwidth is needed.
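
As a rough, hypothetical sizing exercise for the laboratories, the sketch below estimates the uplink the university would need if every lab PC were given a target per-user rate. All of the figures (number of labs, PCs per lab, per-PC rate, contention ratio) are assumptions for illustration, not measurements of the actual campus network.

```python
# Hypothetical capacity estimate for the university laboratories.
pcs_per_lab = 40        # assumed PCs in one laboratory
number_of_labs = 5      # assumed number of laboratories
per_pc_kbit_s = 256     # assumed target rate per PC (the OECD broadband floor)

required_kbit_s = pcs_per_lab * number_of_labs * per_pc_kbit_s
print(f"Uplink needed if every PC is active: {required_kbit_s / 1000:.1f} Mbit/s")

# In practice not every PC is busy at once, so providers apply a contention
# (oversubscription) ratio; 1:4 is an assumed example value here.
contention_ratio = 4
print(f"With 1:{contention_ratio} contention: {required_kbit_s / contention_ratio / 1000:.1f} Mbit/s")
```

Even under these modest assumptions the laboratories alone would need tens of megabits per second, which supports the recommendation to move the campus to a higher-capacity broadband subscription.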

The first versions of Ethernet used coaxial cable to connect computers in a bus topology, with each computer directly connected to the backbone. These early versions were known as Thicknet (10BASE5) and Thinnet (10BASE2). 10BASE5, or Thicknet, used a thick coaxial cable that allowed for cabling distances of up to 500 meters before the signal required a repeater. 10BASE2, or Thinnet, used a thin coaxial cable that was smaller in diameter and more flexible than Thicknet and allowed for cabling distances of 185 meters.

The ability to migrate the original implementation of Ethernet to current and future implementations is based on the practically unchanged structure of the Layer 2 frame. Physical media, media access, and media control have all evolved and continue to do so, but the Ethernet frame header and trailer have essentially remained constant. The early implementations of Ethernet were deployed in a low-bandwidth LAN environment where access to the shared media was managed by CSMA, and later CSMA/CD. In addition to being a logical bus topology at the Data Link layer, Ethernet also used a physical bus topology, and this topology became more problematic as LANs grew larger and LAN services made increasing demands on the infrastructure.

The original thick and thin coaxial physical media were replaced by early categories of UTP cables, which were easier to work with, lightweight, and less expensive. The physical topology was also changed to a star topology using hubs. Hubs concentrate connections: they take a group of nodes and allow the network to see them as a single unit. When a frame arrives at one port, it is copied to the other ports so that all the segments on the LAN receive the frame. Using a hub in this topology increased network reliability by allowing any single cable to fail without disrupting the entire network; however, repeating the frame to all other ports did not solve the issue of collisions, which were later managed with the introduction of switches into the network. Ethernet was invented by Xerox Corporation and developed jointly by Xerox, Intel, and Digital Equipment Corporation (DEC), and is a widely used LAN technology.
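The point about the Layer 2 frame staying essentially constant can be made concrete with a short sketch. The following Python snippet is only my own illustration of the classic Ethernet II layout (6-byte destination MAC, 6-byte source MAC, 2-byte EtherType, payload padded to the 46-byte minimum, 4-byte frame check sequence); the MAC addresses and payload are made up, and the CRC-32 here stands in for the FCS since the real transmission bit ordering is handled by the network card.

import struct
import zlib

def build_ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Assemble a minimal Ethernet II frame: header + padded payload + FCS."""
    if len(payload) < 46:                       # pad to the 46-byte minimum payload
        payload = payload.ljust(46, b"\x00")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + fcs

frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("aabbccddeeff"),      # made-up destination MAC
    src_mac=bytes.fromhex("112233445566"),      # made-up source MAC
    ethertype=0x0800,                           # 0x0800 marks an IPv4 payload
    payload=b"hello lab network",
)
print(len(frame), "bytes")                      # 64 bytes, the minimum frame size

Whether the frame travels over Thicknet, UTP and a hub, or a modern switch, this header-and-trailer structure is what has remained essentially unchanged, and that is what makes migration between Ethernet generations possible.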
Innovating the LAN technology would also be a great help to the University...

Assignment # 5 - Barriers in IT Implementation

In every move and in every implementation, barriers are unavoidable. Sometimes they improve performance, but most of the time they hinder implementation. Organizations are as alike and as unique as human beings, and group processes can be as straightforward or as complex as the individuals who make up the organization. It is vital to successfully launching a new program that the leaders understand the strengths, weaknesses, and idiosyncrasies of the organization or system in which they operate. Try to anticipate barriers to implementation so that you can develop strategies to minimize their impact or avoid them altogether.
The following list of common barriers can be used to help your leadership team identify potential obstacles, and the list of essential elements for change can help the team brainstorm possible solutions. The lists are a good starting point for a planning session, which will be most effective if it also takes into account the organization's unique characteristics.

In the company we adopted, the head of the technical department pointed out a few of the barriers they encountered in IT implementation, one of them being the resistance of their employees. This resistance is due to a lack of knowledge and of the ability to adapt to changes, which is why they have difficulty implementing a new system or changing their existing one.

Common Barriers:

-Studying the problem too long without acting
-Trying to get everyone's agreement first
-Educating without changing structures or expectations
-Tackling everything at once
-Measuring nothing or everything
-Failing to build support for replication
-Assuming that the status quo is OK

More Barriers to Change:

-Lack of such resources as time and commitment
-Resistance to change
-Lack of senior leadership support or physician champion
-Lack of cooperation from other agencies, providers, departments, and facilities
-Ineffective teams
-Burdensome data collection

Essential Elements for a Change Effort:

-Define the problem
-Define the target population
-Define effective treatment strategies and establish procedural guidelines
-Establish performance measures; set goals
-Define effective system changes and interventions
-Develop leadership and a system change strategy

A barrier is an obstacle which prevents a given policy instrument being implemented, or limits the way in which it can be implemented. In the extreme, such barriers may lead to certain policy instruments being overlooked, and the resulting strategies being much less effective. For example, demand management measures are likely to be important in larger cities as ways of controlling the growth of congestion and improving the environment. But at the same time they are often unpopular, and cities may be tempted to reject them simply because they will be unpopular. If that decision leads in turn to greater congestion and a worse environment, the strategy will be less successful. The emphasis should therefore be on how to overcome these barriers, rather than simply how to avoid them. ECOCITY provides a useful illustration of the ways in which such barriers arise, and of how obstacles have been overcome, in case study cities.
Many information systems projects take considerably longer than originally planned. State-local projects, with their added layers of legal and organizational complexity, are especially vulnerable to this problem. Since so many different organizations are affected by them, time delays lead to serious difficulties in planning for and adjusting to changes in operations.

Existing state-local systems suffer from the lack of a ubiquitous, consistent computing and communications infrastructure. This makes it difficult or impossible to operate technology supported programs in a consistent way from place to place and organization to organization. It also slows and complicates communication among state and local staff involved in joint programs. New York State is currently embarking on a statewide networking strategy called the NYT that will help solve this problem for future systems.

What are the principal barriers?
In our work in PROSPECTS, we grouped barriers into the four categories listed below. More recent work in TIPP has demonstrated that failure to adopt a logical approach to the process of strategy development can also impose a barrier to effective planning. This Guidebook is designed to help cities avoid this happening. TIPP also provides a set of recommendations.

1) Legal and institutional barriers
These include lack of legal powers to implement a particular instrument, and legal responsibilities which are split between agencies, limiting the ability of the city authority to implement the affected instrument. The survey of European cities in PROSPECTS indicates that land-use, road building and pricing are the policy areas most commonly subject to legal and institutional constraints. Information measures are substantially less constrained than other measures.

2) Financial barriers
These include budget restrictions limiting the overall expenditure on the strategy, financial restrictions on specific instruments, and limitations on the flexibility with which revenues can be used to finance the full range of instruments. PROSPECTS found that road building and public transport infrastructure are the two policy areas which are most commonly subject to financial constraints, with 80% of European cities stating that finance was a major barrier. Information provision is the least affected.

3) Political and cultural barriers
These involve lack of political or public acceptance of an instrument, restrictions imposed by pressure groups, and cultural attributes, such as attitudes to enforcement, which influence the effectiveness of instruments. The surveys in PROSPECTS show that road building and pricing are the two policy areas which are most commonly subject to constraints on political acceptability. Public transport operations and information provision are generally the least affected by acceptability constraints.

4) Practical and technological barriers
While cities view legal, financial and political barriers as the most serious which they face in implementing land use and transport policy instruments, there may also be practical limitations. For land use and infrastructure these may well include land acquisition. For management and pricing, enforcement and administration are key issues. For infrastructure, management and information systems, engineering design and availability of technology may limit progress.
Generally, lack of key skills and expertise can be a significant barrier to progress, and is aggravated by the rapid changes in the types of policy being considered.

How should we deal with barriers in the short term?
It is important not to reject a particular policy instrument simply because there are barriers to its introduction. One of the key elements in a successful strategy is the use of groups of policy instruments which help overcome these barriers. This is most easily done with financial, political and cultural barriers, where one policy instrument can generate revenue to help finance another (for example, fares policy and service improvements), or one can make another more publicly acceptable (for example, rail investment making road pricing more popular). A second important element is effective participation, which can help reduce the severity of institutional and political barriers and encourage joint action to overcome them. Finally, effective approaches to implementation can reduce the severity of many barriers.

How can we overcome barriers in the longer term?
It is often harder to overcome legal, institutional and technological barriers in the short term, and there is also the danger that some institutional and political barriers may get worse over time. However, strategies should ideally be developed for implementation over a 15-20 year timescale. Many of these barriers will no longer apply twenty years hence, and action can be taken to remove others. For example, if new legislation would enable more effective instruments such as pricing to be implemented, it can be introduced. If split responsibilities make achieving consensus impossible, new structures can be put in place. If finance for investment in new infrastructure is justified, the financial rules can be adjusted. TIPP makes a number of recommendations for longer-term institutional change. Barriers should thus be treated as challenges to be overcome, not simply impediments to progress. A key element in a long-term strategy should be the identification of ways of resolving these longer-term barriers.

The need for the improved implementation of information technology (IT) has been identified in both empirical and highly structured research studies as being critical to effective innovation and development at an industry and enterprise level. This need is greater in the construction industry as it has been relatively slow to embrace the full potential of IT-based technologies.
Beany & Gordon (1988) researched the barriers that exist to the successful introduction of IT systems (in this case CAD/CAM) and reported that they fall into three categories: structural, human and technical. Structural barriers are those factors inherent in the organisation's structure or systems that are not compatible with the new technology. These can include communication, authority flows and planning systems, and reflect how the organisation has traditionally done things. A failure to perceive the strategic benefits of the investment, a lack of co-ordination and co-operation due to organisational fragmentation, and a perception of high risk are all symptoms of organisational problems. Human barriers include the psychological problems that arise in most periods of change, such as uncertainty avoidance and resistance to loss of power or status. Technical barriers, they noted, are factors in the technology itself, such as lack of system compatibility.

Functional barriers relate to information and work flows and include the identification of strategic information needs. Technical factors relate to the need for flexibility and information-handling capacity, with the danger that disjointed islands of automation are created which limit information flow. Human factors include the need for job redefinition and the resistance to change which lead to a lack of company-wide flexibility or commitment. Other authors have confirmed that the key barriers to IT implementation tend to be organisational rather than technical, and that these barriers are often understated. Galliers, for instance, focused on general management problems in the successful planning of strategic information systems and concluded that the key factors were the attitude, commitment and involvement of management; the current sophistication of IS within the company; the ability to measure and justify the benefits of strategic IS; and the integration of IS into business strategy. The perceived importance of these barriers is, however, likely to change over time.

Assignment # 2 - Risk associated with business and IS/IT change

Based on the organization(s) that you visited, what do you think are the risks associated with business and IS/IT change? (1000 words)

Change is inevitable in any aspect of life. When it comes to business and IT, change is constant. According to the company that we interviewed, Concentrix, undergoing change is not that easy. A feasibility study should be done first before modifying their system, and according to their MIS Manager the probable risks are the following:
  • Security
  • System failure
  • Loss of data
  • Cost


In organizations without a formal information technology (IT) change management process, it is estimated that 80% of IT service outage problems are caused by updates and alterations to systems, applications, and infrastructure. Consequently, one of the first areas to address to improve service reliability is to track all changes and systematically manage change with full knowledge of the risks of the change and the potential organizational impact. While tracking change events is fairly well understood and is a common practice, consistently and reliably predicting the impact of change requires a disciplined, standards-based approach to assessing risk and likelihood of impact, a technique not usually found in off-the-shelf change tracking tools.

The automated process for determining composite risk assessment is broken into two standardized evaluations: the organizational criticality of the system to be changed and the likelihood of adverse impact resulting from the change. First, on an annual basis, the criticality of the system is appraised in order to establish its relative value to the enterprise. The standardized criteria for this assessment are:

  • Number of users that could be impacted by a service interruption
  • Financial impact of an extended service outage or unrecoverable loss of data
  • Likelihood that a system/service failure could result in:
    -Disclosure of sensitive information that needs to be protected
    -Misuse of client-owned resources
    -Malicious interruption of services or research operations
  • Possibility that a system/service failure could result in an event or condition that may have adverse safety, health, security, operational, environmental, or mission implications
  • Potential future impacts based on prior system/service interruption experiences

Secondly, when a change is proposed, each request is graded against the following criteria:
-How many users will be visibly affected by the proposed change?
-What is the anticipated difficulty for user and support personnel to learn the new or modified system/service?
-What is the stability and supportability of the technology or vendor products utilized by the system?
-In the event the change implementation fails or adversely impacts other systems or services, what will be the impact of executing the implementation contingency plan?
-Based upon past experience, what is the likelihood of failure or adverse problems resulting from the change?

The criticality appraisal blends with the change risk evaluation to produce a composite risk assessment that is used to pre-populate an automated release plan that guides the implementation of the change. Depending on the level of the composite risk, requirements of the release plan such as testing rigor and end-user communications are strengthened to mitigate higher levels of risk. Conversely, a lower risk score results in a reduced scope of change implementation requirements.
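As a rough sketch of how such a composite risk assessment might be automated, the following Python snippet combines an annual criticality appraisal with a per-change risk grading and maps the result onto a level of release-plan rigor. The 1-to-5 scales, the simple averaging and the thresholds are my own assumptions for illustration, not the scoring rules of the article or of any particular change-tracking tool.

# Hypothetical sketch of a composite risk assessment for change management.
# The 1-5 scales, averaging and thresholds are illustrative assumptions only.

def criticality_score(users_impacted, financial_impact, security_exposure,
                      safety_impact, prior_outage_history):
    """Annual appraisal of the system's value to the enterprise (each input 1-5)."""
    return (users_impacted + financial_impact + security_exposure
            + safety_impact + prior_outage_history) / 5

def change_risk_score(users_affected, learning_difficulty, technology_stability,
                      rollback_impact, past_failure_likelihood):
    """Grading of a proposed change against the request criteria (each input 1-5)."""
    return (users_affected + learning_difficulty + technology_stability
            + rollback_impact + past_failure_likelihood) / 5

def release_plan(criticality, change_risk):
    """Blend the two scores into a composite and choose the release-plan rigor."""
    composite = (criticality + change_risk) / 2
    if composite >= 4:
        return "high risk: full testing rigor, end-user communications, rollback rehearsal"
    if composite >= 2.5:
        return "medium risk: standard testing and change-window notification"
    return "low risk: reduced implementation requirements"

# Example: a fairly critical system receiving a routine, well-understood change.
crit = criticality_score(4, 5, 3, 2, 3)     # annual appraisal
risk = change_risk_score(2, 1, 2, 2, 1)     # grading of this particular request
print(round(crit, 1), round(risk, 1), "->", release_plan(crit, risk))

In practice the weights and thresholds would be tuned to the organization, but the shape of the calculation stays the same: critical systems and risky changes drive a more demanding release plan, while a low composite score relaxes the implementation requirements.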

In an era of dynamism, the only thing that remains constant is change. Organizations execute change programmes to implement strategic, regulatory and other such business drivers. Whatever the organization, whichever sector it exists in, be it public or private, and howsoever it may be structured, it has to face an ever-increasing rate of change. These business transformation changes can be implemented and managed effectively by using Programme Management methodologies. As is inherent in any organizational activity, the successful delivery of these Business Change Programmes lies to a large extent in successfully managing the risks faced while executing the programme.

The paper attempts to focus on the Risk Management activities that need to be considered in such a Programme environment. It tries to present a framework that could be tied into the practices of Programme Management to effectively manage the loose ends presented by Risks.
Risk is defined as the uncertainty of outcome, whether positive opportunity or negative threat, of actions and events. The risk has to be assessed in respect of the likelihood of something happening, and the impact, which would arise if it actually happens. Risk management includes identifying and evaluating risks and then suitably responding to them. Risk management enables informed decisions. Managers at all levels in an organization such as programme managers, project managers, general managers and executive managers make multiple decisions each day as a primary function of their jobs. Apart from having access to factual information, knowledge of potential risks faced can improve the decision process by allowing the decision maker to weigh potential alternatives or trade-offs in order to maximize the reward/risk ratio.
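To illustrate the idea of weighing alternatives to maximize the reward/risk ratio, here is a tiny Python sketch with made-up numbers; the alternatives, rewards, failure probabilities and costs are invented purely for illustration.

# Illustrative only: made-up alternatives with an estimated reward and a simple
# risk exposure (probability of failure multiplied by the cost if it fails).

alternatives = {
    "upgrade in place":   {"reward": 200_000, "p_fail": 0.10, "fail_cost": 50_000},
    "replace the system": {"reward": 500_000, "p_fail": 0.35, "fail_cost": 400_000},
    "do nothing":         {"reward": 10_000,  "p_fail": 0.05, "fail_cost": 10_000},
}

def reward_risk_ratio(option):
    """Reward divided by expected loss (probability of failure x cost of failure)."""
    expected_loss = option["p_fail"] * option["fail_cost"]
    return option["reward"] / expected_loss if expected_loss else float("inf")

for name, option in alternatives.items():
    print(f"{name}: reward/risk ratio = {reward_risk_ratio(option):.1f}")

best = max(alternatives, key=lambda name: reward_risk_ratio(alternatives[name]))
print("Highest reward/risk ratio:", best)

The numbers are arbitrary, but the exercise shows how knowledge of potential risks lets a decision maker compare trade-offs rather than choosing on expected reward alone.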

Risk management brings a level of predictability to the dynamic environment within which programmes of business change operate. By understanding and bounding various uncertainties faced by the programme, the programme manager is able to manage the risks effectively.


Three broad categories of risk arise in such a programme environment:

(i) Risks that would lead to the Programme not being implemented. These would be risks to the Strategy that initiated the Business Change; they could also be risks to the Operations that will undergo the change, depending upon the type and criticality of the change.

(ii) Risks created by the programme. These are Business Change Risks that need to be mitigated by proper planning at the Strategic level and fed back into the same Programme by changing its scope, or into another Business Change Programme if they relate to areas outside the scope of the current change.

(iii) Risks to the programme itself. These are risks internal to the Programme and could be due to one or more of the projects that fall under the umbrella of the programme, or to any other transformational activity that the programme is undertaking.

From the interview that we had and all the articles that I have read, I've learned that "you have to risk it to get the biscuit". Just like in IT and in any organization, nothing will grow and develop unless you are willing to try and take calculated risks.