Friday, September 6, 2019

Images of Women's Sexuality in Advertisements Essay

Considering the time an average American spends in front of the TV screen, it is obvious that the things he or she sees there greatly influence his or her perception of the surrounding world. The stereotypes the media offer us have a great impact on our perception of people. Thus, it is no wonder that the images of women's sexuality in advertisements partly form our gender stereotypes. To get more information on this issue, I analyzed an article by Christina N. Baker, published in Sex Roles: A Journal of Research in January 2005. The article is titled "Images of women's sexuality in advertisements: a content analysis of Black- and White-oriented women's and men's magazines." It analyzes the stereotypes of women's sexuality presented in advertisements and the differences between those stereotypes for White and Black women. It describes the peculiarities of the images created for representatives of different races, and it analyzes the origins and influence of the stereotypes that arise from television and magazine commercials.

It has always seemed to me that people in our society share a distorted view of women and their sexuality. They express notions about it that are sometimes totally ridiculous (for example, that a woman should not express her sexual desires because doing so is socially disapproved). Those notions are very widespread nowadays, and for a long time I have wondered why people trust those stereotypes and why many men judge the women who surround them on the strength of such notions. Later I understood that the media also play a considerable role in forming gender stereotypes. Thus I wanted to know more about the specific features of these stereotypes and about the mechanism of their functioning.

The author developed four hypotheses about the portrayal of women in the media. The first was that sexual women would be portrayed with characteristics such as submissiveness and dependency in both women's and men's mainstream/White-oriented magazines. According to the author's literature review, we live in a patriarchal society in which men are considered superior to women, and thus men set the criteria of sexuality for women. For men, "…sexual attractiveness in women is associated with physical beauty. A sign of status for a man is to have a physically attractive woman by his side. The more physically attractive a woman is, the more prestige she will bring to her male partner/spouse." The woman portrayed in commercials and on the pages of magazines is bound to be submissive, as this is one of the demands of a patriarchal society. The author also notes that some scientific findings hypothesize that the continuous portrayal in the media of women as submissive sex objects, whose main goal is to satisfy men's desires, reinforces the gender hierarchy existing in contemporary society.

The second hypothesis was that sexual Black women are more likely than sexual White women to be portrayed as dominant and independent. The author noted that although all women are likely to be portrayed as sexual objects, White women are seen as the standard of beauty and are therefore portrayed as sex objects more frequently than Black women are. It is also the case that Black women have long been depicted as dominant towards Black men. Historically, Black men have often been unable to get a decent job, so Black women have frequently had to bring home the bacon.
This is the reason why Black women are often portrayed as the heads of families in advertisements. The author also noted that two stereotypes exist about Black women: the "Mommy," the matriarch of a big family, and the mother raising her child by herself. A stereotype also exists in contemporary society that Black women usually do not have a husband. The author adds that the Black matriarch is portrayed as deviant because she challenges the assumptions of the patriarchal family.

The third hypothesis was that Black-oriented magazines are more likely than White-oriented magazines to portray sexual women as dominant and independent. The literature review conducted by the author states that although television commercials targeted at Black audiences contained about as many stereotypical images of Blacks as those directed toward Whites, Black-oriented magazines portrayed women in more active and even aggressive roles. It was also found that in magazines for Black readers women were more often portrayed in the role of the mother than women in magazines for White readers. A characteristic feature of the portrayal of women in Black-oriented magazines was that women were shown in extended families rather than nuclear ones, which conforms to the matriarch stereotype.

The last hypothesis was that Black women would be portrayed with physical characteristics that conform to White standards of beauty, and that Black women would be more likely to have European features in White-oriented magazines than in Black-oriented magazines. The research showed that nowadays even Black-oriented magazines portray women who conform to the White standard of beauty. The skin of these women is dark, but their features are thin, they are slender, and they usually have long, straight hair. In fact, the only phenotypic difference between the Caucasian and African-American models is skin color. Black-oriented magazines do not take into account that the features portrayed are not typical of Black women and do not correspond to African canons of beauty. Sexual attractiveness in our society is associated with Whiteness, so the magazines try to fulfill the desires of their readers.

The findings of the article's author coincide with the results of research conducted by psychologists and sociologists over the last fifty years. For example, Poe (1976) and Silverstein and Silverstein (1974) found that in most TV advertisements women were less physically active than men and were the recipients of the advice given by men. This confirms the first hypothesis of the article's author, the one which says that women are depicted as submissive to men. The prevailing belief is that a woman has to be weak in order to be attractive.

Thursday, September 5, 2019

Improving the Performance of Overbooking

Improving the Performance of Overbooking by Application Collocation Using an Affinity Function

ABSTRACT: One of the main features provided by clouds is elasticity, which allows users to dynamically adjust resource allocations depending on their current needs. Overbooking describes resource management in any manner where the total available capacity is less than the theoretical maximal requested capacity. It is a well-known technique for managing scarce and valuable resources that has been applied in various fields for a long time. The main challenge is how to decide the appropriate level of overbooking that can be achieved without impacting the performance of the cloud services. This paper builds on an overbooking framework that performs admission control decisions based on fuzzy-logic risk assessments of each incoming service request, and it uses a collocation (affinity) function to define the similarity between applications. Similar applications are then collocated for better resource scheduling.

I. INTRODUCTION

Scheduling, or placement, of services is the process of deciding where services should be hosted. Scheduling is part of the service deployment process and can take place both externally to the cloud, i.e., deciding on which cloud provider the service should be hosted, and internally, i.e., deciding which PM (physical machine) in a datacenter a VM should run on. For external placement, the decision on where to host a service can be taken either by the owner of the service or by a third-party brokering service. In the first case, the service owner maintains a catalog of cloud providers and negotiates with them the terms and costs of hosting the service. In the latter case, the brokering service takes responsibility both for the discovery of cloud providers and for the negotiation process. Regarding internal placement, the decision of which PMs in the datacenter should host a service is taken when the service is admitted into the infrastructure. Depending on criteria such as the current load of the PMs, the size of the service, and any affinity or anti-affinity constraints [23], i.e., rules for co-location of service components, one or more PMs are selected to run the VMs that constitute the service. Figure 1 illustrates a scenario with new services of different sizes (small, medium, and large) arriving at a datacenter where a number of services are already running.

Figure 1: Scheduling in VMs

Overload can happen in an oversubscribed cloud. Conceptually, there are two steps for handling overload, namely detection and mitigation, as shown in Figure 2.

Figure 2: Oversubscription view

A physical machine has CPU, memory, disk, and network resources. Overload on an oversubscribed host can manifest for each of these resources. When there is memory overload, the hypervisor swaps pages from physical memory to disk to make room for new memory allocations requested by VMs (virtual machines). The swapping process increases disk read and write traffic and latency, causing programs to thrash. Similarly, when there is CPU overload, VMs and the monitoring agents running alongside them may not get a chance to run, thereby increasing the number of processes waiting in a VM's CPU run queue. Consequently, any monitoring agents running inside the VM may also fail to run, rendering the cloud provider's view of the VMs inaccurate.
Disk overload in a shared SAN storage environment can increase network traffic, whereas with local storage it can degrade the performance of applications running in VMs. Lastly, network overload may result in an underutilization of CPU, disk, and memory resources, rendering ineffective any gains from oversubscription.

Overload can be detected by applications running on top of VMs or by the physical host running the VMs. Each approach has its pros and cons. The applications know their performance best, so when they cannot obtain the provisioned resources of a VM, it is an indication of overload. The applications running on VMs can then funnel this information to the management infrastructure of the cloud. However, this approach requires modification of the applications. With overload detection within the physical host, the host can infer overload by monitoring the CPU, disk, memory, and network utilization of each VM process and by monitoring the usage of each of its own resources. The benefit of this approach is that no modification to the applications running on VMs is required; however, overload detection may not be fully accurate.

II. RELATED WORK

The scheduling of services in a datacenter is often performed with respect to some high-level goal [36], such as reducing energy consumption, increasing utilization [37] and performance [27], or maximizing revenue [17, 38]. However, during operation of the datacenter, the initial placement of a service might no longer be suitable due to variations in application and PM load. Events like the arrival of new services, existing services being shut down, or services being migrated out of the datacenter can also affect the quality of the initial placement. To avoid drifting too far from an optimal placement, and thus reducing the efficiency and utilization of the datacenter, scheduling should be performed repeatedly during operation. Information from monitoring probes [23], as well as events such as timers, the arrival of new services, or the startup and shutdown of PMs, can be used to determine when to update the mapping between VMs and PMs.

Scheduling of VMs can be considered a multi-dimensional variant of the Bin Packing problem [10], where VMs with varying CPU, I/O, and memory requirements are placed on PMs in such a way that resource utilization and/or other objectives are maximized. The problem can be addressed, e.g., by using integer linear programming [52] or by performing an exhaustive search of all possible solutions. However, as the problem is complex and the number of possible solutions grows rapidly with the number of PMs and VMs, such approaches can be both time and resource consuming. A more resource-efficient, and faster, approach is the use of greedy heuristics like the First-Fit algorithm, which places a VM on the first available PM that can accommodate it. However, such approximation algorithms do not normally generate optimal solutions. All in all, approaches to solving the scheduling problem often lead to a trade-off between the time needed to find a solution and the quality of the solution found.

Hosting a service in the cloud comes at a cost, as most cloud providers are driven by economic incentives. However, the service workload and the available capacity in a datacenter can vary heavily over time, e.g., cyclically during the week but also more randomly [5]. It is therefore beneficial for providers to be able to dynamically adjust prices over time to match the variation in supply and demand.
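To make the First-Fit heuristic mentioned in the related work above concrete, the following minimal Python sketch places each VM on the first PM with enough remaining CPU and memory. The PM and VM structures, capacities, and field names are assumptions made for this illustration and are not taken from the paper.

    # Minimal First-Fit placement sketch (illustrative only; capacities and
    # field names are assumed for the example, not taken from the paper).

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class VM:
        name: str
        cpu: float   # requested CPU cores
        mem: float   # requested memory in GB

    @dataclass
    class PM:
        name: str
        cpu_capacity: float
        mem_capacity: float
        vms: List[VM] = field(default_factory=list)

        def free_cpu(self) -> float:
            return self.cpu_capacity - sum(v.cpu for v in self.vms)

        def free_mem(self) -> float:
            return self.mem_capacity - sum(v.mem for v in self.vms)

    def first_fit(vm: VM, pms: List[PM]) -> Optional[PM]:
        """Place the VM on the first PM that can accommodate it, or return None."""
        for pm in pms:
            if pm.free_cpu() >= vm.cpu and pm.free_mem() >= vm.mem:
                pm.vms.append(vm)
                return pm
        return None

    if __name__ == "__main__":
        hosts = [PM("pm1", cpu_capacity=8, mem_capacity=32),
                 PM("pm2", cpu_capacity=16, mem_capacity=64)]
        for req in [VM("small", 2, 4), VM("medium", 4, 16), VM("large", 8, 32)]:
            chosen = first_fit(req, hosts)
            print(req.name, "->", chosen.name if chosen else "rejected")

In an overbooking setting, the capacity checks above could be performed against an inflated (overbooked) capacity rather than the raw physical capacity, which is where the admission-control risk assessment discussed later becomes relevant.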
Cloud providers typically offer a wide variety of compute instances, differing in the speed and number of CPUs available to the virtual machine, the type of local storage used (e.g., a single hard disk, a disk array, or SSD storage), whether the virtual machine may be sharing physical resources with other virtual machines (possibly belonging to different users), the amount of RAM, network bandwidth, etc. In addition, the user must decide how many instances of each type to provision. In the ideal case, more nodes mean faster execution, but issues of heterogeneity, performance unpredictability, network overhead, and data skew mean that the actual benefit of utilizing more instances can be less than expected, leading to a higher cost per work unit. These issues also mean that not all the provisioned resources may be optimally used for the duration of the application. Workload skew may mean that some of the provisioned resources are (partially) idle and therefore do not contribute to performance during those periods, but still contribute to cost. Provisioning larger or higher-performance instances is similarly not always able to yield a proportional benefit. Because of these factors, it can be very difficult for a user to translate their performance requirements or objectives into concrete resource specifications for the cloud.

There have been several works that attempt to bridge this gap, most of which focus on VM allocation [HDB11, VCC11a, FBK+12, WBPR12] and determining good configuration parameters [KPP09, JCR11, HDB11]. Some more recent work also considers shared resources such as the network or data storage [JBC+12], which is especially relevant in multi-tenant scenarios. Other approaches consider the provider side of things, because it can be equally difficult for a provider to determine how to optimally service resource requests [RBG12]. Resource provisioning is complicated further because performance in the cloud is not always predictable and is known to vary even among seemingly identical instances [SDQR10, LYKZ10]. There have been attempts to address this by extending resource provisioning to include requirement specifications for things such as network performance, rather than just the number and type of VMs, in an attempt to make performance more predictable [GAW09, GLW+10, BCKR11, SSGW11]. Others try to explicitly exploit this variance to improve application performance [FJV+12].

Accurate provisioning based on application requirements also requires the ability to understand and predict application performance. There are a number of approaches to estimating performance: some are based on simulation [Apad, WBPG09], while others use information based on workload statistics derived from debug execution [GCF+10, MBG10] or from profiling sample data [TC11, HDB11]. Most of these approaches still have limited accuracy, especially when it comes to I/O performance. Cloud platforms run a wide array of heterogeneous workloads, which further complicates this issue [RTG+12].

Related to provisioning is elasticity, which means that it is not always necessary to determine the optimal resource allocation beforehand, since it is possible to dynamically acquire or release resources during execution based on observed performance. This suffers from many of the same problems as provisioning, as it can be difficult to accurately estimate the impact of changing the resources at runtime, and therefore to decide when to acquire or release resources, and which ones.
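As one concrete (and deliberately simple) way to frame the acquire-or-release decision just described, the following Python sketch implements a reactive threshold rule over an observed latency metric. The metric, target, thresholds, and instance limits are hypothetical values chosen for illustration; they are not part of the surveyed work.

    # Minimal reactive scaling rule (illustrative only).
    # latency_ms, target_ms, thresholds, and instance limits are hypothetical.

    def scaling_decision(latency_ms: float, target_ms: float,
                         current_instances: int,
                         min_instances: int = 1, max_instances: int = 10) -> int:
        """Return the new instance count based on observed vs. target latency."""
        if latency_ms > 1.2 * target_ms and current_instances < max_instances:
            return current_instances + 1   # scale out: performance is below target
        if latency_ms < 0.6 * target_ms and current_instances > min_instances:
            return current_instances - 1   # scale in: resources are likely idle
        return current_instances           # within the dead band: do nothing

    if __name__ == "__main__":
        print(scaling_decision(latency_ms=250, target_ms=200, current_instances=3))  # -> 4
        print(scaling_decision(latency_ms=100, target_ms=200, current_instances=3))  # -> 2

The dead band between the two thresholds is a common way to avoid oscillating between scale-out and scale-in decisions, although, as the text notes, such a rule still cannot anticipate how much a change will actually help.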
Exploiting elasticity is also further complicated when workloads are statically divided into tasks, as it is not always possible to preempt those tasks [ADR+12]. Some approaches for improving workload elasticity depend on the characteristics of certain workloads [ZBSS+10, AAK+11, CZB11], but these characteristics may not apply in general. It is therefore clear that it can be very difficult to decide, for either the user or the provider, how to optimally provision resources and to ensure that the provisioned resources are fully utilized. There is very active interest in improving this situation, and the approaches proposed in this thesis similarly aim to improve provisioning and elasticity by mitigating common causes of inefficient resource utilization.

III. PROPOSED OVERBOOKING METHOD

The proposed model utilizes the concept of overbooking introduced in [1] and schedules the services using the collocation function.

3.1 Overbooking: Overbooking exploits the overestimation of required job execution time. The main idea of overbooking is to schedule additional jobs beyond the nominal capacity. An overbooking strategy used in an economic model can improve the system utilization rate and occupancy. In an overbooking strategy, every job is associated with a release time and a finishing deadline, as shown in Figure 3; a fee is earned for successful execution and a penalty is incurred for violating the deadline.

Figure 3: Strategy of Overbooking

Data centers can also take advantage of these characteristics to accept more VMs than the number of physical resources the data center allows. This is known as resource overbooking or resource overcommitment. More formally, overbooking describes resource management in any manner where the total available capacity is less than the theoretical maximal requested capacity. This is a well-known technique for managing scarce and valuable resources that has been applied in various fields for a long time.

Figure 4: Overview of Overbooking

Figure 4 shows a conceptual overview of cloud overbooking, depicting how two virtual machines (gray boxes) running one application each (red boxes) can be collocated inside the same physical resource (Server 1) without noticeable performance degradation. The overall components of the proposed system are depicted in Figure 5.

Figure 5: Components of the proposed model

The complete process of the proposed model is as follows:
- The user requests the services from the scheduler.
- The scheduler first verifies the request with the Admission Control (AC) and then calculates the risk of that service.
- If a service is already being scheduled, the new request is stored in a queue; requests are scheduled in FIFO order.
- To complete the scheduling, the collocation function keeps intermediate data nodes side by side, and a node is selected based on its resource provisioning capacity. If the first node does not have the capacity to complete the task, the collocation function searches the next node until a node with sufficient capacity is found (see the collocation sketch below).

The Admission Control (AC) module is the cornerstone of the overbooking framework. It decides whether a new cloud application should be accepted or not, taking into account the current and predicted status of the system and assessing the long-term impact, weighing improved utilization against the risk of performance degradation. To make this assessment, the AC needs the information provided by the Knowledge DB regarding the predicted data center status and, if available, predicted application behavior.
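The following short Python sketch illustrates one way the collocation step described above could look, assuming applications are described by CPU/memory/I/O demand profiles and that "similarity" is a simple inverse-distance measure compared against a threshold. The profile format, the similarity measure, and the threshold value are assumptions made for illustration; they are not the paper's exact affinity function.

    # Illustrative collocation sketch: prefer a node that already hosts a
    # "similar" application and still has capacity; otherwise fall back to
    # the first node with capacity. Profile format, similarity measure, and
    # threshold are assumptions, not the paper's exact affinity function.

    from typing import Dict, List, Optional

    Profile = Dict[str, float]  # e.g. {"cpu": 2.0, "mem": 4.0, "io": 0.5}

    def similarity(a: Profile, b: Profile) -> float:
        """Simple inverse-distance similarity in (0, 1] over shared resource keys."""
        dist = sum((a[k] - b[k]) ** 2 for k in a) ** 0.5
        return 1.0 / (1.0 + dist)

    def pick_node(app: Profile, nodes: List[dict], threshold: float = 0.5) -> Optional[dict]:
        for node in nodes:
            has_capacity = all(node["free"][k] >= app[k] for k in app)
            hosts_similar = any(similarity(app, other) >= threshold
                                for other in node["apps"])
            if has_capacity and hosts_similar:
                return node
        for node in nodes:                  # fallback: first node with capacity
            if all(node["free"][k] >= app[k] for k in app):
                return node
        return None

    if __name__ == "__main__":
        nodes = [{"free": {"cpu": 4, "mem": 8, "io": 1},
                  "apps": [{"cpu": 2, "mem": 4, "io": 0.5}]},
                 {"free": {"cpu": 8, "mem": 16, "io": 2}, "apps": []}]
        print(pick_node({"cpu": 2, "mem": 4, "io": 0.5}, nodes))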
The Knowledge DB (KDB) module measures and profiles the different applications' behavior, as well as the resources' status over time. This module gathers information regarding the CPU, memory, and I/O utilization of both virtual and physical resources. The KDB module has a plug-in architectural model that can use existing infrastructure monitoring tools as well as shell scripts; these are interfaced with a wrapper that stores the information in the KDB.

The Smart Overbooking Scheduler (SOS) allocates both the new services accepted by the AC and the extra VMs added to deployed services by scale-up, and it also de-allocates the ones that are no longer needed. Essentially, the SOS module selects the best node and core(s) to allocate the new VMs based on the established policies. These decisions have to be carefully planned, especially when performing resource overbooking, as physical servers have limited CPU, memory, and I/O capabilities.

The risk assessment module provides the Admission Control with the information needed to make the final decision of accepting or rejecting the service request; a new request is only admitted if the final risk is below a pre-defined level (the risk threshold). The inputs to this risk assessment module are:
- Req: the CPU, memory, and I/O capacity required by the new incoming service.
- UnReq: the difference between the total data center capacity and the capacity requested by all running services.
- Free: the difference between the total data center capacity and the capacity used by all running services.

Calculating the risk of admitting a new service involves many uncertainties. Furthermore, choosing an acceptable risk threshold has an impact on data center utilization and performance. High thresholds result in higher utilization at the expense of exposing the system to performance degradation, whilst lower values lead to lower but safer resource utilization. The main aim of this system is to use the affinity function to help the scheduling system decide which applications are to be placed side by side (collocated). The affinity function uses threshold properties to define the similarity between applications; similar applications are then collocated for better resource scheduling.

IV. ANALYSIS

The proposed system is tested for the time taken to search for and schedule the resources using the collocation function, and it is compared with the system developed in [1]. The system in [1] does not contain a collocation function, so its scheduling process takes more time than that of the proposed system. The comparison results are depicted in Figure 6.

Figure 6: Time taken to Complete Scheduling

The graphs depict that the modified (proposed) overbooking scheme takes roughly the same time to complete the scheduling irrespective of the number of requests.
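As a rough illustration of how the Req, UnReq, and Free quantities defined for the risk assessment module above might feed an admission decision, here is a minimal Python sketch that computes a simple risk score and compares it against a threshold. The scoring formula and the threshold value are invented for this example; the actual framework uses a fuzzy-logic assessment.

    # Illustrative admission-control sketch. The risk formula and threshold are
    # invented for the example; the actual framework uses fuzzy-logic assessment.

    def risk_score(req: float, unreq: float, free: float, total: float) -> float:
        """Crude risk in [0, 1]: how close admitting the request pushes real usage
        toward total capacity, discounted by the fraction of capacity that no
        running service has even requested (unreq), since that capacity cannot be
        suddenly claimed by already-admitted services."""
        projected_used = (total - free) + req
        utilization_risk = min(max(projected_used / total, 0.0), 1.0)
        headroom_relief = min(max(unreq / total, 0.0), 1.0)
        return max(0.0, utilization_risk - 0.5 * headroom_relief)

    def admit(req: float, unreq: float, free: float, total: float,
              threshold: float = 0.8) -> bool:
        return risk_score(req, unreq, free, total) < threshold

    if __name__ == "__main__":
        # total = 100 units, 60 in use (free = 40), 30 units unrequested (unreq = 30)
        print(admit(req=20, unreq=30, free=40, total=100))   # risk = 0.80 - 0.15 = 0.65 -> True
        print(admit(req=45, unreq=5,  free=40, total=100))   # risk = 1.00 - 0.025 = 0.975 -> False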

Wednesday, September 4, 2019

The Corrosion Of Metals Engineering Essay

Each year, billions of dollars are spent on repairing and preventing the damage to metal parts caused by corrosion, the electrochemical deterioration of metals. The majority of metallic materials in a practical context are exposed to corrosion in both atmospheric and aqueous environments. Metallic corrosion has become a global problem which has negatively affected industrialised society, which is why it has been studied in such depth since the beginning of the industrial revolution in the late eighteenth century. Corrosion also affects average daily life both directly, as it affects commonly used possessions, and indirectly, as producers and suppliers of goods and services incur corrosion costs which they pass on to consumers (ASM International, 2012). The effects of corrosion are distinctly recognisable on automobile parts, charcoal grills and metal tools, all of which will have a depleted efficiency once corroded. Corrosion may also result in contamination, which then poses health risks; for example, pollution due to product escaping from corroded equipment or due to a corrosion product itself. As a result of these consequences, corrosion prevention has been studied in great depth. Corrosion of various metals may be prevented by applying a coating of paint, lacquer, grease or a less active metal to keep out air and moisture. These coatings will continue to suppress the effects of corrosion so long as they stay intact. Examples of metals that are heavily protected in the industrial world are iron and aluminium. Vast quantities of the ores of each metal are mined and processed each year using large-scale chemical reactions to produce metals of the purity required for their end use. For this report, the chemistry involved in the corrosion of both iron and aluminium will be researched, as well as the methods employed to prevent their corrosion. Justification as to why corrosion happens will be explained with reference to physical and chemical properties, electrochemistry, equilibrium, rates of reaction, enthalpy and solubility at every point where it is appropriate.

Before explaining why corrosion happens, it is important to define corrosion in terms of electrochemical processes. An electrochemical reaction is defined as a chemical reaction involving the transfer of electrons through redox. Corrosion is a broad and complex subject that can be examined in three different categories: electrochemical corrosion, galvanic corrosion and electrolytic corrosion. In all forms of corrosion, four components must be present: an anode, a cathode, a metallic path for electrons to flow through, and an electrolyte for the ions to flow through. Both the anode and the cathode must be in contact with the electrolyte to allow the ions to flow. As well as this, oxygen and hydrogen must also be available, either directly or as a result of chemical action and the resultant dissociation of water into its two constituents. In this report, electrochemical corrosion will be investigated in terms of its spontaneous nature and self-sustainability.

Firstly, spontaneity depends on the sign of the free energy change. Gibbs free energy can be defined by the equation ΔG = ΔH − TΔS, where ΔH is the enthalpy change, ΔS is the entropy change and T is the temperature in kelvins. When ΔG is negative, the reaction will occur spontaneously (Zhang, H. 2012). For this to occur the entropy must increase and the enthalpy must decrease.
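To make the spontaneity criterion concrete, here is a small worked check using hypothetical round numbers (not measured thermodynamic data) for an exothermic, entropy-increasing reaction at room temperature:

    ΔG = ΔH − TΔS
       = (−100,000 J/mol) − (298 K)(+50 J/(mol·K))
       = −100,000 J/mol − 14,900 J/mol
       = −114,900 J/mol ≈ −115 kJ/mol

Since ΔG is negative, such a reaction would proceed spontaneously, consistent with the statement above that a decreasing enthalpy and an increasing entropy both favour spontaneity.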
The entropy requirement can be understood because a spontaneous system tends towards disorder, which directly corresponds to an increase in entropy. Also, the change in enthalpy must be negative, as thermal energy is released from the energy stored within chemical bonds in a spontaneous system. Furthermore, in this spontaneous electrochemical process the anode is the negative electrode and the cathode is the positive electrode. Note that metals are used as electrodes because they are good conductors of electric current: metallic bonding allows the electrons to be delocalised and to move relatively freely. When the two electrodes are connected by a wire, free electrons flow through the wire from the anode to the cathode, forming an electric current. The anode and cathode are submerged in separate solutions corresponding to the elements of the two electrodes; within the electrolyte, negative ions migrate towards the anode and positive ions towards the cathode. The anode atoms are oxidised as they lose electrons and form positive ions, which then dissolve into the solution. This results in a loss of the overall quantity of the anode metal (zinc, in a typical voltaic cell). In practical terms, this could be considered the pitting stage of the corrosion process, pitting being a form of extremely localized corrosion that leads to the creation of small holes in the metal (ASM International, 1987). Electrons formed at the anode travel to the cathode, where they combine with the positive ions in solution, which are deposited as the respective metal; the cathodic ions in solution are therefore reduced as they gain electrons. This production of extra metal at the cathode can be compared with rust, a reddish- or yellowish-brown flaky coating of iron oxide that is formed on a metal by redox reactions.

With just this in mind, the electric current would flow for only a limited time, as positive ions would build up at the anode while increasing numbers of electrons are pumped into the cathode. The result is an excess positive charge at the anode that attracts electrons and prevents them from moving away, while the negative build-up at the cathode repels incoming electrons. As a consequence of this build-up of charge, no electron flow occurs and the cell eventually fails (Dynamic Science, 2012). Note that a solution cannot sustain a net charge. To negate this issue, a salt bridge is used, which contains ions that complete the circuit by moving freely from the bridge into the half-cells. The substance placed in the salt bridge is usually an inert electrolyte whose ions are neither involved in any electrochemical change nor react chemically with the electrolytes in the two half-cells (IIT, 2012). As well as completing the circuit, the salt bridge ensures that the two half-cells remain electrically neutral. It does this by passing negative ions into the anodic half-cell, where there is an accumulation of extra positive ions due to oxidation, resulting in a slightly positive charge. Similarly, an accumulation of negative ions exists in the cathodic half-cell due to the removal of positive ions by reduction, and electrical neutrality is once again achieved by the salt bridge providing positive ions to the cathodic half-cell. Thus, the salt bridge maintains electrical neutrality.

IRON CORROSION

Only a few metals, such as copper, gold and platinum, occur naturally in their elemental forms.
Most metals occur in nature as oxides in ores, combined with some unusable material like clay or silica. Ores must be processed to get the pure metals out of them, and there are nearly as many different processes for this purpose as there are metals. The process, as well as the elements present, greatly influences the properties of the metal. An important characteristic of metals is the extremely significant effect that very small amounts of other elements can have upon their properties. The huge difference in properties resulting from a small amount of carbon alloyed with iron to make steel is an example of this.

Taking into consideration the amount of iron that is used globally, the effect of corrosion on iron alone costs millions of dollars each year. The problem with iron, as with many other metals, is that the oxide formed by oxidation does not firmly adhere to the surface of the metal and flakes off easily, causing pitting (KKC, 2012). Extensive pitting eventually causes structural weakness and disintegration of the metal. The iron oxide acts like a sacrificial anode, a stronger reducing agent than iron, that is oxidised instead of the protected metal; therefore it can be said to act as the anode. Since the oxide does not firmly adhere, however, it does little to protect the iron metal.

As mentioned, iron in contact with moisture and air (oxygen) is corroded by a redox reaction. The anode reaction can be expressed as an oxidation of iron atoms:

    Fe(s) → Fe²⁺(aq) + 2e⁻

Both water and oxygen are required for the next sequence of reactions. The iron(II) ions are further oxidised to form ferric (iron(III)) ions:

    Fe²⁺(aq) → Fe³⁺(aq) + e⁻

The electrons released are conducted through the metal and are used to reduce atmospheric oxygen to hydroxide ions at another region of the iron. The cathodic reaction is therefore:

    O₂(g) + 2H₂O(l) + 4e⁻ → 4OH⁻(aq)

Iron atoms dissolve at the anodic sites to form pits, and the Fe²⁺ ions diffuse toward the cathodic sites, while the OH⁻ ions formed at the cathodic sites diffuse toward the anodic sites. Iron(II) hydroxide forms at some location between the cathode and the anode and is then oxidised by atmospheric oxygen to iron(III) hydroxide:

    Fe²⁺(aq) + 2OH⁻(aq) → Fe(OH)₂(s)
    4Fe(OH)₂(s) + O₂(g) + 2H₂O(l) → 4Fe(OH)₃(s)

From here, the iron(III) hydroxide is gradually converted to rust, otherwise known as hydrated iron(III) oxide:

    2Fe(OH)₃(s) → Fe₂O₃·xH₂O(s) + (3 − x)H₂O(l), where x generally equals 3

The formation of rust does not have a designated position, as it can occur at random, away from the actual pitting or corrosion of the iron. A possible explanation of this is that the electrons produced in the initial oxidation of iron can be electrically conducted through the metal, and the iron ions can diffuse through the water layer to another position on the metal surface which is exposed to atmospheric oxygen (KKC, 2012). Also, points of stress, such as where the piece of metal has been shaped, are more active than unstressed regions and thus act as anodic sites. The electric circuit between the anodic and cathodic sites is completed by ion migration; thus, the presence of electrolytes increases the rate of corrosion by hastening this migration. It is therefore evident that the corrosion of iron can be directly related to a voltaic cell, and both can be defined as electrochemical cells due to their spontaneous nature.

ALUMINIUM CORROSION

Similar to iron, aluminium is also susceptible to electrochemical corrosion when exposed to moisture. Aluminium, both in its pure state and alloyed, is truly a remarkable metal, as it is light, tough, strong and readily worked by all common processes.
Unlike iron, however, it has excellent resistance to corrosion in the marine environment, and it requires little maintenance. The fundamental reactions of the corrosion of aluminium in aqueous media have been the subject of many studies. In simplified terms, the oxidation of aluminium in water proceeds according to the equation (Elsevier, 2012):

    Al(s) → Al³⁺(aq) + 3e⁻

This oxidation is balanced by a simultaneous reduction reaction, similar to the case of iron, in which ions available in the solution consume the liberated electrons. In an aqueous solution such as fresh water, seawater or moisture, thermodynamic considerations allow only two possible reduction reactions. One is the reduction of water (hydrogen evolution):

    2H₂O(l) + 2e⁻ → H₂(g) + 2OH⁻(aq)

The other is the reduction of oxygen dissolved in the moisture:

    O₂(g) + 2H₂O(l) + 4e⁻ → 4OH⁻(aq)

Quite similarly to the corrosion of iron, the aluminium atoms dissolve at the anodic sites to form pits, and the Al³⁺ ions diffuse toward the cathodic sites, while OH⁻ ions are formed at the cathodic sites and diffuse toward the anodic sites. Therefore:

    Al³⁺(aq) + nOH⁻(aq) → Al(OH)ₙ(s), where n generally equals 3

Although aluminium is still susceptible to corrosion, the metal itself is very resistant. Aluminium alloys generally have excellent resistance to atmospheric corrosion and require no protective coatings or maintenance beyond cleaning, which aids greatly in preventing unsightly pitting where dirt or salt accumulate. When aluminium is exposed to oxygen, it forms an oxide surface film that protects it from corrosive attack. The oxide acts as a sacrificial anode, a stronger reducing agent than aluminium; it is oxidised instead of the protected aluminium metal, serving as the anode. For the most part, damage due to atmospheric corrosion is largely limited to fairly slight pitting of the surface with no significant loss of material or strength. Duration of exposure is an important consideration: in aluminium alloys, the rate of corrosion decreases with time to a low steady rate regardless of the type of alloy or the specific environment. Thus the corrosion of both aluminium and iron can be defined as an electrochemical process; the two are similar in nature but have different protection potentials.

PROTECTION METHODS

Corrosion avoidance begins in the design process. Although corrosion concerns may ultimately reduce structural integrity, they should also be considered as a way to decrease monetary loss. Good maintenance practices are another way of avoiding corrosion, such as rinsing away salt water or avoiding standing water. Corrosion protection systems, for the most part, are designed to control corrosion, not necessarily to eliminate it. The primary goal is to reduce the rate of corrosion by keeping the corrosion current as small as possible. Current is defined as the flow of charge (electrons) per unit of time through a conductor (I = Q/t). Since corrosion is the movement of electrons through redox reactions, the corrosion rate can be quantified in terms of this current, which represents the amount of corrosion reaction occurring per unit of time. To achieve this, two efficient protection methods are available: cathodic protection systems and coatings.

All cathodic protection schemes operate on the basis of the voltaic corrosion process, so, like voltaic corrosion, cathodic protection systems require an anode, a cathode, an electrical connection and an electrolyte. Cathodic protection will not reduce the corrosion rate if any of these four things is missing. The basis of this protection method depends on the difference in corrosion potentials between two metals immersed in the same electrolyte.
This difference causes electrons to flow from the metal with the higher activity and more negative potential (the anode) to the metal with lower activity and less negative potential (the cathode). This flow of electrons continues until the two metals are at the same potential, that is, until there is equilibrium between the voltages. Electrode potential is a measure of the tendency of a material to be reduced, i.e., to accept electrons, while activity is a measure of how easily a metal gives up electrons. Thus, the more active a metal is, the more negative its electrode potential. This principle relates directly to the two types of cathodic protection systems: sacrificial anode systems, called passive protection, and impressed current systems, also known as active protection.

Sacrificial anode systems are simple, require little but regular maintenance, and have low installation costs. We intentionally add a metal to the circuit to supply electrons to the cathode. When metals are in a voltaic couple, the difference in their potentials causes the anodic metal to corrode and release metallic ions into the electrolyte. The more negative the corrosion potential, the stronger the reducing agent, and the more readily the metal gives away electrons, thus corroding first. Since the more negative metal in the closed circuit corrodes first, we can control corrosion by simply adding to the circuit a metal that possesses two necessary characteristics: a corrosion potential more negative than that of the metal being protected, and expendability, meaning it is not essential to the operation of any particular system. When a metal possessing these characteristics is made the anode, corrosion is controlled.

The impressed-current type of cathodic protection system depends on an external source of direct current. Alternating current cannot be used, since the protected metal would likewise alternate between being anodic and cathodic. Basically, the anode immersed in the electrolyte is connected to one side of a DC power supply and the metal to be protected is connected to the other side. The current flow is detected and measured against a reference electrode; if unfavourable, the current flow is adjusted automatically by the power supply control system to compensate. Due to the high currents involved in many seawater systems, it is not uncommon to use impressed current systems in marine situations. Impressed current systems use anodes (ICCP anodes) of a type that are not easily dissolved into metallic ions, but rather sustain an alternative reaction, the oxidation of dissolved chloride ions (Deepwater, 2012). An advantage of this form of cathodic protection is that it can develop much higher voltages than sacrificial anode systems, so it can push current either through lower-conductivity electrolytes or over longer distances. Disadvantages include the possibility of over-protecting certain metals, which can cause hydrogen embrittlement in high-strength steels; in aluminium specifically, accelerated corrosion of the very structure being protected can occur. It is therefore evident that this form of cathodic protection, although more complex, offers some reliable advantages as well as some detrimental disadvantages.
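The link noted above between corrosion current and corrosion rate can be made quantitative with Faraday's law of electrolysis; the current value below is a hypothetical figure chosen purely to illustrate the arithmetic.

    m = I·t·M / (n·F)

    For iron (M = 55.85 g/mol, n = 2 electrons per Fe²⁺ ion, F ≈ 96,485 C/mol),
    a sustained corrosion current of 0.01 A flowing for one year (t ≈ 3.15 × 10⁷ s) gives

    m = (0.01 A)(3.15 × 10⁷ s)(55.85 g/mol) / (2 × 96,485 C/mol) ≈ 91 g of iron lost

This illustrates why both sacrificial anode and impressed current systems aim to keep the net corrosion current on the protected structure as small as possible.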

Tuesday, September 3, 2019

The New Deal

In 1933 the new president, Franklin Roosevelt, brought an air of confidence and optimism that quickly rallied the people to the banner of his program, known as the New Deal. "The only thing we have to fear is fear itself," the president declared in his inaugural address to the nation. Perhaps he should have said the only thing we have to fear is complacency. What was truly unique about the New Deal was the speed with which it accomplished what previously had taken generations. However, many of the reforms were created in haste and weakly executed. And during the New Deal, public reproach and contention were never interrupted or suspended.

When Roosevelt took the presidential oath, the banking and credit system of the nation was in a state of collapse. With astonishing speed the nation's banks were first closed and then reopened only if they were solvent. The administration adopted a policy of moderate currency inflation to start an upward movement in commodity prices and to afford some relief to debtors. New governmental agencies brought generous credit facilities to industry and agriculture. The Federal Deposit Insurance Corporation (FDIC) insured savings-bank deposits up to $5,000, and severe regulations were imposed upon the sale of securities on the stock exchange.

In addition to aggressive legislation to corral the failing bank system, FDR vigorously attacked unfair business practices. The National Recovery Administration (NRA), established in 1933 with the National Industrial Recovery Act (NIRA), attempted to end cut-throat competition by setting codes of fair competitive practice to generate more jobs and thus more buying. Although the NRA was welcomed initially, business complained bitterly of over-regulation as recovery began to take hold. The NRA was declared unconstitutional in 1935. By this time other policies were fostering recovery, and the government soon took the position that administered prices in certain lines of business were a severe drain on the national economy and a barrier to recovery.

It was also during the New Deal that organized labor made greater gains than at any previous time in American history. NIRA had guaranteed to labor the right of collective bargaining (bargaining as a unit representing individual workers with industry); while not a new concept, it was quite radical. Then in 1935 Congress passed the National Labor Relations Act, which defined unfair labor practices, gave workers the right to bargain through unions of their own choice and prohibited employers from interfering with union activities.

To Kill a Mockingbird by Harper Lee

Throughout history, racism has played a major role in social relations. In Harper Lee's novel, To Kill a Mockingbird, this theme is presented to the reader and displays the shallowness of white people in the South during the Depression. The assumption that Blacks were inferior is proved during the trial of Tom Robinson, and such assumptions served to justify the verdict of the trial. In this trial, Tom Robinson is accused of raping Mayella Ewell and is found guilty. Many examples from the novel support the fact that Tom Robinson was in fact innocent.

Atticus Finch represented Tom Robinson in the trial. He showed that Tom's left arm was crippled due to a former injury from a cotton gin. Atticus expanded on this point by unexpectedly throwing a ball at Tom Robinson; Tom's only reaction was to catch the ball with his right arm. This point is connected to Heck Tate's testimony telling the court that the right side of Mayella's face had been severely bruised. A left-handed person would logically have inflicted this injury, and Tom's left hand is shriveled and totally useless. On the other side of the coin, Atticus shows the court that Mr. Ewell is left-handed.

A second testimony that supports the opposite of the verdict was the fact that Mr. Ewell never called a doctor after learning of Mayella's injuries. Following the incident, no physical examination was performed by a certified physician. If indeed Mr. Robinson had committed the crime, Mr. Ewell's first instinct would have been to get his daughter checked out. Upon finding his daughter "assaulted", he would have wanted to have her injuries treated, including any injury that might have been caused by rape.

The third example from the trial that strongly contrasts with the outcome of the verdict was Mayella's testimony. If Mayella was so sure that Tom Robinson was the one who assaulted her, her testimony would have been clearly stated. Instead, during the trial, Mayella seems to be unsure of herself at times and hesitates when thinking about certain answers. When Atticus asks Mayella if she remembers the person beating her face, she first answers that she does not recollect whether the person hit her. Under her next breath, she says the man did in fact hit her. Once Atticus challenges this statement she gets flustered and continues to use the excuse that she does not remember.

Monday, September 2, 2019

BCG Matrix Application

The BCG Matrix was devised by the Boston Consulting Group. The underpinning philosophy for the development of this matrix was portfolio analysis. The aim was to develop a methodology to determine what type of strategic decision needs to be taken, especially in terms of investment, for the products within the portfolio of a company. The group divided all products into four categories, cash cow, dog, star and question mark, on the basis of two dimensions: market growth (high or low) and the market share of the concerned product.

Purpose

The basic purpose of the BCG matrix was to establish a picture of the product portfolio for an organization which classifies the products into categories based on market growth and market share. This classification, as its creators proposed, was deemed to help in taking strategic decisions related to investment, divestiture, etc.

The Impact of the BCG Matrix: The popularity of the BCG Matrix in its early days can be highlighted by the fact that in 1979 around 360 of the Fortune 1000 companies were using this tool and considered it to have a positive effect on management decisions (Haspeslagh, 1982).

Interpretation

Before moving on to the actual case, it is better to understand the interpretation of each category, as it will help in gaining deeper insight into the case.

Star: In this tool, those products are classified as stars which have high market growth and high market share. For such products, the main focus is to protect the market share.

Cash Cow: 'Cash cows' are those products which have low market growth, yet high market share. The extra cash generated by them is usually used to protect their own market share and is distributed to other products (usually question marks) to support their share.

Question Mark: These are products in markets with high growth, but where the product itself does not have a high market share. This situation demands either more investment in those products to increase the share, or divesting them if the competitor is very strong and increasing share does not seem to be a possibility.

Dog: These are products that have low market share in a market whose growth is also low. In this case, the best strategy is to liquidate or divest them for as much as possible (Keller and Kotler, 2005).

Applying to the Case

The case states that the company has developed the BCG matrix for its divisions. The findings of the BCG matrix show that the Electronics Division is on the upper right side of the matrix (which means question mark), whereas the Appliances Division is on the lower left side of the matrix (cash cow).

The Appliances Division (Cash Cow): This means that the appliances market has low growth while the appliances made by the company have a high market share. As there is a high market share, the profit generated from these products is high, and as the market growth is low, the investment required is low. This means that the additional cash can be used to grow other businesses, divisions or products.

The Electronics Division (Question Mark): These are products where there is significant market growth, but the company itself is not able to gain a significant market share. This is the worst case of all, since the market is growing yet the firm is not able to capitalize on the situation.
If question marks are left to continue like this, they will absorb a great deal of cash and ultimately become dogs when the market growth drops. Thus, there is a need for significant investment in the Electronics Division to enable it to capitalize on the growing market and become a 'star'.

Strategic Recommendation: Since the Appliances Division is in a position to generate more cash than the cost of running the division plus the investment required to protect its market share, the additional cash can either be used to support the question marks (such as the Electronics Division, where significant investment is required to make it a star) or be used for research and development of products which may prove to have high growth potential in the future. In the case of the Electronics Division, it is recommended that significant investments be made with the aim of gaining market share. If there is some untapped market, this is relatively easy; however, if the market is almost saturated and share has to be taken from competitors, it is more difficult. The investment can be made in adding new features to the products to attract customers, launching aggressive marketing and sales campaigns, etc.

Reliability of the BCG Matrix: Although it was used extensively by companies in the last quarter of the 20th century, the matrix has attracted certain critiques which harm its reliability. One of the biggest critiques of the BCG matrix concerns its assumption that higher market share means higher profit. This may not be the case. For example, there is a possibility that a company has a lower market share (due to niche marketing or high prices), but because its prices are high it earns a higher profit despite the lower share. In that case, the BCG matrix will not provide a true picture. Moreover, the matrix ignores the market share growth rate. There may be some start-ups with low market share yet a high market share growth rate. Such firms, which may prove to be a potential danger (especially in the Information Technology industry), are totally neglected by the BCG matrix.

These findings suggest that although it looks as if the Appliances Division is having a good time in the market while the Electronics Division is in trouble, this conclusion should not be drawn unless all the factors ignored by the BCG matrix, such as market share growth rate, duration of entry into the market, competitors' growth rates, etc., are revisited, and unless the same situation is apparent from other tools such as the McKinsey/General Electric matrix (which uses factors like industry attractiveness and business strength), a SWOT analysis for each product, Porter's five forces analysis (to understand the environment in which the product operates) and, above all, the use of data-mining tools with different variables than the ones used by the BCG matrix (Bendle et al., 2006). So the BCG matrix can provide an idea, but the final decision must be based on the conclusions from multiple tools, measurements, market situations, analyses and, above all, management insight.

BIBLIOGRAPHY

Bendle, N., Farris, P., Pfeifer, P., & Reibstein, D. (2006). Marketing Metrics: 50+ Metrics Every Executive Should Master. Upper Saddle River, NJ: Wharton School Publishing.

Haspeslagh, P. (1982). Portfolio planning: Uses and limits. Harvard Business Review, 60(1), 58-73.

Keller, K., & Kotler, P. (2005). Marketing Management (12th Edition). Alexandria, VA: Prentice Hall.