Thursday, October 31, 2019

The US and Economic Development Essay Example | Topics and Well Written Essays - 4750 words

The US and Economic Development - Essay Example According to Nye (2004, p.1), "power is the ability to influence the behavior of others to get the outcomes one wants". The concept of soft power originates with Joseph Nye Jr. in the late 1980s. Soft power is defined as the ability to attract and persuade others, thereby shaping their preferences and making them do what you want. Hard power is the ability to make others do what you want through inducement (Nye, 1990). On the one hand, it is argued that in many cases soft power works better than hard power, since it helps to get the desired outcomes without threats or force (Nye, 2004); rather, it makes others do what we want by co-opting them. On the other hand, critics argue that imitation or attraction does not always lead to desirable outcomes (Cooper, 2004). Four different definitions of power are given by Barrett et al. (2001): the power inherent in an individual itself, the ability to make others do what one wants, the ability to control the contexts of people's interaction, and structural power. Based on all these definitions, power can hence be defined in general as the ability to influence or control others so as to make them do what we want. This influence is obtained either through inducement, which is defined as hard power, or through attracting others or shaping their preferences, which is defined as soft power. Hard power is often associated with military and economic strength, while soft power is associated with the attractiveness of culture, institutions and information technologies (Windsor, 2000). Though economic strength is associated with hard power, it can be argued that economic wealth can also be used to attract others to get desired outcomes; hence, it can be linked to soft power too. According to one viewpoint, only hard power gives the US its powerful status (Cooper, 2004), while the other viewpoint gives equal credit to soft power, which has worked well there (Fukuyama, 2007).
Moreover, the soft power index developed by the Chicago Council rates soft power in the US as high (USAPC Washington Report, 2008). The next sections critically evaluate this issue for the US by examining the various dimensions of its soft power.

Tuesday, October 29, 2019

19th Century Philippines Essay Example for Free

19th Century Philippines Essay The Philippines was governed by Spain through a viceroy from Mexico. The highest office was that of the Governor-General, the chief executive of the Spanish colonial government, appointed by the Spanish king. A town was managed by a gobernadorcillo, while the barangay, the smallest political unit, was under a cabeza de barangay. The social hierarchy was in this order: at the top were the peninsulares, or Spaniards from Spain; next were the insulares, Spaniards born in the Philippines and also called Filipinos; then the mestizos, born of mixed Spanish or Chinese descent; at the bottom were the indios, the local inhabitants. A total of 300 insurrections and rebellions by the Filipinos all over the archipelago were recorded in the more than 300 years of Spanish colonization. The 19th century was defined by liberal thinking for the following reasons: 1) Mexico rebelled against Spain, and this brought revolutionary thinking to Manila; 2) the opening of the Suez Canal made the trip from Europe to Manila faster, thereby bringing liberal ideas to the Philippines; and 3) the rise of the middle class. Liberalism is a set of political beliefs which puts primary consideration on the freedom and rights of the individual, including the freedom of speech, of expression and of the press. In 1869, Carlos Maria de la Torre became the first liberal governor-general of the Philippines. For two years, until 1871, he instituted liberal reforms that benefited the Filipino middle class. Padre Jose Burgos campaigned for the Filipinization of the parochial churches in the Philippines and asked for the expulsion of the friars back to Spain. The Cavite Mutiny of 1872 was used to condemn Frs. Burgos, Zamora, and Gomez to death by garrote. The martyrdom of the Gomburza was witnessed by Paciano Rizal, Jose's brother; Rizal's first novel, Noli Me Tangere, was dedicated to the martyred priests. Economic Conditions The economic policies of Gov. Gen.
Jose Basco y Vargas opened the Philippines to the world market. These economic policies included the galleon trade and the monopolies on tobacco, wine and gambling. The galleon trade made Mexico the Philippines' trade partner; the route ran from Manila to Acapulco and back. From Acapulco the Philippines got its silver and gold coins, while it exported tobacco, wine, sugar and goods from China. The Philippines was the bridge from Asia to Europe, and this trade allowed the emergence of the Filipino middle class, composed mainly of insulares and mestizos. The encomienda system was transformed into the hacienda system, wherein vast tracts of land were devoted to the planting of single crops for export (e.g. Ilocos for tobacco, Negros for sugar cane, etc.). The first banks in the Philippines were managed by Spanish friars and known as Obras Pias. These banks lent money to members of the middle class, which they used as capital for their export businesses. The first rural bank established was the Rodriguez Bank. The Mercado family was a typical middle-class family of the 19th century who rented land from the Dominican friars. Social Conditions Schools and universities were opened and managed by Spanish friars. The most popular among them were the Ateneo Municipal under the Jesuit fathers and the Universidad de Santo Tomas under the Dominican friars. There were schools for boys and girls. Schools for boys taught history, languages, humanities, medicine, theology and law, while schools for girls offered courses in dressmaking, homemaking, cooking and gardening.

Saturday, October 26, 2019

Christian Anthropology

Christian Anthropology Introduction This essay will explore, from the perspective of Catholic anthropology, the Church's views on resurrection. The paper begins by looking at Plato's dualist theory of the soul and its impact on the development of Christian thinking, then considers the views of Aristotle and his influence on the writings of St Thomas Aquinas on the nature of the human soul. It will also explore the notion of the whole person and relate this to different anthropological approaches. The essay will conclude with the teaching of the Catholic Church Magisterium. Plato Dualism In the tradition of philosophy there are two main views of human beings: Dualism, where an immaterial soul and a material body meet, and Materialism, where we are one being (Selman 2000, pg 13). The Father of Dualism may be said to be Plato, who lived in Athens from around 428-347 BC and who was, as far as we are aware, the first to write on the subject of the soul at any length. Plato presents at least two theories. The best known, because of its enduring influence, was the one he developed in the Phaedo, which describes a dialogue his friend Socrates has with some friends shortly before his death on what happens at death. Selman (2000, pg 12) states that there are two main theories about the human body and its relationship with the soul. One of these is the dualist view, which suggests that there is a total division between the immaterial soul and the material body. The other is the idea that the body and soul of a human being are completely unified. In his theory, through the words of Socrates, Plato holds that the soul is separate from the body, is immortal and immaterial, and pre-exists the body, and therefore does not depend on the body for its existence or survival. This concept (that the body and soul are two different entities, which happen to uncomfortably occupy the same space during life) is termed dualism. Plato's theory goes further by elevating the role of the soul.
The pre-existent, immortal soul spends time in the body, a period of punishment, and death releases the soul from its exile in the body. Not surprisingly, Plato's concept of dualism produced difficulties for early Christian philosophers and theologians, although his views were not unpopular, and his view of the soul remained the dominant one in Christian thinking for the first thousand years (Selman 2000, pg 15). Aristotle Aristotle was another philosopher who tried to explain the relationship of body and mind. Even though Aristotle was a pupil of Plato, his thoughts on dualism were very different from Plato's. He still believed that the soul was the part of the body that gives it life and that the soul turned physical form into a living organism of its particular type. However, Aristotle believed that the body and soul were inseparable; the soul still develops people's skills, character and temper, but it cannot survive death. Once the body dies, the soul dies with it. The soul is the form of the body, because it is what makes the body a living body (Selman 2000, pg 17). Aristotle developed the concept that the soul is the principle of life, and life is manifest in activity. From these activities, he distinguished three types of soul: vegetative, sensitive and rational. Plants have the basic or vegetative soul, allowing them to grow and reproduce. Animals have a sensitive soul, enabling them to grow, reproduce, and experience sensation and movement. Humans have a rational soul, which enables them to grow, reproduce, experience sensation and movement, and think, reason and understand. In all, it is the type of soul which defines the form of the body, and thus body and soul are united as one being (Selman 2000, pg 19). For Aristotle, then, a body without a soul is dead matter. Dead matter no longer acts; it is only acted upon. While Aristotle could see that the body and soul were united, he could not make the leap to speak about an immortal soul.
This would be left to later philosophers such as Aquinas, who would consider this point from a Christian perspective. Aquinas agreed with Aristotle in the sense that he thought that the soul animated the body and gave it life, and he called the soul the anima. Aquinas believed that the soul operated independently of the body and that things that are divisible into parts are destined to decay. As the soul is not divisible, it is able to survive death. However, because of the link with a particular human body, each soul becomes individual, so even when the body dies, the soul, once departed, still retains the individual identity of the body it once occupied. Descartes believed the soul retains its nature in the absence of the body, but Aquinas argued that the disembodied soul is in an unnatural state: the human soul is naturally the form of the living body. The soul is what makes our body live; it is the primary source of all the activities that differentiate levels of life: growth, sensation, movement and understanding; it is the form of our body (St Thomas Aquinas, Summa Theologica). St Augustine, like most of the Church fathers, was influenced by the teaching of Plato, who considered that the body and soul were two substances (Selman 2000, pg 18). St Augustine held that the soul, like the body, is derived from the parents in the act of creation. According to Augustine, original sin is transmitted from Adam down through the ages in this way. This is how he explains how original sin could exist in a soul created by God, because God could only create that which was good. He later renounced his view that the soul is traduced. This heresy was condemned by the Council of Braga in 561, which stated that the soul is not traduced but is directly created by God (Neuner and Dupuis, pg 167).
The title phrase introduces the idea of the whole person as opposed to parts of a person, which requires us to discuss how a person could be understood to be in parts. The most common way to talk about the relationship of the body to the soul is Cartesian dualism, the separateness of the two. Cartesian dualism comes from Descartes, who in fact first argued that the body and the mind/soul were separate and distinct so that he would be able to continue making medical advances without the interference of the Church. In saying that the body and soul were separate, he made the soul the domain of the Church, leaving secular scientists to look at the body, whereas before secular scientists had been looked at with suspicion or even imprisoned for trying to make discoveries. However, dualism has a longer history than this, even in the West, with Plato and other classical philosophers discussing ideas about the material world as a shadow world of a pure world of ideas. This could be seen as another way of describing the sinfulness of the material world and body, and the perfection of heaven, which will be the eventual home of the soul, freed from its imperfect trappings (The Way of Perfection by St Teresa of Avila, Ch 1-17). The Resurrection of the Flesh The quote in the title comes from The Reality of Life after Death, written by the Sacred Congregation for the Doctrine of the Faith in 1979 and published amongst the Vatican II writings in 1982. It refers to the teaching of the Catholic Church on the resurrection of the flesh, in which it is not just the soul which survives after death, but the body as well. This can be related to other Catholic teachings, such as the tradition about Mary, who was assumed bodily into heaven (LG 58), and teachings about the role of the flesh and denial of the flesh in salvation.
Tertullian talks extensively about the role of the body in salvation, making a claim for the potential purity of the flesh by pointing out that man was made of flesh before the fall: "the clay, therefore, was obliterated and absorbed into flesh. When did this happen? At the time that man became a living soul by the inbreathing of God" (Tertullian 2004, pg 49). He also shows the link between the actions of the flesh and the state of salvation of the soul: "the flesh, indeed, is washed, in order that the soul may be cleansed; the flesh is signed with the cross, that the soul too may be fortified; the flesh feeds on the body and blood of Christ, that the soul likewise may fatten on its God" (Tertullian 2004, pg 63). His intention is to show the relationship between body and soul, to assert that resurrection at the end of days will be bodily, and to extol the mortification of the flesh in the name of Christ. Selman (2000, pg 60) states that the human body can be raised up on the last day because it will be joined once again to its soul, which has remained in existence since they were separated at death. Furthermore, if the soul is not immortal then there can be no Resurrection (Selman 2000, pg 60). For Aquinas, when God raises the dead on the last day, souls will be reunited with what is materially continuous with what came from the mother's womb. Selman (2000, pg 59) states that the same person can be raised up because the body will be restored to the same form as it originally had in this life. These views contrast strongly with, for example, the attitude of the Mormon church, as studied by Fenella Cannell (2005, pg 335-51). In her article The Christianity of Anthropology, she looks at the assumptions in anthropology which are descended from its Christian background (a particular sort of Christian background, though).
The Mormon Church shows how the same teachings can be interpreted in different ways, and that dualism is not necessarily what Christianity has to result in. Not only do Mormons believe in full, literal resurrection, but they also believe that heaven is going to be exactly like earth, but perfected. In particular, they believe that people will continue to have children and families into eternity, and it is legitimate to ask questions like "will there be chocolate in heaven?", a question that most other denominations of Christianity would view as frivolous or inappropriate. Church Teaching Magisterium The Catechism (365) declares that the unity of soul and body is so profound that one has to consider the soul to be the form of the body. The Council of Vienne (1312) refuted all other doctrines which were not consistent with this declaration (CCC 365). The Lateran Council (1513) also condemned any philosophies which denied that the soul is essentially the form of the human body (CCC 366). The Second Vatican Council (GS 14) declared that man, made of body and soul, is a unity. Furthermore, the human body is not to be despised, as it is part of God's Creation (Gen 2:7) and will be raised up on the last day. St Paul said that the human body is the temple of the Holy Spirit (1 Cor 3:16). As a result, it should never be undermined or seen as something that separates humanity from God. Vatican II also speaks of the soul as an entity separable from the body: "we believe that the souls of all those who die in the grace of Christ, whether they must still make expiation in the fire of Purgatory, or whether from the moment they leave their bodies they are received by Jesus into Paradise like the good thief, go to form that People of God" (Austin Flannery 1982, 394). By using the phrase "leave their bodies", Vatican II demonstrates that it sees the soul and body as detachable.
Even if the body is to be resurrected eventually, it is still the soul that gets to heaven first, after leaving the body behind (Teaching notes, Perth). Conclusion In considering the question, I have looked at the nature of the soul through the main philosophies of the soul as put forward by Plato and Aristotle. I have shown how Augustine, Tertullian, and Thomas Aquinas drew on these to present a Christian anthropology. I have contrasted this view with the Mormon Church and its belief about the resurrection. I have found that the Magisterium, in seeking to hold true to revelation and Biblical tradition, has preferred the teaching of St Thomas Aquinas, which holds that the soul is the form of the body. The soul is with the body now and will be again after the resurrection from the dead. Bibliography Wansbrough, Henry (gen. ed.). 1994. The New Jerusalem Bible. London: Darton, Longman & Todd. Flannery, Austin, O.P. 1982. Vatican Council II, Vol 2. New York: Costello Publishing Co. Neuner, J. and Dupuis, J. 2001. The Christian Faith. New York: St. Pauls/Alba House. The Catechism of the Catholic Church. 1994. London: G. Chapman. Aquinas, St Thomas. Summa Theologica, Part Ia, q.75 articles 2 and 6; q.76 art. 1. Tertullian. 2004. On the Resurrection of the Flesh. Kessinger Publishers. Cannell, F. 2005. "The Christianity of Anthropology." Anthropology Today 43: 335-51. Selman, Francis, et al. 2002. Christian Anthropology. Birmingham: Maryvale Inst. Internet: International Theological Commission. 2002. Communion and Stewardship: Human Persons Created in the Image of God. (online) Available from the Vatican website (April 2008). Saint Teresa of Avila. The Way of Perfection. 1995. (online) Available from: http://www.ourladyswarriors.org/saints/wayperf.htm (April 2008).

Friday, October 25, 2019

Himalayan Herders: The Significance of Latitudinality Essay -- Cultura

Himalayan Herders: The Significance of Latitudinality Melvyn Goldstein and Donald Messerschmidt, the authors of "The Significance of Latitudinality in Himalayan Mountain Ecosystems", argue that the altitude-oriented "mixed mountain agriculture" model, in which mountain people move to higher altitudes in the summer and lower ones in the winter, does not accurately reflect many areas of the Himalayas (Goldstein and Messerschmidt, 117). Instead, latitudinality lies at the core of cultural adaptation to the high-altitude mountain ecosystem for many native Nepalese (Goldstein and Messerschmidt, 126). In the three studied Nepalese regions (Limi, Ghaisu and Bhot Khola), latitudinal movement is just as important and common as altitudinal movement for the local inhabitants. The authors illustrate the point that in some Himalayan areas the people do not depend on altitude variation, but use latitudinal (north-south) habitats to create "habitat and production zones" (Goldstein et al., 120). In the mountainous areas of Limi, Ghaisu and Bhot Khola, even the "sons of snow" (yaks) will not survive the winter snow. To escape it, the people and their herds migrate only 50 to 75 miles south to pasture-land not covered by snow. This 50-to-75-mile trek is strictly latitudinal, as they do not descend in elevation. These southern wintering grounds provide more grasses for grazing because of a more moderate climate, and this latitudinal adjustment is central to the success of pastoralism. Animal husbandry and agriculture are also important activities in the Limi, Ghaisu, and Bhot Khola regions. For example, agriculture is considered to be the foundation of Limi's economy. However, because of high altitudes, agriculture cannot be expanded ... ...y and its effects on pastoralism and agriculture. For example, Melemchi herders use different vertical zones throughout the year as grazing land for their animals. The book spurred a few questions of uncertainty about the article.
Bishop thoroughly describes the recent trend in which Nepalese men sell their herds and abandon their families for many months to try to earn cash in unskilled jobs in India. This insight makes the reader realize that the few isolated regions studied in Goldstein and Messerschmidt's article are not typical communities, even in the mountainous and rural country of Nepal. Works Cited Bishop, Naomi. Himalayan Herders. Texas: Harcourt Brace College Publishers, 1998. Goldstein, Melvyn and Donald Messerschmidt. "The Significance of Latitudinality in Himalayan Mountain Ecosystems." Human Ecology, Vol. 8, No. 2, 1980: 117-133.

Wednesday, October 23, 2019

Reflection on Film: Psycho Essay

In the movie Psycho, we see a character who is at fault yet so sympathetic that she is obviously the victim here. Once Marion Crane and the $40,000 disappear from view, it is because she has been murdered; she is now the victim. Roger Ebert, of the Chicago Sun-Times, states, "Marion Crane does steal $40,000, but still she fits the Hitchcock mold of an innocent to crime." She was originally at fault, and then she is brutally murdered for no reason by Norman Bates, who now becomes the center of attention. We must now figure him out! "Marion has overheard the voice of Norman's mother speaking sharply with him, and she gently suggests that Norman need not stay here in this dead end, a failing motel on a road that has been bypassed by the new interstate. She cares about Norman. She is also moved to rethink her own actions. And he is touched. So touched, he feels threatened by his feelings. And that is why he must kill her," states Ebert. This point never occurred to me while watching the movie; I saw just a crazy guy who thought she was pretty and whose "mother" didn't want him to be with her, so out of fear he killed her. Psycho was a great film that truly set the stage for future horror films. It is the masterpieces of Hitchcock that really set the standard for the movies we see today; he is the master of the form, and other filmmakers follow his example. Psycho, a horror movie with a huge unexpected twist in the plot, really makes you feel for the characters and engages you in the film; you almost feel like you are with Marion in the bathtub as she is murdered, and you can feel your heart pound at the screeching sound of the music. Everything that was put in this movie was put there for a reason, and it all pulls you right in with it.

Tuesday, October 22, 2019

Human Cognition and System Design Essays

Human Cognition and System Design Introduction This paper will serve to analyze the Linux software application from a human information processing perspective. Primarily, Linux is a computer operating system with a Unix-like design, assembled through open-source and free software distribution and development. Originally developed as an operating system for Intel x86-based personal computers (Dibris, 5), it has since been ported to a wide variety of computer hardware platforms. The development of Linux is considered the most prominent example of open and free source software collaboration. In this regard, this paper will focus on the Linux software design and its consideration of human information processing capabilities: memory, perception, attention and learning. Memory The memory concept in this case concerns the various ways through which the user of the Linux software can interact or otherwise communicate with the computer system. Recognition is considered an easier strategy than recall when using the Linux software application. Regarding recognition, Linux offers users appropriate recognition stimuli. However, the application has been programmed with limited information, as too much would divert users' attention or confuse them. The software implements programmed intelligence to provide appropriate stimuli for tackling the task presented by the user (Dibris, 7). Linux offers both a command line interface and a graphical user interface. The hard way of learning commands is by remembering them: keying in commands the user has already typed on the current or a previous console amounts to tedious, unnecessary effort.
Instead of retyping a previous command, the software offers a variety of options that save time when recalling commands used in the current or earlier sessions. For example, one option involves pressing Ctrl+R before issuing the command. This initiates the command re-caller in backward mode, with the most recent command presented first. The user can then type part or all of the characters of the command he or she is searching for; once the command is found, the user hits the Enter key and the command is executed. Linux visual representation tools include a Perl script capable of reading the traffic counters of the computer's routers, and a fast program that creates presentable graphs representing the monitored network connection. In addition to its detailed view, such a tool can create visual representations of traffic over the previous seven days, five weeks, or three months, made possible by its ability to keep a log of all the data from the router. Additionally, since Linux offers a graphical user interface, it can represent programs, directories, and files through spatial relations and pictures. In the graphical user interface, the user has the simpler choice of selecting commands by manipulating or activating pictures, for example by dragging an icon or clicking on a button with the mouse. The graphical user interface is intended to make the computer easier to use by simplifying decisions and tasks, and by creating visual representations the user can easily relate to. A significant aspect of the Linux software application is its ability to reduce the load on user memory and raise efficiency compared with text-based interfaces.
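The history-recall options described above can be sketched in bash, a common Linux shell (the exact keys and behavior vary by shell and configuration, so this is a sketch rather than a definitive reference):

```shell
# Three ways to reuse a previous command in bash.

# 1. Interactively, Ctrl+R starts a reverse-incremental search through
#    the command history; pressing Ctrl+R again steps to older matches,
#    and Enter executes the highlighted one. (Interactive only, so it
#    is shown here as a comment.)

# 2. The `history` builtin lists the session's commands by number.
#    In a non-interactive script, history recording must be enabled
#    first; commands run afterwards should then appear in the listing:
set -o history
uname -r > /dev/null     # run some command...
history | tail -n 2      # ...then list the most recent entries

# 3. History expansion re-runs entries without retyping:
#    !!      the last command
#    !42     command number 42
#    !una    the most recent command beginning with "una"
```

In zsh and fish the same reverse-search idea exists under different defaults; the text's "control key plus R" corresponds to bash's `reverse-search-history` binding.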
The Linux software program not only utilizes chunking and encodes information; it also offers streamlined ways of finishing tasks, taking into account the expectations and needs of the user. However, it is prudent to understand that the Linux software program fails in certain ways to support its users in remembering how to use it. This is manifested through its shortcut ability to identify previously or currently used commands, which is thought to induce a lazy culture in the user, since he or she does not have to remember the main components of a command. The main challenge, however, arises when entering a new command. Attention Computer systems using the Linux software application output their signals through actuators; with these displays, the system acts or reacts to the environment. Developments in software creation have enabled the user to process information from their auditory and visual senses. The Linux software application uses graphical programming languages for the execution of many processing functions, sound generation and processing, and video. The timing of this software is versatile and can be communicated to other computers through a network. The graphical nature prompts the incorporation of a visual user interface. Though it is good with its timing, the quality of being timeline-based becomes a hindrance when the user is considering interactive structures (Hives, 8). Other authoring tools are also incorporated within the Linux software program. High-level programming languages speed up the development process, at a tradeoff in flexibility; if the user deems that an application needs maximum flexibility, he or she can use low-level programming. Linux also offers rich user-computer interfaces that involve multiple-sense interaction, real-time interaction and simulation, including visual, auditory and tactile modes.
In addition, there are certain Linux software programs that use multimodal user interfaces, combining two or more human senses in their interaction with computers (Hives, 14). One such design was achieved based on the needs of blind computer users, to enable them to control and perceive information in an effective and efficient way. However, research maintains that the audio user interface is not a particularly efficient interaction solution, even if it is the one most used by blind computer users. The efficiency of the tactile user interface in accomplishing certain tasks is considered similar to that of the interface for audio senses. Perception The Linux interface helps its users understand the sensory information they receive in a number of ways. The Linux software program possesses a lot of information related to how it runs: the memory and hardware of the system, current processes and the user's latest activity information are all made available by the system. In many cases, the user can view the system information through specified commands (Žagar, 25). A number of these commands are specifically written to give information; the rest are intended to alter the system but also include ways of viewing its current state. In certain cases, the user can understand the received sensory information through configuration files and system information. The majority of these files are in the form of plain text, which enables the user to view them with basic commands that output the content of a file to the command line. Among the simplest commands designed for reading system information is 'arch', which returns the computer's architecture. A different set of commands gives the user information relating to processes running within the system. One commonly used command is 'top', which gives the user a continuously updated view of the processes consuming the most system resources.
'pstree' is a command that gives the user a view of parent and child processes, meaning processes that start others. Ultimately, the Linux interface utilizes previous knowledge to enable the user to understand the sensory information they receive. Among these is the collection of logs from past activity. The majority of these logs can be read using standard reading commands, though this depends on the distribution (Welsh, 18); some logs may be in a specialized format and hence require reading with a special command. However, the Linux software program at times fails to support the perspective of perception: the interface does not provide information on certain processes, or the information itself is too complicated for the user to understand. Learning The Linux interface has been designed to be user friendly and includes tips intended to help the user learn how to use the application. Similar to Microsoft Windows, Linux system files are arranged in a hierarchical directory structure. Linux gives the user a graphical interface that makes it easy to understand how to use the system, while still allowing those with the knowledge to change settings manually. Primarily, the interface lets the user understand that everything in the system is treated as a file (Welsh, 24). The user uses files to make a drawing or write a text, and the system lets the user understand that the written texts or drawings will have to be sorted and stored for easy location. Behind every configured option lie simple, readable text files the user can edit to suit their needs. The current versions of the Linux interface incorporate a graphical user interface to guide the user through the program; alternatively, the user can choose to gain full control of the program through manual adjustment of the configuration files.
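As a rough sketch, the introspection commands mentioned above can be tried from any shell prompt. Note that `arch` and `pstree` ship in optional packages on some distributions, so the portable `uname -m` is shown alongside them:

```shell
# System-information commands discussed above, run non-interactively.

uname -m               # machine architecture, e.g. x86_64
arch                   # equivalent output where coreutils provides it

# `top` normally updates continuously; batch mode (-b) with a single
# iteration (-n 1) prints one snapshot and exits:
top -b -n 1 | head -n 7

# `pstree` draws the parent/child process tree (-p adds PIDs):
pstree -p | head -n 5

# Many logs are plain text; the path and read permissions vary by
# distribution, e.g.:
#   less /var/log/syslog
```

Batch-mode `top` is also the form to use when logging resource usage from scripts or cron jobs, since the interactive display cannot be redirected to a file.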
The design of the Linux interface is based on the premise that every person gaining access to the system has a personal username and password (Welsh, 45). Every single file belongs to a user and a group and possesses certain attributes. In addition, the program provides the user with the opportunity to issue commands to obtain certain information, which enables the user to learn about a variety of processes associated with the interface.

Conclusion

The use of Linux on laptops and standard desktop computers has risen steadily in the recent past, and current versions include a graphical user environment. With regard to the objective of this paper, the major findings lead to the conclusion that the Linux software program has been designed as a user-friendly interface. In terms of human cognition in the design of the software, the paper was able to establish a positive view of the memory, attention, perception, and learning perspectives of the software program. However, one cannot overlook the hindrances associated with the shortcomings of the interface.

Works Cited

Dibris, Dora. "Introduction to Linux Programming." 2004 Tripoli Library Association. Greenspan Hotel, Tripoli. 29 March 2004.

Hives, John G., Brian G. Brestan, and Ruth A. Dale. "Linux Software Program." Human Cognitive Review 26.1 (2007): 1-16. Print.

Michal, Pierce G., and Sarah Orsworth. "Technology and Human Cognitive Behavior: Review by a Group of Experts." System Design 7.2 (2004): 121-30. Web. 11 Sep. 2012.

Welsh, Matt, Matthias K. Dalheimer, and Lar Kaufman. Running Linux. Sebastopol, CA: O'Reilly, 2009. Print.

Žagar, Klemen, Janez Golob, and Anže Žagar. "Efficient Access to Timing System's Time in Linux User Mode." Control Sheet 9 (2010). Print.

Monday, October 21, 2019

Compare Contrast essays

Compare Contrast essays Comparison Between Novel and Film Version of "Lord of the Flies" Many novels are so successful that producers can't wait to adapt the story into a film. The majority of the time, however, the original novel is much stronger than the movie because it is able to capture the emotions of each character and all the symbols and meaningful events. Due to the novel's flexibility, readers are able to extend the use of their imagination. This was the case with William Golding's masterpiece, "Lord of the Flies." Overall, the novel is far superior to the film because it has thorough descriptions of the characters' feelings and depictions of the symbolic meaning of objects and important happenings. First of all, the movie version of the classic seems to be lacking in detail involving the characters. Mainly due to the limited length of the movie, a character's role and his feelings are nearly nonexistent. In the novel, readers can clearly notice how Piggy feels and that he is being treated as an "outsider", but the film version restricts the audience's comprehension of Piggy's emotions. Similarly, other characters such as Simon and Roger are so unclear in the movie that they may puzzle viewers, because the movie fails to distinguish their roles. The film is unsuccessful in establishing Simon as a "Christ" figure and Roger's murderous nature. The novel, on the other hand, instills all these ideas and allows readers to use their creativity. Therefore, due to the film's inability to give audiences more information about the characters, their roles, and their emotions, the novel is much more informative. Secondly, the novel is capable of giving readers more insight into the story with its use of symbols and hidden meanings, because it depicts important underlying messages and critical incidents. For instance, Piggy's glasses represent civilization, reality, and reason but once ...

Sunday, October 20, 2019

Compensation of strategic network board

Compensation of strategic network board Firms have embarked on strategies that help them catch up with the stiff competition in the market. As a result, they partner with other firms to form strategic networks with many members. This arrangement enables them to attain strategic renewal through sharing resources, participating in joint technological activities, and drawing up similar marketing strategies. It is evident that this helps strengthen a firm's competitiveness; as a result, most governments and business-related regulatory agencies support its implementation, especially for small businesses. Another reason many governments support this concept is that it assists firms to build independent entities that will help them address their deficiencies. Most importantly, to ensure the networks accomplish their goals, it is essential to establish a network board to oversee their operations. This board might come in handy in situations where companies that have formed a network are experiencing problems related to insufficient resources (Thorgren, Wincent and Anokhin 131). The network board consists of firms' representatives, experts in the industry, or representatives of different related public agencies. Most importantly, the main responsibility of the network board is to augment network authenticity. Additionally, it encourages, harmonizes, and offers support for joint activities among the networked firms. Based on the roles played by this network board, it is essential that its members are compensated so as to motivate them. It is obvious that the administrative structure that governs the operations of this board is different from the normal organization structure, because the board oversees the operations of more than one firm and each firm has its own administrative structure.
Thus, the process of compensating the network board is far different from that of an organization's board. As evident, this board acts as an agent linking different organizations. Research has incorporated agency theory with stewardship theory to suggest that unforeseen events shape the affiliation linking networking firms and network board members in a way that can add weight to the effects of board compensation on network performance (Thorgren et al. 132).

Analysis

In this article, the author portrays the importance of small businesses coming together and establishing an empire that will sustain them presently and in the future. In order for these businesses to succeed, they must form a board to oversee their operations. Additionally, this board must be compensated as a means of motivating it to perform its duties well. It is noteworthy that the author is clearly qualified, as he brings out the most important facts concerning the avenue that leads to small businesses' prosperity. The establishment of a strategic network by firms, as presented by the author, seems practicable. However, the administrative structure that links these companies in a way that facilitates the compensation of this board does not seem practicable. It is obvious that the board members are drawn from the industry; thus, each stakeholder must compensate its representatives as part of its contribution towards the sustainability of the board. It is essential for the firms concerned to come up with well-constructed strategies which will ensure the costs incurred during the compensation process are shared equally. Based on this fact, it is appropriate for firms to provide an equal number of representatives on the board to ensure equivalent contributions towards their compensation.
As much as the article portrays the importance of compensating the network board members, it does not give the criteria to be employed when administering the compensation. The compensation strategy is a critical issue that needs to be addressed in detail. Some experts reveal that issues arising from the compensation strategy might result in controversies and hence the collapse of strategic networks. In establishing these networks, it is essential for their proposers to bear in mind that it requires a lot of consultation to link the companies. The board should include experts with experience in joint company ventures; this acts as a mitigation measure in case future disagreements occur.

Application

This information is important in business as it depicts ways in which different companies can combine their resources to come up with a solid venture able to outdo its competitors. The information on compensation of the network board is necessary for companies that are already in a strategic network, as it enlightens them on ways of strengthening their links. Most importantly, this information is essential to management, as it gives a rough idea of what to expect when they involve their institutions in strategic networks. Additionally, this information is vital to institutions already involved in a strategic network but having problems with the compensation and constitution of the network board.

Conclusion

As evident, stiff competition in the market leads to the employment of new strategies to enable firms to catch up with the market leaders. Thus, there are companies that opt for the strategic network so as to combine their resources; as a result, it boosts their market share.
In any organization, employee motivation is considered essential because it ensures greater productivity. Thus, for the strategic network board to be motivated, it should receive compensation from the network partners. It is appropriate for each company to have a representative on the board so that its interests are well represented. Furthermore, a company will be well represented if the individual has formal links with the company and understands what goes on in it. The board's success is determined by the treatment and concern it receives from the stakeholders; thus, its compensation is an important element that will boost its working morale. As evident, the compensation process must be well structured so as to ensure there are no obstacles to the implementation of the strategic network. Hence, the parties involved must seek the advice and guidance of experts before executing this strategy.

Work Cited

Thorgren, Sara, Joakim Wincent, and Sergey Anokhin. "The Importance of Compensating Strategic Network Board Members for Network Performance: A Contingency Approach." British Journal of Management 21 (2010): 131-151. Print.

Saturday, October 19, 2019

Getting financing Essay Example | Topics and Well Written Essays - 1000 words

Getting financing - Essay Example The inadequacy of working capital has prevented many companies from exploiting potential market opportunities available to them. This paper explores the scope of raising finance for my business, 'JBR Watches', located in Los Angeles. Obviously, it is difficult for an entrepreneur to meet all the capital requirements for business expansion alone. Hence, for opening a showroom in Los Angeles, I should seek various sources of financing. At this juncture, it is important to identify significant tactics that an entrepreneur can initiate. The most notable factor that denies a firm access to financial sources is its negative market stature. This bad situation can be changed if the company deals with large contracts, because they offer comparatively higher profit. At the same time, the majority of contract terms insist that the supplier must give the client 30 to 60 days to pay invoices (Burstnet). So as to meet these credit requirements of the customers effectively and to earn more profit from large business contracts, it is advisable for JBR Watches to try for venture capital. Venture capital is an option for small companies that possess innovative business plans but lack adequate operating finance (Venture Capital). Generally, venture capitalists are not willing to invest their money in risky ventures; hence, JBR must formulate effective business plans in order to convince the capital provider of the potential of its business. Even though venture capital is offered for a short period of time, JBR can make returns within this period and repay the amount before maturity. Similarly, my company may seek assistance from angel investors. An angel investor may be a wealthy individual or group of individuals who wish to invest in pre-venture-capital companies with the objective of uplifting certain communities (Angel Investors).
In the case of JBR, the management can highlight the growth requirements of the employees' community, and this may assist the firm to get financial assistance from angel investors. To find a potential angel investor, JBR can employ internet tools like the Google search engine. According to Carbajo (2011), a bank is another potential financial source for every business. However, it is observed that banks do not provide loans and other credit facilities to small companies unless the companies possess substantial assets and all the required financial records. Although JBR Watches is a notable concern in the industry, its decreased growth rate would not satisfy the banks' credit criteria. In the opinion of Yates, banks provide credit facilities to small companies as well if the business owners personally guarantee the credit repayment. This type of fund raising is a very risky practice for small companies if the business does not realize the anticipated profit and the owner is unable to repay the loan amount. Therefore, JBR must be very careful while applying this tool. Use of credit cards is another method that can effectively contribute to the working capital requirements of JBR Watches. Credit cards allow the card holders to make purchases or obtain cash advances and pay for them later. Business owners must note that credit cards are a very expensive source of funding even if they have reasonably low interest rates. As with bank lines of credit, the business owner personally guarantees the debt repayment while employing this technique. Therefore, this

Friday, October 18, 2019

Research paper about India. ( How people make a living in india Essay

Research paper about India. (How people make a living in India; current country's economic status) - Essay Example India grows and exports major food commodities such as rice, sugar, wheat, cotton, and vegetables. The country also produces and exports animal agricultural products, which include buffalo milk, eggs, meat, and chicken. Most Indian farmers are small-scale farmers who grow their crops or rear their animals on small pieces of land. However, the country has favorable climate and soils that support agricultural activities. The agricultural sector contributes about 16% of the country's GDP and provides employment opportunities to about 50% of the total Indian population (Department of Revenue web). The agricultural sector provides employment mainly to the rural Indian population. The industrial sector is a major contributor to the Indian economy; currently it contributes about 14% of the Indian GDP (Panagariya 453). India is known worldwide as a major manufacturing country, and the sector employs about 25% of the Indian population. The majority of industrial workers live in urban centers and other industrial towns spread across the country. Indian industries are recognized worldwide for their production of affordable and long-lasting commodities, with products ranging from heavy-duty equipment such as steel beams to light-duty equipment such as bicycles. Indian industries are also involved in the production of pharmaceutical products that are marketed across the globe. Currently, the industrial sector is eyeing the booming technology sector. The industrial sector provides employment opportunities to both the skilled and unskilled labor force. In addition, the industrial sector has small-scale industries that provide opportunities to thousands of Indians. Cottage industries, or home-based industries, produce basic commodities for export and the domestic market. Indians are known to be business people.
Indians are successful business people who have set up businesses in many parts of the globe. The

Commercial Law Essay Example | Topics and Well Written Essays - 3000 words - 1

Commercial Law - Essay Example The purchase of the conveyor belt raises the issue of whether or not Contigrain is an innocent third party and can claim damages for fraudulent misrepresentation on the part of Hampshire. The sale of the truck to farmer Giles may also expose Contigrain to liability on the grounds that the truck was not of merchantable quality. In order to determine whether or not Contigrain is entitled to demand possession of the Brazilian peanut extract from the liquidators of Agrigus or demand payment in full from Munchy Feeds for the turnip fibre, it is necessary to examine each contract by reference to the Sale of Goods Act 1979. To start with, Section 2(1) defines a contract for the sale of goods as an agreement where the vendor "transfers or agrees to transfer the property in goods to the buyer" for a price.1 On the facts of the case for discussion, there is a sale of goods contract in both instances. Clearly, Contigrain and Agrigus agreed that in consideration of the sum of 1000 pounds per tonne, Agrigus would transfer 100 tonnes of peanut extract to Contigrain, for which the latter made a payment of 50,000 pounds. Similarly, Contigrain agreed to and did deliver 500 tonnes of turnip fibre to Munchy in consideration of the sum of 1000 pounds per tonne, to be paid in full within 30 days of delivery. Having established that contracts for the sale of goods have been completed between Contigrain and Agrigus and between Contigrain and Munchy, it is necessary to determine whether and at which point title to the property passes from the seller to the purchaser. This is important for ascertaining who bears liability for any risk associated with the goods. Section 20(1) of the Sale of Goods Act 1979 provides: Unless otherwise agreed, the goods remain at the seller's risk until the property in them is transferred to the buyer, but when the property in them is transferred to the buyer the goods are at the buyer's risk whether delivery has been

Mathematics Essay Example | Topics and Well Written Essays - 250 words - 1

Mathematics - Essay Example An example of where it is important to understand integers in the financial world is banking (Glydon). If someone spends more than they have available in their bank account, then their balance will be negative. It is also important to understand integers in geography because of the different points either above or below sea level (Glydon). 3. The reason many students find fractions difficult is that fractions are usually never taught visually (Miller). Many teachers like to explain all the different rules of fractions, which can be very confusing. A simple way to learn fractions is to remember that the numerator always goes over the denominator. The denominator indicates how many pieces make up the whole, while the numerator refers to how many of those pieces we are talking about (Akers). 4. Someone who worked at a pizza company would need to be able to add mixed numbers because a pizza can be cut up into different fractions ("Mixed Fractions"). Someone who worked in a Human Resources department would also need to use mixed fractions to calculate employees' wages based on an hourly rate and the number of hours worked. Finally, taxi drivers would need to use mixed numbers to work out how many kilometers a trip is so they could charge their passenger the correct
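The numerator/denominator rule and the mixed-number examples in points 3 and 4 can be checked with Python's standard fractions module. The pizza and wage figures below are invented for illustration only.

```python
from fractions import Fraction

# A fraction is numerator over denominator: 3 of the 8 slices of a pizza.
slices_eaten = Fraction(3, 8)
print(slices_eaten.numerator, slices_eaten.denominator)  # 3 8

# Adding mixed numbers, as at the pizza company:
# 1 1/2 pizzas plus 2 3/4 pizzas.
total = (1 + Fraction(1, 2)) + (2 + Fraction(3, 4))
print(total)  # 17/4

# The HR example: an hourly wage times a mixed-number hour count.
hours = 7 + Fraction(1, 2)    # 7 1/2 hours worked
wage = Fraction(12) * hours   # at a rate of 12 per hour
print(wage)  # 90
```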

Thursday, October 17, 2019

Hunger in America Essay Example | Topics and Well Written Essays - 250 words

Hunger in America - Essay Example This is significantly above the figure of Americans who go hungry daily, showing the irony that underlines the food situation in America. Hunger is mostly associated with poverty. It is therefore natural to expect that the groups affected by hunger are the lower class and the homeless, generally the social classes in the low-income categories. This is nevertheless not the scenario highlighted in the video Food For Thought, which shows the shocking reality that the middle class is adversely affected by hunger. Statistics support this, with America recording 3.5 million homeless individuals. This number is significantly lower than the mammoth figure of 35 million who are affected by hunger. Though the number of middle-income individuals who suffer from hunger is not as large as that of the lower social classes, they make up a significant portion of the affected population. The reason cited for the problem is loss of income through unemployment, which jeopardizes individuals' ability to access food (Donavan and Mash, p1). It is therefore imperative that America take measures to curb this dire situation. This will involve putting measures in place to reduce food wastage. Initiatives should also be put in place to distribute food and to empower individuals economically, hence curbing

Define supply and explain what causes change (shifts) of supply and Essay - 1

Define supply and explain what causes change (shifts) of supply and how supply can determine prices - Essay Example These players will try to compete with each other to provide a significant amount of value to the customers, thereby generating competitive advantage. It is important to note that the state of equilibrium attained at the intersection of the demand and supply curves keeps moving and is not constant in nature. As a matter of fact, various factors may lead to shifts in the supply curve. An abrupt rise in the prices of certain commodities, as has happened due to the rise of the inflation rate in recent times, can at times lead to significant changes in supply. Due to the significant rise in commodity prices, the general masses become unable to purchase them at high rates, which results in a build-up of inventory. As a precautionary measure to cool down inflation and maintain balance in the market, suppliers and manufacturers focus on lowering the supply of the commodities (Mankiw, 1998, p. 80). The effect of recession can also induce significant supply shifts. In times of recession, for the purpose of boosting the economy, the rate of interest is generally reduced. This automatically contributes to a significant rise in institutional lending as well as a boost in the production of various commodities in the economy. Hence, recession can also initiate significant shifts in the supply of commodities in the economy of a particular region (Mankiw, 2011, p. 745). It is observed that the prices of input variables and resources can have a significant influence on the supply of a particular commodity. In the case of rising input prices, there might be immense pressure on the manufacturer to cut down on various costs, which might contribute to a lower amount of production by the manufacturer.
Hence, this can automatically contribute to a movement in commodity supply in the market
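The mechanism the essay describes, a supply shift moving the equilibrium price, can be illustrated with a toy linear model. The demand and supply coefficients below are invented for illustration and are not taken from Mankiw.

```python
def equilibrium_price(a, b, c, d):
    """Solve Qd = Qs for linear demand Qd = a - b*P and supply Qs = c + d*P."""
    # a - b*P = c + d*P  =>  P = (a - c) / (b + d)
    return (a - c) / (b + d)

# Baseline market: demand intercept 100, slope 2; supply intercept 10, slope 3.
p0 = equilibrium_price(100, 2, 10, 3)   # (100 - 10) / 5 = 18.0

# A leftward supply shift (e.g. higher input prices cut the quantity
# supplied at every price): the supply intercept falls from 10 to 0.
p1 = equilibrium_price(100, 2, 0, 3)    # 100 / 5 = 20.0

print(p0, p1)  # 18.0 20.0 -> reduced supply raises the equilibrium price
```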


Extension 2 English Proposal Essay Example for Free

Extension 2 English Proposal Essay The audience of my major work will firstly be experienced English teachers, for marking. The story itself will be aimed at an audience of teen years and above, preferably interested in modern history. Readers of a younger age may lack the necessary understanding of the context of my piece, and thus may not be able to understand the decisions and feelings of the characters. The story will aim to incite passionate questions about the actions and experiences of my characters in the mind of the reader, as well as an emotional response based on the life and personal experiences of the reader. For instance, if they have experienced the death of a family member, they may identify with the emotions of my characters.

Purpose/Statement of Intent

After much deliberation, I propose to compose a prose fiction short story based on the experiences of a fictional guard in Dachau concentration camp during WWII. I came across this idea when studying the text Romulus, My Father from the English Advanced course. Part of this text describes the main character Romulus and his lover Christina living in Nazi Germany, and I was reading through articles on the internet regarding various leaders of the regime. This led me to reading letters between various concentration camp officials and Heinrich Himmler, the then leader of the SS and a high-ranking Nazi general. The writing is so simply put that it somewhat masks the cruel and indifferent intention of the letters. After reading these letters, I came to ask myself: how does a human being come to take these attitudes, and how could a person become seemingly so cruel and twisted, without any apparent conscience? What sort of life has this person lived, and what are their thoughts? Did they ever face a struggle in their minds over the decisions they made? From this, I devised a perspective for a piece: the perspective of a male guard in a concentration camp.
But not just any ruthless guard: I want to write about a rather troubled guard, a complex character who over the course of the piece begins to question the morality of his actions and thoughts. I want the reader to feel both anger towards the actions of this beast and, at different times, sympathy for his predicament. I want the reader to question their understanding of morality, and to put themselves in the shoes of this man.

Concept

The form will be a fictional short story, in prose. However, I plan to include real correspondence between military officials of the camp from the time. I want to use this to give the story some real meaning, and to remind the audience that people just like my character did exist. Most of the piece will be in the third person, but I will include monologues of the character's thought processes. During scenes of increased tension, for example when the man is ordered to shoot a prisoner, I will include the guard's own thoughts in between the dialogue. This will hopefully engage the reader in the scene, rather than leaving the reader a fly on the wall, with a barrier between the reader and the characters. I would like to vary the style of language used, from short and punchy for tension and emotion to long and reflective for the monologues.

Inspiration

I chose to write a story on the Holocaust because it is something that I would enjoy researching in detail, absorbing every scrap of information, and also I think it would be challenging to confront on my own terms. Reading the information I have already come across, I can't help but feel lucky to live in a free country and such a privileged life: a life where I am free to do what I choose, including writing this piece. I visited Dachau concentration camp in 2003, and this experience had a great effect on me. The feelings of disgust and general confusion as to how this could happen have probably led me to be so interested in studying the topic today.
I will use this experience to describe the surrounds of the setting and some of the experiences of the prisoners. The following is an extract of a speech given by Himmler regarding the extermination of the Jews. Reading it today, I find it strange and foreign. To better understand why Himmler would take this approach to the extermination of a whole race, I will have to research the culture and attitudes of his time in depth. The ideas held in this quote are also what I want to base my character's questioning on.

"I also want to mention a very difficult subject before you here, completely openly. It should be discussed amongst us, and yet, nevertheless, we will never speak about it in public. I am talking about the Jewish evacuation: the extermination of the Jewish people. It is one of those things that is easily said. The Jewish people are being exterminated, every Party member will tell you: perfectly clear, it's part of our plans, we're eliminating the Jews, exterminating them, ha!, a small matter." - Heinrich Himmler, 4 October 1943

Links to Advanced Extension

The idea for my piece came from researching the text Romulus, My Father in the Advanced course. The piece will tie in specifically with the concept of belonging in many ways. For example, my main character will be in a position where he is forced to belong to the regime, and to his position and rank: if he chooses to disagree with his superiors, or the regime itself, he will be shot. Also, the nature of the Holocaust relates directly to belonging, as anyone who belonged to the Jewish religion, was a gypsy, or was disabled was persecuted and often killed by the Nazi regime. The regime aimed to wipe out all those who, in the eyes of the officials, did not fit the requirements of a pure society based on the Aryan legend. However, the piece does not link to the topic of crime writing in the Extension 1 course in any way.
Research

I have read many articles and letters on various websites, which have been very detailed and very helpful in giving me a broad account of the events and people surrounding the Nazi regime. However, I will need to continue to research the Holocaust in greater depth, and in particular the events and nature of the Dachau camp. I also plan to read Anne Frank's The Diary of a Young Girl, to further understand the experience of life under the regime. This text should provide the experience of a victim, which I can use to contrast with the ideals and experience of the guard. Over the holidays I will visit the state library to find historical diaries and personal accounts of experiences in concentration camps.

Monday, October 14, 2019

Predicting Effects of Environmental Contaminants

1.1. Debunking some chemical myths

In October 2008, the Royal Society of Chemistry announced that it was offering £1 million to the first member of the public who could produce a 100% chemical-free material. This attempt to reclaim the word 'chemical' from the advertising and marketing industries that use it as a synonym for poison was a reaction to a decision of the Advertising Standards Authority to defend an advert perpetuating the myth that natural products are chemical free (Edwards 2008). Indeed, no material, regardless of its origin, is chemical free. A related common misconception is that chemicals made by nature are intrinsically good and, conversely, those manufactured by man are bad (Ottoboni 1991). There are many examples of toxic compounds produced by algae or other micro-organisms, venomous animals and plants, and even examples of environmental harm resulting from the presence of relatively benign natural compounds either in unexpected places or in unexpected quantities. It is therefore of prime importance to define what is meant by 'chemical' when referring to chemical hazards in this chapter and the rest of this book. The correct term to describe a chemical compound an organism may be exposed to, whether of natural or synthetic origin, is xenobiotic, i.e. a substance foreign to an organism (the term has also been used for transplants). A xenobiotic can be defined as a chemical which is found in an organism but which is not normally produced by it or expected to be present in it. It can also cover substances which are present in much higher concentrations than usual. A grasp of some of the fundamental principles of the scientific disciplines that underlie the characterisation of effects associated with exposure to a xenobiotic is required in order to understand the potential consequences of the presence of pollutants in the environment and to critically appraise the scientific evidence.
This chapter will attempt to briefly summarise some important concepts of basic toxicology and environmental epidemiology relevant in this context.

1.2. Concepts of Fundamental Toxicology

Toxicology is the science of poisons. A poison is commonly defined as 'any substance that can cause an adverse effect as a result of a physicochemical interaction with living tissue' (Duffus 2006). The use of poisons is as old as the human race, as a method of hunting or warfare as well as murder, suicide or execution. The evolution of this scientific discipline cannot be separated from the evolution of pharmacology, the science of cures. Theophrastus Phillippus Aureolus Bombastus von Hohenheim, more commonly known as Paracelsus (1493-1541), a physician contemporary with Copernicus, Martin Luther and da Vinci, is widely considered the father of toxicology. He challenged the ancient concepts of medicine based on the balance of the four humours (blood, phlegm, yellow and black bile) associated with the four elements, and believed illness occurred when an organ failed and poisons accumulated. This use of chemistry and chemical analogies was particularly offensive to the medical establishment of his day. He is famously credited with the dictum that 'the dose makes the poison', which still underlies present-day toxicology. In other words, all substances are potential poisons, since all can cause injury or death following excessive exposure. Conversely, this statement implies that all chemicals can be used safely if handled with appropriate precautions and exposure is kept below a defined limit, at which risk is considered tolerable (Duffus 2006). The concepts both of tolerable risk and of adverse effect illustrate the value judgements embedded in an otherwise scientific discipline relying on observable, measurable empirical evidence. What is considered abnormal or undesirable is dictated by society rather than science.
Any change from the normal state is not necessarily an adverse effect, even if statistically significant. An effect may be considered harmful if it causes damage, irreversible change or increased susceptibility to other stresses, including infectious disease. The stage of development or state of health of the organism may also influence the degree of harm.

1.2.1. Routes of exposure

Toxicity will vary depending on the route of exposure. There are three routes via which exposure to environmental contaminants may occur: ingestion, inhalation and skin absorption. Direct injection may also be used in environmental toxicity testing. Toxic and pharmaceutical agents generally produce the most rapid response and greatest effect when given intravenously, directly into the bloodstream. A descending order of effectiveness for environmental exposure routes would be inhalation, ingestion and skin absorption. Oral toxicity is most relevant for substances that might be ingested with food or drinks. Whilst it could be argued that this is generally under an individual's control, there are complex issues regarding information both about the occurrence of substances in food or water and about the current state of knowledge of associated harmful effects. Gases, vapours and dusts or other airborne particles are inhaled involuntarily (with the infamous exception of smoking). The inhalation of solid particles depends upon their size and shape. In general, the smaller the particle, the further into the respiratory tract it can go. A large proportion of airborne particles breathed through the mouth or cleared by the cilia of the lungs can enter the gut. Dermal exposure generally requires direct and prolonged contact with the skin. The skin acts as a very effective barrier against many external toxicants, but because of its great surface area (1.5-2 m2), some of the many diverse substances it comes into contact with may still elicit topical or systemic effects (Williams and Roberts 2000).
Whilst dermal exposure is most often relevant in occupational settings, it may nonetheless be pertinent in relation to bathing waters (ingestion is also an important route of exposure in this context). Voluntary dermal exposure related to the use of cosmetics raises the same questions regarding the adequate communication of current knowledge about potential effects as those related to food.

1.2.2. Duration of exposure

The toxic response will also depend on the duration and frequency of exposure. A single dose of a chemical may produce severe effects, whilst the same total dose given at several intervals may have little if any effect. An example would be to compare the effects of drinking four beers in one evening with those of drinking four beers over four days. Exposure duration is generally divided into four broad categories: acute, sub-acute, sub-chronic and chronic. Acute exposure to a chemical usually refers to a single exposure event or repeated exposures over a duration of less than 24 hours. Sub-acute exposure refers to repeated exposures for one month or less, sub-chronic exposure to continuous or repeated exposures for one to three months, or approximately 10% of an experimental species' lifetime, and chronic exposure to exposures for more than three months, usually six months to two years in rodents (Eaton and Klaassen 2001). Chronic exposure studies are designed to assess the cumulative toxicity of chemicals with potential lifetime exposure in humans. In real exposure situations it is generally very difficult to ascertain with any certainty the frequency and duration of exposure, but the same terms are used. For acute effects the time component of the dose is not important, as a high dose is responsible for these effects. However, although acute exposure to agents that are rapidly absorbed is likely to induce immediate toxic effects, this does not rule out the possibility of delayed effects that are not necessarily similar to those associated with chronic exposure, e.g.
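The four duration categories can be captured in a short helper. This is an illustrative sketch: the function name and the day-based cut-offs (months taken as 30 days) are my own choices based on the thresholds quoted above from Eaton and Klaassen (2001), not a standard API.

```python
def exposure_category(duration_days):
    """Classify an exposure duration into the four broad categories.

    Cut-offs follow the text: acute < 24 h, sub-acute <= 1 month,
    sub-chronic 1-3 months, chronic > 3 months (a month = 30 days here).
    """
    if duration_days < 1:
        return "acute"
    if duration_days <= 30:
        return "sub-acute"
    if duration_days <= 90:
        return "sub-chronic"
    return "chronic"
```

For example, a two-month feeding study (about 60 days) falls in the sub-chronic band, while a two-year rodent study is chronic.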
the latency between exposure to a carcinogenic substance and the onset of certain cancers. It is worth mentioning here that the effect of exposure to a toxic agent may be entirely dependent on the timing of exposure; in other words, the long-term effects of exposure to a toxic agent during a critically sensitive stage of development may differ widely from those seen if an adult organism is exposed to the same substance. Acute effects are almost always the result of accidents; otherwise, they may result from criminal poisoning or self-poisoning (suicide). Conversely, whilst chronic exposure to a toxic agent is generally associated with long-term low-level chronic effects, this does not preclude the possibility of some immediate (acute) effects after each administration. These concepts are closely related to the mechanisms of metabolic degradation and excretion of ingested substances and are best illustrated by Figure 1.1: line A, a chemical with very slow elimination; line B, a chemical with a rate of elimination equal to the frequency of dosing; line C, a rate of elimination faster than the dosing frequency. The blue-shaded area represents the concentration at the target site necessary to elicit a toxic response.

1.2.3. Mechanisms of toxicity

The interaction of a foreign compound with a biological system is two-fold: there is the effect of the organism on the compound (toxicokinetics) and the effect of the compound on the organism (toxicodynamics). Toxicokinetics relates to the delivery of the compound to its site of action, including absorption (transfer from the site of administration into the general circulation), distribution (via the general circulation into and out of the tissues), and elimination (from the general circulation by metabolism or excretion). The target tissue refers to the tissue where a toxicant exerts its effect, which is not necessarily where the concentration of the toxic substance is highest.
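The elimination regimes of Figure 1.1 can be sketched with a one-compartment, first-order elimination model. This is a simplified illustration with arbitrary units and hypothetical parameter values, not a validated kinetic model.

```python
import math

def levels_after_each_dose(dose, k_elim, interval, n_doses):
    """Body burden immediately after each repeated dose, assuming
    first-order (exponential) elimination between doses."""
    c = 0.0
    levels = []
    for _ in range(n_doses):
        # Decay since the previous dose, then add the new dose.
        c = c * math.exp(-k_elim * interval) + dose
        levels.append(c)
    return levels

# Line A: elimination much slower than dosing -> concentration accumulates.
slow = levels_after_each_dose(dose=1.0, k_elim=0.01, interval=1.0, n_doses=10)
# Line C: elimination much faster than dosing -> levels plateau immediately.
fast = levels_after_each_dose(dose=1.0, k_elim=5.0, interval=1.0, n_doses=10)
```

Which regime applies determines whether repeated low doses eventually cross the toxic threshold shaded in Figure 1.1, even though no single dose would.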
Many halogenated compounds, such as polychlorinated biphenyls (PCBs) or flame retardants such as polybrominated diphenyl ethers (PBDEs), are known to bioaccumulate in body fat stores. Whether such sequestration processes are actually protective to the individual organism, i.e. by lowering the concentration of the toxicant at the site of action, is not clear (O'Flaherty 2000). In an ecological context, however, such bioaccumulation may serve as an indirect route of exposure for organisms at higher trophic levels, thereby potentially contributing to biomagnification through the food chain. Absorption of any compound that has not been injected directly intravenously will entail transfer across membrane barriers before it reaches the systemic circulation, and the efficiency of absorption processes is highly dependent on the route of exposure. It is also important to note that distribution and elimination, although often considered separately, take place simultaneously. Elimination itself comprises two kinds of processes, excretion and biotransformation, which also take place simultaneously. Elimination and distribution are not independent of each other, as effective elimination of a compound will prevent its distribution in peripheral tissues, whilst conversely, wide distribution of a compound will impede its excretion (O'Flaherty 2000). Kinetic models attempt to predict the concentration of a toxicant at the target site from the administered dose. Although the ultimate toxicant, i.e. the chemical species that induces structural or functional alterations resulting in toxicity, is often the compound administered (the parent compound), it can also be a metabolite of the parent compound generated by biotransformation processes, i.e. toxication rather than detoxication (Timbrell 2000; Gregus and Klaassen 2001). The liver and kidneys are the most important excretory organs for non-volatile substances, whilst the lungs are active in the excretion of volatile compounds and gases.
Other routes of excretion include the skin, hair, sweat, nails and milk. Milk may be a major route of excretion for lipophilic chemicals due to its high fat content (O'Flaherty 2000). Toxicodynamics is the study of the toxic response at the site of action, including the reactions with and binding to cell constituents, and the biochemical and physiological consequences of these actions. Such consequences may be manifested and observed at the molecular or cellular level, at the target organ, or in the whole organism. Therefore, although toxic responses have a biochemical basis, the study of toxic response is generally subdivided either by the organ in which toxicity is observed, including hepatotoxicity (liver), nephrotoxicity (kidney), neurotoxicity (nervous system) and pulmonotoxicity (lung), or by the type of toxic response, including teratogenicity (abnormalities of physiological development), immunotoxicity (immune system impairment), mutagenicity (damage to genetic material) and carcinogenicity (cancer causation or promotion). The choice of the toxicity endpoint to observe in experimental toxicity testing is therefore of critical importance. In recent years, rapid advances in the biochemical sciences and technology have resulted in the development of bioassay techniques that can contribute invaluable information regarding toxicity mechanisms at the cellular and molecular level. However, the extrapolation of such information to predict effects in an intact organism for the purpose of risk assessment is still in its infancy (Gundert-Remy et al. 2005).

1.2.4. Dose-response relationships

The theory of dose-response relationships is based on the assumptions that the activity of a substance is not an inherent quality but depends on the dose an organism is exposed to, i.e. all substances are inactive below a certain threshold and active above that threshold, and that dose-response relationships are monotonic, i.e. the response rises with the dose.
Toxicity may be detected either as an all-or-nothing phenomenon, such as the death of the organism, or as a graded response, such as the hypertrophy of a specific organ. The dose-response relationship involves correlating the severity of the response with exposure (the dose). Dose-response relationships for all-or-nothing (quantal) responses are typically S-shaped, reflecting the fact that the sensitivity of individuals in a population generally exhibits a normal or Gaussian distribution. Biological variation in susceptibility, with fewer individuals being either hypersusceptible or resistant at the two ends of the curve and the majority responding between these two extremes, gives rise to a bell-shaped normal frequency distribution. When plotted as a cumulative frequency distribution, a sigmoid dose-response curve is observed (Figure 1.2). Studying dose response, and developing dose-response models, is central to determining safe and hazardous levels. The simplest measure of toxicity is lethality, and determination of the median lethal dose, the LD50, is usually the first toxicological test performed with new substances. The LD50 is the dose at which a substance is expected to cause the death of half of the experimental animals, and it is derived statistically from dose-response curves (Eaton and Klaassen 2001). LD50 values are the standard for comparison of acute toxicity between chemical compounds and between species. Some values are given in Table 1.1. It is important to note that the higher the LD50, the less toxic the compound. Similarly, the EC50, the median effective dose, is the quantity of the chemical that is estimated to have an effect in 50% of the organisms. However, median doses alone are not very informative, as they do not convey any information on the shape of the dose-response curve. This is best illustrated by Figure 1.3.
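The sigmoid quantal curve and the meaning of the LD50 can be illustrated with a log-logistic function, a common parametric sketch of such curves; the function name and parameters below are illustrative choices, not a quantity defined in the text.

```python
def fraction_responding(dose, ld50, slope):
    """Fraction of a population showing the quantal response at a given
    dose (dose > 0), using a log-logistic curve; 'slope' sets steepness."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# By construction, exactly half the population responds at the LD50.
half = fraction_responding(10.0, ld50=10.0, slope=2.0)  # 0.5
```

Doubling the dose above the LD50 pushes the response towards 1, and the larger the slope parameter, the more abruptly the population tips from unaffected to affected, which is exactly why two chemicals with the same LD50 can still pose very different risks.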
While toxicant A appears more toxic than toxicant B on the basis of its lower LD50, toxicant B will start affecting organisms at lower doses (it has a lower threshold), while the steeper slope of the dose-response curve for toxicant A means that once individuals become overexposed (exceed the threshold dose), the increase in response occurs over much smaller increments in dose.

Low dose responses

The classical paradigm for extrapolating dose-response relationships at low doses is based on the concept of a threshold for non-carcinogens, whereas it assumes that there is no threshold for carcinogenic responses, for which a linear relationship is hypothesised (Figures 1.4 and 1.5). The NOAEL (No Observed Adverse Effect Level) is the exposure level at which there is no statistically or biologically significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control. The NOAEL for the most sensitive test species and the most sensitive indicator of toxicity is usually employed for regulatory purposes. The LOAEL (Lowest Observed Adverse Effect Level) is the lowest exposure level at which there is a statistically or biologically significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control. The main criticism of the NOAEL and LOAEL is that they are dependent on study design, i.e. the dose groups selected and the number of individuals in each group. Statistical methods of deriving the concentration that produces a specific effect, the ECx, or a benchmark dose (BMD), the statistical lower confidence limit on the dose that produces a defined response (the benchmark response or BMR), are increasingly preferred. Understanding the risk that environmental contaminants pose to human health requires the extrapolation of limited data from animal experimental studies to the low doses typically encountered in the environment.
Such extrapolation of dose-response relationships at low doses is the source of much controversy. Recent advances in the statistical analysis of very large populations exposed to ambient concentrations of environmental pollutants have, however, not observed thresholds for cancer or non-cancer outcomes (White et al. 2009). The actions of chemical agents are triggered by complex molecular and cellular events that may lead to cancer and non-cancer outcomes in an organism. These processes may be linear or non-linear at an individual level. A thorough understanding of the critical steps in a toxic process may help refine current assumptions about thresholds (Boobis et al. 2009). The dose-response curve, however, describes the response or variation in sensitivity of a population. Biological and statistical attributes such as population variability, additivity to pre-existing conditions, or diseases induced at background exposure will tend to smooth and linearise the dose-response relationship, obscuring individual thresholds.

Hormesis

Dose-response relationships for substances that are essential for normal physiological function and survival are actually U-shaped. At very low doses, adverse effects are observed due to a deficiency. As the dose of such an essential nutrient is increased, the adverse effect is no longer detected and the organism can function normally in a state of homeostasis. Abnormally high doses, however, can give rise to a toxic response. This response may be qualitatively different, and the toxic endpoint measured at very low and very high doses is not necessarily the same. There is evidence that non-essential substances may also exert an effect at very low doses (Figure 1.6). Some authors have argued that hormesis ought to be the default assumption in the risk assessment of toxic substances (Calabrese and Baldwin 2003). Whether such low dose effects should be considered stimulatory or beneficial is controversial.
Further, the potential implications of the concept of hormesis for the risk management of combinations of the wide variety of environmental contaminants present at low doses, to which individuals of variable sensitivity may be exposed, are at best unclear.

1.2.5. Chemical interactions

In regulatory hazard assessment, chemical hazards are typically considered on a compound-by-compound basis, the possibility of chemical interactions being accounted for by the use of safety or uncertainty factors. Mixture effects still represent a challenge for the risk management of chemicals in the environment, as the presence of one chemical may alter the response to another chemical. The simplest interaction is additivity: the effect of two or more chemicals acting together is equivalent to the sum of the effects of each chemical in the mixture when acting independently. Synergism is more complex and describes a situation where the presence of both chemicals causes an effect that is greater than the sum of their effects when acting alone. In potentiation, a substance that does not produce a specific toxicity on its own increases the toxicity of another substance when both are present. Antagonism is the principle upon which antidotes are based, whereby a chemical can reduce the harm caused by a toxicant (James et al. 2000; Duffus 2006). Mathematical illustrations and examples of known chemical interactions are given in Table 1.2.

Table 1.2. Mathematical representations of chemical interactions (reproduced from James et al., 2000)
Additive: 2 + 3 = 5 (e.g. organophosphate pesticides)
Synergistic: 2 + 3 = 20 (e.g. cigarette smoking + asbestos)
Potentiation: 2 + 0 = 10 (e.g. alcohol + carbon tetrachloride)
Antagonism: 6 + 6 = 8, 5 + (-5) = 0, or 10 + 0 = 2 (e.g. toluene + benzene; caffeine + alcohol; dimercaprol + mercury)

There are four main ways in which chemicals may interact (James et al. 2000): 1.
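The hypothetical arithmetic of Table 1.2 can be turned into a toy classifier. This is only a sketch of the table's logic under simplifying assumptions (non-negative individual effects); real mixture assessment uses dose- or response-addition models, and this sketch cannot distinguish every case in the table, e.g. the 5 + (-5) = 0 antagonism example.

```python
def interaction_type(effect_a, effect_b, joint_effect, tol=1e-9):
    """Label a two-chemical interaction by comparing the observed joint
    effect with the sum of the individual effects (after Table 1.2)."""
    expected = effect_a + effect_b
    if abs(joint_effect - expected) <= tol:
        return "additive"
    if joint_effect > expected:
        # Potentiation: one chemical has no specific effect on its own.
        return "potentiation" if min(effect_a, effect_b) == 0 else "synergistic"
    return "antagonistic"

interaction_type(2, 3, 5)    # additive (organophosphate pesticides)
interaction_type(2, 3, 20)   # synergistic (smoking + asbestos)
interaction_type(2, 0, 10)   # potentiation (alcohol + carbon tetrachloride)
interaction_type(6, 6, 8)    # antagonistic (toluene + benzene)
```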
Functional: both chemicals have an effect on the same physiological function. 2. Chemical: a chemical reaction between the two compounds affects the toxicity of one or both compounds. 3. Dispositional: the absorption, metabolism, distribution or excretion of one substance is increased or decreased by the presence of the other. 4. Receptor-mediated: when two chemicals have differing affinity and activity for the same receptor, competition for the receptor will modify the overall effect.

1.2.6. Relevance of animal models

A further complication in extrapolating the results of toxicological experimental studies to humans, or indeed to other untested species, relates to the anatomical, physiological and biochemical differences between species. This paradoxically requires some prior knowledge of the mechanism of toxicity of a chemical and of the comparative physiology of the different test species. When adverse effects are detected in screening tests, they should be interpreted with the relevance of the chosen animal model in mind. For the derivation of safe levels, safety or uncertainty factors are again usually applied to account for the uncertainty surrounding inter-species differences (James et al. 2000; Sullivan 2006).

1.2.7. A few words about doses

When discussing dose-response, it is also important to understand which dose is being referred to, and to differentiate between concentrations measured in environmental media and the concentration that will elicit an adverse effect at the target organ or tissue. The exposure dose in a toxicological testing setting is generally known, or can readily be derived or measured from concentrations in media and average consumption (of food or water, for example) (Figure 1.7).
Whilst toxicokinetics helps to develop an understanding of the relationship between the internal dose and a known exposure dose, relating concentrations in environmental media to the actual exposure dose, often via multiple pathways, is the realm of exposure assessment.

1.2.8. Other hazard characterisation criteria

Before continuing further, it is important to clarify the difference between hazard and risk. Hazard is defined as the potential to produce harm; it is therefore an inherent, qualitative attribute of a given chemical substance. Risk, on the other hand, is a quantitative measure of the magnitude of the hazard and the probability of it being realised. Hazard assessment is therefore the first step of risk assessment, followed by exposure assessment and finally risk characterisation. Toxicity is not the sole criterion evaluated for hazard characterisation purposes. Some chemicals have been found in the tissues of animals in the Arctic, for example, where these substances of concern have never been used or produced. The realisation that some pollutants were able to travel long distances across national borders because of their persistence, and to bioaccumulate through the food web, led to the consideration of such inherent properties of organic compounds alongside their toxicity for the purpose of hazard characterisation. Persistence is the result of resistance to environmental degradation mechanisms such as hydrolysis, photodegradation and biodegradation. Hydrolysis only occurs in the presence of water, photodegradation in the presence of UV light, and biodegradation is primarily carried out by micro-organisms. Degradation is related to water solubility, itself inversely related to lipid solubility; persistence therefore tends to be correlated with lipid solubility (Francis 1994). The persistence of inorganic substances has proven more difficult to define, as they cannot be degraded to carbon and water.
Chemicals may accumulate in environmental compartments and constitute environmental sinks that could be re-mobilised and lead to effects. Further, whilst a substance may accumulate in one species without adverse effects, it may be toxic to that species' predator(s). Bioconcentration refers to accumulation of a chemical from the surrounding environment rather than specifically through food uptake. Conversely, biomagnification refers to uptake from food, without consideration of uptake through the body surface. Bioaccumulation integrates both paths, surrounding medium and food. Ecological magnification refers to an increase in concentration through the food web from lower to higher trophic levels. Again, accumulation of organic compounds generally involves transfer from a hydrophilic to a hydrophobic phase and correlates well with the n-octanol/water partition coefficient (Herrchen 2006). The persistence and bioaccumulation of a substance are evaluated by standardised OECD tests. Criteria for the identification of persistent, bioaccumulative and toxic substances (PBT), and very persistent and very bioaccumulative substances (vPvB), as defined in Annex XIII of the European Regulation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) (European Union 2006), are given in Table 1.3. To be classified as a PBT or vPvB substance, a given compound must fulfil all criteria. Table 1.3.
REACH criteria for identifying PBT and vPvB chemicals

Persistence (PBT), any of: half-life > 60 days in marine water; > 60 days in fresh or estuarine water; > 180 days in marine sediment; > 120 days in fresh or estuarine sediment; > 120 days in soil.
Persistence (vPvB), any of: half-life > 60 days in marine, fresh or estuarine water; > 180 days in marine, fresh or estuarine sediment; > 180 days in soil.
Bioaccumulation (PBT): bioconcentration factor (BCF) > 2000. Bioaccumulation (vPvB): BCF > 5000.
Toxicity (PBT), any of: chronic no-observed effect concentration (NOEC) < 0.01 mg/l; the substance is classified as carcinogenic (category 1 or 2), mutagenic (category 1 or 2) or toxic for reproduction (category 1, 2 or 3); there is other evidence of endocrine disrupting effects.

1.3. Some notions of Environmental Epidemiology

A complementary, observational approach to the study of scientific evidence of associations between environment and disease is epidemiology. Epidemiology can be defined as "the study of how often diseases occur and why, based on the measurement of disease outcome in a study sample in relation to a population at risk" (Coggon et al. 2003). Environmental epidemiology refers to the study of patterns of disease and health related to exposures that are exogenous and involuntary. Such exposures generally occur via the air, water, diet or soil, and include physical, chemical and biological agents. The extent to which environmental epidemiology is considered to include the social, political, cultural, and engineering or architectural factors affecting human contact with such agents varies between authors.
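The "must fulfil all criteria" rule for PBT classification can be expressed as a simple screening check. The thresholds below are taken from Table 1.3 as reproduced in the text; this is an illustration only, not regulatory software, and it covers only the chronic-NOEC route to the toxicity criterion (ignoring the carcinogen/mutagen/endocrine routes).

```python
# Persistence half-life thresholds (days) per compartment, per Table 1.3 (PBT column).
PERSISTENCE_THRESHOLDS_DAYS = {
    "marine water": 60,
    "fresh or estuarine water": 60,
    "marine sediment": 180,
    "fresh or estuarine sediment": 120,
    "soil": 120,
}

def meets_pbt_criteria(half_life_days, compartment, bcf, chronic_noec_mg_l):
    """True only if the substance is persistent AND bioaccumulative AND toxic."""
    persistent = half_life_days > PERSISTENCE_THRESHOLDS_DAYS[compartment]
    bioaccumulative = bcf > 2000
    toxic = chronic_noec_mg_l < 0.01
    return persistent and bioaccumulative and toxic
```

A substance with a 200-day soil half-life, a BCF of 5000 and a chronic NOEC of 0.001 mg/l would screen as PBT; halving its half-life to 100 days, or its BCF to 1000, fails the conjunction.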
In some contexts, the environment can refer to all non-genetic factors, although dietary habits are generally excluded, despite the fact that some deficiency diseases are environmentally determined and that nutritional status may also modify the impact of an environmental exposure (Steenland and Savitz 1997; Hertz-Picciotto 1998). Most environmental epidemiology is concerned with endemics, in other words acute or chronic disease occurring at relatively low frequency in the general population due partly to a common and often unsuspected exposure, rather than with epidemics, or acute outbreaks of disease affecting a limited population shortly after the introduction of an unusual known or unknown agent. Measuring such low-level exposure of the general public may be difficult if not impossible, particularly when seeking historical estimates of exposure to predict future disease. Estimating very small changes in the incidence of health effects of low-level, common, multiple exposures on common diseases with multifactorial etiologies is particularly difficult, because greater variability may often be expected for other reasons, and environmental epidemiology has to rely on natural experiments that, unlike controlled experiments, are subject to confounding by other, often unknown, risk factors. It may nonetheless be of importance from a public health perspective, as small effects in a large population can have large attributable risks if the disease is common (Steenland and Savitz 1997; Coggon et al. 2003).

1.3.1. Definitions

What is a case? The definition of a case generally requires a dichotomy, i.e. that for a given condition people can be divided into two discrete classes: the affected and the non-affected. It increasingly appears that diseases exist in a continuum of severity within a population rather than as an all-or-nothing phenomenon. For practical reasons, a cut-off point to divide the diagnostic continuum into 'cases' and 'non-cases' is therefore required.
This can be done on a statistical, clinical, prognostic or operational basis. On a statistical basis, the 'norm' is often defined as within two standard deviations of the age-specific mean, thereby arbitrarily fixing the frequency of abnormal values at around 5% in every population. Moreover, it should be noted that what is usual is not necessarily good. A clinical case may be defined by the level of a variable above which symptoms and complications have been found to become more frequent. On a prognostic basis, some clinical findings may carry an adverse prognosis yet be symptomless. When none of the other approaches is satisfactory, an operational threshold will need to be defined, e.g. based on a threshold for treatment (Coggon et al. 2003).

Incidence, prevalence and mortality

The incidence of a disease is the rate at which new cases occur in a population during a specified period:

Incidence = number of new cases / (population at risk x period of observation)

The prevalence of a disease is the proportion of the population that are cases at a given point in time. This measure is appropriate only in relatively stable conditions and is unsuitable for acute disorders. Even in a chronic disease, the manifestations are often intermittent, and a point prevalence will tend to underestimate the frequency of the condition. A better measure, when possible, is the period prevalence, defined as the proportion of a population that are cases at any time within a stated period.
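The two measures can be computed directly; the function names and the sample figures are illustrative, not drawn from the text.

```python
def incidence_rate(new_cases, person_time_at_risk):
    """Incidence: new cases per unit of population-time at risk."""
    return new_cases / person_time_at_risk

def point_prevalence(existing_cases, population):
    """Prevalence: proportion of the population that are cases at one time."""
    return existing_cases / population

# 50 new cases over 10,000 person-years -> 0.005 cases per person-year.
rate = incidence_rate(50, 10_000)
# 200 current cases in a population of 10,000 -> prevalence of 0.02.
prev = point_prevalence(200, 10_000)
```

Note the different denominators: incidence divides by population-time (so its units depend on the chosen period), whereas prevalence is a dimensionless proportion at a single point in time.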

Sunday, October 13, 2019

Les Gens De Couleur Libres, The Free People of Color in New Orleans Essay

Shattered dreams. Broken promises. They were hung between freedom and slavery. They struggled to find a different kind of freedom and independence, where justice had yet to exist and racism wasn’t just a part of life, but what life was all about.

New Orleans

New Orleans is a city in southern Louisiana, located on the Mississippi River. Most of the city is situated on the east bank, between the river and Lake Pontchartrain to the north. Because it was built on a great turn of the river, it is known as the Crescent City. New Orleans was founded in 1718 by Jean Baptiste Le Moyne, sieur de Bienville, and named for the regent of France, Philippe II, duc d'Orleans. It remained a French colony until 1763, when it was surrendered to the Spanish. In 1800, Spain ceded it back to France; in 1803, New Orleans, along with the entire Louisiana Purchase, was sold by Napoleon I to the United States.

Like the early American settlements along Massachusetts Bay and Chesapeake Bay, New Orleans served as a distinctive cultural gateway to North America, where people from Europe and Africa initially intertwined their lives and customs with those of the native inhabitants of the New World. The resulting way of life differed dramatically from the culture that was spawned in the English colonies of North America. New Orleans is a place where Africans, Indians and European settlers shared their cultures and blended together. Encouraged by the French government, this strategy for producing a tough, durable culture in a difficult place marked New Orleans as different and special, and it still continues to distinguish the city today.

The Africans

African Americans make up about half of the population of New Orleans to date. How did this come about? Well, during the eighteenth century, Africans came to the city directly from West Africa. The majority passed neither through the West Indies nor South America, and so they developed complicated relations with both the Indians and the Europeans.
The Spanish rulers (1765-1802) reached out to the black population for support against the French settlers; in doing so, they allowed many to buy their own freedom. These free black settlers, along with Creole slaves, formed the earliest black urban settlement in North America.

The Creoles

A Creole is a person born in the West Indies or Spanish America but of European, usually Spanish, ancestry. And it... ...dren, noisy with tinkling bells and dressed in masks and gay dominoes, come out of their houses and visit from door to door in their neighborhood. Later in the day there is a street parade, and another one at night. The Mardi Gras gaieties end with the most brilliant ball of the season.

In conclusion, I would like to repeat that from the earliest days of New Orleans history, free persons of color have coexisted with those of European extraction. They did not always get along, but that was simply a way of life, one which many had to either accept or fight against. The free people of color, although free, did not have all of the rights of their white counterparts. As Charles E. O’Neill, in Our People and Our History, defined it: “They shared neither the privileges of the master class nor the degradation of the slave. They stood between -- or rather apart -- sharing the cultivated tastes of the upper caste and the painful humiliation attached to the race of the enslaved.”

SOURCES

Our People and Our History by Rodolphe Lucien Desdunes and Dorothea Olga McCants.
Creole New Orleans: Race and Americanization by Arnold R. Hirsch and Joseph Logsdon.
http://www.wholehostno.com/nohistory.html