Search results for: time-domain waveguide mode
159 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [tonic accent languages] are not suitable for and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows the non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its counterpart in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folksong of a world tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speaking of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization produced a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you command the machine to produce a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody. Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 70
158 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, although several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than the reduction speed for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented. There is a higher loss of productivity when the amount of resources is restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
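As a rough, hypothetical illustration of the kind of reactive-overtime simulation described above (a toy sketch, not the authors' model), the Python snippet below assumes a project of randomly sized activities, a single change arriving at a given fraction of the schedule, an overtime efficiency penalty and a rework fraction; all parameter names and the loss formula are illustrative assumptions.

```python
import random

def simulate_productivity_loss(n_activities, change_time_frac, overtime_efficiency=0.8,
                               rework_frac=0.2, trials=1000, seed=42):
    """Toy Monte Carlo: fraction of planned work-hours lost to overtime inefficiency
    and rework after a single project change. Purely illustrative assumptions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        durations = [rng.uniform(0.5, 1.5) for _ in range(n_activities)]  # planned effort per activity
        total = sum(durations)
        change_point = change_time_frac * total
        # activities still remaining when the change hits must be accelerated in overtime
        done, remaining = 0.0, 0.0
        for d in durations:
            if done + d <= change_point:
                done += d
            else:
                remaining += d
        rework = rework_frac * remaining                     # extra work caused by the change
        overtime_hours = (remaining + rework) / overtime_efficiency
        actual_hours = done + overtime_hours
        losses.append((actual_hours - total) / total)
    return sum(losses) / len(losses)

if __name__ == "__main__":
    for n in (30, 120):
        for t in (0.25, 0.75):
            loss = simulate_productivity_loss(n, t)
            print(f"activities={n:4d} change at {t:.0%} of schedule -> mean loss {loss:.1%}")
```

Under these assumptions the loss depends mainly on how late the change arrives; capturing the effect of the number of activities reported above would require modelling activity interdependencies and resource constraints, which this toy deliberately omits.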
Procedia PDF Downloads 239
157 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
Exposure of galena (PbS), teallite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water was reported to contain lead as high as 69.46 mg/L. Conventional treatment does not easily remove lead from water. A promising and emerging treatment technology for lead removal is the application of the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results obtained indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm², an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and electrode arrangement BP-S. The above experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a response surface methodology (RSM). The four factors optimized were the current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that an increase in current density and LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, alongside the other factors. Applying the predicted parameters, a lead removal efficiency of 100% was actualized. The electrode and energy consumptions were 0.192 kg/m³ and 2.56 kWh/m³, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption dynamics were also random, spontaneous, and endothermic; a higher process temperature enhances adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and showing good adsorption efficiency by the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead using the electrocoagulation method. Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
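For readers unfamiliar with the models named above, the short sketch below shows how a pseudo-second-order kinetic curve and a Freundlich isotherm are commonly fitted with non-linear least squares; the numerical data are made-up placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetics: q(t) = k2*qe^2*t / (1 + k2*qe*t)
def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# Freundlich isotherm: qe = KF * Ce^(1/n)
def freundlich(Ce, KF, n):
    return KF * Ce**(1.0 / n)

# Placeholder data (mg/g and mg/L) purely for illustration
t = np.array([5, 10, 20, 30, 45, 60.0])          # contact time, min
q_t = np.array([1.1, 1.8, 2.6, 3.0, 3.3, 3.5])   # adsorbed Pb per unit adsorbent
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # equilibrium Pb concentration
q_e = np.array([1.2, 1.9, 2.8, 4.6, 6.5])        # equilibrium uptake

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[4.0, 0.01])
(KF_fit, n_fit), _ = curve_fit(freundlich, Ce, q_e, p0=[1.0, 2.0])

print(f"pseudo-second-order: qe = {qe_fit:.2f}, k2 = {k2_fit:.4f}")
print(f"Freundlich:          KF = {KF_fit:.2f}, 1/n = {1/n_fit:.2f}")
```

Comparing the residuals (or R²) of competing kinetic and isotherm fits is the usual way to conclude, as the abstract does, that one model describes the data better than another.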
Procedia PDF Downloads 149
156 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time, it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and determine whether this travel is different from that of the general public. The research investigation used Florida's statewide continuous-count station traffic volumes, which were then compared between years to identify locations where traffic was moving differently during the evacuation. The data was then used to identify days on which traffic was significantly different between years. While the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. Data collection and management obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification. The data was then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different, and on which dates, statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years. Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the research results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers are driving longer distances and possibly longer hours before hurricanes. Another significant finding of this research was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles. This finding suggests that commercial vehicles are perhaps evacuating in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and explore additional factors that may influence freight movements during evacuations. Keywords: evacuation, freight, travel time
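As a minimal illustration of the two-sample, two-tailed t-test used in the third phase, the sketch below compares hypothetical daily volumes from one count station for the same vehicle class in two years; the numbers are assumptions, not project data.

```python
import numpy as np
from scipy import stats

# Hypothetical daily truck volumes at one continuous-count station (same calendar days, two years)
volumes_2017 = np.array([1210, 1185, 1340, 1422, 1398, 1275, 1190])
volumes_2018 = np.array([1322, 1301, 1455, 1510, 1489, 1377, 1298])

# Two-sample, two-tailed t-test (Welch's version avoids assuming equal variances)
t_stat, p_value = stats.ttest_ind(volumes_2017, volumes_2018, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Volumes differ significantly between years at this station.")
else:
    print("No significant difference detected between years at this station.")
```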
Procedia PDF Downloads 70
155 Investigation and Comprehensive Benefit Analysis of 11 Typical Poplar-Based Agroforestry Models Based on Analytic Hierarchy Process in Anhui Province, Eastern China
Authors: Zhihua Cao, Hongfei Zhao, Zhongneng Wu
Abstract:
The development of poplar-based agroforestry is necessary due to the influence of the timber market environment in China; it can promote the coordinated development of forestry and agriculture and yield remarkable ecological, economic and social benefits. A survey of the main agroforestry models in the main poplar planting areas, the Huaibei plain and the plain along the Yangtze River, was carried out. 11 typical management models of poplar were selected and summed up: pure poplar forest, poplar-rape-soybean, poplar-wheat-soybean, poplar-rape-cotton, poplar-wheat, poplar-chicken, poplar-duck, poplar-sheep, poplar-Agaricus blazei, poplar-oil peony, and poplar-fish, represented by M0-M10, respectively. 12 indexes related to economic, ecological and social benefits (annual average cost, net income, ratio of output to investment, payback period of investment, land utilization ratio, utilization ratio of light energy, improvement and system stability of the ecological and production environment, product richness, labor capacity, cultural quality of the labor force, and sustainability) were screened out to carry out a comprehensive evaluation and analysis of the 11 typical agroforestry models based on the analytic hierarchy process (AHP). The results showed that the economic benefit of each agroforestry model was in the order: M8 > M6 > M9 > M7 > M5 > M10 > M4 > M1 > M2 > M3 > M0. The economic benefit of the poplar-A. blazei model was the highest (332,800 RMB/hm²), followed by the poplar-duck and poplar-oil peony models (109,820 RMB/hm² and 57,226 RMB/hm², respectively). The order of comprehensive benefit was: M8 > M4 > M9 > M6 > M1 > M2 > M3 > M7 > M5 > M10 > M0. The economic benefit and comprehensive benefit of each agroforestry model were higher than those of the pure poplar forest. The comprehensive benefit of the poplar-A. blazei model was the highest, and that of the poplar-wheat model ranked second, although its economic benefit was not high. Next were the poplar-oil peony and poplar-duck models. It is suggested that the poplar-wheat model should be adopted in the plain along the Yangtze River, and the whole-cycle mode of poplar-grain, poplar-A. blazei, or poplar-oil peony should be adopted in the Huaibei plain, northern Anhui. Furthermore, wheat, rape, and soybean are the main crops before the stand is closed; the agroforestry model of edible fungus or Chinese herbal medicine can be carried out once the stand has closed in order to maximize the comprehensive benefit. The purpose of this paper is to provide a reference for forest farmers in the selection of poplar agroforestry models in the future and to provide basic data for the sustainable and efficient study of poplar agroforestry in Anhui province, eastern China. Keywords: agroforestry, analytic hierarchy process (AHP), comprehensive benefit, model, poplar
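As a generic illustration of how AHP priority weights can be derived (not the authors' actual judgment matrices), the sketch below computes weights for three hypothetical benefit criteria from a pairwise comparison matrix via the principal eigenvector and checks the consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (economic, ecological, social benefit); entries follow Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI
n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
CR = CI / RI

print("weights:", np.round(weights, 3))
print(f"lambda_max = {lambda_max:.3f}, CI = {CI:.3f}, CR = {CR:.3f} (acceptable if < 0.10)")
```

Weights obtained this way would then be combined with each model's scores on the 12 indexes to produce a comprehensive benefit ranking of the kind reported above.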
Procedia PDF Downloads 166
154 Potential Cross-Protection Roles of Chitooligosaccharide in Alleviating Cd Toxicity in Edible Rape (Brassica rapa L.)
Authors: Haiying Zong, Yi Yuan, Pengcheng Li
Abstract:
Cadmium (Cd), one of the toxic heavy metals, has high solubility and mobility in agricultural soils and is readily taken up by roots and transported to the vegetative and reproductive organs, which can cause deleterious effects on crop yield and quality. Excess Cd in plants can interfere with many metabolic processes, such as photosynthesis, transpiration, respiration or nutrient homeostasis. Generally, the main methods to reduce Cd accumulation in plants are to decrease the concentration of Cd in the soil solution through reduction of Cd influx into the soil system, site selection, and management practices. However, these approaches can be very costly and consume a lot of energy. Therefore, it is critical to develop effective approaches to reduce the Cd concentration in plants. It has been proved that chitooligosaccharide (COS) can enhance a plant's tolerance to abiotic stress, including drought stress, salinity stress, and toxic metal stress. However, so far little is known about whether foliar application of COS modulates Cd-induced toxicity in plants. The metal detoxification processes of plants treated with COS also remain unclear. In this study, edible rape (Brassica rapa L.), one of the most widely consumed leafy vegetables, was selected as an experimental model plant. The effect of foliar application of COS on reducing Cd accumulation in edible rape was investigated. Moreover, the Cd subcellular distribution pattern in response to Cd stress in rape plants sprayed with COS was further tested in order to explore the potential detoxification mechanisms in plants. The results demonstrated that spraying COS at different concentrations (25, 50, 100 and 200 mg L-1) has diverse functions, including promoting growth, enhancing chlorophyll contents, decreasing malondialdehyde (MDA) levels in leaves, and decreasing Cd2+ concentration in shoots and roots of edible rape under Cd stress. In addition, it was found that COS can also dramatically improve the superoxide dismutase (SOD) activity, catalase (CAT) activity and peroxidase (POX) activity of edible rape leaves. The relieving effect of COS was related to the concentration, and COS at 50-100 mg L-1 displayed the best activity. Furthermore, the experimental results showed that COS could decrease the proportion of Cd in the organelle fraction of leaves by 40.1% while enhancing the proportion of Cd in the soluble fraction by 13.2% at a concentration of 50 mg L-1. The above results showed that COS may have the potential to improve plant resistance to Cd by promoting antioxidant enzyme activities and altering Cd subcellular distribution. All the results described here open up a new way to study the protective role of COS in alleviating Cd toxicity and lay the foundation for future research on the detoxification mechanism at the subcellular level. Keywords: chitooligosaccharide, cadmium, edible rape (Brassica rapa L.), subcellular distribution
Procedia PDF Downloads 295
153 Developing a Quality Mentor Program: Creating Positive Change for Students in Enabling Programs
Authors: Bianca Price, Jennifer Stokes
Abstract:
Academic and social support systems are critical for students in enabling education; these support systems have the potential to enhance the student experience whilst also serving a vital role in student retention. In the context of international moves toward widening university participation, Australia has developed enabling programs designed to support underrepresented students to access higher education. The purpose of this study is to examine the effectiveness of a mentor program based within an enabling course. This study evaluates how the mentor program supports new students to develop social networks, improves retention, and increases satisfaction with the student experience. Guided by Social Learning Theory (SLT), this study highlights the benefits that can be achieved when students engage in peer-to-peer mentoring for both social and learning support. Whilst traditional peer mentoring programs are heavily based on face-to-face contact, the present study explores the difference between mentors who provide face-to-face mentoring and mentoring that takes place in the virtual space, specifically via a virtual community in the shape of a Facebook group. This paper explores the differences between these two methods of mentoring within an enabling program. The first method involves traditional face-to-face mentoring that is provided by alumni students who willingly return to the learning community to provide social support and guidance for new students. The second method requires alumni mentor students to voluntarily join a Facebook group that is specifically designed for enabling students. Using this virtual space, alumni students provide advice, support and social commentary on how to be successful within an enabling program. Whilst vastly different methods, both of these mentoring approaches provide students with the support tools needed to enhance their student experience and improve transition into University. To evaluate the impact of each mode, this study uses mixed methods, including a focus group with mentors, in-depth interviews, as well as netnography of the Facebook group 'Wall'. Netnography is an innovative qualitative research method used to interpret information that is available online to better understand and identify the needs and influences that affect the users of the online space. Through examining the data, this research reflects upon best practice for engaging students in enabling programs. Findings support the applicability of having both face-to-face and online mentoring available to assist enabling students to make a positive transition into University undergraduate studies. Keywords: enabling education, mentoring, netnography, social learning theory
Procedia PDF Downloads 122
152 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal
Authors: Linta Rose, Prasad K. Bhaskaran
Abstract:
Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and for a better understanding of near-shore ocean dynamics. The water-level variation along a coast results from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from the ERA-Interim reanalysis were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water-level. The average contribution of the meteorological fields to the water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During the month of July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1%, respectively, at the deep-water and coastal locations. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on the water-level. Under uniform wind conditions, the results showed a higher contribution of meteorological fields for south-west winds than for north-east winds when the wind speed was higher. A comparison of the spectral characteristics of the output water-level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tidal simulation, all of which points out that meteorological fields do not have much effect on the water-level at periods of less than a day and that they induce non-linear interactions between existing modes of oscillation. The study signifies the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields including tides, wind, atmospheric pressure, waves, precipitation and river discharge is essential for efficient and effective forecast modelling, especially during extreme weather events. Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind
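A minimal sketch of how the percentage contribution of meteorological forcing quoted above could be computed from two model runs (tide-only versus tide-plus-meteorology); the series here are synthetic placeholders, and the averaging convention is an assumption rather than the study's exact definition.

```python
import numpy as np

# Synthetic hourly water-level series (m) for one month at a single station
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
eta_tide = 1.2 * np.sin(2 * np.pi * hours / 12.42)             # tide-only run (M2-like)
eta_full = eta_tide + 0.08 + 0.05 * np.sin(2 * np.pi * hours / (24 * 15)) \
           + 0.02 * rng.standard_normal(hours.size)             # tide + wind + pressure run

# Contribution of meteorological forcing as a percentage of the total signal
met_residual = eta_full - eta_tide
contribution = 100.0 * np.mean(np.abs(met_residual)) / np.mean(np.abs(eta_full))
print(f"meteorological contribution ~ {contribution:.1f}% of the water-level signal")
```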
Procedia PDF Downloads 221
151 Comparative Evaluation of Ultrasound Guided Internal Jugular Vein Cannulation Using Measured Guided Needle and Conventional Size Needle for Success and Complication of Cannulation
Authors: Devendra Gupta, Vikash Arya, Prabhat K. Singh
Abstract:
Background: Ultrasound guidance can be beneficial in placing central venous catheters by improving the success rate, reducing the number of needle passes, and decreasing complications. The central venous cannulation set has a single puncture needle of a fixed length of 6.4 cm. However, the average distance from the skin to the midpoint of the IJV is around 1 cm to 2 cm. A long needle has a tendency to go deeper than required, and this is very common during the learning period of any individual. Therefore, we devised a long needle with a guard which can be adjusted to the required length. Methods: After approval from the institute ethics committee and patients' written informed consent, a prospective, randomized, single-blinded controlled study was conducted. Adult patients of both sexes with ASA grade 1-2 undergoing surgery requiring internal jugular venous (IJV) access were included. After intubation, the head was rotated 30 degrees to the contralateral side to position the right IJV. The transducer probe, a 6.5 to 13-MHz linear transducer (Sonosite, USA), was placed at the apex of the triangle with minimal pressure to avoid IJV compression. The distances from the skin to the midpoint of the right IJV and from the skin to the anterior wall of the common carotid artery (CCA) were measured using B-mode duplex sonography with the 6.5 to 13-MHz linear transducer. Depending upon the results of randomization, 420 patients were divided into two groups of equal numbers (n=210): Group 1, USG-guided right-sided IJV cannulation done with the conventional (6.4 cm) needle; and Group 2, USG-guided right-sided IJV cannulation done with the conventional (6.4 cm) needle with the guard fixed to the required length (the distance between the skin and the midpoint of the IJV) by an experienced anesthesiologist. An independent observer noted the number of attempts and the occurrence of complications (CCA puncture, pneumothorax or adjacent tissue damage). Results: Demographic data were similar in both groups. The groups were comparable with regard to the relationship of the IJV to the CCA. There was no significant difference between groups as regards the distance from the midpoint of the IJV to the skin (p<0.05). IJV cannulation was successful in a single attempt in 180 patients (85.7%), in two attempts in 27 (12.9%) and in three attempts in 3 (1.4%) in group I, whereas it was successful in a single attempt in 207 (98.6%) and in a second attempt in 3 (1.4%) in group II (p<0.000). The incidence of carotid artery puncture was significantly higher in group I (7.1%) compared to group II (0%) (p<0.000). The incidence of adjacent tissue puncture was significantly higher in group I (8.6%) compared to group II (0%) (p<0.000). Conclusion: Therefore, IJV catheterization using a guard over the needle at a predefined length with the help of real-time ultrasound results in better success rates and fewer immediate complications. Keywords: ultrasound guided, internal jugular vein cannulation, measured guided needle, common carotid artery puncture
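As a hypothetical illustration of how the first-attempt success rates reported above (180/210 vs 207/210) could be compared, the sketch below runs a chi-square test on the 2x2 table; the abstract does not state which test the authors used, so this is only one reasonable choice.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table: rows = groups, columns = (first-attempt success, needed >1 attempt)
table = np.array([
    [180, 30],   # Group I  (conventional needle), n = 210
    [207,  3],   # Group II (guarded, measured needle), n = 210
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```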
Procedia PDF Downloads 224
150 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka
Authors: Sherly Shelton, Zhaohui Lin
Abstract:
In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Therefore, prediction of the inter-annual variability of summer precipitation is crucial and urgent for water management and local agriculture scheduling. However, none of the hypotheses put forward so far could explain the monsoon variability and the related factors that affect the South West Monsoon (SWM) variability in Sri Lanka. This study focused on identifying the spatial and temporal variability of SWM rainfall events from June to September (JJAS) over Sri Lanka and the associated trend. Monthly rainfall records from 19 stations covering 1980-2013 over Sri Lanka are used to investigate long-term trends in SWM rainfall. The linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature and atmospheric reanalysis products for 34 years (1980-2013). Empirical orthogonal function (EOF) analysis was applied to understand the spatial and temporal behaviour of seasonal SWM rainfall variability and also to investigate whether the trend pattern is the dominant mode that explains SWM rainfall variability. The spatial and station-based precipitation over the country showed statistically insignificant decreasing trends except at a few stations. The first two EOFs of the seasonal (JJAS) mean rainfall explained 52% and 23% of the total variance, and the first PC showed positive loadings of the SWM rainfall for the whole landmass, while the strongest positive loading can be seen in the western/south-western part of Sri Lanka. There is a negative correlation (r ≤ -0.3) between SMRI and SST in the Arabian Sea and the Central Indian Ocean, which indicates that lower temperatures in the Arabian Sea and Central Indian Ocean are associated with greater rainfall over the country. This study also shows consistent warming throughout the Indian Ocean. The results show that the precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area by weakening updrafts. In addition, evaporation is getting weaker over the Arabian Sea, the Bay of Bengal and the Sri Lankan landmass, which leads to a reduction in the moisture availability required for SWM rainfall over Sri Lanka. At the same time, the weakening of the SST gradients between the Arabian Sea and the Bay of Bengal can deteriorate the monsoon circulation, ultimately diminishing the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind and moisture divergence, together with weakening evaporation over the Arabian Sea during the past decades, have an aggravating influence on the decreasing trend of monsoon rainfall over Sri Lanka. Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature
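A minimal sketch of the kind of linear-trend and correlation analysis described above, applied to made-up seasonal rainfall and SST index series; the values and variable names are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

years = np.arange(1980, 2014)
rng = np.random.default_rng(1)
# Placeholder series: JJAS rainfall anomaly (mm) and Arabian Sea SST anomaly (deg C)
rain_jjas = -1.5 * (years - 1980) + 30 * rng.standard_normal(years.size)
sst_arabian = 0.015 * (years - 1980) + 0.1 * rng.standard_normal(years.size)

# Linear trend of seasonal rainfall (slope in mm per year) with significance
trend = stats.linregress(years, rain_jjas)
print(f"rainfall trend: {trend.slope:.2f} mm/yr (p = {trend.pvalue:.3f})")

# Correlation between rainfall and SST, as in the r <= -0.3 result quoted above
r, p = stats.pearsonr(sst_arabian, rain_jjas)
print(f"rainfall vs SST correlation: r = {r:.2f} (p = {p:.3f})")
```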
Procedia PDF Downloads 133
149 Enhancement of Cross-Linguistic Effect with the Increase in the Multilingual Proficiency during Early Childhood: A Case Study of English Language Acquisition by a Pre-School Child
Authors: Anupama Purohit
Abstract:
The paper is a study of the inevitable cross-linguistic effect found in early multilingual learners. The cross-linguistic behaviours, such as code-mixing, code-switching, foreign accent, literal translation, redundancy and syntactic manipulation, effected by other languages on the English language output of a non-native pre-school child are discussed here. A case study method is adopted in this paper to support the claim of the title. A simultaneously tetralingual pre-school child's (within 1;3 to 4;0) language behaviour is analysed here. The sample output data of the child is gathered from the diary entries maintained by her family, regular observations and video recordings made since her birth. She gets the input of her mother tongue, Sambalpuri, from her grandparents only; Hindi, the local language, from her play-school and the neighbourhood; English only from her mother and the occasional visits of other family friends; and Odia only during the reading of Odia story books. The child is exposed to code-mixing of all the languages throughout her childhood. But code-mixing, literal translation, redundancy and duplication were absent in her initial stage of multilingual acquisition. As the child was more proficient in English in comparison to her other first languages and had never heard code-mixing in the English language, it was expected from her input pattern of English (one parent, English language) that she would maintain purity in her use of English while talking to the English-language interlocutor. But with the gradual increase in the child's proficiency in each of her languages, her handling of the multiple codes becomes deft cross-linguistically. It can be deduced from the case study that after attaining a certain milestone proficiency in each language, the child's linguistic faculty can operate at a metalinguistic level. The functional use of each morpheme, their arrangement in words and in sentences, the suprasegmental features, lexical-semantic mapping, the culture-specific use of a language and the pragmatic skills converge to give a typical childlike multilingual output in a manner intelligible to multilingual people (with the same set of languages in combination). The result is appealing because, for expressing the same ideas which the child used to speak (maybe with grammatically wrong expressions) in one language, she gradually starts showing cross-linguistic effects in her expressions. So the paper pleads for the separatist view from the very beginning of the holophrastic phase (as the child expresses herself in addressee-specific language); but the development of a metalinguistic ability that helps the child communicate in a sophisticated way according to the linguistic status of the addressee is unique to the multilingual child. This metalinguistic ability is independent of the mode of input of a multilingual child. Keywords: code-mixing, cross-linguistic effect, early multilingualism, literal translation
Procedia PDF Downloads 299
148 Caribbean Universities and the Global Educational Market: An Examination of Entrepreneurship and Leadership in an Era of Change
Authors: Paulette Henry
Abstract:
If Caribbean universities wish to remain sustainable in the global education market, they must meet the new demands of 21st-century learners. This means preparing the teaching and learning environment with the human and material resources so that the university can blossom into the entrepreneurial university. The entrepreneurial university prepares the learner to become a global citizen, one who is innovative and a critical thinker and has the competencies to create jobs. Entrepreneurship education provides more equitable access to university education, building capacity for the local and global economy. The entrepreneurial thinking, the mindset, must therefore be present among academic and support staff as well as students. In developing countries where resources are scarce, universities are grappling with a myriad of financial and non-financial issues. These include increasing costs, union demands for increased remuneration for staff, and reduced subvention from governments, which has become the norm. In addition, there is political pressure against increasing tuition fees, and there are perceptions of the moral responsibilities of universities in national development. The question is how small universities carve out their niche, meet both political and consumer demands for a high-quality, low-cost education, fulfil their development mandate and still remain not only viable but competitive. Themes which are central to this discourse on the transitions necessary for the entrepreneurial university are leadership, governance and staff well-being. This paper therefore presents a case study of a Caribbean university to show how transformational leadership and the change management framework propel change towards an entrepreneurial institution seeking to have a competitive advantage despite its low-resourced context. Important to this discourse are the transformational approaches used by the university to prepare staff to move from their traditional psyche to embracing an entrepreneurial mindset whilst equipping students in the same mode to become work-ready and creative global citizens. Using a mixed methods approach, opinions were garnered from both members of the university community and external stakeholder groups on their perception of the role of the university in the business arena and as a primary stakeholder in national development. One of the critical concepts emanating from the discourse was the need to change the mindset of those in university governance as well as how national stakeholders engage the university. This paper shows how multiple non-financial factors can contribute to change. A combination of transformational and servant leadership, strengthening institutional structures and developing new ones, and rebuilding institutional trust and pride have been among the strategies employed within the change management framework. The university is no longer limited by borders but, through international linkages, has transcended into a transnational stakeholder. Keywords: competitiveness, context, entrepreneurial, leadership
Procedia PDF Downloads 210
147 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)
Authors: F. Pal-Fam, S. Keszthelyi
Abstract:
The phytosanitary condition of sunflower (Helianthus annuus L.) is endangered by several phytopathogenic agents, mainly microfungi such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedtii, Macrophomina phaseolina and others. There are several agrotechnical and chemical technologies against them, for instance, tolerant hybrids, crop rotation and, eventually, several in-crop chemical treatments. There are different fungicide treatment methods for sunflower in Hungarian agricultural practice in the quest for healthy and economic plant products. In addition, there are many choices of usable active ingredients in Hungarian sunflower protection. This study examined the effect of five different fungicide active substances (found on the market) and three different application modes (early; late; and both early and late treatments) in a total of 9 sample plots of 0.1 ha each. Five successive vegetation periods have been investigated in the long term, between 2013 and 2017. The treatments were: 1) untreated control; 2) boscalid and dimoxystrobin late treatment (July); 3) boscalid and dimoxystrobin early treatment (June); 4) picoxystrobin and cyproconazole early treatment; 5) picoxystrobin and cymoxanil and famoxadone early treatment; 6) picoxystrobin and cyproconazole early, cymoxanil and famoxadone late treatments; 7) picoxystrobin and cyproconazole early, picoxystrobin and cymoxanil and famoxadone late treatments; 8) trifloxystrobin and cyproconazole early treatment; and 9) trifloxystrobin and cyproconazole both early and late treatments. Due to the very different yearly weather conditions, different phytopathogenic fungi were dominant in the particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia and Diaporthe in 2016; and Alternaria in 2017. As a result of the treatments, 'infection frequency' and 'infestation rate' showed a significant decrease compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost equally effective against the phytopathogenic fungi. The most dangerous Sclerotinia infection was practically eliminated in all of the treatments. Among the single treatments, the late treatment realised in July was the less efficient, followed by the early treatments effectuated in June. The most efficient were the double treatments realised in both June and July, resulting in a 70-80% decrease in infection frequency and a 75-90% decrease in infestation rate, compared with the control plot in the particular years. The lowest yield quantity was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yield quantities (18.3-22.5% higher than the control plot in particular years). In total, according to our five-year investigation, the most effective application mode is the double in-crop treatment per vegetation period, which is reflected in the yield surplus. Keywords: fungicides, treatments, phytopathogens, sunflower
Procedia PDF Downloads 142
146 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure
Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu
Abstract:
In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the setting of the initial damping model influences the response, and in the nonlinear region, the combination of the initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the setting of the initial damping model and hysteretic model on the dynamic characteristics of a structure. For the initial damping model setting, initial stiffness proportional, tangent stiffness proportional, and Rayleigh-type damping were used. For the hysteretic model setting, the TAKEDA model and the Normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model. Then, the characteristics of the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m plan on each floor was assumed. The story height was 3 m and the maximum height was 18 m. The unit weight for each floor was 1.0 t/m². The building natural period was set to 0.36 sec, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated the difference in the dynamic characteristics depending on the initial damping model setting. With the increase in the maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased, and the 1st damping ratio increased. Then, in the natural frequency, the difference due to the initial damping model setting was small, but in the damping ratio, a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and the displacement of the earthquake response were largest with the tangent stiffness proportional model. In the range where the acceleration response increased, the damping ratio was constant. In the range where the acceleration response was constant, the damping ratio increased. Next, we investigated the difference in the dynamic characteristics depending on the hysteretic model setting. With the increase in the maximum acceleration of the input earthquake motions, the natural frequency decreased with the TAKEDA model, but with the Normal-trilinear model, the natural frequency did not change. The damping ratio with the TAKEDA model was higher than that with the Normal-trilinear model, although the damping ratio increased with both models. In conclusion, for the initial damping model setting, the tangent stiffness proportional model was evaluated the most highly. For the hysteretic model setting, the TAKEDA model was more appreciated than the Normal-trilinear model in the nonlinear region. Our results would provide a useful indicator for dynamic design. Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency
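A minimal sketch of how a natural frequency and damping ratio can be extracted from response data via an ARX-type model, assuming the simple single-output, second-order case; the synthetic signal, sampling rate and model order are placeholders, not the study's settings.

```python
import numpy as np

# Synthetic free-decay response of a lightly damped oscillator (placeholder for measured data)
fs = 100.0                      # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)
f_true, zeta_true = 2.78, 0.03  # ~0.36 s period, 3% damping
wd = 2 * np.pi * f_true * np.sqrt(1 - zeta_true**2)
y = np.exp(-zeta_true * 2 * np.pi * f_true * t) * np.cos(wd * t)
y += 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Fit an AR(2) model y[k] = a1*y[k-1] + a2*y[k-2] by least squares
Phi = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

# Discrete poles -> continuous poles -> modal frequency and damping ratio
z = np.roots([1.0, -a1, -a2])
s = np.log(z[0]) * fs
omega = abs(s)
print(f"natural frequency ~ {omega / (2 * np.pi):.2f} Hz, damping ratio ~ {-s.real / omega:.3f}")
```

For a multi-mode response, as in the study, a higher-order ARX model with the excitation as input would be fitted and each complex pole pair would yield one modal frequency and damping ratio in the same way.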
Procedia PDF Downloads 178
145 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. The modification of such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics for investigating their effects on the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled this problem directly, measuring the fitness function based on eigenvalues and eigenvectors achieved with numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for small- and large-scale structures when optimising a number of s/a pairs to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure based on the finite element method and Hamilton's principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the results for the optimal sensor locations are in good agreement with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme. Keywords: optimisation, plate, sensor effectiveness, vibration control
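A minimal sketch of the ranking idea described above: per mode, each candidate sensor's output is normalised by the maximum over all sensors, the percentages are averaged over the modes of interest, and the highest-scoring locations are selected. The voltage matrix here is random placeholder data, not a finite element result.

```python
import numpy as np

rng = np.random.default_rng(3)
n_locations, n_modes = 100, 6
# Placeholder: |sensor output voltage| for each candidate location and excited mode
V = np.abs(rng.normal(size=(n_locations, n_modes)))

# Percentage effectiveness of each sensor for each mode, relative to the best sensor for that mode
effectiveness = 100.0 * V / V.max(axis=0, keepdims=True)

# Average over the modes of interest and pick the best k locations
avg_effectiveness = effectiveness.mean(axis=1)
k = 6
best = np.argsort(avg_effectiveness)[::-1][:k]
print("selected sensor/actuator locations:", best)
print("their average effectiveness (%):", np.round(avg_effectiveness[best], 1))
```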
Procedia PDF Downloads 234
144 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and a desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a 'design as research' methodology, the design outputs provide a vehicle to ask how we might live, otherwise, in architectural space. A design-as-research methodology (a process of questioning, designing and reflecting in a non-linear, iterative approach) establishes itself through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship. The project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not to privilege the objective, the public and the large scale; instead, 'intra-scaling' acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the results were an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and existing house; the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through 'private' and 'public' to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled, out-of-place, but is it queer enough? Keywords: queer, queer anti-urbanism, design as research, design
Procedia PDF Downloads 178
143 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). Electron Cyclotron Resonance Heating (ECRH) at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles. The modeling is compared with the data from the experiments. Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
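For reference, the standard cold-plasma expressions behind the cut-off and resonance mentioned above are, with $\omega_{pe}$ the electron plasma frequency and $\omega_{ce}$ the electron cyclotron frequency (textbook relations, not results of this work):

```latex
\begin{align}
  \omega_{UH}^2 &= \omega_{pe}^2 + \omega_{ce}^2
    && \text{(upper hybrid resonance)} \\
  \omega_{R}   &= \tfrac{1}{2}\left(\omega_{ce} + \sqrt{\omega_{ce}^2 + 4\,\omega_{pe}^2}\right)
    && \text{(right-hand X-mode cut-off)} \\
  \omega_{pe}^2 &= \frac{n_e e^2}{\varepsilon_0 m_e}, \qquad
  \omega_{ce} = \frac{e B}{m_e}
\end{align}
```

Because $\omega_{pe}$ grows with density, the X-mode cut-off density is quickly exceeded at fixed launch frequency, which is why fundamental X-mode heating is density-limited and why mode conversion to the electrostatic EBW, which has no such density cut-off, is attractive.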
Procedia PDF Downloads 96
142 Research on the Performance Management of Social Organizations Participating in Home-Based Care
Authors: Qiuhu Shao
Abstract:
The community home-based care service system, which is based on the family pension, supported by the community pension and supplemented by institutional pensions, is an effective pension system to address the current situation of China's accelerated aging. However, due to the fundamental realities of our country, the government is not able to bear the unilateral supply of old-age services in the community. Therefore, based on the theory of welfare pluralism, the participation of social organizations in home-based care service centers has become an important part of the diversified supply of old-age services for the elderly. Meanwhile, the home-based care service industry is still in its early stage, and its management is relatively rough, which has resulted in a large waste of social resources. Thus, a scientific, objective and long-term implementation is needed for social organizations participating in home-based care services to guide their performance management. In order to realize the design of the performance management system, the author has done research work that clarifies the state of research on social organizations' participation in home-based care service. Relevant theories such as welfare pluralism, community care theory, and performance management theory have been used to demonstrate the feasibility of the data envelopment analysis method in social organization performance research. This paper analyzes the characteristics of the operation mode of the home-based care service center, and reviews the national as well as local documents, standards and norms related to the development of the home-based care industry, particularly those documents in Nanjing. Based on this, the paper designed a performance management PDCA system for home-based care service centers in Nanjing and clarified each step of the system in detail. Subsequently, the research methods of performance evaluation and of performance management and feedback, which are the two core steps of performance management, were compared and screened in order to establish the overall framework of the performance management system of the home-based care service center. Through extensive research, the paper summarized and analyzed the characteristics of the home-based care service center. Based on the research results, combined with the practice of industry development in Nanjing, the paper puts forward a targeted performance evaluation index system for home-based care service centers in Nanjing. Finally, the paper evaluated and classified the performance of 186 home-based care service centers in Nanjing and then designed the performance optimization direction and performance improvement path based on the results. This study constructs an index system for the performance evaluation of home-based care services, details the indexes to the implementation level, and constructs an evaluation index system which can be applied directly. Meanwhile, the quantitative evaluation of social organizations participating in home-based care services changed the subjective impressions found in previous evaluation practice. Keywords: data envelopment analysis, home-based care, performance management, social organization
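Since data envelopment analysis (DEA) is the quantitative method named above, the sketch below solves the input-oriented CCR efficiency score for each decision-making unit with scipy's linear programming; the five-center dataset, inputs and outputs are invented placeholders, not the 186 Nanjing centers.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data for 5 care centers: inputs (staff, funding) and outputs (clients served, satisfaction)
X = np.array([[8, 120], [12, 150], [6, 90], [10, 200], [9, 110]], dtype=float)        # inputs
Y = np.array([[300, 4.1], [340, 4.3], [250, 4.5], [380, 3.9], [310, 4.2]], dtype=float)  # outputs

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR DEA efficiency of unit o: min theta s.t. the composite unit
    built from the lambdas uses <= theta * inputs of o and produces >= outputs of o."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                            # sum_j lam_j * x_ij - theta * x_io <= 0
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                            # -sum_j lam_j * y_rj <= -y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"center {o}: CCR efficiency = {ccr_efficiency(o, X, Y):.3f}")
```

Units with a score of 1 lie on the efficient frontier; lower scores indicate how much a center could proportionally reduce its inputs while maintaining its outputs, which is the kind of grading the study uses to propose improvement paths.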
Procedia PDF Downloads 271141 Redeeming the Self-Settling Scores with the Nazis by the Means of Poetics
Authors: Liliane Steiner
Abstract:
Beyond the testimonial act, which sheds light on the feminine experience in the Holocaust, the survivors' writing voices first and foremost the abjection of the feminine self brutally inflicted by the Nazis in the Holocaust, and in the same movement redeems the self by means of poetics, bringing it to an existential state of being a subject. This study aims to stress the poetics of this writing in order to promote Holocaust literature from the margins to the mainstream and to contribute to the commemoration of the Holocaust for the next generations. Methodology: The study of the survivors' redeeming of the self is based on Julia Kristeva's theory of the abject (the self throws out everything that threatens its existence) and on Liliane Steiner's theory of the post-abjection of hell: the belated act of vomiting the abject experiences settles scores with the author of the abject in order to redeem the self. The research focuses on Ruth Sender's trilogy The Cage, To Life and The Holocaust Lady as a case study. Findings: The binary mode that characterizes this writing reflects the experience of Jewish women, who were subject(s), were treated violently as object(s), debased, defeminized and eventually turned into abject by the Nazis. In a tour de force, this writing re-enacts the postponed resistance that vomits the abject imposed on the feminine self by the very act of narration, which denounces the real abject, the perpetrators. The post-abjection of the self is acted out in constructs of abject, relating the abject experience of the Holocaust as well as the rehabilitation of the surviving self (subject). The transcription of the abject surfaces in deconstructing the abject through self-characterization, and in the elusive rendering of bad memories, having recourse to literary figures. The narrative 'I' selects, obstructs, mends and tells the past events from an active standpoint, as would a subject in control of its (narrative) fate. In a compensatory movement, the narrating I tells itself by reconstructing the subject and proving time and again that I is other. Moreover, in the belated endeavor to take revenge, testify and narrate the abject, the narrative I defies itself and represents itself as a dialectical I, splitting and multiplying itself in a deconstructing way. The dialectical I is never (one) I. It voices not only the unvoiced but also, and mainly, the other silenced 'I's. Drawing its nature and construct from traumatic memories, the dialectical I transgresses boundaries to narrate her story and, in the same breath, the story of Jewish women doomed to silence. In this narrative feat, the dialectical I stresses its essential dialectical existence with the past, never to be (one) again. Conclusion: The pattern of I is other generates patterns of subject(s) that defy, transgress and repudiate the abject and its repercussions on the feminine I. The feminine I writes itself as a survivor that defies the abject (Nazis) and takes revenge. The paradigm of metamorphosis that accompanies the journey of the Holocaust memoirist engenders life and surviving, as well as a narration that defies stagnation and death.Keywords: abject, feminine writing, holocaust, post-abjection
Procedia PDF Downloads 104140 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage and sale of food products, as well as without management of end-to-end traceability and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability and a system approach, together with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of cooked sausage manufacture. A methodology for implementing the integrated system based on the principles of HACCP, traceability and the system approach during the manufacture of cooked sausage products, ensuring the defined properties of the finished product, has been developed. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system have been developed and substantiated. The paper reveals how the application of HACCP principles, traceability and the system approach influences the quality and safety parameters of the finished product, and establishes regularities in the identification of critical control points. The algorithm of functioning of the integrated quality management and safety assurance system is described, and key requirements are defined for software allowing the prediction of finished product properties, timely correction of the technological process and traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system based on HACCP principles, with elements of end-to-end traceability and a system approach, has been developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the quality management and safety assurance system have also been developed, and a set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants has been prepared. The research demonstrates the effectiveness of continuous monitoring of the manufacturing process at the identified critical control points and substantiates the optimal number of critical control points for the manufacture of cooked sausage products. The main results of the research were appraised during 2013-2014 at seven meat processing enterprises and implemented at JSC «Kyiv meat processing plant».Keywords: cooked sausage products, HACCP, quality management, safety assurance
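To make the idea of continuous monitoring at critical control points concrete, here is a minimal sketch of a CCP check with a traceability log and a corrective-action trigger; the parameters chosen (core cooking and chilling temperatures) and their limits are illustrative assumptions, not the limits validated in the study.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CCP:
    """One critical control point with critical limits and a traceability log."""
    name: str
    low: float = float("-inf")
    high: float = float("inf")
    log: list = field(default_factory=list)

    def check(self, batch_id: str, value: float) -> bool:
        ok = self.low <= value <= self.high
        # every measurement is recorded for end-to-end traceability
        self.log.append((datetime.now().isoformat(), batch_id, value, ok))
        if not ok:
            print(f"CCP '{self.name}' deviation in batch {batch_id}: {value}")
        return ok

# hypothetical critical limits for a cooked sausage line
cooking = CCP("core temperature after cooking, degC", low=72.0)
chilling = CCP("core temperature after chilling, degC", high=4.0)
cooking.check("B-2024-015", 73.4)
chilling.check("B-2024-015", 6.1)   # triggers a corrective-action message
```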
Procedia PDF Downloads 248139 Rheological Evaluation of a Mucoadhesive Precursor of Based-Poloxamer 407 or Polyethylenimine Liquid Crystal System for Buccal Administration
Authors: Jéssica Bernegossi, Lívia Nordi Dovigo, Marlus Chorilli
Abstract:
Mucoadhesive liquid crystalline systems are emerging as delivery systems for the oral cavity. These systems are interesting since they facilitate the targeting of medicines and modify drug release, enabling a reduction in the number of applications made by the patient. The buccal mucosa is permeable, has a great blood supply and avoids first-pass metabolism, which makes it a good route of administration. Two liquid crystal systems were developed utilizing ethoxylated and propoxylated ethyl alcohol (30%) as surfactant, oleic acid (60%) as the oil phase, and as the aqueous phase (10%) a dispersion of the polymer polyethylenimine (0.5%) or of the polymer poloxamer 407 (16%), with the intention of applying them to the buccal mucosa. Initially, the systems were characterized by polarized light microscopy and rheological analysis. For the preparation of the systems, the components described above were added to glass vials and shaken. Then, 30 and 100% artificial saliva were added to each prepared formulation so as to simulate the environment of the oral cavity. For the verification of the system structure, aliquots of the formulations were placed on a glass slide, covered with a coverslip and examined in a polarized light microscope (PLM) Axioskop - Zeizz® at 40x magnification. The formulations were also evaluated for their rheological profile on a TA Instruments® rheometer; rheograms of the selected systems were obtained in flow mode at 37ºC (98.6ºF). In PLM, the formulations containing polyethylenimine and poloxamer 407 without the addition of artificial saliva showed a dark field, indicative of a microemulsion; the same was observed for the formulation to which 30% artificial saliva was added. The formulation to which 100% artificial saliva was added proved to be a structured system, since it presented anisotropy with the presence of striae, indicative of a hexagonal liquid crystalline mesophase. The rheograms showed that both systems without the addition of artificial saliva had a Newtonian profile; after the addition of 30% artificial saliva they acquired a non-Newtonian behavior of the pseudoplastic-thixotropic type, and after the addition of 100% artificial saliva they proved plastic-thixotropic. Furthermore, the formulations containing poloxamer 407 showed significantly larger shear stresses (15-800 Pa) than those containing polyethyleneimine (5-50 Pa), indicating their greater plasticity. Thus, the addition of saliva was beneficial to the system structure, which evolved from a microemulsion to a liquid crystal system, thereby also changing its rheological behavior. The systems have promising characteristics as controlled release systems for the oral cavity, as they feature good fluidity during application and greater structuring when they come into contact with the saliva of the environment.Keywords: liquid crystal system, poloxamer 407, polyethylenimine, rheology
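One common way to quantify the Newtonian-to-pseudoplastic transition described above is to fit the flow curve to the Ostwald-de Waele power-law model, tau = K * gamma_dot**n, where n close to 1 indicates Newtonian and n < 1 pseudoplastic behavior. The sketch below does this on made-up rheogram points; the data and the choice of the power-law model are illustrative and are not the study's measurements.

```python
import numpy as np

def power_law_fit(shear_rate, shear_stress):
    """Fit tau = K * gamma_dot**n on log-log axes; returns (K, n)."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return np.exp(intercept), slope

# hypothetical flow-curve points
gamma_dot = np.array([1, 5, 10, 50, 100, 300], dtype=float)   # shear rate, 1/s
tau = np.array([12, 35, 55, 160, 240, 480], dtype=float)      # shear stress, Pa
K, n = power_law_fit(gamma_dot, tau)
behaviour = "Newtonian" if abs(n - 1) < 0.05 else ("pseudoplastic" if n < 1 else "dilatant")
print(f"K = {K:.1f} Pa.s^n, n = {n:.2f} -> {behaviour}")
```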
Procedia PDF Downloads 458138 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation
Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes
Abstract:
Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds represent a risk for both wild and human life since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can cause harm even at low concentrations (µg or ng/L), leading to bacterial resistance, endocrine disruption and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is atenolol (ATL), a β-blocker, which is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL, in order to avoid environmental damage. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions the peroxydisulfate (S₂O₈²⁻) ion is also generated from the pairwise combination of SO₄•⁻ radicals. The radicals, the ion and the direct discharge of the contaminant can all break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. For this reason, efforts can be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) from the anodic (oxidation) reactor compartment. The aim of this study is to investigate the influence of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic compartments of an AEO reactor. The studied reactor was a filter-press operated in batch recirculation mode, flow 60 L/h. The anode was an Nb/BDD2500 and the cathode a stainless steel, both bidimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with ATL 100 mg/L using Na₂SO₄ 4 g/L as support electrolyte. In the cathodic compartment, a solution containing Na₂SO₄ 71 g/L was used. The membrane was placed between both solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240 minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis). The mineralization was determined by performing total organic carbon (TOC) analysis in a TOC-L CPH Shimadzu. In the cases without membrane, the iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98 % ATL degradation at the end of the treatment time, respectively. With the membrane, however, the degradation for the same iₐₚₚ was 90, 100 and 100 %, requiring 240, 120 and 40 min for maximum degradation, respectively. The mineralization without membrane, for the same studied iₐₚₚ, was 40, 55 and 72 % at 240 min, but with the membrane all tested iₐₚₚ reached 80 % mineralization, differing only in the time required for maximum mineralization: 240, 150 and 120 min, respectively. The membrane increased the ATL oxidation, probably because it avoided the reduction of oxidant ions (S₂O₈²⁻) on the cathode surface.Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor
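Degradation curves like the ones summarized above are often reduced to a pseudo-first-order rate constant, k = -ln(C/C0)/t. The sketch below estimates k by a least-squares fit through the origin; the time/removal points and the assumption of pseudo-first-order kinetics are illustrative only and are not data or claims from the study.

```python
import numpy as np

def pseudo_first_order_k(t_min, removal_fraction):
    """Slope of -ln(C/C0) vs t, forced through the origin; returns k in 1/min."""
    y = -np.log(1.0 - np.asarray(removal_fraction, dtype=float))
    t = np.asarray(t_min, dtype=float)
    return float(np.sum(t * y) / np.sum(t * t))

# hypothetical points roughly consistent with ~90 % removal after 240 min
t = [30, 60, 120, 180, 240]
removal = [0.25, 0.44, 0.68, 0.82, 0.90]
k = pseudo_first_order_k(t, removal)
print(f"k = {k:.4f} 1/min, half-life = {np.log(2) / k:.0f} min")
```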
Procedia PDF Downloads 137137 Development of Method for Detecting Low Concentration of Organophosphate Pesticides in Vegetables Using near Infrared Spectroscopy
Authors: Atchara Sankom, Warapa Mahakarnchanakul, Ronnarit Rittiron, Tanaboon Sajjaanantakul, Thammasak Thongket
Abstract:
Vegetables are frequently contaminated with pesticide residues, making them one of the main food safety concerns among agricultural products. The objective of this work was to develop a method to detect organophosphate (OP) pesticide residues in vegetables using the near infrared (NIR) spectroscopy technique. Low concentrations (ppm) of OP pesticides in vegetables were investigated. The experiment was divided into two sections. In the first section, Chinese kale spiked with different concentrations of chlorpyrifos residues (0.5-100 ppm) was chosen as the sample model to determine the appropriate conditions of sample preparation, both for solution and solid samples. The spiked samples were extracted with acetone. The sample extracts were applied as solution samples, while the solid samples were prepared by the dry-extract system for infrared (DESIR) technique. The DESIR technique was performed by embedding the solution sample on filter paper (GF/A) and then drying. The NIR spectra were measured in transflectance mode over the wavenumber region of 12,500-4,000 cm⁻¹. The QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS) was performed as the standard method. The results from the first section showed that the DESIR technique with NIR spectroscopy gave an accurate calibration, with R² of 0.93 and RMSEP of 8.23 ppm. However, for the solution samples, the NIR-PLSR (partial least squares regression) prediction showed poor performance (R² = 0.16 and RMSEP = 23.70 ppm). In the second section, the DESIR technique coupled with NIR spectroscopy was applied to the detection of OP pesticides in vegetables. Vegetables (Chinese kale, cabbage and hot chili) were spiked with OP pesticides (chlorpyrifos, ethion and profenofos) at different concentrations ranging from 0.5 to 100 ppm. Solid samples were prepared (based on the DESIR technique), and the samples were then scanned by the NIR spectrophotometer at ambient temperature (25±2°C). The NIR spectra were measured as in the first section. NIR-PLSR gave the best calibration equation for detecting low concentrations of chlorpyrifos residues in the vegetables (Chinese kale, cabbage and hot chili), with prediction-set R² and RMSEP of 0.85-0.93 and 8.23-11.20 ppm, respectively. For ethion residues, the best NIR-PLSR calibration equation showed R² and RMSEP of 0.88-0.94 and 7.68-11.20 ppm, respectively. Similarly, for profenofos, NIR-PLSR showed good calibration performance for detecting residues in vegetables, with R² and RMSEP of 0.88-0.97 and 5.25-11.00 ppm, respectively. Moreover, the calibration equations developed in this work could rapidly predict the concentrations of OP pesticide residues (0.5-100 ppm) in vegetables, and there was no significant difference between NIR-predicted values and actual values (data from GC-MS) at the 95% confidence level. In this work, the proposed method using NIR spectroscopy involving the DESIR technique has proved to be efficient for the screening detection of OP pesticide residues at low concentrations, and thus increases the food safety potential of vegetables for domestic and export markets.Keywords: NIR spectroscopy, organophosphate pesticide, vegetable, food safety
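The figures of merit quoted above (R² and RMSEP on a prediction set) come from partial least squares regression of the spectra against the spiked concentrations. The snippet below is a minimal, self-contained sketch of that workflow with scikit-learn on synthetic stand-in spectra; the data generation, the number of latent variables and the train/test split are assumptions for illustration only, not the study's spectra or model settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-in for DESIR/NIR spectra: 120 samples x 500 wavenumber points
concentration = rng.uniform(0.5, 100.0, 120)                 # spiked level, ppm
noise = rng.normal(size=(120, 500)) * 0.02
band = np.exp(-0.5 * ((np.arange(500) - 250) / 15.0) ** 2)   # pseudo absorption band
X = noise + np.outer(concentration / 100.0, band)
y = concentration

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()
rmsep = np.sqrt(mean_squared_error(y_val, y_hat))
print(f"R2 = {r2_score(y_val, y_hat):.3f}, RMSEP = {rmsep:.2f} ppm")
```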
Procedia PDF Downloads 151136 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview
Authors: Yasmeen Cheema, Parvinder Singh
Abstract:
The evolution of the Islamic State (acronym IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a real concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a 'jihadist', set off strident alarm among law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The bigger shock is that the conspiracy was pre-planned and that the assailants who carried out the blast were influenced by the ideology propagated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a red indicator of the violent presence of IS in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps and hate speeches. It is a well-known fact that India is on the radar of IS, as well as on its 'Caliphate Map'. IS uses Twitter, Facebook and other social media platforms constantly. The Islamic State has used enticing videos, graphics and articles on social media to try to convince people from India and around the world that its jihad is worthy. According to perpetrators of IS arrested in different cases in India, most Indian youths who join are victims of the daydreams fondly shown by IS: the dreams that the Muslim empire as it was before 1920 can come back with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth get attracted to these euphemistic ideologies. The Islamic State has used social media for disseminating its poisonous ideology, for recruitment, for operational activities and for the future direction of attacks. Through social media, IS inspires its recruits and lone wolves to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested a Vehicle-Borne Improvised Explosive Device (VBIED) as the mode of attack. The Islamic State definitely has the potential to damage Indian national security and peace if timely steps are not taken. No doubt, IS has used social media as a critical mechanism for the recruitment, planning and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack
Procedia PDF Downloads 231135 Techno Economic Analysis for Solar PV and Hydro Power for Kafue Gorge Power Station
Authors: Elvis Nyirenda
Abstract:
This study was carried out to evaluate and propose an optimum measure to enhance the uptake of clean energy technologies such as solar photovoltaics; it also aims at diversifying the country's energy mix away from over-dependence on hydropower, which is susceptible to droughts and climate change. In the years 2015-2016 and 2018-2019 the country received below-average rainfall due to climate change and a shift in weather patterns; this resulted in prolonged power outages and load shedding of more than 10 hours per day. ZESCO Limited, the state-owned utility company that owns the generation, transmission and distribution infrastructure, is seeking alternative sources of energy in order to reduce the over-dependence on hydropower stations. One of the alternative sources is solar energy. However, solar power is intermittent in nature, and to smoothen the load curve, investment in robust energy storage facilities is of great importance for the security and reliability of electricity supply in the country. The methodology of the study examined the historical performance of the Kafue Gorge Upper power station and utilised the hourly generation figures as input data for generation modelling in the Homer software. The average yearly demand was derived from the available data in the system SCADA. The two dams were modelled as a natural battery, with the state of charging and discharging determined by the available water resource and the peak electricity demand. The Homer Energy System software is used to simulate the scheme, incorporating a pumped storage facility and solar photovoltaic systems. The pumped hydro scheme works like a natural battery for the conservation of water, the only losses being evaporation and water leakage from the dams and the turbines. To address the problem of intermittency of the solar resource and the non-availability of water for hydropower generation, the study concluded that operating the existing hydropower stations, Kafue Gorge Upper and Kafue Gorge Lower, conjunctively with solar energy will reduce power deficits and increase the security of supply for the country. An optimum capacity of 350 MW of solar PV can be integrated while operating the Kafue Gorge power station in both generating and pumping mode to enable efficient utilisation of water at the Kafue Gorge Upper and Kafue Gorge Lower dams.Keywords: hydropower, solar power systems, energy storage, photovoltaics, solar irradiation, pumped hydro storage system, supervisory control and data acquisition, Homer energy
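The "natural battery" idea described above, in which surplus solar pumps water that hydro later dispatches, can be illustrated with a very small hourly energy-balance loop. The sketch below is a toy dispatch model; the demand, solar, reservoir and efficiency figures are placeholders and do not represent ZESCO operating data or the Homer model used in the study.

```python
def dispatch(demand_mw, solar_mw, hydro_cap_mw, reservoir_mwh, reservoir_cap_mwh,
             pump_eff=0.75):
    """Hourly toy dispatch: solar is used first, hydro covers the deficit,
    and surplus solar pumps water back (stored here as MWh-equivalent)."""
    served, unserved = [], []
    for d, s in zip(demand_mw, solar_mw):
        deficit = d - s
        if deficit > 0:
            hydro = min(deficit, hydro_cap_mw, reservoir_mwh)
            reservoir_mwh -= hydro
            unserved.append(deficit - hydro)
            served.append(s + hydro)
        else:
            pumped = min(-deficit * pump_eff, reservoir_cap_mwh - reservoir_mwh)
            reservoir_mwh += pumped
            unserved.append(0.0)
            served.append(d)
    return served, unserved, reservoir_mwh

# hypothetical 6-hour window (MW)
demand = [700, 750, 800, 820, 780, 760]
solar = [0, 120, 300, 350, 200, 0]
served, unserved, soc = dispatch(demand, solar, hydro_cap_mw=900,
                                 reservoir_mwh=5000, reservoir_cap_mwh=6000)
print(f"unserved energy: {sum(unserved):.1f} MWh, reservoir state: {soc:.1f} MWh")
```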
Procedia PDF Downloads 118134 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion with social development. Sustainability combined with a spatial-ecological model gives attention to urban environments in design review management so as to comply with the Earth system. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effect Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed in the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis were conducted. These unique structures are balanced within "equilibrium patterns", such as the biosphere, and form a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structures for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
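The downwind concentration field mentioned above follows the standard Gaussian plume expression with ground reflection, C = Q/(2*pi*u*sigma_y*sigma_z) * exp(-y^2/2sigma_y^2) * [exp(-(z-H)^2/2sigma_z^2) + exp(-(z+H)^2/2sigma_z^2)]. The sketch below evaluates it at a single receptor; the release rate, wind speed, release height and the Briggs class-D dispersion coefficients are assumptions for illustration, not the scenario analyzed in the paper.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (kg/m^3).
    q: release rate (kg/s), u: wind speed (m/s), h: effective release height (m)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

x = 1000.0                                        # downwind distance (m)
sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)      # Briggs open-country, stability class D
sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
c = gaussian_plume(q=2.0, u=3.0, y=0.0, z=1.5, h=5.0,
                   sigma_y=sigma_y, sigma_z=sigma_z)
print(f"centreline Cl2 concentration at {x:.0f} m: {c * 1e3:.3f} g/m^3")
```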
Procedia PDF Downloads 82133 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which are statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
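The accumulated-dose approach described above amounts to converting the MOSFET threshold-voltage shifts to point doses with the measured sensitivity, integrating the resulting profile, and normalizing by the nominal collimation for a CTDI-like value. The sketch below shows that arithmetic; the profile data are made up, and only the 9.208 mV/cGy sensitivity and the 9 mm collimation are taken from the abstract (the RQT 8 sensitivity is reused here purely for illustration).

```python
import numpy as np

def profile_from_mosfet(dv_mv, sensitivity_mv_per_cgy):
    """Convert MOSFET threshold-voltage shifts (mV) to point doses (cGy)."""
    return np.asarray(dv_mv, dtype=float) / sensitivity_mv_per_cgy

def accumulated_dose(z_mm, dose_cgy, collimation_mm):
    """Trapezoidal integral of the dose profile divided by the nominal beam width (cGy)."""
    integral = np.sum((dose_cgy[1:] + dose_cgy[:-1]) / 2.0 * np.diff(z_mm))
    return integral / collimation_mm

# made-up profile: readings every 5 mm across +/- 50 mm of a 9 mm beam
z = np.arange(-50.0, 55.0, 5.0)
dv = 35.0 * np.exp(-0.5 * (z / 6.0) ** 2) + 2.0                 # mV, with scatter tails
dose = profile_from_mosfet(dv, sensitivity_mv_per_cgy=9.208)    # RQT 8 figure, for illustration
print(f"accumulated (CTDI-like) dose: {accumulated_dose(z, dose, 9.0):.2f} cGy")
```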
Procedia PDF Downloads 311132 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space
Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari
Abstract:
Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (APIs). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, a lack of reproducibility, retention time variation and bad peak shape of the acetic acid peak were identified, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (if used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues. However, most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected to determine the degree and extent of the validation process and to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile acceptance limit of ±40% was selected for the lowest level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-Waxetr column manufactured by Agilent (internal diameter 530 µm, film thickness 2.0 µm, length 30 m). Helium was selected as the carrier gas at a constant flow of 6.0 mL/min in constant make-up mode. The present method is simple, rapid and accurate, and is suitable for the analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances.Keywords: amino acid, head space, gas chromatography, total error
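The total-error acceptance criterion described above is usually visualized as an accuracy profile: at each concentration level, the mean recovery (trueness) and a beta-expectation tolerance interval (trueness plus precision) must fall inside the acceptance limits. The sketch below computes a simplified one-series version of that profile, using a prediction interval as the beta-expectation tolerance interval; the recovery data and the single-series simplification are assumptions, not the study's validation results.

```python
import numpy as np
from scipy import stats

def accuracy_profile(nominal, measured, beta=0.95, limits=(0.70, 1.30)):
    """Per-level relative recovery and beta-expectation tolerance interval,
    computed here as a simple prediction interval on the recoveries."""
    rows = []
    for level in sorted(set(nominal)):
        rec = np.array([m / nom for m, nom in zip(measured, nominal) if nom == level])
        mean, sd, k = rec.mean(), rec.std(ddof=1), len(rec)
        half = stats.t.ppf((1 + beta) / 2, k - 1) * sd * np.sqrt(1 + 1 / k)
        lo, hi = mean - half, mean + half
        rows.append((level, mean, lo, hi, limits[0] <= lo and hi <= limits[1]))
    return rows

# hypothetical methanol recoveries (ppm); +/-30 % limits assumed for all levels here
nominal = [50] * 6 + [200] * 6 + [400] * 6
measured = [46, 52, 49, 55, 47, 51,
            194, 210, 205, 188, 201, 199,
            396, 420, 388, 405, 412, 392]
for level, mean, lo, hi, ok in accuracy_profile(nominal, measured):
    print(f"{level:>4} ppm: recovery {mean:.2%}, tolerance [{lo:.2%}, {hi:.2%}], pass={ok}")
```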
Procedia PDF Downloads 150131 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials
Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova
Abstract:
Such catecholamines as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered to be important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Moreover, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM), which may drop by another order of magnitude because of some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes and, for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a comprehensive solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissue, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for the theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system
Procedia PDF Downloads 406130 Prospects of Agroforestry Products in the Emergency Situation: A Case Study of Earthquake of 2015 in Central Nepal
Authors: Raju Chhetri
Abstract:
Agroforestry is one of the main sources of livelihood among the people of Nepal. In particular, it is the only mode of livelihood among the Chepangs. The monster earthquake (7.3 Mw) that hit the country on the 25th of April 2015 and many of its aftershocks had devastating effects. As a result, not only did big structures collapse, but great losses were incurred on fabrication and collection centers, schools, markets and other necessary service centers. Although there were a large number of aftershocks after the monster earthquake, the most devastating aftershock took place on 12th May 2015 and measured 6.3 on the Richter scale. Consequently, it caused more destruction of houses and further calamity to people's lives and to public life. This study was mainly carried out to assess the food security and the market situation of agroforestry products of the Chepang community in Raksirang VDC (one of the severely affected VDCs of Makwanpur district) after the earthquake. A total of 40 households (12 percent) were randomly selected as a sample in ward number 7 only. Questionnaires and focus groups were used to gather primary data. Additionally, two Focus Group Discussions (FGDs) were convened in the study area to gather descriptive information. An estimated 370 hectares of land under agroforestry plantation were ruptured by the earthquake. This caused severe damage to households and a serious loss of food stock, up to 60-80 percent (maize, millet and rice). Instead of regular cereal intake, banana (Musa paradisiaca) consumption was found to be high during the emergency period. The market price of rice (37-44 NRS/kg) increased by 18.9 percent. Some difference in the income range before and after the earthquake was observed. Before the earthquake, sales of agroforestry and livestock products were ongoing, but after the earthquake, the sale of agroforestry products became the only means of livelihood among the Chepangs. Agroforestry production of banana (Musa paradisiaca), citrus (Citrus limon), pineapple (Ananas comosus) and broom grass (Thysanolaena maxima) declined by nearly 50-60 percent, leaving only the cash income from the remaining produce. Heavy demand for the agroforestry products mentioned above kept farm-gate prices high (50-100 percent), helping the surveyed community to continue earning a livelihood from their sale. Of the surveyed households, 30 (75 percent) migrated to safer locations due to land rupture, ongoing aftershocks and landslides. The overall food security situation in this community is acute and will remain challenging in the days to come. Both immediate and long-term responses from relief agencies concerning food, shelter and the safe stocking of agroforestry products are required to secure livelihoods in the Chepang community.Keywords: earthquake, rupture, agroforestry, livelihood, indigenous, food security
Procedia PDF Downloads 325