Search results for: Computer based Training (CBT)
160 Teachers Leadership Dimension in History Learning
Authors: Lee Bih Ni, Zulfhikar Rabe, Nurul Asyikin Hassan
Abstract:
The Ministry of Education Malaysia made the subject of History mandatory, with effect from 2013. This is in recognition of the nation's heritage and treasures, and of the need to maintain true facts and information for future generations of the State. History reveals the civilization of a nation and the facts of its national cultural heritage, and that civilization needs to be preserved as a legacy of sovereign heritage. Today's generation is the catalyst for the future heirs who will uphold the principles and direction of the country. This is in line with the National Education Philosophy, which aims to develop the potential of individuals holistically and uniquely in order to produce students who are balanced and harmonious in intellectual, spiritual, emotional and physical terms. Hence, understanding the importance of studying History as a pillar of identity and of the history of nationhood must be a priority in the pursuit of knowledge, empowering the spirit of statehood that is nurtured through continuous learning at school. The teacher's leadership role in integrating History is framed by the Teacher Education Philosophy, which empowers the teaching profession so that teachers uphold noble character, support progressive and scientific views, are willing to uphold the State's aspirations and celebrate the country's cultural heritage. Teachers guarantee individual development and maintain a united, democratic, progressive and disciplined society. The teacher's role as a change and leadership agent in education begins in the classroom through formal or informal educational processes and is then extended to schools, communities and the country. The focus of this paper is on how teacher leadership influences the effectiveness of teaching and learning History in the classroom environment. It addresses teachers' perceptions of the role of teacher leadership, instructional leadership, and what constitutes an effective teacher leadership role. The discussion emphasizes the factors affecting the classroom environment, the forming of the classroom agenda, effective classroom implementation methods, a climate suitable for historical learning, and the challenges teachers face in ensuring the effectiveness of teaching and learning processes.
Keywords: Teacher leadership, leadership lessons, effective classroom, effective teacher.
159 A Frame Work for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model
Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri
Abstract:
The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. Fruit production, which generates the seeds, depends on various parameters of the plant, such as shoot length, leaf number, root length and root number. While the plant is growing, some leaves may be lost and new leaves may appear, so it is very difficult to use the number of leaves to quantify the growth of the plant. It is also cumbersome to measure the number of roots and the root length continuously at several time instances after the initial period, because the roots grow deeper and deeper underground over time. On the contrary, the shoot length grows over time and can be measured at different time instances, so the growth of the plant can be measured using shoot-length data recorded at different time instances after plantation. Environmental parameters like temperature, rainfall, humidity and pollution also play a role in yield, while soil, crop and spacing management are taken care of to produce the maximum yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models and methods based on fuzzy logic and neural networks have been tested on this mustard plant and, based on error analysis (calculation of the average error), the model with the minimum error has been selected for the assessment of shoot length at maturity. Finally, all these methods have been tested on other types of mustard plant, and the particular soft computing model with the minimum error across all types has been selected for calculating the predicted growth of shoot length. The shoot length at maturity of all types of mustard plant has then been calculated using the statistical method on the predicted shoot-length data.
Keywords: Fuzzy time series, neural network, forecasting error, average error.
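As a rough illustration of the model-selection step described above (not the authors' actual code), the sketch below compares candidate forecasting models of shoot length by their average absolute error on observed data; the observed values and model predictions are invented placeholders.

```python
import numpy as np

# Hypothetical observed shoot lengths (cm) at 7, 14, 21 and 28 days after plantation
days = np.array([7, 14, 21, 28])
observed = np.array([4.2, 9.8, 16.5, 22.1])

# Hypothetical predictions from three candidate statistical/soft-computing models
predictions = {
    "statistical": np.array([4.0, 10.5, 15.9, 23.0]),
    "fuzzy_time_series": np.array([4.3, 9.6, 16.8, 21.8]),
    "neural_network": np.array([4.1, 9.9, 16.2, 22.4]),
}

# Average absolute error for each model; the model with the minimum error is selected
avg_error = {name: float(np.mean(np.abs(pred - observed))) for name, pred in predictions.items()}
best_model = min(avg_error, key=avg_error.get)
print(avg_error, "-> selected:", best_model)
```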
158 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose a high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction, while the effect of the vertical direction (z-direction) is taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from the effect of wave nonlinearities arising from the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory. Comparison between the GN numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, simulations of solitary wave propagation at an angle to the x-axis and of the interaction of solitary waves with each other are conducted to validate the developed model.
Keywords: Even harmonic components of sloshing waves, Green–Naghdi equations, nonlinearity, solitary waves.
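For reference, a commonly quoted one-dimensional form of the Green–Naghdi (Serre) equations over a flat, horizontal bed is sketched below; the weighted, two-horizontal-dimension formulation used in the study may differ in detail.

```latex
\begin{aligned}
&h_t + (hu)_x = 0,\\
&u_t + u\,u_x + g\,h_x = \frac{1}{3h}\,\partial_x\!\left[h^{3}\left(u_{xt} + u\,u_{xx} - u_x^{2}\right)\right],
\end{aligned}
```

where h is the total water depth, u the depth-averaged horizontal velocity and g the gravitational acceleration; the right-hand side carries the non-hydrostatic (dispersive) correction that distinguishes the GN system from the shallow-water equations.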
157 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators
Authors: Radwa Mabrook
Abstract:
Virtual Reality (VR) content creation is a complex and an expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore the VR potential in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative experimental nature of VR content creation, through tracing every actor involved in the process and examining their perceptions of the VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Therefore, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours, and they were a mix of Skype calls and in-person interviews. Participants consented for their interviews to be recorded, and for their names to be revealed in the study. The researcher coded interviews’ transcripts in Nvivo software, looking for key themes that correspond with the research questions. The study revealed that VR content creators must be adaptive to change, open to learn and comfortable with mistakes. The VR content creation process is very iterative because VR has no established work flow or visual grammar. Multi-disciplinary VR team members often speak different languages making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the strive for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to share their methods of work and their experiences. They target to build a collaborative network that aims to harness VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.
Keywords: Collaborative culture, content creation, experimental culture, virtual reality.
156 Utilization of Industrial Byproducts in Concrete Applications by Adopting Grey Taguchi Method for Optimization
Authors: V. K. Bansal, M. Kumar, P. P. Bansal, A. Batish
Abstract:
This paper presents the results of an experimental investigation carried out to evaluate the effects of partial replacement of cement and fine aggregate with industrial waste by-products on concrete strength properties. The Grey Taguchi approach has been used to optimize the mix proportions for desired properties. In this research work, a ternary combination of industrial waste by-products has been used. The experiments have been designed using Taguchi's L9 orthogonal array with four factors having three levels each. The cement was partially replaced by ladle furnace slag (LFS), fly ash (FA) and copper slag (CS) at 10%, 25% and 40% level and fine aggregate (sand) was partially replaced with electric arc furnace slag (EAFS), iron slag (IS) and glass powder (GP) at 20%, 30% and 40% level. Three water to binder ratios, fixed at 0.40, 0.44 and 0.48, were used, and the curing age was fixed at 7, 28 and 90 days. Thus, a series of nine experiments was conducted on the specimens for water to binder ratios of 0.40, 0.44 and 0.48 at 7, 28 and 90 days of the water curing regime. It is evident from the investigations that Grey Taguchi approach for optimization helps in identifying the factors affecting the final outcomes, i.e. compressive strength and split tensile strength of concrete. For the materials and a range of parameters used in this research, the present study has established optimum mixes in terms of strength properties. The best possible levels of mix proportions were determined for maximization through compressive and splitting tensile strength. To verify the results, the optimal mix was produced and tested. The mixture results in higher compressive strength and split tensile strength than other mixes. The compressive strength and split tensile strength of optimal mixtures are also compared with the control concrete mixtures. The results show that compressive strength and split tensile strength of concrete made with partial replacement of cement and fine aggregate is more than control concrete at all ages and w/c ratios. Based on the overall observations, it can be recommended that industrial waste by-products in ternary combinations can effectively be utilized as partial replacements of cement and fine aggregates in all concrete applications.
Keywords: Analysis of variance, ANOVA, compressive strength, concrete, grey Taguchi method, industrial by-products, split tensile strength.
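As an indicative sketch of the grey relational step commonly used in Grey Taguchi optimization (not the authors' code), the snippet below normalizes larger-the-better responses from an L9 array, computes grey relational coefficients (distinguishing coefficient ζ = 0.5) and ranks the runs by grey relational grade; the response values are placeholders.

```python
import numpy as np

# Hypothetical L9 responses: rows = 9 runs, columns = [compressive strength, split tensile strength] (MPa)
y = np.array([
    [32.1, 3.1], [35.4, 3.4], [30.8, 2.9],
    [38.2, 3.7], [33.6, 3.2], [36.9, 3.5],
    [40.1, 3.9], [34.8, 3.3], [37.5, 3.6],
])

# Larger-the-better normalization to [0, 1]
norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# Grey relational coefficient with distinguishing coefficient zeta = 0.5
delta = np.abs(1.0 - norm)                       # deviation from the ideal sequence (1.0)
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade = mean coefficient per run; a higher grade indicates a better mix
grade = grc.mean(axis=1)
print("best run (1-indexed):", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
```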
155 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III
Authors: M. Okan Tasar
Abstract:
Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to: (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source, (2) improve risk management and governance, and (3) strengthen banks' transparency and disclosures. Similarly, the reforms target: (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions to periods of stress, and (2) macro-prudential regulation of system-wide risks that can build up across the banking sector, as well as the pro-cyclical amplification of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Concerning the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 percent to -0.15 percent per year. On the other hand, economic output is mainly affected by an increase in bank lending spreads, as banks pass a rise in bank funding costs, due to higher capital requirements, on to their customers. Consequently, the estimated effects on GDP growth assume no active response from monetary policy; the impact of Basel III on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework based on the recent regulations in order to prevent financial crises; the lessons from overcoming the global financial crisis can thus contribute to addressing financial crises that may occur in future periods. The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. In the second part, financial regulations, and Basel III in particular, are analyzed. The last section explores the possible macroeconomic impacts of Basel III.
Keywords: Banking Systems, Basel III, Financial regulation, Global Financial Crisis.
154 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects to the physical, psychological and organizational levels, which highlight the need to use effective coping strategies to deal with it. Several coping models exist, but they don’t integrate the different strategies in a coherent way nor do they take into account the new research on the emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from the review of the scientific literature on coping and from a qualitative study carried out among workers with low or high levels of stress, as well as from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (the Modification of the Situation and the Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness) as well as so-called General Strategies (Wellbeing and Avoidance). This study is intended to undertake and present the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items has been generated from the conceptual definitions and three expert judges have validated the content. Of these, 18 items have been selected for a short form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed following the inter-rater agreement (Krippendorff’s alpha) and the calculation of the coefficients for internal consistency (Cronbach's alpha) are satisfactory. To evaluate the construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. The results of this analysis suggest also that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrates the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies to cope with stress and thus prevent mental health issues.
Keywords: Acceptance, coping strategies, measurement instrument, questionnaire, stress, validation process.
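As a minimal illustration of one reliability index reported above, the function below computes Cronbach's alpha from a respondents-by-items score matrix; the toy data are invented and do not reproduce the study's results.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)         # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy example: 6 respondents answering 3 items of one hypothetical subscale (e.g. Acceptance)
toy = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 4, 3]])
print(round(cronbach_alpha(toy), 3))
```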
153 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Drug delivery systems in which drugs are used traditionally, in multiple stages and at specified intervals by patients, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques, and most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages: with controlled release, the concentration of the drug in the body is kept at a certain level and delivery can proceed at a higher rate over a short time. Graphene is a biodegradable, non-toxic, natural material, and the wide surface of graphene sheets makes graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. In this work, graphene oxide (GO) covalently modified by chitosan (CS) was developed for the controlled release of doxorubicin (DOX). GO is produced by the Hummers method under acidic conditions. It is then chlorinated by oxalyl chloride to increase its reactivity towards amines. After that, in the presence of CS, the amidation reaction is performed to form amide linkages, and DOX is attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR and TGA, to identify the new functional groups that indicate the bonding of CS to GO, and by Raman spectroscopy and SEM, which show the changes in the size and number of layers. The loading and release capability was determined by UV-Visible spectroscopy. The loading result showed a high capacity of DOX absorption (99%), and pH dependence was identified in the release of DOX from the GO-CS nanosheets at pH 5.3 and 7.4, with a fast release rate under acidic conditions.
Keywords: Graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin.
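As a hedged sketch of how a loading efficiency such as the reported 99% can be derived from UV-Visible data (not the authors' protocol), the snippet below converts the absorbance of the unbound DOX left in the supernatant into a concentration via a linear calibration and computes the encapsulated fraction; the calibration slope, intercept and absorbance values are assumptions.

```python
# Hypothetical Beer-Lambert calibration for DOX at its absorption maximum: A = slope * C + intercept
slope, intercept = 0.021, 0.002          # assumed calibration (absorbance per ug/mL)

def dox_loading_efficiency(a_supernatant, c_initial_ug_ml):
    """Percent of offered DOX bound to the GO-CS nanosheets, from supernatant absorbance."""
    c_free = (a_supernatant - intercept) / slope        # unbound DOX concentration (ug/mL)
    return 100.0 * (c_initial_ug_ml - c_free) / c_initial_ug_ml

# Assumed numbers: 100 ug/mL DOX offered, supernatant absorbance 0.023 after loading
print(round(dox_loading_efficiency(0.023, 100.0), 1), "% loaded")
```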
152 Biological Hotspots in the Galápagos Islands: Exploring Seasonal Trends of Ocean Climate Drivers to Monitor Algal Blooms
Authors: Emily Kislik, Gabriel Mantilla Saltos, Gladys Torres, Mercy Borbor-Córdova
Abstract:
The Galápagos Marine Reserve (GMR) is an internationally-recognized region of consistent upwelling events, high productivity, and rich biodiversity. Despite its high-nutrient, low-chlorophyll condition, the archipelago has experienced phytoplankton blooms, especially in the western section between Isabela and Fernandina Islands. However, little is known about how climate variability will affect future phytoplankton standing stock in the Galápagos, and no consistent protocols currently exist to quantify phytoplankton biomass, identify species, or monitor for potential harmful algal blooms (HABs) within the archipelago. This analysis investigates physical, chemical, and biological oceanic variables that contribute to algal blooms within the GMR, using 4 km Aqua MODIS satellite imagery and 0.125-degree wind stress data from January 2003 to December 2016. Furthermore, this study analyzes chlorophyll-a concentrations at varying spatial scales— within the greater archipelago, as well as within five smaller bioregions based on species biodiversity in the GMR. Seasonal and interannual trend analyses, correlations, and hotspot identification were performed. Results demonstrate that chlorophyll-a is expressed in two seasons throughout the year in the GMR, most frequently in September and March, with a notable hotspot in the Elizabeth Bay bioregion. Interannual chlorophyll-a trend analyses revealed highest peaks in 2003, 2007, 2013, and 2016, and variables that correlate highly with chlorophyll-a include surface temperature and particulate organic carbon. This study recommends future in situ sampling locations for phytoplankton monitoring, including the Elizabeth Bay bioregion. Conclusions from this study contribute to the knowledge of oceanic drivers that catalyze primary productivity and consequently affect species biodiversity within the GMR. Additionally, this research can inform policy and decision-making strategies for species conservation and management within bioregions of the Galápagos.
Keywords: Bioregions, ecological monitoring, phytoplankton, remote sensing.
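The Mann-Kendall trend analysis mentioned above can be computed from a chlorophyll-a series as in the sketch below (illustrative only; the data values are placeholders, not the MODIS-derived series).

```python
import numpy as np
from math import sqrt
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns S statistic, Z score and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

# Placeholder annual-mean chlorophyll-a values (mg m^-3) for one bioregion, 2003-2016
chl = [0.41, 0.35, 0.33, 0.36, 0.44, 0.37, 0.38, 0.40, 0.39, 0.42, 0.47, 0.41, 0.43, 0.49]
print(mann_kendall(chl))
```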
151 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses
Authors: Emre Kara, Ahmet F. Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul
Abstract:
The structures obtained with the use of sandwich technologies combine low weight with high energy-absorbing capacity and load-carrying capacity. Hence, there is a growing and marked interest in the use of sandwiches with an aluminum foam core because of very good properties such as flexural rigidity and energy absorption capability. In the current investigation, static three-point bending tests were carried out on sandwiches with an aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different values of support span distance, aiming at the analysis of their flexural performance. The influence of the core thickness and the GFRP skin type is reported in terms of peak load and energy absorption capacity. For this purpose, skins made of two different types of fabrics with the same thickness and aluminum foam cores with two different thicknesses were bonded with a commercial polyurethane-based flexible adhesive in order to assemble the composite sandwich panels. The main results of the bending tests are: force-displacement curves, peak force values, absorbed energy, collapse mechanisms, and the effect of the support span length and core thickness. The results of the experimental study showed that the sandwich with skins made of S-Glass woven fabrics and with the thicker foam core presented higher mechanical values, such as load-carrying and energy absorption capacities. The increase of the support span distance generated a decrease of the mechanical values for each type of panel, as expected, because of the inverse proportion between force and span length. The most common failure types of the sandwiches are debonding of the lower skin and core shear. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine industry), where the problems of collision and crash have increased in recent years.
Keywords: Aluminum foam, Composite panel, Flexure, Transport application.
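As an indicative post-processing sketch (not the authors' code), the peak force and the energy absorbed can be extracted from a recorded force-displacement curve as below; the curve is synthetic.

```python
import numpy as np

# Synthetic force-displacement record from a three-point bending test (displacement in mm, force in N)
disp = np.linspace(0.0, 12.0, 121)
force = 900.0 * np.sin(np.clip(disp / 8.0, 0.0, np.pi / 2)) * np.exp(-0.08 * np.maximum(disp - 8.0, 0.0))

peak_force = force.max()

# Absorbed energy = area under the force-displacement curve (trapezoidal rule), converted from N*mm to J
energy_J = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))) / 1000.0
print(f"peak force = {peak_force:.0f} N, absorbed energy = {energy_J:.2f} J")
```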
150 Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro Tanzania
Authors: Japhet N. Mwambusi
Abstract:
High deforestation of tropical forests for the solid wood furniture industry is among the contributing agents of climate change. This pressure is indirectly caused by furniture joint failures due to poor gluing technology, namely the improper matching of glues to wood species, which leads to low-quality, weak wood-glue joints. This study was carried out in order to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis and E. maidenii, were tested against five glues found on the market: Woodfix, Bullbond, Ponal, Fevicol and Coral. The findings were needed for developing a guideline for proper glue selection for joining a particular wood species. Random sampling was employed to interview carpenters while conducting a survey on their background, such as their education level, and to determine the factors that influence their choice of glue. A Monsanto tensiometer was used to determine the bonding strength of the identified wood glues to the different wood species in use, following the British Standard procedure for testing wood shear strength (BS EN 205). Data obtained from interviewing carpenters were analyzed through the Statistical Package for the Social Sciences (SPSS) to allow comparison of the different data, while laboratory data were compiled, related and compared using MS Excel worksheets as well as Analysis of Variance (ANOVA). Results revealed that, among all five wood glues tested in the laboratory on the three wood species, Coral performed much better, with average shear strengths of 4.18 N/mm2, 3.23 N/mm2 and 5.42 N/mm2 for Cypress, Teak and Eucalyptus, respectively. This shows that, for a strong joint to be formed in all three wood species, both softwood and hardwood, Coral should be the first priority in use. The guideline table developed from this research can be useful to carpenters for proper glue selection for a particular wood species, so as to achieve adequate glue-bond strength. This will secure the furniture market as well as reduce pressure on the forests for furniture production, because existing furniture will last longer thanks to its strong joints. Indeed, this can be a good strategy for reducing the speed of climate change in the tropics, which results from the high deforestation of trees for furniture production.
Keywords: Climate change, deforestation, gluing technology, joint failure, wood-glue, wood species.
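A minimal sketch of the shear-strength comparison described above, assuming hypothetical replicate values (N/mm2) for the five glues on one wood species; a one-way ANOVA flags whether the glue means differ.

```python
from scipy.stats import f_oneway

# Hypothetical lap-shear strengths (N/mm^2) of bonded Eucalyptus specimens, per glue brand
woodfix = [3.1, 3.4, 2.9, 3.2]
bullbond = [3.6, 3.3, 3.8, 3.5]
ponal = [4.0, 3.7, 4.2, 3.9]
fevicol = [3.9, 4.1, 3.8, 4.0]
coral = [5.5, 5.3, 5.6, 5.2]

f_stat, p_value = f_oneway(woodfix, bullbond, ponal, fevicol, coral)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")   # a small p-value means at least one glue differs in mean strength
```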
149 Maize Tolerance to Natural and Artificial Infestation with Diabrotica virgifera virgifera Eggs
Authors: Snežana T. Tanasković, Sonja M. Gvozdenac, Branka D. Popović, Vesna M. Đurović, Matthias Erb
Abstract:
Western corn rootworm (WCR; Diabrotica virgifera virgifera, Coleoptera, Chrysomelidae) is economically the most important pest of maize worldwide. The natural WCR population is already very abundant in Serbian fields and keeps increasing each year. Tolerance is recognized by a larger root size and greater root regrowth, while severe larval injuries cause a lack of compensatory regrowth and lead to a reduction of plant growth and yield. The aim of this research was to evaluate the tolerance of the commercial Serbian maize hybrid NS 640 under natural WCR infestation and under conditions of artificial infestation, and to obtain information about its tolerance to WCR larval feeding in two consecutive years. Field experiments were conducted in 2015 and 2016 in Bečej (Vojvodina province, Serbia). In the experimental field, 96 plants were selected, marked and arranged in 48 pairs, each pair consisting of two plants. The first plant was artificially infested with 4 mL of WCR egg suspension in agar (550 eggs plant-1) in the root zone (D plant). The second plant represented the control (C plant), with an injection of 4 mL of distilled water into the root zone. The experimental field was inspected weekly. Hybrid tolerance was assessed based on the root injury level and root mass. Root injury was rated using the Node-Injury Scale 1-6 during the last field inspection (September-October). Comparing the root injuries on D and C plants in 2015, more severe damage was recorded on D plants (12 plants with rating 5 and 17 plants with rating 6) than on C plants (2 plants with rating 5 and 8 plants with rating 6). Also, the highest number of plants with healthy roots (rating 1) was registered in the control (25 plants), while only 4 D plants were rated at injury level 1. In 2016, root injuries caused by WCR larvae on D and C plants did not differ significantly. The reason is the difference in climatic conditions between the years: 2015 was extremely dry and more suitable for WCR larval development and movement in the soil than 2016. Thus, more severe damage appeared on the artificially infested plants (D plants). Root mass was strongly correlated with the level of root injury, but did not differ significantly between D and C plants in either year.
Keywords: D. v. virgifera, maize, root injury, tolerance.
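As an illustrative analysis sketch for the paired design above (infested D plant vs. control C plant in each pair), a Wilcoxon signed-rank test on hypothetical Node-Injury ratings is shown; the ratings are invented and not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical Node-Injury Scale (1-6) ratings for 12 plant pairs: artificially infested (D) vs control (C)
d_plants = [6, 5, 6, 4, 5, 6, 3, 5, 6, 4, 5, 6]
c_plants = [2, 1, 3, 1, 2, 4, 1, 2, 3, 1, 2, 2]

stat, p_value = wilcoxon(d_plants, c_plants)
print(f"W = {stat}, p = {p_value:.4f}")   # a small p-value indicates injuries on D plants exceed those on C plants
```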
148 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon
Authors: Allaw Kamel, Bazzi Hasan
Abstract:
Municipal solid waste (MSW) generation is among the most significant sources which threaten the global environmental health. Solid Waste Management has been an important environmental problem in developing countries because of the difficulties in finding sustainable solutions for solid wastes. Therefore, more efforts are needed to be implemented to overcome this problem. Lebanon has suffered a severe solid waste management problem in 2015, and a new landfill site was proposed to solve the existing problem. The study aims to identify and locate the most suitable area to construct a landfill taking into consideration the sustainable development to overcome the present situation and protect the future demands. Throughout the article, a landfill site selection methodology was discussed using Geographic Information System (GIS) and Multi Criteria Decision Analysis (MCDA). Several environmental, economic and social factors were taken as criterion for selection of a landfill. Soil, geology, and LUC (Land Use and Land Cover) indices with the Sustainable Development Index were main inputs to create the final map of Environmentally Sensitive Area (ESA) for landfill site. Different factors were determined to define each index. Input data of each factor was managed, visualized and analyzed using GIS. GIS was used as an important tool to identify suitable areas for landfill. Spatial Analysis (SA), Analysis and Management GIS tools were implemented to produce input maps capable of identifying suitable areas related to each index. Weight has been assigned to each factor in the same index, and the main weights were assigned to each index used. The combination of the different indices map generates the final output map of ESA. The output map was reclassified into three suitability classes of low, moderate, and high suitability. Results showed different locations suitable for the construction of a landfill. Results also reflected the importance of GIS and MCDA in helping decision makers finding a solution of solid wastes by a sanitary landfill.
Keywords: Sustainable development, landfill, municipal solid waste, geographic information system, GIS, multi criteria decision analysis, environmentally sensitive area.
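A minimal raster-overlay sketch of the MCDA step described above (illustrative only, not the study's GIS workflow): index layers scored 1-9 are combined with assumed weights and the result is reclassified into low/moderate/high suitability.

```python
import numpy as np

# Hypothetical 4x4 suitability scores (1 = worst, 9 = best) for three index rasters covering the study area
soil = np.array([[3, 5, 7, 9], [2, 4, 6, 8], [1, 3, 5, 7], [2, 4, 6, 8]])
geology = np.array([[4, 4, 6, 8], [3, 5, 7, 7], [2, 4, 6, 6], [3, 5, 5, 9]])
luc = np.array([[5, 6, 7, 8], [4, 5, 6, 7], [3, 4, 5, 6], [4, 5, 6, 7]])

weights = {"soil": 0.4, "geology": 0.35, "luc": 0.25}     # assumed criterion weights (sum to 1)
esa = weights["soil"] * soil + weights["geology"] * geology + weights["luc"] * luc

# Reclassify the composite index into 3 suitability classes: 0 = low, 1 = moderate, 2 = high
classes = np.digitize(esa, bins=[4.0, 6.5])
print(classes)
```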
147 Switching Studies on Ge15In5Te56Ag24 Thin Films
Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das
Abstract:
Germanium Telluride based quaternary thin film switching devices with composition Ge15In5Te56Ag24, have been deposited in sandwich geometry on glass substrate with aluminum as top and bottom electrodes. The bulk glassy form of the said composition is prepared by melt quenching technique. In this technique, appropriate quantity of elements with high purity are taken in a quartz ampoule and sealed under a vacuum of 10-5 mbar. Then, it is allowed to rotate in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt. After that, the ampoule is quenched into a mixture of ice - water and NaOH to get the bulk ingot of the sample. The sample is then coated on a glass substrate using flash evaporation technique at a vacuum level of 10-6 mbar. The XRD report reveals the amorphous nature of the thin film sample and Energy - Dispersive X-ray Analysis (EDAX) confirms that the film retains the same chemical composition as that of the base sample. Electrical switching behavior of the device is studied with the help of Keithley (2410c) source-measure unit interfaced with Lab VIEW 7 (National Instruments). Switching studies, mainly SET (changing the state of the material from amorphous to crystalline) operation is conducted on the thin film form of the sample. This device is found to manifest memory switching as the device remains 'ON' even after the removal of the electric field. Also it is found that amorphous Ge15In5Te56Ag24 thin film unveils clean memory type of electrical switching behavior which can be justified by the absence of fluctuation in the I-V characteristics. The I-V characteristic also reveals that the switching is faster in this sample as no data points could be seen in the negative resistance region during the transition to on state and this leads to the conclusion of fast phase change during SET process. Scanning Electron Microscopy (SEM) studies are performed on the chosen sample to study the structural changes at the time of switching. SEM studies on the switched Ge15In5Te56Ag24 sample has shown some morphological changes at the place of switching wherein it can be explained that a conducting crystalline channel is formed in the device when the device switches from high resistance to low resistance state. From these studies it can be concluded that the material may find its application in fast switching Non-Volatile Phase Change Memory (PCM) Devices.
Keywords: Chalcogenides, vapor deposition, electrical switching, PCM.
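As a rough post-processing sketch (not the authors' measurement code), the switching voltage can be located from recorded I-V data as the point where the device resistance collapses; the I-V values below are synthetic.

```python
import numpy as np

# Synthetic I-V sweep of an amorphous chalcogenide device: high-resistance OFF state up to ~3.2 V, then memory switching
voltage = np.linspace(0.1, 5.0, 50)
current = np.where(voltage < 3.2, voltage / 1e6, voltage / 2e2)   # assumed 1 Mohm OFF state, 200 ohm ON state

resistance = voltage / current
switch_idx = int(np.argmax(resistance[:-1] / resistance[1:] > 100))  # first large drop in resistance
print(f"switching voltage ~ {voltage[switch_idx + 1]:.2f} V")
```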
146 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries
Authors: Malek Makki
Abstract:
The main purpose of this research study is to assist non-profit organizations (NPOs) to better segment a group of least developed countries and to optimally target the neediest areas, so that the provided aid makes a positive and lasting difference. We applied international marketing and strategy approaches to segment a sub-group of candidates among a group of 151 countries identified by the UN-G77 list and, furthermore, we point out the areas of priority. We use reliable and well-known criteria on the basis of economic, geographic, demographic and behavioral factors. These criteria can be objectively estimated and updated, so that a follow-up can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perception index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to highlight the relative importance of each criterion within the country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). Construct equivalence was ensured to compare the same variables across countries. The combination of all these weighted, estimated criteria provides us with a global index that represents the level of development per country. An absolute index that combines wars and risks was introduced to exclude or include a country on the basis of conflicts and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study allowed us to exclude 16 countries for risk and security reasons. We also excluded four countries because they lack reliable and complete data. The other countries were classified per percentile through their global index, and we identified the neediest countries and the areas where aid is most required, to help any NPO prioritize the areas of implementation. This new concept is based on defined, actionable, accessible and accurate variables by which NPOs can implement their programs, and it can be extended to for-profit companies carrying out their corporate social responsibility activities.
Keywords: Developing countries, International marketing, non-profit organization, segmentation.
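The weighted-index and percentile-benchmark logic described above could look like the following sketch; country names, criterion scores and weights are invented placeholders, not the study's data.

```python
import numpy as np

criteria = ["gdp_per_capita", "corruption_index", "internet_use"]      # subset of the 12 criteria, for illustration
weights = np.array([0.5, 0.3, 0.2])                                    # assumed relative weights

# Hypothetical normalized scores (0 = worst, 1 = best) for four candidate countries
scores = {
    "Country A": np.array([0.20, 0.35, 0.15]),
    "Country B": np.array([0.55, 0.40, 0.50]),
    "Country C": np.array([0.10, 0.20, 0.05]),
    "Country D": np.array([0.70, 0.65, 0.80]),
}

# Global development index per country; a lower index means a needier country, hence higher priority for aid
index = {c: float(weights @ s) for c, s in scores.items()}
ranked = sorted(index, key=index.get)
print({c: round(index[c], 3) for c in ranked}, "-> neediest:", ranked[0])
```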
145 A Ground Observation Based Climatology of Winter Fog: Study over the Indo-Gangetic Plains, India
Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva
Abstract:
Every year, fog formation over the Indo-Gangetic Plains (IGPs) of Indian region during the winter months of December and January is believed to create numerous hazards, inconvenience, and economic loss to the inhabitants of this densely populated region of Indian subcontinent. The aim of the paper is to analyze the spatial and temporal variability of winter fog over IGPs. Long term ground observations of visibility and other meteorological parameters (1971-2010) have been analyzed to understand the formation of fog phenomena and its relevance during the peak winter months of January and December over IGP of India. In order to examine the temporal variability, time series and trend analysis were carried out by using the Mann-Kendall Statistical test. Trend analysis performed by using the Mann-Kendall test, accepts the alternate hypothesis with 95% confidence level indicating that there exists a trend. Kendall tau’s statistics showed that there exists a positive correlation between time series and fog frequency. Further, the Theil and Sen’s median slope estimate showed that the magnitude of trend is positive. Magnitude is higher during January compared to December for the entire IGP except in December when it is high over the western IGP. Decade wise time series analysis revealed that there has been continuous increase in fog days. The net overall increase of 99 % was observed over IGP in last four decades. Diurnal variability and average daily persistence were computed by using descriptive statistical techniques. Geo-statistical analysis of fog was carried out to understand the spatial variability of fog. Geo-statistical analysis of fog revealed that IGP is a high fog prone zone with fog occurrence frequency of more than 66% days during the study period. Diurnal variability indicates the peak occurrence of fog is between 06:00 and 10:00 local time and average daily fog persistence extends to 5 to 7 hours during the peak winter season. The results would offer a new perspective to take proactive measures in reducing the irreparable damage that could be caused due to changing trends of fog.
Keywords: Fog, climatology, Mann-Kendall test, trend analysis, spatial variability, temporal variability, visibility.
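For illustration of the trend statistics named above, the snippet below applies Kendall's tau and the Theil-Sen median slope to a placeholder series of annual foggy-day counts (values invented, not the station observations).

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

years = np.arange(1971, 2011)
# Placeholder annual counts of fog days at one IGP station, drifting upward over the four decades
fog_days = 20 + 0.9 * (years - 1971) + np.random.default_rng(0).normal(0, 4, size=years.size)

tau, p_value = kendalltau(years, fog_days)                 # positive tau indicates an increasing trend
slope, intercept, lo, hi = theilslopes(fog_days, years)    # Theil-Sen median slope (fog days per year)
print(f"tau = {tau:.2f} (p = {p_value:.3g}), Sen slope = {slope:.2f} days/yr [{lo:.2f}, {hi:.2f}]")
```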
144 Genetic Polymorphism of the Acute Lymphoblastic Leukaemia and Its Relation with Hyperhomocysteinemia for a Group of Children in the East of Algeria
Authors: Yahia Massinissa, Kalla A, Yahia M, Benbia S
Abstract:
Much recent research has addressed the relation between an increase in homocysteinemia and some kinds of cancer. For that reason, our study investigated a possible relation between an increased plasma concentration of this amino acid and the appearance of acute lymphoblastic leukaemia in a group of Algerian children of Berber origin in the East of Algeria. The study was carried out on 47 patients with an average age of (09±06) years, in whom the disease had been diagnosed by blood and bone marrow examination in the blood diseases hospital of the CHU of Batna, and on 194 healthy witnesses of the same age. Both groups underwent measurement of the plasma concentrations of homocysteine, vitamin B9 and vitamin B12, as well as the study of specific polymorphisms of enzymes that are indispensable in the metabolism of this amino acid, using real-time PCR (LightCycler) for the following enzymes: MS (C2756G), MSR (A66G), MTHFR1 (C677T) and MTHFR2 (A1298C). The results revealed that the homozygous mutated genotype is the least frequent in both groups, and that for each enzyme there exists at least one genotype whose percentage in the patient group remarkably exceeds that of the same genotype in the healthy group; we note especially the mutated genotype GG of methionine synthase and the TT form of methylenetetrahydrofolate reductase. A considerable number of genotypes in the patient group are linked with a characteristic increase of this amino acid, owing to the reduced biological activity of these enzymes, which become inefficient in converting homocysteine into methionine; the consequence is a reduction in the percentage of methyl groups in the DNA of the studied genes, which leads to an increase in transcription activity and capacity. It is therefore probable that this last effect is one of the factors of this disease, especially since the specific vitamin profile is normal and similar in the two groups, which rules out the hypothesis of a vitamin deficiency. We also note that the heterozygous genotype is the least frequent in the patient group, except for MTHFR2, and that the wild-type genotype is more frequent in the witness group, except for MSR. Even though these results are partial, they open a new way in the genetic diagnosis of this malignant disease, allowing early diagnosis and the use of an effective and appropriate treatment at the same time.
Keywords: Genetic polymorphism, Acute Lymphoblastic Leukaemia, Biomarkers, Metabolism of homocysteine.
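As a generic sketch of how genotype distributions in patients and controls can be compared (not necessarily the authors' analysis), a chi-square test on an invented 2x3 contingency table of C677T genotypes is shown below.

```python
from scipy.stats import chi2_contingency

# Hypothetical genotype counts (CC, CT, TT) for the MTHFR1 C677T polymorphism
#            CC   CT  TT
patients = [18, 17, 12]      # e.g. 47 children with acute lymphoblastic leukaemia
controls = [110, 70, 14]     # e.g. 194 healthy witnesses

chi2, p_value, dof, expected = chi2_contingency([patients, controls])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")   # a small p-value suggests the distributions differ
```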
143 Economic Evaluation of Degradation by Corrosion of an on-Grid Battery Energy Storage System: A Case Study in Algeria Territory
Authors: Fouzia Brihmat
Abstract:
Economic planning models, which are used to build microgrids and Distributed Energy Resources (DER), are the current norm for expressing such confidence. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES) based on Total Net Present Cost (TNPC) in an Algerian Saharan area, which has a high potential for solar irradiation and has a production capacity of 1 GW/h. Lead-acid batteries have been around much longer and are easier to understand, but have limited storage capacity. Lithium-ion batteries last longer, are lighter, but generally more expensive. By combining the advantages of each chemistry, we produce cost-effective high-capacity battery banks that operate solely on AC coupling. The financial implications of this research describe the corrosion process that occurs at the interface between the active material and grid material of the positive plate of a lead-acid battery. The best cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link. Additionally, during the course of the project's 20 years, the system is simulated for each time step. In this model, which takes into consideration decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation, the trade-off is that the model is more accurate, but the computation takes longer. We initially utilized the optimizer to run the model without multi-year in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator, which has 760 kW, 200 kWh of the necessary quantity of lead-acid storage, and a somewhat lower Cost Of Energy (COE) of $0.309/kWh. Different scenarios that account for fluctuations in the gasified biomass generator's production of electricity have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated.
Keywords: Battery, Corrosion, Diesel, Economic planning optimization, Hybrid energy system, HES, Lead-acid battery, Li-ion battery, multi-year planning, microgrid, price forecast, total net present cost, wind.
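For orientation, the two headline economic metrics mentioned above (total net present cost and cost of energy) are commonly related through the capital recovery factor, as sketched below; the discount rate, project lifetime, annualized cost and load served are assumed placeholders, not project values.

```python
def capital_recovery_factor(i, n):
    """Capital recovery factor for real discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def total_npc(annualized_cost, i, n):
    """Total net present cost from the total annualized cost."""
    return annualized_cost / capital_recovery_factor(i, n)

def cost_of_energy(annualized_cost, energy_served_kwh):
    """Levelized cost of energy in $/kWh."""
    return annualized_cost / energy_served_kwh

# Assumed figures for a PV-diesel-storage microgrid over a 20-year project
annualized = 480_000.0        # $/yr total annualized cost (assumption)
print(round(total_npc(annualized, i=0.06, n=20), 0),
      round(cost_of_energy(annualized, energy_served_kwh=1_550_000.0), 3))
```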
142 Fetal and Infant Mortality in Botucatu City, São Paulo State, Brazil: Evaluation of Maternal - Infant Health Care
Authors: Noda L. M., Salvador I. C, C. M. L. G. Parada, Fonseca C. R. B.
Abstract:
In Brazil, the neonatal mortality rate is considered incompatible with the country's development conditions and has been a public health concern. Reduction in infant mortality rates has also been part of the Millennium Development Goals, a commitment made by the member countries of the United Nations (UN), including Brazil. The fetal mortality rate is considered a highly sensitive indicator of health care quality. Suitable actions, such as good quality of, and access to, health services, may contribute positively towards a reduction in these fetal and neonatal rates. With appropriate antenatal follow-up and health care during gestation and delivery, some causes of death could be reduced or even prevented by means of early diagnosis and intervention, as well as changes in risk factors. Objectives: To study the quality of maternal and infant health care based on fetal and neonatal mortality, as well as the possible actions to prevent those deaths, in Botucatu (Brazil). Methods: Classification of preventability according to the International Classification of Diseases and the modified Wigglesworth classification. In order to evaluate adequacy, indicators of the quality of antenatal and delivery care were established by the authors. Results: Considering fetal deaths, 56.7% of them occurred before delivery, which reveals possible shortcomings in antenatal care, and 38.2% of them were a result of intra-labor changes, which could be prevented or reduced by adequate obstetric management. These findings differed from those in the group of early neonatal deaths, which were also studied. Evaluation of the adequacy of health services showed that antenatal and childbirth care was appropriate for 24% and 33.3% of pregnant women, respectively, which corroborates the preventability results. These findings revealed that shortcomings in obstetric and antenatal care could be the causes of the deaths in the study. Early and late neonatal deaths have similar characteristics: 76% could be prevented or reduced, mainly by adequate newborn care (52.9%) and adequate health care for pregnant women (11.7%). When the adequacy of care was evaluated, childbirth and newborn care was adequate in 25.8% and antenatal care was adequate in 16.1% of cases. In conclusion, a direct relationship was found between the adequacy and quality of care rendered to pregnant women and newborns, and fetal and infant mortality. Moreover, our findings highlight that deaths could be prevented by adequate obstetric and neonatal management.
Keywords: Fetal Mortality, Infant Mortality, Maternal-Child Health Services, Program Evaluation.
141 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Geryes Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) is considered as an operation of two phases: (1) the observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g. Fusion Servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (Marine Cables, Smart Sensors, Data Fusion Server, etc). The logical and physical components that are used in these observatories perform some critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g. military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach in relying on Enterprise Architecture (EA) framework that respects: multiple-views, perspectives of stakeholders, and domain specificity. On the other hand, it helps reducing both complexity and time spent in design activity, while preventing from design modeling errors during porting this activity in the MO domain. As conclusion, this work aims to demonstrate that we can improve the design activity of complex system based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and simulation approach to consolidate the system design.
Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.
140 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes
Authors: Alan Luo, Hunter N. B. Moseley
Abstract:
Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, be it derived from X-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for X-ray crystallography or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality was detected across X-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher’s exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized b-factors but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distance as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
Keywords: Biomacromolecular structure, coenzyme, electron density discrepancy analysis, X-ray crystallography.
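A minimal sketch of one of the Fisher's exact tests described above, using an invented 2x2 table of ligand-bound regions classified as outlier vs. non-outlier and b-factor above vs. below a chosen cutoff; the counts are placeholders.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: rows = regional-quality outlier (top 5%) vs non-outlier,
# columns = normalized b-factor above vs below the chosen cutoff
table = [[42, 8],      # outlier regions:      42 high b-factor, 8 low b-factor
         [310, 640]]   # non-outlier regions: 310 high b-factor, 640 low b-factor

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```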
139 Urban Accessibility of Historical Cities: The Venetian Case Study
Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera
Abstract:
The preservation of historical Italian heritage, at the urban and architectural scale, has to consider restrictions and requirements connected with conservation issues and usability needs, which are often at odds with historical heritage preservation. Recent decades have been marked by the search for increased accessibility not only of public and private buildings, but to the whole historical city, also for people with disability. Moreover, in the last years the concepts of Smart City and Healthy City seek to improve accessibility both in terms of mobility (independent or assisted) and fruition of goods and services, also for historical cities. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices. Moreover, they have contributed to transforming “special needs” into an opportunity of social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at the same time a site of UNESCO world heritage, a mass tourism destination bringing in visitors from all over the world and a city inhabited by an aging population. Due to its conformation, Venetian urban fabric is only partially accessible: about four thousand bridges divide thousands of islands, making it almost impossible to move independently. These urban characteristics and difficulties were the base, in the last 20 years, for several researches, experimentations and solutions with the aim of eliminating architectural barriers, in particular for the usability of bridges. The Venetian Municipality with the EBA Office and some external consultants realized several devices (e.g. the “stepped ramp” and the new accessible ramps for the Venice Marathon) that should determine an innovation for the city, passing from the use of mechanical replicable devices to specific architectural projects in order to guarantee autonomy in use. This paper intends to present the state-of-the-art in bridges accessibility, through an analysis based on Inclusive Design principles and on the current national and regional regulation. The purpose is to evaluate some possible strategies that could improve performances, between limits and possibilities of interventions. The aim of the research is to lay the foundations for the development of a strategic program for the City of Venice that could successfully bring together both conservation and improvement requirements.
Keywords: Accessibility and inclusive design, historical heritage preservation, technological and social innovation.
138 Obese and Overweight Women and Public Health Issues in Hillah City, Iraq
Authors: Amean A. Yasir, Zainab Kh. A. Al-Mahdi Al-Amean
Abstract:
In both developed and developing countries, obesity among women is increasing, but in different patterns and at very different speeds. It can have a negative effect on health, leading to reduced life expectancy and/or increased health problems. This research studied the age distribution of obese women, the classes of overweight and obesity, the extent of the overweight/obesity problem, and its etiological factors among women in Hillah city in central Iraq. A total of 322 overweight and obese women, selected at random, were included in the study. The Body Mass Index (BMI) was used as the indicator of overweight and obesity. The incidence of overweight/obesity was estimated for each age group, and the etiological factors considered included genetic, environmental, combined genetic/environmental, and endocrine disease. The overweight and obese women were also screened for infections and/or diseases. The study found that, of the 322 women, 19.25% were overweight and 80.78% were obese. Based on BMI and the WHO classification, the obese women were recorded as class-1 obesity (29.81%), class-2 obesity (24.22%), and class-3 obesity (26.70%), a difference that was not statistically significant at the P < 0.05 level. The incidence of overweight was highest among women aged 20-29 years (90.32%), with 6.45% aged 30-39 years and 3.22% aged ≥ 60 years, while the incidence of obesity was 20.38% in the 20-29 years age group, 17.30% in the 30-39 years group, 23.84% in the 40-49 years group, 16.92% in the 50-59 years group, and 21.53% in the ≥ 60 years group. These results confirm that age is a significant factor in obesity class (P value < 0.0001). The results also showed that a combination of genetic and environmental factors was responsible for most cases of overweight or obesity (84.78%, P value < 0.0001). Cases of various repeated infections (skin infection, recurrent UTI and influenza), cancer, gallstones, high blood pressure, type 2 diabetes, and infertility were also recorded. Weight stigma and bias generally refer to negative attitudes; obesity can also affect quality of life, and this study recorded depression among overweight and obese women, which can lead to sexual problems, shame and guilt, social isolation, and reduced work performance. Overweight and obesity are real problems among women of all age groups, are associated with the risk of disease and infection, and negatively affect quality of life. These results warrant further studies into the prevalence of obesity among women in Hillah city in central Iraq and the immune response of obese women.
Keywords: Obesity, overweight, Iraq, body mass index.
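For reference, the WHO BMI cut-offs underlying the overweight and obesity classes reported in the abstract above can be expressed in a few lines of Python; the helper function and the example values below are illustrative, not data from the study.

```python
# Hedged sketch of the standard WHO BMI classification used in the study;
# the thresholds are the usual WHO cut-offs, the helper name is ours.
def bmi_category(weight_kg: float, height_m: float) -> str:
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "class-1 obesity"
    if bmi < 40.0:
        return "class-2 obesity"
    return "class-3 obesity"

print(bmi_category(95.0, 1.60))  # BMI ~37.1 -> class-2 obesity
```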
137 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, offering almost full carbon capture from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is carried out by means of process modelling. Heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated: an increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the power required by the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: achieving a lower ∆Tmin requires a larger heat exchanger, which means a higher capital cost but better heat recovery and a lower operational cost. ∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides an insight into the performance and operating conditions of the NetPower oxy-combustion cycle as a function of its heat exchanger design.
Keywords: Carbon capture and storage, oxy-combustion, netpower cycle, oxyturbine power cycles, heat exchanger design, supercritical carbon dioxide, pinch point analysis.
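The ∆Tmin selection logic described in the abstract above can be illustrated with a short Python sketch in which capital cost falls and operational cost rises as ∆Tmin increases; the cost functions are hypothetical placeholders rather than NetPower figures, and only the shape of the trade-off matters.

```python
# Illustrative sketch of the capital/operational cost trade-off used to
# select the heat exchanger's minimum approach temperature (dTmin).
# The cost functions below are hypothetical placeholders.
import numpy as np

dTmin = np.linspace(2.0, 20.0, 200)      # candidate approach temperatures, K
capital_cost = 5.0e6 / dTmin             # smaller dTmin -> larger exchanger area
operating_cost = 1.2e5 * dTmin           # larger dTmin -> poorer heat recovery
total_cost = capital_cost + operating_cost

best = dTmin[np.argmin(total_cost)]
print(f"cost-optimal dTmin ~ {best:.1f} K")
```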
136 Landowners' Participation Behavior on the Payment for Environmental Service (PES): Evidence from Taiwan
Authors: Wan-Yu Liu
Abstract:
To respond to the Kyoto Protocol, a Payment for Environmental Services (PES) policy entitled the "Plain Landscape Afforestation Program (PLAP)" was certified by the Executive Yuan in Taiwan on 31 August 2001 and has been implemented for six years, since 1 January 2002. Although the PLAP has received many positive comments, there are still many difficulties in its implementation, such as insufficient afforestation technology, private landowners' low interest in participating, insufficient subsidies, and so on, which are potential threats to the PLAP moving forward. In this paper, selecting Ping-Tung County in Taiwan as the sample region and targeting private landowners with and without the intention to participate in the PLAP, respectively, we conduct an empirical analysis based on the Logit model to investigate the factors that determine whether private landowners join the PLAP, so as to assess the incentive effects of the PLAP on individual afforestation decisions. The possible factors that might determine a private landowner's participation in the PLAP include the landowner's characteristics, cropland characteristics, and policy factors. Among them, the policy factors include the afforestation subsidy amount (+), the duration of the afforestation subsidy (+), and the rules on adjoining and adjacent areas (+); these do not reach statistical significance, but the signs of the variables are consistent with the intuition behind the policy. As for the landowners' characteristics, each of the age (+), education level (–), and annual household income (+) variables is significant at the 10% level; as for the cropland characteristics, each of cropland area (+), cropland price (–), and the number of cropland parcels (–) is significant at the 1% level. In light of the above, the cropland characteristics are the dominant factor determining the probability of a landowner's participation in the PLAP. In the Logit model established in this paper, the probability of correctly classifying non-participants is 98%, the probability of correctly classifying participants is 71.8%, and the overall classification accuracy is 95%. In addition, the Hosmer-Lemeshow test and the omnibus test reveal that the Logit model provides good goodness of fit and good predictive power in forecasting private landowners' participation in this program. The empirical results of this paper are expected to help the implementation of afforestation programs in Taiwan.
Keywords: Forestry policy, logit, afforestation subsidy, afforestation policy.
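A minimal sketch of the kind of Logit estimation described in the abstract above is given below, using statsmodels; the file name, column names, and classification threshold are hypothetical stand-ins for the Ping-Tung survey data.

```python
# Minimal sketch of the participation Logit described above; the data file
# and variable names are hypothetical stand-ins for the landowner survey.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("plap_survey.csv")  # hypothetical survey export
X = df[["age", "education_level", "household_income",
        "cropland_area", "cropland_price", "n_parcels",
        "subsidy_amount", "subsidy_duration"]]
X = sm.add_constant(X)
y = df["participates"]               # 1 = joined the PLAP, 0 = did not

model = sm.Logit(y, X).fit()
print(model.summary())
# simple in-sample classification rate at a 0.5 probability threshold
print("correctly classified:",
      ((model.predict(X) > 0.5).astype(int) == y).mean())
```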
135 A Study to Assess the Employment Ambitions of Graduating Students from College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
Authors: J. George, M. Al Mutairi, W. Aljuryyad, A. Alhussanan, A. Alkashan, T. Aldoghiri, Z. Alamari, A. Albakr
Abstract:
Introduction: Students make plans for their careers and are keen to explore employment options in those careers. They make their employment choices based on their desires and preferences. This study aims to identify whether students of the College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences, prefer, after obtaining the appropriate education, to work as clinicians, university faculty, or full-time researchers. There are few studies in Saudi Arabia exploring university students' employment choices and preferences. This study should help employers plan the required job positions and avoid opening undesired positions in the job market. Methodology: The study included 394 male and female students from the third and fourth years across the eight programs of the College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh campus. A prospective quantitative cross-sectional study was conducted; data were collected by distributing a seven-item questionnaire and analyzed using SPSS. Results: Among the participants, 358 (90.9%) chose one of the three listed career options: 263 (66.8%) decided to work as hospital staff after their education, 75 students (19.0%) chose to work as faculty members in a university after obtaining the appropriate degree, and 20 students (5.1%) preferred to work as full-time researchers after obtaining the appropriate degree. The remaining 36 students (9.1%) had different career goals, such as obtaining a master's degree after graduating, obtaining a Bachelor of Medicine and Bachelor of Surgery degree, or working in the private sector. The most recurrent reason behind the participants' choice was "career goal", selected by 276 (70.1%) participants. Conclusion: The findings of the study showed that most students preferred to work in hospitals as clinicians, followed by working as faculty in a university; the least common choice was working as a full-time researcher.
Keywords: College of Applied Medical Sciences, employment ambitions, graduating students, King Saud bin Abdulaziz University for Health Sciences.
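The descriptive analysis reported in the abstract above amounts to frequency and percentage tables of the career-choice item; a small pandas sketch (with a hypothetical file and column name) is shown below.

```python
# Sketch of the frequency/percentage tabulation described above;
# the file and column names are hypothetical, not the study's data.
import pandas as pd

responses = pd.read_csv("career_choices.csv")      # hypothetical survey export
counts = responses["career_choice"].value_counts()
percent = counts / counts.sum() * 100
print(pd.DataFrame({"n": counts, "%": percent.round(1)}))
```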
134 On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain particle trajectories in the device and thereby to calculate the signal reported by each electrometer. Based on the output signals (currents resulting from particles striking the detecting rings and transferring their charges), we proposed a modification to the sizes of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained from various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration, and it was also the one with the maximum entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced into the simulations and the predicted size distributions were compared to the exact size distributions.
Keywords: Aerosol Nano-Particle, CFD, Electrical Mobility Spectrometer, Von Neumann entropy.
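One way to read the entropy-based benchmark described in the abstract above is sketched below: the transfer matrix of a candidate ring configuration is turned into a trace-normalized, density-matrix-like operator and its Von Neumann entropy is computed from the eigenvalues. The normalization choice and the example matrices are assumptions made for illustration, not the authors' exact construction.

```python
# Hedged sketch: Von Neumann entropy of an instrument transfer matrix.
# The construction of the density-matrix-like operator is our assumption.
import numpy as np

def von_neumann_entropy(transfer_matrix: np.ndarray) -> float:
    T = np.asarray(transfer_matrix, dtype=float)
    rho = T @ T.T                       # symmetric, positive semi-definite
    rho /= np.trace(rho)                # trace-normalized like a density matrix
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Example: compare two hypothetical 4-channel ring configurations.
T_original = np.eye(4) * 0.9 + 0.1      # channels partially overlap
T_modified = np.eye(4)                   # channels perfectly separated
print(von_neumann_entropy(T_original), von_neumann_entropy(T_modified))
```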
133 Biospeckle Supported Fruit Bruise Detection
Authors: Adilson M. Enes, Juliana A. Fracarolli, Inácio M. Dal Fabbro, Silvestre Rodrigues
Abstract:
This research work proposes a study of fruit bruise detection by means of a biospeckle method, selecting papaya (Carica papaya) as the test fruit. Papaya is recognized as a fruit of outstanding nutritional quality, with a high content of vitamin A, calcium and carbohydrates, and it enjoys high popularity all over the world in terms of consumption and acceptability. The commercialization of papaya faces particular problems associated with bruise generation during harvesting, packing and transportation. Papaya is classified as a climacteric fruit, permitting it to be harvested before maturation is complete. On the one hand, bruise generation is partially controlled at this stage because the fruit flesh still exhibits high mechanical firmness. On the other hand, mechanical loads at that maturation stage can set a future bruise that cannot yet be detected by conventional methods. Mechanical damage to the fruit skin opens an entrance for microorganisms and pathogens, which cause severe losses of quality attributes. Traditional techniques of fruit quality inspection include total soluble solids determination, mechanical firmness tests and visual inspection, which would hardly meet the requirements of a fully automated process. The pertinent literature, however, reveals a method named biospeckle, which is based on laser reflectance and the interference phenomenon. The laser biospeckle, or dynamic speckle, is quantified by means of the Moment of Inertia, named after its mechanical counterpart due to the similarity between the defining formulae. Biospeckle techniques are able to quantify the biological activity of living tissues and have been applied to seed viability analysis, vegetable senescence and similar topics. Since biospeckle techniques can monitor tissue physiology, they could also detect changes in the fruit caused by mechanical damage. The proposed technique is non-invasive and able to generate numerical results suitable for automation. The experimental tests associated with this research work included the selection of papaya fruit at different maturation stages, which were submitted to artificial mechanical bruising tests. The damage was visually compared with the frequency maps yielded by the biospeckle technique, and the results were in close agreement.
Keywords: Biospeckle, papaya, mechanical damages, vegetable bruising.
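A common way to compute the biospeckle Moment of Inertia, consistent with the description in the abstract above, is from the gray-level co-occurrence matrix of a time history of speckle patterns (THSP). The sketch below uses a synthetic THSP and our own function name, so it illustrates the calculation rather than reproducing the authors' implementation.

```python
# Hedged sketch of the biospeckle Moment of Inertia: a time history of
# speckle intensities (THSP) is turned into a co-occurrence matrix and the
# Moment of Inertia is the (i - j)^2-weighted sum of its row-normalized
# entries. The THSP here is synthetic random data.
import numpy as np

def inertia_moment(thsp: np.ndarray, levels: int = 256) -> float:
    com = np.zeros((levels, levels))
    for row in thsp:                              # each pixel's intensity history
        for a, b in zip(row[:-1], row[1:]):
            com[a, b] += 1
    row_sums = com.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1
    com_norm = com / row_sums                     # row-wise normalization
    i, j = np.indices(com.shape)
    return float(np.sum(com_norm * (i - j) ** 2))

thsp = np.random.randint(0, 256, size=(64, 100))  # synthetic 64-pixel THSP
print(inertia_moment(thsp))
```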
132 Challenges and Professional Perspectives for Pedagogy Undergraduates with Specific Learning Disability: A Greek Case Study
Authors: Tatiani D. Mousoura
Abstract:
Specific learning disability (SLD) in higher education has so far been only partially explored in Greece. Moreover, opinions on the professional perspectives of university students with SLD are scarcely encountered in Greek research. The present research examines perceptions of the hidden character of SLD, along with university policy towards it and the professional perspectives that result from this policy. The study uses the case of a Greek tertiary Pedagogical Education Department (Early Childhood Education). Via mixed methods, data have been collected from different groups of people in the Pedagogical Department: students with and without SLD, academic staff, and administration staff, which offers the opportunity for triangulation of the findings. The qualitative methods include ten interviews with students with SLD, 15 interviews with academic staff, and 60 hours of observation of the students with SLD. The quantitative methods include 165 questionnaires completed by third- and fourth-year students and five questionnaires completed by the administration staff. Thematic analysis of the interview data and descriptive statistics on the questionnaire data were applied in processing the results. The use of medical terms to define and understand SLD was common in the student cohort, regardless of whether they had an SLD diagnosis. However, this medical-model approach is far more dominant in the group of students without SLD, the majority of whom hold misconceptions at a definitional level. The academic staff group seems to lean towards a social approach to SLD; according to them, diagnoses may lead to social exclusion. The Pedagogical Department generally endorses the principles of inclusion and complies with the provision of oral exams for students with SLD. Nevertheless, in practice there seems to be a lack of regular academic support for these students, and when such support does exist, it is only through individual initiatives. With regard to their prospective profession, students with SLD can utilize their personal experience as well as their empathy; these appear to be unique assets, in comparison with other educators, when it comes to teaching students in the future. In the Department of Pedagogy, provision for SLD remains sporadic; however, the vision of an inclusive department does exist. Based on their studies and their experience, pedagogy students with SLD claim that they have an experiential, internalized advantage for their future career as educators.
Keywords: Specific learning disability, dyslexia, pedagogy department, inclusion, professional role of SLDed educators, higher education, university policy.
131 The Threats of Deforestation, Forest Fire, and CO2 Emission toward Giam Siak Kecil Bukit Batu Biosphere Reserve in Riau, Indonesia
Authors: S. B. Rushayati, R. Meilani, R. Hermawan
Abstract:
A biosphere reserve is developed to create harmony among economic development, community development, and environmental protection through partnership between humans and nature. The Giam Siak Kecil Bukit Batu Biosphere Reserve (GSKBB BR) in Riau Province, Indonesia, is unique in that peat soil dominates the area, it contains many springs essential for human livelihood, and it has high biodiversity. Furthermore, it is the only biosphere reserve covering privately managed production forest areas. In this research, we aimed at analyzing the threat of deforestation and forest fire, and the potential CO2 emission, at the GSKBB BR. We used Landsat images, ArcView software, and ERDAS IMAGINE 8.5 software to conduct a spatial analysis of land cover and land use changes, calculated CO2 emission based on the emission potential of each land cover and land use type, and applied simple linear regression to demonstrate the relation between CO2 emission potential and deforestation. The results showed that, besides in the buffer zone and transition area, deforestation also occurred in the core area. Spatial analysis of land cover and land use changes for the years 2010, 2012, and 2014 revealed changes from natural forest and industrial plantation forest to other land use types, such as garden, mixed garden, settlement, paddy fields, burnt areas, and dry agricultural land. Deforestation in the core area, particularly in the Giam Siak Kecil Wildlife Reserve and Bukit Batu Wildlife Reserve, took the form of changes from natural forest into garden, mixed garden, shrubs, swamp shrubs, dry agricultural land, open area, and burnt area. In the buffer zone and transition area, changes also occurred, with swamp forest converted into garden, mixed garden, open area, shrubs, swamp shrubs, and dry agricultural land. The spatial analysis indicated that the deforestation rate in the biosphere reserve from 2010 to 2014 reached 16 119 ha/year. Besides deforestation, the biosphere reserve was also threatened by forest fire. The fires of 2014 burned 101 723 ha of the area, of which 9 355 ha were in the core area and 92 368 ha in the buffer zone and transition area. Deforestation and forest fire increased CO2 emission by as much as 24 903 855 ton/year.
Keywords: Biosphere reserve, CO2 emission, deforestation, forest fire.
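The emission bookkeeping and the regression step described in the abstract above can be sketched as follows; the converted areas, emission factors, and regression data are hypothetical placeholders, not the GSKBB BR figures.

```python
# Illustrative sketch of the CO2 emission estimate per land cover change
# and the deforestation-vs-emission regression; all numbers are placeholders.
import numpy as np

# area converted (ha) and an assumed emission factor (t CO2 per ha) per change type
changes = {"natural forest -> dry agriculture": (5000, 450),
           "swamp forest -> mixed garden": (3200, 600),
           "plantation forest -> burnt area": (2100, 300)}
total_emission = sum(area * factor for area, factor in changes.values())
print(f"estimated CO2 emission: {total_emission:,.0f} t/year")

# simple linear regression of CO2 emission potential on deforested area
deforested_ha = np.array([4000, 8000, 12000, 16000, 20000])
emission_t = np.array([1.8e6, 3.9e6, 5.7e6, 8.1e6, 9.9e6])
slope, intercept = np.polyfit(deforested_ha, emission_t, 1)
print(f"emission ~ {slope:.0f} t per deforested ha (+ {intercept:.0f})")
```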