Search results for: perfect trees

213 Evidence of a Negativity Bias in the Keywords of Scientific Papers

Authors: Kseniia Zviagintseva, Brett Buttliere

Abstract:

Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things, which cause them dissonance and a negative affective state of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords from papers published by PLoS in 2014 and show with several sentiment analyzers that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). Counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. In order to find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only once and the maximum number of uses being 18,589. The keywords identified as negative were utilized 37.39 times on average, compared with 14.72 times for positive keywords and 19.29 times for neutral keywords. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are utilized only once, artificially increasing the variance and driving the effect size down. To examine the pattern more closely, we looked at the top 25 most utilized keywords that carry a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest being identified as negative. ‘Diseases’ is the most studied keyword with 8,790 uses, with ‘cancer’ and ‘infectious’ being the second and fourth most utilized sentiment-laden keywords. The sentiment analysis is not perfect, though, as the words ‘diseases’ and ‘disease’ are split, taking the 1st and 3rd positions. Combined, they remain the most common sentiment-laden keyword, being utilized 13,236 times. Beyond splitting words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word is most commonly used as part of ‘health care’, ‘critical care’ or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
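
A minimal sketch of the counting-and-comparison step described above, assuming a generic sentiment lexicon split into positive and negative word sets (the study used the Hu and Liu (2004) lexicon and SentiStrength); the keyword lists and lexicon entries below are illustrative placeholders, not the actual data.

```python
from collections import Counter

# Hypothetical inputs: one keyword list per article (the study used 927,406
# keywords from 32,870 PLOS ONE articles) and a sentiment lexicon split into
# positive and negative word sets, e.g. the Hu & Liu (2004) opinion lexicon.
article_keywords = [["cancer", "care", "regression"], ["diseases", "dynamics"]]
positive_words = {"care", "dynamics"}
negative_words = {"cancer", "diseases", "infectious", "regression"}

# Count how often each unique keyword is used across all articles.
usage = Counter(kw for kws in article_keywords for kw in kws)

def label(kw):
    if kw in positive_words:
        return "positive"
    if kw in negative_words:
        return "negative"
    return "neutral"

# Average usage per sentiment class, mirroring the comparison in the abstract
# (negative vs. positive vs. neutral mean keyword usage).
by_class = {}
for kw, n in usage.items():
    by_class.setdefault(label(kw), []).append(n)
for cls, counts in by_class.items():
    print(cls, sum(counts) / len(counts))
```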

Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics

Procedia PDF Downloads 165
212 Going beyond Elementary Algebraic Identities: The Expectation of a Gifted Child, an Indian Scenario

Authors: S. R. Santhanam

Abstract:

A gifted child is one who gives evidence of creativity, good memory, and rapid learning. In mathematics, a teacher often comes across gifted children, and they exhibit the following characteristics: unusual alertness, enjoyment of problem solving, boredom with repetition, being self-taught, going beyond what the teacher taught, asking probing questions, connecting unconnected concepts, vivid imagination, readiness for research work, and perseverance on a topic. There are two main areas of research carried out on them: 1) identifying gifted children, and 2) interacting with and channelizing them. A lack of appropriate recognition will leave the gifted child demotivated. One of the main findings is that if proper attention and nourishment are not given, a gifted child becomes depressed, underachieves, fails to reach their full potential, and sometimes develops a negative attitude towards school and study. After identifying them, a mathematics teacher has to develop them into full-fledged achievers. The responsibility of the teacher is enormous. The teacher has to be resourceful and patient. But in interacting with them one finds a lot of surprises and awesomeness. The elementary algebraic identities like (a+b)(a-b)=a²-b² and the expansions of (a+b)² and (a-b)², among others, are taught to students of age group 13-15 in India. An average child will be satisfied with a single proof and an immediate application of these identities. But a gifted child expects more from the teacher and, at one stage, after a little training, will surpass the teacher as well. In this short paper, the author shares his experience of teaching algebraic identities to gifted children. The following problem was given to a set of 10 gifted children of the specified age group: if a natural number ‘n’ can be expressed as the sum of two squares, can 2n also be expressed as the sum of two squares? An investigation was then made into which multiples of n satisfy the criterion. The attempts of the gifted children were consolidated and a conclusion was drawn. A second problem was given to them: can two natural numbers be found such that the difference of their squares is 3? After a successful solution, more situations were analysed. As a third question, finding the sign of an algebraic expression in three variables was analysed. As an example: if a, b, c are real and unequal, what will be the sign of a²+4b²+9c²-4ab-12bc-6ca? Beyond expressing it as a perfect square, the question of what other methods can be employed to prove an algebraic expression positive, negative or non-negative was also analysed. Expressions like 4x²+2y²+13z²-2xy-4yz-6zx were given, and the children were asked to find the sign of the expression for all real values of x, y and z. In all investigations, only basic algebraic identities were used. As a further probe, a divisibility problem was initiated: when a, b, c are natural numbers such that a+b+c is at least 6, and a+b+c is divisible by 6, will 6 divide a³+b³+c³? The gifted children solved it in two different ways.
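
Two of the problems above admit short arguments that use only the elementary identities mentioned; a sketch of the reasoning (the children's own two solutions to the divisibility question are not reproduced in the abstract):

```latex
% If n = a^2 + b^2, then 2n is again a sum of two squares:
\[ 2n = 2(a^2 + b^2) = (a+b)^2 + (a-b)^2. \]

% If 6 divides a+b+c, then 6 divides a^3+b^3+c^3, because
\[ a^3 + b^3 + c^3 - (a+b+c) = (a-1)a(a+1) + (b-1)b(b+1) + (c-1)c(c+1), \]
% and each product of three consecutive integers is divisible by both 2 and 3.
```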

Keywords: algebraic identities, gifted children, Indian scenario, research

Procedia PDF Downloads 162
211 Woody Carbon Stock Potentials and Factors Affecting Their Storage in Munessa Forest, Southern Ethiopia

Authors: Mojo Mengistu Gelasso

Abstract:

The tropical forest is considered the most important forest ecosystem for mitigating climate change by sequestering a high amount of carbon. The potential carbon stock of the forest can be influenced by many factors. Therefore, studying these factors is crucial for understanding the determinants that affect the potential for woody carbon storage in the forest. This study was conducted to evaluate the potential for woody carbon stock and how it varies based on plant community types, as well as along altitudinal, slope, and aspect gradients in the Munessa dry Afromontane forest. Vegetation data were collected using systematic sampling. Five line transects were established along the altitudinal gradient, with an interval of 100 m between two consecutive transect lines. On each transect, 10 quadrats (20 x 20 m), separated by 200 m, were established. The woody carbon was estimated using an appropriate allometric equation formulated for tropical forests. The data were analyzed using one-way ANOVA in R software. The results showed that the total woody carbon stock of the Munessa forest was 210.43 ton/ha. The analysis of variance revealed that woody carbon density varied significantly based on environmental factors, while community types had no significant effect. The highest mean carbon stock was found at middle altitudes (2367-2533 m.a.s.l), lower slopes (0-13%), and west-facing aspects. The Podocarpus falcatus-Croton macrostachyus community type also contributed a higher woody carbon stock, as larger tree size classes and older trees dominated it. Overall, the potential for woody carbon sequestration in this study was strongly associated with environmental variables. Additionally, the uneven distribution of species with larger diameter at breast height (DBH) in the study area might be linked to anthropogenic factors, as the current forest growth indicates characteristics of a secondary forest. Therefore, our study suggests that the development and implementation of a sustainable forest management plan is necessary to increase the carbon sequestration potential of this forest and mitigate climate change.
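
A minimal sketch of the estimation pipeline described above; the abstract does not state which allometric model or carbon fraction was used, so the Chave et al. (2014) pantropical equation and the commonly used 0.47 carbon fraction are assumed here, and the tree and plot values are hypothetical. The study itself ran the ANOVA in R; SciPy stands in below.

```python
from scipy import stats

# Hypothetical tree measurements: wood density rho (g/cm^3), DBH (cm), height (m).
# The Chave et al. (2014) pantropical allometric model is assumed here.
def agb_kg(rho, dbh_cm, height_m):
    return 0.0673 * (rho * dbh_cm**2 * height_m) ** 0.976  # above-ground biomass, kg

CARBON_FRACTION = 0.47  # assumed IPCC default carbon fraction of dry biomass

def plot_carbon_t_per_ha(trees):
    """Carbon stock (t/ha) for one 20 x 20 m plot (0.04 ha)."""
    agb = sum(agb_kg(*t) for t in trees)            # kg per plot
    return agb * CARBON_FRACTION / 1000.0 / 0.04    # -> tonnes per hectare

trees_in_plot = [(0.60, 35.0, 18.0), (0.55, 22.0, 14.0)]  # (rho, DBH, height)
print("plot carbon:", round(plot_carbon_t_per_ha(trees_in_plot), 1), "t/ha")

# One-way ANOVA of plot-level carbon across altitude classes, as in the abstract
# (plot values below are made up purely to show the test call).
lower  = [180.2, 195.4, 201.1]
middle = [230.5, 244.8, 238.1]
upper  = [190.7, 205.3, 198.9]
f_val, p_val = stats.f_oneway(lower, middle, upper)
print(f"F = {f_val:.2f}, p = {p_val:.3f}")
```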

Keywords: munessa forest, woody carbon stock, environmental factors, climate mitigation

Procedia PDF Downloads 56
209 A Bottom-Up Approach for the Synthesis of Highly Ordered Fullerene-Intercalated Graphene Hybrids

Authors: A. Kouloumpis, P. Zygouri, G. Potsi, K. Spyrou, D. Gournis

Abstract:

Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and behavior suitable for applications, among others, in gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing and biology. Towards this aim, here we describe a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer technique, for the production of fullerene-intercalated graphene hybrid materials. This new method uses graphene nanosheets as a template for the grafting of various fullerene C60 molecules (pure C60, bromo-fullerenes C60Br24, and fullerols C60(OH)24) in a bi-dimensional array, and allows for perfect layer-by-layer growth with control at the molecular level. Our film preparation approach involves a bottom-up layer-by-layer process that includes the formation of a hybrid organo-graphene Langmuir film hosting fullerene molecules within its interlayer spacing. A dilute aqueous solution of chemically oxidized graphene (GO) was used as the subphase in the Langmuir-Blodgett deposition system, while an appropriate amino surfactant (that binds covalently with the GO) was applied for the formation of hybridized organo-GO. After the horizontal lift of a hydrophobic substrate, a surface modification of the GO platelets was performed by bringing the surface of the transferred Langmuir film into contact with a second amino surfactant solution (capable of interacting strongly with the fullerene derivatives). In the final step, the hybrid organo-graphene film was lowered into the solution of the appropriate fullerene derivative. Multilayer films were constructed by repeating this procedure. Hybrid fullerene-based thin films deposited on various hydrophobic substrates were characterized by X-ray diffraction (XRD) and X-ray reflectivity (XRR), FTIR and Raman spectroscopies, atomic force microscopy, and optical measurements. Acknowledgments: This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund (no. 377285).

Keywords: hybrids, graphene oxide, fullerenes, langmuir-blodgett, intercalated structures

Procedia PDF Downloads 307
208 Radar Fault Diagnosis Strategy Based on Deep Learning

Authors: Bin Feng, Zhulin Zong

Abstract:

Radar systems are critical to modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require a lot of time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNN) to extract features from radar signals and to classify the faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults. The results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
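
A minimal sketch of a 1D convolutional fault classifier of the kind described above, written in PyTorch; the paper does not specify the framework, network depth, signal length, or number of fault classes, so those are all assumptions, and the random tensors stand in for measured radar signals.

```python
import torch
import torch.nn as nn

class RadarFaultCNN(nn.Module):
    """Toy 1D CNN: extracts features from a raw radar signal and classifies the fault."""
    def __init__(self, signal_len=1024, n_classes=5):  # assumed sizes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (signal_len // 16), n_classes)

    def forward(self, x):               # x: (batch, 1, signal_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# One training step on random stand-in data (real training would use measured
# radar signals labelled with fault types, as in the paper).
model = RadarFaultCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 1024)
y = torch.randint(0, 5, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```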

Keywords: radar system, fault diagnosis, deep learning, radar fault

Procedia PDF Downloads 62
207 Significance of Treated Wastewater in Facing the Consequences of Climate Change in Arid Regions

Authors: Jamal A. Radaideh, A. J. Radaideh

Abstract:

As a problem threatening the planet and its ecosystems, climate change has long been considered a disturbing topic impacting water resources in Jordan. Jordan is expected, for instance, to be highly vulnerable to the consequences of climate change given the unbalanced relationship between water resource availability and existing demands. Thus, action on adaptation to climate impacts is urgently needed to cope with the negative consequences of climate change. Adaptation to global change must include prudent management of treated wastewater as a renewable resource, especially in regions lacking groundwater or where groundwater is already overexploited. This paper highlights the expected negative effects of climate change on the already scarce water sources and aims to motivate researchers and decision makers to take precautionary measures and find alternatives to keep the level of water supplies at the limits required for different consumption sectors in terms of quantity and quality. The paper focuses on assessing the potential for wastewater recycling as an adaptation measure to cope with water scarcity in Jordan and on considering wastewater an integral part of the national water budget to solve environmental problems. The paper also identifies a research topic designed to help the nation progress in making the most appropriate use of the resource, namely for agricultural irrigation. Wastewater is a promising alternative to fill the shortage in water resources, especially due to climate change, and to preserve valuable fresh water, giving priority to securing drinking water for the population from these resources while at the same time raising the efficiency of use of the available resources. Jordan has more than 36 wastewater treatment plants distributed throughout the country, producing about 386,000 m³/day of reclaimed water. According to the reports of water quality control programs, more than 85 percent of this water is of a quality fully suitable for the irrigation of field crops and forest trees according to the requirements of Jordanian Standard No. 893/2006.

Keywords: climate change effects on water resources, adaptation on climate change, treated wastewater recycling, arid and semi-arid regions, Jordan

Procedia PDF Downloads 98
206 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans

Authors: O. Ekrami, P. Claes, S. Van Dongen

Abstract:

Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or ‘genetic quality’, based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, and the structure of interest is usually represented using a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we have used high-resolution 3D scans of human faces and have developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of our mapping step. The protocol’s accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The results of the validation of our approach show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional captured information on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. This algorithm can be of particularly great benefit in studies with a high number of subjects due to its automated and time-efficient nature. Additionally, taking a spatially dense approach provides us with information about the locality of FA, which is impossible to obtain using conventional methods. It also enables us to analyze the asymmetry of morphological structures in a multivariate manner; this can be achieved by using methods such as Principal Component Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. This method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate and multivariate framework.
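
A minimal sketch of the per-vertex asymmetry measurement, assuming the spatially dense correspondence has already been established (each face represented by the same mask vertices with a known left-right pairing); the study's non-rigid ICP mapping step is not reproduced here. Reflection followed by a rigid Procrustes alignment yields an asymmetry value per vertex.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def asymmetry_map(vertices, pairing):
    """Per-vertex asymmetry of an (n, 3) vertex array.

    `pairing` maps each vertex index to its bilateral counterpart
    (midline vertices map to themselves).
    """
    # Mirror across the x = 0 plane and relabel so left/right points swap.
    mirrored = vertices.copy()
    mirrored[:, 0] *= -1.0
    mirrored = mirrored[pairing]

    # Rigidly align the mirrored copy to the original (rotation via Procrustes
    # after removing the centroids), then measure residual vertex distances.
    a = vertices - vertices.mean(axis=0)
    b = mirrored - mirrored.mean(axis=0)
    rotation, _ = orthogonal_procrustes(b, a)
    residual = a - b @ rotation
    return np.linalg.norm(residual, axis=1)   # one asymmetry value per vertex

# Hypothetical 5-vertex "mask": indices 0/1 and 2/3 are bilateral pairs, 4 is midline.
verts = np.array([[ 1.0, 0, 0], [-1.1, 0, 0],
                  [ 0.5, 1, 0], [-0.5, 1.2, 0],
                  [ 0.0, 2, 0]])
pairs = np.array([1, 0, 3, 2, 4])
print(asymmetry_map(verts, pairs))
```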

Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing

Procedia PDF Downloads 123
205 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing steadily, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, in order to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011, and were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of a patient with a specific disease. Therefore, health interventions or changes in lifestyle can be planned on the basis of these models to improve the health conditions of the individuals at risk.
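
A minimal sketch of the model comparison described above; the study used SPSS and Clementine 12, so scikit-learn stands in here, the risk-factor table is synthetic, and sensitivity and specificity are read off the confusion matrix.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical binary risk-factor table (hypertension, DLP, smoking, diabetes,
# A+ blood group) and MI outcome; the real study used 350 patient records.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(350, 5))
y = (X.sum(axis=1) + rng.integers(0, 2, 350) > 3).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("neural network", MLPClassifier(hidden_layer_sizes=(10,),
                                                     max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```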

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 407
204 Evolutionary Prediction of the Viral RNA-Dependent RNA Polymerase of Chandipura vesiculovirus and Related Viral Species

Authors: Maneesh Kumar, Roshan Kamal Topno, Manas Ranjan Dikhit, Vahab Ali, Ganesh Chandra Sahoo, Bhawana, Major Madhukar, Rishikesh Kumar, Krishna Pandey, Pradeep Das

Abstract:

Chandipura vesiculovirus is an emerging (-) ssRNA viral entity belonging to the genus Vesiculovirus of the family Rhabdoviridae, associated with fatal encephalitis in tropical regions. The multifunctional viral RNA-dependent RNA polymerase (vRdRp), which carries conserved amino acid residues across these pathogens, is assigned to synthesize the distinct viral polypeptides. The lack of proofreading ability of the vRdRp produces many mutated variants. Here, we have performed an evolutionary analysis of 20 viral protein sequences of the vRdRp of different strains of Chandipura vesiculovirus, along with other viral species from the genus Vesiculovirus, inferred in MEGA6.06 employing the Neighbour-Joining method. The p-distance algorithmic method was used to calculate the optimum tree, which showed a sum of branch lengths of about 1.436. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. No mutation was observed in the Indian strains of Chandipura vesiculovirus. In the vRdRp, 1230(His) and 1231(Arg) actively participate in catalysis and are found to be conserved in the different strains of Chandipura vesiculovirus. Both amino acid residues are also conserved in the other viral species of the genus Vesiculovirus. Many isolates exhibited the maximum number of mutations in the catalytic regions of Chandipura vesiculovirus strains at positions 26(Ser→Ala), 47(Ser→Ala), 90(Ser→Tyr), 172(Gly→Ile, Val), 172(Ser→Tyr), 387(Asn→Ser), 1301(Thr→Ala), 1330(Ala→Glu), 2015(Phe→Ser) and 2065(Thr→Val), which make them variants under the different tropical conditions from which they evolved. The result clarifies the concept of RNA evolution and supports developing the vRdRp as an evolutionary marker, although only a limited number of vRdRp protein sequences are available for Chandipura vesiculovirus and the other species. This might provide possibilities to identify the virulence level during viral multiplication in a host.
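
A minimal sketch of the distance-based reconstruction described above (p-distances followed by neighbour-joining), using Biopython in place of MEGA6.06; the three short dummy sequences and identifiers are placeholders for the 20 aligned vRdRp sequences actually analysed, and the 1000-replicate bootstrap is omitted.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Placeholder aligned protein fragments; the study used 20 aligned vRdRp
# sequences of Chandipura vesiculovirus strains and related vesiculoviruses.
aln = MultipleSeqAlignment([
    SeqRecord(Seq("MHRSTKLVAA"), id="CHPV_strain_1"),
    SeqRecord(Seq("MHRSTKLVSA"), id="CHPV_strain_2"),
    SeqRecord(Seq("MHRATKIVSA"), id="VSV_Indiana"),
])

# 'identity' distances are equivalent to p-distances (fraction of differing sites).
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, method="nj")  # neighbour-joining
tree = constructor.build_tree(aln)

print("sum of branch lengths:", tree.total_branch_length())
Phylo.draw_ascii(tree)
```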

Keywords: Chandipura, (-) ssRNA, viral RNA-dependent RNA polymerase, neighbour-joining method, p-distance algorithmic, evolutionary marker

Procedia PDF Downloads 177
203 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 84
202 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a given set of requirements. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the “objective function” that is required to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. It becomes more complex when we have more than one objective for our design. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, a designer should make a decision to choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used with multi-objective optimization problems due to their robustness, simplicity, and suitability to be coupled and parallelized. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and then the solution progresses towards the optimal point by using operators such as selection, combination, cross-over and/or mutation. These operators are applied to the old solutions (“parents”) so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbo-machinery, and automotives. Coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbo-machinery design.
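
A minimal sketch of the evolutionary loop described above on a classic two-objective test problem (Schaffer's f1 = x², f2 = (x - 2)²), hand-rolled rather than tied to any particular MOEA library; tournament selection and Gaussian mutation stand in for the named operators, cross-over is omitted for brevity, and all numeric settings are arbitrary.

```python
import random

def objectives(x):
    return x * x, (x - 2.0) ** 2          # two objectives to minimize

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

random.seed(0)
pop = [random.uniform(-5.0, 5.0) for _ in range(40)]   # random initial "parents"

for generation in range(100):
    scores = [objectives(x) for x in pop]
    children = []
    for _ in range(len(pop)):
        # Binary tournament selection based on Pareto dominance.
        i, j = random.randrange(len(pop)), random.randrange(len(pop))
        parent = pop[i] if dominates(scores[i], scores[j]) else pop[j]
        # Gaussian mutation plays the role of the variation operator.
        children.append(parent + random.gauss(0.0, 0.2))
    pop = children

# The non-dominated members of the final population approximate the
# Pareto Optimal Frontier (for this problem, x in [0, 2]).
final = [(x, objectives(x)) for x in pop]
front = [x for x, fx in final
         if not any(dominates(gy, fx) for _, gy in final)]
print(sorted(round(x, 2) for x in front))
```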

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 237
201 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short pulses of laser beams onto the material in order to create a plasma by excitation of the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, medicine samples’ spectrum profiles were obtained via LIBS. The medicine datasets include two different concentrations for both paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up based on two different train-test splits, namely 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms, consisting of Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms, have been applied to LIBS data of paracetamol-based pharmaceutical samples and their different concentrations, on preprocessed and raw datasets, in order to observe the effect of preprocessing.
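
A minimal sketch of the preprocessing and classification chain described above (quartile-based outlier handling, smoothing, normalization, PCA, and cross-validated classification); the spectra are random stand-ins for the LIBS measurements, Savitzky-Golay is assumed as the smoother, and only one of the listed classifiers (a linear SVM) is shown.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Random stand-in spectra: rows = samples, columns = intensities per wavelength.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(40, 500)) + np.linspace(0, 4, 40)[:, None]
labels = np.repeat([0, 1, 2, 3], 10)     # e.g. two medicines x two concentrations

# 1) Clip outliers to the interquartile fences (one variant of quartile-based filling).
q1, q3 = np.percentile(spectra, [25, 75], axis=0)
iqr = q3 - q1
spectra = np.clip(spectra, q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# 2) Smooth each spectrum to suppress noise (Savitzky-Golay assumed here).
spectra = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)

# 3) Normalize intensities, then 4) PCA + classifier, scored with cross-validation.
spectra = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
print(cross_val_score(model, spectra, labels, cv=5).mean())
```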

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 71
200 A Systematic Approach to Mitigate the Impact of Increased Temperature and Air Pollution in Urban Settings

Authors: Samain Sabrin, Joshua Pratt, Joshua Bryk, Maryam Karimi

Abstract:

Globally, extreme heat events have led to a surge in the number of heat-related mortalities. These incidents are further exacerbated in high-density population centers due to the Urban Heat Island (UHI) effect. A variety of anthropogenic activities, such as unsupervised land surface modifications, expansion of impervious areas, and lack of use of vegetation, are all contributors to an increase in the amount of heat flux trapped by an urban canopy, which intensifies the UHI effect. This project aims to propose a systematic approach to measure the impact of air quality and increased temperature based on urban morphology in selected metropolitan cities. This project will measure the impact of the built environment for urban and regional planning using human-biometeorological evaluations (mean radiant temperature, Tmrt). We utilized the RayMan model (capable of calculating the short- and long-wave radiation fluxes affecting the human body) to estimate Tmrt in an urban environment, incorporating the location and height of buildings and trees, as a supplemental tool in urban planning and street design. Our current results suggest a strong correlation between building height and increased surface temperature in megacities. This model will help to: 1. quantify the impacts of the built environment and surface properties on the surrounding temperature; 2. identify priority urban neighborhoods by analyzing Tmrt and air quality data at pedestrian level; 3. characterize the need for urban green infrastructure or better urban planning, maximizing the cooling benefit from existing Urban Green Infrastructure (UGI); and 4. develop a hierarchy of streets for new UGI integration and propose new UGI based on site characteristics and cooling potential.
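
A minimal sketch of the quantity at the centre of the approach, the mean radiant temperature, using the standard six-directional formulation from measured short- and long-wave fluxes (the RayMan model derives these fluxes from the urban geometry itself); the weighting factors, absorption coefficient, emissivity, and flux values below are commonly used assumptions, not values from the study.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
ALPHA_K = 0.7        # assumed short-wave absorption coefficient of the human body
EPSILON_P = 0.97     # assumed long-wave emissivity of the human body

# Angular weighting factors for a standing person: four lateral directions
# (E, W, N, S) and the two vertical directions (up, down).
WEIGHTS = [0.22, 0.22, 0.22, 0.22, 0.06, 0.06]

def mean_radiant_temperature(shortwave_fluxes, longwave_fluxes):
    """Tmrt in deg C from six directional short-/long-wave flux pairs (W/m^2)."""
    s_str = sum(w * (ALPHA_K * k + EPSILON_P * l)
                for w, k, l in zip(WEIGHTS, shortwave_fluxes, longwave_fluxes))
    return (s_str / (EPSILON_P * SIGMA)) ** 0.25 - 273.15

# Hypothetical midday fluxes for one street-canyon location.
K = [120, 90, 60, 200, 650, 110]    # short-wave, W/m^2
L = [420, 415, 410, 430, 380, 460]  # long-wave, W/m^2
print(round(mean_radiant_temperature(K, L), 1), "deg C")
```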

Keywords: air quality, heat mitigation, human-biometeorological indices, increased temperature, mean radiant temperature, radiation flux, sustainable development, thermal comfort, urban canopy, urban planning

Procedia PDF Downloads 126
199 Algorithm for Improved Tree Counting and Detection through Adaptive Machine Learning Approach with the Integration of Watershed Transformation and Local Maxima Analysis

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

The Philippines has long been considered a valuable producer of high value crops globally. The country’s employment and economy have been dependent on agriculture, thus increasing its demand for efficient agricultural mechanisms. Remote sensing and geographic information technology have proven to effectively provide applications for precision agriculture through image-processing techniques, considering the development of aerial scanning technology in the country. Accurate information concerning the spatial correlation within the field is very important for precision farming, especially of high value crops. The availability of height information and high spatial resolution images obtained from aerial scanning, together with the development of new image analysis methods, offers relevant influence to precision agriculture techniques and applications. In this study, an algorithm was developed and implemented to detect and count high value crops simultaneously through adaptive scaling of a support vector machine (SVM) algorithm subjected to an object-oriented approach combining watershed transformation and a local maxima filter to enhance tree counting and detection. The methodology is compared to cutting-edge template matching algorithm procedures to demonstrate its effectiveness on a demanding tree counting, recognition and delineation problem. Since common data and image processing techniques are utilized, it can be easily implemented in production processes to cover large agricultural areas. The algorithm was tested on high value crops like palm, mango and coconut located in Misamis Oriental, Philippines, showing a good performance, in particular for young adult and adult trees, significantly above 90%. The resulting outputs support inventories and database updating, allowing for the reduction of field work and manual interpretation tasks.
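
A minimal sketch of the watershed-plus-local-maxima stage described above, applied to a canopy height model with SciPy and scikit-image; the adaptive SVM stage that classifies the resulting segments is omitted, and the synthetic raster stands in for the LiDAR-derived data.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic canopy height model (CHM): two Gaussian "crowns" on flat ground.
yy, xx = np.mgrid[0:100, 0:100]
chm = (8 * np.exp(-((xx - 30) ** 2 + (yy - 40) ** 2) / 80.0)
       + 6 * np.exp(-((xx - 70) ** 2 + (yy - 60) ** 2) / 60.0))

canopy_mask = chm > 2.0                      # ignore ground / low vegetation

# Local maxima are treated as individual tree tops (seed markers).
peaks = peak_local_max(chm, min_distance=10, labels=canopy_mask.astype(int))
markers = np.zeros_like(chm, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Watershed on the inverted CHM delineates one crown segment per tree top.
crowns = watershed(-chm, markers, mask=canopy_mask)

tree_count = len(peaks)
crown_areas = ndimage.sum(np.ones_like(crowns), crowns, index=range(1, tree_count + 1))
print("trees detected:", tree_count, "crown areas (pixels):", crown_areas)
```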

Keywords: high value crop, LiDAR, OBIA, precision agriculture

Procedia PDF Downloads 383
198 Application of Zeolite Nanoparticles in Biomedical Optics

Authors: Vladimir Hovhannisyan, Chen Yuan Dong

Abstract:

Recently, nanoparticles (NPs) have been introduced in biomedicine as effective agents for cancer-targeted drug delivery and noninvasive tissue imaging. The most important requirements for these agents are non-toxicity, biocompatibility and stability. In view of these criteria, zeolite (ZL) nanoparticles (NPs) may be considered perfect candidates for biomedical applications. ZLs are crystalline aluminosilicates consisting of oxygen-sharing SiO4 and AlO4 tetrahedral groups united by common vertices in a three-dimensional framework and containing pores with diameters from 0.3 to 1.2 nm. Generally, the behavior and physical properties of ZLs are studied by SEM, X-ray spectroscopy, and AFM, whereas optical spectroscopic and microscopic approaches are not effective enough because of strong scattering in common ZL bulk materials and powders. The light scattering can be reduced by using ZL NPs. ZL NPs have a large external surface area, high dispersibility in both aqueous and organic solutions, high photo- and thermal stability, and an exceptional ability to adsorb various molecules and atoms in their nanopores. In this report, using multiphoton microscopy and nonlinear spectroscopy, we investigate the nonlinear optical properties of the clinoptilolite type of ZL micro- and nanoparticles with average diameters of 2200 nm and 240 nm, respectively. Multiphoton imaging is achieved using a laser scanning microscope system (LSM 510 META, Zeiss, Germany) coupled to a femtosecond titanium:sapphire laser (repetition rate 80 MHz, pulse duration 120 fs, radiation wavelength 720-820 nm) (Tsunami, Spectra-Physics, CA). Two Zeiss Plan-Neofluar objectives (air immersion 20×/NA 0.5 and water immersion 40×/NA 1.2) are used for imaging. For the detection of the nonlinear response, we use two detection channels with 380-400 nm and 435-700 nm spectral bandwidths. We demonstrate that ZL micro- and nanoparticles can produce a nonlinear optical response under near-infrared femtosecond laser excitation. The interaction of hypericin, chlorin e6 and other dyes with ZL NPs and their photodynamic activity is investigated. In particular, multiphoton imaging shows that individual ZL NPs adsorb Zn-tetraporphyrin molecules but do not adsorb fluorescein molecules. In addition, the nonlinear spectral properties of ZL NPs in native biotissues are studied. Nonlinear microscopy and spectroscopy may open new perspectives in the research and application of ZL NPs in biomedicine, and the results may help to introduce novel approaches into the clinical environment.

Keywords: multiphoton microscopy, nanoparticles, nonlinear optics, zeolite

Procedia PDF Downloads 395
197 The Effects of Different Agroforestry Practices on Glomalin Related Soil Protein, Soil Aggregate Stability and Organic Carbon-Association with Soil Aggregates in Southern Ethiopia

Authors: Nebiyou Masebo

Abstract:

The severity of land degradation in southern Ethiopia has been increasing due to high population density and the replacement of an age-old agroforestry (AF) based agricultural system with monocropping. The consequences of these activities, combined with climate change, have impaired soil biota, soil organic carbon (SOC), soil glomalin, soil aggregation and aggregate stability. AF systems could curb these problems because they are ecologically and economically sustainable. This study aimed to determine the effect of agroforestry practices (AFPs) on soil glomalin, soil aggregate stability (SAS), and aggregate association with SOC. Soil samples (from two depth levels: 0-30 and 30-60 cm) and woody species data were collected from homegarden based agroforestry practice (HAFP), cropland based agroforestry practice (ClAFP), woodlot based agroforestry practice (WlAFP) and trees on soil and water conservation based agroforestry practice (TSWAFP) using systematic sampling. In this study, both easily extractable glomalin related soil protein (EEGRSP) and total glomalin related soil protein (TGRSP) were significantly (p<0.05) higher in HAFP compared to the others, in decreasing order HAFP>WlAFP>TSWAFP>ClAFP at the upper surface, but in the subsurface in decreasing order WlAFP>HAFP>TSWAFP>ClAFP. On the other hand, the macroaggregate fraction of the AFPs ranged from 22.64-36.51%, where the lowest was in ClAFP and the highest in HAFP; moreover, the order for the subsurface was the same, but SAS decreased with increasing soil depth. The micro-aggregate fraction ranged from 15.9-24.56%, where the lowest was in HAFP and the highest in ClAFP. Besides, the association of OC with both macro- and micro-aggregates was greatest in HAFP, followed by WlAFP. The findings also showed that both glomalin and SAS increased significantly with woody species diversity and richness. Thus, AFPs with good management practices can play a role in the maintenance of biodiversity, glomalin content and other soil quality parameters, with future implications for a stable ecosystem.

Keywords: agroforestry, soil aggregate stability, glomalin, aggregate-associated carbon, HAFP, ClAFP, WlAFP, TSWAFP.

Procedia PDF Downloads 72
196 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods for predicting properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting the quality of the process, which allow improving the process quality in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important process among the three, puts resin on glass cloth, heats it up in a drying oven, and then produces prepreg for the lay-up process. In this process, three important quality factors are inspected: Treated Weight (T/W), Minimum Viscosity (M/V), and Gel Time (G/T). They are manually inspected, incurring heavy cost in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models of the three quality factors T/W, M/V, and G/T, respectively, from process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided us with information on “important” predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. They were quite excited to find the new insights that the model revealed and set out to do further analysis on them to gain process control implications. T/W turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus an effort has to be made to find other factors which are not currently monitored in order to understand the process better and improve its quality. In conclusion, the VM application to CCL’s treating process was quite successful. The newly built quality prediction model allows one to reduce the cost associated with actual metrology as well as to reveal insights into the factors affecting the important quality factors and into the level of our less-than-perfect understanding of the treating process.
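
A minimal sketch of the virtual-metrology idea described above: fit a regression model from process and environment variables to one quality factor (M/V is used here) and inspect which predictors it ranks as important; the column names and data are hypothetical stand-ins for the treating-process data, and a random forest is only one of the many variable-selection/learner combinations the study tried.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical treating-process data: oven temperatures, line speed, resin and
# environment variables, with Minimum Viscosity (M/V) as the target.
rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "oven_zone1_temp":  rng.normal(170, 5, n),
    "oven_zone2_temp":  rng.normal(185, 5, n),
    "line_speed":       rng.normal(12, 1, n),
    "resin_solid_pct":  rng.normal(65, 2, n),
    "ambient_humidity": rng.normal(45, 8, n),
})
df["minimum_viscosity"] = (2000 - 6 * df["oven_zone2_temp"]
                           + 30 * df["line_speed"] + rng.normal(0, 20, n))

X = df.drop(columns="minimum_viscosity")
y = df["minimum_viscosity"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean().round(3))

# Feature importances point at the "important" predictor variables,
# analogous to the process-control insights discussed in the abstract.
model.fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.3f}")
```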

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 336
195 Evaluating the Challenges of Large Scale Urban Redevelopment Projects for Central Government Employee Housing in Delhi

Authors: Parul Kapoor, Dheeraj Bhardwaj

Abstract:

Delhi and other Indian cities accommodate thousands of Central Government employees in housing complexes called ‘General Pool Residential Accommodation’ (GPRA), located on prime parcels of the city. These residential colonies are now undergoing redevelopment at a massive scale, significantly impacting the ecology of the surrounding areas. Essentially, these colonies were low-rise, low-density planned developments with a dense tree cover and minimal parking requirements. But with increasing urbanisation and a spike in parking demand, the proposed built form is an aggregate of high-rise gated complexes, redefining the skyline of the city, which is a huge departure from the modest setup of low-rise walk-up apartments. The complexity of these developments is further aggravated by the need for parking, which necessitates cutting a huge number of trees to accommodate multiple layers of parking beneath the structures, thus sidelining the authentic character of these areas, which is laden with a dense tree cover. The aftermath of this whole process is the generation of a huge carbon footprint on the surrounding areas, which is unaccounted for in planning and design practice. These developments are currently planned as mixed-use compounds with large commercial built-up spaces, which have additional parking requirements over and above the residential parking. Also, they are perceived as gated complexes and not as neighborhood units, thus projecting isolated images of high-rise, dense systems with little context to the surroundings. The paper would analyze case studies of GPRA redevelopment projects in Delhi, and the lack of relevant development control regulations which have led to abnormalities and complications in the entire redevelopment process. It would also suggest policy guidelines which can establish comprehensive codes for the effective planning of these settlements.

Keywords: gated complexes, GPRA Redevelopment projects, increased densities, huge carbon footprint, mixed-use development

Procedia PDF Downloads 103
194 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. The interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous are studied in much research to simplify simulations. But based on the perfect interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of the CZM with an artificially very strong interface. A mechanical model that can take into account the interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behaviors of CNT/polymer composites.
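
For reference, a generic bilinear traction-separation law of the kind assumed above can be written as follows; the symbols (initial stiffness K, cohesive strength T_max, damage-onset separation delta_0, final separation delta_f) are generic, and the study's actual parameter values and the exponential and polynomial variants are not reproduced.

```latex
% Bilinear (triangular) traction-separation law for the CNT-matrix interface:
T(\delta) =
\begin{cases}
  K\,\delta, & 0 \le \delta \le \delta_0 \quad \text{(elastic loading)} \\[4pt]
  T_{\max}\,\dfrac{\delta_f - \delta}{\delta_f - \delta_0}, & \delta_0 < \delta \le \delta_f \quad \text{(softening / debonding)} \\[4pt]
  0, & \delta > \delta_f \quad \text{(fully debonded)}
\end{cases}
\qquad \text{with } T_{\max} = K\,\delta_0, \quad
G_c = \tfrac{1}{2}\,T_{\max}\,\delta_f .
```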

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 110
193 Classification of Forest Types Using Remote Sensing and Self-Organizing Maps

Authors: Wanderson Goncalves e Goncalves, José Alberto Silva de Sá

Abstract:

Human actions are a threat to the balance and conservation of the Amazon forest; therefore, environmental monitoring services play an important role in the preservation and maintenance of this environment. This study classified forest types using data from a forest inventory provided by the 'Florestal e da Biodiversidade do Estado do Pará' (IDEFLOR-BIO), located between the municipalities of Santarém, Juruti and Aveiro, in the state of Pará, Brazil, covering an area of approximately 600,000 hectares, together with Bands 3, 4 and 5 of the TM-Landsat satellite image and Self-Organizing Maps. The information from the satellite images was extracted using QGIS software 2.8.1 Wien and was used as a database for training the neural network. The midpoints of each forest inventory sample were linked to the images, and the Digital Numbers of the corresponding pixels were then extracted, composing the database that fed the training and testing of the classifier. The neural network was trained to classify two forest types: Rain Forest of Lowland Emerging Canopy (Dbe) and Rain Forest of Lowland Emerging Canopy plus Open with palm trees (Dbe + Abp) in the Mamuru Arapiuns glebes of Pará State. The number of examples in the training data set was 400, with 200 examples for each class (Dbe and Dbe + Abp), and the size of the test data set was 100, with 50 examples for each class (Dbe and Dbe + Abp). Therefore, the total data set consisted of 500 examples. The classifier was built in Orange Data Mining 2.7 software and was evaluated in terms of the confusion matrix indicators. The results of the classifier were considered satisfactory, with a global accuracy of 89%, a Kappa coefficient of 78% and an F1 score of 0.88. The efficiency of the classifier was also evaluated with the ROC plot (receiver operating characteristics), obtaining results close to ideal ratings, showing it to be a very good classifier and demonstrating the potential of this methodology to provide ecosystem services, particularly in anthropogenic areas in the Amazon.
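
A minimal sketch of the classification step described above, assuming the third-party MiniSom package in place of Orange Data Mining: a small self-organizing map is trained on three-band Digital Numbers, each neuron is labelled by majority vote of the training samples it wins, and test samples are classified by their winning neuron. All values are random stand-ins for the Landsat TM band data.

```python
import numpy as np
from collections import Counter
from minisom import MiniSom   # third-party package, assumed available

# Stand-in data: Digital Numbers of TM bands 3, 4, 5 for 400 training and
# 100 test samples, two classes (0 = Dbe, 1 = Dbe + Abp).
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(60, 10, (200, 3)), rng.normal(90, 10, (200, 3))])
y_train = np.repeat([0, 1], 200)
X_test = np.vstack([rng.normal(60, 10, (50, 3)), rng.normal(90, 10, (50, 3))])
y_test = np.repeat([0, 1], 50)

som = MiniSom(7, 7, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_train, 5000)

# Label each neuron with the majority class of the training samples it wins.
votes = {}
for x, label in zip(X_train, y_train):
    votes.setdefault(som.winner(x), Counter())[label] += 1
neuron_label = {w: c.most_common(1)[0][0] for w, c in votes.items()}

# Classify test samples by their winning neuron (default to class 0 if unseen).
pred = np.array([neuron_label.get(som.winner(x), 0) for x in X_test])
print("overall accuracy:", (pred == y_test).mean())
```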

Keywords: artificial neural network, computational intelligence, pattern recognition, unsupervised learning

Procedia PDF Downloads 344
192 Radio Frequency Heating of Iron-Filled Carbon Nanotubes for Cancer Treatment

Authors: L. Szymanski, S. Wiak, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska

Abstract:

There exist more than one hundred different types of cancer, and therefore no single treatment is offered to people struggling with this disease. The character of the treatment proposed to a patient will depend on a variety of factors, such as the type of cancer diagnosed, the advancement of the disease, its location in the body, as well as the personal preferences of the patient. None of the commonly known methods of fighting cancer is recognised as a perfect cure; however, great advances in this field have been made over the last few decades. Once patients are diagnosed with cancer, they are in need of medical care and professional treatment for the upcoming months, and in most cases even for years. Among the principal modes of treatment offered by medical centres, one can find radiotherapy, chemotherapy, and surgery. All of them can be applied separately or in combination, and the relative contribution of each is usually determined by a medical specialist in agreement with the patient. In addition to the conventional treatment options, every day more complementary and alternative therapies are integrated into mainstream care. One promising cancer modality is hyperthermia therapy, which is based on exposing body tissues to high temperatures. This treatment is still being investigated and is not widely available in hospitals and oncological centres. There are two kinds of hyperthermia therapies, with direct and indirect heating. The first is not commonly used due to low efficiency and invasiveness, while the second is being deeply investigated and a variety of methods have been developed, including ultrasound, infrared sauna, induction heating and magnetic hyperthermia. The aim of this work was to examine the possibilities of heating magnetic nanoparticles under the influence of an electromagnetic field for cancer treatment. For this purpose, multiwalled carbon nanotubes used as nanocarriers for iron particles were investigated for their heating properties. The samples were subjected to an alternating electromagnetic field with frequencies in the range of 110-619 kHz. Moreover, samples with various concentrations of carbon nanotubes were examined. The lowest frequency of 110 kHz and the sample containing 10 wt% of carbon nanotubes produced the most effective heating. A description of hyperthermia therapy, aimed at enhancing currently available cancer treatment, is also presented in this paper, together with the most widely applied conventional cancer modalities, such as radiation and chemotherapy. Methods for overcoming the most common obstacles in conventional cancer modalities, such as invasiveness and lack of selectivity, are presented in the characterization of magnetic hyperthermia, which explains the increasing interest in this treatment.

Keywords: hyperthermia, carbon nanotubes, cancer colon cells, ligands

Procedia PDF Downloads 250
191 Reverse Engineering Genius: Through the Lens of World Language Collaborations

Authors: Cynthia Briggs, Kimberly Gerardi

Abstract:

Over the past six years, the authors have been working together on World Language Collaborations in the Middle School French Program at St. Luke's School in New Canaan, Connecticut, USA. Author 2 brings design expertise to the projects, and both teachers have utilized the fabrication lab, emerging technologies, and collaboration with students. Each year, author 1 proposes a project scope, and her students are challenged to design and engineer a signature project. Both partners have improved the iterative process to ensure deeper learning and sustained student inquiry. The projects range from a 1:32 scale model of the Eiffel Tower that was CNC routed to a fully functional jukebox that plays francophone music, lights up, and can hold up to one thousand songs powered by Raspberry Pi. The most recent project is a Fragrance Marketplace, culminating with a pop-up store for the entire community to discover. Each student will learn the history of fragrance and the chemistry behind making essential oils. Students then create a unique brand, marketing strategy, and concept for their signature fragrance. They are further tasked to use the industrial design process (bottling, packaging, and creating a brand name) to finalize their product for the public Marketplace. Sometimes, these dynamic projects require maintenance and updates. For example, our wall-mounted, three-foot francophone clock is constantly changing. The most recent iteration uses Chat GPT to program the Arduino to reconcile the real-time clock shield and keep perfect time as each hour passes. The lights, motors, and sounds from the clock are authentic to each region, represented with laser-cut embellishments. Inspired by Michel Parmigiani, the history of Swiss watch-making, and the precision of time instruments, we aim for perfection with each passing minute. The authors aim to share exemplary work that is possible with students of all ages. We implemented the reverse engineering process to focus on student outcomes to refine our collaborative process. The products that our students create are prime examples of how the design engineering process is applicable across disciplines. The authors firmly believe that the past and present of World cultures inspire innovation.

Keywords: collaboration, design thinking, emerging technologies, world language

Procedia PDF Downloads 22
190 Heritage Preservation and Cultural Tourism; The 'Pueblos Mágicos' Program and Its Role in Preserving Traditional Architecture in Mexico

Authors: Claudia Rodríguez Espinosa, Erika Elizabeth Pérez Múzquiz

Abstract:

The Pueblos Mágicos federal program seeks to preserve the traditional environment of small towns (under 20,000 inhabitants) through economic investment, legislation, and legal aid. To enter the program, a town must meet eight requirements; the fourth of these calls for ‘Promotion of symbolic and differentiated touristic attractions, such as architecture, emblematic buildings, festivities and traditions, artisan production, traditional cuisine, and touristic services that guarantee their commercialization along with assistantship and security services’. With this objective in mind, the Federal government of Mexico has developed local programs to protect emblematic public buildings in each of the 83 towns included in the Pueblos Mágicos program, programs that involve federal and local administrations as well as civil associations such as Adopte una Obra de Arte. In this paper, we present three intervention cases. The first is the restoration project (now concluded) of the 16th century monastery of Santa María Magdalena in Cuitzeo, an enormous building that took six years to restore completely. The second case, the public space intervention in Pátzcuaro, included the Plaza Grande, or Vasco de Quiroga square, and the access to the arts and crafts house known as Casa de los once patios, or house of the eleven courtyards. The third case is the recovery project of the 16th century atrium of the Tzintzuntzan monastery, which included the original olive trees brought to this town by Franciscan monks in the mid-1500s. This paper thus presents successful preservation projects at three different scales (building, urban space, and landscape) and in three different towns, with the objective of preserving public architecture, public spaces, and cultural traditions, and of learning, from foreign experiences, different ways to manage preservation projects focused on public architecture and public spaces.

Keywords: cultural tourism, heritage preservation, traditional architecture, public policies

Procedia PDF Downloads 270
189 Evaluating Textbooks for Brazilian Air Traffic Controllers’ English Language Training: A Checklist Proposal

Authors: Elida M. R. Bonifacio

Abstract:

English language proficiency became an essential issue in aviation communication after aviation incidents and accidents occurred in which a lack of proficiency or inappropriate use of the English language was found to be one of the contributing factors. Therefore, the International Civil Aviation Organization (ICAO) established requirements for the minimum English language proficiency of aviation personnel, especially pilots and air traffic controllers, in its 192 member states. In Brazil, discussion of this topic became prominent after a 2006 mid-air collision that cost the lives of 154 passengers and crew members. Since then, the number of schools and private practitioners willing to teach English for aviation purposes has increased. Although the number of teaching materials internationally used for general purposes is relatively large, it would be inappropriate to adopt the same materials in classes that focus on communication in aviation contexts. By contrast, the options for aviation English materials are scarce; moreover, they are designed for international use and may not fulfill the linguistic needs of all their users around the world. In order to reduce the problems Brazilian practitioners may encounter when adopting materials that demand extensive adaptation to meet their students' needs, a checklist was devised to evaluate textbooks. The aim of this paper is to propose a checklist that evaluates textbooks used in the English language training of Brazilian air traffic controllers. The criteria that compose the checklist are based on the materials development literature, the linguistic requirements established in ICAO publications, English for Specific Purposes (ESP) principles, and the format of the Brazilian aviation English language proficiency test. The checklist's main indicators are the language learning tenets under which the book was written; graphical features; the lexical, grammatical, and functional competencies required for minimum proficiency; similarity to the official testing format; and support materials, totaling 117 items marked as YES, NO, or PARTIALLY. To verify whether the checklist is effective in use, an aviation English textbook was evaluated. From this evaluation, it is possible to measure quantitatively how far the material meets students' needs and to offer a tool that helps professionals engaged in aviation English teaching around the world choose the most appropriate textbook for their audience. From the results, practitioners can see which items the material does not fulfill and make appropriate adaptations, since a perfect material will be difficult to find.
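As a rough illustration of how the 117 YES/NO/PARTIALLY marks could be turned into a quantitative measure of fit, the Python sketch below assigns hypothetical weights (YES = 1, PARTIALLY = 0.5, NO = 0) and reports a percentage score per indicator; the weights, indicator names, and sample items are assumptions for illustration, not part of the proposed checklist.

```python
# Illustrative sketch only: scoring YES/NO/PARTIALLY checklist marks per indicator.
# The weights and sample items are assumptions, not values prescribed by the paper.
from collections import defaultdict

WEIGHTS = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0}

def score_checklist(marks):
    """marks: list of (indicator, item, answer) tuples -> {indicator: % fulfilled}."""
    got, total = defaultdict(float), defaultdict(int)
    for indicator, _item, answer in marks:
        got[indicator] += WEIGHTS[answer.upper()]
        total[indicator] += 1
    return {ind: round(100.0 * got[ind] / total[ind], 1) for ind in total}

sample = [
    ("lexical competence", "covers ICAO standard phraseology", "YES"),
    ("lexical competence", "plain-language paraphrase practice", "PARTIALLY"),
    ("testing format", "tasks mirror the Brazilian proficiency test", "NO"),
]
print(score_checklist(sample))
# {'lexical competence': 75.0, 'testing format': 0.0}
```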

Keywords: aviation English, ICAO, materials development, English language proficiency

Procedia PDF Downloads 114
188 Water Footprint for the Palm Oil Industry in Malaysia

Authors: Vijaya Subramaniam, Loh Soh Kheang, Astimar Abdul Aziz

Abstract:

Water footprint (WFP) assessment has gained importance due to increasing water scarcity in the world. This study analyses the WFP of an agricultural sector, the oil palm supply chain, which produces oil palm fresh fruit bunches (FFB), crude palm oil, palm kernel, and crude palm kernel oil. The water accounting and vulnerability evaluation (WAVE) method was used; this method analyses the water depletion index (WDI) based on local blue water scarcity. The main contribution to the WFP at the plantation was the production of FFB by the crop itself, at 0.23 m³/tonne FFB. At the mill, the burden shifts to the water added during processing (boiler and process water), which accounted for 6.91 m³/tonne crude palm oil. There was a 33% reduction in the WFP when no dilution or water addition took place after the screw press at the mill. When allocation was performed, the WFP was reduced by 42%, as the burden was shared with the palm kernel and palm kernel shell. At the kernel crushing plant (KCP), the main contributor to the WFP was the palm kernel, which carried the burden from upstream at 4.96 m³/tonne crude palm kernel oil, followed by the electricity used for the process at 0.33 m³/tonne crude palm kernel oil and transportation of the palm kernel at 0.08 m³/tonne crude palm kernel oil. A comparison of mills with and without biogas capture showed no difference in the WFP between the two scenarios. Comparing KCPs operating in the proximity of mills with those operating in the proximity of ports gave only a 6% reduction in the WFP. Both comparisons therefore showed no or insignificant differences, in contrast to previous life cycle assessment studies on the carbon footprint, which showed significant differences; this demonstrates that findings change when only certain impact categories are considered. It can be concluded that the impact of the water used by the oil palm tree is low, owing to the practice of not irrigating the plantations and the high availability of water from rainfall in Malaysia. This reiterates the importance of planting oil palm trees in regions with high rainfall all year round, like the tropics. The milling stage had the most significant impact on the WFP, and mills should avoid dilution to reduce this impact.
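To make the allocation step concrete, the short Python sketch below shares a mill-level footprint across co-products in proportion to assumed allocation shares; only the unallocated 6.91 m³/tonne figure comes from the abstract, while the share values are placeholders chosen merely to reproduce a reduction of roughly 42% for the main product.

```python
# Illustrative sketch of co-product allocation for a mill-level water footprint.
# Only the unallocated 6.91 m3/tonne CPO value is taken from the abstract; the
# allocation shares below are assumed placeholders, not data from the study.
def allocate_wfp(total_wfp, shares):
    """Split total_wfp (m3 per tonne of main product) across co-products.

    shares -- {product: allocation fraction}, fractions must sum to 1.
    """
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {product: round(total_wfp * fraction, 2) for product, fraction in shares.items()}

mill_wfp = 6.91  # m3 per tonne crude palm oil, before allocation
assumed_shares = {"crude palm oil": 0.58, "palm kernel": 0.30, "palm kernel shell": 0.12}
print(allocate_wfp(mill_wfp, assumed_shares))
# {'crude palm oil': 4.01, 'palm kernel': 2.07, 'palm kernel shell': 0.83}
# 4.01 / 6.91 corresponds to roughly the 42% reduction reported for the main product
```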

Keywords: life cycle assessment, water footprint, crude palm oil, crude palm kernel oil, WAVE method

Procedia PDF Downloads 147
187 Presenting of 'Local Wishes Map' as a Tool for Promoting Dialogue and Developing Healthy Cities

Authors: Ana Maria G. Sperandio, Murilo U. Malek-Zadeh, João Luiz de S. Areas, Jussara C. Guarnieri

Abstract:

Intersectoral governance is a requirement for developing healthy cities. However, it is difficult to achieve, especially in low-resource regions. Therefore, a low-cost investigative procedure was developed to diagnose sectoral wishes related to urban planning and health promotion. This procedure is composed of two phases, which can be applied to different groups in order to compare the results. The first phase is a conversation guided by a list of questions. Some of those questions aim to gather information about how individuals understand concepts such as 'healthy city' or 'health promotion' and what they believe constitutes the relation between urban planning and urban health. Other questions investigate local issues and how citizens would like to promote dialogue between sectors. In the second phase, individuals stand around a map of the investigated city (or city region) and are asked to represent their wishes on it. They can do so by writing text notes or placing icons, the latter representing city elements such as trees, a square, a playground, a hospital, or a cycle track. After the groups have represented their wishes, the map can be photographed and the results from the different groups compared. This procedure was conducted in a small Brazilian city (Holambra) in 2017, the first of the four years of the mayor's term. The prefecture requested this tool as part of making Holambra a member of the Potential Healthy Municipalities Network in Brazil. Two sectors were investigated: the government and the urban population. At the end of the investigation, the intersection of the group maps (i.e., population and government) was used to create a map of common wishes. The material produced can therefore be used as a guide for promoting dialogue between sectors and as a tool for monitoring policy progress. The report of this procedure was directed to public managers so that they could see the wishes they share with the local population and use the tool as a guide for creating urban policies intended to enhance health promotion and develop a healthy city, even under low-resource conditions.
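The map-of-common-wishes step amounts to taking the overlap between what each sector recorded. A minimal Python sketch of that idea follows, with entirely invented (location, wish) pairs used only to show the set operations.

```python
# Minimal sketch (hypothetical data): deriving a "map of common wishes" as the
# overlap between the wishes recorded by each sector. The wish labels and
# locations below are invented for illustration only.
government = {("central square", "more trees"), ("school area", "cycle track"),
              ("market street", "better lighting")}
population = {("central square", "more trees"), ("riverside", "playground"),
              ("market street", "better lighting")}

common_wishes = government & population    # shared (location, wish) pairs
only_population = population - government  # wishes not yet on the government's agenda

print("common:", sorted(common_wishes))
print("population only:", sorted(only_population))
```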

Keywords: governance, health promotion, intersectorality, urban planning

Procedia PDF Downloads 117
186 Spatial Distribution of Virus-Transmitting Aphids of Plants in Al Bahah Province, Saudi Arabia

Authors: Sabir Hussain, Muhammad Naeem, Yousif Aldryhim, Susan E. Halbert, Qingjun Wu

Abstract:

Plant viruses cause severe economic losses in crop production every year, and globally, different aphid species are responsible for transmitting such viruses. Aphids are also serious pests of trees and agricultural crops. Al Bahah Province, Kingdom of Saudi Arabia (KSA), has a large number of native and introduced plant species and a temperate climate that provides ample habitats for aphids. In this study, we surveyed virus-transmitting aphids in the province to highlight their spatial distributions and hot spot areas for targeted control strategies. During our fifteen-month survey in Al Bahah Province, 370 aphid samples were collected using both beating sheets and yellow water pan traps. In total, 54 aphid species representing 30 genera in four families were recorded from the province. Alarmingly, 35 of the recorded aphid species are virus-transmitting species. The most common virus-transmitting aphid species, based on the number of collected samples, were Macrosiphum euphorbiae (Thomas, 1878), Brachycaudus rumexicolens (Patch, 1917), Uroleucon sonchi (Linnaeus, 1767), Brachycaudus helichrysi (Kaltenbach, 1843), and Myzus persicae (Sulzer, 1776), with 66, 24, 23, 22, and 20 samples, respectively. The widest ranges of plant hosts were found for M. euphorbiae (39 plant species), B. helichrysi (12 plant species), M. persicae (12 plant species), B. rumexicolens (10 plant species), and U. sonchi (9 plant species). Based on aphid abundance, the main hot spot areas were the cities of Al-Baha, Al Mekhwah, and Biljarashi. This study indicates that Al Bahah Province has relatively rich aphid diversity owing to its relatively high plant diversity and favorable climatic conditions. ArcGIS tools can help biologists implement targeted control strategies against these pests within integrated pest management, ultimately saving money and time.
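Before any spatial analysis in ArcGIS, ranking localities by the number of samples collected is a simple first pass at identifying candidate hot spots. The Python sketch below does only that aggregation; the five records shown are invented examples, whereas the real dataset comprises 370 samples.

```python
# Illustrative sketch only: ranking survey localities by aphid-sample abundance
# as a first pass before mapping hot spots in a GIS. The records below are
# invented; the study's actual dataset holds 370 samples.
from collections import Counter

# (locality, species) for each collected sample -- hypothetical records
records = [
    ("Al-Baha", "Macrosiphum euphorbiae"),
    ("Al-Baha", "Myzus persicae"),
    ("Al Mekhwah", "Brachycaudus helichrysi"),
    ("Biljarashi", "Uroleucon sonchi"),
    ("Al-Baha", "Brachycaudus rumexicolens"),
]

abundance = Counter(locality for locality, _species in records)
for locality, n in abundance.most_common():
    print(f"{locality}: {n} samples")
```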

Keywords: Al Bahah province, aphid-virus interaction, biodiversity, geographic information system

Procedia PDF Downloads 163
185 Efficacy of Coconut Shell Pyrolytic Oil Distillate in Protecting Wood Against Bio-Deterioration

Authors: K. S. Shiny, R. Sundararaj

Abstract:

Coconut trees (Cocos nucifera L.) are grown in many parts of India and the world because of their multiple uses. During pyrolysis, coconut shells yield an oil that is a dark, thick liquid; upon simple distillation it gives a more or less colourless liquid, termed coconut shell pyrolytic oil distillate (CSPOD). This manuscript reports and discusses the use of coconut shell pyrolytic oil distillate as a potential wood protectant against bio-deterioration. Since botanical products are being tested worldwide as eco-friendly wood protectants, the utilization of CSPOD as a wood protectant is of great importance. The efficacy of CSPOD as a wood protectant was evaluated as per the Bureau of Indian Standards (BIS) in terms of its antifungal, anti-borer, and termiticidal activities. Specimens of rubber wood (Hevea brasiliensis), with six replicates each for the two treatment methods, namely spraying and dipping (48 h), were employed. Under field conditions, CSPOD was found to impart total protection against termites for six months compared to the control. To assess the efficacy of CSPOD against fungi, the treated blocks were subjected to attack by two white rot fungi, Tyromyces versicolor (L.) Fr. and Polyporus sanguineus (L.) G. Mey, and two brown rot fungi, Polyporus meliae (Underw.) Murrill and Oligoporus placenta (Fr.) Gilb. & Ryvarden. The results indicated that treatment with CSPOD significantly protected wood from the damage caused by the decay fungi. The efficacy of CSPOD against the wood borer Lyctus africanus Lesne was assessed using six pairs of male and female beetles, and it gave promising results in protecting the treated wood blocks compared to the control blocks. As far as the treatment methods were concerned, dip treatment was found to be more effective than spraying. The results of the present investigation indicate that CSPOD is a promising botanical compound with the potential to replace synthetic wood protectants. As coconut shell pyrolytic oil is a waste by-product of the coconut shell charcoal industry, its utilization as a wood preservative will expand the economic returns from such industries.

Keywords: coconut shell pyrolytic oil distillate, eco-friendly wood protection, termites, wood borers, wood decay fungi

Procedia PDF Downloads 350
184 A Corpus-Based Analysis of Japanese Learners' English Modal Auxiliary Verb Usage in Writing

Authors: S. Nakayama

Abstract:

For non-native English speakers, using English modal auxiliary verbs appropriately can be among the most challenging tasks. This research sought to identify differences in modal verb usage between Japanese non-native English speakers (JNNSs) and native speakers (NSs) from two perspectives: frequency of use and the distribution of the verb phrase structures (VPSs) in which modal verbs occur. This study can contribute to the identification of JNNSs' interlanguage with regard to modal verbs; the main aim is to suggest improvements to teaching materials and to help language teachers teach modal verbs in a way that is helpful for learners. To address the primary question of this study, usage of the nine central modals (‘can’, ‘could’, ‘may’, ‘might’, ‘shall’, ‘should’, ‘will’, ‘would’, and ‘must’) by JNNSs was compared with that by NSs in the International Corpus Network of Asian Learners of English (ICNALE). This corpus is one of the largest freely available corpora focusing on Asian English learners’ language use. The ICNALE corpus consists of four modules: ‘Spoken Monologue’, ‘Spoken Dialogue’, ‘Written Essays’, and ‘Edited Essays’. Of these, this research adopted the ‘Written Essays’ module only, a set of 200-300-word essays containing approximately 1.3 million words in total. Frequency analysis revealed gaps as well as similarities in frequency order. Specifically, both JNNSs and NSs used ‘can’ most frequently, followed by ‘should’ and ‘will’; however, the usage of all the other modals except ‘shall’ differed between the two groups. A log-likelihood test uncovered JNNSs’ overuse of ‘can’ and ‘must’ as well as their underuse of ‘will’ and ‘would’. VPS analysis revealed that JNNSs used modal verbs in a relatively narrow range of VPSs compared to NSs. The results showed that JNNSs used most of the modals with bare infinitives or the passive voice only, whereas NSs used the modals in a wide range of VPSs, including the progressive construction and the perfect aspect, structures in which JNNSs rarely used the modals. The results of the frequency analysis suggest that language teachers and teaching materials should explain other modality items so that learners can avoid relying heavily on certain modals and have a wide range of lexical items to reflect their feelings more accurately. In addition, the underused modals should be given more emphasis in the classroom because they are epistemic modals, which allow writers not only to interject their views into propositions but also to build a relationship with readers. As for VPSs, teaching materials should present more examples of the modals occurring in a wide range of VPSs to help learners express their opinions from a variety of viewpoints.
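The overuse/underuse comparison mentioned above typically relies on the log-likelihood (G²) statistic familiar from corpus linguistics (the Rayson and Garside formulation). The Python sketch below implements that standard formula; the corpus sizes and the frequency of ‘can’ shown in the example are placeholders, not counts drawn from ICNALE.

```python
# Hedged sketch of a log-likelihood (G2) keyness comparison between two corpora,
# following the standard Rayson & Garside formulation. The counts below are
# placeholders, not the actual ICNALE frequencies.
import math

def log_likelihood(freq_a, corpus_a_size, freq_b, corpus_b_size):
    """G2 statistic for one word's frequency in two corpora."""
    expected_a = corpus_a_size * (freq_a + freq_b) / (corpus_a_size + corpus_b_size)
    expected_b = corpus_b_size * (freq_a + freq_b) / (corpus_a_size + corpus_b_size)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll

# Hypothetical counts: 'can' in the learner vs. native-speaker sub-corpora
g2 = log_likelihood(freq_a=1500, corpus_a_size=200_000, freq_b=900, corpus_b_size=200_000)
print(f"G2 = {g2:.2f}  (> 3.84 indicates significance at p < .05 with 1 d.f.)")
```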

Keywords: corpus linguistics, Japanese learners of English, modal auxiliary verbs, International Corpus Network of Asian Learners of English

Procedia PDF Downloads 112